My First Technical Job – T-SQL Tuesday #150

I’m late to the party on May’s T-SQL Tuesday, but thought it was an interesting enough topic to be worth a belated blog post. It’s about first technical jobs – hosted by Kenneth Fisher.

T-SQL Tuesday #150

My first technical job was somewhat unusual. It was for a niche software company that made despatch software for taxi companies. It was a small company, but had the majority of the UK market as customers at the time, and offered 24/7 support, so was very busy. The company supplied their software, but also most of the applicable hardware – computers, networking, radio systems and mobile devices. I’d been a barman for the previous two years – I quit that job intending to go back to college and study some certifications for a career in IT, but that fell through the day before the courses were due to start: nobody else had signed up, so they were cancelled. Suddenly having neither a job nor any educational funding, I hastily applied for a few tech support jobs, one of which offered an interview.

The interview was very informal. I was asked if I could take apart and put together a PC – which I could, though I wasn’t made to prove it – and then some general Windows and hardware questions. The company was hesitant to hire me as I didn’t have a full degree – just a diploma in maths and computing – but I no doubt pleaded my case by telling them I’d started programming in BASIC while still in primary school, picked up Visual Basic and C in my teens, and had spent a lot of hobby time since then messing about with computers and coding – so I knew my way around Windows and software. I got the job, albeit on lower pay than usual given the lack of degree, and was given some thick printed-out booklets covering the ins and outs of their software. Every evening for the next few weeks, reading through those archaic texts was my life.

Fundamentally it was a tech support role. But what I had to support was where it got interesting – and challenging. First was the basic Windows kind of stuff – most customers ran a mix of Windows Server 2003/2008 and Windows XP, and we could remote into their sites to take control of the servers, so nothing too problematic there – but a handful of customers remained on DOS. Yep, DOS in…2011. This was the support call we dreaded. It meant no remoting, and having to talk (usually not very tech-savvy) people through troubleshooting at the command prompt and/or unplugging and replugging various serial cables to perform a physical DR switchover. Occasionally at 2am. That stuff was…character building.

Then there was the software on top. There was both a text-based, terminal-style version and a new GUI version. It was complicated software providing both despatching and accounting features, with extensive logging and hundreds of config flags hidden in the admin options that needed checking in the course of diagnosing problems. Fortunately, as well as the aforementioned manuals, there was an internal wiki maintained by the developers documenting most of these config flags and processes, but it didn’t cover every new setting or, obviously, undiscovered bugs. We, the support team, added to this invaluable resource as we found new issues or new information about settings and processes.

Finally there was the hardware. Every taxi needed a device to communicate back to the office. At the time we were rolling out mobiles, but most customers still had radios. And thus I was introduced to the intersection of computers and radios – Moxa devices with Ethernet/serial connections linking the server to the radio system, and radio comms logs in the software recording broadcast and received signals, retries, errors etc. Some issues we could diagnose ourselves – like areas of bad signal, by piecing together the radio logs with the corresponding physical locations on the map – but we also had a team of radio engineers we’d often take more complicated issues to.

It was a baptism of fire for a first technical job in many ways. Not only did we have to support typical Windows and networking issues, but also multiple versions of completely bespoke software, radio comms and accounting issues – for around a thousand customers, each with their own unique configs, radio environments and incident history, all of whom depended on their software for their livelihoods. The team was small, and sometimes the phones would ring off the hook all day, especially around holidays when these companies were at their busiest. I had an extra challenge in that I had/have a mild stutter that, while not normally a problem, is worse on phones – so that was a case of adapt or die, quickly. Some of the customers, being external rather than internal, could be…well, ‘rough around the edges’ would be an understatement. A few times I was threatened with someone driving down and throwing their system through the office window. (They never did.)

The on-call rotation, when I learned enough to join it, could be brutal. Sometimes we’d get a dozen calls in a night, and would turn up bleary-eyed at 8:45 the next day. The subsequent evening was almost always a total write-off – get home, sleep. I appreciated the extra money at the time, but it was the kind of sleep and health sacrifice only someone in their early 20s would reasonably choose to take!

Challenges aside, I’m forever thankful for that job (and to my bosses-to-be for taking a chance on me). We had a good team of people – knowledgeable old hands, with the support team helping each other out massively – and we had fun despite the challenges. I also got involved in things beyond application support: I’d completed my CCNA, so I ended up producing a new standard router config, got involved in bug testing, and picked up MySQL rollouts and support as I’d also been studying SQL. I learned a lot about how to communicate with non-technical people, manage expectations and deal with a very busy helpdesk by staying on top of the most important issues. Additionally, I got exposure to the fundamentals of software testing, the challenges developers face, and training new staff on the systems we supported.

I didn’t want to stay in a niche support role forever – and at the time, I saw the shadow of Uber looming on the horizon as an industry threat – so had explored both networking and SQL as progression routes, and ended up choosing SQL. After a few years I left the company, moving out of the trenches to a much quieter backend role supporting MSSQL-backed apps and subsequently into SQL/SSRS development and administration. It was the right move for me and I don’t miss the support life, but I will always have massive respect for tech support after being on the other side of the phone.

Upgrade Strategies – T-SQL Tuesday #147

This month’s T-SQL Tuesday is about how we look at SQL Server upgrades, hosted by Steve Jones.

T-SQL Tuesday #147

My experience of SQL upgrades is that they tend to be dictated largely by necessity, either of the ‘the Security team is getting really twitchy about these old servers’ or the ‘crap, it’s license renewal time and the vendor doesn’t support x’ variety. I’ve never performed one that wasn’t under some sort of pressure. How do we get here?

At this point I must mention the age-old mantra…

if it ain’t broke, don’t fix it

There’s certainly wisdom in it, to a large extent. Accidentally applied patches and changes have wreaked plenty of havoc on systems across time and space.

The problem occurs when it’s stretched to breaking point.
That Windows 2000 box running SQL 2005 may not be ‘broke’ per se, but;
– people are scared of touching it. We shouldn’t be scared of servers – we should be careful, but confident we can handle them
– no vendor support for either OS or software should something actually break
– it’s probably a security risk
– expert knowledge to support the OS/software is harder to find
– the solution it provides is probably lacking in performance and missing modern resiliency features
– a dozen people probably already said ‘we’ll upgrade it at some point’ but never did

If this mantra is held onto too tightly, before we know it we end up with lots of old solutions that are difficult and costly to support, and dealing with the situation spirals towards ‘unfeasible’.

I feel that management generally veer too far to the conservative side, and this is why we as DBAs and sysadmins face so much tech debt. The overall upgrade process always has risks, but if we’re talking about SQL Server itself, there are just not that many breaking changes – none, for example, for the SQL 2019 database engine.

That said, I’m certainly not a proponent of rushing to the latest major version. There are still too many potentially serious bugs that end up being ironed out in the first tranche of CUs, and I’d rather someone else discovered those if possible. Right now, at the beginning of 2022, unless there’s a specific use case for a feature added in SQL 2019, I’m also in no rush to upgrade something still on 2016/2017 – they’re both mature, stable releases with robust HADR, Query Store support and 4+ years of extended support left.

So, when to upgrade?

Here are three major reasons to upgrade;

  • When a solution is going to be around a while and SQL will go out of support (OOS) during its lifespan. When calculating this, take the quoted remaining lifespan from management and triple it. Additionally, consider the wider estate – the higher the volume of tech debt building up, the quicker we need to get on top of it. (A quick version check to feed this calculation is sketched after this list.)
  • When a solution has problems that an upgrade can really help with. There are loads of potential benefits here, and YMMV on how much benefit is required for a business case – but say you have an old, flaky replication setup that could benefit from Availability Groups. Or a cripplingly slow DW creaking under its own mass that could do with columnstore. Or you have plenty of spare VM cores and RAM, but run Standard Edition and have a resource-maxed system that would happily improve with the increased resource limits of later versions.
  • When you’re upgrading the underlying hardware/OS. There are two ways to look at this – either that we’re already introducing risk with such an upgrade so don’t take extra risk, or that since we’re going through the upheaval of an upgrade we may as well take advantage of it and upgrade SQL as well. I’ll generally take the latter, opportunist view.
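On the first reason above: planning around support dates starts with an accurate picture of what each instance is actually running. A minimal sketch using standard SERVERPROPERTY calls – nothing here is estate-specific – that can be recorded per server and mapped against Microsoft’s published lifecycle dates:

```sql
-- What is this instance actually running? Capture per server and compare
-- against Microsoft's published support lifecycle dates.
SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion, -- e.g. 13.0.x = SQL Server 2016
       SERVERPROPERTY('ProductLevel')   AS ProductLevel,   -- RTM / SP / CU level
       SERVERPROPERTY('Edition')        AS Edition;
```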

How?

Before any other consideration, we need to scope out the work. Is it a small project – a single database/instance backend for a non-critical application with generous downtime? Or a much bigger one – a vast reporting instance with connections coming in from all over the place, or a highly-available, mission-critical system with hardly any downtime? This defines the resource needed – the bigger the impact/reach, the bigger the project, resource allocation and stakeholder involvement need to be.

Upgrades can be in-place or to a new server – the new server option is infinitely preferable as it makes testing, rollout and rollback far easier and safer.

A few best practice tips;

  • Have a technical migration checklist that covers everything SQL Server related – config, logins, credentials, jobs, proxies, etc. Go over the instance with a fine-toothed comb, as there are plenty of odds and ends hidden away. (A minimal inventory sketch follows this list.)
  • Use a test environment and get a solid test plan worked out with both IT and users that covers as many bases as possible. As testing coverage approaches 100%, risk approaches 0%. Note the word ‘approaches’ – it’s never 0%. If we have business analysts/testers who can focus solely on parts of this task, great – they will extract things we would miss.
  • Utilise the Data Migration Assistant to help catch any potential issues.
  • Have a solid rollback plan – and test it.
  • Take the opportunity to document the solution as we go, if it hasn’t already been done.
  • Also take the opportunity to do some cleanup, if scope allows. Unused databases, phantom logins, etc. The less we need to migrate, the better.
  • Decide ahead of time which new features to enable, if any, and why. The same goes for fixing any outdated settings, like bumping CTFP up from 5. You may want/need to just leave everything as-is initially, but you might then have scope for a ‘phase 2’ where improvements are introduced once stability is established.
  • Related to the above point, make use of the ability to keep the compatibility level at its existing setting, if risk tolerance requires it (both this and the CTFP bump are sketched after this list).
  • Post-migration, be sure to run CHECKDB (with the DATA_PURITY option if any source database was created prior to SQL Server 2005), update statistics and refresh views – as in the example below.
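On the checklist point, a minimal inventory sketch of the kind of odds and ends that hide away on an instance – logins, Agent jobs and instance configuration. These are standard system views; the filters are illustrative rather than exhaustive:

```sql
-- Pre-migration inventory: surface things that are easy to miss.

-- Logins (SQL, Windows and Windows groups), excluding system principals
SELECT name, type_desc, is_disabled, default_database_name
FROM sys.server_principals
WHERE type IN ('S', 'U', 'G')
  AND name NOT LIKE '##%';

-- Agent jobs and their categories
SELECT j.name, j.enabled, c.name AS category
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.syscategories AS c ON c.category_id = j.category_id;

-- Instance-level configuration, to diff against the new server later
SELECT name, value_in_use
FROM sys.configurations
ORDER BY name;
```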
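For the compatibility level and CTFP points, the mechanics are trivial – the judgement is in the values. A sketch, with MyAppDb and the numbers as placeholders for whatever your testing supports:

```sql
-- Keep the migrated database on its old compatibility level initially,
-- deferring new optimiser behaviour to a 'phase 2'. 130 = SQL Server 2016.
ALTER DATABASE MyAppDb SET COMPATIBILITY_LEVEL = 130;

-- Bump cost threshold for parallelism up from the ancient default of 5.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;
```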
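And the post-migration step itself looks roughly like this, per migrated database (MyAppDb and dbo.SomeView are again placeholders):

```sql
-- DATA_PURITY forces column-value checks that aren't persisted for
-- databases originally created before SQL Server 2005.
DBCC CHECKDB (MyAppDb) WITH DATA_PURITY, NO_INFOMSGS;

USE MyAppDb;
EXEC sp_updatestats;                 -- refresh statistics for the new version's optimiser

EXEC sp_refreshview 'dbo.SomeView';  -- refresh view metadata; loop over sys.views in practice
```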

The Cloud

PaaS (Azure SQL DB, MI) changes things.

We don’t need to worry about upgrading anymore, because there’s one version of the product – constantly tested and updated by Microsoft. That’s great! – but it also means we’re at their mercy and they could potentially apply a patch that breaks something.

This is simply part of the great cloud tradeoff. We hand over responsibility for hardware, OS and software patching but at the same time lose control of it. We can’t have our cake and eat it, too.

But one thing to be said for surrendering this control: the more we use Azure, the more data Microsoft has to analyse, the more databases they have to test against, and the more reliable the whole process should get.

I think it’s a tradeoff worth making for systems suitable for Azure.

What do you do when technology changes underneath you? (T-SQL Tuesday #138)

This month’s T-SQL Tuesday is hosted by Andy Leonard, who has posed the question ‘What do you do when technology changes underneath you?’

I was going to begin by saying this is a very pertinent question for 2021, but frankly, it has been for pretty much the last few decades. It’s easy to forget that it wasn’t that long ago that mobile phones were, well, actually just mobile phones, and electric cars were science fiction.

At a high level I’d simply quote Leon C. Megginson, paraphrasing Darwin;


“According to Darwin’s Origin of Species, it is not the most intellectual of the species that survives; it is not the strongest that survives; but the species that survives is the one that is able best to adapt and adjust to the changing environment in which it finds itself”

‘Lessons from Europe for American Business’, Southwestern Social Science Quarterly (1963)

Essentially – ‘adapt or die’. Technology will move either with or without you, so you must adapt to it or be left behind.

Practically – as ever, there is a sea of grey between the black and white.

The train ain’t stoppin’, but what if you don’t need to get on it? Or what if you think the train is heading off a cliff? Allow me to stick with the train analogy and split how I handle technological changes into a few broad categories and examples – then discuss the topic from a DBA perspective.

  • Do nothing, because I’m already on the train, at the front, on my 4th Heineken.

SSDs! If I could bold that harder I would. VR. 144Hz monitors. Fitness trackers and health wearables.

This is technology that I’ll wholeheartedly jump into before it’s mainstream. I want the benefits of early adoption and am happy to take the early adopter risk and pay the early adopter tax. Maybe it won’t take off. That’s OK.

  • Jump on the train when it gets here as it’s shinier than I thought it would be

The Cloud. Cryptocurrency.

This is the kind of technology that I’m initially skeptical of due to limitations or other issues, but once it attains a certain level of quality and usefulness, it becomes a no-brainer to get involved in.

  • See the train coming, but don’t get on, because the road is still open and I like driving

‘Smart Home’ stuff.
Windows Vista….

This is change that was easy to predict, but is *not* required or going to be required for work or life in general. Yes, I know you can turn off lights with your phone now. I’m happy to use the switch.

  • Reluctantly get on the train after walking for days because the powers that be have blocked the road to build another trainline

Streaming. Generally, things-as-a-service (except PaaS!). Social media. JSON…

This is change that I’d rather not embrace because of its ramifications, but have to. Do you want to stream 4K TV? You basically need Netflix (or a similar service) at this point, despite their user-unfriendly practices of removing ratings, purposefully obfuscating their library, forced autoplay, etc.
With ‘as-a-service’, control is relinquished when the service hits critical mass.
It has benefits, but also drawbacks.
Another example is videogames – home consoles and physical games aren’t dead yet, but ultimately they will all be digitised. That’ll be a sad day for me, but I get it.
Social media is an unusual one, because originally these were great tools – Facebook for keeping up with friends, Instagram for sharing and exploring themed images via tags. They have unfortunately been engineered into dopamine-devouring, addictive monsters that I’d now rather not grace with an install on principle – but they have a captive market, and the original benefits of the tools largely still exist somewhere in there, so they remain installed.

SQL Server

Here in the MSSQL world, we’re faced with specific, cloud-based technological changes – PaaS and NoSQL.

PaaS services are gobbling up more parts of the traditional DBA day job as time goes by, and businesses and developers are exploring data solutions outside the traditional RDBMS, like the NoSQL offerings of Cosmos DB and others. I believe we broadly have three options for the future;

  1. Do pretty much nothing. Let’s face it, we all know that loads of SQL installs will be around for another decade or two. And stuff that hangs around long enough to become legacy is often legacy for a reason – it’s too important to the business to mess with, for one reason or another. If your retirement is just a decade or so away, this is totally a viable option. You may need some cloud/PaaS supporting knowledge, but I don’t believe you’ll *need* to specialise.
  2. Adapt by expanding your knowledge into cloud infrastructure. OK, Microsoft have taken the details of Windows, backups and SANs away – but now we have RBAC! Performance tiering! VNETs! Geo-replication! And of course, somebody needs to administer CosmosDB too. There’s lots to learn and it’s always growing. Simply replace the stuff you don’t need to remember anymore with shiny new stuff. (But performance tuning isn’t going away – so don’t forget that.)
  3. Adapt by getting into ‘data’. Data engineer, data scientist, data analyst. Maybe you don’t want to stick with the Production DBA route with the changes to the role PaaS brings – this is your chance to switch tack. Try Databricks, learn Python, explore NoSQL – data is only getting bigger, and the world needs more people than ever to actually do something with that data.

    ’til next time!