The series at a glance:
After reading part 1 of this series, you were [hopefully] motivated to step up your testing work. Encouragement, yes!! From part 2, you learned the importance of knowing where you are and where you want to be. This time around, we’re going to illustrate where to spend your time and effort next.
WARNING: I still haven’t located a decent automated run-on sentence extractor!
Get out of the manual mindset
I’ve seen it dozens of times. Developers get so accustomed to their 9-keystroke shortcuts that they don’t bother looking for a better way. How many times have you shoulder-surfed a co-worker and told them something like this:
It’s simple. All you have to do is:
- Open SSMS.
- Open Visual Studio.
- In SSMS, open the query window.
- Press Ctrl+U, press the down arrow twice, then press Enter.¹
- Select the first 2 queries (Ctrl+Home, followed by 5 to 7 Shift+Down Arrow).
- Press F5.
- Alt+Tab; now you should be in Visual Studio.
- Open the 5th saved schema compare in the “OurComparisons” folder.
- …on and on and on until you’ve run the comparison and placed the final result into source control.
¹ In other words, how to select your database in SSMS without using the mouse.
I’ve done it. Now that I’ve admitted to it, it’s ok for you to do the same. We all know you mean well. But after the 2nd or 3rd new member of the dev team you do this with, you start to wish you could say “run these 2 scripts after every edit, that’ll run the tests and save successful code to source control.”
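That "run these 2 scripts" wish can be a small reality. Here's a minimal sketch of such a wrapper, assuming a hypothetical test-runner command passed in as the first argument and an SVN working copy; the echoed messages and the stubbed commit step are placeholders, not any specific toolset's output:

```shell
#!/bin/sh
# Hypothetical wrapper: run the test suite, and only commit if it passes.
# The test command ($1) and the commit step are placeholders.
run_and_commit() {
    # Run the tests first; bail out before touching source control on failure.
    if ! "$1"; then
        echo "tests failed - nothing committed"
        return 1
    fi
    echo "tests passed - committing"
    # svn commit -m "green test run"   # the real commit step, stubbed here
    return 0
}
```

New team members then learn one command instead of a keystroke ritual.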
Where do you start?
This is where we pick a task and focus. The path you’re on is all about momentum; kind of like me writing this series. Once you commit, it’s time to get in and do it.
Only you know what motivates you best. However, I'd suggest starting with the easier and/or shorter tasks. Nothing beats a quick, public success for gaining some momentum. It doesn't hurt your chances for that cash bonus, either.
Will it be enough to get you over the finish line? No. But it will be enough to get you onto the next task. We’ll tackle “done”-ness in part 4, so don’t get ahead of the task at hand.
Let’s take a piece of the Development Process, Hero Edition from the previous post. You’ve managed to get a SQL Server instance dedicated to your Continuous Integration (CI) build testing, but you still manually push db changes to that CI server when you’re ready. For this exercise, let’s say you’re using the Red Gate SQL Developer Bundle for your deployment and SVN for your source control. These same concepts apply even if your toolset of choice is different.
What you need to do is turn this:

- Setting the reference and target for each execution manually.
- Double-checking the settings, if you’re using saved comparisons.
- Trying to remember what the state of the SVN repo was at the time you ran the comparison.
- Sometimes forgetting to deploy your latest tweak to the CI db server in time for the next CI pass to use it.

…into this:

- Setting the reference and target one time. In the case of the source code (reference) side, dynamically by version tag.
- Establishing the comparison settings one time in a config file that you also add to source control.
- Saving the output reports to know what the state of your coding world was when the comparison was executed.
- Never having to remember to deploy something to CI. If you added it to source control, it’ll make its way to the CI server when the build script runs.
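To make that concrete, the whole "set it once" comparison can collapse into a single command line. This is a hedged sketch from memory of the Red Gate SQL Compare CLI; the server, database, tag, and paths are placeholders, so verify the switches against your version's documentation:

```
sqlcompare /Scripts1:"C:\ci\svn\db\tags\v1.4" /Server2:CISQL01 /Database2:AppDb /Synchronize /Report:"C:\ci\reports\v1.4-compare.html" /ReportType:Html
```

Those switches can also live in an XML argument file (passed via /Argfile) that you check into source control, which is exactly what lets the settings double as documentation.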
I spent a lot of time working in the first setup above. I’ve been there, done that, and bought the t-shirt. It seemed good enough, so why bother? Well, how do you think I knew to make that list in the first place? They were all details that eventually became frustrating or mind-numbingly repetitive. Sometimes both.
Where I really started to get peace of mind was when we started comparing from a script. Sure, it meant learning the command-line config file syntax for SQL Compare. It also involved learning enough NAnt to call the command line from CruiseControl.NET. Then I had to put the commands in place for CC.NET to pull the latest version of the database code tree from SVN so the comparison command could use it. Naturally, I had to tweak some of those settings to account for the subtle behaviors the tools have when you’re not running them via the GUI.
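The NAnt side was thinner than it sounds. Reconstructed from memory, with the target names, properties, and paths all placeholders, the glue amounted to something like:

```xml
<!-- Hypothetical NAnt target: after the SVN update pulls the tagged db
     scripts, run the comparison using the checked-in argument file. -->
<target name="deploy-db-ci" depends="svn-update">
  <exec program="sqlcompare.exe">
    <arg value="/Argfile:${build.dir}\compare-settings.xml" />
  </exec>
</target>
```

CC.NET then just invokes the NAnt target on each build after its own source-control update of the database code tree.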
That’s a lot of work. Are you sure it was worth it?
You know what? It was all worth it.
- We no longer had to remember to deploy something that was checked in.
- The schema and static data comparisons were filter-based. If we added a new schema or a new table that held static/lookup data, the most we had to change was a single compare tool config file and place it in the right location on the build server.
- We didn’t have to guess at the settings used for the comparisons. The values in the config files served as that documentation.
- No more errors from magic keystroke combinations.
- New team members didn’t have to learn much about our deployments at first. They could ease their way into understanding the full process. Also, changes to the CI deployment infrastructure were rare once this was in place.
- We could get to the output of each comparison, including individual T-SQL change scripts, in case we had to troubleshoot a problem on the target databases. Yes, this could be produced manually. However, if anyone told you they remembered to produce the reports and scripts 100% of the time, you should stand back. Their nose is about to grow.
That was an example of just one piece of the puzzle: moving from manual GUI-based deployments to scripted db deployments via CI.
Now, keep going with this exercise. Replace manual source code edits with a tool that will track working copy vs source control deltas. Script the execution of your tests. As you get closer to the Hero Edition of your process, the overall picture will likely look closer to the following table. The items are listed in no particular order.
| Tedium Edition | Hero Edition | Tools Involved |
| --- | --- | --- |
| Copy & paste db object comparisons | GUI & script per-object comparisons (whole db) | Red Gate SQL Compare; Visual Studio Schema Compare |
| Ad hoc T-SQL scripts to verify data values | GUI & script per-table data comparisons (whole db) | Red Gate SQL Data Compare; Visual Studio Data Compare |
| Hard-coding function and sub-proc return values (“only uncomment when testing”) | Test your code without altering it² | tSQLt Dependency Isolation; Visual Studio SQL Unit Tests |
| Carefully crafted, manually maintained ad hoc scripts for test sequencing | Testing framework | tSQLt (+ Red Gate SQL Test GUI); Visual Studio SQL Unit Tests |
| Manual deployment of T-SQL code from source control to the CI db server | Scripted deployment of T-SQL code from source control to the CI db server | Red Gate Schema & Data Compare CLI; Visual Studio DACPAC deployment |
| You emailed or walked over to your teammate(s) to let them know your code was ready | Anyone who cares can check the CI build status | Team Foundation Server, and so on, and so on… |
| Run ad hoc scripts on multiple servers – one by one – to compare values | Run the ad hoc queries on multiple databases at the same time and get a consolidated result set | SSMS Server Group; Red Gate SQL Multi Script |
| Manually edit the source-controlled db object scripts | Use a GUI to show you where your working copy and source control differ | Red Gate SQL Source Control; Visual Studio + standard source control integration |
² Note that there are some shenanigans going on in the background, so it’s not true dependency injection. It is still better than (a) uncommenting “debug only” commands or (b) adding more conditionals and using a boolean input parameter named “debug”.
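To make that footnote concrete, here is roughly what the isolation looks like in tSQLt. The objects (dbo.GetTaxRate, dbo.ComputeTotal) are invented for illustration; tSQLt.FakeFunction swaps in your stub for the duration of the test and restores the real function afterwards:

```sql
EXEC tSQLt.NewTestClass 'OrderTests';
GO
-- Hypothetical stub; must match the signature of the real function.
CREATE FUNCTION OrderTests.Fake_GetTaxRate() RETURNS DECIMAL(4,2)
AS BEGIN RETURN 0.10; END;
GO
CREATE PROCEDURE OrderTests.[test ComputeTotal applies the tax rate]
AS
BEGIN
    -- No edits to dbo.ComputeTotal, no "debug" parameter: the dependency
    -- is replaced underneath it for this test only.
    EXEC tSQLt.FakeFunction 'dbo.GetTaxRate', 'OrderTests.Fake_GetTaxRate';

    -- Assumes ComputeTotal returns amount * (1 + dbo.GetTaxRate()).
    DECLARE @actual MONEY = dbo.ComputeTotal(100.00);

    EXEC tSQLt.AssertEquals @Expected = 110.00, @Actual = @actual;
END;
```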
Most of those links go to feature-specific documentation and blog posts. It’s not a coincidence that Red Gate / Simple-Talk shows up on a LOT of Google searches on these topics. They’ve literally written the book on SQL Server team development.
It’s mentioned in the disclaimer at the bottom, but I want you to notice that for every item in the Hero Edition, you can probably conceive of what it would take to perform that work using programs you write on your own. But I really want you to consider using the tools that are out there. It’s not that you can’t do it on your own; it’s whether you should.
I have intentionally left out the topic of deployment managers because that deserves more than a 1-line summary here. Also, the commercial ones are not well-priced for small shops with more than a handful of servers. Many of them know all about deploying DLLs/JARs, but don’t have a clue about T-SQL.
Unless something strange is going on, your Continuous Integration (CI) environment is testing the db as if it were incrementally deployed from commit to commit. But that’s not what you’re going to do in the real world. When it counts, your DBA and ops team will upgrade from one tagged version to the next tagged version. You should be testing that way, too.
You’ll need another set of servers to hold your db and app code. The server head count will be the same as or similar to your main CI environment. The overall process is the same between the two, but the Big Distinction™ for the CI version-to-version (CIVV) is that before each test run, you’re going to revert your database and application code back to the “starting” version your test scenario calls for. This could be either (1) a database backup restore or (2) a scripted teardown + initialization. There is no right answer on that one. However, I am a believer that if you have the luxury of restoring from a production db backup (even if you have to anonymize it), that is a better test of your deployment.
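For the restore flavor, the reset step can be a single sqlcmd call at the top of the build; the server, database, and backup path below are placeholders:

```
sqlcmd -S CISQL01 -b -Q "RESTORE DATABASE AppDb FROM DISK = N'D:\backups\AppDb_v1.3.bak' WITH REPLACE"
```

The -b switch makes sqlcmd exit with an error code when the batch fails, so the build stops instead of deploying onto a half-restored database.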
Remember that if your CIVV is restoring production databases, you’ll probably need to script the replacement of the production user accounts with role memberships for CI user accounts. Your use of SQL Server contained databases can help you here. Also be aware that you may need to remove personally identifiable information (PII) from your restored data. If you’re in a privacy-conscious field like healthcare or finance, you’re probably already aware of any more stringent regulations that apply to you.
If you want to take this idea to the Master of the Universe level, you could also create a meta-script or meta-config that lets you specify the “from” and “to” version you want to test with minimal manual work.
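A sketch of that meta-script, assuming tags named like v1.3; everything here, from the tag scheme to the echoed steps, is a placeholder for your real restore, compare, and test commands:

```shell
#!/bin/sh
# Hypothetical version-to-version driver: echoes the plan instead of
# executing it; swap each echo for the real restore/deploy/test call.
civv_run() {
    from=$1
    to=$2
    echo "restore CI database at baseline $from"
    echo "deploy schema + data delta $from -> $to"
    echo "run test suite against $to"
}
```

Calling `civv_run v1.3 v1.4` prints the three steps for that hop, and the same driver can walk multi-version upgrades by chaining calls.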
What makes this extra work worth it? That’s easy…
First, you’re now testing both your code and your deployment. Think about it. Even if you have dev, CI, QA, UAT, stage, and production environments, you really only get 3 chances to test the deploy (QA, UAT, & stage) before you have to do it for real. I’m sorry to say it, but most shops only have 1 of those 3. So now, you’re really walking a deployment tightrope.
When this CIVV runs, you get a whole-version db deployment script. You can then let this CIVV test that script under as many conditions as you wish until you’re comfortable that it’s release-ready. You can even duplicate the CIVV for a performance test environment to check how your deployment will affect the system under a production workload. I’m not going to pretend that even 5% of companies out there have a test environment with production-level hardware and can simulate live load. But you could at least test performance on comparable hardware with a comparable application load.
Second, because you automated your CIVV, you should also now have the tools you’d need – at least the building blocks – to let QA, UAT, and stage roll back a version should their deploy go haywire. You could even use it for production rollbacks, provided the version rollback scripts allow for a “protect the existing data” mode.
Does it ever end?
Tune in for the fourth installment in this series to find out when you can start to uncork the champagne.
FULL DISCLOSURE: Because I have been a user of the Red Gate SQL Developer bundle in the past, I am somewhat fond of their way of doing things. I am also glad to see that the Microsoft tools related to SQL Server Projects (with SSDT) have come a long way since their introduction as Database Projects. It doesn’t even need a separate install as of Visual Studio 2013.
Of course, you could accomplish things like schema/data compare, source control tracking, etc. with free or roll-your-own solutions. Many people have successfully written utilities that use SMO to generate scripts and various methods of comparing the output. However, it is fair to say that the functionality provided by the packaged tools can save money, even in the short-term. No matter what you end up with, just make sure you’re not spending a dollar to save a dime.