Travis Howerton

CI/CD - Get the DBA Out of the Way

Updated: Feb 6, 2020



At C2 Labs, we are major proponents of DevSecOps and of using Continuous Integration (CI) and Continuous Deployment (CD) tools to empower developer productivity. Ideally, a developer should have all the tools they need to do their job, be able to push new code at will, and trust that it will be built, tested, and deployed to Production with no manual intervention at any step of the process.


We realize that for many companies this sounds more like science fiction than reality. However, we are using these tools today within our R&D division and across client sites to deploy code to Production multiple times per day in Microsoft Azure, using Azure DevOps as our CI/CD tool. We demo this process to clients, show them how to build a Dockerfile to containerize their app, and let them watch the CI/CD tool move the code across the DEV, QA, and PROD environments. We discuss the benefits of containers, why stateless applications are the future, how you can auto-scale in Kubernetes (K8s), and many other benefits for developers. Yet without fail, there is always the one DBA in the room who has to make a snarky comment: "All that CI/CD stuff is great for the app, but what about my database?"


The challenge is generally the same. The DBA has 20+ years of experience and points out that no tool can do everything they do on a daily basis. They note that while containers are stateless, the databases those containers rely upon are not. How do you keep the database in sync with all of these automated changes? Their point, in short, is that developers can't live without the DBA, so the deployment process will remain manual.


While their comments are largely true, and DBAs will continue to provide valuable services to developers in optimizing database performance, backups, and other related tasks, it is our opinion that DBAs should not be necessary to support code deployments. These types of comments (you will also get them from infrastructure people with similar arguments) come from the haters who fear that these tools may replace their jobs, that the skillset necessary to use them is beyond what they can currently support, or that it is all just plain make-believe and not even possible. While there are challenges in supporting databases and related changes in a CI/CD environment, they are truly just challenges, not impossible obstacles. To these haters, we say challenge accepted :)


The rest of this blog post will walk you through a use case of how we integrated database updates into our CI/CD pipeline and overcame these challenges to empower our developers to be more productive and to push their code to PROD with no fear it will break the database or cause data loss. Like most hard problems in IT, the answer starts with good architecture. This blog is based on the software engineering work of Jed Thornock, a C2 Labs Senior DevOps Engineer.


Our application was designed using multiple components that would allow both scale and advanced application automation. These tools included:


- Angular (latest, 8.2) - for Single Page Application (SPA) front end code

- C# on .NET Core 3.1 - APIs, middle tier, and business logic

- SQL Server (Azure PaaS) - scalable back-end using Entity Framework with Code First (see the sketch after this list)

- Docker - for containerizing the application

- Kubernetes - for scaling the application, deployment, and improved resiliency

- GitHub - source code management
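
Since the database story below hinges on Entity Framework's Code First pattern, here is a minimal sketch of what that looks like in C#. The entity, context, and environment variable names are hypothetical, for illustration only:

using System;
using Microsoft.EntityFrameworkCore;

// Hypothetical entity: with Code First, the schema is generated from
// classes like this rather than written by hand in SQL.
public class Project
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<Project> Projects { get; set; }

    // Illustrative: pulling the connection string from the environment lets
    // the same container image target the DEV, QA, or PROD database.
    protected override void OnConfiguring(DbContextOptionsBuilder options)
        => options.UseSqlServer(
            Environment.GetEnvironmentVariable("SQL_CONNECTION_STRING"));
}

Each model change is then captured as a migration (dotnet ef migrations add <name>), and it is that chain of migrations the pipeline later turns into a SQL upgrade script.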


Our development is done using a fairly involved Git branching strategy. In practice, that means CI/CD jobs are connected to Pull Request events in GitHub, and each event triggers the corresponding pipeline to run in Azure DevOps. A high level overview of our CI/CD pipeline is shown in the graphic below:
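
As a rough illustration of that wiring, an Azure DevOps YAML pipeline can subscribe to both branch pushes and Pull Requests; the branch names here are illustrative rather than our exact setup:

# Hypothetical excerpt from azure-pipelines.yml
# CI runs when code lands on a protected branch...
trigger:
  branches:
    include:
      - dev
      - master

# ...and a validation build runs when a Pull Request targets one of them.
pr:
  branches:
    include:
      - dev
      - master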



This set of pipelines gives a developer full autonomy: they code their functionality in a feature branch, merge it, and automation provides the testing, security results, deployments, and other supporting work. One of the most complex pieces of this automation was getting the database update script to work. Below is a screenshot of the production deployment pipeline that includes the DB script:


This pipeline consists of a multi-stage build (sketched after the list below) where:


1) Container is built and pushed to our registry

2) SQL migration script is built, stored, and executed to deploy changes to the QA database

3) Container is deployed to the Azure Kubernetes QA environment for final testing before the PROD deploy
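
A skeletal version of such a multi-stage pipeline might look like the following; the stage, job, script, and variable names are illustrative, not our actual definition:

# Hypothetical multi-stage sketch mirroring the three steps above
stages:
  - stage: Build
    jobs:
      - job: BuildAndPushImage
        steps:
          - script: |
              docker build -t $(REGISTRY)/app:$(Build.BuildId) .
              docker push $(REGISTRY)/app:$(Build.BuildId)

  - stage: Database
    dependsOn: Build
    jobs:
      - job: UpdateQaDatabase
        steps:
          - script: ./run-db-migration.sh   # expanded later in this post

  - stage: DeployQA
    dependsOn: Database
    jobs:
      - job: DeployToAks
        steps:
          - script: kubectl apply -f k8s/qa/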


The magic happens inside step 2. By leveraging the power of Entity Framework, we can pull all of the migration data for the database and programmatically build an upgrade script from the command line. This happens across multiple tasks, as shown in the screenshot below:


For this script to work, we perform multiple actions (a condensed sketch follows the list):


1) Create a container to run the job

2) Pull the latest code from GitHub with the directory of Entity Framework migrations

3) Install the current version of .NET Core into the container

4) Install the .NET Entity Framework tools

5) Run a command to generate the SQL Script

6) Run the script to update the database

7) Publish the script to an Azure Blob for audit traceability later if something goes wrong

8) Tear down the container and wrap up
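
A shell sketch of steps 4 through 7 follows; the file names and environment variables are assumptions for illustration, and the remaining steps (container setup, code checkout, SDK install, and teardown) are handled by the surrounding pipeline tasks:

#!/usr/bin/env bash
# Hypothetical sketch of steps 4-7; names and variables are illustrative.
set -euo pipefail

STAMP=$(date +%Y%m%d%H%M%S)
SCRIPT_NAME="migration-${STAMP}.sql"

# 4) Install the .NET Entity Framework command-line tools
dotnet tool install --global dotnet-ef
export PATH="$PATH:$HOME/.dotnet/tools"

# 5) Generate a single idempotent SQL script covering every migration
dotnet ef migrations script --idempotent --output "${SCRIPT_NAME}"

# 6) Run the script against the target (QA) database
sqlcmd -S "${SQL_SERVER}" -d "${SQL_DB}" -U "${SQL_USER}" -P "${SQL_PASSWORD}" -i "${SCRIPT_NAME}"

# 7) Publish the script to Azure Blob storage for audit traceability
az storage blob upload \
  --account-name "${STORAGE_ACCOUNT}" \
  --container-name migration-scripts \
  --name "${SCRIPT_NAME}" \
  --file "${SCRIPT_NAME}"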


All of these steps happen automatically when the Pull Request (PR) is made to merge into Master. This step allows the person approving the PR to review the change in the QA environment prior to approving the merge. The approach automates what used to be a manual and painful task, results in repeatable and higher quality testing, and improves governance through the workflow approvals and code review processes in GitHub.


The key command to generate the SQL upgrade script is as follows:
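
With Entity Framework Core's tooling, that command takes roughly the following shape; the idempotent flag and the timestamped filename are illustrative assumptions based on the behavior described below:

# Illustrative form only; exact flags and naming may differ from our pipeline
dotnet ef migrations script --idempotent --output "migration-$(date +%Y%m%d%H%M%S).sql"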

The script's filename is timestamped to assist with tracking changes to the database over time. In the screenshot above, you can also see that the script is stored as a pipeline artifact, so you can see exactly what happened on any given build. The result is that the developer just needs to merge their code into a protected branch (DEV or Master) and the pipeline automates all of the other steps associated with testing and deploying, including the database updates!


This approach has multiple benefits, including:


1) Developer productivity - developers only have to worry about code

2) Quality - all of the manual steps are removed, resulting in higher quality outcomes. In addition, since everything runs in a container, the environment always matches precisely between DEV, QA, and PROD.

3) Governance - as the manager approving the PR, you can see exactly what changed in the code, review the testing results and security scans, and know exactly what will happen to the database before you approve the PR.

4) Lower costs - since the manual supporting work is removed, the labor costs of testing and deployment are effectively zero once the automation is in place.


At C2 Labs, we believe in automating everything to maximize developer productivity. By deploying the latest techniques in DevSecOps, we can rapidly iterate application features to solve real world business problems without sacrificing quality or security. If you are interested in how you can improve your DevSecOps program or just don't know how to get started on your journey, Contact Us today for a free consultation.



