Practical Microservices Testing Strategies
Apr 13, 2020 | By Saurabh Deshpande
As we migrated from a monolithic to a microservices architecture, we made substantial changes to our tooling and processes to support multiple high-quality software releases per day.
At FloQast, we started off with a monolithic software architecture, but as the number of products and scrum teams grew, we began transitioning to a microservices-based architecture. Initially, during this transition, we continued to use the single shared QA environment (a holdover from the monolith era) for end-to-end testing prior to deployment to production. As the number of microservices grew, we ran into bottlenecks in that QA environment: a bug found there cost a few hours in the fix-deploy-verify cycle and held up upcoming releases for the same or dependent services. This was primarily because QA was the first place a service was deployed into a production-like environment, running alongside other services and exposed to more realistic test data.
To resolve this bottleneck, we re-examined the whole release process, infrastructure, and tooling, and made a few improvements, shown in the picture above and described below:
Introduced local dev environments
As most of our microservices are Dockerized, our dev team used Docker Compose to build local dev environments containing a majority of our microservices. For services built on AWS offerings such as Lambda, we used the local testing tools AWS provides, such as SAM Local. This allowed devs to test their code changes locally, in an integrated way, before creating pull requests.
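As an illustration, a pared-down Compose file for such a local environment might look like the following. The service names, ports, and image versions here are hypothetical, not FloQast's actual configuration:

```yaml
# docker-compose.yml — hypothetical local dev environment sketch
version: "3.8"
services:
  web-app:
    build: ./web-app
    ports:
      - "3000:3000"
    environment:
      # Point the app at the locally running dependency
      - AUTH_SERVICE_URL=http://auth-service:4000
    depends_on:
      - auth-service
      - mongo
  auth-service:
    build: ./auth-service
    ports:
      - "4000:4000"
  mongo:
    image: mongo:4.2
    ports:
      - "27017:27017"
```

With a file like this, `docker-compose up` brings up the service under development together with its immediate dependencies, so integration issues surface before a pull request is opened.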
Built AWS test environments per Scrum team
At FloQast we are all in on AWS, and our DevOps team had already started managing all AWS infrastructure as code with Terraform. For teams that were hitting bottlenecks in the shared QA environment, DevOps created production-like environments using Terraform. To keep costs under control, we built these environments with only the subset of services each team needed, on smaller instance sizes than production. We then set up scheduled Jenkins jobs to refresh the services daily with the branch deployed in production.
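A per-team environment like this can be expressed as a small Terraform module invocation. The module path, variable names, and instance type below are illustrative assumptions, not FloQast's real infrastructure code:

```hcl
# Hypothetical per-team test environment, sized down from production
module "team_alpha_env" {
  source      = "./modules/test-environment"
  environment = "team-alpha"

  # Only the subset of services this team needs
  services = ["auth-service", "reconciliation-service"]

  # Smaller instances than production to keep costs under control
  instance_type = "t3.small"
}
```

Because the environment is a module, standing up another team's copy is a second module block with different inputs rather than a hand-built environment.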
Configured UI/API tests to run in multiple environments
Once we had these per-team test environments, we modified our UI/API test configurations so the same suites could run successfully against multiple environments. This involved figuring out a reliable way to populate and maintain test data in each environment.
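One minimal way to make a test suite environment-aware is to resolve the target environment's base URL and test-data set from a single configuration map, keyed by an environment variable. Everything in this sketch (the `TEST_ENV` variable, the environment names, and the URLs) is a hypothetical assumption, not FloQast's actual setup:

```python
import os

# Hypothetical registry of test environments: the shared QA environment
# plus the per-team environments, each with its own base URL and
# seeded test-data namespace.
ENVIRONMENTS = {
    "qa":         {"base_url": "https://qa.example.com",         "data_set": "shared"},
    "team-alpha": {"base_url": "https://team-alpha.example.com", "data_set": "alpha"},
    "team-beta":  {"base_url": "https://team-beta.example.com",  "data_set": "beta"},
}

def get_test_config(env_name=None):
    """Resolve the target environment from TEST_ENV, defaulting to shared QA."""
    name = env_name or os.environ.get("TEST_ENV", "qa")
    if name not in ENVIRONMENTS:
        raise ValueError(f"Unknown test environment: {name}")
    return ENVIRONMENTS[name]
```

A test runner would then be invoked as, say, `TEST_ENV=team-alpha pytest`, and every test builds its request URLs from `get_test_config()["base_url"]` instead of a hard-coded host.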
Optimized the test stage of each microservice’s CI/CD pipeline
Microservices are supposed to be independently deployable and testable. So we looked at each microservice’s functionality and selected, from our main UI/API test automation suite, the tests that applied to the specific microservice under test. This further shortened the feedback cycle for bug detection in test environments.
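The selection step can be sketched as a simple tag-based filter: each test in the main suite declares which services it exercises, and the pipeline stage runs only the subset that matches the service being deployed. The test and service names below are made up for illustration:

```python
# Hypothetical main UI/API suite, with each test tagged by the
# microservices it exercises.
FULL_SUITE = [
    {"name": "test_login_flow",         "services": {"auth-service"}},
    {"name": "test_reconciliation_e2e", "services": {"reconciliation-service", "auth-service"}},
    {"name": "test_checklist_update",   "services": {"checklist-service"}},
]

def select_tests(service):
    """Return the subset of the suite that covers the given microservice."""
    return [t["name"] for t in FULL_SUITE if service in t["services"]]
```

A deploy pipeline for `auth-service` would then run only the login and reconciliation tests, rather than the full suite, shortening the feedback loop for that service.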
Over the past two years, as we put these changes in place, bottlenecks in the shared QA environment dropped significantly, and we gradually went from a weekly release cycle prior to May 2018 to deploying code more than five times a day in March 2020 (see the chart below).
To derive full business value from a microservices migration, take a close look at every stage of your software release process. Areas to explore include introducing local dev environments, building per-team cloud-based test environments, and adjusting your UI/API test automation. These changes take time, but once they are in place, you start to reap the rewards of multiple high-quality releases per day!