Blackbox Testing: Dealing With Complexity

At FloQast, we offer customers a variety of financial software solutions, with each typically managed by a single team. Recently, the team that manages our amortization product began experiencing an uptick in customer-reported defects and needed to get a handle on the situation before their next major release. My engineering manager asked me if I might be interested in temporarily joining the team and helping out. Being a fan of new challenges, I agreed!

Initial Analysis

After going through the team’s onboarding process, I began reviewing the aforementioned defects, as well as their collection of existing tests. The defect analysis resulted in the discovery that most of the offending changes were small, seemingly harmless backend changes that caused errors not covered by either the existing unit tests or functional frontend tests. Thus, the primary focus of my work became writing functional backend tests.

Plan A – Whitebox Testing

Whitebox testing can be defined as:

A method of software testing where the internal workings of a given system are known, and this knowledge is used to design test cases.

Initially, I wanted to study all of the backend logic in great detail so that I would know exactly how many possible paths there were through the code, and then write tests for each of those scenarios to ensure full coverage of the underlying functionality. A quick cyclomatic complexity check led me to rule out such an approach as impractical, especially given the short amount of time I was supposed to be with the team. Calculating this was fairly easy using the ESLint complexity rule and the following configuration:

// .eslintrc.js

module.exports = {
  env: {
    commonjs: true,
    es6: true,
    node: true,
    jest: true
  },
  parserOptions: {
    ecmaVersion: 'latest'
  },
  rules: {
    complexity: ['error', 12]
  }
};
Unfortunately, it seemed that most of the commonly shared backend code had fairly high code complexity (> 15). I had to find another way.
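To give a sense of what the rule flags, here's a hypothetical, much-simplified sketch of the kind of branching that drives cyclomatic complexity up (the function name and shape are invented for illustration, not taken from our codebase). Every if, else, and logical operator adds another path a whitebox test suite would need to cover, and nested conditionals multiply them quickly:

```javascript
// Hypothetical sketch: each decision point below adds to cyclomatic
// complexity. A real helper scoring 15+ would have far more branches,
// but the pattern is the same.
function nextPeriodAmount(schedule, options = {}) {
  // Guard clause: two decision points (the negation and the ||).
  if (!schedule || schedule.remaining <= 0) {
    return 0;
  }
  let amount = schedule.remaining / schedule.periodsLeft;
  // Optional rounding: another path.
  if (options.roundToCents) {
    amount = Math.round(amount * 100) / 100;
  }
  // Optional floor: two more decision points (the if and the &&).
  if (options.minimum && amount < options.minimum) {
    amount = Math.min(options.minimum, schedule.remaining);
  }
  return amount;
}

console.log(nextPeriodAmount({ remaining: 1200, periodsLeft: 12 })); // 100
```

Each new option or guard like this compounds the number of scenarios, which is why exhaustively path-testing the shared backend code wasn't feasible in the time available.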

Plan B – Blackbox Testing

Blackbox testing can be defined as:

A method of software testing where the inner workings of a given system are not known, and so we have to rely on observable behavior to design our test cases.

Since the code that needed to be tested the most was quite complex, and there wasn’t enough time to refactor the offending code, I decided to go with a blackbox testing approach over my initial whitebox testing plan. The testing focus shifted from learning how the system worked, to what the system actually does. For example, instead of obsessing over how an amortization schedule is updated, it was easier to actually perform the updates myself and record the outputs of the system. Thus, I met with the developers to learn more about the most common operations performed by our customers, and started to study the individual backend request/response pairs that were being fired off by the front end. Assuming current behavior was correct behavior, I was able to start writing functional backend tests quickly. While it’s hard to guarantee complete coverage via blackbox testing, I was at least able to capture a large amount of current behavior, and therefore catch any minor deviations from that in the future.


4 months and ~100 functional backend tests later, defects have decreased significantly – I joined the team at a time when ~70% of the tickets created were for addressing defects, and that amount has since decreased to ~7%. While I can’t take all of the credit for this great success, there were multiple occasions where the functional backend tests were able to catch very subtle errors before we released our changes to customers. The end result of all of this work is a team that can deliver more value to our customers with increased confidence. That’s a win in my book.

Daniel Vargas

Daniel is a Senior SDET at FloQast with a passion for automating everything! Outside of work, he enjoys spending time with his family and playing retro video games.
