MongoDB Queues – Why and How We Use Them at FloQast

Platform teams vary from company to company. At FloQast, the Platform team’s mission is to enable product teams to deliver innovative products faster and autonomously by improving the developer experience and reducing inter-team dependencies.

In this article, we will discuss how the Platform team at FloQast utilizes MongoDB to increase developer efficiency.

The Problem

As FloQast continues to grow, we keep creating new engineering teams to deliver more functionality to our customers. One need common to nearly all of these teams is a message queue. Traditionally, we've used AWS SQS as our message queue service.

A typical developer workflow would look like the following:

  • Create a queue
  • Update application code to push messages to the queue
  • Update consumers to consume from the queue:
    • Update the existing monolithic worker service to consume from the queue, or
    • Create a new service and re-implement SQS
  • Update local setup scripts

This worked well while we were a relatively small team, but it has since caused problems with maintainability and duplicated work. We've had to create numerous queues for each of our environments, as well as individual queues for each developer, which means maintaining local setup scripts and documentation (more often than not an afterthought). We were also forced either to keep modifying an existing monolithic worker service or to re-implement SQS consumption in each new service.

Our Solution

The way we’ve chosen to alleviate some of the pain points is by simplifying and abstracting the queue. Developers shouldn’t have to worry about the queue implementation. They only need to know how to add messages to a queue and how to process messages from a queue.

Although we'd already invested in AWS SQS to an extent, we opted to use MongoDB as a message queue instead, for several reasons. First, it lets us avoid maintaining a separate queue for each environment (we have an environment per team, in addition to QA and production) as well as a separate queue per developer (I know, crazy!). We keep per-developer queues so that developers don't consume each other's messages. Second, MongoDB was already available in all of our existing systems. This significantly reduced ramp-up time, as developers no longer needed to familiarize themselves with SQS, modify infrastructure code (we use Terraform), or maintain local setup scripts.

What it looks like in development

As we continue to move towards a microservice-based architecture, we’ve chosen AWS Lambda as our technology of choice for both processing messages from a queue as well as providing the ability to push messages to queues. This removes the need to modify existing services in favor of a separately deployable Lambda. We’ve provided abstractions via Lambda Layers that allow developers to easily add messages to any queue.

The typical developer workflow now looks like this:

  • Update application code to push messages to the queue
  • Write Lambda code to consume from the queue

Adding messages to the queue looks like so:

// Available via a lambda layer
const { addMessage } = require('floqast-queue');

exports.handler = async (event, context) => {
    const queueName = 'test-queue';
    const payload = {
        message: 'Hello World!'
    };
    const messageId = await addMessage({ queueName, payload });
    console.log(`Added message: ${messageId} to queue: ${queueName}`);
};

Lambda code that will run when a message is added to the queue:

exports.handler = async (event) => {
    const {
        message: {
            id,
            queueName,
            payload
        }
    } = event;

    console.log(`Received message: ${id} from queue: ${queueName} with payload: ${JSON.stringify(payload)}`);
};

Lastly, to facilitate the polling and processing of messages, we set up a service that uses MongoDB's atomic findAndModify method to grab the first item in the queue (FIFO) and invoke a Lambda based on a set of rules provided by the developer. As part of the Lambda deployment, a rules.json file must specify both the name of the Lambda and the name of the queue so that they can be synced with this service at deploy time. The rules.json looks like so:

{
    "rules": [
        {
            "queueName": "test-queue",
            "lambdaName": "test-lambda"
        }
    ]
}

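To make the polling step concrete, here is a rough sketch of the claim logic, not our actual service code. The collection layout (a hypothetical messages collection with queueName, status, createdAt, and visibleAt fields) and the buildClaim/pollOnce helpers are assumptions for illustration; it uses the Node.js driver's findOneAndUpdate, which wraps the findAndModify command mentioned above.

```javascript
// Sketch of the poller's atomic claim step (illustrative only).
// Assumed document shape: { queueName, status, createdAt, visibleAt, ... }

// Build the filter/update for claiming the oldest available message.
function buildClaim(queueName, visibilityTimeoutMs, now = new Date()) {
    return {
        filter: {
            queueName,
            // Claimable if pending, or if a previous consumer's visibility
            // timeout has expired (e.g. the Lambda invocation failed).
            $or: [
                { status: 'pending' },
                { status: 'in-flight', visibleAt: { $lte: now } }
            ]
        },
        update: {
            $set: {
                status: 'in-flight',
                visibleAt: new Date(now.getTime() + visibilityTimeoutMs)
            }
        },
        options: { sort: { createdAt: 1 } } // FIFO: oldest message first
    };
}

// One polling pass for a single rule: atomically claim a message, then
// invoke the Lambda named in the rule (invocation elided here).
async function pollOnce(collection, rule, visibilityTimeoutMs = 30000) {
    const { filter, update, options } = buildClaim(rule.queueName, visibilityTimeoutMs);
    // With MongoDB driver v6+, findOneAndUpdate resolves to the matched
    // document (or null); older drivers return { value: doc }.
    const message = await collection.findOneAndUpdate(filter, update, options);
    if (message) {
        // invokeLambda(rule.lambdaName, message) would go here.
    }
    return message;
}
```

Because the claim is a single findOneAndUpdate, two poller instances can never grab the same message, which is what lets the service scale out safely.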
There are additional properties, such as the visibility timeout, dead-letter queue, max concurrency, and max retries, that can also be configured via rules.json but are out of scope for this article.
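Purely to convey the idea, a fuller rules.json entry might resemble the following; the exact property names here (visibilityTimeoutSeconds, deadLetterQueueName, maxConcurrency, maxRetries) are hypothetical, not our real schema:

```json
{
    "rules": [
        {
            "queueName": "test-queue",
            "lambdaName": "test-lambda",
            "visibilityTimeoutSeconds": 30,
            "deadLetterQueueName": "test-queue-dlq",
            "maxConcurrency": 5,
            "maxRetries": 3
        }
    ]
}
```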


We’re not oblivious to the pros of having a dedicated messaging system; however, the maintainability and ease of implementation made MongoDB a good choice for us at this time. We will continue to monitor our systems to see whether MongoDB becomes a bottleneck. Furthermore, due to the way we’ve abstracted the queueing implementation, in the event that this becomes an issue, we can easily swap out MongoDB in favor of a dedicated messaging system without ever having to modify a developer’s existing Lambda code.

Joseph Vu

Joseph is a Senior Software Engineer on the Platform team at FloQast. You can find him making his way through every coffee shop.
