Integrating SIEM With CI/CD
We have a scalability-focused outlook on security here at FloQast. We’re fanatics about automation and Jenkins is already being used all over the place to automate mundane tasks, so why not extend this automation to our SIEM (Panther)?
A brief definition to make sure that we're on the same page: a SIEM is a tool that pulls data from many sources, analyzes that data, and triggers alerts or corrective actions based on defined rulesets. SIEMs are often used to identify and investigate suspicious activity.
A quick note: Panther has official documentation on setting up CI/CD with CircleCI; however, we’re already in a happy relationship with Jenkins so we decided to venture out and build our own solution.
Regardless of the exact tooling, there are tons of benefits to having a CI/CD pipeline for your SIEM. One aspect that we feel strongly about is the set of advantages you get from leveraging “detections as code.” Page wrote a great blog post about DaC that can be seen here. Beyond that, there is great value in having a standardized process for interacting with systems. Standardization yields a cleaner environment with less human error, which makes understanding and using the system a breeze. As an automation freak myself, what I find even more exciting is the prospect of taking the automation up another notch: if all inputs follow a certain convention, then we can continue to add more and more layers of automation as needed. This is the dream: inception, but for automation.
What’s the Point?
Before getting into how we integrated Panther into Jenkins, an understanding of our desired outcomes is needed. We had the following criteria for what we would consider a worthy integration:
- Any addition or modification of detections will be reviewed by another security member before deployment into production
- Testing will be done on all applicable detections
- Version history will show what detections were enabled at any point in time
- The process needs to be user-friendly and easily teachable
Source of the CI/CD Pipeline
Panther comes with some nice pre-built detections in their public panther-analysis repo. We created a private fork of this repo and added a directory named floqast where we can begin writing our own detections. Forking the repo is important because it means that we can periodically fetch changes from the upstream repo to stay up-to-date with the awesome detections that Panther provides.
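For illustration, that sync might look something like this from inside the fork (the remote name upstream is a convention we chose here, and the upstream branch name may differ):

# One-time setup: point a remote at Panther's public repo
git remote add upstream https://github.com/panther-labs/panther-analysis.git

# Periodically: pull in Panther's newest detections
git fetch upstream
git merge upstream/main    # review and resolve any conflicts, then push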
This private repo is effectively the “source” of our CI/CD pipeline. We then added webhooks to this repo that fire off Jenkins jobs based on certain conditions:
- If a pull request is opened against the main branch, and the feature branch ends with -security-eng (see the sketch after this list), then…
- Trigger job in dev Jenkins that tests all detections that show up in the PR
- If a pull request is merged to the main branch, then…
- Trigger job in prod Jenkins to upload all detections to our Panther instance
- Send Slack alert to confirm that upload was successful
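To make the first condition concrete, here is a minimal sketch of the branch-name guard, assuming the PR's source branch is exposed to the job as CHANGE_BRANCH (as Jenkins multibranch pipelines typically do):

# Only run detection tests for branches that follow the naming convention
if [[ "${CHANGE_BRANCH}" == *-security-eng ]]; then
    echo "Security-eng branch detected: running detection tests"
else
    echo "Branch does not end with -security-eng: skipping"
    exit 0
fi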
Here are the stages of our pipeline that test or upload detections and schemas, depending on which webhook fired:
stage("upload schemas") { when { expression { env.FQ_JENKINS_ENV == 'production' } } steps { script { sh("panther_analysis_tool update-custom-schemas --path floqast-schemas") } } } stage("upload or test detections") { steps { script { pwd if (env.FQ_JENKINS_ENV == 'production') { sh("${WORKSPACE}/jenkins/test-or-upload.sh ${params.FOLDER} upload") } else if (env.FQ_JENKINS_ENV == 'development') { sh("${WORKSPACE}/jenkins/test-or-upload.sh ${params.FOLDER} test") } else { sendNotifications('FAILURE') } } } }
That gives an idea of when the automation is triggered, but what does it look like from the perspective of a security engineer?
As an engineer, you create your detections in a feature branch ending with -security-eng and open a PR. The job to perform testing against the detections kicks off automatically, and you paste the successful output into the PR template. Engineers also have the option to manually trigger the testing phase in Jenkins, without opening a PR. This is helpful when making large changes, or introducing a new class of detections. With the testing evidence on hand, you then request another member of the security team to confirm that the detections look good and that the tests passed. If all is well, the coworker will approve and merge the PR — the prod Jenkins job will be triggered and upload the new detections into Panther!
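From the engineer's terminal, that convention boils down to something like the following (the branch name and commit message are illustrative):

# The -security-eng suffix is what makes the webhook fire the test job
git checkout -b okta-admin-alerts-security-eng
# ...add or edit detections under the floqast directory...
git add floqast/
git commit -m "Add Okta admin detections"
git push -u origin okta-admin-alerts-security-eng
# Open a PR against main; the dev Jenkins test job kicks off automatically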
Jenkins Jobs (Build, Test, and Deploy)
Webhooks and stuff are cool, but what’s cooler are the jobs that the webhooks are triggering! Panther provides their own panther_analysis_tool that is used for testing and uploading detections into Panther. We baked a custom AMI that has this tool pre-installed, and then assigned this AMI to the Jenkins agent that runs all of our Panther jobs.
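The exact bake step is an implementation detail, but since panther_analysis_tool is distributed on PyPI, the provisioning script amounts to something like this sketch:

# Install the tool at AMI bake time so every agent has it ready to go
pip3 install panther_analysis_tool
panther_analysis_tool --help    # sanity check that the install succeeded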
The challenge was that different types of resources require different command-line arguments to upload: detections, for example, require the upload command, while schemas require update-custom-schemas. We easily overcame this hurdle by organizing resources by their type (work smart, not hard).
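For illustration, here is a simplified view of that organization, using the two directories that appear in the pipeline snippet above (the rest of the fork's layout is omitted):

floqast/            # custom detections: uploaded with the upload command
floqast-schemas/    # custom log schemas: uploaded with update-custom-schemas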
The panther_analysis_tool is also great because you can require tests for detections within the tool itself via the --minimum-tests flag, which means that we didn't need to build out that logic ourselves! We ended up writing a script called test-or-upload.sh that gets called in the triggered Jenkins jobs (see the code snippet in the previous section). Here are the contents of the script:
#!/bin/bash
# Usage: ./test-or-upload.sh <params.FOLDER> <upload|test>
#   $1 - top-level folder whose subdirectories hold the resources
#   $2 - panther_analysis_tool command to run: "upload" or "test"

# Collect the immediate subdirectories of the target folder
dirlist=$(find "$1" -mindepth 1 -maxdepth 1 -type d)
echo "Subdirectories detected: $dirlist"

uploadOrTest=$2
for dir in $dirlist
do
    echo "Testing/Uploading files in $dir"
    panther_analysis_tool "$uploadOrTest" --path "$dir" --minimum-tests "$MINIMUM_TESTS"
done
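For example, the dev job effectively runs something like the following, where floqast stands in for params.FOLDER and the MINIMUM_TESTS value is illustrative:

export MINIMUM_TESTS=1
./jenkins/test-or-upload.sh floqast test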
With all of that out of the way, you can see that our Jenkins jobs are essentially wrappers around the panther_analysis_tool. All of the functionality to interact with our Panther instance is built into the tool; it's just a matter of using the appropriate command-line arguments.
When a PR is opened, we simply pass the relevant files to the tool with the test argument. When a PR is merged to main, we pass the same files, with some additional logic to make sure that we use the appropriate upload argument for each resource.
It's also important to note that Panther gave us an AWS role that we can assume in order to upload our detections programmatically. With this role, the panther_analysis_tool, and some bash scripting, we were able to create a CI/CD pipeline for Panther that lets us sleep easy at night.
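For completeness, here is a rough sketch of what assuming that role before an upload can look like (the role ARN variable and session name are placeholders, not our actual configuration):

# Exchange the Panther-provided role for temporary credentials
creds=$(aws sts assume-role \
    --role-arn "$PANTHER_UPLOAD_ROLE_ARN" \
    --role-session-name jenkins-panther-upload \
    --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
    --output text)
read -r AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN <<< "$creds"
export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN

# Upload with the assumed credentials in the environment
panther_analysis_tool upload --path floqast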