This new AI-powered component of the Blumira platform is designed to enhance, not replace, human decision making. SOC Auto-Focus helps analysts focus on what matters: the whole picture with deep, rich context. And it’s as easy as clicking a button in the Blumira dashboard.
Artificial Intelligence (AI) in cybersecurity has already demonstrated potential to have a wide and lasting impact on incident detection and analysis. But with so much at stake, it’s vital to move forward deliberately and strategically. For the last several months, Blumira has been designing, developing, and testing AI integration into the platform. Our goal has been to lay the groundwork for an evolution of enhancements that will empower analysts with better information, faster.
This is the second article in our technical AI series. The first article shows how the Blumira team integrated AI into the platform and tested it, while maintaining the highest levels of security. Here, we will talk about lessons learned and plans for future development.
Learning and Evolving
Building AI into the Blumira platform will be a continual process. However, we knew we had to start with the basics and proceed with security in mind. We learned three main lessons in this process:
Start simple, validate early: Our most important lesson is the value of beginning with simple, functional implementations rather than complex architectures. The LLM landscape is evolving rapidly, so starting simple allows teams to deliver value quickly while maintaining flexibility to incorporate new capabilities as they prove worthwhile.
Context management is critical: Effective context management that balances comprehensive information with token limitations proved essential to our success. Our teams iterated extensively on data selection and prompt chaining to optimize this balance.
Security-first development: Building AI systems for a security environment requires careful consideration of data flow, input validation, and potential manipulation vectors. Even seemingly simple architectural decisions like one-way communication models can significantly impact security posture.
Development Challenges and Technical Insights
As the saying goes, smooth seas don’t make good sailors. We’re sharing our challenges and decision making so users can understand how we got to where we are and what we’re mapping out for the future.
Development Challenges
Managing context length limitations: Our approach to context limitations evolved through extensive experimentation. We initially tried excluding certain data fields entirely or implementing broad inclusion rules. Through iterative testing, we discovered that focusing on high-value evidence fields provides the optimal balance between contextual richness and token efficiency.
We also found that adjusting the time windows for related finding searches significantly impacts our token usage. Rather than casting wide temporal nets, we refined our correlation periods to capture the most relevant related events within context limits. This approach maintained analytical depth while making our token usage sustainable.
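To make this concrete, here is a minimal Python sketch of the kind of evidence trimming and time-window correlation described above. The field names, token budget, and correlation window are illustrative assumptions for the sake of example, not Blumira's actual values or implementation.

```python
# Illustrative sketch only: field names, the token estimate, and the
# correlation window are hypothetical, not Blumira's actual implementation.
from datetime import datetime, timedelta

# Hypothetical set of high-value evidence fields to keep in the prompt.
HIGH_VALUE_FIELDS = {"src_ip", "dst_ip", "user", "host", "rule_name", "action"}

MAX_CONTEXT_TOKENS = 8000                 # assumed token budget for evidence context
CORRELATION_WINDOW = timedelta(hours=4)   # narrowed from a broad temporal net


def estimate_tokens(text: str) -> int:
    """Rough token estimate (~4 characters per token)."""
    return len(text) // 4


def trim_evidence(evidence: dict) -> dict:
    """Keep only the fields that add analytical value per token spent."""
    return {k: v for k, v in evidence.items() if k in HIGH_VALUE_FIELDS}


def related_findings(findings: list[dict], anchor_time: datetime) -> list[dict]:
    """Correlate only findings inside the narrowed time window."""
    return [
        f for f in findings
        if abs(f["created_at"] - anchor_time) <= CORRELATION_WINDOW
    ]


def build_context(finding: dict, all_findings: list[dict]) -> list[dict]:
    """Assemble trimmed evidence until the token budget is exhausted."""
    context, used = [], 0
    for related in related_findings(all_findings, finding["created_at"]):
        item = trim_evidence(related["evidence"])
        cost = estimate_tokens(str(item))
        if used + cost > MAX_CONTEXT_TOKENS:
            break
        context.append(item)
        used += cost
    return context
```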
Ensuring consistent output quality: Output consistency was a significant challenge initially. We addressed this through targeted prompt engineering: specifically instructing the LLM to express confidence in its analysis rather than hedging with uncertain language. This simple change dramatically improved the consistency and actionability of our outputs.
Temperature adjustment also played a crucial role in achieving consistency. By fine-tuning this parameter, we found the sweet spot between creative analytical insights and reliable, consistent formatting and reasoning patterns.
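As a rough illustration of these two levers, the sketch below pairs a system prompt that instructs the model to answer confidently and follow a fixed structure with a low temperature setting. The provider client, model name, and exact values are assumptions chosen for the example; they are not Blumira's documented configuration.

```python
# Hypothetical sketch of the consistency measures described above; the
# provider, model name, and temperature value are assumptions, not Blumira's.
from openai import OpenAI

SYSTEM_PROMPT = (
    "You are a security analyst assistant. State your conclusions with "
    "confidence and avoid hedging language such as 'might' or 'possibly'. "
    "Always respond with these sections: Summary, Key Evidence, Recommended Next Steps."
)

client = OpenAI()  # assumes an API key is configured in the environment


def analyze_finding(context: str) -> str:
    """Send the assembled context with consistency-oriented settings."""
    response = client.chat.completions.create(
        model="gpt-4o",       # placeholder model name
        temperature=0.2,      # low temperature favors consistent formatting
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": context},
        ],
    )
    return response.choices[0].message.content
```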
Technical Decisions
Simplicity as a foundation: Our choice of single prompt architecture stemmed from a deliberate focus on simplicity. Rather than implementing complex retrieval-augmented generation (RAG) or extensive fine-tuning, we opted for a single prompt method that we could understand, debug, and iterate on quickly.
This decision proved valuable as we gathered user feedback and refined our approach. The simple architecture allowed us to make rapid adjustments without getting bogged down in complex system dependencies or optimization challenges.
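A hypothetical sketch of what a single-prompt flow can look like: the finding and its related context are serialized into one template and sent in a single call, with no retrieval layer or fine-tuned model in between. The template and helper names are illustrative, not Blumira's implementation.

```python
# Minimal sketch of a single-prompt flow (no retrieval or fine-tuning);
# the template and helper names are illustrative assumptions.
import json

PROMPT_TEMPLATE = """Analyze the following security finding.

Finding:
{finding}

Related findings (same time window):
{related}

Provide a prioritized assessment and recommended response steps."""


def build_single_prompt(finding: dict, related_context: list[dict]) -> str:
    """Everything the model needs goes into one prompt, sent in one call."""
    return PROMPT_TEMPLATE.format(
        finding=json.dumps(finding, indent=2, default=str),
        related=json.dumps(related_context, indent=2, default=str),
    )
```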
Building for evolution: We designed our current architecture with the explicit understanding that it will evolve significantly as we continue developing our capabilities. Starting with a strong, simple foundation has enabled us to validate core concepts and gather meaningful user feedback before adding complexity.
This approach has allowed us to identify which aspects of our system provide the most value and where additional sophistication might be warranted, rather than prematurely optimizing areas that may not need enhancement.
However, our future technical roadmap remains deliberately flexible. We're prioritizing customer feedback to guide development rather than committing to specific architectural directions that may not address user needs or enhance value.
Where do we go from here?
AI work promises to keep our development team busy for the foreseeable future, and beyond. Blumira already has a number of enhancements in the works and on the drawing board. We’re taking a deliberate approach, while at the same time watching the landscape for innovations we can use. Here are a few of the projects we’re working on:
MCP integration for enhanced context: Our team is exploring MCP (Model Context Protocol) integrations as a way to provide additional contextual layers for analysis. Specifically, we're considering how to use supplementary information about different pieces of evidence that could enhance the LLM's understanding and depth.
Continuous testing and prompt evolution: We're expanding our testing processes to achieve even better prompt accuracy and consistency. That includes incorporating the feedback we're collecting from the team and future users. This feedback-driven approach ensures that our technical enhancements align with real-world needs rather than theoretical improvements that may not translate to practical value.
Security for interactive features: The roadmap includes more user flexibility and interactive experiences. To prepare for that eventuality, security measures need to be in place to prevent nefarious manipulation. The Blumira team is developing frameworks for sanitizing and validating inputs that will keep the platform safe while maintaining the analytical quality users expect.
Future security architecture includes prompt sanitization techniques, user input validation, and monitoring systems that detect manipulation attempts in both source data and user interactions. This involves establishing baselines for normal analytical patterns and implementing detection mechanisms for anomalous outputs.
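As a simple illustration of that direction, the sketch below shows basic input sanitization plus a structural check on model outputs. The injection patterns, length limit, and expected section names are assumptions for the example, not Blumira's production controls.

```python
# Illustrative sketch of the kind of input sanitization described; the
# patterns and limits are assumptions, not Blumira's production controls.
import re

MAX_INPUT_CHARS = 2000

# Naive patterns that often appear in prompt-injection attempts.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"you are now", re.IGNORECASE),
    re.compile(r"system prompt", re.IGNORECASE),
]


def sanitize_user_input(text: str) -> str:
    """Length-cap the input and reject obvious manipulation attempts."""
    text = text[:MAX_INPUT_CHARS]
    for pattern in INJECTION_PATTERNS:
        if pattern.search(text):
            raise ValueError("Input rejected: possible prompt manipulation")
    return text


def looks_anomalous(output: str, expected_sections: tuple[str, ...]) -> bool:
    """Flag outputs that drift from the expected analytical structure."""
    return not all(section in output for section in expected_sections)
```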
Join the Conversation
Blumira SOC Auto-Focus is ready to launch, but it’s far from complete. You could easily argue that the work will never be done, because AI capabilities will continue to evolve. However, our current simple architecture is a solid foundation for future innovation. We’ve validated core analytical capabilities and established effective feedback loops, creating a platform that can evolve with both technological advances and user needs.
Thoughtful technical implementation and continuous validation will remain central to our development of Blumira SOC Auto-Focus. But it’s not just about improving performance. It's about enhancing the lives of the real people doing the work.
AI-powered security analysis is an evolving field where technical approaches and architectural decisions are still being established. We welcome discussion and feedback from customers, potential customers, and teams working on similar challenges, because we know that collective wisdom will drive better solutions for everyone in the security and AI communities.
Andy Blyler
Andy Blyler is Director of Software Architecture at Blumira, where he leads the technical development of AI-powered security solutions including SOC Auto-Focus, an intelligent investigation tool designed to combat alert fatigue and accelerate threat response. With nearly two decades of experience in security and data...