October 17, 2025

    AI Integration at Blumira: How We Did It

    Introducing Blumira SOC Auto-Focus: This new AI-powered component of the Blumira platform is designed to enhance, not replace, human decision making. SOC Auto-Focus helps analysts focus on what matters: the whole picture with deep, rich context. And it’s as easy as clicking a button in the Blumira dashboard.

    There’s no doubt about it: effective security relies on finding and stopping risks to your business fast, and artificial intelligence (AI) is becoming a critical component of accelerating cybersecurity technology. With that in mind, it would be malpractice for Blumira to ignore the possibilities for AI to enhance detection and analysis. We’ve completed the first phase of a major AI integration, but this is not just about more and faster automation. It's about building safe and reliable systems that enhance human expertise.

    Unfortunately, a lot of AI solutions promise the world and deliver black-box answers without transparency or clear reasoning you can validate. Some companies believe that protecting proprietary algorithms and intellectual property means keeping everything under wraps. At Blumira we believe in sharing our approach so you can see what went into the process of designing, building, and validating our AI security solution: Blumira SOC Auto-Focus.

    Our guiding principle, as always, is to transform overwhelming volumes of security data into actionable insights that analysts can trust and act upon. We didn’t set out to integrate AI just to say we did. Every decision in the process was weighed against the goal of staying ahead of security threats and making your life easier.

    A user-centric approach to AI

    To understand the Blumira approach to AI integration, we’re sharing our strategies in the following areas:

    Architecture: Learn how we've structured our data pipeline to handle diverse security findings, evidence, and institutional knowledge while balancing strict privacy controls with context-rich analysis.

    Strategy: See our approach to LLM integration, prompt engineering, and structured output generation. Our strategy transforms raw security data into actionable JSON and plain-text insights.

    Quality and security: Understand the validation methodologies, security measures, and testing approaches that ensure our AI outputs are both accurate and safe.

    The goal is to share the technical thinking, architectural decisions, and implementation patterns that have already proven effective in our environment. We’ll start with the technical foundation that makes it all possible.

    The Blumira Technical Foundation

    Data Sources

    Three core data components create the foundation of SOC Auto-Focus, the Blumira AI security analysis system: findings and evidence, playbooks, and context enrichment. The curation and creation of these data sources come from years of expert knowledge developed by our security team. These are not generic or off-the-shelf models.

    Findings and evidence

    Security findings originate from custom-built rules that represent years of iterative development by Blumira Incident Detection Engineers. These rules encode the threat patterns, behavioral anomalies, and security indicators our team has identified through real-world experience. We continuously test and evolve these rules based on emerging threats and lessons learned.

    Each finding comes with evidence: concrete details like IP addresses, process names, and user identifiers that provide factual support for analysis. This is the contextual information that’s used for security assessment.
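    As a rough illustration, a finding paired with its evidence might be modeled along the lines below. The field names are hypothetical, chosen only to mirror the examples above; they are not Blumira's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical shapes for a finding and its evidence; Blumira's real
# schema is not public, so these field names are illustrative only.
@dataclass
class Evidence:
    source_ip: str
    process_name: str
    user_id: str
    raw_log: str

@dataclass
class Finding:
    finding_id: str
    rule_name: str                 # the custom detection rule that fired
    severity: str                  # e.g. "priority 1"
    detected_at: datetime
    evidence: list[Evidence] = field(default_factory=list)
```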

    Playbooks

    Playbooks are the distilled investigative expertise of the Blumira security team. They contain proven methods and analytical approaches, not generic security procedures. Blumira playbooks capture specific reasoning patterns, evidence correlation techniques, and decision frameworks our experts have developed through years of hands-on experience.

    Each Blumira playbook is tailored to the particular type of finding generated by the platform’s custom rules. This ensures that Auto-Focus leverages the most current and relevant investigative approaches.

    Context enrichment

    To understand whether events are isolated incidents or part of larger patterns, Blumira searches for related findings occurring around the same timeframe. This temporal correlation, combined with behavioral baselines and historical data, helps distinguish genuine threats from routine anomalies.
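    A minimal sketch of that temporal correlation, assuming the hypothetical Finding shape above and an in-memory list of findings (a real implementation would query a datastore):

```python
from datetime import timedelta

def related_findings(primary, all_findings, window_hours=24):
    """Return other findings detected within +/- window_hours of the
    primary finding (illustrative helper, not Blumira's actual code)."""
    window = timedelta(hours=window_hours)
    return [
        f for f in all_findings
        if f.finding_id != primary.finding_id
        and abs(f.detected_at - primary.detected_at) <= window
    ]
```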

    Integrating AI into Blumira

    Blumira SOC Auto-Focus is built on an already robust framework that organizes all available data before putting it through intelligent analysis informed by years of hands-on cybersecurity expertise. While the process has been significantly enhanced behind the scenes, Blumira users can still rely on clear insights that provide the information necessary to take appropriate action.

    The Data Pipeline

    Rich context aggregation with LLM-optimized data preparation

    The Blumira data pipeline begins by assembling comprehensive packages for each security finding. This involves gathering the primary finding, temporally-related findings, supporting evidence, and relevant playbooks from the Blumira knowledge base. Rather than analyzing events in isolation, the platform creates detailed analytical contexts that the LLM uses to identify patterns across the environment.

    After event data is aggregated, it’s transformed to maximize analytical effectiveness. Blumira structures diverse data types, including findings, evidence, and playbooks, into formats optimized for LLM processing. Semantic relationships remain intact while the information is organized for progressive analysis through our multi-stage pipeline.
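    To make the idea concrete, a context package could be assembled and serialized for the model roughly as follows. Every helper and field name here is an assumption for illustration, not Blumira's actual pipeline code:

```python
import json

def build_context_package(primary, related, playbook_text):
    """Bundle the primary finding, temporally-related findings, and the
    matching playbook into one LLM-ready structure (illustrative only)."""
    return {
        "primary_finding": {
            "rule": primary.rule_name,
            "severity": primary.severity,
            "detected_at": primary.detected_at.isoformat(),
            "evidence": [vars(e) for e in primary.evidence],
        },
        "related_findings": [
            {"rule": f.rule_name, "detected_at": f.detected_at.isoformat()}
            for f in related
        ],
        "playbook_guidance": playbook_text,  # expert investigative steps as text
    }

# The package is then serialized and embedded into the analysis prompts:
# context_json = json.dumps(build_context_package(primary, related, playbook), indent=2)
```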

    Prompt Engineering and Management

    Progressive analysis with a structured methodology

    Early in development, we used multiple interconnected prompts to handle context limitations while maintaining analytical depth. We developed a three-stage process: initial information synthesis, then pattern identification, then final generation of insights. Each stage produced outputs that became inputs for subsequent prompts, combined with additional context as needed.

    This approach allowed Blumira to process a full rich dataset without exceeding context windows. Early prompts distilled large volumes of information into focused analytical inputs, while later stages generated actionable recommendations and structured outputs.
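    A stripped-down sketch of that staged chaining, assuming a generic call_llm(prompt) helper rather than any specific provider SDK, with prompt text that only gestures at the real prompts:

```python
def call_llm(prompt: str) -> str:
    """Placeholder for whatever LLM client is in use (assumption)."""
    raise NotImplementedError

def run_progressive_analysis(context_json: str) -> str:
    # Stage 1: synthesize the raw context into a focused summary.
    synthesis = call_llm(
        "Summarize the key facts in this security finding context:\n" + context_json
    )
    # Stage 2: identify patterns and correlations in the synthesized view.
    patterns = call_llm(
        "Given this summary, identify related patterns and likely attack behavior:\n"
        + synthesis
    )
    # Stage 3: generate prioritized, structured recommendations.
    return call_llm(
        "Using the summary and patterns below, produce prioritized "
        "recommendations with reasoning:\n" + synthesis + "\n\n" + patterns
    )
```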

    While the LLM provides speed and depth, the prompts are designed by Blumira to incorporate proven investigative methodologies and analytical frameworks. The result is consistent output quality across all security scenarios. The prompts guide the LLM toward established analytical approaches while allowing flexibility for different types of findings and levels of complexity.

    Output Generation

    Structured JSON creates a rich user experience

    The final stage of the Blumira AI-assisted process in SOC Auto-Focus produces comprehensive JSON outputs seamlessly integrated into the React front end. Users have easy access to severity assessments, prioritized recommendations, evidence summaries, and analytical reasoning. Critical information is displayed prominently with supporting details a click away.

    Blumira translates complex analytical patterns into immediately actionable insights. Security analysts receive clear recommendations, investigation priorities, and suggested next steps, enabling them to act quickly. The structured format allows our UI to present information contextually, highlighting urgent items, grouping related findings, and providing clear investigation pathways.
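    Before anything reaches the UI, an output contract like the one sketched below could be enforced. The field names are hypothetical, chosen to mirror what this article describes (severity, recommendations, evidence summary, reasoning):

```python
import json

# Hypothetical contract for the analysis payload consumed by the front end.
REQUIRED_FIELDS = {"severity", "recommendations", "evidence_summary", "reasoning"}

def parse_analysis(raw_output: str) -> dict:
    """Parse the model's JSON output and verify the fields the UI expects
    (illustrative validation, not Blumira's actual code)."""
    data = json.loads(raw_output)
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"analysis output missing fields: {sorted(missing)}")
    return data
```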

    Internally Tested and Validated

    Auto-Focus started as a proof-of-concept by our co-founder and CEO Matt Warner two years ago. Our team of deeply knowledgeable security experts developed and expanded it until it was ready for testing. In the spirit of “eating our own dogfood,” we began with internal testing to detect potential issues and areas for improvement. This rigorous testing included a Slackbot to facilitate rapid iteration and quality control.

    Slackbot makes real-world testing fast and simple

    Validation of Blumira SOC Auto-Focus centered on a Slackbot that served as our testing interface. Team members sent finding IDs directly through Slack and received complete AI analysis in return. This drove rapid iteration and experimentation so the Blumira team could test analytical outputs in real-world scenarios.

    The Slackbot eliminated the usual testing bottlenecks. Instead of waiting for UI features or formal testing environments, our team could immediately evaluate new prompt iterations, test edge cases, and validate analytical quality using actual security findings from the Blumira production environment.
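    A minimal sketch of such a testing bot, assuming the slack_bolt library and a hypothetical analyze_finding(finding_id) wrapper around the analysis pipeline; the slash-command name is invented:

```python
import os
from slack_bolt import App

app = App(token=os.environ["SLACK_BOT_TOKEN"],
          signing_secret=os.environ["SLACK_SIGNING_SECRET"])

def analyze_finding(finding_id: str) -> str:
    """Hypothetical hook that runs the full Auto-Focus pipeline."""
    raise NotImplementedError

# e.g. typing "/autofocus FND-12345" in Slack returns the AI analysis inline.
@app.command("/autofocus")
def handle_autofocus(ack, respond, command):
    ack()
    finding_id = command["text"].strip()
    respond(analyze_finding(finding_id))

if __name__ == "__main__":
    app.start(port=3000)
```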

    Using Slack for feedback collection enabled real-time iteration on both prompt design and data selection. Team members could identify issues with analytical quality, reasoning gaps, or output formatting and provide immediate feedback through the same channel. This allowed us to quickly tweak prompts, adjust data gathering processes, and experiment with different contextual approaches.

    The informal nature of Slack proved particularly valuable for capturing nuanced insights that could have been missed in a formal testing process. Team members shared specific examples of where the analysis succeeded or failed so they could target improvements.

    Quality assurance driven by experienced experts

    Our quality assurance process leverages the deep cybersecurity expertise of Blumira team members who are intimately familiar with the nuances of security analysis. These experts systematically test prompts against a suite of findings that represent the full spectrum of scenarios encountered in production.

    This process doesn’t just rely on automated metrics. It’s a human-driven evaluation process to ensure that our AI outputs meet the analytical standards security experts expect—with a focus on accuracy, reasoning quality, and the actionability of recommendations.

    A critical aspect of our quality assurance involves optimizing the balance between contextual richness and LLM limitations. The Blumira team continuously evaluates both the data that gets analyzed and the specific information exposed to a prompt. This dual optimization maximizes relevant context while staying within token constraints.
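    One simple way to honor that kind of token budget is to keep the primary finding intact and append optional context, already sorted by relevance, until the budget runs out. The rough characters-per-token estimate below is a naive illustration, not Blumira's actual heuristic:

```python
def trim_to_budget(primary_block: str, optional_blocks: list[str],
                   max_tokens: int = 8000, chars_per_token: int = 4) -> str:
    """Keep the primary finding, then add optional context blocks
    (highest relevance first) while a rough token budget remains."""
    budget = max_tokens * chars_per_token - len(primary_block)
    kept = []
    for block in optional_blocks:
        if len(block) > budget:
            break
        kept.append(block)
        budget -= len(block)
    return "\n\n".join([primary_block, *kept])
```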

    It’s a balancing act that requires ongoing refinement as our experts encounter new finding types, discover additional context sources, and identify patterns in analytical quality. The iterative process of adjusting data selection and prompt context is a core Blumira competency that helps us maintain high-quality AI outputs.

    Security and privacy are paramount

    It goes without saying that the first job of any cybersecurity platform is to protect the data and integrity of the environment it serves. Blumira takes this mandate seriously, and we’ve implemented multiple measures to address AI-related threats.

    An emerging challenge is the potential for manipulated data to influence LLM analysis. Systems that use security logs and findings as source data can allow sophisticated attackers to create malicious log entries designed to mislead AI analysis. While this kind of attack presents a higher barrier since an attacker would have to compromise logging systems or inject malicious data upstream, it’s a threat that requires ongoing vigilance.

    The Blumira security approach currently centers on data selectivity and a one-way communication model. We’ve purposely held off on implementing interactive features as we study the evolving sophistication of nefarious prompt injections.

    Selective data transmission

    Blumira uses data selectivity rather than extensive filtering or sanitization. This means carefully choosing which data elements provide analytical value to the LLM and transmitting only the information necessary for effective analysis. This approach minimizes exposure while providing the AI system with sufficient context to generate meaningful insights.

    Blumira maintains control over data selection at the pipeline level, so unnecessary information doesn't enter the LLM processing environment. At the same time, we maintain the contextual richness needed for accurate security assessment.
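    A sketch of that allowlist-style selection, with hypothetical field names standing in for whatever Blumira actually transmits:

```python
# Only fields on this allowlist ever leave the pipeline for LLM analysis;
# everything else stays behind (field names are illustrative).
ALLOWED_EVIDENCE_FIELDS = {"source_ip", "process_name", "user_id", "rule_name"}

def select_for_llm(evidence_record: dict) -> dict:
    """Drop any evidence field not explicitly allowlisted for analysis."""
    return {k: v for k, v in evidence_record.items()
            if k in ALLOWED_EVIDENCE_FIELDS}
```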

    One-way communication model

    Our current system architecture implements a one-way communication model that simplifies security considerations. Users interact with finding IDs and receive generated analysis without direct prompt input. This has eliminated risks that could be introduced through an interactive prompt so we can focus on perfecting core analytical capabilities before introducing additional features. The result has been rapid iteration and validation without the need for users to sanitize their input, and it’s given us valuable insights.
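    In practice, that contract can be as simple as an endpoint whose only user-supplied input is an opaque finding ID, with no field for free-form text. A sketch assuming FastAPI; the route, parameter names, and helper are invented for illustration:

```python
from fastapi import FastAPI, HTTPException

app = FastAPI()

def analyze_finding(finding_id: str):
    """Hypothetical hook into the Auto-Focus analysis pipeline."""
    return None

@app.post("/findings/{finding_id}/auto-focus")
def run_auto_focus(finding_id: str):
    # The caller supplies only a finding ID; there is no free-form prompt
    # field, so the UI offers no direct prompt-injection surface.
    analysis = analyze_finding(finding_id)
    if analysis is None:
        raise HTTPException(status_code=404, detail="finding not found")
    return analysis
```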

    What we learned: Start simple, add complexity

    When developing LLM solutions, it pays to start simple. The AI landscape is evolving rapidly, so something that seems like a cutting-edge decision today may become obsolete or mainstream in months.

    Our initial focus has been on a functional implementation that improves security outcomes for the humans who interact with it. This has allowed us to deliver value quickly while maintaining the flexibility to incorporate new techniques and capabilities as they prove their worth. Simplicity as a design principle prevents teams from biting off more than they can handle. Complexity can then be added where it demonstrably improves outcomes.

    The Blumira AI team has prioritized delivering value to the user in our initial implementations. Architectural sophistication can come later, after a solid foundation has been built. Our next article in this series will look into the future, taking what we’ve learned for the next iterations of SOC Auto-Focus and the Blumira AI vision.

    Try out SOC Auto-Focus now.

    Andy Blyler

    Andy Blyler is Director of Software Architecture at Blumira, where he leads the technical development of AI-powered security solutions including SOC Auto-Focus, an intelligent investigation tool designed to combat alert fatigue and accelerate threat response. With nearly two decades of experience in security and data...
