October 16, 2025

    The Human Side of AI: Why Blumira's Investigation Capabilities Put Partnership First

    I joined Blumira with a simple belief: the best security tools don't just solve problems; they empower the people using them. In my time in cyber insurance, I saw firsthand what happens when organizations feel abandoned by their security stack: skyrocketing costs, unidentified threats, and lost time because teams couldn't get the context they needed to respond effectively.

    Today, I want to share why we're building AI investigation capabilities at Blumira, and more importantly, how we're building them. This isn't just another AI story; it's about fundamentally changing what it means to never go it alone in cybersecurity.

    The Investigation Paralysis Crisis

    The numbers tell the story:
    64% of SOC teams complain about pivoting among too many disparate security tools.

    Let me paint a picture that's probably familiar: the average business now uses as many as 80 distinct security tools, with some organizations running 130 different solutions.

    If you're a technology leader reading this, you're probably nodding—and maybe wincing. Behind these statistics is a more human reality I’ve witnessed repeatedly in my career: talented people spending their time not on strategic work, but on manually correlating information across disconnected systems.

    Each tool operates in its own silo, leaving teams with a fragmented view of their security landscape. As security teams scramble to piece together information from multiple sources, the clock is ticking. Prioritizing threats and understanding the full context of an attack become constant challenges.

    It's like trying to solve a puzzle when each piece is in a different room: technically possible, but you spend more time running around than actually solving the problem.

    Why AI Partnership, Not AI Automation

    The easy mistake is to position AI as the silver bullet, the technology that will finally let you "set it and forget it." But after years building products, I've learned that the most successful technologies don't replace human expertise; they amplify it.

    AI tools offer a force multiplier, freeing senior analysts to focus on higher-order tasks while educating their junior team members. The goal of AI in cybersecurity is to make people more efficient. This isn't just theoretical; forward-thinking organizations are turning to AI tools to enable their teams to operate at their highest potential.

    At Blumira, we're building AI investigation capabilities around three core principles:

    Never Go It Alone

    Empower analysts with AI insights based on the human expertise built into our platform, keeping human judgment at the center of critical decisions

    Focus On What Matters

    Reduce time to act from hours to seconds by getting analysts straight to “Here’s exactly what I need to do now”

    Safe By Design

    Our AI tools don’t create new risk vectors; they weigh the impact of an action before you ever hit “execute”

    The Blumira Difference: Empowerment Over Automation

    "We're not trying to remove humans from the equation-we're trying to make them more capable."

    Our upcoming AI investigation features will provide deep contextual analysis of security findings, along with recommended remediation steps and intelligent prioritization. But here's what makes our approach different: we're not trying to remove humans from the equation; we're trying to make them more capable.

    Investing in training and development prepares teams for the future and maximizes the benefits of AI. When our AI provides context about a security finding, it's not just giving you an answer; it's teaching you why that answer matters for your specific environment.

    This connects directly to our "Never Go It Alone" philosophy. We believe that to empower your organization, we must help you maintain a security-focused culture that equips every employee across your company to make the right decisions about security. Our AI investigation capabilities extend this empowerment down to every analyst, every day.

    Think of it this way: instead of AI making decisions for you, it's like having a senior analyst sitting next to every team member, providing instant context, suggesting next steps, and helping build expertise over time. You're still making the decisions, but you're doing it with comprehensive support.

    Building Trust Through Transparency

    In a past life, I learned that the best gifts are built on intent and choice: let people know you care, and empower them to pick what they want for themselves. In security, I've similarly learned that trust isn't built through promises; it's built through transparency.

    At Blumira, we're extending that transparency to our AI development journey.

    We always start product development by testing internally, taking advantage of our team's decades of experience in security, and the same applies to our AI capabilities. We're working to understand exactly what our customers will experience. And we're working with you: live testing with customers who want to try the earliest forms of Blumira's upcoming experiences and give us feedback. That way, when we tell you about the value of AI-augmented investigation, we're speaking from firsthand experience.

    Being honest about cyber risk can empower senior leadership and executives to make effective data-based decisions. The same principle applies to AI capabilities. We're not promising magic; we're promising partnership, backed by real-world testing and transparent communication about what works and what doesn't.

    The Human Side of Security Intelligence

    Human validation remains essential to interpret nuances and wider context in security scenarios that automated tools may miss. We’re aiming for a balance between automation and human analysis to truly harness this positive force multiplier.

    That's why our AI investigation capabilities are designed to enhance human decision-making, not replace it. When you're facing a security incident at 2 AM, you don't need another tool that gives you more alerts; you need intelligent context and response capabilities that help you understand what's happening and what you should do about it.

    Our AI will help junior analysts perform at senior levels by providing rich environmental context and suggesting remediation steps tailored to your specific infrastructure. It's the logical next step from the expert guidance we've built into our platform, from ready-to-go detections to best-practice guided workflows for triage and response. The final decisions, the ones that matter for your business, remain with your team.

    What Comes Next

    This is part one of our AI development series. In the coming weeks, our Director of Architecture, Andy Blyler, will dive deep into the technical approach we're taking, the specific AI evaluation criteria we're employing, and how we're ensuring our AI remains explainable and trustworthy.

    I wanted to start here, with the why, because I want to build a partnership with the security community and our customers. We're not building AI investigation capabilities because it's trendy or because investors want to hear about AI. We're building them because we believe there will need to be a blend of humans and AI-driven solutions working together, and we want to make that partnership as powerful as possible.

    Never Go It Alone—Even with AI

    The cybersecurity industry has a history of promising silver bullets and leaving teams to figure out implementation on their own. At Blumira, we're taking a different approach. Our AI investigation capabilities aren't the end of the story; they're the beginning.

    Whether you're a CISO looking to amplify your team's capabilities or an IT leader trying to do more with existing resources, the goal is the same: turning technology into a force multiplier for human expertise.

    Because in cybersecurity, as in everything else, you should never have to go it alone.


    Coming next: Our Director of Architecture, Andy Blyler, explores the technical foundations of Blumira's AI investigation capabilities—how we're building transparent, explainable AI that security teams can trust.

    Michael Kellar

    Michael Kellar's background in cyber insurance includes Corvus Insurance, culminating in its acquisition by Travelers, and he has spent the past decade leading product development organizations. Connect with him on LinkedIn to continue the conversation about human-centered security technology.
