How Blumira Uses AI

    Last updated: May 2026

    AI is reshaping security operations, and we believe customers deserve a clear answer to a simple question: what is your security platform doing with my data, and where is AI involved? This page explains where AI shows up in Blumira, how it's built, and how we protect your data along the way.

    If you're evaluating Blumira, completing a vendor security review, or just curious how we use AI internally, this is the place to start.


    Our approach: AI that makes people better

    We aren't building AI to remove humans from security decisions. We're building it to make the humans on your team (and ours!) faster, more confident, and better informed.

    Three principles guide everything we do with AI in the product:

    1. Context over autonomy. AI explains, prioritizes, and recommends. People decide and act.
    2. Transparency over magic. Every AI-inferred insight in the Blumira app is labeled, scored for confidence, and traceable back to the underlying evidence.
    3. Security first, always. AI features are built on the same isolation, access control, and auditability standards as the rest of the Blumira platform — and in some cases, stricter.

    This is the same "Never Go It Alone" philosophy that drives our 24/7 SecOps team. AI is one more way to bring expertise to your environment, not a replacement for it.


    Where AI shows up in Blumira

    SOC Auto-Focus (available today)

    SOC Auto-Focus is an AI-powered investigation companion built into the Blumira platform. When a security finding lands in your dashboard, Auto-Focus:

    • Translates the technical evidence into a plain-language summary of what happened
    • Assigns a criticality level and a recommended response timeframe
    • Includes a confidence rating so you know how sure the model is of its assessment
    • Surfaces guided response steps drawn from Blumira's expert-built playbooks

    Auto-Focus is available in our Automate editions and works immediately on deployment — no training period, no model tuning required.
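
    To make that concrete, here's a rough sketch of the shape an Auto-Focus assessment could take. The field names below are illustrative, not Blumira's actual schema:

        from dataclasses import dataclass, field

        @dataclass
        class AutoFocusAssessment:
            # Illustrative shape only, not Blumira's real schema.
            finding_id: str
            summary: str                # plain-language account of what happened
            criticality: str            # e.g. "critical", "high", "medium"
            respond_within_hours: int   # recommended response timeframe
            confidence: float           # 0.0-1.0: how sure the model is of its assessment
            playbook_steps: list[str] = field(default_factory=list)  # guided response steps
            evidence_refs: list[str] = field(default_factory=list)   # links back to the raw evidence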

    Expanded AI capabilities (rolling out in 2026)

    We're expanding beyond Auto-Focus's per-finding summaries into broader case-level intelligence. New AI capabilities group related findings into cases you can investigate as a unit, score them by likely impact, and help your team focus on the small subset of activity that actually warrants attention. The goal is straightforward: cut alert volume without cutting visibility.

    These capabilities are in private pilot today and will roll out to customers in 2026. Pilot customers are giving us direct feedback on accuracy, prioritization, and how the experience fits into real day-to-day operations.
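
    The pilot internals aren't public yet, but as a deliberately naive sketch of what "group related findings into cases and score them by likely impact" can mean in code (the grouping key and the scoring formula are invented for illustration, and severities are assumed numeric):

        from collections import defaultdict

        def group_into_cases(findings: list[dict]) -> list[dict]:
            # Naive illustration: group findings that share a host into one
            # case, then score each case by likely impact (highest severity,
            # weighted by how many findings the case contains).
            by_host = defaultdict(list)
            for finding in findings:
                by_host[finding["host"]].append(finding)
            cases = []
            for host, members in by_host.items():
                impact = max(f["severity"] for f in members) + 0.1 * len(members)
                cases.append({"host": host, "findings": members, "impact_score": impact})
            # Highest-impact cases first, so analysts see what matters most.
            return sorted(cases, key=lambda c: c["impact_score"], reverse=True)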

    What's next

    We're exploring AI-assisted compliance reporting, as well as natural-language detection authoring and report building. We'll publish more here as those features approach pilot.


    How we handle your data when AI is involved

    This is the section most customers and security reviewers care about. The short version: AI in Blumira runs on your data, but your data never leaves our infrastructure for training and is never blended across tenants.

    Where AI runs

    AI workloads run inside Google Cloud Platform, in dedicated GCP projects isolated from the production Blumira data plane. AI services have no direct access to your raw logs at rest. Instead, they query your data through read-only, auditable interfaces with org-scoped credentials.

    In practice:

    • No customer data is copied or exported into AI infrastructure for storage. Queries run against your data in place.
    • AI services run in ephemeral, constrained environments (Google Cloud Run), separate from the production Kubernetes cluster that handles your day-to-day platform traffic.
    • Every AI access to customer data is logged and auditable.
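
    As an illustration of that pattern, here's what a read-only, org-scoped, logged query could look like. The use of BigQuery, the table, and the field names are assumptions made for this sketch, not a description of our actual pipeline:

        import logging

        from google.cloud import bigquery  # assumption: data is queryable in place via BigQuery

        log = logging.getLogger("ai_data_access")

        def fetch_findings(org_id: str, since: str) -> list[dict]:
            # Runs with read-only, org-scoped credentials; nothing is copied
            # into AI infrastructure, and every access is logged for audit.
            client = bigquery.Client()
            job = client.query(
                "SELECT finding_id, summary, severity "
                "FROM findings WHERE org_id = @org AND created_at >= @since",
                job_config=bigquery.QueryJobConfig(
                    query_parameters=[
                        bigquery.ScalarQueryParameter("org", "STRING", org_id),
                        bigquery.ScalarQueryParameter("since", "TIMESTAMP", since),
                    ]
                ),
            )
            log.info("AI read access: org=%s table=findings since=%s", org_id, since)
            return [dict(row) for row in job.result()]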

    Tenant isolation

    Each customer organization is its own logically isolated, access-controlled boundary in the Blumira platform. AI features inherit that same isolation:

    • All AI-driven queries are scoped to a single organization at the platform layer before any data is read (see the sketch after this list).
    • AI does not blend, correlate, or learn across customers. One customer's findings never influence another customer's experience.
    • The same role-based access controls that govern who can view findings in your account also govern who can see AI-generated content for those findings.
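
    A minimal sketch of the kind of guard the first bullet describes, with invented names, rejecting cross-org queries before any read happens:

        class TenantScopeError(Exception):
            """Raised when a query would cross an organization boundary."""

        def enforce_org_scope(credential_org_id: str, requested_org_id: str) -> None:
            # Reject any AI-driven query whose target org doesn't match the
            # org the caller's credentials were issued for, before any data
            # is read.
            if credential_org_id != requested_org_id:
                raise TenantScopeError(
                    f"credentials scoped to {credential_org_id!r}, "
                    f"query targeted {requested_org_id!r}"
                )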

    Models we use

    Blumira's product AI features today are powered by Google's Gemini family of models, accessed through Google Cloud Vertex AI. We chose Vertex AI specifically because:

    • Customer prompts and outputs are not used to train Google's foundation models under the Vertex AI terms of service.
    • It runs in the same Google Cloud environment as the rest of our platform, keeping data inside a single trust boundary.
    • It supports the access controls, logging, and regional residency commitments we need.
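
    For reference, a minimal Vertex AI call follows the pattern below. This is the public SDK shape, not our integration code; the project, region, model version, and prompt are placeholders:

        import vertexai
        from vertexai.generative_models import GenerativeModel

        # Initialize against a dedicated GCP project so prompts and outputs
        # stay inside the same Google Cloud trust boundary as the platform.
        vertexai.init(project="example-ai-project", location="us-central1")

        model = GenerativeModel("gemini-1.5-pro")
        response = model.generate_content(
            "Summarize this security finding in plain language: <finding evidence>"
        )
        print(response.text)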

    If our model providers change, or we add additional providers, we'll update this page and notify customers in advance through our standard sub-processor change process.

    What we don't do

    To be explicit:

    • We don't train AI models on your data. Not our models, not third-party models. Your logs, findings, and case data are not part of any training corpus.
    • We don't share your data with model providers for any purpose other than processing the request you triggered.
    • We don't use AI to make automated decisions that materially affect your account (billing, retention, access, etc.) without a human in the loop.

    Human oversight

    Every AI-generated artifact in Blumira is built to support a human decision, not replace it.

    • Auto-Focus summaries and recommendations include confidence ratings and link directly to the underlying evidence so analysts can verify before acting.
    • Response actions (host isolation, account disable, blocklisting) are presented as recommendations. A person on your team or ours executes them (sketched after this list).
    • Our 24/7 SecOps team reviews high-severity findings and is available as an escalation path for any finding you're unsure about.
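
    A sketch of that recommend-don't-execute model, with illustrative types and field names:

        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class ResponseRecommendation:
            action: str                        # e.g. "isolate_host", "disable_account"
            target: str                        # the host, account, or address in question
            rationale: str                     # ties back to evidence the analyst can verify
            approved_by: Optional[str] = None  # stays None until a person signs off

        def execute(rec: ResponseRecommendation) -> None:
            # AI proposes; a person disposes. Execution is refused without sign-off.
            if rec.approved_by is None:
                raise PermissionError("AI recommendations require human approval")
            ...  # hand off to response automation once a person has approved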

    Your controls

    • Org admins can disable AI features for their organization (illustrated after this list). AI use is not a precondition for using Blumira.
    • All AI-assisted actions are recorded in the standard Blumira audit log alongside the user or service that took them.
    • Feedback on AI quality can be submitted directly from any AI-generated content in the app and feeds into our model evaluation process.
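
    To illustrate the first two controls, here's a sketch of a per-org AI toggle and an audit record. The settings key, record fields, and log sink are invented for the example:

        import json
        import time

        def ai_features_enabled(org_settings: dict) -> bool:
            # Org admins can switch AI features off entirely; the settings
            # key here is hypothetical.
            return bool(org_settings.get("ai_features_enabled", True))

        def audit_ai_action(actor: str, action: str, org_id: str) -> None:
            # AI-assisted actions land in the same audit trail as human ones.
            record = {"ts": time.time(), "actor": actor, "action": action, "org": org_id}
            print(json.dumps(record))  # stand-in for the real audit log sink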

    Security and compliance

    Blumira's AI features inherit the platform's existing security and compliance posture. For our current certifications, attestations, sub-processor list, and DPA, visit blumira.com/trust or contact your account team.


    Questions?

    If you're a current customer, reach out to your CSM or open a ticket through the Blumira app.

    If you're evaluating Blumira and have questions specifically about how we use AI, contact us at security@blumira.com or request a conversation with our security team through your account contact.

    We'd rather answer the hard questions up front than have you guess at the answers.