
A poorly configured SIEM can result in an overwhelming amount of useless alerts — or worse, a lack of alerts for real security incidents. Neither option is ideal. 

Many new Blumira customers ask, “How do I know where to start when it comes to logging security events? Which log sources should I prioritize?”

Fortunately for Blumira customers, we don’t charge based on data ingestion — so the possibilities are endless. But cost aside, it’s important to glean the maximum amount of value from your SIEM’s log sources. Sending data from sources that you don’t actually need can create significant overhead and waste time for your staff. Plus, logs that contain personally identifiable information (PII) can create compliance obligations that put undue pressure on your architecture.

Here’s a breakdown of the most important log sources to ingest in a SIEM.

What To Log In a SIEM

The scope of what to log in a SIEM can be a contentious issue. There are two schools of thought on this:

Everything. This generally stems from the point of view of ‘you don’t know what you don’t know.’ In this approach, you store everything, then search and filter later. This does provide access to all the possible data that you may need, but also presents challenges related to storage, indexing, and in some cases, transmitting the data. If you use a commercial solution, licensing may also depend on volume. (It doesn’t for Blumira customers, though!) 

Only what you need. Unsurprisingly, this is the polar opposite of the first approach. In this scenario, far fewer technology resources are consumed, but there is a risk that you will miss something. When you’re starting with a new system of log collection and correlation, it is best to start slowly with what you need and then build upon it. This is not a bad way to begin ingesting logs if the majority of the configuration is done in house.

In many cases, the answer to what to log is driven by costs. If this is the case, it is best to consume logs more aggressively from high-value systems, high-risk systems, and those facing external networks. Then you can save in areas that are of lesser importance from a security perspective. Tying specific log ingestion to a standards framework will help you focus on the important log types and event IDs.

If you’re new to the idea and implementation of a SIEM, begin with systems that are already delivering security logs, such as IPS/IDS and endpoint protection. This will allow you to become familiar with the software and configuration options while combining several applications into one log management system. After you’ve defined and followed processes and procedures, you can add other logs such as Windows, DNS, honeypots, application, and database for a deeper look into the infrastructure.

1. Standard Web Applications

Most standard web applications generate logs of one type or another. This includes services such as email servers, web servers, and other (probably) internet-facing services. These logs can provide useful insight into adversaries performing attacks against these hosts or performing reconnaissance. For example:

  • Too many 4XX responses from a web server can indicate failed attempts at exploitation. 4XX codes are client-error responses such as 404 “not found” and 403 “forbidden,” and as such are a common byproduct of people attempting — and failing — to call vulnerable scripts. Searching for a high frequency of 4XX errors may reveal an attack or scan in progress.
  • Too many hits on one specific URL on a web server could indicate that something is being brute forced or enumerated. Of course, repeated calls to the same URL could be normal depending on the website setup, which is why it’s important to apply the context of the environment to determine if this is applicable.
  • Connects and disconnects with no transaction on multiple types of servers can be an indication that someone is probing the network. This can be caused by simple port scanning, scraping protocol banners, probing TLS stacks, and other types of reconnaissance.
  • New services, processes, and ports should not be an unplanned configuration change, especially on servers. Identifying a baseline per host as well as unwanted or unapproved services, processes, and ports can be useful for detecting malicious program installations or activity.
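As an illustration of the 4XX heuristic above, here is a minimal sketch that counts client-error responses per source IP in combined-format access logs and flags noisy sources. The log format, field positions, and threshold are assumptions to adapt to your own environment:

```python
import re
from collections import Counter

# Assumed Apache/Nginx combined-format line; adjust the pattern to your format.
LOG_RE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:\S+) (\S+) \S+" (\d{3})')

def flag_4xx_sources(lines, threshold=10):
    """Count 4XX responses per client IP and return IPs above the threshold."""
    counts = Counter()
    for line in lines:
        match = LOG_RE.match(line)
        if match and match.group(3).startswith("4"):
            counts[match.group(1)] += 1
    return {ip: n for ip, n in counts.items() if n >= threshold}
```

A log shipper could run this over a rolling window; a SIEM rule accomplishes the same thing with a count-by-source aggregation.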

2. Authentication Systems

Authentication systems are an obvious starting point for analysis as usage of user credentials is typically well understood. By analyzing the contents of authentication logs within the context of the environment, there are several tests that can yield immediate results.

  • Users logging into systems at unusual hours may warrant an alert. If users typically only connect during office hours, alerting on activity between 7 p.m. and 7 a.m., for example, is likely to produce suitably few and suitably relevant events for investigation.
  • Repeated login failures above a set threshold could be a sign of a password spraying attempt, brute force attack, misconfigured clients, or simply a user who cannot remember their password.
  • You can track users logging in from unusual or multiple locations. In certain environments, users may have a predictable number of IP addresses that they connect from. Most users also log in from one geographic location at a time. If a user is seen logging in from their normal office IP address, and five minutes later logging in from somewhere that is five time zones away, something might be wrong.
  • Certain types of authentication are inherently insecure, such as logins over HTTP or telnet that are transmitted in cleartext over the network.

3. Databases

Databases are often where all the good information resides: customer records, transaction data, patient information, and any other large data set that would result in chaos if it was lost or breached. You can choose a variety of different monitoring and alerting options, depending on what data is housed in a database and what type of software the database is built on.

  • Access (or attempted access) to sensitive data by users, and by applications connecting to servers not associated with that application, can provide a detailed look at where data is moving to and from.
  • Besides access, you should closely log and monitor other activities such as the copying and deletion of databases and tables, queries, and activity from all privileged users. You can create alerts for any activity outside of the normal baseline to monitor for data exfiltration or malicious access.
  • Other authentication activity such as brute force, multiple unsuccessful logins or queries, or a user performing privilege escalation may point to malicious behavior.
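A per-user baseline of allowed operations plus a failure threshold covers several of the database checks above. This sketch assumes audit events have already been parsed into dicts; the field names and threshold are illustrative:

```python
from collections import Counter

def audit_alerts(events, baseline_ops, failed_threshold=5):
    """Flag repeated login failures and operations outside a user's baseline."""
    alerts, failures = [], Counter()
    for e in events:
        if e["op"] == "LOGIN_FAILED":
            failures[e["user"]] += 1
            if failures[e["user"]] == failed_threshold:
                alerts.append(("brute_force?", e["user"]))
        elif e["op"] not in baseline_ops.get(e["user"], set()):
            alerts.append(("unusual_op", e["user"], e["op"]))
    return alerts
```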

4. DNS

Logging detailed DNS queries and responses can be beneficial for many reasons. The first and most obvious reason is to aid in incident response. 

DNS logs can be especially helpful for tracking down malicious behavior, particularly on endpoints in a DHCP pool. If you receive an alert with a specific IP address, that address may no longer belong to the same endpoint by the time someone investigates. Not only does that waste time, it also gives the malicious program or attacker more time to hide or spread to other machines.

DNS logs are also useful for tracking down other compromised hosts, identifying downloads from malicious websites, and detecting malware that uses Domain Generation Algorithms (DGAs) to mask malicious behavior and evade detection.
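One common heuristic for spotting DGA-style query names is the character entropy of the domain label: algorithmically generated names tend to be long and close to random. A rough sketch follows; the length and entropy thresholds are assumptions, and real detections combine this with other signals:

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Bits of entropy per character of the string."""
    counts = Counter(s)
    return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

def looks_dga(domain, min_len=12, min_entropy=3.5):
    """Heuristic: long, high-entropy first labels may be DGA output."""
    label = domain.split(".")[0]  # assumes bare <label>.<tld> query names
    return len(label) >= min_len and shannon_entropy(label) >= min_entropy
```

Legitimate content-delivery hostnames can also look random, so treat this as an enrichment signal rather than a standalone alert.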

If you’re running Windows OSes, Sysmon is your best bet when it comes to logging this type of traffic.

5. Endpoint Solutions

Endpoint solutions — sometimes called EDR, XDR, etc. — secure and protect endpoints against malware, attacks, and inadvertent data leakage resulting from human error. Earlier anti-virus programs relied solely on signature-based detections, but modern endpoint tools use a combination of signatures, hashes, and behavioral indicators to identify malicious activity. Just as with many other technologies, you must tune an endpoint solution on its own first for any effectiveness to carry over to a logging solution.

6. Intrusion Detection and Prevention Systems (IDS/IPS)

It’s common to include IDS/IPS in the beginning stages of security alerting because they are fairly easy to configure and often have hundreds or thousands of signatures or rule sets enabled. 

Begin by setting these to monitor traffic so you can tune sufficiently. In the beginning, you’ll likely receive a large number of notifications, either from activities in the environment you were unaware of or from outright false positives. To curb these notifications, alert on all high and critical events that aren’t automatically blocked, and treat less critical signatures as informational context during investigations.
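The triage policy described above (page on unblocked high and critical alerts, keep the rest as context) reduces to a simple filter. A sketch with assumed field names:

```python
def triage(alerts):
    """Split IDS/IPS alerts into pageable and informational buckets."""
    page, context = [], []
    for alert in alerts:
        severe = alert["severity"] in ("high", "critical")
        if severe and not alert.get("blocked", False):
            page.append(alert)     # needs human attention now
        else:
            context.append(alert)  # retained for investigations
    return page, context
```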

7. Operating Systems

Collecting endpoint operating system (OS) logs is an absolute must for any type of advanced detection and monitoring. Both Windows and Linux/Unix-based systems have a large amount of local logs that can be very beneficial in many ways.

  • It’s fairly difficult to baseline any operating system, but once you have a baseline, you can track and alert on new, previously unseen processes. A good-quality endpoint solution and standardized settings make this much simpler.
  • For Windows OSes, Sysmon is a very valuable and free tool that can enhance Windows logging and provide connections on activity back to components such as the MITRE ATT&CK framework. For Linux environments, software such as osquery or ossec can offer additional logging and detection such as File Integrity Monitoring (FIM) that is required for PCI-DSS environments. FIM is a technology that monitors files and detects changes that may indicate a cyberattack.
  • Command-line logging is an extremely powerful way to see in-depth information on what is being run on endpoints. On Windows, it can be used to alert on PowerShell tooling such as BloodHound or Mimikatz being run. On Linux, you can monitor for permission changes or scripts running.
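A crude first pass at command-line alerting is substring matching against known-bad tool names and flags. The keyword list below is an illustrative assumption; real detections should also key on behavior, since attackers rename tools:

```python
# Illustrative keywords only; tune and extend for your environment.
SUSPICIOUS = ("mimikatz", "bloodhound", "-enc", "downloadstring", "chmod +s")

def suspicious_cmdline(cmdline):
    """Return any suspicious keywords found in a process command line."""
    lowered = cmdline.lower()
    return [kw for kw in SUSPICIOUS if kw in lowered]
```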

8. Cloud Services

Many cloud-based productivity applications operate on an assumed-trust model and don’t require users to authenticate with the same rigor or frequency as on-premises applications, so there is often less of a high-fidelity connection between the user and the authentication. For organizations that use cloud-based productivity suites such as Google Workspace or Microsoft 365, logging cloud services can help identify patterns of access into those services, such as:

  • How users are authenticating into cloud services, e.g., are they using old clients?
  • How users are interacting with files within cloud services, e.g., who they are sharing files with
  • Password management applications like LastPass can generate useful logs that identify what users are doing, where they’re logging in, and how they’re accessing passwords

9. User Accounts, Groups and Permissions

  • Only use default accounts when necessary, as they are considered shared accounts. Alerting on the use of these accounts when not approved can point to activity of an unknown origin. If you must use default accounts in an environment, you should put them in a highly scrutinized and controlled group of accounts.
  • Only make changes to the domain admin group in AD on rare occasions. While a malicious addition is normally well into the later stages of an attack, it is still vital information to alert on.
  • For any device that offers both local and centralized authentication, treat local authentication, the creation of local users, and additions to local administrative groups as events worth alerting on, especially in an environment where authentication is centralized everywhere.
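For the group-membership point above, Windows records additions to security-enabled groups under event IDs 4728 (global), 4732 (local), and 4756 (universal). A sketch that filters parsed events for privileged groups; the field names and group list are assumptions:

```python
# Group names are examples; extend with any privileged groups you define.
PRIVILEGED_GROUPS = {"Domain Admins", "Administrators", "Enterprise Admins"}

def group_change_alerts(events):
    """Flag additions to privileged groups in parsed Windows security events."""
    return [e for e in events
            if e["event_id"] in (4728, 4732, 4756)
            and e["group"] in PRIVILEGED_GROUPS]
```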

10. Proxies and Firewalls

Firewalls, proxies, and other systems that provide per-connection logging can be very useful, particularly in discovering unexpected outbound traffic and other anomalies that could indicate a threat from an insider or an intrusion that is already in progress:

  • Outbound connections to unexpected services can be an indicator of data exfiltration, lateral movement, or a member of staff who hasn’t read the acceptable use guide properly. Again, context is necessary, but you can often identify connections to cloud-based file storage, instant messaging platforms, email services, and other systems by hostnames and IP addresses. If these connections are not expected, they are probably worthy of further investigation.
  • Matching IP addresses or hostnames against blocklists is a little contentious, because such lists are generally neither complete nor up to date. However, seeing connections to known command-and-control infrastructure for malware, for example, is often a sign that a workstation is infected.
  • Connections of unexpected length or bandwidth can be an indicator that something unusual is happening. For example, running SSH over port 443 can fool many proxies and firewalls; however, an average HTTPS session does not last for six hours, and even a site that is left open for long periods typically uses multiple smaller connections. The SSH connection to a nonstandard port would likely appear long in duration and low in bandwidth.
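The long-duration, low-bandwidth pattern above can be expressed as a simple flow filter. The thresholds and field names are assumptions to tune against your own traffic:

```python
def odd_connections(flows, min_seconds=3600, max_bytes_per_sec=1024):
    """Flag long-lived, low-throughput flows (e.g., SSH tunneled over 443)."""
    flagged = []
    for flow in flows:
        if flow["duration"] >= min_seconds:
            rate = flow["bytes"] / flow["duration"]
            if rate <= max_bytes_per_sec:
                flagged.append(flow)
    return flagged
```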

Blumira Takes The Complexity Out Of SIEM 

Tuning and configuring a SIEM can be a full-time job. Not only is this task time-consuming, but it’s complicated, frustrating and often impossible for non-security experts. 

At Blumira, we believe that traditional SIEMs are too complex to deploy and manage — that’s why we handle the bulk of the work on our end. That includes data parsing, normalization, and fine-tuning. This cuts down the total deployment time to hours, not weeks or months.

Blumira customers automatically get free access to a team of security experts, ready to help you with anything from interpreting an event to understanding what log sources you should ingest. 

Try Blumira for free today.
