Effective Searching

There are two critical reasons end users should learn how to search their SIEM effectively. First, ineffective searching is a risk to your organization: end users can produce inaccurate data, and thus inaccurate investigation results. Second, ineffective searching can degrade the SIEM’s performance, increasing the amount of time analysts need to obtain data while affecting the overall stability of the system.

If a security analyst is asked to perform an investigation and searches incorrectly, a query for malicious traffic may return nothing when in fact there are matches. A compromised user account may be generating significant log data, but your analysts can’t obtain logs for it because they are searching for “jsmith” instead of the case-sensitive “JSmith.” End users can also match on incorrect fields, believing they are finding the correct data when they are not.

Ineffective queries take longer to complete and increase the system resources used by the SIEM. Many queries can be improved to significantly increase their performance, giving the end user a faster response time and leaving the system with more CPU and RAM for other tasks. A simple rule of thumb is to match as early in the query as possible to limit the amount of data the system searches through. Searching for data in particular fields rather than searching all fields also reduces the amount of processing the system must do. Additionally, some SIEM tools allow you to easily check for poorly performing queries. For example, Splunk’s Search Job Inspector can show you not only which queries are taking the longest, but which parts of a query are taking longer than others, allowing you to optimize accordingly.
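
As a quick illustration, consider two Splunk-style searches (the index and field names here are hypothetical) that return the same blocked proxy events for one user. The first scans every index and filters after the fact; the second matches specific fields as early as possible:

Slower: index=* jsmith | search sourcetype=proxy action=blocked
Faster: index=proxy action=blocked user=jsmith

The earlier and more specific the matching, the less data the system has to read and then discard.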

It’s also common for security analysts to get requests for excessive data. In many cases requestors ask for more information than is required so that they can drill down into what they need, instead of having to submit multiple requests for data. For example, there may be a request to pull two months of log data on a user when all that is required is a few days of proxy traffic. These types of requests can be resource-intensive on SIEMs, especially when multiple queries run simultaneously. The impact can be more severe when the queries are scheduled reports: scheduling multiple, large, inefficient queries on a regular basis can consume a significant amount of system resources. With a few inquiries to the requestor, the security analyst may be able to significantly reduce the amount of data searched for.

While ineffective searching is a risk, it’s a simple one to reduce. Training sessions, lunch-and-learns, or workshops can significantly reduce the risks of analysts searching incorrectly and consuming unnecessary system resources. I find a simple three-page deck can provide enough information to assist analysts with searching, highlighting the tool’s case sensitivity, common fields, and sample queries. Nearly all SIEM vendors offer free documentation showing how best to search with their product. A few hours of effort can thus reduce searching risks while optimizing your SIEM environment.

Streamlined Seamless Integrated Security Solutions (SSISS): A SIEM Requirements Gathering Case Study

Implementing a SIEM is a challenge for any organization. As SIEMs can take years to implement, it’s critical to build them appropriately, as even minor changes can result in months of rework. SIEMs can be resource-intensive applications, reading and writing several thousand events per second, so an inadequate amount of RAM or CPU, or slow storage, can mean poor performance and application instability. SIEM environments can have many stakeholders, so changing them can require approvals from many departments and subject matter experts within an organization.

To reduce risks associated with a SIEM implementation, a thorough requirements gathering exercise should be performed. We need to know who is going to be using the tool, how often it will be used, what type of data it will collect, how long the data will be stored for, and more. This will allow solution designers to best architect a solution, determine what product is most suitable for your organization, select an appropriate licensing model, and in general eliminate any assumptions made in the design. We also need to provide the requirements to the vendors in order for them to provide an accurate, high-performing solution.

One of the major things to keep in mind during a requirements gathering exercise is to challenge all requirements. If a requirement significantly raises the cost of the solution or makes it more difficult to maintain, then there should be a justification for it. Common misconceptions with using log data as legal evidence and data retention periods can make solution designers unnecessarily increase the complexity and cost of a SIEM. For example, some organizations mandate that log data must be retained for seven years, but this is typically a requirement for transactional data, not system log data.

Let’s dive into a sample requirements gathering exercise to highlight some of the major items we need to capture in order to design an appropriate solution. SSISS (Streamlined Seamless Integrated Security Solutions, not a real company to my knowledge, but what a name), one of the industry’s leading IT security companies, has hired me to assist one of their clients, Company A. Numerous problems exist within the environment and Company A isn’t sure where to even start.

The following Monday I arrive at Company A and meet with the VP of Security Operations. I learn that the SIEM was implemented a couple of years ago by a provider Company A is no longer doing business with. Complaints about the SIEM range from slow search response times and data loss to application instability and audit deficiencies. The VP has budget to build a new SIEM, and doesn’t want to spend effort fixing the existing one. I ask the VP for all stakeholders and a point of contact within each group that I can work with. The stakeholders are the security operations team, network team, server team, and compliance team.

I first meet with the security operations team. Their main complaints are that the system is unstable, slow, and overall difficult to get work done with. Searches and reports time out, and the system stops working at least once a week, forcing a reboot. As I note the issues down, I ask if there’s anything else the system doesn’t do today. They reply that the data sources they need are there and they can create the required correlation rules; it’s just that the system is slow and unstable. Log data growth of 20% over the next five years seems reasonable, and they don’t currently forward any of the log data to another application, nor do they plan to. They agree to send me an email with their documented requirements.

Next is a session with the network team. They use the SIEM mainly to troubleshoot firewall and router issues, and note the searching is slow. They are mandated to log all of their devices to the SIEM, but there have been no integration issues to date. Log data growth of 20% over the next five years seems reasonable to them as well. I ask that they send me an email documenting what they need the SIEM to do along with the quantity of devices they intend to integrate.

I next meet with the server team. They generally don’t use the SIEM, but they are mandated to configure all servers to log to it. The only output they get from it is a monthly report to verify that all systems are logging. When asked about growth, they by chance give the same number used by the security operations and network teams. They agree to send me a note documenting the type and quantity of servers that they own.

Finally, a session with the compliance team lands me with some large requirements: seven years of log data and encryption of data at rest. I reply that most industry standards mandate one year of log data, and that seven is typically for transactional data, which is not stored in the SIEM. Additionally, there is no personally identifiable or financial data within the logs, and while many SIEMs offer masking capabilities, masking can decrease performance. The compliance team replies that as long as there is no financial data, one year of data suffices, and access to the data needs to be restricted, but not encrypted or masked. They agree to send me their documented requirements.

The next day, I receive responses from all teams. Everything looks okay with the security operations requirements:

I note that since we’ll be using Syslog for many data sources, the logs will be unencrypted from the source to the Processing Layer, but will be encrypted from the Processing Layer to the Analytics Layer. The team confirms this will not be an issue. I also ask if there are any opportunities for filtering data, and discover the team doesn’t need any of the events I’m proposing to drop in the new solution. The team does a quick count and finds that dropping these events will reduce EPS rates by 20%. They also note that their current SIEM doesn’t do any aggregation.

The only additional request I have for them is to compile the distinct count of systems logging to the SIEM, each device type’s average sustained and peak sustained EPS rates, and the total number of correlation rules they intend to have in production. This will help me provide an accurate architecture and storage requirements.

Requirements for the network team look good with the quantity of network devices:

The server team requirements are documented as expected with the total quantity of servers:

Fortunately, the conversation with the compliance team worked in my favour, and the only requirement from them is to retain log data for one year.

The security operations team responds to my request a few days later, and I’m now able to populate my SIEM Architecture Sizing, Storage and Infrastructure Costs Calculator spreadsheet, which I’ll be sending to each vendor.

The first tab lists the data source requirements:

The two most important numbers in the above table are the Total Average Sustained 24h EPS (events per second) rate, which is the total number of events received in a day divided by the number of seconds in a day, and the Peak Sustained EPS rate, which is the maximum EPS processed by the system in a day, typically during business hours. The Total Average Sustained 24h EPS rate will be used to determine storage requirements and, depending on the product licensing model, licensing costs, while the Peak Sustained EPS rate will be used to size an appropriate architecture. One of the biggest mistakes many SIEM consultants commit is using the sustained EPS rate to size an architecture. This results in the system being undersized during peak hours. For example, proxy traffic at night can be a mere fraction of what it is during the day. Thus, an architecture designed using the sustained rate will cause the system to slow down and cache data during the day, when it’s dealing with 10,000 peak EPS rather than the sustained 1,500 EPS it was designed for.
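
To make the distinction concrete, here’s a minimal sketch in Python using the same example numbers as above (the daily event count is hypothetical):

# Average Sustained 24h EPS: total events received in a day divided by
# the number of seconds in a day (86,400).
events_per_day = 129_600_000            # hypothetical daily event count
avg_sustained_eps = events_per_day / 86_400
print(avg_sustained_eps)                # 1500.0

# Peak Sustained EPS is measured, not derived: the highest rate the
# system sustains during the day, e.g. 10,000 EPS during business hours.
peak_sustained_eps = 10_000
print(peak_sustained_eps / avg_sustained_eps)   # ~6.7x the average rate

An architecture sized to the 1,500 EPS average would face nearly seven times that load at peak.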

The second table lists the functional requirements (what the system must do):

The third table lists the Environment Parameters, which are other design requirements that need to be built into the architecture.

The first item is the Processing Layer Forwarding Factor, which is how many copies of each event the Connectors/Collectors/Forwarders will forward to the Analytics Layer. Many SIEM products make the solution highly available by forwarding a copy of each event to two destinations. If I want a highly available solution, I’ll need to enter a value of two or greater. The second item is the Analytics Layer Replication Factor, which indicates how many copies of each event will be copied to another server for high availability. Given that these two items can vary per SIEM product, I’m going to let the vendor enter these numbers.

The third item is the Analytics Layer Forwarding Factor, which indicates how many systems the Analytics Layer will need to forward to. As confirmed by the Security Operations team, we will not be forwarding data from the SIEM to another system. This is a critical design consideration, as missing this requirement can cause the architecture to be severely undersized.

The fourth item, Filtering Benefit, will reduce the amount of data sent from the Processing Layer to the Analytics layer by the entered percentage. I’m going to enter a value of 20%.

The fifth item, Aggregation Benefit, depends on whether the SIEM product can aggregate. I’ll leave that value for the vendor as well.

The Processing Layer Spike Buffer and Analytics Layer Spike Buffer are designed to prevent caches from being formed when there are surges of log data. Why this is an important consideration is detailed in the article A SIEM Odyssey: How Albert Einstein Would Have Designed Your SIEM Architecture. I’m going to use a value of 25% for the Processing Layer and 15% for the Analytics Layer.

Finally, as discussed with many Company A teams, log data growth of 20% will be assumed.

After all the parameters are input, I get the following table from the Architecture Requirements tab. The two key values are the Total Processing Layer EPS Requirement and the Total Analytics Layer EPS Requirement. The SIEM architecture will need to be sized based on these numbers. However, these values may change per vendor depending on how their product works.
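
As a rough sketch of the arithmetic behind those two totals (Python; these formulas are my own generic approximation of such a calculator, not any vendor’s):

def eps_requirements(peak_eps, growth=0.20, forwarding_factor=1,
                     replication_factor=1, filtering_benefit=0.20,
                     aggregation_benefit=0.0, proc_spike_buffer=0.25,
                     analytics_spike_buffer=0.15):
    # Apply the assumed growth over the planning horizon first.
    grown = peak_eps * (1 + growth)
    # Processing Layer must absorb the grown peak plus its spike buffer.
    processing = grown * (1 + proc_spike_buffer)
    # Filtering and aggregation reduce what reaches the Analytics Layer;
    # forwarding and replication multiply it for high availability.
    after_drop = grown * (1 - filtering_benefit) * (1 - aggregation_benefit)
    analytics = (after_drop * forwarding_factor * replication_factor
                 * (1 + analytics_spike_buffer))
    return round(processing), round(analytics)

# With a hypothetical 10,000 peak EPS and the defaults above:
print(eps_requirements(10_000))                        # (15000, 11040)
# A dual-feed (forwarding factor of 2) doubles the Analytics requirement:
print(eps_requirements(10_000, forwarding_factor=2))   # (15000, 22080)

With a hypothetical 10,000 peak EPS, the defaults land in the same ballpark as the vendor responses below.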

I’m now ready to send the SIEM Architecture Sizing, Storage and Infrastructure Costs Calculator spreadsheet to the three shortlisted vendors.

While we should be focused on meeting Company A’s requirements, we shouldn’t forget to be equally focused on meeting the vendors’ requirements. We need to ensure the proper system requirements are met, including CPU, RAM, and storage. Failure to meet these requirements can result in numerous complications, including slow search response times, application instability, data loss, operational nightmares, and ultimately increased risk to the organization.

The first vendor to respond is Vendor 1. Their SIEM product doesn’t structure logs during ingestion and can’t aggregate data, but retaining the logs in raw format keeps the average message size low at 700 bytes. Their product provides high availability by replicating data across multiple Analytics Layer servers. For the non-HA solution, the Processing Layer will need to process 15,000 EPS, while the Analytics Layer will process 11,000 EPS. The average event size of 700 bytes produces offline storage costs of $21,300/year, while the online storage will be provided by the local disks on the server. For the HA solution, the Processing Layer requirement stays the same, but the Analytics Layer requirement jumps to 22,000 EPS, and storage costs double to $42,600/year.
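
As a rough sketch of how event size drives storage volume (Python; this computes raw daily volume only, while the quoted dollar figures presumably also reflect compression and a per-GB price we aren’t shown):

SECONDS_PER_DAY = 86_400

def raw_gb_per_day(eps, avg_event_bytes):
    # EPS x bytes/event x seconds/day, expressed in GB.
    return eps * avg_event_bytes * SECONDS_PER_DAY / 1e9

daily = raw_gb_per_day(11_000, 700)     # Vendor 1's non-HA Analytics Layer
print(round(daily))                     # ~665 GB/day before compression
print(round(daily * 365 / 1_000))       # ~243 TB/year of raw retention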

Vendor 1 recommends four servers in the Processing Layer, which is more than enough to handle the anticipated 15,000 EPS. This should prevent any caching or log data delays when there’s a surge of log data. For a highly available Processing Layer, they recommend doubling the number of Processing Layer servers, noting that even without doubling, if one of the four servers is unavailable, the remaining three should still be sufficient to handle the 15,000 EPS. They also note a load balancer can be used in front of the Processor servers to balance traffic and provide high availability. The 500GB of local storage on each server should provide 3.5 days of cached data in the event the Analytics Layer is unavailable, and can be bumped up if desired.

For the Analytics Layer, Vendor 1 recommends a large physical server with 48 cores, 256GB of RAM, and highly recommends a combination of solid state and local hard disks for the storage medium. Should all the server requirements be met, they have no concerns with meeting the search speed and correlation rule requirements. And to meet the high availability requirement, a second server can be added, and the primary Analytics server will replicate a copy of each event to the secondary server. Vendor 1 can also provide the entire solution as a cloud-based service.

The licensing model for Vendor 1 is based solely on stored GB per day. You can add as many servers or end users as needed without any additional costs.

Vendor 2 responds next with a SIEM product that structures data as it’s ingested and can provide significant aggregation benefits, but produces an average event size of 2,000 bytes. The product provides high availability by having the Processing Layer send a copy of each event to a second destination, commonly known as “dual-feeding.” For the non-HA environment, the Processing Layer will need to process 15,000 EPS, but given the product’s aggregation functionality, the Analytics Layer will only need to process 5,500 EPS. However, the average event size of 2,000 bytes bumps the offline storage costs up to $30,400/year. For the HA solution, the Processing Layer requirement doubles to 30,000 EPS, the Analytics Layer requirement to 11,000 EPS, and storage costs to $60,800/year.

For the non-HA solution, Vendor 2 claims four servers in the Processing Layer will be more than adequate to handle the expected 15,000 EPS, and can be augmented with a load balancer and additional servers for better performance and high availability.

The Analytics Layer is a single server with significant resources. Vendor 2 claims it should provide all required functionality and as well provide fast search response times for end users. To make the solution highly available we can simply add another server and configure the Processing Layer to “dual-feed.”

Vendor 2’s licensing model is a combination of CPU cores, peak EPS rates, and the number of end users logged in simultaneously.

Last in line is Vendor 3, who provides an appliance-only SIEM solution that structures data as it’s ingested, can provide aggregation benefits, and produces the highest event size of 2,300 bytes due to the large field set the product uses. Like Vendor 2, they anticipate an aggregation benefit of 50%, which produces a non-HA Processing Layer requirement of 15,000 EPS and an Analytics Layer requirement of 5,500 EPS, but the increased message size bumps offline storage costs up to $35,000/year. For HA, the values can simply be doubled.

Vendor 3 proposes only two servers in the Processing Layer, which they are certain can handle the proposed EPS rates, and which can be scaled horizontally for high availability.

The Analytics layer is a single appliance the vendor claims can meet all the searching, reporting and analytics requirements. The high availability architecture is similar to Vendor 2, where a second Analytics Layer server can be added and the Processing Layer can be configured to “dual-feed.”

The licensing model is simply based on the average daily EPS rate on the Analytics Layer. Vendor 3 supports the entire appliance, from the hardware and OS to the application, and issues patches for all of it. Patches and upgrades seem simple: the vendor provides single files that can be uploaded at the click of a button.

The appliance-based solution sticks out in my mind, given the statements the VP of SecOps made about the revamping of the server teams. Previously, server issues could take months to resolve, and patching was practically non-existent due to the shortage of resources within the team. Additionally, there have been cases where the server resources promised were not delivered.

While I haven’t discussed cloud options with the VP of SecOps, I’m going to follow up with him to see if this fits into his roadmap and verify that sufficient bandwidth is available to an external site. Vendor 1’s track record of managed cloud SIEM environments is also impressive and will likely be the quickest to deploy. I’m also going to confirm that there are no other stakeholders we missed, such as data scientists, to ensure the solution can handle any heavy computations if necessary.

All solutions appear to meet Company A’s requirements, so I’ve got some more work to do to see which will be the best fit. I like Vendor 1’s simple licensing model, architecture, and low storage costs, and think the cloud solution may be the best bet for Company A. I like Vendor 2’s blazing fast searches, as nothing frustrated me more as an analyst than slow search response times, which can make a simple investigation take hours. However, I’m confused by their licensing model, and I’ll have to follow up with them to understand it better. I like Vendor 3’s solution as well, similar to Vendor 2 but appliance-based with a simpler licensing model, though I’m not sure two servers in the Processing Layer will suffice.

When I look at the operational costs of all vendors, Vendor 1 produces the lowest storage costs. Vendor 3 produces the lowest support cost, but that’s due to the two servers in the Processing Layer compared to the four for the other vendors, which again I’m not convinced will be sufficient.

As you can see, selecting a SIEM product for your organization can be a complex task. There are many high-quality SIEM products on the market, but depending on your organization and how it operates, one product may be more suitable than another, and the cost can differ significantly. Details as small as how the solution processes your required data sources or the number of parsers it supports can make one solution drastically more feasible than the others.

A thorough requirements gathering exercise can reduce the risks associated with a SIEM implementation. It will help your organization select a product that maps best to your requirements. It will allow vendors to provide an accurate, high-performing solution. Ultimately, it will pave the way for an efficient and effective solution that lowers implementation and operational costs, reduces waste and rework, and provides end users with a tool that increases your organization’s security posture while providing business value.

What SIEM is and how it differs from other security tools

Now that we understand what log data is, let’s discuss the technology that will allow your organization to collect and use it.

Security Information and Event Management is a technology that will process log data from your various systems, analyze it, make it available for searching, and store it. SIEM itself is a combination of two earlier abbreviations: Security Information Management (SIM) and Security Event Management (SEM). SIM is focused on the collection of log data for investigative and compliance purposes. SEM is focused on alerting and analytics: threat detection, pattern anomalies, and correlating different data sources.

SIEM tools can vary in architecture, but generally have two layers: A Processing Layer and an Analytics Layer. The Processing Layer is where data is structured, aggregated, and forwarded to the Analytics Layer. The Analytics Layer is where data is stored, made available for searching, and where security analytics is performed.

Using the above diagram as an example, the data sources are your various systems that produce log data and send (push) it to your Processing Layer, or that your Processing Layer reaches out to (pulls from) via a database or API call. Depending on the SIEM product, the Processing Layer will structure the log data, normalizing it into a standard format, aggregate it by combining similar events into one, or may simply add an index to it and forward it to the Analytics Layer. The Processing Layer is strictly used for processing; the only data typically retained there are caches created when the Analytics Layer is unavailable.
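
As a minimal, hypothetical sketch of what that structuring can look like (Python; the input format and the target schema are invented for illustration):

import re

# A raw event similar to the samples shown later in this section.
raw = "May 1 2018 1:00PM, IP=10.0.0.5, User=Bob, Message=login failure"

def normalize(event):
    # Pull key=value pairs out of the raw text ...
    fields = dict(re.findall(r"(\w+)=([^,]+)", event))
    # ... and map them into a standard schema the Analytics Layer expects.
    return {
        "timestamp": event.split(",", 1)[0],
        "src_ip": fields.get("IP", ""),
        "user": fields.get("User", ""),
        "action": fields.get("Message", ""),
    }

print(normalize(raw))
# {'timestamp': 'May 1 2018 1:00PM', 'src_ip': '10.0.0.5',
#  'user': 'Bob', 'action': 'login failure'}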

The Analytics Layer is where end users will search for data, create reports and use cases. Depending on the SIEM product, it may structure log data, and may act as a long-term retention repository.

While SIEM is defined as a security application, it differs significantly from your other security tools. Your SIEM will process log data, while your proxy, IDS/IPS, and some malware detection tools will typically process network traffic. Packets going over your network use the same protocols, so you don’t need to customize your firewall or IPS to detect TCP traffic. Your SIEM will need to process log data in various formats, many of which may not be supported by your SIEM vendor.

Your IDS/IPS and malware tools provide you with a list of signatures that are automatically updated on a regular basis. Some IDS/IPS tools allow you to implement custom signatures, but for the most part your analysts won’t have to write custom signatures for known vulnerabilities, exploits, and attacks. Your SIEM staff will need to create custom use cases and update them regularly, as you’re unlikely to use much of your SIEM vendor’s default content.

SIEM vendors support many log sources, but your engineers will need to ensure the right parsers are being used, update them regularly, and write any that are not supported by your SIEM vendor. This is in stark contrast to your network devices and other security tools that only have to work with limited protocols such as TCP/IP, HTTP, and HTTPS.

Staff will be logging into your proxy, firewall, IDS/IPS, and malware tools often, but it will mostly be for administrative purposes. Your SIEM will have many end users, ranging from admins to users searching for data. In large environments, it’s common to have several users searching for data simultaneously. For MSPs, your customers may be logging in to search for data as well.

While most of your other security tools can block a malicious host from egressing your network or block users going to an uncategorized site, SIEMs don’t have the capability to block.

The following table summarizes the differences between SIEM and your other security products.

As you can see from the above table, a SIEM differs significantly from your black-boxed IPS or malware tools. While it may seem to be simply a log aggregator, a SIEM is a complex tool that needs significant customization. The environment can have many stakeholders, from security analysts to compliance and access management teams. Ultimately, it needs to be implemented, operated, and maintained differently.

What’s the big deal with log data?

So what’s the big deal with all this log data, and why on earth should I spend a large chunk of my budget to collect it? Aren’t the other security tools I have good enough? What exactly is in all this log data, anyway?

Log data is a form of one of today’s most valuable assets: data. Google, Twitter, and Facebook collect enough data on people to detect flu outbreaks faster than medical professionals can. GPS data gave a software company the opportunity to become the world’s largest taxi service without owning a single taxi. A computer algorithm can recommend a movie you’d like to watch and spare you from having to read the reviews of movie critics. Amazon can tell you what book you’d like to read next or what household products you may be running low on.

In the context of cyber security, log data contains records of activity from your various IT systems. These records can help you understand what goes on inside your network. They can show you which user accounts are being used. They can show you which users are consistently visiting blocked websites. They can show you the suspicious files being blocked by your endpoint protection application. They can highlight suspicious processes running on your servers. They can tell you which exploits your web servers are vulnerable to and if anyone is trying to attack them. Ultimately, they can uncover activity in your network that is adding risk to your organization.

Log data is typically output to a file or database, where it was traditionally used for troubleshooting purposes. If someone couldn’t log into a particular application, the system admins would check the log files to see if they could find out why. If a customer application was down, the support team would check the log files to see if they could find out the cause of the crash.

As the amount of log data grew, many saw that the files sitting on their servers contained invaluable data. Many applications were born to manage all of this data, helping organizations search through it and detect issues before they became outages. In the early 2000s, some programmers with a security mindset thought of creating an application that would act as a centralized repository of log data for security investigators, and that could alert in real time when particular values or suspicious patterns were detected in the log data. The result was the birth of SIEM: Security Information and Event Management.

Let’s take a quick peek at some log data. Here’s a small sample of authentication activity: a user failing to log in, then successfully logging into their workstation.

-May 1 2018 1:00PM, IP=, User=Bob, Message=login failure
-May 1 2018 1:01PM, IP=, User=Bob, Message=login failure
-May 1 2018 1:02PM, IP=, User=Bob, Message=login success

Most log files will at minimum answer who, what, when, where, why, and how. Given the advent of SIEMs, most vendors now provide detailed logging for their applications, and some even allow you to customize what is output.

Here you can see a couple of punctual users logging into their company network in the morning, generating VPN login data:

-May 1 2018 8:50AM, IP=, User=John, message=VPN Login Success
-May 1 2018 8:54AM, IP=, User=Bob, message=VPN Login Success

Log files can also be specific to an application. Here we have some startup activity on the billing server:

-May 1 2018 9:54AM, hostname=billingserver01, message:NOTICE: Application starting
-May 1 2018 9:55AM, hostname=billingserver01, message:NOTICE: Running startup scripts

That’s great, you may think, but why should you devote resources to collect and manage this data? Let’s expand the above entries and see what the big deal is.

Using the authentication activity again:

-May 1 2018 1:00PM, IP=, User=asmith, Message=login failure
-May 1 2018 1:01PM, IP=, User=bsmith, Message=login failure
-May 1 2018 1:02PM, IP=, User=csmith, Message=login failure
-May 1 2018 1:03PM, IP=, User=dsmith, Message=login failure
-May 1 2018 1:04PM, IP=, User=esmith, Message=login failure
-May 1 2018 1:05PM, IP=, User=fsmith, Message=login failure
-May 1 2018 1:06PM, IP=, User=gsmith, Message=login failure
-May 1 2018 1:07PM, IP=, User=hsmith, Message=login failure

These log entries become interesting now that someone is trying to log in using an incremental variation of “smith.” This small story could be many things: a developer testing something, a script running in the background, or someone trying to guess a username to gain unauthorized access to the server.
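
A correlation rule for this pattern can be as simple as counting distinct usernames failing from the same source within a short window. Here’s a minimal sketch (Python; the threshold, window, and field layout are all illustrative):

from collections import defaultdict
from datetime import timedelta

def detect_username_guessing(events, threshold=5, window=timedelta(minutes=10)):
    """events: time-sorted (timestamp, src_ip, user, outcome) tuples."""
    recent = defaultdict(list)            # src_ip -> [(timestamp, user), ...]
    alerts = []
    for ts, src_ip, user, outcome in events:
        if outcome != "login failure":
            continue
        bucket = recent[src_ip]
        bucket.append((ts, user))
        # Drop failures that fell out of the sliding window.
        bucket[:] = [(t, u) for t, u in bucket if ts - t <= window]
        distinct_users = {u for _, u in bucket}
        if len(distinct_users) >= threshold:
            alerts.append((src_ip, ts, sorted(distinct_users)))
    return alerts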

Let’s take a look at the VPN log again:

-May 1 2018 8:50AM, IP=, User=Bob, message=VPN Login Success
-May 8 2018 8:55AM, IP=, User=Bob, message=VPN Login Success
-May 15 2018 8:52AM, IP=, User=Bob, message=VPN Login Success
-May 22 2018 8:59AM, IP=, User=Bob, message=VPN Login Success
-May 29 2018 8:44AM, IP=, User=Bob, message=VPN Login Success
-May 29 2018 9:30PM, IP=, User=Bob, message=VPN Login Success

Nothing unusual about Bob being his punctual self logging into work, except that “he” logged in from Bulgaria at about 9:30PM on May 29. Scenarios like this could be Bob on a business trip, or not Bob at all.

Finally, let’s take a look at some file executions in a log file. Here is a sample system updating itself, but for some reason the last file executed doesn’t seem to be a standard update file, which could be indicative of a malicious file being executed.

-May 4 2018 1:10AM, hostname=billingserver01, msg=file “update_01.exe” executed
-May 4 2018 1:13AM, hostname=billingserver01, msg=file “update_02.exe” executed
-May 4 2018 1:15AM, hostname=billingserver01, msg=file “update_03.exe” executed
-May 4 2018 1:50AM, hostname=billingserver01, msg=file “A2.exe” executed
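
One simple way to surface the odd file out is to alert on executions that don’t match the expected naming pattern. A hypothetical sketch (Python; the update-file pattern is invented for this example):

import re

UPDATE_PATTERN = re.compile(r"^update_\d+\.exe$")

executions = ["update_01.exe", "update_02.exe", "update_03.exe", "A2.exe"]
suspicious = [name for name in executions if not UPDATE_PATTERN.match(name)]
print(suspicious)   # ['A2.exe']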

As you can see, log data contains invaluable information that can help your organization investigate suspicious activity and detect attacks in real time. Log data can indicate issues brewing in your systems that can be caught before an outage or breach occurs. SIEM is a technology that centralizes log data, makes it available for searching, allows staff to alert on suspicious activity, and ultimately enhances the efficiency and effectiveness of your organization’s security operations.

If Milton Friedman Created Your SIEM Team

When you mix an economist with the Godfather, you get an offer you can’t understand. But when you mix the philosophy of a famous economist with your SIEM team, you can create a high-performing team that continuously improves the environment, plans accordingly, creates better use cases, and ultimately reduces the probability of your phone ringing on a Friday afternoon for a SIEM issue.

Milton Friedman was one of the 20th century’s most influential economists. Without going into detail or starting a debate on economic policy, he argued that a single owner takes better care of something than multiple or unclear owners do. The single owner has a direct interest in its value and will maintain it better than an entity that doesn’t. Thus his famous quote:

“When everybody owns something, nobody owns it, and nobody has a direct interest in maintaining or improving its condition.”
– Milton Friedman

A SIEM is likely one of your more complicated security products to manage, and unlike the black-boxed security applications your vendors manage for you, it needs extensive customization. Not only do you need to manage the content and use cases, you need to manage the data feeds, ensure data is parsing correctly, troubleshoot issues with the application, support SIEM end users, and plan for growth. All this effort requires input from various teams within your organization. Given the multiple teams involved, it’s critical to establish accountability and know who is responsible for which part of the environment.

SIEM Environment Requirements

The first requirement of any SIEM solution is clear, single ownership: an entity that has a direct interest in improving and maintaining the overall SIEM environment, and is ultimately accountable for its entire operation. Without clear ownership, staff and end users will be discouraged from escalating issues. Teams will have no dispute mechanism, and instead of resolving issues, they will point the finger at each other. Those issues will be swept under the rug, resulting in a major outage or security issue down the road for leadership to deal with. Work will not be distributed appropriately, and overworked, highly skilled staff will leave, taking valuable knowledge and training investments with them. Relationships between the teams will be strained, and ultimately entropy will overrun your environment, requiring significant investment to turn it around.

The second requirement of a SIEM solution is a healthy, teamwork-oriented environment. Given that many teams are involved in the implementation and operation of your organization’s SIEM, positive and open communication between the teams is required for issues to be raised, work to be assigned to the appropriate teams, and knowledge to be shared. Healthy teams raise pertinent issues to leadership and resolve them more quickly than teams that don’t. Healthy teams share valuable knowledge and train each other. All of this contributes to a work environment that retains staff and attracts new talent to the team.

The third requirement of a SIEM solution is a strong skillset. SIEM environments are complicated, and you’ll need many skills to manage them, from architecture and design planning, parser development, and rule logic development to the social skills required to obtain and maintain data feeds from other teams. Before making investments in your SIEM skillset, the first two requirements should be met, or else you risk losing highly skilled staff who are hard to find and retain.

The fourth requirement of a SIEM solution is documented roles and responsibilities. Many mistake this for the first requirement, but a RACI, for example, will not be followed or enforced if the first three requirements are not met. If your staff don’t have the proper skillset, one or two employees may end up doing everyone else’s work, and leave when they burn out. If your teams communicate poorly with each other, issues may go unresolved and unnoticed by leadership, leading to an outage down the road.

Where practical, entire SIEM teams should be under one VP or line of business. Having one VP accountable for both the implementation and operation of your SIEM gives that VP an incentive to ensure the solution isn’t rushed into production and has adequate resources for operations. A single VP will have more incentive to ensure the health of the SIEM environment than in an organization that makes one VP accountable for the implementation only and another VP for the support of it. Such a split can incentivize the implementing VP to get it in as quickly and cheaply as possible and leave the supporting VP with a mess. Given that SIEMs can take years to fully implement, this should be avoided at all costs. The single VP also acts as a single escalation point and can’t deflect an issue to another VP or line of business. When there are two VPs and the roles and responsibilities aren’t clear, disputes can arise or the issue can be ignored. Again, it’s ideal to have your entire SIEM environment under a single VP, but in organizations with a good working environment, different parts owned by different VPs or lines of business can work out well. There are also some roles and responsibilities, such as server and storage administration, that commonly sit outside your security organization.

RACI Matrix Overview

One of the industry’s most common roles and responsibilities documents is a RACI Matrix, which stands for Responsible, Accountable, Consulted, and Informed. The goal of a RACI is to list all stakeholders involved in the solution and the required tasks, and then assign one of those four values to one or more stakeholders for each task.

While a RACI is designed to document roles and responsibilities, it has another valuable benefit: quantifying work efforts. Once you see all the various tasks involved in your SIEM environment, you can see how much work effort the various stakeholders are assigned. For example, if Engineering is responsible for Parser Management, and they spend 20 hours per week maintaining the 40 custom parsers, they can justify the half of an FTE they’re requesting.

It’s easy for a SIEM RACI to span several hundred lines given the number of tasks and teams involved, so I recommend keeping it as high level as possible. The objective should be to assign tasks to the teams, and then leave the teams responsible for figuring out how the work is managed. This avoids the SIEM Owner having to resolve disputes within teams. The SIEM Owner should have a single point of contact within each of the teams to work with directly.

A SIEM environment should have at minimum an overall RACI that defines the roles and responsibilities of all stakeholders. Additionally, each team may want to create an internal RACI that clarifies who within the team is doing what. This is optional, but highly recommended, as it can help employees understand their tasks, assist management in understanding the required tasks and work efforts, and, most importantly, establish accountability. For example, if you have 100 correlation rules and leave it up to “the team” to manage them, you may find that the task of keeping the rules relevant is being ignored. When you break up the rules, with the first 40 “owned” by Bill, the next 40 “owned” by Bob, and the final 20 “owned” by Joe, who also gets to own reporting, you may find rule updates happening more frequently. There is accountability, and you can follow up with Bill, Bob, and Joe to check the status of the tasks. If there isn’t progress, you can narrow down the issue further, whether it’s a skillset gap or work overload, and then coach the employee accordingly.

Many argue that assigning work to an individual rather than a team introduces a skillset gap when that employee leaves. The advantages of assigning it to an individual are a better understanding of the task via specialization and repetition, better documentation of the task as a result of that understanding, and ultimately a position for the individual to improve the condition of the SIEM relating to the task, for example correlation rule updates. Having a group manage something that is not well understood leads to the team ignoring the task, something it can do when no one is accountable for it. A group that doesn’t understand the task cannot document it properly or improve its condition. There’s nothing more frustrating than working on something you don’t understand.

An overall RACI is a requirement for any SIEM environment, but as all organizations are different, how a team manages tasks within itself should be at the discretion of leadership.


We’ll walk through a sample SIEM RACI to give you an idea of what it may look like in your organization. The RACI will be divided into subsections below by Category and commented on individually. A link to the full RACI Matrix is available at the end of the article.

The Stakeholders in this sample RACI are the SIEM Owner, Engineering, the Content Team, and Incident Response, who all fall under the Security Operations team. The Server Support and Storage Support teams fall under a different line of business, Infrastructure Services.

The first Category is Governance, and you can clearly see how the SIEM Owner is both Accountable and Responsible for the overall SIEM solution, dealing with the vendor, and internal escalations from any stakeholders.

The second Category is Architecture and Design, in which the SIEM Owner is also Accountable and Responsible, but Consults the Engineering, Content, and Incident Response teams. The SIEM Owner needs to work with them to make sure their requirements are met, the search speeds are adequate, the required data sources are available, and that the SIEM solution adequately meets all these requirements, and if not, are built into future versions.

For the Logging Configuration category, the SIEM Owner needs to make sure not only are the required log sources logging to the SIEM, but that they are logging the correct data. Engineering needs to be Consulted to ensure correct parsing, and the Content and Incident Response teams need to make sure the data they need is available within the logs.

The SIEM Owner is also Accountable and Responsible for leading all new projects, and ensuring the SIEM solution is compliant with the organization’s compliance and governance standards. You can also see at this point the SIEM Owner isn’t a mere decision maker; he or she will be active in the management of your company’s SIEM.

The Engineers are Accountable and Responsible for the health and stability of the SIEM solution, and for ensuring data feeds are integrated into the SIEM correctly. They do everything from application support to patching. The only two support-related tasks they are not Accountable and Responsible for are Server and Storage Support, but they will be Consulted when necessary.

The Content Team are the SIEM end users, and are strictly Accountable and Responsible for developing and maintaining rules and reports. They are also active in providing input for new use cases, but the Accountability and Responsibility for that task falls on the SIEM owner.

The Incident Response Team is Accountable and Responsible for responding to the alerts generated by the correlation rules and reviewing reports. They are also Accountable and Responsible for providing tuning recommendations for the rules and reports based on their investigations and observations.

The Engineers tried to get the Content Team to manage user accounts, but lost the battle and ended up with the task themselves.


As you can see, a RACI is a simple document that can clarify who is responsible for what part of the SIEM environment. It can also be used by leadership to quantify work efforts, assist in understanding the various tasks employees do, and identify areas that require improvement. Issues can be raised and be visible to leadership on Monday morning instead of Friday afternoon, or during a breach.

A RACI is not practical without three other major requirements: clear ownership, a teamwork-oriented environment, and a strong SIEM skillset. Clear ownership gives the owner an incentive to maintain and improve the SIEM, and prevents issues from being ignored or assuming they’re the responsibility of another entity. A high-performing team maintains and improves the environment, retains highly-skilled staff, and attracts new talent into the team. A strong SIEM skillset allows staff to execute the required tasks. All of this contributes to a better return on investment the SIEM will provide your organization, and ultimately a better security posture.


Link to a sample SIEM RACI Matrix: Sample_SIEM_RACI


A SIEM Odyssey: How Albert Einstein Would Have Designed Your SIEM Architecture

Albert Einstein taught us that there are four dimensions: the three physical dimensions plus time. Light generated by the sun exists, but it takes about eight minutes to reach the earth and exist in our environment. Many of the lights you see in the sky at night were generated by stars millions of years ago, and those stars may no longer exist today.

The four dimensions of spacetime can teach us a lot about the universe, and now a good lesson on SIEM architecture design. In SIEM environments, log data is sent through various layers, introducing a delay between the data source and destination. In a properly designed SIEM architecture, the delay between the source and destination should be minimal, a few minutes at most. But in an undersized SIEM architecture, delays between the source and destination can be high, and in worst cases, data may not reach the destination at all.

SIEM environments have three main layers. The first is the data sources, the various Windows servers, firewalls, and security tools that will either send data to your SIEM, or your SIEM will pull data from. The second layer is the Processing Layer, which consists of applications (Connectors/Forwarders/Ingestion Nodes) designed to process and structure log data, and forward it. The final layer is the Analytics Layer, which is where log data is stored, security analytics is performed, and end users search for data.

To highlight the risk introduced into your organization by an undersized SIEM architecture, we’ll use a DDoS attack against your organization as an example. The Bad Guyz Group has found a clever way to funnel millions out of your organization. Before they initiate the fraud scheme, they want to distract your organization from what is actually going on in order to buy themselves time, and thus launch a large-scale DDoS against your web servers.

Your DDoS protections begin sending out alerts, notifying your SOC of the attack and that there’s no concern at the moment. The amount of traffic directed at your web servers seems to be increasing, but is not near a level that will overwhelm your DDoS protections. Your SOC notifies leadership that even though there’s an active attack in progress, there’s nothing to be worried about.

While your DDoS protection is working as expected, your SIEM Processing Layer is being flooded with a 400% increase in firewall and proxy traffic. Your SIEM Processing Layer was only designed to process 10,000 events per second (EPS), and is now struggling to process a surge of 40,000 EPS. Cache files start to appear within minutes, growing at a rate of 100GB per hour, which will exhaust cache space within eight hours.

Hours later, SOC Analysts notice that the timestamps on most of the log data are several hours behind. They send an email to the SIEM Engineers, asking them to see if there’s anything wrong with the SIEM application. When the SIEM Engineers get out of their project meeting a few hours later, they log in to the servers and notice the cache files, extremely high EPS rates, and maxed-out RAM/CPU usage. They then notice that the surge in data is being caused by firewall and proxy logs. After a conversation with the SOC, the SIEM Engineers are informed of the DDoS attack that happened earlier in the day.

Later in the evening, the SIEM Engineers warn leadership that the Processing Layer is dropping cache files of log data and is refusing new connections, resulting in data loss. The average log data delay now stands at eight hours as the DDoS attack continues.

Fortunately, the DDoS attack stops the following morning, and the SIEM Processing Layer begins reducing the number of cache files on the servers. The SIEM Engineers anticipate that the cache files should be completely cleared by 5PM.

Later that morning, the SOC Manager gets a call from the Fraud team, asking if they can see traffic to several IP addresses. The SOC Analysts begin searching, but the response times are very slow and the latest data available is from last night. Just as the SIEM Engineers expected, by 5PM all cache files are cleared, and analysts are searching data in real time again. They find only one hit from one of the IPs provided. The Fraud team insists there should be more than that, but the SIEM Engineers note that the other hits may have been dropped when the Processing Layer was refusing new connections during the surge in data. Leadership isn’t happy, and calls for an immediate review of the SIEM environment.

The bad news is that many SIEM environments are not sized appropriately to deal with such scenarios, or with legitimate data surges in general. These situations can leave your organization blind to an attack in progress, as the data required for an investigation exists, but is not yet available to your analysts, or worse is being dropped from existence.

The good news is that you can significantly reduce the probability and severity of this scenario. While SIEM environments can be expensive, the costly part is typically the Analytics Layer, and for many organizations over-sizing this layer isn’t an option. However, the Processing Layer tends to be much less expensive; in some cases the cost is simply that of the physical or virtual servers required.

A SIEM Processing Layer should be sized significantly larger than your sustained average events per second rate. While this number is required to determine SIEM application licensing costs, many organizations make the mistake of sizing their architecture according to this metric. In addition to spikes, the amount of traffic received by your SIEM during the day is likely to be much higher than at night. If your sustained rate is 20,000 EPS, your daytime rate may be 30,000 EPS and your nighttime rate in the 5,000-10,000 EPS range. If you receive a spike in traffic during the day, that 30,000 EPS can turn into 60,000 EPS. In many SIEM environments, this would cause the Processing Layer to quickly exhaust caches and begin dropping data. Supporting a large spike in traffic can be as simple as adding more devices (Connectors/Forwarders/Ingestion Nodes) to the Processing Layer, as the sketch below illustrates. The increase in processing power and overall cache availability reduces the risk of log transmission delays and data dropping.
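
A back-of-the-envelope check (Python; the per-Connector capacity is a made-up figure, since real capacity varies by product and hardware):

import math

def connectors_needed(worst_case_eps, per_connector_eps, spare=1):
    # Size for the worst case, then add spare capacity for failure/maintenance.
    return math.ceil(worst_case_eps / per_connector_eps) + spare

# A 30,000 EPS day rate that doubles during a spike, with a hypothetical
# 10,000 EPS capacity per Connector/Forwarder:
print(connectors_needed(60_000, 10_000))   # 7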

In addition to reducing the probability of data delay and loss, an over-sized Processing Layer brings high availability benefits and can make migrations and upgrades easier. If you have a single point of failure, you can lose your Processing Layer entirely when that device fails. If you have only enough Connectors/Forwarders to process your sustained EPS rate, you risk the above scenario when one of the devices within the Processing Layer fails, as the others have to absorb the extra EPS. If you need to upgrade your Processing Layer, the extra devices can make the upgrade smoother and free of operational impact.

While we may have solved the issue with the Processing Layer, a surge in data can also cause transmission delays, data loss, slower end-user search response times, and system stability issues on the Analytics Layer. While it may seem logical to build an Analytics Layer to support double the sustained EPS rate, that can be cost prohibitive for many organizations. An adequately sized Processing Layer can assist during surges by aggregating data (combining similar events into one, for SIEM products that support aggregation), caching it locally, and limiting the EPS-out rates to the Analytics Layer. Your SIEM Engineers can also prioritize some data over others: if there’s a dire need for a particular data source, your staff can throttle the other data sources so the pertinent ones are sent to the Analytics Layer first.

In summary, there’s a strong return on investment in building an adequate SIEM Processing Layer, given the low costs, risk reduction, and invaluable security benefits. Even with a minimal Analytics Layer, a properly sized Processing Layer will be able to process a surge of data, cache data that can’t be forwarded, prevent data loss, and reduce the risks caused by large increases in log data. Make the investment and leave spacetime issues to the physicists!


The Pros and Cons of Structuring Log Data at Ingestion Time with SIEMs

Another important but often overlooked part of a SIEM architecture and design or product analysis exercise is whether or not the product structures data as it’s ingested/processed, and how that can affect your organization’s environment. This seemingly minuscule piece of functionality can have a significant effect on your SIEM environment, and can even introduce risk into your organization.

Let’s start with the advantages of structuring (parsing) log events at ingestion time. In general, structuring data as it’s ingested/processed gives you many opportunities to manipulate the data in a positive way.

Advantages of Structuring Log Data at Ingestion Time

1. Ability to Aggregate

Aggregation gives the SIEM tool the ability to combine multiple similar events into a single event. The biggest advantages of this are the reduction in EPS rates processed by the SIEM and the reduced storage requirements. Ten firewall deny events from the same source and destination can be combined into one, using 1,000 bytes of storage instead of 10,000 bytes. SIEM tools can typically aggregate hundreds of events over a period of thirty seconds or more, so it’s common to see aggregation successfully combine hundreds of events into one, as sketched below. Caution must be used with aggregation settings: the longer the aggregation window (the maximum amount of time the Connector/Log Processor will wait for similar events to combine into one), the higher the memory requirement.
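
A minimal sketch of the idea (Python; the aggregation key is illustrative, and a real Connector aggregates as a stream within its time window rather than in batch):

from collections import Counter

# Ten identical firewall denies arriving within one aggregation window.
events = [{"src": "10.0.0.5", "dst": "8.8.8.8", "action": "deny"}] * 10

# Collapse events that share the same key into one event with a count.
counts = Counter((e["src"], e["dst"], e["action"]) for e in events)
aggregated = [
    {"src": src, "dst": dst, "action": action, "count": n}
    for (src, dst, action), n in counts.items()
]
print(aggregated)   # one event with count=10 instead of ten events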

2. Ability to Standardize Casing

Most SIEM tools can easily standardize the casing of all fields parsed by the Connector/Log Processor. This is an often overlooked benefit that comes with structuring data: the various data sources in your environment will log in various casings, which introduces a potential security risk, namely that your security analysts may get null search results when the data they need is actually there.

In several investigations, SOC analysts had trouble getting hits on a particular user’s data. Upon closer inspection, we found the desired data, and discovered that the initial searches were coming up null because the casing in the searches did not match that of the log data. The SOC analysts were searching for “frank,” but the SIEM tool was configured to be case-sensitive, and the Windows logs were being logged in uppercase as “FRANK.” By simply having the Connector standardize casing for particular fields, you can minimize scenarios like this, as the sketch below shows.
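
A minimal sketch of connector-side casing standardization (Python; which fields are safe to lowercase is a design decision for your environment):

CASE_INSENSITIVE_FIELDS = {"user", "host", "domain"}   # illustrative list

def standardize_casing(event):
    # Lowercase only the fields where casing carries no meaning.
    return {
        key: value.lower()
        if key in CASE_INSENSITIVE_FIELDS and isinstance(value, str)
        else value
        for key, value in event.items()
    }

print(standardize_casing({"user": "FRANK", "message": "login failure"}))
# {'user': 'frank', 'message': 'login failure'}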

Standardizing casing can also increase search performance. When the SIEM tool only has to search in one case, it reduces the number of character combinations it must match. A simple search for “FRANK” only requires the tool to match one casing variant instead of the thirty-two possible variants of a five-letter word (FRANK, frank, Frank, FRank, FRAnk, etc.).

Why not simply disable case-sensitivity for searches? This seems like the best option, as it mitigates the risk of missing data. The major disadvantage, however, is that it increases the processing power required for searches. For smaller environments this is practical and the effects will likely be negligible. In larger environments, where the SIEM is processing several thousand events per second and serving many end users, the impact can be noticeable.

A practical workaround is to disable case-sensitivity for particular searches at the discretion of the analyst. Many SIEM tools offer a per-search option for this very reason, typically set before the search is run.

A best practice that mitigates missing data due to casing issues while maximizing performance is to start with a generic, case-insensitive search, and once you get hits on the data you’re looking for, switch to the casing you see. For example, if you’re looking for user Frank’s Windows logs, start with a small (e.g. a few minutes) case-insensitive search for “frank”, and once you see that the Windows logs record it as “Frank,” switch to the proper casing and then expand the search window. This practical option helps analysts avoid missing data without requiring you to configure your tool to be case-insensitive.

Regardless of how you choose to configure case-sensitivity, ensure your staff understand how your environment works and the best practices for searching your data.

3. Ability to Add and Modify Fields

Many SIEMs can append data to existing fields, override fields with new data, and modify the values placed into fields. A common nuisance when searching log data is that some systems log their FQDN (e.g. server01.ca.companya.com) while others log only the server name (server01). This poses a risk similar to case-sensitivity: SOC analysts search for Device Host Name =”server01” but get no results because the server appears in the logs as “server01.ca.companya.com”. This forces the SOC analyst into a wildcard search such as Device Host Name =”server01*”, which ultimately requires more processing power from the SIEM.

When data is structured/parsed at ingestion time, the SIEM can do a simple lookup of, say, the first period, strip whatever follows it, and place that into another field. Using “server01.ca.companya.com” as an example, the parser can leave server01 in the host name field and put the stripped ca.companya.com into a domain field. Analysts then know they only need to search for the server name in the host name field, and to check the domain field if they want to know the domain of the server.
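
A minimal Python sketch of that split, assuming dictionary events with an illustrative host_name field:

def split_fqdn(event):
    # Split on the first period: the short name stays in host_name,
    # anything after the period moves to a separate domain field.
    host, _, domain = event.get("host_name", "").partition(".")
    event["host_name"] = host
    if domain:
        event["domain"] = domain
    return event

print(split_fqdn({"host_name": "server01.ca.companya.com"}))
# {'host_name': 'server01', 'domain': 'ca.companya.com'}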


Now that we’ve fallen in love with the advantages of structuring data at ingestion time, let’s look at the disadvantages before we leave for the honeymoon.

The Disadvantages of Structuring Log Data at Ingestion Time

1. Increased Event Size

The first, and potentially most costly, aspect of structuring data at ingestion time is the increase in event size. Structuring the data increases the size of the event, in many cases doubling it or more. Please see my related article, The Million Dollar SIEM Question: To Parse or Not To Parse, for more on this.

2. Potential Data Loss and Integrity Issues

Given that your parser is instructed to place values into specific fields, for example taking the value after the second comma and putting it into the user name field, there are potential integrity and data loss issues if your parser is not updated at the same time a data source’s logging format changes.

Let’s take a look at a sample log event:

01-11-2018 14:12:22,, frank, authentication, interactive login, successful

The parser takes the timestamp from the characters before the first comma, the IP address after the first comma, the user name after the second comma, the type of event after the third comma, the type of login after the fourth comma, and finally the outcome after the fifth comma. All is well until the vendor decides to add a new field, an event code, and change the order of the fields:

01-11-2018 14:12:22, 4390000,, authentication, interactive login, successful, frank

This is a simple change, but your parser needs to be updated to ensure the values land in the correct fields and to capture the new field. Should the new log format go live without a corresponding parser change, we’ll not only have data in the wrong fields, we won’t know who performed the login, as the value “frank” will not be visible to the parser.
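
A minimal Python sketch of how a positional parser fails silently here; the field names are illustrative:

FIELDS = ("timestamp", "ip_address", "user_name", "event_type", "login_type", "outcome")

def parse(line):
    # Map comma-separated values to fields purely by position;
    # zip silently drops any values beyond the expected six.
    values = [v.strip() for v in line.split(",")]
    return dict(zip(FIELDS, values))

old = "01-11-2018 14:12:22,, frank, authentication, interactive login, successful"
new = "01-11-2018 14:12:22, 4390000,, authentication, interactive login, successful, frank"

print(parse(old)["user_name"])  # 'frank' - correct
print(parse(new)["user_name"])  # '' - the event code shifted every field,
                                # and 'frank' was dropped without any error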

3. Increased System Requirements

The more modifications the parser makes to each event, the more processing power the Connector/Forwarder/Processor consumes. Ensure there are sufficient system resources to handle the required modifications.
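
As a rough illustration of that cost, the Python sketch below times a pass-through against a parse-plus-one-modification using the sample event from above; the field names are illustrative. The gap widens with every additional modification, and at several thousand EPS it adds up.

import timeit

line = "01-11-2018 14:12:22,, frank, authentication, interactive login, successful"
FIELDS = ("timestamp", "ip_address", "user_name", "event_type", "login_type", "outcome")

def pass_through(raw):
    # Baseline: forward the raw event untouched.
    return raw

def parse_and_modify(raw):
    # Parse into fields, then apply one modification (casing standardization).
    values = [v.strip() for v in raw.split(",")]
    event = dict(zip(FIELDS, values))
    event["user_name"] = event["user_name"].lower()
    return event

# Time per 100,000 events for each approach.
print(timeit.timeit(lambda: pass_through(line), number=100_000))
print(timeit.timeit(lambda: parse_and_modify(line), number=100_000))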


A summary of the pros and cons of structuring data at ingestion time:

Pros: the ability to aggregate similar events, standardize casing, and add or modify fields.

Cons: increased event size, potential data loss and integrity issues when parsers fall behind format changes, and increased system requirements.

The Million Dollar SIEM Question: To Parse or Not To Parse

Given that SIEMs process and store data, one of the major requirements of a successful SIEM environment is proper and sufficient storage. Depending on your organization’s SIEM requirements, the cost of storage alone for your SIEM can exceed application licensing costs.

The most common omission in a SIEM product selection analysis is differentiating how the proposed applications process and store data. SIEMs process and store data differently, and thus will all produce different storage requirements. Given that storage is a major cost of your environment, how the application processes and stores data can alter your storage costs significantly.

Traditional as well as newer SIEM products are designed to parse data as it’s ingested, and thus store data as parsed events. Additionally, SIEM products parse data into different field sets; a Windows event in SIEM Product A may be parsed into 200 fields, while SIEM Product B parses it into 193 fields. This results in a different event size for the same data. Some newer SIEM products do not parse data as it’s ingested; they store the data raw and only parse it when required, e.g. when you run a query or report.

There are advantages and disadvantages to both. Parsing (or normalizing/enhancing/enriching) the data structures it by adding applicable metadata. For example, the log entry ‘jsmith,, failed login’ would appear parsed as ‘username=jsmith, IPAddress=, event name=failed login’. While this makes the data more organized and allows for more refined searches, it makes the log entry larger. A 500-byte raw event can turn into a 1,000-byte parsed event, doubling the storage requirement for that event.
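
A quick Python check of that size difference, using the sample entry above:

raw = "jsmith,, failed login"
parsed = "username=jsmith, IPAddress=, event name=failed login"

# Parsing turns 21 bytes into 52 bytes - roughly 2.5x - before any compression.
print(len(raw), len(parsed))  # 21 52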

While parsing increases the size of the event, the SIEM tool gains the ability to manipulate the data. Many SIEMs that structure data can aggregate events, taking multiple, similar events and combining them into one. For example, the event ‘source address=, source port=9022, event name=deny, destination address=, destination port=443’ that occurs 10 times can be combined into one, with an extra field added, e.g. event count=10, to indicate how many times the event occurred. Thus, 10 events at 1,000 bytes each use 1,000 bytes of storage in the structured SIEM application instead of 10,000 bytes.

To highlight how this can affect your organization, let’s look at Company A as an example. The technical staff at Company A have been reading about the benefits of SIEM from some guy on the Internet, and decide they want to implement one. They want 2 months of online data followed by 10 months offline, and for the data to be highly available. Their storage team can provide high-speed storage for $5,000 per terabyte, and lower-speed storage for $2,000 per terabyte.

Next, they’ve invited a couple of vendors in for a product overview, and they’re making each complete the SIEM Storage Requirements and Costs spreadsheet they created.

First up for Company A is Product A. Product A does not parse event data as it’s ingested and stores logs in raw format. It gets up to 50% compression on live data, and 85% compression on archived data. The product can replicate data and thus easily meet the high availability requirement. The tech staff at Company A also went the extra mile and determined that the average log event size from all their systems is 700 bytes.

A summary of the requirements so far: two months of online data on high-speed storage at $5,000 per TB, ten months of offline data on lower-speed storage at $2,000 per TB, high availability, and an average log event size of 700 bytes.

Again, since Product A stores data only in raw format, the Average Normalized Event Size is 0, as it does not store a parsed/normalized event. Product A cannot aggregate data, so there is no Aggregation Benefit. The product will create a copy of each event, bringing the replication factor to 2.

Next, we’re going to determine the average sustained events-per-second rate from the numbers the tech staff provided. Based on the total number of devices, the SIEM will need sufficient storage for 5,000 Total Average EPS (events per second) at 700 bytes per event. Using the SIEM Storage Requirements and Costs spreadsheet, we get the following results.

The total daily uncompressed storage requirement is 607 GB. At 50% compression, the Total Online Storage Requirement is 18 TB. At 85% compression, the Total Offline Storage Requirement is 28 TB. 18 TB at $5,000 per TB brings the cost to $91,000, and 28 TB at $2,000 per TB adds another $55K, bringing the total to approx. $146,000.

Up next is Product B, which is a SIEM tool that’s designed to work with structured data. It will parse log events and create an Average Normalized Event Size of 2,000 bytes, and will be able to reduce events by 40% through aggregation. It can’t replicate data, but the Connector/Forwarder will be configured to send to 2 destinations. And by complete chance, it has the exact same compression ratios.

Product B is going to process the exact same raw EPS, but the Aggregation Benefit drops the sustained Total Average EPS to 3,000.

The total daily uncompressed storage requirement is approx. 1 TB. At 50% compression, the total Online Storage Requirement is 31.2 TB. At 85% compression, the total Offline Storage Requirement is 47.6 TB. 31.2 TB at $5,000 per TB brings the cost to $156,000, and 47.6 TB at $2,000 adds another $95K, bringing the total to approx. $251,000.
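
For readers who want to sanity-check these figures, here’s a minimal Python sketch of the spreadsheet arithmetic. It assumes 30.4-day months, decimal GB/TB, and that the replication factor or destination count simply multiplies the stored volume; the outputs land within rounding of the article’s numbers.

def storage_cost(eps, event_bytes, copies,
                 online_months=2, offline_months=10,
                 online_comp=0.50, offline_comp=0.85,
                 online_cost=5000, offline_cost=2000):
    # Daily uncompressed volume in GB, then compressed online/offline TB.
    daily_gb = eps * event_bytes * 86400 * copies / 1e9
    online_tb = daily_gb * online_months * 30.4 * (1 - online_comp) / 1000
    offline_tb = daily_gb * offline_months * 30.4 * (1 - offline_comp) / 1000
    cost = online_tb * online_cost + offline_tb * offline_cost
    return online_tb, offline_tb, cost

# Product A: 5,000 EPS of 700-byte raw events, replicated once.
print(storage_cost(5000, 700, 2))   # ~18 TB online, ~28 TB offline, ~$147K
# Product B: aggregation drops EPS to 3,000, but parsed events are 2,000 bytes.
print(storage_cost(3000, 2000, 2))  # ~32 TB online, ~47 TB offline, ~$252K

Swapping in different EPS rates, event sizes, retention periods, and storage prices makes it easy to put each vendor’s numbers on equal footing, and the same arithmetic applies to the second case study below.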

As you can see, Product A produces a significantly different storage cost for Company A than Product B does, due to the way the products process and store data. The tech staff at Company A like Product B better, but know that it will be tough to sell the VPs a solution that will cost $500,000 more over the next five years.

However, the result could be completely different at Company B, which has different requirements than Company A. Company B is a telco that wants their IT staff to monitor their network infrastructure with a SIEM. The tech staff at Company A were generous enough to share the SIEM Storage Requirements and Costs spreadsheet with their buddies at Company B.

Company B decides to bring in Product A first and has them fill out the spreadsheet. As the environment mainly consists of network devices, the Average Raw Event Size is going to be 450 bytes. Again, since Product A doesn’t parse log data, there is no Average Normalized Event Size or Aggregation Benefit.

Next, we can calculate a total daily uncompressed storage requirement of 1.2 TB from the 15,780 EPS rate. That will produce an online storage requirement of 37 TB and offline of 56 TB.

That will bring the total yearly storage cost to approx. $300,000.

Next, the tech staff at Company B bring in Product B for an overview.

As the environment consists mostly of network devices, Product B will produce an Average Normalized Event Size of 1,500 bytes, and the Aggregation Benefit will be very high for the firewall data, reaching an estimated overall benefit of 80%.

Through the strong aggregation benefit, the total EPS is reduced from nearly 16,000 to just over 3,000. As a result, the storage requirements for Product B are 24.5 TB online and 37.4 TB offline.

For Product B, then, the storage costs sit at approx. $200,000 per year.

The tech staff at Company B like Product A better for some reason, but like their counterparts at Company A, they know it will be tough to justify the extra $500,000 in storage costs over the next five years.

So as you can see, To Parse Or Not To Parse is not some Shakespearean-sounding cliché from some geek on the Internet. Requirements such as storing two copies of each event, storing both the raw and normalized event (roughly, add the storage costs for Products A and B together!), or retaining two years of log data can create tremendous storage cost differences over the tenure of your SIEM environment.

The best product for your company will be the one that meets your requirements best, and storage costs alone should not be the deciding factor in which product is selected for your organization. There are many advantages and disadvantages to leaving data in raw format or parsing it, but as the numbers show, you can’t ignore a question whose answer can really be in the millions!


For the record, the lads at Company A have shared the SIEM Storage Requirements and Costs spreadsheet.