Streamlined Seamless Integrated Security Solutions (SSISS): A SIEM Requirements Gathering Case Study

Implementing a SIEM is a challenge for any organization. SIEMs can take years to implement, so it’s critical to build them appropriately: even minor changes can result in months of rework. SIEMs can be resource-intensive applications, reading and writing several thousand events per second, so insufficient RAM or CPU, or slow storage, can mean poor performance and application instability. SIEM environments can have many stakeholders, so changing them can require approvals from many departments and subject matter experts within an organization.

To reduce the risks associated with a SIEM implementation, a thorough requirements gathering exercise should be performed. We need to know who is going to be using the tool, how often it will be used, what type of data it will collect, how long the data will be stored, and more. This will allow solution designers to best architect a solution, determine which product is most suitable for your organization, select an appropriate licensing model, and in general eliminate any assumptions made in the design. We also need to give the requirements to the vendors so they can propose an accurate, high-performing solution.

One of the major things to keep in mind during a requirements gathering exercise is to challenge all requirements. If a requirement significantly raises the cost of the solution or makes it more difficult to maintain, then there should be a justification for it. Common misconceptions about using log data as legal evidence and about data retention periods can lead solution designers to unnecessarily increase the complexity and cost of a SIEM. For example, some organizations mandate that log data must be retained for seven years, but this is typically a requirement for transactional data, not system log data.

Let’s dive into a sample requirements gathering exercise to highlight some of the major items we need to capture in order to design an appropriate solution. SSISS (Streamlined Seamless Integrated Security Solutions, not a real company to my knowledge, but what a name), one of the industry’s leading IT security companies, has hired me to assist one of their clients, Company A. Numerous problems exist within the environment and Company A isn’t sure where to even start.

The following Monday I arrive at Company A and meet with the VP of Security Operations. I learn that the SIEM was implemented a couple of years ago by a provider Company A is no longer doing business with. Complaints about the SIEM range from slow search response times and data loss to application instability and audit deficiencies. The VP has the budget to build a new SIEM and doesn’t want to spend effort fixing the existing one. I ask the VP for all stakeholders and a point of contact within each group that I can work with. The stakeholders are the security operations team, network team, server team, and compliance team.

I first meet with the security operations team. Their main complaints with the system are that it’s unstable, slow, and overall difficult to get work done with. Searches and reports time out, and the system stops working at least once a week, forcing a reboot. As I note the issues down, I ask if there’s anything else the system doesn’t do today. They reply that the data sources they need are there and they can create the required correlation rules; it’s just that the system is slow and unstable. Log data growth of 20% over the next five years seems reasonable, and they don’t currently forward any of the log data to another application, nor do they plan to. They agree to send me an email with their documented requirements.

Next is a session with the network team. They use the SIEM mainly to troubleshoot firewall and router issues, and note the searching is slow. They are mandated to log all of their devices to the SIEM, but there have been no integration issues to date. Log data growth of 20% over the next five years seems reasonable to them as well. I ask that they send me an email documenting what they need the SIEM to do along with the quantity of devices they intend to integrate.

I next meet with the server team. They generally don’t use the SIEM, but they are mandated to configure all servers to log to it. The only output they get from it is a monthly report to verify that all systems are logging. When asked about growth, they by chance give the same number used by the security operations and network teams. They agree to send me a note documenting the type and quantity of servers that they own.

Finally, a session with the compliance team lands me with some large requirements: seven years of log data and encryption of data at rest. I reply that most industry standards mandate one year of log data, and that seven is typically for transactional data, which is not stored in the SIEM. Additionally, there is no personally identifiable or financial data within the logs, so while many SIEMs offer masking capabilities, enabling them here would only decrease performance. The compliance team replies that as long as there is no financial data, one year of data suffices, and access to the data needs to be restricted, but not encrypted or masked. They agree to send me their documented requirements.

The next day, I receive responses from all teams. Everything looks okay with the security operations requirements:

I note that since we’ll be using Syslog for many data sources, the logs will be unencrypted from the source to the Processing Layer, but encrypted from the Processing Layer to the Analytics Layer. The team confirms this will not be an issue. I also ask if there are any opportunities for filtering data, and discover the team doesn’t need any of the events I’m proposing to drop in the new solution. The team does a quick count and finds that dropping these events will reduce EPS rates by 20%. They also note that their current SIEM doesn’t do any aggregation.

The only additional request I have for them is to compile the distinct number of systems logging to the SIEM, each device type’s average sustained and peak sustained EPS rates, and the total number of correlation rules they intend to have in production. This will help me define an accurate architecture and accurate storage requirements.

Requirements for the network team look good with the quantity of network devices:

The server team requirements are documented as expected with the total quantity of servers:

Fortunately, the conversation with the compliance team worked in my favour, and the only requirement from them is to retain log data for one year.

The security operations team responds to my request a few days later, and I’m now able to populate my SIEM Architecture Sizing, Storage and Infrastructure Costs Calculator spreadsheet, which I’ll be sending to each vendor.

The first tab lists the data source requirements:

The two most important numbers in the above table are the Total Average Sustained 24h EPS (events per second) rate, which is the total number of events received in a day divided by the number of seconds in a day, and the Peak Sustained EPS rate, which is the maximum EPS processed by the system in a day, typically during business hours. The Total Average Sustained 24h EPS rate will be used to determine storage requirements and licensing costs depending on the product licensing model, while the Peak Sustained EPS rate will be used more to size an appropriate architecture. One of the biggest mistakes SIEM consultants make is using the sustained EPS rate to size an architecture. This results in the system being undersized during peak hours. For example, proxy traffic at night can be a mere fraction of what it is during the day. Thus, an architecture designed using the sustained rate will cause the system to slow down and cache data during the day, when it’s dealing with 10,000 peak EPS rather than the sustained 1,500 EPS it was designed for.
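The distinction between the two rates is easy to sanity-check with a few lines of arithmetic. Here's a minimal sketch; the 129.6 million events/day input is a hypothetical figure chosen to yield the 1,500 EPS sustained rate from the proxy example, and the 10,000 EPS peak is something you measure, not derive:

```python
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

def average_sustained_eps(events_per_day: float) -> float:
    """Total events received in a day divided by the seconds in a day."""
    return events_per_day / SECONDS_PER_DAY

# Hypothetical proxy source: ~129.6M events/day, peaking at 10,000 EPS
# during business hours.
avg_eps = average_sustained_eps(129_600_000)  # -> 1500.0
peak_eps = 10_000  # taken from monitoring, not computed

# Storage and licensing follow the sustained rate; the architecture
# must be sized for the peak, or the system will cache all day.
print(avg_eps, peak_eps)
```

Sizing to `avg_eps` instead of `peak_eps` is exactly the undersizing mistake described above.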

The second table lists the functional requirements (what the system must do):

The third table lists the Environment Parameters, which are other design requirements that need to be built into the architecture.

The first item is the Processing Layer Forwarding Factor: how many copies of each event the Connectors/Collectors/Forwarders will send to the Analytics Layer. Many SIEM products forward each event to two destinations to make the solution highly available, so if I want a highly available solution, I’ll need to enter a value of two or greater. The second item is the Analytics Layer Replication Factor, which indicates how many copies of each event will be copied to another server for high availability. Given that these two items can vary per SIEM product, I’m going to let the vendors enter these numbers.

The third item is the Analytics Layer Forwarding Factor, which indicates how many systems the Analytics Layer will need to forward to. As confirmed by the Security Operations team, we will not be forwarding data from the SIEM to another system. This is a critical design consideration, as missing this requirement can cause the architecture to be severely undersized.

The fourth item, Filtering Benefit, will reduce the amount of data sent from the Processing Layer to the Analytics Layer by the entered percentage. I’m going to enter a value of 20%.

The fifth item, Aggregation Benefit, will depend on whether the SIEM product can aggregate. I’ll leave that value for the vendor as well.

The Processing Layer Spike Buffer and Analytics Layer Spike Buffer are designed to prevent caches from being formed when there are surges of log data. Why this is an important consideration is detailed in the article A SIEM Odyssey: How Albert Einstein Would Have Designed Your SIEM Architecture. I’m going to use a value of 25% for the Processing Layer and 15% for the Analytics Layer.

Finally, as discussed with many Company A teams, log data growth of 20% will be assumed.

After all the parameters are input, I get the following table from the Architecture Requirements tab. The two key values are the Total Processing Layer EPS Requirement and the Total Analytics Layer EPS Requirement. The SIEM architecture will need to be sized based on these numbers. However, these values may change per vendor depending on how their product works.
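The spreadsheet’s internal formulas aren’t shown, so the sketch below is one plausible reconstruction, not the actual calculator. Assuming a peak of 10,000 EPS and the parameters above (25% Processing spike buffer, 15% Analytics spike buffer, 20% filtering, 20% growth, one forwarded copy, no aggregation), it happens to reproduce the 15,000 EPS Processing Layer and roughly 11,000 EPS Analytics Layer figures quoted in the vendor responses:

```python
def processing_layer_eps(peak_eps, spike_buffer, growth):
    # Peak ingest rate, padded for surges and multi-year growth.
    return peak_eps * (1 + spike_buffer) * (1 + growth)

def analytics_layer_eps(peak_eps, filtering, aggregation,
                        forwarding_factor, spike_buffer, growth):
    # Events surviving filtering and aggregation, times the number of
    # copies the Processing Layer forwards, padded for surges/growth.
    kept = peak_eps * (1 - filtering) * (1 - aggregation)
    return kept * forwarding_factor * (1 + spike_buffer) * (1 + growth)

peak = 10_000  # assumed peak sustained EPS
print(processing_layer_eps(peak, spike_buffer=0.25, growth=0.20))  # -> ~15,000
print(analytics_layer_eps(peak, filtering=0.20, aggregation=0.0,
                          forwarding_factor=1, spike_buffer=0.15,
                          growth=0.20))                            # -> ~11,040
```

Note how a forgotten forwarding factor or replication factor would multiply the Analytics Layer requirement, which is why those values are left to each vendor.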

I’m now ready to send the SIEM Architecture Sizing, Storage and Infrastructure Costs Calculator spreadsheet to the three shortlisted vendors.

While we should be focused on meeting Company A’s requirements, we shouldn’t forget to be equally focused on meeting the vendor’s requirements. We need to ensure the proper system requirements are met, including CPU, RAM, and storage. Failure to meet these requirements can result in numerous complications, including slow search response times, application instability, data loss, operational nightmares, and ultimately increased risk to the organization.

The first vendor to respond is Vendor 1. Their SIEM product doesn’t structure logs during ingestion and can’t aggregate data, but retaining the logs in raw format keeps the average message size low at 700 bytes. Their product provides high availability by replicating data across multiple Analytics Layer servers. For the non-HA solution, the Processing Layer will need to process 15,000 EPS, while the Analytics Layer will process 11,000 EPS. The average event size of 700 bytes produces offline storage costs of $21,300/year, while the online storage will be provided by the local disks on the servers. For the HA solution, the Processing Layer requirement stays the same, but the Analytics Layer requirement jumps to 22,000 EPS, and storage costs double to $42,600/year.

Vendor 1 recommends four servers in the Processing Layer, which is more than enough to handle the anticipated 15,000 EPS. This should prevent any caching or log data delays when there’s a surge of log data. For a highly available Processing Layer, they recommend doubling the number of Processing Layer servers, but note that even with the original four, if one server is unavailable the remaining three should still be sufficient to handle the 15,000 EPS. They also note a load balancer can be used in front of the Processing Layer servers to balance traffic and provide high availability. The 500GB of local storage on each server should provide 3.5 days of cached data in the event the Analytics Layer is unavailable, and can be bumped up if desired.
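The cached-days claim is worth a back-of-the-envelope check. Assuming the cache holds the ~11,000 EPS destined for the Analytics Layer at 700 bytes per event (both figures from Vendor 1’s response), raw arithmetic gives about three days; the vendor’s slightly higher 3.5-day figure presumably assumes some compression or a lower effective rate:

```python
def cache_days(total_cache_gb, eps, avg_event_bytes):
    """Days of local buffering if the Analytics Layer goes down."""
    gb_per_day = eps * avg_event_bytes * 86_400 / 1e9
    return total_cache_gb / gb_per_day

# Four Processing Layer servers x 500 GB of local disk each.
print(round(cache_days(4 * 500, 11_000, 700), 1))  # -> 3.0 days
```

Running the same check against any vendor’s proposal quickly shows whether "bumping up" local storage is worth the cost.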

For the Analytics Layer, Vendor 1 recommends a large physical server with 48 cores and 256GB of RAM, and strongly recommends a combination of solid-state and local hard disks for the storage medium. Should all the server requirements be met, they have no concerns with meeting the search speed and correlation rule requirements. And to meet the high availability requirement, a second server can be added, and the primary Analytics server will replicate a copy of each event to the secondary server. Vendor 1 can also provide the entire solution as a cloud-based service.

The licensing model for Vendor 1 is based solely on stored GB per day. You can add as many servers or end users as needed without any additional costs.

Vendor 2 responds next with a SIEM product that structures data as it’s ingested and can provide significant aggregation benefits, but produces an average event size of 2,000 bytes. The product provides high availability by having the Processing Layer send a copy of each event to a second destination, commonly known as “dual-feeding.” For the non-HA environment, the Processing Layer will need to process 15,000 EPS, but given the product’s aggregation functionality, the Analytics Layer will only need to process 5,500 EPS. However, the average event size of 2,000 bytes bumps the offline storage costs up to $30,400/year. For the HA solution, the Processing Layer requirement doubles to 30,000 EPS, the Analytics Layer requirement to 11,000 EPS, and storage costs to $60,800/year.

For the non-HA solution, Vendor 2 claims four servers in the Processing Layer will be more than adequate to handle the expected 15,000 EPS, and can be augmented with a load balancer and additional servers for better performance and high availability.

The Analytics Layer is a single server with significant resources. Vendor 2 claims it should provide all required functionality as well as fast search response times for end users. To make the solution highly available, we can simply add another server and configure the Processing Layer to “dual-feed.”

Vendor 2’s licensing model is a combination of CPU cores, peak EPS rates, and the amount of end users logged in simultaneously.

Last in line is Vendor 3, who provides an appliance-only SIEM solution that structures data as it’s ingested and can provide aggregation benefits, but produces the largest event size, 2,300 bytes, due to the large field set the product uses. Like Vendor 2, they anticipate an aggregation benefit of 50%, which produces a non-HA Processing Layer requirement of 15,000 EPS and an Analytics Layer requirement of 5,500 EPS, but the increased message size bumps offline storage costs up to $35,000/year. For HA, the values can simply be doubled.
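The three offline storage quotes turn out to be mutually consistent with a single unit price. Working backwards from Vendor 1’s quote gives roughly $0.0877 per GB per year; this unit cost is reverse-engineered for illustration, not stated by any vendor. Applying it to each vendor’s Analytics Layer EPS and average event size reproduces all three quotes:

```python
def offline_storage_cost(eps, avg_event_bytes, usd_per_gb_year=0.0877):
    # Annual stored volume in GB, times the assumed unit price.
    gb_per_year = eps * avg_event_bytes * 86_400 * 365 / 1e9
    return gb_per_year * usd_per_gb_year

for vendor, eps, event_bytes in [("Vendor 1", 11_000, 700),
                                 ("Vendor 2", 5_500, 2_000),
                                 ("Vendor 3", 5_500, 2_300)]:
    # -> ~$21,300 / ~$30,400 / ~$35,000 per year respectively
    print(vendor, round(offline_storage_cost(eps, event_bytes)))
```

This makes the trade-off explicit: aggregation halves the EPS, but a large structured event size can still make storage more expensive than a lean raw format.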

Vendor 3 proposes only two servers in the Processing Layer, which they are certain can handle the proposed EPS rates and which can be scaled horizontally for high availability.

The Analytics Layer is a single appliance the vendor claims can meet all the searching, reporting, and analytics requirements. The high availability architecture is similar to Vendor 2’s, where a second Analytics Layer server can be added and the Processing Layer can be configured to “dual-feed.”

The licensing model is simply based on the average daily EPS rate on the Analytics Layer. Vendor 3 supports the entire appliance, from the hardware and OS to the application, and issues patches for all of it. Patches and upgrades seem simple: the vendor provides single-file updates that can be uploaded at the click of a button.

The appliance-based solution sticks out in my mind, given the statements the VP of Security Operations made about revamping the server team. Previously, server issues could take months to resolve, and patching was practically non-existent due to the shortage of resources within the team. Additionally, there have been cases where the server resources promised were not delivered.

While I haven’t discussed cloud options with the VP of SecOps, I’m going to follow up with him to see if this will fit into his roadmap and verify that sufficient bandwidth is available to an external site. Vendor 1’s track record of managed cloud SIEM environments is also impressive and will likely be the quickest to deploy. I’m also going to confirm that there are no other stakeholders we missed, such as data scientists, to ensure the solution can handle any heavy analytical workloads if necessary.

All solutions appear to meet Company A’s requirements, so I’ve got some more work to do to see which will be the best fit. I like Vendor 1’s simple licensing model, architecture, low storage costs, and think the cloud solution may be the best bet for Company A. I like Vendor 2’s blazing fast searches, as there was nothing that frustrated me more as an analyst than slow search response times, making a simple investigation take hours. However, I’m confused with their licensing model, and I’ll have to follow up with them to understand it better. I like Vendor 3’s solution as well, similar to Vendor 2 but appliance-based with a simpler license model, but I’m not sure two servers in the Processing Layer will suffice.

When I look at the operational costs of all vendors, Vendor 1 produces the lowest storage costs. Vendor 3 produces the lowest support cost, but that’s due to the two servers in the Processing Layer compared to the four for the other vendors, which again I’m not convinced will be sufficient.

As you can see, selecting a SIEM product for your organization can be a complex task. There are many high-quality SIEM products on the market, but depending on your organization and how it operates, one product may be more suitable than another, and the costs can differ significantly. Minor functional differences, from how the solution processes the required data sources to the number of supported parsers, can make one solution drastically more feasible than the others.

A thorough requirements gathering exercise can reduce the risks associated with a SIEM implementation. It will help your organization select a product that maps best to your requirements. It will allow vendors to provide an accurate, high-performing solution. Ultimately, it will pave the way for an efficient and effective solution that lowers implementation and operational costs, reduces waste and rework, and provides end users with a tool that increases your organization’s security posture while providing business value.