
Output driven filtering. Do you know what that is? Does your organization leverage it in its log collection and management? Do you think you need all of your logs? With all the innovation at the analytics level, interest in well-executed data collection has suffered. Log collection tools lack sophistication and have, if anything, regressed in recent years, and that's among the solutions that work at all.

While Snare's broad set of functionality that helps you "reduce the noise" can save you money, I want to zero in on filtering: what an output driven strategy looks like, and how the Snare Reflector makes implementation palatable for even the most log-hungry incident responders.

Filtering is simple: collect some logs and ignore others. Having an output driven strategy means zeroing in on the log data you already know how to use and present. In other words, forward the logs that serve a purpose in your SIEM on to your SIEM, and store the rest in an easily accessed repository that empowers incident response while contributing to compliance. Spamming your network and your SIEM with log data you have no current use for is a waste of time and money. This is key. Time, because you'll eventually have to scale your solution, and because hefty amounts of unneeded data lengthen your mean time to detection. Money, because the increased data load can mandate new hardware, and many SIEMs charge by the amount of data they take in.

So how exactly does filtering work? You cut out all the data you don't have an immediate use for. This can make incident responders nervous, because you can never be 100% sure what data will be relevant in the future. What if seemingly useless logs have hidden forensic value? There are tons of log types; how can you be sure which ones you want and which you don't? Sweat not, my friend: this is a rare occasion where you can have your cake and eat it too. The Snare Reflector can not only filter the collected logs, it can also forward the immediately pertinent ones to your SIEM while archiving the rest for any number of purposes, including compliance and incident response (and forensics, of course).

Filtering in Snare is incredibly easy. What some call a “white list” and a “black list” are simply the “Include” and “Exclude” fields in Snare, simplifying both setup and future modifications.
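
To make the idea concrete, here is a minimal sketch of include/exclude filtering in Python. The patterns and field formats are illustrative only, not Snare's actual configuration syntax:

```python
import re

# Illustrative include/exclude rules; not Snare's actual syntax.
INCLUDE = [r"EventID=462[45]", r"sshd.*Failed password"]  # keep: logons, failed SSH
EXCLUDE = [r"EventID=5156"]                               # drop: noisy filtering-platform events

def keep(log_line):
    """Exclude wins over include; anything not explicitly included is dropped."""
    if any(re.search(p, log_line) for p in EXCLUDE):
        return False
    return any(re.search(p, log_line) for p in INCLUDE)

for line in ["EventID=4624 user=alice logon", "EventID=5156 allowed connection"]:
    print(keep(line), line)
```
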
Log sources you should strongly consider collecting from, or absolutely must include:

  • Application Servers
  • Databases
  • IDS
  • Firewalls
  • Antivirus
  • Routers
  • Switches
  • Domain Controllers
  • Wireless Access Points
  • Intranet Applications
  • Data Loss Prevention
  • VPN Concentrators
  • Web filters
  • Honeypots

Some examples of logs you don’t need to include:

  • Access tokens
  • Commercially sensitive, non-pertinent information
  • Application source code
  • Sensitive personal data (although pseudonymization is a workaround; see the sketch below)
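
On that last point, here is one common pseudonymization approach, sketched in Python: replace the direct identifier with a keyed hash so events remain correlatable without exposing the underlying value. The field names and key handling are assumptions for illustration:

```python
import hmac
import hashlib

SECRET_KEY = b"rotate-and-store-this-securely"  # hypothetical key management

def pseudonymize(value):
    """Keyed hash: stable per input (so events still correlate) but not
    reversible without the key."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

event = {"user": "alice@example.com", "action": "file_read", "path": "/hr/salaries.xlsx"}
event["user"] = pseudonymize(event["user"])
print(event)
```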

That about sums it up. If you are already a Snare user, you can get started on this today. If not, download the free trial and try it out yourself. If you're looking for more ways to reduce the noise, check out our post on it.

Over the years I have helped many organizations implement logging solutions. For better or for worse, a security incident of some sort tends to be the event that drives change in an organization; often it is an external attack, or perhaps an internal HR-related matter that would benefit from the sort of historical evidence that logs can supply. Many organizations have limited or no logging infrastructure in place prior to the incident, and it is only when the requirement for historical evidence or real-time information arises that senior management realizes there is a big piece missing from their security jigsaw puzzle. For some, the task can seem daunting. "We have so many systems. Where do we start?" is a common question. The task can appear insurmountable when you are faced with tens, hundreds, or thousands of systems, but roll-out plans can always be broken down into bite-size chunks.

One of the best places to start is with your critical systems. Most sysadmins will intuitively know the key systems, services and network architecture in an organization. Even when the administrator has no need to understand the data that passes through such systems, they will invariably have a dynamically updating mental map of the parts of the IT infrastructure that will result in frustrated users and lost business if they go down. Some basic examples are:

  • Servers. Physical or virtual systems that host mission critical applications or services on Windows, Linux or another server-level operating system.
  • Desktops or endpoints, where the user interacts with the backend applications. Typically these are Windows-based systems, but there is a growing community of Mac OS X and Linux-based users in businesses.
  • Databases. Databases often contain the bulk of an organization's sensitive data, and are usually the first priority for tracking "who, what and when" from a data-access perspective. The integrity of the data also needs to be assured for management and regulatory authorities.
  • Applications. Applications are the interface layer between users and data, and often generate useful logs that assist in analysis and forensic investigation. These can be web server logs, proxy logs, AV logs, or even logs generated by custom applications. For organizations that favor a 'bring your own device' strategy, or use ultra-thin clients such as laptops, the application may be the closest logging source to the client.
  • Network. Firewalls, routers, switches, Wireless APs, IDS and IPS systems – each of these can generate vast amounts of data that can be of use individually, or can be correlated with other system log data.

Each of these sources can generate mountains of data. Although vendors tend to make each individual log entry reasonably human-readable, when you are faced with trying to read an encyclopaedia every second of the day, every day of the year, having some level of centralized analysis engine is of great benefit. These systems are known as 'SIEM' servers ("Security Information and Event Management"). In our case, we have a product called "Snare Central", and its capabilities address goals such as:

  • Store the logs away from the system that generates them, in order to reduce the likelihood of tampering or deletion. In many instances, if a system has been hacked, the intruder will clean up the local system logs in an effort to hide the activity. If the logs are stored securely away from the system that generated them, more forensic data will be available for review.
  • Keep the logs secure, so that only staff with a need to know have access to them. Many logs can contain sensitive information or can reveal usage patterns on systems. Credit card numbers, for example, are often leaked into local system logs by overenthusiastic applications.
  • Perform regular reporting and analytics on the logs to analyse usage patterns, threats and compliance activity.
  • Help with incident management. One of the key aspects of a centralized logging solution is assisting incident management for a security breach by showing the "who, what, when and how" of the incident.
  • A SIEM can run on a virtualized system, dedicated hardware or in the cloud. Some of this will depend on the capacity required for log storage, other factors such as Events Per Second (EPS) rates, and the security posture of the organization in terms of where the logs can live given the nature and sensitivity of the data.
  • Most compliance standards require that logs be kept for a reasonable period of time. The mean time to detection for a security breach is measured in months, so having logs available for a reasonable period contributes to successful incident analysis. PCI DSS requires that logs be kept for at least 1 year, with 3 months readily available. Other regulations can require longer storage periods, and some companies and government agencies have their own data retention requirements. A SIEM therefore needs a data retention capability commensurate with the organization's log retention needs. In the end this usually comes down to planning an optimal disk allocation based on a combination of the number of logging systems and expected activity volumes (a back-of-envelope sizing sketch follows this list). It's rare to get this right on the first go, but as long as you have the ability to grow the system to use more storage space, you are generally covered.
  • Give the security teams actionable intelligence about what is going on in the organization's IT infrastructure.
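
Here is that back-of-envelope sizing, sketched in Python. Every number is an assumption chosen to illustrate the arithmetic, not a recommendation:

```python
# Back-of-envelope SIEM storage sizing; all figures are illustrative assumptions.
eps = 500                 # average events per second across all sources
bytes_per_event = 400     # average raw event size in bytes
retention_days = 365      # e.g. PCI DSS: at least 1 year of logs
compression_ratio = 0.15  # assume stored logs compress to ~15% of raw size

raw_per_day = eps * bytes_per_event * 86_400  # 86,400 seconds in a day
stored_total = raw_per_day * retention_days * compression_ratio

print(f"~{raw_per_day / 1e9:.1f} GB/day raw, "
      f"~{stored_total / 1e12:.2f} TB stored over {retention_days} days")
```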

Once these strategic goals are understood, a rollout plan will usually lead to changes to your infrastructure components in order to generate and direct log traffic to your SIEM solution. Network and infrastructure equipment such as firewalls, authentication gateways, or switches will generally implement the 'syslog' protocol. Turning on syslog logging and pointing the device at the SIEM collection service is usually a very quick and easy way to start collecting logs. In the case of the Snare Central Server, no configuration changes are required at the collection end: data will start rolling in, and it will be stored, categorised and made available for reporting.
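
For sources where you control the software rather than a network device, pointing a host at the collector can be equally simple. A minimal sketch using Python's standard library syslog handler, with a placeholder collector address:

```python
import logging
import logging.handlers

# "siem.example.com" is a placeholder for your collector; syslog over
# UDP port 514 is the traditional default.
handler = logging.handlers.SysLogHandler(address=("siem.example.com", 514))
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("user=alice action=login result=success")
```
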

For servers and desktops, collecting the logs will usually entail one of two methods:

  • Installing an agent, or
  • Activating agentless collection

The pros and cons of agent-based vs agentless solutions are covered in another whitepaper. For ease of use and scalability, my preferred and recommended method for most organizations is the agent-based solution. The process of installing Snare agents is usually quick and painless, and they provide a sane default configuration that will meet the needs of many small to medium environments out of the box. Once Snare agents are installed, they can be configured to send logs to your SIEM server, and you are up and going. For large environments, deploying an agent will usually involve one of the following:

  • Configure and deploy the agent using an MSI with a template configuration. Sometimes security or admin staff will want to review what logs are being collected and adjust the standard install to suit their objective filtering needs. Destination server information can be changed, or other options enabled to track USB devices, monitor file activity, watch for registry changes, exclude noisy events, and so on. These extra settings are often driven by compliance needs for security standards or regulations such as PCI DSS, ISO 27001, SOX, or HIPAA, where specific logs need to be collected and reviewed on a regular basis.
  • Using tools such as Microsoft GPO, SCCM, or IBM BigFix that can handle the remote authentication and installation of the software. Most companies have something in place to push out applications and updates, so any of these can be leveraged.

So we now have senior management support for logging, an infrastructure that is capable of sending us data, and a central server to collect, store, analyse and archive our log data. We still need to know: what sorts of logs should we collect on a modern network? The answer varies from customer to customer, often substantially, but there are some basic guiding rules for what to collect:

  • Login/logoff events – know when users are using the system and from which source. Should a user be logging in from Singapore when they are based in New York? Why are they logging in during the middle of the night? (A toy check for exactly this appears after this list.)
  • All administrative activity – to track all system changes performed by administrators. Was this approved or authorized activity, or should it be considered a security incident and subject to follow-on analysis? All administrators have the ability to override technical controls, either at the operating system level or in a database. If an administrator's credentials are compromised, the account can usually perform high-level modifications to network and system infrastructure, including stealing data or changing database contents, all of which can significantly affect the operation of the organization.
  • Account changes – password resets, group membership changes. Why was a user granted domain admin not long after their password was reset? Was this a breach?
  • Track commands that are run on systems. Are these whitelisted applications, or are staff using unauthorized applications? Maybe it's malware the AV does not know about exploiting the system, or someone using PowerShell commands on Windows. Was it linked to a web link someone clicked in a phishing email, resulting in a payload running on the system that bypassed the AV controls? Was it something like a Rubber Ducky, a USB device emulating a keyboard that ran its own PowerShell script to perform malicious activity? Tracking commands can highlight many potential problems.
  • File auditing and activity monitoring. Are users performing authorized changes to files, or accessing files they should not be? File auditing can highlight problems where access controls on sensitive data are not set correctly, and can help detect the abuse of access privileges.
  • Software installation or removal. Are tools or other software being used on the network without permission? This could be a backdoor bundled with an application, or a malicious staff member installing their own hacking tools to perform unauthorised activity.
  • Tracking removable media. Staff copying sensitive data to removable media such as USB drives or DVDs before removing it from the organization's network can lead to data loss. Removable media can also be a source of virus or worm infection, in much the same way WannaCry and its variants infected many companies.
  • Networking logs. The value of firewalls and tracking what goes in and out of a network is sometimes not well understood. Firewall logs can highlight applications and users attempting to access sites or services that could pose a risk to the organization. Many firewall reviews uncover unauthorized applications installed on systems when those applications attempt to "phone home" for updates or exfiltrate data from the organization. Seeing which ports are in use on switches can alert you to unauthorized devices being connected to the network; I have had customers discover cleaners accessing the network, identified through the switch port logs.
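
As an illustration of the login/logoff checks mentioned at the top of this list, here is a toy anomaly check. The expected countries, business hours, and event fields are all assumptions:

```python
from datetime import datetime

EXPECTED_COUNTRIES = {"US"}    # assumption: this user normally works from New York
BUSINESS_HOURS = range(7, 20)  # assumption: 07:00-19:59 local time

def suspicious(event):
    """Return the reasons a logon event looks anomalous (empty list = looks fine)."""
    reasons = []
    when = datetime.fromisoformat(event["time"])
    if when.hour not in BUSINESS_HOURS:
        reasons.append("logon outside business hours")
    if event["geo"] not in EXPECTED_COUNTRIES:
        reasons.append("logon from unexpected country " + event["geo"])
    return reasons

evt = {"user": "alice", "time": "2017-05-23T03:14:00", "geo": "SG"}
print(evt["user"], suspicious(evt))
```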

So overall, this sort of monitoring allows security teams to receive actionable intelligence on threats and incidents from the organization's IT infrastructure, which leads to improvements in security controls and operations through threat mitigation. The result should be a reduction in both the frequency and the impact of these threat activities on the business.

Steve Challans
CISO
Prophecy International

In case you missed it, and you would have to have been watching Sky News on May 23rd, 2017 not to, Snare was mentioned as one of Australia's software security industry leaders. The story explored the space in light of the recent cyber attacks that have taken the world by storm.

You can watch the full clip here. If you’re looking to learn more about the buzz surrounding Snare and why we are going “gangbusters” try our free demo or reach out to us and we will answer any questions you might have.

Also, for the latest, follow us on Twitter, Facebook and LinkedIn!

Security Information and Event Management, or SIEM, is worthless unless you are precise in your data collection. The old adage of garbage in, garbage out (GIGO) continues to hold true. Successful SIEM deployments are built on rock solid log management and real-time transport. What do we mean by "rock solid logging"? It means you are efficiently collecting the data you need and not getting bogged down in the superfluous data you don't. This requires you to properly archive data for compliance and incident response while feeding pertinent real-time data to an analytics platform, ideally one purpose-built for SIEM.

How can you make sure your logging is up to par? Simple. Make sure you, and your organization, do these five things.

Properly Configure Windows Audit Policy

We see this less often than we used to, but we still find people wondering why their network isn't generating logs. The simple answer is that Windows Audit Policy needs to be turned on, and oftentimes users need to go in and select which events they want generated. In other words, you have to switch on event log generation in Windows environments. You can learn how to do that on Microsoft's TechNet.

Once it's on, you need to tune the policy to your log collection needs. Our Snare Agents can do that automatically by checking a box in settings; you can also find Microsoft's recommendations on their website.
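
As an example, on a Windows host you can enable auditing for a given subcategory from an elevated prompt using the built-in auditpol tool. The sketch below simply wraps that command; the "Logon" subcategory is just one example:

```python
import subprocess

# Enable success and failure auditing for the "Logon" subcategory.
# Must run with administrator privileges on a Windows host.
subprocess.run(
    ["auditpol", "/set", "/subcategory:Logon", "/success:enable", "/failure:enable"],
    check=True,
)
```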

You may notice Microsoft doesn't recommend collecting everything. While we understand the temptation to set up a catch-all, you will bog yourself down in copious amounts of worthless data. Our Snare experts are happy to walk through your audit policy with you, as needs vary from company to company.

Secure and Reliable Log Transport

There are three main protocols for transporting logs from A to B: UDP, TCP and TLS, each with its own pros and cons. In an ideal world everybody would use TLS; and if you are sending sensitive data or transporting across an untrusted network, you had better be using TLS.

It used to be that all log data was considered trivial, nice to have but not essential, and UDP was the default protocol for transporting logs. But UDP can lose upwards of 10% of your logs, and the network bandwidth you think you're saving can be saved instead with output driven filtering. Rather than transporting your logs with a "fire and forget" mentality, implement logging software that efficiently and securely transports logs without the risk of critical logs being lost while traversing your network. Exceptions to these best practices are rare; odds are your organization should be using TLS or TCP. Take the time to make sure you are using the ideal protocol for your company's logging needs and networking environment.
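
For a feel of what syslog over TLS involves, here is a bare-bones sender that frames one message with RFC 6587 octet counting and ships it to port 6514, the standard syslog-over-TLS port. The collector hostname is a placeholder:

```python
import socket
import ssl

HOST, PORT = "siem.example.com", 6514  # placeholder collector; 6514 = syslog over TLS

context = ssl.create_default_context()  # verifies the server certificate by default
with socket.create_connection((HOST, PORT)) as raw:
    with context.wrap_socket(raw, server_hostname=HOST) as tls:
        msg = "<134>1 2017-05-23T10:00:00Z host01 app - - - user=alice action=login"
        # Octet-counting framing: message length, a space, then the message.
        tls.sendall(f"{len(msg)} {msg}".encode())
```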

Output Driven Filtering

Security professionals should only collect logs where there is a clear requirement defining how they will be used. Are you archiving for compliance? For potential forensics? Do you need to forward logs on to an analytics platform? Strong analytics is a pivotal piece in reducing MTTD/R, or Mean Time to Detection and Response, and in feeding automated response tools.

While listing the logs you want collected creates efficiency in your entire SIEM ecosystem, every company is unique, and you should be thorough about exactly which logs are required to achieve your desired result. You can also add a blacklist of everything you don't require. A sound strategy is to collect too many logs at first and eliminate unnecessary logs (noise) through an iterative practice of continuous evaluation and improvement.

Now you are not only saving system resources and reducing network traffic, but also saving money on the SIEM side, where many vendors charge by the volume of data ingested.
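
One simple way to drive that iterative pruning is to measure before you cut: count events per event ID over a sample window and take the top talkers to the teams that supposedly need them. A rough sketch, with the log file name and field format assumed:

```python
import re
from collections import Counter

counts = Counter()
with open("sample_day.log") as f:              # assumed: one event per line
    for line in f:
        m = re.search(r"EventID=(\d+)", line)  # assumed field format
        counts[m.group(1) if m else "unparsed"] += 1

# The top talkers are the first candidates for an exclude rule.
for event_id, n in counts.most_common(10):
    print(f"{event_id}: {n} events")
```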

Centralized Collection

Centralizing your logs saves time and money and increases the readability of your logs. It's also a crucial step in minimizing MTTD/R: the average time it takes to detect and respond to an event. This is a critical KPI, as it can vary from minutes to hours to days and, in extreme cases, months. Identifying events in minutes is not only critical, it is a key ROI metric used to justify the initial investment in a SIEM, in turn protecting your company from potential hazards and liabilities.

So the question is: how easy is it to centralize your logs? The answer, oftentimes, is "not very". The problem is, that shouldn't be the case. Software can do almost anything we can dream up, so when companies force vendor lock-in or require substantial coding to make the simplest of changes, you have to wonder: do they really have my best interests in mind? Invest in a logging solution (*cough* Snare *cough*) that can be your SIEM, or tie into the SIEM of your choice, so that no matter where you collect from, the logs end up where you need them. There is also the incredibly large bonus of being able to easily change SIEM vendors when you don't have to rip and replace software on every machine in your organization. Pretty cool, right?
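
Stripped of product specifics, centralized collection is just a listener that every source points at. Here is a toy UDP syslog receiver in Python; a production collector adds TLS, parsing, categorization and secure storage, of course:

```python
import socketserver

class SyslogHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # For UDP servers, self.request is a (data, socket) pair.
        data = self.request[0].strip().decode(errors="replace")
        # A real collector would parse, categorize, and store securely; we just append.
        with open("central.log", "a") as f:
            f.write(f"{self.client_address[0]} {data}\n")

if __name__ == "__main__":
    # Port 514 requires root/admin privileges; use e.g. 5514 for testing.
    with socketserver.UDPServer(("0.0.0.0", 514), SyslogHandler) as server:
        server.serve_forever()
```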

Sophisticated Architecture Capabilities

Sophisticated enterprise logging architecture enables your organization to maximize bandwidth by archiving logs for forensics and compliance before forwarding critical logs on to the central SIEM for analytics. This way only mission-critical logs eat up system resources, yet you preserve any logs necessary for potential work in the future.
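
That archive-then-forward pattern, which the Snare Reflector implements, looks roughly like this in plain Python. The is_critical rule is a stand-in for your own output driven filtering policy:

```python
class FakeSiem:
    """Stand-in for a real SIEM connection; anything with .send() would do."""
    def send(self, event):
        print("-> SIEM:", event)

def is_critical(event):
    # Stand-in for your real output driven filtering policy.
    return "EventID=4625" in event or "Failed password" in event

def reflect(event, archive, siem):
    archive.write(event + "\n")  # everything lands in the archive for compliance/forensics
    if is_critical(event):
        siem.send(event)         # only what analytics needs travels on to the SIEM

with open("archive.log", "a") as archive:
    siem = FakeSiem()
    for evt in ["EventID=4624 user=alice", "EventID=4625 user=mallory"]:
        reflect(evt, archive, siem)
```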

Flexible architecture also allows you to implement custom deployments across various business units and still consolidate everything efficiently. Give your head office visibility via analytics, but handle the more granular forensic and compliance work at branch offices. Or maybe different departments have different security and compliance standards: simply deploy accordingly. There is a near-infinite number of ways to deploy your SIEM efficiently across your organization.

There you have it: the five logging musts in an enterprise-level logging solution. For a deeper dive, check out our resources page; there are a number of brochures you'll want to see. Of course, sometimes talking everything through with a human is better than scrolling through digital content, so please feel free to reach out and get in touch with us as well!