Staff are increasingly mobile, and remote working has been a growing trend for many years. As a result, employee systems may be on the corporate network only rarely, or never. When a laptop is off the corporate network, its audit logs and other activity cannot always be collected in near real time, because there is no connection to the internal SIEM, which typically sits on the corporate network. Corporate SIEM systems contain sensitive information and need to be protected from tampering and unauthorized viewing, so they are rarely exposed to open networks. This leaves the endpoint exposed: the staff member may do something they should not, or the system may be attacked by an external party while on an open network such as a cafe, hotel or airport wireless network. If the system is compromised, no log or alert information reaches the corporate SIEM, and the security team won't know that one of their employees was just hacked.

In most environments the employee has to connect over a VPN, or come into an office and join the LAN, before logs can be sent to the corporate SIEM. By then the system may already be compromised, and it could spread malware across the corporate network, turning a single infected laptop into a much larger incident. Many attacks follow a seemingly innocuous event: the system was not patched, a user clicked a malicious link and malware was installed, or a remote attacker exploited a weak configuration setting or a new zero-day vulnerability. Some attacks will try to hide on a user's system until it reconnects to the corporate network, but there are still subtle traces of activity, such as software installs and process execution, that can be detected and reported on.

So how can Snare help with this problem?

Snare can collect logs from an employee's system in near real time over the internet, securely over TLS with mutual key authentication, to our Snare Collector/Reflector technology. The Collector/Reflector can be exposed to the internet while only accepting authorized connections from Snare agents; any system without the relevant authorization keys is not allowed to connect. Combined with strict TLS certificate validation, the destination can be trusted and the log data delivered securely to the central SIEM. The connection works much like a VPN does for the traditional remote laptop, but it is limited to the Snare agent and the Snare Collector/Reflector, and carries only log data. This gives every remote worker's system near real-time monitoring: audit logs are collected whenever the system is on the internet, whether in a cafe, a hotel or an airport, exactly the places where it is most exposed to remote exploitation. That supports early detection of incidents and data breaches on the endpoint before they can spread to other users and the corporate network. The technology can be deployed on the corporate network or in the cloud, and logs can be reflected to other parts of the network and to multiple SIEM systems as needed, giving the security team and any SOC the early warnings and reporting they rely on. Time to detection is always critical to containing a breach and minimizing business impact, which is why collecting the data in near real time matters.
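To make the trust model concrete, the sketch below shows a mutually authenticated TLS connection using Python's standard ssl module. It is purely illustrative: the hostname, port and certificate file names are assumptions for the example, and this is not how the Snare agent itself is configured.

    # Illustrative sketch of mutually authenticated TLS for log delivery.
    # Host, port and file names are placeholders, not Snare settings.
    import socket
    import ssl

    COLLECTOR_HOST = "collector.example.com"   # assumed collector hostname
    COLLECTOR_PORT = 6163                      # assumed TLS listener port

    # Trust only the organisation's CA and insist on hostname validation,
    # so the agent cannot be tricked into sending logs to an impostor.
    context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH,
                                         cafile="corporate-ca.pem")

    # Present the agent's own certificate so the collector can reject any
    # connection that does not hold an authorized key pair.
    context.load_cert_chain(certfile="agent-cert.pem", keyfile="agent-key.pem")

    with socket.create_connection((COLLECTOR_HOST, COLLECTOR_PORT)) as sock:
        with context.wrap_socket(sock, server_hostname=COLLECTOR_HOST) as tls:
            tls.sendall(b"<sample log record>\n")

Both sides verify the other before any log data flows, which is what allows the collector to sit on the open internet while still accepting only authorized agents.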

As many will know, the US Cybersecurity and Infrastructure Security Agency (CISA) recently issued an emergency directive on DNS infrastructure tampering.

While much of the directive relates to validating organisational DNS, password and MFA settings, one key aspect of the directive discusses the monitoring and management of authorised and unauthorised changes to the DNS environment. In order to meet this requirement, adequate logging should be in place to monitor changes to the DNS settings, and log data should include date/time information as well as information on who is making the changes. Snare can help meet this requirement in several ways.

The Snare enterprise agents can track all access and modification to the DNS settings on Windows and Unix systems.

The key aspects of the logs that can be collected are:

  • All user authentication activity. If a user logs into the system from the local console, via Active Directory, or via ssh on Unix, Snare can collect the relevant operating system audit events or kernel events showing that the specific user logged in. This data includes the source IP, the authentication type, the success or failure of the attempt, and the date and time of the activity.
    • Microsoft has technical articles on how to configure your audit policy to generate these events, both on legacy Windows Server 2003 and on the newer 2008 R2, 2012 R2, 2016 and 2019 systems that support advanced audit policies.
    • All the events are quite detailed, and include:
      • Who made the changes,
      • What the changes were,
      • What zones were affected, and
      • When these changes occurred.
  • The Microsoft custom event logs on Windows 2008 R2, 2012 R2, 2016 and 2019 also include DNS Server and DNS Client event log categories. The Snare agent will collect these using the default objectives. The events collected show additional changes to DNS records that can occur through either manual or dynamic updates associated with Active Directory DNS and zone files. A summary of the event types is:
    • 512, 513, 514, 515, 516 – ZONE_OP – these can be part of major updates and changes to the zone files.
    • 519, 520 – DYNAMIC_UPDATE
    • 536 – CACHE_OP
    • 537, 540, 541 – CONFIGURATION – these events cover configuration changes and are the main area of concern for system changes.
    • 556 – SERVER_OP
    • 561 – ZONE_OP
  • The Snare agent for Windows will collect DNS Server logs as part of the default configuration.
  • As part of the installation process, the Windows agent can be told to manage the configuration of the Windows audit subsystem, to ensure that it generates the relevant administrative events. Alternatively, the Snare for Windows agent can be configured to be subservient to manually configured local policy or group policy settings. Note that unless the audit subsystem is appropriately configured, events may not be delivered to the Snare for Windows agent for processing.
  • On Unix systems the DNS files are usually flat text files. The Snare Linux agent can monitor these files in two ways:
    • File watches: the agent can be configured to watch for any and all changes to specific files related to DNS configuration settings, and will raise kernel audit events on access or modification, including who accessed or changed the file and the date/time of the event. On Linux systems, configuration files related to bind, dnsmasq or other DNS server tools may be monitored.
    • The default administrative objectives for the Linux agent track all user logins, administrative activity and privileged commands. File watches are also configured for changes to the /etc directory, which hosts system-level configuration files for the operating system.
  • File Integrity Monitoring – the Snare Linux agent can also perform sha512 checksum operations on system configuration files, such as DNS configuration files, in order to watch for changes. This will track new files, changes to files, and deletion of files and directories being monitored. These events don't show who made the change, but they do track the actual content and permission changes to files. FIM can be run on a configurable schedule (e.g. once per hour or once per day) depending on the level of granularity wanted; a minimal sketch of this style of checksum monitoring appears after this list.
  • Once the logs have been generated, it is up to the SIEM and reporting systems to provide reports or alerts on the changes. Snare offers two complementary methods for this:
    • Snare Central – this can provide objective reports looking for the specific event IDs and present the activity in tabular form as well as graphs and pie charts. These can be emailed out on any schedule, with PDF, CSV and text output as needed.
    • Snare Advanced Analytics – this provides a view of the changes occurring in the system and updates the dashboard in near real time as the logs are collected.
  • As part of normal operations, all changes should be validated as approved activity per your normal operating procedures, and anything that is not approved should be escalated as an incident for investigation.
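As flagged in the File Integrity Monitoring bullet above, the following is a minimal sketch of checksum-based monitoring of DNS configuration files. The watched paths, baseline location and output are illustrative assumptions only; they do not reflect how the Snare Linux agent is configured or stores its state.

    # Minimal FIM sketch: sha512 checksums of DNS configuration files.
    # Paths and baseline file are placeholders, not Snare agent settings.
    import hashlib
    import json
    import os

    WATCHED = ["/etc/named.conf", "/etc/resolv.conf"]   # assumed DNS files
    BASELINE = "/var/tmp/dns-fim-baseline.json"          # assumed state file

    def sha512_of(path):
        digest = hashlib.sha512()
        with open(path, "rb") as handle:
            for chunk in iter(lambda: handle.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def scan():
        current = {p: sha512_of(p) for p in WATCHED if os.path.exists(p)}
        previous = {}
        if os.path.exists(BASELINE):
            with open(BASELINE) as handle:
                previous = json.load(handle)
        for path, value in current.items():
            if path not in previous:
                print("NEW FILE:", path)
            elif previous[path] != value:
                print("CHANGED: ", path)
        for path in previous:
            if path not in current:
                print("DELETED: ", path)
        with open(BASELINE, "w") as handle:
            json.dump(current, handle)

    if __name__ == "__main__":
        scan()   # schedule via cron, e.g. hourly or daily, for the desired granularity

As with the agent's FIM events, a checksum approach shows that a file changed but not who changed it; the file-watch audit events described above are what supply the "who".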

If your organisation needs help in this area and you would like more information, please contact our friendly sales team at snaresales@prophecyinternational.com for a chat on how we can help your business achieve a more effective and efficient CISA DNS monitoring solution.

Steve Challans

Chief Information Security Officer

https://www.snaresolutions.com


How good data management applies to log collection.

I love data. I was a math geek growing up and turned my affinity for statistics into a career. Intuitively, most of us know that data drives informed decision making, leading to better business outcomes. That is only true, however, if you do a good job collecting, managing, and interpreting that data, and often that doesn't seem to be the case. Data management best practices apply to log collection, because that is essentially what log collection is. The caveat is that failing to collect certain logs can have dire consequences. There are plenty of tools, though, that can strengthen your security posture by drastically improving the way you collect and manage your logs.

When it comes to log collection, two approaches seem to dominate the marketplace. The first is, “Whatever we have to do to steer clear of negative consequences – particularly auditors.” These people take the Minimalist approach, collecting as little as possible to make managing the whole system as easy as possible. And who can blame them? Unless you are passionate about data management, you probably have dozens of other priorities you’d rather spend your valuable time on. There is a lot of inherent risk in this approach, though, because you will probably not have the information you need when the time comes. If you ride a motorcycle, or know somebody who does, you may have heard the phrase, “There are two types of riders: those who have been in an accident, and those who will be.” The wisdom is that an accident is inevitable and it’s best to be prepared; motorcycles are dangerous, after all. The axiom applies equally to cybersecurity, because there are really only two types of businesses: those that have been breached, and those that will be. It’s one thing to pass an audit; it’s another beast entirely to cope with a breach.

The other common approach I often see is “D. All of the Above.” If you have unlimited resources, it’s an awfully attractive option. After all, why risk it? When push comes to shove, if you have all the data at your disposal, you know the answers are in your archives somewhere. While I don’t totally take issue with this “Maximalist” approach, I think there are a lot of ways to improve on it. For starters, even a small environment generates at least 5 GB of logs a day, which works out to roughly 125 logs per second, or about 10.8 million logs per day. That is a lot of data, and larger environments produce thousands of times more. All of it means more overhead, from hardware costs to SIEM costs. When your solution charges you by the amount of data ingested, ingesting everything not only costs your business unnecessarily; for many organizations it makes their SIEM solutions prohibitively expensive. We see it every day: a medium-sized enterprise goes with a market leader in security analytics, only to see its bill go from a couple of hundred thousand a year to several million. You only have to mingle at the RSA Conference to hear frustration after frustration, and it doesn’t have to be this way.
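For anyone who wants to check the arithmetic, the figures above hang together roughly as follows (the implied average event size is an assumption, not a measured value):

    # Back-of-the-envelope check of the log volume figures quoted above.
    logs_per_second = 125
    logs_per_day = logs_per_second * 60 * 60 * 24     # 10,800,000 logs per day
    bytes_per_day = 5 * 1024 ** 3                     # 5 GB of log data per day
    avg_bytes_per_log = bytes_per_day / logs_per_day  # roughly 500 bytes per event
    print(logs_per_day, round(avg_bytes_per_log))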

There are so many ways to focus log collection, but I have three favorites:

  • Log Truncation
  • Forensic Storage
  • Tiered Analytics

Log truncation is simple enough; Snare has a whole paper on it and how it works. What has always surprised me is the reluctance to do it. Log truncation is the removal of the superfluous text in Windows event logs. Every Windows event carries a string of verbose descriptive text that has no forensic value. There is no need to collect it, and it will only bog down your network and your storage. We’ve seen environments where upwards of 70% of the log data was verbose text which, if cut from the logs, would have saved the organization considerable cost. Paying to move and store data that your analytics tool will eventually ignore is, quite simply, a waste of money.
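To illustrate the idea (this is not Snare's actual truncation logic), the sketch below cuts a Windows event message at the point where the explanatory boilerplate typically begins. The marker phrases are assumptions chosen for the example.

    # Illustrative log truncation: drop the verbose explanatory text that
    # Windows appends to many event messages. Marker phrases are assumptions.
    BOILERPLATE_MARKERS = [
        "This event is generated when",
        "Certificate information is only provided",
    ]

    def truncate_event_message(message: str) -> str:
        cut = len(message)
        for marker in BOILERPLATE_MARKERS:
            index = message.find(marker)
            if index != -1:
                cut = min(cut, index)
        return message[:cut].rstrip()

    sample = (
        "An account was successfully logged on.\n"
        "Subject:\n\tAccount Name: ALICE\n"
        "Logon Type: 10\n\n"
        "This event is generated when a logon session is created..."
    )
    print(truncate_event_message(sample))   # the trailing explanation is gone

The forensic fields (who, what, when) survive; only the canned explanation is dropped.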

Forensic storage is a trending topic in our industry right now. Data volumes are growing at an incredible rate, and we need much of that data to piece together what happened in the unfortunate event of a breach. The problem is that trying to detect a breach by sifting through all of that data, all the time, increases mean-time-to-detection (MTTD), which in turn increases mean-time-to-response (MTTR). That’s where forensic storage comes in. Forward all critical event logs to your security analytics platform, while keeping everything else with even a shred of forensic value on separate servers. The data is there if you need it, but it isn’t driving up hardware and software costs, and more importantly, it isn’t bogging down your analytics platform or your incident response team. Study after study on data management has data scientists reporting that they spend over 70% of their time on data preparation.1 That is wild. Highly skilled and highly paid employees, like data scientists, should not be spending up to four-fifths of their time on menial tasks, but that is what happens when you inundate your systems with data. Forensic storage is an easily implemented solution that not only improves security KPIs but saves you money as well.
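One way to picture the split is a simple router that forwards a small set of security-critical events to the analytics platform and archives the rest. The event ID list and the two stub destinations below are placeholders, not a recommended policy:

    # Sketch of forensic storage routing: critical events go to the SIEM,
    # everything else goes to low-cost archive servers. Placeholder values.
    CRITICAL_EVENT_IDS = {1102, 4624, 4625, 4672, 4688, 4720}   # assumed set

    def send_to_siem(event):
        print("SIEM    <-", event["event_id"])   # stand-in for a real forwarder

    def archive_forensically(event):
        print("ARCHIVE <-", event["event_id"])   # stand-in for bulk storage

    def route(event):
        if event.get("event_id") in CRITICAL_EVENT_IDS:
            send_to_siem(event)
        else:
            archive_forensically(event)

    for sample in ({"event_id": 4625}, {"event_id": 5058}):
        route(sample)

The archive is still searchable when incident responders need it, but it no longer inflates analytics licensing or slows down the queries your responders run every day.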

That brings us to Tiered Analytics. Tiered analytics is, data dork that I am, my favorite approach to data management, but it is also the most complex. While in theory companies of every size can take advantage of it, it becomes increasingly important as your organization grows. A lot of companies already do it to a degree: when branch offices and/or individual departments have their own KPIs and dashboards while the same data feeds executive-level dashboards at corporate, that is Tiered Analytics. This approach gives your business insight into the data it generates at multiple levels, with varying degrees of detail and from varied perspectives.

SANS has a white paper by Dave Shackleford that gives great examples for each tier. The dashboards and KPIs built for C-level executives, for example, would need to answer the following questions:

  • What is our overall risk posture?
  • What are our high-value targets?
  • What are the risks if our high-value targets are compromised?
  • What are the most cost-effective ways of reducing risks?

While IT management would need analytics to answer questions like:

  • What is really happening on the network?
  • Are any systems operating outside of policy?
  • Based on current system workloads can anything be virtualized?
  • What impact would an outage or downtime of a system have on the business?
  • Can we decommission any assets?

Then, another tier down, you have various monitoring and response teams looking to answer an almost unlimited number of questions, requiring a number of purpose-built dashboards and potentially custom KPIs:

  • Are website links in various emails from a known list of bad websites?
  • Are there network assets that should be monitored more frequently?
  • Are there changes to any host configuration settings or files closely tied to a website visit?
  • What is the impact on other network assets if this one is compromised?
  • Have there been any unusual examples of port and protocol usage?
  • Should we monitor some assets more frequently because of the amount of use they get?
  • Is an employee who is supposed to be on vacation logged in at the office?
  • Is there suspicious activity after a USB port was used?

These questions are obviously geared to reducing MTTD and improving incident response. Some are also aimed at understanding the impact of individual assets on the business.2 Several of the questions require pulling together disparate data sources, not just logs. Workplace management software, for example, can help you identify when an employee who is on vacation in Barbados is also, somehow, at their machine in the office. Threat intelligence shared via STIX can help you correlate activity on the network with malicious websites and IPs. Inundating your analytics tools with superfluous log data only makes it that much more tedious to bring in data from the rest of your business, which is already becoming an imperative. With a tiered analytics solution you can pick and choose which data sources to bring in where, giving your teams easily digestible data sets to analyze and report on, increasing the efficiency of each business unit and drastically improving your security posture.

You only have to look at the most recent Verizon Data Breach Investigations Report (2018) to see how little progress we’ve made in uncovering breaches: 68% of breaches take months or more to discover.3 There is so much an organization can do to improve its posture that it can be daunting to even begin. The first step is better data management: make life easier for the people living and working in all that data. The second is to prioritize and work towards more sophisticated approaches, action item by action item. Truncation is an easy first step, and forensic storage is a no-brainer. After that comes security analytics, whose architecture will vary from company to company, but whose implementation will be critical to improving both an organization’s security posture and the cost-effectiveness of its security solutions. By improving the way we tackle today’s security challenges, we’ll be better equipped to meet tomorrow’s.


1 Press, G. (2016, March 23). Cleaning Big Data: Most Time-Consuming, Least Enjoyable Data Science Task, Survey Says. Forbes. https://www.forbes.com/sites/gilpress/2016/03/23/data-preparation-most-time-consuming-least-enjoyable-data-science-task-survey-says/#44e4090d6f63

2 Shackleford, D. (2016, January). Using Analytics to Prevent Future Attacks and Breaches. Retrieved December 18, 2018.

3 Verizon. (2018, March). 2018 Data Breach Investigations Report. Retrieved November 27, 2018, from https://www.verizonenterprise.com/resources/reports/rp_DBIR_2018_Report_execsummary_en_xg.pdf