Fill your SIEM with DNS activity

April 10, 2018
In this initial post we demonstrate how the dns-logger from NoSpaceships can be used to capture real-time DNS activity and feed it into an Elasticsearch-based SIEM.


More and more teams are turning to DNS activity to gain a better understanding of where end-clients are connecting. For example, it helps identify end-clients connecting to known malicious domains, and from there any other domains whose DNS lookups returned the same IP addresses.

There are many solutions on offer to capture DNS activity: IDS products, application firewalls, and even the DNS server vendors themselves offer some form of capability.

Many of these solutions fall short in one way or another though. For example:

  1. They collect DNS activity between DNS servers and not between the end-client and DNS server - the end-client who initiated the query is then unknown
  2. No DNS responses are captured
  3. DNS logging must be enabled - which often incurs a performance penalty, and the challenges associated with rotated/truncated log files must also be addressed
  4. Multiple vendors not supported by a single method
  5. Very resource intensive - disk I/O, memory and CPU requirements
  6. Require a third-party driver (e.g. WinPCAP or npcap) - packet capture collection methods typically require drivers like this to be installed

The most recent versions of the open source DNS servers, and Microsoft DNS, provide some form of DNS activity capture which addresses all of the points above. Realistically though, most environments will not be running the latest software and cannot make use of these features. Additionally, in a multi-vendor environment there would be multiple capture methods to support and administer.

The dns-logger from NoSpaceships aims to address all of these points and more. It offers a simple and lightweight method to capture DNS activity on the end-client-facing DNS servers. It does not require DNS logging to be enabled and will capture DNS responses.

This post demonstrates how easy it is to get the dns-logger collecting DNS activity and feeding it into an Elasticsearch based SIEM.

Example Environment

For our example we will have two servers.

The first is a 64-bit Windows 2012 R2 server. This server runs Microsoft DNS, which has simply been instructed to forward queries to another DNS server outside of the lab environment.

The second is a 64-bit CentOS 7 server. This server runs version 6.0.1 of Elasticsearch, Logstash and Kibana (i.e. an ELK stack). These components were installed with their default configurations, i.e. nothing has been modified away from the defaults.

Configure Logstash

Our first step is to configure the Logstash component on the ELK server to receive the JSON messages generated by the dns-logger.

Create and edit the /etc/logstash/conf.d/dns-logger.conf file:

sudo vi /etc/logstash/conf.d/dns-logger.conf

Place the following in this file and save it:

input {
    tcp {
        port => 5145
        codec => json
    }
}

filter {
    date {
        match => ["timestamp", "ISO8601"]
    }
}

Here we instruct Logstash to listen on TCP port 5145 (a different port can be specified) and to accept JSON formatted messages (separated by new-line characters). We also tell Logstash which field in each message holds the timestamp, and its format.

Now we just need to restart the logstash service:

systemctl restart logstash
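Before moving on, we can sanity-check the new input. The snippet below is a minimal sketch to be run on the ELK server itself; the sample event's field names are illustrative placeholders rather than the dns-logger's actual schema (only "timestamp" is referenced by our date filter).

```shell
# Check Logstash is listening on the new input (the JVM can take a
# minute or two to come up after a restart):
ss -tlnp | grep 5145

# Send one hand-crafted, newline-terminated JSON event to prove the
# input accepts messages end to end. Only the "timestamp" field is
# meaningful to our filter; the other fields are placeholders.
printf '%s\n' '{"timestamp":"2018-04-10T12:00:00.000Z","query":"www.example.com","type":"A"}' \
    | nc localhost 5145
```

If the event arrives, it will shortly appear in Elasticsearch alongside the real dns-logger data.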

Install & Configure dns-logger

Next, we install the dns-logger on the Windows DNS server. The latest version of the dns-logger can be obtained from the dns-logger Downloads page.

In our case we are installing dns-logger on a Windows server, so once the installer has been transferred to the server we run the following to install it (we transferred the installer to the c:\temp directory):

cd c:\temp
start /wait dns-logger-2.2.2-windows.exe /S

Once installed, we edit the c:\Program Files\dns-logger\dns-logger.config file in Notepad and append a target entry to the end of the file, pointing at the ELK server's address and TCP port 5145.


NOTE By default the dns-logger will generate JSON messages so there is no need for us to modify any other configuration parameters.

Before we restart the dns-logger service, we can run it at the command line and have it print messages to standard-output instead of sending to Logstash (from this we will know it is capturing DNS activity):

cd "c:\Program files\dns-logger"
dns-logger.exe --run-stdout config\dns-logger.config 2>NUL

NOTE Log messages are printed to standard-error, so here we have added 2>NUL so that only the JSON messages appear in the command window.

Then we issue a simple DNS query:

c:\temp> nslookup
> server
> set type=a
> www.example.com
We should then see the dns-logger print a few JSON messages to its standard-output.

Press CTRL+C to stop the dns-logger.

Now we restart the dns-logger service from PowerShell (remember to do this as an administrator):

Restart-Service dns-logger

The dns-logger is now installed, collecting DNS activity, and forwarding it to Elasticsearch via Logstash in real-time.
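To confirm documents are actually reaching Elasticsearch, we can also query it directly on the ELK server. This sketch assumes the default, unmodified install, where Logstash writes to daily logstash-YYYY.MM.DD indices on localhost:9200.

```shell
# List the indices -- a logstash-* index should appear once the first
# event has been received:
curl -s 'http://localhost:9200/_cat/indices?v'

# Fetch one recent document to inspect the fields the dns-logger sends:
curl -s 'http://localhost:9200/logstash-*/_search?size=1&pretty'
```

If no logstash-* index appears, check the Logstash logs for input or codec errors before looking at the dns-logger side.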

Check Kibana for Data

Before we check Kibana we issue another DNS query just to make sure some data was generated following our restart of the dns-logger service:

c:\temp> nslookup
> server
> set type=a
> www.example.com

We should then be able to see the data in Kibana.



This post demonstrated how easy it is to use the dns-logger to feed JSON-based messages into an Elasticsearch-based SIEM - simply install the dns-logger and configure it with a Logstash target.

For large DNS deployments or ELK instances, multiple Logstash collectors can be deployed, with failover between them built into the dns-logger. Automation tools can also be used to install and upgrade the dns-logger across large groups of servers simultaneously.

If you have any questions, queries or feedback regarding this post, please contact us.