Adding Windows DHCP logs to Elastic – part 1

Prerequisites

To add Windows DHCP server logs to Elastic, we assume that you already have the necessary infrastructure in place:

  • Windows DHCP Server 2012 R2 or higher
  • Elasticsearch cluster
  • Logstash

We are going to work with Elastic 6.x in this setup.  

Filebeat

Install Filebeat on your DHCP server in a directory of your liking.

The DHCP logs are located in %systemroot%\system32\dhcp\DhcpSrvLog-*.log. You will also find IPv6 logs there, but we will focus on the IPv4 logs.
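
Each line in these logs is a comma separated record, and the header inside the log file lists the columns (ID, Date, Time, Description, IP Address, Host Name, MAC Address and so on) that we will parse in Logstash later. The event below is a made-up example purely to illustrate the format:

    10,01/15/19,08:15:30,Assign,10.0.0.50,client01.example.local,AABBCCDDEEFF,,1234567890,0,,,,,,,,,0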

Add the following code to your filebeat.yml. This is the prospector that is going to watch for your DHCP logs. Notice that I am adding a field called type under root with the value dhcp. This is a field we will be using in the Logstash configuration.

    -
      # Paths that should be crawled and fetched. Glob based paths.
      # To fetch all ".log" files from a specific level of subdirectories
      # /var/log/*/*.log can be used.
      # For each file found under this path, a harvester is started.
      # Make sure no file is defined twice as this can lead to unexpected behaviour.
      paths:
        #- /var/log/*.log
        - c:\windows\system32\dhcp\DhcpSrvLog-*.log
      input_type: log
      include_lines: ["^[0-9]"]
      document_type: dhcp
      close_removed: false
      clean_removed: false
      ignore_older: 47h
      clean_inactive: 48h
      fields:
        type: dhcp
      fields_under_root: true

Also add an output section. There are multiple ways of shipping data from Filebeat; in this example we are shipping the logs to Logstash for parsing. The Logstash hosts have the DNS names logstash01 and logstash02.

output:

  ### Logstash as output
  logstash:
    # The Logstash hosts
    hosts: ["logstash01:5044" , "logstash:5044" ]

    # Number of workers per Logstash host.
    worker: 2

    # Set gzip compression level.
    #compression_level: 3

    # Optional load balance the events between the Logstash hosts
    loadbalance: true

    # Optional index name. The default index name depends on each beat.
    # For Packetbeat, the default is set to packetbeat, for Topbeat
    # to topbeat and for Filebeat to filebeat.
    #index: filebeat

    # Optional TLS. By default it is off.
    #tls:
      # List of root certificates for HTTPS server verifications
      #certificate_authorities: ["/etc/pki/root/ca.pem"]

      # Certificate for TLS client authentication
      #certificate: "/etc/pki/client/cert.pem"

      # Client Certificate Key
      #certificate_key: "/etc/pki/client/cert.key"

      # Controls whether the client verifies server certificates and host name.
      # If insecure is set to true, all server host names and certificates will be
      # accepted. In this mode TLS based connections are susceptible to
      # man-in-the-middle attacks. Use only for testing.
      #insecure: true

      # Configure cipher suites to be used for TLS connections
      #cipher_suites: []

      # Configure curve types for ECDHE based cipher suites
      #curve_types: []

After these steps, Filebeat should be able to watch the DHCP logs and ship them to Logstash.
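
If you want Filebeat to run as a Windows service, the zip package ships with a small PowerShell helper script. A minimal sketch, run from an elevated PowerShell prompt and assuming Filebeat was unpacked to C:\Program Files\Filebeat (adjust the path to your install directory; you may need to allow script execution with -ExecutionPolicy Unrestricted):

PS> cd 'C:\Program Files\Filebeat'
PS> .\install-service-filebeat.ps1
PS> Start-Service filebeat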

Logstash

In order for Logstash to process the data coming from your DHCP server, we create an input section and specify it as a beats input.

input {
  beats {
   port => 5044
  }
}

Next we define the filter section, where we will parse the logs. Notice that we are using the dissect filter here. It is very convenient for delimited logs like these, and you don't have to use grok for simple stuff like this.

filter {
  if [type] == "dhcp" {
    dissect {
      mapping => {
        "message" => "%{ID},%{Date},%{Time},%{Description},%{IP_Address},%{Host_Name},%{MAC_Address},%{User_Name},%{TransactionID},%{QResult},%{Probationtime},%{CorrelationID},%{Dhcid},%{VendorClass_hex},%{VendorClass_ascii},%{UserClass_hex},%{UserClass_ascii},%{RelayAgentInformation},%{DnsRegError}"
      }
    }
    mutate {
      add_field => { "log_timestamp" => "%{Date}-%{Time}" }
    }
    date {
      match => [ "log_timestamp", "MM/dd/YY-HH:mm:ss" ]
      timezone => "Europe/Copenhagen"
    }
    if "_dateparsefailure" not in [tags] {
      mutate {
        remove_field => ['Date', 'Time', 'log_timestamp', 'message']
      }
    }
  }
}

And finally we define the output section, where we ship data from Logstash to Elasticsearch. We are using a daily index in this example, but you could use a weekly or even monthly index instead, as there will not be a huge amount of data in this index.

output {
  if [type] == "dhcp" {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      index => "dhcp-%{+YYYY.MM.dd}"
    }
  }
}
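
Before starting Logstash for real, it is worth sanity checking the configuration. A quick sketch, assuming the input, filter and output sections above are saved in a single pipeline file called dhcp.conf (the path is just an example):

bin/logstash -f /etc/logstash/conf.d/dhcp.conf --config.test_and_exit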

Elasticsearch

In order for Elasticsearch to correctly handle our DHCP data, we need to provide an index template. Notice that we are simply mapping all strings as keywords here.

{
  "dhcp": {
    "order": 10,
    "index_patterns": [
      "dhcp-*"
    ],
    "settings": {},
    "mappings": {
      "dhcp": {
        "dynamic_templates": [
          {
            "strings_as_keyword": {
              "mapping": {
                "ignore_above": 1024,
                "type": "keyword"
              },
              "match_mapping_type": "string"
            }
          }
        ],
        "properties": {}
      }
    },
    "aliases": {}
  }
}
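
The template can be loaded through the Kibana Dev Tools console or with curl. Note that the JSON above is shown in the form Elasticsearch returns it (wrapped in the template name), so the body of the PUT request is the inner object. A minimal sketch, assuming Elasticsearch is listening on localhost:9200:

curl -XPUT "http://localhost:9200/_template/dhcp" -H 'Content-Type: application/json' -d'
{
  "order": 10,
  "index_patterns": ["dhcp-*"],
  "settings": {},
  "mappings": {
    "dhcp": {
      "dynamic_templates": [
        {
          "strings_as_keyword": {
            "mapping": { "ignore_above": 1024, "type": "keyword" },
            "match_mapping_type": "string"
          }
        }
      ],
      "properties": {}
    }
  },
  "aliases": {}
}'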

That’s pretty much it to get data flowing. We will leave it to you to define a dashboard that displays the data in a meaningful manner.

Stay tuned for the next part of this series, where we will be expanding the Logstash filter by enriching the data with MAC vendor lookups.

Security Distractions

With each new “inspirational” Instagram, Twitter and Facebook New Year post cropping up on our feeds, we couldn’t help but jump on the bandwagon of “new beginnings” and finally launch our blog….

We have talked and talked and talked and talked about how we might one day come around to the idea that we might eventually be distracted enough to consider creating our own blog. Cue the long back-and-forth messages over Signal on what to call the goddamn thing. Purchasing a domain name these days is so easy, but coming up with a catchy name… Not so much…

We love security and we love getting distracted by it, so eventually it was only logical that the blog should be named something along these lines……

Both Kim and I find ourselves getting overly excited about the technical side of security every day, to the point that the next new thing we have created ends up being all we can talk about for an hour or so, before we move on to the next cool thing and forget about the last….

We figured that since we think we are doing some pretty exciting things, both professionally in the security world and in our own home *cough* datacenter *cough* labs, we would try our hand at writing about it.

A lot of our work is based around how we can get the most out of open source security platforms and tools. Most of our focus is on Elasticsearch and the full ELK stack, squeezing every last bit we can out of the platform to write and develop our own custom detection and enrichment methods. We will also talk a lot about MISP, The Hive Project, Squid, Elastalert, Kafka, Sysmon, Threat Intelligence and many, many other topics that are sure to set off your 2019 bullshit bingo card…

There is a lot that can be achieved when you have a nice and simple logging setup. ELK is free from a license perspective, and what you can do with it is pretty much limited only by your imagination.

Enough about the blog and a little on us…

Collectively we work in the Danish finance industry, where we unfortunately share the same corner of the office, much to the dismay of our colleagues, who have often commented on our old-married-couple-like tendencies…

There is a little about us on the ironically titled “About Us” page…

We promise to try to keep the tone positive and deeply technical, but there also has to be room for a little bitching too, right?

Watch this space, there is a lot to come!

Disclaimer: The opinions expressed on this blog and all posts are our own and do not reflect those of our employer; this blog is purely for personal use.