TF-CSIRT – What’s it all about?

I recently took a break from blogging to focus on other things. Before jumping back into my Incident Response 101 blog series, I want to write a little bit about TF-CSIRT and the reasons for joining a community like this. It is a process I am slowly becoming familiar with, and it definitely deserves more words written about it…

First off…. What is TF-CSIRT?

The Task Force on Computer Security Incident Response Teams, or TF-CSIRT for short, was established by the European CERT community back in 2000. The idea was to create a community of incident response teams which could work together towards a common goal: sharing information and knowledge, assisting each other during incidents, and leveraging the strength of such a community in any other way that helps the incident response world.

To provide access to the community, a service was created called “The Trusted Introducer Service”. This service provides a directory of the incident response teams which are members of TF-CSIRT. The Trusted Introducer Service acts as a clearing house, ensuring that members meet the correct requirements when joining, and then offering further processes for becoming accredited or certified TF-CSIRT members.

So what are the benefits?

The backbone of the TF-CSIRT community is the member database, where emergency contact details for each incident response team are displayed. This information can prove vital in an incident response situation. To maintain this vital community spirit, TF-CSIRT hosts regular conferences and meetups for its members; these are great for getting to know other teams and sharing knowledge.

Another huge benefit of TF-CSIRT lies within the certification process. This process imposes strict requirements based on the SIM3 audit model, and essentially means that when you hit the magic certification level, you are one of the best prepared incident response teams in Europe (at least on paper). This is a standard that a lot of teams aspire to, but unfortunately many don’t make it, usually due to time commitments.

The TF-CSIRT community also works very closely with FIRST (the Forum of Incident Response and Security Teams). This partnership helps deliver a yearly joint conference.

There are many other benefits to becoming a member of TF-CSIRT, and I would highly recommend it!

So how do I join?

Joining TF-CSIRT is broken up into 3 different “memberships” or processes.

Listed Member

The first process is to become a listed member. This means you become part of the community and get your team listed in the TF-CSIRT database. It also means you can begin attending the European conferences and meetups on offer.

To become a listed member, you need to fulfill some requirements:-

  1. You need to be sponsored by at least 2 teams that are already accredited or certified. A good idea here would be to look at the Trusted Introducer directory and see if you know teams that have already gone through this process. The TF-CSIRT community is growing larger and larger within Europe, so the chances are you already know the relevant teams to get the process moving.
  2. Get PGP/GPG keys for your team to communicate with TF-CSIRT. This one is a tiny bit of a hassle: there is a large debate out there about using PGP, it can be quite difficult to get PGP supported within certain organizations, and ad-hoc processes may end up being needed to meet this requirement.

Once you have these two main requirements met, you simply fill out a form and email it to the Trusted Introducer email address and VOILA… Well, not quite VOILA; there is still an internal process within TF-CSIRT where various members vote on your membership. But after a period, you will find yourself a listed member!

Accredited Member

A lot of teams who aim for the certification membership will first need to become accredited members. By becoming accredited you receive access to the members-only part of the Trusted Introducer service, where you have access to quite a lot of nice information about other teams within the directory which is not publicly available. Many teams reach this stage aiming for certification, but for multiple reasons do not progress to that step. You should look at the accreditation step as “we are who we say we are”: an incident response team who wants more than simply being listed, but wants to show the community they mean business.

To become accredited your team must:-

  1. Already be a listed member
  2. Publish an RFC 2350 document describing your team (I will blog about this soon)
  3. Fill out a large amount of information about your team and their capabilities and service offerings

Once these requirements are met, the information is supplied to the Trusted Introducer team. This time it is not quite VOILA at all: there is a long process where the information you have provided is vetted and assessed. The assessment takes around 3 months to complete and can result in further questions from the Trusted Introducer team. After it is completed and you are accepted, you gain a shiny new status of “Accredited” within the directory!

Certified Member

Saving the best type of membership for last: a certified member is a team who has met the gold standard for incident response teams. They have adhered to the strict SIM3 model and achieved a maturity rating, set by the Trusted Introducer team, which essentially means “your team is one of the best in Europe at incident response” (on paper!).

The requirements to become certified:-

  1. Must already be an accredited member
  2. Have a positive SIM3 assessment based on current Trusted Introducer thresholds

The idea with number 2 is that the team will spend time assessing their current maturity within incident response. To do this they use the SIM3 model, something I will be blogging about very soon! The model is used to ensure that a team has all necessary processes documented and in place, and that there is measurable maturity within these processes.

If the team discovers they are not quite ready after completing a SIM3 assessment, they can then spend some time improving processes and documentation to a higher standard. Another low-hanging fruit is ensuring that the processes you define are signed off and audited by someone independent from your incident response team. Once you are confident you have met the correct maturity level within your documentation, you can then apply to be certified.

A SIM3 auditor will then be appointed to you. This auditor will perform an onsite workshop at your location and audit all of your documented processes, interviewing certain team members and really digging deep to ensure that processes are not just something written on paper, but are understood too.

Once this audit is passed, your status will then be changed within the directory to “Certified” and you can then go and show off to your friends! *cough* I mean constituents…

I may make certification sound like a long, drawn-out process, but how else could you achieve such an important gold standard without being audited externally and being put before a committee who decides if you are mature enough to be certified? Any other process like this would also take time. However, the benefits that come after being certified are huge: your constituents and management can rest safer in the knowledge that they are being served by a certified team.

Final words…

I hope that you learned something from this blog post. I have become familiar with the whole Trusted Introducer/TF-CSIRT grouping over the last 2 years, and I think it is incredibly exciting to be a part of this community. The certification process is also an incredible learning experience and will ensure that you really have everything in order to run your incident response team!

The Trusted Introducer website has far more details and interesting information about the processes, and can be found here:-

My next blog post in this area will talk about the SIM3 model and how awesome it is for measuring the maturity of your incident response team…

Incident Response 101 – The Why?

In the previous post we discussed the background for my knowledge within incident response. Now we will jump into the exciting stuff and talk about “The Why?”

I guess a pretty good place to start in defining the incident response process is understanding why we need incident response at all.

Incident response wouldn’t exist without something to actually trigger the process. To trigger the process you need an incident, and what will generate that incident?


Incidents are generated from a threat, whether this threat is a nation state attacker, a script kiddie, a pandemic, or even some sort of natural disaster. So then what is a threat, and how do we define it?

I like to start out this explanation by showing the following diagram:-

Diagram by IncibeCERT

Intent + Capability + Opportunity = Threat

Each one of these conditions needs to be met to fulfill the criteria to create a threat. To make it more understandable I use an example of whether there is a threat at home from my child trying to steal Nutella from the cupboard.


Intent is pure and simple, does my child want the Nutella? Do they have the desire and drive to get it? Without intent, I could leave Nutella all over the house and not be worried about anything happening to it.


Does my child have the capability to get the Nutella? I may have left the cupboard door open, and my child may desperately want the Nutella. But they haven’t learned to open a jar yet. So the threat is not there…


Did I leave the Nutella jar open on the kitchen top? So now my child has the perfect chance to get hold of it. The opportunity has been given to them, now they can combine it with their intent and capability to create the threat!
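For the programmers out there, the formula above can be sketched as a simple boolean AND: all three conditions must be true before a threat exists. This is a playful sketch of the idea, not a formal model:

```python
def is_threat(intent: bool, capability: bool, opportunity: bool) -> bool:
    """A threat only exists when intent, capability AND opportunity are all present."""
    return intent and capability and opportunity

# The Nutella example: the child wants the jar (intent) and can open it
# (capability), but it is locked away (no opportunity) -> no threat.
print(is_threat(True, True, False))  # False
print(is_threat(True, True, True))   # True
```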

Well what can we do about this?

You may look at these three points and think there is a lot to be done to protect against each part of the “threat process”. But there isn’t… You cannot, by yourself, take actions to reduce the capabilities of your attackers.

You also cannot influence an attacker’s intent against you. In some niche cases, you could argue that by “doing good things” you might reduce the intent, but this is a relatively difficult thing to measure.

So this leaves only “opportunity” where you can have some sort of impact. I say “some sort of” because an attacker will always get an opportunity; it can be something as simple as a misconfigured firewall, a vulnerability in a public-facing server, and many more.

But you can do your best to restrict the number of opportunities presented to an attacker. A good example of this is vulnerability management: when an exploit or vulnerability is released and it affects you, taking actions to patch or mitigate it can help reduce the attacker’s opportunity to become a threat.

But what about incident response?

You may be thinking, well wait a minute, where does incident response fit into this? Incident response assumes that the attacker had the opportunity to become a threat and then carried out actions against you which have resulted in an incident needing to be handled. Incident response is purely a reactive process, and it is driven by threats.

In some cases the lessons learned from the root cause analysis within the incident response process can also assist with reducing attacker opportunities. An example of this… Imagine having a perimeter firewall hole which is too wide and allows external access to a number of unpatched test servers. The subsequent incident from an attacker compromising these servers can lead to a report which identifies the broad firewall rule and gives advice on how to fix it, thus reducing the next attacker’s opportunity to become a threat!

Closing remarks…

In the next post we will look at how we can have an understanding of the threat landscape, and how to figure out which threats might be relevant to us…

Incident Response 101 – The Background

In the previous post, I gave an introduction to my planned set of blog posts around incident response.


The first question is: how have I made it to this stage in my understanding of the incident response process? Which materials, courses, books etc. have led me to develop my current knowledge level in this field? I will try to give a short description of each resource and why it is important…

All authors start with some background about themselves, so the audience trusts them a little more when they begin reading: “oh, this guy has read a lot, and is certified in xx and xx, they must know what they are talking about”.

This is a list of resources that I turn to at least once a week in my work within incident response.


FIRST CSIRT Services Framework 2.0

It took me quite some time to find this document; I was quite a way into my journey of building a Cyber Defence Center before I found it. But once I did, it answered so many of my outstanding questions. This document lays it out flat: what you need to do to deliver a large selection of services within the CSIRT world. It opened a door to a large community for me too, as I found the authors very interesting and the FIRST group a very welcome aid in my service architecture. I treat this document like the bible for the services I needed to build.

Just like any religious text, there is always room for interpretation. This resource is very good, but it does not answer every single question; in some areas it raises more questions, which require deeper research and more technically focused answers. But we will touch on this later in the blog posts on this subject.

SIM3 – Security Incident Management Maturity Model

I started learning about the SIM3 model whilst beginning research into joining the TF-CSIRT community (something we will look into in later blog posts). This model lays out the perfect foundation for the building blocks you need to assemble an international-class incident response team. Attaining a good maturity rating within this model enables you to join the TF-CSIRT community and know that you have a very well-oiled incident response process. The SIM3 model is written by Don Stikvoort, who has also been highly influential in the FIRST CSIRT Services Framework.

This model is the gold standard for creating an incident response service, and I will reference it a lot throughout the blog posts coming up. It gives you some of the backbone structure that you need to build upon to create your own service.


Intelligence-Driven Incident Response

I bought this book after attending the SANS FOR578 course that I mention below. I wanted a supplemental resource to aid my studies in Cyber Threat Intelligence, and this book went beyond my expectations. It really breaks down the incident response process in detail and shows where you can begin to look at it as a driver for gaining threat intelligence. This book really helped solve the problem I will later discuss, around “incident recording” language.

I recommend this book to everyone who I meet within the incident response world.

MITRE – Ten Strategies of a World-Class Cybersecurity Operations Center

This book is available for free from the link above. I was lucky enough to receive a printed copy from someone I met at the FOR578 training course. It goes into a lot of great detail on how to build a SOC and which resources you should look at to do it. Although the book was written back in 2014 and a lot has changed since then, it still holds a lot of relevancy today. The section called “Strategy 4” is very useful in determining which functions an incident response team should have, and how they can be developed if needed.



This course was the first non-vendor-focused training course I ever took; before this I was heavily focused on studying network security through the CCNA books. This course helped me understand that the security world was bigger than specific vendors’ offerings, and it opened the gates to my eventual drive into cyber security and incident response. For anyone starting out in this field, this course is very useful, as it is very broad and tries to cover most of the important topics in cyber security.

SANS FOR578 – Cyber Threat Intelligence

If I look back at any course, or anything I have ever studied in general, this course holds the top honours for how much I learned. I went into this course with an understanding of how I thought cyber security worked, and came out the other side with an entirely deeper knowledge and thought process. This course really helped me understand how powerful the data absorbed from the incident response process can be, provided that it is organized into structures and frameworks which present it in a clear way. I also had the added bonus that the course was being taught by Jake Williams (@malwarejake), and his anecdotes helped further my understanding of the materials. I would say that this course was the straw that broke the camel’s back and changed me from being a purely technically oriented person to being much more focused on process and structure. I do not have enough great words in my dictionary for this course!

Other resources:-

Don’t ever underestimate the value you can get from just talking to people, whether they are in the incident response field or in other fields. A great example is the crossover between incident response and incident management in an ITIL sense: essentially they are the same process and flow, just that incident response has the “cyber” tag.

Closing words…

This is just a list of the resources that I have used, and it is not complete; you need to find the bits you need from each of them and use them to define your own process.

I have also had the massive benefit of learning from some great people and spending time with organizations like CIRCL, Mandiant and Red Canary, to name a few… I just try to absorb as much from the experts as possible…

Incident Response 101 – Intro

I have been wanting to write a set of blog posts about this for a while, possibly I will one day turn this into a book! But for now, it can live here.

Over the last year, I have given a few presentations and lectures about incident response, some of which live on our GitHub in the presentations folder. But they are not tied together, and they aren’t “alive” like a series of blog posts could be…

I would like to share a lot of the knowledge I have gained whilst working within this field, and studying alongside it. A lot of the words in the next few blog posts will come from the experience of delivering exactly what they say.

A problem that I have found whilst trying to understand incident response deeply, is that most incident response books, courses and sales folk seem to really focus on the deep technical parts of incident response… The forensics, the detections, the reverse engineering, the indicators of compromise etc etc. The “sexy” analysis parts, and the easy sell. What I have been missing is a comprehensive guide to the underlying process behind the whole incident response stack.

Then it struck me: most of the people working within incident response are deeply technical and do get down and dirty with the analysis stage, but they aren’t really strong when it comes to the process. A process which is made up of far more stages than just analysis. This ends up creating a vacuum, where incident response seems highly expensive and complex to the outside observer.

So I have decided to write some blog posts addressed to the “2019 me”, so I can help others who are in my shoes: those who need to build something much more than just an analysis team, those who need to architect the entire process, from alert to end report, that delivers great actionable results.

Creating detection rules in Elastic SIEM App

It has been quite a long time since I wrote my last blog post, as with everything, life gets in the way! But I have been spending some quiet time rebuilding my lab, and I have upgraded my ELK stack to 7.6, and I am totally blown away by how awesome the Elastic SIEM app is. So I thought I would put together a few blog posts about how to use it!


Before you start, there are a few prerequisites:-

  • You must be running 7.6 (duh)…
  • You must be running at least the basic license.
  • You must be running at a minimum basic authentication within your setup, between Kibana, Elastic, Logstash etc.
  • You must be running TLS on Elastic.

Enabling each one of these prereqs takes time, and if you are using your stack just for testing purposes and haven’t set up TLS or auth before, then good luck! You are in the lucky position I was in last week; welcome to 2 days of work…
However once you are done, you are ready to move on to the real good stuff…

The good stuff

We will use an example to aid the instructions. The example is based on creating a detection for each time a Windows Defender event ID 1116 – Malware Detected entry appears in my logs.

First you will need to open the Elastic SIEM app, and then click on “Detections”.

Once you are in the detections window, on the right hand side you will find “Manage signal detection rules”.

In this window “Signal detection rules”, you can see all the rules you currently have created, or imported. You can manage whether they are activated rules, and many other configuration changes can be done here.

To create a new rule click on “Create new rule”

Within the “Create new rule” section, the first thing you will need to do is define the index you wish the rule to point at, and then the query you want the rule to run. In this example, as I am splitting Defender into a separate index, I have chosen my “sd-defender” index, and my query is written in KQL (Kibana Query Language). The query uses the ECS (Elastic Common Schema) field event.code and will match when it finds event.code 1116. Once you have built this first part, click on “Continue”.

The 2nd stage of building a rule, is to add some more description to the rule…

Here you can name the rule, and write a description of what it is/does. You also assign a “Severity” from low to critical, and a “Risk score” from 0-100. In this case I have chosen “Severity” = High and “Risk score” of 75. When you have finished in this section, click on “Continue”.

In this section you can also add some “Advanced settings”, where you can supply reference materials for the alert: if you created it from a blog post, or if it came from a Sigma rule, you could supply a URL here. You can also add some examples of false positives, and enrich the rule with some MITRE ATT&CK TTPs! In this example we won’t add them, but I will be blogging again soon about how to do this part using Sigma rules!

The last part of rule creation is the “Schedule rule” section. Here you can set up how often you would like the rule to run and, when it does run, how far back in time it should look. This is interesting because if you have just created a new rule and would like to see how it would have performed over the last few days of logs, you can adjust that setting here. When you are done setting up the schedule, you can then choose either “Create rule without activating it” or “Create and activate rule”; both options are pretty self-explanatory!
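For the curious, rules like this can also be created programmatically through Kibana’s detection engine API rather than clicking through the UI. The sketch below only builds the JSON payload matching the choices made above; the field names and the `POST /api/detection_engine/rules` endpoint are my assumptions based on the 7.6 detection engine, so verify them against your own Kibana version before relying on this:

```python
import json

# Hypothetical payload for POST /api/detection_engine/rules
# (field names assumed for Kibana 7.6 - check your docs).
rule = {
    "name": "Windows Defender - Malware Detected",
    "description": "Matches Windows Defender event ID 1116",
    "type": "query",                 # a KQL/Lucene query rule
    "index": ["sd-defender"],        # the index chosen above
    "query": 'event.code : "1116"',  # the KQL query from the example
    "severity": "high",
    "risk_score": 75,
    "interval": "5m",                # how often the rule runs
    "from": "now-6m",                # look-back window per run
    "enabled": True,
}

print(json.dumps(rule, indent=2))
```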

Once the rule is created, we can try to provoke it and see how it turns out… Head back to the “Detections” page of the SIEM app. In my example, I am lucky because it is my lab and there is nothing else going on…

Now we will trigger a malware detected alarm by downloading the EICAR test file to one of my lab machines.


And here is the alert landing in the “Signals” pane; from here we can begin investigation. Right now there is not very much information about how these alerts make it to the attention of someone not using the SIEM app directly, but the SIEM app has an incredible offering here, for free! I have also added a bonus item on how to extract the alerts out to case management tools, Slack, etc.

Bonus bonus bonus

If you want to extract the alerts out of the SIEM app, you can use a tried and tested tool, “Elastalert”. The SIEM app uses a system index called “.siem-signals-default-00001”. This index can be read via Elastalert, and the alerts can make it out to your SOC team!
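If you would rather roll your own poller than deploy Elastalert, the query it needs is simple. The sketch below only builds the search body you would run against the signals index; the `signal.status` field name is my assumption based on the signals mapping, so check it against your own index before use:

```python
# Build a search body a poller could run against the SIEM signals index
# to fetch recently opened signals (field names are assumptions - check
# them against your own signals index mapping).
def signals_query(lookback_minutes: int = 60) -> dict:
    return {
        "query": {
            "bool": {
                "filter": [
                    {"term": {"signal.status": "open"}},
                    {"range": {"@timestamp": {"gte": f"now-{lookback_minutes}m"}}},
                ]
            }
        },
        "sort": [{"@timestamp": "desc"}],
    }

print(signals_query(30)["query"]["bool"]["filter"][1])
```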

Enriching ElasticSearch With Threat Data – Part 3 – Logstash

In the previous post in this series, we prepared MISP and its API, set up memcached, and created the Python script we need to pull data from MISP and push it into memcached. In this blog post, we will cover how to use Logstash to look up the data stored within memcached, and how to enrich ElasticSearch when we get a hit!

A quick mention before we go much deeper: this enrichment setup is capable of ultra-fast lookups and working with huge numbers of IoCs. Without giving away too much, I know of a very large production setup which is running this with close to 120,000 events per second and multiple feeds enabled within MISP… It will do enrichment in realtime as the logs are being written to ElasticSearch!

Part 1:-

Part 2:-


Logstash – Brief Intro

Logstash is the powerhouse behind our enrichment setup… Since you should hopefully already be familiar with the ELK stack, we won’t touch too much on Logstash and how it works, but we will focus on parts of it…

Logstash configuration is essentially split into 3 sections: input, filter and output.

The input section is where we define the source of the logging data we want to work with.

The filter section is where we then work with the logging data. This could be via parsing, normalizing, transforming or multiple other methods of preparing the data for sending out to ElasticSearch…

The output section is where we define how to send the data out of Logstash; this could be directly to ElasticSearch, Kafka or many other output options.

Our blog will focus much more in future on the filter section, and how we can map all logs to the Elastic Common Schema via grok parsing. But right now, in this example, we will keep it simple and assume you already have some sort of parsing in place for the logging source you want to enrich.

Logstash – Memcached filter

The Logstash memcached filter has recently been made into a fully supported release, which we are very happy about over at Security Distractions. It comes installed by default with Logstash 7.0…

This means that all we need to do within our Logstash configuration to enable the memcached plugin is to write the filter in as shown below.

The placement of the memcached section is quite important… It should come after your grok parsing and transforming sections, preferably as the last function within the filter section.

	filter {
	  memcached {
	    hosts => [""]
	    get => { "domain-%{destination.domain}" => "[misp_src]" }
	  }
	}

A quick breakdown of this function: “hosts” is where we specify the location and port of our memcached application.

“get” is used to tell Logstash which field within the logs it needs to look up against memcached; the result of a match is then written to a new field, “misp_src”.

Using the example from our previous blog post, we will use as the value within the destination.domain field.

Logstash will append “domain-” to “”, resulting in “”. It will then make a get request against the memcached application….

“domain-securitydistractions” is populated within the memcached data store, with the value “Feed-RansomwareTracker”. So we get a hit and then this value is written to the new field “misp_src”.

When Logstash does a lookup for a value which is not within the memcached data store, it will not write a value into misp_src. So, just for the sake of good practice, we will add a conditional within Logstash that populates the misp_src field with the value “none” if there is no match.

	if ![misp_src] {
	  mutate {
	    add_field => { "[misp_src]" => "none" }
	  }
	}
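To make the flow concrete, here is the same lookup logic in plain Python, with a dict standing in for the memcached store. The key and value are taken from the example in this post; the `enrich` helper is just illustrative:

```python
# A dict stands in for memcached; key/value taken from the example in this post.
store = {"domain-securitydistractions": "Feed-RansomwareTracker"}

def enrich(event: dict) -> dict:
    """Mimic the memcached filter: prefix the field, look it up, default to none."""
    key = "domain-" + event.get("destination.domain", "")
    event["misp_src"] = store.get(key, "none")
    return event

print(enrich({"destination.domain": "securitydistractions"})["misp_src"])  # Feed-RansomwareTracker
print(enrich({"destination.domain": "example.org"})["misp_src"])           # none
```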

Since this setup leverages your already existing ELK stack, you will only need to handle the new field “misp_src” via visualisations or whatever other fancy way you want to display it.

In my lab, I use a locally running instance of Pi-hole to generate logs for testing the enrichment setup…

When I get round to it, I will make a part 4, featuring extensions to the integration. You can run with as many feeds as your heart desires… Your only limit is your imagination for tagging/feed names!

It is possible to further integrate MISP and ELK by using the http filter plugin. Once the misp_src field is populated, you could take this result and make an http call to MISP again for further enrichment.

Enriching ElasticSearch With Threat Data – Part 2 – Memcached and Python

In our previous post we covered MISP and some of the preparation work needed to integrate MISP and ElasticSearch. With MISP now set up and prepped, we can focus on Python and memcached.

Part 1:-



First a little background into why we chose to use Memcached for our ElasticSearch integration…..

Threat data feeds are dynamic by nature; they are constantly updated, multiple times a day. The updates contain both additions to the feeds and deletions. This means our enrichment engine needs to be dynamic too… To explain this better, we will use Ransomware Tracker as an example.

Let’s say a new IP is published to the Ransomware Tracker feed. This would be easy to manage in an enrichment engine, as we could simply add the new IP to our list. But what if an IP is removed from Ransomware Tracker? Now we have to monitor the feed to find the deletion, check our own list to see if we have this IP, and then delete it from our list. This can very quickly get complex to handle…

Another way to handle it could be to monitor the Ransomware Tracker feed for changes: when a change is made, clear our list completely and pull the latest feed instead… This would solve part of the problem, but it can result in a small period where the enrichment engine is empty, and it also increases complexity as we would have to delete the list each time, which is definitely not what we wanted!

We decided to look into simply assigning a TTL to each IoC on the feed, and ageing out the IoCs which are no longer present on the feed. We would set up our script to pull the feed at a given interval, then push the contents into our enrichment engine’s store. Simple, yet incredibly effective… This method also had to be supported by ElasticSearch, and how lucky we were that Logstash has a filter plugin for memcached! So it was memcached we settled on using to store the feed data for enrichment.

Memcached – Preparation

Memcached meets our requirements of being simple and handling the ageing of IoCs, and it is supported by ElasticSearch/Logstash, which makes it perfect for this task. It also comes with the huge additional benefit of storing the data in memory, so lookups from Logstash will be ultra fast.

The memcached application is a very simple key-value store running in memory; you can telnet into the application, which runs by default on port 11211.

The application supports only a few commands. The ones we need here are “get” and “set”, both of which are quite self-explanatory…

The set command will be used by our Python script, to set the data into the store.

The get command will be used by the Logstash filter plugin, to query the store for a specific IoC and return the result back to Logstash.

The only thing we need to do is set the structure of the data within the key-value store. Since we are going to be working with multiple data types (domain names, IP addresses etc.), we will make our key a combination of the data type and the IoC. So in the example that is on the RansomwareTracker feed, it will be represented as: “”.

Using the key as the combination of the data type and the IoC will be easier to understand later when we look at the Logstash configuration.

The value will be the feed name, so in this example “Feed-RansomwareTracker”.

The TTL can be set to whatever suits your organisation; in our example we will use 70 seconds. This is because we are going to run our Python script for pulling the feed from MISP every 30 seconds, which allows us to miss one pull without aging out all the IoC’s within the memcached store.

So the set command for memcached with our example data will be as follows:- “domain-<domain>”, “Feed-RansomwareTracker”, “70”.
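In Python terms, the key/value/TTL triple looks like this (a minimal sketch — the `make_key` helper and the example domain are my own illustrations, not part of the feed):

```python
FEED_NAME = "Feed-RansomwareTracker"
TTL_SECONDS = 70  # pull interval is 30s, so one missed pull is survivable

def make_key(ioc_type: str, ioc: str) -> str:
    # Key is the data type and the IoC joined with a dash,
    # e.g. "domain-evil.example"
    return f"{ioc_type}-{ioc}"

key = make_key("domain", "evil.example")
```

The same pattern extends to other attribute types, e.g. `make_key("ip-src", "198.51.100.7")`.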

It is highly recommended that you run memcached on the same machine as Logstash, for latency purposes. In our lab we are running everything on a Debian VM, and there are Debian packages available for memcached…..

Python – Memcached/MISP integration

Caveat: I am not a developer, and my programming skills are limited… The script here only had to be very simple, so it suited my skill level. There will be multiple ways to improve it in the future… But this is what we are running with here, and it works!

As ever, any form of integration between tools is probably going to require some form of scripting. In our case we knew we needed a script that would pull the data from our MISP platform API, and then push this data into memcached. The full script can be found at the bottom of the page….

The first part is our interaction with the MISP API….

def misppull():
    headers = {
            'Authorization': 'INSERT YOUR OWN MISP API KEY',
            'Accept': 'application/json',
            'Content-type': 'application/json',
    }

    data = '{"returnFormat":"text","type":"domain","tags":"Feed-RansomwareTracker","to_ids":"yes"}'

    response = requests.post('https://*INSERTYOUROWNMISPHERE*/attributes/restSearch', headers=headers, data=data, verify=False) #Call to MISP API

    return response

Remember to change the “Authorization” section within the header to your own API key.

The data variable is used to tell the MISP API which IoC’s we want to retrieve. In this example we are asking for all domain names that are tagged with “Feed-RansomwareTracker” and where the “to_ids” setting is set to yes. This will be returned as plaintext…

Remember also to change the URL within the response variable to reflect the domain name or IP address of your own MISP instance. I have also disabled SSL verification since this is running within my lab; it is not recommended to keep this setting if you are running in production.
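As a small safeguard (a sketch of mine, not part of the original script), the same request body can be built with `json.dumps` instead of a hand-written JSON string, so a quoting mistake becomes a Python error rather than a malformed request:

```python
import json

# The restSearch body from above, built as a dict and then serialised
query = {
    "returnFormat": "text",
    "type": "domain",
    "tags": "Feed-RansomwareTracker",
    "to_ids": "yes",
}
data = json.dumps(query)
```

The resulting `data` string can be passed to the API call exactly as before.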

As always, there are multiple Python libraries for interacting with memcached. We settled on the first one we found, “pymemcache”.

if __name__ == '__main__':
    response = misppull()
    domains = (response.text).splitlines()
    for domain in domains:
        client.set("domain-" + domain, "Feed-RansomwareTracker", 70)

Using the structure we settled on earlier in this blog post, this is how it is reflected when using pymemcache: the client.set command pushes the IoC’s we retrieved via the “misppull” function into the memcached application.
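For reference, the same loop can be wrapped in a small helper (a sketch — the function name and the blank-line handling are my additions) that skips empty lines in the plaintext response and reports how many IoC’s were pushed:

```python
def push_domains(client, text, feed="Feed-RansomwareTracker", ttl=70):
    # Push each domain from the plaintext MISP response into memcached,
    # keyed as "domain-<ioc>", skipping any blank lines
    count = 0
    for domain in text.splitlines():
        domain = domain.strip()
        if not domain:
            continue
        client.set("domain-" + domain, feed, ttl)
        count += 1
    return count
```

This drops straight into the `__main__` block in place of the bare for loop.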

Full script:-

When I get round to it, this will be uploaded to our GitHub; it is released under the MIT license.

import requests
from requests.packages.urllib3.exceptions import InsecureRequestWarning
from pymemcache.client.base import Client

requests.packages.urllib3.disable_warnings(InsecureRequestWarning) #Silence the warning caused by verify=False

client = Client(('localhost', 11211)) #Location of memcached application

def misppull():
    headers = {
            'Authorization': 'INSERT YOUR OWN API KEY HERE',
            'Accept': 'application/json',
            'Content-type': 'application/json',
    }

    data = '{"returnFormat":"text","type":"domain","tags":"Feed-RansomwareTracker","to_ids":"yes"}' #Setting up the data format we require from MISP

    response = requests.post('https://*INSERTYOUROWNMISPHERE*/attributes/restSearch', headers=headers, data=data, verify=False) #Call to MISP API
    return response

if __name__ == '__main__':
    response = misppull()
    domains = (response.text).splitlines()
    for domain in domains:
        client.set("domain-" + domain, "Feed-RansomwareTracker", 70)

Next in the post series is covering the last step… Integrating it all together using Logstash!

Part 3:-

Enriching ElasticSearch With Threat Data – Part 1 – MISP


There are a lot of great blog posts and reads available on the MISP platform, so I don’t want to do it an injustice by writing a huge intro here… I have a plan to write a more in-depth blog post about MISP in the future, but until then please head on over to the MISP project site:

What we are interested in for our enrichment, is how to leverage MISP to produce our own threat data feeds.

MISP allows you to create your own events made up of IoC’s and then leverage these as a threat data feed.

MISP out of the box also has support for many open source threat feeds; it can aggregate these and display them in a chosen standard. This can really help with centralizing your organisation’s threat data, so you can combine OSINT and your own intelligence for enrichment into ElasticSearch.

We will begin our example by working with the Ransomware Tracker CSV feed which can be enabled in MISP. This feed is well known by the community and will give a good understanding of how the integration works.


To get started you can download a training MISP instance here (or use your own MISP instance):-

Once you have your instance running and can access the WebUI, you should navigate to “Sync Actions” and then down to “List Feeds”

This will present you with a screen showing all of the default available threat data feeds and their sources.

If you scroll through this list, eventually you will find Ransomware Tracker.

You will need to check the tick box next to the feed, and then scroll to the top and select “Enable selected”.

Once the feed is enabled, you will need to return to the Ransomware Tracker section; all the way on the right hand side there is a button with the tooltip “fetch all events”.

This will then begin the job to fetch the entire Ransomware Tracker feed into a MISP event. To find the event highlight the “Event Actions” button and then click on the “List Events” option.

This will take you to your MISP instance’s event section. Yours will look slightly different to mine; if you are using MISP already, it will be populated with events you have been working with or synced. If you are new to this, it should be populated with only 1 event… with the Info set to “Ransomware Tracker CSV Feed”.

When you drill down into the event, you will find some information relating to the threat feed, including an item in red: “Published: No”. This means that the event is currently stored inside MISP but is not available for distribution via the API or a sharing method. This allows us to work on the event without fear of publishing something by accident.

You can scroll through the event and see all of the IoC’s contained within the Ransomware Tracker feed, but what we are interested in now is tagging the Ransomware Tracker feed so we can export it via the API as one feed.

To do this, we will need to create a new custom tag within MISP….

Hover over the “Event Actions” button and then click on “Add Tag”.

You will then be presented with the Add Tag section, where you can give your new tag a name. For this example we will name it “Feed-RansomwareTracker”. Choose the colour the tag will have in your event view, ensure “Exportable” is checked, and then click “Add”.

You can then go back to your Ransomware Tracker CSV event….


As part of the event info, you can see a section called “Tags” with a + button next to it. Click on the + button, and then add your newly created Feed-RansomwareTracker tag to the event.

The last step is to then publish the event, so it can be retrieved via the API for pushing into ElasticSearch!

On the left hand side next to the event info, you can find the option for “Publish Event”. Click on this and then click “Yes” when prompted to publish the event.

This has now published the event and the tags and it is ready to be retrieved by the API.


Alongside the amazing WebUI for MISP, there is an incredibly strong API engine running underneath. Again, I won’t focus too much here on singing its praises; that I will save for a later post!

But in this example, we will use the MISP API to pull out the tagged Ransomware Tracker feed for use within ElasticSearch.

To prepare the API for our scripts, all we need to do is find the automation key…

Hover over the “Event Actions” button within the MISP WebUI… and click on the “Automation” button.

Within the Automation section you can find your automation key:-

Save this key, you will need it later for your Python script!

This concludes our preparation work within MISP, next up…. Python and Memcached….

Part 2:-

Enriching ElasticSearch With Threat Data – Intro

Since my last blog post back in January, I have been seriously distracted! I promised blog posts relating to my lab but have not had the time…. But to keep you going until then… I am going to open source my enrichment-at-scale setup, combining ElasticSearch, MISP, Logstash and memcached into one seriously powerful platform.

Have you ever wanted to check your entire logging estate against a threat feed? Multiple threat feeds? If so, you have probably seen that many of the big SIEM providers charge a premium for this service.

What I will demonstrate over the next few posts, is how to accomplish this for free! Well not quite for free, since you need time but you know…..

Let’s talk about the diagram above… For my threat data source, I have chosen MISP. My logging sources are Squid Proxy and PiHole. These choices are yours to make; the rest of the setup is required for it all to run…

Instead of choosing MISP, you could simply use a single threat data feed. Ransomware Tracker could be a good place to start, as they offer an open source feed via CSV which you could quickly parse. The important thing is that you have the right data structure to put the feed into memcached. But we will go over this in further blog posts….
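If you went the single-feed route, a minimal parse could look like this (a sketch only — the column names shown are my assumption, so check the feed’s actual header before relying on it):

```python
import csv
import io

def parse_feed(csv_text, host_column="Host"):
    # Drop comment lines, then read the remaining rows as CSV and
    # pull out the indicator column (column name is an assumption)
    lines = [l for l in csv_text.splitlines() if l and not l.startswith("#")]
    reader = csv.DictReader(io.StringIO("\n".join(lines)))
    return [row[host_column] for row in reader if row.get(host_column)]
```

The resulting list of hosts can then be pushed into memcached with the same key structure used for the MISP feed.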

Across the next blog posts, I will talk about the various pieces in the puzzle and how to put them all together… The result is a very scalable, powerful enrichment engine that can ingest and enrich logs in real time without delaying the log process.

Building Your Own Blue Team Lab


Every blue team member should spend some time investing in building their own lab setup. It can be a great and fun learning experience, where you pick up some “low hanging fruit” skills. Hopefully you come out with something you will continue to use and develop over the rest of your InfoSec career.

Having your own lab allows you to quickly test theories and detection methods. It can be adapted to support any use case you need; your only limit is your own imagination….

How does it look?

The lab is designed to work as follows…. You add an IoC to an event within MISP; this is then distributed to the blacklists on your prevention tools. At the same time, the IoC is used to alert on logs coming into ElasticSearch. The IoC will be added to an ElastAlert rule, which takes care of searching back through the ElasticSearch logs for previous activity. ElastAlert needs somewhere to send its alerts, and this is where TheHive comes into play…. Sounds simple, right?

Open source is the only way…

The Security Distractions lab is based only on open source tools, so your only investment, if you decide to build this, will be your own time! It can be used for production with a few modifications…

Over the next few blog posts, we will go into each tool and their integration points. We promise to try to keep it exciting!

But how will I run the lab?

This lab can be built using whatever method you want… We will supply the configuration files for each tool where needed, but it is up to you how it is run. I like to run VM’s, but others are obsessed with Docker, so use whatever you feel most comfortable with. For those planning on using VM’s, the first post will be about VirtualBox, so you can get started…. If you’re using Docker, then ummm…… You’re on your own!

All configuration files will be found over on our GitHub page:-