The Kings In Your Castle Part #1

All the lame threats that own you, but will never make you famous.

In March 2016 I presented together with Raphael Vinot at this year’s Troopers conference in Heidelberg. The talk covered research on targeted malware: the hows and ifs of malicious binaries, the role of sophistication and exploits, and the presence or absence of patterns within advanced persistent threats (APTs). The write-up of all happenings has taken its time and, as expected, kept growing longer and longer, until we finally figured that splitting the outcome into multiple posts would make all our lives significantly easier.

Five compact blogposts are the result, easy to digest and understand, while covering the following points:

  1. Introduction, hypotheses and analysis process with respective toolset
  2. Description of the data set and the feature extraction process
  3. The findings’ curiosities and results of exploit-per-APT analysis
  4. The use of packers, crypters and commodity RATs within the sample set
  5. Actor correlations and APT naming wars, future research

Here we go, please enjoy part 1 about the kings, your castle, and cyber. At this point I would like to thank Raphael Vinot for joining in on this data shoveling excurse, the team of CIRCL being busy feeding and maintaining their data dragon, and Morgan Marquis-Boire as well as the many other wizards from the research community who kept sending us malicious software.

Part 1: The kings, your castle, and cyber

It is the same question being directed at audiences around the security conference scene: how many people in the room can tell whether their machine or network is currently not compromised? No hand has ever been seen to rise in answer. APT was the fashion five years ago and still tops the most-feared charts on every cyber threat survey. While the tabloid press is generally after the latest most-sophisticated-threat, the analyst community has long resorted to talking about threats that are advanced and persistent… enough. In terms of sophistication, targeted attacks show all shades of grey, though on average they tend to be rather shallow. On the other hand, security products all share a single weak spot: they will always rely on patterns, whether patterns that are there, like signatures, or patterns that are not there, like anomalies. This enables attackers to evade detection with shallow but unknown tools that manage to fly under the radar.

In research we conducted in cooperation with CIRCL, the Computer Incident Response Center Luxembourg, we took on the APT myths by formulating hypotheses based on a set of APTs documented in the MISP platform. MISP stands for Malware Information Sharing Platform and is used by hundreds of organizations to share data on incidents, among them a large number of targeted attacks. The information shared can be split between reports from vendors and events seen by the users of the platform. MISP is maintained and developed by the fine people at CIRCL; data is constantly added by CIRCL members, security companies and independent contributors.

Having this information in one single place allows us to correlate supposedly new threats reported by vendors with existing events seen in the wild, now or in the past. At the time of conducting the research, MISP held information about more than 2,000 events.

The data contained helps understand the overall nature of the threats, the tools of trade, the preferred approaches of the attackers, and their evolution. It potentially even allows for actor tracking as the correlation of attributes reveals hidden treasures.

The gathered events from MISP are pre-classified by threat level. We concentrated on targeted threats and conducted a survey on the nature of the malware and infrastructure used therein. How much of the analyzed malware is custom made, and how much is off-the-shelf or simply installs known RATs? How much of it is packed or crypted? Does the fact that malware is not crypted allow conclusions on whether it is used for targeted attacks? How often are exploits used in attacks? Does the use of exploits imply more sophisticated tools, as the attacker is expected to command greater resources?

On a more abstract level, we also wanted to know if we were able to uncover links among actors, their tools and infrastructure, solely based on OSINT data (open source intelligence).

The reason why this is possible lies in the nature of targeted attacks in general. A targeted attack is in reality not a piece of malware. It is a process, consisting of several phases, and often even a repeating cycle if one finds oneself targeted by a determined attacker.


Figure 1 – The APT process

Furthermore, the stages involving malicious software frequently imply the use of different pieces of malware, as well as a certain infrastructure to operate the attacks. This means the attackers require servers, domains and maybe even e-mail addresses, fake social media accounts and fake web sites to orchestrate the malware, store exfiltrated data and drive hands-on attacks. All of these items come at a certain cost, in time and money, and believe it or not, most attackers have restrictions – in time, and money.

Limited resources call for repetition; in other words, attackers with a hint of economical thinking won’t reinvent the wheel every time a new target is chosen. Advanced attack suites, exploits, tailored malware, techniques and practices are assets that are costly to change.

This said, reinventing the wheel is possible, but not a very established practice. Besides requiring extensive resources, building and maintaining more than one attack platform and a large set of unique attack tools is prone to errors. After all, the folks building sophisticated malware are human software developers too. Getting back to the actual cost of an attack, being discovered is not only annoying for an attack group, it is also expensive and potentially dangerous. Exposure might get law enforcement on track or even inspire counter attacks.

Enough about the motivations though, APTs will keep being APTs; so let’s go on with exploring their malware and infrastructure.

Toolification

MISP is a system that is usually fed with threat indicators, which are then shared and internally correlated with other indicators already present in the database. The primary purpose of MISP is of course the distribution of threat indicators, aided by additional information gained through indicator correlation. As a by-product, though, MISP provides decent event documentation with a timeline.

That said, we obviously didn’t just walk in and query databases; the process of gathering data, cleaning up records, writing tools and forming theories that were discarded again was lengthy. In the end what is left is a neat sample set, a ton of data and a set of homegrown tools, as well as extensions for already existing tools.

Please note, though, that this project is by no means an effort to perform large-scale clustering of malware with machine learning methods, nor does it involve any sophisticated algorithms whatsoever. The tools we designed are deliberately simple, the grouping of samples a mere sorting approach. Keep it simple, then scale it up, a data analysis wizard once told me.


Figure 2 – Tools and extensions created during data processing

MISP provides a web interface, which helps in manually investigating one or more events and looking for correlations, but it does not serve automation purposes. Therefore, we established different ways to access and process the MISP dataset. We used a Python connector to shovel MISP data into Viper, a malware sample management framework. This way we were able to easily sort through events and select the ones which, based on predefined criteria, were highly likely involved with targeted attacks. These events were the base from which we started processing. Selection criteria and musings about attack nature will be outlined in a follow-up blogpost. To sketch the workflow, it went roughly the following way:

  1. Event data from Viper -> sample hashes -> sample processing / tools development with SQLite backend
  2. Event data from MISP -> pull to Redis backend -> event attribute processing
  3. Importing of sample processing data into Redis
  4. Data correlation and analysis with Redis
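Step 2 of the workflow above can be sketched in a few lines. The event structure below is a simplified, hypothetical stand-in for the JSON a MISP server returns, and a plain dict stands in for the Redis backend; the key scheme is illustrative, not the original tooling’s.

```python
# Sketch of pulling MISP event attributes into a key/value store.
# A dict stands in for Redis; the event mimics (much simplified)
# the JSON structure returned by the MISP API.

def index_event(store, event):
    """File each attribute value under an '<event_id>:<type>' key."""
    eid = event["Event"]["id"]
    for attr in event["Event"]["Attribute"]:
        store.setdefault(f"{eid}:{attr['type']}", []).append(attr["value"])
    return store

event = {
    "Event": {
        "id": "1234",
        "info": "Targeted campaign X",  # hypothetical example event
        "Attribute": [
            {"type": "md5", "value": "44d88612fea8a8f36de82e1278abb02f"},
            {"type": "domain", "value": "c2.example.com"},
        ],
    }
}

store = index_event({}, event)
print(store["1234:md5"])     # sample hashes feed the processing stage
print(store["1234:domain"])  # infrastructure attributes for correlation
```

With the attributes keyed by event and type, cross-event correlation (step 4) reduces to set operations over the stored values.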

We faced a challenge when trying to collect all the malware binaries involved with the events that were selected for further processing. MISP holds primarily threat indicators, including sample hashes, domains, e-mail addresses, file names and occasionally Yara signatures; rarely ever binaries. Also, malware, especially when involved in targeted attacks, is not always shared with the broader public. Eventually we managed to aggregate most of the desired binaries with the help of public and private repositories.

The use of two different backend solutions has historical reasons: mainly, we started off working independently with our preferred solutions, and in the end found there was no reason to abolish either. Redis is a strong, scalable backend, suited for medium- to large-scale data analysis; SQLite is practical in its portability, small, elegant and effortless to set up.
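The SQLite side needs nothing beyond Python’s standard library, which is much of its appeal. A minimal sketch, assuming a hypothetical per-sample feature table (the schema and column names are ours, not the original tooling’s):

```python
import sqlite3

# Minimal sketch of a sample-processing backend in SQLite:
# one table holding per-sample features.
conn = sqlite3.connect(":memory:")  # a file like samples.db in practice

conn.execute("""CREATE TABLE samples (
    sha256   TEXT PRIMARY KEY,
    filetype TEXT,
    packed   INTEGER)""")

# Illustrative records, not real sample hashes.
conn.executemany("INSERT INTO samples VALUES (?, ?, ?)", [
    ("aa" * 32, "Win32 EXE", 1),
    ("bb" * 32, "Win32 DLL", 0),
])
conn.commit()

# Queries like "how many samples are packed?" become one-liners.
packed = conn.execute(
    "SELECT COUNT(*) FROM samples WHERE packed = 1").fetchone()[0]
print(packed)  # -> 1
```

The whole database lives in a single file (or in memory), which makes shipping the sample set between the two of us trivial.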

For feature extraction from binaries we used the Python library pefile and instrumented IDA Pro with the help of IDAPython. Furthermore, we made use of the ssdeep fuzzy hashing library and exiftool for detailed file type analysis, and used RapidMiner community edition for visualization.
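One feature of the kind such tooling extracts is easy to sketch with the standard library alone: the Shannon entropy of a byte buffer, a common packer/crypter heuristic (pefile exposes the equivalent per PE section via `SectionStructure.get_entropy()`). The thresholds and sample buffers below are illustrative.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte: 0.0 (constant) to 8.0 (uniform)."""
    if not data:
        return 0.0
    n = len(data)
    counts = Counter(data)
    ent = -sum(c / n * math.log2(c / n) for c in counts.values())
    return abs(ent)  # abs() avoids returning -0.0 for single-symbol input

low = shannon_entropy(b"A" * 1024)             # constant, text-like buffer
high = shannon_entropy(bytes(range(256)) * 4)  # uniform, random-looking bytes
print(low, high)  # -> 0.0 8.0
```

Sections whose entropy approaches 8 bits per byte look compressed or encrypted, which is one hint that a sample is packed; part 4 of this series looks at packers and crypters in detail.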

Our developments were published in the course of the Troopers conference and are available on GitHub as part of the MISP repository.

The next blogpost of the Kings in your castle series will cover the nature of the analysis data set and include a discussion of the extracted feature set.