DGA classification and detection for automated malware analysis

Introduction

Botnets are one of the biggest current threats for devices connected to the internet. Their methods to evade security measures are improved frequently. Most modern botnets use a Domain Generation Algorithm (DGA) to generate and register many different domains for their Command-and-Control (C&C) server, with the purpose of protecting it from takeover and blacklisting attempts.

To improve the automated analysis of DGA-based malware, we have developed an analysis system for the detection and classification of DGAs. In this blog post we discuss and present several techniques used by our DGA classifier.

DGA detection can be useful to detect DGA-based malware. With DGA classification it is also possible to see links between different malware samples of the same family. Such a classification is expressed as a description of the DGA in the form of a regex.
Moreover, our analysis methods are based on the network traffic of single samples and not of a whole system or network, which distinguishes our approach from most related work.

DGA-based Botnets

A Domain Generation Algorithm (DGA) periodically generates a large number of pseudo-random domains that resolve to a C&C server of a botnet [H. 16]. The main reason botnet owners use one is that it greatly complicates a takeover by authorities (sinkholing). In the typical infrastructure of a botnet that uses a static domain for the C&C server, authorities could take over the botnet with the cooperation of the corresponding domain registrar by changing the settings of the static C&C domain (e.g. changing the DNS records).

infrastructure of typical botnets
Typical infrastructure of a botnet

With a DGA that dynamically generates domains resolving to the C&C server, effective sinkholing is no longer possible. Since the bots use a newly generated domain after every period to connect to the C&C server, it would be pointless to take control of a domain that the bots no longer use to establish a connection to the C&C server.

infrastructure of typical dga botnets
Typical infrastructure of a DGA botnet

The C&C server and the bots use the same DGA with the same seed, so that they are able to generate the same set of domains. Most DGAs use the date as a seed to initialize the algorithm for domain generation. Hence the DGA creates a different set of domains every day it is run. To initiate a connection to the C&C server, the bot first needs to run the DGA to generate a domain that may also have been generated on the C&C server's side, since both are using the same algorithm and seed [R. 13]. After every domain generation, the bot attempts to resolve the generated domain. These steps are repeated until the domain resolution succeeds, so that the bot figures out the current IP address of the corresponding C&C server. Through that DGA domain the bot can then set up a connection to the C&C server.

Motivation

DGA detection can be very helpful to detect malware: if it is possible to detect the usage of a DGA while analyzing the network traffic of a single sample, then it is very likely that the analyzed sample is malicious, since DGAs are commonly used by malware but not by benign software. DGA classification is the next step in the analysis after a DGA has been detected. A successful classification returns a proper description of a DGA. With such a unified description, it is possible to group malware using the same DGA. Being able to group malware by correlating characteristics improves the detection of new malware samples of these families, because the signatures of recently detected malware samples can be blacklisted automatically. The following figure shows, for non-DGA malware, that grouping malware families based on the same domains in their DNS request traffic is only possible if they all use the same static C&C domain:

static domain
Two samples using the same static C&C domain

If the malware uses a DGA, grouping is no longer trivial, because generated DGA domains are only used temporarily, so using them to find links between samples would not be very effective.

domain mismatch
Two samples with the same DGA but a different seed

Note also that the domains occurring in the network traffic of a recently analyzed malware sample could differ if the same sample were analyzed on another day, since many DGAs use the date as a seed. The solution is to calculate a seed-independent DGA description for every analyzed sample that uses a DGA. That description can then be used as a bridge between malware samples using the same DGA.

pattern descriptor
Pattern descriptor to abstract different DGA seeds

To solve this problem, we have divided it into three smaller tasks. Thus, the DGA classifier is structured into three components. Each component solves a task that contributes to the result of the DGA classification.

In this blog post we concentrate only on approaches for DGA detection and classification that can be automated, since we want to analyze a very high number of samples. Furthermore, we want to avoid unnecessary network traffic as much as possible, and therefore focus only on offline methods.

DGA Detection

This approach for DGA detection is based on statistical values calculated over the relevant label attributes of the domains. Since the domains generated by a DGA mostly follow a pattern, it is very useful to calculate the standard deviation of some attribute values [T. 13]. The average can also be used for some attribute values to measure whether a list of domains was generated by a DGA or not. The same statistical values are also calculated over a list of domains from the Top 500 Alexa Ranking. These are considered reference values for non-DGA domains regarding the relevant label attributes.

The domains with multiple levels are split into their labels for further analysis.
E.g. the domain www.developers.google.com is split into:
com – Top-level domain (TLD)
google – Second-level domain
developers – Third-level domain
www – Fourth-level domain

All domains in the domain list resulting from a sample are compared level-wise, such that the labels of every domain are only compared with the same level.

To find proper indicators for DGA usage, we have done a level-wise comparison of statistical values calculated over several lexical properties of DGA domains and non-DGA domains (e.g. from Top 500 Alexa Ranking).

domain levels
Different kinds of DGA patterns

Our experiments have shown that the following statistical values over domain levels are very effective for DGA detection:

  • Average of the . . .
    • number of used hyphens
    • maximum number of contiguous consonants
  • Standard deviation of the . . .
    • string length of the label
    • consonant and vowel ratio
    • entropy
  • Redundancy of substrings or words in case of a wordlist DGA

From all these indicators we build a score that is compared against a specific threshold. If the score exceeds the threshold, the component decides that the analyzed domain list was generated by a DGA. Since the indicators are based on statistical values, which lose their significance with smaller sets, it is also important to consider the case of too few domains. In that case, the score is scaled down.
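As a rough illustration, such level-wise statistics could be computed as sketched below. The helper names, the feature selection and the level convention are illustrative assumptions only; they are not the exact implementation of our classifier.

import math
from collections import Counter

VOWELS = set("aeiou")

def labels_at_level(domains, level):
    """Label at the given level for every domain (0 = TLD, 1 = second-level, ...)."""
    labels = []
    for domain in domains:
        parts = domain.lower().strip(".").split(".")[::-1]
        if len(parts) > level:
            labels.append(parts[level])
    return labels

def entropy(label):
    counts = Counter(label)
    return -sum(c / len(label) * math.log2(c / len(label)) for c in counts.values())

def consonant_vowel_ratio(label):
    vowels = sum(ch in VOWELS for ch in label)
    consonants = sum(ch.isalpha() and ch not in VOWELS for ch in label)
    return consonants / vowels if vowels else float(consonants)

def max_contiguous_consonants(label):
    best = run = 0
    for ch in label:
        run = run + 1 if ch.isalpha() and ch not in VOWELS else 0
        best = max(best, run)
    return best

def mean(values):
    return sum(values) / len(values)

def std(values):
    m = mean(values)
    return math.sqrt(sum((v - m) ** 2 for v in values) / len(values))

def level_statistics(domains, level=1):
    """The statistical indicators listed above, for one domain level."""
    labels = labels_at_level(domains, level)
    return {
        "avg_hyphens": mean([l.count("-") for l in labels]),
        "avg_max_consonant_run": mean([max_contiguous_consonants(l) for l in labels]),
        "std_label_length": std([len(l) for l in labels]),
        "std_cv_ratio": std([consonant_vowel_ratio(l) for l in labels]),
        "std_entropy": std([entropy(l) for l in labels]),
    }

# The score is then built by comparing these values against the same statistics
# computed over a non-DGA reference list (e.g. the Alexa Top 500) and checking
# the result against a threshold, scaled down for very small domain lists.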

Separation of non-DGA Domains

Malware often tries to connect to a benign host first (e.g. google.com) to check its internet connectivity. So, in the case of DGA-based malware, the samples do not only send requests to DGA domains, but also to non-DGA domains. Hence the program for DGA classification needs to expect a domain list containing both DGA domains and non-DGA domains. Before the program can classify a DGA, it needs to filter out the non-DGA domains. In this process, we assume that the majority of the domains in the domain list of the sample are DGA domains. Therefore, the non-DGA domains are considered outliers. We used different outlier methods to identify non-DGA domains:

  • Outlying TLD-label
  • Outlying www-label
  • Find outliers with the method of Nalimov [A. 84] regarding the following label attributes (see the sketch after this list):
    • String length of the label
    • Digit and string length ratio
    • Hyphen and string length ratio
    • Consonant and vowel ratio
  • Outlying label count
  • Find outliers by values that occur too rarely regarding the following label attributes (the more often a value occurs, the higher the probability that it belongs to a DGA domain; we use the opposite case to find non-DGA domains here):
    • String length of the label
    • Consonant and vowel ratio
    • Digit and string length ratio
    • Entropy
    • Hyphen and string length ratio
    • Relative position of the first hyphen in the label
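As an illustration of the Nalimov-based filtering mentioned above, the sketch below flags outlying domains for a single numeric label attribute. The critical value depends on the sample size and significance level and has to be taken from published Nalimov tables; the function and attribute names are illustrative, not the exact implementation.

import math

def nalimov_statistic(value, values):
    """Nalimov test statistic: q = |x - mean| / s * sqrt(n / (n - 1))."""
    n = len(values)
    m = sum(values) / n
    s = math.sqrt(sum((v - m) ** 2 for v in values) / (n - 1))  # sample standard deviation
    if s == 0:
        return 0.0
    return abs(value - m) / s * math.sqrt(n / (n - 1))

def nalimov_outliers(domains, attribute, critical_value):
    """Return domains whose second-level label is an outlier for the given attribute.

    `attribute` maps a label to a number (e.g. len for the string length);
    `critical_value` comes from a Nalimov table for the given n and significance level.
    """
    values = [attribute(d.split(".")[-2]) for d in domains]
    return [d for d, v in zip(domains, values)
            if nalimov_statistic(v, values) > critical_value]

# Hypothetical usage: treat domains whose label length deviates strongly as
# non-DGA candidates, e.g. nalimov_outliers(domain_list, attribute=len, critical_value=...)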

DGA Classification

After the separation of DGA domains from non-DGA domains, we start with the classification of the DGA. The classifier analyzes the list of DGA domains and creates a regex that is as specific as possible and matches all of these DGA domains. If the separation is not completely successful, the program will continue with the classification based on both DGA domains and non-DGA domains, which can lead to a wrong description of the DGA. But not every failing separation process causes a wrong classification. In some cases, if non-DGA domains cannot be differentiated from DGA domains regarding any domain attributes, the classification will still return the correct DGA description, since it covers only the relevant domain attributes. If the failing separation does cause a wrong DGA description, the resulting wrong or imprecise description can still be interpreted as a fingerprint calculated over the requested non-DGA domains and the DGA domains. That fingerprint is still useful to group malware of the same family, because it is very common that those requested non-DGA domains occur in other malware samples of the same family, too.

DGAs do not necessarily always generate the same set of domains, because in most cases the seed of the DGA changes (usually the date is used as the seed).
In the following picture, you can see that the calculated DGA regexes do not match because of the differing first letter, which seems to be seed-dependent in this case:

not equal dga
DGA with seed dependent first character

An important requirement for the automatically generated DGA description is that it needs to be independent of the seed. Since, from our perspective, it is not possible to determine which part of a DGA domain is seed-dependent, we use an approach that tries to generalize the seed-dependent part of a domain.
For this task, we use three layers of regexes that are hierarchically arranged:

  1. First layer: very generalized pseudo-regex
  2. Second layer: generalized regex
  3. Third layer: specific regex

All of these regexes can be interpreted as descriptions of the same DGA (calculated from only one sample) at different levels of precision.
Such a hierarchy could look like this:

tinba dga
Tinba-DGA
simda dga
Simda-DGA
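The regex-generation step can be pictured roughly as follows. This sketch only derives the specific and generalized layers for the second-level label from the observed characters and lengths; it illustrates the idea and is not our actual implementation, and the helper names are made up.

def char_class(labels):
    """Character class covering every character observed in the labels, e.g. '[b-y]'."""
    chars = sorted(set("".join(labels)))
    ranges, start, prev = [], chars[0], chars[0]
    for ch in chars[1:]:
        if ord(ch) == ord(prev) + 1:
            prev = ch
            continue
        ranges.append(start if start == prev else f"{start}-{prev}")
        start = prev = ch
    ranges.append(start if start == prev else f"{start}-{prev}")
    return "[" + "".join(ranges) + "]"

def length_quantifier(labels):
    lo, hi = min(map(len, labels)), max(map(len, labels))
    return f"{{{lo}}}" if lo == hi else f"{{{lo},{hi}}}"

def layered_regexes(domains):
    """Return (specific, generalized, pseudo) regexes for a list of DGA domains."""
    labels = [d.split(".")[-2] for d in domains]
    tlds = sorted({d.split(".")[-1] for d in domains})
    tld = tlds[0] if len(tlds) == 1 else "(" + "|".join(tlds) + ")"
    specific = char_class(labels) + length_quantifier(labels) + r"\." + tld
    generalized = "[0-9a-z]" + length_quantifier(labels) + r"\." + tld
    pseudo = r"<label>\.<tld>"  # very generalized pseudo-regex
    return specific, generalized, pseudo

# With enough Tinba-generated domains as input, the specific layer would come out
# close to the [b-y]{12}\.com pattern shown in the evaluation below.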

Evaluation

Out of 113,993 samples, the DGA classifier detected 782 DGA-based malware samples.
To determine the false positive rate, we reviewed the results of the analysis system manually. Regarding DGA detection we found 38 false positives in our result set. Hence we have a false positive rate lower than 0.049% (under the assumption that the DGA-based malware samples queried a relatively high number of different domains). A false negative evaluation is hard in this case, because the number of input samples is too high for manual evaluation, and the ground truth of a large sample set required for an automatic false negative evaluation is missing.

The following excerpt shows some specific DGA regexes from the DGA classification, which used 38,380 DGA-based malware samples as input:

Domain Fingerprint / Regex Matches Family name
[0-9A-Za-z]{8}\.kuaibu8\.cn 569 Razy
[a-hj-z]{3}y[a-z]{3}\.com 2082 simda
[a-z]{11}\.eu 829 simda
[a-z]{6,12}\.(com|info|net|org|dyndns\.org) 2047 Pykspa
[b-y]{12}\.(com|in|net|ru) 8508 tinba
[b-y]{12}\.com 6296 tinba
[b-y]{12}\.pw 7714 tinba
[b-y]{12}\.(biz|pw|space|us) 35 tinba
[b-y]{12}\.(cc|com|info|net) 17 tinba
[d-km-y]{12}\.(com|in|net|ru) 172 tinba
[d-y]{12}\.(com|in|net|ru) 110 tinba
[c-y]{12}\.(com|in|net|ru) 45 tinba
[df-km-su-x]{12}\.(com|in|net|ru) 110 tinba
[a-z]{8}\.info 216 tinba
v[12]{1}\.[a-uw-z]{7}\.ru 77 Kryptik
v1\.[a-uw-z]{7}\.ru 579
[b-np-z]{7,11}\.(com|net) 113
[0-9A-Za-z]{8}\.[aiktux]{3}i[abnu]{2}8\.cn 114
[a-y]{6,19}\.com 644 Ramnit
[a-z]{14,16}\.(biz|com|info|net|org) 173

It is conspicuous that the Tiny Banker Trojan (Tinba) appears very frequently with different specific regexes in the result set. After generalizing most of the Tinba regexes, as described in the DGA Classification section, it becomes possible to group all of these samples with a single regex. The missing family names are due to the fact that we could not automatically determine to which malware family the samples using the DGA belong.

Conclusion

Our results show that DGA detection and DGA classification can be very useful to detect new malware samples by their DGA. It is thereby also possible to find links between old and new malware samples of the same family via their classified DGA.
The DGA detection appears to be very reliable for samples that have queried many different domains. Our concept for DGA classification is successful in many cases. However, there are still cases where the calculated DGA descriptions are not correct, because the created patterns are sometimes overfitted to the given domain lists, or non-DGA domains were also considered in the calculation of the DGA descriptions. To contain this problem, we use a multi-layered regex generalization.
Even wrong DGA descriptions can still be considered fingerprints calculated over the domain list of the sample. Such a fingerprint can be used to classify the DGA-based malware, so it still makes a good contribution to automated malware analysis.

Literature

[A. 84] A. Zanker. Detection of outliers by means of Nalimov’s test – Chemical Engineering, 1984.

[H. 16] H. Zhang, M. Gharaibeh, S. Thanasoulas, C. Papadopoulos Colorado State University, Fort Collins, CO, USA. BotDigger: Detecting DGA Bots in a Single Network, 2016.

[R. 13] R. Sharifnya and M. Abadi – Tarbiat Modares University Tehran, Iran. A Novel Reputation System to Detect DGA-Based Botnets, 2013.

[T. 13] T. Frosch, M. Kührer, T. Holz – Horst Görtz Institute (HGI), Ruhr-University Bochum, Germany. Predentifier: Detecting Botnet C&C Domains From Passive DNS Data, 2013.

Negative Result: Reading Kernel Memory From User Mode

I was going to write an introduction about how important negative results can be. I didn't. I assume you can figure out for yourself why that is, and if not, you have all the more reason to read this blog post. If you think it's trivial why my result is negative, you definitely need to read the blog post.

The memory subsystem

I think most researchers would immediately think that reading kernel memory from an unprivileged process cannot work because "page tables". That is what the Instruction Set Architecture or ISA says. But in order to understand why that was unsatisfactory to me, we need to dig a bit deeper, and I'll start at the beginning.

 
When software running on a core requires memory, it starts a so-called "load" command. The load command is then processed in multiple stages until the data is found and returned or an error occurs. The figure below shows a simplified version of this subsystem.

Memory hierarchy

Software, including the operating system, uses virtual addressing to start a load (which is what a memory read is called inside the CPU). The first stage of processing is the L1 cache. The L1 cache is split between a data and an instruction cache. The L1 cache is a so-called VIPT or Virtually Indexed, Physically Tagged cache. This means the data can be looked up directly using the virtual address of the load request. This, along with its central position in the core, makes the L1 incredibly fast. If the requested data is not found in the L1 cache, the load must be passed down the cache hierarchy. This is the point where the page tables come into play. The page tables are used to translate the virtual address into a physical address. This is essentially how paging is enabled on x64. It is during this translation that privileges are checked. Once we have a physical address, the CPU can query the L2 cache and L3 cache in turn. Both are PIPT caches (Physically Indexed, Physically Tagged), thus requiring the translation in the page tables before the lookup can be done. If no data was in the caches, the CPU will ask the memory controller to fetch the data from main memory. The latency of a data load from the L1 cache is around 5 clock cycles, whereas a load from main memory is typically around 200 clock cycles. With the security check residing in the page tables, and the L1 located before those, we can already see at this point that the "because page tables" argument is too simple. That said, the Intel Software Developer's Manuals [1] state that the security settings are copied along with the data into the L1 cache.

 

Speculative execution

In this and the next section I'll outline the argumentation behind my theory, but only outline it. For those a bit more interested, it might be worth looking at my talk at HackPra from January [3], where I describe the pipeline in slightly more detail, though in a different context. Intel CPUs are superscalar, pipelined CPUs, and they use speculative execution, which plays a big role in why I thought it might be possible to read kernel memory from an unprivileged user-mode process. A very simplified overview of the pipeline can be seen in the figure below.

 

Simplified pipeline

The pipeline starts with the instruction decoder. The instruction decoder reads bytes from memory, parses the buffer and outputs instructions, which it then decodes into micro-ops. Micro-ops are the building blocks of instructions. It is useful to think of x86 CPUs as a Complex Instruction Set Computer (CISC) with a Reduced Instruction Set Computer (RISC) backend. It's not entirely true, but it's a useful way to think about it. The micro-ops (the reduced instructions) are queued into the reorder buffer. Here micro-ops are kept until all their dependencies have been resolved. Each cycle, any micro-ops with all dependencies resolved are scheduled on available execution units in first-in, first-out order. With multiple specialized execution units, many micro-ops are executing at once, and importantly, due to dependencies and bottlenecks in execution units, micro-ops need not execute in the same order as they entered the reorder buffer. When a micro-op is finished executing, the result and exception status are added to its entry in the reorder buffer, thus resolving dependencies of other micro-ops. When all micro-ops belonging to a given instruction have been executed and have made it to the head of the reorder buffer queue, they enter retirement processing. Because the micro-ops are at the head of the reorder buffer, we can be sure that retirement happens in the same order in which the micro-ops were added to the queue. If an exception was flagged in the reorder buffer for a micro-op being retired, an interrupt is raised on the instruction to which the micro-op belonged. Thus the interrupt is always raised on the instruction that caused it, even if the micro-op that caused the interrupt was executed long before the entire instruction was done. A raised interrupt causes a flush of the pipeline, so that any micro-op still in the reorder buffer is discarded and the instruction decoder is reset. If no exception was thrown, the result is committed to the registers. This is essentially an implementation of Tomasulo's algorithm [2] and allows multiple instructions to execute at the same time while maximizing resource use.

 

Abusing speculative execution

Imagine the following instruction executed in usermode

mov rax,[somekernelmodeaddress]

It will cause an interrupt when retired, but it's not clear what happens between when the instruction finishes executing and the actual retirement. We know that once retired, any information it may or may not have read is gone, as it'll never be committed to the architectural registers. However, maybe we have a chance of seeing what was read if Intel relies perfectly on Tomasulo's algorithm. Imagine the mov instruction sets the result in the reorder buffer as well as the flag that retirement should cause an interrupt and discard any data fetched. If this is the case, we can execute additional code speculatively. Imagine the following code:

mov rax, [Somekerneladdress]

mov rbx, [someusermodeaddress]

If there are no dependencies, both will execute simultaneously (there are two execution units for loads). The second will never get its result committed to the registers, because it'll be discarded when the first mov instruction causes an interrupt to be thrown. However, the second instruction will still execute speculatively, and it may change the microarchitectural state of the CPU in a way that we can detect. In this particular case, the second mov instruction will load

someusermodeaddress

into the cache hierarchy, and we will be able to observe a faster access time after structured exception handling has taken care of the exception. To make sure that someusermodeaddress is not already in the cache hierarchy, I can use the clflush instruction before starting the execution. Now only a single step is required for us to leak information about the kernel memory:

mov rax, [somekerneladdress]

and rax, 1

mov rbx, [rax+someusermodeaddress]

 

If this is executed and the last two instructions run speculatively, the address loaded differs depending on the value loaded from somekerneladdress, and thus different cache lines may be loaded into the cache. We can observe this cache activity through a flush+reload cache attack.

The first problem we're facing here is that we must make sure the second and third instructions run before the first one retires. We have a race condition we must win. How do we engineer that? We fill the reorder buffer with dependent instructions which use a different execution unit than the ones we need. Then we add the code listed above. The dependency forces the CPU to execute the filling instructions one micro-op at a time. Using an execution unit different from the one used by the leaking code makes sure that the CPU can speculatively execute the leak while still working on the fill pattern.

The straightforward fill pattern I use is 300 add reg64, imm instructions. I chose add rax, 0x141, because we have two execution units that are able to execute this instruction (integer ALUs), and since the fill instructions must execute sequentially, one of these units will always be available to my leakage code (provided that another hardware thread on the same core isn't mixing things up).

Since my kernel-read mov instruction and my leaking mov instruction must run sequentially, and the data fetched by the leaking instruction cannot be in the cache, the total execution time would be around 400 CLKs if the kernel address isn't cached. This is a pretty steep cost given that an add rax, 0x141 costs around 3 CLKs. For this reason, I see to it that the kernel address I'm accessing is loaded into the cache hierarchy. I use two different methods to ensure that. First, I call a syscall that touches this memory. Second, I use the prefetcht0 instruction to improve my odds of having the address loaded in L1. Gruss et al. [4] concluded that prefetch instructions may load the cache despite not having access rights. Further, they showed that it's possible for the page table traversal to abort, which would surely mean that I'm not getting a result. But having the data already in L1 avoids this traversal.
 

All said and done, there are a few assumptions I made about Intel's implementation of Tomasulo's algorithm:

1)    Speculative execution continues despite interrupt flag

2)    I can win the race condition between speculative execution and retirement

3)    I can load caches during speculative execution

4)    Data is provided despite interrupt flag

 

Evaluation

I ran the following tests on my i3-5005u Broadwell CPU.

I found no way to separately test assumptions 1, 2 and 3. So I wrote up the code I outlined above, but instead of the code above I used the following:

mov rdi, [rdi]           ; where rdi is somekernelmodeaddress

mov rdi, [rdi + rcx + 0] ; where rcx is someusermodeaddress

Then I timed accessing someusermodeaddress after an exception handler had dealt with the resulting exception. I did 1000 runs and sorted the 10 slowest out. First I did a run with the 2nd line above present and the value at the kernel-mode address set to 0. I then did a second run with the second line commented out. This allows me to test whether I have a side channel inside the speculative execution. The results are summarized in the histogram below (mean and standard deviation respectively: mean = 71.22, std = 3.31; mean = 286.17, std = 53.22). So obviously I have a statistically significant covert channel inside the speculative execution.
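For illustration, this post-processing can be sketched in a few lines once the raw access timings of the two runs have been collected. This is not my actual measurement harness, just a sketch of the evaluation; the threshold parameter is an arbitrary assumption.

import statistics

def summarize(timings, drop_slowest=10):
    """Drop the slowest measurements (interrupt noise etc.) and report mean and std."""
    kept = sorted(timings)[:-drop_slowest] if drop_slowest else list(timings)
    return statistics.mean(kept), statistics.stdev(kept)

def covert_channel_present(with_leak, without_leak, threshold_clk=50):
    """with_leak: access times to someusermodeaddress with the speculative load present.
    without_leak: the same measurement with that line commented out.
    A clearly faster (cached) access in the first run indicates that the
    speculatively executed load touched the cache line."""
    mean_with, _ = summarize(with_leak)
    mean_without, _ = summarize(without_leak)
    return (mean_without - mean_with) > threshold_clk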

 
In a second step I made sure that the kernel address I was trying to read had the value 4096. Now, if I'm actually reading kernel-mode data, the second line will fetch a cache line in the next page, and thus I would expect a slow access to someusermodeaddress, since the speculative fetch should access a cache line exactly one page further. I selected the offset 4096 to avoid effects due to hardware prefetchers; otherwise I could have added just the size of a cache line. Fortunately, I did not get a slow read, suggesting that Intel nulls the result when the access is not allowed. I double-checked by accessing the cache line I wanted to access, and indeed that address was not loaded into the cache either. Consequently, it seems likely that Intel does process the illegal read of kernel-mode memory, but does not copy the result into the reorder buffer. So at this point my experiment has failed, and thus the negative result. Being really curious, I added an add rdi, 4096 instruction after the first line in the test code and could verify that this code was indeed executed speculatively and the result was side-channeled out.

 

Pandora's box

While I did set out to read kernel mode without privileges and that produced a negative result, I do feel like I opened a Pandora's box. The thing is, there were two positive results in my tests. The first is that Intel's implementation of Tomasulo's algorithm is not side channel safe. Consequently, we have access to results of speculative execution despite the results never being committed. Secondly, my results demonstrate that speculative execution does indeed continue despite violations of the isolation between kernel mode and user mode.

This is truly bad news for security. First, it gives microarchitectural side channel attacks additional leverage – we can deduce information not only from what is actually executed but also from what is speculatively executed. It also seems likely that we can influence what is speculatively executed and what is not by influencing caches like the BTB; see Dmitry Evtyushkin and Dmitry Ponomarev [5] for instance. It thus adds another possibility to increase the expressiveness of microarchitectural side channel attacks and potentially allows an attacker even more leverage through the CPU. This of course makes writing constant-time code even more complex, which is definitely bad news.

It also draws into doubt mitigations that rely on retirement of instructions. I cannot say I know how far that stretches, but my immediate guess would be that vmexits are handled on instruction retirement. Further, we see that speculative execution does not consistently abide by isolation mechanisms, so it is a haunting question what we can actually do with speculative execution.

 

Literature

[1] Intel® 64 and IA-32 Architectures Software Developer Manuals. Intel. https://software.intel.com/en-us/articles/intel-sdm

 

[2] Tomasulo, Robert M. (Jan 1967). “An Efficient Algorithm for Exploiting Multiple Arithmetic Units”. IBM Journal of Research and Development. IBM. 11 (1): 25–33. ISSN 0018-8646. doi:10.1147/rd.111.0025.

 

[3] Fogh, Anders. “Covert shotgun: Automatically finding covert channels in SMT”

https://www.youtube.com/watch?v=oVmPQCT5VkY.

 

[4] Gruss, Daniel, et al. “Prefetch side-channel attacks: Bypassing SMAP and kernel ASLR.”

Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2016.

 

[5] Evtyushkin D, Ponomarev D, Abu-Ghazaleh N. Jump Over ASLR: Attacking Branch Predictors to Bypass ASLR. InProceedings of 49th International Symposium on Microarchitecture (MICRO) 2016.

Statistics and Infosec

 

Introduction

 

The work by John Ioannidis [1], among others, started the "reproduction crisis". The "reproduction crisis" is the phrase used to describe that in many branches of science researchers have been unable to reproduce classic results which form the basis for our understanding of the world. Since the basic principle of modern science is standing on the shoulders of giants, it is incredibly important that sound methodology is used to ensure that our foundation is solid. For now the reproduction crisis has hardly reached information security, and I hope it will stay that way. But unfortunately the information security scene is full of issues that make it vulnerable to reproduction problems. Top among them are methodology and the lack of reproduction studies. A huge number of papers are accepted as true but never reproduced, and many papers suffer from methodological problems.

 

In this blog post I will argue that among information security papers, very basic statistics could improve the process and reproducibility. Statistics is important as soon as observed data are noisy, and noisy data are everywhere, even in computer science. Users are non-deterministic; unobserved variables, manufacturing variance, temperature and other environmental influences all affect our measurements. This of course does not mean that we cannot say anything meaningful about computers – it just means we need to embrace statistics, and so far information security science has not been very successful at that. The purpose of this blog post is to suggest a baby step in the right direction – using sample size, mean and standard deviation as a means of improving the exposition of noisy data. I intentionally keep the theory short – I encourage everybody to read a real statistics book. Also, with the intention to provoke, I picked two examples of papers where the statistics could have been done better.

 

Statistics, Sample Size, Mean, and Standard Deviation 101

During a lecture, my first statistics professor said that he would automatically flunk any of us in the future if we reported on noisy data without mentioning three data points on them. These data points are the sample size (N), the mean, and the standard deviation. The first reason for bringing these three data points is that it is as close as one gets to a standard in science. It is accepted in other sciences ranging from physics through medicine to economics. Having a standard for reporting noisy data allows us to compare studies easily and gives us an easy way to reproduce the findings of other people. Unfortunately, this standard appears not to have arrived in full force in information security yet.

But it is not only about a standard. The thing is: using these three values is not just a historical accident. There is sound theory behind why we want these three values. Imagine a process generating our data – usually called the data generating process or d.g.p. If the d.g.p. produces observations where the noise is independent of other observations, we say the d.g.p. produces independent, identically distributed observations, or i.i.d. If observations are not i.i.d., we cannot treat any variation as noise, and consequently we need further analysis before we can draw any conclusion based on the data. Hence, i.i.d. is often assumed or considered a good approximation. And very often there are good reasons to do so.

With i.i.d. data the arithmetic mean of our observations will almost surely converge towards the mean of the d.g.p. as the number of observations increases. This is called the law of large numbers. If one rolls a die a thousand times and gets an arithmetic mean of 3.0, it is an indication that the die is not fair. This explains why we wish to report the arithmetic mean. If the number of samples is sufficiently high, then the arithmetic mean is "close" to the real mean of the data generating process.

Also, with i.i.d. data the central limit theorem almost always applies. The central limit theorem says that we can expect the mean to tend towards a normal distribution as the sample size increases. The normal distribution is characterized by only two parameters, its mean and its standard deviation. It allows us to check hypotheses about the data. The most basic tests can even be made on the back of an envelope. This allows us to put a probability on the die used in the previous example not being fair – that we are not just observing noise.

Why we should include the sample size should be pretty obvious by now. Both the law of large numbers and the central limit theorem hinge on convergence over the sample size. Thus, if we have a small sample size, we are unable to rely on any conclusions. That should not surprise you. Imagine throwing a die 3 times, getting 1, 2, and 4 – you would be a fool to insist that your data show that 6 rarely happens. If you have thrown the die a million times and do not see any 6's, the die almost surely is not fair, and the aforementioned conclusion would be pertinent.

In most first (or second, if applied with rigor) semester statistics books, you can find a discussion on estimating how many samples you need based on the estimated standard deviation from a pilot sample or prior knowledge of the d.g.p. That of course can be turned around to evaluate whether a study had a sufficient sample size – yet another reason why these three numbers are valuable.
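As a back-of-the-envelope illustration of that textbook procedure, the following sketch estimates the number of observations needed to pin down a mean within a chosen margin of error, using the standard deviation of a pilot sample (the pilot data below are made up).

import math
import statistics

def required_sample_size(pilot, margin_of_error, z=1.96):
    """n >= (z * s / E)^2 with s estimated from the pilot sample (z = 1.96 for 95% confidence)."""
    s = statistics.stdev(pilot)
    return math.ceil((z * s / margin_of_error) ** 2)

# a made-up pilot of 20 noisy measurements
pilot = [103, 98, 101, 110, 97, 105, 99, 102, 100, 108,
         96, 104, 101, 99, 107, 103, 98, 100, 106, 102]
print(required_sample_size(pilot, margin_of_error=2.0))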

In short, three small numbers in a 14-page paper can provide deep insight into the soundness of your methodology as well as making your paper easily comparable and reproducible. If you are in doubt, you need to bring these three numbers. One word of caution though: I think these numbers are necessary, but they may not be sufficient to describe the data. Also, you will have no quarrel with me if I can easily calculate the standard deviation from the reported data. For example, reporting variance instead of standard deviation, or omitting the standard deviation when reporting on proportions (the Bernoulli distribution is described by its first moment alone), is just fine.

 

Case study 1: A Software Approach to Defeating Side Channels in Last-Level Caches

I will start out with a pretty good paper that could have been better if standard deviation and mean had been mentioned: "A Software Approach to Defeating Side Channels in Last-Level Caches" by Zhou et al. [2]. The authors develop CacheBar – a method to mitigate cache side channel attacks. For Flush+Reload attacks they do a copy-on-access implementation to avoid sharing memory, which effectively kills Flush+Reload as an attack vector. I shall concentrate on Zhou et al.'s [2] protection against Prime+Probe. For this purpose, Zhou et al. [2] use an overcommitted page coloring scheme. Using overcommitted page coloring allows for significantly more flexible memory allocation and thus less performance penalty than the classic page coloring scheme. However, an attacker will remain able to Prime a cache set to a certain extent. Consequently, the scheme does not defeat Prime+Probe, but it does add noise. The paper analyzes this noise by first grouping observations of defender demand on a cache set into "None, one, few, some, lots and most" categories. They subsequently train a Bayesian classifier on 500,000 Prime+Probe trials and present the classification result in a custom figure in matrix form. I have no objections thus far. If the authors think that this method of describing the noise provides insights, then that is fine with me. I am however missing the standard deviation and mean – they do offer the sample size. Why do I think the paper would have been better if they had printed these two values?

  • The categories are arbitrary: To compare the results to other papers with noise (say noisy timers) we would have to categorize in exactly the same fashion.
  • Having reported means we would immediately be able to tell if the noise is biased. Bias would show up in the figures as misprediction but an attacker could adjust for bias.
  • With standard deviation we could calculate on a back of an envelope how many observations an attacker would need to actually successfully exfiltrate the information she is after and thus evaluate if the mitigation will make attacks impractical in different scenarios. For example, you can get lots of observations of encryption, but you cannot ask a user for his password 10000 times.
  • Reproducing a study in the presence of noisy data will always show slight differences. When are the black-and-white matrix figures close enough to be considered reproduced? For means and standard deviation that question has been resolved.

That all said, I do not worry about the conclusion of this paper. The sample size is sufficient for the results to hold up and the paper is generally well written. I think this paper is a significant contribution to our knowledge on defending against cache side channel attacks.

 

Case study 2: CAn’t Touch This: Practical and Generic Software-only Defenses Against Rowhammer Attacks

Caveat, Apology and More Information

Before I start with my analysis of the statistics in this paper, I should mention that I had given my off-the-cuff comments on this paper before, and I generally stand by what I wrote on that occasion. You will find my comments published in Security Week [7]. I did get one thing wrong in my off-the-cuff remarks: I wrote that Brasser et al. [3] draw heavily on Pessl et al. [4]. Brasser et al. do not use the DRAM mapping function used by Pessl et al. [4], as I initially thought they did. Instead, they use an ad-hoc specified approximate mapping function. Also, it is very important to mention that I am referring to an early version of the paper on arXiv; later revisions may differ, but that does not change the substance of my previous off-the-cuff remarks or of what I write in this blog post. As my critique of the paper is relatively harsh, I emailed the authors ahead of posting the blog post. As of publishing the blog post, I have not yet received an answer.

Getting dirty with it

“CAn’t Touch This: Practical and Generic Software-only Defenses Against Rowhammer Attacks” by Brasser et al. [3] implements two mitigations for the rowhammer problem: B-Catt and G-Catt. G-Catt uses memory partitioning so that the kernel is not co-located in the same banks as user-mode memory, which prevents code from hammering the kernel space from user mode. I am not convinced that G-Catt cannot be circumvented, but that is not the subject of this blog post. I will instead focus on B-Catt. Kim et al. [5] suggested not using vulnerable memory addresses, and B-Catt implements this idea in a boot loader, rather elegantly using int 15h, subfunction 0xe820, which provides the operating system a list of memory regions it should avoid using. If the computer is not using any vulnerable addresses, one cannot flip bits with row hammer and the system is safe. Obviously, not using some of the physical memory comes at a cost for the end user. Kim et al. [5] concluded that “However, the first/second approaches are ineffective when every row in the module is a victim row (Section 6.3)”. In contrast, Brasser et al. [3] conclude: “…we demonstrate that it is an efficient and practical solution that effectively prevents rowhammer attacks as a short-term solution”. So, the big question is who gets it right here. The key issue is how many pages contain bit flips. Kim et al. [5] do not provide numbers on pages, but Brasser et al. do. They evaluate 3 test systems to support their conclusion: “However, our evaluation (Section VI-A2) suggests that only a fraction of rows are vulnerable in practice.”

And this is where my gripe with the statistics in this paper lies. Sometimes you can get away with a tiny sample size: when you have prior evidence that the sample variance is either irrelevant or negligible. Relevance of the variance is given here. Prior evidence of a small variance is not. In fact, I will argue quite the contrary later.

But first let me get rid of the terribly loose language of “practical and efficient”. My 15 years of experience as a professional software developer tell me that software needs to work as advertised on at least 99% of the systems. That is in my opinion a conservative value. Further, assume that 5% memory overhead is acceptable for B-Catt users. Notice here that I defined practical as a fraction of systems that must be running with an acceptable overhead. The alternative would have been defining practical as a maximal acceptable average overhead. I consider the fraction approach the most applicable in most real-world scenarios – it certainly is easier to do back-of-envelope calculations with, which I will be doing for the remainder of this blog post.

Now we can put the sample size of 3 into perspective. Imagine that in reality 10% of systems have more pages with bit flips than is acceptable. With a sample size of 3, that gives us a 73% (0.9³ ≈ 0.73) chance that our evaluation will be “suggesting” that we are indeed practical. Being wrong 73% of the time is not a suggestion of being right. We need more data than 3 observations to assume practicality. Obviously, you may argue that my 10% is pulled out of a hat – and it is indeed. It is a conservative number, but not unrealistic.

This leads us to the question of how much data we need. The answer to that question depends on the variance in the data, something that we do not know. So, we can either make an a-priori guess based on domain knowledge or do a pilot sample to get an idea. My first-semester statistics book [6] states: “We recommend taking at least 20 observations in a pilot sample” (p. 419). So, the authors' samples do not even qualify as a pilot sample.

So we are left with turning to domain knowledge. Kim et al. [5] in my opinion have the best available data. They sample 129 modules but do not mention the number of affected pages. They do however mention that 3 modules have more than 40% affected rows. On a two-channel Skylake system (to my knowledge the best case) we can have 4 rows per page; assume all bit flips occur so that the smallest number of pages is affected – then we end up with 3 modules with at least 10% overhead out of 129. Thus no less than 2.3% of modules will have too much overhead, according to my definition, in Kim's samples. This certainly constitutes a reason for skepticism about Brasser et al.'s [3] claims.

Kim et al. [5] unfortunately do not calculate a mean or standard deviation – they do report their complete data, so I could calculate it, but instead I will use a bit of brute force to get somewhere. We know that rowhammer is fairly bad because 6 of the 129 samples have more than 1 million bit flips. Assuming that each cell is equally likely to flip, that gives us a 14.8% probability that a given page is vulnerable for any of these 6 modules. This assumption has weak support in Kim et al. [5]. Consequently, at least 4.6% of the modules are not practical under my definition and my fairly conservative assumptions. With a finite base of modules (200), under the assumption that modules are equally common in the wild, and with 129 observations, we can calculate a 95% confidence interval. This confidence interval tells us that between 2.44% and 6.76% of the assumed 200 modules in the population are vulnerable. Obviously, it is not realistic to assume only 200 modules in existence, but if there are more, the confidence interval would be even larger. Thus, under these assumptions we must dismiss that B-Catt is practical with 95% confidence. It looks like B-Catt is in trouble.
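The interval above can be reproduced roughly with a normal-approximation confidence interval for a proportion plus a finite-population correction. Depending on the exact correction and rounding used, the bounds shift slightly, so treat this as a sketch of the calculation rather than the authoritative numbers.

import math

def proportion_ci(successes, n, population, z=1.96):
    """95% normal-approximation CI for a proportion, with finite-population correction."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    fpc = math.sqrt((population - n) / (population - 1))
    margin = z * se * fpc
    return p - margin, p + margin

# 6 of 129 sampled modules assumed impractical, assumed population of 200 modules
low, high = proportion_ci(6, 129, 200)
print(f"{low:.2%} - {high:.2%}")   # roughly 2.5% - 6.8%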

The assumptions above are of course very restrictive, and thus more research is required. What would this research look like? A pilot sample with at least 20 samples would be a good start. Alternatively, we could use my above assumptions to calculate a sample size. If we do this, we end up with the number 80.

Obviously, everything here rides on my definition of practicality and efficiency. My gut feeling is that B-Catt won't turn out to be practical if we had a reasonable sample. Nevertheless, I would love to see data on the issue. While I think my assumptions are fair for a mainstream product, there might be special cases where B-Catt may shine. For a data center that can search for a RAM product with few bit flips, B-Catt may very well be a solution.

 

Literature

[1] Ioannidis, John P. A. “Why most published research findings are false.” PLoS Med 2.8 (2005): e124.

[2] Zhou, Ziqiao, Michael K. Reiter, and Yinqian Zhang. “A software approach to defeating side channels in last-level caches.” Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2016.

[3] Brasser, Ferdinand, et al. “CAn’t Touch This: Practical and Generic Software-only Defenses Against Rowhammer Attacks.” arXiv preprint arXiv:1611.08396 (2016).

[4] Pessl, Peter, et al. “DRAMA: Exploiting DRAM addressing for cross-CPU attacks.” Proceedings of the 25th USENIX Security Symposium. 2016.

[5] Kim, Yoongu, et al. “Flipping bits in memory without accessing them: An experimental study of DRAM disturbance errors.” ACM SIGARCH Computer Architecture News. Vol. 42. No. 3. IEEE Press, 2014.

[6] Berry, Donald A., and Bernard William Lindgren. Statistics: Theory and Methods. Duxbury Resource Center.

[7] Kovacs, Eduard. “Researchers Propose Software Mitigations for Rowhammer Attacks.” http://www.securityweek.com/researchers-propose-software-mitigations-rowhammer-attacks

New cache architecture on Intel I9 and Skylake server: An initial assessment

 

Intel has introduced the new I9 CPU, which is positioned as a HEDT (High-End DeskTop) product. The micro architecture is in many respects shared with the new Skylake server micro architecture. If history is a guide, technology introduced in this segment slowly trickles down to more budget-friendly desktops. From a micro architecture point of view it seems that several things about these CPUs will force changes on micro architectural attacks – especially in the memory subsystem. In this blog post I'll give a short overview of some of the relevant changes and the effects they may have on micro architectural attacks. Since I don't own or have access to an actual I9 processor or Skylake server processor, this blog post is just conjecture.

 

Resume of the “old” cache hierarchy

The major change over earlier processors from a micro architecture point of view is that the cache system has received a significant overhaul. Current Intel CPUs have a 3-level cache hierarchy – two very small L1 caches, one for data, one for instructions. The second-level cache (L2 or Mid Latency Cache) is somewhat larger. L1 data, L1 code and L2 are part of each core and private to it. Finally, Intel CPUs had a huge 3rd-level cache (usually called L3 or last level cache) shared between all cores. The 3rd-level cache is subdivided into slices that are logically connected to a core. To effectively share this cache, Intel connected them on a ring bus called the Quick Path Interconnect. Further, the 3rd-level cache was an inclusive cache, which means that anything cached in L1 or L2 must also be cached in L3.

 

Changes

Some of the important changes that have been announced in the Intel Software Optimization Manuals [1] are:

–    A focus on a high number of cores in the CPU (up to 18 in the HEDT models)

–    A reduced over all cache size per core (compared to similar older models)

–    A very significant increase in the size of the L2 (factor of 4)

–    Doubled the bandwidth of L2, while only slightly increasing the latency

–    Slightly more than offset by a reduction of the shared L3.

–    Reorganized  L3 cache to be a non-inclusive cache

–    Replaces the QPI with a mesh-style bus.

Why do these changes make sense?

Increasing the size of the L2 at the cost of the L3 makes sense, as the L2 is much faster than the L3 for applications – one can only assume that shrinking the L3 helps reduce die size and cost. Also, the increase in size of the L2 caches reduces the marginal utility of the L3. Finally, as the probability of cache set contention rises with the number of cores, it becomes advantageous to make a larger part of the total cache private. Cache contention in L3 is a problem Intel has battled before: with Haswell they introduced Cache Allocation Technology (CAT) to allow software to control cache usage of the L3 to deal with contention.

The number of cores is probably also the reason why Intel dropped the QPI ring bus design. There is a penalty for having memory served from another core's slices. On a ring bus this penalty is proportional to how far apart the cores are on the ring. With more cores, this penalty increases. Moving to a more flexible system seems sensible from this vantage point.

Having an inclusive L3 makes cache coherency easier and faster to manage. However, an inclusive cache comes with a cost, as the same data is loaded in multiple caches. The relative loss of total cache storage space is exactly the ratio of the (L2 + L1) to L3 sizes. Previously this ratio has been around 1:10 (depending on the actual CPU), but multiplying the L2 size by 4 and making the L3 a tiny bit smaller brings the ratio to about 1:1.5. Thus, making the L3 cache non-inclusive is essential to performance. At this point it's important to notice that Intel uses the wording "non-inclusive". This term is not well defined. The opposite of inclusive is exclusive, meaning the content of L3 cannot be loaded in L1 and L2. I think Intel would have used the well-defined term exclusive if the cache really were exclusive. Thus, it is probably fair to assume that non-inclusive means that data may or may not be cached in L1, L2 and L3 at the same time, but exactly how this is governed is important. Unfortunately there is no information available on this.
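To put numbers on the ratio argument: with commonly cited per-core sizes – roughly 32 KB + 32 KB L1, 256 KB L2 and about 2.5 MB of L3 per core on the older designs, versus 1 MB L2 and 1.375 MB L3 per core on the Skylake server design – the duplication overhead of an inclusive L3 can be estimated as below. The exact sizes vary per SKU, so the ratios are only indicative.

def inclusive_duplication_ratio(l1_kb, l2_kb, l3_kb):
    """Fraction of L3 capacity that an inclusive design duplicates from L1 + L2."""
    return (l1_kb + l2_kb) / l3_kb

# older design: roughly 1:8, in the same ballpark as the 1:10 quoted above
print(1 / inclusive_duplication_ratio(l1_kb=64, l2_kb=256, l3_kb=2560))
# Skylake server design: roughly 1:1.3, close to the 1:1.5 quoted above
print(1 / inclusive_duplication_ratio(l1_kb=64, l2_kb=1024, l3_kb=1408))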

It's worth noting that many of these changes have been tested and developed by Intel for the Knights Landing micro architecture. Knights Landing is a high-throughput micro architecture, sold in relatively small amounts. Thus it's likely that many features developed for this CPU will end up trickling down. It'll be interesting to see if Intel plans to trickle them down into laptops/small desktops or use different cache designs for different classes of CPUs.

 

Effects

Cache side channel attacks

This new cache layout is likely to have a profound effect on cache side channel attacks. I think Flush+Reload will work even without the inclusive L3. The flush primitive is based on the CLFlush instruction, which is part of the instruction set architecture (ISA). Intel has been very reluctant in the past to change the ISA, and therefore my estimate is that flushing will work as always. I think the reload primitive will remain usable as well – I find it likely that an uncached load will also load data into the shared L3. Also, it's likely that the QPI-replacement bus can be used to fetch data from private L2 caches, similar to AMD's cross-CPU transmission. This would give Flush+Reload a nice Flush+Transfer flavor, but it would still work. For more on Invalidate (Flush)+Transfer see [2]. Since the L3 cache must be filled in some way, we can be fairly certain that at least one of these things is true, if not both – the latter being my guess.

The big changes are likely to be related to the evict and prime primitives. Assuming that cache contention between cores was a major reason for redesigning the cache, it's unlikely that one can load data into another core's private hierarchy. Thus, the prime and evict primitives are unlikely to keep working for cross-core attacks. However, both are likely to work decently within a core (hyper-threading or scheduling on the same core).

While the ISA behavior of CLFlush is almost certain to remain unchanged, the micro architecture below it will see significant changes. With QPI gone, using the Flush+Flush attack by Gruss et al. [3] to find out how many hops on the ring bus you are away from a particular slice almost certainly won't work. This does not mean that you won't find a side channel in CLFlush here; buses are typically bandwidth-limited and thus an obvious source of congestion – without an inclusive L3 the bus bandwidth might even be easier to consume. Also, the Flush+Flush attack as a Flush+Reload replacement is likely to produce different timing behavior in the new micro architecture. My guess upfront is that a timing difference, and thus a side channel, remains.

Also affected by the non-inclusiveness of the L3 are row buffer side channel attacks such as those presented by Pessl et al. [4]. Without effective eviction, cross-core attacks may be severely stifled. The ability to reverse engineer the DRAM complex mapping function is likely to remain unchanged, as it hinges not on eviction but on the CLFlush instruction's ISA behavior.

With the CLFlush instruction and eviction likely still working on the local core, row hammer will remain effective. But in some scenarios, indirect effects of the micro architecture changes may break specific attacks, such as that of Sarani Bhattacharya and Debdeep Mukhopadhyay [5]. The reason is that such attacks rely on information leakage through the caches, which becomes more difficult to leverage as described above.

 

Future

While the changes to the cache make sense with the significant number of cores in the affected systems, it seems unlikely that the changes will trickle down to notebooks and laptops. With only two cores, the existing cache design seems sensible. Thus we are likely to see a two-tier cache design landscape going forward.

Conclusion

Having a non-inclusive L3 cache is significantly more secure from a side channel perspective than an inclusive one in cross-core scenarios. This opens up a way to defend against these attacks by isolating different security domains on different cores, potentially dynamically. While Flush+Reload is likely to be unaffected, this attack is also the easiest to thwart in real-life scenarios, as avoiding shared memory across security domains is an available and effective countermeasure. Lots of new research is required to gauge the security of these micro architecture changes.

 

Literature

[1] Intel. Intel® 64 and IA-32 Architectures Optimization Reference Manual. July 2017. https://software.intel.com/sites/default/files/managed/9e/bc/64-ia-32-architectures-optimization-manual.pdf

[2] Irazoqui, Gorka, Thomas Eisenbarth, and Berk Sunar. “Cross processor cache attacks.” Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security. ACM, 2016.

[3] Gruss, Daniel, et al. “Flush+ Flush: a fast and stealthy cache attack.” Detection of Intrusions and Malware, and Vulnerability Assessment. Springer International Publishing, 2016. 279-299.

[4] Pessl, Peter, et al. “DRAMA: Exploiting DRAM Addressing for Cross-CPU Attacks.” USENIX Security Symposium. 2016.

[5] Sarani Bhattacharya, Debdeep Mukhopadhyay: “Curious case of Rowhammer: Flipping Secret Exponent Bits using Timing Analysis”. http://eprint.iacr.org/2016/618.pdf

The Kings In Your Castle Part 5: APT correlation and do-it-yourself threat research

Welcome back to the fifth and last part of our blog series The Kings In Your Castle, where we aim to shed light on how APTs function, what targeted malware looks like and the issues we analysts might find with it. If you are interested in how it all started, please check out parts 1, 2, 3 and 4; namely here, here, here and here.

In part 5, I will describe how we leveraged our gathered data for correlations, to unveil connections among targeted attacks reported to CIRCL's MISP instance. Furthermore, this blog looks ahead at what happened after the presentation of our proof of concept at this year's Troopers conference in Heidelberg. Large parts of the parsing and correlation functionality have made their way into the code base of MISP. Raphael Vinot has published the MISP Workbench, where MISP users can now perform their own correlation of events found in MISP.

Corre…what?

For starters, let’s define the term “correlation”. We define it as the uncovering of links of any kind among events reported to MISP – links that go beyond the general information already contained in MISP. Through such a correlation we can glean extended knowledge of a toolset used by an actor, as well as information on target preference. We also get to know about shared tools or techniques among two or more actors. And we would also count it as a successful correlation if we find proof that two supposedly related events do not share any links at all.

A big step from classical (mass-)malware detection to research and mitigation of targeted attacks was to recognize patterns over time. The fundamental difference between targeted and non-targeted threats is that non-targeted threats do not know and/or care about their targets. While campaigns of non-targeted threats also tend to improve over time, targeted actors have a significantly stronger need to stay ahead of their victims. This way, when tracking threat actors, one might spot new additions to their toolset, new intrusion techniques, or even see them picking up new “business lines”.

Also, looking at malware campaigns from a historical perspective helps uncover false flags as well as attackers’ attempts to cover their tracks. This is especially significant when looking at how actors have learned to do this over time.

With all of this reasoning in mind we dug through data sets with (and, occasionally, without) structured approaches, in order to unveil hidden treasures.

Naming is hard

And frequently, when uncovering supposedly unknown links among different events, we found ourselves poking at the very same group of aggressors, using the very same malware and attack methods as the linked event would show. But how come?

There are several reasons for this, the most obvious one being that two events were reported commenting on differently named groups, but actually referring to the same one. This is an old issue in threat analytics: naming is hard. In the most common cases we quickly realized that each time we had a link, we were looking at identical events that were just named differently.

Probably the most popular case of multi-naming is Havex, otherwise known as EnergeticBear, DragonFly, or CrouchingYeti. This group has been mentioned in at least four different reports, with dedicated naming in all four. Similar cases have been observed for Sakula, also called BlackVine, and for WhiteElephant, also called Seven Pointed Dagger.

5_2_naming

What we found

The purpose of correlating different events was to try and find any existing links between events that were either instigated by different actors and/or performed at different points in time by the same group. For one, we wanted to prove that we could uncover links among sample sets and groups with little technical effort. For instance, we were able to spot a spear phishing attack within MISP which clearly related to other campaigns driven by an actor known as APT1. APT1 is said to be a Chinese group that has performed a plenitude of attacks in (years?). It is hugely beneficial to have advance knowledge of past attacks and current changes to an attacker’s toolset. This is especially true if an attack by a specific group against a given target may be imminent or already in progress.

What also turned out to be interesting, though, was uncovering events documenting the same group, but extending the view to include the capabilities which were added over time. One example for this is TurboCampaign. It was first reported in 2014; by that time the group went by the name of Shell_Crew. When spotted again in 2015, they sported a 64-bit Derusbi implant for Linux machines, which had not been observed earlier. It is unclear at this point whether this component was just missed in the 2014 report or whether it was added less than a year later. The fact that attack groups learn as they go and extend their toolsets is not surprising. This learning process gives us exactly the kind of information we set out to learn.

Other things we found in similar ways were, for example, a link between a report on The Dukes and another campaign dubbed Hammertoss; we linked an RTF spearphishing campaign from 2014 to the PittyTiger group, and we discovered a connection between RedOctober and the Inception Framework.

How to determine what’s related

We should underline again that what we did was not yet another machine learning exercise. The attributes outlined in previous episodes of this series can potentially serve in malware machine learning research, but exploring this was out of scope for what we had in mind. But what DID we do then?

We followed a rather simple approach. The set of indicators mentioned in blog post #2, combined with attributes already present within MISP, went into a Redis backend hosted on strong hardware. Redis allows for rather quick queries on large datasets held in memory. By calculating hashes to index data items, large sets can be processed from different angles.

This way we can perform correlation runs based on IP addresses, domain names and file hashes, as well as compilation timestamps, original file names, import table and ssdeep hashes, and many more. As mentioned last time, targeted malware is likely to not be packed or obfuscated. It is also likely to be reused, either in its compiled form or in parts of its source code.
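
To illustrate the idea – this is a minimal sketch, not the actual Workbench code – such a Redis-backed index could look like the following; the key schema, event IDs and indicator values are made up:

```python
# Minimal sketch of a Redis-backed correlation index (illustrative only).
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def index_attribute(event_id, attr_type, value):
    """Map an attribute value back to every event it appears in."""
    r.sadd(f"attr:{attr_type}:{value}", event_id)

def events_sharing(attr_type, value):
    """Return all events that contain the given indicator value."""
    return {member.decode() for member in r.smembers(f"attr:{attr_type}:{value}")}

# Index a few indicators from two made-up events ...
index_attribute("event-1042", "domain", "update-service.example.com")
index_attribute("event-1077", "domain", "update-service.example.com")

# ... then ask which events share a domain – a correlation hit.
print(events_sharing("domain", "update-service.example.com"))
# -> {'event-1042', 'event-1077'}
```

Any attribute type works the same way, which is what makes a flat key/set layout convenient for ad-hoc correlation runs.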

The following graphic illustrates a snapshot of data related to one event within MISP. One can quickly identify certain patterns, as well as absent data. Naturally, PE-specific attributes can only be retrieved from Windows PE binaries. Help on that front is provided by ssdeep hashes, which also serve for file clustering approaches.

5_3_data

Ssdeep clustering

Ssdeep, apart from computing a piecewise hash of a given file or data blob, is capable of measuring the distance between two or among multiple ssdeep hashes. These distances are named match scores: weighted measures of how similar two given files are to each other. This is possible because ssdeep does not calculate cryptographic hashes, which identify a file as a whole, but hashes over pieces of a file. Those can then be compared to the piecewise hash of a different file. Naturally, this method can be extended to compare multiple files to one file, and also to look into groups of files, calculating distances among all of those.
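
For illustration, computing and comparing fuzzy hashes with the Python ssdeep bindings looks roughly like this; the file names are placeholders:

```python
# Compare two files by their ssdeep hashes (pip install ssdeep).
import ssdeep

h1 = ssdeep.hash_from_file("sample_a.bin")
h2 = ssdeep.hash_from_file("sample_b.bin")

# compare() returns a match score from 0 (no similarity) to 100.
print(h1)
print(h2)
print("match score:", ssdeep.compare(h1, h2))
```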

This principle allows for clustering of files based on ssdeep hashes. The most popular implementation of ssdeep clustering, and also the one we based our toolchain on, is ssdc by Brian Wallace. As an extension to the original implementation of ssdc, we added a multi-process computation module and a Redis connector, to be able to store clustering results directly in the backend.
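
Stripped of the multi-processing and the Redis storage, the clustering logic boils down to something like this naive single-linkage sketch; ssdc itself is considerably more elaborate:

```python
# Naive single-linkage clustering over ssdeep hashes (illustrative sketch).
import ssdeep

def cluster(fuzzy_hashes, threshold=70):
    """Group hashes whose pairwise match score reaches the threshold."""
    clusters = []
    for h in fuzzy_hashes:
        # Find every existing cluster this hash is "close" to ...
        matches = [c for c in clusters
                   if any(ssdeep.compare(h, member) >= threshold for member in c)]
        # ... and merge them, since the new hash links them together.
        merged = {h}.union(*matches)
        clusters = [c for c in clusters if c not in matches]
        clusters.append(merged)
    return clusters
```

Note the quadratic number of comparisons – this is exactly why the pre-sorting discussed further below matters for larger sample sets.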

This way we are able to spot events which are “close” to each other, based on similarities within the involved binaries. With this information one could, for instance, establish that a group targeting entities within Russia is using malware which bears disturbing similarities to malware used by a (different..?) group targeting entities within Afghanistan and Tajikistan. Just saying….

Ssdeep clustering challenges

Processing fuzzy hashes is a very resource-intensive task, even more so when clustering large numbers of them. This doesn’t scale well for sample sets beyond a certain size. Our test set is limited, but given that most malware repositories deal with sample counts in the hundreds of thousands or more, some pre-sorting of groups of interest might be feasible. This pre-sorting of samples which match a certain set of criteria (e.g. events within a specific timeframe, targeted platforms only, or geographical constraints) can speed up the clustering approach significantly.

Lastly, it’s worth mentioning that ssdeep hashes, just like any other statically retrieved attribute, only help describe the outer surface of a sample. A fuzzy hash does not carry information about the purpose or behavior of a binary, is easily disturbed by runtime packers, and might even fail its purpose after malware authors apply a simple change in their compiler settings.

Taking things further: MISP Workbench

Please note that this kind of APT research has, by now, been performed by designated threat research teams for a long time, and the presented techniques are not considered new. The community benefit we see here is the integration of our toolset into MISP’s open source code body, as well as the provision of results and conclusions to the broader public. Research of targeted attacks is difficult without access to a large malware set and high-quality threat intelligence data.

So now here we are. With MISP Workbench, the tools we developed and the data we gathered have been integrated into the MISP platform, in order to support incident responders and researchers in easily performing their own queries when investigating one or more events.

Workbench was built with the objective of incorporating all the presented features into one single tool. It is also intended to enhance the existing MISP dataset with a rich feature set, especially regarding the presented PE features. Workbench can now be used to easily group events by attacker toolsets; in connection with MISP Galaxy this is also possible with a focus on dedicated adversaries. Workbench supports full-text indexing and lookups for keywords, as well as picking through events on singular features or indicators.

The fine folks at CIRCL have built Workbench for standalone use, with a lightweight user interface. A more detailed description can be found here.

The following screenshot shows statistics about binary compilation timestamps within our targeted malware test set. One can quickly see how 1970 and 1992 must have been very busy years for lots of malware authors. Err… kidding, of course. But it does seem obvious how compilation timestamps, or even just parts of them, can occasionally serve as an interesting grouping attribute.

5_3_data2
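
For the curious: collecting these timestamps from a directory of samples takes only a few lines with pefile; the directory name is a placeholder, and non-PE files are simply skipped:

```python
# Bucket PE compilation timestamps by year (illustrative sketch).
import datetime
import pathlib
from collections import Counter

import pefile

years = Counter()
for path in pathlib.Path("samples").iterdir():
    if not path.is_file():
        continue
    try:
        pe = pefile.PE(str(path), fast_load=True)
    except pefile.PEFormatError:
        continue  # not a PE file (script, document, ...)
    ts = pe.FILE_HEADER.TimeDateStamp
    year = datetime.datetime.fromtimestamp(ts, tz=datetime.timezone.utc).year
    years[year] += 1

for year, count in sorted(years.items()):
    print(year, count)
```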

Furthermore, Workbench in combination with Galaxy supplies information about threat actors and the events linked to them. This enables the analyst to search for links among threat actors on a binary feature level.

5_3_grouping

As mentioned, the extracted PE features can be used for grouping as well. The following screenshot shows how the filename ‘chrome.exe’ seems to be an all-time favorite of The Dukes, a threat actor operating out of Russia.

5_3_orifname

Finally

And here it ends, this lengthy blog series on The Kings In Your Castle. The work will continue, of course. The tools presented, just as any information sharing platform such as MISP, live from the collective effort of contributing to the tool stack, the threat information base and the distribution chain. In that sense, questions and comments are welcome, just like pull requests, bug reports and feature ideas. At this point, let me say thank you once more to my co-sufferer Raphael Vinot, as well as the team at CIRCL, the team at ERNW and the Troopers conference for letting us present our work, and to Morgan Marquis-Boire for supplying samples and ever more samples.

Happy APT hunting everyone 🙂

The Kings In Your Castle Part 4 – Packers, Crypters and a Pack of RATs

In part 4 of our series “The Kings In Your Castle”, we’re back with the question: what does sophistication even mean? I’ll be outlining what complexity means from a malware analyst’s perspective, why malware intends to be undecipherable and why it sometimes wouldn’t even try. Also, this blog entry serves to present our findings on commodity RATs within the corpus of malware we analyzed, as part of our talk at Troopers conference in March.

If you are interested in previous parts of the series please check them out here, here and here.

What does sophistication even mean?

The complexity of software is a rather soft metric that hasn’t undergone much scrutiny in definition. For a malware analyst, this sphere takes on a whole lot of different shades, as malware by nature aims to hide its threats. At least most of it does, one would think?

For analysts, what poses challenges are techniques such as code obfuscation or the use of well-fortified crypters. What also presents a remarkable challenge is structured application design, although this might sound somewhat counter-intuitive. Multi-component malware with a well-thought-out object-oriented design and highly interdependent components causes more of a headache for an analyst than any crypter out there. In fact, many of the well-known high-profile attack toolsets aren’t protected by a packer at all. The assumption is that, for one, runtime packers potentially catch the unwanted attention of security products; but also, for highly targeted attacks they might not be necessary at all. Malware developers with sufficient dedication are very well able to hide a software’s purpose without the use of a runtime packer. So, why do we still see packed malware?

A software crypter is a piece of technology which obfuscates software and its intentions, but also changes its appearance. Crypters and packers are frequently applied to malware in order to ensure the reusability of the actual malcode. That way, when malware is detected once, the same detection will not apply to the same malware running on a different system.

Let’s take a step back though. The ‘perfect targeted attack’ is performed with a toolset dedicated to one target only. We can conclude that a crypter is needed if either the authors aren’t capable of writing undetectable malware, or, more likely, if the malware is intended to be reused. This makes sense if you consider that writing malware takes time and money; a custom attack toolset represents an actual (and quite substantial) investment.

Finally, with economic aspects in mind, we can conclude that attacks performed with plain tools are the more expensive ones, while attacks using packers and crypters to protect the malware are less resource-intensive.

4_1_sophistication

The actual hypothesis we had started our research with was that most targeted malware comes without a crypter. In more detail, we put up the statement that targeted malware is significantly less protected than random malicious software. Again, random malware doesn’t know its targets and by definition is intended to infect a large number of individuals, while targeted malware supposedly is designed for a limited number of targets.

Packer Detection (Like PEiD Was Broken)

Now, the usage statistics of runtime packers and crypters would be easy to gather from our dataset, if the available state-of-the-art packer detection tools weren’t stuck somewhere in 1997. In practice, the question whether a file is packed or not is not trivially answered. One of the tools frequently used to identify packers is named PEiD. PEiD applies pre-defined signatures to the first code bytes starting from the executable’s entry point, which easily fails if the code at the packed binary’s entry point changes only slightly.
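
Reduced to its essence, a PEiD-style check does little more than the following sketch; the UPX-like stub signature is illustrative and stands in for PEiD’s large signature database:

```python
# Essence of a PEiD-style packer check: match static signatures against
# the first bytes at the executable entry point (illustrative sketch).
import pefile

SIGNATURES = {
    b"\x60\xbe": "UPX-like stub (illustrative signature)",  # pushad; mov esi, ...
}

def peid_like_match(path):
    pe = pefile.PE(path)
    ep = pe.OPTIONAL_HEADER.AddressOfEntryPoint
    data = pe.get_data(ep, 16)  # first bytes at the entry point
    for signature, name in SIGNATURES.items():
        if data.startswith(signature):
            return name
    return None
```

A single recompiled or slightly modified stub defeats such a lookup, which is exactly the weakness described above.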

Aiming for more accurate results, we decided to come up with our own packer evaluation. We put together a set of indicators for abnormal binary structures which are frequently seen in relation with runtime packers. Please note, the indicators are just that – indicators. The evaluation algorithm leaves some space for discrepancies. In practice though, manual analysis of randomly picked samples has proven the evaluation to be reasonably precise.

We gathered the following PE attributes; a condensed scoring sketch follows the list:

  • Section count smaller than 3
  • Count of TLS sections bigger than 0
  • No imphash value present, thus import section empty or not parseable
  • Entropy value of code section smaller than 6.0 or bigger than 6.7
  • Entry point located in section which is not named ‘.text’, ‘.itext’ or ‘.CODE’
  • Ratio of Windows API calls to file size smaller than 0.1
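
Re-implemented in condensed form with pefile, the checks look roughly like the following; the unit weights, the choice of the entry-point section as the “code section” and the per-kilobyte interpretation of the API ratio are our simplifications here – the published code mentioned below remains the authoritative version:

```python
# Condensed packer-indicator scoring (illustrative; real weights differ).
import os

import pefile

CODE_SECTION_NAMES = (".text", ".itext", ".CODE")

def packer_score(path):
    """Count how many packer indicators a binary triggers (unit weights)."""
    pe = pefile.PE(path)
    score = 0
    if len(pe.sections) < 3:
        score += 1
    if hasattr(pe, "DIRECTORY_ENTRY_TLS"):  # TLS directory present
        score += 1
    if not pe.get_imphash():  # import section empty or not parseable
        score += 1
    ep = pe.OPTIONAL_HEADER.AddressOfEntryPoint
    ep_section = next((s for s in pe.sections if s.contains_rva(ep)), None)
    if ep_section is not None:
        name = ep_section.Name.rstrip(b"\x00").decode(errors="replace")
        if name not in CODE_SECTION_NAMES:
            score += 1
        if not 6.0 <= ep_section.get_entropy() <= 6.7:
            score += 1
    api_count = sum(len(entry.imports)
                    for entry in getattr(pe, "DIRECTORY_ENTRY_IMPORT", []))
    if api_count / (os.path.getsize(path) / 1024.0) < 0.1:  # calls per KB, a guess
        score += 1
    return score
```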

Of course, no single one of the gathered attributes is a surefire indicator in a packer evaluation process. In reality, some of the mentioned presumed anomalies are frequently seen within unpacked binaries and depend, for example, on the executable’s compiler. Nevertheless, any of the mentioned features is reason enough to raise suspicion, and as mentioned before, the evaluation works rather reliably on our chosen dataset.

According to our algorithm, the values weigh into an evaluation score; based on the final score, an analyst can then draw his conclusions. It is worth noting at this point that the chosen thresholds are quite sensitive, and one would expect to rather detect too many “potentially packed” samples than too few.

Further details about our packer evaluation method can be found within the published code.

The results can be found in the following charts, showing the evaluation values in relation to sample counts. The maximum score a sample can reach on our scale is 220, meaning that all evaluation attributes exceed the chosen thresholds. The following graphics show the evaluation performed on a benign sample set, on a targeted malware sample set and on a random malware sample set. Attention should be paid to the sample frequency count on the y-axis of each graph.

benign
The benign set
targeted
The targeted malware set
random
The random malware set

The graphs show very well how roughly a third of the benign samples show low-rated indicators, while for the random malware sample set, less than a third of the overall set shows no indicators and more than a third shows remarkably high-rated indicators. It shall be mentioned that a look into the benign samples rated at 40-50 revealed that most of them were packed with UPX, a binary packer used mainly for compression.

The remarkable bit at this point is that the set of targeted malware binaries has the overall lowest count of packer indicators. This leaves us with two possible conclusions. Following our hypothesis that targeted malware is significantly less protected by crypters than random malware, we take this result as proof.

On the other hand, what surely biases the result is that the chosen attributes are potentially influenced by the compilers used to build the given binaries. Since the results for the targeted set are notably homogeneous, though, this suggests that the larger part of the targeted malware within our dataset has not gone through exotic compilers either. As a research idea for future analysis, I’d like to put up the somewhat far-fetched hypothesis that most targeted malware is written in C/C++ and compiled using the Visual Studio compiler. Curious, anyone?

Commodity RATs

Taking the question of malware sophistication further: in the past, the analysis community was frequently astonished in the light of yet another incredibly complex targeted malware campaign. While it is true that certain targets require a high level of attack sophistication, most campaigns do not require components for proprietary systems or extremely stealthy methods. An interesting case of high-profile attacks being carried out with commodity malware was uncovered last year, when CitizenLab published their report about Packrat. Packrat is a South American threat actor who has been active for around seven years, performing targeted espionage and disinformation campaigns in various South American countries. The actor is most known for targeting, among others, Alberto Nisman, the late Argentinean prosecutor who raised a case against the Argentinean government in 2015.

Whoever is responsible for said campaigns did have a clear image of whom to target. The actor certainly possessed sufficient personal and financial resources, yet made a conscious decision not to invest in a high-end toolchain. Looking into the malware set used by Packrat, one finds a list of so-called commodity RATs: off-the-shelf malware used for remote access and espionage. The list of RATs includes CyberGate, XTremeRAT and AlienSpy; those tools themselves are well-known malware families associated with espionage.

Again, using repackaged commodity RATs is notably cheaper than writing custom malware. The key to remaining stealthy in this case is the usage of crypters and packers. In the end though, by not burning resources on a custom toolchain, the attacker can apply his resources otherwise – potentially even on increasing his target count.

Hunting down a RAT pack

Looking at the above facts, one question emerges: how prevalent are pre-built RATs within the analysis set at all? To establish a count of commodity RATs, we again relied on the detection names of Microsoft Defender. The anti-virus solution from Microsoft has in the past shown to be rather slow in picking up new detections, while providing quite accurate results once detections are deployed to the engines. Accuracy at this point includes a certain reliability when it comes to naming malware.

For evaluation, we chose to search for the following list of commodity malware:

  • DarkComet (Fynloski)
  • BlackShades (njRat, Bladabindi)
  • Adwind
  • PlugX
  • PoisonIvy (Poison)
  • XTremeRAT (Xtrat)
  • Handpicked RAT binaries

The selected set is what we noted seeing when going through the malware corpus manually; please note though, the list of existing commodity RATs is by far longer.
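
Mapping Defender detection names to these families then comes down to simple substring matching; the name fragments below follow the aliases in the list above (e.g. Defender flags DarkComet as “Fynloski”), while the mapping code itself is an illustrative sketch:

```python
# Map AV detection names to commodity RAT families (illustrative sketch).
RAT_NAME_FRAGMENTS = {
    "DarkComet": ("fynloski",),
    "BlackShades": ("njrat", "bladabindi"),
    "Adwind": ("adwind",),
    "PlugX": ("plugx",),
    "PoisonIvy": ("poison",),
    "XTremeRAT": ("xtrat",),
}

def rat_family(detection_name):
    """Return the matching RAT family for a detection name, if any."""
    lowered = detection_name.lower()
    for family, fragments in RAT_NAME_FRAGMENTS.items():
        if any(fragment in lowered for fragment in fragments):
            return family
    return None

print(rat_family("Backdoor:Win32/Fynloski.A"))  # -> DarkComet
```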

The “lazy king of APTing”, so to speak, is PlugX. The commodity RAT popped up in altogether 15 different events listed in the MISP database.

4_2_plugx

The winner in sample numbers was Adwind, a Java-based RAT dedicated to infecting different platforms. Adwind itself is malware that has been sold under different names, including Frutas RAT and Unrecom RAT. Security firm Fidelis published nice insights on Adwind under the much-appreciated title “RAT in a jar”.

The following graphic shows the total number of RATs and related events found within our data set of 326 events containing 8,927 malware binaries.

4_3_ratpack

In total, we counted that almost a quarter of the inspected events made use of one RAT or another. Looking at the sample set, a ninth of the total set is composed of pre-built RATs. These numbers are rather high, considering that, at least in the heads of analysts, targeted malware is complex and sophisticated.

Still though, why do we even bother with high numbers of commodity malware? For one, as mentioned before, they help drive down attack cost. Furthermore, they provide functionality that is quite advanced from the start, equipping even unskilled attackers with a Swiss Army knife of malware they couldn’t implement themselves, even if they tried really hard. These do-it-yourself APT tools enable wannabe spies with little budget, growing the number of potential offenders.

Furthermore, off-the-shelf RATs have been seen in use by certain advanced attackers. They can lead to confusion about the actual offender, as they do not allow for attribution on the basis of the binaries at all. In other words, one would not know whether one is being targeted by a script kiddie or a nation-state actor. Currently, it remains unclear whether commodity RATs have been used deliberately to conceal attribution, but the assumption lies close at hand.

The Kings In Your Castle Part 3 – Ssdeep being fuzzy while exploits are being scarce

Welcome back, still on it? This is part 3 of our blog series; if you’re curious about parts 1 and 2, please check them out here and here. This time I’m happy to introduce a set of borderline funny findings and to tackle one of the hypotheses we put together for Raphael Vinot’s and my talk “The Kings In Your Castle”, presented at this year’s Troopers Conference in Heidelberg. I will discuss our findings regarding exploits present in known targeted attacks, the obstacles we faced during analysis and how we worked our way around them. So, sit back, relax, here we go.

Curiosities

As you might be aware, most data sets come with information, and most show one or another curiosity. Finding curiosities means learning literally unexpected things, which is why researchers jump at those with the passion of a hungry wolf. Thus, let me start the list of our findings with a curiosity.

While performing clustering on ssdeep hashes, we found something we dubbed ssdeep collisions, for lack of a better name. Ssdeep is a program for computing context-triggered piecewise hashes. These so-called fuzzy hashes, as opposed to cryptographic hashes, do not uniquely identify a given data blob. They are calculated piecewise and are able to identify shared byte sequences between two targets. They are frequently used to ‘describe’ malicious binaries, in order to be able to match similar binaries and eventually find groups of malware samples or even identify malware families.

The nature of piecewise hashes, though, implies that hashes of two binaries cannot be identical if the binaries show differences. Hence, it is a curious finding that a number of unique samples within our set showed identical ssdeep hashes. Without spending too much time picking at the implementation of the fuzzy hashing algorithm itself, we assume that ssdeep does not consider the entire binary for hashing. We found a case where 5 MB of empty padding were no reason for ssdeep to show any difference in the resulting fuzzy hash.

ssdeep

More than just padding, ssdeep on some occasions indeed missed significant differences in the data sections of compared binaries. Given that analysts and corporations use ssdeep in workbenches and production systems, we found it worth a mention that identical fuzzy hashes by no means prove that the compared binaries are identical.

diffs
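
A quick way to screen a corpus for such cases is to pair the fuzzy hash with a cryptographic one; the following sketch assumes the Python ssdeep bindings, and the file paths are placeholders:

```python
# Flag "ssdeep collisions": identical fuzzy hashes over different bytes.
import hashlib

import ssdeep

def is_ssdeep_collision(path_a, path_b):
    """True if two files share a fuzzy hash despite differing content."""
    with open(path_a, "rb") as f:
        data_a = f.read()
    with open(path_b, "rb") as f:
        data_b = f.read()
    same_fuzzy = ssdeep.hash(data_a) == ssdeep.hash(data_b)
    same_bytes = hashlib.sha256(data_a).digest() == hashlib.sha256(data_b).digest()
    return same_fuzzy and not same_bytes
```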

We learned another curiosity when randomly staring at the gathered data. It is fascinating how the human eye manages to find patterns, and doing so is indeed very instructive before starting to implement queries or planning for the application of machine learning. This way we saw that, for example, compilation timestamps of binaries usually follow loose patterns within an event. A number of events, though, show outliers when it comes to timestamps; such as a binary compiled in 1996 while the others were compiled post-2007, or a binary with a stamp from 2036. Of course, such outliers can have multiple explanations. The ones that come to mind fastest are: the timestamps were falsified, the attackers forgot to falsify all timestamps, the campaign made use of readily compiled (old) tools, or a runtime packer was used which falsifies the timestamps without explicit intention of the attackers.

20136time

One conclusion, though, lies at hand. To freely quote a splendid APT researcher: attackers learn just like we do and improve over time, which implies that they might have made more mistakes in the past. Thus, by analyzing historical data about a campaign or group, one might be able to learn unexpected tidbits. Moreover, by looking at what the attackers have learned, as reflected in changes in malware and intrusion techniques, one might gather insights about obstacles they faced in the past. Adoption of a runtime packer or the addition of a stealth module to a given RAT might expose that the attacker’s tools were at some point detected by security software.

1970time


OMFG!! They used e.x.p.l.o.i.t.s.!!1!

Like in real life, humans tend to conclude that a digital attack which caused big damage naturally came with big power. In the world of cyber, though, this equation has a lot of unknowns. In fact, the success of an attack and the damage it can cause are influenced by many factors that are independent of an attacker’s capabilities and wealth. While not true in all cases, the possession of one or more 0-days mostly involves some level of resources, or at least explicit know-how combined with engineering time.

This leads to a natural assumption: folks who do APTs involving 0-days must be either rich, or very, very dedicated. Or both. Or do they? When Stuxnet finally happened, the general public seemed to believe that APT goes hand in hand with 0-day. A considerable time span passed before the understanding started to sink in that targeted attacks can have all sorts of faces, and barely any post-Stuxnet attack looked anything like what we now call “the first cyber weapon”.

To this day, analysts seem to have settled on the understanding that Word macros are just as dangerous to organizations as the latest Flash exploit. There is always someone who opens the damn document.

Finally, what this leaves us with is a set of uncertainties. How important are exploits in the APT world? How frequently are they used, and how common is the use of precious 0-days? This is the fog we meant to shed some light on.


Exploit prevalence at a glance

In the mass malware scene, the number of malware strains and infection campaigns that make active use of exploits feels rather low, and has seemingly even declined; at least since attackers found out that macro downloaders do the job just as well. It won’t escape the attentive reader: Word macros are a lot easier to write and cheaper to get hold of. And here we are back again: reducing the cost of an attack allows maximizing the number of potential targets. It’s all about resources.

But let us get to the numbers. In total, we analyzed 326 events within our final analysis set, of which 54 were labeled as involving one or more exploits. Such labels are usually tags of CVE numbers that are added by the initial reporter of an event. About these tags we do know that a present tag is a strong indicator for an actual exploit being involved (given analysts didn’t make things up); the lack of a tag, though, by no means proves that no exploits were used. As a counter metric, we utilized the detection names of Microsoft Defender, filtering for names containing one or another CVE number. This way we detected a total of 68 events involving exploits.
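
The counter metric is cheap to compute: a regular expression over the detection names does the job. The example names below are made up, but follow Defender’s naming scheme:

```python
# Extract CVE identifiers from AV detection names (illustrative sketch).
import re

CVE_RE = re.compile(r"CVE-\d{4}-\d{4,}", re.IGNORECASE)

def cves_in(detection_names):
    """Collect all CVE numbers mentioned in a list of detection names."""
    found = set()
    for name in detection_names:
        found.update(match.upper() for match in CVE_RE.findall(name))
    return found

print(cves_in(["Exploit:Win32/CVE-2012-0158", "Trojan:Win32/Sample.A"]))
# -> {'CVE-2012-0158'}
```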

Juggling the numbers, with considerations of potentially missed detections in mind, roughly a fifth of the analyzed events involved the usage of exploits. With all potential data incorrectness in mind, we are very confident to state that it is not a majority of targeted attacks that involves exploits.


The APT exploit landscape

Relying on the tags present in the MISP data set, we went on to evaluate the exploits we did indeed see. The graphic below shows a chart of CVE numbers, sorted first by tag counts, second by year. The numbers refer to the number of events that make use of the listed CVEs.

It is worth mentioning that human analysts, as well as security software, tend to be more reliable in labelling well-known exploits than fresh ones, let alone unknown ones. This chart cannot be used to determine which attacks involved 0-day exploits; in fact, none of the data we have at hand can.

cves

What it does show, though, is how the curve from frequently to non-frequently seen CVEs holds remarkably old vulnerabilities in the top spots. Absolutely killing it is CVE-2012-0158, a vulnerability within an MS Office OCX library that can be triggered through a malicious Word document. The vulnerability has long been fixed by Microsoft, but, perhaps, not all Office installations are updated all that frequently? Who knows.

Furthermore, we can see that only a minority of 7 CVE numbers can be called more or less up to date. Given that our data collection ended in January 2016, vulnerabilities from 2015 are considered fresh (enough). A total of 12 events involved exploits for non-cyber-stone-age vulnerabilities.

Exceptionally interesting is place number three on the list, CVE-2015-5119, sported by a total of five events. This vulnerability has a history, indeed.


HackingTeam exploit gone wild

CVE-2015-5119 is a vulnerability in Adobe Flash which got leaked with the tremendous breach of the Italian offensive security company HackingTeam last year. The vulnerability was reported and fixed soon after the breach, but nevertheless made it into at least one exploit pack and the toolsets of four quite well-known APT groups. According to our data, the group Wekby and a not closer identified spearphishing campaign targeting US government institutions adopted respective exploits right after the breach went public in July 2015.

The most recent spotting of CVE-2015-5119 within our data happened at the beginning of 2016, when it was seen in context with BlackEnergy, the notorious malware that took the Ukrainian power grid offline at the end of 2015.

5119


Discussing the results

The numbers in the dark – or, everything we do not know – are a significant blind spot in our dataset. There are two considerable unknowns. For one, we do not know whether the data is complete. Two, we do not know whether the labels we retrieved are correct.

Concerning problem number one, intentionally incomplete reports aside, it is very well possible that an attack is detected and remediated, but the actual infection vector never recovered. In the case of an exploit being used, given for example that it is not contained in an e-mail or a file, the forensic reconstruction of the entire intrusion path can be challenging. A good example of such a case, and also a very instructive read on the topic, is Phineas Fisher’s write-up of the HackingTeam breach.

Problem number two, incorrect labeling, stems from the fact that determining a CVE number for an actual exploit involves the careful analysis work of a specialist. In practice, deriving CVE numbers from AV detection names is “cheap” and works reasonably well, but relies on the respective analysts doing a scrupulous job when looking into the sample. Mistakes are quite possible nevertheless.

As with everything presented here, I am happy to receive suggestions on how to improve on both shortcomings. Meanwhile, we present the discussed numbers with a pinch of salt on the side.