Defcon 2019 DFIR CTF – Memory Forensics Write-up

In an effort to improve my forensics skills I have been working through publicly available forensics CTFs when I have some free time.

The 2019 Unofficial Defcon DFIR CTF was created by the Champlain College Digital Forensics Association and made public by David Cowen on his Hacking Exposed Computer Forensics blog. The CTF covers Windows and Linux “dead” forensics, a “live” triage VM, memory forensics, and a cryptography challenge. This write-up focuses on the memory forensics questions.

Links to the CTF files and registration/scoreboard can be found on this HECF blog post.

The following MD5 hash was provided for the Triage-Memory.mem file:

MD5: c0c80a06ad336a6e20d42c895a0e067f

Let’s get started!

flag 1 – get your volatility on

We begin with a straightforward one; calculate the SHA1 hash of the memory image.

sha1sum Triage-Memory.mem


flag 2 – pr0file

Again, reasonably simple. We need to work out which profile to use with Volatility to conduct our analysis. The imageinfo plugin will suggest a number of suitable candidates.

volatility -f Triage-Memory.mem imageinfo

The output lists a few possibilities, but Win7SP1x64 is a sensible choice for now.


flag 3 – hey, write this down

There are a few Volatility modules we can use to list running process IDs. My personal preference is pstree, just because it makes the parent/child relationships more obvious, which can help to spot anything unusual.

volatility -f Triage-Memory.mem --profile=Win7SP1x64 pstree

We are looking specifically for notepad.exe, so we can cut out a lot of the unrelated output using grep.

volatility -f Triage-Memory.mem --profile=Win7SP1x64 pstree | grep -i "notepad.exe"

Using grep to filter the output we can easily see that the PID is 3032.


flag 4 – wscript can haz children

We already have the answer to this from the pstree output in Flag 3, but using grep as a filter will make it easier to spot. The -A 2 option tells grep to display the matching line and the 2 lines that follow it.

volatility -f Triage-Memory.mem --profile=Win7SP1x64 pstree | grep -A 2 -i "wscript.exe"

The output shows that wscript.exe has one child process, UWkpjFjDzM.exe, which itself has a child process called cmd.exe; a good indication that something is wrong there.


flag 5 – tcpip settings

The question asks for the IP address of the machine when the memory dump was taken. The netscan plugin will show details of network artefacts captured in the memory dump.

volatility -f Triage-Memory.mem --profile=Win7SP1x64 netscan

We can see a few listening addresses that are unlikely to be our flag, and one that is a much more likely candidate. Another interesting observation is that our process from Flag 4 is connecting to another machine on port 4444 – the default port for many Metasploit payloads. Keep that in mind for later.


flag 6 – intel

Based on our work to find Flag 4 and Flag 5, we can answer this quite easily. Let’s examine the netscan output more closely.

volatility -f Triage-Memory.mem --profile=Win7SP1x64 netscan | grep "UWkpjFjDzM.exe"

We can see our “infected” process UWkpjFjDzM.exe connecting to the remote host on port 4444.


flag 7 – i <3 windows dependencies

As the name suggests, the dlllist plugin will list the DLLs loaded by each process. This is a lot of output, so to find the answer I used grep with increasing values for the -B option to show the associated process. Not very subtle, but it worked!

volatility -f Triage-Memory.mem --profile=Win7SP1x64 dlllist | grep -B 33 "VCRUNTIME140.dll"

There we are! Initially I submitted OfficeClickToRun.exe as the flag, but when that was rejected I tried the short name (OfficeClickToR) next to the Process ID near the top of the output.


flag 8 – mal-ware-are-you

Ok, we have already identified our potential malware from Flags 4, 5, and 6. So take its PID (3496), dump the executable using the procdump plugin, then calculate the MD5 of the extracted binary.

volatility -f Triage-Memory.mem --profile=Win7SP1x64 procdump -p 3496 -D .
md5sum executable.3496.exe


flag 9 – lm-get bobs hash

Another relatively simple one. The hashdump plugin will, predictably, dump the password hashes from memory.

volatility -f Triage-Memory.mem --profile=Win7SP1x64 hashdump

We need to supply the LM hash, which is the first portion of the hash after Bob’s user ID (1000)…
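The hashdump output uses the pwdump format – username:RID:LM-hash:NT-hash::: – so the LM portion can be pulled out with cut. The example line below is a stand-in (the well-known blank LM/NT hashes), not the real output:

```shell
# pwdump format: username:RID:LM-hash:NT-hash:::
# (stand-in values; substitute the real hashdump line for Bob)
line='Bob:1000:aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0:::'
echo "$line" | cut -d: -f3   # third colon-separated field is the LM hash
```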


flag 10 – vad the impaler

Information on VAD nodes can be extracted using the vadinfo plugin. It outputs a lot of data, so for ease of reading I’ve used grep to focus on the lines we are interested in.

volatility -f Triage-Memory.mem --profile=Win7SP1x64 vadinfo | grep -A 2 "0xfffffa800577ba10"

We can see the VAD node at offset 0xfffffa800577ba10 has the PAGE_READONLY protection. That’s our flag.


flag 11 – more vads?!

This is essentially the same question as Flag 10, just with a bit of extra grep. Instead of filtering on the starting offset, we filter on both start and end.

volatility -f Triage-Memory.mem --profile=Win7SP1x64 vadinfo | grep -A 2 "Start 0x00000000033c0000 End 0x00000000033dffff"

This time the protection is PAGE_NOACCESS.


flag 12 – vacation bible school

We are looking for a VBS script now. There are a few ways we could do this, but the easiest is simply to check the command-line used to start the wscript.exe process that we observed in the pstree output for Flag 4. We know the wscript.exe PID is 5116, so we can pass that to the cmdline plugin to reduce the reading we have to do.

volatility -f Triage-Memory.mem --profile=Win7SP1x64 cmdline -p 5116

We can see that wscript.exe was called with a script called vhjReUDEuumrX.vbs from the %TEMP% directory. Strip the extension and we have our flag.
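Stripping the extension can be done with basename if you don’t want to do it by eye:

```shell
# basename removes a trailing suffix when given a second argument
basename vhjReUDEuumrX.vbs .vbs   # prints the flag without the extension
```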


flag 13 – thx microsoft

The Application Compatibility Cache (or shimcache) contains details of program execution and can be parsed using the shimcache plugin. We are looking for an application executed at a specified date and time, so we can use grep to filter the output.

volatility -f Triage-Memory.mem --profile=Win7SP1x64 shimcache | grep "2019-03-07 23:06:58"

This time we need to include the file extension.


flag 14 – lightbulb moment

Extracting the text from a running notepad.exe process is relatively straightforward but does require a couple of steps. First we need to dump the process memory using the memdump plugin; we found the PID for notepad.exe in Flag 3 (3032). The next step is to use the strings utility to extract all of the human-readable little-endian strings and write them to a file.

volatility -f Triage-Memory.mem --profile=Win7SP1x64 memdump -p 3032 -D .
strings -e l 3032.dmp > 3032.dmp.strings
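If the -e l flag is unfamiliar: Windows stores most text as UTF-16 little-endian (each ASCII character followed by a null byte), which the default strings invocation skips right over. A quick demonstration, assuming GNU binutils strings:

```shell
# "flag" encoded as UTF-16LE: f\0l\0a\0g\0
printf 'f\0l\0a\0g\0' > demo.bin
strings demo.bin        # no output - no ASCII run is 4+ characters long
strings -e l demo.bin   # prints: flag
```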

Once we have our strings output we can use grep to search for our flag. The word “flag” seems a reasonable place to start.

grep "flag" 3032.dmp.strings

There we go!


flag 15 – 8675309

Details about file records are held in the Master File Table (MFT) and can be extracted from the memory dump using the mftparser plugin. We are specifically looking for the Short Name of the file at record 59045; once again, grep will help us here.

volatility -f Triage-Memory.mem --profile=Win7SP1x64 mftparser | grep -A 20 "59045"

Reading through the output we can see the 8.3 short file name EMPLOY~1.XLS.


flag 16 – whats-a-metasploit?

We can be pretty sure that UWkpjFjDzM.exe (PID: 3496) is our Meterpreter process, given what we found out answering Flags 4, 5, and 6, but let’s make sure. First we dump the process executable using the procdump module, then calculate its SHA1 hash so we can search public sandboxes like VirusTotal.

volatility -f Triage-Memory.mem --profile=Win7SP1x64 procdump -p 3496 -D .
sha1sum executable.3496.exe

Searching VirusTotal for the SHA1 hash (ab120a232492dcfe8ff49e13f5720f63f0545dc2) gives us a report clearly showing that the sample is malicious.

We submit the process ID (3496) and we have completed the memory analysis section!


This is a really well put together set of challenges, and when I have more time I will probably return to take on the Windows and Linux challenges as well.

Memlabs Memory Forensics Challenges – Lab 1 Write-up

In an effort to improve my forensics skills I have been working through publicly available forensics CTFs when I have some free time.

Memlabs is a set of six CTF-style memory forensics challenges released in January 2020 by @_abhiramkumar and Team bi0s. This write-up covers the first memory image which has three flags to uncover.

Unlike most CTFs I have encountered, Memlabs does not actually ask any questions or give hints regarding the flags, only that the flags have the following format:


No hashes were provided to check against but I calculated the following:

MD5: b9fec1a443907d870cb32b048bda9380
SHA1: 02a58ccf572e6b369934268842551722c4411a60

Let’s go!

Flag 1

First let’s determine what kind of memory image we are working with. As usual for memory forensics, I’m going to work with Volatility.

volatility -f MemoryDump_Lab1.raw imageinfo

The first suggestion is Win7SP1x64; this seems like a sensible starting point.

We have no clues as to what we are supposed to be looking for. Let’s check the running processes using the pstree module and see if anything stands out.

volatility -f MemoryDump_Lab1.raw --profile Win7SP1x64 pstree

The only processes that stand out are WinRAR.exe (PID: 1512), cmd.exe (PID: 1984), and mspaint.exe (PID: 2424). DumpIt.exe is likely the tool used to capture the memory dump so I am ignoring it for now. We can use the cmdline and consoles modules to show the command that launched these processes, and any console output associated with them.

volatility -f MemoryDump_Lab1.raw --profile Win7SP1x64 cmdline -p 1512,1984,2424
volatility -f MemoryDump_Lab1.raw --profile Win7SP1x64 consoles

The output from cmdline tells us that WinRAR.exe was launched with a file called Important.rar, which seems, well, important, but the consoles plugin shows a command St4Ge$1 being run and the following output:


Decoding this from base64 gives us our first flag:

echo "ZmxhZ3t0aDFzXzFzX3RoM18xc3Rfc3Q0ZzMhIX0=" | base64 -d


Flag 2

I actually found Flag 3 before Flag 2, as I spotted the reference to Important.rar in the cmdline output, but for ease of reading I’ll keep this order. As I had examined the cmd.exe and WinRAR.exe processes already I guessed that Flag 2 was hidden in the mspaint.exe process, so began by dumping the process memory.

volatility -f MemoryDump_Lab1.raw --profile Win7SP1x64 memdump -p 2424 -D .

After some Googling I found a blog post detailing how to extract RAW images from memory dumps. I renamed the 2424.dmp file and opened it up with the GIMP image editing suite, setting the Image Type to RGB Alpha, and fiddling with the Offset, Width, and Height values through trial and error until I got something that looked intelligible.

That’s definitely text but not very easy to read. I’m better with Volatility than with GIMP so I took a screenshot of the image preview and flipped it vertically, revealing the flag.


Flag 3

The output from the cmdline module showed that WinRAR.exe had been launched with a file called Important.rar. Let’s extract that from the memory image and take a look.

volatility -f MemoryDump_Lab1.raw --profile Win7SP1x64 filescan | grep -i "important.rar"
volatility -f MemoryDump_Lab1.raw --profile Win7SP1x64 dumpfiles -Q 0x000000003fa3ebc0 -D .
file file.None.0xfffffa8001034450.dat

I renamed the file to Important.rar and tried extracting the contents.

unrar x Important.rar

Unfortunately we need a password. Fortunately, the password hint tells us where to find it. We can use the hashdump module to dump the NTLM hashes.

volatility -f MemoryDump_Lab1.raw --profile Win7SP1x64 hashdump

We only need the second part of the hash, but we do need to convert it to upper-case first. Rather than doing it manually I used CyberChef to do it for me.
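CyberChef works fine, but tr will do the same conversion locally. The hash below is a stand-in value, not the one from the memory image:

```shell
# Upper-case the NT hash for use as the RAR password
echo '31d6cfe0d16ae931b73c59d7e0c089c0' | tr '[:lower:]' '[:upper:]'
# prints: 31D6CFE0D16AE931B73C59D7E0C089C0
```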

Now we have the password we can extract the archive and view its contents – a PNG image containing our flag.



Despite completing the first challenge I found the lack of direction or motivation incredibly frustrating. The whole point of forensic investigation is to follow a trail, building on what has been found already to come to a specified conclusion. Real investigations have a purpose. Why was this memory dump captured in the first place? Why are you asking me to take the time to do some analysis? I did learn a new technique in finding Flag 2, but for now I am skipping the rest of Memlabs to work on something more representative of real-world DFIR.

OtterCTF 2018 – Memory Forensics Write-up

In an effort to improve my forensics skills I have been working through publicly available forensics CTFs when I have some free time.

OtterCTF dates from December 2018 and includes reverse engineering, steganography, network traffic, and more traditional forensics challenges. This write-up only covers the memory forensics portion, but the whole CTF is available to play as of the publication of this post.

The first thing to do is download the memory image (OtterCTF.vmem). There weren’t any hashes published to check against, but I calculated the following:

MD5: ad51f4ada4151eab76f2dce8dea69868
SHA1: e6929ec61eb22af198186238bc916497e7c2b1d2

Let’s get on with it…

Question 1 – What the password?

Question 1 - you got a sample of rick's PC's memory. can you get his user password?

Before we can get started on analysis we need to tell Volatility what kind of memory image we are working with. The imageinfo plugin will scan the image and suggest a number of likely profiles.

volatility -f OtterCTF.vmem imageinfo

The Win7SP1x64 profile seems like a sensible choice for now (we can always revisit this later if we run into errors). Onto the analysis!

The hashdump plugin will, unsurprisingly, dump the NTLM hashes from the SYSTEM and SAM registry hives.

volatility -f OtterCTF.vmem --profile="Win7SP1x64" hashdump

The question asks for the user password, not the password hash, so we can either try to crack this using tools like John the Ripper or Hashcat (or Google), or we can try extracting the plaintext password from the LSA secrets using the lsadump plugin.

volatility -f OtterCTF.vmem --profile="Win7SP1x64" lsadump

And we have our first flag:


Question 2 – General Info

Question 2 - Let's start easy - whats the PC's name and IP address?

We need to find the IP address and hostname of Rick’s machine. The netscan plugin will give us the network data we need.

volatility -f OtterCTF.vmem --profile="Win7SP1x64" netscan

We can rule out a couple of entries bound to local addresses, leaving us with the machine’s IP address.


The hostname is stored in the SYSTEM registry hive. Before we can query the hive we need to find its offset.

volatility -f OtterCTF.vmem --profile="Win7SP1x64" hivelist

Supplying the printkey plugin with the offset and the name of the relevant registry key gives us the second flag for this question.

volatility -f OtterCTF.vmem --profile="Win7SP1x64" printkey -o 0xfffff8a000024010 -K "ControlSet001\Control\ComputerName\ComputerName"


Question 3 – Play Time

Question 3 - Rick just loves to play some good old videogames. can you tell which game is he playing? whats the IP address of the server?

The pstree plugin gives us a nice view of running processes.

volatility -f OtterCTF.vmem --profile="Win7SP1x64" pstree

Google tells me that LunarMS is associated with an old MMORPG, so there’s the first part of our answer.


Finding the IP of the server is simply a matter of running the netscan plugin and using grep to filter on the LunarMS process.

volatility -f OtterCTF.vmem --profile="Win7SP1x64" netscan | grep "LunarMS"


Question 4 – Name Game

Question 4 - We know that the account was logged in to a channel called Lunar-3. what is the account name?

The account name will be somewhere in the process memory; let’s dump that out to make the next step a bit easier. We know the PID of the LunarMS process is 708, so pass that to the memdump plugin, then use strings and grep to filter the output. The -C 10 flag tells grep to return the 10 lines above and below the matching line.

volatility -f OtterCTF.vmem --profile="Win7SP1x64" memdump -p 708 -D .
strings 708.dmp > 708.dmp.strings
grep -C 10 "Lunar-3" 708.dmp.strings

Given the previous references to otters in this CTF, one line stands out:


Question 5 – Name Game 2

Question 5 - From a little research we found that the username of the logged on character is always after this signature: 0x64 0x??{6-8} 0x40 0x06 0x??

We are given a sequence of bytes and told that the data we want will follow. We already have a dump of the LunarMS process memory from Question 4 so this is all about searching. For simplicity I only used the last eight bytes in the sequence in my search, employing xxd to display the bytes and grep to search for the end of our target pattern.

xxd 708.dmp | grep "5a0c 0000"

There is some human-readable text at 0x0c33a4ac so let’s use xxd again to give us the next 16 bytes of our process memory dump.

That looks like our flag.
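As an aside, GNU grep can search the raw dump for the whole wildcard signature instead of just its tail – a sketch, with (?s) making . match any byte, including newlines:

```shell
# -a: treat binary as text, -o: print only the match, -b: show the byte offset, -P: PCRE
grep -aobP '(?s)\x64.{6,8}\x40\x06.' 708.dmp
```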


Question 6 – Silly Rick

Flag 6 - Silly rick always forgets his email's password, so he uses a Stored Password Services online to store his password. He always copy and paste the password so he will not get it wrong. whats rick's email password?

We get a hint that Rick always copies and pastes his password, so the clipboard plugin is likely to give us what we need for this question.

volatility -f OtterCTF.vmem --profile="Win7SP1x64" clipboard

And there we are – Rick’s email password.


Question 7 – Hide and Seek

Flag 7 - The reason that we took rick's PC memory dump is because there was a malware infection. Please find the malware process name (including the extension) BEAWARE! There are only 3 attempts to get the right flag!

Listing the processes with pstree, we can see one called Rick and Morty, with a child process called vmware-tray.ex – that’s unusual.

volatility -f OtterCTF.vmem --profile="Win7SP1x64" pstree

By supplying the PIDs to the cmdline plugin we can see the full command lines associated with both our unusual processes.

volatility -f OtterCTF.vmem --profile="Win7SP1x64" cmdline -p 3820,3720

An executable running from the user’s AppData\Local\Temp directory is particularly odd. Submitting the name and extension of the executable gives us our flag.


Question 8 – Path to Glory

Flag 8 - How did the malware got to rick's PC? It must be one of rick old illegal habits...

In Question 7 we found a file path suggesting that Bittorrent was involved; let’s go find the associated torrent file. Using the filescan plugin and filtering with grep gives us a few places to look.

volatility -f OtterCTF.vmem --profile="Win7SP1x64" filescan | grep -i "rick and morty"

We can extract files from the memory image by passing the offset to the dumpfiles plugin.

volatility -f OtterCTF.vmem --profile="Win7SP1x64" dumpfiles -Q 0x000000007d8813c0 -D .
cat file.None.0xfffffa801af10010.dat

Using the cat utility to display the contents of the file, we see that it is the Zone Identifier rather than the torrent itself. The line ZoneId=3 indicates that the torrent was downloaded from the internet – that might be useful for later. Let’s extract the next candidate for our torrent file.

volatility -f OtterCTF.vmem --profile="Win7SP1x64" dumpfiles -Q 0x000000007dae9350 -D .
strings file.None.0xfffffa801b42c9e0.dat

Running strings this time we can see the details of the torrent, including a comment on the final line that looks like our next flag.
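For reference, the Zone.Identifier alternate data stream we extracted by mistake is just a tiny INI fragment; a ZoneId of 3 maps to the Internet security zone:

```ini
[ZoneTransfer]
ZoneId=3
```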


Question 9 – Path to Glory 2

Flag 9 - Continue the search after the way that malware got in.

The Zone Identifier file we extracted by mistake in the last question indicates the torrent was downloaded from the internet. The number of chrome.exe processes observed in our pstree output suggests that Google Chrome is the primary browser. As with Question 8 we can use the filescan and dumpfiles plugins to find and extract the Chrome history database.

volatility -f OtterCTF.vmem --profile="Win7SP1x64" filescan | grep -ie "history$"
volatility -f OtterCTF.vmem --profile="Win7SP1x64" dumpfiles -Q 0x000000007d45dcc0 -D .

Chrome stores history data in a SQLite database. I renamed the file to chrome-history.sqlite, and used the sqlite3 utility to run the following query:

select current_path, site_url from downloads;

From the output of the database query we can see that the torrent file was downloaded from

Let’s dump the strings from our memory image and look for any artefacts related to

strings OtterCTF.vmem > OtterCTF.vmem.strings
grep "" OtterCTF.vmem.strings

The second line of the grep output resembles the address field of an email header; perhaps some message content was still in memory when the image was made. Using grep with the -A 20 flag to show the 20 lines following Rick’s email address gives us the following:

grep -A 20 "<>" OtterCTF.vmem.strings

Near the bottom of the output is a curious line of text that looks like our flag, and submitting it as an answer confirms it.

As an alternative method, because we have Rick’s email address and found his password in Question 6, we could try logging into his email account to check. But this is a memory forensics challenge.


Question 10 – Bit 4 Bit

Flag 10 - We've found out that the malware is a ransomware. Find the attacker's bitcoin address.

The question tells us that the malware is ransomware of some kind and asks for the associated Bitcoin address. Ransomware tends to drop a ransom note on the Desktop, so let’s look for that first.

volatility -f OtterCTF.vmem --profile="Win7SP1x64" filescan | grep "Desktop"

READ_IT.txt looks promising, and flag.txt might be useful to remember later on.

volatility -f OtterCTF.vmem --profile="Win7SP1x64" dumpfiles -Q 0x000000007d660500 -D .
cat file.None.0xfffffa801b2def10.dat

Unfortunately the note only tells us to Read the Program for more information. We identified the ransomware PID in Question 7, so let’s dump the process memory and run strings and grep to search for any mention of “ransom”. Note the slightly different strings command this time; the -e l flag is used to search for Unicode strings.

volatility -f OtterCTF.vmem --profile="Win7SP1x64" memdump -p 3720 -D .
strings -e l 3720.dmp | grep -i -A 5 "ransom"

We have found the payment demand, including the price in Bitcoin and the Bitcoin address.


Question 11 – Graphic’s for the Weak

Flag 11 - There's something fishy in the malware's graphics.

The only hint we have is to examine the malware’s graphics. We can dump the process executable using the procdump plugin, then use binwalk and foremost to identify and carve any graphics from the executable. There’s no real need for binwalk here, I just like to have an idea of what to carve for before running foremost.

volatility -f OtterCTF.vmem --profile="Win7SP1x64" procdump -p 3720 -D .
binwalk executable.3720.exe
foremost -t png executable.3720.exe

Checking the foremost output, we only have one PNG file but it does contain our flag.


Question 12 – Recovery

Flag 12 - Rick got to have his files recovered! What is the random password used to encrypt the files?

I expected this question to take a lot of trial and error with grep, so for speed I first extracted the human-readable Unicode strings to a file on disk instead of running strings over the whole memory image for every search. The wc -l command shows 374402 lines; let’s try to reduce that to something more manageable by searching for some of the things we have identified so far.

Searching for “password” didn’t turn up anything useful, and “rick” gave too many hits. Searching for the hostname was more promising, returning 212 hits. By using the sort and uniq commands we can eliminate duplicates and end up with a reasonable list to examine manually.

strings -e l 3720.dmp > 3720.dmp.strings
wc -l 3720.dmp.strings
grep "WIN-LO6FAF3DTFE" 3720.dmp.strings | wc -l
grep "WIN-LO6FAF3DTFE" 3720.dmp.strings | sort | uniq

The second last line looks interesting; the hostname and username concatenated together with a seemingly random alpha-numeric string.

Using grep again we see that this seemingly random string appears multiple times, making it a pretty good candidate for our password.


Question 13 – Closure

Flag 13 - Now that you extracted the password from the memory, could you decrypt rick's files?

Our final challenge is to use the password from Question 12 to decrypt Rick’s files. The first thing we need to find out is what kind of ransomware we are dealing with. We were able to extract the executable from the memory image in Question 11, and it’s possible that someone has already uploaded it to an online sandbox like VirusTotal. Let’s get the SHA1 hash and check.

sha1sum executable.3720.exe

Sure enough, there is a hit on VirusTotal, referencing an alternative executable name (VapeHacksLoader.exe) which is associated with the $ucyLocker ransomware referenced in the graphic we extracted in Question 11. $ucyLocker is a variant of the open-source Hidden Tear ransomware, and with a few Google searches I was able to find a pre-compiled decrypter.

Now we have identified the ransomware and found a decryption utility, let’s extract the file containing the final flag. In Question 10 we saw a file named Flag.txt on Rick’s Desktop.

volatility -f OtterCTF.vmem --profile="Win7SP1x64" filescan | grep "Flag.txt$"
volatility -f OtterCTF.vmem --profile="Win7SP1x64" dumpfiles -Q 0x000000007e410890 -D .

After extracting the file from the memory image we can examine it with xxd, which shows a block of 48 seemingly random bytes, followed by null-byte padding.

xxd file.None.0xfffffa801b0532e0.dat

The padding might cause problems for decrypting, so we extract the bytes we want to a new file called flag.txt using dd.

dd bs=1 count=48 if=file.None.0xfffffa801b0532e0.dat of=flag.txt
xxd flag.txt

As the decryptor would only run on Windows, and because I didn’t entirely trust a pre-compiled decryptor downloaded from the internet, I spun up a Windows 7 VM and copied flag.txt to the Desktop.

After specifying the file extension and supplying the password we extracted in Question 12, the tool ran and output a plaintext file named flag (the file extension having been stripped during decryption).

Opening the newly decrypted flag file gives us our final flag, and completes the memory forensics portion of the CTF.



Enabling Switch Configuration in OpenWRT Luci on Linksys WRT1900ACS

Since I installed Proxmox on my HP Microserver I’ve been meaning to build an isolated lab network rather than having vulnerable VMs in my main environment. My Microserver has two physical interfaces, so my plan was to split one of the ports on my router off onto its own VLAN and bind my lab VMs to that.

VLAN configuration in OpenWRT isn’t quite as straightforward as I’d hoped so being able to use the web-based interface to do this would be nice.

Sadly, the “Switch” configuration wasn’t available by default but after a bit of searching I found this OpenWRT forum thread with the answer!

By adding the following snippet to /etc/config/network and rebooting the router, the Switch config tab will be available in Luci:

config switch
        option name 'switch0'
        option reset '1'
        option enable_vlan '1'

config switch_vlan
        option device 'switch0'
        option vlan '1'
        option ports '0 1 2 3 6'

config switch_vlan
        option device 'switch0'
        option vlan '2'
        option ports '4 5'


Proxmox & Software RAID5 on HP Microserver Gen8

I recently bought an HP Microserver Gen8 with the intention of installing Proxmox and expanding my virtual lab environment. In my setup, Proxmox is installed and boots from the internal Micro-SD card reader, with four 2TB spinning disks (actually 3x2TB and 1x3TB, due to a shipping error in my favour!) to hold the virtual machines. The built-in hardware RAID controller lacks decent Linux drivers, so I used software RAID instead of the HP Smart Provisioning utility.

The setup wasn’t particularly difficult, but wasn’t particularly obvious either. This post is a rough guide to the process as I will almost certainly need to rebuild this thing at some point in the future! As Proxmox is Debian-based the Digital Ocean documentation was very useful.

First, disable the built-in RAID controller and have it operate as a standard SATA one instead.

  1. Boot into Setup (F9)
  2. System Options -> SATA Controller Options -> Embedded SATA Configuration -> Enable SATA AHCI Support

Reboot into Proxmox and update/upgrade the OS packages:

apt-get update && apt-get upgrade -y
apt-get dist-upgrade

Install mdadm. As Proxmox is installed on the SD card, there is no need to start the array before the OS boots, so enter “none” when prompted:

apt-get install mdadm

View the raw devices and build the RAID:

mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd

This will take a few hours; progress can be checked:

cat /proc/mdstat

Once the array is built, create the filesystem, create a mount point, and mount the RAID:

mkfs.ext4 -F /dev/md0
mkdir -p /mnt/md0
mount /dev/md0 /mnt/md0

Confirm that the RAID is available:

df -h -x devtmpfs -x tmpfs

Save the array layout, update initramfs, and automatically mount the array at boot:

mdadm --detail --scan | tee -a /etc/mdadm/mdadm.conf
update-initramfs -u
echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | tee -a /etc/fstab

The RAID is now ready for data. One tip a friend gave me (a little too late, unfortunately) was to check the logical identifiers for each disk and physically label each one so that when one fails, finding and replacing it will be easier.

Making Sense of 10010 OnionScan Results

A few months ago, Sarah Jamie Lewis released the wonderful OnionScan; a tool for enumerating (and resolving) potential security issues arising from poorly configured Tor Hidden Services. It’s kind of a big deal for people who are interested in that sort of thing.

As cool as OnionScan is, scanning Hidden Services one at a time tends to become rather tedious. Fortunately, Justin Seitz wrote up a nice tutorial on automating OnionScan through a Python wrapper, and being one of those people who are interested in that sort of thing, I set it all up on a dedicated server and left it to run for a few days.

Using Justin’s initial list of 8592 Hidden Services as a starting point, I ended up with 10010 completed scans (which was good) and 10010 distinct JSON files containing the results (which was not so good). “There’s bound to be something interesting in there”, I thought. I could get a rough idea of the state of things by grepping the results files for JSON entries, and even tried throwing EyeWitness at the web and VNC services, but still didn’t really get anywhere. What I really needed was some kind of database.

Introducing onionscan2db…

OnionScan can write its results out to machine-readable JSON files, so parsing them is fairly straightforward. I used Python for no other reason than I like it, and SQLite3 because it’s simple and Python supports it without the need for any additional modules.

The code is probably best described as “functional”. It’s not particularly pretty and there’s definitely room for speeding up the database writes, but it’ll take more than 10,000 JSON-formatted OnionScan results and build an SQLite database that can then be used to do something useful.


The tool is available from GitHub and can be run with the following command:

python onionscan2db -d <onionscan-results-directory> -o <output-database>

I’ll try to keep it updated in line with OnionScan, but the code is relatively modular and it shouldn’t be too difficult for anyone else to improve the database structure and import functions as necessary.

CREST Registered Intrusion Analyst

A little while ago I took (and passed) the CREST Registered Intrusion Analyst exam. This post won’t give anything away in terms of the exam itself, but hopefully will serve as a bit of background for anyone who happens to be thinking about trying for the certification, as I found information a bit lacking when I was preparing for it.

I’m not sure the CRIA certification is particularly well recognised. I only knew about CREST’s pen testing certs before, and none of my friends who still focus on forensics had even heard of it. In summary, CRIA is an entry level certification which covers aspects of network traffic analysis, host-based forensics, malware analysis, and briefly touches upon relevant laws and professional standards. The exam itself is split into a closed-book written multiple-choice paper and a longer open-book (but effectively no internet access) practical exam which, again, uses a multiple-choice format.

CREST provide so little information on what kind of topics will be covered that it’s easy to become a bit overwhelmed when trying to prepare (a complaint I hear a lot about CREST’s other exams). Remember that it’s an entry level certification – think “a mile wide, but an inch deep”. The suggested reading list is a great example of this lack of context:

Reading Material:
Hacking Exposed – Scanning and Enumeration
The Art of Memory Forensics:  Detecting Malware and Threats in Windows, Linux, and Mac Memory (by Michael Hale Ligh/Andrew Case/Jamie Levy/Aaron Walters)
Malware Forensic Field Guide for Windows Systems (by Syngress)
Practical Malware Analysis
Network Fundamentals: CCNA Exploration Companion Guide
Real Digital Forensics (particularly chapter 1, Windows Live Response)
TCP/IP Illustrated

TCP/IP Illustrated? Really? It’s three bloody volumes! While I’ve read at least parts of most of the suggested books, I didn’t pay a great deal of attention to CREST’s list. Instead, I’ve listed a few books I found helpful and included chapters where I could:

  • Red Team Field Manual (it’s just a good resource to have anyway)
  • Real Digital Forensics (Chapter 1, Windows Live Response)
  • Practical Packet Analysis
  • Practical Malware Analysis (Part 1, Basic Analysis)
  • Windows Forensic Analysis Toolkit, 3rd Edition

Another thing to consider is CREST’s policy of retaining your hard drive and wiping it before returning it. Rather than go through the hassle of imaging my day-to-day work laptop I used a spare one and just installed Kali Linux on it. This was fine for the majority of the exam, but I realised I tend to use a lot of Windows tools when doing malware analysis in particular. Kali has equivalents for everything you’re likely to need, though in my case it meant frantically scanning the man pages for the right command-line switches!

In all, I didn’t find the exam particularly difficult but the wide scope of the material was a little daunting. The more specialised follow-up certifications look a bit more interesting and actually strike me as being easier to prepare for, as at least they limit the scope of material to network traffic, malware, or host-based analysis.

Thoughts on Running a Tor Exit Node for a Year

I’m a big fan of Tor. Both as a concept, in that it allows people to access information that might otherwise be inaccessible*, and as an interesting technical project. In an effort to support the Tor network and to learn more about how it actually works, I’ve been hosting various Tor nodes on various boxes for a few years now, but around this time last year I stepped things up a bit and began running an Exit node that has consistently ranked in the top 100 worldwide in terms of usable bandwidth.

When I mention this to people I tend to get the same questions, so I thought it best to write the answers here, and maybe save a few people (including myself) some time.

Do you need special hardware?

No, not really. The Tor daemon doesn’t really take advantage of multi-core CPUs, so in most cases throwing extra processing power at it won’t give you much of an advantage. I rent a relatively low-end physical server (Celeron G530, 2GB RAM), but I found the biggest limitation to be affordable bandwidth. I have an uncapped 100Mbit/s line to my server – not blisteringly fast, but it’s saturated almost 100% of the time. In a typical month my Exit will shift somewhere around 35TB of traffic, combined upstream and down.


What do your hosting company think about that?

They’re OK with it! Not all hosting companies are, though, so if you’re thinking of running any kind of Tor node make sure to check first. I’m in the UK; my hosting company are not. Depending on where you, your hosting company, and their data centre are geographically, it’s unlikely to be illegal to run a node, but there’s a good chance it will be against the hosting company’s T&Cs, particularly in the case of Exit nodes.

The Tor Project wiki holds a pretty comprehensive list of good and bad hosting companies and ISPs.

What about abuse reports?

There will be abuse reports. Learn to deal with them – ignoring them altogether is usually a good way to get on the bad side of your hosting company. There are things you can do to cut down on the number of abuse reports you receive; the most effective in my experience is to configure a reduced exit policy, blocking ports commonly used for things like SMTP and BitTorrent**. It’s not perfect, but it has dramatically cut the number of reports I have to deal with – I tend to get about one a week on average now.
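For reference, an exit policy is just a set of ExitPolicy lines in torrc, evaluated top to bottom. A heavily trimmed illustration of the idea follows – the actual reduced exit policy published by the Tor Project allows a much longer list of ports, so don’t copy this verbatim:

```
# Allow common web traffic, reject everything else.
# (The Tor Project's ReducedExitPolicy page lists the full set of
# ports worth allowing; this is just the shape of the config.)
ExitPolicy accept *:80
ExitPolicy accept *:443
ExitPolicy reject *:*
```

Notably, there’s no way to block BitTorrent itself – only the ports it commonly uses – which is why a reduced policy cuts abuse reports down rather than eliminating them.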

Can I run a Tor node from home?

You can, but it’s really best not to. That’s especially true for Exit nodes. For one thing, your home broadband connection is probably not fast enough to contribute any meaningful bandwidth. Second, the IP addresses of all Tor Relay (Middle node) and Exit nodes are publicly available, and as a straightforward way of cutting down on the sort of abuse I described above, more and more online services are just blocking all those IP addresses outright. It’s not very subtle but it does work! So you can run a Relay or an Exit from home, but you’ll probably find that sooner or later Netflix will stop working. Your call.

A better option for those who want to contribute to Tor from a home connection is running a Bridge node, or donating directly to an organisation like

Aren’t you worried about the Police/GCHQ/Mossad/3PLA/etc?

Not especially; I’ve certainly never had any legal troubles because of Tor. By its very nature, though, traffic from a system like Tor is likely to be more interesting than the rest of the internet as far as a nation-state is concerned, and with only about 1,000 Exit nodes running, monitoring all of them is well within the capabilities of a reasonably funded SIGINT agency. I assume my Exit (along with all the rest of them) is being monitored, if not actively targeted. Other than hardening the box as far as possible there’s not much more that can be done against an adversary like that. Who knows, maybe one day I’ll end up with some fun malware to analyse.


* I’ve done a lot of forensics work in my time and been exposed to all kinds of Bad Stuff as a result. I am by no means naive enough to suggest that systems like Tor don’t help people access Bad Stuff, but I think on balance the positive uses outweigh the bad ones.

** BitTorrent over Tor is a bad idea in general. Firstly, it doesn’t give you any anonymity. And second, it slows the network down for everyone else. I block common BitTorrent ports on my exits. Don’t like it? Run your own.

Making Sense of 2,027,189 Login Attempts

Back in January I began setting up a Kippo SSH honeypot on an old VPS that I wasn’t really using for anything else. As it was a spur-of-the-moment kind of thing I spent an hour or two making the Kippo SSH service look a bit more interesting (and less like Kippo) before hardening the real SSH service and promptly forgetting all about it.

Until last week when I logged into the VPS for the first time in close to six months, noticed a suspiciously large number of SSH connections, and after a brief moment of panic, realised I had six months of honeypot data to play with! The excitement didn’t last very long.

Kippo Graph

There’s quite a lot of 3rd party development around Kippo, and one of the really nice projects is kippo-graph. Kippo-graph was written by @ikoniaris and is a collection of (mostly) PHP files which extract data from Kippo’s MySQL database, and generate and display lots of nice graphs, charts, and statistics showing what’s been happening on the honeypot. It’s exactly what I needed! The only problem was that in my haste to get Kippo up and running I’d forgotten to enable MySQL logging…

6,732 Text Files


Instead of a nice database I was left with roughly 6,700 text files detailing every IP address, password attempt, and console command issued over a six month period. Analysing them manually obviously wasn’t going to work, so my only other option was to parse the text files and build the database myself. While I could probably have taken a few days to throw something together in Python, a bit of searching pointed me at Ion’s blog about kippo-log2db. I was getting closer.
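Throwing something together in Python wouldn’t have been a huge amount of code, for what it’s worth. A rough sketch of the kind of parser involved is below – the log line format in the regex is an assumption based on typical Kippo output, and the single `auth` table is a simplification of the real Kippo schema:

```python
import re
import sqlite3

# Approximate shape of a Kippo "login attempt" log line, e.g.:
# 2014-06-07 12:34:56+0000 [SSHService ssh-userauth on
#   HoneyPotTransport,12,203.0.113.5] login attempt [root/123456] failed
# This pattern is an assumption about the format, not taken from real logs.
LOGIN_RE = re.compile(
    r"^(?P<ts>\S+ \S+) "
    r"\[SSHService ssh-userauth on \S+,\d+,(?P<ip>[\d.]+)\] "
    r"login attempt \[(?P<user>[^/]*)/(?P<pw>[^\]]*)\] (?P<result>succeeded|failed)"
)

def import_logs(log_lines, db_path="kippo.db"):
    """Parse Kippo log lines and load the login attempts into SQLite."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS auth "
        "(timestamp TEXT, ip TEXT, username TEXT, password TEXT, success INTEGER)"
    )
    for line in log_lines:
        m = LOGIN_RE.match(line)
        if m:  # silently skip the (many) non-login lines
            conn.execute(
                "INSERT INTO auth VALUES (?, ?, ?, ?, ?)",
                (m["ts"], m["ip"], m["user"], m["pw"],
                 int(m["result"] == "succeeded")),
            )
    conn.commit()
    return conn
```

In practice I was happy to let someone else’s script do the work, as described below.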


Kippo-log2db is a Perl script by Jim Clausing (@jclausing) that parses the Kippo log files and creates a MySQL database following the original Kippo schema. After downloading the script and giving it the correct MySQL credentials, my initial attempts were met with a couple of recurring errors:

DBD::mysql::st fetchrow_array failed: fetch() without execute() at ./ line 98,


DBD::mysql::st execute failed: Column ‘sensor’ cannot be null at ./ line 125,

The first of these errors appears to be due to the script trying to reference an empty set of results pulled from an earlier MySQL query. I’m not a Perl coder, and the script was adding records to the database, so I just let that one go. The second error was more concerning, referencing a database error as a result of trying to insert a null value into the “sensor” column. My main concern was that this indicated my logs were incomplete, or otherwise lacking values, but being too impatient to go digging through the log files I simply modified the database schema to allow null values in the “sessions” table. This is likely to have caused a few problems later on.
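The schema change itself was trivial; something along these lines, assuming the stock Kippo schema where `sensor` is an integer column (check your own `CREATE TABLE` statement before copying this):

```
-- Allow NULLs in the sensor column so the import can continue.
-- The int(4) type is an assumption about the stock Kippo schema.
ALTER TABLE sessions MODIFY sensor int(4) NULL;
```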

“This could take some time to complete”

Following the modifications I restarted the script and left it to run. Jim warns in his script that it might take some time; he’s not joking. I originally used an old 256MB Raspberry Pi to run the import, but after running it overnight and seeing it had only completed 200 of the 1,600 log files, I moved the data to one of my dedicated boxes and restarted the process. Even so, the import took a little over three days to complete, but left me with the nice database I needed.

I quickly downloaded and configured kippo-graph, then fired up a web browser to see what it made of my efforts.


2,027,189 login attempts! The errors I encountered during the database creation seem to have propagated through in places. For example, kippo-graph seems to think that every single login attempt failed.

Other charts are more useful though, and the kippo-input and kippo-playlog functions are simply brilliant. The process didn’t work perfectly, but well enough that I should be able to get something interesting out of my data.

Of course, this could all have been avoided if I’d remembered to enable MySQL logging in the first place!

Forensic Analysis of the Nintendo Wii Game Console

By popular* demand… my MSc thesis from 2010. Still, as far as I’m aware, the most complete analysis of the original Nintendo Wii console. Possibly for good reason!


Like other modern game consoles, the Nintendo Wii provides users with a powerful networked device capable of performing many of the tasks carried out by a conventional desktop personal computer. Unlike other modern game consoles however, the Nintendo Wii utilises an internal NAND flash storage device in lieu of a standard hard disk drive, and thus cannot be imaged in the same manner as the Microsoft Xbox or Sony Playstation 3. The difficulties in imaging the device are exacerbated by the tightly-controlled, proprietary nature of the platform, and have led to forensic examiners being faced with the choice of ignoring the Nintendo Wii completely, or performing a live examination and potentially destroying evidence.

Through a series of experiments, this report investigates the feasibility of a number of hardware and software procedures designed to capture the raw data held by the Nintendo Wii’s NAND flash storage device so that conventional digital forensic techniques may be applied to the console. In addition to the successful capture of data, this report also describes a process by which the console can be restored to a previously-captured state, reducing the risks associated with performing a live examination. Also described is the analysis of the captured NAND flash image, which has demonstrated the recovery of a partial history of internet usage and sent Wii Message Board communications – information which was previously thought to be inaccessible by any other means.

Link: Forensic Analysis of the Nintendo Wii Game Console
SHA1: 376de01c8e404cd6674199c19c20b7cb456355d3
MD5: 24ec2d5cc539d3f7d2dc7168b077af77