Magnet Weekly CTF – Week 6 – The Elephant in the Room

The Magnet Forensics Weekly CTF has been running since October and sets one question each week using an image that changes each month. The October questions were based on an Android filesystem dump. November’s image is Linux, more specifically a Hadoop cluster comprising three E01 files. The images were created by Ali Hadi as part of his OSDFCon 2019 Linux Forensics workshop; the November CTF questions are based on Case 2, which can be downloaded here.

This week is a little different and split into two parts, with the second part revealed once the first has been successfully answered. You can find my other Magnet Weekly CTF write-ups here.

Part 1 (25 points)

Hadoop is a complex framework from Apache used to perform distributed processing of large data sets. Like most frameworks, it relies on many dependencies to run smoothly. Fortunately, it’s designed to install all of these dependencies automatically. On the secondary nodes (not the MAIN node) your colleague recollects seeing one particular dependency failed to install correctly. Your task is to find the specific error code that led to this failed dependency installation. [Flag is numeric]

We have three E01 images making up the Hadoop cluster – Master, Slave1, Slave2 – and this time we are looking at either Slave1 or Slave2. I started with Slave1, and mounted the E01 file in the same way as in the Week 5 challenge. From a root shell:

# ewfmount /mnt/hgfs/Shared/mwctf/linux/HDFS-Slave1.E01 /mnt/ewf
# mmls /mnt/ewf/ewf1
# losetup --read-only --offset $((2048*512)) /dev/loop20 /mnt/ewf/ewf1
# mount -o ro,noload,noexec /dev/loop20 /mnt/ewf_mount/

Now that we have the main ext4 partition mounted, we can get on with the analysis. We are looking for logs relating to package management; the release information shows that the underlying system is Ubuntu 16.04, so the APT package manager seems a reasonable place to start.

cat /mnt/ewf_mount/etc/lsb-release

APT keeps two logs under the /var/log/apt directory:

/var/log/apt/history.log
/var/log/apt/term.log
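It’s worth listing the directory first; if any rotated copies are present (term.log.1.gz and so on – an assumption, as APT rotates these logs by default), zgrep can search those without manual decompression:

ls /mnt/ewf_mount/var/log/apt/
zgrep -i "error" /mnt/ewf_mount/var/log/apt/*.gz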

Checking history.log first, I found that the oracle-java7-installer, oracle-java8-installer, and oracle-java9-installer packages all failed to install correctly. Hadoop requires Java to function, so this is looking good.

less /mnt/ewf_mount/var/log/apt/history.log

The history.log file shows that the dpkg sub-process failed with error code 1; unfortunately this isn’t the answer we are looking for, so let’s try term.log instead.

We can quickly filter the errors using grep, with the -C 5 argument to provide some context around the matches.

cat /mnt/ewf_mount/var/log/apt/term.log | grep -C 5 "ERROR"

Examining the grep output from term.log, we find that the oracle-java7-installer package failed to download, resulting in the dpkg error we saw in history.log. This time, however, we also see the HTTP 404 error code, indicating that the package file was not found. Submit this error code, and we have completed Part 1!

Flag (Part 1)

404

Part 2 (50 points)

Don’t panic about the failed dependency installation. A very closely related dependency was installed successfully at some point, which should do the trick. Where did it land? In that folder, compared to its binary neighbors nearby, this particular file seems rather an ELFant. Using the error code from your first task, search for symbols beginning with the same number (HINT: leading 0’s don’t count). There are three in particular whose name share a common word between them. What is the word?

The question is a bit of a riddle, but the oracle-java packages failed to install, and we know from Week 5 that the Java JDK was installed to /usr/local/jdk1.8.0_151, so there’s a starting point. The question also references binary files and ELF, the standard binary format on Linux systems, so my guess is that we need to examine the symbol tables within the ELF binaries.

The Java binaries are contained in the /usr/local/jdk1.8.0_151/bin directory.

ll /mnt/ewf_mount/usr/local/jdk1.8.0_151/bin/

We can check the file types using the file command, and filter the ELF executables using grep:

file /mnt/ewf_mount/usr/local/jdk1.8.0_151/bin/* | grep "ELF"
file /mnt/ewf_mount/usr/local/jdk1.8.0_151/bin/* | grep "ELF" | wc -l

According to the file output there are 42 ELF binaries; we can dump their symbol tables using the readelf utility, but which executable are we looking for? Rather than checking each file individually, I dumped the symbol tables from all of the binaries (sending the error messages to /dev/null) and used grep to filter for “404”.

readelf --symbols /mnt/ewf_mount/usr/local/jdk1.8.0_151/bin/* 2>/dev/null | grep 404

We are looking for a common word, shared between three symbols. One jumps out – deflate.

The answer was accepted, but for completeness’ sake, let’s find out which executable the question referred to.

readelf --symbols /mnt/ewf_mount/usr/local/jdk1.8.0_151/bin/* 2>/dev/null | grep -E "File: |404" | grep -B 1 "deflate"

Employing a bit of grep-fu to tidy things up, we can see that the executable in question is:

/mnt/ewf_mount/usr/local/jdk1.8.0_151/bin/unpack200

Flag (Part 2)

deflate

Memlabs Memory Forensics Challenges – Lab 5 Write-up

Memlabs is a set of six CTF-style memory forensics challenges released in January 2020 by @_abhiramkumar and Team bi0s. I have been working on the Magnet Weekly CTF recently, so the other write-ups I had in progress have been sitting partially finished for a while now. This write-up covers Lab 5 – Black Tuesday, which I worked on back in July! You can find the rest of my Memlabs write-ups here.

MD5: 9dd6cb1134c9b018020bad44f27394db
SHA1: 289ec571ca6000b6234dee20c28d4cdba13e4ab7

After downloading the memory image and calculating the hashes, the first thing to do is determine which profile Volatility should use for the rest of the analysis.

vol.py -f MemoryDump_Lab5.raw imageinfo

The imageinfo plugin suggests a few profiles we can use; let’s go with Win7SP1x64 for now, and check the running processes with pstree.

vol.py -f MemoryDump_Lab5.raw --profile=Win7SP1x64 pstree

There are a few user processes that warrant investigation but no immediately obvious starting point. The cmdline plugin will display any command-line arguments that were passed when the process was started. Initially I ran the cmdline plugin with no additional arguments, but I have truncated the output to focus on two of the more interesting processes – WinRAR.exe and NOTEPAD.EXE.

vol.py -f MemoryDump_Lab5.raw --profile=Win7SP1x64 cmdline -p 2924,2724

We can see that the WinRAR process has a file associated with it; using the filescan and dumpfiles plugins we can extract this file from the memory image.

vol.py -f MemoryDump_Lab5.raw --profile=Win7SP1x64 filescan > filescan.txt
grep -E 'SW1wb3J0YW50.rar$' filescan.txt
vol.py -f MemoryDump_Lab5.raw --profile=Win7SP1x64 dumpfiles -Q 0x000000003eed56f0 -D . -n

Now we have the RAR file…

unrar e SW1wb3J0YW50.rar

…but we don’t have the password to open it. The RAR file contains Stage2.png; let’s go find the Stage 1 password.

After going down a few dead ends, I tried the screenshot plugin which, as the name suggests, allows us to see what was displayed on the desktop at the time the memory dump was taken. The screenshots are actually wire-frame drawings showing the positions and titles of the displayed windows. They won’t show us the full window content, but they are often enough to get an idea of what was displayed on the desktop.

vol.py -f MemoryDump_Lab5.raw --profile=Win7SP1x64 screenshot -D screenshot-output/

The screenshot plugin outputs a number of images; most are empty but one (session_1.WinSta0.Default.png) shows us that the Windows Photo Viewer was displaying a file with what looks to be a base64 filename:

ZmxhZ3shIV93M0xMX2QwbjNfU3Q0ZzMtMV8wZl9MNEJfM19EMG4zXyEhfQ

We can decode the base64 with CyberChef…
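(If CyberChef isn’t to hand, base64 on the command line does the same job; the filename has dropped the two trailing = padding characters, so we append them first.)

echo 'ZmxhZ3shIV93M0xMX2QwbjNfU3Q0ZzMtMV8wZl9MNEJfM19EMG4zXyEhfQ==' | base64 -d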

…and there’s our Stage 1 flag:

flag{!!_w3LL_d0n3_St4g3-1_0f_L4B_5_D0n3_!!}

Now, we can go back to the RAR file and extract the Stage2.png:

Success!

flag{W1th_th1s_$taGe_2_1s_c0mPL3T3_!!}

Between “completing” this lab in the middle of July 2020 and finding time to write it up (in November!), the challenge description has been updated with the following note:

This challenge is composed of 3 flags. If you think 2nd flag is the end, it isn’t!! 😛

What?

This kind of thing was exactly what put me off the Memlabs challenges in the first place. Maybe the NOTEPAD.EXE process (PID: 2724) is worth examining, but I’m moving on.

Magnet Weekly CTF – Week 5 – Had-A-Loop Around the Block

The Magnet Forensics Weekly CTF has been running since October and sets one question each week using an image that changes each month. The October questions were based on an Android filesystem dump. November’s image is Linux, more specifically a Hadoop cluster comprising three E01 files. The images were created by Ali Hadi as part of his OSDFCon 2019 Linux Forensics workshop; the November CTF questions are based on Case 2, which can be downloaded here.

You can find my other Magnet Weekly CTF write-ups here.

Had-A-Loop Around the Block (75 points)

What is the original filename for block 1073741825?

New month, new image. I’ve done some Linux forensics before but never anything involving Hadoop. And this question is worth 75 points. Week 3 was only worth 40, so this gives an indication that it’s going to be a long one!

The Case 2 image set comprises three hosts:

  • HDFS-Master
  • HDFS-Slave1
  • HDFS-Slave2

I started with HDFS-Master.E01 just because it seemed like a sensible place to begin. The first thing to do is mount the disk image and see what we have.

Part 1 – Mounting E01 files using SIFT Workstation

Most Linux forensics tools are happiest when they are working with raw disk images. The fact we have Expert Witness Format (E01) files complicates things a little, but not too much.

I like to use free or open-source tools as far as possible for CTFs so we are going to mount the image as a loopback device using ewfmount and tools from The Sleuthkit – all available in the SANS SIFT virtual machine.

One of the advantages of E01 files is that they can also contain case metadata. We can view this metadata using the ewfinfo tool.

ewfinfo /mnt/hgfs/Shared/mwctf/linux/HDFS-Master.E01

Before we create the loopback device we need to get our E01 file into something resembling a raw disk image. We could convert the E01 to raw using ewfexport but that takes time and expands our image to the full 80GB disk. Instead, we will use ewfmount to create something the standard Linux tools can work with.

sudo ewfmount /mnt/hgfs/Shared/mwctf/linux/HDFS-Master.E01 /mnt/ewf

ewfmount creates a read-only, virtual raw disk image located at /mnt/ewf/ewf1. The next thing to do is check the geometry of the disk. I used mmls from The Sleuthkit to dump the partition table; we’ll need this data for the next step.

(From here on I had to sudo to a root shell due to the permissions that ewfmount left me with)

sudo -s
# mmls /mnt/ewf/ewf1

Partition 002 is the one we are interested in. Its description tells us it is Linux ext2/3/4 and the length means it is the largest single partition on the disk. The part we need to take note of for now is the Start sector offset: 2048. We will use this later to mount the partition. First though, let’s get some more information about the filesystem on the partition.

# fsstat -o 2048 /mnt/ewf/ewf1 | tee /home/sansforensics/mwctf/fsstat-2048.out

The fsstat command gives us a lot of information that might be useful later on, so I used tee to save the output to a file. The output confirms that we are dealing with an ext4 filesystem which, helpfully, was unmounted correctly! Now, we can move on and create the loopback device which will then allow us to mount the filesystem.

# losetup --read-only --offset $((2048*512)) /dev/loop20 /mnt/ewf/ewf1
# file -s /dev/loop20

This step gave me a lot of problems relating to the loop device being “unavailable”; losetup should be smart enough to use the next available device without prompting, but eventually I found that if I set the device myself (/dev/loop20, in my case) the command succeeded. The other aspects to note are that I created the loopback device as read-only – ewfmount already created a read-only device for us, but practice safe mounting – and that the offset value is the sector offset from mmls (2048) multiplied by the sector size in bytes (512).
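As an aside, newer versions of losetup can find the first unused device themselves and print its name, which avoids the guessing game; combining the --find and --show options should do it:

# losetup --find --show --read-only --offset $((2048*512)) /mnt/ewf/ewf1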

Now we can move on to the final stage of preparation and actually mount the filesystem.

# mount -o ro,noload,noexec /dev/loop20 /mnt/ewf_mount/

I also ran into a problem with my initial attempt to mount the filesystem. I suspect this was because the journal was in need of recovery (as per the file -s output above). Adding the noload option tells the filesystem driver to ignore the journal errors, and allows us to mount the filesystem successfully! Again, read-only.

Part 2 – ext4 Block Analysis

Now that we have the filesystem mounted, we can get going on the analysis. The question asks for the filename for block 1073741825. My first thought was the ext4 block. I have recovered deleted files from ext4 in the past by working from the inode, via the block group, to the raw blocks on disk (Hal Pomeranz gave an excellent webcast covering exactly this scenario), so maybe I can work backwards from the block number?

But that block number looks awfully large, especially for an 80GB disk. Let’s take another look at our saved fsstat output.

cat mwctf/fsstat-2048.out | grep -A 6 "CONTENT INFORMATION"

The question asks about block number 1,073,741,825 but the filesystem only contains 20,446,976 blocks. Okay, so we are not looking for an ext4 block. But, this is a Hadoop cluster. How does Hadoop store data?
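Before moving on, note that the block number itself is a clue. It is exactly 2^30 + 1 – a quick shell check confirms the arithmetic – and, as far as I can tell, HDFS allocates its sequential block IDs starting just above 2^30, which would make this one of the first blocks ever written to the cluster.

echo $((2**30 + 1))    # prints 1073741825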

Part 3 – Investigating Hadoop

The best resource I found to get a quick overview of performing forensic analysis of Hadoop (rather than using Hadoop to perform analysis) was Kevvie Fowler’s helpfully titled Hadoop Forensics presentation from the 2016 SANS DFIR Summit. Armed with this and some Googling, I located the Hadoop installation and data in the following directory:

/mnt/ewf_mount/usr/local/hadoop

I was looking for the namenode location, which holds the fsimage files, which in turn hold the metadata we are looking for. I found this by examining the hdfs-site.xml configuration file:

cat /mnt/ewf_mount/usr/local/hadoop/etc/hadoop/hdfs-site.xml

Looking under the namenode directory we find the fsimage files. The edits_ files can be thought of as transaction logs; best practice would be to merge these before doing the analysis, but for our needs this wasn’t necessary.

ll /mnt/ewf_mount/usr/local/hadoop/hadoop2_data/hdfs/namenode/current

Now that we have found the fsimage files, we need to get intelligible data out of them. Hadoop makes heavy use of a utility named hdfs. Among the many functions hdfs provides is the Offline Image Viewer (oiv), which can be used to parse the fsimage files and output something human-readable. That sounds like exactly what we are after; the next problem is how to run it!

I don’t have Hadoop on my SIFT VM and installing it looks a bit fiddly, but we have a disk image from a (presumably) working Hadoop installation so maybe we can use that instead?

ll /mnt/ewf_mount/usr/local/hadoop/bin/

This is where things get a bit hacky. I mounted the filesystem using the noexec option as a protection against accidentally executing scripts and binaries from the disk image, but now that’s exactly what I want to do, so I unmounted and remounted the filesystem to allow this.

# umount /mnt/ewf_mount
# mount -o ro,noload /dev/loop20 /mnt/ewf_mount/

However, the Offline Image Viewer (hdfs oiv) throws an error because the Java path is incorrect.

/mnt/ewf_mount/usr/local/hadoop/bin/hdfs oiv -h

The Offline Image Viewer is looking for Java under /usr/local/ instead of /mnt/ewf_mount/usr/local/, which would take the mounted disk image into account. I tried inspecting the script and exporting a new $JAVA_HOME environment variable, but it seems the Offline Image Viewer takes the variable from a file, and as we are working on a read-only filesystem, we can’t easily change that. So instead of fighting to get the Offline Image Viewer to recognise an updated path, I simply copied the Java installation from the image to my native /usr/local directory and tried again.

sudo cp -r /mnt/ewf_mount/usr/local/jdk1.8.0_151 /usr/local
/mnt/ewf_mount/usr/local/hadoop/bin/hdfs oiv -h

Better! We have an exception because hdfs cannot write to its log file on a read-only filesystem, but the Offline Image Viewer runs! Let’s see if it can extract anything from the fsimage files we identified earlier.

/mnt/ewf_mount/usr/local/hadoop/bin/hdfs oiv -i /mnt/ewf_mount/usr/local/hadoop/hadoop2_data/hdfs/namenode/current/fsimage_0000000000000000024 -o /home/sansforensics/mwctf/fsimage_24.xml -p XML

We have the same exception because of the read-only filesystem, but…

cat /home/sansforensics/mwctf/fsimage_24.xml

… we have an XML file! After making the XML look pretty and searching for the block number, we find our answer in the name tag.
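If you want to do the same from the terminal, xmllint can handle the pretty-printing and grep the searching; a sketch (the -B context count is a guess at how far the name tag sits above the block ID in the XML):

xmllint --format /home/sansforensics/mwctf/fsimage_24.xml | grep -B 10 "1073741825"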

Week 5 done!

Flag

AptSource

Magnet Weekly CTF – Week 4 – Animals That Never Forget

Magnet Forensics have announced a weekly CTF running from October 2020. A new challenge will be released each week on Monday, and the first few are based on an Android filesystem dump. You can find my other Magnet Weekly CTF write-ups here.

MD5: 3bb6abb3bf6d09e3e65d20efc7ec23b1
SHA1: 10cc6d43edae77e7a85b77b46a294fc8a05e731d

The Week 3 challenge really increased the difficulty, involving a bit of file carving to determine the location where a photograph was captured. This week, though, I mostly relied on luck. And grep.

Animals That Never Forget (25 points)

Chester likes to be organized with his busy schedule. Global Unique Identifiers change often, just like his schedule but sometimes Chester enjoys phishing. What was the original GUID for his phishing expedition?

I really had no idea where to start with this one. Calendar data? ALEAPP says no. I decided to simply extract the evidence TAR file and grep through the dump for “phish”, saving the output for later with tee.

grep -r -i 'phish' MUS_Android/data/ | tee grep_phish.out

Not a bad start. We have a lot of matches in binary files, so let’s exclude those from the output and see what’s left.

cat grep_phish.out | grep -v '^Binary file ' | grep -i 'phish'

Better! We have a few matches in a log belonging to the Evernote application, and we might even have our answer already!

oldGuid=7605cc68-8ef3-4274-b6c2-4a9d26acabf1

As with Week 3, we only have three attempts to answer this so let’s dig into the log file and see what else we can find.

grep 'oldGuid=' MUS_Android/data/data/com.evernote/files/logs/log_main.txt

The oldGuid key appears four times, along with timestamps; the question asks for the original GUID, presumably the earliest one in the file. We can use grep again to show only the lines relating to our phishing note.

grep 'oldGuid=' MUS_Android/data/data/com.evernote/files/logs/log_main.txt | grep 'title=Phishy Phish phish'

The first hit is the earlier of the two, so things are looking good. The screenshot below highlights the oldGuid value.

I think that’s probably enough to verify that we at least have a reasonable answer before burning one of the three attempts. Submit the oldGuid value and we have successfully completed the Week 4 challenge!

Flag

7605cc68-8ef3-4274-b6c2-4a9d26acabf1

Magnet Weekly CTF – Week 3 – Cargo Hold

Magnet Forensics have announced a weekly CTF running from October 2020. A new challenge will be released each week on Monday, and the first few are based on an Android filesystem dump. You can find my other Magnet Weekly CTF write-ups here.

MD5: 3bb6abb3bf6d09e3e65d20efc7ec23b1
SHA1: 10cc6d43edae77e7a85b77b46a294fc8a05e731d

The Week 1 and Week 2 challenges didn’t require too much in terms of analysis – some UNIX file knowledge, and some output parsed by ALEAPP – but Week 3 really stepped up the difficulty!

Cargo Hold (40 points)

Which exit did the device user pass by that could have been taken for Cargo?

Max of 3 attempts

Ok, so we are looking for location data of some kind and something relating to cargo. I found some references to airport (and bus) WiFi in the ALEAPP Wi-Fi Profiles report but ALEAPP didn’t give me anything specifically relating to locations so I moved on from that.

In the video announcing the Week 3 question, Jessica Hyde specifically called out a presentation she gave with Christopher Vance comparing similar artefacts on Android and iOS devices, hinting (quite strongly) that it might be helpful. The location data provided by Google Takeout sounds like exactly what we need here, but we don’t have Google Takeout data, so on to the next idea.

The next thing to stand out was the device media store – the photos and videos captured by the device camera – which can (but don’t always) contain location data in their EXIF metadata. I started with the default camera storage directory…

/data/media/0/DCIM/Camera

…and extracted a total of 53 images and videos using FTK Imager.

Switching to my SIFT virtual machine, I checked the coordinates in the EXIF data: there were some nice photos taken in Norway, and a few around New York. There was nothing obviously related to cargo, but one photo showed a truck on what seems to be a US highway, with part of a signpost in the background. Trucks carry cargo, signposts point to exits. Curious.

MVIMG_20200305_145544.jpg

The EXIF data for this photo contains GPS data:

GPS Position: 42 deg 42' 39.97" N, 73 deg 49' 26.94" W

Massaging that to satisfy Google Maps, we can see that this photo was taken on Highway 87 north of Albany, New York.

Dropping to Street View looks promising.

There we go! We have a photo of a cargo truck passing exit 2E. Week 3 done…

…or not.

One attempt down, two attempts left. Let’s think again.

The next artefact examined in the iOS/Android comparison presentation was Android Motion Photos. These are equivalent to iOS Live Photos and, in addition to taking a photo, record a video of roughly 1.5 seconds on either side of the photo capture.

Taking a closer look at the name of the truck photo, I noticed it is not IMG… but rather MVIMG… so we are indeed dealing with Motion Photos. This is the exact scenario Jessica Hyde sets out. We may not be able to see location details in the still image, but the video might show us something interesting. So how do we get to the video?

I found a great blog post outlining how Motion Photo files are structured, and put together a quick Bash script to extract the videos from our 8 Motion Photo files:

#!/bin/bash
# Extract the trailing MP4 video from a Google Motion Photo (MVIMG_*.jpg)

INPUT_FILENAME="$1"
FILESIZE=$(stat -c%s "$INPUT_FILENAME")

# MicroVideoOffset is the video's distance back from the end of the file
MICRO_VIDEO_OFFSET=$(exiftool -MicroVideoOffset -s -s -s "$INPUT_FILENAME")
OUTPUT_FILENAME=${INPUT_FILENAME%.*}

# skip over the still image and write the embedded video out to its own file
dd if="$INPUT_FILENAME" of="$OUTPUT_FILENAME.mp4" bs=$(($FILESIZE-$MICRO_VIDEO_OFFSET)) skip="1"

Essentially, we use exiftool to determine the Micro Video Offset value and subtract that from the filesize to find the start of the video data, then use the dd utility to skip the photo and write the video data to a new file. Running the script over each of the 8 Motion Photo files results in 8 short videos showing the second or so before and after each photo was captured.
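With the script saved as, say, extract_video.sh (my name for it), a simple loop handles all eight files at once:

for f in MVIMG_*.jpg; do ./extract_video.sh "$f"; done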

I watched each video frame-by-frame using VLC and soon found what I was looking for in MVIMG_20200307_130326.jpg.

MVIMG_20200307_130326.jpg

This photo was taken on the road approaching Oslo Airport, and moving through the embedded video frame-by-frame we can see a road sign directing cargo traffic via exit E (or maybe F) 16.

I could have used Google Maps and Street View as before to confirm the answer, but I had two attempts left so I tried E16 first, which was accepted. Week 3 done!

Flag

E16

Magnet Weekly CTF – Week 2 – PIP Install

Magnet Forensics have announced a weekly CTF running from October 2020. A new challenge will be released each week on Monday, and the first few are based on an Android filesystem dump. You can find my other Magnet Weekly CTF write-ups here.

MD5: 3bb6abb3bf6d09e3e65d20efc7ec23b1
SHA1: 10cc6d43edae77e7a85b77b46a294fc8a05e731d

Week 1 was pretty straightforward. On to Week 2!

PIP Install (30 points)

What domain was most recently viewed via an app that has picture-in-picture capability?

In the last challenge I didn’t need to do any analysis or parsing of the data, simply read the timestamp of a particular file using FTK Imager. This time I needed to dig a little deeper and used Alexis Brignoni’s ALEAPP to parse the Android filesystem dump.

I have previously used iLEAPP to perform analysis of Apple iOS dumps; ALEAPP – the Android Logs Events And Protobuf Parser – works in much the same way, but for Android data. ALEAPP can process the dump directly from the TAR file. I simply started the GUI, set the input and output, and clicked Process.
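For reference, launching the ALEAPP GUI is a one-liner from the cloned repository; the entry point name (aleappGUI.py) is an assumption based on the current repo:

python3 aleappGUI.py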

A few seconds later I was presented with a nice HTML report of the analysis.

Given that the question asks about a domain being accessed, I guessed that the Chrome history would be a good place to start. I also found an article containing a list of Android applications which support the picture-in-picture feature – Chrome is listed. Another good sign.

Navigating to the Chrome History report and sorting by the most recent entry, we find the answer to the Week 2 question.

Flag

malliesae.com

Memlabs Memory Forensics Challenges – Lab 4 Write-up

Memlabs is a set of six CTF-style memory forensics challenges released in January 2020 by @_abhiramkumar and Team bi0s. This write-up covers Lab 4 – Obsession. You can find the rest of my Memlabs write-ups here.

As usual I started by calculating hashes for the image…

MD5: d2bc2f671bcc9281de5f73993de04df3
SHA1: bf96e3f55a9d645cb50a0ccf3eed6c02ed37c4df

…and running the Volatility imageinfo plugin to determine which profile to use for the rest of the analysis.

vol.py -f MemoryDump_Lab4.raw imageinfo

Having chosen Win7SP1x64 from the suggested profiles, I next checked the running processes with the pstree plugin.

vol.py -f MemoryDump_Lab4.raw --profile=Win7SP1x64 pstree

I briefly investigated the StikyNot.exe process but found nothing of interest. Let’s try the filescan plugin instead. I directed the output to a file, and used grep to filter the files in user profile directories, guessing that anything “very important” to the user would be stored there.

vol.py -f MemoryDump_Lab4.raw --profile=Win7SP1x64 filescan > filescan.txt
grep '\\Users\\' filescan.txt

Indeed, examining the filtered list we see a reference to Important.txt in a Desktop directory. My first approach was simply to run the dumpfiles plugin to recover the file from the memory image…

vol.py -f MemoryDump_Lab4.raw --profile=Win7SP1x64 dumpfiles -Q 0x000000003fc398d0 -D . -n

… but in this case no output was created.

We can’t extract the file directly but there is another place the data might be stored. On an NTFS partition, small files (up to a few hundred bytes) may be stored as resident in the $DATA attribute in the Master File Table (MFT). We can use the mftparser plugin to, well, parse the MFT, and use grep again to filter the relevant entry. The -C 20 argument in my grep command instructs grep to return the 20 lines above and below the matching line.
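Reconstructing my commands, they would have looked something like this (the exact grep pattern is a guess, but Important.txt is the obvious string to search for):

vol.py -f MemoryDump_Lab4.raw --profile=Win7SP1x64 mftparser > mftparser.txt
grep -C 20 'Important.txt' mftparser.txt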

Looking at the $DATA attribute, we can see something that looks like the flag, but broken up with whitespace characters; with CyberChef we can easily convert the raw bytes to text and then remove these…

… leaving us with our flag to complete the challenge.

inctf{1_is_n0t_EQu4l_7o_2_bUt_th1s_d0s3nt_m4ke_s3ns3}

Magnet Weekly CTF – Week 1 – Mapping the Digits

Magnet Forensics have announced a weekly CTF running from October 2020. A new challenge will be released each week on Monday, and the first few are based on an Android filesystem dump.

MD5: 3bb6abb3bf6d09e3e65d20efc7ec23b1
SHA1: 10cc6d43edae77e7a85b77b46a294fc8a05e731d

Let’s go!

Mapping the Digits (20 points)

What time was the file that maps names to IP’s recently accessed?

(Please answer in this format in UTC: mm/dd/yyyy HH:MM:SS)

A pretty simple one to start with. On Linux-based systems (like Android) hostnames are mapped to IP addresses in the /etc/hosts file; find that file in the TAR archive and check the timestamp.

I opened the TAR archive up using FTK Imager, and navigated to the directory containing the hosts file:

/data/adb/modules/hosts/system/etc

There is only one timestamp, but it is worth noting that I have FTK Imager set to display dates in the common European format (day/month/year):

05/03/2020 05:50:18

So swap the day and month values to match the US format required by the question, and we have our first answer.
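If you don’t trust yourself to swap the fields by hand, GNU date can reformat the timestamp for us (treating the input as UTC):

date -u -d '2020-03-05 05:50:18' '+%m/%d/%Y %H:%M:%S'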

Flag

03/05/2020 05:50:18

As an aside, confusion around date and timestamps is exactly why we have ISO 8601.

DFA/CCSC Spring 2020 CTF – Apple iOS Forensics with iLEAPP

In May 2020 the Champlain College Digital Forensics Association, in collaboration with the Champlain Cyber Security Club, released their Spring 2020 DFIR CTF including Windows, MacOS, and Apple iOS images, as well as network traffic analysis, OSINT, and reversing challenges. I published my network traffic analysis write-ups earlier in the year, and after spending some more time on memory analysis, I decided to take a look at something different.

Over the last few months I have been seeing more and more chat about iLEAPP – the iOS Logs, Events, and Plists Parser – from Alexis Brignoni (with plugins and artefact definitions submitted by many others!), and wanted to set some time aside to play with it myself. I didn’t have any jailbroken iOS devices or fancy iOS acquisition kit to hand, but I remembered the iOS section of the Champlain College CTF and realised I could use that instead.

This post is nowhere near a full write-up of the iOS analysis section. Instead, think of it more as an illustration of what can be accomplished with iLEAPP and a spare hour on a Sunday afternoon! I used the CTF questions as a guide for the analysis; however, if I couldn’t see a way to get the answer directly from iLEAPP I simply moved on to the next one. With that in mind, let’s go!

Installing iLEAPP (on Ubuntu 20.04 LTS)

The first thing to note is that iLEAPP requires at least Python 3.7.4 in order to run.

As far as possible, I like to use free or open-source tools when writing up CTF challenges so that they are accessible for people without access to commercial tools. Purely out of habit I have been using an old copy of the SANS SIFT virtual machine based on Ubuntu 16.04 for most of my write-ups to this point. I recently took the time to set up a copy of the updated SIFT VM based on Ubuntu 18.04, but after fighting with multiple versions of Python for a while, I gave up and built a new virtual machine with a default installation of Ubuntu 20.04, which (in my case, anyway) shipped with Python 3.8 and made the iLEAPP installation much simpler!

The iLEAPP readme contains easy-to-follow installation instructions but does assume that Python pip is already present. I had to install pip for Python 3 using the Ubuntu apt package manager, along with the git client, again using apt.

sudo apt update && sudo apt install git python3-pip

Once the correct version of pip is installed the remaining steps are quite straightforward. First, clone the repository from Github:

git clone https://github.com/abrignoni/iLEAPP.git

We can handle almost all of the requirements with pip:

cd iLEAPP
pip install -r requirements.txt

As I am installing on Linux I also need the python3-tk package to support the GUI; apt can take care of that for us:

sudo apt install python3-tk

Now that we have iLEAPP installed we can get on with the analysis!

Processing with iLEAPP

The iOS dump used in the CTF was distributed as a TAR archive:

MD5: 09683cf41534735ab1c24dbb898a3a4a
SHA1: 1211ed593e5111e45d04077ede3d0cd9728349b0

iLEAPP can process the dump directly from the TAR file, or as a directory structure after manual extraction. In this case I chose to process the TAR file directly, using all of the available modules (selected by default).
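For reference, the GUI is started from the cloned repository directory; assuming the entry point is named ileappGUI.py, as it is in the current repository:

python3 ileappGUI.py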

Processing took 1 minute 18 seconds to complete on my rather underpowered virtual machine…

…and produced a nice HTML report of the findings.

As my focus was on playing with iLEAPP rather than fully completing the CTF, I have only answered six of the 19 questions in the iOS analysis section:

01 – I’m just trying to do my iObeSt (50 points)

What ios version is this device on?

The first question was simple – the answer is right there in the Device Details tab in the Case Information section of the report.

One thing that stood out was that the iOS version was 9.3.5, which is a little outdated in 2020. According to the documentation, the current version of iLEAPP (version 1.6) supports iOS versions 11, 12, and 13. I’m not sure if this made any difference to the analysis that iLEAPP was able to perform.

flag<9.3.5>

03 – The Man, the Myth, the Legend (50 points)

Who is using the iPad? Include their first and last name.

Digging through the iLEAPP report I found the Calendar Identity Report which listed the display name and email address of our old friend Tim Apple.

flag<Tim Apple>

07 – High Fi! (150 points)

What is the name of the WiFi network this device connected to

Again, iLEAPP makes this very simple – the report extracts a list of known WiFi networks. In this case there is only one entry, an SSID named black lab.

flag<black lab>

12 – Let me in LET ME INNNNNN (200 points)

What app was used to jailbreak this device?

iLEAPP also gives us a nice list of installed applications:

The com.saurik.Cydia package immediately stood out as one of the alternative App Stores commonly installed on jailbroken devices, but after some searching I found that the com.VN337S8MSJ.supplies.wall.phoenix package corresponds to the Phoenix Jailbreak for iOS 9.3.5 and 9.3.6.

flag<phoenix>

18 – Interstellar Docking Scene Music (300 points)

Name one of the apps that were on this device’s dock. Provide them in bundle format: com.provider.appname

Going by the high number of CTF points awarded for this question one might expect the answer to be held in some obscure plist, but iLEAPP makes this easy!

The Apps per Screen report plots the position of each application icon in an easy-to-follow table, including those in the dock (the icons in the bottom bar). We have four applications to choose from; I submitted the first one, which was accepted as a correct flag.

flag<com.apple.MobileSMS>

19 – Remind me later (300 points)

A reminder was made to get something, what was it?

Again, another 300-point question easily answered with iLEAPP.

The Calendar Items report lists two reminders; one to “go to bed before 5AM” and another to “get milk”. Easy!

flag<milk>

Summary

Again, this is not a complete write-up of the iOS analysis section of the Champlain College Spring 2020 CTF, and might not even be a complete representation of the current state of iLEAPP! Even so, I think this has shown how powerful a tool iLEAPP is for performing quick analysis of Apple iOS devices, and iLEAPP is regularly updated with new analysis modules so I fully expect that its capabilities will improve further in the future.

While the CTF has now formally closed, meaning that no points will be assigned, the challenge site is still available and functioning as of October 2020 if you would like to download the images and have a go yourself. I was able to make quick progress with iLEAPP alone; the most recent version of the SANS Smartphone Forensics poster looks like a great help too for anything that iLEAPP might miss!

Memlabs Memory Forensics Challenges – Lab 3 Write-up

Memlabs is a set of six CTF-style memory forensics challenges released in January 2020 by @_abhiramkumar and Team bi0s. This write-up covers Lab 3 – The Evil’s Den. You can find the rest of my Memlabs write-ups here.

Before starting with the analysis I calculated the MD5 and SHA1 hashes of the memory dump:

MD5: ce4e7adc4efbf719888d2c87256d1da3
SHA1: b70966fa50a5c5a9dc00c26c33a9096aee20f624

And determined the correct profile for Volatility:

vol.py -f MemoryDump_Lab3.raw imageinfo

The imageinfo plugin suggests a few profiles; I used Win7SP1x86_23418 to complete the analysis. To begin with, let’s check the running processes with pstree:

vol.py -f MemoryDump_Lab3.raw --profile=Win7SP1x86_23418 pstree

There are two notepad.exe processes running (PIDs 3736 & 3432) but nothing else that immediately jumps out. Using the cmdline plugin we may be able to see which files were opened.

vol.py -f MemoryDump_Lab3.raw --profile=Win7SP1x86_23418 cmdline -p 3736,3432

That’s a bit more interesting now! Notepad was used to open two files – evilscript.py and vip.txt – let’s try to extract them:

vol.py -f MemoryDump_Lab3.raw --profile=Win7SP1x86_23418 filescan > filescan.txt
grep -E '\\Desktop\\evilscript.py|\\Desktop\\vip.txt' filescan.txt
vol.py -f MemoryDump_Lab3.raw --profile=Win7SP1x86_23418 dumpfiles -Q 0x000000003de1b5f0 -D . -n
vol.py -f MemoryDump_Lab3.raw --profile=Win7SP1x86_23418 dumpfiles -Q 0x000000003e727e50 -D . -n

I used the filescan plugin to list all of the file objects (and their offsets) within the memory image, and redirected the output to a file. Using grep we get the offsets for the files (and also notice that evilscript.py is actually evilscript.py.py), then the dumpfiles plugin will extract the files to disk for analysis.

evilscript.py

import sys
import string

def xor(s):
  a = ''.join(chr(ord(i)^3) for i in s)
  return a

def encoder(x):
  return x.encode("base64")

if __name__ == "__main__":
  f = open("C:\\Users\\hello\\Desktop\\vip.txt", "w")
  arr = sys.argv[1]
  arr = encoder(xor(arr))
  f.write(arr)
  f.close()

vip.txt

am1gd2V4M20wXGs3b2U=

So, we have a Python script that works as follows:

  1. Take a string as a command-line argument
  2. Break that string down to its individual characters, and XOR each character with 3
  3. Encode the XOR’d string as Base64
  4. Write the Base64 encoded string to the file vip.txt

We could write another Python script to reverse these steps, but CyberChef is much easier!

Reversing the steps from evilscript.py, CyberChef spits out the first part of our flag:

inctf{0n3_h4lf
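For completeness, the reversal is simple enough in the shell too. A minimal sketch, assuming the dumped file has been renamed vip.txt, that base64-decodes the contents and XORs every byte with 3:

#!/bin/bash
# reverse evilscript.py: base64-decode vip.txt, then XOR each byte with 3
base64 -d vip.txt | xxd -p -c1 | while read -r h; do
  printf '%b' "\\x$(printf '%02x' $((0x$h ^ 3)))"
done
echo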

Now we have the first part of the flag, but we still need to find the second. None of the other running processes look particularly interesting, but the challenge description specifically mentions that steghide is required.

Steghide is a steganography program that is able to hide data in various kinds of image and audio files.

Using grep and my saved filescan output I filtered for common image file extensions, and spotted the following after searching for JPEG files:

grep -E '.jpg$|.jpeg$' filescan.txt | head
vol.py -f MemoryDump_Lab3.raw --profile=Win7SP1x86_23418 dumpfiles -Q 0x0000000004f34148 -D . -n

After extracting suspision1.jpeg:

We can feed it into steghide to check for any hidden data. Presumably the first part of the flag is the password…

steghide extract -sf suspision1.jpeg
cat secret\ text

And there we go, the second part of our flag.

_1s_n0t_3n0ugh}

Put them together, and we have completed the challenge!

inctf{0n3_h4lf_1s_n0t_3n0ugh}