Linux Memory Forensics

Michael Ford

Forensic analysis is the investigation of an event that involves looking for evidence and interpreting that evidence. In the case of a computer crime in which a system was compromised, the investigator needs to find out who, what, where, when, how, and why.

There are three main areas from which evidence of an intrusion can be gathered. The first and most common is the hard drive. A file system on a hard drive contains the least volatile data: whether the investigator's strategy involves shutting the system down or simply removing the computer's power, the file system will still be there. The response strategy dictates what changes are made to it. If the system is shut down cleanly, or if the investigator issues commands on it to collect information, the file system may be changed, but in the end it's still there. Many tools, such as The Sleuth Kit or The Coroner's Toolkit (TCT), can then be used to analyze the file system.

The second, and most volatile, of the three areas is network traffic. Once a packet has reached its destination, it's no longer on the wire, and it exists only briefly in memory on the receiving system. Regional and national laws dictate the legality of collecting network traffic, but many tools exist to do so. These include intrusion detection systems (IDS), such as Snort, and network monitoring tools, such as tcpdump or Ethereal.

The third area to investigate is system memory, which I will focus on in this article. The system memory contains everything that is currently happening on the system. If the system is shut down or turned off, this information is lost. We can run many commands before the system is taken down, however, to capture some of this information. For example, we can use date to find the current system time and date. We can use netstat to find a list of the current connections to and from the system and ps to find a list of processes running on the system. But there isn't a command that will show, for example, the changes to files that haven't yet been written to disk. We also cannot be sure that the netstat, date, and ps commands have not been changed by an attacker. An attacker may have modified these and other commands to keep investigators from seeing information about his connections, or he may have replaced them with booby-trapped versions that would erase all of his files, among other things.

Also, running these commands changes their access times, which are important when creating a timeline of system activity, and every command that is typed changes the system state. It would be better to save the system memory and analyze it later to find this information. When an attacker breaks into a server and installs a rootkit, many tools can be used to analyze the file system of the victim server and find out what the attacker did. Unfortunately, there are far fewer tools and published methods for capturing the memory of an attacked machine and analyzing that image.

One way to save an image of the system memory is to use the crash dump facility. Most Unix operating systems provide a facility for saving the system memory for later analysis. This is useful in cases where a kernel bug causes a crash, and the bug can later be analyzed and fixed. We can use the same facility to analyze the memory after an attack.

I will demonstrate crash dump analysis by working through three problems. The analysis requires familiarity with the C programming language and some knowledge of Linux kernel structures. To save space, I will omit command output that is irrelevant to the demonstration and replace unneeded structure elements with an ellipsis (...).

The first problem is that, on the victim system, ps does not show all of the processes being run. Originally, ps returned:

[root@host]# ps -f
UID        PID  PPID  C STIME TTY          TIME CMD
root      1211  1209  0 18:15 pts/0    00:00:00 -bash
root      1266  1211  0 18:22 pts/0    00:00:00 ./malware 41000
root      1267  1211  0 18:22 pts/0    00:00:00 ./invisible 41001
root      1268  1211  0 18:22 pts/0    00:00:00 ./removed 41002
root      1270  1211  0 18:23 pts/0    00:00:00 ps -f
Unfortunately, a trojan ps command was installed to hide any process called "malware".

A second problem is that the "adore" Loadable Kernel Module (LKM) was installed. Adore can be used to hide files or processes on the system. Here it was used to make the "invisible" process invisible and to remove the "removed" process from the ps listing forever. So now, ps returns:

[root@host]# ps -f
UID        PID  PPID  C STIME TTY          TIME CMD
root      1211  1209  0 18:15 pts/0    00:00:00 -bash
root      1281  1211  0 18:24 pts/0    00:00:00 ps -f
The third problem is that the netstat command has been replaced with a trojan version, and the new one doesn't show all the connections or listening ports on the system. Originally, it showed:

[root@host]# netstat -an | egrep '(^[AP]|ESTAB|LISTEN)'
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address    Foreign Address   State
tcp        0      0 0.0.0.0:41000    0.0.0.0:*         LISTEN
tcp        0      0 0.0.0.0:41001    0.0.0.0:*         LISTEN
tcp        0      0 0.0.0.0:41002    0.0.0.0:*         LISTEN
tcp        0      0 0.0.0.0:22       0.0.0.0:*         LISTEN
tcp        0      0 127.0.0.1:25     0.0.0.0:*         LISTEN 
tcp        0      0 10.0.7.128:22    10.0.7.1:32831    ESTABLISHED
and now it only shows:

[root@host]# netstat -an | egrep '(^[AP]|ESTAB|LISTEN)'
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address    Foreign Address   State
tcp        0      0 0.0.0.0:22       0.0.0.0:*         LISTEN
tcp        0      0 127.0.0.1:25     0.0.0.0:*         LISTEN
Creating Crash Dumps with LKCD

I will be using the Linux Kernel Crash Dump (LKCD) package with a Linux 2.4.20 kernel on an x86 system to generate a crash dump. The package consists of three parts: a patch to the kernel, a configuration utility along with patches to the init scripts, and the lcrash program.

To begin, download the kernel patch and apply it to the kernel source tree to add the crash dump code:

# cd /usr/src
# wget http://lkcd.sourceforge.net/download/4.1.1/patches/lkcd-2.4.20.diff
# cd linux-2.4.20
# patch -p1 < ../lkcd-2.4.20.diff
In the kernel configuration file, the CONFIG_MAGIC_SYSRQ option and the new CONFIG_DUMP option must be enabled for the crash dump to work. CONFIG_DUMP_COMPRESS_RLE and CONFIG_DUMP_COMPRESS_GZIP are also worth enabling so that the dump is compressed and uses less disk space.
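The relevant .config entries then look something like this (a sketch showing only the options named above; the rest of the configuration is whatever your system needs):

CONFIG_MAGIC_SYSRQ=y
CONFIG_DUMP=y
CONFIG_DUMP_COMPRESS_RLE=y
CONFIG_DUMP_COMPRESS_GZIP=y
After building the new kernel and rebooting with it, get the LKCD utilities and install them: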

# wget http://lkcd.sourceforge.net/download/4.1.1/lkcdutils/lkcdutils-4.1-1.i386.rpm
# rpm -i lkcdutils-4.1-1.i386.rpm
You will also want to review the configuration in the "/etc/sysconfig/dump" file to match your system.

Changes are also required to the initialization script specified on the "si" line in /etc/inittab. On Red Hat Linux, this file is "/etc/rc.d/rc.sysinit". The file should be changed to run:

/sbin/lkcd config
/sbin/lkcd save
before the swap partitions are activated to keep the dump file from becoming corrupted as the system is booted.

To save a dump file, press the three-key combination Alt-SysRq-C, and the system will save a dump to disk, usually to the swap partition pointed to by the /dev/vmdump symbolic link. In some cases, the three-key combination may not work. In that case, you can put the console into sysrq mode by issuing the following command:

[root@host]# echo 1 > /proc/sys/kernel/sysrq
Then on the console, hit the "c" key, which will cause a crash and dump. More information on installing and using LKCD can be found on the LKCD Web site listed in the References section of this article.

When the system is rebooted, the crash dump will be saved to a file on disk under the directory specified in the configuration file. During a forensic analysis, this is sub-optimal because it would cause major changes to the evidence. It's better to restore the dump from an image of the evidence disk, rather than from the evidence disk itself. One way to do this is to make an image copy of the swap partition containing the dump and then save the dump from this copy. If we write the copy to an unused partition on the forensics workstation, we can save the dump to a file with lcrash, which is the dump analysis tool that comes with the LKCD package. If we don't write the copy to a partition or other block device, we must modify the lcrash program to be able to read from regular files. In either case, the command to save the dump is:

# lcrash -P -s dumpdev dumpdir
where dumpdev is the partition or file containing the raw dump, and dumpdir is the directory in which the dump should be saved. The dump will be saved as a file named "dump.0".
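For example, suppose the victim's swap partition was imaged to a file during disk acquisition and the forensics workstation has an unused partition large enough to hold it (the image name and the partition /dev/hdb5 below are hypothetical). Restoring and saving the dump might then look like this:

# dd if=swap-hda2.dd of=/dev/hdb5 bs=4096
# lcrash -P -s /dev/hdb5 /evidence
This writes the dump to /evidence/dump.0, ready for analysis.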

Analysis with Crash

The "crash" utility is another great tool for analyzing crash dumps, and it understands the dump format used by LKCD. It uses the GNU debugger (gdb), which is provided in the package, for most of its work. I'm using crash version 3.7-5 to analyze the LKCD dump that I got from the victim system. In the common case, the crash command takes two arguments. One is the kernel, which contains debugging symbols, and the other is the dump file generated by the crash dump facility. The debug kernel can be generated by adding the "-g" flag to the "CFLAGS" define in the top-level Makefile when building the Linux kernel.

Crash will use this kernel to find the symbols it needs when analyzing a dump file. Make sure that the kernel on the victim system and the debug kernel are the same version and use the same source code. Otherwise, crash will complain that the versions don't match:

# crash vmlinux.dbg dump.0
crash 3.7-5
Copyright (C) 2002, 2003  Red Hat, Inc.
Copyright (C) 1998-2003  Hewlett-Packard Co
Copyright (C) 1999, 2002  Silicon Graphics, Inc.
Copyright (C) 1999, 2000, 2001, 2002  Mission Critical Linux, Inc.
This program is free software, covered by the GNU General Public 
License, and you are welcome to change it and/or distribute copies 
of it under certain conditions.  Enter "help copying" to see the 
conditions.
This program has absolutely no warranty.  Enter "help warranty" 
for details.
 
GNU gdb Red Hat Linux (5.3post-0.20021129.36rh)
Copyright 2003 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, 
and you are welcome to change it and/or distribute copies of it 
under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" 
for details.
This GDB was configured as "i686-pc-linux-gnu"...

      KERNEL: /evidence/vmlinux         
    DUMPFILE: dump.0
        CPUS: 1
        DATE: Sun Nov  9 18:47:26 2003
      UPTIME: 00:33:39
LOAD AVERAGE: 0.00, 0.01, 0.00
       TASKS: 36
    NODENAME: host
     RELEASE: 2.4.20
     VERSION: #19 SMP Mon Nov 3 21:47:26 EST 2003
     MACHINE: i686  (1799 Mhz)
      MEMORY: 160 MB
       PANIC: "sysrq"
         PID: 0
     COMMAND: "swapper"
        TASK: c037a000  
         CPU: 0
       STATE: TASK_RUNNING (PANIC)
When crash starts, you'll notice that it provides some interesting information about the system. Most notable for a forensic investigation is the current date "Sun Nov 9 18:47:26 2003", the uptime "00:33:39", and the release "2.4.20". The first two pieces of information will be useful when constructing an event timeline, and the release will be useful for finding the correct version of the kernel source code. The panic message "sysrq" also validates that the system dumped because the investigator instructed it to do so, as opposed to crashing for some other reason. At the crash prompt, you can use the "help" command to get information about the other commands available and their usage.

As a first step, we can use the ps command in crash to find a list of processes that were running on the system:

crash> ps
   PID    PPID  CPU   TASK    ST  %MEM     VSZ    RSS  COMM
>     0      0   0  c037a000  RU   0.0       0      0  [swapper]
   1209   1112   0  c87ee000  IN   1.3    6796   2116  sshd
   1211   1209   0  c86d6000  IN   0.9    4312   1412  bash
   1266   1211   0  c8580000  IN   0.2    1360    248  malware
   1267   1211   0  c8550000  IN   0.2    1360    248  invisible
   1312   1211   0  c83ee000  IN   0.9    4668   1460  tcsh
Note that crash found two of the hidden processes -- process 1266, which was obscured by the trojan ps, and process 1267, which was hidden by the adore LKM. Adore replaces the "getdents" system call, so when ps looks in "/proc" for the list of processes, it doesn't find the invisible ones. Crash doesn't use the getdents system call; it reads the memory directly. Process 1268, which adore removed from the task list entirely, will be a little trickier to find.

In crash, we can enter a kernel symbol name, and the symbol's value will be returned. In the case of a structure, crash will report the values of each element of the structure. The Linux kernel keeps hash lists of all the TCP sockets. The two lists of interest to us are the "established" list, named "__tcp_ehash", and the "listening" list, named "__tcp_listening_hash". These are both members of the "tcp_hashinfo" struct in the Linux 2.4 kernel. We can use these to find any TCP sockets that netstat didn't report:

crash> tcp_hashinfo
tcp_hashinfo = $4 = {
  __tcp_ehash = 0xc9f40000,
  __tcp_bhash = 0xc9f20000,
  __tcp_bhash_size = 16384,
  __tcp_ehash_size = 8192,
  __tcp_listening_hash = {0xc9e39500, 0xc9e39c00, 0x0, 0x0, 0x0, 
    0x0, 0x0, 0x0, 0xc86fcb00, 0xc86fc780, 0xc9e67180, 0x0, 0x0, 
    0x0, 0x0, 0xc9e38400, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xc9e39180, 
    0x0, 0x0, 0xc95e0480, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0},
  ...
}
We can use the p command to print out the value of an expression and also use the * character to de-reference a pointer. When looking through each of the buckets in the listening hash, we find something interesting when we reach bucket 10:

crash> p *tcp_hashinfo.__tcp_listening_hash[10]
$13 = {
  ...
  num = 41002,
  ...
  sport = 10912,
  ...
  sleep = 0xc88c06bc,
  ...
}
This is a socket listening on TCP port 41002. Note that the source port ("sport") value of 10912 is the network byte order representation of 41002: the field is stored big-endian, so on this little-endian x86 system it reads back with its two bytes swapped (10912 is 0x2AA0; swapping the bytes gives 0xA02A, which is 41002). We can find out which process is listening on this port by looking at the task_list pointed to by "sleep":

crash> p *tcp_hashinfo.__tcp_listening_hash[10].sleep
$16 = {
  lock = {
    lock = 1
  },
  task_list = {
    next = 0xc8519e80,
    prev = 0xc8519e80
  }
}
"sleep.task_list" contains a pointer into the stack of the process that is waiting on this socket. We know by examining the kernel source that each process is represented in Linux by a task structure that is at the bottom of two pages (8192 bytes) of memory. By "rounding down" this address by clearing the low-order 13 bits (2^13 = 8192), we get the address of the task structure. We can then have crash cast this address to a specific type and de-reference it by specifying the type, followed by the address. Also, simply specifying the type will show a list of elements in a struct, but specifying the type followed by the "-o" option will also show the offset of each element:

crash> task_struct c8518000
struct task_struct {
  state = 1,
  ...
  next_task = 0xc8548000,
  prev_task = 0xc8550000,
  ...
  pid = 1268,
  ...
  uid = 0,
  ...
  comm = "removed\000\000\000\000\000\000\000\000"
  ...
}
The state value of 1 means this task is in the TASK_INTERRUPTIBLE state. Note that the pointers to the previous and next tasks in the task list have not been cleared. These could point to other, possibly hidden, processes worth examining; in this case, one of them is process 1267, which we already found, and the other has already exited, freeing its memory for reuse. The pid is listed as 1268, which corresponds to the process missing from our earlier ps listing, and the name of the process is "removed". Note that the entire comm buffer is printed, even though it contains a NUL-terminated string, which is why we see the escaped NUL characters. We've now located all of the missing processes.
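As a quick sanity check, the "rounding down" step can be reproduced with shell arithmetic (bash syntax; the address is the task_list pointer we just followed):

% printf '%x\n' $(( 0xc8519e80 & ~0x1fff ))
c8518000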

After examining all of the entries in the listening hash, we can go through the established hash. The kernel source shows that while the listening hash has a fixed length, the length of the established hash must be looked up. Conveniently, that value is stored in the tcp_hashinfo struct:

crash> p tcp_hashinfo->__tcp_ehash_size
$133 = 8192
It would take quite a long time to go through 8192 buckets by hand, so we can use crash's gdb backend to automate the process with a simple loop that prints only the entries that are in use:

crash> gdb set $i=0
crash> gdb while $i < 8192
 >if tcp_hashinfo.__tcp_ehash[$i].chain != 0
  >p $i
  >p tcp_hashinfo.__tcp_ehash[$i]
  >end
 >set $i=$i+1
 >end
This loop uses $i as a counter and increments it from 0 to 8191. Whenever a chain of entries is attached to a hash bucket, the loop prints the bucket number and the bucket itself (its lock and the head of its chain). In this case, the following is printed:

$19 = 296
$20 = {
  lock = {
    lock = 16777216
  },
  chain = 0xc9e67500
}
Thus, bucket 296 has something in it. Let's examine it:

crash> p *tcp_hashinfo.__tcp_ehash[296].chain
$5 = {
  daddr = 17235978,  
  rcv_saddr = 2147942410,
  dport = 16256,   
  num = 22,
  ...
  sport = 5632,    
  ...
}
The four values of interest here are the destination address of this connection ("daddr"), the source address ("rcv_saddr"), the destination port ("dport"), and the source port ("sport"). These numbers don't look meaningful at first because they are stored in network byte order, and the addresses are printed in decimal, which is difficult to read. We can use crash's built-in net command to convert the addresses to something more familiar:

crash> net -n 17235978
10.0.7.1
crash> net -n 2147942410
10.0.7.128
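As an aside, the same conversion can be done without crash (useful with the mempeek approach described later): Perl's pack "V" lays the value back out in the little-endian byte order it had in memory, and the %vd format prints those bytes as a dotted quad:

% perl -e 'printf( "%vd\n", pack( "V", 17235978 ))'
10.0.7.1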
We can then use a tool, such as the calculator "bc", to convert the port numbers from big-endian to little-endian (a one-line example follows the next listing). The destination port works out to 32831 and the source port to 22, so we can assume this is the server end of an ssh connection from the host at 10.0.7.1. Examining the associated socket structure shows that the connection is in the connected, or established, state:

crash> p *tcp_hashinfo.__tcp_ehash[296].chain.socket
$19 = {
  state = SS_CONNECTED,
  ...
}
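The bc conversion mentioned above can be done as a one-liner; here 3F80 is the hexadecimal form of the dport value 16256 from the hash entry (the same expression with 1600 in place of 3F80 yields 22 for sport):

% echo 'ibase=16; 3F80 % 100 * 100 + 3F80 / 100' | bc
32831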
We can also find the owner of the socket by looking at the sleep structure:

crash> p *tcp_hashinfo.__tcp_ehash[296].chain.sleep
$65 = {
  lock = {
    lock = 1
  },
  task_list = {
    next = 0xc840c02c,
    prev = 0xc840c02c
  }
}
It contains a task_list, which is a pointer to the task_list element inside a wait_queue_t. By subtracting the offset of task_list within wait_queue_t from this address, we get the address of the full wait_queue_t. The wait_queue_t declaration in the kernel headers shows that task_list is preceded by the flags field and the task pointer, each four bytes on i386, so the offset is eight, and 0xc840c02c - 8 = 0xc840c024:

crash> p *(wait_queue_t *)0xc840c024
$68 = {
  flags = 0,
  task = 0xc87ee000,
  task_list = {
    next = 0xc879d8c0,
    prev = 0xc879d8c0
  }
}
Using the task pointer from that structure (0xc87ee000), we can find more information about the process; for example, we see that its pid is 1209 and its name is "sshd":

crash> p ((struct task_struct *)0xc87ee000)->pid
$70 = 1209
crash> p ((struct task_struct *)0xc87ee000)->comm
$135 = "sshd\000og\000\000\000\000\000\000\000\000"
We can also list the sockets used by a specific process with the net command's -s option. Here we see that the two hidden processes, 1266 and 1267, are listening on ports 41000 and 41001, which were missing from the netstat output:

crash> net -s 1266
PID: 1266   TASK: c8580000  CPU: 0    COMMAND: "malware"
FD   SOCKET     SOCK    FAMILY:TYPE   SOURCE:PORT    DESTINATION:PORT
 3  c88c0ea0  c86fcb00  INET:STREAM   0.0.0.0:41000  0.0.0.0:0
crash> net -s 1267
PID: 1267   TASK: c8550000  CPU: 0    COMMAND: "invisible"
FD   SOCKET     SOCK    FAMILY:TYPE   SOURCE:PORT    DESTINATION:PORT
 3  c88c0aa0  c86fc780  INET:STREAM   0.0.0.0:41001  0.0.0.0:0
By iterating through all of the buckets in the established and listening hash lists, we've now recovered all of the network sockets missing from the netstat output.

Saving Memory with memget

There are times when the kernel has been built without any way to dump its memory to disk. In that case, the investigator can use memget. Memget reads the sparse file "/dev/kmem" and transfers the data over a TCP connection to the address and port specified on the command line for later analysis:

[root@host]# ./memget 10.0.7.1 1026
Saving memory...
Saved 45250 pages.
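On the forensic workstation (10.0.7.1 here), a listener must already be waiting on the chosen port when memget starts; a plain netcat listener that writes everything it receives to a file is one possibility (the file name is arbitrary):

% nc -l -p 1026 > kmem.img
The resulting file is the memory image that mempeek examines below.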
Using mempeek to Analyze Saved Memory

Once the investigator has transferred the memory image to a forensic workstation, he or she can use mempeek to examine that image. This is a very crude way of examining the memory, especially compared to the interface of crash, but if a dump facility isn't available, it's better than nothing. For example, we know that the system time is stored in the xtime variable in memory. We can find the address of this variable in the System.map file from the /boot directory:

% egrep xtime System.map
c03de4b0 B xtime
It's a good idea to have the kernel source handy during the investigation. We can see that xtime is a "struct timeval" containing the number of seconds and microseconds since January 1st, 1970. Using the peek command at the mempeek prompt returns the requested address and the 32-bit data at that address represented in hexadecimal, decimal, and as a string of four characters:

mempeek> peek c03de4b0
c03de4b0: 0x3faed015 (1068421141) " P.?"
We find that the decimal value of xtime.tv_sec is 1068421141. This isn't very useful by itself, but we can use a Perl expression to convert it to a time in the local timezone:

% perl -e 'printf( "%s\n", scalar localtime(1068421141))'
Sun Nov  9 18:39:01 2003
This matches up pretty closely with the time that crash reported on startup in the previous investigation. The reason for the difference is that memget was run several minutes before the crash dump was taken.

We can use mempeek to find the process hidden by adore. Adore sets two bits in the flags element of the task struct to mark a process as invisible; if no other flag bits are set, the value will be hexadecimal 0x03000000. To find the invisible process, we must first find the address of init_task_union, whose embedded task structure is the head of the kernel's process list. We can obtain this address from /boot/System.map on the victim system:

% egrep init_task_union System.map
c037a000 D init_task_union
We can then use mempeek to read the value at that address, as well as other values in the task_struct, by specifying hexadecimal offsets after the address. We'll look at flags, next_task, prev_task, and pid, which live at offsets 0x04, 0x48, 0x4c, and 0x7c, respectively. If we were using crash, we could obtain these offsets with the command task_struct -o; on a system that isn't set up to take a crash dump, there probably isn't a debug kernel either, so we have to determine the offsets from the kernel header files (a quicker way to double-check them is shown at the end of this section). "next_task" contains the address of the next process in the doubly linked process list, so we can use it to walk the list:

mempeek> peek c037a000 4 48 4c 7c
c037a000: 0x00000000 (0) "    "
c037a004: 0x00000000 (0) "    "
c037a048: 0xc9d3c000 (-908869632) " @SI"
c037a04c: 0xc83ee000 (-935403520) " '>H"
c037a07c: 0x00000000 (0) "    "
mempeek> peek c9d3c000 4 48 4c 7c
c9d3c000: 0x00000001 (1) "    "
c9d3c004: 0x00000100 (256) "    "
c9d3c048: 0xc9d30000 (-908918784) "  SI"
c9d3c04c: 0xc037a000 (-1070096384) "  7@"
c9d3c07c: 0x00000001 (1) "    "
...
mempeek> peek c8580000  4 48 4c 7c
c8580000: 0x00000001 (1) "    "
c8580004: 0x00000000 (0) "    "
c8580048: 0xc8550000 (-933953536) "  UH"
c858004c: 0xc86d6000 (-932356096) " 'mH"
c858007c: 0x000004f2 (1266) "r   "
mempeek> peek c8550000  4 48 4c 7c
c8550000: 0x00000001 (1) "    "
c8550004: 0x03000000 (50331648) "    "
c8550048: 0xc83ee000 (-935403520) " '>H"
c855004c: 0xc8580000 (-933756928) "  XH"
c855007c: 0x000004f3 (1267) "s   "
The last process has the flags value set to "0x03000000", which corresponds to adore's flags PF_INVISIBLE and PF_AUTH. So, we've now found process 1267, which was missing from the ps listing.
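Counting structure offsets by hand in the header files is error-prone. If a kernel built from the same source with -g is available on the forensics workstation (as in the crash section), gdb can report the offsets directly; a quick sketch:

% gdb -q vmlinux.dbg
(gdb) print/x &((struct task_struct *) 0)->flags
$1 = 0x4
(gdb) print/x &((struct task_struct *) 0)->next_task
$2 = 0x48
(gdb) print/x &((struct task_struct *) 0)->prev_task
$3 = 0x4c
(gdb) print/x &((struct task_struct *) 0)->pid
$4 = 0x7c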

Summary

On compromised systems, attackers sometimes hide in-memory information, such as running processes and network connections, from us. They may have replaced commands to hide this data, or they may have changed the behavior of the kernel so that it is hidden from any command. In this article, I covered two tools that can be used to look at kernel memory and showed how examining certain kernel structures uncovers hidden processes and network sockets. Crash can be used along with a debug kernel to analyze the memory in a dump file symbolically. In the absence of a crash dump facility, memget can be used to save the system memory, and mempeek can be used to analyze the memory by address; symbol addresses can be found in the /boot/System.map file on the victim system.

References

adore -- http://www.team-teso.net/releases/adore-0.42.tgz

The Coroner's Toolkit -- http://www.fish.com/tct/

crash -- http://freshmeat.net/projects/crash

ethereal -- http://www.ethereal.com/

LKCD -- http://lkcd.sourceforge.net/

memget/mempeek -- http://www.rndsoftware.com/

The Sleuth Kit -- http://www.sleuthkit.org/

snort -- http://www.snort.org/

tcpdump -- http://www.tcpdump.org/

Michael Ford began his career as a network and systems administrator 10 years ago while attending the Massachusetts Institute of Technology. He holds the CISSP and GCFA professional certifications and is the founder of RND Software, Inc., where he performs security research and consulting. Michael can be reached at: mike@rndsoftware.com.