The Art of Attack and Penetration - Understanding Your Security Posture
George Kurtz and Chris Prosise
Distributed computing environments have proliferated over the last several years to support core business operations. These environments are heterogeneous, complex, and geographically dispersed; nevertheless, most organizations have become reliant upon them for everything from executive decision-making to customer service. On top of this already complicated and rapidly changing systems landscape, enabling technologies such as electronic commerce, remote access, intranets, extranets, the Internet, and a host of other applications have extended the perimeter of an organization's computing environment into areas that boggle the minds of most CIOs. Of course, implementing necessary and viable security controls is vitally important as this expansion takes place.
What Is an Attack and Penetration Review?
Essentially, an Attack and Penetration (A&P for short) review provides a real-life test of an organization's exposure to security vulnerabilities and determines the extent to which it is susceptible to external or internal attack and penetration. Testing is accomplished by attempting real incursions into an organization's computing environment and can be performed in several different environments, as listed below:
Internet - Assesses the ability of an organization's corporate firewall system to thwart attacks from unauthorized individuals on the public Internet.
Intranet - Assesses the security of systems directly connected to an organization's internal network.
Extranet - Assesses the security of an organization's connections with its business partners, vendors, and other entities.
Dial-in - Assesses the security of an organization's authorized and unauthorized dial-in access points.
Benefits of A&P Reviews
A&P reviews provide a fast, efficient, and tangible assessment of the security of information assets, communications, and control infrastructure of an organization. They identify network and system vulnerabilities that could be exploited from outside or inside the organization with varying levels of access and information.
Attack scenarios include the ability to view, steal, corrupt, modify, deny access to, or destroy corporate information assets as an "outsider" with no knowledge of organization operations and as an "insider" (employee, consultant, or business partner).
Properly performed A&P reviews should provide a highly structured approach to examining the security posture of an organization. This is not "hacking" in the sense in which it is portrayed in the media. Instead, it is a detailed technical examination of the security functionality of an organization and its related technologies. The ultimate goal of the A&P review is to identify system vulnerabilities so that they can be corrected before they are used for unauthorized purposes. The rest of this article focuses on the technical aspects of the A&P review; next month, we'll examine the corrective actions necessary to prevent exploitation.
Review Framework
Over the course of performing hundreds of these reviews, we have compiled what we consider a comprehensive methodology for performing A&P reviews. This article focuses primarily on performing A&P reviews via the Internet; however, many of the concepts and techniques can be applied to other access paths and technologies. Our goal is to give the reader a better understanding of A&P reviews and to provide a framework to help execute them.
Footprinting
Each organization with connectivity to the Internet has its own unique profile or "footprint". By using a combination of tools and techniques, a security practitioner can easily determine an organization's profile by performing what we call a "Footprint Analysis". Essentially, you take an unknown quantity (e.g., ABC Company) and reduce it to a specific range of domain names, IP network ranges, and host systems associated with ABC Company. The footprinting triangle depicts this relationship. In most cases, you can also identify each device by platform (e.g., SPARC, x86, proprietary router hardware), operating system/version number (e.g., Solaris 2.6, Linux, Windows NT), and application (e.g., Apache 1.3.3, Sendmail 8.8.8).
This footprint analysis consists of the following steps.
Network Enumeration
The first step in the network enumeration process is to identify domain names and associated networks related to a particular organization. Fortunately for us, the InterNIC (whois.internic.net) and American Registry for Internet Numbers (ARIN) (whois.arin.net) databases are excellent resources. Most whois clients allow you to query these databases easily. The following commands illustrate the whois syntax from a Linux and an OpenBSD system. The whois clients provided by most BSD flavors seem a bit more elegant than the stock Linux version, as they provide various command switches to search different databases (see the whois man page for more information).
Figures 1-4 demonstrate the various queries we can perform. As you can see, there are several different types of queries we can issue. The "organizational" query searches for information related to the name of the organization. The "domain" query searches for information related to a specific domain. The "network" or ARIN query searches for the owners of specific network blocks. Most databases can be accessed via the Web (www.internic.net, www.arin.net) or via a simple telnet interface (telnet rs.internic.net). See Figure 5 for a listing of additional whois servers.
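To give a feel for the syntax, the queries below are a minimal sketch; ABC Company and abccompany.com are hypothetical, and the exact query keywords and output format vary by whois client and server version.

# Domain query against the InterNIC registry
$ whois -h whois.internic.net abccompany.com

# Organizational query -- most servers accept a free-form name search
$ whois -h whois.internic.net "ABC Company"

# Network (net-block) query against ARIN to find address ranges registered to the organization
$ whois -h whois.arin.net "ABC Company"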
With this information in hand, we know that the organization has connectivity to the Internet, and we even know the network IP address ranges it owns. Note that many organizations register domains because they want to protect a trademark or keep a domain name for later use. In actuality, many of these domains don't have any real network blocks associated with them.
After identifying the appropriate networks, we can systematically determine access paths into the network by "tracerouting" each network and host. In addition to using standard traceroute clients, we can use special tools to traceroute to specific TCP or UDP ports, effectively bypassing traditional ICMP filtering rules. Thus, we can determine both legitimate access paths and access paths of which the target organization may not be aware. This exercise produces a comprehensive "access path" map that allows us to logically view the Internet access paths available to an outsider. Our goal is to determine access paths and ascertain access control lists (ACLs) that may be implemented on a router or firewall. We can use a standard traceroute client, as well as a utility like firewalk (www.es2.net/research), an interesting tool that helps determine access control lists. By identifying which services are allowed through an access control device, we greatly increase the probability of attacking systems via "allowed access paths."
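As a minimal sketch (the hostname is hypothetical, and option names differ between traceroute implementations), mapping an access path might look like this:

# Standard traceroute to map the hops leading to a target host
$ traceroute www.abccompany.com

# Probe toward a specific UDP port (here 53/DNS) in the hope that it is permitted
# through the border router or firewall; -p sets the base destination port
$ traceroute -p 53 www.abccompany.com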
DNS Interrogation
After querying the aforementioned databases, it is a good idea to see what type of information can be gleaned from the Domain Name System (DNS). Many times it is possible to coax the organization's name server into providing all hostname and IP address information via a "zone transfer". If the DNS server is misconfigured, it may give out not only legitimate external host information, but internal addresses as well. This provides the security practitioner with a complete blueprint of the organization's network. In addition, it may provide additional "enticement" information that can be used to subvert the security of the target system. One of the best examples of enticements or information leakage is to pull back a host information record (HINFO). If this information is accurate, it tells you exactly what type of operating system you are targeting. Thus, any vulnerabilities can be discovered in short order. Many different commands can be used to query DNS, including nslookup, dig, and host. In our example (Figures 6 and 7), we used host to perform the zone transfer and determine where mail is handled (MX record). One of our primary objectives is to determine the address of the firewall so that it can be tested. Many times, the firewall is the system that handles mail, or is at least located on the same physical network.
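The host queries below are a minimal sketch of the technique; abccompany.com is a hypothetical domain, and host's options may vary slightly by version.

# Attempt a zone transfer and list every record the name server will hand over
$ host -l -v -t any abccompany.com

# Determine where mail is handled (MX records) -- often on or near the firewall
$ host -t mx abccompany.com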
Even simple hostnames that might ordinarily seem innocuous can be used to help identify systems that have not been adequately secured. By grepping the output of our zone transfer, we can find any system with "test" in its name (Figure 8). Often these servers are misconfigured, or unattended, and are easy prey for an unauthorized user. If we wanted to target a specific OS, we could always grep for "linux", "sun", "bsd", etc.
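For example (again assuming the hypothetical abccompany.com zone), the output can simply be saved and searched:

# Save the zone transfer output, then look for potentially neglected "test" systems
$ host -l -v -t any abccompany.com > zone.out
$ grep -i test zone.out

# Or hunt for a particular platform
$ grep -i linux zone.out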
Host Identification
After obtaining possible targets from our DNS query, we need to determine which systems are actually alive and connected to the Internet. Just because a system shows up in a DNS listing doesn't mean it is accessible. We have seen many instances where a misconfigured DNS server will list IP addresses in the 10.0.0.x range. Needless to say, you would have a hard time trying to route to a non-routable internal IP address.
We have several free tools at our disposal, including nmap from Fyodor, and fping. Each tool provides the ability to perform "mass" ping sweeps quickly and efficiently. nmap is much more than just a ping sweep tool, and is one of the necessary tools you will need to perform A&P reviews. We can see from our example (Figure 9) that three hosts are alive on that particular subnet.
While most ping programs rely on the Internet Control Message Protocol (ICMP), nmap can also perform TCP pings. Thus, if ICMP is blocked at the border router, nmap will enumerate systems behind the router by taking advantage of "allowed access paths" (e.g., access to port 80).
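A minimal sketch of both sweep styles, assuming a hypothetical 192.168.1.0/24 target network and the nmap 2.x and fping options current at the time of writing:

# ICMP ping sweep of an entire class C network
$ nmap -sP 192.168.1.0/24

# TCP "ping" sweep against port 80 for networks that block ICMP at the border
$ nmap -sP -PT80 192.168.1.0/24

# fping reads target addresses from standard input and reports those that respond
$ fping -a < targets.txt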
Service Scan
Once systems that are "alive" and connected to the Internet have been identified using the above steps, it is time to begin port scanning each system. Port scanning involves determining which service ports (TCP/UDP) are present and listening. Identifying listening ports is critical to determining the type of operating systems and applications in use. Active services that are listening may allow you (the security practitioner) or an unauthorized user to gain access to systems that are misconfigured or running a version of software known to have security vulnerabilities.
Several freely available tools can be used to help identify open ports. Some of the better known and more sophisticated port scanners include nmap, strobe, tcp_scan, udp_scan, and netcat. For our purposes, we will use nmap. Although a detailed discussion of nmap is outside the scope of this article (see the article written by Arthur Donkers in the November 1998 issue of Sys Admin for more information), it is worth noting a few different types of scanning techniques; example nmap invocations follow the list. The type or combination of port scans employed by the security practitioner will vary based upon the "stealth" factor of the review. These scans may include:
Standard TCP/UDP scans - These are the most common and basic types of port scans.
Stealth Scans - FIN, SYN, and ACK scans. By setting these flags on the TCP packet, malicious packets can often evade detection. Additionally, some rudimentary access control devices are flawed and allow these types of packets to pass through to internal systems.
Fragmentation scanning - This is a modification of other techniques and allows the user to break the packets into several small IP fragments. Many access control devices do not handle this properly and allow fragmented packets to pass through the monitoring device undetected.
TCP reverse ident scanning - Systems that run the ident protocol (RFC 1413) allow for the disclosure of the username of the owner of any process connected via TCP, even if that process didn't initiate the connection. Thus, it may be possible to obtain the process owner of each listening service.
FTP bounce attack - The ftp protocol (RFC 959) allows support for "proxy" connections. If systems directly connected to the Internet have ftp enabled and are "bounce-able", it is possible to port scan other systems via the ftp server. Depending on the architecture of the organization's Internet firewall system, this technique may be used to bypass access control devices to perform port scanning activities. For more information, see Hobbit's ftp bounce paper at ftp.avian.org/random/ftp-attack.
TCP Fingerprinting - This technique is extremely powerful, and enables the security practitioner to quickly ascertain each host's OS with a high degree of probability. TCP fingerprinting generally requires access to at least one listening service port for maximum reliability; given one open port (e.g., port 80), it is possible to determine the exact operating system of the target by sending specially crafted packets to it. This information allows us to mount a very focused and methodical attack against the target system. Excellent tools to accomplish this include nmap (v2.0; www.insecure.org/nmap/) and queso (www.apostols.org/projectz/queso/).
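The commands below are a minimal sketch of several of these scan types, using nmap 2.x options against hypothetical targets (192.168.1.10 and ftp.example.com):

# TCP SYN ("half-open") scan of the first 1024 ports, with remote OS fingerprinting
$ nmap -sS -O -p 1-1024 192.168.1.10

# UDP scan of selected ports
$ nmap -sU -p 53,69,111,161 192.168.1.10

# FIN "stealth" scan with packet fragmentation, to probe weak filtering devices
$ nmap -sF -f 192.168.1.10

# TCP connect scan combined with reverse ident lookups of the listening processes
$ nmap -sT -I 192.168.1.10

# FTP bounce scan through a third-party ftp server
$ nmap -b anonymous@ftp.example.com -p 1-1024 192.168.1.10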
In Figure 10, we can see that many TCP ports are open; thus, if this server has not been hardened, there is a significant risk that an unauthorized user could compromise the security of this system.
Information Retrieval
After determining the listening ports on each system connected to the Internet, we will attempt to extract as much information as possible from the target system. This includes banners or other information specific to a listening port. Information "leakage" provided by services such as SNMP, finger, rusers, SMTP, or NetBIOS may allow us to obtain detailed configuration and user information on each system. While information leakage/banner retrieval is not always completely accurate, this information is invaluable in trying to determine the operating system as well as the version of the services running.
We can easily port scan each server and grab banners with netcat or strobe (see Figure 11). After port scanning each host, we can manually connect to each open TCP/UDP port with Hobbit's netcat (ftp://coast.cs.purdue.edu/pub/tools/unix/netcat/nc110.tgz). This little utility is the "Swiss army knife" of our security toolkit and should be in any security auditor's toolkit, including yours. By connecting to each port and noting the response, we can glean version information that may indicate a vulnerable server (Figure 11). The astute reader may be wondering, "Can't I do that with telnet?" The answer is "sort of". telnet will try to perform some negotiation even if you telnet to a port other than 23. This negotiation may not give you a "clean" connection, thus hindering your testing. Additionally, try telnetting to a UDP port; it's not going to happen.
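A minimal banner-grabbing sketch against a hypothetical Web server at 192.168.1.10:

# Connect to port 80 with netcat (-v verbose, -n no DNS lookups) and issue a simple request;
# the response headers typically reveal the Web server software and version
$ nc -v -n 192.168.1.10 80
HEAD / HTTP/1.0

(Press Enter twice after the request; the same approach works for SMTP, FTP, POP, and other banner-friendly services.)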
In Figure 12, we can see that this Web server is running Netscape-Enterprise version 3.0J. Thus, any vulnerabilities associated with this Web server can be manually ascertained by the security practitioner (preferred) or by an automated vulnerability scanner.
We are not only looking to determine specific version numbers, but also system-specific information. If "informational" services like finger and rusers are open, we can obtain username and account information, which may aid us in logging into the target system.
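For example (192.168.1.10 again being a hypothetical target), a couple of quick checks:

# List logged-in users on a system with the finger service exposed
$ finger @192.168.1.10

# Enumerate remote users where the rusersd RPC service is running
$ rusers -l 192.168.1.10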
In addition to simple banner grabbing, it is advisable to see whether each server is running the Simple Network Management Protocol (SNMP). This can easily be determined by performing a UDP port scan to see whether UDP port 161 is open. If so, snmpget, snmpwalk, and snmptest can be used to query the SNMP daemon and obtain a plethora of information.
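As a minimal sketch against a hypothetical host, using the ucd-snmp command syntax of the day (newer tools expect the version and community string as options, e.g., -v1 -c public):

# Check whether an SNMP agent is listening on UDP port 161
$ nmap -sU -p 161 192.168.1.10

# Walk the MIB using the default "public" community string
$ snmpwalk 192.168.1.10 public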
Vulnerability Mapping
Now that we have a solid understanding of the systems that are alive, the services they are running, and specific information such as user names, we can perform vulnerability mapping. Vulnerability mapping is the process of mapping specific attributes of a system to associated vulnerabilities that have been discovered. There are several methods we can employ, such as:
- We can take all the information we have collected, including OS version, specific version numbers for each listening service, and the architecture of each system, and manually perform vulnerability mapping. Although tedious, this can be accomplished by checking publicly available sources such as CERT, CIAC, and the Bugtraq archives for any specific vulnerabilities associated with the systems you are testing.
- You can use actual exploit code that you have written, or that has been posted to various security mailing lists or any number of Web sites. This will help you verify the existence of a potential vulnerability with a high degree of certainty.
- You can use any number of automated vulnerability scanning tools to help identify true vulnerabilities. On the commercial front, Cybercop Scanner from NAI (www.nai.com) and the Internet Security Scanner from ISS (www.iss.net) seem to be the most thorough. On the freeware side, the Nessus Project (www.nessus.org) shows promise.
Limitations of Automated Scanning Tools
While automated scanning tools provide a solid foundation for vulnerability detection, they have several limitations. Many of the tools generate inconclusive reports due to false positives, false negatives, and inherent ambiguity in automated scanning techniques. It is important to evaluate the results of each tool and manually verify the existence of difficult-to-detect vulnerabilities to ensure accurate reporting.
Finally, scanner technology has not evolved to the point of being able to perform vulnerability linkage. Vulnerability linkage is the practice of using several low-to-medium risk vulnerabilities discovered across different platforms to gain privileged access. Thus, a scanner may note several low or medium-risk vulnerabilities, but cannot determine if an attack combining or "linking" these vulnerabilities would result in a gaping security hole. This type of expertise is the true value-added benefit a skilled security practitioner can offer an organization.
Conclusion
If this were a real-life client review, we would actually compromise the security of the vulnerable systems and penetrate as far into the organization's network as possible. Most organizations that hire us are interested in tangible evidence of what can be done rather than just a simple report of potential vulnerabilities. Tangible evidence of security vulnerabilities is a tremendous benefit in getting additional security resources after a successful penetration review and can go a long way in justifying a security budget increase.
While we have tried to provide a framework for performing A&P reviews for your organization or clients, it is difficult to touch on the many facets of this exercise. We have spent many years trying to perfect these techniques, but realize it is an uphill battle given the rapid pace at which technology is evolving. Our best advice is to attempt to profile your own network (with permission, of course), and then enlist the aid of an experienced security practitioner. The time and effort you spend now may pay big dividends down the road. After all, seeing your organization's name appear in the Wall Street Journal or New York Times under the headline "Hackers Strike Again" will not be the most pleasurable experience of your career.
Unfortunately, identifying existing vulnerabilities is only half the battle. The identified vulnerabilities and their underlying causes must be corrected. Next month, we will address countermeasures for the types of attacks presented in this article.
About the Author
George Kurtz is a Senior Manager in the Information Security Services practice of Ernst & Young and serves as the National Attack and Penetration leader within the Profiling service line. Additionally, he is one of the lead instructors for "Extreme Hacking - Defending Your Site", a class designed to help others learn to profile their site (www.ey.com/security). Mr. Kurtz can be contacted at george.kurtz@ey.com.
Chris Prosise is a Manager in the Information Security Services practice with extensive experience in attack and penetration testing, incident response, and intrusion detection. Mr. Prosise can be reached at chris.proise@ey.com or chris@prosise.net.