When 100 Mb/s Isn't Fast Enough
Chris Chroniger and David Yarashus
Switched Ethernet and Fast Ethernet connections at 100 megabits per second (Mb/s) are becoming commonplace, but some applications demand still more speed. We were recently approached by a company that had found dedicated Fast Ethernet connections too slow for its scientific visualization application running on high-end SGI systems, and this article is a summary of the faster-than-100-Mb/s networking options we found. We hope that it will save you some time if you need to make a decision about how to build a higher speed network.
The Need for Speed Traditional networking technologies such as Ethernet (shared and switched 10 Mb/s), Fast Ethernet (shared and switched 100 Mb/s), FDDI (shared 100 Mb/s), and Token Ring (4 or 16 Mb/s) are not capable of delivering the high-bandwidth, low-latency I/O required for the new breed of data-intensive, client/server applications. As companies try to realize the productivity benefits that high-throughput networking can deliver, demand is growing for increased throughput in every part of the network. Table 1 summarizes some of the driving applications and their impact on computer networks. The combination of these bandwidth-hungry applications and the increasing number of impatient network users is fueling the need for higher performance networks.
Some high-speed networks are better suited for very data-intensive applications such as scientific modeling, while others are more suited for the clustering and aggregation of data streams that you might see in LAN server farms. We will discuss both types of applications and describe the network characteristics of each.
How Fast Is Fast? Everyone wants computer networks to be fast, but not everyone agrees on what "fast" means. In some situations, adding Ethernet switches to give each workstation a dedicated 10 Mb/s port may turn a slow network into a fast one. Some simple traffic engineering can also bring massive improvements, especially if the servers and users that consume the most bandwidth can be given faster connections, perhaps with 100BaseT or, in the future, Gigabit Ethernet. Fast Ethernet (100Base-X) is relatively well known, so we'll focus on faster-than-100-Mb/s alternatives.
For most of the past 10 years, the term "gigabit networking" was heard only when discussing a supercomputer system running some specialized application and communicating with high-speed proprietary peripheral devices. With the rapid technological advances made by the PC and workstation industry and a new generation of data-intensive killer apps such as real-time multimedia, the need for gigabit networking is increasing. This is especially true for companies that gain a real competitive edge from real-time or rapid information distribution and processing.
System Performance Versus Network Performance The move to gigabit networking may be inevitable, but how to make the transition and optimize the complete end-to-end system performance isn't always obvious. When designing a high-throughput networked system, every component of the system must be considered. The four primary components affecting the overall performance of a networked system are CPU, memory, network latency, and network bandwidth. CPU and memory performance have been increasing at a very rapid rate, but most companies continue to run the same type of shared Ethernet network to the desktop that they have had for the past 10 years, using either FDDI or Fast Ethernet as their backbone technology. As a comparison, modern Intel-based desktop computers now use a PCI bus architecture, which provides a maximum internal data rate of 132 megabytes per second (MB/s), compared to the 12.5 MB/s achieved by FDDI and Fast Ethernet, a figure that shrinks further once you count the overhead associated with each of these datalinks.
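To put those numbers side by side, here is a quick back-of-the-envelope calculation (a sketch in Python; both figures are nominal signaling rates, not measured throughput):

    # Back-of-the-envelope comparison of nominal data rates. Both figures
    # are signaling rates; protocol overhead lowers real throughput.
    PCI_BUS_MBYTES = 132.0        # 32-bit, 33 MHz PCI: 132 MB/s
    NETWORK_MBITS = 100.0         # FDDI or Fast Ethernet: 100 Mb/s

    network_mbytes = NETWORK_MBITS / 8.0
    print("PCI bus:            %5.1f MB/s" % PCI_BUS_MBYTES)
    print("FDDI/Fast Ethernet: %5.1f MB/s" % network_mbytes)
    print("ratio:              %5.1f to 1" % (PCI_BUS_MBYTES / network_mbytes))

In other words, the desktop bus can move data roughly ten times faster than its fastest common network connection.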
Essential Communications likens a network connection to a "pipeline" through which data moves, like water in a sprinkler system. The simplest version of this view suggests that to decrease the time required to move a large volume of data, you just increase the size of the pipe. The pipeline analogy is a good one, and it extends naturally: like a sprinkler system with its valves, pumps, and controllers, the reality of high-speed networking involves much more than pipe diameter.
If a bigger data pipeline were the only thing required, companies could rip out their old copper cable and install optical fiber to solve their data-transfer problems. Unfortunately, the infrastructure of a legacy network also includes the existing routers, switches, hubs, NICs, and software drivers installed in workstations and servers. Thus, to build a faster network, not only do companies need bigger pipes, but they also need new pumps, controllers, and valves that can handle the increased data flow.
One of the biggest bottlenecks in end-to-end network performance is the ability of the host's protocol stack to move data in and out of the computer. In the sprinkler system analogy, this bottleneck is like a valve. So, we have a powerful pump (the computer) and large-diameter piping (fiber-optic or category 5 cabling), but we need a bigger valve to move data more efficiently. This might take the form of larger TCP segment and IP packet sizes, I/O coprocessors, or any number of other system-dependent variables. To achieve very high throughput networking, you must consider all aspects of the system.
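On many systems, the kernel's socket buffers are the first valve worth widening. Here is a minimal sketch using the standard sockets API as exposed by Python; the 256 KB figure is our own illustrative choice, not a recommendation, and the right size is system-dependent:

    import socket

    # Sketch: enlarging the kernel's TCP send and receive buffers.
    # The OS may round or cap whatever size you request.
    BUF_SIZE = 256 * 1024

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF_SIZE)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF_SIZE)

    # Check what the system actually granted.
    print("send buffer: %d" % sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
    print("recv buffer: %d" % sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))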
For example, if your users are complaining about how slow the network is, and the bottleneck turns out to be the speed at which a database server can crank out answers to complicated queries, a faster network would be a waste of money. To build a high-performance system, you must consider not just the hosts, not just the network, not just the operating systems, but everything along the complete data transfer path. Having a gigabit network attached to a supercomputer won't yield good performance unless the system is tuned and the application is at least reasonably efficient. You may sometimes see order-of-magnitude improvements in data transfer over high-speed networks by tuning TCP segment sizes on a server, and sometimes even bigger improvements by tuning the application. Programmers often haven't paid any attention to how a network affects their application, so relatively simple changes, like reading or writing a large buffer instead of a character or line at a time, can change an application's networked performance dramatically.
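As an illustration of that last point, here are two ways an application might push a file across an already connected socket; the function names and the 64 KB block size are hypothetical choices of ours:

    # Contrast: many tiny writes versus a few large ones.
    # usage (assumed): sock = socket.create_connection((host, port))

    def send_line_at_a_time(sock, path):
        # Each small write is a separate trip through the protocol
        # stack and may become its own small packet.
        for line in open(path, "rb"):
            sock.sendall(line)

    def send_in_large_blocks(sock, path, block_size=64 * 1024):
        # A few large writes let the stack fill full-size segments,
        # cutting per-call and per-packet overhead.
        with open(path, "rb") as f:
            while True:
                block = f.read(block_size)
                if not block:
                    break
                sock.sendall(block)

On most systems the second version performs dramatically better, simply because far fewer system calls and packets are involved.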
Evaluation Criteria for High-Speed Networks The following five key factors should be considered when evaluating high-speed networking technologies:
Throughput With workstation and server buses operating at 132 MB/s (1.056 gigabits/sec) and higher, the network will be required to operate at comparable or faster speeds. Ideally, the networking technology will natively support multiple speeds so that higher speeds can be used for servers and in backbones, and lower speeds can be used where the extreme bandwidth demands are not required.
The primary determining factors of network throughput are bandwidth and latency. All of the technologies we discuss here provide significant increases in bandwidth over legacy networks such as Ethernet, Token Ring, and shared FDDI, some by orders of magnitude. While bandwidth can be scaled almost infinitely in theory (ATM was designed with that goal in mind), no one has figured out how to transmit data faster than the speed of light. This can create a serious imbalance in very high-speed networking, especially over very long distances where long latency cannot be avoided. One of the differentiators between the major competing high-speed network types is the amount of latency they introduce and the variability of that latency (sometimes called "jitter").
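One way to quantify the interplay between bandwidth and latency is the bandwidth-delay product: the amount of data "in flight" on a link at any instant, which a sender must be able to buffer to keep the pipe full. A rough sketch, with illustrative distances and rates:

    # Bandwidth-delay product: data "in flight" on a link, and hence the
    # minimum window/buffer a sender needs to keep the pipe full.
    FIBER_KM_PER_SEC = 200000.0   # light in glass travels at roughly 2/3 c

    def in_flight_bytes(bandwidth_mbps, distance_km):
        round_trip_sec = 2.0 * distance_km / FIBER_KM_PER_SEC
        return bandwidth_mbps * 1e6 / 8.0 * round_trip_sec

    # A gigabit pipe across a campus versus across the country:
    print("campus (2 km):           %9.0f bytes" % in_flight_bytes(1000, 2))
    print("cross-country (4000 km): %9.0f bytes" % in_flight_bytes(1000, 4000))

At gigabit speeds across a campus, a few kilobytes of buffering suffice; across the country, the same link needs megabytes of window just to stay busy, which is why latency becomes the dominant factor over long distances.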
Packet Size There is a huge variation in the size of the data packets supported by gigabit technologies, and the maximum packet size supported has a great effect on total application throughput. The ratio of data to overhead is better when using large packet sizes, which results in a more efficient network. When smaller packet sizes are used, the hosts and networking devices spend more time examining headers, making routing decisions, and on I/O in general, which leads to a less efficient network. Each individual packet received may even generate a CPU interrupt for the host to process.
Larger packets are not always better, however. Make sure you understand all the links in your network. If the packet size is too large, then an intermediate device like a Cisco router on the network may have to fragment the packet (chop a big packet up into several smaller ones), which will then require reassembly at the destination. Fortunately, IP can deal with this, but fragmentation and reassembly introduce some delay and may increase out-of-order packet delivery. When using very small packets on WAN links, you may not be able to achieve even 50 percent link utilization before saturation. The typical packet sizes in most networks running mixed protocols and applications will be between 64 bytes and 1500 bytes. Packet size distribution graphs will usually show spikes around 64 bytes (acknowledgments), 576 bytes (older systems or ones not performing MTU discovery), and 1500 bytes (maximum Ethernet size and the default maximum on most serial lines). In the major gigabit networking technologies, packet sizes range between 64 bytes and 64 kilobytes, independent of cell sizes.
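The efficiency argument is easy to quantify. Assuming 40 bytes of TCP/IPv4 headers (no options) and 18 bytes of Ethernet-style framing (other data links frame differently, so treat this as a rough sketch), the payload fraction at several common packet sizes works out as follows:

    # Payload fraction per packet, assuming 40 bytes of TCP/IPv4 headers
    # and 18 bytes of Ethernet-style framing. Other data links carry
    # different framing, so these figures are only indicative.
    FRAMING = 18
    HEADERS = 40

    for size in (64, 576, 1500, 4500):
        payload = size - HEADERS
        print("%5d-byte packet: %4.1f%% payload" %
              (size, 100.0 * payload / (size + FRAMING)))

A 64-byte packet is less than a third payload, while a 1500-byte packet is about 96 percent payload, which is why bulk transfers benefit so much from large packets.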
Availability Not every high-speed networking technology that you hear about is immediately available for the systems you need to support. If an OC-48 (2.488 Gb/s) ATM card becomes available for an SGI server, it doesn't do you any good if you need to support an HP. Make sure that your decisions are based not just on predictions of the future, but also on current availability for your computing platform of choice.
Interoperability Due to high costs and lack of familiarity with new networking technologies, it is rare that a complete network upgrade is performed at one time. Most organizations do not have a lab environment adequately equipped for testing very high-speed networking technologies that are to be phased in. During the transition period, compatibility and interoperability with existing technologies is required. Thus, it is a good idea to select a networking technology that is based on a published standard and that has multiple vendors supplying interoperable products for it.
Cost It is always difficult to determine the cost of implementing a new networking technology. It is even more difficult to determine the savings associated with the implementation of a new technology. When determining the cost of upgrading to a new networking technology, be sure to include all of the peripheral costs such as installing new cable, upgrading network interface cards, retraining staff, and upgrading the network management system. When determining the cost savings, don't forget to factor in the value of increased productivity brought by higher application throughput if you have a way to quantify it in your business. People running certain types of critical applications (medical diagnostics, product development, stock forecasting, etc.) over their networks may well be able to cost justify almost anything that creates a significant increase in the application's throughput.
High-Speed Network Options The gigabit-class technologies we researched for possible implementation included ATM at OC-12 and above, Fibre Channel, Gigabit Ethernet, and HIPPI. For those of you with Ethernet, Token Ring, and FDDI networks, we'll also mention switched FDDI and switched Fast Ethernet as relatively easy ways to upgrade the throughput of these legacy technologies. A brief overview of each of these technologies is provided below, along with some specific advantages and disadvantages.
Switched Fast Ethernet Fast Ethernet, also known as 100Base-X or IEEE 802.3u, delivers 100 Mb/s over Category 5 UTP, multimode fiber, and single-mode fiber-optic cable. It is the most popular 100 Mb/s technology available today. Like 10 Mb/s Ethernet, Fast Ethernet employs the carrier sense multiple access/collision detection (CSMA/CD) network access method for shared environments. Fast Ethernet technology also uses the same frame format and maximum length (1518 bytes) as Ethernet, so it does not require changes to the upper layer protocols, applications, or networking software that run on LAN workstations. Switching Ethernet data between 10 Mb/s, 100 Mb/s, and even 1000 Mb/s varieties of Ethernet does not require frame translation, which enables low-latency, media-rate performance in relatively inexpensive switches.
Many Fast Ethernet devices support both manual configuration and automatic negotiation of the mode of operation (half or full duplex) and the speed (10 Mb/s or 100 Mb/s). Full-duplex Fast Ethernet technology delivers up to 200 Mb/s bandwidth by providing simultaneous bidirectional communications - meaning that 100 Mb/s is available for transmission in each direction. Additionally, full-duplex mode isn't bounded by the collision diameter requirements of half-duplex transmission, so it supports a maximum point-to-point distance of two kilometers when using multimode fiber cables between two devices.
The greatest advantage of switched Fast Ethernet is that it is a low-cost, high-performance solution with the easiest migration path from traditional 10 Mb/s Ethernet. Switched Fast Ethernet is also an ideal step in a gradual migration to Gigabit Ethernet. Combining switched Fast Ethernet with switched Ethernet, shared Ethernet, and shared Fast Ethernet is the most cost-effective way to build a balanced and moderately high-throughput network. Light users can share a 10 Mb/s segment, moderate users can have dedicated 10 Mb/s connections or shared 100 Mb/s connections, and servers or power users can have dedicated Fast Ethernet connections. This provides a clear and easy migration path as users' requirements change.
Switched FDDI Most industry analysts are predicting that FDDI sales will continue to decline rapidly as more Fast Ethernet, Gigabit Ethernet, and Asynchronous Transfer Mode (ATM) products come to market, and that the future of FDDI switches is to serve the legacy market. Furthermore, most believe that FDDI to the desktop is too expensive and will not have a smooth upgrade path. However, in a data center environment that is already using FDDI, significant throughput improvements may be possible by adding an FDDI switch. Additionally, some FDDI equipment supports full-duplex operation when used with an FDDI switch, allowing significantly higher throughput.
FDDI (ANSI X3T9.5) and its copper variant, TP-PMD (sometimes imprecisely called CDDI, a trademarked term referring to a specific company's implementation), deliver a 100 Mb/s data rate over Category 5 unshielded twisted-pair (UTP), multimode fiber, and single-mode fiber-optic cable. FDDI employs a token passing scheme as its network access method. Because this scheme is combined with FDDI's dual ring architecture, FDDI is a self-healing LAN technology. FDDI supports single attachment stations (SAS) to one ring for connectivity, or dual attachment stations (DAS) for connectivity and redundancy. FDDI also has a Station Management (SMT) layer, which defines an extensive set of statistics designed for monitoring the LAN. Finally, FDDI supports frame lengths up to 4500 bytes for greater throughput. Data is generally routed, rather than bridged, between FDDI and Ethernet-family networks because of the different frame formats used and the problems associated with different packet size limits.
In a shared backbone environment, FDDI's sustained data throughput performance is better than shared Fast Ethernet. As the number of end stations on an FDDI ring increases, performance will not degrade significantly. FDDI's token passing scheme is capable of sustaining traffic rates of 90 percent of the total bandwidth. On the other hand, as the number of users increases on a shared Fast Ethernet segment, performance degrades with increased collisions. Most network managers should expect a shared Fast Ethernet network to sustain traffic rates of 30 to 40 percent, with limited bursts to 90 percent of bandwidth. However, today's architectures are moving away from shared networks and migrating to switched networks for greater bandwidth capacity.
The two greatest advantages of FDDI are its dual fault-tolerant rings and its slower degradation of performance with increasing traffic loads. Although FDDI can be used for high-speed client connectivity, in most cases FDDI has been used for server connectivity and high-speed backbones where its higher costs have been justified by delivering fast and reliable connections. Switched FDDI is currently used at several of the Internet's major peering locations. FDDI has also been strong in the campus backbone where its redundancy, throughput, and distance support have been primary requirements, but it is now facing a challenge there from ATM-based networks.
ATM By bringing together the best points of Time Division Multiplexing (TDM), statistical multiplexing, and cell switching, ATM provides a versatile, multifunctional network that can support a variety of services and traffic types. ATM's strongest points are its scalability and its ability to mix data types (voice, video, data). These capabilities make ATM popular across a wide spectrum of vendors, carriers, and end users.
ATM technology is based on the efforts of the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) Study Group XVIII to develop Broadband Integrated Services Digital Network (BISDN) for the high-speed transfer of voice, video, and data through public networks. Through the efforts of the ATM Forum, ATM is capable of transferring voice, video, and data through private networks and across public networks. ATM continues to evolve as the various standards groups finalize specifications that allow interoperability among the equipment produced by vendors in public and private networking industries. Today, most ATM LAN products run at 155 Mb/s (OC-3). Once the overhead is accounted for, ATM at that speed is in the same category as full-duplex FDDI and Fast Ethernet. OC-12 (622 Mb/s) is the next standard ATM speed above OC-3, but very few host NICs are available at that speed. As of this writing, the only OC-12 ATM host adapter the authors are aware of is a single NIC for Sun servers. Additionally, ATM networks can be complicated to configure and troubleshoot, and debugging tools are both expensive and hard to use.
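That overhead remark is easy to put a number on. Each 53-byte ATM cell carries at most 48 bytes of payload, and SONET framing consumes part of the OC-3 line rate before ATM ever sees it. A rough calculation:

    # ATM "cell tax" on an OC-3 link: SONET framing reduces the 155.52
    # Mb/s line rate to roughly 149.76 Mb/s of payload capacity, and
    # each 53-byte cell carries at most 48 bytes of user data.
    OC3_LINE = 155.52       # Mb/s
    SONET_PAYLOAD = 149.76  # Mb/s after SONET overhead
    CELL, CELL_PAYLOAD = 53, 48

    usable = SONET_PAYLOAD * CELL_PAYLOAD / CELL
    print("usable payload on OC-3 ATM: %.1f Mb/s" % usable)
    # roughly 135 Mb/s, before AAL5 and TCP/IP headers take their share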
While ATM is still a popular buzzword in most networking environments, particularly in the WAN world, we think that ATM will not become a dominant LAN technology because easier, cheaper, and faster alternatives exist.
Fibre Channel Fibre Channel (FC) is a standard defined by the American National Standards Institute (ANSI) in the ANSI X3T11 specification. FC can deliver gigabit-speed networks today and can support emerging technologies such as Gigabit Ethernet. These features, along with FC's ability to work with any existing WAN or LAN technology, are good reasons to consider it for future-proof networks. However, FC is a complex standard: it defines four data rates, three media types, four transmitter types, three distance categories, three classes of service, and three possible switch fabrics. It is designed more for streaming large amounts of data than for low latency.
Until recently, Fibre Channel was considered by many to be no more than a storage interface, rather than a network backbone, computer cluster, and client/server solution. While the storage industry was the first to understand FC's price/performance advantages and was also the first to implement products based on this ANSI standard, other environments are now starting to adopt FC to meet high-bandwidth needs. Although FC supports several peripheral-oriented command sets, it can also be used as a data link for IP-based traffic.
FC switches are extremely scalable. At present, they can achieve aggregate bandwidths exceeding 800,000 Mb/s and can accommodate more than 3000 users. Next-generation devices in development are expected to increase those capabilities.
Although the FC specification's top signaling rate is currently 1.0625 Gb/s (4 Gb/s has been proposed) with a data rate of 800 Mb/s, the most common implementations support a data rate of 200 Mb/s. FC is still being developed, particularly in the LAN environment, where no clear networking standard exists. Fibre Channel Arbitrated Loop (FC-AL), similar in spirit to a Token Ring, is a ring environment in which adding users reduces the amount of time each station has access to the network. FC-AL was developed with peripheral connectivity in mind and can support up to 126 devices per loop (ring). When implemented as a peripheral attachment mechanism with a fixed number of peripherals, arbitrated loop works well, but scaling quickly becomes a problem in the network environment.
Fibre Channel has become the industry standard for connectivity of high-speed peripheral devices, and there are two associations developing FC technology: the Fibre Channel Association (FCA) and the Fibre Channel Loop Community (FCLC). According to Sun Microsystems, FC-AL is the highest performance storage interconnect on the market today. Because of its performance and broad industry support, Fibre Channel is the storage interconnect of choice for users who need high reliability, hot pluggability, improved connectivity, and the ability to send large volumes of data quickly over long distances. Relatively few vendors, however, are making LAN products based on it today.
Gigabit Ethernet Gigabit Ethernet is an extension to the enormously successful 10 Mb/s and 100 Mb/s 802.3 Ethernet standards. Gigabit Ethernet provides a raw data bandwidth of 1000 Mb/s while maintaining full-frame format compatibility with the installed base of over 70 million Ethernet nodes. Gigabit Ethernet will include both full- and half-duplex operating modes. In the case of half duplex, Gigabit Ethernet will retain the CSMA/CD access method. Initial products will be based on the FC physical signaling technology adapted for a data rate of 1000 Mb/s running over fiber optic cabling. Advances in silicon technology and digital signal processing will eventually enable cost-effective support for Gigabit Ethernet operation over Category 5 UTP wiring.
Based on the widespread popularity of Ethernet in its various forms, Gigabit Ethernet seems destined to become the preferred gigabit technology of the future. Gigabit Ethernet will function in essentially the same way as both Ethernet and Fast Ethernet, making the transition easy and providing full backward compatibility. Its primary weaknesses are the continued use of CSMA/CD and the relatively small Maximum Transmission Unit (MTU) of 1500 bytes. These weaknesses will prevent Gigabit Ethernet from efficiently supporting the high-volume data flows required by the applications driving the high-end gigabit networking world.
The IEEE 802.3z subcommittee is working on a Gigabit Ethernet standard targeted for completion in early 1998. Gigabit Ethernet is the most interoperable nonstandard the networking industry has seen in a long time, but until the standard is completed and final products have been tested for interoperability, prudent implementers will exercise caution (and write free upgrades into their purchase orders).
HIPPI HIPPI is an abbreviation for "High Performance Parallel Interface." There are two major types of HIPPI: the original parallel version, which uses many wires to transmit 32 or 64 bits at a time over copper cables, and the newer serial version, which sends data one bit at a time over fiber-optic cables. HIPPI is a point-to-point communications standard designed to provide 100 MB/s throughput (200 MB/s in its 64-bit variant), compared to the 12.5 MB/s delivered by FDDI, and unlike Ethernet, 802.5 Token Ring, and FDDI, HIPPI does not use a shared medium. HIPPI switches must be used between HIPPI-connected hosts. HIPPI is supported by most supercomputer and parallel processing machines.
HIPPI provides a basic data rate of either 800 Mb/s or 1.6 Gb/s (depending on the bus width used and whether it is serial or parallel) and may be configured for either simplex or duplex data transmission. Most implementations use the 800 Mb/s serial variety, which can run at distances of up to 10 kilometers over single-mode fiber. HIPPI is defined in several ANSI standards, including X3.183-1991, X3.222-1993, X3.218-1993, and X3.210-1992. Work is already under way on a 6.4 Gb/s flavor (3.2 Gb/s in each direction, full duplex) called HIPPI-6400 or Super HIPPI. Serial HIPPI, the form becoming most popular, is based on an interoperability agreement between manufacturers and is a working document in the HIPPI Networking Forum.
High-volume data stream applications, such as simulation and medical imaging, require higher throughput than a typical network can provide. HIPPI provides the highest throughput rates available today based on currently available networking products. HIPPI runs at a raw rate that is 8 times faster than FDDI and Fast Ethernet, and it can use larger packet sizes (up to 64 KB) to optimize data transfer rates.
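To make that concrete, here is roughly how long a one-gigabyte data set (a short uncompressed image sequence, say) takes to move at the raw rates discussed in this article; real transfers will be slower once protocol and host overhead apply:

    # Time to move one gigabyte at the raw data rates discussed here.
    # Raw rates only; protocol and host overhead will stretch these.
    GIGABYTE_BITS = 8.0 * 2 ** 30

    for name, mbps in (("Fast Ethernet / FDDI", 100),
                       ("ATM OC-3", 155),
                       ("Serial HIPPI", 800),
                       ("64-bit parallel HIPPI", 1600)):
        print("%-22s %6.1f seconds" % (name, GIGABYTE_BITS / (mbps * 1e6)))

At 100 Mb/s the transfer takes well over a minute; serial HIPPI brings it down to roughly ten seconds.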
Conclusion With the proliferation of massive Internet/Intranet connectivity requirements, high-speed networks must have the capability to interoperate with lower speed networks. For example, our customer's high-speed network devices needed to interoperate with their legacy Fast Ethernet workstations. Serial HIPPI switch and NIC availability for our computing platform of choice, combined with strong legacy network connectivity support, made HIPPI the best choice for our specific requirements.
Based on our experiences working with and designing high-speed networks, we have concluded that single-technology solutions are often impractical, and that optimal price/performance ratios can be achieved only through a careful examination and understanding of your real requirements.
Table 2 is a summary of some of the advantages and disadvantages of the high-speed networking technologies discussed in this article, along with our opinions of them. Although you may not have heard some of these technologies mentioned in a networking context before, we were pleased by the range of technology choices available to meet our needs. Combining these gigabit technologies with switching in your existing legacy network can be a cost-effective way to meet very high throughput requirements. But remember, when looking at high-speed networking technologies, don't limit your search to technologies with "Ethernet" in their names. There are alternatives that may be much better for your needs, as HIPPI was for ours.
References
Quick Reference Guides to 100-Mbps Fast Ethernet
By Charles Spurgeon
http://wwwhost.ots.utexas.edu/ethernet/descript-100quickref.html
Cisco Systems
http://www.cisco.com/
Gigalabs
http://www.gigalabs.com/
Ancor Communications, Inc.
http://www.ancor.com/
Essential Communications
http://www.esscom.com/
Gigabit Ethernet Alliance
http://www.gigabit-ethernet.org/
The ATM Forum
http://www.atmforum.com/
FDDI Switching Information
http://www.networks.digital.com/html/white-papers.html
Fibre Channel Information
http://www.intel-sol.com/solutions/ancor/wp_index.html
http://www.sun.com/storage/wp/fc_comp.html
HIPPI Information
http://www.cern.ch/HSI/hippi/
http://www.sgi.com/support/QNA/FAQ.book_1355.html
About the Authors
Chris Chroniger is a Senior Network Engineer with TimeBridge Technologies, a consulting firm in the Washington, D.C. area. Chris has been working in the computer industry for 10 years and specializes in multiprotocol networking. Chris has a B.S. in Computers and Information Sciences from the University of Maryland and previously worked at the White House and NASA Headquarters. Chris enjoys playing sports and thanks his wife, Rhina, and daughter, Rachel, for putting up with his frequent work-related absences.
David Yarashus is a Senior Consultant with Chesapeake Computer Consultants, where he focuses his time on high-end, networking-related issues, especially routing, troubleshooting, and network management. He has contributed to several books, and is both a Cisco Certified Internetwork Expert (CCIE# 2292) and a Certified Network Expert (CNX).