
Questions and Answers

Amy Rich

Q We have a SunFire 280R that's acting as a cheap, high-speed firewall for our internal network. We have a quad-gigabit card in the machine; three of its ports attach to internal LANs, and the fourth is connected to the DMZ. The problem is that we're seeing dismal throughput on this box, nowhere near gigabit speeds. We specifically went with a host-based solution instead of an actual firewall appliance because we wanted cheap gigabit speed. Is there something we can tune to get better performance out of this?

A You're probably running into several issues, both hardware and software related. You state that you're using a 280R for this purpose, but you don't mention which operating system you're running or the exact hardware configuration. To start, take a look at your adapter under the Sun System Handbook at http://sunsolve.sun.com/ to see whether you're running at a supported OS revision and whether you have all of the necessary patches. If you have an X4444A, for example, you should take a look at:

http://sunsolve.sun.com/handbook_private/Devices/Ethernet/ETHER_QGE_UTP.html
This is one of the quad-GigE cards supported by the 280R and is handled under the ce driver. The driver requires Solaris 7 Patch 112327-17, Solaris 8 Patch 111883-23, or Solaris 9 Patch 112817-16. If you're trying to use VLANing, you also need Solaris 8 Patch 112119-04 or Solaris 9 Patch 114600-02.
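
A quick way to check whether those patches are already on the system is to grep the output of showrev; for example, on a Solaris 8 box (using the patch IDs listed above):

showrev -p | egrep '111883|112119'
If the patches show up at the listed revision or later, you're set; otherwise, pull them down from SunSolve before going any further.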

Next, make sure that both ends of the connection (i.e., the switch port and the port on your Sun) are running at the same speed and duplex. You can check the speed and duplex under Solaris by running:

kstat -p ce | grep link_
The settings are translated as:

link_up - 0 down, 1 up
link_speed - speed in Mbit/s
link_duplex - 1 half duplex, 2 full duplex, 0 down
All of your GigE ports should be running at 1000, full duplex.
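
If one of the ports has negotiated down to a lower speed or to half duplex, you can restrict what the ce driver advertises so that only 1000-Mbit full duplex is offered. The following is only a sketch for the first port (instance 0 here; repeat for the other instances as needed):

ndd -set /dev/ce instance 0
ndd -set /dev/ce adv_1000fdx_cap 1
ndd -set /dev/ce adv_1000hdx_cap 0
ndd -set /dev/ce adv_100fdx_cap 0
ndd -set /dev/ce adv_100hdx_cap 0
ndd -set /dev/ce adv_10fdx_cap 0
ndd -set /dev/ce adv_10hdx_cap 0
These settings don't survive a reboot, so once they test out, make them permanent in the ce driver's ce.conf file or in an rc script.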

If you're not running Solaris 10, then as a general rule of thumb you want 500 MHz of CPU per GigE port (assuming that you want to saturate the port as much as possible). Sun recommends this number for its GigaSwift adapters. The following is from http://www.sun.com/products/networking/ethernet/gigaswift/faq.xml:

500 MHz of CPU per adapter is recommended, with a minimum of 167 MHz processor to handle interrupts.

A general rule of 1 MHz of CPU power per 1 Mbps may help the user to assess the processor requirement for the Adapter. This rule is based on ttcp results that send maximum speed data streams from point to point.

Other system parameters in assessing processor requirements are MTU packet size, single thread/multi-thread and network bandwidth dedicated to a single point. For applications such as data, video streaming and network backup characterized by maximum MTU and maximum network bandwidth dedication, users may apply the 1 MHz per 1 Mbps rule.
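
Applying that rule of thumb to your configuration gives a rough sense of the demand if all four ports were pushed toward line rate:

4 ports x 1000 Mbit/s x 1 MHz per Mbit/s = roughly 4000 MHz of CPU
And that's just for moving packets, before your firewall rules do any work.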

The 280R only holds two CPUs, so you're probably constrained there, especially if you only have one CPU installed. Also, the 280R has only one 64-bit, 66-MHz PCI slot. Make sure that you have your quad card in slot 1; otherwise you're using a 64-bit, 33-MHz slot that shares its bus with the other two 64-bit, 33-MHz slots, 3 and 4. If you have the card in a slot other than 1, you're not only running the card more slowly, you're also competing with the other two slots for bus access. If you're trying to move data at gigabit speeds on all four ports, you'll want hardware that has high-speed PCI slots on multiple buses, and separate cards instead of one quad card.

There might also be some tuning in /etc/system that will help, depending on where your bottleneck is. For additional information, take a look at the Blueprint article, "Understanding Gigabit Ethernet Performance on Sun Fire Servers", at:

http://www.sun.com/blueprints/0203/817-1657.pdf
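Depending on where the bottleneck turns out to be, tunables set via ndd can matter as much as /etc/system settings. As one illustrative example (the values here are examples only, not recommendations for your workload), raising the TCP high-water marks and the maximum buffer size can help sustain gigabit streams:

ndd -set /dev/tcp tcp_xmit_hiwat 400000
ndd -set /dev/tcp tcp_recv_hiwat 400000
ndd -set /dev/tcp tcp_max_buf 4194304
Test any such change under your real traffic load before committing it to a startup script.
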
Q We have a script that runs via cron that needs to modify some files non-interactively. Previously, we ran this script as root, and there were no issues because root had write access to everything. We're trying to tighten security, though, so we no longer want to run this script as root but as the application user. Unfortunately, the application user doesn't own its own files; they're owned by root. To get around this, we were looking at using sudo. The problem is that one of the cron jobs does an echo redirected to a file, and this doesn't appear to work; we get cron output saying "permission denied" when writing the file. Do you have any suggestions?

A Tightening security is always a good idea, but if you have files that must be owned by root and not writable by this other user, then writing to them non-interactively is very difficult to do securely. Sudo cannot help with the redirection itself, because redirection is performed by your shell, not by the command being run under sudo.

There are a few options to accomplish what you need, but you might be better off leaving the script running as root in the end. None of these are particularly secure, given your requirements, but one may be a better trade-off than another:

1. Create another user just to do the modifications. If you're running an application such as Apache and you don't want the user the httpd process runs as to have write access to the document root, you can create a third user that owns all of the files but does not own the process. That way there's no chance of httpd writing to the document root via a CGI hole or something similar, yet the specialized user can still modify the files at will, interactively or via cron. This is fairly secure, but it doesn't help you if the files must be owned by root.

2. Write a shell script wrapper for your cron jobs so that the user who needs to make the modifications can just run the whole shell script under sudo. This is dangerous because the user could always break out of the script and run anything as root. A better option here is to write a C program, or a shell script with a C wrapper, that only allows very specific commands to be run and does not let the user break out of the script into a shell; a minimal sketch of this idea appears after this list.

3. If you must run sudo and make output redirection work, then you can fork a separate shell under sudo:

sudo sh -c "echo foo > newfile"
This, again, is akin to giving the user full sudo access because anything can be spawned under the shell running as root.
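
To make option 2 a bit more concrete, here is a minimal, hypothetical sketch: a wrapper script that takes no arguments, plus a sudoers entry that permits only that script with no arguments. The path, file names, and user name are made up for illustration; adapt them to your environment:

#!/bin/sh
# /usr/local/sbin/update-appfile: perform the one privileged write the
# cron job needs.  No arguments are accepted, so nothing user-supplied
# is ever interpreted by the root shell.
PATH=/usr/bin:/bin
umask 022
echo "last updated: `date`" > /var/app/root-owned-file

The matching sudoers entry (the trailing "" means the command may only be run with no arguments):

appuser ALL = (root) NOPASSWD: /usr/local/sbin/update-appfile ""

The application user's crontab then runs sudo /usr/local/sbin/update-appfile. This is only as safe as the wrapper itself, so make sure it's owned by root, not writable by the application user, and never acts on user-controlled input.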

Q We've just moved to keeping our internal document repository under CVS. We have mostly HTML files with some images and a few Star Office documents. CVS works great for the HTML files and any PHP or Perl code we're checking in, but when we check in images, tar archives, or other sorts of binary documents, CVS messes with the file so that it's no longer readable. Is there a way to check in files so we have usable backup copies?

A When you check in binary files, CVS cannot do partial diffs to show you the modifications to the file. If you're looking for something that will save state-change information instead of an entire copy of each binary file at every new revision, CVS is not the tool to use. You can, however, configure CVS so that it keeps whole copies of each binary file without making them unusable. There's a file called cvswrappers under your CVSROOT. You can add lines to this file to inform CVS that files matching a certain pattern should be handled in a certain way. To keep CVS from modifying binary files such as images, archives, compressed files, and documents when they're checked in, add the following lines to the cvswrappers file:

# Do not modify images
*.gif -k 'b'
*.jpg -k 'b'
*.jpeg -k 'b'
*.png -k 'b'
*.tiff -k 'b'
 
# Do not modify archives or compressed files
*.Z -k 'b'
*.gz -k 'b'
*.bz2 -k 'b'
*.tgz -k 'b'
*.tar -k 'b'
*.rar -k 'b'
*.jar -k 'b'
*.zip -k 'b'

# Do not modify documents
*.ps -k 'b'
*.pdf -k 'b'
# Microsoft extensions
*.doc -k 'b'
*.xls -k 'b'
*.ppt -k 'b'
# StarOffice and OpenOffice extensions
*.sdc -k 'b'
*.sdw -k 'b'
*.sda -k 'b'
*.sdd -k 'b'
*.sxc -k 'b'
*.stc -k 'b'
*.sxd -k 'b'
*.std -k 'b'
*.sxi -k 'b'
*.sti -k 'b'
*.sxm -k 'b'
*.stm -k 'b'
*.sxw -k 'b'
*.stw -k 'b'
The first field on each line is a filename pattern (a shell-style wildcard), and the -k option tells CVS to set the keyword-expansion mode to "b" for binary. You can include whatever patterns you'd like in this file.
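
Note that cvswrappers is one of CVS's administrative files, so you change it the same way you change any other version-controlled file: check out the CVSROOT module, edit the file, and commit. For example:

cvs checkout CVSROOT
cd CVSROOT
vi cvswrappers
cvs ci -m "Treat images, archives, and documents as binary" cvswrappers
CVS rebuilds its administrative file database in the repository when the commit completes.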

For files that have already been checked in without the binary flag, future revisions will also be mangled unless you fix the keyword-expansion mode recorded in the repository. To do so, set the mode to binary with cvs admin, replace your working copy with a good copy that hasn't been corrupted by CVS, and then commit the file:

cvs admin -kb GoodFile.doc
cvs ci -m "Recommit as a binary file" GoodFile.doc
See the CVS man page and documentation for more information on the cvswrappers file, keyword expansion, and command-line options.
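
To confirm that the repository now treats a given file as binary, check the Sticky Options field of the cvs status output:

cvs status GoodFile.doc | grep 'Sticky Options'
A value of -kb there means CVS is storing the file as binary, with no keyword expansion or line-ending translation.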

Q We've just switched to hosting our Web site at an external hosting company so that we don't have to deal with the maintenance and security concerns of administering it on our own. The rest of our company is firewalled off from the Internet, but there is an OpenSSH gateway running on a FreeBSD 5.3 machine. We allow only a select few IPs to connect to our OpenSSH gateway, and one of those is the address of the Web server.

Unfortunately, the company that does the hosting uses virtual IPs and puts many companies on one machine. This means that the machine itself has its own IP, which does not match our Web server's IP, and it's the hosting machine's IP that shows up as the source address when a connection is initiated from the Web server. We don't want to allow the hosting company's machine IP access; we just want to allow our virtual IP access. Is there any way around this, or will we have to allow the Web hosting company's IP (and consequently anyone who connects to their machine)?

A You can continue to allow access from your Web server IP only and not the generic Web hosting machine's IP, but you have to specify the bind address when you ssh from the Web server. From the ssh man page:

-b bind_address -- Specify the interface to transmit from on machines with multiple interfaces or aliased addresses.

To specify the bind address on the command line, type:

ssh -b your.virt.ip your.gateway.machine
You can also have users set the variable BindAddress in their private $HOME/.ssh/config files instead of always specifying the setting on the command line.
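
Using the same placeholder names as above, the per-host entry in $HOME/.ssh/config would look something like this:

Host your.gateway.machine
    BindAddress your.virt.ip
With that in place, a plain "ssh your.gateway.machine" picks up the bind address automatically.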

This doesn't actually buy you much in the way of security, because anyone with an account on that machine can also specify your virtual IP as the bind address. There is some level of security through obscurity, however, because people trying to connect to your gateway would need to know that they have to specify the bind address.

Q We do software development across a wide variety of Unix operating system platforms. One of our file servers is a Linux box, and I've been trying to set up a new Solaris 10 box to talk to it, without luck. I was attempting to get things working with automounter, but when that failed, I stopped automounter and just tried to get a basic mount working.

From the Solaris 10 box, I try to mount the Linux NFS share:

mount -F nfs linux:/fs /mnt/fs
and I get a weird permissions error:

nfs mount: mount: /mnt/fs: Not owner
I've gone over the permissions, and everything looks just fine, not to mention that I'm root when I'm trying the mount, not a normal user. This works fine on every other machine that we have in house, including Solaris 2.6, 7, 8, and 9. I know that a bunch of things changed with the service manager and startup scripts in Solaris 10, but I'm not sure why that would be affecting me here. I've tried restarting the client several times without any luck using the command:

svcadm restart nfs/client
Did I miss something obvious, or should this just work and I've found a bug?

A I've seen several people mention having issues between Linux and Solaris 10 when it comes to doing NFS out of the box. The theory is that the Linux NFS implementation isn't playing well with the Solaris 10 NFSv4 implementation. To determine whether this is your problem, try using the vers option to the mount command to specify that you want version 3, not version 4, of NFS:

mount -F nfs -o vers=3 linux:/fs /mnt/fs
If this fixes the issue, edit /etc/default/nfs and add the following line to turn off NFSv4 on the Solaris 10 machine permanently:

NFS_CLIENT_VERSMAX=3
Then restart the NFS client for the change to take effect:

svcadm restart nfs/client
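Once the mount succeeds, you can confirm which protocol version it's actually using with nfsstat; the Flags line of the output includes a vers= field:

nfsstat -m /mnt/fs
If that reports vers=3, the workaround is in effect.
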
Amy Rich has more than a decade of Unix systems administration experience in various types of environments. Her current roles include that of Senior Systems Administrator for the University Systems Group at Tufts University, Unix systems administration consultant, and author. She can be reached at: qna@oceanwave.com.