Questions and Answers

Amy Rich

Q I'm migrating from Solaris 8 to Solaris 9. I use JumpStart to install all of our machines with 10MB of space for the replica databases. I kept the same configuration when moving to Solaris 9, and it told me that there was not enough room for the state databases. Obviously things have changed, so I'm wondering how much space I should be allocating for that slice now.

A With the version of Solaris Volume Manager (SVM) that ships with Solaris 9 (the product was called Solstice DiskSuite, or SDS, in older releases), Sun recommends 100MB for the slices containing the replicas. In Solaris 8, each replica occupied 1034 disk blocks of the partition. With the new version of SVM, the default replica size is 8192 blocks, roughly eight times the size of those in Solaris 8. If you are upgrading to SVM from SDS, be sure to increase the replica slice size. If you keep the slice size the same and then remove a default-sized SDS replica and add a new SVM one, you'll likely overwrite the blocks of the filesystem located after your replica slice. You can specify the size of a replica by passing a length in blocks with the -l option when you add replicas with metadb:

metadb -a -l length_in_blocks slice
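
For example, assuming the replica slice is c0t0d0s7 (a hypothetical device name) and you want three 8192-block replicas on it, you could run:

metadb -a -c 3 -l 8192 c0t0d0s7

Add -f if these are the first replicas being created on the system.
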
Q Our sendmail 8.12.11 server handles mail for a domain that belongs to someone else. If this user's machine is up, he wants all mail delivered to it. If his machine is down, he wants all of the mail for his domain delivered to a username on a different machine. Can you provide a custom ruleset that would do what he's after?

A Let's say your user's domain is client.domain, his machine is host.client.domain, and this other email address is user@other.domain. You can create a mailertable entry that will deliver his mail as desired.

First, add the mailertable feature to the mc file with the following line, and generate a new cf file:

FEATURE(`mailertable')dnl
This will default to using a hash map for the mailertable file. If you want to use a different kind of map, say dbm, specify this line as:

FEATURE(`mailertable', `dbm -o /etc/mail/mailertable')dnl
Second, put the following line into /etc/mail/mailertable (depending on your OS or any local customizations, this file may be called something else):

client.domain   esmtp:[host.client.domain]:user@other.domain
The brackets around the machine name ensure that no MX lookups will be done on the host, in case mail for that machine is usually handled by another server. If MX lookups are desired, just remove the brackets.

Change directory to the location of the mailertable file and build the mailertable map with the following command (specifying the correct map type for your installation):

makemap hash mailertable < mailertable
You should now have a file called /etc/mail/mailertable.db. Move your new cf file into place and send a HUP signal to sendmail.
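
Putting the pieces together, a minimal sequence (assuming the standard /etc/mail layout and pid file location; adjust the paths for your installation) might look like this:

cd /etc/mail
makemap hash mailertable < mailertable
cp /path/to/new/sendmail.cf sendmail.cf
kill -HUP `head -1 /var/run/sendmail.pid`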

Q We have a UDP application that a consultant has written for us that's spawned from inetd. The application isn't working quite right, and we're looking for some way to connect to the daemon directly and interact with it for debugging purposes. If this were a TCP application, I could just use telnet, but, as far as I know, there's no way to use telnet with UDP applications. Is there a switch to telnet or another tool that would work?

A Telnet can only make TCP connections, but you can use a program called netcat (ftp://coast.cs.purdue.edu/pub/tools/unix/netutils/netcat/) to form the same kind of interactive connection with your UDP application. Netcat is an invaluable debugging tool for UDP and TCP alike, and it's worth considering as a replacement for telnet even when debugging TCP applications, since it has a few advantages. To quote from the README distributed with the code:

Telnet has the "standard input EOF" problem, so one must introduce calculated delays in driving scripts to allow network output to finish. This is the main reason netcat stays running until the *network* side closes. Telnet also will not transfer arbitrary binary data, because certain characters are interpreted as telnet options and are thus removed from the data stream. Telnet also emits some of its diagnostic messages to standard output, where netcat keeps such things religiously separated from its *output* and will never modify any of the real data in transit unless you *really* want it to. And of course telnet is incapable of listening for inbound connections, or using UDP instead.
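
As a quick sketch, assuming the classic netcat command-line syntax, a placeholder hostname of yourserver, and a made-up port of 7777 for your daemon, you could drive the service interactively with:

nc -u yourserver 7777

Anything you type is sent to the daemon as UDP datagrams, and any replies are printed to your terminal. You can also emulate the server side for testing with nc -u -l -p 7777.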

Q I've been trying to jumpstart a U60 with Solaris 8 05/03. I get to the post-install script that's installing patches, and it bombs out with the following error:

Patch number 110668-04 has been successfully installed.
ERROR: Unable to use /tmp/patchadd-18782543 due to possible security issues.
ERROR: Unable to use /tmp/patchadd-18782543 due to possible security issues.
Patchadd is terminating.
I've successfully used this same JumpStart setup on another machine, so I'm not sure what's wrong. What security issues could possibly exist on a machine that's just been installed and hasn't even left the private JumpStart network yet? Logins aren't enabled, and everything is automated so the procedure should be exactly the same.

A You're running into an issue with the patchadd program, not anything external to your machine. The version of patchadd that comes with Solaris 8 05/03 does not remove its tmp files when installing patches. This occasionally results in tmp filenames being reused before they're cleaned out. Patchadd flags this as a possible security violation because the name it wishes to use already exists in the tmp directory. For more information, take a look at bugid 4678605:

http://sunsolve.sun.com/private-cgi/ \
  retrieve.pl?type=0&doc=bug%2Fsysadmin%2Fpatch_utility%2F4678605
The workaround is to manually remove the tmp files after each patchadd run, but if you're automating things, this may be easier said than done. If you're running your post-install scripts from /etc/rcX.d files, you may want to boot single-user mode, install the patches by hand (defeating the purpose of automating the installs), and then reboot, picking up where you left off in the post-install scripts. Alternatively, you could split your patch list into two files and remove the leftover tmp files between two separate runs of patchadd. This latter approach preserves your automation but takes slightly more time, because patchadd is invoked twice.
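
For example, a post-install fragment along these lines (the patch directory and list filenames are hypothetical) keeps things automated:

PATCHDIR=/var/tmp/patches
patchadd -M $PATCHDIR patch_list_1
rm -rf /tmp/patchadd-*
patchadd -M $PATCHDIR patch_list_2

Here patch_list_1 and patch_list_2 are files in $PATCHDIR that each contain half of the patch IDs, and the rm clears out the stale temporary directories that trigger the security warning.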

Q We process a large number of digital pictures and put them into online photo albums using Autogallery. Unfortunately, the digital pictures almost always come in as FILENAME.JPG, and Autogallery wants the filename extension to be a lowercase .jpg. Is there an easy way to have all the files converted on the fly when they're transferred from their respective cameras?

A You don't say how you're pulling the photos off the camera, so doing it on the fly there would depend on your method. If you're using Autogallery, though, the easiest approach might be to modify the build_list script in the tools directory. Before the ls line that creates the file "thelist", add code similar to the following:

for i in *.JPG
do
  if [ ! -f `basename $i .JPG`.jpg ]; then
    echo "moving $i -> `basename $i .JPG`.jpg"
    mv -i $i `basename $i .JPG`.jpg
  fi
done
You could also make this loop a separate script and invoke it from build_list (or wherever is convenient) before the ls line.

Q I've just started working at a company that's running an old version of sendmail with a sendmail.cf file that's been hand modified for years. For security purposes, we definitely need to upgrade sendmail. Is there a way I can generate a valid sendmail.mc file from the existing sendmail.cf file so we have a fairly smooth transition?

A There is no way to actually generate an mc file from a cf file, but you can get a good idea of what's changed, provided the version you're currently running supported building cf files from mc files in the first place. Very old versions of sendmail did not use m4 to generate the cf file, and modifying it by hand was the only way to go.

If you have a sufficiently recent version of sendmail, make sure that you have the necessary files to generate a cf file from an mc file. If you don't have the files on the machine already, download the source for the same version of sendmail that's running on the machine.

Determine which features and mailers were used by doing the following:

grep '$Id:' sendmail.cf
Or, for older versions of sendmail:

grep '@(#)' sendmail.cf
Next, create an mc file with those features and mailers defined. Generate the cf file from the mc file you just created and use diff to determine any differences between the two.
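
For example, working in the cf/cf directory of the matching sendmail source tree, you might do something like the following (the version number in the path and the OSTYPE, FEATURE, and MAILER lines are only placeholders for whatever your grep turned up):

cd sendmail-x.y.z/cf/cf
cat > test.mc << 'EOF'
OSTYPE(`solaris2')dnl
FEATURE(`use_cw_file')dnl
MAILER(`local')dnl
MAILER(`smtp')dnl
EOF
sh Build test.cf
diff test.cf /etc/mail/sendmail.cf | more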

Read the contents of the file cf/README from the old version of sendmail and try to correlate any changes to definitions or other macro settings. Modify your mc file and create new cf files until you've eliminated the differences between your test cf file and the production cf file.

As a note of warning, make absolutely sure that you use the same version of sendmail to build the test cf files, or there will likely be a large number of differences that you won't be able to accurately connect with an mc file modification.

Q I'm trying to compile libxml2 2.6.7 on a Solaris 8 machine with SMCgcc 3.3.2. It gets most of the way through the compile and then fails near the end with the following error (there are several files that have linking issues with libz.a):

Text relocation remains                       referenced
    against symbol                  offset    in file
<unknown>                           0x734     /usr/local/lib/libz.a(inflate.o)
.
.
.
ld: fatal: relocations remain against allocatable but non-writable sections
collect2: ld returned 1 exit status
make[3]: *** [libxml2.la] Error 1
I verified that /usr/local/lib/libz.a is there and has the right permissions. Other programs compile fine against it, too. If I disable libz support, everything builds fine, but I'd really like to compile with libz. Is this a bug in gcc, libz, or libxml2? Is there a workaround?

A I've seen a number of people report the same sorts of failures with various versions of gcc and Solaris, and the common fix is to link against a shared libz library instead of a statically built one. This also has the benefit that, in the event of a libz security hole, you only need to upgrade the libz library itself instead of recompiling every software package that was statically linked against libz.a.
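
As a sketch (the zlib version, install prefix, and linker flags here are assumptions; adjust them for your site), rebuilding zlib as a shared library and pointing libxml2 at it might look like:

cd zlib-1.2.1
./configure --shared --prefix=/usr/local
make && make install
cd ../libxml2-2.6.7
./configure --with-zlib=/usr/local LDFLAGS="-Wl,-R/usr/local/lib"
make

The -R flag records /usr/local/lib as a run path so the runtime linker can find the shared libz at execution time.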

Q Our environment includes a mix of Unix- and Windows-based machines. One of the scripts on the Unix machines needs to know the NetBIOS names of the Windows machines, but I don't know a good way to cull this information from the network. I was going to try to hack something together from bits of Samba code, but thought I'd check here first to see whether I'd be reinventing the wheel.

A There's a tool called NBTscan that runs on both Windows and various flavors of Unix. Check the NBTscan Web page at:

http://www.inetcat.org/software/nbtscan.html
for the source code, or you might be able to install it via a package or port depending on your OS. NBTscan queries machines at UDP port 137, so it will pick up any machine answering there, including Samba-enabled machines. If you're blocking UDP 137 on any of the machines from which you need information, you'll miss those boxes. NBTscan also has a GTK2-based GUI called xNBTscan, available from:

http://md2600.dyndns.org/~daten/
If you're using the command-line version, NBTscan, you can specify the address/address range in one of three forms:

192.168.1.1       A single IP in dotted-decimal notation.
192.168.1.0/24    A network address with a prefix length (CIDR notation).
192.168.1.1-127   A range of addresses (here, the last octet runs from 1 to 127).
Without any other options, the nbtscan binary outputs the IP, NetBIOS Name, server, user, and MAC address:

nbtscan 192.168.1.0/24

  Doing NBT name scan for addresses from 192.168.1.0/24

  IP address      NetBIOS Name   Server    User      MAC address
  --------------------------------------------------------------------
  192.168.1.0     Sendto failed: Permission denied
  192.168.1.3     MACHINE1       <server>  USER1     00-50-da-b7-e5-7c
  192.168.1.4     MACHINE2       <server>  MACHINE2  d2-80-e4-6c-a1-33
  192.168.1.255   Sendto failed: Permission denied
  192.168.1.5     MACHINE3       <server>  MACHINE3  00-00-00-00-00-00
Various other command-line options allow you to format the output so that it's suitable for /etc/hosts format, lmhosts format, or for importing into a database.
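
For example, assuming your build supports the -e option listed in the NBTscan usage output, the following would dump the scan results in /etc/hosts format for later merging or parsing:

nbtscan -e 192.168.1.0/24 >> hosts.netbios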

Q I've enabled BSM on my Solaris 8 boxes and now my root cronjobs are failing. Other users are working just fine, though, so I'm not sure what's so special about root. Generally, if there's a permissions problem, it's ONLY root that works, but I seem to have the reverse of that. Any clues as to what might be wrong?

A This Web site:

http://www.boran.com/security/sp/Solaris_bsm.html
lists several items to check if cron is failing on BSM-enabled boxes. For example, make sure that all of the C2 audit patches are installed. Also make sure that each user who has a crontab file also has a corresponding .au file:

/var/spool/cron/crontabs/root
/var/spool/cron/crontabs/root.au
If you're using ssh to connect to the machine before issuing the crontab command to edit the file, the USER.au file is not created correctly because of the way sshd performs UID switching by default. Without a valid USER.au file, the cronjobs aren't executed, as mentioned above. To work around this, enable UseLogin in the sshd_config file, but make sure there are no known security issues with the login program before enabling this option. If you're up to date on your Solaris patches, this shouldn't be a problem.
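
The change itself is a one-liner in sshd_config (commonly /etc/ssh/sshd_config, though the path varies by installation); restart or HUP sshd afterward so new sessions pick it up:

UseLogin yes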

If your cronjobs are running but you're receiving messages saying that the job failed, make sure you're not sending stdout/stderr to /dev/null, so that you can see what the job is actually complaining about. Then correct whatever is causing the cronjob to exit with a nonzero status.
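
For example, instead of discarding output with >/dev/null 2>&1, you could temporarily capture it in a log (the job name and log path here are arbitrary):

0 2 * * * /usr/local/bin/nightly-report >> /var/tmp/nightly-report.log 2>&1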

Amy Rich, president of the Boston-based Oceanwave Consulting, Inc. (http://www.oceanwave.com), has been a UNIX systems administrator for more than 10 years. She received a BSCS at Worcester Polytechnic Institute, and can be reached at: qna@oceanwave.com.