Questions and Answers

Amy Rich

Q I have installed syslog-ng and am using stunnel to encrypt the connection between syslog clients and a central server. I followed the directions from:

http://www.stunnel.org/examples/syslog-ng.html
I know I have syslog-ng working without the stunnel bit, so I'm guessing that's where the issue is. When I start syslog-ng, my logs show the following error:

Sep  2 17:10:35 src@host.my.domain syslog-ng[7633]: \
  syslog-ng version 1.6.4 starting
Sep  2 17:10:35 src@host.my.domain syslog-ng[7633]: \
  connect_callback(): connect() failed
Sep  2 17:10:35 src@host.my.domain syslog-ng[7633]: \
  Error connecting to remote host AF_INET(127.0.0.1:514), \
  reattempting in 60 seconds
When I try to send a message to syslog via the logger command, after a bunch of "Garbage collection" and "Object" lines, I get this error:

Sep  2 17:11:36 src@host.my.domain syslog-ng[7633]: \
  connecting fd 6 to inetaddr 127.0.0.1, port 514
Sep  2 17:11:36 src@host.my.domain syslog-ng[7633]: \
  io.c: do_write: write() failed (errno 32), Broken pipe
Sep  2 17:11:36 src@host.my.domain syslog-ng[7633]: \
  pkt_buffer::do_flush(): Error flushing data
Sep  2 17:11:36 src@host.my.domain syslog-ng[7633]: \
  Connection broken to AF_INET(127.0.0.1:514), reopening in 60 seconds
I tried googling for the answer to this, but the only thing I really came up with was a post from 2002 on the syslog-ng mailing list by someone having a similar problem:

https://lists.balabit.hu/pipermail/syslog-ng/2002-June/003416.html
No one came up with a solution as far as I could see, beyond suggesting that the original poster upgrade. I'm already running the latest stable versions of both pieces of software, syslog-ng 1.6.4 and stunnel 4.05, so that's not going to help me much. Is there a known bug with stunnel or syslog-ng on Solaris 9, or is there some other issue?

I've included my simplistic stunnel.conf and syslog-ng.conf files from both the client and the server. Here's the client stunnel.conf:

client = yes
cert = /usr/share/certs/syslog-ng-client.pem
CAfile = /usr/share/certs/syslog-ng-server.pem
verify = 3
[5140]
   accept = 127.0.0.1:514
   connect = 192.168.1.3:5140
And the stunnel.conf from the server:

cert = /usr/share/certs/syslog-ng-server.pem
CAfile = /usr/share/certs/syslog-ng-client.pem
verify = 3
[5140]
   accept = 192.168.1.3:5140
   connect = 127.0.0.1:514
The syslog-ng.conf file from the client:

options { long_hostnames(off); sync(0);};

source src { sun-streams("/dev/log" door("/etc/.syslog_door"));
             internal();};

destination dest { file("/var/log/messages");};
destination stunnel { tcp("127.0.0.1", port(514));};
       
log { source(src);destination(dest);};
log { source(src);destination(stunnel);};
And the syslog-ng.conf file on the server:

options { long_hostnames(off); sync(0); keep_hostname(yes);
          create_dirs(yes); chain_hostnames(no);};

source src { sun-streams("/dev/log" door("/etc/.syslog_door"));
             internal();};

source stunnel { tcp(ip("127.0.0.1") port(514) max-connections(1));};

destination dest { file("/var/log/messages");};
destination remotelog { file ("/var/log/remote/$HOST/messages");};

log { source(src);destination(dest);};
log { source(stunnel);destination(remotelog);};
On the clients, things log fine to the local /var/log/messages, but I never see anything in /var/log/remote on the server.

A The problem is that syslog-ng on the client is having trouble talking to the server over stunnel, as you surmised. Your configuration files look fine, though, so the problem lies elsewhere. Did you compile in tcp wrappers support? My first guess would be that you have a packet filter in the way or you're blocking the connection with tcp wrappers. Make sure that you can actually make a normal unencrypted TCP connection between the client and server. Check firewalling and tcp wrapper logs to see if they're catching anything.
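As a quick sanity check (the addresses and ports below come straight from your configs; adjust them if yours differ), confirm that the client's local stunnel is actually listening where syslog-ng connects, and that the client can reach the server's stunnel port at all:

# on the client: is stunnel listening on the loopback port
# that syslog-ng connects to?
netstat -an | grep 514

# on the client: can you reach the server's stunnel listener?
telnet 192.168.1.3 5140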

Once you've verified that an unencrypted session works on those ports, make sure that the pem files you're using are valid. Take a look at the stunnel log files (turn on debugging if you haven't already) and see if it mentions having authentication issues. Make sure that you can verify your pem files on the client with the command:

openssl verify -CAfile /usr/share/certs/syslog-ng-server.pem \
 /usr/share/certs/syslog-ng-client.pem
And on the server with the command:

openssl verify -CAfile /usr/share/certs/syslog-ng-client.pem \
 /usr/share/certs/syslog-ng-server.pem
If you get the following error, you may have forgotten to add the certificate of the signing CA (and any intermediate certificates) to your CAfile:

error 20 at 0 depth lookup:unable to get local issuer certificate
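If so, appending the CA certificate (and any intermediates) to the file that CAfile points at should clear it up; the ca-cert.pem name below is just a placeholder for your CA's certificate:

# on the client
cat ca-cert.pem >> /usr/share/certs/syslog-ng-server.pem

# on the server
cat ca-cert.pem >> /usr/share/certs/syslog-ng-client.pem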
Q We're planning to roll out a number of FreeBSD servers in the near future, but all the 1U hardware we've ordered comes without a CD-ROM drive or a floppy drive. In truth, the initial install is the only time such a device would be needed, so we didn't bother with them. I know there must be some way to install FreeBSD machines over the network completely without a boot floppy or CD-ROM, but I was hoping for a push in the right direction. Do you have suggestions or good links that detail how to do custom install setups?

A Presuming your clients support the Intel PXE netboot option and you have at least one machine with a floppy or CD-ROM drive, you can use one machine as an install server for the rest. Install FreeBSD from floppy/network or CD-ROM on the machine that will act as a server. Then follow the instructions from Alfred Perlstein's FreeBSD Jumpstart Guide at:

http://www.freebsd.org/doc/en_US.ISO8859-1/articles/pxe/article.html
to create a PXE boot server. Once you have a PXE server up and running, you can create custom configurations for your servers.
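To give you a rough idea of what's involved, the DHCP half of a PXE setup mostly amounts to pointing clients at your TFTP server. A minimal ISC dhcpd sketch (the subnet, addresses, and loader file name here are only examples) looks something like this:

subnet 192.168.2.0 netmask 255.255.255.0 {
   range 192.168.2.100 192.168.2.200;
   next-server 192.168.2.1;      # the machine serving tftp and NFS
   filename "pxeboot";           # the FreeBSD PXE loader fetched via tftp
}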

As the Jumpstart Guide notes, the PXE boot server will be insecure since it'll be running tftp and NFS with no restrictions. I highly suggest putting your PXE server on its own air-gapped network and then reconfiguring the newly built machines to go on your internal LAN or in your DMZ. If you're going to put the PXE server on a network that can access the Internet, heavily firewall it at the very least. RPC, needed for NFS, is known for its insecurity.
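If the machine does have to sit on a routed network, a couple of ipfw rules limiting all access to the install subnet (the 192.168.2.0/24 below is just an example matching the dhcpd sketch above) go a long way:

ipfw add allow ip from 192.168.2.0/24 to me
ipfw add deny ip from any to me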

Q I'm a consultant who works from home most of the time, generally sshing into client machines to do admin work. I've just started working with a new client who is wary of the past OpenSSH holes and would prefer to use a VPN instead. Of course they have no budget for this project, so anything they install needs to be free or very cheap. They need to support Linux, BSD, Solaris, OS X, and Windows clients, and their server is a Linux Fedora machine. Do you have any suggestions for software I might look at?

A Assuming your Windows users are running 2000 or XP, you might want to consider OpenVPN:

http://openvpn.sourceforge.net/
OpenVPN is open source software licensed under the GPL. It's a user-space daemon (rather than a kernel module) that uses OpenSSL to create cross-platform tunnels between machines running Linux 2.2+, Solaris, OpenBSD 3.0+, OS X Darwin, FreeBSD, NetBSD, and Windows 2000/XP.
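To give you a feel for how lightweight it is, a point-to-point tunnel with a pre-shared static key needs only a few lines of configuration on each end (the tunnel addresses, host name, and key path below are placeholders; the shared key itself is generated with openvpn --genkey --secret static.key):

# server end
dev tun
ifconfig 10.1.0.1 10.1.0.2
secret /etc/openvpn/static.key

# client end
remote vpn.example.com
dev tun
ifconfig 10.1.0.2 10.1.0.1
secret /etc/openvpn/static.key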

If you want to use IPsec instead of SSL, you can opt to pay money to one of the bigger players that have clients for each of your platforms, or you can roll your own on each platform that you need to support. Take a look at Tina Bird's IPsec RFCs and How-To page for more information on using IPsec on various platforms:

http://vpn.shmoo.com/vpn/vpn-ipsec.html
Q I'm creating a Solaris software package for some internal software. As part of the preinstall, this package must remove a different piece of software (they are incompatible, and the second package MUST be removed first). My preinstall script, among other things, runs the script that tries to do the pkgrm of the second software package. I tried just doing a pkgrm alone, but that failed as follows:

The following package is currently installed:
   pkg2

Do you want to remove this package?
1 package not processed!

pkgadd: ERROR: preinstall script did not complete successfully
So I figured that it must want an answer to something during the pkgrm. To get around this, I created a new admin file with the following contents:

mail=
instance=unique
partial=nocheck
runlevel=nocheck
idepend=nocheck
rdepend=nocheck
space=nocheck
setuid=nocheck
conflict=nocheck
action=nocheck
basedir=default
Then I ran pkgrm from my script as:

pkgrm -n -a /tmp/default pkg2
Now instead of getting an error, I get an endless number of messages saying the following when the preinstall script is executed:

NOTE: Waiting for pkgadd of pkg1 to complete.
How can I work around this?

A The problem you're encountering is that the Solaris package database can only be opened for writing by one process at a time. Because pkgadd already has the database open to write entries for the installation of pkg1, you can't remove pkg2 until pkg1 has finished installing. If you had put the pkgrm statement in the preinstall file itself, instead of calling it from an external script, you would have seen the following error message when you ran pkgmk:

ERROR: script <preinstall> attempts to modify locked package database \
at line <the line where you called pkgrm>.
Unfortunately, there is no good workaround except to write a wrapper script to install pkg1 that does the pkgrm of pkg2 first. Or, you could just tell people that they must remove pkg2 before starting the install of pkg1.
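A minimal sketch of such a wrapper script, reusing the admin file you already created (the pkgadd device path is just a placeholder), might look like:

#!/sbin/sh
# remove pkg2 first if it's present, then install pkg1
if pkginfo -q pkg2; then
   pkgrm -n -a /tmp/default pkg2 || exit 1
fi
pkgadd -d /path/to/pkg1-datastream pkg1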

To remind people that they need to remove pkg2 before installing pkg1, you could use a depend file to indicate that pkg1 and pkg2 are incompatible. Your prototype file for pkg1 would need to include the line:

i depend=depend
Then in that directory, create the file depend with the contents:

I pkg2
Rebuild your package, and when pkg2 is installed and you attempt to run pkgadd on pkg1, you'll get the output:

WARNING:
    A version of <pkg2> package "pkg2" (which is
    incompatible with the package that is being installed)
    is currently installed and must be removed.

Do you want to continue with the installation of <pkg1> [y,n,?]
As you see, though, you can still choose to continue with the pkgadd even though you've marked pkg1 as being incompatible with pkg2.

Q I'm trying to lock down our network at the border routers by blocking a number of things. I've got a decent handle on what needs to be permitted with regard to TCP and UDP, but I've seen varying suggestions for ICMP. Should I block all ICMP both in and out, or should I try to let some things through?

A People who block all ICMP traffic are protecting themselves against things like smurf attacks, but they're often also interfering with path MTU discovery and debugging without realizing it. ICMP packets have a type and often a code, as seen at:

http://www.iana.org/assignments/icmp-parameters
The type defines the ICMP message that's being passed, and the code is an optional sub message. For example, a "Destination Unreachable" (type 3) message might have a code of 7, "Destination Host Unknown". The following ICMP types/codes should be allowed to pass through your border routers at the very least:

NAME            TYPE  CODE  REASON
ICMP_ECHO       8     0     ping
ICMP_ECHOREPLY  0     0     ping responses
ICMP_UNREACH    3     4     needed by path MTU discovery to
                            determine the optimal MTU
ICMP_TIMXCEED   11    0     used by traceroute (which also
                            requires high-numbered UDP ports to
                            be open) and to detect routing loops
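How you spell this out depends on your filtering device; if it happens to be a Linux box running iptables, for instance, the equivalent rules would look roughly like the following (chain and policy details are up to you):

iptables -A FORWARD -p icmp --icmp-type echo-request         -j ACCEPT
iptables -A FORWARD -p icmp --icmp-type echo-reply           -j ACCEPT
iptables -A FORWARD -p icmp --icmp-type fragmentation-needed -j ACCEPT
iptables -A FORWARD -p icmp --icmp-type time-exceeded        -j ACCEPT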
For more information about various ICMP types and codes, take a look at:

http://www.networksorcery.com/enp/protocol/icmp.htm
Q I'm trying to debug a startup script I wrote for a Solaris 9 machine, but I'm not sure how to go about it. I don't want to have to redirect the output on every single line since that would be a lot of work. Is there a way to redirect all output from a script file? I tried using #!/sbin/sh -x, but that doesn't appear to have worked.

A First, scripts in /etc/rc*.d/ are not executed directly; they're either sourced (if the file name ends in .sh) or explicitly passed to /sbin/sh. In either case, the #!/sbin/sh -x line at the top is ignored. If you want to turn on execution tracing, place the following line near the top of your script instead:

set -x
As for redirecting the output, rc scripts log to the location defined in /etc/inittab:

>/dev/msglog 2<>/dev/msglog </dev/console
So you should be seeing error messages on the console. If you want to redirect things to a file instead, use exec at the top of your script to redirect the shell's own file descriptors. This example turns on execution tracing and sends both STDOUT and STDERR to /tmp/<scriptname>.log:

exec > /tmp/`/usr/bin/basename $0`.log 2>&1 && set -x
Amy Rich, president of the Boston-based Oceanwave Consulting, Inc. (http://www.oceanwave.com), has been a UNIX systems administrator for more than 10 years. She received a BSCS at Worcester Polytechnic Institute, and can be reached at: qna@oceanwave.com.