
YAWDLCSB: Yet Another Way to Disable Linux Console Screen Blanking

I wrote about this topic several years ago, but I'll be darned if I can find it. I run several Linux virtual machines using VMware Fusion, and I'd rather the console text stay visible so I can identify each machine and see any console messages. By default, the virtual consoles go blank after a few minutes. If you do a Google search for how to disable console screen blanking, you'll see that the most common recommendation is to add /usr/bin/setterm -blank 0 to the end of /etc/rc.d/rc.local. For some reason, that feels icky to me. Another way is to append the control characters to the end of the /etc/issue file, with /usr/bin/setterm -term linux -blank 0 >> /etc/issue.

For background, the function of setterm is to issue specific control character sequences to the terminal to change its behavior. There is a database of terminal capabilities (/etc/termcap, alongside the terminfo database that setterm uses) describing which sequences each terminal understands. When Linux boots, it starts up the terminals identified in /etc/inittab. My installation of CentOS 5.5, and I believe many others as well, starts six /sbin/mingetty virtual consoles listening on /dev/tty[1-6]. When mingetty starts up, it echoes the content of /etc/issue, unless given the --noissue argument.

I should also mention that I made the modification to the /etc/issue file while connected to the (virtual) machine over an SSH terminal. Therefore, my TERM environment variable was not necessarily the same as the terminal type the virtual consoles use at boot. In fact, it isn't (xterm vs. linux). For this reason, it's necessary to include the -term linux argument to the setterm command so that its lookup of the terminal escape codes is correct. You can verify that there is indeed a difference by running /usr/bin/setterm -blank 0 | /usr/bin/hexdump when connected to both terminals; there is no escape sequence meaningful to xterm for screen blanking. After making the modification to /etc/issue, I can see with /usr/bin/hexdump /etc/issue that the bytes were appended to the file. Obviously, this will only take effect when each /sbin/mingetty is restarted. The easiest thing to do is reboot.
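
Pulling the pieces together, the whole fix comes down to a few commands run from the SSH session:

# see the escape sequence that will be appended (forcing the linux console terminal type)
/usr/bin/setterm -term linux -blank 0 | /usr/bin/hexdump

# append it to /etc/issue, then verify the bytes landed there
/usr/bin/setterm -term linux -blank 0 >> /etc/issue
/usr/bin/hexdump /etc/issue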

Firewall Blacklist Tool

A few months ago I read an article on nixCraft about blocking IPs by country. It's a bit harsh to block an entire country, but some machines just don't need to be accessible to the entire world. I've also had the need to quickly drop packets from an IP that is running an attack. Here are my design goals:

  • Easy to add/edit/delete blocks, but also maintain version control (history/log)
  • Ability to add notes about why the IP was blocked (password guessing, vulnerability scan, flood, etc.)
  • Automatic updating of data files.
  • Reporting on rules in place and packets dropped.
  • Easy installation to multiple machines.
  • Integrate nicely with existing iptables configuration on RHEL/CentOS
  • Packaged as an RPM to properly check for system dependencies.
  • Authenticate downloaded country data by computing message digest hash.

For the impatient, here's a link to the project source directory: blacklist

The project includes an installer, appropriately named install, that will: 1) create a data directory (/etc/blacklist), 2) create a cron file (/etc/cron.daily/blacklist), 3) create a sample rules file (/etc/sysconfig/blacklist), and 4) copy the main application script into place (/usr/local/sbin/blacklist). Beyond the typical Linux shell tools, blacklist requires wget and perl. The script could be rewritten to use curl and gawk, if one were so motivated.

To integrate blacklist with the standard firewall configuration, insert the following lines into /etc/sysconfig/iptables:

:Blacklist - [0:0]
# Rules are inserted into this chain by /usr/local/sbin/blacklist
# Rules are defined in /etc/sysconfig/blacklist
:Blocked - [0:0]
-A Blocked -p tcp -j LOG --log-prefix "Blocked: "
-A Blocked -j DROP
:Firewall - [0:0]
-A Firewall -p tcp -j Blacklist

The rule syntax takes two basic forms: a source IPv4 address (with an optional destination TCP port) and a two-character country code (see ISO 3166-2 and the IPdeny Country List). When blacklist reads a country code rule, it injects the addresses collected from the downloaded zone file.
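
As an illustration of the two forms, a rules file might contain entries like these; the addresses are hypothetical and the exact syntax of the sample /etc/sysconfig/blacklist may differ:

# hypothetical /etc/sysconfig/blacklist entries
# a single source address, all ports (password guessing)
203.0.113.45
# a source address restricted to the SSH port (vulnerability scan)
198.51.100.7 22
# an entire country by its two-character code
cn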

When the iptables rules are in place, a simple summary can be displayed using the blacklist -s option. Here's a sample summary:

/usr/local/sbin/blacklist -s
4528 rules, 51 pkts, 2596 bytes

Details of each dropped packet are logged to /var/log/messages by the LOG rule in the Blocked chain.

The cron script will purge and refresh the zone files every week. There's probably no reason to pull new data any more often; however, it can be done manually with the blacklist -p option.

Bundling this up as an RPM package is on the TODO list. I haven't built an RPM from scratch in many years. If you are interested in using blacklist on your systems via an RPM package, please let me know.

Time Zones and Linux

Today I learned a bit about how Linux, specifically Red Hat Enterprise Linux, knows about the local time zone of the server. The GNU C library (glibc and glibc-common) includes localization information. It also provides a tool, /usr/bin/tzselect, to help choose a time zone value for the TZ environment variable. However, it doesn't include the actual time zone data that programs use at runtime. The time zone data comes from tzdata (who would have guessed?). It installs files for every time zone into /usr/share/zoneinfo, arranged by location. The system expects /etc/localtime to be a copy of, or symlink to, the appropriate time zone file. For me, it's /usr/share/zoneinfo/America/Los_Angeles. With that in place, my system now uses PST instead of UTC. Red Hat also provides fancy tools for choosing the time zone; the server I'm working on currently, however, is a minimal installation and doesn't have the system-config-date package installed.
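
Sketching the manual steps for the zone used above:

# make sure the zone data is installed
yum install tzdata

# point /etc/localtime at the desired zone file (a plain copy works just as well)
ln -sf /usr/share/zoneinfo/America/Los_Angeles /etc/localtime

# (system-config-date would also record ZONE= in /etc/sysconfig/clock)

# confirm the result
date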

As I create leaner and leaner virtual CentOS installations, it's handy to know how to properly configure the system by hand, rather than relying on programs that expect a user interface.

Configuring a Production Open BlueDragon Server

I've just finished building up a couple of production servers to host web applications. The servers are Xen guests on an AMD Quad-Core Opteron x86_64 host. The VPS template is a minimal installation of CentOS, to which I added packages as needed. Sun Java 1.6u12 came out just as I was writing this, so these instructions will need slight updates once JPackage has a new RPM (more on that later). Both Matt Woodward and Dave Shuck recently wrote about configuring CFML engines with Tomcat. The installation I'll describe is somewhat similar.

  • CentOS 5.2
  • Tomcat 5.5.23 (tomcat5-5.5.23-0jpp.7.el5_2.1)
  • Apache 2.2 (httpd-2.2.3-11.el5_2.centos.4)
  • Sun Java 1.6u11 (java-1.6.0-sun-1.6.0.11-1jpp)
  • Sun JavaMail 1.4.1
  • Open BlueDragon 1.0.1

Installing packages with yum is a snap; however, there was an issue with the architecture detection. There is a simple workaround: hard-code i386 as the basearch:

sed -i -r 's/\$basearch/i386/g' /etc/yum.repos.d/CentOS-Base.repo

The procedure is to install jpackage-utils, then download and repackage the Sun Java SE Development Kit 6 (JDK 1.6) using the JPackage Project non-free nosrc RPM. I install some, but not all, of the resulting RPMs:

yum --nogpgcheck localinstall java-1.6.0-sun-1.6.0.11-1jpp.i586.rpm java-1.6.0-sun-devel-* java-1.6.0-sun-fonts-*
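
For the record, the repackaging step itself looks roughly like this; the file names for the Sun installer and the JPackage nosrc RPM are from memory and may not match exactly, and /usr/src/redhat is the default rpmbuild tree on CentOS 5:

# packaging tools
yum install jpackage-utils rpm-build

# the Sun JDK binary installer (downloaded separately from Sun) has to be in SOURCES,
# since the nosrc RPM deliberately omits it; file names are approximate
cp jdk-6u11-linux-i586.bin /usr/src/redhat/SOURCES/

# rebuild the JPackage non-free nosrc RPM into installable binary RPMs
rpmbuild --rebuild java-1.6.0-sun-1.6.0.11-1jpp.nosrc.rpm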

The CentOS Wiki has a thorough article on installing Java on CentOS. I've considered using OpenJDK, but I don't know what sort of compatibility issues that would raise.

The Tomcat server starts up just fine with GNU's version of the Java runtime (libgcj and java-1.4.2-gcj-compat). However, with the GNU version of JavaMail (classpathx-mail) installed instead of Sun JavaMail, the following chunk of CFML fails with a javax.mail.NoSuchProviderException from within the Open BlueDragon web application:

<cfscript>
server = "localhost";
port = 25;
username = "";
password = "";
mailSession = createObject("java", "javax.mail.Session").getDefaultInstance(createObject("java", "java.util.Properties").init());
transport = mailSession.getTransport("smtp");
transport.connect(server, JavaCast("int", port), username, password);
transport.close();
</cfscript>

Open BlueDragon does include the correct jar, but the JVM that Tomcat configures loads the system version first. Rather than muck about with the classpaths, I downloaded the current version of JavaMail, extracted mail.jar, and created an alternatives link:

unzip -j -d /tmp javamail-1_4_1.zip javamail-1.4.1/mail.jar
mv /tmp/mail.jar /usr/share/java/javamail-1.4.1.jar
alternatives --install /usr/share/java/javamail.jar javamail /usr/share/java/javamail-1.4.1.jar 5000
alternatives --auto javamail
file /var/lib/tomcat5/common/lib/\[javamail\].jar

Tomcat installs a set of symlinks to /usr/share/tomcat5. Configuration files are placed in /etc/tomcat5. For this installation, I use a stripped-down version of server.xml that provides web application hosting on a per-user basis.

<Server port="8005" shutdown="SHUTDOWN">
  <GlobalNamingResources />
  <Service name="Catalina">
    <Connector port="8080" address="127.0.0.1" protocol="HTTP/1.1" />
    <Connector port="8009" address="127.0.0.1" protocol="AJP/1.3" />
    <Engine name="Catalina" defaultHost="localhost">
      <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true" debug="0" />
      <Host name="localhost-username" appBase="/home/username/webapps" unpackWARs="false" autoDeploy="false" debug="1">
        <Context path="" docBase="openbd" allowLinking="true" caseSensitive="true" swallowOutput="true" />
      </Host>
    </Engine>
  </Service>
</Server>

The standard Tomcat configuration has a single Host within an Engine named Catalina. I've added a second Host that is specific to a system user username, which allows each user on the system to manage their own deployed web applications and choose their own root Context. Installing Open BlueDragon as the default web application simplifies the Apache HTTP configuration.

The username user has an Apache HTTP configuration file in /etc/httpd/conf.d/username.conf with mod_rewrite rules to proxy all requests for CFML files to the Tomcat HTTP Connector. I had intended to use the AJP Connector with mod_proxy_ajp, but there is a problem with the proxy request not specifying the proper hostname. There might be a solution to that issue, but I haven't found it yet. The plain mod_proxy_http module works properly in the following configuration:

<VirtualHost *:80>
  DocumentRoot /home/username/websites/sitename
  ...
  RewriteCond %{SCRIPT_FILENAME} \.cfm$
  RewriteRule ^/(.*)$ http://localhost-username:8080/$1 [P]
</VirtualHost>

The rest of the Apache HTTP configuration handles web requests for flat files, served from ~/websites/sitename. The CFML files can be placed in ~/webapps/openbd, but an easier deployment is to place everything in ~/websites/sitename (like you would with a typical ColdFusion server). Symbolic links can be added for directories containing CFML. Consider the following:

cd ~/webapps/openbd
ln -s ../../websites/sitename/MachII MachII

It would probably be a good idea to set the Open BlueDragon root mapping appropriately. There are a few issues with file ownership and permissions that I didn't address above. I've added username to the /etc/sudoers file, granting that user limited access.

Tomcat Monitoring and Startup Via Cron

Over the last week, my virtual private server needed to be restarted a couple of times. Once, I was there to see the restart and manually ran the script to bring Tomcat back up; another time I wasn't. Since I run Tomcat from a plain user account, it doesn't start up with the server itself via the SysV-style init scripts in /etc/init.d. Many years ago, I created a cronjob to check on an Eggdrop IRC bot that sometimes died or went haywire. The same solution works fine for Tomcat.

The following shell script (re)starts Tomcat, if needed. It searches for all processes with the command name java. Any found processes are output with a user-defined format that includes just two fields and no header. The next command in the pipeline filters out lines that do not start with the appropriate username -- the one that kicked off the cronjob. Any lines making it to the second grep are matched against the Tomcat class name. Lastly, wc counts up the lines, which should accurately reflect the number of Tomcat instances started by this user. Currently, no other user account would start an instance of Tomcat, but it's best to prepare for that possibility.

#!/bin/sh
export CATALINA_HOME=$HOME/server/tomcat
export JRE_HOME=$HOME/java/jre/default

PROCS=`/bin/ps -C java -o euser=,args= | grep -E "^$USER" | grep -o -F org.apache.catalina.startup.Bootstrap | wc -l`

case $PROCS in
  0)
    echo "Tomcat does not appear to be running. Starting Tomcat..."
    $CATALINA_HOME/bin/catalina.sh start
    exit 1
    ;;
  1)
    exit 0
    ;;
  *)
    echo "More than one Tomcat appears to be running. Restarting Tomcat..."
    $CATALINA_HOME/bin/catalina.sh stop && $CATALINA_HOME/bin/catalina.sh start
    exit 2
    ;;
esac

The crontab for the plain user account will run the script above every 5 minutes, which seems pretty reasonable.

0-59/5 * * * * $HOME/bin/check-tomcat.sh

While working on this VPS, I decided to update the JRE to Java 6 update 10 because I've heard that some operations, such as CFC instantiation, are faster. It seems faster, but I don't have any actual performance data to prove it.

CentOS 5.1 on VMware Server

I created a pretty awesome CentOS 5.1 virtual machine. It's quite lean, using only 256 MB of RAM and a 2.0 GB virtual disk. One issue I encountered when doing the installation from a DVD (a virtual CD-ROM backed by an ISO image) was that the installer boot kernel didn't have support for the virtual SCSI controller created by VMware Server 1.0.5. Apparently, my choice of OS (RHEL 4) in the Create New Virtual Machine Wizard caused the VM to use a BusLogic controller. The fix was to edit the VMX file and add the following:

scsi0.virtualDev = "lsilogic"

For whatever reason, scsi0.virtualDev was undefined, so I added the line rather than editing an existing definition. The CentOS installation worked perfectly using the LSI Logic controller, and it continues to function properly in the running guest OS.

I see that there is an Open Source project to replace the proprietary VMwareTools package: Open Virtual Machine Tools. I would really like to use these, but I don't want to go through the effort of compiling them myself -- every time there is a kernel update. Hopefully they'll be added to a current repository soon. I installed the VMwareTools package, but since I'm not running X Windows on this VM and don't want shared folder support, I removed them.

While installing CentOS 5.1, I created a new kickstart script. It's a no-frills install: centos51.cfg. By the way, it will take more than 2.0 GB of disk space to install using that kickstart script. On my first pass, the /var and /home logical volumes were so big that there wasn't enough space left for the root filesystem. When that happens, Anaconda presents an amusing error message:

Very funny, guys.
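
For reference, the part of a kickstart script that controls those logical volume sizes looks roughly like the snippet below; the volume group name and sizes are illustrative, not the values actually used in centos51.cfg.

# hypothetical LVM sizing -- the real numbers live in centos51.cfg
part /boot --fstype ext3 --size=100
part pv.01 --size=1 --grow
volgroup vg0 pv.01
logvol swap  --vgname=vg0 --name=swap --size=256
logvol /var  --vgname=vg0 --name=var  --size=512
logvol /home --vgname=vg0 --name=home --size=256
logvol /     --vgname=vg0 --name=root --size=1024 --grow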

arptables_jf FTW

I've been working on a modest Linux cluster project for several weeks. This is a high-availability cluster, rather than a compute cluster. It is a Linux-HA Release 2 configuration that manages the load balancer, and the Linux Virtual Server is configured for direct routing (LVS-DR). When I did the research in my test lab using several virtual machines running CentOS 5, it worked properly. However, when I started configuring the production servers, there were ARP issues that crippled the cluster. Initially, I suppressed the virtual IP addresses on the real servers from ARP announcements and responses with the arp_ignore/arp_announce sysctl flags of the Linux 2.6 kernel. For whatever reason, it was just not working: because the real servers in LVS-DR are directly reachable by clients, they kept answering ARP for the virtual IP and the packets never hit the load balancer.
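
For reference, that sysctl approach is typically expressed on each real server as something like the following (the interface name is illustrative):

# /etc/sysctl.conf on a real server: suppress ARP for locally bound virtual IPs (LVS-DR)
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.eth0.arp_announce = 2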

So, arptables_jf comes to the rescue. This code was initially written by Jay Fenlason at Red Hat. I searched all over for a project page, but found none. Its user space tool and init script behave very much like the iptables package. There is a simple guide to configuring arptables_jf in the Red Hat Virtual Server Administration manual.
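
Per that guide, the configuration boils down to a couple of ARP rules on each real server plus the init script. The virtual and real addresses below are placeholders:

# drop ARP requests for the virtual IP and rewrite the source of outgoing ARP traffic
# 192.0.2.10 = virtual IP, 192.0.2.21 = this real server's address (placeholders)
arptables -A IN -d 192.0.2.10 -j DROP
arptables -A OUT -s 192.0.2.10 -j mangle --mangle-ip-s 192.0.2.21

# save the rules and start arptables_jf at boot
service arptables_jf save
chkconfig arptables_jf on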

Check another issue off the list.

GRUB'ing It

For some reason, I often need to run GRUB manually to install the boot loader. And for some other reason, I seem to always forget the commands. Well, here they are, straight from the manual.

grub> find /boot/grub/stage1
grub> root (hd0,0)
grub> setup (hd0)

Typically, I create a small bootable primary partition on each of two disks, then mirror them as /dev/md0. So, the same procedure would be done on the second disk to make its MBR match the first of the pair.
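
Assuming GRUB sees the second disk as hd1 with its boot partition first (the find command above will show the actual device), the repeat run would look like:

grub> root (hd1,0)
grub> setup (hd1)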