Archive for April, 2009

Efficient use of mod_rewrite, part 1

Consider the following mod_rewrite configuration:

RewriteCond %{REQUEST_URI} !^/FR/fr
RewriteCond %{REQUEST_URI} !^/common/
RewriteCond %{REQUEST_URI} !^/favicon.ico$
RewriteRule ^(.*)$ /FR/fr$1 [PT,L]

The need here is to serve anything that doesn’t begin with /FR/fr from $DOCROOT/FR/fr (possibly going through other modules’ rules, which was important in the setup these rules came from), except requests beginning with /common/ or for /favicon.ico.

Depending on the request patterns, reordering the RewriteConds can be better. Anyway, the case at hand was to massively duplicate this setup over different countries and languages.

Basically, requests to the French domain would need to come from $DOCROOT/FR/fr/foo/bar, those to the British domain from $DOCROOT/GB/en/foo/bar, while anydomain/common/foo/bar would come from $DOCROOT/common/foo/bar.

In such a case, you have a lot of possible solutions, each of which has its pros and cons.

  • Create a virtual host per domain, duplicating all the rules in each of them.
    <VirtualHost *:80>
        ServerName www.example.fr
        RewriteCond %{REQUEST_URI} !^/FR/fr
        RewriteCond %{REQUEST_URI} !^/common/
        RewriteCond %{REQUEST_URI} !^/favicon.ico$
        RewriteRule ^(.*)$ /FR/fr$1 [PT,L]
    </VirtualHost>
    <VirtualHost *:80>
        ServerName www.example.co.uk
        RewriteCond %{REQUEST_URI} !^/GB/en
        RewriteCond %{REQUEST_URI} !^/common/
        RewriteCond %{REQUEST_URI} !^/favicon.ico$
        RewriteRule ^(.*)$ /GB/en$1 [PT,L]
    </VirtualHost>
    (...)
    Pros: quite efficient execution-wise. Cons: doesn’t scale very well for humans: a lot of virtual hosts and rewrite rules to maintain; if the list of urls that don’t need rewriting grows, every set of rules needs to be updated.
  • Put all the rewrite rules into a single virtual host, adding a condition on the SERVER_NAME:
    <VirtualHost *:80>
        ServerName www.example.fr
        ServerAlias www.example.co.uk
        RewriteCond %{SERVER_NAME} ^www\.example\.fr$
        RewriteCond %{REQUEST_URI} !^/FR/fr
        RewriteCond %{REQUEST_URI} !^/common/
        RewriteCond %{REQUEST_URI} !^/favicon.ico$
        RewriteRule ^(.*)$ /FR/fr$1 [PT,L]
        RewriteCond %{SERVER_NAME} ^www\.example\.co\.uk$
        RewriteCond %{REQUEST_URI} !^/GB/en
        RewriteCond %{REQUEST_URI} !^/common/
        RewriteCond %{REQUEST_URI} !^/favicon.ico$
        RewriteRule ^(.*)$ /GB/en$1 [PT,L]
        (...)
    </VirtualHost>
    Pros: are there any? Cons: doesn’t scale very well: a lot of rewrite rules to maintain; rewrite rules being executed sequentially, the more domains there are, the more checks are run before reaching the rules for the last domains; if the list of urls that don’t need rewriting grows, every set of rules needs to be updated.
  • Same as above, refactoring the common parts.
    <VirtualHost *:80>
        ServerName www.example.fr
        ServerAlias www.example.co.uk
        RewriteCond %{REQUEST_URI} ^/common/ [OR]
        RewriteCond %{REQUEST_URI} ^/favicon.ico$
        RewriteRule .* - [L]
        RewriteCond %{SERVER_NAME} ^www\.example\.fr$
        RewriteCond %{REQUEST_URI} !^/FR/fr
        RewriteRule ^(.*)$ /FR/fr$1 [PT,L]
        RewriteCond %{SERVER_NAME} ^www\.example\.co\.uk$
        RewriteCond %{REQUEST_URI} !^/GB/en
        RewriteRule ^(.*)$ /GB/en$1 [PT,L]
        (...)
    </VirtualHost>
    Pros: gives a canonical place for urls that don’t need rewriting. Cons: doesn’t scale very well: a lot of rewrite rules to maintain; rewrite rules being executed sequentially, the more domains there are, the more checks are run before reaching the rules for the last domains.
  • Use a RewriteMap. Pros: Scales better ; gives a canonical place for urls that don’t need rewrite. Cons: a separate file to maintain ; can be tricky to setup.

The main problem with mod_rewrite is that there is no way to use variables or back references in the test patterns. In our case, we’d like to be able to do something like this:

# switch to dbm when necessary
RewriteMap l10n txt:/path/to/l10n.map
RewriteCond %{REQUEST_URI} !^${l10n:%{SERVER_NAME}}
RewriteCond %{REQUEST_URI} !^/common/
RewriteCond %{REQUEST_URI} !^/favicon.ico$
RewriteRule ^(.*)$ ${l10n:%{SERVER_NAME}}$1 [PT,L]


or:

# switch to dbm when necessary
RewriteMap l10n txt:/path/to/l10n.map
RewriteCond ${l10n:%{SERVER_NAME}} ^(.+)$
RewriteCond %{REQUEST_URI} !^%1
RewriteCond %{REQUEST_URI} !^/common/
RewriteCond %{REQUEST_URI} !^/favicon.ico$
RewriteRule ^(.*)$ %1$1 [PT,L]

The map file would have, on each line, a domain name followed by the corresponding url start (/FR/fr for the French domain, etc.).
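For illustration, such a map file could look like the following (the domain names and file location are hypothetical placeholders, not from the original setup):

```
www.example.fr     /FR/fr
www.example.co.uk  /GB/en
```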

Unfortunately, neither of the above setups is possible, precisely because of that limitation on test patterns. A way around this lack of functionality is a nice perl regexp trick.

# switch to dbm when necessary
RewriteMap l10n txt:/path/to/l10n.map
RewriteCond ${l10n:%{SERVER_NAME}} ^(.+)$
RewriteCond %1%{REQUEST_URI} !^(.+)\1
RewriteCond %{REQUEST_URI} !^/common/
RewriteCond %{REQUEST_URI} !^/favicon.ico$
RewriteRule ^(.*)$ %1$1 [PT,L]

As you can see, only the second RewriteCond differs. What we really want to test is whether %{REQUEST_URI} begins with the proper url start or not. Let’s say we’re considering the French domain, and the map gave us /FR/fr. After the first RewriteCond, %1 contains /FR/fr.

At the second RewriteCond, if %{REQUEST_URI} begins with /FR/fr then %1%{REQUEST_URI} begins with /FR/fr/FR/fr. Otherwise, it will only begin with /FR/fr.

What we can test, then, is whether this %1%{REQUEST_URI} aggregate contains a repeating pattern at its beginning. This is exactly what the perl regexp does: it captures at least one character at the beginning of the tested string (^(.+)), and wants to find this captured string again (\1).
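The same repeating-prefix test can be reproduced outside Apache with any regexp tool supporting backreferences. Here is a small shell sketch (the helper name and sample paths are mine, for illustration only; the BRE `^\(..*\)\1` is the basic-regexp spelling of `^(.+)\1`):

```shell
# Emulate the second RewriteCond: test whether "urlstart + REQUEST_URI"
# starts with a repeated pattern, i.e. whether REQUEST_URI already
# begins with urlstart.
starts_with_urlstart() {
  # $1 = url start from the map, $2 = REQUEST_URI
  # \(..*\) captures at least one leading character, \1 requires that
  # captured text to repeat immediately after itself.
  printf '%s\n' "$1$2" | grep -q '^\(..*\)\1'
}

for uri in /FR/fr/foo/bar /foo/bar; do
  if starts_with_urlstart /FR/fr "$uri"; then
    echo "$uri: no rewrite needed"
  else
    echo "$uri: rewrite to /FR/fr$uri"
  fi
done
```

Running this reports that /FR/fr/foo/bar needs no rewrite, while /foo/bar does.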

It is worth mentioning that this obviously falls flat when the url start itself contains a repeating pattern (e.g. /fr/fr instead of /FR/fr).

As we want the RewriteRule to work only when our %{REQUEST_URI} does not begin with /FR/fr, we negate the regexp, which means nothing will actually be captured, such that %1 in the RewriteRule will still be the last captured text, from the very first RewriteCond.

Note that if all the tested url starts have the same length and/or pattern, it may be worth making the regexp more precise (and faster to match or reject), such as ^(/../..)\1 in our present case.

2009-04-14 22:58:43+0900

p.d.o | 5 Comments »

Following up on SSH through jump hosts

I got interesting comments on my previous post about SSH through jump hosts, that made me think a bit.

The first of them suggested a syntax that unfortunately doesn’t work in anything other than zsh, but had the nice idea of adding a login for each jump server. Sadly, it only works when there is a single hop, for reasons similar to those we will explore below.

The second one suggested that “/” would not be so good a separator, because it would fail in svn+ssh urls and the like, which is a good point I had only missed for lack of such a use case. The simplicity of the resulting configuration was nice enough, though.

This prompted me to take a shot at implementing something that would allow adding both a login and a port number for each jump server.

The first implementation detail to settle was which characters to use as login, port number and hop separators. I chose, respectively, “%” (“@” is unfortunately impossible to use, because ssh itself would take it, leaving out any hop before that character), “:” (pretty standard) and “+” (not allowed in DNS names, and not too illogical as such a separator, I thought).

The result can look like dark magic when you are not savvy with sed and regular expressions:

Host *+*
ProxyCommand ssh $(echo %h | sed 's/+[^+]*$//;/+/!{s/%%/@/;s/:/ -p /};s/\([^+%%]*\)%%\([^+]*\)$/\2 -l \1/') PATH=.:\$PATH nc -w1 $(echo %h | sed 's/^.*+//;/:/!s/$/ %p/;s/:/ /')

The syntax you can use to connect through jump hosts is the following:

ssh login1%host1:port1+login2%host2:port2+host3:port3 -l login3

We’ll see further down why we can’t put a login for the final destination host. If you’d rather skip the boring implementation details, you can also jump to the end, where a slightly more compact version lies: while writing this post, I got some optimization ideas.

Let me try to split the sed syntax to make it a little more understandable:

s/+[^+]*$//;                         # Remove all characters starting from the last “+” in the string (i.e. keep the n − 1 first hops)
/+/!{                                # If the string doesn’t contain a “+” after the previous command (i.e. only one hop remains), do the following; otherwise skip until the closing curly brace
s/%%/@/;                             # Replace “%” with “@” (“%” is doubled because of the ProxyCommand)
s/:/ -p /                            # Replace “:” with “ -p ”. Combined with the previous command, this rewrites “login%host:port” as “login@host -p port”
};
s/\([^+%%]*\)%%\([^+]*\)$/\2 -l \1/  # Rewrite “hop1+hop2+login%lasthop” as “hop1+hop2+lasthop -l login”
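To convince yourself, the first sed can be run by hand on an example host string, collapsing the doubled “%” to a single one since we are outside the ProxyCommand (the host and login names are arbitrary examples):

```shell
# First sed: drop the last hop, then either rewrite a single remaining hop
# as "login@host -p port", or move the last remaining login into "-l".
h='login1%host1:port1+login2%host2:port2+host3:port3'
echo "$h" | sed 's/+[^+]*$//;/+/!{s/%/@/;s/:/ -p /};s/\([^+%]*\)%\([^+]*\)$/\2 -l \1/'
```

which prints `login1%host1:port1+host2:port2 -l login2`, matching the expansion shown further down.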

The second sed goes like this:

s/^.*+//;    # Remove everything up to the last occurrence of “+” in the string (i.e. only keep the last hop)
/:/!         # If there is no “:” in the string, do the following; otherwise skip the next statement
s/$/ %p/;    # Append “ %p” at the end of the line
s/:/ /       # Replace “:” with “ ”. These last three instructions prepare a “host port” combination for use with nc.
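The second sed can also be exercised by hand on sample host strings (again with a single “%”, since it only needs doubling inside the ProxyCommand):

```shell
# Second sed: keep only the last hop, and turn "host:port" into "host port"
# for nc, falling back to ssh's %p token when no port is given.
echo 'login1%host1:port1+host3:port3' | sed 's/^.*+//;/:/!s/$/ %p/;s/:/ /'
echo 'host1+host3' | sed 's/^.*+//;/:/!s/$/ %p/;s/:/ /'
```

The first line prints `host3 port3`, the second `host3 %p`.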

Let’s see what the ProxyCommand looks like for some examples, skipping the PATH setting and the nc option for better readability:

ssh host1 nc host2 %p
ssh login1@host1 nc host2 port2
ssh login1@host1 -p port1 nc host2 port2

Now maybe you start to see why we can’t put a login on the last hop: not only does it make no sense for nc, but the main ssh process, the one that runs the ProxyCommand and will actually talk to the remote host, won’t have any knowledge of it.

From the above examples, it also appears obvious why we replace “login%host:port” with “login@host -p port”: so that the ssh command in the ProxyCommand gets the proper arguments for login and port. Note we could also rewrite it as “host -p port -l login” for the same effect.

With even more hops, this is what happens:

ssh host1+host2 nc host3 %p
ssh login1%host1:port1+host2:port2 nc host3 %p

Each of these ProxyCommands will trigger, in turn, another ProxyCommand quite similar to our first few examples.

For the first hops, if we just replaced all “%” with “@”, then in cases where logins are given everywhere, we’d end up with a ProxyCommand like the following:

ssh login%host1+login%host2 nc host3 %p

As we saw above, we can’t use a login on the last hop, which means this ProxyCommand wouldn’t work as expected, and is why we have to rewrite “login1%host1+login2%host2” as “login1%host1+host2 -l login2” with the last instruction of the first sed.

In the end, this is what happens:

ssh login1%host1+host2 -l login2 nc host3 %p
ssh login1%host1:port1+host2:port2 -l login2 nc host3 %p

In this last example, you see the “:port2” part is not changed into “-p port2”. Actually, it would still work in the latter form: the ProxyCommand ssh login1%host1:port1+host2 -p port2 -l login2 would itself have ssh login1@host1 -p port1 nc host2 %p as its ProxyCommand, in which %p would be replaced by port2, given as the argument to -p.

Based on this, and with further small optimizations, we can slightly shorten our ProxyCommand to the following:

Host *+*
ProxyCommand ssh $(echo %h | sed 's/+[^+]*$//;s/\([^+%%]*\)%%\([^+]*\)$/\2 -l \1/;s/:/ -p /') PATH=.:\$PATH nc -w1 $(echo %h | sed 's/^.*+//;/:/!s/$/ %p/;s/:/ /')

Decrypting the sed instructions is left as an exercise to the reader ;).

Update: modified the shortened version to fix the issue spotted in the comments.

2009-04-11 23:25:50+0900

p.d.o | 22 Comments »

SSH through jump hosts

I often need to connect to a server with ssh from another server, because I don’t have direct access. I even gave a small configuration example of using such jump hosts with ProxyCommands.

A while ago, I got fed up with adding new entries for each host I wanted to reach through a jump server, especially when I only needed those entries sporadically, and decided to write a generic configuration. I ended up with this setup:

Host *%*
ProxyCommand ssh $(echo %h | cut -d%% -f1) nc -w1 $(echo %h | cut -d%% -f2) %p

The trick here is that you can use subshell expansions in a ProxyCommand. So when I ssh to “host1%host2”, the first subshell expansion returns “host1” and the second “host2”, and this setup ends up being the equivalent of:

Host host1%host2
ProxyCommand ssh host1 nc -w1 host2 %p

which is quite similar to the setup from my previous post.
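The two cut expansions can be checked directly in a shell (with a single “%”, since the character only needs doubling inside ssh_config):

```shell
h='host1%host2'
echo "$h" | cut -d% -f1   # the jump host ssh connects to first
echo "$h" | cut -d% -f2   # the host nc then connects to
```

which prints `host1` and then `host2`.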

Later on, I came up with an even more powerful implementation:

Host *%*
ProxyCommand ssh $(echo %h | awk -F%% '{OFS="%%"; NF--; print $0}') nc -w1 $(echo %h | awk -F%% '{print $NF}') %p

Here, the first awk splits on the “%” characters and returns all fields except the last one, while the second awk returns only the last field. As a consequence, sshing to “host1%host2%host3%host4” makes the first subshell expansion return “host1%host2%host3” and the second “host4”. The setup is then equivalent to:

Host host1%host2%host3%host4
ProxyCommand ssh host1%host2%host3 nc -w1 host4 %p

The ssh in the ProxyCommand will, in turn, trigger the rule again, so that host4 ends up being contacted from host3, which is contacted from host2, which we contacted from host1.
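The effect of the two awk invocations can likewise be verified by hand (single “%” outside ssh_config; note that decrementing NF to drop a field and have $0 rebuilt with OFS is a gawk/mawk idiom, strictly speaking undefined by POSIX):

```shell
h='host1%host2%host3%host4'
echo "$h" | awk -F% '{OFS="%"; NF--; print $0}'  # all hops but the last
echo "$h" | awk -F% '{print $NF}'                # the last hop only
```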

In the meantime, I decided “%” was not that nice a separator, and switched to using “/”, which also allows for a nicer setup with the same recursive effect:

Host */*
ProxyCommand ssh $(dirname %h) nc -w1 $(basename %h) %p

Finally, since some remote hosts don’t have nc installed, I usually copy it into my $HOME on these servers, and changed my setup to:

Host */*
ProxyCommand ssh $(dirname %h) PATH=.:\$PATH nc -w1 $(basename %h) %p
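The dirname/basename pair splits the hop list just like the awk version did, which is easy to verify in a shell:

```shell
h='host1/host2/host3/host4'
dirname "$h"    # everything but the last hop: host1/host2/host3
basename "$h"   # the last hop: host4
```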

The main drawback of this method is that the more jump hosts you use, the more your ssh traffic is encapsulated (recursively) in other ssh traffic. The advantage, though, is that you don’t need to forward an agent onto untrusted servers to use ssh key authentication on any of the jump or final servers, nor to forward X11 or tunnel multiple times.

2009-04-09 23:19:34+0900

p.d.o | 8 Comments »

Detecting per-process namespaces

Back with per-process namespaces. This time, I will give several methods to more or less reliably detect their use, some requiring root access, some not. We will, though, limit ourselves to “mount namespaces”, that is, not PID, UTS or network namespaces.

The first method, which is based on the contents of /proc/pid/mounts, should work on any kernel supporting namespaces (since 2.4.19):

$ md5sum /proc/[0-9]*/mounts | awk -F'[ /]*' '{ ns[$1] = ns[$1] ? ns[$1] "," $3 : $3 } END { for (n in ns) { print n, ns[n] } }'

Globbing /proc/[0-9]*/mounts instead of /proc/*/mounts avoids listing /proc/self, too. The awk script displays a summary: an md5 sum followed by the PIDs of all processes whose /proc/pid/mounts file has this md5 sum. The method relies on the fact that /proc/pid/mounts files may differ between namespaces even when the same mount points are used, because the ordering may differ. It is obviously unreliable, except when all namespaces use different mount points. During my tests, it reliably detected whether a process uses the same namespace as the init process, but distinct processes in different namespaces were consistently listed together as if they shared a namespace.

The next method, based on the contents of /proc/pid/mountinfo, will work on any kernel implementing this file (since 2.6.26):

$ md5sum /proc/[0-9]*/mountinfo | awk -F'[ /]*' '{ ns[$1] = ns[$1] ? ns[$1] "," $3 : $3 } END { for (n in ns) { print n, ns[n] } }'

The script is essentially the same, only pointing at mountinfo instead of mounts. This method is quite reliable, as /proc/pid/mountinfo contains different information even when no mount point was changed within a namespace. Unfortunately, chrooted processes can have different contents there while not being in a separate namespace. I’ve not investigated further, but you may be able to spot chrooted processes by comparing /proc/pid/mountinfo and /proc/pid/mounts.
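As a sketch of that comparison (my own untested heuristic, not something from the original investigation): a process whose mountinfo digest differs from init’s while its mounts digest still matches is more likely chrooted than in a separate mount namespace.

```shell
# Heuristic sketch: compare each process's mountinfo and mounts digests
# against init's. A differing mountinfo alone may just mean a chroot.
init_mi=$(md5sum /proc/1/mountinfo 2>/dev/null | cut -d' ' -f1)
init_m=$(md5sum /proc/1/mounts 2>/dev/null | cut -d' ' -f1)
for d in /proc/[0-9]*; do
  [ -r "$d/mountinfo" ] || continue
  mi=$(md5sum "$d/mountinfo" 2>/dev/null | cut -d' ' -f1)
  [ "$mi" = "$init_mi" ] && continue
  m=$(md5sum "$d/mounts" 2>/dev/null | cut -d' ' -f1)
  if [ "$m" = "$init_m" ]; then
    echo "${d#/proc/}: different mountinfo, same mounts (possibly chrooted)"
  else
    echo "${d#/proc/}: likely separate mount namespace"
  fi
done
```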

Both of the above methods can be executed as an unprivileged user, which alleviates the fact that they are not totally reliable.

The last method, on the other hand, is reliable, but requires root privileges and cgroup filesystem support in the kernel (since 2.6.24):

$ mkdir -p /dev/cgroup/ns
$ mount -t cgroup -o ns cgroup /dev/cgroup/ns
$ find /dev/cgroup/ns -name tasks | xargs -l1 sed ':a;N;s/\n/,/g;ta'

The output here is one line of comma-separated PIDs per namespace. The sed one-liner replaces the newlines in the tasks files with commas. You don’t actually need root privileges for the last command if the cgroup filesystem is already mounted. But there has been no standardization yet as to where cgroup filesystems should be mounted, and I believe they aren’t mounted by default on most GNU/Linux distributions.
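The sed one-liner can be tried on any multi-line input (GNU sed assumed: when N hits the end of input, GNU sed prints the pattern space rather than discarding it, unlike strictly POSIX seds):

```shell
# Join all input lines with commas, as done for each tasks file:
# accumulate lines with N, replace the embedded newlines, loop with ta.
printf '123\n456\n789\n' | sed ':a;N;s/\n/,/g;ta'
```

which prints `123,456,789`.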

Cgroups (control groups) allow putting sets of processes together to give them special behaviour (cpu provisioning, memory limitations, namespaces, etc.). They also allow doing so in a hierarchical way, such that a cgroup can inherit the properties of its parent cgroup.

As its name and the way it is mounted indicate, our /dev/cgroup/ns is specialized for namespace cgroups. The directory itself contains a few files, most notably a tasks file listing the processes in the current set/cgroup (in our case, namespace). It also contains a subdirectory for each child namespace. Each of these subdirectories is structured similarly, so that it also contains a tasks list and possibly subdirectories for its own children.

This means that not only can you spot different namespaces, but you can also spot how they relate to each other.

There are a few glitches, though. As with other cgroup subsystems, you can create new cgroups by creating a directory in the cgroup filesystem, but in the case of the namespace subsystem this happens not to create a new namespace. With all the kinds of namespaces that now exist in Linux, this is not a big surprise, but there seems to be neither feedback in the cgroup filesystem to know what kind of namespace is involved, nor a control to force the creation of a really separate namespace. You also can’t move processes from one mount namespace into another by writing a PID into the tasks file of a given namespace; that just yields an “Operation not permitted” error.

On the other hand, contrary to the second method, chrooted processes are not separated out.

I will investigate cgroups further with PID, UTS and network namespaces, and will post my findings.

2009-04-08 21:43:10+0900

p.d.o | Comments Off on Detecting per-process namespaces

How not to provide robust clustered storage with Linux and GFS

(The title is a bit strong, on purpose)

LWN links to an article describing how to provide robust clustered storage with Linux and GFS.

While explaining how to set up GFS can be nice, the sales pitch made me jump.

The author writes:

Load balancing is difficult; often we need to share file systems via NFS or other mechanisms to provide a central location for the data. While you may be protected against a Web server node failure, you are still sharing fate with the central storage node. Using GFS, the free clustered file system in Linux, you can create a truly robust cluster that does not depend on other servers. In this article, we show you how to properly configure GFS.

In case you don’t know, GFS is not exactly “clustered storage”. It is more of a “shared storage”: you have one storage array, and several clients accessing it. Compared to the NFS case, where you have one central server for the data, you simply have one storage array instead. But what is a storage array except a special (and expensive) kind of server? So you “don’t depend on other servers”, yet you still depend on one central server? How is that supposed to be different?

Conceptually, a clustered file system allows multiple operating systems to mount the same file system, and write to it at the same time. There are many clustered file systems available including Sun’s Lustre, OCFS from Oracle, and GFS for Linux.

OCFS and GFS are in the same class of file systems, but Lustre is definitely in another league and would, actually, provide a truly robust cluster that does not depend on other servers. Lustre is a truly clustered filesystem that distributes data across several nodes, such that losing some nodes doesn’t make you lose access to the data.

Given that pitch, and considering the author lists Lustre as an example, I would actually have preferred an article about setting up Lustre.

2009-04-07 20:29:04+0900

miscellaneous, p.d.o | 13 Comments »

A revolution happening at the W3C ?

It seems an ongoing discussion is leading towards a free (as in speech) HTML5 specification.

That would really be great.

2009-04-02 21:31:00+0900

miscellaneous, p.d.o | Comments Off on A revolution happening at the W3C ?