PHP – Why is image activation such an overhead?

As I’ve discussed in earlier articles, such as Using PHP applications on a Webfusion hosted service (Linux), a webserver can achieve far higher throughput processing PHP requests if the PHP process is itself persistent.  Such persistent PHP runtime environments can process many PHP requests within a single image activation if the web server only needs to support a small number of applications / services; in the case of Apache, plug-in modules such as mod_php, mod_fastcgi and mod_fcgid implement this, and IIS has a similar FastCGI solution.  However, this approach doesn’t scale well in service offerings where the ISP wants to use UID-based access control to enforce per-account isolation across hundreds or thousands of separate accounts, because the overall physical memory requirement grows directly with the number of accounts being supported.  Webfusion uses a pretty standard template based on suPHP, and with this solution each request involves running up a fresh PHP process under the owning account’s UID.
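As a rough illustration of the difference (this is just a shell experiment, not a real FastCGI setup), compare paying a full image activation per “request” with one persistent process doing the equivalent work:

# The suPHP model: 100 "requests", each paying a full PHP image activation
time (for i in $(seq 100); do php -r 'echo "request\n";' > /dev/null; done)

# The persistent model: one activation amortised over 100 units of work
time (php -r 'for ($i = 0; $i < 100; $i++) echo "request\n";' > /dev/null)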

On Unix-type OSs such as Linux, creating the new process is quite a cheap fork operation, but the new process then has to load and initialise the PHP image, and it is this initialisation that costs.  It is easy to see why on a Linux system by using strace to instrument an activation, for example with the following commands:

strace -tt -o /tmp/strace.log php -r 'echo "hello world\n";'
grep "open("  /tmp/strace.log | vi -

This allows you to follow millisecond-by-millisecond the progress of image activation. On my PC this involves loading some 17 text files (mostly configuration files of one sort or another, such as the PHP ini files) and 101 binaries and shared libraries comprising some 40 Mbytes of code. The OS is smart about doing this in two ways. First, it doesn’t do a traditional serial read of all these files; instead it uses memory mapping, so that content is loaded on demand.  However, because the OS faults in a minimum cluster size whenever a new region is referenced, and the PHP environment touches a reasonable fraction of this codebase, most of the 40 Mbytes will still end up getting faulted into memory.  Second, the OS moves this content through its filesystem cache (which is over 1 Gbyte on my laptop), so subsequent activations normally avoid the need to do physical disk I/O to map this content into the process; on a webserver this will always be the case for PHP, since it is one of the most frequently activated executables.
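You can watch this mapping at work yourself; assuming a Linux /proc filesystem, something along these lines shows how many files are mapped into a running PHP process and how much of that has actually been faulted in:

# Keep a PHP process alive long enough to inspect its address space
php -r 'sleep(10);' &
PID=$!

# Distinct files (binary plus shared libraries) mapped into the image
awk '$6 ~ /\.so|php/ {print $6}' "/proc/$PID/maps" | sort -u | wc -l

# Total resident memory actually faulted in so far, in kBytes
awk '/^Rss:/ {total += $2} END {print total " kB resident"}' "/proc/$PID/smaps"

wait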

If I time the second of two consecutive activations on my laptop, so that all executable content is fully cached in the filesystem cache, this still takes roughly 0.07 sec, and the figure is pretty consistent across a number of replications.  There is no physical I/O going on in this window, and I doubt that there is much multi-threading in this processing, so these 70 milliseconds represent system+user time. Almost none of it is spent compiling and executing “hello world”, so this time is a reasonable measure of the overhead of opening and loading these 120 or so files into the process space.  Simple scaling of this number implies that a quad-CPU webserver of this power could only handle ~60 such image activations per second, before any of the actual application processing load.  This is an intrinsic performance constraint of suPHP-based web-server solutions.
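If you want to replicate the measurement, a simple loop averages out the run-to-run noise (the exact figures will of course vary with your hardware):

# Warm the filesystem cache with one throwaway activation
php -r ';' > /dev/null

# Time 20 warm activations; divide the real time by 20 for the per-image cost
time (for i in $(seq 20); do php -r 'echo "hello world\n";' > /dev/null; done)

# At ~0.07 sec per activation, four cores saturate at about 4 / 0.07 ≈ 57/sec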
