A slight case of ME/CFS – four years on

  • TerryE
  • Tue 7th Feb 2012 15:07

I’ve just finished reading my last update, written just over a year ago.  From the ME/CFS perspective, I guess that things are pretty much on a plateau now.  For example, my walking range hasn’t improved much over this last year and I still have to be very careful about what I eat, as my food intolerances can punish me if I am lax in this department.  The first half of 2011 was constrained by my knee injury and the surgery to repair the cartilage (or, more accurately, to cut out the wrecked bits).  Knee arthroscopy is now a routine operation, but even so it took about six months to recover strength and flexibility in my knee.  No doubt this impacted my general mobility and exercise levels.  I am still sometimes troubled by knee pain when sitting or in bed.

So one decision that I did make was to set myself the target of reducing my body weight to what it was when I was 30 years of age.  Like most people moving into late middle-age, my weight has been slowly and steadily creeping up over the years; nothing sudden, just the odd pound or so each year.  Achieving this target involved a conscious dietary change.  Given that I already avoid nearly all processed foods and the wheat family, you might think that there is little more that I could do. However, in essence I have materially eliminated carbohydrates and potatoes from my diet, and increased the amount of vegetables and fruit to compensate.   Within six months, I’d lost maybe 25lb, and I’ve lost a few more since.  This is largely what Dr Myhill calls a stone age diet, and it’s very similar to one that Prof Terry Wahls described in her very thought provoking TEDTalk: Minding Your Mitochondria.  An interesting side-effect of moving onto this diet is that my bowels have really settled down, and I now seem to have no sign of “irritable bowel” symptoms.

It has still been a long, slow road to recovery, but I regard myself as lucky because none of the ME/CFS sufferers that I know have made anywhere near the improvements that I have, and my life is a world away from the year that I spent exhausted and bedridden four years ago.

  More ... >>>

A quick point about timing

  • TerryE
  • Wed 1st Feb 2012 18:12

I've just deployed my V3 blog engine to live.  In doing so, the time per query jumped from under 10ms to about 2.5s, which meant that the typical page response time for non-cached pages increased from an average 0.25s to nearly 3s.  (Cached pages are loaded from a static copy, so the render time is typically under 0.2s.)  This was odd, as the timing for V3 remained much the same as for the V2 engine (under 10ms) on my development and test versions.  After a quick 'binary chop' through the code, looking at the micro-timing (I have debug routines to do this), I found that the entire increase was down to a single MySQL query which I used in the initialisation of my extension to the mysqli class:

SELECT TABLE_NAME AS name
FROM information_schema.tables
WHERE TABLE_SCHEMA = '<DBname>'
AND TABLE_NAME LIKE '<TablePrefix>%'
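
For reference, here is a minimal sketch of the kind of per-query micro-timing wrapper that can isolate a call like this (the class and method names are illustrative only, not my actual debug routines):

    <?php
    // Illustrative per-query timing wrapper: each query is timed with
    // microtime() and anything slower than 10ms is written to the error log.
    class TimedConnection extends mysqli
    {
        public function timedQuery($sql)
        {
            $start   = microtime(true);
            $result  = $this->query($sql);
            $elapsed = microtime(true) - $start;
            if ($elapsed > 0.01) {
                error_log(sprintf("Slow query (%.3f secs): %s", $elapsed, $sql));
            }
            return $result;
        }
    }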

  More ... >>>

More fun with the Webfusion configuration

  • TerryE
  • Mon 5th Dec 2011 01:20

This is a follow-up to my article on building a Webfusion test VM.  My aim is to create a test environment which reflects my Webfusion environment at a file system organisation and PHP programmatic level.  The Webfusion service itself uses name-based, dynamically configured mass virtual hosting that is more complex than the examples in this Apache technical note, and its implementation requires patches to the Apache build.  (See below for more details.)  In my case, it is a lot simpler to stick to a standard Debian Apache configuration, with a virtual host for each site to be tested in the /etc/apache2/sites-available directory as in listing 1 below, as this removes the need to add rewrite rules in the <VirtualHost> section and works with an “out of the box” Apache install.  Note that 192.168.1.245 is the static IP address of my VM and would need updating for another installation.  With this configuration, I have full access to the error and rewrite logs and can now fully debug my applications locally.

The last thing that I need to do is to set up my D/B account (either in phpMyAdmin or by executing the following from the MySQL root account, with the database name, username and password as appropriate):

   CREATE DATABASE dddddddddddddd   CHARACTER SET utf8;
   CREATE USER    'uuuuuuuuuuuuuu'@'localhost' IDENTIFIED BY 'pppppppppppppp';
   GRANT  ALL ON   dddddddddddddd.* TO 'uuuuuuuuuuuuuu'@'localhost';
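
A quick way to confirm from PHP that the new account works (reusing the placeholder database name, username and password from the statements above) is:

    <?php
    // Sanity check: connect with the new account and report the server version.
    $db = new mysqli('localhost', 'uuuuuuuuuuuuuu', 'pppppppppppppp', 'dddddddddddddd');
    if ($db->connect_error) {
        die('Connection failed: ' . $db->connect_error . "\n");
    }
    echo 'Connected OK, MySQL server version ' . $db->server_info . "\n";
    $db->close();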

  More ... >>>

Creating a test VM to mirror the Webfusion Shared Service

  • TerryE
  • Tue 29th Nov 2011 23:02

In previous articles I have discussed how difficult it is for users of a shared web hosting service such as Webfusion's to develop and debug applications on the same service that they are using to deliver a blog or a forum to a client community over the web.  In my view, there are two main challenges to developing on such 'production' environments:

  • The supplier of the hosting service will typically optimise the service for normal access – that is for live use – and this usually involves disabling all debug features and diagnostics.

  • The development process can temporarily 'break' functional code or introduce bugs.  You will normally want to keep the existing service running in parallel with this development, without it failing.

For these reasons, mature IT organisations invariably separate development from live 'production' and use multiple environments for development teams, pre-production testing, user trials, production itself and support.  Not doing this is asking for trouble and is just bad practice, so I use VMs hosted either on my laptop or on my home Ubuntu server to do all such testing.  I find that a standard Ubuntu LAMP stack (such as the one on my laptop) is good enough for functional development.  However, this type of LAMP stack differs in subtle but important ways from the Webfusion shared service architecture, so I have also set up a VM configuration which far more closely mirrors the Webfusion set-up.

  More ... >>>

Oh, the embarrassment!

  • TerryE
  • Fri 25th Nov 2011 00:35

I’ve just discovered a bug in my blog: any comment posts get thrown into the bit-bucket and are never passed to me for acceptance / publishing if the user making the comment is a guest (that is, not a blog author) and is accessing the public instance on my blog.ellisons.org.uk domain rather than my private, locally installed development system.

My only plea in mitigation is that this combination made it easier to miss this bug in my module and integration tests.  I did a quick check on my access logs (I've got the last 18 months archived), and I've worked out how many comment posts were made in this period – roughly 4 a week.  If I ignore the 21 spammer probes that were 404'ed, there were a total of 295 reader comments.  The comments themselves are gone forever, but I have appended an article league table for comments, with a cut-off at 5 comments.  My sincere apologies to all these blog readers; I really regret losing this valuable feedback.

There were a couple of factors that contributed to my missing this.  The first is that I decided to do a complete reimplementation of the blog engine based on some of the conclusions that I've come to in these articles on PHP performance, and I've been working on that on my development system.  The second is that I had some heavy commitments at apache.org that took most of my spare time for a couple of months.  Both of these will be the subject of future articles.  However, I first needed to fix this bug before I could publish this article from my test system to live – otherwise I would have withered with shame!

  More ... >>>

More on using Rewrite rules in .htaccess files

  • TerryE
  • Mon 21st Nov 2011 16:58

This article is a further discussion of how to use rewrite rules on a shared hosting service (SHS) such as the one supplied by Webfusion, which I use.  It develops some earlier discussions in the following blog articles:

  More ... >>>

HTTP Caching Revisited

  • TerryE
  • Wed 29th Jun 2011 19:13

I find the issues around Web performance very interesting.  I’ve researched this area pretty comprehensively and written a few articles on it.  I also routinely use various web tools to instrument sites that I visit to see how they perform, and quite frankly a lot are middling to terrible in performance terms.  An example is the MoneyCorp GPS application that I discussed in a previous article; I have subsequently found worse sites (e.g. the RadioTimes TV website, which scores 18-25 depending on the page), though in these cases the main effect is that pages load slowly, rather than failing to load at all as in the case of GPS.

However, what I have noticed with some sites is that they seem to perform reasonably well from a user perspective even though Google PageSpeed marks them down for not specifying caching parameters.  The relevant Google recommendations on “Leverage browser caching” describe the use of the HTTP headers Expires, Cache-Control max-age, Last-Modified and ETag for resources that you wish to be cached by the client browser, as I have discussed previously.  This guidance really only relates to the mandatory rules for browser caching detailed in RFC 2616 (HTTP/1.1) in section 13.2.  However, in addition to these mandatory rules, most browsers also implement an advisory rule that is discussed in section 13.2.4, and which is based on the Last-Modified header if provided.  If present, then the life of the cached resource is its age at download (the delta of the Date and Last-Modified values) divided by a factor X.  The factor “X = 10” is suggested in the RFC, and this is what IE, Firefox and Chrome use.  For example, if a resource is 10 weeks old at download, then its cached copy will be treated as valid for one week from download.
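
The arithmetic of this heuristic is simple enough to sketch in a few lines of PHP (the dates here are chosen purely for illustration):

    <?php
    // Heuristic freshness rule from RFC 2616 section 13.2.4: with no explicit
    // expiry, the cached copy is treated as fresh for (Date - Last-Modified)/10.
    $date         = strtotime('Wed, 29 Jun 2011 19:00:00 GMT');  // response Date header
    $lastModified = strtotime('Wed, 20 Apr 2011 19:00:00 GMT');  // Last-Modified header

    $lifetime = ($date - $lastModified) / 10;                    // in seconds
    printf("Age at download:     %.0f days\n", ($date - $lastModified) / 86400);
    printf("Heuristic lifetime:  %.0f days\n", $lifetime / 86400);
    // A resource that is 10 weeks (70 days) old at download stays "fresh" for 7 days.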

  More ... >>>

A good web application spoilt by poor Internet performance

  • TerryE
  • Sat 11th Jun 2011 19:01

My wife and I have a cottage on the Greek island of Alonissos – at the very top of the village in the photo to the right.  We like the remoteness, the food, the walking, the swimming, our terrace with a fantastic view over the Aegean, and the life-style in general.  Internet connectivity isn’t a high priority when I am on the island, so I haven’t gone through all the hassle of getting an ADSL line; I just buy a drink or two at one of the local tavernas and use its WiFi when I need Internet access.  This works fine for all the access that I need: managing the websites that I look after, Skype, access to my various bank / credit card services, YouTube, etc. – with one annoying exception: I need to transfer money routinely from my UK Sterling bank account to my Greek Euro one to cover living expenses.

I use MoneyCorp GPS to do this.  MoneyCorp is a market leader in this Forex sector that offers competitive rates and seems to be widely recommended (e.g. by the Telegraph).  The GPS application provides all the functionality that I need, except that it is unusable on these taverna connections …

  More ... >>>

The Anatomy and Timing of a Web Request – Part II

  • TerryE
  • Mon 4th Apr 2011 23:12

In Part I of this analysis, I looked at the overall timeline of viewing a webpage, and my main recommendations were:

  • The correct webserver configuration is essential to ensure that files are correctly cached at the client’s browser and compressed, so that network transfer times are kept to an absolute minimum when content is transferred over the internet.

  • The application should be written to minimise the number of supporting files needed to render the page by aggregating content where practical.  It should also ensure that when the client browser revalidates scripted content, the script processes the request correctly and issues a status 304 response when appropriate (a minimal sketch of such handling follows this list).
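
A minimal sketch of such conditional-request handling in a PHP script (the resource name here is purely illustrative; this is not phpBB's actual code) might look like this:

    <?php
    // Send a Last-Modified header, and answer a revalidation request with
    // 304 Not Modified when the resource is unchanged since the browser's copy.
    $file         = 'style.css';                 // illustrative scripted resource
    $lastModified = filemtime($file);

    header('Last-Modified: ' . gmdate('D, d M Y H:i:s', $lastModified) . ' GMT');

    $since = isset($_SERVER['HTTP_IF_MODIFIED_SINCE'])
           ? strtotime($_SERVER['HTTP_IF_MODIFIED_SINCE']) : false;

    if ($since !== false && $since >= $lastModified) {
        header('HTTP/1.1 304 Not Modified');     // content unchanged: no body needed
        exit;
    }

    header('Content-Type: text/css');
    readfile($file);                             // otherwise return the full content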

Whilst the application changes are beyond the scope of most application installers, getting the webserver configuration right through properly configured .htaccess files can easily improve response times by a factor of 3 or more.  Having done this though, the application response time for the delivery of the main document content becomes the main performance constraint, and I want to use phpBB to explore the factors which drive this response time.  Whilst I realise that this article is in many ways a reprise of two earlier articles, it is clear from my dialogue on the phpBB developers forum that we continue to talk at cross purposes, so I wanted to drill down and parameterise some of these performance factors to put quantitative numbers on this responsiveness.

  More ... >>>

The Anatomy and Timing of a Web Request – Part I

  • TerryE
  • Sat 2nd Apr 2011 14:39

My academic background was mathematics, specialising in operations research and statistics.  I put this to good use when I first started my career in an IT consultancy, working on the development and use of detailed simulations of large-scale Army Command and Control Systems.  You might wonder what on earth modelling large voice communications networks over thirty years ago has to do with using modern web services, but in fact the conceptualisation and analytic techniques are very similar (though the time constants involved have shrunk from seconds to milliseconds).  This foundation in modelling and analysing large communicating sequential systems has proved invaluable in my systems optimisation work during my career, and has influenced my approach to systems engineering.

What I show in this article is that:

  • Correctly optimising your webserver configuration ensures that files are correctly cached at the client’s browser and compressed, so that network transfer times are kept to an absolute minimum when content is transferred over the internet.

  • The application should be written to minimise the number of supporting files needed to render the page by aggregating content where practical.  It should also ensure that when the client browser revalidates scripted content, the script processes the request correctly and issues a status 304 response when appropriate.

Using the Google Chrome network analyser

I now want to explore a typical phpBB query in depth, and in one specific case: displaying the phpBB community forum’s board index.  To understand what goes on in the interval between a user clicking the board index link and the completed page being assembled for viewing, you need a suitable tool to instrument this process.  I recommend using Google Chrome, because the developer tools that you need are part of the standard download (though most modern browsers have an add-on pack which provides similar functionality).  The main view that I will use to do this instrumentation is the Network Panel.  You can access this when visiting any website with Chrome by typing Shift+Ctrl+i whilst viewing the page.

  More ... >>>

phpBB Performance – Reducing the script load overhead

  • TerryE
  • Fri 11th Mar 2011 17:17

This is the third in a series of phpBB performance articles, and in this one I want to investigate how feasible it would be to use code globbing to reduce both the number of include files that need to be parsed and the total size of this source, in order to reduce the overall response time for users of a phpBB forum running on a shared service.  This technique is one that I use to good effect in my blog application.  It’s a fairly long article, so I’ve split it into sub-sections.

Inclusions – the basics

However, a little review of how applications include sub-modules is probably in order first.  Basically, PHP offers six built-in functions to compile source code into an application.  Two, eval() and create_function(), allow developers to compile source held in a string at runtime.  However, I want to focus on the four remaining forms that read from PHP source files: require(), require_once(), include() and include_once().  I will refer to these collectively as “included” code.  All four essentially do the same thing, except that:

  • the require forms raise a fatal error, rather than a warning, if the file cannot be loaded; and

  • with the _once forms, PHP maintains an internal array of all modules loaded, and uses this to bypass any repeated load.

You may have come across a number of posts and articles in the blogosphere discussing the differences in the runtime speed of these variants, but quite frankly, if you look at the source code for these functions, you will see that you should ignore such claims: there are no material performance differences, only the functional ones that I’ve just summarised.
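
As a rough illustration of the globbing idea itself (the file names are hypothetical, and this is only a sketch of the approach rather than my actual build step), the following concatenates a set of include files into a single file which the application can then load with one require:

    <?php
    // Concatenate several include files into one "glob" so that the application
    // parses a single file instead of many.  The file names are hypothetical.
    $includes = array('functions.php', 'template.php', 'session.php');

    $glob = "<?php\n";
    foreach ($includes as $file) {
        $source = php_strip_whitespace($file);                // drop comments and whitespace
        $source = preg_replace('/^<\?php\s*/', '', $source);  // remove the opening tag
        $glob  .= "\n/* ==== $file ==== */\n" . $source;
    }
    file_put_contents('_glob.php', $glob);

    // The application then replaces the individual includes with a single:
    //     require '_glob.php';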

  More ... >>>

phpBB Performance – Reducing the data cache overhead

  • TerryE
  • Wed 9th Mar 2011 22:31

As I described in a previous article, you can use the Linux utility strace to instrument a simulated web request under the suPHP model:

sudo -u www-data strace -tt -o /tmp/strace.log php -r '$_REQUEST["f"]="2"; include( "viewforum.php" );' > /dev/null
egrep "( exec| open| exit)" /tmp/strace.log | vi -

Using $_GET["f"]="2"; and include() is a standard trick to make the script act as if a request parameter f=2 had been supplied (but because of the way that the phpBB argument-parsing functions work, you have to set $_REQUEST instead).  Suppressing the output prevents console overheads from contaminating the result.  Filtering this logfile for open statements lets me track which files are being loaded in the process.  By removing the image-activation-related opens, I get the summary shown in Table 1 below.  The PHP environment activation takes some 95ms and the script execution takes another 110ms.  (This is with my filesystem cache fully primed.)  The application has to load the following to execute this request:

  • 18 application script files comprising some 18.5K lines (550Kbytes) of source and

  • 14 data cache files comprising some 288 lines (77Kbytes) of data values. (I’ve indented these in the listing to stand out.)
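
The filtering that produces this summary can be sketched roughly as follows, reading the strace log written by the command above and counting how often each PHP source or cache file is opened:

    <?php
    // Count how many times each .php file is opened in the strace log.
    $opens = array();
    foreach (file('/tmp/strace.log') as $line) {
        if (preg_match('/open\(.*?"([^"]+\.php)"/', $line, $m)) {
            $opens[$m[1]] = isset($opens[$m[1]]) ? $opens[$m[1]] + 1 : 1;
        }
    }
    arsort($opens);                               // most frequently opened first
    foreach ($opens as $file => $count) {
        printf("%3d  %s\n", $count, $file);
    }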

  More ... >>>