A good web application spoilt by poor Internet performance

My wife and I have a cottage on the Greek island of Alonissos – at the very top of the village in the photo to the right.  We like the remoteness, the food, the walking, the swimming, our terrace with its fantastic view over the Aegean, and the life-style in general.  Internet connectivity isn’t a high priority when I am on the island, so I haven’t gone through all the hassle of getting an ADSL line; I just buy a drink or two at one of the local tavernas and use its WiFi when I need Internet access.  This works fine for everything I need: managing the websites that I look after, Skype, access to my various bank and credit-card services, YouTube, etc. – with one annoying exception: I need to transfer money routinely from my UK Sterling bank account to my Greek Euro one to cover living expenses.

I use MoneyCorp GPS to do this.  MoneyCorp is a market leader in the Forex sector that offers competitive rates and seems to be widely recommended (e.g. by the Telegraph).  The GPS application provides all the functionality that I need, except that it is unusable over these taverna connections: it hangs and times out.  It seems to need a reasonable Internet bandwidth to work robustly, and luckily for me there is an Internet service in the main port on the island which offers an Ethernet connection and reasonable bandwidth, so this is an inconvenience rather than a show-stopper.  Even so, for an application implementing a money-transfer service, I would expect it to provide a functional service 24×7 from pretty much anywhere in the world, given a reasonable minimum Internet service.

So following the theme of my last few articles on optimising web applications for network and browser performance, I decided to drill down into the GPS application to see just why it was the worst application that I had measured with Google Page Speed, scoring 36/100 (my next article will discuss a few more that I have subsequently found).  As far as I can see, the designers may have developed a good application in functional terms, but they seem to have ignored some basic rules for web optimisation:

  • None of the text content is compressed.  Rendering the GPS home page requires downloading some 29 HTML, CSS and JS files, and not one of them is compressed.  Compression can be enabled by a simple configuration option in the appropriate IIS web.config files (see the first sketch after this list), so I am not sure why this hasn’t been done.  Compressing these 29 resources, totalling 460 KiB, would reduce their size by roughly a factor of 10.  Some admins cite performance reasons for not compressing, but this is a bogus argument because (i) the CPU overhead on modern Intel CPUs is minimal; (ii) there is a CPU and context-switch saving in avoiding the larger network transfers; and (iii) the cost of compression is offset by only having to encrypt roughly 10% of the byte stream in the case of HTTPS.  This one is a no-brainer.
  • Most static files do not have an expiry date set.  There are 22 CSS and JS files with no associated Expires header; this is again a simple IIS web.config setting (see the second sketch after this list).  The client browser can safely use any locally cached resource tagged with a future expiry date, without querying the server.  If a document has an associated ETag but no valid Expires date, then the browser must issue a conditional request to the server and download the resource only if the ETag is mismatched.  This is in effect an RPC call that optimises away unnecessary downloads; however, even when the download is avoided, each such call still costs a round trip.  Also, whilst modern browsers issue requests in parallel, most limit the number of concurrent connections per host to six, so these conditional requests block and serialise other load activity.  So again, setting expiries at, say, now + 1 month is a no-brainer.
  • The main document is 80% boilerplate.  There is a ticker-tape currency bar on the GPS home page when logged in: occasionally a nice little goodie, but not really core functionality.  However, this content is common to all user home pages and it comprises 80% of the document (some 180 KiB out of 223 KiB).  Scrolling is already handled by JavaScript, so why on earth doesn’t the script just do an AJAX-style asynchronous (compressed) XML/JSON load of this table data (the third sketch after this list gives the flavour)?  The ticker-tape sits above the business content, so its <div> is most easily placed ahead of that content in the HTML body, but there is absolutely no reason to place the ticker-tape script there.  Best practice is to place all such furniture scripts at the foot of the body so that their invocation will not block or prevent the actual main page content loading.  Doing this and compressing the remaining document content would reduce the main document from the current 223 KiB to less than 12 KiB.  This is a simple application change using standard functionality within the ComponentArt Web UI toolkit used by this application.  There are various other dubious implementation details, and in my experience the current implementation is something that a novice might produce.
  • Marshal CSS and JS files.  This is more of a nice-to-have, as most users will be repeat visitors for this use-case, and therefore (if static CSS and JS have appropriate expiry dates) the browser will optimise away any such server requests by using locally cached content.  Nonetheless, applications are usually more responsive, especially on initial load, if the CSS files are grouped and collated into composite style sheets: 3 big style sheets are better than 22 small ones.  Likewise, server-side JavaScript aggregation helps improve performance.  There seem to be two main groups of script files: the GPS application ones and the Web UI ones.  It is simpler and better to aggregate and “minify” the former into a compressed composite as a one-off server-side function, and then load this once into the local cache of each user.  In the Web UI case these are again loaded one file per UI component, though I note that the ComponentArt website does marshal Web UI components on its own pages to reduce the number of script files.  Another benefit of marshalling is that the compression can be done as a pre-process, removing the per-request overhead, as I did in my TinyMCE loader (the last sketch after this list shows the idea).
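
To make the first point concrete, here is a minimal sketch of the sort of web.config change I mean.  It assumes IIS 7 or later with the compression modules installed; the GPS application’s actual configuration will doubtless differ in detail:

```xml
<!-- Minimal sketch, assuming IIS 7+ with the compression modules
     installed: enable gzip for both static files (CSS/JS) and
     dynamically generated HTML responses. -->
<configuration>
  <system.webServer>
    <urlCompression doStaticCompression="true" doDynamicCompression="true" />
  </system.webServer>
</configuration>
```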
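
And the matching expiry setting – again a sketch rather than the application’s real configuration; the 30-day max-age mirrors the “now + 1 month” suggestion above:

```xml
<!-- Sketch: tag all static content with a far-future expiry
     (here now + 30 days) so that repeat visitors load CSS and JS
     straight from the browser cache, without a round trip. -->
<configuration>
  <system.webServer>
    <staticContent>
      <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="30.00:00:00" />
    </staticContent>
  </system.webServer>
</configuration>
```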
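
For the ticker-tape, the asynchronous load might look something like the following.  The endpoint name, the JSON shape and the element id are all my inventions for illustration – the point is simply that the 180 KiB of rate data never needs to travel inside the main document:

```javascript
// Illustrative sketch only: the endpoint, the JSON format and the
// element id are assumptions, not MoneyCorp's actual markup or API.
function loadTicker() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/api/ticker-rates.json', true);  // hypothetical endpoint
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      // e.g. [{"pair": "GBP/EUR", "rate": 1.17}, ...]
      var rates = JSON.parse(xhr.responseText);
      var html = '';
      for (var i = 0; i < rates.length; i++) {
        html += '<span>' + rates[i].pair + ' ' + rates[i].rate + '</span>';
      }
      document.getElementById('ticker-tape').innerHTML = html;
    }
  };
  xhr.send();
}
// Invoked from a script at the foot of <body>, so it can never block
// the rendering of the main page content.
window.onload = loadTicker;
```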
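
Finally, the one-off marshalling and pre-compression step.  This sketch uses Node.js purely because it is compact – any server-side build step would do, and the file names are hypothetical:

```javascript
// One-off build-step sketch (file names are assumptions): collate the
// application scripts into a single composite and gzip it once, so the
// per-request compression cost disappears entirely.
var fs = require('fs');
var zlib = require('zlib');

var scripts = ['app/validation.js', 'app/transfers.js', 'app/ticker.js'];
var composite = scripts.map(function (name) {
  return '/* ' + name + ' */\n' + fs.readFileSync(name, 'utf8');
}).join('\n;\n');  // the ';' guards against scripts missing a trailing semicolon

fs.writeFileSync('app/composite.js', composite);
// Pre-compressed copy, served with "Content-Encoding: gzip".
fs.writeFileSync('app/composite.js.gz', zlib.gzipSync(composite));
```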

Having browsed some of the other pages, such as the Transfers and Payments functions, I see that JavaScript components seem to be loaded haphazardly from either WebResource.axd or ScriptResource.axd, with the latter compressed but not the former.  Bizarre.

So in previous articles I’ve discussed how to optimise your own web applications and packages such as phpBB.  Here we have an example of a corporate B2C application that is undermined by sloppy implementation details and by ignoring some basic rules of good web performance:

  • don’t send content over the net that doesn’t need to be sent (i.e. content that can be cached in the client browser);
  • set up the transfer defaults so that browsers and servers don’t even need to “talk” about a transfer when none is needed;
  • when you do need to send content, make sure that it’s compressed;
  • sensibly glob up the content that you do need to send, because this is more efficient and responsive.

I am not sure of the exact reasons for the time-outs because I have no access to the server-side logs, but the system is clearly struggling.  The secure HTTPS protocol is a must for this type of financial application, and this complicates some page-optimisation techniques.  Even so, GPS is sending out 223–595 KiB (depending on caching) over this HTTPS stream, when with a few tweaks it would need to send only some 15 KiB to render the home page, plus another 15 KiB or so asynchronously to initialise the ticker-tape.  This poor implementation means that the system traffic is perhaps 5–10x greater than it needs to be, and any poor performance is very much self-inflicted.  All I hope is that MoneyCorp manage to fix this before the autumn, so that I can do my Forex transactions from my local taverna on my next visit to the island!
