
Performance Analysis: beta-newsroom.lds.org

A new version of newsroom.lds.org has recently been launched at http://beta-newsroom.lds.org/. This seemed like a good opportunity to give it a once-over, Steve Souders style, and look at the performance of the new site.

The first thing I did was run it through webpagetest.org with the Dulles, VA – IE 8 – FIOS settings. You can see the results at http://www.webpagetest.org/result/100924_5TGG/. A quick glance at the category scores (top right) shows that there is likely plenty of room for performance improvements.

Compression

The first thing that stood out was the lack of compression for resources. Starting with the very first request, the HTML for http://beta-newsroom.lds.org/, there is no compressed version made available. A simple way to confirm this is with curl:

curl -v --compressed http://beta-newsroom.lds.org/ > /dev/null

Which makes an HTTP request that looks like:


> GET / HTTP/1.1
> User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8l zlib/1.2.3
> Host: beta-newsroom.lds.org
> Accept: */*
> Accept-Encoding: deflate, gzip

and gets back a response with no Content-Encoding header, meaning the HTML comes back uncompressed.

The raw HTML is 33,232 bytes. The compressed version of the HTML is 8,549 bytes. That's an easy way to trim 24,683 bytes off the page size. I'd be surprised if their web server (MarkLogic) doesn't support compression, so this is likely a simple configuration change somewhere.

Going down the list of easily compressed resources:

  • http://beta-newsroom.lds.org/assets/styles/screen.css - 105,478 bytes, 16,689 compressed, saves 88,789 bytes
  • http://beta-newsroom.lds.org/assets/scripts/jquery-1.4.min.js - 69,838 bytes, 23,666 compressed, saves 46,172 bytes
  • http://beta-newsroom.lds.org/assets/scripts/common.js - 19,574 bytes, 5,080 compressed, saves 14,494 bytes
  • http://beta-newsroom.lds.org/assets/scripts/jquery.cycle.all.min.js - 23,729 bytes, 7,122 compressed, saves 16,607 bytes
  • http://beta-newsroom.lds.org/assets/videos/uvp/scripts/swfobject.js - 21,543 bytes, 4,728 compressed, saves 16,815 bytes
  • https://secure.lds.org/stats/s_code_ldsall.js - 31,543 bytes, 12,415 compressed, saves 19,128 bytes

On a fully loaded page that weighs over 600KB, turning on compression for these seven resources (the HTML document plus the six files above) would reduce it by 226,688 bytes.
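If you want to double-check any of these numbers, a rough way is to use curl and gzip from the command line. This is just a sketch (gzip -9 won't exactly match whatever compression level the server would use, so treat the byte counts as estimates):

# Ask for a compressed response and dump only the headers; if no
# Content-Encoding line comes back, the resource was sent uncompressed.
curl -s -D - -o /dev/null -H 'Accept-Encoding: gzip, deflate' \
    http://beta-newsroom.lds.org/assets/styles/screen.css | grep -i content-encoding

# Estimate the potential savings by compressing the raw file locally.
curl -s -o screen.css http://beta-newsroom.lds.org/assets/styles/screen.css
wc -c < screen.css              # raw size
gzip -9 -c screen.css | wc -c   # approximate compressed size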

The https://secure.lds.org/stats/s_code_ldsall.js file is served by a different web server, Netscape-Enterprise/4.1, which is fairly old. I'm not sure if it even properly supports compression. If not, throwing a caching server (nginx, Varnish, Squid, etc.) in front would do the trick.
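A quick way to confirm which server is answering for that file and whether it honors Accept-Encoding, using the same curl approach as above:

# Dump just the response headers and look at the Server and
# Content-Encoding lines (the latter will be missing if the
# response comes back uncompressed).
curl -s -D - -o /dev/null -H 'Accept-Encoding: gzip, deflate' \
    https://secure.lds.org/stats/s_code_ldsall.js | grep -iE 'server|content-encoding'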

Another method for reducing the file sizes is minifying the CSS and Javascript (in some cases this is being done already).
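For example, something like YUI Compressor can be run over the larger files as part of a build step. A minimal sketch, assuming the jar is available locally (substitute whatever version you actually have) and with output file names that are just examples:

# Minify the stylesheet and one of the scripts; -o writes the
# minified result to a new file.
java -jar yuicompressor.jar --type css screen.css -o screen.min.css
java -jar yuicompressor.jar --type js common.js -o common.min.js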

Blocking

Loading all of that Javascript at the top of the page causes other downloads to block. As a result, the site doesn't start to render until more than a second into the page load. This is a good place to remember the rule of thumb: load CSS as early as possible and Javascript as late as possible.

There is one block of inline Javascript in the page, but it is at the very bottom.

For the most part there don't appear to be many places where parallel downloads are blocked. One thing that could improve parallel downloads, though, is to spread more of the resources across different host names.

Images

The page loads 51 images, totaling 248KB. The Page Speed results indicate that these could be optimized to reduce their size by 104KB.
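Assuming most of those savings come from lossless re-compression (which is what Page Speed typically suggests), the images can be optimized before they are ever deployed. A sketch using optipng and jpegtran, with placeholder file names:

# Losslessly recompress a PNG in place.
optipng -o2 logo.png

# Optimize a JPEG's Huffman tables and strip metadata without
# touching the image data itself.
jpegtran -copy none -optimize -outfile banner-optimized.jpg banner.jpg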

Serving images (and other static content) from another domain would also cut down on cookie data sent back for each request.

Caching

Here is another area that really surprised me: the bulk of those images don't provide cache headers. As a result, browsers will re-download those images on every page load throughout the site. The repeat view waterfall chart should be much smaller; there's no need to fetch all of those images on every page view.

Throwing in some Last-Modified and ETag headers will clear that up.
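An easy way to spot-check a given image once those headers are in place (the image path below is just a placeholder, any asset from the page will do):

# Dump only the response headers and look for caching information;
# no matches means the browser has nothing to cache against.
curl -s -D - -o /dev/null http://beta-newsroom.lds.org/assets/images/example.png \
    | grep -iE 'cache-control|expires|last-modified|etag'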

Conclusion

There are some other techniques that could be employed to help speed things up, but they depend heavily on how the site is developed and deployed. There is already enough low hanging fruit to make a big difference.

I think you could conservatively target a total page size of less than 300KB, which would reduce the amount of data transmitted to browsers by more than 50%. Another benefit would be a faster full page load: currently it is right at 2.5 seconds, and something closer to 1.5 seconds seems reasonable. With proper caching headers, return visitors could see times under 1 second.

All of this was just for the front page of the site. I haven't looked at any other pages on the site, but I suspect that they'd benefit from the same items listed above.

4 replies on “Performance Analysis: beta-newsroom.lds.org”

Joseph, I am involved in the site. There are different levels where performance comes into play: the app server, caching appliance, and client performance. Client performance would include page weight, number of requests, browser caching, compression, and other speed tricks, like putting js files at the end of the html. And depending on who your target audience is, you will make cost-benefit decisions and tradeoffs along the way.

The new Beta Newsroom site has been live for a little over a week and a lot of the browser-side performance improvements have not been the focus or priority yet. We wanted to make sure that the functionality was correct and solid, that the app ran well, and that we had a good foundation for everything else. Conference of course was a great test of all this. After most of the functionality is in place, I would expect that we will spend more time on the browser-side performance improvements. And like you said, there is some low hanging fruit that shouldn’t be too difficult to implement. In a few weeks we may have addressed some of these issues.

Thanks for the analysis!

There are different levels where performance comes into play

Yes, and since I don’t have any familiarity with the back end structure or systems (other than what is mentioned in HTTP responses), I focused specifically on items that impact the client and are measurable from there. The site is small enough, traffic wise, that scaling the back end should not be a difficult problem. Ultimately the goal is to present a high performance, fast loading site to the client. They won’t really care how that gets accomplished.

The new Beta Newsroom site has been live for a little over a week and a lot of the browser-side performance improvements have not been the focus or priority yet.

For some of the items I mentioned I could see this being acceptable, but for the most part performance is part of development. Something as simple as: under conditions X we expect the page load time to be under Y 95% of the time. Then test for that. This usually gets skipped over when performance falls into the range of “good enough”. We’ve become spoiled by high speed connections though, and our sense of “good enough” often doesn’t take into account people on slower connections.

The lack of compression support, for instance, is pretty bad and obvious. I imagine this was someone simply mis-configuring a server and it should only take a few minutes to fix. But this is about more than just performance. Quantcast puts monthly visits to newsroom.lds.org ( http://www.quantcast.com/newsroom.lds.org ) at around 150K (obviously direct measurement would give a more accurate number, but this works for a ball park). If we take that as page views, 150,000 pv/mo x 600KB = 90GB per month. Cutting that in half is a good thing not just for clients, but for network infrastructure as well (for a variety of reasons it isn’t actually half, but that’s the ball park), giving you room to serve more clients with fewer resources. And we haven’t even touched on peak load issues. Spending fewer tithing dollars to accomplish the same task with better performance is a good thing 🙂

You mentioned General Conference being a test for the site. I just ran webpagetest.org against the site again and the page weight is now over 700KB – http://www.webpagetest.org/result/101004_6VEY/ – so compression would make an even bigger difference. That said, the number of HTTP requests was smaller, so the page actually loads faster now. Still no compression support though, which is sad.

I’m happy to chat more about this ( contact form – http://josephscott.org/contact/ ).

Well, like I said, everything gets prioritized. The site is still in active development. It’s like a stream going by, and at some point you say, “we are going public at this point,” but you are still not done. I don’t know if I’d say the goal is to provide a “high performance, fast loading site to the client.” Of course we want it not to be a painful user experience, but we have to weigh the costs and benefits of speed and functionality, and when certain work is done. And like I said, your target audience influences the benefits. For example, even if 15% of your users use IE6, the cost to support a positive user experience on IE6 still may not be worth it.

Actually, your performance goal is to provide a high performance, fast loading site. If it took 10 minutes for each page to load on a 20Mbps fiber connection, would you use it? No. That is why I said the problem is when people testing the site get into the realm of “good enough”. Obviously that isn’t the only goal for the site; it is “a goal”, but not “the only goal”. Unfortunately this often translates into simply having no performance goals at all.

The IE6 example, well, hey, no one likes supporting IE6 🙂 However, simply saying that throwing out 15% of your users is alright isn’t enough information to come even close to making that decision. On WordPress.com, for instance ( http://en.wordpress.com/stats/traffic/ , http://www.quantcast.com/p-18-mFEk4J448M ), we average 65 million page views a day; 15% of that is over 9 million page views a day. Deciding to ignore all those users would result in a huge increase in support requests. It still might be worth it, but looking at just the percentage can be very misleading when the absolute numbers are high. And that doesn’t even touch the issue of individual attributes of those users. What if that 15% makes up 90% of your most critical users (by revenue, target market, etc.)?

I’m all for throwing IE6 under a bus; it just turns out to be a harder decision than most web devs would like.

I want to be clear that I agree 100% that performance is not the only issue, but it should be included in the list of issues. In the context of beta-newsroom.lds.org, just turning on HTTP compression support would be a good start; that would have a large impact and require zero code changes to the site. Same for HTTP caching headers. If it takes more than 5 minutes to enable compression for the site then “you’re doing it wrong”(TM) 🙂

I look forward to seeing an iteration of the site with improved performance. I’m happy to offer pointers on techniques for improving performance.
