I can’t remember where, but a while ago I read something about using subdomains to serve up a site’s resources as a way to potentially speed up loading. The theory was that the protocol browsers use to communicate with servers only allows some limited number of things to be downloaded concurrently from a single domain (like 2 or 4?). But a site fairly commonly has dozens of resources. So if you were to create a subdomain (e.g. images.css-tricks.com) and use that to serve up images, it would be treated as a different domain and you would double the number of concurrent downloads possible.
In trying to research it, I haven’t been able to turn up a lot of quality information. Some forum threads condemn it, saying that multiple DNS lookups would then be required, slowing things down more than speeding them up. Others go so far as to say that Google may penalize you for this (which seems relatively absurd).
I’m always trying to improve the efficiency and speed of my sites where I can. This past weekend I was trying to improve my CSS Sprites use on this site. This was the result. It is fairly trivial to create a subdomain, so I tossed it up on there. This is just one image, so I doubt it will make any big difference, but I’ll definitely look into moving all of the images from the theme onto a subdomain if there is any conclusive evidence this is a smart idea.
Anyone have any good information on this?
I don’t think that Google will penalize you for spreading the content over several subdomains. What will happen is that your PageRank is going to be spread across the different subdomains. Check this presentation that debunks some other myths: Google Myths and Misconceptions.
You should avoid using too many different subdomains because it will increase the DNS lookups ( http://developer.yahoo.com/performance/rules.html#dns_lookups ) but it’s definitely a healthy and common technique.
Steve Souders also created Cuzillion to help test different approaches.
Furthermore, Google itself uses not only subdomains but also other domains for images; YouTube images live on the domain ytimg.com, spread across several subdomains.
And it’s not the only one: Yahoo does it too, and Wikipedia has its subdomain uploads.wikipedia.com, etc.
I haven’t tested this yet, but it is actually recommended by either one (or both) of these Firefox add-ons (they should come in handy while testing your results):
Page Speed by Google: http://code.google.com/speed/page-speed/
Yslow by Yahoo: https://addons.mozilla.org/de/firefox/addon/5369
That’s a great article, but it doesn’t address whether subdomains are “enough” to trick the browser into allowing 4 streams instead of 2.
I think the graphs are slightly misleading too. We all know that downloading lots of stuff in parallel slows down each one of them. Like if everybody in your house is downloading a 2 GB movie at the same time on the same connection, everybody’s download will chug along s l o w l y.
I can imagine the same theory would apply to downloading 8 resources simultaneously, if on a smaller scale.
Your analogy is incorrect… a much better analogy would be copying a 2 GB file to a USB drive versus copying 2,000 1 MB files to that same drive… the latter will be *significantly* slower.
The problem with 2 or 4 requests per hostname (depending on your browser) isn’t the speed at which you’re downloading the assets; it’s the connection setup and teardown that take a certain amount of time, regardless of the size of the actual asset.
If your website needed 100 images to display, 1 KB each, and you combined all of them into a single sprite, 100 KB big, you’d still see a *noticeable* improvement in load time. Even though you’re still transferring 100 KB, you’re only spending one connection setup (TCP/IP handshake etc.) on it.
Also, splitting up assets (CSS/JS/images) across subdomains only works up to the point where you hit the maximum number of connections per IP (if your subdomains are just different names for the same IP).
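For anyone who wants to see the arithmetic, here’s a tiny sketch of the sprite-versus-many-files point above; all the timing numbers are invented purely for illustration:

```javascript
// Back-of-envelope sketch of why one sprite beats 100 tiny images.
// Assumed numbers: 100 ms of connection setup (handshake + request)
// per connection, 10 ms of transfer per KB, 2 downloads at a time.
const setupMs = 100;   // assumed per-connection overhead
const perKbMs = 10;    // assumed transfer time per KB

// 100 separate 1 KB images, downloaded 2 at a time:
const separateImages = (100 / 2) * (setupMs + 1 * perKbMs);

// One 100 KB sprite over a single connection:
const sprite = setupMs + 100 * perKbMs;

console.log(separateImages, sprite); // 5500 1100 — same bytes, very different totals
```

Same total bytes either way; the difference is purely the repeated connection setup.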
if you want more detail, feel free to email me :)
The limit is 2. I’ll have a poke around; I have a book (somewhere) with a whole lot of information about this sort of thing in it.
The limit is defined by both the particular HTTP spec used, and the browser.
Safari, for example, follows HTTP 1.1 spec, and the max is 4 concurrent requests.
You’re really only going to find a limit of 2 on really crappy browsers that you really shouldn’t be using anyway.
I’m looking at you, IE*
Yeah, it works for me. I’ve been doing this for at least a year.
I put images on img.website.com
I do see a significant difference in load times with Google’s Firebug add-on ‘Page Speed’.
It’s an especially good idea for, say, your video files: if you put your videos on video.css-tricks.com and later need to change the server that hosts the files to make it more cost effective, it is significantly easier than moving the whole site.
Firebug was created by Joe Hewitt (http://joehewitt.com/)
But Page Speed was created by Google :P
( http://code.google.com/intl/de-DE/speed/page-speed/ )
Good for Joe, but he still did not develop the page speed addon for firebug.
I’ve been thinking about using it for my sites. But I think the smart way to do it is to use wildcard DNS.
Assuming, of course, that you have access to change your DNS settings and Apache configuration, you can make
*.yourdomain.com
be an alias for your www subdomain.
No need to have files across folders or anything; then you can progressively enhance and use the subdomains when needed.
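A rough sketch of what that can look like, assuming Apache and standard zone-file syntax (the domain, IP, and paths are all made up):

```apache
# Hypothetical wildcard DNS record (zone file syntax):
#   *.example.com.  IN  A  203.0.113.10
#
# Matching Apache virtual host: any subdomain serves the same files,
# so images.example.com and js.example.com work without extra setup.
<VirtualHost *:80>
    ServerName www.example.com
    ServerAlias *.example.com
    DocumentRoot /var/www/example
</VirtualHost>
```

Your host’s control panel may do both halves of this for you.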
Hope this helps somebody
I haven’t peeked around your site lately so I don’t know if you’re using a CDN or not but if not, then you might consider it.
Here’s a quick tutorial I wrote for net.tutsplus.com on using Amazon’s CloudFront CDN with S3: http://net.tutsplus.com/articles/general/supercharge-website-performance-with-aws-s3-and-cloudfront/
From the research I’ve done, you won’t get penalized but you also don’t want to use more than two subdomains. I generally serve my blog’s theme assets from “cdn.websitedomain.com” (from CloudFront for low latency) and stuff like post images from “files.websitedomain.com” which pulls directly from S3 (no CDN for those).
Anyway, I hope this helps. If you have any questions, feel free to give me a holler. I’ve spent a lot of time playing with this stuff. :-)
I once ran into an article on this subject, looked it up in my history, maybe it’s this one you were looking for?
I don’t think it’s logical that Google would penalize you for separating your images onto a subdomain. You could use your robots.txt file to disallow access to the subdomain for the search bots, just like you would with an /images/ folder. I think this is a good practice anyway, but I’m no SEO expert.
I’ve often wondered about this too – apparently not enuff to actually try it out though.
Unfortunately, the YUI folks talk a lot about domains, but not sub-domains a’la Martin’s suggestion.
I’ll bookmark this and check back later. Thanks for bringing it up for us non-CDN users.
Steve Souders is the expert on things like this and has created Browserscope for this purpose. You can check out the list of how many connections per hostname and max connections there.
Hostname is equivalent to subdomain. Also, if you’re storing a large number of cookies on your domain, you may consider using a separate cookie-free domain for static content. That cuts down the bandwidth needed to load the page, since every time an object on a domain with cookies is requested, the cookies have to be sent along with the request.
I agree that Steve Souders is the authority on things like this. And the number of connections varies between browsers.
One big advantage of using this is also that you can run a different web server setup on a particular machine, set to heavily cache files and optimized to serve static files rather than database requests and similar.
If you’ve ever seen a domain or subdomain with “cdn” in it, that stands for Content Delivery Network and is basically this.
A lot of larger sites do this. You’ll often see the JS, CSS, and theme images coming from something like media.example.org. They generally offload their theme media to a separate server, generally a CDN.
Simply changing to a subdomain should force the browser to download more than 2 files from your server at once, but using a CDN to host your theme’s CSS, JS, and images will have even more benefit. A CDN not only puts the files onto another server, freeing yours up for other things, but it synchronizes them across high-performance machines all over the world, which makes things faster for your readers.
The hosting provider I use offers a fairly affordable CDN, which Joost de Valk has been using on his blog for awhile. https://vps.net/cdn-signup
If the subdomain is mapped to a CDN, yes, clearly that is of huge benefit.
I’m curious though, if the subdomain doesn’t use anything fancy, just is hosted on the same server, does that still help?
Yes, it still helps. But keep in mind that now your server has to handle 2, 3 or 4 times the number of concurrent requests. So instead of 2 assets at a time, with 3 subdomains you could be serving 8 at a time. It is my understanding that during heavy traffic this could cause your server to bottleneck.
However, it’s still really a good idea! Rails has a built-in mechanism to split requests over a number of subdomains.
I think you are also supposed to find a way to turn off cookies for the subdomain (not sure if it is possible) so on each connection the browser isn’t sending cookies along to fetch the images.
Yep, at my corporate day job I went to some lengths to ensure that all cookies were served only on www so that I could have a cookie-free sub-domain for static content. Use this parameter to make sure your Google Analytics cookie is set only on www:
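The snippet itself seems to have been lost from the comment. For the classic ga.js tracker, the call in question is presumably _setDomainName; the account ID and domain below are placeholders:

```javascript
// Legacy ga.js setup, scoping the GA cookie to www only so that a
// static-content subdomain stays cookie-free. "UA-XXXXX-X" and the
// domain are placeholders, not real values.
var pageTracker = _gat._getTracker("UA-XXXXX-X");
pageTracker._setDomainName("www.example.com");
pageTracker._trackPageview();
```

This only runs inside a page that has already loaded ga.js, so it’s shown here as a configuration fragment rather than standalone code.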
Speed gain on its own is negligible, but combined with all the other techniques it all helps.
Since the majority of incoming traffic lands on the home page, the best optimisation I made was to inline as many assets as possible on the home page and then use some jQuery lazy-loading ($.get()) to stick the ‘real’ external assets into the browser cache ready for a secondary page view.
You should read these books:
High Performance Web Sites
Even Faster Web Sites
Definitely point a subdomain at an S3 or CloudFront bucket. You can’t beat the speed and number of locations Amazon has.
But can it serve gzipped assets yet? Last time I tried this, it was a HUGE deal to trick it into serving gzipped files.
DNS lookups are normally cached by the browser until it is restarted.
The book High Performance Web Sites has a companion site that provides most of what you need. http://stevesouders.com/hpws/rules.php
I can safely say from experience that having a second host name for your assets server will definitely speed up load times. You can use a subdomain or another domain entirely.
I’ve been doing this on every site I’ve created for at least a year, too.
Firebug and YSlow show you the truth :)
It all depends on the server settings; sometimes subdomains have cookies.
I do this all the time serving map tiles for custom maps (OpenLayers) like Google Maps. It basically doubles the speed on IE and improves it on FF. I have been planning to add it to my CMS for all my customer sites, as I expect it to make a significant performance increase.
As far as the map tiles go, it’s recommended and definitely a speed boost (I use 2 servers, each with 2 subdomains); more than 2 subdomains from one server seems to lose the speed benefit.
IE8 does six parallel requests.
I think Firefox defaults to four — at least it does according to network.http.pipelining.maxrequests in about:config.
Firefox defaults to six, even though the W3C recommendation is 2. It uses the network.http.max-persistent-connections-per-server preference.
You can really easily see the difference that this makes on Yahoo’s site. They use two domains for their images: d.yimg.com and l.yimg.com.
If you load yahoo.com with firebug open to the net tab, filter to just see the images and you can see that it will be waiting for data for up to 6 files at a time from the same domain. So, Yahoo will have a max total of 12 images being served at a time since they use 2 domains.
You can also change the value in Firefox in the about:config and load yahoo.com again and see the difference.
Thx for the real world example!
I’ve been thinking about this for a while but have come to the conclusion that it’s possibly only worth it if I have multiple sites that need a particular image/script/style (let’s say a logo) and the sites themselves have decent traffic. This is what makes the Google Ajax Libraries API and Yahoo’s YUI important to me. The scripts/styles are elsewhere and there is a greater chance the user has a cached version, meaning quicker response times, and it helps resolve any script-downloading issues.
For self-hosted stuff I’d say just do the standard best practices that we all know: minimize scripts and styles, create sprites, use clean HTML code, etc.
I think just optimizing the site’s images in general is enough, because most of the time the pages are already cached locally on the visitor’s computer anyway. That leaves only the newcomers to download the page, but they only have to do it one time, and “hopefully” they become repeat visitors…
How did you do it? It seems so awfully complicated. I know it’s easy to do in the control panel at my web host, though.
Very interesting thought. I never would have thought of it but it does seem to make sense.
Would it be possible to do a followup on this once you figure it out?
I think an article about different ways to speed up a website would be very handy, I’ve always wondered about the minify and gzip abilities…
The only thing I can think of that might lead to this theory is the maximum request limit that most web server software imposes. For Apache, you can read a little about it here: http://httpd.apache.org/docs/2.0/mod/core.html#maxkeepaliverequests
That basically tells the server it is not allowed to deal with any more than 100 concurrent requests at any given time. If you have a lot of users, this can slow things down pretty quickly.
The only way a subdomain will fix this is if it’s pointing to a different server entirely. Send me an email if you want more information… I’m heading out ATM
This isn’t a limitation, it’s a feature.
You can adjust the number of requests on the server side to whatever your server can reasonably handle considering its resources.
If anyone uses Drupal, someone created a module that handles this:
DNS lookups are inconsequential. They’re cached by your OS, and usually upstream. E.g., if I make a DNS lookup, it’s cached on my laptop, on my router, and at my ISP. If I move to my desktop, my router still has it cached. If someone on the other side of the country tries, my ISP still has it cached.
I heard about this theory but was not able to find sufficient research to deem it either worthy or unworthy, but it seems there are a lot of different views on this.
In fact, Google uses this strategy in Google Maps. What Google does is use four different subdomains (mt0.google.com, mt1.google.com, mt2.google.com, mt3.google.com) and share the tile fetching between them.
Another real world example. Thanks!
I don’t really know if it’s better, but many “big” companies already do this:
• Twitter with twimg.com
• YouTube/Google with ytimg.com
• Facebook with its CDN fbcdn.net
• Yahoo! with yimg.com
By the way, I bumped into this: http://forums.whirlpool.net.au/forum-replies-archive.cfm/957366.html (last post)
These are separate domains though, not sub-domains.
It’s not just about increasing the number of concurrent connections (though that can certainly help improve total download/rendering time on lightly loaded sites) — moving your static content onto another domain also allows you to run a much simpler web server configuration for that domain, which gives you a nice additional speed-up.
Static assets can be served without PHP/ASP/Rails/[your dynamic content engine here], and don’t require cookies to be set or returned by clients. Furthermore, you can aggressively set cache-related headers to ensure that a particular media asset will only be fetched once by each client.
For a real performance and scalability win, try hosting your media assets with a lighter-weight web server like nginx or lighttpd. Either one will consume fewer resources than Apache, which will let you serve more pages, faster, without paying for more hardware or higher system quotas from your hosting provider.
It’s more than theory – it’s fact. The HTTP 1.1 spec says that browsers should download no more than 2 components per hostname at once. Firefox, Safari, Chrome, and IE8 ignore this and download 4 to 6 components in parallel. But IE6 and IE7 will only download 2 at a time. So this idea of domain sharding is aimed at increasing performance specifically for IE6 and IE7. Increasing the number of parallel downloads beyond 4 is not recommended and will actually hamper performance.
You can achieve this by hosting static assets from a subdomain, or simply by setting up a CNAME for a subdomain that serves your /images/ or whatever directory.
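As a sketch of the CNAME idea (every name below is hypothetical, including the CloudFront hostname), the zone-file entries might look like:

```
; Point a static-assets hostname at the main server, or at a CDN.
; Note a CNAME targets a hostname, not a directory; the web server
; decides which directory that hostname serves.
static.example.com.  IN  CNAME  www.example.com.
cdn.example.com.     IN  CNAME  d1234abcd.cloudfront.net.
```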
If you want proof that it works you can use a packet sniffer like Fiddler to see the waterfall chart for the requests.
As a few others mentioned, Steve Souders is the guru of web performance and both of his books are fantastic.
I recently wrote an article on using subdomains, and I think I submitted it to Script & Style, so it may be the one you saw? Anyhow, here’s the link.
On a tangentially related note, one thing to keep in mind is that having a separate subdomain for resources has a particular benefit for those whose sites have other subdomains anyway and share resources across them (pretty normal: design elements, stylesheets, jQuery, etc.).
So, your content’s at ‘sweet.whatever.com’ and ‘sour.whatever.com’ and all of your images are at ‘images.whatever.com’. Since the images are cached when I go to the ‘sweet’ subdomain, it doesn’t have to be downloaded again when I go to ‘sour’. You’re effectively your own CDN.
I don’t have any links to resources, but I know Sitepoint host their images on a different subdomain.
Third real-world example. I think I’m satisfied. Thanks to everybody that stayed on topic!
Be careful with this. Domain sharding can actually decrease performance on newer browsers. The browser tries to load so many assets at once that it gets bogged down.
It’s a balancing act, but in most cases multiple subdomains will decrease page load time. There is a DNS lookup cost, but it’s negligible at best. The potential performance increase of more parallel downloads would almost assuredly make up for any loss there.
The way it works is thus: browsers will only pull two resources at a time from a single domain. Particularly if you have several large images, you end up clogging the pipe on a single domain. Calling resources from one or more subdomains tricks the browser into doing more concurrent downloads, decreasing the chance of a bottleneck. Google uses this for Maps, actually. Each of the 4 primary map tiles gets pulled from its own unique subdomain, allowing them all to download concurrently instead of just two at a time.
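One caching wrinkle with sharding: each asset has to map to the same hostname on every page view, or repeat visitors re-download files from a “new” host and miss their cache. A hypothetical sketch of a stable assignment (the shard hostnames are made up):

```javascript
// Deterministically assign each asset path to one of N shard hostnames,
// so a given image always comes from the same subdomain (and stays cached).
const SHARDS = ["img0.example.com", "img1.example.com",
                "img2.example.com", "img3.example.com"];

function shardUrl(path) {
  // Simple 32-bit string hash of the path; any stable hash works here.
  let hash = 0;
  for (const ch of path) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return "http://" + SHARDS[hash % SHARDS.length] + path;
}

console.log(shardUrl("/tiles/3/4/2.png"));
// Repeat calls always pick the same shard for the same path.
```

Google’s mt0–mt3 tile subdomains follow the same principle: the tile coordinates determine which subdomain serves the tile.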
There is a good article by Jeff Atwood on the stackoverflow blog on how they improved performance by using a static cookieless domain http://blog.stackoverflow.com/2009/08/a-few-speed-improvements/
Conceptually this is correct as well – a domain in itself is just a name and not a server. For example there is no “com” server as that is a TLD. There should be no “domain.com” server because that is just the name of the domain. All servers should have their own name, including www, and the browsers apparently still respect this.
I usually revile the term “best practices” but in this case it makes sense.
WOW. Thanks for all the great information people. Just this comment thread alone is a killer resource.
I think the general consensus is that YES, it is a good idea. If you can map the subdomain to a different server entirely, one running a stripped-down web server with special settings and such (aggressive caching, cookie-less), OR you can map it to a bona fide CDN, that’s clearly better.
However, just the subdomain all by itself seems like it has benefits enough that it is worth doing.
I’m back 3 weeks behind you, but I think you nailed it with that summary Chris.
Thanks again for bringing it up!
The best solution for script files is having them in one minified file loaded at the bottom of the page. But there’s also a trick you can use: create a Google Code account, upload that file there, and then use the Google Code URL of the file at the bottom of your page where you load it, thereby using Google as a CDN for your scripts. This way you improve the script file’s performance by loading it from the Google server closest to the visitor.
I haven’t tried it with CSS files and image sprites, but it works perfectly with script files.
Just looking at your CSS / JS a few things that would improve performance:
1. Combine as many JS files as you can, minify and then gzip them. Do the same for your CSS.
2. Make sure everything in your CSS is lowercase.
@Martin: using Google Code as a CDN, I like it. I’m going to try that with CSS and images.
I don’t see why Google would penalize you if you’re just using the sub-domain for image storage. It’s not like you’re literally spreading content across multiple domains.
Another point I’d like to make is image optimization. I often use a program like RIOT that really chops down image file size. Massively, with very little visible degradation. It works with JPEG, GIF and PNG file types and strips out a ton of unnecessary code from within the image. (Ever opened an image in a text editor?)
If you go crazy and create, let’s say, 3 subdomains for your files, THEN your site will actually get slower due to the many DNS lookups, so keep it at 1 or 2 subdomains. Usually people use one, but 2 can help too (images on one, CSS/JS on another).
Here are links from Google and Yahoo about this technique:
This is a really good article, well worth researching into; it would be good to have some solid proof regarding SEO problems, etc. Would your page rank be affected by using 2 domains if one was only used for storing images? I’m not sure… It’s not as if it’s duplicate content?
It’s all well and good speeding a site up, but my main concern is SEO. I’m going to do a lot of reading into this, I’m sure! I’ll pass it on to my SEO colleagues as well and see if they know anything.
Thanks for this!
For those of us with our own server and control over Apache (or whatever you’re running), I wouldn’t think this is as much of an issue – we can expand the threading to allow more concurrent connections. I’d think the issue is more for those of you in shared hosting environments where they put governors per domain.
Seems to work great for Anna Wolf’s Site, loading images pretty fast.
I think it’s using up to 6 subdomains to load images!
I think it has been slightly considered above, but one of my coworkers came to me with an epiphany one day. He said, “I finally know why sites host images and media on an external domain.” For every request, the browser sends all cookie information for that domain, for every item. Therefore, if you use another domain, each request will be accompanied by less data, making it much more efficient. Sorry if this is not true, but he is very smart and convincing.
@Benn: It is true, considering the Cookie spec says browsers must support at least 20 cookies per hostname and at least 4K per cookie, that’s an 80K per request potential.
On top of that, an average uncached page is I guess at least 10 requests? If only requests that were required to send cookies did so, that would be a significant saving in some cases.
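The arithmetic behind those two comments, using the spec minimums quoted above (the 10-requests-per-page figure is the commenter’s guess):

```javascript
// Worst-case cookie upload per page view, from the cookie-spec minimums:
// at least 20 cookies per hostname, at least 4 KB each.
const maxCookieBytes = 20 * 4 * 1024;  // 81920 bytes (~80 KB) per request
const requestsPerPage = 10;            // assumed uncached page, as above
const perPage = maxCookieBytes * requestsPerPage;
console.log(perPage); // 819200 bytes (~800 KB) of request headers per page view
```

Real cookies are usually far smaller, but even a few hundred bytes per request adds up across every image, script, and stylesheet.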
You shouldn’t host images one needs to be authorized to see on an unprotected domain. If you’re in the business of offering such a service.
This may work if the subdomain is on another server.
Bad call, Bogdan Pop. If that server goes down, the site loads with no images, making for a poor impression. A sub-domain would be on the same server, so both are either online or offline at the same time, giving an overall more seamless / cohesive impression. You don’t want to see broken sites, now do you?
Sprite as much as you can (be sensible), optimize all images, then put some, but not all, of the images on a sub-domain. If you’re trying to allow more concurrent downloads, there is no point shoving it all over. Go 50/50.
Well, that was a question worth asking!
Ask more q’s like that and we’ll all be learning a great deal from each others comments.
Thanks for the post.
Here is someone who did it:
The thing is more about cookies and cache than the multiple downloads.
Maybe he wrote about this on Coding Horror too, but I can’t find the post right now.
Learn from the best: Steve Souders. Buy both of his books and you will know exactly what to do when performance becomes an issue.
And if you have something like 2 or 3 subdomains for static resources (images, CSS, JS), it will increase front-end speed, but too many subdomains will increase DNS lookups, which is not good.
Also take a look at a real CDN instead of only the subdomain idea. Take a look at Amazon’s cloud offerings and many others.
Yeah, I have heard of this before; the BBC uses it on their sites, and I think it works best for sites with a large number of images, like news sites. However, if you are only on a site with, say, 100 images, I doubt it would make much of a difference.
To be honest I have never tried it, but I imagine it would be most useful on dedicated servers, but that’s just my opinion…
A few links from http://performance.survol.fr/ (Éric Daspet; the blog is in French but the links are in English. He worked for Yahoo! with Nicole Sullivan and, I believe, Stoyan Stefanov, the people responsible for Smush.it!)
A few things I can remember : static and dynamic contents should be separated and then optimized very differently (no Apache server for images, no cookies, Amazon S3 is a cheap CDN if you need one, …). And users really really hate to wait for a page to load, it’s been measured by the big ones.
How about using SkyDrive (from Hotmail)?
With Hotmail, you get a free SkyDrive account of about 5 GB. It recently occurred to me that SkyDrive could serve as a sort of personal CDN, because they must have multiple servers in different locations from which they serve your content. Hosting your images and assets on SkyDrive and then referencing them in your code could give your visitors a speed boost…
Does anyone think this is a good idea?
I forgot: HTML with Meta refresh
I can say that using subdomains doesn’t work. The limitation of two simultaneous loads refers to the server (as a piece of hardware), not to the domain. Because the content of the domain and the subdomain lives on the same server, the browser treats both as one source, not two.
This limitation makes sense: ten times two files is faster than one time ten files. Remember: when loading a page, two machines are working at the same time (actually even more than two). Splitting the work into small bits allows the browser to do something while the server is working.
You can see this phenomenon in a coffee shop where one clerk only takes orders and payment while a second one brews the coffee. If ten people enter the shop at the same time, they would all have to wait longer if a single clerk served each customer from A to Z.
The so-called coffee-shop-buffer-overflow phenomenon :)
Helen: Browsers limit connections by the host name, not the domain name. The browser doesn’t know that 2 subdomains refer to the same physical device.
Here’s an excellent article with charts showing the difference in performance.
In my case the static host resolves to a different machine with a minimal config. There’s no reason to load mod_perl and mod_apache for handing out static files. This can probably be done as a vhost as well.
For something like css-tricks.png, you can get faster load times by slightly altering the file. Turning off interlacing cuts the file size from 105K to 83K. Changing to an indexed palette (256 colors, down from ~4000) reduces the file size to 27K.
That will cut the load time by way more than adding a subdomain. And, it cuts the server load a little bit as well.
That, plus CSSTidy and Google’s Closure Compiler will make a much greater difference than a subdomain.
I really hate it when I show up 3 YEARS late to the party though!
Browsers load more content at the same time from different domains (subdomains included), which means faster loading…
I have already applied this trick to the JS and CSS files on my blog http://lirent.net and it looks faster than before.
I didn’t apply it to images, since there were a lot of posts to change.
Amazon does it too…