There is a real need for serving media appropriate to the device and circumstance, since we know so little about any particular web request. I recently published a blog post with so many images on it that the page weighed in at 2.29 MB. I should have posted a warning when I tweeted it: “Don’t click this if you’re on a 3G network; it’ll probably take forever. Just check it out when you get home.”
Ideally, all those images I served up could have had a lower-res version of themselves that displays in browsers with smaller window sizes and/or on slower connections. Even in cutting-edge browsers, there is no native way to do this yet. So it’s a good time to start talking about how we, as web builders, want that to work. Perhaps we can influence how the spec shapes up.
Let’s limit this conversation to inline raster images: the things served today as <img>. As I see it, there are three paths we can take.
- Create a new element that exists just to solve this problem.
- Create a new image format designed to solve this problem.
- Do nothing new, and solve the problem with other technologies.
Each of them has advantages and disadvantages. Let’s look at each.
Create New Element
The most likely candidate is <picture>, as being discussed here in the W3C community. Scott Jehl has a JavaScript polyfill that mimics its proposed functionality. The syntax would be:
<picture alt="description of image">
  <!-- low-res, default -->
  <source src="small.jpg">
  <!-- med-res -->
  <source src="medium.jpg" media="(min-width: 400px)">
  <!-- high-res -->
  <source src="large.jpg" media="(min-width: 800px)">
  <!-- Fallback content -->
  <img src="small.jpg" alt="description of image">
</picture>
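To make the selection behavior concrete, here is a minimal sketch (plain JavaScript, invented names) of the kind of logic such a polyfill presumably runs: walk the sources in order and keep the last one whose min-width condition matches, mirroring the markup order above.

```javascript
// Hypothetical sketch of a <picture> polyfill's source selection.
// Sources are listed smallest-first; the last matching one wins.
function pickSource(viewportWidth, sources) {
  var chosen = null;
  for (var i = 0; i < sources.length; i++) {
    // A source with no media condition acts as the low-res default.
    var minWidth = sources[i].minWidth || 0;
    if (viewportWidth >= minWidth) {
      chosen = sources[i].src;
    }
  }
  return chosen;
}
```

With the three sources above, a 500px-wide viewport would get medium.jpg, and anything 800px and up would get large.jpg.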
Advantages
- Mimics other media syntax like <video> and <audio>, which makes sense.
- The fallback makes it backwards-compatible with browsers that don’t support it, which is extremely important. We can’t have images that just don’t work in older browsers.
- Gives us web authors the control to show exactly what we want under situations we specify.
Disadvantages
- It’s a whole lot more complicated than <img>. Harder to teach, harder to learn, more code to write. Easy to screw up.
- Muddies the water of CSS and HTML a bit, by bringing the media query syntax into HTML.
- Similar issues to why inline styles are bad. Makes future updates more difficult. Not a reusable abstraction.
New Image Format
The impetus behind this blog post came from conversations I had with Christopher Schmitt and a blog post he wrote. Christopher is of the opinion that a new image format is the ideal solution.
This new image format would essentially have multiple versions of itself inside of it.
Which image is delivered by a program like a web browser is a determination that can be made by a virtual handshake between the browser and the web server.
So perhaps the file is 800k all together, but within it is four different versions of itself: 500k, 200k, 50k, 10k. Through some kind of standardized set of rules, one of those four images would come across and be displayed by the browser.
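Nobody has specified how that handshake would actually choose, but the core of it might be as simple as “send the largest embedded version that fits some budget.” A hedged sketch, with all names invented:

```javascript
// Hypothetical: pick which embedded version of an .rpng-style
// container to send, given some transfer budget in kilobytes.
// Returns the largest version that fits, or the smallest one
// as a worst-case fallback.
function pickVersion(budgetKb, versionsKb) {
  var best = Math.min.apply(null, versionsKb); // worst case: smallest
  for (var i = 0; i < versionsKb.length; i++) {
    if (versionsKb[i] <= budgetKb && versionsKb[i] > best) {
      best = versionsKb[i];
    }
  }
  return best;
}
```

With the four versions from the example (500k, 200k, 50k, 10k), a 250k budget would get the 200k version.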
Seem like a fantasy? There is already an image format like this called FlashPix, which handles even more drastic versioning. Think new image formats are impossible to implement? WebP is gaining support at a decent pace.
Ultimately the syntax would remain just as it is now:
<img src="unicorn.rpng" alt="fancy unicorn">
I just made up that file extension, but “responsive PNG” would be fine with me.
Christopher likes this approach because
it allows the continued use of the IMG element which is ingrained into the bones, the very marrow, of the Web.
I like that thinking. No need to turn our backs on an element that has worked so well for so long. But of course we wouldn’t: there is no need to replace <img>, only to build upon it and offer alternatives.
Advantages
- Keeps the syntax simple. Works the same way it always has.
- Keeps authoring simple as well. One file, not multiple. Adoption would probably be quicker and more people would actually do it (fewer people would make four versions of every image and hand-craft queries to serve them).
Disadvantages
- Possible loss of control. In order to keep things simple, the image format would do the logic of what exactly gets served. Will it always make the right call? What does it factor in? Parent container width? Network connection speed? Pixel density? A combo?
- Not backwards-compatible. What happens in browsers that don’t support the new format?
Other Technologies
We could certainly lean on JavaScript to help us here. It could help us with the new image format, for one: we could use regular old PNGs and JPGs in our <img> tags and hot-swap the src with the new format until that format has ubiquitous support. A little heavy-handed, perhaps, but workable.
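As a sketch of that interim hot-swap, assuming some feature test for the new format existed (none does today; the function names are invented), the swap itself is just a string rewrite:

```javascript
// Upgrade a conventional image src to the hypothetical .rpng
// format, but only when the (imaginary) support check passes.
function upgradeSrc(src, supportsRpng) {
  if (!supportsRpng) return src;
  return src.replace(/\.(png|jpe?g)$/i, '.rpng');
}
```

A script would run this over every img on the page once support was detected, leaving older browsers on the plain PNG/JPG.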
And if it can do that, maybe we should just let it solve the entire problem. JavaScript is able to do all the “tests” we likely need it to do (e.g. window size testing, network speed testing), and it could be in charge of swapping out the src of our images for more appropriate ones. Adam Bradley’s foresight.js library is doing that already. Perhaps that is what JavaScript is for and we don’t need to interfere.
Think the client-side nature of JavaScript isn’t ideal? There are a couple of solutions that bring things server-side.
Adaptive Images by Matt Wilcox is a ridiculously clever solution that uses a tiny sprinkle of JS just to measure the current screen size and set a cookie; all requests for images are then routed through some PHP, which determines which version of an image to serve, appropriate to the screen size.
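The client-side half of that is tiny. Roughly what the Adaptive Images snippet does, written here as a pure function so the cookie string can be inspected on its own (the real snippet assigns it straight to document.cookie):

```javascript
// Build the cookie string the server-side PHP reads on image
// requests. In a page you would run:
//   document.cookie = resolutionCookie(screen.width, screen.height);
function resolutionCookie(width, height) {
  // Record the larger dimension so device orientation doesn't matter.
  return 'resolution=' + Math.max(width, height) + '; path=/';
}
```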
Sencha.io Src is another solution that is completely JavaScript-free. It does UA sniffing to determine the device and makes the call on what size image to serve up based on that. You simply prefix the src of the image with Sencha’s service URL:
<img src='//src.sencha.io/http://mywebsite.com/images/unicorn-hires.jpg' alt="unicorn" />
That’s the simplest possible usage; it can get a lot fancier than that. It’s a third-party beta service, though, so be aware of the inherent concerns of that (e.g. if they go down, your images don’t load). I imagine it will ultimately be a paid service.
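Since the whole API boils down to “prefix the URL,” a one-line helper (name invented) captures the pattern and keeps the service URL in one place should it ever change:

```javascript
// Prepend the Sencha.io Src service URL to an absolute image URL.
function senchaSrc(imageUrl) {
  return '//src.sencha.io/' + imageUrl;
}
```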
Advantages
- We don’t rock the boat of standards.
- No waiting for browsers to catch up on supporting anything new.
Disadvantages
- Is this just ignoring the problem? Aren’t standards supposed to help with real problems?
- The code needed to do this right is heavy-handed.
Where I Land
Relying on older technology and hacks isn’t enough for me, but I can’t decide whether I prefer a new format or new syntax. Maybe both? Maybe a hybrid? I feel like the syntax is more likely because there is more discussion about that. A format is a much taller order and I’ve heard no whispers of active development on anything like that.
It’ll be a fun day when I can update this blog post with “official” best practices!
Related
- Brad Frost: Optimizing web experiences for high resolution screens
- W3C: Responsive Images Community Group
- Tim Kadlec: Media Query & Asset Downloading Tests
I’m of the opinion that we should really have a screen dimension header sent with the HTTP/SPDY headers. This means that server-side code could be implemented as IIS/mod_rewrite plugins, so that:
1) coders don’t have to write special-case code in most cases
2) you can modify based on a session
3) some of the JS stats packages can drop the extra code that measures this stuff
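No such header exists, but if a hypothetical X-Screen-Width request header did, the server-side mapping could be as simple as this sketch (header name and everything else invented for illustration):

```javascript
// Pick an image variant from a hypothetical screen-width header.
// "variants" is sorted smallest-first by maxWidth.
function variantForHeader(headers, variants) {
  var w = parseInt(headers['x-screen-width'], 10);
  // No header (e.g. old client): serve the smallest, to be safe on bandwidth.
  if (isNaN(w)) return variants[0];
  for (var i = 0; i < variants.length; i++) {
    if (w <= variants[i].maxWidth) return variants[i];
  }
  return variants[variants.length - 1];
}
```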
That’s the ideal solution… but what about offline applications?!
Screen dimension detection imho is way too limited. You can have a huge screen and still be on low bandwidth.
It’s worth noting that <picture> wouldn’t mean introducing media queries to HTML; they’re already specced as part of <video>’s source elements, and I believe implemented in a few nightlies. We want to ensure that whatever element potentially comes about here falls in line with the existing standards, y’know?
I think the real issue with putting media queries into the HTML is that it’s not centralised (the way JS/CSS are), so you have more code repetition, and more places to edit if you ever need to change breakpoints. Sure, <video> may already be spec’d this way, but that doesn’t mean it’s the best way to do it.

I think that the rpng idea is great, and the size problem is not that big, because the file would be close to the size of the large version anyway; the small and medium versions are relatively small!
I agree that a new image format seems like the best ultimate response. This would feel a lot like the current implementation of progressive JPEGs: Loading each successive layer would be dependent on things like screen size and network latency.
Your icon fonts blog post loaded pretty fast for me (less than 3 seconds, compressed to 105 kb) on Opera Mini 4.3 on an EDGE network. Just saying.
I like the idea of a new image format. Like you say, it’s already a part of the web. Also, if we used a new element we would need a myriad of different-sized pictures, all with different names, screwing up SEO, plus a much larger code snippet.
With the image format we can have one image with one name that automatically responds based on the file sizes previously specified. If a new attribute was added for the image tag we could easily have backwards compatibility. This means less HTML, better SEO and one image to deal with and store.
Apple actually uses that approach for their icons. If you copy one in the “get info” window then paste it into Preview the icon is actually 4 different resolutions of the same image.
Is a logo a <picture>? There are many img’s used on a site that aren’t pictures.

Would you be able to target individual sources with CSS for padding/margin reasons?

<picture>
  <source class="small">
  <source class="large">
</picture>
Leaning toward .rpng this week.
By the time this gets figured out and implemented across the board, everyone will be on 4G or higher anyway. Just a thought or discussion point.
I like the picture solution as well; however, it could just as easily be:

<image>
  <source class="small">
  <source class="large">
</image>
I def agree with Chris that the html/css should not get muddied and would think that classes on the source would/should be a requirement for targeting specific css rules.
Definitely prefer the “image” suggestion by Tim over “picture”, as “picture” is too specific.
What about this syntax:
I could be totally off on this, but doesn’t this make more sense than a parent/child relationship in this situation?
I realize IMG is not a pretty or greatly descriptive tag, but why not just upgrade the decently adequate tag we already have?
From my admittedly really limited testing, it seems backwards compatible.
No, it cannot as easily be <image>.
Because browser vendors have been generous to poorly authored documents, <image> is treated the same way as <img>.
You can test this by creating a test document with:
<image>
  <source class="small">
  <source class="large">
</image>
And then look at the <image> element in web inspector. What you will find is that all of the child elements have been discarded.
This is because <img> should not have child elements and <image> is essentially a synonym for <img> as far as the browser is concerned.
Like it or not, <img> and <image> are off the table for any solutions that use child elements.
I think the syntax makes the most sense; however, why can’t we do a combination of that and a syntax allowing a self-closing tag if there is only one URL? Also, what’s the benefit of the tag name picture? Why not just keep it img?
I’m using .htaccess UA sniffing to serve either normal images or small optimised ones. I’m open to any suggestions or thoughts on improving it, or on why not to use it :-) http://sandermangel.blogspot.com/2012/02/responsive-images-trough-htaccess.html
This is the same approach that I had in mind… It works pretty well for me, as it doesn’t require extra HTML, so it can be implemented easily on older sites. Which is, in my view, a big plus versus other, front-end techniques that need browser support.
You’re serving Opera (Desktop Browser) your lowres images.
… and that’s why every way of responsiveness that relies on UA Sniffing sucks.
Hey, sorry to keep chiming in: just wanted to point out that there are a couple of common questions and concerns around the proposed <picture> element that we’ve gone ahead and posted to the Responsive Images Community Group, including the reasons we can’t use <img>/<image> and how current (and proposed) CSS and script-based solutions fall short.

The page you link (as well as the responsive assets breakdown) goes through HTML-related citations of multiple backups of an image, thus modifying that 1-to-1 relationship I wrote about in my blog post.
Whereas my point in the post was about novice web builders, it seems like there is also a “one IMG, one file” correlation as well that can’t be undone in browsers.
That underscores why I think a working partnership between a new image format (.rpng) and a browser-with-server-side approach is better than trying to shoehorn a solution out of HTML.
This is a topic I’ve invested a lot of thought into. It’s one of those issues in responsive web design for which I don’t think we currently have a perfect solution. If you haven’t already, I would suggest reading through Jason Grigsby’s posts on responsive images. He’s done a lot of great research on this topic.
http://blog.cloudfour.com/responsive-imgs/
http://blog.cloudfour.com/preferred-solutions-for-responsive-images/
Ideally, I’d like to see the picture element become supported ASAP. I’ve followed along with the W3C responsive image picture element discussions and I prefer it because it keeps all of your logic in one area. I also feel like you have more control of the logic with the picture element syntax as opposed to a new image file type. For example, you can write media queries for min-width and min-device-pixel-ratio on one source tag in the proposed picture element syntax. I’m not sure how you would add that logic to a new image file type.
However, I’m not completely against a new file type for images.
I’ve read through Christopher Schmitt’s post and thought it was rather interesting. Matt Wilcox had proposed an idea for the picture element where it would work similar to how the video element works on loading file formats.
For example, with the video element, you usually supply a few different versions of your video file, and the browser uses whichever file format it supports. I think this is a great idea for the picture element. This way we can continue to keep our logic in the markup if we choose, or we can use new image files for the source in the picture element like webP, “rpng”, or whatever other image format, and then have .jpg at the end for browsers that don’t support the new image formats.
One little thing: you are missing the noscript tag in Scott Jehl’s picturefill example code. That little bit of code prevents the fallback img src image from being downloaded when JS is enabled; in other words, it prevents duplicate downloads. Without the noscript tag, the benefits of the picture element polyfill become a moot point.
I have more thoughts on this but figure they would probably be best suited for a blog post rather than making this massively long comment any longer.
Sometimes the best solution is the solution which is the least bad.
Matt Wilcox raised an excellent point about the tag during his video presentation of Adaptive Images, I’ll paraphrase:
What happens when you want to update your site design with new breakpoints? In CSS the change is easy: you update the media queries and make any necessary layout updates. But what happens with multiple tags? You will need to revisit every one of them and update their breakpoints. A potential nightmare.
Set your picture element media queries to a PHP variable and keep it DRY.
Code example: https://gist.github.com/1893129
Yeah, you still have to update media queries in two places, once in your CSS, and once in your PHP variables, but you don’t have to update every instance you’ve used the picture tag if you use this method.
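For sites without PHP, the same keep-it-in-one-place idea works in JavaScript: define the breakpoints once and generate the source list from them. A sketch, with invented names, following the proposed <picture> syntax:

```javascript
// Single source of truth for breakpoints; a redesign only
// touches this array, not every picture instance.
var BREAKPOINTS = [
  { src: 'small.jpg' },                         // default, no media attr
  { src: 'medium.jpg', minWidth: 400 },
  { src: 'large.jpg',  minWidth: 800 }
];

// Generate the <source> tags from the shared breakpoint list.
function sourceTags(breakpoints) {
  return breakpoints.map(function (bp) {
    return bp.minWidth
      ? '<source src="' + bp.src + '" media="(min-width: ' + bp.minWidth + 'px)">'
      : '<source src="' + bp.src + '">';
  }).join('\n');
}
```

You still maintain the breakpoints in two places (CSS and this array), but not once per picture tag.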
Of course I agree that the variable-based route is the sensible option and that the majority of the community would probably adopt this approach, although there is just a part of me that thinks this sort of thing should be doable using pure HTML or CSS.
lowsrc! :D
Maybe we should modify the old “lowsrc” attribute for the img tag with a modern twist. Nobody really uses it anymore. <img src="hiresimg.png" lowsrc="forsmallerdevices.png" />. Obviously it would need to be fleshed out (a lot) more, but it would be a start.
lowsrc sounds like a great option. The tag is way too much syntax for my taste. And this way it can be omitted easily when no small image is present.
“You mustn’t be afraid to dream bigger, darling.” Devices come in more than two sizes. With the picture element, you can target devices with small screens, medium screens, large screens, really large screens, and hi-res (Retina) screens.
If bandwidth media queries ever become a reality, I would assume you could use those too. For the time being, I’m guessing you could bake in support for bandwidth media query functionality using the network API – http://w3c-test.org/dap/network-api/ once that is complete. I’m not as familiar with the network API as I would like to be, so I’m only guessing.
Point is, the lowsrc attr is somewhat interesting, but I believe a more robust solution like the proposed picture element is a better solution to responsive images.
LOWSRC was the inspiration for the HiSRC plugin, which uses the spec for network detection.
However, this is only a piece of the puzzle. Tablet screens are getting better, high resolutions — and desktop machines are going to follow suit as well.
There is going to be a need for more than two sizes or flavors of the same image and one of the reasons why I think saving multiple types of the same image in one container file is the way to go.
On lowsrc:

We’ve been told by browser vendors on a number of occasions that it’s unlikely we’ll see modifications to the “image prefetching” behavior present in a number of modern browsers, meaning that the contents of an image’s src will always be fetched prior to the application of any custom logic. For that reason, lowsrc doesn’t seem to be considered a viable solution by vendors, and for our purposes it leaves a huge grey area in terms of implementation. How does one set the breakpoint for the smaller image, when we can’t apply custom logic prior to the prefetching phase? Is that set by the browser? Will we then be limited to that single breakpoint, and what would it be based on? lowsrc won’t account for a number of image sizes or resolutions the way media queries might, and would be forced to rely on a single hard-coded breakpoint rather than an ever-expanding roster of media queries.

I certainly can see the appeal in terms of keeping things concise, but longer syntax is only an issue for developers. The solution with the most potential benefit to our users should always trump the solution most convenient to developers.
It seems like there are really three factors that we should be worried about here:
1. Device resolution
2. Connection speed
3. Whether or not the user is paying for bandwidth
These things have traditionally been tied together (a cell phone was small, slow, and people paid based on usage), but those three factors aren’t necessarily tied together anymore:
- On an iPad 3, on a 3G network (huge potential resolution, relatively slow connection speed, pay based on bandwidth usage)
- On a smartphone on Wifi (low res, fast speed, unlimited usage)
- On a laptop on Wifi with a data cap (high res, fast speed, pay more at some point for more bandwidth)
I’m not sure what the answer is, but it would be nice if we could sniff for all three of these things, instead of just the first (which is what the media query does). I realize that some of this has been discussed here: https://css-tricks.com/bandwidth-media-queries
Whether or not the user is paying for bandwidth is an interesting idea. You mention sniffing for this, yet I’m not really sure how you would detect it. Though I do agree, it seems like something we should be considerate of.
What image would you recommend serving to each of these devices?
iPad 3, on 4G LTE network, data limit – pay based bandwidth usage.
iPad3, on WiFi network, unlimited data use.
Both have Retina displays, both have a fast internet connection, only one has a data limit.
Should retina images be an opt-in feature like HD videos on YouTube?
+1 here. Bandwidth is at least as important as screen size — and for sure screen size should not be used to infer connection speed. I could be on a high res laptop on a 3G connection and not want to wait on full quality images.
I agree with your three factors although I think we could merge the latter two into a single browser setting. By default, the setting is automatically set by the browser to request the “most appropriate image” based on device resolution, bandwidth, and whether or not they’re using wifi or mobile network. Alternatively, the user can set their preferred image “type” based on their unique situation if they want more control. Not that I see anything wrong with letting a website know whether or not I’m paying for bandwidth, but I do think many individuals would. In the future, any additional factors that play into what image is best to request is built into their browser.
All of those variables can be determined today by browsers on the OS side of things, so it wouldn’t take long to implement a meta tag or query in the spec to “grab” that preference on our side of things. Alternatively, we can pass that information as a header and do mod_rewrite, eliminating the need for additional src attributes or tags.
Also, by letting the browser determine what’s best for itself, we eliminate the need to profile every single visitor EVERY single time they visit which also speeds up their user experience.
On our side, we can build the various versions of the image ourselves or leave it up to a server module to generate/cache it for us. Then we serve the various options through a new tag/attributes or through the header feature mentioned above.
For future proofing, all you’d need to do is either manually add more iterations of the img yourself or update your server module to generate those variations and cache them for viewing.
This gives us designers absolute control over the quality of images we want to serve and lets the users request from us different images if they want a richer/faster experience. It also puts an impetus on browser venders to improve their browsers to provide the best default experience for their users.
What I do, instead of using IMG tags, is use divs with background images, and I use a media query to change the background image, height, width, etc. based on pixel density or screen size. That way I can serve up different-sized images for different devices, as well as serve high-DPI images to phones with ultra-high-DPI screens.
Since we’re just making up solutions that don’t exist yet, I propose encapsulating our images in an XML file, a la Android. (Does iOS image delegation work the same? I’ve never tried it.)
So the HTML tags would stay the same (good for backward compatibility) but we could specify the XML file in the image tag src (or possibly in the header — like we do with style sheets) and identify the correct tag possibly through a ‘data-‘ element.
The XML file would list all the image-identifying names, the actual file names, and where each resolution class resides (low, medium, and high subfolders, though we could also assume this for ease of coding).
Doing it this way also opens up the possibility of making it work for images retrieved through CSS as well.
I’m pretty sure this sort of scheme is how we’ll eventually resolve this.
Hey Chris, have you or Christopher Schmitt cross-posted this stuff at the W3C community group for further community discussion?
Nicolas, I mentioned my idea on the Responsive Images Community Group.
If you have suggestions for other areas, mailing lists to continue the discussion, please let me know.
I believe it is clearly both.
1. It isn’t simply about file size and resolution. That may be the issue most of the time, but it is also likely that you will want to display different versions of an image at different sizes. For example, it may make sense to crop to a person’s face on a small screen. Examples of this:
http://www.useit.com/alertbox/9611.html
http://www.slideshare.net/bryanrieger/prime-sky (slide 20)
2. Sometimes it is about file and resolution and how that impacts a person’s service. For example, retina images on an iPad. It would be helpful to only deliver those images if the person is on a high-bandwidth connection, when they have plenty of bandwidth left on their contract, and only if they haven’t explicitly said they don’t want high-res images.
For the second case, I believe the browser has to be involved. Network speed is transient. Only the browser, which can monitor the network speed and could know more about the person’s carrier contract situation, can make decisions about what resolution is best to download at any given time.
In the short run, solutions like Apple’s proposed image-set spec would help solve this problem by specifying where to find files that represent the same file at different resolutions and then letting the browser decide which one should be displayed. In the long run, something like JPEG2000, where the file can be progressively downloaded at different resolutions, would be ideal.
But it seems clear to me that we need both. On the CSS side, we have this flexibility already and it just gets better if image sets are adopted. For images that are content, the IMG tag specifically, we need something similar.
Using new attributes for high and low-resolution images would mean:
– all websites have to add support to make it work
– at least two versions of every picture need to be created
A new picture tag would mean the same.
A new format like rpng will:
– take ages to be adopted by sites and image creating tools
Creating new media queries based on resolution/connection speed:
– will take time and too much time testing and perfecting
The best way, in my opinion, is not to bother web devs with this, but to have mobile browsers offer the same functionality as Opera Mobile/Mini out of the box. That way users may choose whether they want compression or not: sometimes even when people are on wi-fi they want fast-loading sites, and some may want to view a photo portfolio page at its best even if they are on a slow connection.
Very soon this shouldn’t be a problem anyway, as networks become faster; the new iPad has higher theoretical download rates than my laptop gets through cable.
Who said… Opera?
For me, like classicc, it’s not for us to bother with that kind of thing, and it’s already taken care of by some of the best browsers here, beginning with Opera.
Well, the issue with using Opera Turbo and similar services is that TLS/SSL breaks, so you’ll have to choose between security and features, even though it could be fixed by just adding some stuff to the (yet unfinished) HTML5 standard.
Don’t make the users think and reward good practice.
Why not just rely on the CMS to deliver images? I know WordPress creates a couple of different sizes. I am sure someone can come up with a way to tap into this.
@Brian Unless you use a device detection database, the CMS knows nothing of the device/browser capabilities on first page load. I outlined the challenges for the image tag at http://blog.cloudfour.com/responsive-imgs/
@Jason – If your thinking about device detection, this is where a resource like Categorizr can help – http://goo.gl/SlIZ1
I know when clients often upload images to their WordPress site, they upload the raw image, usually jpegs, from their camera. This is usually a 2000-3000+ pixel wide img, which is actually a good thing.
With this, you can use PHP to generate all the images you need from the original image the user uploads. You can generate an image for retina displays, an image for regular “desktop” devices, an image for tablet devices, and image(s) for mobile devices. That way, the user doesn’t have to do any extra work to create the other images for the different source tags in the picture element.
I like the new format the most. It could be made backwards-compatible, like animated GIFs and animated PNGs have been. The server figures out which size it needs, then sends that with a MIME type of (for example) image/png, overriding the .rpng extension in the HTML. The browser would only see what appears to be a normal PNG image.
Although I’m not particularly fond of it, the element seems like the least-worst option.
Another image format isn’t really the answer; developers/designers/etc. seem to have enough problems optimising our current image options without adding another to the mix.
It’s also worth bearing in mind that if you’re connecting via 3G, a lot of the time you will automatically be going through your carrier’s proxy, and quite a few carriers will downsample images automatically anyway.
A format for that could make our lives really easier.
But, seeing as it is 2012 and we rely ever more on JS/CSS animations yet still don’t have an image format that combines transparency mapping with JPG compression, I don’t get my hopes up for any format solution.
My biggest concern with the image format would be: will CS5 and other image-editing programs accept this new format, so I can create these images in the software I already use? Also, how long will it take for IE to accept a new image format? I’m sure Firefox, Chrome, and others will adopt it quickly, but IE might take 2 or 3 years and have us all using JS and hacks to make it work anyway.
I don’t know a lot about these things, but I’m thinking about the HTML5 video player and how there have to be three different video formats, a Flash fallback, and video link fallbacks to cover all browsers. Would it be a similar situation if a new open image format were introduced to the mix?
After a few tries, I ended up using a data-responsive attribute + JS code to deal with that “issue”.
Pros:
- No htaccess or GD lib needed
- No jQuery or any JS lib
- Compatible with low-end devices using old XHTML browsers
- Compatible down to IE5
Cons:
- You have to generate your image in 4 different sizes (240, 480, 600, 800)
I should write a blog post about it… :)
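I can only guess at the swap logic behind a data-responsive attribute like that, but with those four widths it would presumably be “smallest generated size that still covers the viewport,” something like this (names invented):

```javascript
// Snap a viewport width to the smallest generated image size
// that still covers it; fall back to the largest if none do.
function snapSize(viewportWidth, sizes) {
  var sorted = sizes.slice().sort(function (a, b) { return a - b; });
  for (var i = 0; i < sorted.length; i++) {
    if (sorted[i] >= viewportWidth) return sorted[i];
  }
  return sorted[sorted.length - 1];
}
```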
While completely ignorant to the roadblocks in implementing a new image format, I put these forward as what I deem to be completely necessary for a new format to succeed:
• the ability to ‘manually’ package individual images into the one file, (eg. Different cropping or entirely different images for different breakpoints, rather than just scaling, which should still be the default),
• the ability to delegate breakpoint criteria to a central controller – hooking into CSS or JS queries, an external instructions file, whatever – to keep the control (of what is shown/when) in the designer’s hands (again, this should be sugar on the format’s defaults),
• the ability to be somewhat backwards compatible with browsers that don’t support the new format – this probably means extending a current format with fancy responsive tech, and keeping a basic image accessible to older browsers,
• not exactly necessary for responsive-ness, but yes we should definitely hope for alpha transparency AND great lossless compression.
Again I don’t know if ANY of those are possible, but it’d be great to have them all in one shiny new format.
This has probably already been discussed by the community, but what about some sort of new CSS property?
Something like:
(I’m using SASS syntax here, by the way)
I can definitely see how some people would argue that this crosses some lines as far as separation of style/content, but from a semantics point of view, we’re already declaring image sources in the background-image property… I know it’s different, but at some point maybe semantics should give way a bit to functionality? From an SEO standpoint, this shouldn’t cause any problems since your fallback image would be declared in the actual html image source (maybe the mobile version of the image specified in my code wouldn’t be necessary, but it may help keep breakpoints clearly defined).
I’m not really sure what the issues would be here as far as performance… which images would be loaded, and when, by the user’s browser… I don’t know how CSS works in this regard.
Of course, this option also doesn’t help if the user has CSS turned off…
Well, yeah.. there’s that. I’m sure there are performance issues with this, but what? What other issues might there be? This seems like a viable approach…
…ah, well, I can suddenly see some serious CMS issues with image-heavy sites
The more I think about this…
1. The more it hurts my head.
2. The more I think that users should be in control of this rather than media queries.
While it may be possible to work out device sizes, and in the future we may be able to work out connection speeds and whether or not the user has to pay for their data… it’s impossible to determine purpose and context.
Am I going on that photographers website to show a client his brilliant images? Or am I going on there to get his number quickly because I’m late for a shoot?
Crude example – http://jsfiddle.net/MerlinMason/LmgdQ/
Not a bad idea to provide a 240px-wide image that switches to the best possible version when you click on it.
I’m really a print guy, but I’m going to test the “no such thing as a dumb question” theory with some questions I have about a possible new image format.
Since sites are so flexible now, I expect different file types for the same image are currently beneficial under different circumstances. For example, let’s say you’ve used a transparent png, but when a site restructures itself for another device, transparency might not matter and a jpeg might offer better compression.
So, would this new file format act like a zip file, letting me dump in whatever file types I want? Or would it be an entirely new image format that mimics current (and eventually future) benefits in combinations I choose? Or would it just be progressively smaller versions of the exact same thing, and that’s that?
I can imagine the answers to those questions could also affect how easily these files might be updated with new versions of the image.
It sounds to me like the benefit of less code, in this case, might also come with less flexibility. Am I wrong, and does it matter?
I have been developing what I call WebHD for about 3 months now, where I first load a regular 1:1 pixel-density image and, hidden, I load the high-res image (in the background), and onload I replace the low-res image with the high-res one. This takes advantage of retina displays (if the original image is twice the size and scaled down with CSS).
As of now, as the article mentioned, there is no native way to know how you are downloading content (3G or WiFi), but it speeds up the initial page preview and the basic images.
The other way would be to have a mobile and tablet site which would sport the appropriate images.
It is sad that everything else is developing faster than the web can catch up. All of that because companies fight over standards.
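A minimal sketch of the low-res-to-high-res swap described above. The `@2x` suffix convention and the `data-hd` attribute are my own assumptions for illustration, not part of WebHD:

```javascript
// Derive a high-res URL from a low-res one; the "@2x" naming convention
// is an assumption borrowed from iOS, not something WebHD specifies.
function highResUrl(src) {
  return src.replace(/(\.[a-z]+)$/i, "@2x$1");
}

// Browser-only wiring: preload the big image, then swap it in when loaded.
if (typeof document !== "undefined") {
  document.querySelectorAll("img[data-hd]").forEach((img) => {
    const hi = new Image();
    hi.onload = () => { img.src = hi.src; };
    hi.src = highResUrl(img.src);
  });
}
```

Note that, as written, both images are still downloaded, so this improves perceived speed rather than bandwidth use.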
I once built a similar thing, with the difference that users initiated HD mode. They get the basic quality image and, if desired, they hit an HD button to get the bigger image.
Note though that your technique, if I understand it correctly, in all cases loads both images. This does not resolve the bandwidth issue, it only gives the perception of performance.
So many solutions out there, it’s great to see that our industry is really taking this puzzle seriously. :) lowsrc seems like a very interesting approach too, but I’m in no doubt that there’ll come an official standard (hopefully) soon enough.
For those who like less.js, I took my stab at responsive images here: http://forr.st/~VE1 (or you can go directly to the TinkerBin, just remember to set CSS to LESS). One note though: it’s purely experimental. Maybe it would be useful in a separate “images.less” file where you would handle all images, but pff… it’s still so tedious!
I vouch for the unicorn.rpng though ;) Happy Easter!
CSS is for layout, so I wouldn’t use any CSS-based solution to this problem… it’s clearly not future-proof, and not even present-ready, since it wouldn’t work on Windows Phone (even 7.5), nor WebOS, nor BlackBerry < 5. There’s a good chance it doesn’t work on the latest Nokia Symbian Belle browser either.
I prefer the picture element… here is why:
1.) It keeps complete control of which images get rendered, and when, in the front-end designer’s hands… as he/she gets to decide the breakpoints.
2.) It is completely backwards and forwards compatible… backwards compatible in that you have a fallback to the original full-size or mid-size (again – your choice) image file… and forwards compatible in that you can always go back and edit your breakpoints or eliminate them.
I do not mind the extra syntax… surely it must be less writing and less processor-intensive than writing complex JavaScript or hitting the server to automatically resize images…
3.) This picture element, I assume, could – like the image element – contain any image format that is agreed upon by the browser makers for web compatibility… meaning it could handle any current or future web image formats (like that WebP format that looks intriguing)…
These are just my thoughts…one of hundreds here…great discussion….
How about using background-images to replace ALL images? You just include a down-sized version in your HTML and replace it with the right one via CSS – depending on your screen size or whatever you like (Media Queries).
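A rough sketch of that background-image idea — filenames and class names are placeholders, and the obvious caveat is that the small inline image would still be downloaded:

```css
/* Hide the small inline <img> and swap in larger art via a media query */
@media (min-width: 600px) {
  .hero img { display: none; }
  .hero {
    background-image: url("hero-large.jpg");
    background-size: cover;
    min-height: 400px;
  }
}
```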
Knowing the user’s connection speed seems like the hardest thing to get around. I wonder if browsers would adopt something similar to the iOS app SDK for high res images (i.e. background.jpg and [email protected]) and it would fetch the correct asset for a high res screen. Then the web browser attached to the device could check for a WiFi versus Cell network and fetch the image not just based on the resolution of the screen, but the connection as well. Problem is that’d make a lot of extra http requests on WiFi connections.
Tough problem.
Isn’t it dangerous for us to assume that just because someone is on a slower connection they actually want lower resolution images…?
Screen size makes sense to me, as there is not much point in downloading large images for small screens… but if someone is on a laptop connected via 3G or a slow connection, are we to assume that they want a degraded visual experience… perhaps they do or perhaps they do not… the point is, we do not know…
@michaelwhyte,
I wouldn’t throttle it based on speed, but by the type of connection. WiFi versus cellular instead of fast versus slow. A 4G LTE connection could potentially be faster than a WiFi broadband connection, but a 4G user is still likely to have a limited data plan. Meaning even though they have the speed for a hi-res image, they may not have the bandwidth for it. Using up someone’s precious data plan seems more dangerous.
Browsers should maybe have an additional feature to toggle whether to download hi-res assets when available. Something like that would be best for mobile users. Something that gives them the choice.
We have done content negotiation for several years already, at least since HTTP/1.1 came out. A standard HTTP request contains headers specifying preferred document types, charsets, encodings and languages.
Introducing a new header with the viewport’s width and height should be a breeze. And it would be 100% backwards compatible.
To serve responsive images today, the javascript solution is by far the best. It is backwards compatible, cacheable and fairly easy to maintain. It does require advanced HTML modification, but nothing that can’t be done easily with a regexp filter.
The UA sniffing and cookie solutions all create images that cannot (or should not) be cached by either HTTP accelerators or proxies. The UA sniffing technique is also very prone to errors and requires a lot of maintenance.
I came here to write this, but you said it better than I could have so thank you. I think if you look at this thread and all of the concerns, caveats and question marks it becomes clear that html and new abstract formats do not include the final full answer. This is a page manipulation problem that should continue to be handled by Javascript. If there are any content negotiations that happen, it should happen via a header.
Exactly – let the client tell the server what it wants. In addition to the viewport width and height, I would add a header for “prefer [low/high] resolution images”. Mobile clients could then toggle this header based on whether the device was connected to a cellular or WiFi network, with a user preference to select “always low”,”always high”,”automatic”.
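A sketch of how a server might act on hints like these. The header names (`viewport-width`, `prefer-low-res`) are hypothetical — no such standard headers exist — and `variants` maps a minimum viewport width to a filename:

```javascript
// Pick an image variant from hypothetical client-hint headers.
// `variants` maps minimum viewport widths to filenames, e.g.
// { 0: "small.jpg", 400: "medium.jpg", 800: "large.jpg" }.
function pickVariant(headers, variants) {
  const width = parseInt(headers["viewport-width"], 10) || 0;
  const preferLow = headers["prefer-low-res"] === "1";
  const minWidths = Object.keys(variants)
    .map(Number)
    .sort((a, b) => a - b); // ascending
  let chosen = minWidths[0]; // smallest variant is the safe default
  if (!preferLow) {
    for (const min of minWidths) {
      if (width >= min) chosen = min;
    }
  }
  return variants[chosen];
}
```

For example, `pickVariant({ "viewport-width": "900" }, variants)` would serve the large image, while the same request with `prefer-low-res: 1` would fall back to the small one.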
Thanks for all of this ;) great
Just a different idea for consideration… how about using a single high-res image file with a proxy behavior file (like a codec) to define how many pixels (or how much resolution) to display based on device-pixel-ratio or device-width (with @media-queries)?
Think of it like editing HD video and just using a lower res preview to speed up redraw, or the preview-quality display of linked high-res images in Indesign layouts.
Pros: one src file for each image, not reliant on JS, could be set globally or on individual images, no change to HTML syntax, completely responsive through CSS.
Cons: possibly performance issues downsampling large images on the fly for smaller devices?
Just throwing it out there.
Can someone not just make portable network graphics turn into progressive network graphics?
Just update the format so that existing applications see the default image and applications that support the ‘progressive’ bit can load the different sizes, proportions, etc (whatever they spec in).
Hey presto you have graceful degradation! :-)
FlashPix saves multiple images into one file; we don’t want/need that. We have one main image, and for the smaller images (even thumbnails, cropped versions, etc.) we just want to take a fraction of that image and serve it.
I waste hours creating duplicate images in different sizes, thumbnails, crops, etc. I wish I could just spec these in one PNG file and upload that. Then just use the img tag to load it. That might add some syntax to the img tag, but only to define which version of the image you want to use.
Realistically, I think the new image syntax is the more likely candidate for the future, although it leaves much to be desired.
My main concern is that the majority of the “web” isn’t going to deal with this “problem”. At all. Sure, web craftsmen visiting sites such as these will lead the way, but the rest of the web will not follow at all or very slowly. Many websites are on low/no maintenance, there’s a lot of homegrown CMS, or no CMS, and all of these people simply aren’t going to create multiple versions of the same image. I just don’t see that happening.
Therefore, I think the solution with the most automation (by browser intelligence, a new image format, whatever) will be the superior solution for reasons mentioned above.
By the way, in the comments above I also see a lot of overestimation and wrong assumptions on bandwidth available to users, paid or not. Coupling these assumptions to content delivery is a risky business.
I’m of the opinion that:
<img rsrc="responsive.rpng" src="non-responsive.png"/>
would be a great compromise, with “src” only being processed if “rsrc” is not present
Doesn’t .icns have multiple images in it?
Would it be possible to roll the new image format into a progressively enhanced, “silent” upgrade of image formats? If we could, for example, upgrade pngs while still keeping the same extension, maybe we could actually pass them variables in their url like this:
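Something along these lines, perhaps — the query parameters here are entirely hypothetical:

```html
<!-- Hypothetical: a "smart" PNG that reads hints from its own URL.
     Older browsers would ignore the query string and get the plain PNG. -->
<img src="photo.png?maxwidth=480&dpr=2&bandwidth=low" alt="A photo">
```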
Maybe that could be set up so that older browsers (IE) just ignore the variables, and still serve it like a png? By passing variables to the new format, we could also have more control over the way it ran. Maybe an api of sorts could allow us options of what happens at different bandwidths, and how responsive the images are to screen size?
I don’t know enough about image format handling to know if any of that would be possible, but it could be awesome.
What is the standing opinion with timthumb?
http://code.google.com/p/timthumb/
I’ve followed this issue for quite a while, and now I’m feeling like we’re going through a lot of work to solve yesterday’s problem. Mobile bandwidth is growing rapidly (though data caps aren’t, admittedly) and is sometimes higher than desktop bandwidth. I could see having this discussion if the solution was just a tweak here or there, but we’re talking new image formats, new HTML — is this problem really worth it? 500K covers very large, very high quality JPEGs. That against my 3GB iPhone cap is barely measurable compared with even a small video.
As a photographer, I fine tune and sharpen my images for a destination size. That would work with a new image format with multiple versions, but now we have a huge file the server has to pick apart. Sure it can be done, but is it worth it when roll-out would be pretty slow?
I think we have bigger fish to fry — maybe better multi-column text format control; kerning and ligature control; all kinds of things that don’t involve bandwidth or screen size.
So my solution is to not worry about it; just keep using the same img tag. Am I being lazy? I prefer to use the term “pragmatic,” but don’t have a big retort for “lazy.”
Some really good points. If you’re designing your sites with web fonts and lots of CSS created elements, those elements are going to look great on high res displays. If you want your logos to look good, consider using SVG images where you can. JPEG’d Photographs still look pretty decent as long as they’re not including a lot of line art.
Also, being a lazy programmer is a virtue, not a problem. :)
Mobile bandwidth is growing rapidly…
Wait, let me clarify, mobile bandwidth is growing rapidly in the United States…
Actually, let me refine that a bit, mobile bandwidth is growing rapidly in urban areas in the United States…
Ok, now that I think about it, mobile bandwidth is growing rapidly in most urban areas in the United States, but not uniformly which means people can end up on slow connections even in areas that supposedly support 4G.
4G/LTE isn’t going to save us and even if it did, Google, Amazon, Yahoo and Microsoft have all documented how millisecond improvements in performance improve usability and usage.
So you may not care about squeezing extra performance out of your site, but you also likely don’t have millions of dollars on the line that could be impacted by a slow user experience. For those companies, solutions to these problems are paramount.
One of the companies that cares most about performance is Google. Apple has already proposed image-set to help with resolution issues, so they’re looking at the problem as well. Both companies also happen to be among the leading contributors to WebKit. It seems very unlikely that there won’t be some change to the way we handle images to address these problems, given the people involved.
Thankfully, they’re not waiting for bandwidth improvements to save us.
I think most of you are missing Forest Tanaka’s point; I believe he is being a realist.
Let’s look at PNGs, which were created to be a better image format for the web and other digital media. From Wikipedia it looks like discussion on PNG began in 1995, when the web was a baby and 56k modems were high tech… It is now 2012 and there are still people who have to support IE6, which does not completely support transparent PNGs… 17 years later and we still do not have universal support for PNG… by the time we do, it looks like we will be looking into yet another image format…
For mega sites like Google and Apple I can see how it is important to squeeze every drop of bandwidth out of their web sites… but for people building smaller sites… what I think Forest is saying is that there are other areas where we should be concentrating our efforts, where they can make a difference now…
For me personally I like to build my sites for the future…Yes people in the third world do not have high speed….but some do not even have the internet….do we print out the site on paper and distribute it out to the far reaches of the globe….Internet speeds are getting faster and faster….cell phones are getting cheaper and cheaper….In five years imagine how cheap an iPhone 4 will be in the second hand market….imagine what resolutions we will be using in the developed world….imagine how fast our internet connections will be here….maybe our speeds of today here will be available to most citizens in less fortunate countries…
If I had to choose, it would be to not develop some new image format and instead do the multi-line image selection in our HTML… which is similar to the video or audio tag… but I believe in 5 or 10 years this entire discussion will seem silly as we enjoy our 100 Mb/s download speeds and the developing world gets its 5 Mb/s download speeds…
By the way, according to this page: http://www.netindex.com/download/2,97/Sudan/ … the average internet connection speed in the country of Sudan is 0.97 Mbps… about 17 times faster than what our high-tech modems of 1995 could do here in the developed world… imagine what Sudan’s and the rest of the developing world’s internet speeds will be in 17 years…
I think Forest made a good point.
I think we need to keep it as simple as possible. I like the ‘rpng’ idea and it could always deliver a beloved .png format to maintain backwards compatibility.
Given the number of internet users worldwide who are on low(er) bandwidth than us lucky folk with high-speed internet, local wi-fi, and un-capped mobile packages, there has to be a solution that is globally inclusive.
If that solution is more clunky than elegant (at least to start with) I think it’s something that us developers (and designers) should suffer gladly.
Wouldn’t you feel better knowing you were enabling more users around the world? A little more (say) JS may be a small price to pay for delivering much smaller files to those on low-bandwidth or ancient equipment.
As for the solution, well I don’t know what it is, or what it should be; all I know is that those who define the standards and those who develop the browser engines need to get a wiggle on and focus on delivering content appropriately.
Personally though, I would like to see hybrid / media-query tags (with the inline media-queries overriding a stylesheet, as one would expect).
Here is my solution with Cloudinary service in Rails app:
http://zogash.tumblr.com/post/21324203574/responsive-images-with-cloudinary-on-rails
The above solutions look very difficult to implement – backwards compatibility, rewriting pages, and playing havoc with caches all look to be problems.
How about something in CSS?
Have something server side in the css file like:
The above lists the filters the server will support, with “none” and “100%” as the defaults. The client, when requesting an image, then adds the filter name and value somewhere in the HTTP request header. Probably first as an ‘X-…’ value, but later as something more standard, sort of like ETags.
Then just setup a media query for mobile users which forces resize:
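A guess at the kind of syntax being described — all of the at-rule and property names here are invented purely for illustration:

```css
/* Hypothetical: declare the filters this server supports */
@server-filters {
  resize: none; /* default: serve images at 100% */
}

/* Hypothetical: force smaller images on narrow viewports */
@media (max-width: 480px) {
  img { server-filter: resize(50%); }
}
```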
With the above solution, if the browser does not support it, it’s not a big deal – they just get full-size images, and there will be pressure for browsers to upgrade and support the extra headers to speed up browsing. Also, as these are HTTP header fields, caches should be able to recognize them and cache the right version of the image, defaulting to full size until upgraded.
The above is off the top of my head… feel free to tell me if I have missed something obvious.
Right now I’m trying Riloadr, a responsive images loader written in Javascript, and I have to say I’m really impressed by the flexibility this library offers.
No dependencies other than JS, HTML & CSS, image groups, unlimited breakpoints that work in a similar way to CSS media queries, callbacks, lazy loading…simply put it’s the most complete tool I’ve tried to date and…it just works!!
@Chris: Have you tried it? https://github.com/tubalmartin/riloadr
I wouldn’t use UA sniffing because it’s hard to maintain and may not match every device/browser you expect.
Most solutions don’t rely on UA sniffing, so I would suggest you approach this differently using one of the well-known solutions, or trying some new ones like Riloadr (https://github.com/tubalmartin/riloadr) or foresight.js (https://github.com/adamdbradley/foresight.js).
I will simply say that the browsers on many phones have an option for low, medium, and high resolution pics. One example is Opera Mini. The same can be achieved in Chrome. But in some smartphone browsers that is a serious issue. Helping out my readers is important to me, which is why I will be using the code snippet you added above. Thanks for sharing.
I vote for a new highsrc attribute for the good old img tag.
Even thinking about adopting a new image file format is pretty unrealistic. As someone else pointed out, we’re still using iepngfix to have proper transparency support in IE6, 17 years after PNG was designed. New image format adoption and debugging by browsers and graphics tools is just way too slow for something as important and *time-critical* as this. (How many of us are reading this very article on an iPad 3 already, hmm? We need a standardized solution NOW).
A new image file format with multiple versions in one file also doesn’t mesh well with caching (at the browser, transparent in the network path, and at the server end with CDNs), and caching is of *utmost* importance for performance. Simple, effective caching is *everything*.
In the long term the proposed <picture> tag is probably worth having and provides a nice, general solution to the core problem, assuming the browser is smart enough to not choose an insanely large version on a very slow link, and hopefully gives the user some kind of control over that decision in its settings. I’m all for <picture> but…
<picture> is very verbose for something as common as images. That verbosity, and the resulting flexibility, is great for <video> because you’re unlikely to have more than one on a page, and you can see in many cases the flexibility will be useful, even necessary. But images are way more common, so brevity and simplicity matter much more. And we want compatibility with all those scripts out there which read or manipulate img elements. And, most important of all, we *know* the overwhelmingly most common case for images will be just 2 versions, a normal version and a high-DPI version at 2x resolution. That’s going to account for 99% of all uses.
So why not just keep it simple, by adding that common case directly to the good old <img> tag, as a new attribute called highsrc, with behavior defined to be:
1) The browser may, at any time, choose to set src to the value of highsrc. This would usually happen during initial page loading on high-DPI systems, unless the user turned it off in the browser’s settings. But it might also happen after loading the initial src images, if the link is slowish but not too slow, where it’s worth loading the normal ones first then replacing in the background as the high-DPI ones come in during idle time.
2) If JavaScript code sets src, that implicitly also sets highsrc to the same value. JavaScript needs to set highsrc explicitly after setting src when doing something like a high-DPI rollover, to be safe and simple, and to not break any existing scripts, while allowing easy adoption of the new attribute.
I can’t see anything obviously problematic with that approach myself. It declares the versions available in the HTML in a simple, brief way, then it gives the browser the decision, which seems to be the right place. That means the user can have an input through the browser’s settings, and the current link speed can be taken into account too. So if a user is paying per MB he can turn high-DPI images off, just like he can turn off all images in most browsers today. And on a slow link the browser can adapt, either automatically or with some kind of user setting to help – personally I would provide a simple setting like “don’t load high-DPI images that will take more than X seconds”.
Simple. Easy to understand (and teach). Easy to implement quickly in browsers today. Easy to standardize. Doesn’t break any existing pages or code.
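A sketch of how a browser (or a polyfill) might apply rule 1 above. The decision logic here is my own reading of the highsrc proposal, not a spec:

```javascript
// Decide which source to load for an <img src="..." highsrc="...">.
// dpr is the device pixel ratio; allowHiRes is the user's browser setting.
function chooseSrc(src, highsrc, dpr, allowHiRes) {
  if (highsrc && allowHiRes && dpr >= 2) return highsrc;
  return src;
}

// Browser-only wiring for a polyfill of the proposed attribute.
if (typeof document !== "undefined") {
  const dpr = window.devicePixelRatio || 1;
  document.querySelectorAll("img[highsrc]").forEach((img) => {
    img.src = chooseSrc(img.getAttribute("src"),
                        img.getAttribute("highsrc"), dpr, true);
  });
}
```

A real implementation would also fold in the link-speed and user-preference checks described in point 1, which are omitted here for brevity.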
If anyone is using WordPress for their responsive layouts you can check out a small plugin I wrote to deliver mobile images through PHP.
For the standards that we currently have, I think a server-side solution is the best way, although it relies on the user-agent string, which is not as reliable as the browser width.
If anyone is interested, here is the link:
http://www.spaceheadconcepts.com/blog/wordpress/responsive-images-responsage/
Yup, I agree. PHP is a relatively good solution, as it can do server-side image resizing… considering the impracticality, for the web designer, of making 3 or more image sizes… and a CMS, hmmm, not a great path in my opinion… So yeah, PHP server-side resizing seems good.
I am trying to develop a responsive web application which is resolution independent.
I want to know which units I can use for the height of a div.
Currently I am using percentage (%) units for width and height, but it does not work all the time, for example in the case of floated divs and relatively positioned divs.
But when I use pixel units for height and percentage (%) for width, it works fine.
Is this the correct method?
On responsive images polyfills, take a look at this script. It does exactly the same thing that Scott Jehl’s Picturefill does, but with a JSON-oriented approach and better support for IE8. https://github.com/verlok/picturePolyfill
Could this be a potential feature of server-side scripting? Just a function like “renderOptimum” which would take the target image at 300dpi and render it at the appropriate resolution for the user. The preference of fast vs. quality could be determined either by prompting the user, or by overriding via code if their connection is detected as “slow” but their pixel ratio is greater than 1.
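As a sketch of the decision such a function might make — the name and inputs follow the commenter’s idea, while the thresholds and the `prefer` parameter are invented for illustration:

```javascript
// Pick a scale factor for serving a high-resolution master image.
// `prefer` resolves the conflict case: slow connection but high-DPI screen.
function renderScale(pixelRatio, connectionSlow, prefer /* "fast" | "quality" */) {
  if (connectionSlow && pixelRatio > 1) {
    return prefer === "quality" ? 2 : 1;
  }
  return pixelRatio > 1 ? 2 : 1;
}
```

The server would then downsample the 300dpi master to `scale` times the layout size before responding.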