It’s a pretty low-effort thing to get a big fancy link preview on social media. Toss a handful of specific
<meta> tags on a URL and you get a big image-title-description thing. Here’s Twitter’s version of an article on this site:
It’s particularly low-effort on this site, as our Yoast SEO plugin puts the correct tags in place automatically. The image it uses by default is the “featured image” feature of WordPress, which we use anyway.
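For reference, the handful of tags in question looks roughly like this (the values here are placeholders):

```html
<!-- Twitter Card tags; an SEO plugin like Yoast can fill these in per post -->
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:title" content="Post title" />
<meta name="twitter:description" content="A short description of the post." />
<!-- The WordPress "featured image" -->
<meta name="twitter:image" content="https://example.com/featured-image.png" />
```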
I’m a fan of that kind of improvement for so little work. Jetpack helps the process along, too, by automating things.
But let’s say you don’t use these particular tools. Maybe creating an image per blog post isn’t even something you’re interested in doing, but you still want something nice to show for the social media preview.
We’ve covered this before. You can design the “image” with HTML and CSS, using content and metadata you already have from the blog post. You can turn it into an image with Puppeteer (or the like) and then use that for the image in the meta tags.
Ryan Filler has detailed that process better than anyone else I’ve seen so far:
- Create a route on your site that takes dynamic data from the URL to create the layout
- Make a cloud function that hits that route, turns it into an image, and uploads it to Cloudinary (for optimizing and serving)
- Any time the image is requested, check to see if you’ve already created it. If so, serve it from Cloudinary; if not, make it, then serve it.
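The steps above could be sketched in a cloud function something like this. Everything here is a sketch under assumptions: the route, the Cloudinary account name, and the URL scheme are all hypothetical, and the actual upload call is omitted.

```javascript
// Build the Cloudinary URL we'd serve the preview image from, keyed by post slug.
// (Hypothetical account/folder names.)
function previewImageUrl(slug) {
  return `https://res.cloudinary.com/demo/image/upload/social/${slug}.png`;
}

// Cloud function: serve the cached image if it exists; otherwise render and upload.
async function getOrCreateImage(slug, title) {
  const url = previewImageUrl(slug);
  const head = await fetch(url, { method: "HEAD" });
  if (head.ok) return url; // already generated; serve it from Cloudinary

  // Not cached yet: screenshot the layout route with Puppeteer...
  const puppeteer = require("puppeteer");
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.setViewport({ width: 1200, height: 630 }); // common social card size
  await page.goto(
    `https://example.com/social-image?title=${encodeURIComponent(title)}`
  );
  const png = await page.screenshot({ type: "png" });
  await browser.close();

  // ...then upload the PNG buffer to Cloudinary (upload call omitted) and serve it.
  return url;
}
```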
This stuff gets my brain cooking. What if we didn’t need to create a raster image at all?
Maybe we could use SVG instead? SVG would be easy to template, and we know
<img src="file.svg" alt="" /> is extremely capable. But… Twitter says:
Images must be less than 5MB in size. JPG, PNG, WEBP and GIF formats are supported. Only the first frame of an animated GIF will be used. SVG is not supported.
Fifty sad faces, Twitter. But let’s continue this thought experiment.
We need raster. The
<canvas> element can spit out a PNG. What if the cloud function you talked to was an actual browser? Richard Young called that a “browser function” last year. Maybe the browser-in-the-cloud could do the SVG templating we’re dreaming of, then draw it to a canvas and spit out that PNG.
Meh, I’m not sure that solves anything since you’d still have the Puppeteer dependency and, if anything, this just complicates how you make the image. Still, something appeals to me about being able to use native browser abilities at the server level.
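To make the thought experiment a bit more concrete: templating the SVG is just string work, and in a browser you could rasterize it through a canvas. This is only a sketch; the card layout and field names are made up, and the second function assumes a browser environment.

```javascript
// Template an SVG social card from post metadata (purely string work).
function svgCard(title, author) {
  return `<svg xmlns="http://www.w3.org/2000/svg" width="1200" height="630">
    <rect width="1200" height="630" fill="#1e2030" />
    <text x="60" y="280" font-size="64" fill="#ffffff">${title}</text>
    <text x="60" y="560" font-size="32" fill="#99aabb">${author}</text>
  </svg>`;
}

// Browser-only: load that SVG into an image, draw it to a <canvas>,
// and spit out a PNG as a data URL.
function svgToPngDataUrl(svg) {
  return new Promise((resolve, reject) => {
    const img = new Image();
    img.onload = () => {
      const canvas = document.createElement("canvas");
      canvas.width = 1200;
      canvas.height = 630;
      canvas.getContext("2d").drawImage(img, 0, 0);
      resolve(canvas.toDataURL("image/png")); // raster at last
    };
    img.onerror = reject;
    img.src = "data:image/svg+xml;charset=utf-8," + encodeURIComponent(svg);
  });
}
```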
Hey, it’s me!
I don’t think I mentioned it in this article, but I did actually mess around with trying to return a base64 encoded string directly in the response, but like you said, Twitterbot is kind of picky about what exactly it will accept.
Doing something with SVG is super interesting, though. It’s not something I had even thought about, and it might open up a lot of cool possibilities.
Interesting – I know that .NET can generate images on the server, as I used to do this back in the 00s. That probably still works in .NET Core in a Lambda or Azure Function, so you could generate these images as a lightweight service?
Failing that, ImageMagick has a Lambda layer, so it could certainly do this.
Either of those would likely be more lightweight than puppeteer or phantom.
I have been thinking about this: if you are already editing content in a browser, couldn’t we render the preview via a service worker or background task when clicking submit on the content?