Code in most programming projects goes through a review before deployment. Whether it’s a quick once-over, in-depth peer review, or complete unit testing, code reviews help us release code into the wild with confidence.
I started to imagine what a CSS code review might look like. CSS can be written in a number of ways, and the best way is often subjective to the project. I’m definitely not trying to get dogmatic with a post like this, but instead lay the foundation for what could be a starting point for getting the most out of CSS before it is released.
Why should CSS be reviewed at all?
It’s fair to wonder why we would want to review CSS in the first place. A review is yet another step that adds time and cost and effort and all that other stuff that seems to hold us up from shipping a product.
At the same time, a code review allows us to step back, take off our battle gear, and give our work an honest assessment—or hand it off to someone else for a fresh perspective. This is what separates us from the machines and, in the end, can save us the headache of having to deal with bug and performance fixes that could have been caught before they were shipped.
There’s much more benefit here than just preemptively squashing bugs. I find that splitting a review into specific areas focused on these benefits helps frame the conversation and prioritize those positive outcomes. The benefits for you may be different from what I’ll cover here, but these are ones I find myself coming back to time and again.
Area 0: It does the job
Let’s not dwell on this, but let’s not forget it either. The first job of CSS is to style the page the way it’s intended to look. It matches the designer’s mockup, fits the style guide, or does whatever else it’s supposed to do. And it can do so with variable content (titles and content of various lengths, images of various sizes, etc.). If it doesn’t, that needs to be fixed first.
If the design is responsive, make sure the design does what it’s supposed to do fluidly and at each breakpoint.
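As a quick sketch of what that check covers, a reviewer might confirm that a component stays fluid between breakpoints rather than only looking right at the mockup’s exact width (the class name and breakpoint here are hypothetical, not from any particular project):

```css
/* Hypothetical card component: fluid by default, adjusted at a breakpoint */
.card {
  width: 100%;   /* fluid between breakpoints */
  padding: 1em;
}

@media (min-width: 40em) {
  .card {
    width: 50%;  /* two-up layout once there is room for it */
    float: left; /* era-appropriate; flexbox works just as well */
  }
}
```

Resizing the browser between those widths, and swapping in longer titles or larger images, is a fast way to confirm the "variable content" part of the job.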
Area 1: Style and Consistency
The goal here is to ensure that CSS is well-written, organized, and documented. Those of us who have inherited projects from other developers know the benefits here. Code that uses a consistent naming convention and includes thorough documentation is easier to dive into and provides instruction for how to maintain and enhance the code in the future.
Questions to ask:
- Is there a CSS style guide available for this project? If so, does the code follow it?
- Is the code thoroughly documented? Are there confusing elements, properties, or hacks that would prevent another developer from knowing why something was written the way it was?
- Are there any blatant inconsistencies in how elements are named or how their properties are organized?
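To make the naming and documentation questions concrete, here is a sketch of what a reviewer hopes to find: a consistent convention (BEM-style names are used here purely as an illustration) plus a comment that explains the one non-obvious decision:

```css
/* Modal
   -----
   The close button intentionally overlaps the header; the negative
   margin below is by design (see the project style guide). */
.modal {
  position: fixed;
}

.modal__header {
  padding: 1em;
}

.modal__close {
  margin-top: -2em; /* intentional overlap, documented above */
}
```

The point isn’t this particular convention; it’s that whichever convention the project uses is applied everywhere, and that the "why" behind anything surprising is written down.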
Available resources:
- CSS Lint: An excellent tool to find mistakes or inconsistencies. It’s even available as Grunt and Gulp tasks that can be configured to check against your own set of rules.
- CSS Style Guides: A roundup of style guide examples that can be used as inspiration for creating your own.
- What makes for good docs?: Dave DeSandro with the down low on how documentation is related to marketing.
Area 2: Browser Compatibility
Once CSS is organized and consistent, I tend to turn my attention to how it looks on different browsers and devices. Well-written code is one thing, but it isn’t worth much if it doesn’t work where and when it should.
Questions to ask:
- Which browsers and devices does this project support? Do we have access to them for testing?
- Are there analytics for this site? If so, do they give us insight into what browsers are more important to be testing for?
- Are there any instances of “hacks” aimed at a specific browser or device? If so, is there another way around them? Are they well-documented?
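When a browser-specific workaround really is unavoidable, a reviewer should at least expect it to be isolated and documented. A sketch of what that can look like (the selector and widths are hypothetical):

```css
/* HACK: very old browsers ignore media queries entirely, so the
   single-column layout below doubles as their fallback. Enhance
   on top of it rather than the other way around. */
.sidebar {
  width: 100%;
}

@media (min-width: 48em) {
  .sidebar {
    width: 30%; /* only browsers that support media queries get this */
  }
}
```

Structuring defaults this way often removes the need for a targeted hack in the first place, which is the better review outcome.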
Available resources:
- Can I Use: A central repository for referencing which CSS properties are compatible with any browser and version.
- Ghostlab: An app for synchronized browser testing across multiple devices.
- OpenDeviceLab.com: An interactive map to locate nearby device labs for testing.
- Establishing an Open Device Lab: Smashing Magazine does a deep dive on the benefits of device labs and tips for how to set one up.
- Support vs. Optimization: Brad Frost does a fine job of covering the difference between the two and how that can affect the way code is written.
- Cross-browser testing services: CrossBrowserTesting or BrowserStack.
Area 3: Specificity
Now is the time to gauge just how specific the elements in the code are and identify whether there are opportunities to refactor. This creates a responsible cascade of styles where elements are either as modular or as specific as we want them to be.
Questions to ask:
- Are IDs used anywhere? If so, will a class name work instead? Does the style guide have anything to say about this?
- Does `!important` make its way into the code? If so, why was it used, and can the code be refactored to avoid it?
- Do we rely on prefixing and, if so, are prefixes organized in such a way that there are proper defaults in place for unsupported browsers?
- How modular are the elements? Do they hold up in a “Kitchen Sink” test, where they are all used together in the same HTML document?
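To make that concrete, here is the kind of refactor a specificity review often suggests, trading an ID-heavy selector and an `!important` for a single low-specificity class (the selectors are illustrative):

```css
/* Before: high specificity, and !important papers over it.
   Anything that later needs to restyle this link has to escalate. */
#site-nav ul li a {
  color: red !important;
}

/* After: one class, low and predictable specificity.
   Overriding it later takes just another single class. */
.site-nav-link {
  color: red;
}
```

The "after" version is also what makes the kitchen-sink test pass: components styled with flat, single-class selectors don’t fight each other when they share a page.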
Available resources:
- CSS Specificity Graph Generator: A tool to visualize the specificity across an entire stylesheet.
- Specificity Calculator: A nice way to visualize just how specific an element is.
- Specifics on CSS Specificity: A post on what specificity is and its effects on how styles cascade.
- Responsive Deliverables: I love the idea of creating “Mini Bootstraps” in general, but it would be great to test specificity as well.
Area 4: Preprocessor Utilization
Ignore this if the project isn’t using a preprocessor. But if it is, there are certainly some extra things to think about.
Questions to ask:
- Are existing variables being used where they should be? Have new variables been introduced? If so, do they make sense and are they documented?
- Were other abstractions (e.g. extends, mixins, loops, maps) used effectively and in adherence with how the rest of the code does things?
- Is the new CSS placed into the correct partials? Were new partials used? Do they make sense architecturally?
- Are they following any established preprocessor-specific style guides?
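As an example of the kind of reuse a reviewer looks for, new styles should lean on the project’s existing variables and mixins instead of repeating raw values. A sketch in Sass (all names below are placeholders, not from any real project):

```scss
// Assumed to already exist in the project's settings partial
$color-brand: #e53b2c;

@mixin visually-hidden {
  position: absolute;
  width: 1px;
  height: 1px;
  overflow: hidden;
  clip: rect(0 0 0 0);
}

// New code reuses the shared variable and mixin rather than
// hard-coding a hex value or re-typing the hiding pattern.
.badge {
  background: $color-brand;
}

.badge__label {
  @include visually-hidden;
}
```

A hard-coded `#e53b2c` sprinkled through new partials is exactly the sort of thing this part of the review exists to catch.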
Available resources:
- Sass Style Guide here on CSS-Tricks
- Sass Guidelines by Kitty Giraudel
Area 5: Performance
I include performance toward the end of a review not to belittle it, but to make sure it is the cherry on top of our review sundae. Performant CSS is often about refinement and how it is packaged and served, so it seems appropriate for it to cap off the review process—even if performance is always in the back of our minds as we work.
Questions to ask:
- How much does the final CSS file weigh? Are we planning to minify and gzip it (yes, please!) and what is the weight difference when we do?
- How many different stylesheets are we loading (via `<link>` or native `@import`)? Can we reduce that number? Can they be served conditionally? Asynchronously?
- We’re caching CSS in production, right?
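Answering the file-weight question doesn’t require much tooling. A rough command-line sketch (the filename is a placeholder; point it at your real build output):

```shell
# Write a tiny stand-in stylesheet, then compare raw vs gzipped size.
printf 'body { margin: 0; padding: 0; }\n' > style.css

raw=$(wc -c < style.css)
gz=$(gzip -c style.css | wc -c)

echo "raw: ${raw} bytes"
echo "gzipped: ${gz} bytes"
```

Running this before and after minification gives the weight-difference number the review is asking for; on a file this tiny gzip can actually add bytes, which is itself a useful reminder that these numbers are worth measuring rather than assuming.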
Available resources:
- CSS Stats: Pop in the URL of a site and get a slew of stats returned, including a report of all the fonts and colors that were used.
- CSS Dig: Very similar to CSS Stats, but packaged as a convenient Chrome browser extension.
- unCSS: A task-runner that removes unused CSS by comparing it to the HTML and JavaScript markup. Available for Grunt, Gulp, and Broccoli.
- Critical CSS: Another task-runner, but one that creates individual CSS files based on the HTML markup for a given page. This optimization is consistently recommended by Google PageSpeed Insights.
- loadCSS: A function for loading CSS asynchronously
Wrapping up
I definitely wouldn’t say that every question and tool mentioned here is needed or even relevant for all projects. In fact, there are likely more questions and tools to consider. Are there questions you routinely ask before releasing CSS? For example, where would accessibility fit in for you? Please share!
Also, remember that the goal of auditing our code is not to make it perfect. We make compromises in CSS for many reasons (whether intentional or not) and, at the end of the day, just trying to do a good job is more than enough and this is part of that effort.
Though it’s included in some style guides, I think there should be an area dedicated just to nesting.
No mention of SMACSS? It’s been a godsend to our team in terms of organizing project structure and extensibility.
Overall good overview for a CSS code review. Thanks!
I think SMACSS (or any sort of CSS style convention) is implied. The goal is to ensure that whatever standards are adopted are adopted consistently and that everyone on the team is accountable to them.
One thing I’d add to Area 0: It doesn’t do things it’s not supposed to.
I work in a department that generates a lot of web-hosted projects that are “like that other one you did, but….” So the starting point for a lot of these projects is copying the code of the previous one, and the default is to add to that code for whatever the new project needs. The result is often sprawling CSS with selectors that aren’t even present in the new project.
So having a review that pointed out these files were trying to do a lot that wasn’t relevant, and slowing down load time in the process, would be a useful step.
That’s a good point: clearly define what CSS means for the project and make sure it is neither failing those standards nor stepping out of bounds.
You can reduce the error-prone human factor by adding validation tasks to your build automation. It is very easy to constantly check whether your CSS adheres to previously defined standards.
I love using scss-lint to check whether a project follows very strict style guidelines, including naming conventions for selectors. Plus, you can write your own regular expressions to test against.
Agreed, automated tasks are rad! Now, running those in parallel with some sort of shared code style guide would be pretty epic, rather than having to maintain those standards and the documentation in two places.
I actually “grade” review our bootcamp students’ CSS all the time, and I can say that beginners often make very poor CSS selector decisions. In other words, they might have one `header` element on the page, so they’ll write `header {}` without anticipating another header. Or perhaps they’ll write `.some-component div {}` with the idea that right now their component only has one div, so this seems okay to them. I guess what I’m saying is that beginners have a hard time anticipating the future of the project and how the way selectors are written will inevitably break as more markup is added. In these cases `.primary-header` and `.some-component .sub-component` could be stronger.
Obviously we can’t write selectors in ways that are bulletproof to future markup change, but I would say that another thing to look for in a review is what I call “future-proofing” the selectors to make breakage less likely.
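Side by side, the contrast in those examples looks like this (declarations are placeholders; the selectors are the ones mentioned above):

```css
/* Fragile: breaks as soon as a second header or div appears */
header {
  background: #333;
}
.some-component div {
  padding: 1em;
}

/* Future-proofed: tied to the element's role, not its tag */
.primary-header {
  background: #333;
}
.some-component .sub-component {
  padding: 1em;
}
```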
That’s a fine point indeed, and goes for many teams as well. That’s where defining standards up front in some sort of code style guide would be a benefit, knowing that it too will need to be well maintained and changed as standards evolve…kinda like we did ourselves earlier this year with a Sass style guide: https://css-tricks.com/sass-style-guide/
One more thing worth mentioning is Parker (https://github.com/katiefenn/parker), a very handy CSS stats tool.
We’ve done CSS and HTML reviews at a previous company (in fact every line of code was reviewed before going into prod).
It takes a lot less time than people think. We weren’t looking for detailed analysis, or even looking at the rendered content, just the code. QA will pick up any visual bugs or browser inconsistencies. Effectively we were checking all of the above except Areas 0 and 5.
You might also be surprised how much gets picked up at review.
I agree with you; it takes hardly any time at all, and the more you do it, the better you get at spotting the inconsistencies. At my first job I was code reviewed at the end of every project, and it led to me becoming a much better developer. After all, you can’t learn from your mistakes if you don’t know they were mistakes.
I agree with the points, apart from `!important`. It has its uses, but it should only be used under certain circumstances.
I follow the ITCSS system for CSS organisation, and with that I have a trumps section that comes last in the stylesheet. It contains a list of helper classes that always have to win out, so in this file everything gets `!important` added.
Outside of this, though, any `!important` found should be scrutinised, and likewise so should any ID.
Great article Geoff! I’ve been implementing a lot of similar techniques on my team over the last couple months. We’ve been particularly honing in on using SCSS Lint and CSSComb together to enforce and automate our Styleguide conventions across the distributed team. By automating as many of the stylistic conventions as possible, it’s freed us up to use our actual code review time to focus on more strategic decisions like class naming, specificity and when to use or break from existing site patterns.
I particularly like the scss-lint editor plugins for Atom and SublimeText, since they serve as a visual reminder of the Styleguide as you go. Another plus of the visual reminder is that you can see what’s wrong before watching your build fail. In our case we’re working on improving a large legacy system, so I wanted a tool that could start helping us improve sections incrementally instead of forcing us to make a wholesale change before we’re ready.
One area that has raised some debate in our CSS code reviews is around the use of post-processors and how they affect the final outputted code. For instance, you might come across something like:
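For instance, a rule along these lines (the selector is a placeholder; the declarations are assumed from the critiques that follow):

```css
.example {
  padding: 0px;
  transform: translate(20px,0);
  line-height: 0.9;
  font-size: 16px;
  margin: 0 !important;
}
```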
You might assume a reasonable critique could include:
- `padding: 0px;` – remove units for “0” values
- `transform: translate(20px,0);` – vendor prefixes?
- `line-height: 0.9;` – no need for the “0” here
- `font-size: 16px;` – you could save some bytes by using `font-size: 1pc` instead
- `margin: 0 !important;` – why `!important`? A comment to explain why you need this would be good; also, you don’t need the final semicolon
The thing is, our post-processor (in our case postcss) takes care of all of these things (bar the comment). So the question is – where do you draw the line between pre-processed code quality and post-processed final output when reviewing code? How do you strike a balance between enforcing good coding practice and letting the post-processor do its thing?
Reading through Geoff’s advice, I think we could improve our own process by focusing more on Areas 1, 3, and 4 and less on semantics and micro-performance enhancers, which is something I’ll certainly take from this.
I don’t think the quality of the processed code really matters; nobody is going to read it, it’s for the machine.
The pre-processed code is where we should be following best practices and standards.
As in my comment above, the comment is unnecessary if you are using a trumps file for strict overrides, but I agree that anywhere else it should be commented. As should any magic numbers.
Generally speaking, when I see something like `margin: 0;` I always wonder what style it’s undoing. It’s fine if it’s overriding a browser default, like the margin on a blockquote. But if it’s overriding some previously declared CSS, then it’s likely there’s some issue with the underlying structure; if we find ourselves overriding a lot of styles, we probably declared them too early.
I don’t think omitting certain units or the semicolon on the final declaration and so forth matters in the pre-processed code; our post-processor can remove all of that stuff. I personally prefer to keep the semicolons for the same reason I think it’s a bad idea to remove them in JavaScript: it’s a common source of bugs and confusion.
Very good. TNX