Front-End Testing is For Everyone

By Evgeny Klimenchenko

Testing is one of those things that you either get super excited about or kinda close your eyes and walk away. Whichever camp you fall into, I’m here to tell you that front-end testing is for everyone. In fact, there are many types of tests and perhaps that is where some of the initial fear or confusion comes from.

I’m going to cover the most popular and widely used types of tests in this article. This might be nothing new to some of you, but it can at least serve as a refresher. Either way, my goal is that you’re able to walk away with a good idea of the different types of tests out there. Unit. Integration. Accessibility. Visual regression. These are the sorts of things we’ll look at together.

And not just that! We’ll also point out the libraries and frameworks that are used for each type of test, like Mocha, Jest, Puppeteer, and Cypress, among others. And don’t worry — I’ll avoid a bunch of technical jargon. That said, you should have some front-end development experience to understand the examples we’re going to cover.

OK, let’s get started!

What is testing?

Software testing is an investigation conducted to provide stakeholders with information about the quality of the software product or service under test.

Cem Kaner, “Exploratory Testing” (November 17, 2006)

At its most basic, testing is an automated way to catch errors in your code as early as possible. That way, you’re able to fix those issues before they make it into production. Tests also serve as a reminder that you may have forgotten to check your own work in a certain area, say accessibility.

In short, front-end testing validates that what people see on the site and the features they use on it work as intended.

Front-end testing is for the client side of your application. For example, front-end tests can validate that pressing a “Delete” button properly removes an item from the screen. However, it won’t necessarily check if the item was actually removed from the database — that sort of thing would be covered during back-end testing.

That’s testing in a nutshell: we want to catch errors on the client side and fix them before code is deployed.

Different tests look at different parts of the project

Different types of tests cover different aspects of a project, so it’s important to differentiate them and understand the role each one plays. Confusing which tests do what makes for a messy, unreliable testing suite.

Ideally, you’d use several different types of tests to surface different types of possible issues. Some test types have a test coverage analytic that shows just how much of your code (as a percentage) is looked at by that particular test. That’s a great feature, and while I’ve seen developers aim for 100% coverage, I wouldn’t rely on that metric alone. The most important thing is to make sure all possible edge cases are covered and taken into account.
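If you do track coverage, most test runners let you enforce a minimum so a drop doesn’t slip by unnoticed. Here’s what that might look like in a Jest config. This is a rough sketch; the 80% numbers are arbitrary placeholders, not a recommendation.

// jest.config.js
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    // Fail the test run if overall coverage dips below these percentages.
    global: {
      branches: 80,
      functions: 80,
      lines: 80,
      statements: 80,
    },
  },
};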

So, with that, let’s turn our attention to the different types of testing. Remember, it’s not so much that you’re expected to use each and every one of these. It’s about being able to differentiate the tests so that you know which ones to use in certain circumstances.

Unit testing

  • Level: Low
  • Scope: Tests the functions and methods of an application.
  • Possible tools: Jest, Mocha

Unit testing is the most basic building block for testing. It looks at individual components and ensures they work as expected. This sort of testing is crucial for any front-end application because, with it, your components are tested against how they’re expected to behave, which leads to a much more reliable codebase and app. This is also where things like edge cases can be considered and covered.

Unit tests are also great for testing code that relies on APIs. But rather than making calls to a live API, hardcoded (or “mocked”) data makes sure that your test runs are always consistent.
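To make that concrete, here’s a tiny sketch of the idea. Both getUserName and the fake fetch below are made up for illustration; the point is that the test never depends on a live server.

// A hypothetical function that depends on an API.
const getUserName = async (fetchFn, id) => {
  const response = await fetchFn(`/api/users/${id}`);
  const user = await response.json();
  return user.name;
};

// In a test, pass a fake fetch that returns hardcoded ("mocked") data.
const fakeFetch = async () => ({
  json: async () => ({ name: "Evgeny" }),
});

// getUserName(fakeFetch, 1) now resolves to "Evgeny" on every single run.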

Let’s take a super simple (and primitive) function as an example:

const sayHello = (name) => {
  if (!name) {
    return "Hello human!";
  }

  return `Hello ${name}!`;
};

Again, this is a basic case, but you can see that it covers a small edge case where someone may have neglected to provide a first name to the application. If there’s a name, we’ll get “Hello ${name}!” where ${name} is what we expect the person to have provided.

“Um, why do we need to test for something small like that?” you might wonder. There are some very important reasons for this:

  • It forces you to think deeply about the possible outcomes of your function. More often than not, you really do discover edge cases, which you can then cover in your code.
  • Some part of your code can rely on this edge case, and if someone comes and deletes something important, the test will warn them that this code is important and cannot be removed.

Unit tests are often small and simple. Here’s an example:

describe("sayHello function", () => {
  it("should return the proper greeting when a user doesn't pass a name", () => {
    expect(sayHello()).toEqual("Hello human!")
  })

  it("should return the proper greeting with the name passed", () => {
    expect(sayHello("Evgeny")).toEqual("Hello Evgeny!")
  })
})

describe and it are just syntactic sugar; the most important lines are the ones with expect and toEqual. describe and it break the test into logical blocks that are printed to the terminal. The expect function accepts the input we want to validate, while toEqual accepts the desired output. There are a lot of different functions and methods you can use to test your application.

Let’s say we’re working with Jest, a library for writing unit tests. In the example above, Jest will display “sayHello function” (the describe block’s name) as a title in the terminal. Everything inside an it function is considered a single test and is reported below that title, making everything very easy to read.

The green checkmarks mean both of our tests have passed. Yay!
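As a quick aside, toEqual is just one of many matchers Jest ships with. Here are a few more, purely for illustration; this isn’t part of the example above.

it("demonstrates a few other Jest matchers", () => {
  expect(sayHello("Evgeny")).toContain("Evgeny"); // substring check
  expect(typeof sayHello()).toBe("string");       // strict equality
  expect(() => sayHello()).not.toThrow();         // the call shouldn't throw
});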

Integration testing

  • Level: Medium
  • Scope: Tests interactions between units.
  • Possible tools: Jest

If unit tests check the behavior of individual blocks, integration tests make sure those blocks work flawlessly together. That makes integration testing super important because it opens up testing the interactions between components. It’s rare (if ever) that an application is composed of isolated pieces that function entirely by themselves. That’s why we rely on integration tests.

We go back to the function we unit tested, but this time use it in a simple React application. Let’s say that clicking a button triggers a greeting to appear on the screen. That means a test involves not only the function but also the HTML DOM and a button’s functionality. We want to test how all these parts play together.

Here’s the code for a <Greeting /> component we’re testing:

import { useState } from "react";
// sayHello is the function we unit tested earlier; import it from wherever it lives in your project.
import { sayHello } from "./sayHello";

export const Greeting = () => {
  const [showGreeting, setShowGreeting] = useState(false);

  return (
    <div>
      <p data-testid="greeting">{showGreeting && sayHello()}</p>
      <button data-testid="show-greeting-button" onClick={() => setShowGreeting(true)}>Show Greeting</button>
    </div>
  );
};

Here’s the integration test:

// Assuming the render and fireEvent helpers come from React Testing Library (or a similar tool).
import { render, fireEvent } from "@testing-library/react";
import { Greeting } from "./Greeting";

describe('<Greeting />', () => {
  it('shows correct greeting', () => {
    const screen = render(<Greeting />);
    const greeting = screen.getByTestId('greeting');
    const button = screen.getByTestId('show-greeting-button');

    expect(greeting.textContent).toBe('');
    fireEvent.click(button);
    expect(greeting.textContent).toBe('Hello human!');
  });
});

We already know describe and it from our unit test; they break tests up into logical parts. We also have the render function, which displays the <Greeting /> component in a special emulated DOM so we can test interactions with the component without touching the real DOM, which would be far more costly.

Next up, the test queries the <p> and <button> elements via their test IDs (greeting and show-greeting-button, respectively). We use test IDs because it’s easier to get the elements we want from the emulated DOM. There are other ways to query components, but this is how I do it most often.
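For reference, assuming the render helper here comes from something like React Testing Library (the article’s examples don’t name the library, so treat that as an assumption), the same elements could also be queried by accessible role or by visible text:

const { getByRole, getByText } = render(<Greeting />);

// Query by accessible role and name...
const button = getByRole("button", { name: "Show Greeting" });

// ...or by the text the user actually sees.
const sameButton = getByText("Show Greeting");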

It’s not until the first expect that the actual integration test begins! We first check that the <p> tag is empty. Then we click the button by simulating a click event. And lastly, we check that the <p> tag contains “Hello human!” inside it. That’s it! All we’re testing is that an empty paragraph contains text after a button is clicked. Our component is covered.

We can, of course, add input where someone types their name and we use that input in the greeting function. However, I decided to make it a bit simpler. We’ll get to using inputs when we cover other types of tests.
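That said, purely as a sketch of what such a test could look like, here’s one version. It assumes a hypothetical <input data-testid="name-input" /> in the component whose value gets passed to sayHello:

it("greets the user by the name they typed", () => {
  const screen = render(<Greeting />);

  // Type a name into the hypothetical input, then show the greeting.
  fireEvent.change(screen.getByTestId("name-input"), { target: { value: "Evgeny" } });
  fireEvent.click(screen.getByTestId("show-greeting-button"));

  expect(screen.getByTestId("greeting").textContent).toBe("Hello Evgeny!");
});

For now, though, let’s stick with the simpler version.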

Check out what we get in the terminal when running the integration test:

Terminal message showing a passed test like before, but now with a specific test item for showing the correct greeting. It includes the number of tests that ran, how many passed, how many snapshots were taken, and how much time the tests took, which was 1.085 seconds.
Perfect! The <Greeting /> component shows the correct greeting when clicking the button.

End-to-end (E2E) testing

  • Level: High
  • Scope: Tests user interactions in a real-life browser by providing it instructions for what to do and expected outcomes.
  • Possible tools: Cypress, Puppeteer

E2E tests are the highest level of testing in this list. E2E tests care only about how people see your application and how they interact with it. They don’t know anything about the code and the implementation.

E2E tests tell the browser what to do, what to click, and what to type. We can create all kinds of interactions that test different features and flows as the end user experiences them. It’s literally a robot that’s instructed to click through an application to make sure everything works.

E2E tests are similar to integration tests in some ways. However, E2E tests are executed in a real browser with a real DOM rather than something we mock up, and we generally work with real data and a real API in these tests.

It is good to have full coverage with unit and integration tests. However, users can face unexpected behaviors when they run an application in the browser — E2E tests are the perfect solution for that.

Let’s look at an example using Cypress, an extremely popular testing library. We are going to use it for an E2E test of our previous component, this time running in a real browser against a version of the app with a few extra features: a navigation button and a name input.

Again, we don’t need to see the code of the application. All we’re assuming is that we have some application and we want to test it as a user. We know what buttons to click and the IDs those buttons have. That’s all we really have to go off of.

describe('Greetings functionality', () => {  
  it('should navigate to greetings page and confirm it works', () => {
    cy.visit('http://localhost:3000')  
    cy.get('#greeting-nav-button').click()  
    cy.get('#greetings-input').type('Evgeny', { delay: 400 })  
    cy.get('#greetings-show-button').click()  
    cy.get('#greeting-text').should('include.text', 'Hello Evgeny!')  
  })  
})

This E2E test looks very similar to our previous integration test. The commands are extremely similar, the main difference being that these are executed in a real browser.

First, we use cy.visit to navigate to a specific URL where our application lies:

cy.visit('http://localhost:3000')

Second, we use cy.get to get the navigation button by its ID, then instruct the test to click it. That action will navigate to the page with the <Greetings /> component. In fact, I’ve added the component to my personal website and provided it with its own URL route.

cy.get('#greeting-nav-button').click()

Then, sequentially, we get text input, type “Evgeny,” click the #greetings-show-button button and, lastly, check that we got the desired greeting output.

cy.get('#greetings-input').type('Evgeny', { delay: 400 })
cy.get('#greetings-show-button').click()
cy.get('#greeting-text').should('include.text', 'Hello Evgeny!')  

It is pretty cool to watch how the test clicks buttons for you in a real live browser. I slowed down the test a bit so you can see what is going on. All of this usually happens very quickly.

Here is the terminal output:

Terminal showing a test run for greetings.spec.js that passed in 12 seconds.

Accessibility testing

  • Level: High
  • Scope: Tests the interface of your application against accessibility standards criteria.
  • Possible tools: Lighthouse

Web accessibility means that websites, tools, and technologies are designed and developed so that people with disabilities can use them.

W3C

Accessibility tests make sure people with disabilities can effectively access and use a website. These tests validate that you follow the standards for building a website with accessibility in mind.

For example, many people with visual impairments use screen readers. Screen readers scan your website and attempt to present it in a format (usually spoken) that those users can understand. As a developer, you want to make a screen reader’s job easy, and accessibility testing will help you understand where to start.
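As a tiny illustration (this component is made up for this example, not something from the demo app), an icon-only button needs an explicit accessible name so a screen reader has something to announce:

// aria-label gives the icon-only button an accessible name;
// aria-hidden keeps the decorative SVG out of the accessibility tree.
export const ShowGreetingButton = ({ onClick }) => (
  <button aria-label="Show greeting" onClick={onClick}>
    <svg aria-hidden="true" width="16" height="16" viewBox="0 0 16 16">
      <circle cx="8" cy="8" r="7" />
    </svg>
  </button>
);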

There are a lot of different tools, some of them automated and some run manually, to validate accessibility. For example, Chrome already has one tool built right into its DevTools. You may know it as Lighthouse.

Let’s use Lighthouse to validate the application we made in the E2E testing section. We open Lighthouse in Chrome DevTools, click the “Accessibility” test option, and “Generate” the report.

That’s literally all we have to do! Lighthouse does its thing, then generates a lovely report, complete with a score, a summary of audits that ran, and an outline of opportunities for improving the score.

But this is just one tool that measures accessibility from its particular lens. We have all kinds of accessibility tooling, and it’s worth having a plan for what to test and the tooling that’s available to hit those points.
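Accessibility checks can also run right inside a test suite. Here’s a sketch using jest-axe, a library not covered in this article, against our <Greeting /> component, assuming React Testing Library for rendering:

import { render } from "@testing-library/react";
import { axe, toHaveNoViolations } from "jest-axe";
import { Greeting } from "./Greeting";

// Register the custom matcher provided by jest-axe.
expect.extend(toHaveNoViolations);

it("has no detectable accessibility violations", async () => {
  const { container } = render(<Greeting />);
  const results = await axe(container);
  expect(results).toHaveNoViolations();
});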

Visual regression testing

  • Level: High
  • Scope: Tests the visual structure of an application, including the visual differences produced by a change in the code.
  • Possible tools: Cypress, Percy, Applitools

Sometimes E2E tests aren’t enough to verify that the latest changes to your application didn’t break its visual appearance. Have you ever pushed changes to production only to realize they broke the layout of some other part of the application? Well, you are not alone. More often than we’d like, a change to the codebase breaks an app’s visual structure, or layout.

The solution is visual regression testing. The way it works is pretty straightforward: visual tests take a screenshot of pages or components and compare it with screenshots captured in previous successful tests. If the tests find any discrepancies between the screenshots, they’ll give us some sort of notification.
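Under the hood, the comparison is basically a pixel-by-pixel diff. Here’s a rough sketch of that idea using two libraries that aren’t covered in this article, pngjs and pixelmatch, with placeholder file names; dedicated services handle this for you at scale with smarter diffing.

const fs = require("fs");
const { PNG } = require("pngjs");
const pixelmatch = require("pixelmatch");

// Placeholder file names: a previously approved screenshot and a fresh one.
const baseline = PNG.sync.read(fs.readFileSync("baseline.png"));
const current = PNG.sync.read(fs.readFileSync("current.png"));
const { width, height } = baseline;
const diff = new PNG({ width, height });

// Count how many pixels differ between the two screenshots.
const mismatched = pixelmatch(baseline.data, current.data, diff.data, width, height, {
  threshold: 0.1,
});

if (mismatched > 0) {
  fs.writeFileSync("diff.png", PNG.sync.write(diff));
  console.log(`Visual change detected: ${mismatched} pixels differ. See diff.png`);
}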

Let’s turn to a visual regression tool called Percy to see how visual regression testing works. There are a lot of other ways to do visual regression tests, but I think Percy is the simplest to show in action. In fact, you can jump over to Paul Ryan’s deep dive on Percy right here on CSS-Tricks. But we’ll do something considerably simpler to illustrate the concept.

I intentionally broke the layout of our Greeting application by moving the button to the bottom of the input. Let’s try to catch this error with Percy.

Percy works well with Cypress, so we can follow their installation guide and run Percy regression tests along with our existing E2E tests.

describe('Greetings functionality', () => {
  it('should navigate to greetings page and confirm everything is there', () => {
    cy.visit('http://localhost:3000')
    cy.get('#greeting-nav-button').click()
    cy.get('#greetings-input').type('Evgeny', { delay: 400 })
    cy.get('#greetings-show-button').click()
    cy.get('#greeting-text').should('include.text', 'Hello Evgeny!')

    // Percy test
    cy.percySnapshot()
  })
})

All we added at the end of our E2E test is a one-liner: cy.percySnapshot(). This will take a screenshot and send it to Percy to compare. That is it! After the tests have finished, we’ll receive a link to check our regressions. Here is what I got in the terminal:

Terminal output that shows white text on a black background. It displays the same result as before, but with a step showing that Percy created a build and where to view it.
Hey, look, we can see that the E2E tests have passed as well! That shows how E2E testing won’t always catch a visual error.

And here’s what we get from Percy:

Animated gif of a webpage showing a logo and navigation above a form field. The animation overlays the original snapshot with the latest to reveal differences between the two.
Something clearly changed and it needs to be fixed.

Performance testing

  • Level: High
  • Scope: Tests the application for performance and stability.
  • Possible tools: Lighthouse

Performance testing is great for checking the speed of your application. If performance is crucial for your business — and it likely is given the recent focus on Core Web Vitals and SEO — you’ll definitely want to know if the changes to your codebase have a negative impact on the speed of the application.

We can bake this into the rest of our testing flow, or we can run them manually. It’s totally up to you how to run these tests and how frequently to run them. Some devs create what’s called a “performance budget” and run a test that calculates the size of the app — and a failed test will prevent a deployment from happening if the size exceeds a certain threshold. Or, test manually every so often with Lighthouse, as it also measures performance metrics. Or combine the two and build Lighthouse into the testing suite.
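If you want the “fail the build” flavor of that, one option (not covered in this article) is Lighthouse CI, which can assert minimum scores as part of a test run. A minimal sketch of its config, with a placeholder URL and threshold:

// lighthouserc.js (sketch: the URL and threshold are placeholders)
module.exports = {
  ci: {
    collect: {
      url: ["http://localhost:3000"],
      numberOfRuns: 3,
    },
    assert: {
      assertions: {
        // Fail the run if the Lighthouse performance score drops below 90.
        "categories:performance": ["error", { minScore: 0.9 }],
      },
    },
  },
};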

Performance tests can measure anything related to performance. They can measure how fast an application loads, the size of its initial bundle, and even the speed of a particular function. Performance testing is a broad landscape.
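For instance, measuring the speed of a single function can be as simple as wrapping it with the Performance API. A rough sketch you could drop into the browser console:

// Time a single call to the sayHello function from earlier.
const start = performance.now();
sayHello("Evgeny");
const elapsed = performance.now() - start;
console.log(`sayHello took ${elapsed.toFixed(3)}ms`);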

Here’s a quick test using Lighthouse. I think it’s a good one to show because of its focus on Core Web Vitals as well as how easily accessible it is in Chrome’s DevTools without any installation or configuration.

A Lighthouse report open in Chrome DevTools showing a Performance score of 55 indicated by an orange circle bearing the score. Various metrics are listed below the score, including a timeline of the page as it loads.
Not a great score, but at least we can see what’s up and we have some recommendations for how to make improvements.

Wrapping up

Here’s a breakdown of what we covered:

  • Unit. Level: Low. Scope: Tests the functions and methods of an application. Tooling examples: Jest, Mocha.
  • Integration. Level: Medium. Scope: Tests interactions between units. Tooling examples: Jest.
  • End-to-end. Level: High. Scope: Tests user interactions in a real-life browser by providing it instructions for what to do and expected outcomes. Tooling examples: Cypress, Puppeteer.
  • Accessibility. Level: High. Scope: Tests the interface of your application against accessibility standards criteria. Tooling examples: Lighthouse.
  • Visual regression. Level: High. Scope: Tests the visual structure of an application, including the visual differences produced by a change in the code. Tooling examples: Percy, Applitools, Cypress.
  • Performance. Level: High. Scope: Tests the application for performance and stability. Tooling examples: Lighthouse.

So, is testing for everyone? Yes, it is! Given all the available libraries, services, and tools we have to test different aspects of an application at different points, there’s at least something out there that allows us to measure and test code against standards and expectations — and some of them don’t even require code or configuration!

In my experience, many developers neglect testing and think that a simple click-through or a quick post-deployment check will catch any possible bugs from a change in the code. If you want to make sure your application works as expected, is inclusive to as many people as possible, runs efficiently, and is well-designed, then testing needs to be a core part of your workflow, whether it’s automated or manual.

Now that you know what types of tests there are and how they work, how are you going to implement testing into your work?