
Front-End Testing in the Dusk of the Second Age

Shawn Wang has pointed out that the world of JavaScript and front-end software development may be on the cusp of a Third Age, and I for one could not be happier to jump on that bandwagon. But, alas, I’ve saddled myself with several projects written in the technology that has defined this Second Age: React. In my exploration of React and its ecosystem (and embarking on a few projects that are perhaps too ambitious to do alone), I have found only one thing truly befuddling about building modern JavaScript applications: testing.

Near the beginning of this last month, I published an article about how I was going to spend the month writing shitty tests. And I’m happy to report that I have, in fact, written many, many shitty tests. Have I written enough that I can now write amazing tests? No, no I haven’t, and it may take me years before I finally get to that point (perhaps this coming Third Age has an answer to my strife).

I have, however, learned. In particular, the most informative experience I had this month was reading the excellent The Art of Software Testing, the bible of the field, which has managed to endure (with a handful of revisions) for nearly 40 years. The Art of Software Testing brought back semi-forgotten memories of sitting through my software engineering class in college and clarified many things I had always wondered about, the biggest perhaps being how a good test is formulated. What follows is a series of lessons I’ve learned over the last month, both from this book and from the escapades I’ve had thus far trying to write tests for the front-end.

Disclaimer: What follows are the opinionated ramblings of a somewhat-educated software developer who is taking their first steps into this wild world of front-end web development.

Testing Intuition Isn’t Total Gibberish

I want to get this out of the way, since the topic of the last article I wrote on software testing was the idea of “testing intuition”. Looking back at that article, and perhaps to no one’s surprise, it’s very clear I had no idea what I was talking about, which aptly describes the level of knowledge I had about testing at the time (and probably still maintain).

Turns out, however, that the notion of “testing intuition” is not complete gibberish. In the chapters discussing how one goes about formulating tests, the authors of The Art of Software Testing do address something similar, which they refer to as error guessing. Error guessing is described as the process where:

Given a particular program, [the tester will] surmise, both by intuition and experience, certain probable types of errors and then write test cases to expose those errors.

This practice is understandably ill-advised as one’s only strategy, since it isn’t backed by any kind of rigorous system or methodology like other kinds of test-case design. But it does help you quickly get to the most obvious (or, to the inexperienced, least obvious) test cases, allowing you to spend more effort on discovering the less trivial test cases that a piece of software needs.
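
For instance, a tester might guess from experience that empty input is a likely failure mode and target it directly. A minimal sketch in Jest, where the average function and its behavior are assumptions for illustration:

    // average.test.js -- "average" is a hypothetical function under test.
    import { average } from "./average";

    // Error guessing: experience says empty and single-element arrays are
    // the inputs most likely to expose a bug, so test them directly.
    test("average of an empty list does not throw", () => {
      expect(() => average([])).not.toThrow();
    });

    test("average of a single value is that value", () => {
      expect(average([42])).toBe(42);
    });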

Software Testing Is For Humans Too

What originally started my dive into testing on the front-end was an application I was working on for collaboratively playing D&D and other table-top role-playing games online. This kind of application, which is very heavy in the graphics department, is plain hard to test.

Take, for example, the Token component, which represents a character on the game board. This component has a few simple requirements:

  1. Show a portrait of the character.
  2. Move with the user’s mouse cursor when the user clicks and drags the Token.

As informal as these requirements are, they are accurate to the basic functions of the component. And both of these are impossible to test via an automated process. Why?

Take, for example, the first requirement: “Show a portrait of the character”. This is a pretty bad requirement, since ‘show’ could mean a few things. A better requirement, and one that is centered on what the user will care about, is: “On the token, a user shall see a portrait of the character that token represents”. And this is where the trouble begins: how do you verify that your user can see the portrait with an automated test runner? Sure, there are shortcuts, like checking that the Token has an image element embedded in it, and that the image element is pointing to the right image file. But how do you know if the image element is visible? You would need several boundary tests to check that every possible configuration of the Token keeps the image element visible.
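
As a sketch of what that shortcut looks like in practice, assuming Jest with React Testing Library and the jest-dom matchers, plus a hypothetical Token component whose portrait image uses the character’s name as its alt text:

    import { render, screen } from "@testing-library/react";
    import Token from "./Token"; // hypothetical component

    test("Token renders the character's portrait", () => {
      const character = { name: "Strider", portrait: "/portraits/strider.png" };
      render(<Token character={character} />);

      // Shortcut: an image element exists and points at the right file...
      const portrait = screen.getByRole("img", { name: /strider/i });
      expect(portrait).toHaveAttribute("src", "/portraits/strider.png");
      // ...but nothing here proves a user can actually see the portrait.
    });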

But what if you added an offset to the image, or applied a shape mask over the image to make it render in a way that was appealing? How do you verify that the offset doesn’t send the image off the screen? How do you verify that the shape mask doesn’t fully obscure the image? These are things that are incredibly hard to test in an automated setup, usually requiring visual regression testing that checks the visual appearance of a component. But many influential voices in the modern testing scene consider these tests fragile, since visual regression testing requires pixel-by-pixel comparisons against a prior snapshot of the token’s appearance.
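
For reference, such a test often looks something like this sketch, using Puppeteer with the jest-image-snapshot matcher (the preview URL is hypothetical, and the baseline snapshot is created on the first run):

    import puppeteer from "puppeteer";
    import { toMatchImageSnapshot } from "jest-image-snapshot";

    expect.extend({ toMatchImageSnapshot });

    test("Token matches its previously approved appearance", async () => {
      const browser = await puppeteer.launch();
      const page = await browser.newPage();
      await page.goto("http://localhost:3000/token-preview"); // hypothetical page

      const screenshot = await page.screenshot();
      await browser.close();

      // Compared pixel-by-pixel against the stored snapshot; any rendering
      // drift fails the test, which is exactly why these are called fragile.
      expect(screenshot).toMatchImageSnapshot();
    });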

Ergo, you will inevitably need a user to verify that the simple requirement “Show a portrait of the character” is satisfied. And this shouldn’t be surprising, since human testing is the first testing method discussed in The Art of Software Testing. And, I’d argue, it’s more valuable than integration tests in the “trophy” of software testing, simply because a human saying “something looks funny” can provide a lot more information than a machine saying “something looks funny”.

Confidence In The Details

The thing that bugs me the most about the mantra “don’t test implementation details”, which seems to dominate the discussion of front-end software testing, is that the methodology idolizes the concept of black-box testing. For the unfamiliar, black-box testing is the idea of testing a program by feeding data in and then checking that the data that comes out is correct. While black-box testing is not inherently problematic, The Art of Software Testing illustrates that relying on it alone is a terrible way to go about testing software, since it would require an untenable number of black-box tests to fully confirm that the software contains no, or at the very least few, bugs. (Not to mention there may be some requirements that cannot be verified by checking outputs alone.)

There is another way to perform testing, the reverse of black-box testing, known as white-box testing. This kind of testing validates that the software meets requirements by using knowledge of the software’s inner workings. In other words, white-box testing tests those vile implementation details. The Art of Software Testing talks about these kinds of tests as well, but gives them the same verdict as black-box testing: you cannot reasonably use only white-box tests to validate that your software is free of bugs.

The solution, The Art of Software Testing proposes, is a combination of strategically designed black-box and white-box tests. By verifying that your code yields outputs that satisfy the requirements, and internally functions within the specifications, you can have an incredible amount of confidence in the code you deliver with a reasonable, but by no means small, number of tests. And it’s this methodology I stand by, for reasons I have observed over the last month of writing shitty tests.
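
To make the distinction concrete, here is a hypothetical example: a made-up slugify function tested both ways (the function, its behavior, and its internals are all assumptions for illustration):

    import { slugify } from "./slugify"; // hypothetical module

    // Black-box: derived purely from the requirement "titles become
    // URL-safe slugs", with no knowledge of how slugify works inside.
    test("slugify produces a URL-safe slug", () => {
      expect(slugify("The Art of Software Testing")).toBe(
        "the-art-of-software-testing"
      );
    });

    // White-box: this case exists only because we know (by assumption)
    // that the implementation has a separate transliteration branch for
    // non-ASCII input, and we want that branch exercised.
    test("slugify transliterates accented characters", () => {
      expect(slugify("Déjà Vu")).toBe("deja-vu");
    });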

The primary reason I stand by this “grey-box” testing method is that for every code base there are two categories of users: end-users and developers. Black-box testing is perfect for satisfying end-users, and some classes of developers (those, for instance, who consume some data processing API), but it doesn’t work so well when developers have to interact with your code structure (e.g., a fellow teammate) and be confident that when they change something, the code doesn’t suddenly break.

Specifications are Gold

From my journey in testing so far, the biggest pain I have suffered is sitting down to write the tests for a component and not knowing what to start with. Trying to hold to the principle of TDD, in these situations I had no actual component to test, only tests to write to describe the component I would build. But I almost always ended up staring at a blinking cursor, suffering a case of analysis paralysis as my brain went into a small existential crisis about what the hell I was actually trying to build.

And I’m sure it’s no surprise to anyone that it’s hard to write tests without knowing what the specification for the component is. Shooting from the hip is a great way to watch a project fall apart because none of the tests are testing things that really matter. Or worse, there are no tests at all. Unfortunately, writing specifications can send one down the rabbit hole of requirements engineering and software architecture, a rabbit hole I’m all too familiar with at this point, and one that has led a few too many of my projects to rot.

Conclusion

There’s a lot I have yet to figure out about this testing business with modern, front-end web applications, but I think this past month has been a decent start toward learning a little bit more about the subject. My next goal is to start playing with other tools for building front-ends, like Vue and the shiny new framework Svelte (which I and so many other devs seem to adore), and learning whether testing with them is any different from testing with React, or whether it is fraught with all the same problems.

But for now, my key takeaways are:

  • ABT: Always Be Testing. The more tests you write, the quicker and easier it will be to recognize potential problem areas and write tests to cover them.
  • Don’t forget the humans. Automated tests are great, but there are some things computers still can’t help us test. In those cases, don’t be afraid to invite a person to help you!
  • The devil’s in the details. Leaving implementation details untested is asking for poor developer experience and software architecture.
  • Stop! And write yourself some specifications! Don’t build software you intend to test without first knowing what it’s expected to do (and not do).
