Introduction to Automated Testing for Beginners

Article by Brandon Burrus · Posted on February 13, 2022 · 13 min read

I remember when I was a budding developer first learning about testing, feeling completely overwhelmed by all of the concepts and terms. Testing is a whole world unto itself, so the goal of this article is to give a complete beginner a solid foundation in the key concepts and terms of what it means to write tests, particularly automated tests.

The examples themselves will use the Jest testing framework for JavaScript; however, these will be very surface-level examples and can be easily translated into another language and testing framework, such as pytest for Python or JUnit for Java.

What is Testing?

To start, let's first get a clear definition of what testing is and define some terms. Dictionary.com defines testing as "the means by which the presence, quality, or genuineness of anything is determined; a means of trial." Wow, a bit of a mouthful. But that last bit is on the money: testing is a "means of trial". What's on trial? Our code, of course!

Testing is all about ensuring that our code works as we intend it to. More importantly, it's a sort of insurance that when we make changes to our code, we can execute our tests to make sure that we didn't break something that was previously working.

There are two main "umbrellas" of testing: manual testing and automated testing. Manual testing is the easier of the two, as it's something we as developers do naturally all the time: when we're working on something and need to verify that yes, it's actually working, we'll run our code locally and manually check that the code we've just written is indeed working as expected (and sometimes not, to our chagrin).

However, manual testing isn't great long term. As our codebases naturally grow to include more functionality and features, manual testing means that we'd need to re-test every piece of implemented functionality by hand. For a large codebase, this is terribly inefficient and not cost-effective at all! This is where automated testing steps in.

Automated testing is the process of writing more code to make sure our original code works as expected. At first this might seem strange, but the reason for it is that we can take full advantage of all the constructs of a high-level programming language and make assertions that our code is indeed working the way we intended it to.

"Assertion" is a common term you'll hear, and it's simply the process of checking that a piece of code that was just executed produced the expected output. At the end of the day, that's all testing is: running our code and checking its output to make sure we're getting the results we expected from the beginning.

Types of Tests

Unit Tests

There are several kinds of tests, and this is where the confusion typically begins. Let's start with the simplest kind of automated test, the unit test. Unit tests are typically very, very small tests that are meant to isolate and test the smallest piece of functionality possible. A unit test will typically have one (maybe two) assertions that the test is checking to make sure are correct. Now is also a great time to mention assumptions - an assumption is a sort of "prerequisite" that is usually checked before the assertion is made. It's also worth noting that assumptions are almost always a very implicit part of the test.

Let's look at a quick example of a unit test to get a good idea of the test itself, what the assertion is, and what the assumption(s) might be.

Here is the piece of code we will be placing under test:

function greet(name) {
  return `Hello, ${name}!`;
}

We're keeping it simple for now to make sure we have a good grasp of the basics. Now we'll write the test itself. Don't worry too much about the syntax or how the test will actually be run; focus on the concepts being presented in this test:

expect(greet('World')).toMatch('Hello, World!');

As you can see, this test is very small. The test itself is making sure that the returned output of our function is what we expect() it to be. The assertion here is the toMatch() part: we want to assert that the output matches what we expect. The assumption here is that we're passing something valid to the greet() function. If we didn't actually know how the greet function works (and only what we expect it to return), it might throw an error or return something unexpected when it's given something unexpected.
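
In a real test suite, this assertion would usually live inside a named test case. Here's a minimal sketch of what that might look like as a Jest test file, assuming greet() is exported from a sibling greet.js module (the file names are purely for illustration):

// greet.test.js - a minimal Jest test wrapping the assertion above.
// Assumes greet.js ends with `module.exports = greet;`.
const greet = require('./greet');

test('greet wraps the given name in a greeting', () => {
  // One assertion per test keeps the unit test small and focused.
  expect(greet('World')).toMatch('Hello, World!');
});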

We can also make an explicit assumption with something like the following:

expect(someArray).toBeDefined();
expect(someArray).toEqual(expect.arrayContaining([1, 2, 3]));

Here the first line is an explicit assumption; we assume that the array is defined (and check this just to be sure). The second line is the actual assertion: we expect that the array contains the numbers 1, 2, and 3.

It cannot be stressed enough how important it is that unit tests be small; keeping tests small and focused keeps them fast, allowing us to write more tests and run them all as quickly as possible. The reason this matters comes down to friction.

As humans, we avoid repeated tasks that take up our time and create friction in whatever we are working on. The idea here is that by ensuring our tests are fast, we'll (hopefully) run them more often and can be more confident in the code we're writing. It's also worth mentioning that this is a core tenet of modern DevOps practice: shorten feedback loops to be as small and fast as possible so that we can catch, respond to, and resolve errors as quickly as possible for our users.

Integration Tests

The next type of test is the Integration test. While a unit test is meant to isolate and test a single piece of functionality, an integration test is meant to test multiple things together, thus testing that these items are correctly "integrated" with each other.

An example of an integration test might be testing a redirection link in our UI components. We expect that when we click the link on the first page that we end up on the second page. Or we might test that when a Controller receives a specific request that the "email sender" module actually sends the email. These types of tests can vary in scale as we might be testing the coupling of two modules together, or instead might be testing two entirely separate services that communicate over a message bus or queue.
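
To make that concrete, here is a hedged sketch of the Controller + email sender example as a Jest integration test. The registerUser() controller and the emailOutbox it writes to are hypothetical modules invented for illustration; the point is that nothing is mocked, and the assertion covers the two pieces working together rather than either one in isolation:

// registration.integration.test.js - an integration test sketch.
// registerUser() and emailOutbox are hypothetical modules; neither is
// mocked, so the test exercises them working together.
const { registerUser } = require('./registration');
const { emailOutbox } = require('./emailSender');

test('registering a user sends a welcome email', async () => {
  await registerUser({ email: 'new.user@example.com' });

  // The assertion checks the integration point: the controller called
  // through to the email sender, which recorded an outgoing message.
  expect(emailOutbox).toHaveLength(1);
  expect(emailOutbox[0].to).toBe('new.user@example.com');
});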

Another key difference between integration and unit tests is that the former are typically a bit "heavier" in terms of time. Integration tests tend to be "doing more" than our unit tests, and as such will take a bit longer to run. This tends to lead to there being fewer integration tests overall compared to unit tests; however, that should not diminish the importance of these kinds of tests.

It is here that I will interject about some contention you may hear from programmers when it comes to testing, specifically integration vs unit tests. Some will argue that unit tests are more important, whilst others will say that integration tests are. Regardless of this, it's important to know about both of them as well as to know which one your company or team values more.

End-to-End Tests

The next kind of test is the "end-to-end" test, also called an E2E or UI (short for User Interface) test. These tests instead take on the perspective of the user, and make assertions about what the user should expect to happen when interacting with our application. These are much heavier tests than either integration or unit tests, as they require at minimum an entire sub-section of the application, or even the whole system, to be running and active.

Because we are approaching this from the perspective of the user, we also want these kinds of tests to match our production environment as closely as possible, so as to mimic how the user will actually be interacting with the application in a real-world scenario. However, this further adds to the "size" of E2E tests and means that these kinds of tests are the most costly to run in terms of time.

End-to-end tests are fantastic because they tell us that something unexpected happened when the user did something, but they can sometimes not be as useful as an integration or unit test, as they might not tell you what the issue actually is, only that there is one.

For example, we might be testing that when the user uploads a file, they see a success message when the file has completed its upload. Our assertion would be to expect that the success popup message shows up. But if we introduced a bug into the system and broke the file uploader, our test wouldn't tell us where the issue is, only that the success popup is no longer being shown to the user.
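
As an illustration, here is a hedged sketch of that upload scenario written with Playwright, one popular E2E framework; the page URL, CSS selectors, and fixture file are all made up for the example:

// upload.e2e.test.js - an end-to-end sketch of the file upload flow.
// The URL, selectors, and fixture path are assumptions.
const { test, expect } = require('@playwright/test');

test('uploading a file shows the success popup', async ({ page }) => {
  await page.goto('https://app.example.com/uploads');

  // Drive the UI exactly as a user would: choose a file and submit it.
  await page.setInputFiles('input[type="file"]', 'fixtures/report.pdf');
  await page.click('button#upload');

  // The only assertion is on what the user sees; nothing about how the
  // uploader works internally.
  await expect(page.locator('.upload-success')).toBeVisible();
});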

One last key point about E2E tests is that you, the developer, might not even be the one writing these kinds of tests. It may instead be the responsibility of QA Engineers to write these tests and give the results to the developers as part of the Quality Assurance process. This will vary from company to company (and potentially even between teams), so be sure to ask what the expectations of you are when it comes to testing.

Regression Tests

When a change is introduced to a system that causes unexpected or unwanted behavior, you might hear this referred to as a regression. As the name implies, regression tests attempt to ensure that this does not occur, and when it does, to provide a feedback loop that notifies us that it did. What constitutes an actual regression test can be vague, as it does not have a consistent meaning across organizations.

The axes along which the definition varies usually come down to two things: whether the organization considers all three types of tests (unit, integration, and E2E) to be regression tests, and when those tests are run.

For example, one company might use "regression tests" to mean the E2E tests that run against production using dummy test data, whilst another company might consider them to be the E2E and integration tests that run in some kind of pipeline before the code is allowed to hit production.

Regardless of which one it is, regression tests are about catching bugs and ensuring that we have a feedback loop in place to notify us as quickly as possible when a regression is introduced into the system. It's not a matter of if but when defects will be introduced into our software!

Smoke Tests

Similar to regression tests, the definition of smoke tests might vary slightly from company to company. The idea of a smoke test is to identify the subset of all of your tests that ensures the critical path for your users works. The term itself originates from circuit board makers testing their designs: if the circuit didn't smoke, it was fair to assume it was working as expected. Of course we know this isn't always the case, but it's a nice rule of thumb.

Smoke tests will almost always be a subset of E2E tests that run either before code hits production in a pipeline, or against production itself after the code has been deployed. If the code is "smoking" (if the tests are failing) and the tests are being run after deployment, then some kind of rollback will need to be performed (which itself might be automated).
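
One common way to carve out that subset (a hedged illustration, not the only approach) is a file naming convention plus a dedicated Jest config that only picks up the matching files; the *.smoke.test.js pattern and config file name here are assumptions:

// jest.smoke.config.js - runs only tests that follow the assumed
// *.smoke.test.js naming convention, e.g. in a pipeline step via
// `npx jest --config jest.smoke.config.js`.
module.exports = {
  testMatch: ['**/*.smoke.test.js'],
};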

API Testing

Not as common as the aforementioned testing terms, but still worth noting, is API testing. A subset of E2E testing, it is the process of standing in for the client (almost always a front-end) as the user and executing tests against back-end APIs (regardless of whether the transport is RESTful, GraphQL, gRPC, or something even more low-level).

In other words, these types of tests are all about sending a request to the server and then making assertions on the returned response. A benefit that distinguishes them from regular E2E tests is that, assuming your API returns or logs its errors, tracking down the source of an issue can sometimes be much faster than with a normal E2E test.
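
Here is a hedged sketch of what such a test might look like in Jest, assuming a runtime where the global fetch API is available (Node 18+ with a recent Jest); the endpoint URL and response shape are assumptions made purely for illustration:

// users.api.test.js - an API test sketch: act as the client, hit the
// endpoint, and assert only on the response. URL and payload are assumed.
test('GET /api/users/42 returns the requested user', async () => {
  const response = await fetch('https://api.example.com/api/users/42');

  // First assert on the transport-level result...
  expect(response.status).toBe(200);

  // ...then on the body that the client actually consumes.
  const body = await response.json();
  expect(body).toEqual(expect.objectContaining({ id: 42 }));
});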

Black Box vs White Box Testing

During the interview for what would eventually become my first programming job, the interviewer threw a curve ball question at me about black box vs white box testing, asking if I knew the difference. I sheepishly had to admit that I hadn't a clue, but I was still able to land the position despite my not knowing.

While your interviewer might not ask you anything about testing, black box vs white box testing is still a good concept to know. The idea here is to think of our code as a box; either the box is an opaque black mystery, or it's a crystalline white casing around the inner workings of what we've developed. When treating our code as a black box, we know nothing about how the box actually works, only what input we need to give it to receive the expected (and tested) output. But when it's a white box, we're able to peer inside the box and make assertions on its inner workings.

In general, white box testing should be avoided when possible (which is not always). To explain the reasoning, let's think about it from the perspective of a user. The user doesn't care how your product solves their problem, only that it does (and how well it solves it, among other things). We can take this same approach to our code. Performance aside, it shouldn't matter how we're solving the problem at hand, only that we *have* indeed solved it, which is what the tests we write are verifying for us.

Approaching our code in the tests as a white box will end up making our tests weak and flimsy. Let's say we've developed a feature and have tested it using a white box approach. Some time has passed, and now we need to go back and refactor that feature to support an upcoming one. When making our changes, we discover that our tests break because of the changes we introduced, even though the original feature itself is still working (at least from the perspective of the user). This is a problem: the tests we've written are dependent on the implementation, so if the implementation changes, our tests break.

This can all be avoided by preferring a black box approach rather than a white box one. Instead of making assertions on the inner workings of our code's implementation, we instead only make assertions on the *output* of the implementation. We are making sure that what the user is expecting is indeed what the code itself is actually producing.
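
To close, here is a small, self-contained sketch contrasting the two approaches on a made-up sortUsers() function. The white box test couples itself to how the sorting happens, while the black box test only asserts on the output:

// A contrived sortUsers() implementation, defined inline so the sketch
// is self-contained.
function sortUsers(list) {
  return [...list].sort((a, b) => a.name.localeCompare(b.name));
}

const users = [{ name: 'Bea' }, { name: 'Al' }];

// White box: the assertion is tied to *how* the result is produced. If
// sortUsers() switches to a different sorting strategy, this test breaks
// even though the output is still correct.
test('sortUsers calls Array.prototype.sort (white box)', () => {
  const sortSpy = jest.spyOn(Array.prototype, 'sort');
  sortUsers(users);
  expect(sortSpy).toHaveBeenCalled();
  sortSpy.mockRestore();
});

// Black box: the assertion only cares about the output the caller sees,
// so the implementation is free to change underneath it.
test('sortUsers returns users ordered by name (black box)', () => {
  expect(sortUsers(users)).toEqual([{ name: 'Al' }, { name: 'Bea' }]);
});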