Want to be a testing guru? Use reachability graphs (Part 2)

A fairly long time ago, I introduced reachability graphs and showed you how to start using them to debug applications. After all, debugging is a form of testing.

In this post, I’ll continue exploring reachability graphs for the purposes of exploratory testing. To illustrate their power, I will use reachability graphs to test some aspects of an extremely popular web browser.

The web browser we’ll be using for this is freely accessible (you might already be using it): Google Chrome.

First, let me set the stage: as independent rockstar testers, we’ve been hired to test Chrome to complement the testing performed in-house. Google has offered to pay us $1000 USD for each bug we find, provided we give an accurate, easy-to-understand bug report with clear and specific steps indicating how to recreate the bugs as well as the associated inputs that need to be tested.

However, if the bug report is longer than one page, they will deduct $400 USD for every page in the report. This is because, as our hiring point person told us via phone: “we are NOT interested in long, boring textual bug reports. We want something that will quickly tell us what the bug is and the exact steps to recreate it.”

In other words, they’re paying us to find bugs and report them accurately, not to write a novel.

Daunting? Not with reachability graphs.



Reachability Graphs

Let’s first review what a reachability graph is.


A reachability graph for abstract states is an undirected graph where edges connect abstract states* that are known or suspected to be related.


*Abstract states are defined below.



The edges can be labeled with timestamps to provide further granularity for event sequences, indicating that reachability is time-wise monotonically increasing (i.e. reachability never goes backward in time).


Reachability graphs are very simple and intuitive. That’s why they are such useful tools!

These kinds of graphs get their name from the fact that, when two abstract states are related, one abstract state can reach the other via state transitions in the underlying system under test (i.e. the reflexive transitive closure of states).
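To make the definition concrete, here is a minimal sketch of a reachability graph in Python. The class name, the state labels, and the adjacency representation are all my own illustrative choices, not anything prescribed by the article; the sketch only shows the two defining ingredients: undirected, optionally timestamped edges between abstract states, and a reachability check that is reflexive and transitive.

```python
# Hypothetical sketch of a reachability graph: nodes are abstract states,
# edges connect states known (or suspected) to be related, and edges may
# carry timestamps to order the event sequence.
from collections import defaultdict

class ReachabilityGraph:
    def __init__(self):
        self.edges = defaultdict(set)  # state -> set of (related state, timestamp)

    def relate(self, a, b, timestamp=None):
        # Undirected: record the relation in both directions.
        self.edges[a].add((b, timestamp))
        self.edges[b].add((a, timestamp))

    def reachable(self, start, goal):
        # Reflexive transitive closure: a state reaches itself, and reaches
        # any state connected to it by a chain of relations.
        seen, frontier = {start}, [start]
        while frontier:
            state = frontier.pop()
            if state == goal:
                return True
            for neighbor, _ in self.edges[state]:
                if neighbor not in seen:
                    seen.add(neighbor)
                    frontier.append(neighbor)
        return False

g = ReachabilityGraph()
g.relate("fresh Chrome, GA version", "page loaded", timestamp=1)
g.relate("page loaded", "search box open", timestamp=2)
print(g.reachable("fresh Chrome, GA version", "search box open"))  # True
```

The timestamps are stored but not enforced here; in a fuller tool they would let us check that reachability is monotonically increasing in time, as described above.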

Recall that whenever we’re using any software system, we’re traversing different states in its state space. Moreover, an axiom in the unified theory of testing states that any system behavior can be mapped to a sequence of one or more system states. I call this the System Behavior-State Mapping axiom, and it is exactly what reachability graphs capture.

The unique states of a system and its transitions are “folded”, so to speak, into compact graphs that show, at a glance, interesting paths and behaviors. This is the aspect that we, as testers, exploit to our advantage.

We start by documenting all our preconditions for a test. For example: the current GA version of Google Chrome is being used.

From this precondition, we should be able to “jump” to (that is, reach) any other state that we find in our testing — this much is obvious and poses no surprise. However, what might not be so obvious is that reachability graphs allow us to capture such system states without worrying about the irrelevant details that we see along the way. In other words, reachability graphs help us record states of interest — and only the states of interest — by using abstract states.


An abstract state is a finite set of properties about the SUT that are expected to be true at a specific point in a test.


Further, an abstract state is only composed of properties that are relevant at that particular point of the test.


All abstract states are members of the class ABSTRACT_STATE_SUT, which is the entire collection of sets of properties that the given SUT is expected to have.


Finally, an abstract state represents a collection and an abstraction of one or more underlying (i.e. real) states of the SUT.
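The definition above can be sketched in a few lines of code. Modeling an abstract state as a set of property descriptions is my own assumption for illustration (the article does not prescribe a representation), but it captures the key idea: a concrete state matches an abstract state if it exhibits all of the abstract state’s relevant properties, and any extra detail is ignored.

```python
# Hypothetical sketch: an abstract state is a finite set of properties
# expected to hold at a specific point in a test. It stands for every
# concrete (real) state of the SUT that satisfies those properties.
abstract_state = frozenset({
    "browser is current GA Chrome",
    "search box is open",
    "search string is non-empty",
})

def satisfies(concrete_properties, abstract):
    # A concrete state matches iff it exhibits every property of the
    # abstract state; extra properties are irrelevant detail and ignored.
    return abstract <= concrete_properties

concrete = {
    "browser is current GA Chrome",
    "search box is open",
    "search string is non-empty",
    "window is maximized",   # irrelevant to this test
    "23 tabs open",          # irrelevant to this test
}
print(satisfies(concrete, abstract_state))  # True
```

Note how one abstract state covers infinitely many concrete states — maximized or not, any number of tabs — which is exactly what lets the graphs stay compact.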


Of course, what is relevant and irrelevant depends very much on the intent of a test. By definition, however, the intent gradually emerges in structured exploratory testing because there are no test cases defined beforehand. That is, as we explore the SUT (system under test), our preconditions (i.e. initial abstract state) and steps eventually define the intent of our current test — the test that is taking form.

Fortunately, another powerful aspect of reachability graphs resolves this chicken-and-egg problem: they make it super easy to detect irrelevant properties so that we can form abstract states that only include useful, relevant details.

For instance, let’s say we decide to explore the search functionality of the browser in terms of searching HTML and PDF documents. What we have at this moment is typically called a “charter” in the jargon of exploratory testing. Note, however, that there is no clear intent yet. So let’s get started with our test and let its intent emerge.



Our first bug

Chrome is a great tool written by many talented people, but there are some very fundamental tests that they missed.

First, type this address in your browser’s address bar:



Now open the search box (hit Ctrl-F) and type the string “planet”. You will notice there are no hits (this is expected).

Now, without closing the search box, modify the address in your browser to this:



At this point, the search box might have closed automatically. If this is the case, open it again but leave the previously typed string unmodified. In other words, do not remove the text “planet” that you typed before; rather, simply hit Enter.

Notice there are again zero hits. However, this is an error: by manually scanning the page, you will notice two words that should match the search string. If you do not yet see them by reading the page, simply clear the text from the search box, close it, open it again and type the same string you typed before.

Voila! The two missing hits will now appear, and they will become highlighted.

At this moment, we have seen our test’s intent emerge fully.

Also, at this moment, the typical tester would stop here and write a bug report in text format.

However, the previous sequence of steps covers one and only one example. The bug certainly does not occur with only those values. So how can we create a bug report that illustrates this problem for infinitely many strings and pages? Doing so would be very useful so that others can recreate the bug with as many combinations of values as they wish.

The image below shows a reachability graph containing all of the information, answering the goal above.


Bug report for Chrome search


This is our bug report. Notice how clean, unambiguous and easy it is to read, while at the same time making the relevant states of the SUT clear at each point. This, in turn, makes it incredibly easy and straightforward to recreate the bug.

Try it out for yourself and recreate the bug with any value that satisfies the conditions in the graph.
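Since the image may not render everywhere, the graph’s conditions can also be sketched as plain data. The names s, p1, and p2 and the property wording below are my own illustrative stand-ins for the placeholders one would put in the graph; they are not the article’s exact labels. The point is that each step is an abstract state, so the report holds for any search string s, any page p1 not containing s, and any page p2 containing it.

```python
# Hypothetical encoding of the bug report's reachability graph as data.
# s, p1, p2 are placeholders: any values satisfying the properties work.
steps = [
    # (timestamp, abstract state: properties that must hold at that step)
    (1, {"page p1 loaded", "p1 does not contain s"}),
    (2, {"search box open", "search string is s", "0 hits (expected)"}),
    (3, {"page p2 loaded", "p2 contains s"}),
    (4, {"search box open", "search string is s (unmodified)", "Enter pressed"}),
    (5, {"0 hits reported (BUG: expected matches)"}),
]

# Reachability is monotone in time: each abstract state is reached strictly
# after the previous one, so the timestamps must strictly increase.
timestamps = [t for t, _ in steps]
assert timestamps == sorted(set(timestamps))
print("steps ordered:", timestamps)
```

A reader can instantiate s, p1, and p2 however they like and walk the states in timestamp order to recreate the bug.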

With this example bug, I hope you are beginning to see the power of reachability graphs. Namely, they can provide a huge amount of information about state in a very terse manner.

I will be posting more examples to elucidate different facets of these very useful graphs. Stay tuned!


