Apologies to those of you awaiting the next post of the test case equivalence series. The series will continue in a subsequent post, but this blog post must be published now.
So what is so important as to interrupt a series? It is to put the silly matter of “testing vs checking” to rest.
For many years now, I’ve been voicing to my colleagues (and recently to the world, via this blog) the private observation that the terminology we use in our industry, the testing industry, is lacking and murky, to say the least.
Most of the mainstream terminology is unhelpful on a daily basis. For instance, see my posts about duplicates (which is, incidentally, the series of posts that was interrupted).
This vagueness in definitions has spawned much confusion and noise from consultants and other testing loud-mouths (I say this with kindness, as I am a loud-mouth too) regarding what certain terms mean and how to apply them.
Unfortunately, certain noise has ended up being echoed back down to us by the sphere of ignorance encapsulating the industry and has amplified certain discussions that merit no energy.
Before I get to the issue, I wish to make clear that I commend Michael Bolton et al. in their vigorous attempts at chipping away at this unfortunate imprecision in our industry. However, I need to voice my concern about their efforts, and do so urgently.
In my view, we’re dangerously close to creating ghost definitions that will only hurt future generations of testers, regardless of whether such definitions benefit the consultants cashing in on them in the present.
Put another way: the chipping away is being done on something that was not a problem to begin with.
The “testing vs checking” dilemma that is plaguing the minds of testers world-wide is fueling countless expensive courses by many a consultant, and, most importantly and sadly, false knowledge in our industry.
“Why false knowledge?” you ask. Because the problem with the “testing vs checking” nonsense is that it takes a very dim view of the world (i.e. the world as consultants see it) and presupposes that the technology of the future will look just like today’s.
Let me give you a very real example. I’ll start with my personal, practical definition of a test:
“a test attempts to prove something about a system”
Whether it is a security vulnerability or the fact that a certain sequence of states leads to the infamous “screen of death” doesn’t matter. What matters is that we want to prove that the SUT exhibits some property or behavior (or lack thereof), whatever that happens to be.
Now, fast-forward 50 or 100 years from now, when theorem-proving algorithms have become fast enough to be practical (i.e. cost-effective) in industry. Many applications will then be specified in a rigorous language like first-order logic.
In fact, many applications are at present being specified in some rigorous language like Z [1] or even the powerful method of ASMs (abstract state machines) [2].
Such languages are called formal systems.
It turns out that a basic, natural property of any formal system (including first-order logic) is that its theorems (its truths) can be mechanically enumerated [3].
That is, all the truths of a formal system can be deduced by a Turing machine.
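To make the idea of mechanical enumeration concrete, here is a minimal sketch in Python using Hofstadter’s well-known MIU system (axiom “MI” plus four rewrite rules) as a toy formal system. A breadth-first search over the rewrite rules will eventually produce every theorem of the system; the length bound below exists only to keep this illustrative run finite.

```python
from collections import deque

# Toy illustration: mechanically enumerate the theorems of the MIU system.
# Axiom: "MI". Rules: (1) xI -> xIU, (2) Mx -> Mxx,
# (3) any "III" -> "U", (4) any "UU" -> "" (dropped).

def derivations(s):
    if s.endswith("I"):
        yield s + "U"                      # rule 1: xI -> xIU
    if s.startswith("M"):
        yield s + s[1:]                    # rule 2: Mx -> Mxx
    for i in range(len(s) - 2):
        if s[i:i + 3] == "III":
            yield s[:i] + "U" + s[i + 3:]  # rule 3: III -> U
    for i in range(len(s) - 1):
        if s[i:i + 2] == "UU":
            yield s[:i] + s[i + 2:]        # rule 4: drop UU

def enumerate_theorems(max_len=8):
    """Breadth-first enumeration of all MIU theorems up to max_len."""
    seen, queue = {"MI"}, deque(["MI"])
    while queue:
        for t in derivations(queue.popleft()):
            if len(t) <= max_len and t not in seen:
                seen.add(t)
                queue.append(t)
    return seen

theorems = enumerate_theorems()
print("MIU" in theorems, "MU" in theorems)  # "MU" is famously *not* a theorem
```

The same mechanical principle, scaled up enormously, is what theorem provers exploit when working over a rigorous specification.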
When applications are specified with such formal systems, all the testing is done via a machine. Theorem provers [4] are able to reach conclusions about the software that we never thought possible, regardless of how many “rapid software testing” courses or “session-based test management” courses a tester attended (and paid for dearly).
Algorithms exist that can prove properties of an application without ever having to touch the UI or concern themselves with “sapience” (or other such consultant-speak).
In other words, bugs can be, and have been, discovered by machines precisely where humans failed to find them.
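A small sketch of how a machine finds such a bug without ever touching a UI: exhaustively explore every reachable state of a model and check a property in each one. The model below is an invented, deliberately broken two-process mutual-exclusion protocol (each process checks the other’s flag *before* raising its own, a classic race); real model checkers such as SPIN or TLC apply the same idea at vastly larger scale.

```python
from collections import deque

# A state is (pc0, pc1, flag0, flag1).
# pc values: 0 = checking the other's flag, 1 = about to raise own flag,
#            2 = inside the critical section.
# The bug: a process checks the other's flag first and raises its own
# only afterwards, so both can slip past the check simultaneously.

def steps(state):
    """Yield every state reachable in one step by either process."""
    for me in (0, 1):
        pc = state[me]
        other_flag = state[3 - me]
        s = list(state)
        if pc == 0 and other_flag == 0:  # other's flag is down: proceed
            s[me] = 1
        elif pc == 1:
            s[2 + me] = 1                # raise own flag (too late!)
            s[me] = 2                    # enter the critical section
        else:
            continue
        yield tuple(s)

def find_violation(start=(0, 0, 0, 0)):
    """Breadth-first search for a state violating mutual exclusion."""
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        if state[0] == 2 and state[1] == 2:  # both in the critical section
            return state
        for nxt in steps(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None

print(find_violation())  # the machine mechanically uncovers the race
```

No human intuition, “sapience,” or exploratory session is involved: the violation falls out of an exhaustive, entirely mechanical search.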
I’d like to make it very, very clear that my point is not aimed at the silly dichotomy between humans and machines. Rather, my point is that the whole “testing vs checking” distinction, along with its refinements of “human checking,” “machine testing,” and any combinations thereof, is rendered altogether useless precisely because of the vacuous premise implied within those definitions.
Our industry is already buried under layers of useless terminology that have taken us nowhere, except toward avenues where people create dubious definitions to fill in gaps, and their own pockets.
It is personally very sad to see so many otherwise smart people worldwide subscribing to such vacuous definitions brought into our industry. This is especially true when consultants use the following device to switch off people’s judgment immediately, which will forever astonish me:
How true that isn’t.
I admire and respect anyone who truly questions the nonsense currently permeating our industry. I’d like to hear from you, if you’re out there.
[1] The Way of Z. Jacky.
[2] Abstract State Machines. Börger & Stärk.
[3] Gödel’s Theorem. Torkel Franzén.
[4] Computational Complexity. Oded Goldreich.