First law: The pesticide paradox. Every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffective.
Testing proves a programmer’s failure. Debugging is the programmer’s vindication.
A design remedy that prevents bugs is always preferable to a test method that discovers them.
A test that reveals a bug has succeeded, not failed.
A good threat is worth a thousand tests.
One of the saddest sights to me has always been a human at a keyboard doing something by hand that could be automated.
If the objective of testing were to prove that a program is free of bugs, then not only would testing be practically impossible, but it would also be theoretically impossible.
Extra features were once considered desirable. We now recognize that ‘free’ features are rarely free. Any increase in generality that does not contribute to reliability, modularity, maintainability, and robustness should be suspected.
More than the act of testing, the act of designing tests is one of the best bug preventers known.
In programming, it’s often the buts in the specification that kill you.
Bugs lurk in corners and congregate at boundaries.
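The boundary observation is easy to demonstrate concretely. Below is a minimal sketch, using a hypothetical `clamp` function (not from the source): boundary-value tests probe each edge of the valid range, one step inside, and one step outside, exactly where an off-by-one mistake (such as writing `<` where `<=` was intended) would hide.

```python
def clamp(value, low, high):
    """Clamp value into the inclusive range [low, high]."""
    if value < low:
        return low
    if value > high:
        return high
    return value

# Boundary-value tests: on the boundary, just outside it, and one
# interior value for contrast. A strict-inequality bug at either
# edge would fail the "on the boundary" cases.
assert clamp(0, 0, 10) == 0      # exactly on the lower boundary
assert clamp(-1, 0, 10) == 0     # just below the lower boundary
assert clamp(10, 0, 10) == 10    # exactly on the upper boundary
assert clamp(11, 0, 10) == 10    # just above the upper boundary
assert clamp(5, 0, 10) == 5      # interior value, for contrast
```

A test suite that only sampled interior values like `clamp(5, 0, 10)` would pass over the corner cases entirely; the edge probes are what make the congregating bugs visible.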
If you can’t test it, don’t build it. If you don’t test it, rip it out.
Software never was perfect and won’t get perfect. But is that a license to create garbage? The missing ingredient is our reluctance to quantify quality.