Sunday, October 20, 2013

Test-Assisted Design

I'm re-thinking my opinions about unit tests and test-driven design. On the one hand, part of me immediately liked the idea of TDD as soon as I heard it: by writing out your tests first, you have a clear, unambiguous spec for what you're supposed to build. It's clear, because it has to be a test that the machine can run, and it's unambiguous because it's a test. It either passes or it doesn't.

On the other hand, I've been having second thoughts about unit tests ever since I saw Rich Hickey's talk "Simple Made Easy," the Magna Carta of functional programmers. Tests are good, but-- Tests are necessary, but--

The thing is, unit tests could be thought of as a kind of code smell, or a kind of technical debt. You're writing tests because you don't know what your code is going to do. And maybe that's unavoidable. Maybe programming is just too complicated for us to write code that always does exactly what we expect it to do. But still, a code smell, a sign of an undesirable circumstance that we should be trying to avoid or reduce.

And it's technical debt--it's code that needs to be continually maintained and updated. Worse, you can't ever pay down your debt, because you're always going to need to test. If anything, your suite of unit tests is going to grow over time, increasing the load on your developers and maintainers. And think of this: every test you write, if it never finds any bugs, is wasted code. Not entirely wasted, because you get at least the warm, comfortable feeling of knowing at least one bug that's not present. But if you know your code isn't going to break, it's a waste of time to write a test for it. How long has it been since you wrote a test for assertEquals(4, 2+2)?

So I've come to question the value of unit tests and TDD. Not to reject either one, but to consider, on a case-by-case basis, whether writing a test provides more benefit than cost. Having come from an enthusiastic more-is-better attitude towards testing, I now find my pendulum swinging far to the less-is-more side of the arc.

And then again, my experiences this past week are making me re-rethink my position. I've been working on an event engine that could be used to build any sort of system where you want to define a set of events, assign handlers to them, and then trigger the handlers at key points in your program code. Since it's a library and not a stand-alone app, I couldn't just run it and see how it was working. I needed some kind of harness I could run that would call my functions with a given set of arguments and then return some results for me to look at.
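To make the shape of the problem concrete, here's a minimal sketch of the kind of event engine I'm describing. All of the names here (handlers, defevent, add-handler!, trigger!) are hypothetical illustrations, not my library's actual API:

```clojure
(def handlers
  "Registry mapping event keywords to vectors of handler fns."
  (atom {}))

(defn defevent
  "Declare an event, so triggering an unknown one can be caught early."
  [event]
  (swap! handlers update event #(or % [])))

(defn add-handler!
  "Attach a handler fn to a previously defined event."
  [event f]
  (when-not (contains? @handlers event)
    (throw (ex-info "Unknown event" {:event event})))
  (swap! handlers update event conj f))

(defn trigger!
  "Call every handler registered for event with args; return the results."
  [event & args]
  (mapv #(apply % args) (get @handlers event)))
```

So at some key point in your program you'd call something like (trigger! :saved doc), and every handler attached to :saved would run. The point is that nothing here is interactive; exercising it means calling these functions with arguments and inspecting what comes back.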

Obviously, this is a classic use case for unit tests, so I wrote some up, and used them extensively during my initial design and coding. What I discovered surprised me. The process of implementing a reasonable and useful set of tests also forced me to reconsider some aspects of my underlying design. Mind you, I wasn't using a full-blown TDD approach and writing my tests first. I wrote my code, and then I got tired of defining a bunch of test events in the REPL, so I wrote some Midje tests to automate the setup and teardown, and then I wrote my tests so I'd have an easy way to take advantage of the setup/teardown stuff.
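The setup/teardown part looked roughly like this; assuming Midje is on the classpath, with-state-changes resets the state before each fact. The engine internals shown here are a hypothetical stand-in, since my real code isn't in this post:

```clojure
(ns engine.core-test
  (:require [midje.sweet :refer :all]))

;; Hypothetical stand-in for the engine's registry.
(def handlers (atom {}))

(defn trigger! [event & args]
  (mapv #(apply % args) (get @handlers event)))

;; with-state-changes runs the setup before every fact, so each fact
;; starts from the same known registry instead of REPL leftovers.
(with-state-changes [(before :facts (reset! handlers {:ping [(constantly :pong)]}))]
  (fact "a registered handler runs when its event fires"
    (trigger! :ping) => [:pong])
  (fact "an event with no handlers runs nothing"
    (trigger! :quux) => []))
```

Once that scaffolding existed, adding one more fact was nearly free, which is exactly why I kept writing them.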

In the process, I wound up adding some extra validation and more descriptive error messages, so that Midje would have something concrete and specific to look for. I also reworked some of the arguments my functions were asking for, and in one case I completely changed the algorithm I was using to parse out the optional arguments to my function, because the old way screwed up my unit tests. I changed my code, because my testing made me change my understanding of the problem domain, and thus my design.
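An illustrative sketch (not my actual code) of what that kind of change looks like: keyword-argument destructuring for the optional arguments, plus validation that throws descriptive, data-carrying errors via ex-info:

```clojure
(defn register-handler
  "Build a handler entry for event.
  Optional keyword arg: :priority (an integer, default 0)."
  [event f & {:keys [priority] :or {priority 0}}]
  (when-not (keyword? event)
    (throw (ex-info "Event name must be a keyword" {:event event})))
  (when-not (integer? priority)
    (throw (ex-info "Priority must be an integer" {:priority priority})))
  {:event event :handler f :priority priority})
```

With errors this specific, a test can assert on the exact message and data rather than just "it threw something," which is what gives Midje something concrete to look for.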

What I'm converging on, in my mind, is a slightly more refined approach to testing---maybe Test-Assisted Design or something. The tests don't have to come first, and if you know what your code is going to do, you may not need to test at all, but tests do have value, and they can be particularly valuable as a tool for teasing out design factors you might have overlooked in your initial approach. In fact, tests may have as much, or more, value as a way of testing out your understanding of the domain space, above and beyond testing whether or not the code does what you intended.

This is where tests have lasting value: when they contribute directly to improving your code. Writing unit tests solely for the sake of inflating your test count is a waste of time; writing tests to catch bad code is better (though it's a code smell), but tests are golden when they affect you, the developer, and change the way you think about the problem. That, more than code verification, is what we should focus on in our testing.
