Testing TDD

Since reading Robert C. Martin's The Clean Coder: A Code of Conduct for Professional Programmers I've been eager to try Test Driven Development (TDD) for myself. I have to think that most programmers have at least heard of the practice these days, but I'm sure that many have never tried it themselves. I decided it was time to give it a shot and see what all the hubbub was about, but before I get into that, here are some thoughts regarding the book.

The Misnamed TDD Treatise

The book was a decent read, but not worth the cover price; be sure to buy it on Amazon at a steep discount. The first few chapters, about how to interact with others and say exactly what you mean in a pragmatic and clear way, are really the meat of the work. Much of the rest of the book is a treatise on the wonder of TDD and a justification for why it's good. The final chapter on tooling was needless and had nothing to do with the title of the book; nobody should care whether he uses a new Mac or a 2002 Lenovo (a reference to the book's last sentence, which is about his personal hardware). That being said, if you're looking for some philosophy on how to engage with the higher-ups in a way that conveys exactly what you intend, the first few chapters are worthwhile. Most of the examples from his own personal history dig way back to a time before I was born; it would have been nice to hear about some of the challenges of our modern day, but no such anecdotes were to be found.

TDD Testing Experience

I decided to test the TDD workflow by building a REST API in PHP for managing a simple Todo list. The API will be consumed by a yet-to-be-written Angular front end. I suppose that can come first on my Todo list... The most salient notes from the process are as follows.
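To make the workflow concrete, here's roughly what the first red-green cycle looks like in PHPUnit. The TodoRepository class and its methods are illustrative stand-ins rather than my actual production code:

```php
<?php
use PHPUnit\Framework\TestCase;

// Step one of the TDD loop: a failing test, written before TodoRepository exists.
class TodoRepositoryTest extends TestCase
{
    public function testAddStoresATodoItem(): void
    {
        $repo = new TodoRepository();
        $repo->add('Write the Angular front end');

        $todos = $repo->all();
        $this->assertCount(1, $todos);
        $this->assertSame('Write the Angular front end', $todos[0]);
    }
}

// Step two: just enough system code to turn the test green.
class TodoRepository
{
    /** @var string[] */
    private $todos = [];

    public function add(string $title): void
    {
        $this->todos[] = $title;
    }

    /** @return string[] */
    public function all(): array
    {
        return $this->todos;
    }
}
```

The discipline is to write the test first, watch it fail, then write only enough system code to make it pass, and repeat.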

Familiarity

The foremost consideration is that you absolutely must be familiar with the practice of unit testing before you can even begin TDD. The workflow hinges on your ability not only to write unit tests, but to write good unit tests. That in itself takes a certain level of experience. For that matter, though the proponents of TDD tout the idea that you'll get highly-tested systems out of it, that depends wholly on your ability to write good tests and not overlook anything. You may or may not be able to unit test the full gamut of inputs that your system allows, so while you may have a full suite of unit, integration, and acceptance tests, that doesn't mean you can rest easy.
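As a rough sketch of the difference between a test and a good test, compare a lone happy-path assertion with one that probes the edges of the allowed input space. The validateTitle() function here is hypothetical, not lifted from my API:

```php
<?php
use PHPUnit\Framework\TestCase;

// Hypothetical system code under test: a Todo title validator.
function validateTitle(string $title): bool
{
    $trimmed = trim($title);
    return $trimmed !== '' && mb_strlen($trimmed) <= 255;
}

class TitleValidationTest extends TestCase
{
    // A lone happy-path assertion passes, but proves very little.
    public function testAcceptsAnOrdinaryTitle(): void
    {
        $this->assertTrue(validateTitle('Buy milk'));
    }

    // A better test walks the edges of the allowed input gamut.
    public function testHandlesBoundaryAndDegenerateInputs(): void
    {
        $this->assertFalse(validateTitle(''));                   // empty
        $this->assertFalse(validateTitle('   '));                // whitespace only
        $this->assertTrue(validateTitle(str_repeat('x', 255)));  // exactly at the cap
        $this->assertFalse(validateTitle(str_repeat('x', 256))); // one over the cap
    }
}
```

The second test is what experience buys you; a novice will happily stop at the first and call the function tested.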

Old Habits Die Hard

If you've been programming in a non-TDD way for most of your career (as most of us have), it's very easy to slip back into writing a bit more system code because you want to try something out, rather than writing the supporting unit tests first and then the system code to make them pass. I can think of a factor and a symptom related to this.

Factor: Exploratory Development

TDD is not a good methodology when you're doing exploratory programming. If you've never solved a particular problem in code before, the burden of trying to test something you haven't even proven out yet is menacing at best. It may even grind you to a halt as you myopically try to write test cases for things you don't yet know you need. You can't write tests for solutions that don't yet exist, even as metaphors, in your brain.

Symptom: Myopic Fixation

The pithy adage has been thrown around that "If your code is difficult to test, then the design is flawed." This may or may not be true depending on the domain and the problem being solved, and it gets my goat when otherwise smart people throw this saying around as if it's infallible fact because they just read it in an article or book and have a glorified, mystical view of testing. First of all, what kind of tests are we talking about?

If you're writing a compiler, integration tests are mostly cruft; the project benefits more from unit tests, because the errors from integration tests would give you little understanding of what went wrong in the first place. A unit test will pinpoint the exact issue, and such a piece of software requires the utmost exactness. Contrarily, if you're writing a large pipeline process (such as a shopping app), integration and end-to-end tests will be enormously helpful. Of course unit tests are good and should be required in such a program, but when you're handling an influx of data, transforming it, shipping it to different services, and taking user input to accept/modify/shuffle along a data package, and this can span the course of several minutes to several days, you cannot unit test that. And those transactions are hard to test.

This doesn't mean your system design is flawed. It means the use-case is difficult in the first place. You can have wonderful architecture, clean code, and good hardware, but if something breaks in the middle of the process of things being passed to other things with latency in between, it might not be due to any bad decisions you made upfront. Of course you'll need to patch the problem, but the fault may not have been a design flaw, and the difficulty of testing doesn't imply that such a flaw inherently exists.

All of that preamble is to say that if you're fixating on unit testing and changing your design decisions to facilitate ease of unit testing, you may not be doing something "right" by properly objective criteria. But you may be. There is no pith that will answer that question, and religiously relying on pith is not what professionals do. It's what that person you hate working with does.
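To make the compiler point concrete, here's a contrived sketch (lexNumber() is invented for illustration) of the kind of unit test that names the exact defect and input, where an end-to-end compile-and-diff test would only report a garbled artifact at the far end:

```php
<?php
use PHPUnit\Framework\TestCase;

// Contrived lexer fragment: consumes a run of digits starting at $pos.
function lexNumber(string $src, int $pos): array
{
    $start = $pos;
    while ($pos < strlen($src) && ctype_digit($src[$pos])) {
        $pos++;
    }
    return ['type' => 'NUMBER', 'text' => substr($src, $start, $pos - $start), 'next' => $pos];
}

class LexerTest extends TestCase
{
    // When this fails, you know exactly which unit misbehaved and on what input.
    public function testLexesAMultiDigitNumber(): void
    {
        $token = lexNumber('42+x', 0);
        $this->assertSame('NUMBER', $token['type']);
        $this->assertSame('42', $token['text']);
        $this->assertSame(2, $token['next']);
    }
}
```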

Early Coupling and Increased Friction

This point ties back to the Exploratory Development factor to some degree. Unit tests are obviously highly coupled to your system code (otherwise they wouldn't test it), but with TDD this coupling happens early in your progress. Right from the start. There is a trade-off of time here if you know you're going to do some sweeping refactoring later on (which you are, because that's how progressive software enhancement works), because you're going to break a hulking mass of unit tests. Moreover, you may not even be aware which unit tests need to be updated in order to make your sweeping changes. Are the TDD evangelists going to posit that if you can't find every unit test that needs changing before making a system code change, then you don't actually understand the core responsibilities of the system objects and all of their connections? Well, they would probably be right, but consider that decreasing coupling through OO techniques increases code reuse, which in turn can impact more tests when you're mocking certain objects or have to change the parameters passed to certain functions or object constructors (see the sketch below). In another scenario, what if you're a new employee who is expected to get up and running quickly in a software system? If you're confined to TDD from the start, you'll be stuck wading in the lowest-level minutiae without first developing a broader perspective. It's not conducive to plucking at strings to see what happens.
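Here's a small sketch of that early coupling, with hypothetical names throughout. Every test that constructs or mocks TodoService is married to both the TodoStore interface and the TodoService constructor signature:

```php
<?php
use PHPUnit\Framework\TestCase;

// Hypothetical collaborators, defined inline so the sketch stands alone.
interface TodoStore
{
    public function save(string $title): void;
}

class TodoService
{
    private $store;

    public function __construct(TodoStore $store)
    {
        $this->store = $store;
    }

    public function create(string $title): void
    {
        $this->store->save($title);
    }
}

class TodoServiceTest extends TestCase
{
    public function testCreateDelegatesToTheStore(): void
    {
        // createMock() couples this test to the TodoStore interface...
        $store = $this->createMock(TodoStore::class);
        $store->expects($this->once())
              ->method('save')
              ->with('Buy milk');

        // ...and this line couples it to TodoService's constructor signature.
        $service = new TodoService($store);
        $service->create('Buy milk');
    }
}
```

Now add a Logger or a Clock to that constructor during a refactor, and every test that instantiates TodoService breaks at once, whether or not it cares about logging or time.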

Unit Tests are Not Documentation

If anything, they're examples. Examples are found within documentation, but they are not the whole of it in and of themselves. Not only that, but unit tests are not purely exemplary: you're not only testing things that should work, you're also testing things that should not work. Beyond the most simplistic functions, you also have to deal with the cruft of mocking, which is at best a bloated overload of documentation. So no, unit tests are not lowest-level documentation. I have no problem wading through them as examples of what can and cannot be done, but if I need an overview of a library, either as a new employee or simply as someone who has not looked at the code in a year, tests are not the documentation I want to reference first. User-friendly documentation shouldn't go the way of the dodo just because you're a programmer and you test your software from the bottom up.
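To illustrate, here's the sort of test I have in mind, again with invented names. Most of the body is mock scaffolding, and what it asserts is a failure path; neither tells a newcomer how a caller actually uses the API:

```php
<?php
use PHPUnit\Framework\TestCase;

// Hypothetical collaborators, defined inline to keep the sketch self-contained.
interface TodoStore
{
    public function exists(string $title): bool;
    public function save(string $title): void;
}

class DuplicateTodoException extends RuntimeException {}

class TodoService
{
    private $store;

    public function __construct(TodoStore $store)
    {
        $this->store = $store;
    }

    public function create(string $title): void
    {
        if ($this->store->exists($title)) {
            throw new DuplicateTodoException("Duplicate todo: $title");
        }
        $this->store->save($title);
    }
}

class TodoApiTest extends TestCase
{
    public function testCreateRejectsDuplicateTitles(): void
    {
        // Mock scaffolding: nothing here resembles how a real caller uses the API.
        $store = $this->createMock(TodoStore::class);
        $store->method('exists')->willReturn(true);
        $store->expects($this->never())->method('save');

        // And the assertion documents what must NOT happen.
        $this->expectException(DuplicateTodoException::class);
        (new TodoService($store))->create('Buy milk');
    }
}
```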

Conclusion

Do test your code. Do write unit, integration, and end-to-end tests. Do TDD if you think it's good for what ails you. But so far I'm not convinced that the paradigm shift is the ultimate answer for the best product. It can certainly be an answer, but I don't find it to be the most pragmatic approach to a well-designed, well-tested whole. Will the upfront development time save testing time later? I'm not sure; that really depends on circumstances. The effectiveness probably slopes upward as time moves on and changes become more minimal, to the point of being in maintenance mode. But for a new project where you have a team with diverse levels of experience in test automation and a reluctant set of non-developer approvers who are focused on the business side and can't be relied on to write acceptance tests, shoving TDD into the mix probably won't integrate well. Likewise, TDD is not a practice that can be shimmed in after the fact or migrated in slowly in the middle of a project. As a programmer I can understand the appeal of these sorts of ideas, where you rigorously test all of the nuts and bolts before building the widget, so try TDD for yourself. But I can't automatically glom onto the evangelical hype, as well-meaning as it may be. I know, I know, I'm just not doing it right...