
Programming

D-TDD: Destruction Test Driven Development

January 24, 2013 18:00:25 +0200 (EET)

If you've ever seen a child learn to stack blocks, you'll know that the greatest pleasure isn't derived from the beauty or height of the structure. No: the squeals of joy are reserved for when he knocks it down, and the order of the tower is replaced by the chaos of flying blocks.

Last Friday evening I took an equally constructive approach to work on the MetaEdit+ 5.0 multi-user version. We're at the stage where the single-user version is tested and released, and "the first build that could possibly work" of the multi-user clients has shown no problems in normal user testing. So I set up the server and a couple of clients on my PC, and scripted the clients to bash the server as fast as they could with requests. In each transaction, a client would try to lock and change 10% of the repository objects, then either abandon or commit the transaction.
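For the curious, each scripted client is roughly the loop below. MetaEdit+ clients are driven through the tool's own scripting facilities rather than Python, so everything here is a hypothetical stand-in to illustrate the shape of the loop, not the actual script:

    import random
    import time

    # Hypothetical stand-ins for the real client operations.
    def begin_transaction(): pass
    def lock_and_change(obj): pass
    def commit_transaction(): pass
    def abandon_transaction(): pass

    def hammer(all_objects, fraction=0.10, commit_prob=0.5, duration_s=3600):
        """Run transactions back to back, each touching ~10% of the repository."""
        deadline = time.time() + duration_s
        while time.time() < deadline:
            begin_transaction()
            sample = random.sample(all_objects,
                                   max(1, int(len(all_objects) * fraction)))
            for obj in sample:
                lock_and_change(obj)        # lock the object, then write a change
            if random.random() < commit_prob:
                commit_transaction()        # make the changes permanent
            else:
                abandon_transaction()       # roll the whole transaction back

Flipping a coin between commit and abandon means both the server's write path and its rollback path get exercised.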

As that seemed to bubble along quite happily, I started up my laptop too, and used a batch file to start a MetaEdit+ client (from the original PC's disk) and run the same script. And again for a fourth client, whereupon I remembered a PC in the next office was unused, and I knew its MAC address so I could start it up with WakeOnLAN. (You really learn to appreciate Remote Desktop Connection, VNC and WakeOnLAN when it's -29°C...)
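Wake-on-LAN itself is simple: a "magic packet" of six 0xFF bytes followed by the target MAC address repeated 16 times, sent as a UDP broadcast. A minimal sketch in Python (the MAC address is a placeholder, not the real machine's):

    import socket

    def wake_on_lan(mac, broadcast="255.255.255.255", port=9):
        """Send a Wake-on-LAN magic packet as a UDP broadcast."""
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        packet = b"\xff" * 6 + mac_bytes * 16   # 6 x 0xFF, then the MAC 16 times
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            sock.sendto(packet, (broadcast, port))

    wake_on_lan("00:11:22:33:44:55")   # placeholder MAC address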

By the end of the evening, I'd squished a couple of bugs: the more cycles you run, the less chance bugs have to hide. I'd also progressed to four PCs, running a total of 12 clients. Over the course of the weekend, I occasionally poked my nose into a remote desktop to see how things were doing, and found another bug (apparently in Windows XP's networking, since it didn't appear on the other platforms; still, easy enough to just retry if it occurred).
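The workaround for that kind of transient failure can be as simple as wrapping the call in a small retry loop; a minimal sketch (the retry count, delay and exception types are assumptions for illustration, not what MetaEdit+ actually does):

    import time

    def with_retry(operation, retries=3, delay_s=1.0, transient=(OSError,)):
        """Call operation(), retrying a few times on transient network errors."""
        for attempt in range(retries):
            try:
                return operation()
            except transient:
                if attempt == retries - 1:
                    raise                   # give up after the last attempt
                time.sleep(delay_s)         # brief pause before trying again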

At some point I restarted the experiment with the bug fixes in place from the start, to get a consistent set of data. By then the original repository had grown from 40MB to over 4GB, as each client was creating several times the original amount of data each hour. Since I woke up with the 'flu on Monday, the experiment continued in a similar fashion through to my return to work on Thursday. The last session was started on Wednesday afternoon, with up to 20 clients, and by the same time on Thursday its server had processed 1TB of disk I/O, in under 10 hours of CPU time and only 32MB of memory.

So, what do we learn from this?

  1. Destruction testing is fun! Just as much fun as with the blocks, and even more so: when you catch a bug as it happens, and fix it in the server on the fly, the tumbling client blocks reassemble themselves into the tower.
  2. Destruction testing is necessary. There are some bugs you'll never catch with code inspection, manual testing or unit tests. They might occur only once a year in normal use - and at a customer, where all you get is an error log. By forcing them out of the woodwork with massive destruction testing, you can see them several times, spot the pattern, and confirm a fix.
  3. Destruction testing is easier than ever before. Earlier, the operating system, hardware, or time would be the limiting factor more often than not. Now, normal PCs are reliable and performant enough to keep the focus on what you want to test, rather than on how to keep enough PCs running long enough to test it.
  4. Destruction testing is not scalability testing. It may stray into that area, but it has a different purpose. The repository used in MetaEdit+ has been tested with hundreds of simultaneous users, so that wasn't in doubt. The point here was to flush out any bugs that had crept in with the new MetaEdit+ and platform versions.
  5. Destruction testing is not bulletproof. There are plenty of bugs that it won't find, but it will find bugs you can't find another way. Since you can't test everything to destruction, concentrate on testing lower-level services called by everything, or just the most common services. Other kinds of testing are better at covering the breadth of functionality.