http://www.youtube.com/watch?v=YX3iRjKj7C0
The video claims to be a comparison of the Smalltalk and Ruby communities, but it mostly expounds on the virtues of test-driven development; it makes me wonder whether our emulators have unit tests, and how clean their code is...
As with most new and fancy "development techniques" that sound really cool and inspiring, you should take this one with a grain of salt. An old saying in programming goes "there is no silver bullet", and it applies today just as well as it did when it was first uttered.
Like other recent fads, "test-driven development" may sound wonderful in theory, but when put into practice it often leads to less than optimal results. One could argue whether this is due to problems in the technique itself or to developers not using the technique correctly, but the end result is the same.
One of the basic ideas behind test-driven development is that rather than going the traditional way of implementing something (such as a function, a module, a class or an entire program) first and then testing it, you do the opposite: you first implement the testing procedures (which basically amounts to defining the specifications and requirements of the module) and then you implement the module so that it passes the tests. The idea is that this makes development faster because fewer bugs remain to be fixed later in the project, when they would be found in the testing stage.
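To make that concrete, here's a minimal sketch of the test-first workflow in Python's unittest (the rom_checksum function and its behavior are hypothetical, made up just for illustration): you write the tests, watch them fail, then implement until they pass.

import unittest

# Step 1: write the tests first. At this point they fail, because
# rom_checksum doesn't exist yet; the tests effectively serve as the spec.
class TestRomChecksum(unittest.TestCase):
    def test_empty_data_has_zero_checksum(self):
        self.assertEqual(rom_checksum(b""), 0)

    def test_checksum_is_sum_of_bytes_mod_65536(self):
        self.assertEqual(rom_checksum(bytes([1, 2, 3])), 6)

# Step 2: implement just enough to make the tests pass.
def rom_checksum(data):
    return sum(data) % 65536

if __name__ == "__main__":
    unittest.main()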
While this idea has its merits, the problem is that it's impossible to predict in advance how the module will need to be implemented, what kinds of things it will do, and hence what kinds of tests are needed to test it thoroughly. With your pre-made tests you will only be testing part of the module (sometimes a very small part), while large parts of it may end up completely untested.
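To illustrate with a hypothetical example (parse_header and the iNES check here are invented for the sake of argument): the pre-made test below still passes, yet the error-handling branch that only appeared during implementation is never exercised.

import unittest

def parse_header(data):
    # The happy path the pre-made test anticipated...
    if data.startswith(b"NES\x1a"):
        return {"prg_banks": data[4]}
    # ...and a branch that only appeared during implementation.
    raise ValueError("not an iNES header")

# Written before implementation; it passes, but the ValueError
# branch above is completely untested.
class TestParseHeader(unittest.TestCase):
    def test_valid_header(self):
        self.assertEqual(parse_header(b"NES\x1a\x02")["prg_banks"], 2)

if __name__ == "__main__":
    unittest.main()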
"Ok, that's fine and understandable. Just add the additional tests after the module has been implemented." But the problem with this is that it now goes to the traditional development process, which is what test-driven development wanted to avoid in the first place. What this causes is that the test-driven development mentality induces developers to skip adding tests after modules have been implemented, relying solely on the pre-made tests. This may cause large portions of the program to be completely untested, causing big problems later. (The later in the development process that you have to fix a bug, the more difficult/expensive it will be.)
So, in short: writing unit tests is fine, but what "test-driven development" says is that you should write them first and only then implement the module. The problem is that it's impossible to predict in advance all the unit tests you would need to test the program thoroughly. If you want to test the module thoroughly, you end up going back to a more traditional development model, so the whole "test-driven development" idea gets watered down by necessity.
"Test-driven development" can also be a different way of budgeting time to get any tests written at all. If you wait until the module or program is ready to be tested, all too often you'll end up going off to work on implementing some new module instead of writing tests for the existing module. In the long run this ends up costing you more time (from increased support costs, bugs slipping through, and so on) than writing the tests early would have.
Test-driven development is also a good way to enforce specs in a large organization where a development project spans multiple teams. The tests fix the spec in place, giving you more leverage to say "no" when someone asks "Oh, can you do this too?" And believe me, the ability to say "no" is incredibly important in those organizations; it's basically the only way anything gets done. I worked on a team that had to dedicate about 30% of its workload (one manager and one developer, basically) to prioritizing tasks and saying no, so that the rest of the team could actually get things done.
Pyrel - an open-source rewrite of the Angband roguelike game in Python.
^^^We don't exactly reject changes to the emulators just because old runs fail to synch on them; otherwise we would have frozen FCEU around version 0.98.12 because of the Dragon Warrior 4 run.
This is what I thought when I first heard of test-driven development: you can't necessarily envision what the program will need to consist of or do right from the start, so you can't develop all of the tests right away.
As you may imagine, the OP was made out of complete ignorance of the development work behind the emulators; I don't actually know how messy the code is or whether there is a robust suite of unit tests for any of the emulators, although I am rooting around in the SVN tree for FCEUX right now.
Basically you are saying that having some tests (which is what test-driven development gives you) is better than having no tests at all. But that assumes the developer would not create tests at any point unless the development process demanded it.
But if that's the case, then your problem is deeper than your development process. If your developers are skipping proper testing, the problem is elsewhere.
My point was this: suppose you have two types of testing procedure: 1) the traditional way, where you implement first and then create tests for all the features, and 2) the test-first way, where you create the tests and then implement. In the second case you might end up with a program that passes all the pre-made tests yet is still buggy, because not all features (which were impossible to predict when the tests were created) are being tested. In the first case, if proper procedures are followed, more bugs would be found early on, because after implementing you know what you have to test.
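Continuing the hypothetical parse_header sketch from earlier in the thread: under procedure 1, once the implementation exists you know the error branch is there, so you can add the test that was impossible to predict up front.

import unittest

def parse_header(data):
    # Same hypothetical function as in the earlier sketch.
    if data.startswith(b"NES\x1a"):
        return {"prg_banks": data[4]}
    raise ValueError("not an iNES header")

class TestParseHeaderAfterImplementation(unittest.TestCase):
    # Written after implementing, when the error branch is known to exist.
    def test_invalid_header_raises(self):
        with self.assertRaises(ValueError):
            parse_header(b"GARBAGE!")

if __name__ == "__main__":
    unittest.main()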
If "test-driven development" exists to get around the problem of lazy programmers not being bothered to write testcases, then the problem is not in the development procedure. The technique is used to fight the symptoms rather than the cause.
I think that test-driven development is used far more often to sneak tests past lazy management (or rushed management, or management that cares far more about new development than maintenance) than past lazy programmers. I've seen far more examples of "Man, I'd like to write some tests, but I really don't have the time" than of "Ehhh, I don't feel like writing tests." That is to say, I've seen a dozen or so examples of the former, and none of the latter.
And yes, I'm basically saying here that test-driven development is better than non-test-driven development if it means the difference between having some tests and having no tests.
Pyrel - an open-source rewrite of the Angband roguelike game in Python.