The vehicles that finished the DARPA course this year weren't 20x faster or 20x more powerful or 20x more thoroughly instrumented than the ones that did so poorly last year. [And the sensors, actuators, and software weren't 20x better either.] Rather, this year's entries didn't make the same kinds of stupid mistakes, or have the same kinds of fatal weaknesses, that kept last year's entries from doing as well as their overall high levels of engineering and construction should have allowed.

So why did at least some of this year's vehicles perform 20x better than the best of last year's? Clearly this has something to do with an implementation of an abstraction that was (more or less) correct this year and not correct (or not correct enough) last year.
A program with a tiny error in it may not run at all. (If the error is a syntax error, it won't even compile.) Fix the tiny error and the revised program will run (in some sense) "infinitely" better. So for one thing, there is something wrong with the metric being used. This year's systems were not 20x better; they were correct, whereas last year's were not. The corrected program is not infinitely better than the uncorrected one, although it might be infinitely more useful to its users. So one must be very careful about metrics and the significance one gives them.
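To make that concrete, here is a minimal sketch (my own illustration, not from the original discussion) of how a single-character error is the difference between wrong and correct, rather than between "slower" and "faster":

```c
#include <stdio.h>

/* Hypothetical example: sum the first n elements of an array.
 * The "tiny error" is one character: "<=" instead of "<".
 * The buggy version reads one element past the end of the array,
 * which is undefined behavior. */
int sum_buggy(const int *a, int n) {
    int total = 0;
    for (int i = 0; i <= n; i++)   /* off-by-one: also reads a[n] */
        total += a[i];
    return total;
}

int sum_fixed(const int *a, int n) {
    int total = 0;
    for (int i = 0; i < n; i++)    /* correct bound */
        total += a[i];
    return total;
}

int main(void) {
    int data[3] = {1, 2, 3};
    printf("fixed: %d\n", sum_fixed(data, 3));  /* prints 6 */
    /* sum_buggy(data, 3) may print garbage or crash outright. */
    return 0;
}
```

The buggy version isn't some measurable factor worse than the fixed one; it is simply incorrect, and the one-character fix doesn't make it 20x better, it makes it right.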
But besides the metric issue, I think the bigger point is (as Peter says) that in complex systems, little things can mean a lot. Is there really much else one can say about that?