There seems to be a small storm on Brooks's (Brooks’?) Silver Bullet out in them thar tubes.
First off, Joel quotes Brooks in pushing back against Lego analogies:
Om Malik (November 2006): “... these startups are building development environments that let the user cobble together software packages as easily as snapping together Lego bricks.”
None of them believed Frederick P. Brooks, in 1987: “Not only are there no silver bullets now in view, the very nature of software makes it unlikely that there will be any—no inventions that will do for software productivity, reliability, and simplicity what electronics, transistors, and large-scale integration did for computer hardware.... I believe the hard part of building software to be the specification, design, and testing of this conceptual construct, not the labor of representing it and testing the fidelity of the representation.... If this is true, building software will always be hard. There is inherently no silver bullet.”
Back in 1993 the US Navy ran a little test of prototyping languages: experts in several different languages were given a simplified spec taken from a real problem and asked to implement it. Some results:
- Ada 83: 767 lines
- Ada 95: 800 lines
- C++: 1195 lines
- Haskell: 85 lines
So on that particular problem Haskell really was an order of magnitude better than conventional languages.
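The Navy's actual spec isn't reproduced here, but a toy sketch can suggest where that kind of line-count savings comes from: in Haskell, filtering, projection, and counting are each a single combinator, so the loop-and-index bookkeeping that pads out imperative versions simply never appears. (This example is my own illustration, not part of the study.)

```haskell
-- Toy illustration (not the Navy spec): given tracked points, count
-- those inside a bounding box. No loops, no counters, no indices --
-- the "accidental" scaffolding an imperative version would need.

type Point = (Double, Double)

-- True when (x, y) lies within the box spanned by lo and hi.
inBox :: Point -> Point -> Point -> Bool
inBox (x0, y0) (x1, y1) (x, y) =
  x0 <= x && x <= x1 && y0 <= y && y <= y1

-- Composition of two Prelude functions does all the work.
visibleCount :: Point -> Point -> [Point] -> Int
visibleCount lo hi = length . filter (inBox lo hi)

main :: IO ()
main = print (visibleCount (0, 0) (10, 10) [(1, 2), (11, 3), (5, 5)])
```

The essential part (what "inside the box" means) is still there and still has to be specified; it's only the surrounding machinery that shrinks.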
But is this truly a reduction in complexity, or is it just packing the same thing into fewer lines? I believe the former...
Larry O'Brien found a post by Wesner Moise and, in a follow-up, gave what I think is a great example of how a genuine order-of-magnitude improvement in tools doesn't tackle what Brooks calls the “essential difficulties”:
Here's an example of the fundamental issue: Eric Sink's woodworking application. The whole issue of “silver bullets” boils down to whether you think the problem is creating a 3D view or writing an application that linearizes the steps in a woodworking project. Is coding the views 10 times simpler than it would be if he were using DirectX? Just about. Is DirectX 10 times simpler than writing your own projections? Yep. So for 3D views, you have substantial advances: multiple generations, programming becoming faster not just by integral factors, but by orders of magnitude. Yes, I think that's fair.
But... what about the “steps” part of things? How much more capable is this program at creating a list of materials than was the copy of Autocad I used twenty years ago? A little? A lot? I dunno. But I guarantee you that there is not an order of magnitude difference between the time Eric spent representing those domain rules in whatever-language-he-used, the time it would have taken to express the same design rules in OCaml or another functional language, and the time it would have taken to express those rules in AutoLisp (assuming Eric was fluent in all of them).
Around this point, I got snared into the conversation on a mailing list and had this to say in reply to the Paul Johnson post:
From the Brooks essay:
/I believe the hard part of building software to be the specification, design, and testing of this conceptual construct, not the labor of representing it and testing the fidelity of the representation/. We still make syntax errors, to be sure; but they are fuzz compared with the conceptual errors in most systems.
If this is true, building software will always be hard.
It's too late for me to try and read through the whole Brooks essay again, plus blogthing's essay -- but it seems that functional programming may be a great next step for dealing with what Brooks calls accidental difficulties. There's just no magic solution for:
1) Customer needs X ... maybe
2) Customer tries to communicate X to developer
3) Developer builds some of X
4) Customer likes half of it, but it's not really /it/
5) Developer starts to see, “Oh, as I understand things, he doesn't really want X, he wants a cow.”
6) Customer says no way to bovine
7) Developer codes up a cow and lets a real end user play with it
8) Both the customer and developer watch the end user and realize the user already has a cow, but just needs a saddle.
(Wow - it's later than I thought)
Let's try Brooks again:
Nevertheless, such advances [referring to OOP] can do no more than to remove all the accidental difficulties from the expression of the design. The complexity of the design itself is essential, and such attacks make no change whatever in that. An order-of-magnitude gain can be made by object-oriented programming only if the unnecessary type-specification underbrush still in our programming language is itself nine-tenths of the work involved in designing a program product. I doubt it.
Ok - blogthing does try to address this - but I just don't follow his logic.
The data and processing are inherent to the problem, and so are therefore essential rather than accidental: any technology must grapple with the same complexity. ...

- Haskell: 85 lines

So on that particular problem Haskell really was an order of magnitude better than conventional languages. ... But is this truly a reduction in complexity, or is it just packing the same thing into fewer lines?
This was lines to solve what again ... ?
experts in several different languages were given a simplified spec taken from a real problem and asked to implement it.
So, was the use of Haskell an integral part of simplifying the original spec given to the experts in the first place?
The question that interests me is: does the order-of-magnitude improvement in the quality of the code I write with a functional language help reduce the inherent complexity of discovering the design? I still agree with Brooks here: “advances can do no more than to remove all the accidental difficulties from the expression of the design. The complexity of the design itself is essential, and such attacks make no change whatever in that.”