Here is an excellent article by Nicholas Carr assessing the hype over the new fad of "open-source" everything. In short, those who believe that collaborative, wiki-type projects will revolutionize corporate production philosophy are probably out of touch with reality. Wikis make for a popular encyclopedia, but that product succeeds not because it lacks flaws (even its most adamant proponents would surely admit that Wikipedia is lacking in many areas) but because it's free. The open-source model, left entirely to its own devices, will likely produce many more Wikipedias, and you can't sell one of those for much.
But even as the corporate world has begun to embrace the idea of the bazaar as a forum for innovation, software programmers have continued to debate the strengths and weaknesses of peer production. The open source model has proven to be an extraordinarily powerful way to refine programs that already exist — Linux, for instance, is an elaboration of the venerable Unix operating system, and the open source Firefox browser builds on Netscape’s old Navigator — but it has proven less successful at creating exciting new programs from scratch. That fact has led some to conclude that peer production is best viewed as a means for refining the old rather than inventing the new; that it’s an optimization model more than an invention model.

Here I'm reminded of one of the early successful open-source models: the cryptographic doctrine of security through transparency. (Lo and behold, I looked for something to link for that concept and ol' Raymond is in the article. Maybe I shouldn't be so hard on Wikipedia after all. And maybe I shouldn't link a Wikipedia article to support my thesis that Wikipedia is pretty crappy.) Anyways, the theory basically says that encryption algorithms that rely on a secret in the algorithm itself (i.e., the algorithm itself is kept hidden) are not as secure as those in which the algorithm is public and only the key is secret. By the way, The Code Book is a fantastic lay explanation of basic concepts in cryptography.
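To make that concrete (the doctrine usually gets filed under Kerckhoffs's principle), here's a minimal sketch in Python. Both ciphers below are toys of my own invention, not real cryptography; the only point is where the secret lives.

```python
# Toy contrast: a cipher whose security depends on keeping the algorithm
# secret vs. one whose algorithm is public and only the key is secret.
# Both are illustrative XOR toys; do not use either for real data.

import secrets

def secret_algorithm_encrypt(plaintext: bytes) -> bytes:
    # The "secret" is a constant baked into the code. The moment anyone
    # reads this source, every message ever sent with it is broken, and
    # the only remedy is inventing a whole new algorithm.
    hidden_constant = 0x5A
    return bytes(b ^ hidden_constant for b in plaintext)

def keyed_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # One-time-pad-style XOR: the algorithm is completely public, and
    # that costs nothing, because security rests entirely on the key,
    # which is cheap to replace for every message.
    if len(key) < len(plaintext):
        raise ValueError("key must be at least as long as the message")
    return bytes(b ^ k for b, k in zip(plaintext, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))  # fresh random key per message

ct_secret = secret_algorithm_encrypt(message)
ct_keyed = keyed_encrypt(message, key)

# XOR is its own inverse, so the same functions decrypt:
assert secret_algorithm_encrypt(ct_secret) == message
assert keyed_encrypt(ct_keyed, key) == message
```

The first function can never survive publication; the second can be posted, audited, and attacked by the whole bazaar without giving anything away, which is exactly the kind of scrutiny the open model is good at.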
The reason the analogy is (hopefully) apt is that strong encryption algorithms are not developed by democratic collaboration; they are designed by the same isolated, eccentric geniuses as ever, and only then scrutinized by the masses in a process that tests and confirms their security. The wiki-digm may produce better critics, but that's a far cry from producing better authors: in a production sense, the creation of material for those critics to improve upon is still the rate-limiting step. This is the invention vs. optimization distinction, and it's one of the primary reasons this "revolution" will end not with a bang, but with a whimper:
What makes the open source model so well suited to finding and fixing software flaws is that debugging is a task that requires little coordination among workers. Debuggers are able to sift through chunks of code in isolation — whether “splendid” or not — without knowing or caring what their fellow bug finders are doing. “Debugging,” as Raymond puts it, “is parallelizable.” All the debuggers have to do is communicate their findings and fixes to some central authority, like Linus Torvalds. The central authority takes care of synthesizing the work of the crowd, choosing the best contributions, melding them together into a coherent product, and then redistributing the work to the crowd for the next go-round.
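Raymond's point is easy to see in miniature. The sketch below is my own toy model, not anything from the article: each worker sifts its chunk of a codebase in isolation, and a single maintainer synthesizes the reports.

```python
# Toy model of "parallelizable" debugging: independent workers, one central
# synthesizer. The chunking, the bug pattern, and the merge rule are all
# arbitrary stand-ins chosen for illustration.

from concurrent.futures import ProcessPoolExecutor

def find_bugs(chunk):
    # Each "debugger" sifts its chunk alone, without knowing or caring
    # what the other workers are doing.
    return [line for line in chunk if "FIXME" in line or "TODO" in line]

def maintainer_merge(reports):
    # The central authority (the Torvalds role) melds the crowd's findings
    # into one deduplicated, coherent list.
    merged = []
    for report in reports:
        for finding in report:
            if finding not in merged:
                merged.append(finding)
    return merged

if __name__ == "__main__":
    codebase = [
        "x = 1",
        "FIXME: off-by-one in loop bound",
        "y = 2",
        "TODO: handle empty input",
        "FIXME: off-by-one in loop bound",  # duplicate finding across workers
    ]
    # Any split works; the workers need zero coordination with each other.
    chunks = [codebase[i::3] for i in range(3)]
    with ProcessPoolExecutor() as pool:
        reports = list(pool.map(find_bugs, chunks))
    print(maintainer_merge(reports))
```

The coordination cost stays near zero precisely because the workers talk only to the merge step, never to each other; that's the property Carr argues breaks down once a task demands close collaboration.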
The take home:

But in Raymond’s observation, we also begin to see some of the limitations of the bazaar. First, peer production works best with routine or narrowly defined tasks that can be pursued simultaneously by a big crowd of people. It is not well suited to a job that requires a lot of coordination among the participants. If members of a large, informal group had to coordinate their efforts closely, their work would quickly bog down in complexity. The crowd’s size and diversity would turn from a strength to a weakness, and the speed advantage would be lost. Second, because it requires so many “eyeballs,” open source works best when the labor is donated or partially subsidized. If Linus Torvalds had had to compensate all his “eyeballs,” he would have gone broke long ago.
Third, and most important, the open source model — when it works effectively — is not as egalitarian or democratic as it is often made out to be. Linux has been successful not just because so many people have been involved, but because the crowd’s work has been filtered through a central authority who holds supreme power as a synthesizer and decision maker.
The bottom line is that peer production has valuable but limited applications. It can be a powerful tool, but it is no panacea. It’s a great way to find and fix problems, to collect and categorize information, or to perform any other time-consuming task that can be sped up by having lots of people with diverse perspectives working in parallel. It can also have the important added benefit of engaging customers in your innovation process, which not only allows their insights to be harnessed but also may increase their loyalty to your company.
But if peer production is a good way to mine the raw material for innovation, it doesn’t seem well suited to shaping that material into a final product. That’s a task that is still best done in the closed quarters of a cathedral, where a relatively small and formally organized group of talented professionals can collaborate closely in perfecting the fit and finish of a product. Involving a crowd in this work won’t speed it up; it will just bring delays and confusion.
Ultimately, the hitch is the same as it ever was: the judicious selection of the talented few who possess the discretion to separate the wheat from the chaff. And we can't rely on the wisdom of crowds for that.