... In which the author proves himself a hopeless heretic by disparaging Longhorn ...
I attended the Microsoft Professional Developer's Conference in Los Angeles last week. Microsoft formally unveiled "Longhorn", the next version of Windows, along with a bunch of new underlying technology. The target of the conference was most emphatically developers, and the focus was "how to build stuff with these new tools". My first day's reaction was PDC = Moo!; a positive impression of a lot of cool new stuff. Parts of the conference were excellent; in particular, Rick Rashid's keynote surveying new technology from Microsoft Research was amazing. But my takeaway is... there's a lot less here than it would at first appear.
If you're still reading and haven't clicked your back button in disgust, let me explain. The most important thing is not how easy it is to build code; the most important thing is how well the code runs once it is built. This concept seems to have escaped the Longhorn developers, and from this viewpoint Longhorn and its underlying technologies are pretty unexciting.
Remember, the audience for the PDC was professional developers. We build code that other people pay to use. It might take five people six months to build an application that thousands of people use every day for years. What's more important, the experience those five people had for six months (and the fact that it took them six months instead of four), or the experience those thousands of people will have for years?
At the highest level there are two things a development platform can do to improve applications: first, improve performance, and second, enable functionality, in that order. Let's take a look at Longhorn from this angle.
Performance can be subdivided into speed and robustness. Longhorn will certainly hurt speed. Whether it helps robustness remains to be seen; we can only hope it will, given that this is a big problem for Windows currently.
If you list the most "important" applications today, you'll find that zero of them run on .NET (and zero of them run on Java), despite the fact that .NET has been out for three years, and Java much longer. Why? Because of performance.
On the client side, Microsoft Office has to be fast. Adobe Photoshop has to be fast. Quicken has to be fast. Why do people code in C++? Because it results in fast code. Why do people use sockets? Because they are fast.
Bonus question: Why did Internet Explorer take over from Netscape? Answer: Because it was faster. Really.
On the server side, Google has to be fast. Amazon has to be fast. eBay and PayPal have to be fast. Why do people use Apache? Speed. Why do people use Linux? Because it runs fast (and because it stays up). Yeah, that's right, not because it is easy (even though it is), but because it is fast and stable. Why do people use Oracle? Because it scales.
Etc. Feel free to indulge yourself by making a list of counter-examples. It will be a short list.
Generally people worry about robustness on servers more than on desktops. Servers need to be up 24x7, and are exposed publicly, while desktops can be rebooted from time to time without much impact, and are generally safely hidden behind firewalls. (Although of course clients do receive emails with attachments!) This explains why Linux is more popular as a server OS; its better stability and security are more important for servers.
It is interesting that in all the demos and discussions at the PDC, nobody worried about performance. I have to believe XAML imposes substantial overhead on the GUI (look at what XUL did to Mozilla). And vector graphics? Hopefully it can all be pawned off on GPUs and everything will work okay. WinFS is going to be a big time resource hog. I'm guessing it is painfully slow now and that there’s a bunch of people working hard trying desperately to make it fast enough (not to be confused with fast, period). Indigo isn't far enough along for performance to be assessed, but because SOA is simpler than object proxying Indigo has a great chance to be faster than COM+ or DCOM or .NET remoting (none of which were fast enough to be useful in “real” applications). Let's hope the security wrappers don't kill the basic speed of ASMX; in the real world people still use sockets with no security whatsoever, because they're fast.
In the opening keynote, Jim Allchin made a point of saying "the PDC 'bits' will be slow". He said it unapologetically, like yeah, Longhorn is slow now, but we'll make it faster later. I can appreciate that there may be debug code and features which haven't yet been optimized, but performance isn't something you add in later. It has to be designed in from the start. There were zero cases where I heard a presenter at the PDC say "this was done for performance". Functionality for developers was the guiding design principle.
Okay, so maybe performance will be a liability for Longhorn. Surely the amazing functionality enabled by Avalon and WinFS and Indigo means applications will be cooler, right? Yeah, maybe. But let's double-click on this a bit.
The range of sizes and kinds of display devices for applications keeps expanding every day. You have handheld PDAs over here (and wristwatches!), and you have giant 200dpi monitors over there. So I'm not denigrating vector graphics at all; in fact, it is probably the one thing in Longhorn which really will matter to users.
Surely WinFS is going to make applications better? I mean, XML metadata for every file. Common data shared transparently between applications. Automatic searching and grouping. What could be better than that? Well, it won't work. WinFS is going to be glacial. Whatever benefits WinFS holds for applications will be overwhelmed by performance so poor as to make them unusable.
Consider my personal computer, an ordinary Compaq laptop. The hard drive currently has 140,000 files stored in 6,000 folders, a total of 54GB of data. I may be atypical, but I don't think so. Is Microsoft seriously suggesting that XML metadata for 140,000 files is practical? I probably care about 1,000 of these files, at most. The rest are buried deep in the Windows or Program Files folders, little pieces of functionality for applications or the system which I don't know or care about.
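For what it's worth, file counts like these are easy to reproduce. Here's a minimal sketch in Python (the function name is my own invention) that walks a directory tree and tallies folders, files, and total bytes:

```python
import os

def disk_stats(root):
    """Walk a directory tree, counting folders, files, and total bytes."""
    folders = files = total_bytes = 0
    for dirpath, dirnames, filenames in os.walk(root):
        folders += len(dirnames)
        files += len(filenames)
        for name in filenames:
            try:
                total_bytes += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass  # file vanished or is unreadable; skip it
    return folders, files, total_bytes
```

Point it at a drive root and go get coffee; walking 140,000 files takes a while, which is itself a hint about what per-file metadata indexing is up against.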
Of the 140,000 files, there is one file I care about more than any other, my Outlook .PST file. This one file is a repository of all my emails, sent and received, all my calendar items, and all my contacts. Know why it is one file? For performance. Try storing every email, appointment, or contact in a separate file, and you'll have the slowest PIM known to man.
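The trade-off is easy to feel in a toy benchmark. This sketch (Python; the record data and file names are invented) writes the same records once as one file per record and once as a single repository file. The per-file layout pays a filesystem create/open cost on every record, which is exactly the cost a file-per-email PIM would pay:

```python
import os, tempfile, time

records = ["message %d: hello world\n" % i for i in range(2000)]

# One file per record (the "expose everything as files" layout)
many_dir = tempfile.mkdtemp()
t0 = time.perf_counter()
for i, rec in enumerate(records):
    with open(os.path.join(many_dir, "%d.msg" % i), "w") as f:
        f.write(rec)
many_elapsed = time.perf_counter() - t0

# One repository file (the .PST layout)
one_path = os.path.join(tempfile.mkdtemp(), "store.pst")
t0 = time.perf_counter()
with open(one_path, "w") as f:
    for rec in records:
        f.write(rec)
one_elapsed = time.perf_counter() - t0

print("%d files: %.3fs, one file: %.3fs" % (len(records), many_elapsed, one_elapsed))
```

On any disk I've tried, the single-file version wins by a wide margin, and the gap grows with record count.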
How about sharing all those calendar items with other applications? Or my contacts? Wouldn't it be better to expose these data so other applications could use them? In theory, yes. But in the real world practical considerations come into play. These hypothetical other applications have performance considerations, too. Searching my 5,500 emails or 1,200 contacts for something takes time. It would be much better for Outlook to expose a search service than for all the data to be stored in separate files.
The same kind of trade-off occurs with database design. In theory relational databases are great, you can compose ad hoc queries and fire them off to get any data in any way you want. In practice with databases of any size you have to optimize access by organizing your tables appropriately, creating indices for frequent searches, limiting search result sets, etc. There is no magic bullet.
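The indexing point can be seen in miniature with SQLite, via Python's built-in sqlite3 module (the table and column names here are invented for illustration). EXPLAIN QUERY PLAN shows the planner switching from a full table scan to an index lookup once an index exists:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (name TEXT, email TEXT)")
conn.executemany("INSERT INTO contacts VALUES (?, ?)",
                 [("person%d" % i, "p%d@example.com" % i) for i in range(1200)])

def plan(sql):
    # EXPLAIN QUERY PLAN reports how SQLite intends to execute a query;
    # the last column of each row is the human-readable detail
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT name FROM contacts WHERE email = 'p500@example.com'"
plan_before = plan(query)   # a full table scan
conn.execute("CREATE INDEX idx_email ON contacts(email)")
plan_after = plan(query)    # an index lookup
print(plan_before)
print(plan_after)
```

Nothing about the query changed; somebody just had to know to create the index. That's the "no magic bullet" part.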
How about Indigo? Clearly better, right? Yes, clearly better, as compared to any object-oriented program-to-program communication, like COM+, DCOM, or .NET remoting. (Or for that matter, clearly better than CORBA.) But clearly better than sockets? We'll see. The built-in security functionality of Indigo is compelling, and it is possible that by riding on ASMX (essentially, using SOAP to access exposed services) the overall performance will be acceptable.
It was interesting that in Don Box's introduction to Indigo at the PDC, he asked the roomful of developers "how many people have successfully deployed DCOM?" Nobody raised their hands. "How many people have successfully deployed .NET remoting?" Nobody raised their hands. "How many people have successfully deployed CORBA?" Only a few hands. Out of 3,000 developers essentially none had successfully deployed an object-oriented remote communication method. It is good that Microsoft has abandoned OO and embraced SOA.
So although Longhorn will make code easier to build, it won't really make the code run better. And that's why I'm unexcited. I'm not disappointed, mind you; my expectations were low. On August 16th I posted:
If I had three wishes for the next version of Windows, what would they be?
- Don't reinvent the wheel and change "everything". I have a feeling based on what I've read that I won't get this one.
- Networking that works. Why is it so much harder to hook PCs together than Macs? Or than Unix boxes? It shouldn't be... The whole domain master thing needs to go.
- Paging that works. Why can Unix boxes and (to a lesser extent) Macs easily run working sets larger than physical memory, whereas on a PC as soon as you start paging, the machine turns to crap?
Let's keep track of these and see how they do...
My wishes were pretty low tech, huh? And I didn't get them.
- Yeah, XAML will help me be more productive, and WinFX seems like a decent API for the OS. But why did we reinvent the wheel? Was it really that hard to build applications before? No. Will it be easier under Longhorn? Yes, but in the meantime there’s a lot of new stuff to learn. I don't mean to be a curmudgeon; well okay, yeah I do. Aside from vector graphics I don't see that much benefit from all the changes.
- I don't know if Indigo represents networking that works or not. It seems like Indigo mostly represents program-to-program communication that works (although SOAP already gave us that). Service orientation seems like a step back from object-to-object remoting in the direction of HTTP. However what about machine and network configuration? Will it be as easy as Linux, or will we still have “my network places” and “domain servers” and “active directory”? Yeah, I thought so.
- I don't think anyone cares about paging except me. Just buy more memory, right? I bet performance under Longhorn is not snappy. Lots and lots of CPU cycles, and lots and lots of memory. I still think Windows paging sucks, and fixing it would be really helpful. But what do I know.
Let me wrap up with something positive. Whidbey (the upcoming release of Visual Studio) looks really nice. Not only will it make programmers more productive, but it will make debugging easier and more thorough, resulting in more stable code for customers. And - get this! - it does not look like the wheel was reinvented. It is Visual Studio, but with incremental improvements. Yippee.
So I can keep on cranking C++ for the Win32 SDK. This is your friendly neighborhood curmudgeon, signing off...