
Is Visual Basic 9 "Haskell for the Masses"?

A few days ago, I mentioned an interesting paper (one of several) from the recent OOPSLA conference called "Confessions Of A Used Programming Language Salesman (Getting The Masses Hooked On Haskell)." Today I’d like to discuss that paper in some more detail.

The paper is written in an informal style, and includes some fairly audacious claims, such as, "Visual Basic is the ultimate language to democratize programming against the Cloud," (where "the Cloud" appears to be used more-or-less synonymously with "Web 2.0") and, "Every useful Haskell program somehow relies on unsafePerformIO." These claims distract from the substance of the paper, so I’ll leave those for the pundits to fight over.

The author, Erik Meijer, spends several pages recounting his personal history, some landmarks in the history of functional programming and interprocess communication on Windows, and some functional programming languages which never "made it." He discusses the history of the Cω programming language in substantially more detail, explaining its XML features, type-system extensions, generalized member access, query comprehensions, and nullable types, before moving on to the challenges of designing a type system which can express an XSD in a useful fashion, C# 3.0, Visual Basic 9, and LINQ. This is interesting, but I’m not going to cover it in any great detail in this post, since all of this information is available elsewhere.

Instead, I’ll focus on three ideas from the paper: The Change Function, Visual Basic 9’s type system, and research languages vs. the real world. Please understand that this is not an entirely fair characterization of the paper, as it covers a great deal more ground. But I’m choosing to focus on these ideas, since there has already been a large amount of discussion about LINQ, functional programming, etc.

The Change Function

The author states that, "The Change Function is a simple theory that predicts the success of new technologies." It reflects the ratio of the benefit a new technology offers its end user to the amount of work the user must do to adopt it. The author points out that inventors of new technologies tend to overstate the benefits of their invention, underestimate the costs of its adoption, and presume that its deficiencies will be addressed in the future. The expression of The Change Function, which he credits to Pip Coburn, is as follows:

Change Function = F(Perceived Crisis/Perceived Pain of Adoption)

The author uses Haskell, which he has unsuccessfully attempted to market "to the masses," as an example of a technology with a potentially high benefit, but also a high perceived pain of adoption. He concludes this section saying:

I had already nailed the numerator portion of The Change Function. From now on my goal in life would be to also drive the denominator down to zero to maximize my chances of success.

To this end, he took a job at Microsoft, working on the C# and Visual Basic design teams (amongst other activities; he’s also an architect in the SQL Server group).

In the conclusion of the paper, the author (fairly) notes that monads are considered one of the hardest parts of Haskell to master, and that it is therefore remarkable that they have shown up in languages such as Visual Basic 9. Now, whether VB 9 actually has monads, per se, is a topic for a different post. But it is certainly true that the forthcoming versions of C# and VB are heavily influenced by functional programming idioms, and that this influence is manifest in code which will seem confusing to existing C# and VB developers. The author states:

The explanation for this apparent contradiction is that the successful programming languages understand developer inertia and obey The Change Function. They offer a solution to deep developer pain with a very low adoption threshold. The reason why most research languages die a quick death and successful languages a slow one, is because they do not realize that users are in charge.
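
To make the monads-in-VB connection concrete, here is a minimal Haskell sketch of my own (it is not from the paper): the list monad, whose bind operation is essentially what LINQ’s SelectMany corresponds to. The comments map each line to a rough Visual Basic query-comprehension analogue.

    import Control.Monad (guard)

    -- All pairs (x, y) with x < y, written with the list monad.
    pairs :: [(Int, Int)]
    pairs = do
      x <- [1 .. 3]     -- roughly "From x In xs"
      y <- [1 .. 3]     -- roughly "From y In ys"
      guard (x < y)     -- roughly "Where x < y"
      return (x, y)     -- roughly "Select (x, y)"

    main :: IO ()
    main = print pairs  -- [(1,2),(1,3),(2,3)]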

The quoted passage covers a lot of ground, so let’s pull some of the claims apart. There is certainly a lot of truth to The Change Function. There is no shortage of examples showing that technologies with a lower pain of adoption have an advantage, in terms of market penetration, anyway, over "superior" technologies with a higher pain of adoption. But I’m not sure that The Change Function really accounts for a paradigm shift, like the transition from directly writing machine code to high-level languages, or the success of object-oriented programming. It tells an important story, but not the whole story. Advocates of functional programming, such as John Backus, maintain that it is an important paradigm shift on par with the advent of the high-level language. We’ll see.

Haskell, after all, distinguishes itself by its purity. There are other languages, such as OCaml and F#, which are intended to provide functional programming features in a not-entirely-functional environment. OCaml in particular can claim to offer entirely decent performance, a pain point which the author refers to several times. The benefit of programming in Haskell is not merely that it offers a collection of functional programming features, but that those features are based on a sound theoretical underpinning, and that the language makes it very difficult to write code with side effects. The "pain" (of learning a new programming paradigm) is intentional and is intended to be beneficial. Its advocates speak of "liberation" from "the von Neumann style," not of supplementing it.
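
For readers who have not seen this in practice, here is a tiny sketch of what that segregation of effects looks like; the compiler simply will not let the pure function perform I/O, because effects are visible in the types (leaving aside the unsafePerformIO the paper jokes about):

    -- Pure: same input, same output; no side effects are possible here.
    double :: Int -> Int
    double x = x * 2

    -- Effectful: the IO in the type is the doorway to side effects, so
    -- every caller can see that this touches the outside world.
    readAndDouble :: IO Int
    readAndDouble = fmap (double . read) getLine

    main :: IO ()
    main = readAndDouble >>= print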

On the other hand, C# 3.0 and VB 9 are hardly pain-free. Even in version 3.0, C# is still a brand-new language and a brand-new environment for many users, and LINQ will only add to the learning curve. VB is even worse, as Microsoft’s historic, near-pathological disregard for backwards compatibility in VB source code means that the future will likely bring even more pain than the present, due to the need to continually rewrite your application. The funny thing about pain is that the pain that you feel is often the maximum of the individual pain points. Having 20/20 vision does little to mask the pain of the bullet in your leg. And yet, people use Visual Basic anyway — lots of them.

So while I accept the truth and usefulness of The Change Function, I don’t think it tells the entire story, and I am not yet convinced that it makes VB 9.0 a slam dunk. It does, however, offer food for thought for anyone implementing new technologies, whether they are intended to be paradigm shifting or not.

Visual Basic’s Type System

This is an odd little section of the paper, not least because it is entitled, "The Great Internet Hype, Version 2.0." BASIC, of course, was designed to be easy; the first word of the acronym is "Beginner’s." When the author notes that people sometimes have "an outdated idea of ‘Basic’ in mind," I presume that he means that Visual Basic is intended to be powerful, not that it is intended to be difficult. Now, all Delphi users know that "easy" does not mean "wimpy," but there is perhaps some irony in the fact that this section is one of the more technically complicated bits of the paper, both in overall readability and in how comprehensible the described features will be to the average programmer. The author notes that, "People often… think that Visual Basic.NET is just C# with slightly more verbose syntax. Nothing is further from the truth." I’m not sure about "nothing," but let’s examine the substance of this.

The first bit of this section discusses a feature which makes asynchronous methods work similarly to event handlers. Meijer states, "this should be easy to grasp for programmers that already have a basic understanding of event handling," and I agree. It’s useful syntactic sugar, although it probably won’t be sufficient to make many-core fly.
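
The paper’s VB syntax is not reproduced here, but the underlying idea — fire off an operation and let a handler run when it completes — can be sketched in a few lines of Haskell using forkIO from the standard concurrency library. The async helper below is my own invention, not anything from the paper:

    import Control.Concurrent (forkIO, threadDelay)
    import Control.Monad (void)

    -- Run an action on another thread and hand its result to a handler,
    -- much as an event handler receives an event when it fires.
    async :: IO a -> (a -> IO ()) -> IO ()
    async action handler = void (forkIO (action >>= handler))

    main :: IO ()
    main = do
      async (return (6 * 7)) (\n -> putStrLn ("result: " ++ show n))
      threadDelay 100000  -- keep main alive long enough for the handler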

The next subsection is entitled, "static typing where possible, dynamic typing where needed." Here Meijer describes an attempt to combine the benefits of static and dynamic typing. This is summarized as follows:

Visual Basic is unique in that it allows static typing where possible and dynamic typing where necessary. When the receiver of a member-access expression has static type Object, member resolution is phase-shifted to runtime since at that point the dynamic type of the receiver has become its static type.

This, roughly, makes VB 9 behave somewhat like VB 6, where variables were of type Variant by default, and late bound. Now, I’m hardly an expert in VB 6, but I believe that VB 9 is stricter than VB 6 insofar as its static typing features are concerned. The notion of being able to declare that any particular argument should be handled as late bound certainly has its uses, but if I had to choose a single type with which to implement this feature, I suspect that System.Object would not be high on my list.
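
Interestingly, Haskell itself offers a narrower, opt-in version of this trade-off through the standard Data.Dynamic module: a Dynamic value defers its type test to runtime, much as a VB 9 receiver of static type Object defers member resolution. A small sketch, again my own:

    import Data.Dynamic (Dynamic, fromDynamic, toDyn)

    -- Statically typed code everywhere else; the type test happens at
    -- runtime only where a Dynamic crosses the boundary.
    describe :: Dynamic -> String
    describe d = case fromDynamic d :: Maybe Int of
      Just n  -> "an Int: " ++ show n
      Nothing -> case fromDynamic d :: Maybe String of
        Just s  -> "a String: " ++ s
        Nothing -> "something else entirely"

    main :: IO ()
    main = mapM_ (putStrLn . describe) [toDyn (42 :: Int), toDyn "hello"]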

Meijer also states, "Another aspect in which Visual Basic differs from statically typed languages such as C# and Java, is that the Visual Basic compiler inserts not just upcasts, but downcasts too, automatically." He discusses how this feature might be used to handle contravariance in delegate argument types. By sheer coincidence, Eric Lippert has been discussing the same issue in the context of C# recently. He (or perhaps the C# team) seems to be leaning more towards a programmer-specified notion of contravariance, as opposed to compiler-generated downcasts, however. Or maybe that’s just for the sake of discussion. We’ll see.
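
The principle at work here — a consumer of values is contravariant in the type it consumes — also has a tidy Haskell rendering. Haskell has no subtyping, so the adaptation is an explicit projection function rather than a compiler-inserted downcast, but the direction of the mapping is the same. The Circle type below is invented for illustration; Predicate and contramap come from Data.Functor.Contravariant in recent versions of base:

    import Data.Functor.Contravariant (Predicate (..), contramap)

    -- A hypothetical type, invented only for this illustration.
    newtype Circle = Circle { radius :: Double }

    -- A predicate over plain numbers (areas, in this usage).
    bigArea :: Predicate Double
    bigArea = Predicate (> 100)

    -- Adapt it into a predicate over Circle by mapping the *input*:
    -- consumers are contravariant in the type they consume.
    bigCircle :: Predicate Circle
    bigCircle = contramap (\c -> pi * radius c * radius c) bigArea

    main :: IO ()
    main = print (getPredicate bigCircle (Circle 10))  -- True (area is about 314)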

One Other Thing

Meijer covers one more interesting issue in the last paragraph of the conclusion of the paper:

There is one aspect of the division between research and practice that I do not know how to solve. In practice most development effort goes into the "noise" that researchers abstract away in order to drill down to the core of the problem. However, it is often in this noise that the hard implementation problems are hidden.

Well, I certainly agree with that, and it is, in fact, for that reason that I disagree with the implication earlier in the paper that a research language is less-than-successful if it is not widely adopted for real world programming. In my opinion, the success or failure of a research language is generally more in the marketplace of ideas than in shipping software.

I suspect the answer to this quandary, though, may lie in extensibility. One of the neat things about Haskell is that features such as exceptions, which in other languages must be built into the compiler, can be implemented in ordinary library code. Making the language itself extensible turns every deployed compiler into a proving ground for new language ideas, in much the same way that useful third-party IDE plug-ins have a tendency to migrate into future versions of the shipping IDE. In other words, an extensible programming language becomes a mechanism by which programmers can express and communicate their needs to the language designers.
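
To show what "implementing exceptions in code" can look like, here is a toy version built on nothing but Either, with no compiler support required. (GHC’s real exception machinery lives in Control.Exception; this is only a sketch of the principle.)

    -- "Throwing" is just returning Left.
    throwE :: e -> Either e a
    throwE = Left

    -- "Catching" inspects the result and recovers from the error case.
    catchE :: Either e a -> (e -> Either e a) -> Either e a
    catchE (Left e)  handler = handler e
    catchE (Right x) _       = Right x

    safeDiv :: Int -> Int -> Either String Int
    safeDiv _ 0 = throwE "division by zero"
    safeDiv x y = Right (x `div` y)

    -- Chained with do-notation, the Monad instance for Either aborts
    -- at the first Left, which is exception-style control flow.
    chained :: Either String Int
    chained = do
      a <- safeDiv 10 2
      b <- safeDiv a 0   -- "throws" here
      return (a + b)

    main :: IO ()
    main = do
      print chained                             -- Left "division by zero"
      print (chained `catchE` (\_ -> Right 0))  -- Right 0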

{ 2 } Comments

  1. Serg | October 26, 2007 at 11:37 pm

    I hate this form of blog. Only a quarter of my screen is used, I don’t want to read it.

  2. manu | October 27, 2007 at 6:01 pm

    Craig, newspapers use narrow widths because it allows them to cram in more text while still being legible. Optimal line width is estimated at around 50 characters per line (it might be less for a screen).

    The problem with your CSS is that while one can increase the text size, the column width is fixed (it’s in pixels; it should be in em), so you end up with a lot fewer than 50 characters per line…

    Come on! Start messing with those styles!!!!
