"Test-Only Development" with the Z3 Theorem Prover

In this post I’ll introduce a style of programming which may be totally unfamiliar and perhaps a bit magical to many readers. What if you could write a unit test for a problem and have your compiler automatically return a correct implementation? Call it "test only" instead of "test first." I’m going to translate the problem itself into a specification language and use Microsoft Research’s Z3 theorem prover to find a solution. I won’t write any implementation code at all!

A Simple Problem

Here’s the problem I’ll use as an example, which is Problem 4 from Project Euler:

A palindromic number reads the same both ways. The largest palindrome made from the product of two 2-digit numbers is 9009 = 91 × 99.

Find the largest palindrome made from the product of two 3-digit numbers.

The usual approach is to use brute force. Here’s a C# example, which I suspect is similar to what many people do:

(from    factor1 in Enumerable.Range(100, 900)
 from    factor2 in Enumerable.Range(100, 900)
 let     product = factor1 * factor2
 where   IsPalindrome(product) // defined elsewhere
 orderby product descending
 select  new { Factor1 = factor1, Factor2 = factor2, Product = product}).First()

This is not a terrible solution. It runs pretty quickly and returns the correct solution, and we can see opportunities for making it more efficient. I suspect most people would declare the problem finished and move on.

However, the LINQ syntax may obscure the fact that this is still a brute force solution. Any time we have to think about how to instruct a computer to find the answer to the problem instead of the characteristics of the problem itself, we add cognitive overhead and increase the chances of making a mistake.

Also, what is that IsPalindrome(product) hiding? Most people implement this by converting the number to a string, and comparing it with the reversed value. But it turns out that the mathematical properties of a palindromic number are critical to finding an efficient solution.
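
For reference, the usual string-based version looks something like this in C#. This is my own sketch of the common approach, not code from the query above, and the class name is purely for illustration:

using System.Linq;

static class PalindromeHelpers
{
    public static bool IsPalindrome(int number)
    {
        // Convert the number to text and compare it with its reversal.
        // It works, but it hides the algebraic structure of the problem.
        var s = number.ToString();
        return s == new string(s.Reverse().ToArray());
    }
}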

Indeed, you can solve this problem fairly quickly with pencil and paper so long as you don’t do IsPalindrome this way! (It would probably double the length of this post to explain how, so I’ll skip that. If there’s demand in comments I can explain, otherwise just read on.)

Solving with Pure Specification

For my purely declarative solution, I’m going to use the language SMT-LIB. As a pure specification language, it doesn’t allow me to define an implementation at all! Instead, I’ll use it to express the constraints of the problem, and then use MSR’s Z3 solver to find a solution. SMT-LIB uses Lisp-like S-expressions, so, yes Virginia, there will be parentheses, but don’t let that scare you off. It’s always worthwhile to learn languages which make you think about problems differently, and I think you’ll find SMT-LIB really delivers!

Since this language will seem unusual to many readers, let’s walk through the problem step by step.

First, we need to declare some of the variables used in the problem. I use "variable" here in the mathematical, rather than software, sense: A placeholder for an unknown, but not something to which I’ll be assigning varying values. So here are three variables roughly equivalent to the corresponding C# vars above:

(declare-const product Int)
(declare-const factor1 Int)
(declare-const factor2 Int)

In an S-expression, the first item inside the parentheses is the function, and the remaining items are arguments. So declare-const is the function here and the remaining items are the variable name and its "sort" (type).

Next, the problem says that product must be the, ahem, product of the two factors:

(assert (= (* factor1 factor2) product))

"assert" sounds like a unit test, doesn’t it? Indeed, many people coming to a theorem prover from a software development background will find that programming them is much more similar to writing tests than writing code. The line above just says that factor1 * factor2 = product. But it’s an assertion, not an assignment; we haven’t specified values for any of these variables.

The problem statement says that both factors are three digit numbers:

(assert (and (>= factor1 100) (<= factor1 999)))
(assert (and (>= factor2 100) (<= factor2 999)))

Mathematically, what does it mean for a number to be a palindrome? In this case, the largest product of 3 digit numbers is going to be a six digit number of the form abccba, so product = 100000a + 10000b + 1000c + 100c + 10b + a. As I noted above, expressing the relationship this way is key to finding a non-brute-force solution using algebra. But you don’t need to know that in order to use Z3, because Z3 knows algebra! All you need to know is that you should express relationships mathematically instead of using string manipulation.

(declare-const a Int)
(declare-const b Int)
(declare-const c Int)
(assert (= product (+ (* 100000 a) (* 10000 b) (* 1000 c) (* 100 c) (* 10 b) a)))

I implied above that a, b, and c are single-digit numbers, so we need to be specific about that. Also, a can’t be 0 or we won’t have a 6 digit number.

(assert (and (>= a 1) (<= a 9)))
(assert (and (>= b 0) (<= b 9)))
(assert (and (>= c 0) (<= c 9)))

These 4 assertions about a, b, and c are enough to determine that product is a palindrome. We’re not quite done yet, but let’s see how we’re doing so far. (check-sat) asks Z3 if there is a solution to the problem we’ve posed, and (get-model) displays that solution. Here’s the entire script so far:

(declare-const product Int)
(declare-const factor1 Int)
(declare-const factor2 Int)
(assert (and (>= factor1 100) (< factor1 1000)))
(assert (and (>= factor2 100) (< factor2 1000)))
(assert (= (* factor1 factor2) product))
(declare-const a Int)
(declare-const b Int)
(declare-const c Int)
(assert (and (>= a 1) (<= a 9)))
(assert (and (>= b 0) (<= b 9)))
(assert (and (>= c 0) (<= c 9)))
(assert (= product (+ (* 100000 a) (* 10000 b) (* 1000 c) (* 100 c) (* 10 b) a)))
(check-sat)
(get-model)

When you run this through Z3 (try it yourself!), the solver responds:

sat
(model
  (define-fun c () Int
    1)
  (define-fun b () Int
    0)
  (define-fun a () Int
    1)
  (define-fun product () Int
    101101)
  (define-fun factor2 () Int
    143)
  (define-fun factor1 () Int
    707)
)

That’s pretty good! sat, here, means that Z3 found a solution (it would have displayed unsat if it hadn’t). Eliding some of the syntax, the solution it found was 143 * 707 = 101101. Not bad for zero implementation code, but also not the answer to the Project Euler problem, which asks for the largest product.

Optimization

"Optimization," in Z3 parlance, refers to finding the "best" proof for the theorem, not doing it as quickly as possible. But how do we tell Z3 to find the largest product?

(Update: I had a mistake in the original version of this post, and so I’ve significantly changed this section.)

Z3 has a function called maximize, but it’s a bit limited. If I try adding (maximize product), Z3 complains:

Z3(15, 10): ERROR: Objective function '(* factor1 factor2)' is not supported

After some fiddling, however, it seems (maximize (+ factor1 factor2)) works, sort of. Adding this to the script above causes Z3 to return:

(+ factor1 factor2) |-> [1282:oo]
unknown --...

Which is to say, Z3 could not find the maximal value. ("oo" just means ∞, and unknown means it could neither prove nor disprove the theorem.) Guessing that a might be bigger than 1, I can change its range to 8..9 and Z3 arrives at a single solution:

(+ factor1 factor2) |-> 1906
sat
(model
  (define-fun b () Int
    0)
  (define-fun c () Int
    6)
  (define-fun factor1 () Int
    913)
  (define-fun factor2 () Int
    993)
  (define-fun a () Int
    9)
  (define-fun product () Int
    906609)
)

The final script is:

(declare-const product Int)
(declare-const factor1 Int)
(declare-const factor2 Int)
(assert (and (>= factor1 100) (< factor1 1000)))
(assert (and (>= factor2 100) (< factor2 1000)))
(assert (= (* factor1 factor2) product))
(declare-const a Int)
(declare-const b Int)
(declare-const c Int)
(assert (and (>= a 8) (<= a 9)))
(assert (and (>= b 0) (<= b 9)))
(assert (and (>= c 0) (<= c 9)))
(assert (= product (+ (* 100000 a) (* 10000 b) (* 1000 c) (* 100 c) (* 10 b) a)))
(maximize (+ factor1 factor2))
(check-sat)
(get-model)

This bothers me just a little, since I had to lie ever so slightly about my objective, even though I did end up with the right answer.

That’s just a limitation of Z3, and it may be fixed some day; Z3 is under active development, and the optimization features are not even in the unstable or master branches. But think about what has been achieved here: We’ve solved a problem with nothing but statements about the properties of the correct answer, and without any "implementation" code whatsoever. Also, using this technique forced me to think deeply about what the problem actually meant.

Can This Be Real?

At this point, you may have questions about doing software development in this way. Sure, it works fine for this trivial problem, but can you solve real-world problems this way? Is it fast? Are there any other tools with similar features? What are the downsides of working in this way? You may find the answers to these questions as surprising as the code above. Stay tuned!

Emerging Languages Camp Part 5: Axiomatic Language

This is the fifth post of my notes from Emerging Languages Camp last year. If you haven’t seen it already, you might want to read the Introduction to this series.

Axiomatic Language

Walter Wilson

Homepage · Slides · Presentation

One of the ways that you can describe a coding style is declarative versus imperative. That is, focusing on the desired result versus focusing on how that result should be computed, respectively. Walter Wilson’s axiomatic language attempts to take the former approach as far as possible. It is a pure specification language which attempts to provide a means for exhaustively specifying the output of a program. As such, it is more comparable with specification languages like TLA+ than with programming languages designed around the idea of producing an executable.

If you can specify every property or every possible input and output desired from a system, then you might be able to write a program which could read that specification and produce a program which implements it. In fact, dependently typed languages like Agda can do this today in a more limited capacity.

Actually building such a working system, Walter concedes, is a challenge! I’ll talk more about that challenge in a moment, but first let’s take a look at what he has actually built. I am not aware that there is any working software here; at this point, the project is just the grammar. The semantics include axioms and expressions. Axioms generate valid expressions. The syntax includes four elements:

  • Atoms: `abc, `+
  • Expression variables: %w, %3
  • String variables: $, $xyz
  • Sequences: (), (`M %x (`a $2))

Now you can use these to build axioms, with the following syntax:

<conclu> < <cond1>, …, <condn>.
<conclu>.       ! an unconditional axiom

Axioms produce axiom instances, where values are substituted for the axiom variables. Axiom instances produce valid expressions. If the conditions of an axiom instance are valid expressions, then the conclusion is a valid expression. The examples are somewhat lengthy, so I will refer you to the axiomatic language homepage, which includes many sample programs.

This was a very thought-provoking presentation. When I listened to many of the other speakers, I often had mental reactions like, "Hey, that’s a really useful thing!" or "That’s not my cup of tea." When I listened to Walter’s presentation, however, my reactions were more along the lines of, "Is this even possible?" and "If so, is it a good idea?" (In a good way!) I really don’t know the answers. When I spoke with Walter later, thanking him for making me think, he asked me if I thought that an implementation of such a system would be possible. My gut reaction is that, much like termination analysis, the general answer is no, but it might be possible to handle enough specific, useful cases to produce a usable system anyway.

Axiomatic language is clearly useful as an intellectual exercise. Would it also be useful as a practical system? As an industry, I don’t think we know very much about writing great specs. My gut feel is that it is harder to write a spec which is so complete that it can be used to produce a functional system than it is to write correct code. It is usually harder to work at a higher level of abstraction, though it’s often worth the effort!

Although Walter claims that specifications are "smaller & more readable than algorithms," my conclusion is not so clear-cut. Compare this sort in Haskell with Walter’s specification for a sort in axiomatic language. In general, when I look at Walter’s examples, I think it is fair to say that the claim that the specifications would be smaller and more readable than algorithms is, at best, debatable. Very expressive code in a contemporary, high level language, in my opinion, can do better, without introducing too much inessential imperative overhead. You can also compare Walter’s natural number addition from his slide deck with this example in Agda. The advantage of a specification over an implementation, I think, is that specifications can be free of implementation details.

These examples are not a perfect comparison. I would point out that the axiomatic language sort specification there is incomplete because it does not specify performance or space boundaries. Also, the Haskell version does not specify ordering relations, but Walter’s example does. Nevertheless, when I read the Haskell version I can clearly see what is going on, both in terms of what the result will be and I have some idea of the amount of time it will take. I can see the result fairly clearly in Walter’s version, but it takes a good bit more reading. And I have no idea how long it will take.

Actually, I don’t even know how long the sort should take, because that probably depends upon the application. A more complete specification might include information about the expected length of the input and an upper bound on time, available memory, etc. But such details, while important, bring us dangerously close to specifying an algorithm, which is exactly what Walter is trying to avoid!

For more complicated, but still realistic, problem domains ("I need a program which will calculate the correct income tax for any US citizen."), I rather doubt that a complete specification, sufficient to produce a working program, is even possible. The US tax code, vague in some places and self-contradictory in others, certainly would not provide enough information to do such a thing. However, it would still be useful if you were able to somehow translate the US tax code into a machine-readable specification, in order to test the program you produced by other means. There may be subsets of the tax code which are deterministic and it’s probably useful to verify implementations of these via machine-assisted proof.

One might at first be tempted to confuse programming via specification with waterfall, but these methodologies are orthogonal, I think. You can develop a specification in an agile manner, just like you can do waterfall without a formal specification.

Axiomatic language also reminds me of the philosophical languages of the 17th century, which attempted to produce minimal, concise grammars in which it was impossible to make an incorrect statement. Where axiomatic language differs from all of these is the as yet unfulfilled intention of enabling a system by which a program can be automatically generated (as opposed to "merely" checking satisfiability).

Up Next

In the next post in this series I’ll discuss Matt Graham’s qbert bytecode.


Emerging Languages Camp Part 4: Nimrod and Dao

This is the fourth post of my notes from Emerging Languages Camp last year. If you haven’t seen it already, you might want to read the Introduction to this series.

Nimrod: A new approach to meta programming

Andreas Rumpf

Homepage · Slides · Presentation

Nimrod’s creator, Andreas Rumpf, describes the language as a statically typed, systems programming language with clean syntax and strong meta-programming. It compiles to C. He said it had a "realtime GC," but if you look at the Nimrod release notes, you will see that it does not do cycle detection unless you enable the mark and sweep GC, and that the mark and sweep GC is not realtime. Interestingly, use of the GC is optional and the compiler removes it if you do not use it. The compiler, IDE, and package manager are all self-hosted.

My favorite slide title from this presentation (or possibly all of ELC) was "Optimizing Hello World (4)". And yes, there were three preceding slides in that series.

Andreas noted that:

echo "hello ", "world", 99

…is rewritten to:

echo([$"hello ", $"world", $99])

(where $ is Nimrod’s toString operator.) Andreas said that this does not require dynamic binding. It seems like the compiler does a lot of rewriting. Andreas said that side effect free methods might be evaluated at compile time. There is also a macro system, which appears hygienic. The slideshow has a nice example of using the template system to implement a simple DSL for HTML templating, along with the rewriting performed by the compiler on the output of a template expansion, which eventually boils down to a single string with the final HTML.

This presentation provided food for thought on what the boundaries should be between compiler rewriting and the use of library templates or macros. There’s certainly some gray area between these two.

Dao Programming Language for Scripting and Computing

Limin Fu

Homepage · Slides · Presentation

Dao is an optionally/implicitly typed language motivated by the author’s frustration with Perl and desire for a better programming language for bioinformatics. True to this origin, much of the language’s optimization is numerically-focused. There is an LLVM-based JIT and a C interop system. There is an unhygienic macro system based on EBNF specifications.

And more! Want mixins? Aspects? Async?  It’s in there.

The async feature, interestingly, is specified at the call site:

routine SumOfLogs( n = 10 )
{
    sum = 0.0
    for( i = 1 : n ) sum += log( i )
    return sum
}

fut = SumOfLogs( 2000000 ) !!    # Asynchronous mode;
while( fut.wait( 0.01 ) == 0 ) io.writeln( 'still computing' )
io.writeln( 'sum of logs =', fut.value() )

Here, the !! means "run this asynchronously."

In the next post in this series I’ll discuss Walter Wilson’s presentation on Axiomatic Language.


Emerging Languages Camp Part 3: Noether

This is the third post of my notes from Emerging Languages Camp last year. If you haven’t seen it already, you might want to read the Introduction to this series.

Noether: Symmetry in Programming Language Design

Daira Hopwood

Slides · Presentation

I found this presentation to be at once fascinating and frustrating. It was the single best talk at ELC in terms of changing how I think about programming languages. To whatever degree I went to ELC in order to learn and change my thinking about programming, this talk really delivered. At the same time, the presentation itself was kind of a mess. There were far too many slides (69, and dense with text, equations, and citations), with far too much information for the allotted time (less than an hour!). It felt like the author had planned on a full-day presentation and was surprised when the hour was up. However, I will take substance over style any day. The ideas presented here were so big that a full-day seminar would probably just scratch the surface.

Daira asked: How should we program gigantic computers? Have languages and tools improved in proportion to hardware? No. The NSA (calling back to the previous presentation) exploits flaws because the tools are not good enough. The "software crisis" is still here. However, some techniques, like pure functional programming, seem to help. How?

By imposing a symmetry. If you change a program in the same way, you should get a similar program.

Here’s some examples of symmetries in programming:

  • Confluence: in a pure language, evaluating in different orders produces the same result.
  • Alpha renaming.
  • Macro expansion (or abstraction)
  • Comments and dead code can be added and removed without changing the meaning of the program.

However, there are other programming language features which break symmetries we would like to have. Some are essential features, like failure handling and concurrency. Others are probably less essential: implicit conversion, unhygienic macros, global state, global floating-point modes, etc. A design wart in a feature can stop it from having desirable symmetries.

How do we keep desirable symmetry breaking while making undesirable symmetry breaking as difficult as possible? A possible solution to this question is a "stratified language." To some degree, languages like Erlang, Oz, and Haskell already do this, but Noether takes this idea much, much further. The name of the language is, obviously, a nod to Noether’s theorem.

There were not any substantive syntax examples in the presentation, as far as I recall. The presentation almost exclusively discussed semantics. The semantics themselves seem to be very much a work in progress. Daira’s approach is to create a hierarchy of "sublanguages" for each desirable level of symmetry-breaking. As you’ll see, this results in a somewhat dizzying taxonomy of sublanguages. However, ze says that as the language design progresses, many of these will be combined when it seems appropriate. The goal is to retain properties of inner languages when you compose them in a coordinating language.

The best "big picture" slide in the deck lists eight different sublanguages, but other sides imply there are many more. These languages add things like, variously, failure handling (but not failure creation; that’s a different sublanguage!), task-local mutable state, or Turing completeness.

Daira also listed some disadvantages of stratified languages. Rarely-used features will impose significant costs in implementation and specification complexity. The language is more complicated and will take longer to learn. Refactoring could be more difficult if it imposes a smaller sublanguage constraint on existing code.

Ze also said something which has been in the back of my mind for a few years: "Just apply existing research; termination provers got really good and nobody noticed!" Indeed, SAT solvers in general have gotten really good. A few people have noticed: They’re finding use in package management and static analysis applications. But it’s a very general technique and those applications are just the tip of the iceberg.

Coming Next…

In the next post in this series, I discuss the presentations on the Nimrod and Dao programming languages.


Emerging Languages Camp Part 2: Daimio and Babel

In this exciting installment of my notes from Emerging Languages Camp last year, some information about the Daimio and Babel programming languages. If you haven’t seen it already, you might want to read the Introduction to this series.

Daimio: A Language for Sharing

Dann Toliver

Homepage · Presentation · Slides

Daimio is a domain-specific language for customization of web applications. Dann Toliver, the presenter, says that web applications should be extensible and extensions should be shareable. In this sense, Daimio is to some degree what AutoLISP was for AutoCAD. However, "shareable," in this case, means more than just emailing source files around. As best I understand it, part of the goal is that user scripts should be able to interact with each other, kind of like Core War/Redcode.

Daimio is a work in progress. The syntax and semantics seem pretty well thought-out and there is a working implementation, but there are also some unresolved questions.

Dann listed some of the goals for the language as being a smaller language than JavaScript, with editable interfaces, extensible functionality, and expressible interaction. He likes dataflow languages. Dataflow, here, means pipes, like:

3 | add 5

There are many more substantive examples on the Daimio site.

During the question and answer period, one of the members of the audience asked Dann if he had heard of Bloom, an experimental language from Berkeley. I hadn’t, so I looked at the site. It looks pretty interesting.

Babel: An Untyped, Stack-based HLL

Clayton Bauman

Homepage · Presentation · Slides

This talk began with a political preamble about NSA spying on Americans and tech company cooperation with same. The author said his motivation for creating the language was, "to favor the rights of the user over the rights of the designer." Despite this, the remainder of the talk was technical, and it wasn’t apparent to me how his political motivation manifested itself in the language he has created. There was some discussion towards the end of supporting various types of encryption, but I don’t think that has been implemented.

Technically, it didn’t strike me that there is a whole lot new here. As the title indicates, Babel is a stack-based language. The author says it is inspired by Joy. It is untyped, but has tags.

One feature I did appreciate was that he has written a memory visualizer which creates graphs of in-memory data structures from the live heap of a running program. You can see some of these graphs in the slide deck above.

Coming next week: Noether

Noether is a really interesting programming language based on symmetries in language design. The presentation was fascinating, thought provoking, and also frustrating. Come back next week to hear more!


Emerging Languages Camp Part 1: Introduction and Gershwin

Emerging Languages Camp is an all-day event held before Strange Loop. There were 11 presentations on new and unusual programming languages in varying stages of development.

Production-ready languages like C#, Ruby, Clojure, and Haskell don’t just spring to life out of nothing. There exists a historical context of major language families (Algol, LISP, ML, etc.) as well as a "primordial soup" of amateur, research, and proof-of-concept experiments which allow for features well outside the patterns found in mainstream programming languages.

As new problems in computing arise, new languages are being created to help tackle those problems. Emerging Languages Camp brings together programming language creators, researchers, and enthusiasts annually to share their work and ideas.

Our goal is advancing the state of the art in programming language design and implementation by fostering collaboration between academics, industry practitioners, and hobbyists.

What follows are my notes from the 2013 ELC, with links to the presentations and slides. There’s way too much material here for a single post, so I’ll break this up over several days. I hope that these notes encourage you to watch some of the presentations and possibly attend a future ELC!

Gershwin: Stack-based Concatenative Clojure

Dan Gregoire

Homepage · Presentation · Slides

I found this presentation interesting because the author was able to essentially host a completely different language on top of Clojure with very few changes to the language syntax or parsing. Clojure is a LISP, whereas Gershwin is a concatenative language like FORTH or Factor. Syntactically, they have little in common. However, you can easily use both Clojure and Gershwin code in the same file or even within the same line of code. You simply omit the outer parentheses when writing Gershwin code. In this way, use of () provides a kind of "Clojure interop" for Gershwin code. Gershwin adds very little new syntax. The major additions are:

: name ;; defines a word -- "word" is Gershwin-speak for function
#[ body ] ;; anonymous function

Here are some simple examples:

2 2 +   ;; 4
: foo { :foo "bar" } get
;; "bar"
: times-2 [n--n] 2 * .  ;; Here, [n--n] is the "stack effect." This means "pop n from the stack, then put n back onto the stack." Required for function declarations
4 times-2 ;; 8
[ 1 2 3] #[2 *] map  ;; [2 4 6]

Dan talked about the implementation of the compiler. Again, I was impressed at how few changes were required versus "stock" Clojure. The compiler takes one extra argument, a flag indicating whether or not Gershwin should be enabled.

Why is this useful? Mostly, the advantages and disadvantages are similar to other stack-based, concatenative programming languages. You can produce amazingly succinct, elegant code. This requires "vigorous factoring." However, the succinctness can turn into opacity if taken "too far." It can be difficult for programmers reading stack-based code to keep track of the stack, which is not visible in the code itself (although it is visible in the REPL). This is particularly true if the code does a lot of stack manipulation. Dan said that if you feel the need for stack manipulation, you should probably read that as a need to factor your code. Gershwin also offers dataflow combinators for higher-order stack manipulation. These are essentially copied from Factor.

Dan said that Gershwin might be a good fit for your code if you find yourself using a lot of arrows ( -> ) in your Clojure.

Dan is an entertaining speaker who maintained a fast but easily understandable pace.

In the next post, I’ll discuss the presentations on Daimio and Babel.


How to Fix MSBuild Error MSB4006

You may encounter an error which looks like this:

MSB4006: There is a circular dependency in the target dependency graph involving target "ResolveProjectReferences" [MyProjectName\MyProjectName.csproj]

…when running MSBuild from the command line.

This error happens when:

  1. You run MSBuild on a machine with .NET 4.5 installed and
  2. You build a project in a solution whose SLN file contains project-level dependencies.

These dependencies are the ones you’ll get if you right-click a project in Solution Explorer, choose Project Dependencies, and start checking projects. This is usually unnecessary, since project dependencies can be inferred from the References.

The fix is to open the SLN file in your favorite text editor and search for sections like this:

Project("{AAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "MyProjectName", "MyProjectName\MyProjectName.csproj", "{B10E1FCF-DFBA-44A8-830F-6F3B54DFA7CB}"
	ProjectSection(ProjectDependencies) = postProject
		{B9E9F8CE-A607-4A6C-97F7-2BD439122F89} = {B9E9F8CE-A607-4A6C-97F7-2BD439122F89}
	EndProjectSection
EndProject

Just delete the entire ProjectSection, so your Project node looks like this:

Project("{AAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "MyProjectName", "MyProjectName\MyProjectName.csproj", "{B10E1FCF-DFBA-44A8-830F-6F3B54DFA7CB}"
EndProject

Now you should be able to use MSBuild successfully.

Cloud Security, For Real This Time

Cloud Security, For Real This Time: Homomorphic Encryption and the Future of Data Privacy. That’s the title of my presentation at the next Central Ohio OWASP Quarterly Seminar, on 27 February at 1:00 p.m. Dan King, from Microsoft, will be talking about single sign-on for federated Dynamics CRM, very practical stuff which is in real world use today. I, on the other hand, will be talking about technologies which don’t quite exist in fully practical forms today, but which I predict will change the Internet in the decade to come and which I find mind-expanding to even read about.

In the meantime, I’ll leave you with some decidedly weird Python code:

>  def my_factorial_less_than_20(n):
..   result = 1;
..   for i in range(2, 20):
..     result *= 1 if i > n else i
..   return result
>  my_factorial_less_than_20(4)
=> 24
>  my_factorial_less_than_20(100)
=> 121645100408832000L
>  my_factorial_less_than_20(1000)
=> 121645100408832000L

This looks both limited and inefficient! Is there any reason at all to write such a strange function?

In what significant way does the behavior of code written in such a style differ from more standard factorial implementations? (Hint: The answer can be found in Gödel, Escher, Bach.)


When Does Lexing End and Parsing Begin?

I had an interesting bug in my compiler: The parser would fail on blank lines. To a certain degree, this makes sense; the formal grammar of the language does not include blank lines. This is invalid input! On the other hand, every programming language ever invented, as far as I know, simply ignores them. That sounds simple, but, as the author of the compiler, it is necessary to ask: Where is the correct place to ignore a blank line?

Compiler textbooks and classes tend to divide compiler architecture into phases, and these generally begin with lexing (also called scanning) and parsing. However, this is misleading in several ways. Some compilers do not treat scanning as a separate phase at all. Of those that do, most run scanning and parsing in parallel, feeding tokens to the parser as they are recognized by the lexer, rather than completing lexing before beginning parsing. It is also common (if, admittedly, a hack) to feed information from the parser back into the lexer as scanning progresses. In all, there is not a strict distinction between lexing and parsing "phases."

If a compiler exists to transform source code into an executable program, what is the rationale for having lexing as a separate "phase?" Partially, it’s an implementation detail: Treating lexing separately can improve performance and makes certain features easier to implement. There is a theoretical justification as well; if you specify the PL as a context-free grammar using terminals which are lexemes specified with regular expressions, then it makes sense to implement these concerns separately.

Compiler textbooks do not always do a great job of explaining the border between lexing and parsing in a general sense. For example, "Compilers: Principles, Techniques, & Tools," ("the dragon book"), Second Edition, has this to say on p. 6: "Blanks separating the lexemes would be discarded by the lexical analyzer." That is true enough for the toy example in question on that page, but clearly wrong for a language with significant whitespace, like Python or Haskell. Significant whitespace is a reasonably common programming language feature, but the dragon book mostly ignores it.

Whitespace is not significant in C# or VB.NET, but Roslyn does not discard it. Instead, Roslyn syntax nodes store whitespace, comments, etc., in SyntaxTrivia fields. The intention is that the full, original source text can be rebuilt from the syntax tree.
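
A quick way to see that round-tripping in action is the sketch below. It is my own example against the public Roslyn parsing API; the class and variable names are just for illustration:

using System;
using Microsoft.CodeAnalysis.CSharp;

class TriviaRoundTrip
{
    static void Main()
    {
        var source = "class C\n{\n    // a comment\n\n    void M() { }\n}\n";
        var tree = CSharpSyntaxTree.ParseText(source);

        // ToFullString() includes every token's leading and trailing trivia,
        // so whitespace, blank lines, and comments come back exactly as written.
        Console.WriteLine(tree.GetRoot().ToFullString() == source);   // True
    }
}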

Lexers tend to treat keywords (reserved words) distinctly from identifiers such as variable or type names. The dragon book does suggest why this is the case, but, as Frans Bouma pointed out, does so in terms "requiring knowledge found hundreds of pages further on [in the book]." Other books have a similar problem: "Modern Compiler Implementation in ML" states, "A lexical token is a sequence of characters that can be treated as a unit in the grammar of a programming language." But it does this well before the notion of a "unit in the grammar of a programming language," is really explained.

The Coursera compilers course finally made this click for me: The point of a lexer is to provide input to a parser. Put differently, a lexer should supply a valid sequence of terminals to the parser, given valid (source code) input. So you need to understand parsers before you can really understand a lexer. Indeed, it may make more sense to teach compilers "backwards."

In the same way that lexers operate on a sequence of characters or code points, parsers operate on a sequence of terminals. Hence, the lexer should aim to produce a valid enumeration of terminals in the parser’s context-free grammar, given valid input.

Therefore: If the parser’s terminals are individual characters, then you do not need a lexer, and you are doing scannerless parsing. If the parser’s terminals are more than individual characters (keywords, formatted numbers, identifiers, and the like), then the job of the lexer is to produce terminals, and only terminals, given correctly formatted source code.
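
To make that concrete, here is a small sketch of such a lexer. This is illustrative C# written for this post, not the lexer from my compiler, and the token kinds belong to a toy grammar; the point is that the parser sees nothing but terminals:

using System;
using System.Collections.Generic;

enum TokenKind { Number, Plus, Minus }

record Token(TokenKind Kind, string Text);

static class Lexer
{
    public static IEnumerable<Token> Lex(string source)
    {
        int i = 0;
        while (i < source.Length)
        {
            char c = source[i];
            if (char.IsWhiteSpace(c)) { i++; continue; }   // spaces, newlines, and blank lines are not terminals
            if (char.IsDigit(c))
            {
                int start = i;
                while (i < source.Length && char.IsDigit(source[i])) i++;
                yield return new Token(TokenKind.Number, source[start..i]);
                continue;
            }
            if (c == '+') { i++; yield return new Token(TokenKind.Plus, "+"); continue; }
            if (c == '-') { i++; yield return new Token(TokenKind.Minus, "-"); continue; }
            throw new Exception($"'{c}' does not start any terminal");
        }
    }
}

Blank lines never make it past the char.IsWhiteSpace check, so the parser’s grammar never needs to mention them.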

This tells me that the correct place for me to remove blank lines in my compiler is in the lexer.


Let’s Build a Compiler… In F#!

I’m building a small compiler for a toy language which emits .NET executables, implemented in F#. Demo compilers are a dime a dozen, but there are a few things which make this project distinct:

  • No lexer or parser generators are used. The entire compiler is written from the ground up, and is intended to be fairly simple code.
  • The source code is idiomatic F#, and is almost totally purely functional.

The project started as a fairly simple port of Jack Crenshaw’s code from his classic series, "Lets Build A Compiler," but deviated pretty quickly as I found I wanted to take the project in a slightly different direction.

  • In the name of simplicity, Crenshaw avoids using distinct lexing, parsing, etc. phases in his code. I completely understand his reasoning, but found that this was confusing to me personally, because the code I was writing did not resemble the code I read in compiler textbooks. Indeed, many "educational" compiler implementations swing the other way, using a "nanopass" (many distinct phases) design.
  • F# code directly ported from Pascal is ugly, and I wanted to use purely functional code whenever possible.
  • I had been attempting to port one "chapter" of Crenshaw’s code at a time, but I fairly quickly discovered that this is exactly backwards; it makes far more sense to implement a complete compiler and then break it down into chapter-sized steps. It is easier to design a road when you know what the destination will be! I do recognize the value of Crenshaw’s "learn by doing / just write the code" approach. I want to go back and "chapterize" the compiler when I’m done.
  • Crenshaw’s code emits 68000 ASM for SK*DOS as a string; mine emits a binary .NET assembly as a file.

As I develop the compiler, I have found that I use a back-and-forth cycle of adding features and beautifying code. New features tend to be pretty ugly until I understand how the code needs to work in the end, and then I go back and rewrite the uglier bits using the knowledge I have gained in the implementation.

For example, I have just added support for declaration and dereferencing of local variables, and, with that, multiple-line programs. This resulted in a considerable expansion of both the parser and the code generator. So I’m going to go back and simplify a number of things, especially error handling. I’m going to use a combination of "railway oriented" design and error nodes in the parser.

The code does have moments of beauty, though. Here’s the top level of the compiler:

let compile = lex >> parse >> optimize >> codeGen >> methodBuilder

That’s not an ASCII art diagram; it’s actual, executable F# code.

Here is one of the unit tests:

[<TestMethod>]
member x.``2+-3 should equal -1`` () =
   Compiler.compile("2+-3") |> shouldEqual <| -1

This test uses the compiler to emit a .NET assembly in memory, execute its main function, and compare the result with the value in the test.

Anyway, this is an ongoing project. You can see where I’m at by following the GitHub repository.

