
How To Use Real Computer Science in Your Day Job

I’ll be speaking at Lambda Jam next week. Here’s the synopsis:

When you leave Lambda Jam and return to work, do you expect to apply what you’ve learned here to hard problems, or is there just never time or permission to venture outside of fixing “undefined is not a function” in JavaScript? Many of us do use functional languages, machine learning, proof assistants, parsing, and formal methods in our day jobs, and employment by a CS research department is not a prerequisite. As a consultant who wants to choose the most effective tool for the job and keep my customers happy in the process, I’ve developed a structured approach to finding ways to use the tools of the future (plus a few from the 70s!) in the enterprises of today. I’ll share that with you and examine research into the use of formal methods in other companies. I hope you will leave the talk excited about your job!

For Columbus friends who can’t make it to Chicago, I’ll be rehearsing the presentation this coming Saturday at 3.

Provable Optimization with Microsoft Z3

A few months ago, some coworkers sent around a Ruby challenge. It appears simple, but we can sometimes learn a lot from simple problems.

Write a Ruby program that determines the smallest three digit number such that when said number is divided by the sum of its digits the answer is 20.

In case that’s not clear, let’s pick a number, say, 123. The sum of the digits of 123 is 6, and 123/6 = 20.5, so 123 is not a solution. What is?
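As a quick sanity check (a Python sketch of my own, not part of the original challenge), the worked example behaves as described:

```python
# Worked example from the text: 123 has digit sum 6, and 123/6 = 20.5, not 20.
n = 123
digit_sum = sum(int(d) for d in str(n))

print(digit_sum)      # 6
print(n / digit_sum)  # 20.5
```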

Here’s some Ruby code I wrote to solve it:

def digitSum(num, base = 10)
    num.to_s(base).split(//).inject {|z, x| z + x.to_i(base)}
end

def solution
    (100..999).step(20).reject {|n| n / digitSum(n) != 20 }.first
end

puts solution

Problem solved, right?

Well, no. For starters, it doesn’t even execute. There’s a really subtle type error in this code. You probably have to be fairly good with Ruby to figure out why without actually running it. Even when fixed, the cognitive overhead of understanding the code for even a simple problem is very high! It doesn’t look like the problem specification at all.

Does this version, when the bug is fixed, actually produce a correct answer to the problem? Does it even produce an incorrect solution? It’s not quite clear.

So maybe my solution isn’t so good. But one of my colleagues had an interesting solution:

def the_solution
    180
end

Well, that looks really, really efficient, and it typechecks. But is it the correct answer? Will you know, six months down the road, what question it’s even trying to answer? Tests would help, but the word "smallest" in the problem turns out to be tricky to test well. Do you want methods like this in code you maintain?

The Best of Both Worlds

Is there a solution which is as efficient as just returning 180 but which also proves that 180 is, in fact, the correct solution to the problem? Let’s encode the specification for the problem in a pure specification language, SMT-LIB:

(define-fun matches-problem ((n Int)) Bool
    (and
        (>= n 100)
        (< n 1000)                   ; three digits
        (= 20.0 (/ n (digit-sum n)))))

Z3/SMT-LIB doesn’t ship with a digit-sum function, so I had to write that. You can find that code in the full solution, below.

That’s most of the problem, but not quite all. We also have to show that this is the smallest solution. So let’s also assert that there exists a "smallest" solution, which means that any other solution is larger:

(declare-const num Int)
(assert (matches-problem num)) ; "num" is a solution
(assert (forall ((other Int))
    (implies (matches-problem other) (>= other num)))) ; any "other" solution is larger

Now let’s ask Z3 if this specification even makes sense, and if it could be reduced into something more efficient:

(check-sat)
(get-model)
And Z3 replies…

sat
(model
  (define-fun num () Int
    180)
)

A round of applause for the theorem prover, please! To see the full solution, try it yourself without installing anything.

One interesting point here: The output language is SMT-LIB, just like the input language. The "compile" step transforms a provably consistent and more obviously correct specification into a more efficient representation of the answer in the same language as the input. This is especially useful for problems which do not have a single answer. Z3 can return a function matching a specification as easily as it can return an integer.

What does it mean when I ask "if this specification even makes sense?" Well, let’s say I don’t like the number 180. I can exclude it with an additional assert:

(assert (> num 180))

This time, when I check-sat, Z3 replies unsat, meaning the model is not satisfiable, which means there’s definitely no solution. So 180 is not only the smallest solution to the original problem, it turns out to be the unique solution.
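As a cross-check (mine, not in the original post), a brute-force search in Python agrees that 180 is the unique solution:

```python
def digit_sum(n):
    # Sum of decimal digits, e.g. digit_sum(123) == 6
    return sum(int(d) for d in str(n))

# n / digit_sum(n) == 20 exactly when n == 20 * digit_sum(n)
solutions = [n for n in range(100, 1000) if n == 20 * digit_sum(n)]
print(solutions)  # [180]
```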

Formal methods can show that your problem specifications are consistent and that your implementation is correct, and they can also guarantee that "extreme" optimizations are correct. This turns out to be really useful in real-world problems [PDF].


What Is the Name of This Function?

There is a function I need. I know how to write it, but I don’t know if it has a standard name (like map, fold, etc.). It takes one argument — a list of something — and returns a list of 2-tuples of equal length. Each tuple contains one item from the list and the list without that item. It’s probably easiest if I show you an example:

> f [1; 2; 3];;
val it : (int * int list) list = [
    (1, [2; 3])
    (2, [1; 3])
    (3, [1; 2])

Here’s the implementation I’m using:

let f (items : 'T list) =
    let rec implementation (start : 'T list) = function
        | [] -> []
        | item :: tail -> (item, start @ tail) :: implementation (start @ [ item ]) tail
    implementation [] items
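For readers who don’t use F#, here is a minimal Python sketch of the same function (the name `picks` is my own placeholder, not a standard name):

```python
def picks(items):
    # Pair each element with the list minus that element.
    return [(items[i], items[:i] + items[i + 1:]) for i in range(len(items))]

print(picks([1, 2, 3]))  # [(1, [2, 3]), (2, [1, 3]), (3, [1, 2])]
```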

Anybody know a standard name for this function?


In case you’re curious, the reason I want this is I’m implementing a decision tree. I have a list of functions which are predicates over the domain of my example data. I need to try each function in the list, pick the "best", and then recurse over the rest of the functions. "Best" is usually measured in terms of information gain.

It’s never a great idea to do equality comparisons on functions, so it’s helpful to transform this list into a list of functions paired with the remaining functions.


Your Flying Car is Ready: Amazing Programming Tools of the Future, Today!

That’s the title of my presentation at Dog Food Conference 2014, 29-30 September, in Columbus, Ohio. If you found my post on "Test-Only Development" with the Z3 Theorem Prover interesting, then you’ll love this.

What if simply writing "unit tests" was enough to produce a program which makes them pass? What if your compiler could guarantee that your OpenSSL replacement follows the TLS specification to the letter? What if you could write a test which showed that your code had no unintentional behavior?

Microsoft Research is well known for its contributions to Kinect, F#, the Entity Framework, WorldWide Telescope, and more, but it’s also the home of a number of programming tools which do things which many programmers would consider surprising, if not impossible. But they work, and in this session you’ll see them in action.

Like the idea of code contracts, but concerned about runtime performance and errors? The Dafny language can check contracts at compile time. Sounds a bit magical, but it works! I’ll use the Z3 theorem prover to generate working programs from specifications alone. Sound impractical? I’ll explain how it is used to make Hyper-V and Windows Azure secure. I’ll show the F7 specification language for F# and relate how its authors used it to not only produce a TLS implementation which provably follows the spec, but to also identify dangerous holes in the TLS specification itself. You’ll learn how Amazon uses the TLA+ specification language to prove that there are no edge cases in its internal protocols.

Far from being research toys, these tools are in daily use in cases where stability, security, and reliability of code matters most. Can they help with your hardest problems? You might be surprised!


Cloud Security, For Real This Time: Homomorphic Encryption and the Future of Online Privacy

That’s the title of the presentation I’ll be giving at CloudDevelop 2014, on October 17th, in Columbus, Ohio. If you read my blog at all then you’re probably interested in where software development will be headed five years in the future. Two things I recommend that you study are proving systems and homomorphic encryption.

I’ve written about proving systems in the past, and will have more to say in the future, but today we’ll talk about homomorphic encryption.

Homomorphic encryption will change the web in the same way that SSL/TLS did. I say this with quite a bit more confidence than I have in the past! If you remember the web of 1993, you already know why that matters. If not, imagine the web as a magazine which could show you ads, but required calling an 800 number if you wanted to make a purchase, and contrast that with today’s web.

I have given two presentations with similar titles before. But this presentation will be an almost complete rewrite, just like the last. In the time since my first article on homomorphic encryption, it’s gone from a gleam in a mathematician’s eye to an open source DB access library. It’s a fast-moving technology, which, thankfully, becomes more practical each year.

Interestingly, as the technology for cloud security becomes more practical, the need for it becomes more pressing.

CloudDevelop 2014 is just $20, which is quite cheap, as conferences go, but you can use this link to save 50%, which means that for the $10 you might have spent for lunch that day you get a conference for free!


"Test-Only Development" with the Z3 Theorem Prover

In this post I’ll introduce a style of programming which may be totally unfamiliar and perhaps a bit magical to many readers. What if you could write a unit test for a problem and have your compiler automatically return a correct implementation? Call it "test only" instead of "test first." I’m going to translate the problem itself into a specification language and use Microsoft Research’s Z3 theorem prover to find a solution. I won’t write any implementation code at all!

A Simple Problem

Here’s the problem I’ll use as an example, which is problem #4 from Project Euler:

A palindromic number reads the same both ways. The largest palindrome made from the product of two 2-digit numbers is 9009 = 91 × 99.

Find the largest palindrome made from the product of two 3-digit numbers.

The usual approach is to use brute force. Here’s a C# example, which I suspect is similar to what many people do:

(from    factor1 in Enumerable.Range(100, 900) // 100 through 999
 from    factor2 in Enumerable.Range(100, 900)
 let     product = factor1 * factor2
 where   IsPalindrome(product) // defined elsewhere
 orderby product descending
 select  new { Factor1 = factor1, Factor2 = factor2, Product = product}).First()

This is not a terrible solution. It runs pretty quickly and returns the correct solution, and we can see opportunities for making it more efficient. I suspect most people would declare the problem finished and move on.

However, the LINQ syntax may obscure the fact that this is still a brute force solution. Any time we have to think about how to instruct a computer to find the answer to the problem instead of the characteristics of the problem itself, we add cognitive overhead and increase the chances of making a mistake.

Also, what is that IsPalindrome(product) hiding? Most people implement this by converting the number to a string, and comparing it with the reversed value. But it turns out that the mathematical properties of a palindromic number are critical to finding an efficient solution.
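The string-based check the text alludes to looks something like this (a Python sketch; the post never shows the C# version):

```python
def is_palindrome(n):
    # Compare the decimal string with its reverse.
    s = str(n)
    return s == s[::-1]

print(is_palindrome(9009))  # True
print(is_palindrome(9010))  # False
```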

Indeed, you can solve this problem fairly quickly with pencil and paper so long as you don’t do IsPalindrome this way! (It would probably double the length of this post to explain how, so I’ll skip that. If there’s demand in comments I can explain, otherwise just read on.)

Solving with Pure Specification

For my purely declarative solution, I’m going to use the language SMT-LIB. As a pure specification language, it doesn’t allow me to define an implementation at all! Instead, I’ll use it to express the constraints of the problem, and then use MSR’s Z3 solver to find a solution. SMT-LIB uses Lisp-like S-expressions, so, yes Virginia, there will be parentheses, but don’t let that scare you off. It’s always worthwhile to learn languages which make you think about problems differently, and I think you’ll find SMT-LIB really delivers!

Since this language will seem unusual to many readers, let’s walk through the problem step by step.

First, we need to declare some of the variables used in the problem. I use "variable" here in the mathematical, rather than software, sense: A placeholder for an unknown, but not something to which I’ll be assigning varying values. So here are three variables roughly equivalent to the corresponding C# vars above:

(declare-const product Int)
(declare-const factor1 Int)
(declare-const factor2 Int)

In an S-expression, the first item inside the parentheses is the function, and the remaining items are arguments. So declare-const is the function here and the remaining items are the variable name and its "sort" (type).

Next, the problem says that product must be the, ahem, product of the two factors:

(assert (= (* factor1 factor2) product))

"assert" sounds like a unit test, doesn’t it? Indeed, many people coming to a theorem prover from a software development background will find that programming them is much more similar to writing tests than writing code. The line above just says that factor1 * factor2 = product. But it’s an assertion, not an assignment; we haven’t specified values for any of these variables.

The problem statement says that both factors are three digit numbers:

(assert (and (>= factor1 100) (<= factor1 999)))
(assert (and (>= factor2 100) (<= factor2 999)))

Mathematically, what does it mean for a number to be a palindrome? In this case, the largest product of 3 digit numbers is going to be a six digit number of the form abccba, so product = 100000a + 10000b + 1000c + 100c + 10b + a. As I noted above, expressing the relationship this way is key to finding a non-brute-force solution using algebra. But you don’t need to know that in order to use Z3, because Z3 knows algebra! All you need to know is that you should express relationships mathematically instead of using string manipulation.
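To make the algebra concrete (my addition, since the post deliberately skips the pencil-and-paper details): abccba = 100001a + 10010b + 1100c = 11 × (9091a + 910b + 100c), so every six-digit palindrome is divisible by 11. A quick numeric check:

```python
# Every six-digit palindrome abccba is divisible by 11, since
# 100001*a + 10010*b + 1100*c == 11 * (9091*a + 910*b + 100*c).
for a in range(1, 10):
    for b in range(10):
        for c in range(10):
            n = 100000*a + 10000*b + 1000*c + 100*c + 10*b + a
            assert n % 11 == 0
print("all six-digit palindromes are divisible by 11")
```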

(declare-const a Int)
(declare-const b Int)
(declare-const c Int)
(assert (= product (+ (* 100000 a) (* 10000 b) (* 1000 c) (* 100 c) (* 10 b) a)))

I implied above that a, b, and c are single-digit numbers, so we need to be specific about that. Also, a can’t be 0 or we won’t have a 6 digit number.

(assert (and (>= a 1) (<= a 9)))
(assert (and (>= b 0) (<= b 9)))
(assert (and (>= c 0) (<= c 9)))

These 4 assertions about a, b, and c are enough to determine that product is a palindrome. We’re not quite done yet, but let’s see how we’re doing so far. (check-sat) asks Z3 if there is a solution to the problem we’ve posed, and (get-model) displays that solution. Here’s the entire script so far:

(declare-const product Int)
(declare-const factor1 Int)
(declare-const factor2 Int)
(assert (and (>= factor1 100) (< factor1 1000)))
(assert (and (>= factor2 100) (< factor2 1000)))
(assert (= (* factor1 factor2) product))
(declare-const a Int)
(declare-const b Int)
(declare-const c Int)
(assert (and (>= a 1) (<= a 9)))
(assert (and (>= b 0) (<= b 9)))
(assert (and (>= c 0) (<= c 9)))
(assert (= product (+ (* 100000 a) (* 10000 b) (* 1000 c) (* 100 c) (* 10 b) a)))

When you run this through Z3 (try it yourself!), the solver responds:

sat
(model
  (define-fun c () Int
    1)
  (define-fun b () Int
    0)
  (define-fun a () Int
    1)
  (define-fun product () Int
    101101)
  (define-fun factor2 () Int
    707)
  (define-fun factor1 () Int
    143)
)
That’s pretty good! sat, here, means that Z3 found a solution (it would have displayed unsat if it hadn’t). Eliding some of the syntax, the solution it found was 143 * 707 = 101101. Not bad for zero implementation code, but also not the answer to the Project Euler problem, which asks for the largest product.


Optimization

"Optimization," in Z3 parlance, refers to finding the "best" proof for the theorem, not finding a proof as quickly as possible. But how do we tell Z3 to find the largest product?

(Update: I had a mistake in the original version of this post, and so I’ve significantly changed this section.)

Z3 has a function called maximize, but it’s a bit limited. If I try adding (maximize product), Z3 complains:

Z3(15, 10): ERROR: Objective function '(* factor1 factor2)' is not supported

After some fiddling, however, it seems (maximize (+ factor1 factor2)) works, sort of. Adding this to the script above causes Z3 to return:

(+ factor1 factor2) |-> [1282:oo]
unknown

Which is to say, Z3 could not find the maximal value. ("oo" just means ∞, and unknown means it could neither prove nor disprove the theorem.) Guessing that a might be bigger than 1, I can change its range to 8..9 and Z3 arrives at a single solution:

(+ factor1 factor2) |-> 1906
sat
(model
  (define-fun b () Int
    0)
  (define-fun c () Int
    6)
  (define-fun factor1 () Int
    993)
  (define-fun factor2 () Int
    913)
  (define-fun a () Int
    9)
  (define-fun product () Int
    906609)
)

The final script is:

(declare-const product Int)
(declare-const factor1 Int)
(declare-const factor2 Int)
(assert (and (>= factor1 100) (< factor1 1000)))
(assert (and (>= factor2 100) (< factor2 1000)))
(assert (= (* factor1 factor2) product))
(declare-const a Int)
(declare-const b Int)
(declare-const c Int)
(assert (and (>= a 8 ) (<= a 9)))
(assert (and (>= b 0) (<= b 9)))
(assert (and (>= c 0) (<= c 9)))
(assert (= product (+ (* 100000 a) (* 10000 b) (* 1000 c) (* 100 c) (* 10 b) a)))
(maximize (+ factor1 factor2))

This bothers me just a little, since I had to lie ever so slightly about my objective, even though I did end up with the right answer.

That’s just a limitation of Z3, and it may be fixed some day; Z3 is under active development, and the optimization features are not even in the unstable or master branches. But think about what has been achieved here: We’ve solved a problem with nothing but statements about the properties of the correct answer, and without any "implementation" code whatsoever. Also, using this technique forced me to think deeply about what the problem actually meant.
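As an independent sanity check (mine, not in the original post), a brute-force search confirms the maximal product:

```python
# Brute-force search over all pairs of 3-digit factors (~810,000 pairs).
def is_palindrome(n):
    s = str(n)
    return s == s[::-1]

best = max(f1 * f2
           for f1 in range(100, 1000)
           for f2 in range(100, 1000)
           if is_palindrome(f1 * f2))
print(best)  # 906609 (= 913 * 993)
```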

Can This Be Real?

At this point, you may have questions about doing software development in this way. Sure, it works fine for this trivial problem, but can you solve real-world problems this way? Is it fast? Are there any other tools with similar features? What are the downsides of working in this way? You may find the answers to these questions as surprising as the code above. Stay tuned!

Emerging Languages Camp Part 5: Axiomatic Language

This is the fifth post of my notes from Emerging Languages Camp last year. If you haven’t seen it already, you might want to read the Introduction to this series.

Axiomatic Language

Walter Wilson

Homepage · Slides · Presentation

One of the ways that you can describe a coding style is declarative versus imperative. That is, focusing on the desired result versus focusing on how that result should be computed, respectively. Walter Wilson’s axiomatic language attempts to take the former approach as far as possible. It is a pure specification language which attempts to provide a means for exhaustively specifying the output of a program. As such, it is more comparable with specification languages like TLA+ than with programming languages designed around the idea of producing an executable.

If you can specify every property or every possible input and output desired from a system, then you might be able to write a program which could read that specification and produce a program which implements it. In fact, dependently typed languages like Agda can do this today in a more limited capacity.

Actually building such a working system, Walter concedes, is a challenge! I’ll talk more about that challenge in a moment, but first let’s take a look at what he has actually built. I am not aware that there is any working software here; at this point, the project is just the grammar. The semantics include axioms and expressions. Axioms generate valid expressions. The syntax includes four elements:

  • Atoms: `abc, `+
  • Expression variables: %w, %3
  • String variables: $, $xyz
  • Sequences: (), (`M %x (`a $2))

Now you can use these to build axioms, with the following syntax:

<conclu> < <cond1>, …, <condn>.
<conclu>.       ! an unconditional axiom

Axioms produce axiom instances, where values are substituted for the axiom variables. Axiom instances produce valid expressions. If the conditions of an axiom instance are valid expressions, then the conclusion is a valid expression. The examples are somewhat lengthy, so I will refer you to the axiomatic language homepage, which includes many sample programs.

This was a very thought-provoking presentation. When I listened to many of the other speakers, I often had mental reactions like, "Hey, that’s a really useful thing!" or "That’s not my cup of tea." When I listened to Walter’s presentation, however, my reactions were more along the lines of, "Is this even possible?" and "If so, is it a good idea?" (In a good way!) I really don’t know the answers. When I spoke with Walter later, thanking him for making me think, he asked me if I thought that an implementation of such a system would be possible. My gut reaction is that, much like termination analysis, the general answer is no, but it might be possible to handle enough specific, useful cases to produce a usable system anyway.

Axiomatic language is clearly useful as an intellectual exercise. Would it also be useful as a practical system? As an industry, I don’t think we know very much about writing great specs. My gut feel is that it is harder to write a spec which is so complete that it can be used to produce a functional system than it is to write correct code. It is usually harder to work at a higher level of abstraction, though it’s often worth the effort!

Although Walter claims that specifications are "smaller & more readable than algorithms," my conclusion is not so clear-cut. Compare this sort in Haskell with Walter’s specification for a sort in axiomatic language. In general, when I look at Walter’s examples, I think it is fair to say that the claim that the specifications would be smaller and more readable than algorithms is, at best, debatable. Very expressive code in a contemporary, high level language, in my opinion, can do better, without introducing too much inessential imperative overhead. You can also compare Walter’s natural number addition from his slide deck with this example in Agda. The advantage of a specification over an implementation, I think, is that specifications can be free of implementation details.

These examples are not a perfect comparison. I would point out that the axiomatic language sort specification there is incomplete because it does not specify performance or space boundaries. Also, the Haskell version does not specify ordering relations, but Walter’s example does. Nevertheless, when I read the Haskell version I can clearly see what is going on, both in terms of what the result will be and I have some idea of the amount of time it will take. I can see the result fairly clearly in Walter’s version, but it takes a good bit more reading. And I have no idea how long it will take.

Actually, I don’t even know how long the sort should take, because that probably depends upon the application. A more complete specification might include information about the expected length of the input and an upper bound on time, available memory, etc. But such details, while important, bring us dangerously close to specifying an algorithm, which is exactly what Walter is trying to avoid!

For more complicated, but still realistic, problem domains ("I need a program which will calculate the correct income tax for any US citizen."), I rather doubt that a complete specification, sufficient to produce a working program, is even possible. The US tax code, vague in some places and self-contradictory in others, certainly would not provide enough information to do such a thing. However, it would still be useful if you were able to somehow translate the US tax code into a machine-readable specification, in order to test the program you produced by other means. There may be subsets of the tax code which are deterministic and it’s probably useful to verify implementations of these via machine-assisted proof.

One might at first be tempted to confuse programming via specification with waterfall, but these methodologies are orthogonal, I think. You can develop a specification in an agile manner, just like you can do waterfall without a formal specification.

Axiomatic language also reminds me of the philosophical languages of the 17th century, which attempted to produce minimal, concise grammars in which it was impossible to make an incorrect statement. Where axiomatic language differs from all of these is the as yet unfulfilled intention of enabling a system by which a program can be automatically generated (as opposed to "merely" checking satisfiability).

Up Next

In the next post in this series I’ll discuss Matt Graham’s qbert bytecode.


Emerging Languages Camp Part 4: Nimrod and Dao

This is the fourth post of my notes from Emerging Languages Camp last year. If you haven’t seen it already, you might want to read the Introduction to this series.

Nimrod: A new approach to meta programming

Andreas Rumpf

Homepage · Slides · Presentation

Nimrod’s creator, Andreas Rumpf, describes the language as a statically typed, systems programming language with clean syntax and strong meta-programming. It compiles to C. He said it had a "realtime GC," but if you look at the Nimrod release notes, you will see that it does not do cycle detection unless you enable the mark and sweep GC, and that the mark and sweep GC is not realtime. Interestingly, use of the GC is optional and the compiler removes it if you do not use it. The compiler, IDE, and package manager are all self-hosted.

My favorite slide title from this presentation (or possibly all of ELC) was "Optimizing Hello World (4)". And yes, there were three preceding slides in that series.

Andreas noted that:

echo "hello ", "world", 99

…is rewritten to:

echo([$"hello ", $"world", $99])

(where $ is Nimrod’s toString operator.) Andreas said that this does not require dynamic binding. It seems like the compiler does a lot of rewriting. Andreas said that side effect free methods might be evaluated at compile time. There is also a macro system, which appears hygienic. The slideshow has a nice example of using the template system to implement a simple DSL for HTML templating, along with the rewriting performed by the compiler on the output of a template expansion, which eventually boils down to a single string with the final HTML.

This presentation provided food for thought on what the boundaries should be between compiler rewriting and the use of library templates or macros. There’s certainly some gray area between these two.

Dao Programming Language for Scripting and Computing

Limin Fu

Homepage · Slides · Presentation

Dao is an optionally/implicitly typed language motivated by the author’s frustration with Perl and his desire for a better programming language for bioinformatics. True to this origin, much of the language’s optimization is numerically focused. There is an LLVM-based JIT and a C interop system. There is an unhygienic macro system based on EBNF specifications.

And more! Want mixins? Aspects? Async?  It’s in there.

The async feature, interestingly, is specified at the call site:

routine SumOfLogs( n = 10 )
    sum = 0.0
    for( i = 1 : n ) sum += log( i )
    return sum

fut = SumOfLogs( 2000000 ) !!    # Asynchronous mode;
while( fut.wait( 0.01 ) == 0 ) io.writeln( 'still computing' )
io.writeln( 'sum of logs =', fut.value() )

Here, the !! means "run this asynchronously."

In the next post in this series I’ll discuss Walter Wilson’s presentation on Axiomatic Language.


Emerging Languages Camp Part 3: Noether

This is the third post of my notes from Emerging Languages Camp last year. If you haven’t seen it already, you might want to read the Introduction to this series.

Noether: Symmetry in Programming Language Design

Daira Hopwood

Slides · Presentation

I found this presentation to be at once fascinating and frustrating. It was the single best talk at ELC in terms of changing how I think about programming languages. To whatever degree I went to ELC in order to learn and change my thinking about programming, this talk really delivered. At the same time, the presentation itself was kind of a mess. There were far too many slides (69, and dense with text, equations, and citations), with far too much information for the allotted time (less than an hour!). It felt like the author had planned a full-day presentation and was surprised when the hour was up. However, I will take substance over style any day. The ideas presented here were so big that a full-day seminar would probably just scratch the surface.

Daira asked: How should we program gigantic computers? Have languages and tools improved proportional to hardware? No. The NSA (calling back to the previous presentation) exploits flaws because the tools are not good enough. The "software crisis" is still here. However, some techniques, like pure functional programming seem to help. How?

By imposing a symmetry. If you change a program in the same way, you should get a similar program.

Here’s some examples of symmetries in programming:

  • Confluence: in a pure language, evaluating in different orders produces the same result.
  • Alpha renaming.
  • Macro expansion (or abstraction)
  • Comments and dead code can be added and removed without changing the meaning of the program.

However, there are other programming language features which break symmetries we would like to have. Some are essential features, like failure handling and concurrency. Others are probably less essential: implicit conversion, unhygienic macros, global state, global floating-point modes, etc. A design wart in a feature can stop it from having desirable symmetries.

How do we keep desirable symmetry breaking while making undesirable symmetry breaking as difficult as possible? A possible solution to this question is a "stratified language." To some degree, languages like Erlang, Oz, and Haskell already do this, but Noether takes this idea much, much further. The name of the language is, obviously, a nod to Noether’s theorem.

There were not any substantive syntax examples in the presentation, as far as I recall. The presentation almost exclusively discussed semantics. The semantics themselves seem to be very much a work in progress. Daira’s approach is to create a hierarchy of "sublanguages" for each desirable level of symmetry-breaking. As you’ll see, this results in a somewhat dizzying taxonomy of sublanguages. However, ze says that as the language design progresses, many of these will be combined when it seems appropriate. The goal is to retain properties of inner languages when you compose them in a coordinating language.

The best "big picture" slide in the deck lists eight different sublanguages, but other slides imply there are many more. These languages add things like, variously, failure handling (but not failure creation; that’s a different sublanguage!), task-local mutable state, or Turing completeness.

Daira also listed some disadvantages of stratified languages. Rarely-used features will impose significant costs in implementation and specification complexity. The language is more complicated and will take longer to learn. Refactoring could be more difficult if it imposes a smaller sublanguage constraint on existing code.

Ze also said something which has been in the back of my mind for a few years: "Just apply existing research; termination provers got really good and nobody noticed!" Indeed, SAT solvers in general have gotten really good. A few people have noticed: They’re finding use in package management and static analysis applications. But it’s a very general technique and those applications are just the tip of the iceberg.

Coming Next…

In the next post in this series, I discuss the presentations on the Nimrod and Dao programming languages.


Emerging Languages Camp Part 2: Daimio and Babel

In this exciting installment of my notes from Emerging Languages Camp last year, some information about the Daimio and Babel programming languages. If you haven’t seen it already, you might want to read the Introduction to this series.

Daimio: A Language for Sharing

Dann Toliver

Homepage · Presentation · Slides

Daimio is a domain-specific language for customizing web applications. Dann Toliver, the presenter, says that web applications should be extensible and extensions should be shareable. In this sense, Daimio is to some degree what AutoLISP was for AutoCAD. However, "shareable," in this case, means more than just emailing source files around. As best I understand it, part of the goal is that user scripts should be able to interact with each other, kind of like Core War/Redcode.

Daimio is a work in progress. The syntax and semantics seem pretty well thought-out and there is a working implementation, but there are also some unresolved questions.

Dann listed some of the goals for the language as being a smaller language than JavaScript, with editable interfaces, extensible functionality, and expressible interaction. He likes dataflow languages. Dataflow here means pipes, like:

3 | add 5

There are many more substantive examples on the Daimio site.

During the question and answer period, one of the members of the audience asked Dann if he had heard of Bloom, an experimental language from Berkeley. I hadn’t, so I looked at the site. It looks pretty interesting.

Babel: An Untyped, Stack-based HLL

Clayton Bauman

Homepage · Presentation · Slides

This talk began with a political preamble about NSA spying on Americans and tech company cooperation with same. The author said his motivation for creating the language was, "to favor the rights of the user over the rights of the designer." Despite this, the remainder of the talk was technical, and it wasn’t apparent to me how his political motivation manifested itself in the language he has created. There was some discussion towards the end of supporting various types of encryption, but I don’t think that has been implemented.

Technically, it didn’t strike me that there is a whole lot new here. As the title indicates, Babel is a stack-based language. The author says it is inspired by Joy. It is untyped, but has tags.

One feature I did appreciate was that he has written a memory visualizer which creates a graph of in-memory data structures from the live heap of a running program. You can see some of these graphs in the slide deck above.

Coming next week: Noether

Noether is a really interesting programming language based on symmetries in language design. The presentation was fascinating, thought provoking, and also frustrating. Come back next week to hear more!

