Latest Posts


Emerging Languages Camp Part 3: Noether

This is the third post of my notes from Emerging Languages Camp last year. If you haven’t seen it already, you might want to read the Introduction to this series.

Noether: Symmetry in Programming Language Design

Daira Hopwood

Slides · Presentation

I found this presentation to be at once fascinating and frustrating. It was the single best talk at ELC in terms of changing how I think about programming languages; to the degree I went to ELC to learn and to change my thinking about programming, this talk really delivered. At the same time, the presentation itself was kind of a mess. There were far too many slides (69, and dense with text, equations, and citations), with far too much information for the allotted time (less than an hour!). It felt like the author had planned a full-day presentation and was surprised when the hour was up. However, I will take substance over style any day. The ideas presented here were so big that a full-day seminar would probably just scratch the surface.

Daira asked: How should we program gigantic computers? Have languages and tools improved in proportion to hardware? No. The NSA (calling back to the previous presentation) exploits flaws because the tools are not good enough. The "software crisis" is still here. However, some techniques, like pure functional programming, seem to help. How?

By imposing a symmetry. A symmetry is a transformation which, applied to a program, leaves its meaning unchanged: if you change a program in a way the symmetry allows, you should get an equivalent program.

Here are some examples of symmetries in programming (the first two are sketched in code below):

  • Confluence: in a pure language, evaluating in different orders produces the same result.
  • Alpha renaming.
  • Macro expansion (or abstraction).
  • Comments and dead code can be added and removed without changing the meaning of the program.
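
To make the first two of these concrete, here is a minimal F# sketch (my own illustration, not from the talk):

// Confluence: in pure code, evaluating subexpressions in any order
// produces the same result.
let double x = x * 2
let increment x = x + 1
let total = double 3 + increment 4   // 11, whichever operand is evaluated first

// Alpha renaming: renaming a bound variable does not change the meaning.
let f = fun x -> x + 1
let g = fun y -> y + 1               // the same function as f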

However, there are other programming language features which break symmetries we would like to have. Some are essential features, like failure handling and concurrency. Others are probably less essential: implicit conversion, unhygienic macros, global state, global floating-point modes, etc. A design wart in a feature can stop it from having desirable symmetries.
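
Global mutable state is a simple illustration of a broken symmetry. Again, a sketch of my own, not from the talk:

// With shared mutable state, confluence is lost: evaluation order
// becomes observable.
let mutable counter = 0
let next () =
    counter <- counter + 1
    counter

let difference = next () - next ()   // -1 when evaluated left to right
// Reordering the two calls would flip the sign, so this program has
// fewer symmetries than a pure one.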

How do we keep desirable symmetry breaking while making undesirable symmetry breaking as difficult as possible? A possible solution to this question is a "stratified language." To some degree, languages like Erlang, Oz, and Haskell already do this, but Noether takes this idea much, much further. The name of the language is, obviously, a nod to Noether’s theorem.

There were not any substantive syntax examples in the presentation, as far as I recall. The presentation almost exclusively discussed semantics. The semantics themselves seem to be very much a work in progress. Daira’s approach is to create a hierarchy of "sublanguages" for each desirable level of symmetry-breaking. As you’ll see, this results in a somewhat dizzying taxonomy of sublanguages. However, ze says that as the language design progresses, many of these will be combined when it seems appropriate. The goal is to retain properties of inner languages when you compose them in a coordinating language.

The best "big picture" slide in the deck lists eight different sublanguages, but other sides imply there are many more. These languages add things like, variously, failure handling (but not failure creation; that’s a different sublanguage!), task-local mutable state, or Turing completeness.

Daira also listed some disadvantages of stratified languages. Rarely-used features will impose significant costs in implementation and specification complexity. The language is more complicated and will take longer to learn. Refactoring could be more difficult if it imposes a smaller sublanguage constraint on existing code.

Ze also said something which has been in the back of my mind for a few years: "Just apply existing research; termination provers got really good and nobody noticed!" Indeed, SAT solvers in general have gotten really good. A few people have noticed: They’re finding use in package management and static analysis applications. But it’s a very general technique and those applications are just the tip of the iceberg.
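
As a flavor of why SAT fits package management (my own example, not from the talk): make one boolean variable per package, turn dependencies and conflicts into clauses, and any satisfying assignment is a valid install plan. A brute-force search stands in for a real SAT solver in this F# sketch:

// Install A; A depends on B; B conflicts with C.
let satisfies (a, b, c) =
    a && (not a || b) && (not b || not c)

let plan =
    [ for a in [ true; false ] do
        for b in [ true; false ] do
          for c in [ true; false ] do
            if satisfies (a, b, c) then yield (a, b, c) ]
    |> List.head   // (true, true, false): install A and B, leave C out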

Coming Next…

In the next post in this series, I’ll discuss the presentations on the Nimrod and Dao programming languages.


posted @ Thu, 17 Apr 2014 17:06:02 +0000 by Craig Stuntz


Emerging Languages Camp Part 2: Daimio and Babel

In this exciting installment of my notes from Emerging Languages Camp last year, some information about the Daimio and Babel programming languages. If you haven’t seen it already, you might want to read the Introduction to this series.

Daimio: A Language for Sharing

Dann Toliver

Homepage · Presentation · Slides

Daimio is a domain-specific language for customization of web applications. Dann Toliver, the presenter, says that web applications should be extensible and extensions should be shareable. In this sense, Daimio is to some degree what AutoLISP was for AutoCAD. However, "shareable," in this case, means more than just emailing source files around. As best I understand it, part of the goal is that user scripts should be able to interact with each other, kind of like Core War/Redcode.

Daimio is a work in progress. The syntax and semantics seem pretty well thought-out and there is a working implementation, but there are also some unresolved questions.

Dann listed some of the goals for the language: being a smaller language than JavaScript, with editable interfaces, extensible functionality, and expressible interaction. He likes dataflow languages. Dataflow here means pipes, like:

3 | add 5
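
If you know F#, this is the same shape as the pipe operator (my analogy, not Daimio code):

let add a b = a + b
3 |> add 5   // 8, like the Daimio pipe above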

There are many more substantive examples on the Daimio site.

During the question and answer period, one of the members of the audience asked Dann if he had heard of Bloom, an experimental language from Berkeley. I hadn’t, so I looked at the site. It looks pretty interesting.

Babel: An Untyped, Stack-based HLL

Clayton Bauman

Homepage · Presentation · Slides

This talk began with a political preamble about NSA spying on Americans and tech company cooperation with same. The author said his motivation for creating the language was, "to favor the rights of the user over the rights of the designer." Despite this, the remainder of the talk was technical, and it wasn’t apparent to me how his political motivation manifested itself in the language he has created. There was some discussion towards the end of supporting various types of encryption, but I don’t think that has been implemented.

Technically, it didn’t strike me that there is a whole lot new here. As the title indicates, Babel is a stack-based language. The author says it is inspired by Joy. It is untyped, but has tags.

One feature I did appreciate was that he has written a memory visualizer which creates a graph of the in-memory data structures from the live heap of a running program. You can see some of these graphs in the slide deck above.

Coming next week: Noether

Noether is a really interesting programming language based on symmetries in language design. The presentation was fascinating, thought-provoking, and also frustrating. Come back next week to hear more!


posted @ Thu, 03 Apr 2014 13:47:57 +0000 by Craig Stuntz


Emerging Languages Camp Part 1: Introduction and Gershwin

Emerging Languages Camp is an all-day event held before Strange Loop. There were 11 presentations on new and unusual programming languages in varying stages of development.

Production-ready languages like C#, Ruby, Clojure, and Haskell don’t just spring to life out of nothing. There exists a historical context of major language families (Algol, LISP, ML, etc.) as well as a "primordial soup" of amateur, research, and proof-of-concept experiments which allow for features well outside the patterns found in mainstream programming languages.

As new problems in computing arise, new languages are being created to help tackle those problems. Emerging Languages Camp brings together programming language creators, researchers, and enthusiasts annually to share their work and ideas.

The camp’s goal is advancing the state of the art in programming language design and implementation by fostering collaboration between academics, industry practitioners, and hobbyists.

What follows are my notes from the 2013 ELC, with links to the presentations and slides. There’s way too much material here for a single post, so I’ll break this up over several days. I hope that these notes encourage you to watch some of the presentations and possibly attend a future ELC!

Gershwin: Stack-based Concatenative Clojure

Dan Gregoire

Homepage · Presentation · Slides

I found this presentation interesting because the author was able to essentially host a completely different language on top of Clojure with very few changes to the language syntax or parsing. Clojure is a LISP, whereas Gershwin is a concatenative language like FORTH or Factor. Syntactically, they have little in common. However, you can easily use both Clojure and Gershwin code in the same file or even within the same line of code. You simply omit the outer parentheses when writing Gershwin code. In this way, use of () provides a kind of "Clojure interop" for Gershwin code. Gershwin adds very little new syntax. The major additions are:

: name ;; defines a word -- "word" is Gershwin-speak for function
#[ body ] ;; anonymous function

Here are some simple examples:

2 2 +   ;; 4
: foo { :foo "bar" } get ;; "bar"
: times-2 [n--n] 2 * .  ;; [n--n] is the "stack effect": pop n from the stack, then push n back on. Stack effects are required in word definitions.
4 times-2 ;; 8
[1 2 3] #[2 *] map ;; [2 4 6]

Dan talked about the implementation of the compiler. Again, I was impressed at how few changes were required versus "stock" Clojure. The compiler takes one extra argument, a flag indicating whether or not Gershwin should be enabled.

Why is this useful? Mostly, the advantages and disadvantages are similar to those of other stack-based, concatenative programming languages. You can produce amazingly succinct, elegant code. This requires "vigorous factoring." However, the succinctness can turn into opacity if taken too far. It can be difficult for programmers reading stack-based code to keep track of the stack, which is not visible in the code itself (although it is visible in the REPL). This is particularly true if the code does a lot of stack manipulation. Dan said that if you feel the need for stack manipulation, you should probably read that as a need to factor your code. Gershwin also offers dataflow combinators for higher-order stack manipulation; these are essentially copied from Factor.

Dan said that Gershwin might be a good fit for your code if you find yourself using a lot of arrows ( -> ) in your Clojure.

Dan is an entertaining speaker who maintained a fast but easily understandable pace.

In the next post, I’ll discuss the presentations on Daimio and Babel.


posted @ Wed, 26 Mar 2014 14:25:53 +0000 by Craig Stuntz


How to Fix MSBuild Error MSB4006

You may encounter an error which looks like this:

MSB4006: There is a circular dependency in the target dependency graph involving target "ResolveProjectReferences" [MyProjectName\MyProjectName.csproj]

…when running MSBuild from the command line.

This error happens when:

  1. You run MSBuild on a machine with .NET 4.5 installed, and
  2. You build a project in a solution whose SLN file contains explicit project dependencies.

These dependencies are the ones you get if you right-click a project in Solution Explorer, choose Project Dependencies, and start checking projects. This is usually unnecessary, as project dependencies can be inferred from each project’s References.

The fix is to open the SLN file in your favorite text editor and search for sections like this:
Project("{AAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "MyProjectName", "MyProjectName\MyProjectName.csproj", "{B10E1FCF-DFBA-44A8-830F-6F3B54DFA7CB}"
	ProjectSection(ProjectDependencies) = postProject
		{B9E9F8CE-A607-4A6C-97F7-2BD439122F89} = {B9E9F8CE-A607-4A6C-97F7-2BD439122F89}
	EndProjectSection
EndProject
Just delete the entire section, so your Project node looks like this:
Project("{AAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "MyProjectName", "MyProjectName\MyProjectName.csproj", "{B10E1FCF-DFBA-44A8-830F-6F3B54DFA7CB}"
EndProject
Now you should be able to use MSBuild successfully.

posted @ Mon, 03 Mar 2014 16:25:56 +0000 by Craig Stuntz


Cloud Security, For Real This Time

Cloud Security, For Real This Time: Homomorphic Encryption and the Future of Data Privacy. That’s the title of my presentation at the next Central Ohio OWASP Quarterly Seminar, on 27 February at 1:00 p.m. Dan King, from Microsoft, will be talking about single sign-on for federated Dynamics CRM, very practical stuff which is in real-world use today. I, on the other hand, will be talking about technologies which don’t quite exist in fully practical forms today, but which I predict will change the Internet in the decade to come and which I find mind-expanding to even read about.

In the meantime, I’ll leave you with some decidedly weird Python code:

>>> def my_factorial_less_than_20(n):
...     result = 1
...     for i in range(2, 20):
...         result *= 1 if i > n else i
...     return result
...
>>> my_factorial_less_than_20(4)
24
>>> my_factorial_less_than_20(100)
121645100408832000L
>>> my_factorial_less_than_20(1000)
121645100408832000L

This looks both limited and inefficient! Is there any reason at all to write such a strange function?

In what significant way does the behavior of code written in such a style differ from more standard factorial implementations? (Hint: The answer can be found in Gödel, Escher, Bach.)


posted @ Wed, 19 Feb 2014 14:38:51 +0000 by Craig Stuntz


When Does Lexing End and Parsing Begin?

I had an interesting bug in my compiler: The parser would fail on blank lines. To a certain degree, this makes sense; the formal grammar of the language does not include blank lines. This is invalid input! On the other hand, every programming language ever invented, as far as I know, simply ignores them. That sounds simple, but, as the author of the compiler, it is necessary to ask: Where is the correct place to ignore a blank line?

Compiler textbooks and classes tend to divide compiler architecture into phases, and these generally begin with lexing (also called scanning) and parsing. However, this is misleading in several ways. Some compilers do not treat scanning as a separate phase at all. Of those that do, most run scanning and parsing in parallel, feeding tokens to the parser as they are recognized by the lexer, rather than completing lexing before beginning parsing. It is also common (if, admittedly, a hack) to feed information from the parser back into the lexer as scanning progresses. In all, there is not a strict distinction between lexing and parsing "phases."

If a compiler exists to transform source code into an executable program, what is the rationale for having lexing as a separate "phase?" Partially, it’s an implementation detail: Treating lexing separately can improve performance and makes certain features easier to implement. There is a theoretical justification as well; if you specify the PL as a context-free grammar using terminals which are lexemes specified with regular expressions, then it makes sense to implement these concerns separately.

Compiler textbooks do not always do a great job of explaining the border between lexing and parsing in a general sense. For example, "Compilers: Principles, Techniques, & Tools" ("the dragon book"), Second Edition, has this to say on p. 6: "Blanks separating the lexemes would be discarded by the lexical analyzer." That is true enough for the toy example in question on that page, but clearly wrong for a language with significant whitespace, like Python or Haskell. Significant whitespace is a reasonably common programming language feature, but the dragon book mostly ignores it.

Whitespace is not significant in C# or VB.NET, but Roslyn does not discard it. Instead, Roslyn syntax nodes store whitespace, comments, etc., in SyntaxTrivia fields. The intention is that the full, original source text can be rebuilt from the syntax tree.
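
Here is a quick sketch of what that means in practice (my illustration, assuming the Microsoft.CodeAnalysis.CSharp package):

open Microsoft.CodeAnalysis.CSharp

// Whitespace and comments survive parsing as trivia, so re-printing
// the tree reproduces the original text exactly.
let source = "class C { /* comment */   }"
let tree = CSharpSyntaxTree.ParseText source
assert (tree.GetRoot().ToFullString() = source)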

Lexers tend to treat keywords (reserved words) distinctly from identifiers such as variable or type names. The dragon book does suggest why this is the case but, as Frans Bouma pointed out, does so in terms "requiring knowledge found hundreds of pages further on [in the book]." Other books have a similar problem: "Modern Compiler Implementation in ML" states, "A lexical token is a sequence of characters that can be treated as a unit in the grammar of a programming language," but it does so well before the notion of a "unit in the grammar of a programming language" is really explained.

The Coursera compilers course finally made this click for me: The point of a lexer is to provide input to a parser. Put differently, a lexer should supply a valid sequence of terminals to the parser, given valid (source code) input. So you need to understand parsers before you can really understand a lexer. Indeed, it may make more sense to teach compilers "backwards."

In the same way that lexers operate on a sequence of characters or code points, parsers operate on a sequence of terminals. Hence, the lexer should aim to produce a valid enumeration of terminals in the parser’s context-free grammar, given valid input.

Therefore: If the parser’s terminals are characters, then you do not need a lexer and you are doing scannerless parsing. If the parser’s terminals are more than individual characters (keywords, formatted numbers, identifiers, and the like), then the job of the lexer is to produce terminals, and only terminals, given correctly-formatted source code.

This tells me that the correct place for me to remove blank lines in my compiler is in the lexer.
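
As a minimal sketch of what that looks like (an illustration, not my compiler’s actual code), here is a lexer that consumes all whitespace, blank lines included, between tokens, so the parser only ever sees terminals:

type Token =
    | Number of int
    | Plus
    | Minus

let lex (source: string) : Token list =
    let rec loop chars acc =
        match chars with
        | [] -> List.rev acc
        | c :: rest when System.Char.IsWhiteSpace c ->
            loop rest acc                          // blank lines vanish here
        | '+' :: rest -> loop rest (Plus :: acc)
        | '-' :: rest -> loop rest (Minus :: acc)
        | c :: _ when System.Char.IsDigit c ->
            let digits = List.takeWhile System.Char.IsDigit chars
            let rest = List.skipWhile System.Char.IsDigit chars
            let value = System.String(List.toArray digits) |> int
            loop rest (Number value :: acc)
        | c :: _ -> failwithf "Unexpected character '%c'" c
    loop (List.ofSeq source) []

// lex "2\n\n+ 3" produces [Number 2; Plus; Number 3]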


posted @ Wed, 08 Jan 2014 17:48:59 +0000 by Craig Stuntz


Let’s Build a Compiler… In F#!

I’m building a small compiler for a toy language which emits .NET executables, implemented in F#. Demo compilers are a dime a dozen, but there are a few things which make this project distinct:

  • No lexer or parser generators are used. The entire compiler is written from the ground up, and is intended to be fairly simple code.
  • The source code is idiomatic F#, and is almost totally purely functional.

The project started as a fairly simple port of Jack Crenshaw’s code from his classic series, "Let’s Build a Compiler," but deviated pretty quickly as I found I wanted to take the project in a slightly different direction.

  • In the name of simplicity, Crenshaw avoids using distinct lexing, parsing, etc. phases in his code. I completely understand his reasoning, but found that this was confusing to me personally, because the code I was writing did not resemble the code I read in compiler textbooks. Indeed, many "educational" compiler implementations swing the other way, using a "nanopass" (many distinct phases) design.
  • F# code directly ported from Pascal is ugly, and I wanted to use purely functional code whenever possible.
  • I had been attempting to port one "chapter" of Crenshaw’s code at a time, but I fairly quickly discovered that this is exactly backwards; it makes far more sense to implement a complete compiler and then break it down into chapter-sized steps. It is easier to design a road when you know what the destination will be! I do recognize the value of Crenshaw’s "learn by doing / just write the code" approach. I want to go back and "chapterize" the compiler when I’m done.
  • Crenshaw’s code emits 68000 ASM for SK*DOS as a string; mine emits a binary .NET assembly as a file.

As I develop the compiler, I have found that I use a back-and-forth cycle of adding features and beautifying code. New features tend to be pretty ugly until I understand how the code needs to work in the end, and then I go back and rewrite the uglier bits using the knowledge I have gained in the implementation.

For example, I have just added support for declaration and dereferencing of local variables and, with that, multiple-line programs. This resulted in a considerable expansion of both the parser and the code generator. So I’m going to go back and simplify a number of things, especially error handling. I’m going to use a combination of "railway oriented" design and error nodes in the parser.
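
The railway-oriented shape looks roughly like this minimal sketch (hypothetical types, not the project’s actual code): each phase returns a Result, and composition short-circuits on the first failure.

type CompileError = { Position: int; Message: string }

// Compose two Result-returning phases; an Error skips everything downstream.
let (>=>) f g = fun x ->
    match f x with
    | Ok value -> g value
    | Error e  -> Error e

// With phases typed 'a -> Result<'b, CompileError>, the pipeline becomes:
// let compile = lex >=> parse >=> optimize >=> codeGen >=> methodBuilder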

The code does have moments of beauty, though. Here’s the top level of the compiler:

let compile = lex >> parse >> optimize >> codeGen >> methodBuilder

That’s not an ASCII art diagram; it’s actual, executable F# code.

Here is one of the unit tests:

[<TestMethod>]
member x.``2+-3 should equal -1`` () =
   Compiler.compile("2+-3") |> shouldEqual <| -1

This test uses the compiler to emit a .NET assembly in memory, execute its main function, and compare the result with the value in the test.
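
For the curious, here is roughly how executing emitted code in memory can work (a sketch using System.Reflection.Emit, with hypothetical helper names rather than the project’s actual code):

open System.Reflection.Emit

// Build a dynamic method from emitted IL, run it, and unbox the result.
let execute (emitBody: ILGenerator -> unit) : int =
    let dm = DynamicMethod("main", typeof<int>, [||])
    emitBody (dm.GetILGenerator())
    dm.Invoke(null, [||]) :?> int

// IL for 2 + (-3), mirroring the test above:
let body (gen: ILGenerator) =
    gen.Emit(OpCodes.Ldc_I4, 2)
    gen.Emit(OpCodes.Ldc_I4, -3)
    gen.Emit(OpCodes.Add)
    gen.Emit(OpCodes.Ret)

// execute body evaluates to -1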

Anyway, this is an ongoing project. You can see where I’m at by following the GitHub repository.


posted @ Thu, 12 Dec 2013 15:12:14 +0000 by Craig Stuntz


On Learning Programming and Math at Coursera

Coursera, Udacity, MIT Open Courseware, and other such sites are useful to me because they decouple the desire to learn college-level material from the expense and regulations of earning (another) diploma. The latter isn’t compelling to me today, but the former certainly is.

I’ve now taken three Coursera courses: Functional Programming Principles in Scala, Social Network Analysis, and Coding the Matrix: Linear Algebra Through Computer Science Applications. I also tried to take Calculus: Single Variable, but found that the workload was higher than I could manage with home and work obligations. I had varying degrees of experience with these subjects before beginning the courses; Functional Programming was largely review for me, Social Network Analysis was mostly new to me, and Linear Algebra was somewhere in between.

First, I found all three courses to be an effective way for me to learn the material presented.  This is mostly because all three professors did an excellent job of selecting the exercises to complete for the course. Most of what I learned in all three courses came from completing the exercises. The quality of the lectures and the availability of written materials for offline study varied from course to course. But it’s hard to be too critical considering that the courses are free. For someone like me who cares much more about increasing my knowledge than receiving credit or some sort of official certificate, it almost seems foolish not to take at least one course at any given time.

Completing the courses gave me a chance to practice with some programming languages I don’t use in my professional work right now, like Scala, Python 3, and R.

Perhaps surprisingly, given that all three courses use Coursera’s services, the means of submitting assignments was wildly different from class to class.

For the Scala course, assignments were submitted via sbt, and graded by unit tests on the server. These tests were occasionally fallible; when the server was especially loaded, it could decide that a submission was experiencing infinite recursion (a real hazard in functional programming) when, in fact, it was simply executing the code slowly. At most other times, however, the tests were accurate and the feedback was pretty reasonable.

For the Social Network Analysis course, the programming assignments were an optional add-on. You had to attach the homework via Coursera’s site, and it would be graded somehow. It’s not clear to me what the mechanism was, but it always accepted my submissions. The final programming assignment was graded using a peer review system where three other students would rate your submission, and you would evaluate the submissions of three more students. I really appreciated the feedback I got on this; it seems that the other students put real effort into their feedback.

In the Linear Algebra course, the programming assignments were all in Python. You submit them using a custom Python script which analyzes your code using regular expressions and unit tests on the client and server. This system was the buggiest of the three; the client and server code were occasionally out of sync, and frequently rejected correct code. These issues were fixed as the course went on, but it made for some very frustrating hours.

The use of instructor-supplied unit testing varied from class to class, as well. Unit tests were always included in the Scala class assignments, although the grader would run additional tests on your code. The Linear Algebra class assignments occasionally included unit tests, but mostly did not. The grader always ran unit tests, but they were not usable by the students. This deficit was made up for by other students who would post reasonable unit tests to the class forum. The Social Network Analysis class never included any unit tests in the assignments at all.

My biggest disappointment with all of these courses, however, is non-technical. Coursera, like nearly everyone in academia, has an honor code which forbids sharing work. This stands in stark contrast to most other collaborative systems for learning programming, like Exercism, Project Euler, 4Clojure (and arguably GitHub) where you are expected to first solve a problem yourself, and then work with other developers to refine your solution into the best possible implementation. You learn how to do the work well, not just how to make tests pass.

With Coursera (with the sole exception of the Social Network Analysis final problem), you just can’t do that at all. There are forums, where you are allowed to opaquely discuss the assignment and your approach to it, but you cannot show any actual code for the assignment itself. In a real college course, you could review your work privately with a TA, so you’d at least get some feedback beyond pass/fail. There is no such option at Coursera. For me, this is the biggest barrier to real learning in Coursera, and it seems unnecessary, given that "cheating" on a zero-credit course you take for fun robs only yourself.

There are other areas in which I think Coursera could profit by deviating from the way things are done in college. It’s not clear to me why a fixed schedule is required for these classes. Why not give students the option of "auditing" a course on their own time? Since the lectures are pre-recorded and the assignments are, for the most part, graded mechanically, it seems like this should be possible. A more flexible schedule would accommodate students who cannot commit the roughly 6 to 16 hours per week that most of the classes seem to require.

These criticisms aside, however, the big picture is that all of the classes I have taken so far have been good because the professors did an excellent job of designing the courses. As long as Coursera continues to recruit such qualified teachers, I think their classes will be a good investment of my time.


posted @ Thu, 26 Sep 2013 15:48:00 +0000 by Craig Stuntz


Strange Loop Crossword

I wrote a 15×15, NYT-style crossword puzzle for Strange Loop. On the NYT difficulty scale, it’s roughly a Wednesday-level puzzle. However, it was written for Strange Loop and thus presumes familiarity with functional programming and math, and has a few "inside jokes."

You can find the puzzle and the solution on the Strange Loop wiki.


posted @ Wed, 25 Sep 2013 15:17:55 +0000 by Craig Stuntz


Google’s Research on Interviewing Technical Candidates

Yesterday’s New York Times has a good article on Google’s analysis of what works and what does not work when interviewing candidates for technical jobs. This paragraph closely matches my experience:

Behavioral interviewing also works — where you’re not giving someone a hypothetical, but you’re starting with a question like, “Give me an example of a time when you solved an analytically difficult problem.” The interesting thing about the behavioral interview is that when you ask somebody to speak to their own experience, and you drill into that, you get two kinds of information. One is you get to see how they actually interacted in a real-world situation, and the valuable “meta” information you get about the candidate is a sense of what they consider to be difficult.

I have interviewed candidates this way for years, and I can’t recall ever being wrong in my assessment of a developer’s general intelligence and technical skills. However, I have made mistakes! In particular, this method doesn’t necessarily give you a good indication of how well the candidate will get along with other members of the team, and whether or not they will behave professionally. I’m not as good at assessing that as I am at assessing technical skill.

Unsurprisingly, Google also found

[...] that brainteasers are a complete waste of time. How many golf balls can you fit into an airplane? How many gas stations in Manhattan? A complete waste of time. They don’t predict anything. They serve primarily to make the interviewer feel smart.

The article doesn’t mention anything about whiteboard coding, but I also find that useful in technical interviews.

I have to disagree with the headline, however. While big data might not be able to tell you who to hire, it clearly told Google that they were doing it incorrectly!


posted @ Thu, 20 Jun 2013 13:46:05 +0000 by Craig Stuntz


