Emerging Languages Camp Part 2: Daimio and Babel

In this exciting installment of my notes from Emerging Languages Camp last year, some information about the Daimio and Babel programming languages. If you haven’t seen it already, you might want to read the Introduction to this series.

Daimio: A Language for Sharing

Dann Toliver

Homepage · Presentation · Slides

Daimio is a domain-specific language for customization of web applications. Dann Toliver, the presenter, says that web applications should be extensible and extensions should be sharable. In this sense, Daimio is to some degree what AutoLISP was for AutoCAD. However, "sharable," in this case, means more than just emailing source files around. As best I understand it, part of the goal is that user scripts should be able to interact with each other, kind of like Core War/Redcode.

Daimio is a work in progress. The syntax and semantics seem pretty well thought-out and there is a working implementation, but there are also some unresolved questions.

Dann listed some of the goals for the language: being smaller than JavaScript, with editable interfaces, extensible functionality, and expressible interaction. He likes dataflow languages. Dataflow, here, means pipes, like:

3 | add 5

There are many more substantive examples on the Daimio site.
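
To give a feel for the pipe idea, here is a tiny Python sketch. It is purely illustrative; the Pipe class is hypothetical and has nothing to do with Daimio's implementation. It emulates dataflow piping by overloading Python's | operator:

class Pipe:
    """Wrap a function so that `value | Pipe(f)` applies f to value."""
    def __init__(self, fn):
        self.fn = fn

    def __ror__(self, value):  # invoked for: value | pipe
        return self.fn(value)

add5 = Pipe(lambda x: x + 5)
print(3 | add5)  # prints 8, analogous to Daimio's "3 | add 5"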

During the question and answer period, one of the members of the audience asked Dann if he had heard of Bloom, an experimental language from Berkeley. I hadn’t heard of it, so I looked at the site. It looks pretty interesting.

Babel: An Untyped, Stack-based HLL

Clayton Bauman

Homepage · Presentation · Slides

This talk began with a political preamble about NSA spying on Americans and tech company cooperation with same. The author said his motivation for creating the language was, "to favor the rights of the user over the rights of the designer." Despite this, the remainder of the talk was technical, and it wasn’t apparent to me how his political motivation manifested itself in the language he has created. There was some discussion towards the end of supporting various types of encryption, but I don’t think that has been implemented.

Technically, it didn’t strike me that there is a whole lot new here. As the title indicates, Babel is a stack-based language. The author says it is inspired by Joy. It is untyped, but has tags.

One feature I did appreciate was his memory visualizer, which creates graphs of in-memory data structures from the live heap of a running program. You can see some of these graphs in the slide deck above.

Coming next week: Noether

Noether is a really interesting programming language based on symmetries in language design. The presentation was fascinating, thought provoking, and also frustrating. Come back next week to hear more!

Emerging Languages Camp Part 1: Introduction and Gershwin

Emerging Languages Camp is an all-day event held before Strange Loop. There were 11 presentations on new and unusual programming languages in varying stages of development.

Production-ready languages like C#, Ruby, Clojure, and Haskell don’t just spring to life out of nothing. There exists a historical context of major language families (Algol, LISP, ML, etc.) as well as a "primordial soup" of amateur, research, and proof-of-concept experiments which allow for features well outside the patterns found in mainstream programming languages.

As new problems in computing arise, new languages are being created to help tackle those problems. Emerging Languages Camp brings together programming language creators, researchers, and enthusiasts annually to share their work and ideas.

The camp’s stated goal is "advancing the state of the art in programming language design and implementation by fostering collaboration between academics, industry practitioners, and hobbyists."

What follows are my notes from the 2013 ELC, with links to the presentations and slides. There’s way too much material here for a single post, so I’ll break this up over several days. I hope that these notes encourage you to watch some of the presentations and possibly attend a future ELC!

Gershwin: Stack-based Concatenative Clojure

Dan Gregoire

Homepage · Presentation · Slides

I found this presentation interesting because the author was able to essentially host a completely different language on top of Clojure with very few changes to the language syntax or parsing. Clojure is a LISP, whereas Gershwin is a concatenative language like FORTH or Factor. Syntactically, they have little in common. However, you can easily use both Clojure and Gershwin code in the same file or even within the same line of code. You simply omit the outer parentheses when writing Gershwin code. In this way, use of () provides a kind of "Clojure interop" for Gershwin code. Gershwin adds very little new syntax. The major additions are:

: name ;; defines a word -- "word" is Gershwin-speak for function
#[ body ] ;; anonymous function

Here are some simple examples:

2 2 +   ;; 4
: foo { :foo "bar" } get ;; "bar"
: times-2 [n--n] 2 * .  ;; [n--n] is the "stack effect": pop n from the stack, then push n back. Stack effects are required in word definitions.
4 times-2 ;; 8
[1 2 3] #[2 *] map  ;; [2 4 6]
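
If the stack model is unfamiliar, here is a toy illustration in Python (not Gershwin, and not how Gershwin is implemented) of how "2 2 +" evaluates: each word pops its arguments from a shared stack and pushes its result.

stack = []

def push(x):
    stack.append(x)

def add():
    # The "+" word: pop two values, push their sum.
    push(stack.pop() + stack.pop())

push(2)
push(2)
add()
print(stack)  # [4], matching the Gershwin expression "2 2 +"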

Dan talked about the implementation of the compiler. Again, I was impressed at how few changes were required versus "stock" Clojure. The compiler takes one extra argument, a flag indicating whether or not Gershwin should be enabled.

Why is this useful? Mostly, the advantages and disadvantages are similar to those of other stack-based, concatenative programming languages. You can produce amazingly succinct, elegant code. This requires "vigorous factoring." However, the succinctness can turn into opacity if taken too far. It can be difficult for programmers reading stack-based code to keep track of the stack, which is not visible in the code itself (although it is visible in the REPL). This is particularly true if the code does a lot of stack manipulation. Dan said that if you feel the need for stack manipulation, you should probably read that as a need to factor your code. Gershwin also offers dataflow combinators for higher-order stack manipulation. These are essentially copied from Factor.

Dan said that Gershwin might be a good fit for your code if you find yourself using a lot of arrows ( -> ) in your Clojure.

Dan is an entertaining speaker who maintained a fast but easily understandable pace.

In the next post, I’ll discuss the presentations on Daimio and Babel.

How to Fix MSBuild Error MSB4006

You may encounter an error which looks like this:

MSB4006: There is a circular dependency in the target dependency graph involving target "ResolveProjectReferences" [MyProjectName\MyProjectName.csproj]

…when running MSBuild from the command line.

This error happens when:

  1. You run MSBuild on a machine with .NET 4.5 installed, and
  2. You build a project in a solution whose SLN file itself contains project dependencies.

These dependencies are what you get if you right-click a project in Solution Explorer, choose Project Dependencies, and start checking projects. This is usually unnecessary, as project dependencies can be inferred from each project’s References.

The fix is to open the SLN file in your favorite text editor and search for sections like this:

Project("{AAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "MyProjectName", "MyProjectName\MyProjectName.csproj", "{B10E1FCF-DFBA-44A8-830F-6F3B54DFA7CB}"
	ProjectSection(ProjectDependencies) = postProject
		{B9E9F8CE-A607-4A6C-97F7-2BD439122F89} = {B9E9F8CE-A607-4A6C-97F7-2BD439122F89}
	EndProjectSection
EndProject

Just delete the entire ProjectSection, so your Project node looks like this:

Project("{AAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "MyProjectName", "MyProjectName\MyProjectName.csproj", "{B10E1FCF-DFBA-44A8-830F-6F3B54DFA7CB}"
EndProject

Now you should be able to use MSBuild successfully.

Cloud Security, For Real This Time

Cloud Security, For Real This Time: Homomorphic Encryption and the Future of Data Privacy. That’s the title of my presentation at the next Central Ohio OWASP Quarterly Seminar, on 27 February at 1:00 p.m. Dan King, from Microsoft, will be talking about single sign-on for federated Dynamics CRM, very practical stuff which is in real-world use today. I, on the other hand, will be talking about technologies which don’t quite exist in fully practical forms today, but which I predict will change the Internet in the decade to come, and which I find mind-expanding to even read about.

In the meantime, I’ll leave you with some decidedly weird Python code:

>>> def my_factorial_less_than_20(n):
...     result = 1
...     for i in range(2, 20):
...         result *= 1 if i > n else i
...     return result
...
>>> my_factorial_less_than_20(4)
24
>>> my_factorial_less_than_20(100)
121645100408832000L
>>> my_factorial_less_than_20(1000)
121645100408832000L

This looks both limited and inefficient! Is there any reason at all to write such a strange function?

In what significant way does the behavior of code written in such a style differ from more standard factorial implementations? (Hint: The answer can be found in Gödel, Escher, Bach.)
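
For comparison, here is a conventional loop-based factorial (my own, added purely for contrast); consider what the function above does that this one does not:

def standard_factorial(n):
    result = 1
    for i in range(2, n + 1):  # note: the loop bound depends on n
        result *= i
    return result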

When Does Lexing End and Parsing Begin?

I had an interesting bug in my compiler: The parser would fail on blank lines. To a certain degree, this makes sense; the formal grammar of the language does not include blank lines. This is invalid input! On the other hand, every programming language ever invented, as far as I know, simply ignores them. That sounds simple, but, as the author of the compiler, it is necessary to ask: Where is the correct place to ignore a blank line?

Compiler textbooks and classes tend to divide compiler architecture into phases, and these generally begin with lexing (also called scanning) and parsing. However, this is misleading in several ways. Some compilers do not treat scanning as a separate phase at all. Of those that do, most run scanning and parsing in parallel, feeding tokens to the parser as they are recognized by the lexer, rather than completing lexing before beginning parsing. It is also common (if, admittedly, a hack) to feed information from the parser back into the lexer as scanning progresses. In all, there is not a strict distinction between lexing and parsing "phases."

If a compiler exists to transform source code into an executable program, what is the rationale for having lexing as a separate "phase?" Partially, it’s an implementation detail: Treating lexing separately can improve performance and makes certain features easier to implement. There is a theoretical justification as well; if you specify the PL as a context-free grammar using terminals which are lexemes specified with regular expressions, then it makes sense to implement these concerns separately.

Compiler textbooks do not always do a great job of explaining the border between lexing and parsing in a general sense. For example, "Compilers: Principles, Techniques, & Tools," ("the dragon book"), Second Edition, has this to say on p. 6: "Blanks separating the lexemes would be discarded by the lexical analyzer." That is true enough for the toy example in question on that page, but clearly wrong for a language with significant whitespace, like Python or Haskell. Significant whitespace is a reasonably common programming language feature, but the dragon book mostly ignores it.

Whitespace is not significant in C# or VB.NET, but Roslyn does not discard it. Instead, Roslyn syntax nodes store whitespace, comments, etc., in SyntaxTrivia fields. The intention is that the full, original source text can be rebuilt from the syntax tree.

Lexers tend to treat keywords (reserved words) distinctly from identifiers such as variable or type names. The dragon book does suggest why this is the case, but, as Frans Bouma pointed out, does so in terms "requiring knowledge found hundreds of pages further on [in the book]." Other books have a similar problem: "Modern Compiler Implementation in ML" states, "A lexical token is a sequence of characters that can be treated as a unit in the grammar of a programming language." But it does this well before the notion of a "unit in the grammar of a programming language," is really explained.

The Coursera compilers course finally made this click for me: The point of a lexer is to provide input to a parser. Put differently, a lexer should supply a valid sequence of terminals to the parser, given valid (source code) input. So you need to understand parsers before you can really understand a lexer. Indeed, it may make more sense to teach compilers "backwards."

In the same way that lexers operate on a sequence of characters or code points, parsers operate on a sequence of terminals. Hence, the lexer should aim to produce a valid enumeration of terminals in the parser’s context-free grammar, given valid input.

Therefore: If the parser’s terminals are characters, then you do not need a lexer; you are doing scannerless parsing. If the parser’s terminals are more than individual characters (keywords, formatted numbers, identifiers, and the like), then the job of the lexer is to produce terminals, and only terminals, given correctly formatted source code.

This tells me that the correct place for me to remove blank lines in my compiler is in the lexer.
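
Here is a minimal sketch of that conclusion: a toy Python lexer with a made-up token set, not the compiler described above. Blank lines match the whitespace rule and are simply never emitted, so the parser sees only valid terminals.

import re

# Toy token set for illustration; a real lexer would also distinguish
# keywords from identifiers here.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),  # whitespace, including entirely blank lines
]
TOKEN_RE = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def lex(source):
    # A generator, so tokens stream to the parser on demand rather than
    # lexing finishing before parsing begins.
    for match in TOKEN_RE.finditer(source):
        if match.lastgroup != "SKIP":  # discard whitespace; never emit it
            yield (match.lastgroup, match.group())

print(list(lex("x = 1\n\n\ny = x + 2\n")))
# [('IDENT', 'x'), ('OP', '='), ('NUMBER', '1'),
#  ('IDENT', 'y'), ('OP', '='), ('IDENT', 'x'), ('OP', '+'), ('NUMBER', '2')]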

Let’s Build a Compiler… In F#!

I’m building a small compiler for a toy language which emits .NET executables, implemented in F#. Demo compilers are a dime a dozen, but there are a few things which make this project distinct:

  • No lexer or parser generators are used. The entire compiler is written from the ground up, and is intended to be fairly simple code.
  • The source code is idiomatic F#, and is almost entirely purely functional.

The project started as a fairly simple port of Jack Crenshaw’s code from his classic series, "Let’s Build a Compiler," but deviated pretty quickly as I found I wanted to take the project in a slightly different direction.

  • In the name of simplicity, Crenshaw avoids using distinct lexing, parsing, etc. phases in his code. I completely understand his reasoning, but found that this was confusing to me personally, because the code I was writing did not resemble the code I read in compiler textbooks. Indeed, many "educational" compiler implementations swing the other way, using a "nanopass" (many distinct phases) design.
  • F# code directly ported from Pascal is ugly, and I wanted to use purely functional code whenever possible.
  • I had been attempting to port one "chapter" of Crenshaw’s code at a time, but I fairly quickly discovered that this is exactly backwards; it makes far more sense to implement a complete compiler and then break it down into chapter-sized steps. It is easier to design a road when you know what the destination will be! I do recognize the value of Crenshaw’s "learn by doing / just write the code" approach. I want to go back and "chapterize" the compiler when I’m done.
  • Crenshaw’s code emits 68000 ASM for SK*DOS as a string; mine emits a binary .NET assembly as a file.

As I develop the compiler, I have found that I use a back-and-forth cycle of adding features and beautifying code. New features tend to be pretty ugly until I understand how the code needs to work in the end, and then I go back and rewrite the uglier bits using the knowledge I have gained in the implementation.

For example, I have just added support for declaration and dereferencing of local variables, and, with that, multiple-line programs. This resulted in a considerable expansion of both the parser and the code generator. So I’m going to go back and simplify a number of things, especially error handling. I’m going to use a combination of "railway oriented" design and error nodes in the parser.

The code does have moments of beauty, though. Here’s the top level of the compiler:

let compile = lex >> parse >> optimize >> codeGen >> methodBuilder

That’s not an ASCII art diagram; it’s actual, executable F# code.

Here is one of the unit tests:

[<TestMethod>]
member x.``2+-3 should equal -1`` () =
   Compiler.compile("2+-3") |> shouldEqual <| -1

This test uses the compiler to emit a .NET assembly in memory, execute its main function, and compare the result with the value in the test.

Anyway, this is an ongoing project. You can see where I’m at by following the GitHub repository.

On Learning Programming and Math at Coursera

Coursera, Udacity, MIT Open Courseware, and other such sites are useful to me because they decouple the desire to learn college-level material from the expense and regulations of earning (another) diploma. The latter isn’t compelling to me today, but the former certainly is.

I’ve now taken three Coursera courses: Functional Programming Principles in Scala, Social Network Analysis, and Coding the Matrix: Linear Algebra Through Computer Science Applications. I also tried to take Calculus: Single Variable, but found that the workload was higher than I could manage with home and work obligations. I had varying degrees of experience with these subjects before beginning the courses; Functional Programming was largely review for me, Social Network Analysis was mostly new to me, and Linear Algebra was somewhere in between.

First, I found all three courses to be an effective way for me to learn the material presented. This is mostly because all three professors did an excellent job of selecting the exercises to complete for the course. Most of what I learned in all three courses came from completing the exercises. The quality of the lectures and the availability of written materials for offline study varied from course to course. But it’s hard to be too critical considering that the courses are free. For someone like me who cares much more about increasing my knowledge than receiving credit or some sort of official certificate, it almost seems foolish not to take at least one course at any given time.

Completing the courses gave me a chance to practice with some programming languages I don’t use in my professional work right now, like Scala, Python 3, and R.

Perhaps surprisingly, given that all three courses use Coursera’s services, the means of submitting assignments was wildly different from class to class.

For the Scala course, assignments were submitted via sbt, and graded by unit tests on the server. These tests were occasionally fallible; when the server was especially loaded, it could decide that a submission was experiencing infinite recursion (a real hazard in functional programming) when, in fact, it was simply executing the code slowly. At most other times, however, the tests were accurate and the feedback was pretty reasonable.

For the Social Network Analysis course, the programming assignments were an optional add-on. You had to attach the homework via Coursera’s site, and it would be graded somehow. It’s not clear to me what the mechanism was, but it always accepted my submissions. The final programming assignment was graded using a peer review system where three other students would rate your submission, and you would evaluate the submissions of three more students. I really appreciated the feedback I got on this; it seems that the other students put real effort into their feedback.

In the Linear Algebra course, the programming assignments were all in Python. You submitted them using a custom Python script which analyzed your code using regular expressions and unit tests on the client and server. This system was the buggiest of the three; the client and server code were occasionally out of sync, and frequently rejected correct code. These issues were fixed as the course went on, but it made for some very frustrating hours.

The use of instructor-supplied unit testing varied from class to class, as well. Unit tests were always included in the Scala class assignments, although the grader would run additional tests on your code. The Linear Algebra class assignments occasionally included unit tests, but mostly did not. The grader always ran unit tests, but they were not usable by the students. This deficit was made up for by other students who would post reasonable unit tests to the class forum. The Social Network Analysis class never included any unit tests in the assignments at all.

My biggest disappointment with all of these courses, however, is non-technical. Coursera, like nearly everyone in academia, has an honor code which forbids sharing work. This stands in stark contrast to most other collaborative systems for learning programming, like Exercism, Project Euler, 4Clojure (and arguably GitHub) where you are expected to first solve a problem yourself, and then work with other developers to refine your solution into the best possible implementation. You learn how to do the work well, not just how to make tests pass.

With Coursera (with the sole exception of the Social Network Analysis final problem), you just can’t do that at all. There are forums, where you are allowed to opaquely discuss the assignment and your approach to it, but you cannot show any actual code for the assignment itself. In a real college course, you could review your work privately with a TA, so you’d at least get some feedback beyond pass/fail. There is no such option at Coursera. For me, this is the biggest barrier to real learning in Coursera, and it seems unnecessary, given that "cheating" on a zero-credit course you take for fun robs only yourself.

There are other areas in which I think Coursera could profit by deviating from the way things are done in college. It’s not clear to me why a fixed schedule is required for these classes. Why not give students the option of "auditing" a course on their own time? Since the lectures are pre-recorded and the assignments are, for the most part, graded mechanically, it seems like this should be possible. A more flexible schedule would accommodate students who cannot commit the roughly 6 to 16 hours per week that most of the classes seem to require.

These criticisms aside, however, the big picture is that all of the classes I have taken so far have been good because the professors did an excellent job of designing the courses. As long as Coursera continues to recruit such qualified teachers, I think their classes will be a good investment of my time.

Strange Loop Crossword

I wrote a 15×15, NYT-style crossword puzzle for Strange Loop. On the NYT difficulty scale, it’s roughly a Wednesday-level puzzle. However, it was written for Strange Loop and thus does presume familiarity with functional programming and math, and has a few "inside jokes."

You can find the puzzle and the solution on the Strange Loop wiki.

Google’s Research on Interviewing Technical Candidates

Yesterday’s New York Times has a good article on Google’s analysis of what works and what does not work when interviewing candidates for technical jobs. This paragraph closely matches my experience:

Behavioral interviewing also works — where you’re not giving someone a hypothetical, but you’re starting with a question like, “Give me an example of a time when you solved an analytically difficult problem.” The interesting thing about the behavioral interview is that when you ask somebody to speak to their own experience, and you drill into that, you get two kinds of information. One is you get to see how they actually interacted in a real-world situation, and the valuable “meta” information you get about the candidate is a sense of what they consider to be difficult.

I have interviewed candidates this way for years, and I can’t recall ever being wrong in my assessment of a developer’s general intelligence and technical skills. However, I have made mistakes! In particular, this method doesn’t necessarily give you a good indication of how well the candidate will get along with other members of the team, and whether or not they will behave professionally. I’m not as good at assessing that as I am at assessing technical skill.

Unsurprisingly, Google also found

[...] that brainteasers are a complete waste of time. How many golf balls can you fit into an airplane? How many gas stations in Manhattan? A complete waste of time. They don’t predict anything. They serve primarily to make the interviewer feel smart.

The article doesn’t mention anything about whiteboard coding, but I also find that useful in technical interviews.

I have to disagree with the headline, however. While big data might not be able to tell you who to hire, it clearly told Google that they were doing it incorrectly!

YAML and Remote Code Execution

YAML’s security risks are in no way limited to Rails or Ruby. YAML documents should be treated as executable code and firewalled accordingly. Deserializing arbitrary types is user-controlled, arbitrary code execution.

It’s Not Just Ruby

A few weeks ago, I had a need to parse Jasmine’s jasmine.yml in some C# code. I spent some time looking at existing YAML parsers for .NET and ended up deciding that spending a couple of hours writing a lightweight, purpose-specific parser for jasmine.yml made more sense for my use case than including an off-the-shelf YAML parser which invariably turned out to be quite a heavyweight project.

Having made this choice, I then spent a little bit of time reading the YAML specification. So when, a week or so later, the first of what would become a series of malicious-YAML-based attacks on Rails began to hit the news, I started paying attention. A couple weeks after that, yet another YAML-based security flaw was corrected in Rails, and then rubygems.org was compromised in still another malicious YAML attack separate from the Rails bugs. This one risked compromising any machine which runs gem install on a regular basis, although it does not appear that this actually happened.

Because all of these attacks landed within the Ruby community, observers have occasionally characterized this as a crisis for Rails or Ruby. I think that’s (at least a little) misguided. The real focus of our attention should be YAML.

Update: Aaron Patterson, one of the maintainers of Ruby’s Psych parser, has an excellent discussion of the Ruby-specific aspects of this issue.

It is very easy to demonstrate that the same vulnerabilities exist on other platforms. Here’s a two-line "attack" on Clojure’s YAML parser from Justin Leitgeb. I have to use the term "attack" here loosely, because all he is doing is deserializing an arbitrary class, which the YAML spec allows. But, in most environments, deserializing an arbitrary class is tantamount to code execution. The PyYAML library for Python has nearly the same vulnerability (though there is a workaround for it). I think that YAML-based attacks have tended to target Ruby projects simply because use of YAML is quite a bit more common amongst Rubyists, and because certain prominent Ruby libraries use YAML internally; users of these libraries may have no idea that the JSON input they supply, for example, might be routed through a YAML parser.

User-Controlled, Arbitrary Object Instantiation is Remote Code Execution

The introduction to the YAML spec states,

YAML leverages these primitives, and adds a simple typing system and aliasing mechanism to form a complete language for serializing any native data structure.

That is, in practice, remote code execution. If an external actor, such as the author of a YAML document, can cause your application to instantiate an arbitrary type, then they can probably execute code on your server.

This is easier in some environments than others. If you happen to be running within a framework where one of the indispensable types calls eval on a property assignment, then it is very, very easy indeed.
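
Here is a contrived Python sketch of that pattern. The class is hypothetical, not taken from any real framework, but it shows why "instantiate a type and assign its properties" can amount to eval:

# Hypothetical framework type that "helpfully" evaluates values
# assigned to its properties.
class DangerousSetting:
    def __setattr__(self, name, value):
        object.__setattr__(self, name, eval(value))

# A deserializer that instantiates this type and assigns fields from the
# document has handed the document's author eval():
obj = DangerousSetting()
obj.greeting = "print('attacker code runs here')"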

On the other hand, even in environments which make a very careful distinction between data type construction and code execution, one can imagine vulnerable code. Consider the following Haskell data type:

data LaunchCodes = LaunchCodes Int Int

…and some code, elsewhere in the application:

case input of
    LaunchCodes targetId presidentialPassword ->
        launchMissiles targetId presidentialPassword -- …

Contrived, sure. But you may have more innocuously-named types which you don’t plan for random users to spin up. Haskell, indeed, makes such attacks harder, but it’s not a free pass.

It’s difficult to overstate the danger of remote code execution. If someone can execute code on your server, they can probably own your data center.

The YAML spec is largely mute on the issue of security. The word "security" does not appear in the document at all, and malicious user input isn’t discussed, as far as I can see.

Types in the YAML Spec

The encoding of arbitrary types is discussed in the last section of the YAML spec, "Recommended Schemas." It specifies "tag resolution," which is, in practice, the mapping of YAML content to instantiated types during deserialization. This section defines four schemas which a compliant parser should understand. The first three, "Failsafe," "JSON," and "Core," define tag resolutions for common types like strings, numbers, lists, and maps, but don’t appear dangerous to me.

However, the last section of "Recommended Schemas" is a catchall called "Other Schemas." It notes,

None of the above recommended schemas preclude the use of arbitrary explicit tags. Hence YAML processors for a particular programming language typically provide some form of local tags that map directly to the language’s native data structures (e.g., !ruby/object:Set).

While such local tags are useful for ad-hoc applications, they do not suffice for stable, interoperable cross-application or cross-platform data exchange.

In practice, most YAML deserialization code will, by default, attempt to instantiate any type specified in the YAML file.

Some YAML parsers have a "safe" mode where they will only deserialize types specified in the tag resolution. For example, PyYAML has safe_load. Its documentation notes,

Warning: It is not safe to call yaml.load with any data received from an untrusted source! yaml.load is as powerful as pickle.load and so may call any Python function. Check the yaml.safe_load function though.

(emphasis in original)

Notably, however, Ruby’s Psych parser has no such method at this time.
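
To make the difference concrete in Python (the payload below is the classic PyYAML demonstration; I am only exercising the safe path here):

import yaml

# This document asks the parser to instantiate an arbitrary type; under
# PyYAML's unsafe loading it would call os.system("echo pwned").
document = "!!python/object/apply:os.system ['echo pwned']"

try:
    yaml.safe_load(document)
except yaml.constructor.ConstructorError as err:
    print("safe_load rejected it:", err)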

So Should We All Dump YAML?

Some Rubyists have questioned why YAML is so pervasive in the Ruby community when other formats, like JSON or Ruby itself (à la Rake), are perfectly usable in most cases. It’s a good question to ask, especially in cases where YAML parsing has not been explicitly requested.

On the other hand, it’s not hard to imagine cases where allowing arbitrary object instantiation makes sense, such as in an application configuration file. An example in the .NET space would be XAML files. If you are defining a form or a workflow, then you want to be able to allow the instantiation of custom controls. There is no standard way to do this with, say, JSON files, so using a format like YAML makes sense here. (So does using a non-domain-specific language like Ruby, but that presumes that Ruby is available and might not be suitable for cross-language work.)

For the most part, you never want to accept YAML from the outside world. The risks are too high, and the benefits of the YAML format are largely irrelevant in that scenario. Note, for example, that while gem files include manifests in YAML format, the jQuery plugin repository does essentially the same thing with JSON documents.

Why Don’t YAML Parsers with a "Safe" Mode Use It By Default?

Good question.
