Keep Working, Worker Bee!


It's the final day of the Worker Bee! PLDI Roundup! Today had a lot of interesting talks, mainly I think because it focused more on the PLD part and less on the I part. Good stuff.

Invited Talk

The Transactional Manifesto: Software Engineering and Non-Blocking Synchronization
By Maurice Herlihy

As people have been saying for years now, the days of Moore's Law translating directly into faster execution of the same programs are ending. It's not that Moore's Law is over — it doesn't appear to be — it's that the era when cramming more transistors onto a chip actually makes a sequential computation take less time is over, meaning that all the major chip manufacturers are moving towards multicore systems that only make your programs faster if you can keep multiple independent processors busy. However, current parallelism techniques (by which the speaker meant pthreads) are a terrible solution to the concurrency problem and can't scale up to a computing environment in which everyone needs to write multithreaded programs all the time. For instance, he gave a challenge: write a library for a doubly-linked list that allows insertion and deletion from either end, and that allows inserts and deletes at opposite ends to happen concurrently unless that would lead to data corruption (e.g., when the queue has only one element). After going through all the problems, he concluded that a nice elegant solution to this problem would be a publishable result. The solution to a problem this simple being a publishable result is not a good sign!
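To get a feel for why even this challenge is hard, here's a much easier cousin of it: the classic two-lock FIFO queue (in the style of Michael and Scott), sketched in Python. A dummy node decouples the two ends so an enqueue and a dequeue can run concurrently, but note that this only supports insertion at one end and deletion at the other; Herlihy's challenge, a doubly-ended version with concurrent operations at both ends, is dramatically harder. The class and method names here are my own illustration, not anything from the talk.

```python
import threading

class TwoLockQueue:
    """Two-lock FIFO queue sketch: a dummy node keeps the head and
    tail pointers disjoint (when nonempty), so enqueue and dequeue
    can each hold their own lock and proceed concurrently."""

    class _Node:
        def __init__(self, value=None):
            self.value = value
            self.next = None

    def __init__(self):
        dummy = self._Node()
        self.head = dummy                  # dequeue side
        self.tail = dummy                  # enqueue side
        self.head_lock = threading.Lock()
        self.tail_lock = threading.Lock()

    def enqueue(self, value):
        node = self._Node(value)
        with self.tail_lock:               # touches only the tail pointer
            self.tail.next = node
            self.tail = node

    def dequeue(self):
        with self.head_lock:               # touches only the head pointer
            first = self.head.next
            if first is None:
                return None                # queue is empty
            self.head = first              # old dummy becomes garbage
            return first.value
```

Even this well-known construction took real care to get right; making both ends symmetric without serializing everything is exactly where the publishable-result difficulty kicks in.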

He had a vision for overcoming these obstacles, borrowed from the databases community: transactions. The idea is that you send a function that's supposed to be executed atomically to a transaction manager, which executes it and, if a conflict arises, rolls it back and tries again. He's implemented a C# library that does this and says that while it's got a lot of problems, it allowed an undergraduate intern to code up a working concurrent red-black-tree implementation in a week — he said he's seen a PhD thesis devoted to a pthreads solution to the same problem.
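The execute-then-roll-back-and-retry idea can be sketched in a few lines. Here's a minimal, optimistic retry loop over a single versioned cell, written in Python for illustration; this is my own toy, not Herlihy's C# library, and the names (`VersionedBox`, `atomically`) are invented for the sketch.

```python
import threading

class VersionedBox:
    """A single mutable cell with a version counter, used to detect
    conflicting writes between snapshot and commit."""
    def __init__(self, value):
        self.value = value
        self.version = 0
        self._lock = threading.Lock()

def atomically(box, transaction):
    """Optimistic retry loop: run `transaction` against a snapshot,
    and commit only if nobody else committed in the meantime;
    otherwise discard the result (the 'rollback') and try again."""
    while True:
        with box._lock:
            snapshot, seen = box.value, box.version
        new_value = transaction(snapshot)    # runs without holding the lock
        with box._lock:
            if box.version == seen:          # no conflicting commit: publish
                box.value = new_value
                box.version += 1
                return new_value
            # conflict: someone committed first; loop and retry
```

A real software transactional memory tracks whole read/write sets across many locations, but even this one-cell version shows the shape: the programmer writes an ordinary function, and the retry machinery — not the programmer — worries about interference.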

I'm pretty happy with the CML-style concurrency in PLT Scheme, but that's actually more about addressing "concurrency" (= running multiple threads because it models your problem) rather than "parallelism" (= running multiple threads because you want your program to run faster). It seems like a very interesting idea, and I'd be interested to see how far the transactional-memory paradigm can go.

Session 9: Domain-Specific Tools

Programming by Sketching for Bitstreaming Programs
As Simon Peyton-Jones said during Q&A, this seems a bit like magic. You want to make a program that processes a bitstream, taking some input stream and producing an output, for instance dropping every third bit from the input. As it turns out, there are several different algorithms for doing this that vary widely in efficiency, and even though what you want can be described concisely, the algorithm is far from trivial to implement. So what these folks have done is find a way for you to describe what you want ("drop every third bit") and have it automatically produce some algorithm for doing it; you can also add in "sketches", partial definitions of the algorithm that guide code generation and let you tune the output. For the drop-third program they found that users of their sketching tool produced algorithms twice as fast as those written by people coding directly in C, even when both groups were encouraged to tweak their solutions as much as they wanted to improve performance. It works because bitstream algorithms generally have the property that the goal is much easier to describe than an efficient algorithm is to derive, but the technique might be applicable to other domains with that same property. (This paper shared the best paper award with the one I mentioned on Monday.)
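To make the "spec is easy, algorithm is hard" point concrete, here is the drop-every-third specification written out naively in Python. This one-liner is the kind of description the tool starts from; the hard part, which the sketching system automates, is deriving an efficient word-at-a-time implementation with shifts and masks rather than this bit-at-a-time loop.

```python
def drop_every_third_naive(bits):
    """Reference specification: keep every bit whose position
    (counting from 1) is not a multiple of 3. Trivially correct,
    and far too slow for real bitstream processing, where you want
    to move whole machine words per step."""
    return [b for i, b in enumerate(bits, start=1) if i % 3 != 0]
```

An efficient version has to handle bits straddling word boundaries and shifting alignments, which is exactly the gap between describing what you want and implementing it.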

PADS: A domain-specific language for processing ad hoc data
You've got log files in all kinds of ad hoc formats, and if you want to read them programmatically the typical solution is to hack up a custom script in Perl or something. This is a brittle solution, especially if you want to do any kind of error handling. PADS is a language for writing parsers for ad hoc data like this: you describe the file format very naturally ("there's the string 'FILE', followed by whitespace, followed by a number between 0 and 100, followed by ....") and PADS gives you back not just a parser with robust error handling but also a whole suite of fancy tools like an XML converter, a statistical analyzer, and so on. The speaker said their goal was to make the suite of free tools so compelling that people would be crazy not to use PADS for anything that has to do with ad hoc data formats.
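As a toy illustration of the idea — emphatically not the real PADS language — here is what "describe the format, get a parser with error handling for free" might look like in Python. The `LOG_FORMAT` description and `parse_line` function are invented for this sketch; real PADS descriptions are richer and also drive the generated tool suite.

```python
import re

# Declarative description of the example format from the talk:
# the string "FILE", whitespace, then a number between 0 and 100.
LOG_FORMAT = [
    ("keyword", r"FILE"),
    ("ws",      r"\s+"),
    ("percent", r"\d{1,3}"),
]

def parse_line(line):
    """Interpret LOG_FORMAT against one line, collecting both the
    parsed fields and a list of descriptive errors instead of just
    dying -- the robustness a hand-rolled script usually lacks."""
    pos, record, errors = 0, {}, []
    for name, pattern in LOG_FORMAT:
        m = re.match(pattern, line[pos:])
        if m is None:
            errors.append(f"expected {name} at column {pos}")
            break
        record[name] = m.group(0)
        pos += m.end()
    # Semantic constraint from the description: 0 <= number <= 100.
    if "percent" in record and not (0 <= int(record["percent"]) <= 100):
        errors.append("percent out of range")
    return record, errors
```

The point of the real system is that from one description you'd get not just this parser but also the XML converter, the statistical analyzer, and the rest of the suite.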

Composing Security Policies with Polymer
I'm not sure what to think of this. The idea is that you'd rather have security policies be both (a) centralized and (b) composable than the alternative. So, they built a framework of security policies that compose with each other: you have "security events" in your code and a central security manager that decides what to do about each of them. I could go on, but that's the gist. Here's the thing: security is very tricky, and I'm really uncomfortable using somebody else's security code for my application. I don't know, maybe it's totally benign and really works great, but the road to security breaches is paved with clever ideas about how to avoid having to worry about security.
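For a rough sense of what "composable policies over security events" means, here is a toy sketch in Python — my own invention, not Polymer's actual API or semantics. Each policy inspects an event and votes; a combinator composes policies, here by requiring every policy to allow.

```python
def no_network_after_file_read(state, event):
    """Toy policy: once a file has been read, deny network sends
    (a crude information-flow rule)."""
    if event == "file_read":
        state["tainted"] = True
    return not (event == "network_send" and state.get("tainted"))

def quota(state, event):
    """Toy policy: allow at most 3 security events total."""
    state["count"] = state.get("count", 0) + 1
    return state["count"] <= 3

def conjoin(*policies):
    """Compose policies by conjunction: an event is allowed only if
    every policy allows it. Each policy keeps its own state, and
    every policy sees every event (no short-circuiting), so their
    state stays consistent regardless of the others' votes."""
    states = [dict() for _ in policies]
    def combined(event):
        votes = [p(s, event) for p, s in zip(policies, states)]
        return all(votes)
    return combined
```

The centralized-manager part of the design is then just: route every security event in the program through one `combined` function.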

Formal Loop Merging for Signal Transforms
The actual domain of this work was a compiler optimization for a specialized signal-processing language, so that it could merge loops that go over the same dataset multiple times. I don't know anything about signal processing, so that aspect of the talk went right over my head. However, the way they did it was illuminating: they observed that traditional approaches tried to special-case some particular construct in the source language into a particular object-code result, but that there's a better way. What they did was extend one of their intermediate representations so that it explicitly tracked where loops occurred, and then make the optimization a very simple rewrite on that IR. That alone was enough to optimize all of the cases those special cases had handled, and several more besides. The moral of the story is that sometimes having the right information in your intermediate representation is very powerful.
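The "loops explicit in the IR makes merging a trivial rewrite" lesson is easy to demonstrate in miniature. Below is a toy IR in Python — not the paper's actual representation — where a program is a list of `Loop` nodes and fusion is a single-pass rewrite that merges adjacent loops with the same trip count. (The rewrite as written assumes each merged pair is legal to fuse, i.e., iteration `i` of the second loop depends only on iteration `i` of the first; a real compiler would check that.)

```python
from dataclasses import dataclass

@dataclass
class Loop:
    n: int          # trip count
    body: list      # statements, written in terms of loop index i

def fuse(program):
    """One-rule rewrite: two adjacent loops with the same trip count
    become one loop whose body is the concatenation of both bodies.
    Because loops are first-class IR nodes, no source-language
    pattern matching is needed."""
    out = []
    for stmt in program:
        if (out and isinstance(out[-1], Loop) and isinstance(stmt, Loop)
                and out[-1].n == stmt.n):
            out[-1] = Loop(stmt.n, out[-1].body + stmt.body)
        else:
            out.append(stmt)
    return out
```

With loops implicit (say, already lowered to jumps), the same optimization would require reverse-engineering loop structure first; putting the information in the IR makes the transformation almost free.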

Okay, that was PLDI. It was a very good conference, and I met a bunch of cool people and found a bunch of good leads for my current work. And the coveted Worker Bee! Paper-I-Thought-Was-Cool Prize goes to: Garbage Collection Without Paging! The best implementation papers at this conference weren't ones that blew your mind with things you couldn't imagine, but ones that took things that are easy to imagine and proved that they actually work. This one showed that if you puncture the barrier that hides real memory from the memory manager, you have an opportunity to avoid all the performance penalties the OS would otherwise impose. The larger picture it paints is that while abstractions like virtual memory can help us, sometimes they hurt us too by not letting us deal with the messy cases in the right way. In the case of garbage collectors, they hurt us very very much.

Next one of these I do will be for ICFP, I suppose.


  • Jacob, thanks for taking the time to write your PLDI Roundup. Thanks to you, I was able to experience PLDI without being there. Consider yourself my real-life virtual avatar. ;-)


    By Anonymous Tom Moertel, at 09:57  

  • Jacob, thanks very much for this. I'm a systems guy, and POPL and PLDI somehow were never as exciting as SOSP, OSDI, HotOS, HotNets, etc. But your summaries are really excellent - I understand them and they make me want to read the papers!

    By Anonymous Clive, at 10:59  
