Keep Working, Worker Bee!

3.14.2006

More on polymorphic contracts

It's remarkable how fragile some results of type theory can be. Last time I wrote about polymorphic contracts, I offhandedly mentioned that in an ML-like language a function of type ∀α.α→α might memorize the first argument it is given at runtime and thereafter always return that value. I was actually thinking of a full System F + state language; I'm pretty sure it isn't possible to write a function that behaves that way in actual SML. Surprisingly, though, you can get a polymorphic memory function if you're willing to relax the type to ∀α.unit→α→α. Here it is:


let
  (* each call to f allocates a fresh memory cell at a single type *)
  val f : unit -> 'a -> 'a =
    fn _ =>
      let val memory = ref NONE in
        fn x =>
          case !memory of
            NONE => (memory := SOME x; x)
          | SOME y => y
      end
  val g'  = f ()   (* instantiated at int by its uses below *)
  val g'' = f ()   (* instantiated at string by its uses below *)
in
  (g' 1, g' 2, g'' "hello", g'' "goodbye")
end

This program typechecks in SML and produces the value (1, 1, "hello", "hello").

So what's going on here? It doesn't seem like taking an additional unit value should affect anything, but clearly it does. Here's why: when you apply a polymorphic function in ML, you're actually doing two things: first, you're supplying the function with a value argument, and second, you're implicitly supplying it with concrete types for all its type variables. (You have to do both of these at the same time because of SML's value restriction.) So, if you're writing a function of type ∀α.α→α, then every time you call it with a new value, you're also calling it with a new type. If you had one piece of memory shared across all of these calls, then you might remember an argument of type α when α = int and then return it when α = string, which would violate ML's type-safety guarantee.
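For concreteness, here's a sketch of the direct attempt (my reconstruction, not code from the original post). SML rejects it: the right-hand side is an application of let, not a syntactic value, so the value restriction refuses to generalize the type, and the binding is left with a free type variable — exactly so that one memory cell can't be shared across instantiations at different types:

(* REJECTED by SML.  The right-hand side is not a syntactic value,
   so the value restriction keeps memory's type monomorphic; SML
   reports an ungeneralizable free type variable rather than give
   f the type 'a -> 'a. *)
val f =
  let val memory = ref NONE in
    fn x =>
      case !memory of
        NONE => (memory := SOME x; x)
      | SOME y => y
  end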

Now suppose instead that you're writing a function with type ∀α.unit→α→α. When you call the function for the first time, you supply it with the trivial unit value and also a particular choice for α. In the body of that function you can allocate a memory cell that holds only values of that particular choice of α, and then return a function that consumes and produces values at that same choice of α. In essence, the unit argument is just there to hack around the value restriction; it gives us a way to do some computation in between picking the particular type we want to use for α and coughing up a function that consumes and produces αs.
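To see that each application of f really does commit to one type, here's a small negative example (again my own sketch, assuming f : unit -> 'a -> 'a from the listing above). The value restriction makes the result of f () monomorphic, so reusing a single instantiation at two different types is a type error:

(* bad's type variable is fixed at its first use, so the tuple
   below is REJECTED: bad can't be both int -> int and
   string -> string. *)
val bad = f ()
val _ = (bad 1, bad "hello")   (* type error: int vs string *)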

So, what does this have to do with polymorphic contracts? Well, I've got a prototype polymorphic contract implementation working, and this exact issue pops back up, but in a form that's a little more annoying. To write down ML functions like the one I've posted here and have them typecheck, you pretty much need to understand ML's type system: you simply can't run a program whose types don't work out. With polymorphic contracts, on the other hand, it's easy to write down a program that will definitely cause a contract violation in the right contexts (say, a memory function with contract ∀α.α→α), execute it, and maybe even succeed on lots of inputs, without ever noticing you've done anything wrong.
