Friday, April 29, 2011

Von Neumann machines and lambdas

Brian's premise in his answer to the question "Are side effects a good thing?" is interesting:

computers are von Neumann machines that are designed to work well with effects (rather than being designed to work well with lambdas)

I am confused by the juxtaposition of the two approaches. I cannot see them as black and white. What is the evidential value of:

computers are von Neumann machines that are designed to work well with effects [1]

The last part confuses me:

rather than being designed to work well with lambdas [2]

Are the lambdas used as symbols for functional programming? Or are they shorthand for functional programming? What is the real message?

In what sense are parts [1] and [2] of the premise right? What are the hidden premises in the reply? Can someone justify the original premise? How do von Neumann machines and lambdas really work?

From Stack Overflow:
  • I'm not entirely sure what you're asking, but as I read it, you're asking what he means by lambdas?

    He is referring to lambda calculus, which forms much of the theoretical basis for functional programming. It is an abstract notation for (among other things) describing and reasoning about higher-order functions.

    A Von Neumann machine is basically what we have. Programs are executed by instructions manipulating and accessing a store (our RAM). That is, everything is implicitly done through side effects. Data is read from some area in RAM, processed a bit, and written back to some (other, perhaps) area in RAM. Without side effects, the CPU would be limited to manipulating whatever garbage data happened to be in its registers when it was powered on.

    Lambda calculus has no notion of side effects, so a machine based around this principle would not have the distinction between "what the CPU can access" (our registers, essentially) and "what can be accessed indirectly" (our RAM). Everything in such a machine would be based on functional principles: functions taking one or more arguments and returning a new value, never modifying existing ones. (And no, I'm not sure how that would work in hardware... :)) [A small Haskell sketch contrasting the two styles follows the quoted thread.]

    Does that answer your question?

    Masi : Why do you use the word 'limited'? Do you refer to productivity? I understand that it is very hard to do most things in functional languages, and hence one is less productive. However, I think the biggest problem is how hard it is to innovate in them. You cannot take the natural trial-and-error approach. It is better to test things in other languages and then gradually implement them in functional languages. 'Functional' hardware is an interesting idea.
    jalf : I mean "limited" in the purest sense of the word: our computers rely on side effects. Side effects are what allow your CPU to read meaningful data into the registers it's able to operate on. Side effects are how this data can be stored as part of the application state. I am not talking about languages, but about how our hardware works.
    Masi : Clearly, I do not know how our computers work. I have to do some reading. Thank you for your reply!
  • Here is more depth on what I meant, though it will be interesting to see whether others agree, and what they have to say.

    Consider how computers work today. You have hardware with integer and floating-point registers and a vast array of random access memory, and instructions that are mostly of the form 'based on reading the value of this register/memory cell, go poke this new value into this register/cell'. (Updating memory cells has all kinds of perf implications when it comes to cache lines, coherency, memory models, and whatnot.) Integers are 32 or 64 bits, and nearly all programming languages surface data types that exactly match the hardware. Nearly every runtime works with a small call stack where stack-allocated objects are cheap, and a more expensive 'heap' where other objects can be created and destroyed when non-stack-based lifetimes are needed.

    Now consider most modern functional programming languages. Immutability is the norm; you will rarely 'poke' memory with new values. (This means you create more new objects, which means you allocate more.) Lambdas and continuations are the norm; you more rarely have object lifetimes that correspond to the stack. (Indeed, some FP runtimes don't use a stack; in a CPS implementation the notions of a stack and a program counter are out of place.) Recursion is the looping construct, so you at least need 'tail' calls to avoid consuming the stack. Practically everything needs to "heap" allocate, and of course you need a garbage collector. Algebraic data types provide tagged data; in theory these tags would only require an extra 2 or 3 bits, but to match the runtime, they often need an extra word of memory or more. ... I am kind of meandering, but the things you do most often in an FP language tend to correspond exactly to the things that scale worst or are most expensive on typical computer hardware architectures and basic language runtimes. [A sketch of tail recursion and tagged data also follows the quoted thread.]

    It doesn't have to be that way. One can imagine a world where the runtime eschews a stack and makes heap allocation fast (and not a bottleneck for multi-threaded apps). One can imagine a world where the interoperable integer types have 29 or 60 bits, and the runtime/hardware use the extra leftover bits of the word for, e.g., the GC, or algebraic type tags, or whatnot. (I think some FP implementations/runtimes do some of these tricks.) Etc. etc... The point is, if you take a modern functional language as a given and then design the runtime/hardware around it, it would look very different from the typical hardware/runtimes of today. [A toy version of the tag-bit trick follows the thread as well.]

    (I don't think I communicated that terrifically, and I am imprecise about many details, but hopefully you grok the gist of my thesis here.)

    jalf : You're right, some languages do reserve a bit for a primitive type tag (basically pointer/non-pointer), but I'm not sure how all this relates to your original question. You're right that a machine designed around immutability and other FP properties would look very different from our Von Neumann machines. That was exactly the point of the answer in the other thread you asked about. So I'm not sure what you're asking. :)
    Masi : @Brian: Don't worry about meandering; I feel we will have bigger problems to face if we are going to build functional hardware. Somehow, I feel the problems are mathematical. When I asked my lecturer about the state of functional programming, he mentioned some mathematical problems. The goal of the question was to start an open discussion. I have thought about the same things you describe; clearly, I am not the only person thinking about them :)
    Masi : +1 for mentioning the functional data structures.
