danielsz.github.io

The post quotes McCarthy:

"one of the myths concerning LISP that people think up or invent for themselves becomes apparent, and that is that LISP is somehow a realization of the lambda calculus, or that was the intention. The truth is that I didn't understand the lambda calculus, really" - John McCarthy

So there are two issues here: 1) whether or not it was McCarthy's intention to realize the Lambda Calculus in LISP, and 2) whether or not LISP is such a realization, or at least some kind of close realization.

The answer to 1 is clearly no. This doesn't imply an answer to 2 one way or another.

If 2 isn't true, what explains the widespread belief? Is it really just that he, McCarthy, borrowed some notation?

Modern lisps do realize the lambda calculus, but this was not immediate. In particular, in order to exactly match the lambda-calculus beta-reduction rule, you need to use lexical rather than dynamic scope, which did not really become popular until Scheme in the 1970s.
Did not become popular, or did not become implemented?

My understanding is that lexical scope was first implemented in Algol and Pascal, and then was first implemented with true garbage collection in Scheme. (Thereby leading to the restriction in Algol and Pascal that closures existed, but they could only be passed into functions, and never returned from them. That way the variables being closed over could live on the stack.)

But I'd love to learn that I'm wrong and that these things came before that.

Oh, I think you're right, Scheme was at least the first lexically scoped Lisp.

Wikipedia says

> The concept of closures was developed in the 1960s for the mechanical evaluation of expressions in the λ-calculus and was first fully implemented in 1970 as a language feature in the PAL programming language to support lexically scoped first-class functions.

> PAL, the Pedagogic Algorithmic Language, is a programming language developed at the Massachusetts Institute of Technology in around 1967 to help teach programming language semantics and design. It is a "direct descendant" of ISWIM and owes much of its philosophy to Christopher Strachey.

LISP 1.5 as described in the manual implements lexical scope, at least in the interpreter. I haven't studied the compiler too closely; I think dynamic scope is more prevalent there, but as far as I know it has lexical scope as well.
Where do you see that?

Looking at the "Universal LISP function" on page 13 in [0], the case for apply/LAMBDA just extends the current environment a with the arguments of the lambda, but it doesn't unpack a closure to get the environment the lambda function was defined in, so it implements the dynamic version. (Unlike, e.g., the interpreter in SICP [1].)

[0] http://www.softwarepreservation.org/projects/LISP/book/LISP%... [1] https://mitpress.mit.edu/sites/default/files/sicp/full-text/...

The one on page 13 is not what LISP 1.5 does; that's more of an earlier, purer idea. I meant the one on page 70, where FUNCTION packs a function together with its environment into a FUNARG closure, which when applied unpacks the environment it was created with.
Interesting! So yeah, the LISP 1.5 interpreter already seems to do this.

Googling a bit, it seems that McCarthy mentions this in "History of Lisp" [0], as a change from LISP 1 to LISP 1.5:

> In modern terminology, lexical scoping was wanted, and dynamic scoping was obtained. I must confess that I regarded this difficulty as just a bug and expressed confidence that Steve Russell would soon fix it. He did fix it but by inventing the so-called FUNARG device that took the lexical environment along with the functional argument. Similar difficulties later showed up in Algol 60, and Russell's turned out to be one of the more comprehensive solutions to the problem. While it worked well in the interpreter, comprehensiveness and speed seem to be opposed in compiled code, and this led to a succession of compromises. Unfortunately, time did not permit writing an appendix giving the history of the problem, and the interested reader is referred to (Moses 1970) as a place to start. (David Park tells me that Patrick Fischer also had a hand in developing the FUNARG device).

So I guess closures had been invented at least by 1962 (when the LISP 1.5 manual was published), but were not widely used because of performance considerations?

[0] http://www-formal.stanford.edu/jmc/history/lisp/node4.html
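
To make the FUNARG device concrete, here's a toy interpreter sketch in Python (the names and representation are mine, not LISP 1.5's actual code): 'lambda' packs the current environment into the function value, and application restores that captured environment instead of the caller's.

```python
# Toy sketch of the FUNARG device: 'lambda' packs the current environment
# into the function value; application unpacks the captured environment
# rather than using the dynamic (call-time) one.
def evaluate(expr, env):
    if isinstance(expr, str):                  # variable reference
        return env[expr]
    if isinstance(expr, int):                  # self-evaluating number
        return expr
    if isinstance(expr, tuple):                # already a FUNARG value
        return expr
    op, *rest = expr
    if op == 'lambda':                         # FUNCTION: pack the env
        params, body = rest
        return ('FUNARG', params, body, env)
    if op == '+':
        return evaluate(rest[0], env) + evaluate(rest[1], env)
    fn = evaluate(op, env)                     # application
    args = [evaluate(a, env) for a in rest]
    _, params, body, captured = fn
    call_env = dict(captured)                  # unpack the captured env,
    call_env.update(zip(params, args))         # not the caller's bindings
    return evaluate(body, call_env)

# ((lambda (n) (lambda (x) (+ x n))) 5) builds an adder whose n stays 5
# even when the adder is applied in a scope where n is bound to 100.
adder = evaluate([['lambda', ['n'], ['lambda', ['x'], ['+', 'x', 'n']]], 5], {})
assert evaluate([adder, 1], {'n': 100}) == 6
```

Without the `captured` environment (i.e., extending the caller's `env` directly), the same program would see n = 100, which is exactly the dynamic-scoping "bug" McCarthy describes.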

Question: if lexical scopes are in the language's data structures, but can't be explicitly created or made visible in the language's syntax, is it still homoiconic?
Nowadays all the major Lisps (Racket, Clojure, etc.) are lexically scoped.

Common LISP is lexically scoped, though it does still have opt-in dynamic scoping ("special variables").

> So there are two issues here: 1) whether or not it was McCarthy's intention to realize the Lambda Calculus in LISP, and 2) whether or not LISP is such a realization, or at least some kind of close realization.

This would fit in with Graham's suggestion that McCarthy more "discovered" Lisp than "invented" it.

If it was a realization of a lambda calculus, then it is one with (a) primitives, (b) strict evaluation, (c) quoted lambda terms, and (d) "dynamic" bindings.

(a) In classic lambda calculus, everything is a lambda term. McCarthy's Lisp has primitives like lists and numbers. However, it is known that lambda calculus is powerful enough to encode these things as lambda terms (for example,

  null = (lambda (n c) (n))
  (cons a b) = (lambda (n c) (c a b))
gives a way to encode lists. The car function would be something like

  car = (lambda (lst)
          (lst (lambda () (error "car: not cons"))
               (lambda (a b) a)))
This would not work in the original Lisp because of binding issues: the definition of cons requires the specific a and b bindings be remembered by the returned lambda.)

(b) Lambda calculus does not have any evaluation rules. Rather, it is like algebra where you can try to normalize an expression if you wish, but the point is that some lambda terms are equivalent to others based on some simple rules that model abstract properties of function compositions. Lambda-calculus-based programming languages choose some evaluation rule, but there is no guarantee of convergence: there might be two programs that lambda calculus says are formally equivalent, but one might terminate while the other might not. Depending on how you're feeling, you might say that no PL for a computer can ever realize the lambda calculus, but more pragmatically we can say most languages use lambda calculus with a strict evaluation strategy.

(c) The lambda terms in lambda calculus are not inspectable objects, but more just a sequence of symbols. Perhaps one of the innovations of McCarthy is that lambda terms can be represented using lists, and the evaluator can be written as a list processor (much better than Godel numbering!). In any case, the fact that terms have the ability to evaluate representations of terms within the context of the eval makes things a little different. It's also not too hard to construct a lambda evaluator in the lambda calculus[1], but you don't have the "level collapse" of Lisp.

(d) In lambda calculus, one way to model function application is that you immediately substitute in arguments wherever that parameter is used in the function body. Shadowing is dealt with using a convention in PL known as lexical scoping, and an efficient implementation uses a linked list of environments. In the original Lisp, there was a stack of variable bindings instead, leading to something that is now known as dynamic scoping, which gives different results from the immediate substitution model. Pretty much everything fun you can do with the lambda calculus depends on having lexical scoping.
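
To illustrate (d), here is a small sketch with the bindings modeled as an explicit stack, like the original Lisp's association list (helper names are hypothetical):

```python
# Dynamic scoping looks a variable up in the *call-time* binding stack;
# a lexical closure fixes the binding at *definition* time.
bindings = []   # stack of (name, value) pairs, like LISP 1's a-list

def lookup(name):
    for n, v in reversed(bindings):
        if n == name:
            return v
    raise NameError(name)

def make_adder_dynamic():
    return lambda x: x + lookup('n')   # 'n' resolved when called

def make_adder_lexical():
    n = lookup('n')                    # 'n' resolved when defined
    return lambda x: x + n

bindings.append(('n', 5))
dyn, lex = make_adder_dynamic(), make_adder_lexical()
bindings.append(('n', 100))            # the caller rebinds n
assert dyn(1) == 101                   # dynamic scoping sees n = 100
assert lex(1) == 6                     # lexical scoping remembers n = 5
```

Under the immediate-substitution model of the lambda calculus, the n in the adder's body is replaced by 5 at application time, matching the lexical result.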

All this said, the widespread belief about Lisp being the lambda calculus probably comes from Scheme, which was intentionally lambda calculus with a strict evaluation model. Steele and Sussman were learning about actors for AI research, and I think it was Sussman (a former logician) who suggested that their planning language Schemer (truncated to Scheme) ought to have real lambdas. At some point, they realized actors and lambdas (with mutable environments) had the exact same implementation. This led to "Scheme: An Interpreter for Extended Lambda Calculus" (1975) and the "Lambda the ultimate something" papers. Later, many of these ideas were backported to Lisp during the standardization of Common Lisp.

[1] https://math.berkeley.edu/~kmill/blog/blog_2018_5_31_univers...

Schemer (later renamed Scheme) was invented to scheme against Actors reprising Conniver, which was invented to connive against Planner.

See the following for the current state of the art including the latest Actor approach to Eval, which is more modular and concurrent than the Eval in Lisp and Scheme:


The above article explains exactly how Actors are much more powerful than lambdas with mutable environments.

Adding a little to your wonderful post: another possible reason the original Lisp gets conflated with the lambda calculus is that its notation was heavily inspired by the lambda calculus, even though it was more properly a refinement of the μ-recursive functions (of form f:(N,N,...N) -> N) over the natural numbers.

While a lot of people are trying to defend the lambda calculus as a basis, I think this actually undersells the significance of LISP. Apart from Lisp the language family and its implementations, there is Lisp, (arguably) the first practically realizable mathematical model of computation. That is, it stands on its own as a model for computation†, continuing along a long line of which I think Grassmann's 1861 work on arithmetic and induction is a good starting point.

Turing Machines are intuitive and the lambda calculus is subtle and expressive, but Lisp's contribution was to place partial recursive functions on a more intuitive/realizable basis in terms of simple building blocks of partial functions, predicates, conditional expressions and symbolic expressions (ordered pairs/lists of atomic symbols). Lambdas come in as a notation for functions, with a modification to facilitate recursive definitions.

†Making Greenspun's Tenth Rule trivially true.

This is excellent. Thank you!

John McCarthy said that he never had the intention to realize the lambda calculus, but he followed that statement with the corollary that had someone "started out with that intention, he might have ended with something like LISP." Peter Landin was a pioneer in that regard. See "The Mechanical Evaluation of Expressions", published in 1964, and the SECD virtual machine. Machine interpreters like SECD and CEK may come close to a "realization" of the lambda calculus. Their design is directly inspired by its semantics. You don't necessarily end up with something like LISP, but you can, see Lispkit and LispMe.

For (b) isn't this the appeal of normal-order evaluation (or its related sibling lazy evaluation)? If there is a terminating reduction sequence lazy evaluation will find it, whereas eager evaluation will fail to find it.

Moreover the lambda calculus is confluent, so if you find the terminating reduction sequence, you're guaranteed all other terminating sequences end up with the same result.

So as long as your PL uses normal-order evaluation or lazy evaluation, you can entirely realize any equivalences in the lambda calculus.

> In classic lambda calculus, everything is a lambda term

OO says everything is an object. Even though Java has non-object primitives, we're still gonna classify Java as OO.

> Lambda calculus does not have any evaluation rules.

> The lambda terms in lambda calculus are not inspectable objects, but more just a sequence of symbols.

It's not clear to me why this makes Lisp not in the family of Lambda implementations.

> In the original Lisp, there was a stack of variable bindings instead, leading to something that is now known as dynamic scoping.

That's true. Every modern Lisp (Scheme, Clojure, Racket) has lexical scoping. And Common LISP uses lexical by default.

> Later, many of these ideas were backported to Lisp during the standardization of Common Lisp.

Again this contributes to the notion that LISP/Scheme/Lambda Calculus were "discovered", not that Lambda calculus has an explicit pedigree.

Unlike lambda calculus, OO is not a specific mathematical formalism but rather a methodology and ontology for organizing a program. Lambda calculus, defined by Alonzo Church, is a kind of 'arithmetic' of abstract function manipulation devoid of semantics. It has some strong theoretical footing as, in modern language, reflexive objects in a category.

> It's not clear to me why this makes Lisp not in the family of Lambda implementations.

To be clear, I started my comment by writing "if it is a realization, then it is one with [the following differences]." Lambda calculus was such a good idea that pretty much anything with function abstractions can be described by some variation of it. It's the dynamic scoping that causes the main issues here, though, and suggests lambda calculus was not a significant motivation in the definition of McCarthy's Lisp. Yet, he was still aware of it enough to call the abstraction operator "lambda."

>> Later, many of these ideas were backported to Lisp during the standardization of Common Lisp.

> Again this contributes to the notion that LISP/Scheme/Lambda Calculus were "discovered", not that Lambda calculus has an explicit pedigree.

I don't see how that follows. Sussman was a math undergrad and PhD and was well aware of developments in logic, and he influenced Steele, who created the quite-influential Scheme and went on to be one of the main people on the standardization committee for Common Lisp. This isn't even mentioning all the work people have done in PL research with typed lambda calculi (going back to corrections to Church's attempt to use lambda calculus as a foundation for mathematics), which has influenced the designs of many type systems in modern programming languages.

> this contributes to the notion that LISP/Scheme/Lambda Calculus were "discovered", not that Lambda calculus has an explicit pedigree.

That notion is wrong (at least with a very high likelihood), and it's usually stated by people who fetishize the lambda calculus but know little of its long evolution. It's just your ordinary case (of hubris) where people aesthetically drawn to something describe it as inevitable or even a law of nature. And I know it's wrong in part because of the following quote:

> We do not attach any character of uniqueness or absolute truth to any particular system of logic. The entities of formal logic are abstractions, invented because of their use in describing and systematizing facts of experience or observation, and their properties, determined in rough outline by this intended use, depend for their exact character on the arbitrary choice of the inventor.

This quote is by the American logician Alonzo Church (1903-1995) in his 1932 paper, A Set of Postulates for the Foundation of Logic, and it appears as an introduction to the invention Church first described in that paper: the (untyped) lambda calculus [1].

The simpler explanation, which has the added benefit of also being true, or at least supported by plentiful evidence, is that the lambda calculus was invented as a step in a long line of research, tradition and aesthetics, and so others exposed to it could have (and did) invent similar things.

If you're interested in the real history of the evolution of formal logic and computation (and algebra) you can find the above quote, and many others, in a 300-page anthology of (mostly) primary sources that I composed about a year and a half ago [2]. They describe the meticulous, intentional invention of various formalisms over the centuries, as well as aesthetic concerns that have led some to prefer one formalism over another.

[1]: Actually, in that paper, what would become the lambda calculus is presented as the proof calculus for a logic that was later proven unsound. The calculus itself was then extracted and used in Church's more famous 1936 paper, An Unsolvable Problem of Elementary Number Theory in an almost-successful attempt to describe the essence of computation. That feat was finally achieved by Turing a few months later.

[2]: https://pron.github.io/computation-logic-algebra

Thanks for this comment and the anthology. Indeed the whole question is a bit odd and one ought not to glorify calculi but study them.

Lambda calculus originated from research in formal logic, which is about manipulating symbols according to precise rules that would capture reasoning. It is a compelling way to combine variable binding, equality and substitution into a model of "function calls" - even if the purpose was to formalize arithmetic computation and reasoning.

At some level, reasoning is what programming is about as well! The notation and rules may change, but ultimately we want to make the machines do things and at some level, we need abstraction mechanisms. Recursive procedures are such a mechanism and it can be expressed as a lambda term that involves a fixed-point combinator, or machine code.

It is easy to model and understand many things using lambda calculus or functional programming techniques, depending on whether the interest is theoretical/formal or practical.

To quote Peter Landin (heavily influenced by McCarthy and LISP and author of 'The next 700 programming languages'):

> A possible first step in the research program is 1700 doctoral theses called "A Correspondence between x and Church's λ-notation."

Maybe people think this was different in the late 1950s?

Let's read McCarthy's paper 'Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I', where he explicitly cites Church and introduces lambda notation.

I would consider that paper part of the phenomenon that is LISP, would that not settle the question? Lambda calculus gives little guidance in terms of implementation, but I think it does not diminish LISP in any way that it should be "based" on lambda calculus.

And I do not find the linked article adds any value, but I am very glad to read the HN discussion to find gems like the above (even if I should rather have slept for the past few hours).

BTW, the Church/Turing theory of computation is not universal for digital computation as explained in the following article:


That proof is like disproving the conservation of energy by pointing out that the water inside a kettle boils. Or speaking about the "Toaster-Enhanced Turing Machine" (https://www.scottaaronson.com/blog/?p=1121). It's easy to "disprove" Turing's thesis when you misstate it.

Turing's thesis talks about some system transforming an input to an output. Clearly, a TM could simulate the actor itself in your proof. If it is not able to simulate the entire actor-collaborator system, that's only because you may have given the collaborator (whatever it is that generates the messages) super-Turing powers. You assumed that there could be something that could issue a `stop` after an arbitrary number of `go`'s, but you haven't established that such a mechanism could actually exist, and that's where the super-Turing computation actually hides: in a collaborator whose existence you have not established. As you have not established the existence of the collaborator, you have not established the existence of your actor-collaborator system. I claim that a TM cannot simulate it simply because it cannot exist (not as you describe it, at least).

So here's another "proof": The actor machine takes two messages, Q and A(Bool), and it gets them alternately, always Q followed by A. Every time it gets a Q, it increments a counter (initialized to zero) by 1 to the value N, and emits a string corresponding to the Nth Turing machine. It then gets a message A containing a value telling it whether the Nth TM terminates on an empty tape, and in response it emits A's argument back. And here you have an actor machine that decides halting!

The article referenced presents a strongly-typed proof that the halting problem is computationally undecidable. Nevertheless, Actors can perform computations impossible on a nondeterministic Turing Machine.
If they can, you haven't shown that. The "computation" you present is not just the actor's behavior, but the behavior of a combined actor-collaborator system (the collaborator is whatever it is that sends the actor messages). This system presents "super-Turing" behavior iff the collaborator is super-Turing. You haven't shown such a collaborator, that can reliably emit a `stop` command after an arbitrary number of `go`s, can exist. It's always possible that the scientific consensus is wrong, but that requires proof, and the "proof" in the paper isn't one.
There is no "collaborator" in the Actor computation that Plotkin's proof shows cannot be performed by a nondeterministic Turing Machine.
Yes, there is. The "computation" of the actor (really, actor-collaborator) relies on something that can reliably emit a `stop` after an arbitrary number of `go`s. Please show that something can actually do that, and I'll show you a TM that can do the same. It's easy to describe non-computable behaviors; the difficulty is demonstrating that there are actual physical systems that can carry them out.
The Actor (which cannot be implemented by a nondeterministic TM) sends the 'stop' message to itself. However, just as there can be an arbitrarily long amount of time between two steps of a computation, there can be an arbitrarily long amount of time for a message to be delivered.
> The Actor (which cannot be implemented by a nondeterministic TM)

As you now describe it, it cannot be implemented (physically realized) at all. Or, conversely, any physically implementable refinement of it (which will not exhibit the entire range of behaviors) will be simulatable by a TM.

There are many abstract machines that cannot be implemented by TMs -- e.g. various oracle TMs. There is nothing special, surprising or new about that.

There are many formalisms that are more or less convenient abstractions for various kinds of systems. There is nothing special, surprising or new about that, either. In fact, some formalisms that can describe non-computable behaviors are commonly used to specify either software or hardware as they're convenient (like Lamport's TLA).

But you're making a claim about the Church-Turing thesis, which, as commonly interpreted today (as the physical Church-Turing thesis), is the claim that any mechanism that can be implemented by a physical mechanism can be simulated by a TM. Unless you show how to build a physical system that cannot be simulated by a TM, your claim is no refutation of the thesis; it has nothing to do with it. Your claim that arbiters in digital circuits cannot be simulated has not been established and is not recognized by scientific consensus.

> However, just as there can be an arbitrarily long amount of time between two steps of a computation, there can be a arbitrarily long amount of time for a message to be delivered.

This is a completely different use of "arbitrary". In TMs, the fact that an arbitrary amount of time can pass between steps means that any device, with any finite amount of time between steps, can produce the full range of TM behaviors. In your actor case, to get non-computable behavior, you need to show that the device can delay the message by every amount of time. You need to show that such a physical device can exist.

Put simply, it's one thing to propose a non-computable abstraction that's convenient to model some systems, and another thing altogether to claim that there are realizable physical systems that cannot be simulated by a Turing machine. The former is useful but mundane (in fact, all of classical analysis falls in this category); the latter has not been achieved to date.

Digital arbiters can theoretically take an arbitrary amount of time to settle, although statistically they tend to settle sooner rather than later. Also, if an Actor sends itself a 'stop' message over the Internet via Timbuktu, it can take an arbitrary amount of time to be received back. Also, an Actor can take an arbitrary amount of time to process a message.

In the above models of computation, arbitrary means absolutely arbitrary, i.e., there is no a priori bound on the amount of time that it can take.

Your trouble may be with Plotkin's proof, which shows that state machine models of nondeterministic computation are inadequate.

I have no problem with the proof. It's easy to come up with non-computable abstractions. All of calculus is one (in fact, Turing himself pointed it out in "On Computable Numbers", and he invented and used others when he found them useful), yet it's commonly used to model natural systems without anyone considering it a proof against Church-Turing. So the fact that an abstraction is non-computable is unsurprising and has nothing to do with the thesis. The Turing thesis is relevant when you're talking about a physical realization, and you have not provided any proof that you've found one that's not Turing-computable.

Every physical object does have an a priori bound on the amount of time it can take to do something, unless that time could possibly be infinite. The reason is that it needs some sort of a counter, so it needs some state, and there's only so much state storable in the universe to store a counter.

There are simple nondeterministic procedures that can be implemented by digital circuits using arbiters that cannot be implemented by a nondeterministic Turing Machine.
That is a strong assertion that requires proof. The consensus view is that there aren't. For example, one could claim that a TM couldn't simulate a coin flip as it cannot simulate true randomness, but this assumes that the coin flip is "truly" random without establishing it (which would be hard because of pseudorandomness). Or, in the case of arbiters, you could claim that the arbiter behaves like the magical collaborator in the stop/go examples, converting two analog inputs to a binary decision that takes an arbitrarily long time, but this only introduces yet another magical collaborator capable of producing analog inputs that are equal to arbitrary precision.

This is a common problem when we appeal to continuous natural phenomena, as their common description is usually a convenient, but imprecise, abstraction. Goldreich addressed this in On the philosophical basis of computational theories [1]: "A computational model cannot be justified by merely asserting that it is consistent with some theory of natural phenomena ... The source of trouble is the implicit postulate that asserts that whatever is not forbidden explicitly by the relevant electrical theories, can actually be implemented"[2]

[1]: http://www.wisdom.weizmann.ac.il/~oded/VO/qc-fable.pdf

[2]: He adds "at no cost" because his focus is complexity, not computability


Goldreich demonstrates the problem by showing that our abstractions of electrical circuits don't in themselves preclude circuits that violate computational complexity results or even decide halting, but that alone is insufficient to show that such circuits can actually be built.

There are certain formalisms that make this kind of error harder to spot: actor formalisms make it easy to hide impossible "computation" in a collaborator; some typed formalisms could hide computation in the syntax (which requires a lot or even infinite computation to decide).

Could it be that at least some universities teach programming languages with the idea that Lisp has this feature? My PL course at Northeastern had SHLAC (Scheme Has Lambda Calculus), where we started with an identity function and built up from there.
That seems likely. But then, I would think LISP does realize the lambda calculus in some sense. It naturally lends itself to this sort of exercise and it's really successful.
Lambda Calculus (LC) versus LISP is not just about lexical scoping, but also about partial application (currying), which is cumbersome in LISP and natural in LC. In LC (where \ = lambda)

    (\xy.x)M  ==>  \y.M
while in LISP

    ((lambda (x y) x) M)  ==>  undefined
because the lambda function expects two arguments. Of course, \xy.x is just an abbreviation for \x.\y.x, so the LISP counterpart would really be

    ((lambda (x) (lambda (y) x)) M)  ==>  (lambda (y) M)
but this only proves the point that currying is natural in LC and not in LISP, because LC provides syntactic sugar that allows one to treat higher-order functions and functions of multiple variables in the same way.

Also, LC is not compatible with functions with a variable number of arguments, which are common in LISP. For instance,

    (+ 1)  ==>  1
in most LISPs, but given PLUS == \mnfx.mf(nfx) and 1 == \fx.fx

    PLUS 1  ==>  \nfx.f(nfx) == SUCC
i.e., (PLUS 1) reduces to "SUCCessor", the function adding one to its argument.

In most LISP dialects, you can pass any number of arguments to a variable-argument function like +. So what does the syntax (F X) denote in general? The application of a unary function to one argument or the partial application of a binary function? Or a ternary one...?

In LC it does not matter, because multi-variable functions and higher-order functions are the same.
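
You can see this by transcribing the standard Church-numeral encodings into curried lambdas, here in Python (the names are the textbook encodings, not anything from a particular Lisp):

```python
# Church numerals as curried one-argument lambdas.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
one  = succ(zero)
two  = succ(one)

to_int = lambda n: n(lambda k: k + 1)(0)   # decode for inspection

# Partial application: plus(one) behaves exactly like succ, because
# every "multi-argument" function is already a chain of unary ones.
assert to_int(plus(one)(two)) == 3
assert to_int(succ(two)) == 3
```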

I have developed a LISPy language that uses currying instead of functions of multiple arguments in the book Compiling Lambda Calculus (https://www.t3x.org/clc/index.html).

You can download the code here: https://www.t3x.org/clc/lc1.html.

If I understood you right, Lisp was not directly inspired by the lambda calculus, but came out of McCarthy's own research into recursive functions, where he found that the three primary functions can cover the whole of computation.

What I'm extrapolating from this is that McCarthy's ideas are similar in implication to Lambda Calculus where you can define computation with just function abstraction and application, and use Peano numbers to represent data. Both approaches end up creating a purely functional way to write programs.

Would that be correct? I also wonder whether there is anything we can take away from this knowledge that is applicable to programming or how we look at it?

What "three primary functions" are you referring to?
This is touched on in the article:

"They are interesting because with just three initial functions (successor, constant and projection functions) closed under composition and primitive recursion, one can produce most computable functions studied in number theory (addition, division, factorial, exponential, etc.)."
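
As a sketch of what "closed under composition and primitive recursion" means in practice, here's the scheme f(0, y) = base(y), f(n+1, y) = step(n, f(n, y), y) in Python (helper names are mine, not the article's):

```python
# Primitive recursion over the naturals, with successor as the only
# arithmetic primitive.
def successor(n):
    return n + 1

def prim_rec(base, step):
    def f(n, y):
        acc = base(y)              # f(0, y) = base(y)
        for i in range(n):
            acc = step(i, acc, y)  # f(i+1, y) = step(i, f(i, y), y)
        return acc
    return f

# Addition: apply successor n times to y.
add = prim_rec(lambda y: y, lambda i, acc, y: successor(acc))
assert add(3, 4) == 7

# Multiplication: repeated addition, built from add.
mul = prim_rec(lambda y: 0, lambda i, acc, y: add(acc, y))
assert mul(3, 4) == 12
```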

Probably means the 3 irreducible primitives in LC: applications, abstractions, and "variables" (i.e. attribute identifiers)
I'm not even sure why it gets pushed as "functional"; I mean, you can pass and return functions, but that's really not the point of the language like it is with ML or Haskell. It's primarily a symbolic language.
One of the newest Lisp dialects, Kernel, is pretty close to lambda calculus, though. Like in LC, there is no implicit evaluation of arguments. A fexpr receives the "source code" of its input expressions, similar to LC. Then it can explicitly evaluate those it cares about.


"Receiving the source code" of an argument in lambda calculus is an accident of notation. The source code is not observable by the function it is passed to. Confluence implies that there is no way within lambda calculus to distinguish the result of reducing a term from the term itself.
Afaik, Haskell is a realization of the (typed!) lambda calculus. Lisps aren't because they don't do lazy evaluation. The LC beta reduction of (\a. a) (\c. d) (\e. f) is (\c. d) (\e. f) but most lisps will reduce it to (\a. a) d. This might seem like a minor detail but means general recursion using the y combinator isn't actually implementable in lisps (I could be wrong though).
> Afaik, Haskell is a realization of the (typed!) lambda calculus. Lisps aren't because they don't do lazy evaluation.

Lazy evaluation is just one possible operational semantics for a lambda calculus. Eager evaluation is another. In fact, all of the versions of lambda calculi presented in Benjamin Pierce's widely-read textbook "Types and Programming Languages" feature eager evaluation rather than lazy evaluation.

So the claim that the reason that Lisps aren't based on the lambda calculus is due to lack of lazy evaluation is incorrect. There are other reasons that Lisps diverge from lambda calculi but the evaluation strategy isn't one of them.
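The Y-combinator point is also worth pinning down: under call-by-value the classic Y diverges, but the eta-expanded variant (usually called Z) delays the self-application under a lambda, so eager languages can encode general recursion just fine. A sketch in Python, which is eager like most Lisps:

```python
# Z combinator: the call-by-value-safe Y.
# The inner (lambda v: ...) wrapper prevents eager evaluation
# from unfolding the self-application forever.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(
              lambda x: f(lambda v: x(x)(v)))

# Factorial defined without any named recursion:
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
```

Here `fact(5)` returns 120, computed purely through self-application.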

> general recursion using the y combinator isn't actually implementable in lisps

I think the 'typed' bit is key. You can't implement Y in plain old Haskell because it would need to recurse infinitely during type-checking.

It's valid Haskell 98 with a type constructor: http://r6.ca/blog/20060919T084800Z.html. But in GHC I think the first code snippet without NOINLINE still crashes, it's a perma-bug: https://downloads.haskell.org/~ghc/latest/docs/html/users_gu...

Some type systems do support equi-recursive types without the type constructor, e.g. Whiley (http://whiley.org/2013/04/21/iso-recursive-versus-equi-recur...). Maybe there you could implement Y without a type signature and have the recursive type inferred.

The main problem is speed. Using the Y combinator is going to mess up whatever code flow analysis the compiler has, unless it's using some cutting edge optimization research that I haven't been able to find.

There are some common themes here. Let's get some precise terminology so we can all talk about the same thing.

Some questions to ponder:

Is Lisp a term re-writing system? https://news.ycombinator.com/item?id=9554335

Is lambda calculus a term rewriting system? https://cstheory.stackexchange.com/questions/36090/how-is-la...

Is the Mathematica language a term-rewriting system? https://mathematica.stackexchange.com/questions/119933/why-d...

And to round it all up: Is Lisp an evaluation system and Lambda calculus an evaluation system? [I'll leave this one to the reader]

It is difficult to believe that McCarthy did not understand that he was beating the same horse as Church, Curry, Schoenfinkel, et al.
I mean, he obviously was aware of lambda-calculus as a thing that existed (since he used lambda notation for functions!), but might not have bothered to study it very closely. Nowadays the lambda calculus is considered very elegant, but in the 1940s I think it was considered quite weird and not very well known. (I forget who, but I think some famous logician complained that people did not read his thesis because it used the lambda-calculus formalism, or was advised not to use it for that reason, or something like that.)

I think, actually, the main reason it became so popular is exactly that it was implemented in Lisp/Scheme...

From what I understand, he purposefully took ideas from it, but because he did not feel like he understood it fully, he didn’t try to make a full implementation of it.
Why? Does every compiler writer know all the theoretical underpinnings and generalizations of their work? Or do they make something that solves a problem without investigating the entire universe around it?
Because he was certainly aware of the literature and he was a top notch scholar. Your follow-up questions seem to be implying something, care to spell it out for me?
But was he top notch back then? He's most well known for "creating" Lisp. And I put that in quotes because he never meant for anyone to implement it on a real machine.
re: was he top notch? By 1955 he was an assistant professor of Mathematics and known in his field. In 1956 he organised the Dartmouth conference, where the field of Artificial Intelligence got its name. He was a peer of Claude Shannon, Marvin Minsky and Nathaniel Rochester - so yes, he was top notch. The YC audience knows him for Lisp, but he was known for a lot more in his fields of research.
Stupid question, but why is it often written "_the_ lambda calculus" and not just "lambda calculus"?
A calculus is a system for calculation, so traditionally a specific system for calculation is named as the X calculus: the integral calculus, the pi calculus, the lambda calculus, the situation calculus, etc.

As a loose analogy, think of how specific instances of the general idea of systems are named: the court system, the cooling system, the moderation system, etc. Some uses are a bit archaic though, e.g. people now usually refer to integral calculus as a standalone name, without the definite article. I think we're somewhere in between with (the) lambda calculus; you can find papers that use "the" and others that don't.

there are many lambda calculi.

Here's two:

A) Pure Lambda Calculus is defined by

1. the terms (metavariables t and u): λx.t | x | (t u)

2. the reduction rules, β

3. the two conversion rules, α and η

B) Another of Church's lambda calculi, defined by

1. the terms (metavariables t and u): λx.t | x | (t u)

2. the reduction rule, β

3. the conversion rule, α.

You might go: wait, these are the same! They are not. Without the η rule, you cannot define extensional equivalence, and thus you cannot reason about things like referential transparency.
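For reference, here is a standard presentation of the three rules (my notation, not taken from the comment above):

```latex
\begin{align*}
  \beta  &: (\lambda x.\,t)\,u \;\to\; t[x := u] \\
  \alpha &: \lambda x.\,t \;=\; \lambda y.\,t[x := y]
            \quad (y \text{ not free in } t) \\
  \eta   &: \lambda x.\,(t\,x) \;=\; t
            \quad (x \text{ not free in } t)
\end{align*}
```

The η rule is what identifies a function with its pointwise wrapper, which is exactly the extensionality being discussed.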

There are a number of shortcomings with the pure lambda calculus, of course, so over the years people have added things, creating variations of lambda calculi.

A particularly famous one is System F. System F extends the Simply Typed Lambda Calculus (which itself is an extension of the pure lambda calculus) with type generators and eliminators. The trick is to make the language of types into terms of the lambda calculus itself.

System F begets SML, SML(NJ), Ocaml, Haskell and F#.

Then you have the other extensions along Barendregt's Lambda Cube. You get things like Coq and Agda and Idris, which extend System F in the same directions but with different ways of constructing them.

On the other side of things, people are trying to make sense of the pure lambda calculus itself. There exists a whole subindustry of people explaining substitutions (lambda calculi with explicit substitution) and people explaining self-interpretation, which you can do in Lisp but not really in the pure lambda calculus (the first clean attempt was in 1995!). There are also people developing or splitting the pure lambda calculus into two subsets; an example is the people who work with sequent calculi.

All these systems are lambda calculi. When people say _the_ lambda calculus then they usually mean the pure lambda calculus.

The problem is like 'is Erlang an Actor language?'. The answer is yes.

Carl Hewitt developed the Actor model based on Smalltalk in the 1970s.

Joe Armstrong created Erlang in the 1980s, and he didn't know the Actor model at all at that time. Erlang doesn't even have the concept of an Actor; it accidentally implemented the Actor model through the elegant design of its processes.

But when it comes to the Actor model nowadays, Erlang is basically a must-mention language, even though implementing Actors was never the intention.

Actors were influenced more by Simula-67 than by Smalltalk-72, which was a byte-stream language. However, neither language had adequate constructs for concurrency.
Interesting parallel with stories out there where authors think teachers get their writings “wrong”.

Unintended metaphor and application are things.

Smacks of a cognitive bias known as functional fixedness: https://en.m.wikipedia.org/wiki/Functional_fixedness

A screwdriver can also be a pry bar :-)

Imo this is why looser IP laws are important. Humanity needs to be able to rethink and find new application of its epistemological ideas to find new ideas of interest.

Too often we’re held to thinking about IP only the way the author intended. It’s almost pushing into thought policing.

I spent a lot of time on this. See M-LISP: a representation-independent dialect of LISP with reduction semantics, TOPLAS, 1992, the relevant bit is in section 2.

It's true that J. McCarthy had only a passing familiarity with LC. M-expression LISP, as it was originally conceived, was all about first-order recursion schemes over S-expressions. But due to a very simple error in the base case of an inductive definition, LISP 1.0 "featured" or "supported" higher-order functions, a la LC.

So it wasn't based on LC, but it is probably isomorphic to LC, right? This is why it's hard for me to believe math is invented. Different people and separate efforts all arrive at the same patterns with different names.
When pedantry goes wrong...?

This article repeats this "TL;DR Lisp is not based on the Lambda Calculus"

But that's not actually what McCarthy said. McCarthy said:

> one of the myths concerning LISP that people think up or invent for themselves becomes apparent, and that is that LISP is somehow a realization of the lambda calculus

"Based on" and "realization of" are two different things. This kind of exaggerated or hyperbolic pedantry strikes me as clickbait. Which is unfortunate, because the article does contain some good content.

If you read the LISP I manual, you will see that concepts beyond the obvious lambda notation are used directly from The Calculi of Lambda Conversion. Notably, the distinction between forms and functions.

Clearly, we're splitting some very fine hairs here.

Can we extend from this another misconception, then? That functional programming stems from the Lambda Calculus, when in reality it might come from Lisp, which does not come from the Lambda Calculus, thus making Lisp the root of the tree for the origin of functional programming?
Not really, Peter Landin’s ISWIM is essentially syntactic sugar over lambda calculus, and went on to influence ML and Haskell. So there is a more direct lineage from lambda calculus to functional programming via that route.

(Regarding the sibling comment: Landin’s paper also predates Backus’ paper by about 10 years)

No idea how accurate it is, but Wikipedia says that ISWIM was influenced by Lisp...
That is correct.

From Landin’s paper:

6. Relationship to LISP

ISWIM can be looked on as an attempt to deliver LISP from its eponymous commitment to lists, its reputation for hand-to-mouth storage allocation, the hardware dependent flavor of its pedagogy, its heavy bracketing, and its compromises with tradition.

We know "the root of the tree for the origin of functional programming": John Backus's Turing Award lecture "Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs"

https://amturing.acm.org/award_winners/backus_0703524.cfm It's not obvious but the "ACM Turing Award Lecture" link is the PDF.

I dunno. I thought the foundations were laid in mathematics considerably pre-computer, e.g. from the wiki page on Haskell Curry:


The focus of Curry's work were attempts to show that combinatory logic could provide a foundation for mathematics. (edit: accidentally stripped the part here mentioned that was in 1933 ie. very pre-computer) [...]. The paradox, developed by Rosser and Stephen Kleene, had proved the inconsistency of a number of related formal systems, including one proposed by Alonzo Church (a system which had the lambda calculus as a consistent subsystem) and Curry's own system. [...]

By working in the area of Combinatory Logic for his entire career, Curry essentially became the founder and biggest name in the field. Combinatory logic is the foundation for one style of functional programming language. The power and scope of combinatory logic are quite similar to that of the lambda calculus of Church, and the latter formalism has tended to predominate in recent decades.


And I think there's more, but it's hardly my field. Prolog grew out of predicate calculus, which has its roots in propositional calculus, which goes back to the ancient Greeks.

The mathematical foundations of things can be surprisingly old. I saw a 3D wireframe of a goblet, drawn with perspective, from the 1500s. It could have been done on a 1980s home computer, by appearance.

You reminded me of one of my favorite jokes:

"Computer Science could be called the post-Turing decline in the study of formal systems."

(I dunno who said it. Maybe Dijkstra.)

That is funny. But you could also restate it as "computer science could be called the post-Turing mass commercialisation of formal systems", 'cos AFAIK being a pure mathematician was never a path to wealth.

Come to think of it, it still isn't, so here's one in return:

Q. What's the difference between a mathematician and a large pizza?

A. A large pizza can feed a family of four.

This is not relevant directly to the subject, but perhaps someone in formal langs can help me. I'm interested in the optimisation of (necessarily) pure functional langs, starting with deforestation (the elimination of intermediate structures), e.g.

  map(f, map(g, list(1, 2, 3)))
can be optimised trivially by a human to

  map(f.g, list(1, 2, 3))
(where f.g is functional composition) but I want to do this automatically, and the first step is to play with it. I've defined stuff on paper and then started substituting, but it's slow and, being me, error-prone when done with paper and pen.

Does anyone know of symbolic manipulation software for Haskell, or a similar syntax (I'd prefer to avoid Lisp syntax if possible, but it's OK if there's nothing else), which will allow me to do this easily and get a feel for it?
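For what it's worth, the fusion law itself can at least be checked concretely before reaching for a symbolic tool. A minimal Python sketch (`compose` is my own helper here, not a library function):

```python
def compose(f, g):
    # Function composition: (f . g)(x) = f(g(x))
    return lambda x: f(g(x))

f = lambda x: x + 1
g = lambda x: x * 2

xs = [1, 2, 3]
two_passes = list(map(f, map(g, xs)))      # builds an intermediate list
one_pass   = list(map(compose(f, g), xs))  # deforested: single traversal
```

Both produce `[3, 5, 7]`; the point of deforestation is that the second form never materialises the intermediate `map(g, xs)`.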


Sounds like you are in need of transducers
Could you elaborate? A bit of searching throws up some very interesting stuff, but AFAICS a transducer is roughly comparable to a partial function, which doesn't relate to my request for manual expression-manipulation support.

If I'm missing something, please say.

I'm referring to transducers as in clojure. You can compose 2 'map' transducers, and that results in a single 'map' of the composed mapping-functions. That's to say that the two expressions you wrote would result in the same computations if you expressed them in terms of transducers.
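A minimal Python sketch of that idea, loosely mirroring Clojure's `map` transducer and `transduce` (all names here are mine, not a real library API):

```python
from functools import reduce

def mapping(f):
    # A transducer: transforms a reducing function rf into one
    # that applies f to each item before reducing.
    def xform(rf):
        return lambda acc, x: rf(acc, f(x))
    return xform

def compose(t1, t2):
    # Composing transducers: the leftmost transformation runs first.
    return lambda rf: t1(t2(rf))

def transduce(xf, rf, init, xs):
    return reduce(xf(rf), xs, init)

append = lambda acc, x: acc + [x]

f = lambda x: x + 1
g = lambda x: x * 2
xs = [1, 2, 3]

# One traversal, no intermediate list; g runs first, then f,
# matching map(f, map(g, xs)).
result = transduce(compose(mapping(g), mapping(f)), append, [], xs)
```

`result` is `[3, 5, 7]`, the same as the two-pass `map(f, map(g, xs))`, but the two mapping steps were fused into a single reducing function before any element was touched.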
After 5 mins looking at https://stackoverflow.com/questions/26317325/can-someone-exp... I think I'm getting it: ops are accumulated and composed until they actually have to be evaluated. That's very interesting indeed, thanks, and I'll spend a couple of hours on it.

However, I was using my example exactly as that: an example. I was after a symbolic manipulation app.

The issue of the cost of naively compiled functional languages matters to me as bad performance will kill a good thing.

It isn't clear, though, whether McCarthy didn't know anything about the Lambda Calculus, or simply didn't know it well and didn't create Lisp as a concrete realization of it. He might have created Lisp for whatever other reasons and been doing his own exploration, but it's probable that in doing so he drew on his knowledge of much pre-existing literature, which could include some of the Lambda Calculus, hence Lisp having some resemblance to it, like the use of lambda to define functions.

Also, realistically speaking, no programming language is based on the Lambda Calculus as is, even those that try to be.

It is entirely possible to realise the Lambda Calculus using Lisp. But McCarthy not understanding it is surprising.
>McCarthy not understanding it is surprising.

I think he is commenting on the subtleties of it.

I think many reading here will say they understand it or have studied it in a course but I am not so sure everyone gets the subtle points. Myself I have always puzzled over the difference between what programmers call LC and what seems to be discussed by Church.

I understand that Turing and Church came up with different approaches to describing the fundamentals of computing. You can see there is a relationship between LC and LISP. My question is: how did we get to the von Neumann architecture and CPU instruction sets from either Church's or Turing's work?
Anybody who holds forth on syntactical matters (lambda calculus and LISP being two examples thereof) and commits the grammatical heresy of writing “I wasn’t going to go home” (emphasis mine) in lieu of “I wouldn’t be going home” has just neutered themselves, in my humble opinion at least.
There's nothing grammatically wrong with "going to go home".
When I was taught English it was most definitely frowned upon and disparaged as “at best an Americanism”. It is grating to the native British ear and has no place in formal writings. There is no situation where it cannot be avoided by rephrasing the sentence (usually, by no more than employing “will be going”, but occasionally resorting to other constructs). During the IB we were absolutely forbidden from using it and would be marked down severely.
I'm a native British English speaker and it sounds completely normal to me. There's certainly nothing grammatically wrong with it. It's the same structure as "going to eat" or "going to walk".

Marking you down for using an expression used by all native English speakers is bonkers.

Programming language grammars and spoken language grammars are two completely separate things. Someone can conceivably be a great programming language researcher who sometimes gets a rule of the English language wrong in a sentence.