"one of the myths concerning LISP that people think up or invent for themselves becomes apparent, and that is that LISP is somehow a realization of the lambda calculus, or that was the intention. The truth is that I didn't understand the lambda calculus, really" - John McCarthy

So there are two issues here: 1) whether or not it was McCarthy's intention to realize the Lambda Calculus in LISP, and 2) whether or not LISP is such a realization, or at least some close approximation of one.

The answer to 1 is clearly no. This doesn't imply an answer to 2 one way or another.

If 2 isn't true, what explains the widespread belief? Is it really just that he, McCarthy, borrowed some notation?

My understanding is that lexical scope was first implemented in Algol and Pascal, and first combined with true garbage collection in Scheme. (Hence the restriction in Algol and Pascal that closures existed but could only be passed into functions, never returned from them: that way the variables being closed over could live on the stack.)

But I'd love to learn that I'm wrong and that these things came before that.

Wikipedia says

> The concept of closures was developed in the 1960s for the mechanical evaluation of expressions in the λ-calculus and was first fully implemented in 1970 as a language feature in the PAL programming language to support lexically scoped first-class functions.

> PAL, the Pedagogic Algorithmic Language, is a programming language developed at the Massachusetts Institute of Technology in around 1967 to help teach programming language semantics and design. It is a "direct descendant" of ISWIM and owes much of its philosophy to Christopher Strachey.

Looking at the "Universal LISP function" on page 13 in [0], the case for apply/LAMBDA just extends the current environment a with the arguments of the lambda, but it doesn't unpack a closure to get the environment the lambda function was defined in, so it implements the dynamic version. (Unlike, e.g., the interpreter in SICP [1].)
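To make the contrast concrete, here is a hypothetical micro-evaluator sketch in Python (not McCarthy's code; expressions are just tuples). `ev_dyn` extends the caller's current environment on application, as the manual's apply does, while `ev_lex` packs up the defining environment into a closure and unpacks it on application, as in SICP:

```python
# Expressions: a variable name (str), a number (int),
# ('lam', params, body) for lambda, or (f, arg1, ...) for application.

def ev_dyn(x, env):
    # Dynamic scoping, as in LISP 1: a lambda carries no environment;
    # application extends whatever environment is current at the call.
    if isinstance(x, str):
        return env[x]
    if isinstance(x, int):
        return x
    if x[0] == 'lam':
        return (x[1], x[2])                  # no environment captured
    params, body = ev_dyn(x[0], env)
    args = [ev_dyn(a, env) for a in x[1:]]
    return ev_dyn(body, dict(env, **dict(zip(params, args))))

def ev_lex(x, env):
    # Lexical scoping, as in SICP: a lambda packs up its defining
    # environment (a closure); application unpacks and extends it.
    if isinstance(x, str):
        return env[x]
    if isinstance(x, int):
        return x
    if x[0] == 'lam':
        return (x[1], x[2], env)             # closure
    params, body, cenv = ev_lex(x[0], env)
    args = [ev_lex(a, env) for a in x[1:]]
    return ev_lex(body, dict(cenv, **dict(zip(params, args))))

# A program that tells the two apart: f is a thunk returning y,
# called from a context where y has been rebound to 2.
prog = (('lam', ['f'], (('lam', ['y'], ('f',)), 2)),
        ('lam', [], 'y'))
```

With the starting environment `{'y': 1}`, `ev_dyn` returns 2 (the caller's y) while `ev_lex` returns 1 (the y from the defining environment).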

[0] http://www.softwarepreservation.org/projects/LISP/book/LISP%... [1] https://mitpress.mit.edu/sites/default/files/sicp/full-text/...

Googling a bit, it seems that McCarthy mentions this in "History of Lisp" [0], as a change from LISP 1 to LISP 1.5:

> In modern terminology, lexical scoping was wanted, and dynamic scoping was obtained. I must confess that I regarded this difficulty as just a bug and expressed confidence that Steve Russell would soon fix it. He did fix it but by inventing the so-called FUNARG device that took the lexical environment along with the functional argument. Similar difficulties later showed up in Algol 60, and Russell's turned out to be one of the more comprehensive solutions to the problem. While it worked well in the interpreter, comprehensiveness and speed seem to be opposed in compiled code, and this led to a succession of compromises. Unfortunately, time did not permit writing an appendix giving the history of the problem, and the interested reader is referred to (Moses 1970) as a place to start. (David Park tells me that Patrick Fischer also had a hand in developing the FUNARG device).

So I guess closures had been invented at least by 1962 (when the LISP 1.5 manual was published), but were not widely used because of performance considerations?

[0] http://www-formal.stanford.edu/jmc/history/lisp/node4.html

"With lexical scope, a name always refers to its (more or less) local lexical environment. This is a property of the program text..."

So since the local lexical environment is a property of the program text, lexical scope is explicit in the language's syntax.

Common LISP is lexically scoped, though it does still have opt-in dynamic scoping ("special variables").

Since Emacs 24.1 (had to look up the version), it's been possible to enable lexical scope for a file or buffer. The default is still dynamic scope, and even with lexical scope enabled, variables declared special (e.g. with defvar) remain dynamically bound.

This would fit in with Graham's suggestion that McCarthy more "discovered" Lisp than "invented" it.

(a) In classic lambda calculus, *everything* is a lambda term. McCarthy's Lisp has primitives like lists and numbers. However, it is known that lambda calculus is powerful enough to encode these things as lambda terms (for example,

```
null = (lambda (n c) (n))
(cons a b) = (lambda (n c) (c a b))
```

gives a way to encode lists. The car function would be something like

```
car = (lambda (lst)
        (lst (lambda () (error "car: not cons"))
             (lambda (a b) a)))
```

This would not work in the original Lisp because of binding issues: the definition of cons requires that the specific a and b bindings be remembered by the returned lambda.)

(b) Lambda calculus does not have any evaluation rules. Rather, it is like algebra: you can try to normalize an expression if you wish, but the point is that some lambda terms are equivalent to others based on simple rules that model abstract properties of function composition. Lambda-calculus-based programming languages choose some evaluation rule, but there is no guarantee of convergence: there might be two programs that lambda calculus says are formally equivalent, yet one terminates while the other does not. Depending on how you're feeling, you might say that no PL for a computer can ever realize the lambda calculus; more pragmatically, we can say most languages use lambda calculus with a strict evaluation strategy.
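The list encoding from (a) can be sketched in Python, whose lambdas do capture their defining bindings lexically (the names nil/cons/car/_fail are just illustrative):

```python
# A list is a function of two continuations: one for the empty case,
# one for the cons case, mirroring the null/cons encoding above.
def nil(on_nil, on_cons):
    return on_nil()

def cons(a, b):
    # The returned lambda closes over a and b lexically -- exactly the
    # binding behavior point (a) says the original Lisp lacked.
    return lambda on_nil, on_cons: on_cons(a, b)

def _fail(msg):
    raise ValueError(msg)

def car(lst):
    return lst(lambda: _fail("car: not cons"), lambda a, b: a)

def cdr(lst):
    return lst(lambda: _fail("cdr: not cons"), lambda a, b: b)
```

So `car(cons(1, nil))` evaluates to 1, and `car(nil)` raises the "not cons" error.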

(c) The lambda terms in lambda calculus are not inspectable objects, but more just a sequence of symbols. Perhaps one of the innovations of McCarthy is that lambda terms can be represented using lists, and the evaluator can be written as a list processor (much better than Godel numbering!). In any case, the fact that terms have the ability to evaluate representations of terms within the context of the eval makes things a little different. It's also not too hard to construct a lambda evaluator in the lambda calculus[1], but you don't have the "level collapse" of Lisp.

(d) In lambda calculus, one way to model function application is that you immediately substitute in arguments wherever that parameter is used in the function body. Shadowing is dealt with using a convention in PL known as lexical scoping, and an efficient implementation uses a linked list of environments. In the original Lisp, there was a stack of variable bindings instead, leading to something that is now known as dynamic scoping, which gives different results from the immediate substitution model. Pretty much everything fun you can do with the lambda calculus depends on having lexical scoping.
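A small Python sketch of the difference described in (d), with a hypothetical global binding stack standing in for the original Lisp's dynamic bindings:

```python
# Lexical: the inner lambda remembers n from its defining environment.
def make_adder(n):
    return lambda x: x + n

# Dynamic (simulated): names are resolved against a global stack of
# bindings at *call* time, not at capture time.
bindings = []

def lookup(name):
    for k, v in reversed(bindings):
        if k == name:
            return v
    raise NameError(name)

def dyn_make_adder(n):
    bindings.append(("n", n))
    f = lambda x: x + lookup("n")   # "n" is looked up when f is called
    bindings.pop()                  # ...but the binding is gone by then
    return f
```

Here `make_adder(1)(2)` gives 3, while calling the function returned by `dyn_make_adder(1)` raises NameError: the classic upward-funarg problem under dynamic scope.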

All this said, the widespread belief about Lisp being the lambda calculus probably comes from Scheme, which *was* intentionally lambda calculus with a strict evaluation model. Steele and Sussman were learning about actors for AI research, and I think it was Sussman (a former logician) who suggested that their planning language Schemer (truncated to Scheme) ought to have real lambdas. At some point, they realized actors and lambdas (with mutable environments) had the exact same implementation. This led to "Scheme: An Interpreter for Extended Lambda Calculus" (1975) and the "Lambda the ultimate *something*" papers. Later, many of these ideas were backported to Lisp during the standardization of Common Lisp.

[1] https://math.berkeley.edu/~kmill/blog/blog_2018_5_31_univers...

See the following for the current state of the art including the latest Actor approach to Eval, which is more modular and concurrent than the Eval in Lisp and Scheme:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3418003

The above article explains exactly how Actors are much more powerful than lambdas with mutable environments.

While a lot of people are trying to defend the lambda calculus as a basis, I think this actually undersells the significance of LISP. Apart from Lisp the language family and its implementations, there is Lisp, (arguably) the first practically realizable mathematical model of computation. That is, it stands on its own as a model for computation†, continuing along a long line of which I think Grassmann's 1861 work on arithmetic and induction is a good starting point.

Turing Machines are intuitive and the lambda calculus is subtle and expressive, but Lisp's contribution was to place partial recursive functions on a more intuitive/realizable basis in terms of simple building blocks: partial functions, predicates, conditional expressions and symbolic expressions (ordered pairs/lists of atomic symbols). Lambdas come in as a notation for functions, with a modification to facilitate recursive definitions.

†Making Greenspun's Tenth Rule trivially true.

This is probably the most informative comment that I've read on HN in the last couple of months.

John McCarthy said that he never had the intention to realize the lambda calculus, but he followed that statement with the corollary that had someone "started out with that intention, he might have ended with something like LISP." Peter Landin was a pioneer in that regard. See "The Mechanical Evaluation of Expressions", published in 1964, and the SECD virtual machine. Machine interpreters like SECD and CEK may come close to a "realization" of the lambda calculus. Their design is directly inspired by its semantics. You don't necessarily end up with something like LISP, but you can, see Lispkit and LispMe.

Moreover, the lambda calculus is confluent, so if you find a terminating reduction sequence, you're guaranteed that all other terminating sequences end with the same result.

So as long as your PL uses normal-order or lazy evaluation, you can fully realize any equivalence in the lambda calculus.
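One way to see this concretely in Python, with thunks standing in for call-by-name arguments (Ω is the classic divergent term):

```python
# const_y corresponds to (lambda x. y): it ignores its argument.
const_y = lambda x: "y"

# omega corresponds to Omega = (lambda x. x x)(lambda x. x x):
# forcing this thunk never returns.
def omega():
    return omega()

# Call-by-name: pass the argument as an unevaluated thunk.
result = const_y(omega)   # terminates; the thunk is never forced
# A strict evaluator would compute omega() first and diverge, even
# though both programs are beta-equivalent to the plain term y.
```

Here `result` is `"y"`; the divergent argument is simply discarded.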


OO says everything is an object. Even though Java has non-object primitives, we're still gonna classify Java as OO.

> Lambda calculus does not have any evaluation rules.

> The lambda terms in lambda calculus are not inspectable objects, but more just a sequence of symbols.

It's not clear to me why this makes Lisp not in the family of Lambda implementations.

> In the original Lisp, there was a stack of variable bindings instead, leading to something that is now known as dynamic scoping.

That's true. Every modern Lisp (Scheme, Clojure, Racket) has lexical scoping. And Common LISP uses lexical by default.

> Later, many of these ideas were backported to Lisp during the standardization of Common Lisp.

Again this contributes to the notion that LISP/Scheme/Lambda Calculus were "discovered", not that Lambda calculus has an explicit pedigree.

> It's not clear to me why this makes Lisp not in the family of Lambda implementations.

To be clear, I started my comment by writing "if it is a realization, then it is one with [the following differences]." Lambda calculus was such a good idea that pretty much anything with function abstractions can be described by some variation of it. It's the dynamic scoping that causes the main issues here, though, and suggests lambda calculus was not a significant motivation in the definition of McCarthy's Lisp. Yet, he was still aware of it enough to call the abstraction operator "lambda."

>> Later, many of these ideas were backported to Lisp during the standardization of Common Lisp.

> Again this contributes to the notion that LISP/Scheme/Lambda Calculus were "discovered", not that Lambda calculus has an explicit pedigree.

I don't see how that follows. Sussman was a math undergrad and PhD and was well aware of developments in logic, and he influenced Steele, who created the quite-influential Scheme and went on to be one of the main people on the standardization committee for Common Lisp. This isn't even mentioning all the work people have done in PL research with typed lambda calculi (going back to corrections to Church's attempt to use lambda calculus as a foundation for mathematics), which has influenced the designs of many type systems in modern programming languages.

That notion is wrong (at least with a very high likelihood), and it's usually stated by people who fetishize the lambda calculus but know little of its long evolution. It's just your ordinary case (of hubris) where people aesthetically drawn to something describe it as inevitable or even a law of nature. And I know it's wrong in part because of the following quote:

> *We do not attach any character of uniqueness or absolute truth to any particular system of logic. The entities of formal logic are abstractions, invented because of their use in describing and systematizing facts of experience or observation, and their properties, determined in rough outline by this intended use, depend for their exact character on the arbitrary choice of the inventor.*

This quote is by the American logician Alonzo Church (1903-1995) in his 1932 paper, *A Set of Postulates for the Foundation of Logic*, and it appears as an introduction to the invention Church first described in that paper: the (untyped) lambda calculus [1].

The simpler explanation, which has the added benefit of also being true, or at least supported by plentiful evidence, is that the lambda calculus was invented as a step in a long line of research, tradition and aesthetics, and so others exposed to it could have (and did) invent similar things.

If you're interested in the real history of the evolution of formal logic and computation (and algebra) you can find the above quote, and many others, in a 300-page anthology of (mostly) primary sources that I composed about a year and a half ago [2]. They describe the meticulous, intentional invention of various formalisms over the centuries, as well as aesthetic concerns that have led some to prefer one formalism over another.

[1]: Actually, in that paper, what would become the lambda calculus is presented as the proof calculus for a logic that was later proven unsound. The calculus itself was then extracted and used in Church's more famous 1936 paper, An Unsolvable Problem of Elementary Number Theory in an almost-successful attempt to describe the essence of computation. That feat was finally achieved by Turing a few months later.

Lambda calculus originated from research in formal logic, which is about manipulating symbols according to precise rules that would capture reasoning. It is a compelling way to combine variable binding, equality and substitution into a model of "function calls" - even if the purpose was to formalize arithmetic computation and reasoning.

At some level, reasoning is what programming is about as well! The notation and rules may change, but ultimately we want to make machines do things, and at some level we need abstraction mechanisms. Recursive procedures are one such mechanism, and they can be expressed as lambda terms involving a fixed-point combinator, or directly as machine code.
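As a sketch of that fixed-point idea in Python (which is strict, so the Z combinator is used rather than Y): recursion without self-reference in the recursive function's own definition.

```python
# Z combinator: a call-by-value fixed-point combinator.
# Z(f) behaves like f(Z(f)), so f receives "itself" as an argument.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(
              lambda x: f(lambda v: x(x)(v)))

# Factorial with no recursive reference to its own name:
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
```

For example, `fact(5)` evaluates to 120, even though `fact` never mentions itself.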

It is easy to model and understand many things using lambda calculus or functional programming techniques, depending on whether the interest is theoretical/formal or practical.

To quote Peter Landin (heavily influenced by McCarthy and LISP and author of 'The next 700 programming languages'):

> A possible first step in the research program is 1700 doctoral theses called "A Correspondence between x and Church's λ-notation."

Maybe people think this was different in the late 1950s?

Let's read McCarthy's paper 'Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I', where he explicitly cites Church and introduces lambda notation.

I would consider that paper part of the phenomenon that is LISP, would that not settle the question? Lambda calculus gives little guidance in terms of implementation, but I think it does not diminish LISP in any way that it should be "based" on lambda calculus.

And I do not find the linked article adds any value, but I am very glad to read the HN discussion to find gems like the above (even if I should rather have slept for the past few hours).

https://softwareengineering.stackexchange.com/questions/1076...

Turing's thesis talks about some system transforming an input to an output. Clearly, a TM could simulate the actor itself in your proof. If it is not able to simulate the entire actor-collaborator system, that's only because you may have given the *collaborator* (whatever it is that generates the messages) super-Turing powers. You assumed that there could be something that could issue a `stop` after an arbitrary number of `go`'s, but you haven't established that such a mechanism could actually exist, and *that's* where the super-Turing computation actually hides: in a collaborator whose existence you have not established. As you have not established the existence of the collaborator, you have not established the existence of your actor-collaborator system. I claim that a TM cannot simulate it simply because it cannot exist (not as you describe it, at least).

So here's another "proof": The actor machine takes two messages, Q and A(Bool), and it gets them alternately, always Q followed by A. Every time it gets a Q, it increments a counter (initialized to zero) by 1 to the value N, and emits a string corresponding to the Nth Turing machine. It then gets a message A containing a value telling it whether the Nth TM terminates on an empty tape, and in response it emits A's argument back. And here you have an actor machine that decides halting!

It is one thing to *describe* non-computable behaviors; the difficulty is demonstrating that there are actual physical systems that can carry them out.

As you now describe it, it cannot be implemented (physically realized) at all. Or, conversely, any physically implementable refinement of it (which will not exhibit the entire range of behaviors) *will* be simulatable by a TM.

There are many *abstract* machines that cannot be implemented by TMs -- e.g. various oracle TMs. There is nothing special, surprising or new about that.

There are many formalisms that are more or less convenient abstractions for various kinds of systems. There is nothing special, surprising or new about that, either. In fact, some formalisms that can describe non-computable behaviors are commonly used to specify either software or hardware as they're convenient (like Lamport's TLA).

But you're making a claim about the Church-Turing thesis, which, as commonly interpreted today (as the physical Church-Turing thesis), is the claim that any mechanism that can be implemented by a *physical mechanism* can be simulated by a TM. Unless you show how to build a physical system that cannot be simulated by a TM, your claim is no refutation of the thesis; it has nothing to do with it. Your claim that arbiters in digital circuits cannot be simulated has not been established and is not recognized by scientific consensus.

> However, just as there can be an arbitrarily long amount of time between two steps of a computation, there can be an arbitrarily long amount of time for a message to be delivered.

This is a completely different use of "arbitrary". In TMs, the fact that an arbitrary amount of time can pass between steps means that any device, with *any* finite amount of time between steps, can produce the full range of TM behaviors. In your actor case, to get non-computable behavior, you need to show that the device can delay the message by *every* amount of time. You need to show that such a physical device can exist.

Put simply, it's one thing to propose a non-computable abstraction that's convenient to model some systems, and another thing altogether to claim that there are realizable physical systems that cannot be simulated by a Turing machine. The former is useful but mundane (in fact, all of classical analysis falls in this category); the latter has not been achieved to date.

In the above models of computation, *arbitrary* means absolutely arbitrary, i.e., there is no a priori bound on the amount of time that it can take.

Your trouble may be with Plotkin's proof, which shows that state machine models of nondeterministic computation are inadequate.

That an *abstraction* is non-computable is unsurprising and has nothing to do with the thesis. The Turing thesis is relevant when you're talking about a physical realization, and you have *not* provided any proof that you've found one that's not Turing-computable.

Every physical object does have an a priori bound on the amount of time it can take to do something, unless that time could possibly be infinite. The reason is that it needs some sort of a counter, so it needs some state, and there's only so much state storable in the universe to store a counter.

This *assumes* that the coin flip is "truly" random without establishing it (which would be hard because of pseudorandomness). Or, in the case of arbiters, you could claim that the arbiter behaves like the magical collaborator in the stop/go examples, converting two analog inputs to a binary decision that takes an arbitrarily long time, but this only introduces yet another magical collaborator capable of producing analog inputs that are equal to arbitrary precision.

This is a common problem when we appeal to continuous natural phenomena, as their common description is usually a convenient, but imprecise, abstraction. Goldreich addressed this in *On the philosophical basis of computational theories* [1]: "A computational model cannot be justified by merely asserting that it is consistent with some theory of natural phenomena ... The source of trouble is the implicit postulate that asserts that *whatever is not forbidden explicitly by the relevant electrical theories, can actually be implemented*"[2]

[1]: http://www.wisdom.weizmann.ac.il/~oded/VO/qc-fable.pdf

[2]: He adds "at no cost" because his focus is complexity, not computability

Goldreich demonstrates the problem by showing that our abstractions of electrical circuits don't in themselves preclude circuits that violate computational complexity results or even decide halting, but that alone is insufficient to show that such circuits can actually be built.

There are certain formalisms that make this kind of error harder to spot: actor formalisms make it easy to hide impossible "computation" in a collaborator; some typed formalisms could hide computation in the syntax (which requires a lot or even infinite computation to decide).

*digital* computation.