Re: Combinators 1 (Introduction)

2018-09-17 Thread Bruno Marchal

> On 17 Sep 2018, at 05:55, Jason Resch  wrote:
> 
> 
> 
> On Sun, Aug 5, 2018 at 2:05 PM Bruno Marchal wrote:
> Hi Jason,
> 
> 
>> On 5 Aug 2018, at 05:24, Jason Resch wrote:
>> 
>> 
>> 
>> On Sat, Jul 28, 2018 at 2:19 PM Bruno Marchal wrote:
>> Hi Jason, people,
>> 
>> 
>> Hi Bruno,
>> 
>> Thank you for this. I've been trying to digest it over the past few days.
> 
> No problem. It was hard to begin with, and I was about to send a few easy 
> exercises to help with the notation. But you did very well.
> 
> 
> Thank you. :-)

You are welcome!



>  
> 
> 
>>  
>> 
>> I will send my post on the Church-Turing thesis and incompleteness later. It 
>> is too long.
>> 
>> So, let us proceed with the combinators.
>> 
>> Two seconds of historical motivation. During the crisis in set theory, Moses 
>> Schoenfinkel published, in 1924, an attempt to found mathematics on functions 
>> alone. But he did not consider functions as defined by their behaviour 
>> (their input-output) but rather as rules to follow.
>> 
>> He also considered only functions of one variable, and wrote (f x) instead 
>> of the usual f(x).
>> 
>> The idea is that a binary function like (x + y), when given the input 4, say, 
>> and no other inputs, will just remain patient instead of insulting the user; 
>> so to compute 4+5 you just give 5 to (+ 4), that is, you compute
>>  ((+ 4) 5). (+ 4) will be an object computing the function 4 + x. 
>> 
>> 
>> The composition of f and g on x is thus written (f (g x)), and a combinator 
>> should be some function B able, on f, g and x, to give (f (g x)).
>> 
>> Bfgx = f(gx), for example. 
>> 
>> So am I correct to say a combinator "B" is a function taking a single input 
>> "fgx”,
> 
> Three inputs. B on f will first give (B f), written Bf; then when B gets 
> its second input, that will give ((B f) g), written Bfg, which is a new 
> function which, on x, will trigger the definition above and give the 
> combinator (f (g x)), written f(gx), and which would compute f(g(x)) written 
> in the usual schoolboy notation.
> 
> 
> Okay, I see.
>  
> 
> 
> 
>> but is itself capable of parsing the inputs and evaluating them as functions?
> 
> It just recombines its inputs; the functions will evaluate by themselves. 
> Don’t worry, you will see clearly the how and why.
> 
> B is called an applicator, because given f, g and h as arguments, Bfgh, it 
> gives f(gh). I have used f, g and h as symbols, but I can use x, y and 
> z instead. Those variables stand for combinators. Bxyz = x(yz). Formally, B 
> only introduces those right parentheses. With full parentheses we should write:
> 
> (((Bx)y)z) = (x(yz)). But we suppress all leftmost parentheses: Bxyz = x(yz).
> 
> The interesting question is: does B exist? Which here means: is there a 
> combinator (named B) which, applied to x, then y, then z, gives x(yz)?
> 
> I believe B exists, given combinators are universal, but I don't know what it 
> is.


Nobody knows what numbers or combinators are. That is probably why Raymond 
Smullyan suggested that they are birds. And B is the Bluebird, although an 
authentic Snark hunter knows it could be a Bellman, a Barrister, a Baker, etc.

The only important thing is that it is NOT a Boojum! Because if it is a Boojum, 
the Baker (the hero without a name) will softly and suddenly vanish away ...




>  
> 
> Later I will provide an algorithm solving the task of finding a combinator 
> doing some given combination like that. But here I just answer the question: 
> YES!
> 
> Theorem B = S(KS)K, i.e. Bxyz = S(KS)Kxyz = x(yz)
> 
> Ahh here it is!

Ah! Good. You are asking yourself the right question: what is B in terms of S 
and K. Sorry for the pun above.



>  
> 
> Proof: it is enough to compute S(KS)Kxyz and to see if we get x(yz)
> 
> Let us compute, and of course I remind you the two fundamental laws used in 
> that computation:
> 
> Kxy = x
> Sxyz = xz(yz)
> 
> S(KS)Kxyz =
> 
> OK, let us see this in detail. That is the combinator S, which gets a first 
> argument, the combinator (KS); this gives (S (K S)), written S(KS), which 
> remains stable ("not enough arguments"); then S(KS) gets the argument K, which 
> gives S(KS)K, which remains stable (indeed it is supposed to be the code of B), 
> and indeed S still has only its first two arguments, so we cannot apply any 
> law to proceed. But now S gets its third argument x, so 
> 
> we are at S(KS)Kx, that is S (KS) K x, and here S has three arguments and so 
> matches the left part of the second law Sxyz, with x = KS, y = K and z = x.
> 
> Okay, I follow so far.  The idea of waiting until having enough arguments 
> before activating is helpful and I think I was missing that before.

Good. In some future post, overlapping with the Church’s thesis thread (the 
phi_i) and the Combinators thread, I will explain how to mimic that waiting for 
the arguments in any programming language. I will show how 
Re: Combinators 1 (Introduction)

2018-09-16 Thread Jason Resch
On Sun, Aug 5, 2018 at 2:05 PM Bruno Marchal  wrote:

> Hi Jason,
>
>
> On 5 Aug 2018, at 05:24, Jason Resch  wrote:
>
>
>
> On Sat, Jul 28, 2018 at 2:19 PM Bruno Marchal  wrote:
>
>> Hi Jason, people,
>>
>>
> Hi Bruno,
>
> Thank you for this. I've been trying to digest it over the past few days.
>
>
> No problem. It was hard to begin with, and I was about to send a few easy
> exercises to help with the notation. But you did very well.
>
>
Thank you. :-)


>
>
>
>
>>
>> I will send my post on the Church-Turing thesis and incompleteness later.
>> It is too long.
>>
>> So, let us proceed with the combinators.
>>
>> Two seconds of historical motivation. During the crisis in set theory,
>> Moses Schoenfinkel published, in 1924, an attempt to found mathematics on
>> functions alone. But he did not consider functions as defined by their
>> behaviour (their input-output) but rather as rules to follow.
>>
>> He also considered only functions of one variable, and wrote (f x)
>> instead of the usual f(x).
>>
>> The idea is that a binary function like (x + y), when given the input 4,
>> say, and no other inputs, will just remain patient instead of insulting the
>> user; so to compute 4+5 you just give 5 to (+ 4), that is, you compute
>>  ((+ 4) 5). (+ 4) will be an object computing the function 4 + x.
>>
>>
>> The composition of f and g on x is thus written (f (g x)), and a
>> combinator should be some function B able, on f, g and x, to give (f (g x)).
>>
>> Bfgx = f(gx), for example.
>>
>
> So am I correct to say a combinator "B" is a function taking a single
> input "fgx”,
>
>
> Three inputs. B on f will first give (B f), written Bf; then when B gets
> its second input, that will give ((B f) g), written Bfg, which is a new
> function which, on x, will trigger the definition above and give the
> combinator (f (g x)), written f(gx), and which would compute f(g(x)) written
> in the usual schoolboy notation.
>
>
Okay, I see.


>
>
>
> but is itself capable of parsing the inputs and evaluating them as
> functions?
>
>
> It just recombines its inputs; the functions will evaluate by themselves.
> Don’t worry, you will see clearly the how and why.
>
> B is called an applicator, because given f, g and h as arguments, Bfgh,
> it gives f(gh). I have used f, g and h as symbols, but I can use x, y
> and z instead. Those variables stand for combinators. Bxyz = x(yz).
> Formally, B only introduces those right parentheses. With full parentheses we
> should write:
>
> (((Bx)y)z) = (x(yz)). But we suppress all leftmost parentheses: Bxyz
> =x(yz).
>
> The interesting question is: does B exist? Which here means: is there a
> combinator (named B) which, applied to x, then y, then z, gives x(yz)?
>

I believe B exists, given combinators are universal, but I don't know what
it is.


>
> Later I will provide an algorithm solving the task of finding a
> combinator doing some given combination like that. But here I just answer
> the question: YES!
>
> Theorem B = S(KS)K, i.e. Bxyz = S(KS)Kxyz = x(yz)
>

Ahh here it is!


>
> Proof: it is enough to compute S(KS)Kxyz and to see if we get x(yz)
>
> Let us compute, and of course I remind you the two fundamental laws used
> in that computation:
>
> Kxy = x
> Sxyz = xz(yz)
>
> S(KS)Kxyz =
>
> OK, let us see this in detail. That is the combinator S, which gets a first
> argument, the combinator (KS); this gives (S (K S)), written S(KS), which
> remains stable ("not enough arguments"); then S(KS) gets the argument K,
> which gives S(KS)K, which remains stable (indeed it is supposed to be the
> code of B), and indeed S still has only its first two arguments, so we
> cannot apply any law to proceed. But now S gets its third argument x, so
>
> we are at S(KS)Kx, that is S (KS) K x, and here S has three arguments and
> so matches the left part of the second law Sxyz, with x = KS, y = K and z
> = x.
>

Okay, I follow so far.  The idea of waiting until having enough arguments
before activating is helpful and I think I was missing that before.


>
> Now the second law is triggered, so to speak, and we get xz(yz) with
> x = KS, y = K and z = x, and that gives (KS)x(Kx) = KSx(Kx). OK?
>

It isn't fully clear to me how the "eliminate left parenthesis" rule works.
Is it that you only cancel the left-most parenthesis until you hit a
non-left-parenthesis element, or is it more complex than this?


>
> You can always add back the left parentheses, or some of them, to be sure
> what we have obtained. KSx(Kx) = ((KSx) (Kx)), but “KSx” is a redex, as it
> matches Kxy, with x = S and y = x, and so “reduces” to S; so we get S(Kx)
> (starting from S(KS)Kx, which is Bx), now waiting for y and then z.
>
> We are at Bxy = S(KS)Kxy = (we just computed) S(Kx)y, which is S with “not
> enough arguments”, so we give the remaining z and get
>
> S(Kx)yz
>
> Which triggers again the second law to give (x = (Kx), y = y, z = z)
>
> (Kx)z(yz) = Kxz(yz)
>
> And again, Kxz gives x (by the first law) so we get
>
> x(yz).
>
> OK?
>

It is 

Re: Combinators 1 (Introduction)

2018-08-05 Thread Bruno Marchal
Hi Jason,


> On 5 Aug 2018, at 05:24, Jason Resch  wrote:
> 
> 
> 
> On Sat, Jul 28, 2018 at 2:19 PM Bruno Marchal wrote:
> Hi Jason, people,
> 
> 
> Hi Bruno,
> 
> Thank you for this. I've been trying to digest it over the past few days.

No problem. It was hard to begin with, and I was about to send a few easy 
exercises to help with the notation. But you did very well.



>  
> 
> I will send my post on the Church-Turing thesis and incompleteness later. It 
> is too long.
> 
> So, let us proceed with the combinators.
> 
> Two seconds of historical motivation. During the crisis in set theory, Moses 
> Schoenfinkel published, in 1924, an attempt to found mathematics on functions 
> alone. But he did not consider functions as defined by their behaviour 
> (their input-output) but rather as rules to follow.
> 
> He also considered only functions of one variable, and wrote (f x) instead of 
> the usual f(x).
> 
> The idea is that a binary function like (x + y), when given the input 4, say, 
> and no other inputs, will just remain patient instead of insulting the user; 
> so to compute 4+5 you just give 5 to (+ 4), that is, you compute
>  ((+ 4) 5). (+ 4) will be an object computing the function 4 + x. 
> 
> 
> The composition of f and g on x is thus written (f (g x)), and a combinator 
> should be some function B able, on f, g and x, to give (f (g x)).
> 
> Bfgx = f(gx), for example. 
> 
> So am I correct to say a combinator "B" is a function taking a single input 
> "fgx”,

Three inputs. B on f will first give (B f), written Bf; then when B gets 
its second input, that will give ((B f) g), written Bfg, which is a new function 
which, on x, will trigger the definition above and give the combinator (f (g 
x)), written f(gx), and which would compute f(g(x)) written in the usual 
schoolboy notation.




> but is itself capable of parsing the inputs and evaluating them as functions?

It just recombines its inputs; the functions will evaluate by themselves. Don’t 
worry, you will see clearly the how and why.

B is called an applicator, because given f, g and h as arguments, Bfgh, it 
gives f(gh). I have used f, g and h as symbols, but I can use x, y and z 
instead. Those variables stand for combinators. Bxyz = x(yz). Formally, B only 
introduces those right parentheses. With full parentheses we should write:

(((Bx)y)z) = (x(yz)). But we suppress all leftmost parentheses: Bxyz = x(yz).

The interesting question is: does B exist? Which here means: is there a 
combinator (named B) which, applied to x, then y, then z, gives x(yz)?

Later I will provide an algorithm solving the task of finding a combinator 
doing some given combination like that. But here I just answer the question: 
YES!

Theorem B = S(KS)K, i.e. Bxyz = S(KS)Kxyz = x(yz)

Proof: it is enough to compute S(KS)Kxyz and to see if we get x(yz)

Let us compute, and of course I remind you the two fundamental laws used in 
that computation:

Kxy = x
Sxyz = xz(yz)

S(KS)Kxyz =

OK, let us see this in detail. That is the combinator S, which gets a first 
argument, the combinator (KS); this gives (S (K S)), written S(KS), which 
remains stable ("not enough arguments"); then S(KS) gets the argument K, which 
gives S(KS)K, which remains stable (indeed it is supposed to be the code of B), 
and indeed S still has only its first two arguments, so we cannot apply any 
law to proceed. But now S gets its third argument x, so 

we are at S(KS)Kx, that is S (KS) K x, and here S has three arguments and so 
matches the left part of the second law Sxyz, with x = KS, y = K and z = x.

Now the second law is triggered, so to speak, and we get xz(yz) with x = 
KS, y = K and z = x, and that gives (KS)x(Kx) = KSx(Kx). OK?

You can always add back the left parentheses, or some of them, to be sure what 
we have obtained. KSx(Kx) = ((KSx) (Kx)), but “KSx” is a redex, as it matches 
Kxy, with x = S and y = x, and so “reduces” to S; so we get S(Kx) (starting 
from S(KS)Kx, which is Bx), now waiting for y and then z.

We are at Bxy = S(KS)Kxy = (we just computed) S(Kx)y, which is S with “not 
enough arguments”, so we give the remaining z and get

S(Kx)yz

Which triggers again the second law to give (x = (Kx), y = y, z = z)

(Kx)z(yz) = Kxz(yz)

And again, Kxz gives x (by the first law) so we get

x(yz).

OK?
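The whole reduction can be checked mechanically. A minimal sketch in Python (my own encoding, not from the post), reading K and S as curried one-argument functions and testing B = S(KS)K on concrete functions:

```python
# The two laws as curried lambdas: Kxy = x and Sxyz = xz(yz).
K = lambda x: lambda y: x
S = lambda x: lambda y: lambda z: x(z)(y(z))

# The claimed composer: B = S(KS)K.
B = S(K(S))(K)

# Bfgx should compute f(g(x)), i.e. ordinary composition.
f = lambda n: n + 1
g = lambda n: n * 2
print(B(f)(g)(5))  # f(g(5)) = (5*2)+1 → 11
```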

How could we have found that B was computed by the combinator S(KS)K?

We can do this by guessing and computing in reverse, introducing K or other 
combinators so that we can reverse the fundamental laws. So in x(yz), we can 
replace x by (Kxz), that is ((Kx)z), so that we can apply axiom 2: x(yz) = 
(Kxz)(yz) = S(Kx)yz. Then, well, the “yz” are already in the right place, but 
the x is still inside a parenthesis which has to be removed: we just do the 
same trick and replace S(Kx) by (KSx)(Kx), and so we get S(KS)Kx and we are 
done: B = S(KS)K.
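This "reverse computation" trick generalizes to a mechanical procedure, usually called bracket abstraction. A sketch in Python (my own encoding, not the algorithm promised in the post), using the three classic rules; the combinator it produces is extensionally correct, though longer than the hand-found B = S(KS)K:

```python
# Bracket abstraction: build a combinator A from S and K such that
# A v = t, eliminating the variable v from the term t.
# Terms are 'S', 'K', variable names, or applications (f, a).

def contains(v, t):
    if t == v:
        return True
    return isinstance(t, tuple) and (contains(v, t[0]) or contains(v, t[1]))

def abstract(v, t):
    if t == v:
        return (('S', 'K'), 'K')                        # [v]v = SKK = I
    if not contains(v, t):
        return ('K', t)                                 # [v]t = Kt, v absent
    a, b = t
    return (('S', abstract(v, a)), abstract(v, b))      # [v](ab) = S([v]a)([v]b)

# Derive a composer from the specification Bxyz = x(yz):
B_term = abstract('x', abstract('y', abstract('z', ('x', ('y', 'z')))))

# Evaluate the symbolic combinator with curried lambdas to test it.
K = lambda x: lambda y: x
S = lambda x: lambda y: lambda z: x(z)(y(z))

def ev(t):
    if t == 'K': return K
    if t == 'S': return S
    return ev(t[0])(ev(t[1]))

f = lambda n: n + 1
g = lambda n: n * 2
print(ev(B_term)(f)(g)(5))   # f(g(5)) → 11
```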

Don’t worry too much; I will sooner or later provide an algorithm which, from 
a specification Xxyzt = xt(ytxzx) 

Re: Combinators 1 (Introduction)

2018-08-04 Thread Jason Resch
On Sat, Jul 28, 2018 at 2:19 PM Bruno Marchal  wrote:

> Hi Jason, people,
>
>
Hi Bruno,

Thank you for this. I've been trying to digest it over the past few days.


>
> I will send my post on the Church-Turing thesis and incompleteness later.
> It is too long.
>
> So, let us proceed with the combinators.
>
> Two seconds of historical motivation. During the crisis in set theory,
> Moses Schoenfinkel published, in 1924, an attempt to found mathematics on
> functions alone. But he did not consider functions as defined by their
> behaviour (their input-output) but rather as rules to follow.
>
> He also considered only functions of one variable, and wrote (f x) instead
> of the usual f(x).
>
> The idea is that a binary function like (x + y), when given the input 4,
> say, and no other inputs, will just remain patient instead of insulting the
> user; so to compute 4+5 you just give 5 to (+ 4), that is, you compute
>  ((+ 4) 5). (+ 4) will be an object computing the function 4 + x.
>
>
> The composition of f and g on x is thus written (f (g x)), and a
> combinator should be some function B able, on f, g and x, to give (f (g x)).
>
> Bfgx = f(gx), for example.
>

So am I correct to say a combinator "B" is a function taking a single input
"fgx", but is itself capable of parsing the inputs and evaluating them as
functions?


>
> When I said that Schoenfinkel considered only functions, I meant it
> literally: he accepts that a function applies to any other function,
> so (f f) is permitted. Here (f f) is f applied to itself.
>

So are input and output values themselves considered as functions, with
fixed values just being identities which return themselves?


>
> A first question was about the existence of a finite set of combinators
> capable of giving all possible combinators, noting that combinators
> combine. Schoenfinkel found that it is the case, and provided the S and K
> combinators for this. I will prove this later.
>
> A second question will be: can the SK-combinators compute all partial
> computable functions from N to N, and thus all total computable functions?
> The answer is yes. That has been proved by Curry, I think.
>
> OK? Infinitely more could be said here, but let us give the mathematical
> definition of the SK-combinators:
>
> K is a combinator.
> S is a combinator.
> If x and y are combinators, then (x y) is a combinator.
>
> That is, a combinator is S, or K, or a combination of S and K.
>
> So, the syntax is very easy, although there will be some problems with the
> parentheses, which will justify a convention/simplification.
>
> Examples of combinators.
>
> Well, K and S, and their combinations: (K K), (K S), (S K), (S S), and then
> (K (K K)) and ((K K) K), and (K (K S)) and … ((S (K S)) K), etc.
>
> I directly introduce an abbreviation to avoid too many parentheses. As
> every combinator is a function of one argument, I suppress *all* parentheses
> starting from the left:
> The enumeration above is then:  K, S, KK, KS, SK, K(KK), KKK, K(SK) and …
> S(KS)K ...
>
> So aaa(bbb) will be an abbreviation for (((a a) a) ((b b) b)). It means
> a applied to a, the result applied to a, and that result applied to ...
> well, the same with b (a and b being some combinators).
>
>
>
> OK?
>

The syntax is a bit unfamiliar to me but I think I follow so far.


>
> Of course, they obey some axioms, without which it would be hard to
> believe they could be
> 1) combinatorially complete (theorem 1)
> 2) Turing complete (theorem 2)
>
> What are the axioms?
>
> I write them with the abbreviation (and without, a last time!)
>
> Kxy = x
> Sxyz = xz(yz)
>
> That is all.
>
> A natural first exercise consists in finding an identity combinator. That
> is, a combinator I such that Ix = x.
>


I am having trouble translating the functions and their arguments (putting
the parenthesis back in), is this translation correct?

K(x(y)) = x
S(x(y(z))) = x(z(y(z)))



>
> Well, only Kxy can give x, and Kxy does not seem to match xz(yz), so as to
> apply axiom 2, does it? Yes, it does, with y matching (Kx), or (Sx).
> (Sometimes we add back some left parentheses to better see the match.)
>
> So, x = Kxy = Kx(Kx) = SKKx, and we are done! I = SKK
>
> Verification (we would not have sent Curiosity to Mars without testing
> the software, OK? Same with the combinators). Let us test SKK on, say, (KK):
> that gives SKK(KK), which gives by axiom 2 K(KK)(K(KK)), which gives (KK) =
> KK. Done!
>
> Note that SKK(KK) is a non-stable combinator. It is called a redex. It is
> reduced by axiom 2. The same for KKK, which gives K. A combinator
> which remains stable, and contains no redex, is said to be in normal form.
> As you can guess, the price of Turing universality is that some combinators
> will have no normal form, and begin infinite computations. A computation,
> here, is a sequence of applications of the two axioms above. It can be
> proved that if a combinator has a normal form, all computations with
> evaluation 

Combinators 1 (Introduction)

2018-07-28 Thread Bruno Marchal
Hi Jason, people,


I will send my post on the Church-Turing thesis and incompleteness later. It is 
too long.

So, let us proceed with the combinators.

Two seconds of historical motivation. During the crisis in set theory, Moses 
Schoenfinkel published, in 1924, an attempt to found mathematics on functions 
alone. But he did not consider functions as defined by their behaviour 
(their input-output) but rather as rules to follow.

He also considered only functions of one variable, and wrote (f x) instead of 
the usual f(x).

The idea is that a binary function like (x + y), when given the input 4, say, 
and no other inputs, will just remain patient instead of insulting the user; 
so to compute 4+5 you just give 5 to (+ 4), that is, you compute
 ((+ 4) 5). (+ 4) will be an object computing the function 4 + x. 
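In modern terms, this "patience" is just currying. A tiny illustration (assuming Python; the name `plus` is mine, not from the post):

```python
# plus is the curried +: it takes one argument and returns a
# function that waits patiently for the second one.
plus = lambda x: lambda y: x + y

add4 = plus(4)      # the object (+ 4), computing the function 4 + x
print(add4(5))      # ((+ 4) 5) → 9
```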


The composition of f and g on x is thus written (f (g x)), and a combinator 
should be some function B able, on f, g and x, to give (f (g x)).

Bfgx = f(gx), for example. 

When I said that Schoenfinkel considered only functions, I meant it literally: 
he accepts that a function applies to any other function, so (f f) is 
permitted. Here (f f) is f applied to itself.

A first question was about the existence of a finite set of combinators capable 
of giving all possible combinators, noting that combinators combine. 
Schoenfinkel found that it is the case, and provided the S and K combinators 
for this. I will prove this later.

A second question will be: can the SK-combinators compute all partial 
computable functions from N to N, and thus all total computable functions? The 
answer is yes. That has been proved by Curry, I think.

OK? Infinitely more could be said here, but let us give the mathematical 
definition of the SK-combinators:

K is a combinator. 
S is a combinator.
If x and y are combinators, then (x y) is a combinator.

That is, a combinator is S, or K, or a combination of S and K.

So, the syntax is very easy, although there will be some problems with the 
parentheses, which will justify a convention/simplification.

Examples of combinators.

Well, K and S, and their combinations: (K K), (K S), (S K), (S S), and then (K 
(K K)) and ((K K) K), and (K (K S)) and … ((S (K S)) K), etc.

I directly introduce an abbreviation to avoid too many parentheses. As every 
combinator is a function of one argument, I suppress *all* parentheses 
starting from the left:
The enumeration above is then:  K, S, KK, KS, SK, K(KK), KKK, K(SK) and … 
S(KS)K ...

So aaa(bbb) will be an abbreviation for (((a a) a) ((b b) b)). It means a 
applied to a, the result applied to a, and that result applied to ... well, 
the same with b (a and b being some combinators).



OK?

Of course, they obey some axioms, without which it would be hard to believe 
they could be
1) combinatorially complete (theorem 1)
2) Turing complete (theorem 2)

What are the axioms?

I write them with the abbreviation (and without, a last time!)

Kxy = x
Sxyz = xz(yz)
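The two laws can be read directly as curried functions; a minimal sketch (assuming Python, not part of the original post):

```python
# Kxy = x: K swallows its second argument and returns the first.
K = lambda x: lambda y: x
# Sxyz = xz(yz): S distributes the argument z to both x and y.
S = lambda x: lambda y: lambda z: x(z)(y(z))

print(K(1)(2))   # Kxy = x → 1
```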

That is all.

A natural first exercise consists in finding an identity combinator. That is, 
a combinator I such that Ix = x.

Well, only Kxy can give x, and Kxy does not seem to match xz(yz), so as to 
apply axiom 2, does it? Yes, it does, with y matching (Kx), or (Sx). (Sometimes 
we add back some left parentheses to better see the match.) 

So, x = Kxy = Kx(Kx) = SKKx, and we are done! I = SKK

Verification (we would not have sent Curiosity to Mars without testing the 
software, OK? Same with the combinators). Let us test SKK on, say, (KK): that 
gives SKK(KK), which gives by axiom 2 K(KK)(K(KK)), which gives (KK) = KK. Done!
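The same test can be run concretely; a sketch assuming the curried-lambda reading of the two axioms:

```python
K = lambda x: lambda y: x
S = lambda x: lambda y: lambda z: x(z)(y(z))

# I = SKK: SKKx = Kx(Kx) = x by the two laws.
I = S(K)(K)
print(I(42))     # Ix = x → 42
```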

Note that SKK(KK) is a non-stable combinator. It is called a redex. It is 
reduced by axiom 2. The same for KKK, which gives K. A combinator which 
remains stable, and contains no redex, is said to be in normal form. As you 
can guess, the price of Turing universality is that some combinators will have 
no normal form, and begin infinite computations. A computation, here, is a 
sequence of applications of the two axioms above. It can be proved that if a 
combinator has a normal form, all computations with evaluation starting 
from the left will find it.
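That leftmost (outermost) strategy can be sketched with a small symbolic reducer; a minimal sketch in Python (my own encoding, not from the post), with applications as nested pairs and a step bound, since a term without a normal form would loop forever:

```python
# Terms: 'S', 'K', variables like 'x', or applications (f, a).
def step(t):
    """Perform one leftmost-outermost reduction; return (term, changed)."""
    if not isinstance(t, tuple):
        return t, False
    head, args = t, []
    while isinstance(head, tuple):         # flatten the spine: head a1 ... an
        head, a = head
        args.insert(0, a)
    if head == 'K' and len(args) >= 2:     # Kxy -> x
        out, rest = args[0], args[2:]
    elif head == 'S' and len(args) >= 3:   # Sxyz -> xz(yz)
        x, y, z = args[:3]
        out, rest = ((x, z), (y, z)), args[3:]
    else:                                  # no head redex: try the arguments
        for i, a in enumerate(args):
            a2, changed = step(a)
            if changed:
                out, rest = head, args[:i] + [a2] + args[i + 1:]
                break
        else:
            return t, False
    for r in rest:                         # re-apply the remaining arguments
        out = (out, r)
    return out, True

def normalize(t, limit=1000):
    """Reduce until stable (normal form) or the step bound runs out."""
    for _ in range(limit):
        t, changed = step(t)
        if not changed:
            break
    return t

# B = S(KS)K applied to variables x, y, z normalizes to x(yz):
B = (('S', ('K', 'S')), 'K')
print(normalize((((B, 'x'), 'y'), 'z')))   # → ('x', ('y', 'z'))
```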

I will tomorrow, or Monday, show that there is a combinator M such that Mx = 
xx, a combinator T such that Txy = yx, a combinator L such that Lxy = x(yy), … 
and others. Later, I will prove theorem 1 by providing an algorithm to build a 
combinator from any given combination. That will prove the combinatorial 
completeness. Then I will prove that every recursive relation admits a solution, 
i.e. you can always find a combinator A such that Axyzt = x(Atzz)(yA), say. 
Then I will show how easily we can implement the control structure IF A THEN B 
ELSE C, and follow Barendregt and Smullyan in using this to define the logical 
connectives with combinators; then I will provide some definition of the 
natural numbers, and define addition, multiplication, all primitive recursive 
functions,