Re: [fricas-devel] GPT4 and computer algebra

2024-01-17 Thread Hill Strong
Tim,

This is a good example of the problem with the language we are using.

On Thu, Jan 18, 2024 at 5:49 PM Tim Daly  wrote:

> AlphaGeometry: An Olympiad-level AI system for Geometry
>
> https://deepmind.google/discover/blog/alphageometry-an-olympiad-level-ai-system-for-geometry/
>
> On Thursday, January 18, 2024 at 12:32:34 AM UTC-5 Tim Daly wrote:
>
>> While the question of human intelligence is interesting
>> please note that my post topic has been restricted to the
>> application of GPT-like / AI-like systems to the automation
>> of proof and computer algebra.
>>
>> Given the changes I'm seeing and the research I'm doing, it
>> is likely that there will be a very rapid change from the legacy
>> version of computer algebra we now use, taking advantage of
>> the growing power of these systems.
>>
>> I've been involved and published in AI-like systems since 1973,
>> including such "AI-like" areas as machine vision, robotics, expert
>> systems, speech, neural nets, knowledge representation, planning, etc.
>> All of these were considered "intelligent behavior" ... until they
>> weren't. Indeed, in the 1960s computer algebra-like systems were
>> considered "intelligent".
>>
>> Intelligence is a "suitcase word" [0], interesting but irrelevant.
>>
>> The topic at hand is "computational mathematics", meaning the application
>> of recent technology (proof assistants, theorem databases, algorithm
>> proofs, and reasoning assistants) to moving beyond 50-year-old legacy
>> computer algebra. Proving computer algebra algorithms correct and
>> providing useful interfaces is the long-term goal.
>>
>> Tim
>>
>> [0] "Marvin Minsky, in his fascinating book "The Emotion Machine" ...
>> described
>> words such as "emotions," "memory," "thinking" and "intelligence" as
>> suitcase
>> words - words that mean "*nothing by itself, but holds a bunch of things
>> inside *
>> *that you have to unpack*."
>>
>> On Wednesday, January 17, 2024 at 10:03:53 PM UTC-5 elders...@gmail.com
>> wrote:
>>
>>> Good morning Waldek,
>>>
>>> My response will be delving into some philosophical and
>>> metaphysical areas.
>>>
>>> On Thu, Jan 18, 2024 at 2:56 AM Waldek Hebisch  wrote:
>>>
 On Thu, Jan 18, 2024 at 01:16:55AM +1100, Hill Strong wrote:
 > On Wed, Jan 17, 2024 at 11:09 PM Tim Daly  wrote:
 >
 > They can raise the issue all they like. What they are not seeing is
 > that artificial stupidity (AI) systems are limited. As I said above,
 > the only intelligence you will find in these systems is the stuff
 > generated by human intelligence. No artificial stupidity (AI) system
 > can ever exceed the limitations of the programming entailed in them.

 Well, humans are at least as limited: your claim is as true as the claim
 that "humans can not ever exceed the limitations of the programming
 entailed in them".  In the case of humans, "programming" means both things

>>>
>>> Though we are limited in every sort of way, we are capable of exceeding
>>> any programming that is involved in our development. What is not so
>>> obvious is that there is a category error in comparing humans to any sort
>>> of machine that we build. Though we can build machines that are stronger,
>>> faster and better at handling tedious tasks than we are, our very
>>> distinctive intelligence lets us do what no machine can do: solve the
>>> problems that arise.
>>>
>>>
>>> hardcoded in the genome and chemical machinery of the body and "learned
 stuff".  Already at age 1, toys, a customized environment and interactions

>>>
>>> You speak here about [programming] hard-coded in the genome and chemical
>>> machinery of the human body. But I somehow don't think you realise just
>>> how far beyond us that machinery is. Within a single living cell, the
>>> automatic adaptive capabilities are so far beyond our current levels of
>>> technology that in comparison we are no more advanced than some caveman,
>>> even counting the entire planetary industrial capabilities that we have
>>> already built.
>>>
>>> The more we learn about cellular biology, with its internal
>>> transportation, control and manufacturing systems, the more we should be
>>> recognising that we are not capable of building any computerised system
>>> that will ever be intelligent.
>>>
>>> with other humans make a significant difference to learning.  At later
 stages there are stories which were perfected for thousands of years,
 school curricula and books.  There were myths that people from
 non-western cultures are less "intelligent" than people from western
 culture.  Deeper research showed that part of our "intelligence"
 is really "programmed" (learned) and "programming" in different
 cultures was different.

>>>
>>> Though part of our intelligence is learned, there is something so far
>>> beyond this that, even though this is an active area of research, we
>>> haven't a clue as to what this is.

Re: [fricas-devel] GPT4 and computer algebra

2024-01-17 Thread Hill Strong
Good afternoon Tim,

On Thu, Jan 18, 2024 at 4:32 PM Tim Daly  wrote:

> While the question of human intelligence is interesting
> please note that my post topic has been restricted to the
> application of GPT-like / AI-like systems to the automation
> of proof and computer algebra.
>

While such systems may (and I stress may) be useful in the hands of
talented humans, it will require the insights of humans to use these
systems properly. The problem is the use of the word [intelligence] in
relation to these systems. It would be far better to use some other terms
to show that these systems are tools first and foremost.


>
> Given the changes I'm seeing and the research I'm doing, it
> is likely that there will be a very rapid change from the legacy
> version of computer algebra we now use, taking advantage of
> the growing power of these systems.
>

There is nothing wrong with taking advantage of these systems in this kind
of work. But we must ensure that we don't attribute to these systems
properties they do not have.


>
> I've been involved and published in AI-like systems since 1973,
> including such "AI-like" areas as machine vision, robotics, expert
> systems, speech, neural nets, knowledge representation, planning, etc.
> All of these were considered "intelligent behavior" ... until they
> weren't. Indeed, in the 1960s computer algebra-like systems were
> considered "intelligent".
>

I agree with you that all of these systems have had the term [intelligent
behaviour] associated with them, and in hindsight this has been a major
mistake and a detriment to the advancement of these systems.


>
> Intelligence is a "suitcase word" [0], interesting but irrelevant.
>

Unfortunately, it is not actually irrelevant.

>
> The topic at hand is "computational mathematics", meaning the application
> of recent technology (proof assistants, theorem databases, algorithm
> proofs, and reasoning assistants) to moving beyond 50-year-old legacy
> computer algebra. Proving computer algebra algorithms correct and
> providing useful interfaces is the long-term goal.
>

I do not disagree with you here. However, it behooves us to ensure that
appropriate terminology is used that does not get distorted once these
systems enter into wider usage.


>
> Tim
>
> [0] "Marvin Minsky, in his fascinating book "The Emotion Machine" ...
> described
> words such as "emotions," "memory," "thinking" and "intelligence" as
> suitcase
> words - words that mean "*nothing by itself, but holds a bunch of things
> inside *
> *that you have to unpack*."
>
> On Wednesday, January 17, 2024 at 10:03:53 PM UTC-5 elders...@gmail.com
> wrote:
>
>> Good morning Waldek,
>>
>> My response will be delving into some philosophical and
>> metaphysical areas.
>>
>> On Thu, Jan 18, 2024 at 2:56 AM Waldek Hebisch  wrote:
>>
>>> On Thu, Jan 18, 2024 at 01:16:55AM +1100, Hill Strong wrote:
>>> > On Wed, Jan 17, 2024 at 11:09 PM Tim Daly  wrote:
>>> >
>>> > They can raise the issue all they like. What they are not seeing is
>>> > that artificial stupidity (AI) systems are limited. As I said above,
>>> > the only intelligence you will find in these systems is the stuff
>>> > generated by human intelligence. No artificial stupidity (AI) system
>>> > can ever exceed the limitations of the programming entailed in them.
>>>
>>> Well, humans are at least as limited: your claim is as true as the claim
>>> that "humans can not ever exceed the limitations of the programming
>>> entailed in them".  In the case of humans, "programming" means both things
>>>
>>
>> Though we are limited in every sort of way, we are capable of exceeding
>> any programming that is involved in our development. What is not so
>> obvious is that there is a category error in comparing humans to any sort
>> of machine that we build. Though we can build machines that are stronger,
>> faster and better at handling tedious tasks than we are, our very
>> distinctive intelligence lets us do what no machine can do: solve the
>> problems that arise.
>>
>>
>> hardcoded in the genome and chemical machinery of the body and "learned
>>> stuff".  Already at age 1, toys, a customized environment and interactions
>>>
>>
>> You speak here about [programming] hard-coded in the genome and chemical
>> machinery of the human body. But I somehow don't think you realise just
>> how far beyond us that machinery is. Within a single living cell, the
>> automatic adaptive capabilities are so far beyond our current levels of
>> technology that in comparison we are no more advanced than some caveman,
>> even counting the entire planetary industrial capabilities that we have
>> already built.
>>
>> The more we learn about cellular biology, with its internal
>> transportation, control and manufacturing systems, the more we should be
>> recognising that we are not capable of building any computerised system
>> that will ever be intelligent.
>>
>> with other humans make a significant difference to learning.

Re: [fricas-devel] GPT4 and computer algebra

2024-01-17 Thread Tim Daly
AlphaGeometry: An Olympiad-level AI system for Geometry
https://deepmind.google/discover/blog/alphageometry-an-olympiad-level-ai-system-for-geometry/

On Thursday, January 18, 2024 at 12:32:34 AM UTC-5 Tim Daly wrote:

> While the question of human intelligence is interesting
> please note that my post topic has been restricted to the
> application of GPT-like / AI-like systems to the automation
> of proof and computer algebra.
>
> Given the changes I'm seeing and the research I'm doing, it
> is likely that there will be a very rapid change from the legacy
> version of computer algebra we now use, taking advantage of
> the growing power of these systems.
>
> I've been involved and published in AI-like systems since 1973,
> including such "AI-like" areas as machine vision, robotics, expert
> systems, speech, neural nets, knowledge representation, planning, etc.
> All of these were considered "intelligent behavior" ... until they
> weren't. Indeed, in the 1960s computer algebra-like systems were
> considered "intelligent".
>
> Intelligence is a "suitcase word" [0], interesting but irrelevant.
>
> The topic at hand is "computational mathematics", meaning the application
> of recent technology (proof assistants, theorem databases, algorithm
> proofs, and reasoning assistants) to moving beyond 50-year-old legacy
> computer algebra. Proving computer algebra algorithms correct and
> providing useful interfaces is the long-term goal.
>
> Tim
>
> [0] "Marvin Minsky, in his fascinating book "The Emotion Machine" ... 
> described 
> words such as "emotions," "memory," "thinking" and "intelligence" as 
> suitcase 
> words - words that mean "*nothing by itself, but holds a bunch of things 
> inside *
> *that you have to unpack*."
>
> On Wednesday, January 17, 2024 at 10:03:53 PM UTC-5 elders...@gmail.com 
> wrote:
>
>> Good morning Waldek,
>>
>> My response will be delving into some philosophical and
>> metaphysical areas.
>>
>> On Thu, Jan 18, 2024 at 2:56 AM Waldek Hebisch  wrote:
>>
>>> On Thu, Jan 18, 2024 at 01:16:55AM +1100, Hill Strong wrote:
>>> > On Wed, Jan 17, 2024 at 11:09 PM Tim Daly  wrote:
>>> > 
>>> > They can raise the issue all they like. What they are not seeing is
>>> > that artificial stupidity (AI) systems are limited. As I said above,
>>> > the only intelligence you will find in these systems is the stuff
>>> > generated by human intelligence. No artificial stupidity (AI) system
>>> > can ever exceed the limitations of the programming entailed in them.
>>>
>>> Well, humans are at least as limited: your claim is as true as the claim
>>> that "humans can not ever exceed the limitations of the programming
>>> entailed in them".  In the case of humans, "programming" means both things
>>>
>>
>> Though we are limited in every sort of way, we are capable of exceeding
>> any programming that is involved in our development. What is not so
>> obvious is that there is a category error in comparing humans to any sort
>> of machine that we build. Though we can build machines that are stronger,
>> faster and better at handling tedious tasks than we are, our very
>> distinctive intelligence lets us do what no machine can do: solve the
>> problems that arise.
>>
>>
>> hardcoded in the genome and chemical machinery of the body and "learned
>>> stuff".  Already at age 1, toys, a customized environment and interactions
>>>
>>
>> You speak here about [programming] hard-coded in the genome and chemical
>> machinery of the human body. But I somehow don't think you realise just
>> how far beyond us that machinery is. Within a single living cell, the
>> automatic adaptive capabilities are so far beyond our current levels of
>> technology that in comparison we are no more advanced than some caveman,
>> even counting the entire planetary industrial capabilities that we have
>> already built.
>>
>> The more we learn about cellular biology, with its internal
>> transportation, control and manufacturing systems, the more we should be
>> recognising that we are not capable of building any computerised system
>> that will ever be intelligent.
>>
>> with other humans make a significant difference to learning.  At later
>>> stages there are stories which were perfected for thousands of years,
>>> school curricula and books.  There were myths that people from
>>> non-western cultures are less "intelligent" than people from western
>>> culture.  Deeper research showed that part of our "intelligence"
>>> is really "programmed" (learned) and "programming" in different
>>> cultures was different.
>>>
>>
>> Though part of our intelligence is learned, there is something so far
>> beyond this that, even though this is an active area of research, we
>> haven't a clue as to what this is.
>>
>>
>>> In a slightly different spirit, in the fifties there were efforts to
>>> define intelligence, and researchers from that time postulated
>>> several abilities that every intelligent being should have.

Re: [fricas-devel] GPT4 and computer algebra

2024-01-17 Thread Tim Daly
While the question of human intelligence is interesting
please note that my post topic has been restricted to the
application of GPT-like / AI-like systems to the automation
of proof and computer algebra.

Given the changes I'm seeing and the research I'm doing, it
is likely that there will be a very rapid change from the legacy
version of computer algebra we now use, taking advantage of
the growing power of these systems.

I've been involved and published in AI-like systems since 1973,
including such "AI-like" areas as machine vision, robotics, expert
systems, speech, neural nets, knowledge representation, planning, etc.
All of these were considered "intelligent behavior" ... until they
weren't. Indeed, in the 1960s computer algebra-like systems were
considered "intelligent".

Intelligence is a "suitcase word" [0], interesting but irrelevant.

The topic at hand is "computational mathematics", meaning the application
of recent technology (proof assistants, theorem databases, algorithm
proofs, and reasoning assistants) to moving beyond 50-year-old legacy
computer algebra. Proving computer algebra algorithms correct and
providing useful interfaces is the long-term goal.

Tim

[0] "Marvin Minsky, in his fascinating book "The Emotion Machine" ... 
described 
words such as "emotions," "memory," "thinking" and "intelligence" as 
suitcase 
words - words that mean "*nothing by itself, but holds a bunch of things 
inside *
*that you have to unpack*."

On Wednesday, January 17, 2024 at 10:03:53 PM UTC-5 elders...@gmail.com 
wrote:

> Good morning Waldek,
>
> My response will be delving into some philosophical and
> metaphysical areas.
>
> On Thu, Jan 18, 2024 at 2:56 AM Waldek Hebisch  wrote:
>
>> On Thu, Jan 18, 2024 at 01:16:55AM +1100, Hill Strong wrote:
>> > On Wed, Jan 17, 2024 at 11:09 PM Tim Daly  wrote:
>> > 
>> > They can raise the issue all they like. What they are not seeing is
>> > that artificial stupidity (AI) systems are limited. As I said above,
>> > the only intelligence you will find in these systems is the stuff
>> > generated by human intelligence. No artificial stupidity (AI) system
>> > can ever exceed the limitations of the programming entailed in them.
>>
>> Well, humans are at least as limited: your claim is as true as the claim
>> that "humans can not ever exceed the limitations of the programming
>> entailed in them".  In the case of humans, "programming" means both things
>>
>
> Though we are limited in every sort of way, we are capable of exceeding
> any programming that is involved in our development. What is not so
> obvious is that there is a category error in comparing humans to any sort
> of machine that we build. Though we can build machines that are stronger,
> faster and better at handling tedious tasks than we are, our very
> distinctive intelligence lets us do what no machine can do: solve the
> problems that arise.
>
>
> hardcoded in the genome and chemical machinery of the body and "learned
>> stuff".  Already at age 1, toys, a customized environment and interactions
>>
>
> You speak here about [programming] hard-coded in the genome and chemical
> machinery of the human body. But I somehow don't think you realise just
> how far beyond us that machinery is. Within a single living cell, the
> automatic adaptive capabilities are so far beyond our current levels of
> technology that in comparison we are no more advanced than some caveman,
> even counting the entire planetary industrial capabilities that we have
> already built.
>
> The more we learn about cellular biology, with its internal
> transportation, control and manufacturing systems, the more we should be
> recognising that we are not capable of building any computerised system
> that will ever be intelligent.
>
> with other humans make a significant difference to learning.  At later
>> stages there are stories which were perfected for thousands of years,
>> school curricula and books.  There were myths that people from
>> non-western cultures are less "intelligent" than people from western
>> culture.  Deeper research showed that part of our "intelligence"
>> is really "programmed" (learned) and "programming" in different
>> cultures was different.
>>
>
> Though part of our intelligence is learned, there is something so far
> beyond this that, even though this is an active area of research, we
> haven't a clue as to what this is.
>
>
>> In a slightly different spirit, in the fifties there were efforts to
>> define intelligence, and researchers from that time postulated
>> several abilities that every intelligent being should have.
>> Based on that there were "proofs" that artificial intelligence
>> is impossible.  One such "proof" goes as follows: people
>> can prove math theorems.  But Goedel and Church proved that
>> no machine can prove every math theorem.  So no machine will match
>> humans.  The fallacy of this argument is a classic abuse of
>> quantifiers: humans can prove some (easy) math theorems.

Re: [fricas-devel] GPT4 and computer algebra

2024-01-17 Thread Hill Strong
Good morning Waldek,

My response will be delving into some philosophical and
metaphysical areas.

On Thu, Jan 18, 2024 at 2:56 AM Waldek Hebisch  wrote:

> On Thu, Jan 18, 2024 at 01:16:55AM +1100, Hill Strong wrote:
> > On Wed, Jan 17, 2024 at 11:09 PM Tim Daly  wrote:
> >
> > They can raise the issue all they like. What they are not seeing is that
> > artificial stupidity (AI) systems are limited. As I said above, the only
> > intelligence you will find in these systems is the stuff generated by
> > human intelligence. No artificial stupidity (AI) system can ever exceed
> > the limitations of the programming entailed in them.
>
> Well, humans are at least as limited: your claim is as true as the claim
> that "humans can not ever exceed the limitations of the programming
> entailed in them".  In the case of humans, "programming" means both things
>

Though we are limited in every sort of way, we are capable of exceeding any
programming that is involved in our development. What is not so obvious is
that there is a category error in comparing humans to any sort of machine
that we build. Though we can build machines that are stronger, faster and
better at handling tedious tasks than we are, our very distinctive
intelligence lets us do what no machine can do: solve the problems that
arise.


hardcoded in the genome and chemical machinery of the body and "learned
> stuff".  Already at age 1, toys, a customized environment and interactions
>

You speak here about [programming] hard-coded in the genome and chemical
machinery of the human body. But I somehow don't think you realise just how
far beyond us that machinery is. Within a single living cell, the automatic
adaptive capabilities are so far beyond our current levels of technology
that in comparison we are no more advanced than some caveman, even counting
the entire planetary industrial capabilities that we have already built.

The more we learn about cellular biology, with its internal transportation,
control and manufacturing systems, the more we should be recognising that
we are not capable of building any computerised system that will ever be
intelligent.

with other humans make a significant difference to learning.  At later
> stages there are stories which were perfected for thousands of years,
> school curricula and books.  There were myths that people from
> non-western cultures are less "intelligent" than people from western
> culture.  Deeper research showed that part of our "intelligence"
> is really "programmed" (learned) and "programming" in different
> cultures was different.
>

Though part of our intelligence is learned, there is something so far
beyond this that, even though this is an active area of research, we
haven't a clue as to what this is.


> In a slightly different spirit, in the fifties there were efforts to
> define intelligence, and researchers from that time postulated
> several abilities that every intelligent being should have.
> Based on that there were "proofs" that artificial intelligence
> is impossible.  One such "proof" goes as follows: people
> can prove math theorems.  But Goedel and Church proved that
> no machine can prove every math theorem.  So no machine will match
> humans.  The fallacy of this argument is a classic abuse of
> quantifiers: humans can prove some (easy) math theorems.
>

Here you are now arguing against the various incompleteness theorems and in
doing so, you are entering the realm of metaphysics and philosophy. These
incompleteness theorems highlight something quite interesting in that the
basis of our logic and mathematics (the fundamental axioms used) cannot be
proved true from within our logic and mathematics. We can argue for them
from metaphysics and philosophy but we are unable to prove them from within
the systems that are based on those fundamental axioms.

What you have missed here is that any theorems developed within the systems
are provable based on those unprovable axioms and previously proven
theorems. That is how we can build up the various systems of mathematics in
use. However, new mathematical systems require insights not available to
any computerised system. These new insights are in the realm of human
intelligence, which does exceed any programming that individual humans may
be exposed to.

No machine or human can prove _each_ math theorem.  Actually,
> we still do not know how hard proving is, but the common belief
>

All theorems can be proved if they are based on the initial set of axioms
used. How hard it is to prove the theorem is another matter.


> is that the complexity of proving is exponential in the length of the
> proof.  What is proven is that there is no computable
> bound on the length of the shortest proof.  Clearly this difficulty,
> that is, the large length of proofs, affects humans as much as
> computers.
>

Maybe. The thing of interest here is that it is humans who can find
alternative ways of proving a theorem which is not a part of the 

Re: [fricas-devel] GPT4 and computer algebra

2024-01-17 Thread Bill Page
Another example:

https://www.nature.com/articles/s41586-023-06747-5

On Wed, Jan 17, 2024 at 10:56 AM Waldek Hebisch  wrote:

> …
> One more thing: early in the history of AI there was Eliza.
> It was a simple pattern matcher clearly having no intelligence,
> yet it was able to fool some humans into believing that they
> were communicating with another human (ok, at least for some time).
> Some people take this to mean that all solved AI problems
> are a kind of fake, showing that the problem was not really about
> intelligence.  But IMO there is a different possibility: that
> all our intelligence is "fake" in a similar vein.  In other
> words, we do not solve the general problem but use tricks which
> happen to work in real life.  Or to put it differently,
> we may be much more limited than we imagine.  Eliza clearly
> shows that we can be easily fooled into assuming that
> something has many more abilities than it really has
> (and "something" may really be "we").


+1

I strongly agree with your point in parentheses above.  It seems to me that
the current generation of AI tells us more about the actual capabilities of
human beings and the nature of intelligence than it does about the
limitations of computer systems.






Re: [fricas-devel] GPT4 and computer algebra

2024-01-17 Thread Waldek Hebisch
On Thu, Jan 18, 2024 at 01:16:55AM +1100, Hill Strong wrote:
> On Wed, Jan 17, 2024 at 11:09 PM Tim Daly  wrote:
> 
> They can raise the issue all they like. What they are not seeing is that
> artificial stupidity (AI) systems are limited. As I said above, the only
> intelligence you will find in these systems is the stuff generated by human
> intelligence. No artificial stupidity (AI) system can ever exceed the
> limitations of the programming entailed in them.

Well, humans are at least as limited: your claim is as true as the claim
that "humans can not ever exceed the limitations of the programming
entailed in them".  In the case of humans, "programming" means both things
hardcoded in the genome and chemical machinery of the body and "learned
stuff".  Already at age 1, toys, a customized environment and interactions
with other humans make a significant difference to learning.  At later
stages there are stories which were perfected for thousands of years,
school curricula and books.  There were myths that people from
non-western cultures are less "intelligent" than people from western
culture.  Deeper research showed that part of our "intelligence"
is really "programmed" (learned) and "programming" in different
cultures was different.

In a slightly different spirit, in the fifties there were efforts to
define intelligence, and researchers from that time postulated
several abilities that every intelligent being should have.
Based on that there were "proofs" that artificial intelligence
is impossible.  One such "proof" goes as follows: people
can prove math theorems.  But Goedel and Church proved that
no machine can prove every math theorem.  So no machine will match
humans.  The fallacy of this argument is a classic abuse of
quantifiers: humans can prove some (easy) math theorems.
No machine or human can prove _each_ math theorem.  Actually,
we still do not know how hard proving is, but the common belief
is that the complexity of proving is exponential in the length of the
proof.  What is proven is that there is no computable
bound on the length of the shortest proof.  Clearly this difficulty,
that is, the large length of proofs, affects humans as much as
computers.
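
Spelled out (with proves(p, T) as an informal shorthand for "prover p
finds a proof of theorem T"; the notation is added here for illustration,
it is not from Goedel's or Church's actual statements), the quantifier
swap is:

    (\forall T)(\exists p)\ \mathrm{proves}(p, T)
        \quad\not\Rightarrow\quad
    (\exists p)(\forall T)\ \mathrm{proves}(p, T)

The left-hand statement leaves room for machines; the right-hand one is
false for machines and humans alike.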

To put it differently, if you put strong requirements on
intelligence, like the ability to prove each math theorem, then
humans are not intelligent.  If you lower your requirements,
so that humans are deemed intelligent, then an appropriately
programmed computer is likely to qualify.

One more thing: early in the history of AI there was Eliza.
It was a simple pattern matcher clearly having no intelligence,
yet it was able to fool some humans into believing that they
were communicating with another human (ok, at least for some time).
Some people take this to mean that all solved AI problems
are a kind of fake, showing that the problem was not really about
intelligence.  But IMO there is a different possibility: that
all our intelligence is "fake" in a similar vein.  In other
words, we do not solve the general problem but use tricks which
happen to work in real life.  Or to put it differently,
we may be much more limited than we imagine.  Eliza clearly
shows that we can be easily fooled into assuming that
something has many more abilities than it really has
(and "something" may really be "we").

-- 
  Waldek Hebisch



Re: [fricas-devel] GPT4 and computer algebra

2024-01-17 Thread Hill Strong
Tim,

Some comments are below.

On Wed, Jan 17, 2024 at 11:09 PM Tim Daly  wrote:

> I'm not the only one raising the question of AI and GPT systems and
> their effects on Mathematics. The best and brightest of mathematicians,
> of which I'm not one, are raising the issue.
>

They can raise the issue all they like. What they are not seeing is that
artificial stupidity (AI) systems are limited. As I said above, the only
intelligence you will find in these systems is the stuff generated by human
intelligence. No artificial stupidity (AI) system can ever exceed the
limitations of the programming entailed in them.


>
> This is a lecture by Jeremy Avigad,
> Professor of Mathematics and Philosophy at CMU titled:
> Is Mathematics Obsolete?
> https://www.youtube.com/watch?v=lE48QtO4xBQ&ab_channel=SantaFeInstitute
>
> Timothy Gowers, a Fields Medalist, raises similar issues:
> Can Computers Do Mathematical Research?
> https://mathscholar.org/2020/09/can-computers-do-mathematical-research/
>
> Gowers talks about computer proofs of mathematical theorems and
> automatic mathematical reasoning. Gowers 'says that this may lead to a
> brief golden age, when mathematicians still dominate in original thinking
> and computer programs focus on technical details, “but I think it will
> last a very short time,” given that there is no reason that computers
> cannot eventually become proficient at the more creative aspects of
> mathematical research as well.'
>

Here we see a failure to understand the limitations found in all
computer systems. We can build systems that can be used to develop proofs
and may even be able to do exhaustive tests to find those proofs. But none
of these systems will be able to do the original thinking that is inherent
in human intelligence and intuition. I have been observing the many
different areas in which we have progressed with computerised systems, and
the more I see, the clearer it becomes that we will never get a
thinking machine based on any level of technology that we are able to
invent.

I said it in a previous comment above, about watching my granddaughter as
she grows up. I have been watching my children and my grandchildren as
well as many, many other people. No matter how seemingly complex the
systems we build, there is nothing that approaches even the animals, let
alone humanity. Irrespective of any seeming intelligence that the
researchers speak of, there is no intelligence within these systems. If
you ever do see some artificial system show distinctive signs of real
intelligence, take it that you are seeing a natural intelligence trying to
fool you. There are many people who willingly commit fraud to achieve
control over others.


> Certainly, as I"ve mentioned, proof assistants provide all of the elements
> of
> games which makes them perfect for reinforcement learning, the final
> training stage of GPT systems. All this requires is a bright PhD candidate.
>

Proof assistants are relatively simple tools to help a human intelligence
with the tedium involved. There is nothing wrong with that, as this is a
useful amplification of our own abilities. But we are, in reality, far too
stupid to be able to understand what intelligence actually is. There are
plenty of people (philosophers, theologians and physical scientists)
working on understanding this, and in all essentials we have gotten
nowhere in understanding what intelligence is, let alone what we need to
do to build anything even resembling the least of living intelligent
systems.

I am far more concerned with what various people and groups of people will
do with our current levels of technology than with the prospect that we
will ever build a truly intelligent artificial system.

What seems to be lost is that we build real intelligence systems all the
time and these we call children.


> Few people so far have realized the impact this will have on
> computer algebra. Certainly a GPT system can answer questions about
> the proper handling of sqrt(4) and/or work through both the positive
> and negative root assumptions in a calculation, conducting the
> calculation with the simplification to 2 and to -2 and presenting both
> results. All the GPT prompt would have to say is "assuming sqrt(4) is -2
> compute ...".
>

The impact, if these systems are properly used, will be to build
comprehensive proofs in mathematics that have potential uses in other
areas of civilisation. But as we often see, the human motivations involved
here are control, power, wealth and influence.

The insights that come from mathematics and how we can apply these are in
all cases coming from our human intelligence.

One must be wary of the hype surrounding these systems. We need to
[follow the money] to see who gains from this hype.

All expert systems have points of failure that are never seen until far too
late. These failure points are based on coding errors, data errors, human

Re: [fricas-devel] GPT4 and computer algebra

2024-01-17 Thread Tim Daly
I'm not the only one raising the question of AI and GPT systems and
their effects on Mathematics. The best and brightest of mathematicians,
of which I'm not one, are raising the issue.

This is a lecture by Jeremy Avigad,
Professor of Mathematics and Philosophy at CMU titled:
Is Mathematics Obsolete?
https://www.youtube.com/watch?v=lE48QtO4xBQ&ab_channel=SantaFeInstitute

Timothy Gowers, a Fields Medalist, raises similar issues:
Can Computers Do Mathematical Research?
https://mathscholar.org/2020/09/can-computers-do-mathematical-research/

Gowers talks about computer proofs of mathematical theorems and
automatic mathematical reasoning. Gowers 'says that this may lead to a 
brief golden age, when mathematicians still dominate in original thinking 
and computer programs focus on technical details, “but I think it will
last a very short time,” given that there is no reason that computers
cannot eventually become proficient at the more creative aspects of
mathematical research as well.'

Certainly, as I"ve mentioned, proof assistants provide all of the elements 
of
games which makes them perfect for reinforcement learning, the final
training stage of GPT systems. All this requires is a bright PhD candidate.
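
As a rough sketch of why this structure fits, consider a toy version
(state names, tactics and rewards below are invented for illustration;
nothing here is LEAN's actual interface): proof obligations are states,
tactics are actions, closing the goal pays reward 1, and a tabular
Q-learning update (a sampled form of the Bellman equation) propagates
that success back along the tactic sequence.

    import random
    from collections import defaultdict

    # Toy proof-search MDP: states are proof obligations, actions are
    # "tactics", reaching "qed" yields reward 1.  Deterministic on purpose.
    TRANSITIONS = {
        ("goal", "intro"): "subgoal",
        ("goal", "simp"): "stuck",
        ("subgoal", "rewrite"): "qed",
        ("subgoal", "simp"): "stuck",
    }
    ACTIONS = ["intro", "simp", "rewrite"]

    def step(state, action):
        nxt = TRANSITIONS.get((state, action), "stuck")
        reward = 1.0 if nxt == "qed" else 0.0
        return nxt, reward, nxt in ("qed", "stuck")

    Q = defaultdict(float)              # tabular action-value estimates
    alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration

    for episode in range(500):
        state, done = "goal", False
        while not done:
            if random.random() < eps:                  # explore
                action = random.choice(ACTIONS)
            else:                                      # exploit
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            nxt, reward, done = step(state, action)
            # One-step Q-learning update (sampled Bellman backup).
            best_next = 0.0 if done else max(Q[(nxt, a)] for a in ACTIONS)
            Q[(state, action)] += alpha * (reward + gamma * best_next
                                           - Q[(state, action)])
            state = nxt

    # Greedy play now recovers the "proof": intro, then rewrite.
    print(max(ACTIONS, key=lambda a: Q[("goal", a)]),
          max(ACTIONS, key=lambda a: Q[("subgoal", a)]))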

Few people so far have realized the impact this will have on
computer algebra. Certainly a GPT system can answer questions about
the proper handling of sqrt(4) and/or work through both the positive
and negative root assumptions in a calculation, conducting the
calculation with the simplification to 2 and to -2 and presenting both
results. All the GPT prompt would have to say is "assuming sqrt(4) is -2
compute ...".
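
For contrast, a conventional system pins down the principal branch, and
the both-roots reading has to be requested explicitly.  A small sketch
using SymPy (chosen here only as a convenient illustration; this is not
FriCAS code):

    from sympy import sqrt, solve, symbols

    x = symbols('x')

    # The principal branch: sqrt(4) simplifies to 2, never to -2.
    print(sqrt(4))                            # 2

    # To honour "assuming sqrt(4) is -2", carry both roots explicitly.
    roots = solve(x**2 - 4, x)                # [-2, 2]
    expr = 3*x + 1                            # some downstream calculation
    print([expr.subs(x, r) for r in roots])   # [-5, 7]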

The Mixtral (Mixture of Experts)[0] system allows a custom expert to be
added to an existing system. A computer algebra expert is certainly
possible.

This future is not far off. Consider that the ollama system[1] lists 61
systems, of which "wizard-math"[2] is one. The OpenAI "apps" page is
said to have several hundred systems since November 2023.

The jerk[3] in AI systems is difficult to track. Alvin Toffler[4]
underestimated the rapidity of change.  It is not a matter of "if", only
a matter of "when".

Tim

[0] https://huggingface.co/docs/transformers/model_doc/mixtral

[1] https://ollama.ai/library

[2] wizard-math "Model focused on math and logic problems"

[3] Jerk (Wikipedia), aka the third derivative: "jerk ... is the rate of
change of an object's acceleration over time"
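
In standard kinematic notation (added here for reference):

    j(t) = \frac{da}{dt} = \frac{d^3x}{dt^3}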

[4] Alvin Toffler "Future Shock"
https://cdn.preterhuman.net/texts/literature/general/Alvin%20Toffler%20-%20Future%20Shock.pdf

On Wednesday, January 17, 2024 at 2:05:44 AM UTC-5 elders...@gmail.com 
wrote:

> Tim, 
>
> That use should be something that would benefit people. Good luck with it.
>
>
> On Wed, 17 Jan 2024, 1:18 pm Tim Daly,  wrote:
>
>> I have GPT systems running locally. It is possible to feed the systems
>> text files for training. Once trained they can answer questions related
>> to the training. It will be interesting to convert the book to text and
>> train the system on the book. At a minimum it might make an interesting
>> "help" facility capable of responding to natural language questions.
>>
>> On Monday, January 15, 2024 at 10:05:54 PM UTC-5 elders...@gmail.com 
>> wrote:
>>
>>> Tim, notice though that the only intelligence here is what is provided 
>>> by human intelligence. None of the various systems have any inherent 
>>> intelligence.
>>>
>>> My youngest granddaughter has just turned 1 and I have been spending
>>> time watching her develop intelligence, and the difference between her
>>> and any of the current systems is possibly a million orders of
>>> magnitude, with the systems at the primitive end.
>>>
>>> As I said above, these systems have possible use cases, but only if
>>> humans are controlling them intelligently. It is a matter of finding
>>> specific use cases that work for us.
>>>
>>> An appropriate analogy is summed up as: fire is a good servant but a 
>>> destructive master.
>>>
>>> On Tue, 16 Jan 2024, 1:00 pm Tim Daly,  wrote:
>>>
 There is an interesting example of proof earlier in the video:
 https://youtu.be/3pb_-oLfWJ4?t=298

 Reinforcement learning (using the Bellman equation[0]) needs
 the ability to at least decide if it succeeds and can use information
 about success in intermediate steps to rapidly improve. It also
 needs a set of actions that can be performed at each step.
 Reinforcement learning has conquered gaming with this structure.
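
(For reference, the Bellman optimality equation in its standard form,
with optimal value V*, reward R, discount gamma and transition
probabilities P; the notation is standard, it is not from the message:)

    V^*(s) = \max_a \Big[ R(s, a) + \gamma \sum_{s'} P(s' \mid s, a)\, V^*(s') \Big]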

 LEAN provides a database of theorems, definitions, and tactics
 that can be applied at each step of a proof. It provides feedback
 about the local success of each action. It provides global success
 once the proof completes. Thus all of the elements needed by
 reinforcement learning are available in proof systems. LEAN's
 theorems are strongly typed.

 When training 

Re: [fricas-devel] make build with sbcl-2.4.0 work out of box

2024-01-17 Thread Qian Yun

I think Dima gives good advice, WDYT, Waldek?

- Qian

diff --git a/configure.ac b/configure.ac
index 073f3e96..99426fb1 100644
--- a/configure.ac
+++ b/configure.ac
@@ -190,13 +190,16 @@ if test -z "$fricas_lisp" ; then
   use --with-lisp option to tell FriCAS
   how to invoke your Lisp])
 fi
+FRICAS_LISP="$FRICAS_LISP --dynamic-space-size 2048"
 fricas_lisp=$FRICAS_LISP
 else
 ## Honor use of Lisp image specified on command line
+if "$fricas_lisp" = "sbcl" ; then
+fricas_lisp="sbcl --dynamic-space-size 2048"
+fi
 FRICAS_LISP=$fricas_lisp
-AC_SUBST(FRICAS_LISP)
-:
 fi
+AC_SUBST(FRICAS_LISP)

 AC_ARG_WITH([lisp-flavor],
 [AS_HELP_STRING([--with-lisp-flavor=F],


On 1/16/24 20:26, Dima Pasechnik wrote:


it should be easy to basically let ./configure convert --with-lisp=sbcl into
--with-lisp="sbcl --dynamic-space-size 2048"



- Qian



Re: [fricas-devel] [PATCH] Store stat information in $timedNameStack instead of global variables

2024-01-17 Thread Qian Yun

OK, here is an alternative way:

Pack every variable that is related to stats into $statsInfo,
and make it a dynamic variable, alongside $timedNameStack.
(Otherwise we would have to make 5 dynamic variables.)

Basically, move the topFrame into an individual variable.
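
As an aside on what the dynamic binding buys: each recursive entry into
the interpreter gets its own stats frame, and the caller's frame is
restored automatically on exit.  A rough Python analogy (names invented;
the real change is the Boot patch attached below):

    import contextlib

    _frames = []    # stand-in for dynamically bound $statsInfo

    @contextlib.contextmanager
    def stats_frame():
        _frames.append({"time": 0.0, "space": 0})   # fresh per-call frame
        try:
            yield _frames[-1]
        finally:
            _frames.pop()                           # caller's frame restored

    def process_interactive(form, depth=0):
        with stats_frame() as stats:
            stats["time"] += 1.0                    # stand-in for real timing
            if depth == 0:
                process_interactive(form, depth + 1)  # recursive entry
            return stats["time"]

    # The inner call's accounting does not leak into the outer frame.
    print(process_interactive("integrate(x, x)"))   # 1.0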

See attachment.

- Qian

On 1/17/24 08:55, Waldek Hebisch wrote:

On Tue, Jan 16, 2024 at 07:32:56PM +0800, Qian Yun wrote:

This is the second patch (of two) to enable correct collection
of statistics information on recursive calls to the interpreter.

Move $statsInfo and other global variables required for timing
to the top of $timedNameStack.

Since $timedNameStack now has dynamic scope, recursive timing
is correctly supported after this patch.


This business with 'topFrame' looks fishy.  A stack should be
a stack, but you effectively turn the top item into a global
variable and push below it.  I have not analyzed the whole
of the patch, but ATM my feeling is "there should be a
better way".



From 1a814d4d174a4b395e70ef0244a835417fdd8637 Mon Sep 17 00:00:00 2001
From: Qian Yun 
Date: Wed, 13 Dec 2023 18:00:09 +0800
Subject: [PATCH] Store stats information in $statsInfo instead of global
 variables

---
 src/interp/g-timer.boot  | 43 ++--
 src/interp/i-toplev.boot |  1 +
 2 files changed, 16 insertions(+), 28 deletions(-)

diff --git a/src/interp/g-timer.boot b/src/interp/g-timer.boot
index 51d0846a7..fba145ca9 100644
--- a/src/interp/g-timer.boot
+++ b/src/interp/g-timer.boot
@@ -104,15 +104,12 @@ stopTimingProcess name ==
   popTimedName()
 
 --% Instrumentation specific to the interpreter
-DEFPARAMETER($oldElapsedSpace, 0)
-DEFPARAMETER($oldElapsedGCTime, 0.0)
-DEFPARAMETER($oldElapsedTime, 0.0)
 DEFPARAMETER($timePrintDigits, 2)
 
 -- $timedNameStack is used to hold the names of sections of the
 -- code being timed.
 
-DEFPARAMETER($timedNameStack, '(other))
+DEFVAR($timedNameStack)
 
 DEFPARAMETER($interpreterTimedNames, '(
 -- name class abbrev
@@ -148,27 +145,33 @@ DEFPARAMETER($interpreterTimedClasses, '(
 DEFVAR($statsInfo)
 
 initializeTimedNames() ==
-  len := # $interpreterTimedNames
-  $statsInfo := VECTOR(GETZEROVEC len, GETZEROVEC len)
   for [name, :.] in $interpreterTimedNames for i in 0.. repeat
 PUT(name, 'index, i)
   initializeTimedStack()
 
 initializeTimedStack() ==
   $timedNameStack := '(other)
-  computeElapsedTime()
-  computeElapsedSpace()
+  len := # $interpreterTimedNames
+  $statsInfo := VECTOR(GETZEROVEC len, GETZEROVEC len, get_run_time(), _
+   elapsedGcTime(), HEAPELAPSED())
   NIL
 
 updateTimedName name ==
+  oldTime := $statsInfo.2
+  oldGCTime := $statsInfo.3
+  oldSpace := $statsInfo.4
+  newTime := $statsInfo.2 := get_run_time()
+  newGCTime := $statsInfo.3 := elapsedGcTime()
+  newSpace := $statsInfo.4 := HEAPELAPSED()
+
   i := GET(name, 'index)
   timeVec := $statsInfo.0
   spaceVec := $statsInfo.1
-  [time, gcTime] := computeElapsedTime()
-  timeVec.i := timeVec.i + time
+  gcDelta := newGCTime - oldGCTime
+  timeVec.i := timeVec.i + (newTime - oldTime - gcDelta)*$inverseTimerTicksPerSecond
   i2 := GET('gc, 'index)
-  timeVec.i2 := timeVec.i2 + gcTime
-  spaceVec.i := spaceVec.i + computeElapsedSpace()
+  timeVec.i2 := timeVec.i2 + gcDelta * $inverseTimerTicksPerSecond
+  spaceVec.i := spaceVec.i + newSpace - oldSpace
 
 makeLongTimeString(listofnames,listofclasses) ==
   makeLongStatStringByProperty(listofnames, listofclasses,  _
@@ -182,22 +185,6 @@ makeLongSpaceString(listofnames,listofclasses) ==
 
 DEFPARAMETER($inverseTimerTicksPerSecond, 1.0/$timerTicksPerSecond)
 
-computeElapsedTime() ==
-  currentTime:= get_run_time()
-  currentGCTime:= elapsedGcTime()
-  gcDelta := currentGCTime - $oldElapsedGCTime
-  elapsedSeconds:= $inverseTimerTicksPerSecond *
- (currentTime-$oldElapsedTime-gcDelta)
-  $oldElapsedTime := currentTime
-  $oldElapsedGCTime := currentGCTime
-  [elapsedSeconds, $inverseTimerTicksPerSecond * gcDelta]
-
-computeElapsedSpace() ==
-  currentElapsedSpace := HEAPELAPSED()
-  elapsedBytes := currentElapsedSpace - $oldElapsedSpace
-  $oldElapsedSpace := currentElapsedSpace
-  elapsedBytes
-
 timedAlgebraEvaluation(code) ==
   startTimingProcess 'algebra
   r := eval code
diff --git a/src/interp/i-toplev.boot b/src/interp/i-toplev.boot
index de9a472d3..a26930aea 100644
--- a/src/interp/i-toplev.boot
+++ b/src/interp/i-toplev.boot
@@ -128,6 +128,7 @@ DEFPARAMETER($inRetract, nil)
 
 processInteractive(form, posnForm) ==
 $timedNameStack : local := NIL
+$statsInfo : local := NIL
 initializeTimedStack()
 finally(