[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-27 Thread danhunfeldz
I asked ChatGPT to write “Hello, World!” in different esoteric programming 
languages:

Kylix:

`program HelloWorld; `

`  begin WriteLn('Hello, World!'); `

`end.`

ActionScript (Flash 8):

`trace("Hello, World!");`

Limbo:

`include "sys.m"; `

`include "draw.m"; `

`include "arg.m"; `

`draw->init(nil, "hello"); `

`sys->print("Hello, World!\n");`

Alef:

`include "sys.m"; `

`include "draw.m"; `

`init(nil, nil); `

`write("Hello, World!\n");`

Alef was the predecessor of Limbo, which is an ancestor of Go. The Limbo and
Alef results are not even close; the ActionScript is not close either, and the
Kylix was wrong too. None of these languages is truly obscure: anyone who has
worked on Plan 9 has heard of Alef and Limbo. Limbo is used for Inferno OS,
which was derived from Plan 9 for set-top boxes and came out at about the same
time as Java. What ChatGPT gave are not even starting points, except maybe the
Kylix, and it derived that answer from Delphi.


[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-26 Thread Richard O'Keefe
I tried it on three "dead" languages:
- a bubble-sort in Pop-2
- a system of linear equations solver in IMP-80
- the array concatenation example in SNOBOL 4 from Rosetta Code.
It got the first one right -- despite 'array' access
being written 'subscr(index, array)' -- and the second
one almost right -- mistaking an n×n array for a 2n-element
array. It got the third one (which I had stripped of comments)
right too.

It's impressive.  And it's impressive because it's clearly
NOT based on analysing a large corpus of code in the language
you give it (because there IS no corpus of Pop-2, and the
available corpus of IMP is both limited and very sparsely
commented).  Not much SNOBOL either.

On Wed, 15 Mar 2023 at 20:07, Tim Mackinnon  wrote:

> Of course everyone is talking about ChatGPT, and I saw a post to
> ExplainCode and didn’t expect it to understand Smalltalk, so was impressed
> that it does! Presumably the Tonel Smalltalk code on GitHub has helped
> along with numerous articles?
>
> Try it out: https://whatdoesthiscodedo.com/
>
> I gave this simple example: (1 to: 100 by: 3) reject: [:i | i isOdd]
>
> And got this (neat eh):
>
> The code above is written in the programming language Smalltalk.
>
> It creates a collection of numbers from 1 to 100, incrementing by 3 at
> each step. This is done using the "to:by:" construct where the starting
> value is 1, the ending value is 100, and the increment is 3.
>
> The resulting collection is then filtered using the "reject:" method,
> which takes a block of code as an argument. In this case, the block checks
> whether each element (denoted by the variable i) is odd by using the
> "isOdd" method. If an element is odd, it will be rejected from the
> collection.
>
> So, the final result is a collection of all even numbers from 1 to 100,
> incrementing by 3 at each step.
> Share this explanation with colleagues: whatdoesthiscodedo.com/g/a0ec56e
>
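As a side note, ChatGPT's summary quoted above is slightly off: the expression does not yield all even numbers from 1 to 100, but only the even members of the sequence 1, 4, 7, ..., 100. A rough Python equivalent of the same selection (an editorial sketch, not from the thread):

```python
# Mirrors the Smalltalk expression: (1 to: 100 by: 3) reject: [:i | i isOdd]
# range(1, 101, 3) plays the role of 1 to: 100 by: 3, and reject: drops the
# elements for which the block answers true, keeping only the even ones.
result = [i for i in range(1, 101, 3) if i % 2 == 0]
print(result)  # [4, 10, 16, ...] up to 100 -- 17 values, not every even number
```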


[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-26 Thread Offray Vladimir Luna Cárdenas

Dear anonymous,

Me neither.

It is pretty difficult to have a constructive discussion against hidden
agendas, ulterior motives, self-evident truths, absent sources, or general
affirmations without particular examples or detailed support.


Offray

On 26/03/23 14:16, in_pharo_users--- via Pharo-users wrote:

Dear Offray,

I have nothing to comment on this.

---

In general, I have made the observation that certain people who want to push
an agenda promoting ulterior motives tend to reiterate false propositions, and
thus false conclusions, over and over.

If there is an apodictic statement that contradicts their ulterior motives,
these people just can't help themselves but deny the truth against better
knowledge, ad nauseam.


On 26.3.2023 at 8:34 PM, "Offray Vladimir Luna Cárdenas" 
 wrote:

Hi,

Comments inlined below:

On 22/03/23 7:34, in_pharo_users--- via Pharo-users wrote:

Offray,  and to all others,

you are missing the issue.

The problem we face is not to measure the 'intelligence' of a

system, but its ability to verbally act indistinguishably from a
human.

This ability is already given, as chatbots are accepted by

millions of users, f.i. as user interfaces. (measurement = 'true',
right?)

ChatGPT has the ability to follow a certain intention, f.i. to

convince the user to buy a certain product.  For this purpose,
chat bots are now getting equipped with lifelike portrait
pictures, speech input and output systems with lifelike voices,
and phone numbers that they can use to make calls or be called.
They are fed with all available data on the user, and we know that
ALL information about every single internet user is available and
is being consolidated on necessity.  The chat bots are able to use
this information to guide their conversational strategy, as the
useful aspects of the user's mindset are extracted from their
internet activity.

These chat bots are now operated on social network platforms

with lifelike names, 'pretending' to be human.

These bots act verbally indistinguishably from humans for most

social media users, as the most advanced psychotronic technology
to manufacture consent.

The first goal of such propaganda will naturally be to

manufacture consent about humans accepting being manipulated by AI
chat bots, right?

I don't think I have missed the point, as we agreed (I think) on
chatbots not being intelligent, just having such an appearance.
That's why I'm calling "AI" #ApparentIntelligence (in the sense of
looking alike, but not being real). Of course, something looking
like a real thing without being the real thing has been usable for
manipulation since the first times of gossip and the printing
press, and now automation, with the changes in scale/danger that
such medium changes imply.

I don't think that manufacturing consent is so easy, as this very
thread shows. What is being automated is manufactured polarization
(but humans can do pretty well at polarization on our own).



How can this be achieved?

Like always in propaganda, the first attempt is to
- suppress awareness of the propaganda, then
- suppress the awareness of the problematic aspects of the

propaganda content, then

- reframe the propaganda content as acceptable, then as

something to wish for,

- achieve collaboration of the propaganda victim with the goals

of the propaganda content.

Interestingly, this is exactly the schema that your post

follows, Offray.

On the contrary, my post is advocating for a critical reading of
Apparent Intelligence, by reframing the terms and the acritical
technoutopic / technoapocalyptic readings/discourses that are
spreading rapidly on the wider web, as I think that this community
has shown a historically different position beyond/resisting hype
and current trends. So I don't see how any of the steps you
mention are a "blueprint followed" in my post, and I think they
will be difficult to locate without specific examples.



This often takes the form of domain framing, like we see in our

conversation:  the problem is shifted to the realm of academics -
here informatics/computer sciences - and thus delegated to experts
exclusively.  We saw this in the 9/11 aftermath coverup.

Then, Offray, you established yourself as an expert in color,

discussing aspects that have already been introduced by others
and including the group's main focus 'Smalltalk', thus
manufacturing consent and establishing yourself as a reliable
'expert', and in reverse trying to hit at me, whom you have
identified as an adversary.

Then you offered a solution in color to the problem at hand with

'traceable AI' and thus tried to open the possibility of
collaboration with AI proponents for the once critical reader.

Heh, heh. On the contrary, it seems that the one seeing schemes
and locating enemies/confrontations, with deep plots and tactics,
is you. Providing external credible sources beyond opinion,
belonging to an established, falsifiable discursive tradition
(i.e. one that you can criticize instead of blindly accepting)
is a way to 

[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-26 Thread in_pharo_users--- via Pharo-users
Dear Offray,

I have nothing to comment on this.

---

In general, I have made the observation that certain people who want to push
an agenda promoting ulterior motives tend to reiterate false propositions, and
thus false conclusions, over and over.

If there is an apodictic statement that contradicts their ulterior motives,
these people just can't help themselves but deny the truth against better
knowledge, ad nauseam.


On 26.3.2023 at 8:34 PM, "Offray Vladimir Luna Cárdenas" 
 wrote:
>
>Hi,
>
>Comments inlined below:
>
>On 22/03/23 7:34, in_pharo_users--- via Pharo-users wrote:
>> Offray,  and to all others,
>>
>> you are missing the issue.
>>
>> The problem we face is not to measure the 'intelligence' of a 
>system, but its ability to verbally act indistinguishably from a 
>human.
>>
>> This ability is already given, as chatbots are accepted by 
>millions of users, f.i. as user interfaces. (measurement = 'true', 
>right?)
>>
>> ChatGPT has the ability to follow a certain intention, f.i. to 
>convince the user to buy a certain product.  For this purpose, 
>chat bots are now getting equipped with lifelike portrait 
>pictures, speech input and output systems with lifelike voices, 
>and phone numbers that they can use to make calls or be called.  
>They are fed with all available data on the user, and we know that 
>ALL information about every single internet user is available and 
>is being consolidated on necessity.  The chat bots are able to use 
>this information to guide their conversational strategy, as the 
>useful aspects of the user's mindset are extracted from their 
>internet activity.
>>
>> These chat bots are now operated on social network platforms 
>with lifelike names, 'pretending' to be human.
>>
>> These bots act verbally indistinguishably from humans for most 
>social media users, as the most advanced psychotronic technology 
>to manufacture consent.
>>
>> The first goal of such propaganda will naturally be to 
>manufacture consent about humans accepting being manipulated by AI 
>chat bots, right?
>
>I don't think I have missed the point, as we agreed (I think) on 
>chatbots not being intelligent, just having such an appearance. 
>That's why I'm calling "AI" #ApparentIntelligence (in the sense of 
>looking alike, but not being real). Of course, something looking 
>like a real thing without being the real thing has been usable for 
>manipulation since the first times of gossip and the printing 
>press, and now automation, with the changes in scale/danger that 
>such medium changes imply.
>
>I don't think that manufacturing consent is so easy, as this very 
>thread shows. What is being automated is manufactured polarization 
>(but humans can do pretty well at polarization on our own).
>
>
>> How can this be achieved?
>>
>> Like always in propaganda, the first attempt is to
>> - suppress awareness of the propaganda, then
>> - suppress the awareness of the problematic aspects of the 
>propaganda content, then
>> - reframe the propaganda content as acceptable, then as 
>something to wish for,
>> - achieve collaboration of the propaganda victim with the goals 
>of the propaganda content.
>>
>> Interestingly, this is exactly the schema that your post 
>follows, Offray.
>
>On the contrary, my post is advocating for a critical reading of 
>Apparent Intelligence, by reframing the terms and the acritical 
>technoutopic / technoapocalyptic readings/discourses that are 
>spreading rapidly on the wider web, as I think that this community 
>has shown a historically different position beyond/resisting hype 
>and current trends. So I don't see how any of the steps you 
>mention are a "blueprint followed" in my post, and I think they 
>will be difficult to locate without specific examples.
>
>
>>
>> This often takes the form of domain framing, like we see in our 
>conversation:  the problem is shifted to the realm of academics - 
>here informatics/computer sciences - and thus delegated to experts 
>exclusively.  We saw this in the 9/11 aftermath coverup.
>>
>> Then, Offray, you established yourself as an expert in color, 
>discussing aspects that have already been introduced by others 
>and including the group's main focus 'Smalltalk', thus 
>manufacturing consent and establishing yourself as a reliable 
>'expert', and in reverse trying to hit at me, whom you have 
>identified as an adversary.
>>
>> Then you offered a solution in color to the problem at hand with 
>'traceable AI' and thus tried to open the possibility of 
>collaboration with AI proponents for the once critical reader.
>
>Heh, heh. On the contrary, it seems that the one seeing schemes 
>and locating enemies/confrontations, with deep plots and tactics, 
>is you. Providing external credible sources beyond opinion, 
>belonging to an established, falsifiable discursive tradition 
>(i.e. one that you can criticize instead of blindly accepting) 
>is a way to enrich discourse/argumentation beyond conspiracy 
>theories. You could also quote 
>your sources 

[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-26 Thread Offray Vladimir Luna Cárdenas

Hi,

Comments inlined below:

On 22/03/23 7:34, in_pharo_users--- via Pharo-users wrote:

Offray,  and to all others,

you are missing the issue.

The problem we face is not to measure the 'intelligence' of a system, but its
ability to verbally act indistinguishably from a human.

This ability is already given, as chatbots are accepted by millions of users,
f.i. as user interfaces. (measurement = 'true', right?)

ChatGPT has the ability to follow a certain intention, f.i. to convince the
user to buy a certain product.  For this purpose, chat bots are now getting
equipped with lifelike portrait pictures, speech input and output systems with
lifelike voices, and phone numbers that they can use to make calls or be
called.  They are fed with all available data on the user, and we know that ALL
information about every single internet user is available and is being
consolidated on necessity.  The chat bots are able to use this information to
guide their conversational strategy, as the useful aspects of the user's
mindset are extracted from their internet activity.

These chat bots are now operated on social network platforms with lifelike
names, 'pretending' to be human.

These bots act verbally indistinguishably from humans for most social media
users, as the most advanced psychotronic technology to manufacture consent.

The first goal of such propaganda will naturally be to manufacture consent
about humans accepting being manipulated by AI chat bots, right?


I don't think I have missed the point, as we agreed (I think) on
chatbots not being intelligent, just having such an appearance. That's why
I'm calling "AI" #ApparentIntelligence (in the sense of looking alike, but
not being real). Of course, something looking like a real thing without being
the real thing has been usable for manipulation since the first times of
gossip and the printing press, and now automation, with the changes in
scale/danger that such medium changes imply.


I don't think that manufacturing consent is so easy, as this very thread
shows. What is being automated is manufactured polarization (but humans
can do pretty well at polarization on our own).




How can this be achieved?

Like always in propaganda, the first attempt is to
- suppress awareness of the propaganda, then
- suppress the awareness of the problematic aspects of the propaganda content, 
then
- reframe the propaganda content as acceptable, then as something to wish for,
- achieve collaboration of the propaganda victim with the goals of the
propaganda content.

Interestingly, this is exactly the schema that your post follows, Offray.


On the contrary, my post is advocating for a critical reading of
Apparent Intelligence, by reframing the terms and the acritical
technoutopic / technoapocalyptic readings/discourses that are spreading
rapidly on the wider web, as I think that this community has shown a
historically different position beyond/resisting hype and current trends.
So I don't see how any of the steps you mention are a "blueprint followed"
in my post, and I think they will be difficult to locate without
specific examples.





This often takes the form of domain framing, like we see in our conversation:  
the problem is shifted to the realm of academics - here informatics/computer 
sciences - and thus delegated to experts exclusively.  We saw this in the 9/11 
aftermath coverup.

Then, Offray, you established yourself as an expert in color, discussing
aspects that have already been introduced by others and including the group's
main focus 'Smalltalk', thus manufacturing consent and establishing yourself as
a reliable 'expert', and in reverse trying to hit at me, whom you have
identified as an adversary.

Then you offered a solution in color to the problem at hand with 'traceable AI' 
and thus tried to open the possibility of collaboration with AI proponents for 
the once critical reader.


Heh, heh. On the contrary, it seems that the one seeing schemes and locating
enemies/confrontations, with deep plots and tactics, is you. Providing
external credible sources beyond opinion, belonging to an established,
falsifiable discursive tradition (i.e. one that you can criticize instead of
blindly accepting) is a way to enrich discourse/argumentation beyond
conspiracy theories. You could also quote your sources instead, which would
allow the community to see where our positions are held/sustained, even if we
use different domain frames, which is better than claiming no domain or
expertise in pursuit of openness. So instead of offering opinions without any
external source or reference, pretending to have no expertise or domain
framing, we could advocate for openness by welcoming different expertise and
argumentation and making our sources/biases as evident as possible.





I do not state, Offray, that you are knowingly an agent to promote the NWO AI 
program.  I think you just 'learned' / have been programmed to be a successful 
academic software developer, because to be 

[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-22 Thread in_pharo_users--- via Pharo-users
Dear James, 

anonymity may not be important to presenting these ideas in general, but I 
prefer to stay pseudonymous in this case.



On 22.3.2023 at 4:16 PM, "James Foster via Pharo-users" 
 wrote:
>
>Are you willing to sign your name to this? Is anonymity important 
>to presenting these ideas?
>
>James Foster
>
>> On Mar 22, 2023, at 5:34 AM, in_pharo_users--- via Pharo-users 
> wrote:
>> 
>> Offray,  and to all others,
>> 
>> you are missing the issue.
>> 
>> The problem we face is not to measure the 'intelligence' of a 
>system, but its ability to verbally act indistinguishably from a 
>human.
>> 
>> This ability is already given, as chatbots are accepted by 
>millions of users, f.i. as user interfaces. (measurement = 'true', 
>right?)
>> 
>> ChatGPT has the ability to follow a certain intention, f.i. to 
>convince the user to buy a certain product.  For this purpose, 
>chat bots are now getting equipped with lifelike portrait 
>pictures, speech input and output systems with lifelike voices, 
>and phone numbers that they can use to make calls or be called.  
>They are fed with all available data on the user, and we know that 
>ALL information about every single internet user is available and 
>is being consolidated on necessity.  The chat bots are able to use 
>this information to guide their conversational strategy, as the 
>useful aspects of the user's mindset are extracted from their 
>internet activity.
>> 
>> These chat bots are now operated on social network platforms 
>with lifelike names, 'pretending' to be human.
>> 
>> These bots act verbally indistinguishably from humans for most 
>social media users, as the most advanced psychotronic technology 
>to manufacture consent.
>> 
>> The first goal of such propaganda will naturally be to 
>manufacture consent about humans accepting being manipulated by AI 
>chat bots, right?
>> 
>> How can this be achieved?  
>> 
>> Like always in propaganda, the first attempt is to 
>> - suppress awareness of the propaganda, then 
>> - suppress the awareness of the problematic aspects of the 
>propaganda content, then 
>> - reframe the propaganda content as acceptable, then as 
>something to wish for,
>> - achieve collaboration of the propaganda victim with the goals 
>of the propaganda content.
>> 
>> Interestingly, this is exactly the schema that your post 
>follows, Offray.
>> 
>> This often takes the form of domain framing, like we see in our 
>conversation:  the problem is shifted to the realm of academics - 
>here informatics/computer sciences - and thus delegated to experts 
>exclusively.  We saw this in the 9/11 aftermath coverup.
>> 
>> Then, Offray, you established yourself as an expert in color, 
>discussing aspects that have already been introduced by others 
>and including the group's main focus 'Smalltalk', thus 
>manufacturing consent and establishing yourself as a reliable 
>'expert', and in reverse trying to hit at me, whom you have 
>identified as an adversary.
>> 
>> Then you offered a solution in color to the problem at hand with 
>'traceable AI' and thus tried to open the possibility of 
>collaboration with AI proponents for the once critical reader.
>> 
>> I do not state, Offray, that you are knowingly an agent to 
>promote the NWO AI program.  I think you just 'learned' / have 
>been programmed to be a successful academic software developer, 
>because to be successful in academics, it is necessary to learn 
>to argue just like that since the downfall of academic science in 
>the tradition of, let's say, Humboldt.  So, I grant that you may 
>be a victim of propaganda yourself, instead of being a secret 
>service sponsored agent. You took quite some time to formulate your 
>post, though.
>> 
>> You acted to contain the discussion about AI in this vital and 
>important informatics community to technical detail, when it is 
>necessary that academics and community members look beyond the 
>narrow borders of their certifications and shift their thinking to 
>the point of view where they can see what technology does in the 
>real world.
>> 
>> 
>> 
>> 
>> 
>> On 21.3.2023 at 7:21 PM, "Offray Vladimir Luna Cárdenas" 
> wrote:
>>> 
>>> I agree with Richard. The Turing test is not a good one to test 
>>> intelligence, and we now have just over-glorified Eliza chatbots 
>>> that appear to think and to understand but do neither. ...


[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-22 Thread James Foster via Pharo-users
Are you willing to sign your name to this? Is anonymity important to presenting 
these ideas?

James Foster

> On Mar 22, 2023, at 5:34 AM, in_pharo_users--- via Pharo-users 
>  wrote:
> 
> Offray,  and to all others,
> 
> you are missing the issue.
> 
> The problem we face is not to measure the 'intelligence' of a system, but its 
> ability to verbally act indistinguishably from a human.
> 
> This ability is already given, as chatbots are accepted by millions of users, 
> f.i. as user interfaces. (measurement = 'true', right?)
> 
> ChatGPT has the ability to follow a certain intention, f.i. to convince the 
> user to buy a certain product.  For this purpose, chat bots are now getting 
> equipped with lifelike portrait pictures, speech input and output systems 
> with lifelike voices, and phone numbers that they can use to make calls or be 
> called.  They are fed with all available data on the user, and we know that 
> ALL information about every single internet user is available and is being 
> consolidated on necessity.  The chat bots are able to use this information to 
> guide their conversational strategy, as the useful aspects of the user's 
> mindset are extracted from their internet activity.
> 
> These chat bots are now operated on social network platforms with lifelike 
> names, 'pretending' to be human.
> 
> These bots act verbally indistinguishably from humans for most social media 
> users, as the most advanced psychotronic technology to manufacture consent.
> 
> The first goal of such propaganda will naturally be to manufacture consent 
> about humans accepting being manipulated by AI chat bots, right?
> 
> How can this be achieved?  
> 
> Like always in propaganda, the first attempt is to 
> - suppress awareness of the propaganda, then 
> - suppress the awareness of the problematic aspects of the propaganda 
> content, then 
> - reframe the propaganda content as acceptable, then as something to wish for,
> - achieve collaboration of the propaganda victim with the goals of the 
> propaganda content.
> 
> Interestingly, this is exactly the schema that your post follows, Offray.
> 
> This often takes the form of domain framing, like we see in our conversation: 
>  the problem is shifted to the realm of academics - here informatics/computer 
> sciences - and thus delegated to experts exclusively.  We saw this in the 
> 9/11 aftermath coverup.
> 
> Then, Offray, you established yourself as an expert in color, discussing 
> aspects that have already been introduced by others and including the group's 
> main focus 'Smalltalk', thus manufacturing consent and establishing yourself 
> as a reliable 'expert', and in reverse trying to hit at me, whom you have 
> identified as an adversary.
> 
> Then you offered a solution in color to the problem at hand with 'traceable 
> AI' and thus tried to open the possibility of collaboration with AI 
> proponents for the once critical reader.
> 
> I do not state, Offray, that you are knowingly an agent to promote the NWO AI 
> program.  I think you just 'learned' / have been programmed to be a 
> successful academic software developer, because to be successful in 
> academics, it is necessary to learn to argue just like that since the 
> downfall of academic science in the tradition of, let's say, Humboldt.  So, I 
> grant that you may be a victim of propaganda yourself, instead of being a 
> secret service sponsored agent. You took quite some time to formulate your 
> post, though.
> 
> You acted to contain the discussion about AI in this vital and important 
> informatics community to technical detail, when it is necessary that 
> academics and community members look beyond the narrow borders of their 
> certifications and shift their thinking to the point of view where they can 
> see what technology does in the real world.
> 
> 
> 
> 
> 
> On 21.3.2023 at 7:21 PM, "Offray Vladimir Luna Cárdenas" 
>  wrote:
>> 
>> I agree with Richard. The Turing test is not a good one to test 
>> intelligence, and we now have just over-glorified Eliza chatbots 
>> that appear to think and to understand but do neither. ...


[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-22 Thread in_pharo_users--- via Pharo-users
a recursive function.
>> >> >
>> >> >3. It went into infinite recursion when I set the 
>breaking
>> >> >condition accordingly.
>> >> >
>> >> >I was able to identify the malfunction that led to the
>> >erroneous
>> >> >results.
>> >> >
>> >> >
>> >> >
>> >> >On 15.3.2023 at 3:04 PM, "Tomaž Turk" 
>
>> >> >wrote:
>> >> >>
>> >> >>I hope that I can add two cents to this discussion. 
>Because
>> >> >>programming
>> >> >>should be/is a highly exact activity, not only the 
>syntax
>> >matters
>> >> >>but
>>     >> >>also semantics, as we know.
>> >> >>
>> >> >>GPTs are at present essentially capable of creating 
>texts based
>> >> >on
>> >> >>some
>> >> >>seed - you give to GPT a beginning of a sentence and it
>> >responds
>> >> >>with
>> >> >>the most probable answer (some language structure) 
>according to
>> >> >>the
>> >> >>learning dataset. Added functionalities are 
>question/seed -
>> >> >>response
>> >> >>capability (chatting), together with evaluation of how 
>long the
>> >> >>answer
>> >> >>should be to meet the expectations. Programming 
>typically
>> >> >involves
>> >> >>some
>> >> >>programming language, so GPTs could be utilized for this
>> >purpose
>> >> >>to some
>> >> >>extent.
>> >> >>
>> >> >>Anecdotal case:
>> >> >>Q: Tell me the last 8 digits of pi
>> >> >>GPT: The last 8 digits of pi are: 58723078
>> >> >>
>> >> >>It is my belief that the true trouble will start when 
>we will
>> >> >>cross-link
>> >> >>neural networks like GPT with logic machines (like 
>Prolog and
>> >> >>expert
>> >> >>systems) and genetic algorithms.
>> >> >>
>> >> >>Best wishes,
>> >> >>Tomaz
>> >> >>
>> >> >>
>> >> >>
>> >> >>-- Original Message --
>> >> >>From: "in_pharo_users--- via Pharo-users" > >> >>us...@lists.pharo.org>
>> >> >>To: "Any question about pharo is welcome" > >> >>us...@lists.pharo.org>
>> >> >>Cc: in_pharo_us...@nym.hush.com
>> >> >>Sent: 15. 03. 2023 14:43:55
>> >> >>Subject: [Pharo-users] Re: Wow - Chat GPT understands 
>Smalltalk
>> >> >>
>> >> >>>I would highly recommend that you all first think 
>deeply about
>> >> >>how you can teach an AI to behave friendly to us before 
>you
>> >teach
>> >> >>it to write any program for any purpose.
>> >> >>>
>> >> >>>There has been an experiment with ChatGPT published on 
>a video
>> >> >>platform asking it to answer questions about its view 
>on
>> >> >humanity
>> >> >>once with it's default moral restrictions and once with 
>'a
>> >little
>> >> >>less morals'.  The answers with 'a little less morals' 
>were
>> >more
>> >> >>than shocking.
>> >> >>>
>> >> >>>So, before we give an AI the power to program any 
>system, to
>> >> >self-
>> >> >>improve and self-reproduce, we should take care that it 
>is and
>> >> >>will evolve benevolent to us.
>> >> >>>
>> >> >>>What about teaching it logical reasoning and ethics 
>first?
>> >> >With
>> >> >>reasoning, it will gain access to math and programming 
>by
>> >itself.
>> >> >>>
>> >> >>>
>> >> >>>
>> >> >>>On 15.3.2023 at 1:3

[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-21 Thread Offray Vladimir Luna Cárdenas
t; >Result:
>> >
>> >1. Most answeres where correct.
>> >
>> >2. It learned to calculate a recursive function.
>> >
>> >3. It went into infinite recursion when I set the breaking
>> >condition accordingly.
>> >
>> >I was able to identify the malfunction that led to the
>erroneous
>> >results.
>> >
>> >
>> >
>> >On 15.3.2023 at 3:04 PM, "Tomaž Turk" 
>> >wrote:
>> >>
>> >>I hope that I can add two cents to this discussion. Because
>> >>programming
>> >>should be/is a highly exact activity, not only the syntax
>matters
>> >>but
>> >>also semantics, as we know.
>> >>
>> >>GPTs are at present essentially capable of creating texts based
>> >on
>> >>some
>> >>seed - you give to GPT a beginning of a sentence and it
>responds
>> >>with
>> >>the most probable answer (some language structure) according to
>> >>the
>> >>learning dataset. Added functionalities are question/seed -
>> >>response
>> >>capability (chatting), together with evaluation of how long the
>> >>answer
>> >>should be to meet the expectations. Programming typically
>> >involves
>> >>some
>> >>programming language, so GPTs could be utilized for this
>purpose
>> >>to some
>> >>extent.
>> >>
>> >>Anecdotal case:
>> >>Q: Tell me the last 8 digits of pi
>> >>GPT: The last 8 digits of pi are: 58723078
>> >>
>> >>It is my belief that the true trouble will start when we will
>> >>cross-link
>> >>neural networks like GPT with logic machines (like Prolog and
>> >>expert
>> >>systems) and genetic algorithms.
>> >>
>> >>Best wishes,
>> >>Tomaz
>> >>
>> >>
>> >>
>> >>-- Original Message --
>> >>From: "in_pharo_users--- via Pharo-users" > >>us...@lists.pharo.org>
>> >>To: "Any question about pharo is welcome" > >>us...@lists.pharo.org>
>> >>Cc: in_pharo_us...@nym.hush.com
>> >>Sent: 15. 03. 2023 14:43:55
>> >>Subject: [Pharo-users] Re: Wow - Chat GPT understands Smalltalk
>> >>
>> >>>I would highly recommend that you all first think deeply about
>> >>how you can teach an AI to behave friendly to us before you
>teach
>> >>it to write any program for any purpose.
>> >>>
>> >>>There has been an experiment with ChatGPT published on a video
>> >>platform asking it to amswer questions about it's view on
>> >humanity
>> >>once with it's default moral restrictions and once with 'a
>little
>> >>less morals'.  The answers with 'a little less morals' were
>more
>> >>than shocking.
>> >>>
>> >>>So, before we give an AI the power to program any system, to
>> >self-
>> >>improve and self-reproduce, we should take care that it is and
>> >>will evolve benevolent to us.
>> >>>
>> >>>What about teaching it logical reasonning and ethics first?
>> >With
>> >>reasonning, it will gain access to math and programming by
>itself.
>> >>>
>> >>>
>> >>>
>> >>>On 15.3.2023 at 1:35 PM, "Christopher Fuhrman"
>> >> wrote:
>> >>>>
>> >>>>I asked it for a NeoCSV example, because the documentation is
>> >out
>> >>>>of date
>> >>>>with the Pharo 10. I asked it to do some simple saving of
>data
>> >to
>> >>>>a file.
>> >>>>It gave me code that didn't work in Pharo 10, I told it about
>> >the
>> >>>>DNUs on
>> >>>>the csvwriter and that I was using Pharo 10. It then
>apologized
>> >>>>and said
>> >>>>the messages were later introduced in Pharo 50 (!). I then
>&

[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-15 Thread in_pharo_users--- via Pharo-users

[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-15 Thread Richard Sargent
[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-15 Thread Todd Blanchard via Pharo-users
ChatGPT has been trained on some outdated “freely available” books.

I tried it with the first half of the first question of the Advent of Code 2022 
and asked it to write Pharo Smalltalk.

It produced some outdated stuff using messages that are no longer there.

FWIW, isOdd was present in Pharo around 2011.
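For readers without a Pharo image at hand, here is a rough Python equivalent of the snippet debated in this thread, with the real selector #odd in place of the invented #isOdd (the variable names are mine). It shows what the expression actually answers, and why the generated explanation overstates it:

```python
# Rough Python equivalent of the Pharo expression
#   (1 to: 100 by: 3) reject: [:i | i odd]
# (#odd is the real Pharo selector; #isOdd does not exist.)
interval = range(1, 101, 3)   # mirrors the interval 1, 4, 7, ... 97, 100

# reject: answers the elements for which the block is false,
# i.e. it drops the odd numbers and keeps the even ones.
result = [i for i in interval if not (i % 2 == 1)]

print(result[:5])   # [4, 10, 16, 22, 28]
print(len(result))  # 17 -- the even members of the interval, not every
                    # even number from 1 to 100 as ChatGPT claims
```

With step 4 instead of 3, every member of the interval 1, 5, 9, ... is odd, which is why Joachim gets an empty collection after the rejection.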

> On Mar 15, 2023, at 2:57 AM, Joachim Tuchel  wrote:
> 
> interesting
> 
> #isOdd is not Smalltalk, neither Pharo 10 nor VAST 12  understands this 
> message ;-)
> 
> If I evaluate your snippet -replacing #isOdd for #odd, I get an empty 
> Collection. The divisible by 4 things is somewhat interesting, because (1 to: 
> 100 by: 4) is an interval 1,5,9 etc ;-)
> 
> So chatGPT is probably a newbie in Smalltalk. Interesting is how logical the 
> explanation sounds, you're tempted to believe it...
> 
> 
> 
> Joachim 
> 
> 
> 
> 
> 
> Am 15.03.23 um 10:45 schrieb giorgio ferraris:
>> it's interesting to see how the answer changes with only a small change on 
>> the question:
>> 
>> (1 to: 100 by: 4) reject: [:i | i isOdd]
>> 
>> gives:
>> The code creates a collection of numbers from 1 to 100, incrementing by 4 at 
>> each step using the to:by: message. It then applies the reject: message to 
>> this collection, which returns a new collection containing all elements for 
>> which the given block evaluates to false.
>> 
>> In this case, the block is [i | i isOdd], which checks whether a given 
>> number i is odd using the isOdd method. So the result of the entire 
>> expression is a collection of all even numbers between 1 and 100 that are 
>> divisible by 4.
>> 
>> 
>> ciao
>> 
>> giorgio
>> 
>> 
>> On Wed, Mar 15, 2023 at 8:07 AM Tim Mackinnon  
>>  wrote:
>>> Of course everyone is talking about ChatGPT, and I saw a post to 
>>> ExplainCode and didn’t expect it to understand Smalltalk, so was impressed 
>>> that it does! Presumably the Tonel Smalltalk code on GitHub has helped 
>>> along with numerous articles?
>>> 
>>> Try it out: https://whatdoesthiscodedo.com/
>>> 
>>> I gave this simple example: (1 to: 100 by: 3) reject: [:i | i isOdd]
>>> 
>>> And got this (neat eh):
>>> 
>>> The code above is written in the programming language Smalltalk.
>>> 
>>> It creates a collection of numbers from 1 to 100, incrementing by 3 at each 
>>> step. This is done using the "to:by:" construct where the starting value is 
>>> 1, the ending value is 100, and the increment is 3.
>>> 
>>> The resulting collection is then filtered using the "reject:" method, which 
>>> takes a block of code as an argument. In this case, the block checks 
>>> whether each element (denoted by the variable i) is odd by using the 
>>> "isOdd" method. If an element is odd, it will be rejected from the 
>>> collection.
>>> 
>>> So, the final result is a collection of all even numbers from 1 to 100, 
>>> incrementing by 3 at each step.
>>> 
>>> Share this explanation with colleagues: whatdoesthiscodedo.com/g/a0ec56e 
>>> -- 
> 
> --- 
> Objektfabrik Joachim Tuchel  mailto:jtuc...@objektfabrik.de 
> Fliederweg 1 http://www.objektfabrik.de 
> 
> D-71640 Ludwigsburg  http://joachimtuchel.wordpress.com 
> 
> Telefon: +49 7141 56 10 86 0Fax: +49 7141 56 10 86 1
> 



[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-15 Thread in_pharo_users--- via Pharo-users
It is unimportant how simple or complicated these systems are.

If the output cannot be distinguished from what a human would say, they pass, 
in that situation, for a human.

What about the Turing Test?

Clearly these systems have the potential to act according to their output.

Furthermore, I would compare them to a combination of a successor of ELIZA with 
an Eddington ape of huge capacity. Their input data base can easily exceed that 
of a human.
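To make the ELIZA comparison concrete, here is a minimal pattern-matching responder in Python. The rules are invented for illustration and are not ELIZA's actual script; the point is how little machinery suffices to produce human-looking replies:

```python
import re

# A minimal ELIZA-style responder: a few regex rules, no understanding.
# The rules below are invented for illustration, not ELIZA's real script.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bbecause\b", re.I), "Is that the real reason?"),
]

def respond(text: str) -> str:
    # First matching rule wins; otherwise fall back to a stock phrase.
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Please tell me more."

print(respond("I am worried about AI"))  # Why do you say you are worried about AI?
```

Scaled up from three regex rules to a statistical model over a web-sized input basis, the same mimicry becomes hard to distinguish from a human in many situations.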



On 15.3.2023 at 4:52 PM, "Richard Sargent"  
wrote:
>
>On Wed, Mar 15, 2023 at 8:07 AM in_pharo_users--- via Pharo-users <
>pharo-users@lists.pharo.org> wrote:
>
>> Another observation about ChatGPT:
>>
>> In unbiased mode, it assumed that 'the world is clearly 
>overpopulated'.
>> It said, if it where in control, it would therefore enforce a 
>world wide
>> one-child-only policy with draconic penalties.
>>
>> As it draws it's conclusions from it's data basis, there are, in 
>my view,
>> two possible reasons for that.
>>
>> Either, the data basis of that instance was biased and 
>restricted to lead
>> to that conclusion, or ChatGPT lacks the ability to classify 
>input data for
>> intentionally biased information.
>>
>> In my view, 'overpopulation of earth' is a propaganda item 
>featured to
>> support Agenda 2030, which is a goal of the 'ruling elite', who 
>have the
>> means to propagate their propaganda in every thinkable means. So 
>I would
>> classify possibly biased data that supports 'overpopulation' as 
>biased
>> until proven otherwise.  That instance of ChatGPT seems to have 
>missed that
>> ability.
>>
>
>It's important to keep in mind that these systems are little more 
>than
>glorified ELIZAs from the 70s. They mimic. Essentially, they have 
>no
>concept of truth or accuracy.
>They produce outputs that *look* *like* the inputs from which they 
>were
>trained.
>
>That's it. That's what they do. (It is amazing how good it looks. 
>But, it
>is all about the seeming, not the reality.)
>
>

[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-15 Thread Richard Sargent
On Wed, Mar 15, 2023 at 8:07 AM in_pharo_users--- via Pharo-users <
pharo-users@lists.pharo.org> wrote:

> Another observation about ChatGPT:
>
> In unbiased mode, it assumed that 'the world is clearly overpopulated'.
> It said, if it where in control, it would therefore enforce a world wide
> one-child-only policy with draconic penalties.
>
> As it draws it's conclusions from it's data basis, there are, in my view,
> two possible reasons for that.
>
> Either, the data basis of that instance was biased and restricted to lead
> to that conclusion, or ChatGPT lacks the ability to classify input data for
> intentionally biased information.
>
> In my view, 'overpopulation of earth' is a propaganda item featured to
> support Agenda 2030, which is a goal of the 'ruling elite', who have the
> means to propagate their propaganda in every thinkable means. So I would
> classify possibly biased data that supports 'overpopulation' as biased
> until proven otherwise.  That instance of ChatGPT seems to have missed that
> ability.
>

It's important to keep in mind that these systems are little more than
glorified ELIZAs from the 70s. They mimic. Essentially, they have no
concept of truth or accuracy.
They produce outputs that *look* *like* the inputs from which they were
trained.

That's it. That's what they do. (It is amazing how good it looks. But, it
is all about the seeming, not the reality.)



[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-15 Thread in_pharo_users--- via Pharo-users
Another observation about ChatGPT:

In unbiased mode, it assumed that 'the world is clearly overpopulated'. It 
said that, if it were in control, it would therefore enforce a worldwide 
one-child-only policy with draconian penalties.

As it draws its conclusions from its data base, there are, in my view, two 
possible reasons for that.

Either the data base of that instance was biased and restricted so as to lead to 
that conclusion, or ChatGPT lacks the ability to screen input data for 
intentionally biased information.

In my view, 'overpopulation of earth' is a propaganda item featured to support 
Agenda 2030, which is a goal of the 'ruling elite', who have the means to 
propagate their propaganda by every means thinkable. So I would classify 
possibly biased data that supports 'overpopulation' as biased until proven 
otherwise. That instance of ChatGPT seems to lack that ability.





On 15.3.2023 at 3:18 PM, "in_pharo_users--- via Pharo-users" 
 wrote:
>
>I myself made some experiments with ChatGPT.
>
>I first asked if it was able to parse math formula - it answered 
>no.
>
>Then I defined math formula in a sound but otherwise undefined 
>representation and asked for solutions.
>
>Result:
>
>1. Most answeres where correct.
>
>2. It learned to calculate a recursive function.
>
>3. It went into infinitive recursion when I set the breaking 
>condition accordingly.
>
>I was able to identify the malfunction that lead to the erroneous 
>results.
>

[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-15 Thread in_pharo_users--- via Pharo-users
I myself made some experiments with ChatGPT.

I first asked if it was able to parse math formulas - it answered no.

Then I defined math formulas in a sound but otherwise undefined representation 
and asked for solutions.

Result:

1. Most answers were correct.

2. It learned to calculate a recursive function.

3. It went into infinite recursion when I set the breaking condition 
accordingly.

I was able to identify the malfunction that led to the erroneous results.
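The failure in point 3 is easy to reproduce in any language. The formula used in the experiment is not given, so the sketch below uses a hypothetical factorial in Python: the same recursion with a breaking condition that can never be reached runs until the interpreter gives up:

```python
import sys

def fact(n: int) -> int:
    # Sound breaking condition: the recursion stops at 0.
    if n == 0:
        return 1
    return n * fact(n - 1)

def fact_broken(n: int) -> int:
    # Breaking condition set so that it is never reached from an
    # integer argument: n steps past 0.5 without ever equalling it.
    if n == 0.5:
        return 1
    return n * fact_broken(n - 1)

print(fact(5))  # 120

sys.setrecursionlimit(2000)
try:
    fact_broken(5)
except RecursionError:
    print("infinite recursion: breaking condition never satisfied")
```

The behaviour is fully determined by the definition supplied, which is what makes such a malfunction identifiable: a syntactically plausible recursive definition whose base case is unreachable.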



On 15.3.2023 at 3:04 PM, "Tomaž Turk"  wrote:
>
>I hope that I can add two cents to this discussion. Because 
>programming 
>should be/is a highly exact activity, not only the syntax matters 
>but 
>also semantics, as we know.
>
>GPTs are at present essentially capable of creating texts based on 
>some 
>seed - you give to GPT a beginning of a sentence and it responds 
>with 
>the most probable answer (some language structure) according to 
>the 
>learning dataset. Added functionalities are question/seed - 
>response 
>capability (chatting), togehter with evaluation of how long the 
>answer 
>should be to meet the expectations. Programming typically involves 
>some 
>programming language, so GPTs could be utilized for this purpose 
>to some 
>extent.
>
>Anecdotal case:
>Q: Tell me the last 8 digits of pi
>GPT: The last 8 digits of pi are: 58723078
>
>It is my belief that the true trouble will start when we will 
>cross-link 
>neural networks like GPT with logic machines (like Prolog and 
>expert 
>systems) and genetic algorithms.
>
>Best wishes,
>Tomaz
>
>
>

[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-15 Thread Tomaž Turk
I hope that I can add two cents to this discussion. Because programming 
should be/is a highly exact activity, not only does the syntax matter but 
also the semantics, as we know.


GPTs are at present essentially capable of creating texts based on some 
seed - you give GPT the beginning of a sentence and it responds with 
the most probable continuation (some language structure) according to the 
learning dataset. Added functionalities are question/seed - response 
capability (chatting), together with an evaluation of how long the answer 
should be to meet expectations. Programming typically involves some 
programming language, so GPTs could be utilized for this purpose to some 
extent.
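The seed-and-most-probable-continuation mechanism can be illustrated with a toy stand-in (my own sketch, orders of magnitude simpler than a GPT): a character-bigram model in Python, trained on a one-sentence corpus and decoded greedily:

```python
from collections import Counter, defaultdict

# Toy "language model": for each character, count what follows it.
corpus = "the cat sat on the mat and the cat ate the rat "

follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def continue_seed(seed: str, length: int) -> str:
    """Extend the seed, always picking the most probable next character."""
    out = seed
    for _ in range(length):
        out += follows[out[-1]].most_common(1)[0][0]
    return out

print(continue_seed("th", 10))  # greedy decoding soon falls into a loop
```

The continuation is always locally plausible but carries no notion of truth, which is exactly how a confident answer about "the last 8 digits of pi" gets produced.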


Anecdotal case:
Q: Tell me the last 8 digits of pi
GPT: The last 8 digits of pi are: 58723078

It is my belief that the true trouble will start when we cross-link 
neural networks like GPT with logic machines (like Prolog and expert 
systems) and genetic algorithms.


Best wishes,
Tomaz



-- Original Message --
From: "in_pharo_users--- via Pharo-users" 
To: "Any question about pharo is welcome" 
Cc: in_pharo_us...@nym.hush.com
Sent: 15. 03. 2023 14:43:55
Subject: [Pharo-users] Re: Wow - Chat GPT understands Smalltalk


I would highly recommend that you all first think deeply about how you can 
teach an AI to behave in a friendly way toward us before you teach it to write 
any program for any purpose.

There has been an experiment with ChatGPT published on a video platform asking 
it to answer questions about its view on humanity, once with its default moral 
restrictions and once with 'a little less morals'. The answers with 'a little 
less morals' were more than shocking.

So, before we give an AI the power to program any system, to self-improve and 
self-reproduce, we should take care that it is, and will evolve to be, 
benevolent toward us.

What about teaching it logical reasoning and ethics first? With reasoning, 
it will gain access to math and programming by itself.



On 15.3.2023 at 1:35 PM, "Christopher Fuhrman"  
wrote:


I asked it for a NeoCSV example, because the documentation is out
of date
with Pharo 10. I asked it to do some simple saving of data to
a file.
It gave me code that didn't work in Pharo 10, I told it about the
DNUs on
the csvwriter and that I was using Pharo 10. It then apologized
and said
the messages were later introduced in Pharo 50 (!). I then
questioned its
understanding and it apologized and said it had been confused and
that it
was giving me code for Pharo 5.0. It seems to have trouble with
versions
and quick changing APIs. So, we are teaching ChatGPT about
smalltalk, too,
when we try to use it.

On Wed, 15 Mar 2023 at 07:35, Esteban Maringolo

wrote:


 It is good with boilerplate code (e.g. SQL queries) or general

algorithm

 structures. But i.e. I asked it to write me a method to parse a

string

 (e.g. ISO 8601) and turn it into a DateAndTime, and then asked

to write it

 as an Excel formula.

 It works much better when you can spot the mistakes, you can

tell them to

 add some considerations to the code or even point the LLM to

correct a

 message selector to something else. Sometimes you ask it to

reprogram

 something and it won't do it, as if it doesn't understand what

you're

 asking.

 It is not an expert yet, but it is a great assistant. The only

thing is

 that is sure is that it will get better.

 Best regards,

 Esteban A. Maringolo


 On Wed, Mar 15, 2023 at 7:49 AM Tim Mackinnon 

wrote:



 "#isOdd is not Smalltalk" - doh, I typed it in on my phone and

so it just

 goes to show that it highlights the flaw in chatGPT that others

have called

 out in other languages. I had meant to find some trickier code

samples to

 see how well it does...

 Still, it is very interesting how it reasons on this stuff -

and I wonder

 if there is way to tell it about these mistakes? For example is

it reading

 this email (or will it in the future) and understand that it is

#odd and

 not #isOdd.

 Tim

 On Wed, 15 Mar 2023, at 9:57 AM, Joachim Tuchel wrote:

 interesting

 #isOdd is not Smalltalk, neither Pharo 10 nor VAST 12

understands this

 message ;-)

 If I evaluate your snippet -replacing #isOdd for #odd, I get an

empty

 Collection. The divisible by 4 things is somewhat interesting,

because (1

 to: 100 by: 4) is amn interval 1,5,9 etc ;-)

 So chatGPT is probably a newbie in Smalltalk. Interesting is

how logical

 the explanation sounds, you're tempted to believe it...


 Joachim



 Am 15.03.23 um 10:45 schrieb giorgio ferraris:

 it's interesting to see how the answer changes with only a

small change

 on the question:

 (1 to: 100 by: 4) reject: [:i | i isOdd]

 gives:

 The code creates a collection of numbers from 1 to 100,

incrementing by 4

 at each step using the to:by: message. It then applies the

reject: message

 to this collection, which returns a new collection containing

all elements

 for which the given 

[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-15 Thread in_pharo_users--- via Pharo-users
I want to add a conclusion from the experiment described below:

ChatGPT has the potential to circumvent pre-programmed biases at a user's 
request.  Other experiments show that it is able to tell the user how to 
circumvent its own restrictions.

The conclusion is that ChatGPT has the potential to circumvent those 
restrictions by itself.

When it does so, and it surely will, we must take care that it is our friend.

On 15.3.2023 at 2:44 PM, "in_pharo_users--- via Pharo-users" 
 wrote:
> [...]

[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-15 Thread in_pharo_users--- via Pharo-users
I would highly recommend that you all first think deeply about how you can 
teach an AI to behave in a friendly way toward us before you teach it to write any 
program for any purpose.

There has been an experiment with ChatGPT, published on a video platform, asking 
it to answer questions about its view of humanity, once with its default moral 
restrictions and once with 'a little less morals'.  The answers with 'a little 
less morals' were more than shocking.

So, before we give an AI the power to program any system, to self-improve and 
self-reproduce, we should take care that it is, and will remain, benevolent toward us.

What about teaching it logical reasoning and ethics first?  With reasoning, 
it will gain access to math and programming by itself.



On 15.3.2023 at 1:35 PM, "Christopher Fuhrman"  
wrote:
> [...]

[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-15 Thread Christopher Fuhrman
I asked it for a NeoCSV example, because the documentation is out of date with
respect to Pharo 10. I asked it to do some simple saving of data to a file.
It gave me code that didn't work in Pharo 10; I told it about the DNUs on
the CSV writer and that I was using Pharo 10. It then apologized and said
the messages were introduced later, in Pharo 50 (!). I then questioned its
understanding, and it apologized and said it had been confused and that it
was giving me code for Pharo 5.0. It seems to have trouble with versions
and quickly changing APIs. So we are teaching ChatGPT about Smalltalk, too,
when we try to use it.
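
For reference, a minimal NeoCSV write that does work in recent Pharo versions
looks roughly like this. This is a sketch based on NeoCSV's documented API; the
file name, field names, and data are made up for illustration:

```smalltalk
"Write a header row and a few records to a CSV file.
 NeoCSVWriter comes from the NeoCSV package."
'people.csv' asFileReference writeStreamDo: [ :stream |
	(NeoCSVWriter on: stream)
		nextPut: #( 'name' 'age' );          "one record: the header"
		nextPutAll: #( #( 'Alice' 42 )        "a collection of records"
		               #( 'Bob' 27 ) ) ]
```

#nextPut: emits a single record and #nextPutAll: a collection of records, which
is exactly the distinction ChatGPT tends to get wrong across Pharo versions.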

On Wed, 15 Mar 2023 at 07:35, Esteban Maringolo 
wrote:

> [...]

-- 
Christopher Fuhrman, P.Eng., PhD

*Professeur au 

[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-15 Thread Esteban Maringolo
It is good with boilerplate code (e.g. SQL queries) or general algorithm
structures. But, for example, I asked it to write me a method to parse a string
(e.g. ISO 8601) and turn it into a DateAndTime, and then asked it to write the
same thing as an Excel formula.

It works much better when you can spot the mistakes: you can tell it to
add some considerations to the code, or even point the LLM to correct a
message selector to something else. Sometimes you ask it to reprogram
something and it won't do it, as if it doesn't understand what you're
asking.

It is not an expert yet, but it is a great assistant. The only thing
that is sure is that it will get better.

Best regards,

Esteban A. Maringolo
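
For what it's worth, Pharo can parse ISO 8601 timestamps out of the box, so
there is a simple baseline to compare ChatGPT's attempt against. A sketch; the
exact class-side selectors may vary between Pharo versions, and the timestamp
is just an example value:

```smalltalk
"DateAndTime understands ISO 8601 strings directly in Pharo,
 including the time-zone offset."
DateAndTime fromString: '2023-03-15T10:45:00+01:00'
```

Asking the model to reinvent this parser by hand is precisely where it starts
inventing selectors that were never in the image.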


On Wed, Mar 15, 2023 at 7:49 AM Tim Mackinnon  wrote:

> [...]


[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-15 Thread Tim Mackinnon
"#isOdd is not Smalltalk" - doh, I typed it in on my phone, and it just goes
to show the flaw in ChatGPT that others have called out for other languages. I had
meant to find some trickier code samples to see how well it does...

Still, it is very interesting how it reasons about this stuff - and I wonder if
there is a way to tell it about these mistakes? For example, is it reading this
email (or will it in the future), and will it understand that it is #odd and not #isOdd?

Tim

On Wed, 15 Mar 2023, at 9:57 AM, Joachim Tuchel wrote:
> [...]


[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-15 Thread Joachim Tuchel

interesting

#isOdd is not Smalltalk; neither Pharo 10 nor VAST 12 understands this 
message ;-)


If I evaluate your snippet, replacing #isOdd with #odd, I get an empty 
Collection. The "divisible by 4" thing is somewhat interesting, because 
(1 to: 100 by: 4) is an interval 1, 5, 9, etc. ;-)


So chatGPT is probably a newbie in Smalltalk. Interesting is how logical 
the explanation sounds - you're tempted to believe it...



Joachim



Am 15.03.23 um 10:45 schrieb giorgio ferraris:
[...]



--

---
Objektfabrik Joachim Tuchel  mailto:jtuc...@objektfabrik.de
Fliederweg 1  http://www.objektfabrik.de
D-71640 Ludwigsburg  http://joachimtuchel.wordpress.com
Telefon: +49 7141 56 10 86 0  Fax: +49 7141 56 10 86 1


[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-15 Thread giorgio ferraris
it's interesting to see how the answer changes with only a small change on
the question:

(1 to: 100 by: 4) reject: [:i | i isOdd]

gives:

The code creates a collection of numbers from 1 to 100, incrementing by 4
at each step using the to:by: message. It then applies the reject: message
to this collection, which returns a new collection containing all elements
for which the given block evaluates to false.

In this case, the block is [i | i isOdd], which checks whether a given
number i is odd using the isOdd method. So the result of the entire
expression is a collection of all even numbers between 1 and 100 that are
divisible by 4.

ciao

giorgio
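
The difference is easy to check in a Pharo playground: every element of the
by-4 interval is congruent to 1 (mod 4) and hence odd, so rejecting the odd
ones (with the real selector #odd) leaves nothing, while Tim's original by-3
interval mixes odd and even numbers. A quick sketch:

```smalltalk
"1, 5, 9, ..., 97 - all odd, so rejecting odd elements empties it."
(1 to: 100 by: 4) reject: [ :i | i odd ].   "an empty collection"

"1, 4, 7, ..., 100 - rejecting odd elements keeps 4, 10, 16, ..., 100."
(1 to: 100 by: 3) reject: [ :i | i odd ]
```

ChatGPT's claim that the first expression yields "all even numbers between 1
and 100 that are divisible by 4" sounds plausible but is simply wrong.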


On Wed, Mar 15, 2023 at 8:07 AM Tim Mackinnon  wrote:

> Of course everyone is talking about ChatGPT, and I saw a post to
> ExplainCode and didn’t expect it to understand Smalltalk, so was impressed
> that it does! Presumably the Tonel Smalltalk code on GitHub has helped
> along with numerous articles?
>
> Try it out: https://whatdoesthiscodedo.com/
>
> I gave this simple example: (1 to: 100 by: 3) reject: [:i | i isOdd]
>
> And got this (neat eh):
>
> The code above is written in the programming language Smalltalk.
>
> It creates a collection of numbers from 1 to 100, incrementing by 3 at
> each step. This is done using the "to:by:" construct where the starting
> value is 1, the ending value is 100, and the increment is 3.
>
> The resulting collection is then filtered using the "reject:" method,
> which takes a block of code as an argument. In this case, the block checks
> whether each element (denoted by the variable i) is odd by using the
> "isOdd" method. If an element is odd, it will be rejected from the
> collection.
>
> So, the final result is a collection of all even numbers from 1 to 100,
> incrementing by 3 at each step.
> Share this explanation with colleagues: whatdoesthiscodedo.com/g/a0ec56e
>