Re: [FRIAM] Honeymoon over!

2024-01-29 Thread Steve Smith
With the integration of LLMs (and other ML) into AR and personal-assistant 
tech riding around on early adopters' "shoulders", I would expect these 
perceive-reason-act structures to be "in training", essentially learning 
how to emulate (and extrapolate) their user's/wearer's/familiar's 
decision processes?


It would seem that this is where Pearl's and Glymour's causal inference 
models would be directly applicable?
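As a toy illustration of why Pearl-style causal inference differs from pattern-matching, here is a backdoor-adjustment calculation with entirely made-up numbers: a confounder Z drives both the chosen action X and the outcome Y, so the interventional quantity P(Y=1 | do(X=1)) differs from the observational P(Y=1 | X=1) that a purely correlational learner would pick up.

```python
# Toy backdoor adjustment (Pearl). All probabilities are hypothetical,
# chosen only to make the intervention/observation gap visible.

p_z = {0: 0.6, 1: 0.4}                      # P(Z=z), the confounder
p_x1_given_z = {0: 0.2, 1: 0.8}             # P(X=1 | Z=z)
p_y1_given_xz = {(1, 0): 0.3, (1, 1): 0.7,  # P(Y=1 | X=x, Z=z)
                 (0, 0): 0.1, (0, 1): 0.5}

# Interventional: do(X=1) cuts the Z -> X edge, so we average over P(Z).
p_do = sum(p_y1_given_xz[(1, z)] * p_z[z] for z in p_z)

# Observational: conditioning on X=1 reweights Z toward whoever tends
# to choose X=1, smuggling the confounder's effect into the estimate.
joint_x1 = {z: p_x1_given_z[z] * p_z[z] for z in p_z}
norm = sum(joint_x1.values())
p_obs = sum(p_y1_given_xz[(1, z)] * joint_x1[z] / norm for z in p_z)
```

With these numbers the interventional effect comes out lower than the observational association, i.e. naively imitating observed behavior would overestimate the action's benefit.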


I read somewhere that Tesla's data gathered from their Self-Driving 
features represents a somewhat unique data set because of these 
perceive/reason/act implications. Does (less than full) self-driving 
car tech not represent a real-life training opportunity?


An AR-enhanced ML personal assistant would seem to be an equally obvious 
place to begin to bootstrap training an AI in "everyday activities"?



On 1/28/24 5:23 PM, Russ Abbott wrote:
Thanks, Jochen, I know about LangChain. I'm not claiming that LLMs 
cannot be used as elements of larger computations, just that LLMs on 
their own are quite limited. I'll make that point in the talk if the 
abstract is accepted.

-- Russ Abbott
Professor Emeritus, Computer Science
California State University, Los Angeles


On Sun, Jan 28, 2024 at 1:31 PM Jochen Fromm  wrote:

Langchain is an agent framework started by Harrison Chase. A
Langchain agent uses LLMs to reason in a perceive-reason-act
cycle. One could argue that Langchain agents are able to think,
and we are even able to watch them thinking
https://github.com/langchain-ai/langchain
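For intuition, a perceive-reason-act loop of the kind Langchain agents run can be sketched in a few lines. The "LLM" below is a hard-coded stub, and every name here is illustrative, not the actual LangChain API; the point is only that the reasoning step produces a visible trace we can watch.

```python
# A minimal perceive-reason-act loop in the spirit of Langchain-style
# agents. stub_llm stands in for a real model call.

def stub_llm(observation):
    """Stand-in for an LLM call: maps the current observation to an action."""
    policy = {
        "door is closed": "open door",
        "door is open": "walk through",
        "inside room": "stop",
    }
    return policy.get(observation, "stop")

# How the (toy) world responds to each action.
TRANSITIONS = {
    ("door is closed", "open door"): "door is open",
    ("door is open", "walk through"): "inside room",
}

def run_agent(state, max_steps=10):
    """Perceive the state, ask the 'LLM' to reason, act, repeat."""
    trace = []
    for _ in range(max_steps):
        action = stub_llm(state)       # reason
        trace.append((state, action))  # the visible 'thinking' trace
        if action == "stop":
            break
        state = TRANSITIONS.get((state, action), state)  # act, then perceive
    return trace

trace = run_agent("door is closed")
```

The returned trace is the agent's step-by-step (state, action) history, which is roughly what "watching them thinking" amounts to in agent frameworks.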

deeplearning.ai has free courses about Langchain

https://www.deeplearning.ai/short-courses/langchain-for-llm-application-development/

-J.


-------- Original message --------
From: Russ Abbott 
Date: 1/28/24 9:58 PM (GMT+01:00)
To: The Friday Morning Applied Complexity Coffee Group
Subject: Re: [FRIAM] Honeymoon over!

Sorry you couldn't get through. The abstract for the abstract had
to be submitted separately. Here it is.

LLMs are strikingly good at generating text: their output is
syntactically correct, coherent, and plausible. They seem capable
of following instructions and of carrying out meaningful
conversations. LLMs achieve these results by using transformers to
produce text based on complex patterns in their training data. But
powerful though they are, transformers have nothing to do with
reasoning. LLMs have no means to build or to reason from internal
models; they cannot backtrack or perform exploratory search; they
cannot perform after-the-fact analysis; and they cannot diagnose
and correct errors. More generally, LLMs cannot formulate, apply,
or correct strategies or heuristics. In short, LLMs are not a step
away from Artificial General Intelligence.
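For readers who want a concrete picture of the "backtrack or perform exploratory search" capability the abstract refers to, here is the textbook version of it, sketched on the 4-queens puzzle (the example and all names are mine, not the abstract's): a depth-first search that abandons dead-end branches and tries alternatives.

```python
# Backtracking exploratory search, illustrated on the 4-queens puzzle.
# `placed` is a tuple of column choices, one per row already filled.

def safe(placed, col):
    """Can a queen go in the next row at column `col`?"""
    row = len(placed)
    return all(c != col and abs(c - col) != row - r
               for r, c in enumerate(placed))

def solve_n_queens(n, placed=()):
    """Depth-first search: extend a partial placement, backtracking on dead ends."""
    if len(placed) == n:
        return list(placed)
    for col in range(n):
        if safe(placed, col):
            result = solve_n_queens(n, placed + (col,))
            if result is not None:
                return result
    return None  # dead end: the caller abandons this branch and tries the next

solution = solve_n_queens(4)
```

The search visits several partial placements, hits contradictions, retreats, and tries alternatives; the abstract's claim is that a transformer's single forward pass has no analogue of this retreat-and-retry loop.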

A pdf of the full abstract is attached.
-- Russ

On Sun, Jan 28, 2024 at 10:12 AM Steve Smith  wrote:




And if you're interested, my long abstract submission to
IACAP-2024
<https://pretalx.iacapconf.org/iacap-2024/me/submissions/N388VQ/>
has related thoughts. (Scroll down until you get to the link
for the actual paper.)


Russ -

I am interested in reading your abstract/paper...

I signed up for an IACAP account but the link you provided
seems to be dead?

- Steve

-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p
Zoom https://bit.ly/virtualfriam
to (un)subscribe
http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present
https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021 http://friam.383.s1.nabble.com/




Re: [FRIAM] Honeymoon over!

2024-01-28 Thread Russ Abbott
Thanks, Jochen, I know about LangChain. I'm not claiming that LLMs cannot
be used as elements of larger computations, just that LLMs on their own are
quite limited. I'll make that point in the talk if the abstract is accepted.

-- Russ Abbott
Professor Emeritus, Computer Science
California State University, Los Angeles




Re: [FRIAM] Honeymoon over!

2024-01-28 Thread Jochen Fromm
Langchain is an agent framework started by Harrison Chase. A Langchain agent 
uses LLMs to reason in a perceive-reason-act cycle. One could argue that 
Langchain agents are able to think, and we are even able to watch them 
thinking
https://github.com/langchain-ai/langchain

deeplearning.ai has free courses about Langchain
https://www.deeplearning.ai/short-courses/langchain-for-llm-application-development/

-J.


Re: [FRIAM] Honeymoon over!

2024-01-28 Thread Steve Smith




And if you're interested, my long abstract submission to IACAP-2024 has 
related thoughts. (Scroll down until you get to the link for the 
actual paper.)


Russ -

I am interested in reading your abstract/paper...

I signed up for an IACAP account but the link you provided seems to be dead?

- Steve


Re: [FRIAM] Honeymoon over!

2024-01-27 Thread Russ Abbott
For more along these lines see Gary Marcus' latest. He
must have written it fast and didn't proofread it. It's worth
reading nevertheless.

And if you're interested, my long abstract submission to IACAP-2024 has
related thoughts. (Scroll down until you get to the link for the actual
paper.)

-- Russ Abbott
Professor Emeritus, Computer Science
California State University, Los Angeles




[FRIAM] Honeymoon over!

2024-01-26 Thread Steve Smith

GPT is dead, long live LLMs!

The following is a pretty good (IMO) reflection on what GPT is bad (and 
good) for.


https://medium.com/@jordan_gibbs/how-to-not-use-chatgpt-8088ec559681

I've been messing with GPT3/4 and Bard for most of a year now and the 
honeymoon is definitely over, not that it ever started.


I like to refer to them as "bar friends" because my expectations of them 
fall just about where my expectations of a new bar friend might be. I 
don't expect them to be interesting, much less informative or useful on 
any given topic, but am pleasantly surprised if/when/as they turn out to 
be any of the above.


I rarely take the advice of a "bar friend" at face value, but do find 
that they can often bring new perspectives from either their unique 
personality or their unique experiences. This is not to say I don't 
"trust" my bar-friends, just that I trust them to be who they are, even 
though I likely don't *know* who they are.


I feel I've come to know GPT and Bard well enough to agree with Gibbs 
(above) about their limitations and biases...


My main use of them seems to have degenerated to A) a fancier/easier 
interface to web-search; B) Brainstorming on new ideas; C) Burning off 
my excess-ideation energy.


I have also used them effectively to *re*start programming projects which 
I've abandoned, bringing me back up to speed on syntax more efficiently 
than 1) RTFM; 2) cut-and-try with compile/execute tools.


Caveats:

   A) I have never been (known to me) fooled by their propensity to
   "make shit up"... either I am skeptical enough or already have
   enough knowledge that they haven't slipped anything past me, though
   they have "tried". Or maybe they are slicker than I know?

   B) Given that I am pretty loosey-goosey in my own flights of fancy
   when it comes to Brainstorming, I don't feel they have ever led
   *me* astray. If *they* could be led astray, it would be more
   likely in that direction.

   C) Mary (and FriAM and several other friends) don't have to endure
   *as much* of my "flying off in all directions at once".

   Coding:  Once I've got my sea (C? Java/Python/JavaScript/PS/???)
   legs back under me, GPT is only minimally useful (usually to outline
   an algorithm I'm familiar with but have forgotten or am
   too-lazy-to-reconstruct details of) and generally distracting,
   creating tangents and dead-ends that I don't need.

Of course GPT-5 and/or SteroidBard will roll out some day and I'll 
either be re-enamored or so jaded as to not-bother... who knows?


I'm curious what others here experience with these tools.  SG is the 
only one I know to be as (or more) engaged than I am, but I suspect a 
few here have done some time with these tools from each of your unique 
perspectives?

