With the integration of LLMs (and other ML) into AR and personal-assistant tech riding around on early adopters' shoulders, I would expect these perceive-reason-act structures to be "in training", essentially learning how to emulate (and extrapolate) their user's/wearer's/familiar's decision processes?
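
A minimal sketch of the loop I have in mind (a hypothetical llm() callable plus sensor/actuator stubs; all names purely illustrative):

    def perceive(sensors):
        # collapse camera/audio/context readings into a text observation
        return "; ".join(s.read() for s in sensors)

    def reason(llm, observation, history):
        # ask the LLM to pick the next action, conditioned on what the
        # wearer has done so far; this is where the emulation would happen
        prompt = f"History: {history}\nObservation: {observation}\nNext action:"
        return llm(prompt)

    def act(actuators, action):
        # dispatch the chosen action to whichever effector handles it
        verb = action.split()[0] if action.strip() else ""
        actuators.get(verb, lambda a: None)(action)

    def run(llm, sensors, actuators, steps=10):
        history = []
        for _ in range(steps):
            obs = perceive(sensors)
            action = reason(llm, obs, history)
            act(actuators, action)
            history.append((obs, action))   # this trace is the "in training" data
        return history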

It would seem that this is where Pearl and Glymour's causal inference models would be directly applicable?
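
For instance, Pearl's backdoor adjustment (estimate P(outcome | do(action)) by averaging P(outcome | action, context) over the context) is only a few lines on synthetic data. Toy numbers, purely illustrative of what such an agent could compute from its own perceive/reason/act logs:

    import random
    random.seed(0)

    # synthetic logs: z = context (confounder), x = action taken, y = outcome
    data = []
    for _ in range(10000):
        z = random.random() < 0.5
        x = random.random() < (0.8 if z else 0.2)                        # action depends on context
        y = random.random() < (0.6 if x else 0.3) + (0.2 if z else 0.0)  # outcome depends on both
        data.append((z, x, y))

    def p_z(z_val):
        return sum(1 for d in data if d[0] == z_val) / len(data)

    def p_y_given_xz(x_val, z_val):
        rows = [d for d in data if d[1] == x_val and d[0] == z_val]
        return sum(d[2] for d in rows) / len(rows)

    # backdoor adjustment: P(y | do(x)) = sum over z of P(y | x, z) * P(z)
    def p_y_do_x(x_val):
        return sum(p_y_given_xz(x_val, z) * p_z(z) for z in (True, False))

    naive = (sum(d[2] for d in data if d[1]) / sum(1 for d in data if d[1])
             - sum(d[2] for d in data if not d[1]) / sum(1 for d in data if not d[1]))
    print("naive (confounded) difference:", naive)
    print("adjusted (causal) difference:", p_y_do_x(True) - p_y_do_x(False))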

I read somewhere that the data Tesla gathers from its self-driving features represents a somewhat unique dataset because of these perceive/reason/act implications. Does (less-than-full) self-driving car tech not represent a real-life training opportunity?

An AR-enhanced ML personal assistant would seem an equally obvious place to begin bootstrapping an AI's training in "everyday activities"?


On 1/28/24 5:23 PM, Russ Abbott wrote:
Thanks, Jochen, I know about LangChain. I'm not claiming that LLMs cannot be used as elements of larger computations, just that LLMs on their own are quite limited. I'll make that point in the talk if the abstract is accepted.
-- Russ Abbott
Professor Emeritus, Computer Science
California State University, Los Angeles


On Sun, Jan 28, 2024 at 1:31 PM Jochen Fromm <j...@cas-group.net> wrote:

    Langchain is an agent framework started by Harrison Chase. A
    Langchain agent uses LLMs to reason in a perceive-reason-act
    cycle. One could argue that Langchain agents are able to think,
    and we are even able to watch them thinking:
    https://github.com/langchain-ai/langchain
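
    A minimal agent along these lines, sketched against the older
    initialize_agent interface (module paths and names vary across
    Langchain versions, so treat this as illustrative only):

        from langchain.agents import AgentType, initialize_agent, load_tools
        from langchain.llms import OpenAI

        llm = OpenAI(temperature=0)                # the "reason" step
        tools = load_tools(["llm-math"], llm=llm)  # the "act"/"perceive" (observation) step
        agent = initialize_agent(
            tools, llm,
            agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
            verbose=True,   # prints the Thought/Action/Observation trace
        )
        agent.run("What is 17 raised to the 0.43 power?")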

    deeplearning.ai has free courses about Langchain:
    https://www.deeplearning.ai/short-courses/langchain-for-llm-application-development/

    -J.


    -------- Original message --------
    From: Russ Abbott <russ.abb...@gmail.com>
    Date: 1/28/24 9:58 PM (GMT+01:00)
    To: The Friday Morning Applied Complexity Coffee Group
    <friam@redfish.com>
    Subject: Re: [FRIAM] Honeymoon over!

    Sorry you couldn't get through. The abstract for the abstract had
    to be submitted separately. Here it is.

    LLMs are strikingly good at generating text: their output is
    syntactically correct,  coherent, and plausible. They seem capable
    of following instructions and of carrying out meaningful
    conversations. LLMs achieve these results by using transformers to
    produce text based on complex patterns in their training data. But
    powerful though they are, transformers have nothing to do with
    reasoning. LLMs have no means to build or to reason from internal
    models; they cannot backtrack or perform exploratory search; they
    cannot perform after-the-fact analysis; and they cannot diagnose
    and correct errors.  More generally, LLMs cannot formulate, apply,
    or correct strategies or heuristics. In short, LLMs are not a step
    away from Artificial General Intelligence.

    A pdf of the full abstract is attached.
    -- Russ

    On Sun, Jan 28, 2024 at 10:12 AM Steve Smith <sasm...@swcp.com> wrote:



        And if you're interested, my long abstract submission to
        IACAP-2024
        <https://pretalx.iacapconf.org/iacap-2024/me/submissions/N388VQ/>
        has related thoughts. (Scroll down until you get to the link
        for the actual paper.)

        Russ -

        I am interested in reading your abstract/paper...

        I signed up for an IACAP account but the link you provided
        seems to be dead?

        - Steve

-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/
