I mostly agree with Marcus' sentiment. The dot-com analogy may be apt, but
it also smells a bit too easy. I find the K-shaped pattern of AI adoption
bizarre. Personally, I do not believe LLMs, or any particular architecture,
are the be-all and end-all. I suspect we will see a transition away from
throwing money at developing the most general form and toward more
idiosyncratic instantiations. For instance, I continue to think that
DeepMind did meaningful work taking the RL path with AlphaGo and the Atari
games, and I have yet to see what happens when Transformers attempt to
replicate those successes. Almost every LLM I have met is really, really
bad at Go. That said, AI in its current form, and from this perspective,
has been here for a decade. Some have adopted it and use it to surprising
effect; others treat LLMs as nothing more than a robust database query
language. What people do with it and how they
perceive it will undoubtedly have an impact. In the meantime, I am excited
to see what happens as programmers learn to use formal type theories as
pidgins and LLMs become more amenable to compositionality.
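To make that last remark slightly less hand-wavy, here is a toy sketch of
what I mean by a type signature doing duty as a pidgin. This is purely my
own hypothetical illustration, not anyone's actual workflow or tooling: the
programmer supplies only the signature, the signature carries most of the
specification, and the body is the sort of completion an LLM (or a human)
might hand back.

-- The programmer's half of the exchange: a signature as specification.
-- "Given a way to score an `a`, return the highest-scoring element, if any."
argmax :: Ord b => (a -> b) -> [a] -> Maybe a
-- One plausible completion returned in reply:
argmax _ []     = Nothing
argmax f (x:xs) = Just (foldl step x xs)
  where
    step best y = if f y > f best then y else best

main :: IO ()
main = print (argmax negate [3, 1, 4, 1, 5 :: Int])  -- Just 1

The point is not the code itself but that the type, not the English, did
most of the talking.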
.- .-.. .-.. / ..-. --- --- - . .-. ... / .- .-. . / .-- .-. --- -. --. / ... --- -- . / .- .-. . / ..- ... . ..-. ..- .-..