A lot of these people suddenly jumping on the AGI bandwagon -- if they
say it doesn't take a lot of compute and they're not going to use LLMs,
where were they for the last 20+ years?

" Unlike large language models
<https://www.techopedia.com/definition/34948/large-language-model-llm> like
ChatGPT <https://chat.openai.com/>, which* parrot* information back in a
human-like way, AGI is a form of artificial intelligence that learns the
way humans learn, but has processing speed and capacity far beyond what
humans can do"
ref:
https://edmonton.taproot.news/news/2023/09/26/tech-pioneers-partner-to-create-artificial-general-intelligence-by-2030

I wish Emily Bender's "parrot" schtick would die off.


On Tue, Sep 26, 2023 at 12:00 PM James Bowery <[email protected]> wrote:

> https://www.youtube.com/watch?v=aM7F5kuMjRA&t=5652s
>
> "The data's not that large, the compute's probably not *that* large in
> the larger scheme of things.  A point I make: A year of life is a billion
> frames at 30 frames per second; and that fits on a thumb drive.  You can
> tell that a one year old is a conscious intelligent being, so it does not
> require all the data on the Internet to demonstrate artificial general
> intelligence if your algorithm is correct."
>
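The frame count there checks out; whether a year of it fits on a thumb
drive depends entirely on how hard the video is compressed.  A quick
back-of-the-envelope check in Python (the per-frame sizes are my own
illustrative assumptions, not Carmack's figures):

    # Sanity check of the "billion frames on a thumb drive" arithmetic.
    # 30 fps comes from the quote; the per-frame sizes are assumed, for scale.
    SECONDS_PER_YEAR = 365 * 24 * 60 * 60   # ~31.5 million seconds
    FPS = 30

    frames_per_year = SECONDS_PER_YEAR * FPS
    print(f"Frames in a year at {FPS} fps: {frames_per_year:,}")  # ~946 million

    # Total storage under a few assumed average compressed-frame sizes.
    for kb_per_frame in (0.5, 1, 4):
        total_tb = frames_per_year * kb_per_frame * 1024 / 1e12
        print(f"At ~{kb_per_frame} KB/frame: ~{total_tb:.2f} TB")

Around a kilobyte per compressed frame puts a year of video near a
terabyte, which is within reach of today's larger flash drives.
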
> Sara Hooker's indictment of AGI research in "The Hardware Lottery" has its
> answer in the Hutter Prize's restrictions on hardware.  Of all people,
> Carmack should be able to relate to the path dependency of the emerging AI
> industry on GPUs.  Hutter's is a rigorously founded direction for AGI
> research, and it comes from one of the people most qualified to set it.
> This is as opposed to engineering/technology/development -- all of which
> are utility-driven, which is where we get into the "ought" of intelligent
> decision making as opposed to the "is" of natural science.
> We need a lot more focus on what "is" the case.
>
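For anyone who hasn't looked at it: the Hutter Prize rewards losslessly
compressing a fixed 1 GB snapshot of English Wikipedia (enwik9) under
strict CPU-time and memory limits, with no GPUs -- the premise being that
better compression means better modelling of the text.  A minimal sketch
of that intuition, using an off-the-shelf compressor as a stand-in for a
learned model (the sample text and the choice of LZMA are my own
illustration, not the contest setup):

    import lzma

    # Compression ratio as a rough proxy for how much structure a model has
    # captured -- the intuition behind compression-based benchmarks like the
    # Hutter Prize.  Sample text and compressor are illustrative only.
    text = ("Wikipedia is a free online encyclopedia, created and edited "
            "by volunteers around the world.") * 50
    raw = text.encode("utf-8")
    compressed = lzma.compress(raw, preset=9)

    print(f"raw bytes:          {len(raw)}")
    print(f"compressed bytes:   {len(compressed)}")
    print(f"bits per character: {8 * len(compressed) / len(raw):.2f}")
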
> The current hysteria over what "ought" to be the case "is" unhinged.
> Wikipedia's data should be more than adequate for a decent start.
> Moreover, if Wikipedia is truly comprehended, then the latent points of
> view going into its content will be exposed, along with their biases, as a
> necessary byproduct.  That will be a very interesting aspect of what "is"
> the case.

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Td500c6985165fa72-Meadf71e19ac680ef3691f4b5
Delivery options: https://agi.topicbox.com/groups/agi/subscription
