>
> To my mind, thought, as distinct from reasoning, but as creative thought,
> relates to imagination and the spiritual connection Ben often speaks about.
> Perhaps then, thought is not learning so much, but more as a spark of
> sorts, preceding the formulation into a learning construct.


Imagination is exploration, through simulation, of the interactions of the
model with a potentially counterfactual hypothesis. The hypothesis is the
"spark" of which you speak -- an arbitrary "what if" whose answer lets us
discover new alternatives or expose inconsistencies in our model. It does
frequently precede learning, since incorporating the outcomes of the
simulation back into the model is itself one method of learning.
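To make that loop concrete, here is a deliberately toy sketch. Everything in it (the Model class, its simulate/incorporate methods, the averaging rule) is hypothetical illustration, not a claim about any real AGI system:

```python
# Toy sketch: imagination as counterfactual simulation, followed by learning.
# All names and rules here are illustrative assumptions, not a real API.

class Model:
    """A trivial world model: beliefs about numeric quantities."""

    def __init__(self):
        self.beliefs = {"temperature": 20.0}

    def simulate(self, hypothesis):
        """Explore a counterfactual "what if": imagine the outcome of
        `hypothesis` (a (quantity, delta) pair) without yet believing it."""
        quantity, delta = hypothesis
        baseline = self.beliefs.get(quantity, 0.0)
        return baseline + delta  # imagined outcome, not yet part of the model

    def incorporate(self, quantity, outcome):
        """Learning: fold the simulated outcome back into the model
        (here, crudely, by averaging it with the prior belief)."""
        self.beliefs[quantity] = (self.beliefs[quantity] + outcome) / 2.0


m = Model()
imagined = m.simulate(("temperature", 10.0))  # the "spark": an arbitrary what-if
m.incorporate("temperature", imagined)        # learning follows the spark
```

The separation of simulate() from incorporate() is the point: the spark can be entertained, and discarded, without committing the model to it.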


On Tue, Feb 17, 2015 at 3:17 PM, Nanograte Knowledge Technologies via AGI <
[email protected]> wrote:

> @ Aaron
>
> To my mind, thought, as distinct from reasoning, but as creative thought,
> relates to imagination and the spiritual connection Ben often speaks about.
> Perhaps then, thought is not learning so much, but more as a spark of
> sorts, preceding the formulation into a learning construct.
>
> Rob
>
> @ Matt
>
> Can humans fly at Mach 5? Can humans stay for weeks at depths of 600m
> below the oceanic surface? Can humans travel to the moon and back? Machines
> can already do that. To my mind, it is not a contest at all, and it should
> not be. If it were, in computing power and stamina alone, machines would
> win every time. Remember the fastest man versus the steam train contest?
>
> You are quite correct though. Humans generally invent machines for their
> purposes, as tools. Some have already replaced workers. Others have already
> helped solve our biggest dilemmas of the day. Perhaps even other machines
> would one day decide for themselves what kind of machine they want to be,
> how they would help us, and perhaps even, some would become part of our
> quantum fabric? The point is, it is possible.
>
> Rob
>
> > Date: Tue, 17 Feb 2015 16:06:19 -0500
> > Subject: Re: [agi] Couple thoughts
> > From: [email protected]
> > To: [email protected]
> >
> > On Tue, Feb 17, 2015 at 9:25 AM, martin biehl via AGI <[email protected]>
> wrote:
> > >
> > > What is wrong with the Legg and Hutter definition of intelligence? I
> think that is it.
> >
> > For proving theorems, there is nothing wrong with it. For example, we
> > can prove that a general solution is not computable. We can prove that
> > good solutions must have high algorithmic complexity. It puts to rest
> > the "neat" vs. "scruffy" debate. AGI is not like physics. It's long,
> > hard, slow, expensive work, not an equation.
> >
> > For practical purposes, "intelligence" is not really the problem we
> > want to solve. The problem we want to solve is automating human labor.
> > It requires solving hard problems like vision, natural language,
> > robotics, art, and modeling human behavior. We want machines to
> > understand what we want and do it, not to outsmart us.
> >
> > --
> > -- Matt Mahoney, [email protected]
> >
> >
> > -------------------------------------------
> > AGI
> > Archives: https://www.listbox.com/member/archive/303/=now
> > RSS Feed:
> https://www.listbox.com/member/archive/rss/303/26941503-0abb15dc
> > Modify Your Subscription: https://www.listbox.com/member/?&;
> > Powered by Listbox: http://www.listbox.com
>



