Yeah. I forgot to mention that robots are not "alive," yet they could act
indistinguishably from what is alive. The concept of "alive" is likely
something that requires inductive reasoning and generalization to learn.
Categorization, similarity analysis, and the like could assist in making
such distinctions as well.
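To make the similarity-analysis point concrete, here is a minimal sketch of a nearest-centroid categorizer. The feature names and numbers are invented for illustration; nothing here is claimed to be how an actual AGI would represent "alive."

```python
# Toy nearest-centroid categorizer: assigns an example to the category whose
# average feature vector is closest. Features and labels are hypothetical.
import math

def centroid(vectors):
    """Mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def categorize(example, categories):
    """Return the label of the category with the nearest centroid."""
    cents = {label: centroid(vs) for label, vs in categories.items()}
    return min(cents, key=lambda label: distance(example, cents[label]))

# Hypothetical features: (self-propelled motion, irregularity, responsiveness)
categories = {
    "alive":     [[0.9, 0.8, 0.9], [0.8, 0.7, 0.8]],
    "not alive": [[0.1, 0.2, 0.0], [0.2, 0.1, 0.1]],
}
print(categorize([0.85, 0.75, 0.7], categories))  # -> alive
```

The point of the sketch is only that a similarity measure over learned features can yield categorical judgments without a definition of the category being spelled out in advance.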

The point is that AGI is not defined by any particular problem. It is
defined by how you solve problems, even simple ones, which is why your
claim that my problems are not AGI is simply wrong.

On Jun 28, 2010 12:22 PM, "Jim Bromer" <[email protected]> wrote:

On Mon, Jun 28, 2010 at 11:15 AM, Mike Tintner <[email protected]>
wrote:

> Inanimate objects normally move *regularly,* in *patterned* ways, and
> *predictably*....
This presumption looks similar (in some profound way) to many of the
presumptions that were tried in the early days of AI, partly because
computers lacked memory and were very slow. It is unreliable precisely
because we need the AGI program to be able to consider situations in
which, for example, inanimate objects move in patchy, patchwork ways or in
unpredictable patterns.
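A small sketch of why the "inanimate things move predictably" presumption is brittle: score a trajectory by how badly a simple constant-velocity extrapolation predicts it. The trajectories and the prediction rule are invented for the example.

```python
# Minimal sketch, assuming 1-D position sequences: a trajectory is
# "predictable" to the extent that extrapolating the previous step's
# velocity gives a small error. Data below is illustrative only.

def prediction_error(positions):
    """Mean absolute error of predicting each step from the prior velocity."""
    errors = []
    for i in range(2, len(positions)):
        predicted = positions[i - 1] + (positions[i - 1] - positions[i - 2])
        errors.append(abs(positions[i] - predicted))
    return sum(errors) / len(errors)

regular = [0, 1, 2, 3, 4, 5]   # uniform motion: perfectly predicted
erratic = [0, 3, 1, 6, 2, 9]   # jumpy, unpatterned motion

print(prediction_error(regular))  # -> 0.0
print(prediction_error(erratic))  # -> 8.0
```

A program built on the presumption alone would treat the second trajectory as an anomaly rather than as an ordinary case it must reason about, which is the failure mode described above.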

Jim Bromer



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com
