Hi
I'm new to this list, but I've been thinking about consciousness, cognition
and AI for about half of my life (I'm 32 years old). As is probably the
case for many of us here, my interests began with direct recognition of the
depth and wonder of varieties of phenomenological experiences -- and
at
(I'm a little late in this conversation. I tried to send this message the
other day but I had my list membership configured wrong. -Rob)
-- Forwarded message --
From: rob levy
Date: Sun, Jun 20, 2010 at 5:48 PM
Subject: Re: [agi] An alternative plan to discover self-organiz
not enough people
have attempted to run evolutionary computation experiments at these massive
scales either.
Rob
On Mon, Jun 21, 2010 at 12:59 PM, Matt Mahoney wrote:
> rob levy wrote:
> > On a related note, what is everyone's opinion on why evolutionary
> algorithms are such a mis
> But there is some other kind of problem. We should have figured it out by
> now. I believe that there must be some fundamental computational problem
> that is standing as the major obstacle to contemporary AGI. Without solving
> that problem we are going to have to wade through years of increm
>
> why should AGIs give a damn about us?
> I like to think that they will give a damn because humans have a unique way
of experiencing reality and there is no reason to not take advantage of that
precious opportunity to create astonishment or bliss. If anything is
important in the universe, its
at AGI won't destroy the world, you study the problem and come
> up with a safe design.
>
>
> -- Matt Mahoney, matmaho...@yahoo.com
>
>
> ------
> *From:* rob levy
> *To:* agi
> *Sent:* Sat, June 26, 2010 1:14:22 PM
> *Subject:* Re: [ag
In order to have perceptual/conceptual similarity, it might make sense that
there is a distance metric over a conceptual-spaces mapping (à la Gärdenfors,
or some similar theory) underlying how the experience of reasoning
through is carried out. This has the advantage of being motivated by
neuros
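The conceptual-spaces idea can be sketched concretely: concepts as regions around prototype points in a space of quality dimensions, with similarity as metric distance and categorization as nearest-prototype lookup (the Voronoi picture Gärdenfors describes). The dimensions, prototypes, and values below are toy assumptions for illustration, not from any particular published model:

```python
import math

# Toy "conceptual space": each concept is represented by a prototype point
# in a space of quality dimensions. Dimensions here are (hue, size) on
# arbitrary 0..1 scales -- purely illustrative assumptions.
PROTOTYPES = {
    "cherry": (0.95, 0.10),
    "apple":  (0.90, 0.40),
    "melon":  (0.30, 0.90),
}

def distance(a, b):
    """Euclidean distance between two points in the quality space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def categorize(point):
    """Assign a point to the concept with the nearest prototype,
    i.e. a Voronoi tessellation of the space around the prototypes."""
    return min(PROTOTYPES, key=lambda c: distance(point, PROTOTYPES[c]))

print(categorize((0.85, 0.35)))  # a reddish, smallish fruit
```

On this sketch, "similarity" between percepts is just metric proximity, which is one way the experience of things seeming alike could be grounded in a geometric representation.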
On Mon, Jun 28, 2010 at 5:23 PM, Steve Richfield
wrote:
> Rob,
>
> I just LOVE opaque postings, because they identify people who see things
> differently than I do. I'm not sure what you are saying here, so I'll make
> some "random" responses to exhibit my ignorance and elicit more explanation.
>
Sorry, the link I included was invalid, this is what I meant:
http://www.geog.ucsb.edu/~raubal/Publications/RefConferences/ICSC_2009_AdamsRaubal_Camera-FINAL.pdf
On Tue, Jun 29, 2010 at 2:28 AM, rob levy wrote:
> On Mon, Jun 28, 2010 at 5:23 PM, Steve Richfield <
> steve.richfi...@
>
>
> However, I see that there are no valid definitions of AGI that explain what
> AGI is generally, and why these tests are indeed AGI. Google it - there are
> very few definitions of AGI or Strong AI, period.
>
I like Fogel's idea that intelligence is the ability to "solve the problem
of how to solve problems"
re able
to do).
On Mon, Jul 19, 2010 at 11:32 AM, Mike Tintner wrote:
> Whaddya mean by "solve the problem of how to solve problems"? Develop a
> universal approach to solving any problem? Or find a method of solving a
> class of problems? Or what?
>
> *From:* rob levy
&
Fogel originally used the phrase to argue that evolutionary computation
makes sense as a cognitive architecture for a general-purpose AI problem
solver.
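The flavor of evolutionary search as a generic problem solver can be shown with a minimal (1+λ) loop. This is only a sketch of the general idea, not Fogel's actual evolutionary programming (which evolved finite-state machines); the target string, alphabet, and parameters are all illustrative assumptions:

```python
import random

# A minimal (1 + lambda) evolutionary loop: keep one parent, generate
# mutated children, and keep the best of parent and children each
# generation. Everything here is a toy illustration of the search idea.

TARGET = "solve the problem"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(s):
    """Number of character positions matching the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    """Resample each character independently with probability `rate`."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def evolve(children=50, generations=2000):
    """Elitist search: the best individual is never discarded."""
    best = "".join(random.choice(ALPHABET) for _ in TARGET)
    for _ in range(generations):
        if best == TARGET:
            break
        best = max([mutate(best) for _ in range(children)] + [best],
                   key=fitness)
    return best

random.seed(0)
print(evolve())
```

The fitness function is the only problem-specific part; swapping it out retargets the same search machinery, which is the sense in which evolutionary computation is a candidate "solver of how to solve problems."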
On Mon, Jul 19, 2010 at 11:45 AM, rob levy wrote:
> Well, solving ANY problem is a little too strong. This is AGI, not AGH
> (arti
Whaddya mean by "solve the problem of how to solve problems"? Develop a
>> universal approach to solving any problem? Or find a method of solving a
>> class of problems? Or what?
>>
>> *From:* rob levy
>> *Sent:* Monday, July 19, 2010 1:26 PM
>> *To:*
A "child" AGI should be expected to need help learning how to solve many
problems, and even to be told what the steps are. But at some point it needs
to have developed general problem-solving skills. I feel like this is all
stating the obvious, though.
On Tue, Jul 20, 2010 at 11:32 PM, Matt Mahoney wr
er activities
>
> So AGIs require a fundamentally and massively different paradigm of
> instruction from the comprehensive, step-by-step paradigm of narrow
> AI.
>
> [The rock wall/toybox tests BTW are AGI activities, where it is
> *impossible* to give full instru
Not sure how that is useful, though, or even how it relates to creativity if
considered as an informal description.
On Sun, Jul 25, 2010 at 10:15 AM, Mike Tintner wrote:
> I came across this, thinking it was going to be an example of maths
> fantasy, but actually it has a rather nice idea about the math
On Sun, Jul 25, 2010 at 5:05 PM, Mike Tintner wrote:
> I think it's v. useful - although I was really extending his idea.
>
> Correct me - but almost no matter what you guys do (or anyone in AI does),
> you think in terms of spaces, or frames. Spaces of options. Whether you're
> doing logic, ma
> indeed "open-domain," by contrast with the spaces of
> programs, which are closed, uni-domain. When you search for "what am I going
> to do..?" your brain can go through an endless world of domains - movies,
> call a friend, watch TV, browse the net, a meal, go for a walk, pl
Interesting article:
http://www.newscientist.com/article/mg20727723.700-artificial-life-forms-evolve-basic-intelligence.html?page=1
On Sun, Aug 1, 2010 at 3:13 PM, Jan Klauck wrote:
> Ian Parker wrote
>
> > I would like your
> > opinion on *proofs* which involve an unproven hypothesis,
>
> I've n