Interesting article:
http://www.newscientist.com/article/mg20727723.700-artificial-life-forms-evolve-basic-intelligence.html?page=1
On Sun, Aug 1, 2010 at 3:13 PM, Jan Klauck jkla...@uni-osnabrueck.de wrote:
Ian Parker wrote
I would like your
opinion on *proofs* which involve an unproven
... and on and on.
The space-thinking of rationality is super-efficient but rigid and useless
for AGI. The open world of the human, creative mind is highly inefficient
by comparison, but super-flexible and the only way to do AGI.
*From:* rob levy r.p.l...@gmail.com
*Sent:* Monday, July 26, 2010 1:06 AM
Not sure how that is useful, or even how it relates to creativity if
considered as an informal description?
On Sun, Jul 25, 2010 at 10:15 AM, Mike Tintner tint...@blueyonder.co.uk wrote:
I came across this, thinking it was going to be an example of maths
fantasy, but actually it has a rather
On Sun, Jul 25, 2010 at 5:05 PM, Mike Tintner tint...@blueyonder.co.uk wrote:
I think it's v. useful - although I was really extending his idea.
Correct me - but almost no matter what you guys do (or anyone in AI does),
you think in terms of spaces, or frames. Spaces of options. Whether
A child AGI should be expected to need help learning how to solve many
problems, and even to be told what the steps are. But at some point it needs
to have developed general problem-solving skills. I feel like this is
all stating the obvious.
On Tue, Jul 20, 2010 at 11:32 PM, Matt Mahoney
paradigm of narrow AI.
[The rock wall/toybox tests BTW are AGI activities, where it is
*impossible* to give full instructions, or produce a formula, whatever you
may want to do].
*From:* rob levy r.p.l...@gmail.com
*Sent:* Wednesday, July 21, 2010 3:56 PM
*To:* agi agi@v2.listbox.com
However, I see that there are no valid definitions of AGI that explain what
AGI is generally, and why these tests are indeed AGI. Google - there are v.
few defs. of AGI or Strong AI, period.
I like Fogel's idea that intelligence is the ability to solve the problem
of how to solve problems.
On Mon, Jul 19, 2010 at 11:32 AM, Mike Tintner tint...@blueyonder.co.uk wrote:
Whaddya mean by solve the problem of how to solve problems? Develop a
universal approach to solving any problem? Or find a method of solving a
class of problems? Or what?
*From:* rob levy r.p.l...@gmail.com
Fogel originally used the phrase to argue that evolutionary computation
makes sense as a cognitive architecture for a general-purpose AI problem
solver.
On Mon, Jul 19, 2010 at 11:45 AM, rob levy r.p.l...@gmail.com wrote:
Well, solving ANY problem is a little too strong. This is AGI, not AGH
be generalized and where past solutions
can be varied and reused is a detail of how intelligence works that is
likely to be universal.
vs
narrow AI is about applying pre-existing *general* methods of
problem-solving (applicable to whole classes of problems)?
*From:* rob levy r.p.l
On Mon, Jun 28, 2010 at 5:23 PM, Steve Richfield
steve.richfi...@gmail.com wrote:
Rob,
I just LOVE opaque postings, because they identify people who see things
differently than I do. I'm not sure what you are saying here, so I'll make
some random responses to exhibit my ignorance and elicit
Sorry, the link I included was invalid, this is what I meant:
http://www.geog.ucsb.edu/~raubal/Publications/RefConferences/ICSC_2009_AdamsRaubal_Camera-FINAL.pdf
On Tue, Jun 29, 2010 at 2:28 AM, rob levy r.p.l...@gmail.com wrote:
On Mon, Jun 28, 2010 at 5:23 PM, Steve Richfield
steve.richfi
In order to have perceptual/conceptual similarity, it might make sense that
there is a distance metric over a conceptual-spaces mapping (a la Gardenfors
or something like this theory) underlying how the experience of reasoning
through is carried out. This has the advantage of being motivated by
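The Gardenfors-style idea above can be sketched as concepts being points along weighted quality dimensions, with similarity falling off as distance grows. A minimal sketch follows; the toy color dimensions, the weights, and the exponential similarity decay are all illustrative assumptions, not anything specified in the thread:

```python
import math

def distance(a, b, weights):
    """Weighted Euclidean distance between two concept points."""
    return math.sqrt(sum(w * (x - y) ** 2 for x, y, w in zip(a, b, weights)))

def similarity(a, b, weights, sensitivity=1.0):
    """Similarity decays exponentially with distance (an assumption here)."""
    return math.exp(-sensitivity * distance(a, b, weights))

# Hypothetical color space: (hue, saturation, brightness), hue weighted most.
weights = (2.0, 1.0, 1.0)
red     = (0.00, 0.9, 0.6)
orange  = (0.08, 0.9, 0.7)
blue    = (0.60, 0.8, 0.5)

# Red should come out more similar to orange than to blue.
assert similarity(red, orange, weights) > similarity(red, blue, weights)
```

The point of the metric is only that nearby points in the space are experienced as similar; which dimensions and weights to use is exactly what such a theory would have to supply.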
of hoping that AGI won't destroy the world, you study the problem and come
up with a safe design.
-- Matt Mahoney, matmaho...@yahoo.com
--
*From:* rob levy r.p.l...@gmail.com
*To:* agi agi@v2.listbox.com
*Sent:* Sat, June 26, 2010 1:14:22 PM
*Subject:* Re: [agi
why should AGIs give a damn about us?
I like to think that they will give a damn because humans have a unique way
of experiencing reality and there is no reason to not take advantage of that
precious opportunity to create astonishment or bliss. If anything is
important in the universe, its
But there is some other kind of problem. We should have figured it out by
now. I believe that there must be some fundamental computational problem
that is standing as the major obstacle to contemporary AGI. Without solving
that problem we are going to have to wade through years of
Hi
I'm new to this list, but I've been thinking about consciousness, cognition
and AI for about half of my life (I'm 32 years old). As is probably the
case for many of us here, my interests began with direct recognition of the
depth and wonder of varieties of phenomenological experiences-- and
(I'm a little late in this conversation. I tried to send this message the
other day but I had my list membership configured wrong. -Rob)
-- Forwarded message --
From: rob levy r.p.l...@gmail.com
Date: Sun, Jun 20, 2010 at 5:48 PM
Subject: Re: [agi] An alternative plan to discover
people
have attempted to run evolutionary computation experiments at these massive
scales either.
Rob
On Mon, Jun 21, 2010 at 12:59 PM, Matt Mahoney matmaho...@yahoo.com wrote:
rob levy wrote:
On a related note, what is everyone's opinion on why evolutionary
algorithms are such a miserable
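For readers new to the topic, the kind of evolutionary algorithm under discussion can be sketched in a few lines. Everything here is an illustrative assumption: the bit-counting fitness function, truncation selection, and all parameter values are toy choices, not a claim about how such experiments are actually run:

```python
import random

def evolve(genome_len=20, pop_size=30, generations=100, mut_rate=0.05, seed=0):
    """Minimal evolutionary algorithm maximizing the number of 1-bits."""
    rng = random.Random(seed)
    fitness = lambda g: sum(g)  # toy fitness: count of 1-bits
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = [[1 - bit if rng.random() < mut_rate else bit for bit in p]
                    for p in parents]           # per-bit mutation
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()  # fitness should approach genome_len after enough generations
```

Even this toy version hints at the scaling question raised in the thread: fitness evaluations dominate the cost, and the number of evaluations grows with population size times generations.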