John,
You're making a massively important point, which I have been thinking about
recently.
I think it's more useful to say that AGI-ers are thinking in terms of building
a *complete AGI system* (rather than a person), which could range from a simple
animal robot to fantasies of an all-intelligent
sometimes make mistakes.
Jim Bromer
On Thu, Jun 24, 2010 at 12:52 PM, Mike Tintner tint...@blueyonder.co.uk wrote:
[BTW Sloman's quote is a month old]
I think he means what I do - the end-problems that an AGI must face. Please
name me one true AGI end-problem being dealt with by any AGI-er - apart
Dave, Re my first point there is no choice whatsoever - you (any serious
creative) *have* to start by addressing the creative problem - in this case
true AGI end-problems. You have to start, e.g., by addressing the problem part of
your would-be plane, the part that's going to give you take-off
or at work, are closely related to the mechanisms that also
produce artistic forms of creativity.
From: Jim Bromer
Sent: Thursday, June 24, 2010 6:57 PM
To: agi
Subject: Re: [agi] The problem with AGI per Sloman
On Thu, Jun 24, 2010 at 12:52 PM, Mike Tintner tint...@blueyonder.co.uk wrote:
Mike,
"Start by addressing the creative problem" - this phrase doesn't mean
anything to me. You haven't properly defined what you mean by "creative" to
me. What do you think the true AGI end-problems are? Try not to use the word
"creative" so much. There are possible algorithms that produce high level
I think there is a great deal of confusion between these two objectives.
When I wrote that if you had a car accident due to a fault in AI/AGI and
Matt wrote back talking about downloads, this was a case in point. I was
assuming that you had a system which was intelligent but was *not* a
download
can be creative. They just need to learn
to think more creatively, and that is another one of your mistakes.
---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription
I suggest we form a team for this purpose ..and I am willing to join
From: Mike Tintner tint...@blueyonder.co.uk
To: agi agi@v2.listbox.com
Sent: Thu, June 24, 2010 2:33:01 PM
Subject: [agi] The problem with AGI per Sloman
One of the
problems of AI
Matt: It is like the way evolution works, except that there is a human in the
loop to make the process a little more intelligent.
IOW this is like AGI, except that it's narrow AI. That's the whole point - you
have to remove the human from the loop. In fact, it also sounds like a
misconceived
little good if you don't understand why it works that way. You would
have to create a synthetic brain to take advantage of the knowledge, which
is not an approach to AGI for many reasons. There are a million other ways,
even better ways, to do it than the way the brain does it. Just because the
brain
complicated of course. You are more
likely to detect motion in objects that you recognize and expect to move, like
people, animals, cars, etc.
-- Matt Mahoney, matmaho...@yahoo.com
From: David Jones davidher...@gmail.com
To: agi agi@v2.listbox.com
Sent: Mon
--
*From:* David Jones davidher...@gmail.com
*To:* agi agi@v2.listbox.com
*Sent:* Mon, June 21, 2010 9:39:30 AM
*Subject:* [agi] Re: High Frame Rates Reduce Uncertainty
Ignoring Steve because we are simply going to have to agree to disagree...
And I don't see
that
is indeed the case, then AGI and related efforts don't stand a
snowball's chance in hell of ever outperforming humans, UNTIL the underlying
network stability theory is well enough understood to perform perfectly to
digital precision. This wouldn't necessarily have to address all aspects of
intelligence
-- and
attempting to comprehend how these constellations of significance fit in
with a larger picture of what we can reliably know about the natural world.
I am secondarily motivated by the fact that (considerations of morality or
amorality aside) AGI is inevitable, though it is far from being a foregone
(I'm a little late in this conversation. I tried to send this message the
other day but I had my list membership configured wrong. -Rob)
-- Forwarded message --
From: rob levy r.p.l...@gmail.com
Date: Sun, Jun 20, 2010 at 5:48 PM
Subject: Re: [agi] An alternative plan to discover
-Original Message-
From: Steve Richfield [mailto:steve.richfi...@gmail.com]
My underlying thought here is that we may all be working on the wrong
problems. Instead of working on the particular analysis methods (AGI) or
self-organization theory (NN), perhaps if someone found
having yet (or ever) reached perfection. Hence, evolution may have
struck a balance, where less intelligence directly impairs survivability,
and greater intelligence impairs network stability, and hence indirectly
impairs survivability.
be working on the wrong
problems. Instead of working on the particular analysis methods (AGI) or
self-organization theory (NN), perhaps if someone found a solution to
large-network stability, then THAT would show everyone the ways to their
respective goals.
rob levy wrote:
I am secondarily motivated by the fact that (considerations of morality or
amorality aside) AGI is inevitable, though it is far from being a foregone
conclusion that powerful general thinking machines will have a first-hand
subjective relationship to a world, as living
For a distributed AGI this is a fundamental problem. Difference is that a
power grid is such a fixed network
and our world is implementation detail. We do our part, and it
does its part. I'm sure that there are Zen Buddhists out there who would
just LOVE this yin-yang view of things.
Any thoughts?
Steve
of the oscillations that long ping times can
introduce in people's (and intelligent bot's) behavior. Again, this is
basically the same 12db/octave phenomenon.
Steve
Steve: For example, based on ability to follow instruction, cats must be REALLY
stupid.
Either that or really smart. Who wants to obey some dumb human's instructions?
:30, deepakjnath deepakjn...@gmail.com wrote:
The brain does not get the high frame rate signals as the eye itself
only gives the brain images at 24 frames per second. Else you wouldn't be
able to watch a movie.
Any comments?
but it seems from observance
that intelligence/consciousness exhibits some sort of harmonic property, or
levels.
John
problems?
Lack of computing power. How much computation would you need to simulate
the 3 billion years of evolution that created human intelligence?
-- Matt Mahoney, matmaho...@yahoo.com
of this?
Yes, our repeated successes in simultaneously improving both the size
and stability of very large scale networks (trade, postage, telegraph,
electricity, road, telephone, Internet) serve as very nice existence
proofs.
http://www.zerohedge.com/article/fast-reading-computers-are-about-drink-your-trading-milkshake
of interconnections.
serve as very nice existence
proofs.
I'm still looking.
Steve
immaterial.
instability to their
benefit?! Is this related to your harmonic thoughts?
Thanks.
Steve
of the time
axis in the formula results in a reduction of short-term memory loss and
thus more resources for the brain to work with.
Hi,
* AGI should be scalable - more data just means the potential for more
accurate results.
* More data can chew up more computation time without a benefit, i.e. if
all you want to do is identify a bird, it's still a bird at 1 fps and
1000 fps.
* Don't aim for precision, aim for generality. E.g. AGI
familiar
component which we could get at by study of the structure of natural
information and knowledge.
John
be simple enough to work, and simple
enough that it just HAS to be tried.
All thoughts, stones, and rotten fruit will be gratefully appreciated.
Thanks in advance.
Steve
into how a programmer can design a test for self-organization. It is a
subtle question.
Jim Bromer
Classic example of the crazy way AGI-ers think about AGI - divorced from any
reality.
Starting-point - NOT what's the problem? - what is this brain/thinking
machine supposed to do? - what problems should it be dealing with?.. and how do
we design a machine to deal with those problems
On Sun, Jun 20, 2010 at 8:52 AM, Mike Tintner tint...@blueyonder.co.uk wrote:
Classic example of the crazy way AGI-ers think about AGI - divorced from
any reality.
You must have missed the part where Steve said, No, I haven't been smokin'
any wacky tobacy. Instead, I was having a long talk
Mike,
There is a very fundamental flaw in your response, which I will explain. I
suggest/request that you re-post while addressing the flawed issue:
You presume that I (and/or Eddie) have ANY interest in creating an AGI. I
don't, and I don't think that Eddie does. What Eddie and I are trying
. It is a subtle question.
I agree. Do you have any thoughts about how to go about this?
Steve
of
possibilities that they could work with would be so great. Programmed
computers are capable of appearing as if they were behaving in
non-programmed ways because they are capable of learning through
input-output in ways the AGI programmer could not anticipate and the
possible combinations
Steve,
I'm not really interested in shooting at particular people, only in the grand
principles here. And they still seem to apply despite your qualifications.
What AGI problems - actual problems that actual animals or humans/ agents
living in the real world have to deal with - is self
seconds.
-- Matt Mahoney, matmaho...@yahoo.com
From: Steve Richfield steve.richfi...@gmail.com
To: agi agi@v2.listbox.com
Sent: Sun, June 20, 2010 2:06:55 AM
Subject: [agi] An alternative plan to discover self-organization theory
No, I haven't been smokin' any wacky
that covaries with novelty is like shooting fish in a
barrel.
Of course, it's not like these are the only people making this kind of
non-progress ;-)
Richard Loosemore
be as few as
10's of millions or so.
I don't think this is a problem for AGI because, if you could create an AGI
with about the level of intelligence of a single human, you could duplicate it
quickly and exactly to as many individual computer systems as you desired.
Humans have many ways
http://machineslikeus.com/news/paper-voodoo-correlations-social-neuroscience
http://www.pashler.com/Articles/Vul_etal_2008inpress.pdf
,
we have to replicate the capabilities of not one human mind, but a system of
10^10 minds. That is why my AGI proposal is so hideously expensive.
http://www.mattmahoney.net/agi2.html
Let's fire Matt and hire 10 chimps instead.
Problems with IQ notwithstanding, I'm confident that, were
to solve the engram problem,
Richard is not a lone hero, but a part of the vast collective enterprise of
science/scientists trying to understand the brain as a whole, and his eventual
discovery will have to dovetail with others' efforts. So not just one AGI, Ben,
a whole society of them. He's
many human-made artifacts are like
this.
Now really expensive if quantum entanglement is in fact present in a
hybrid of quantum circuits stored in carbon tetrachloride functioning as a
capacitor. In principle 420 billion human minds or about
?
Please give me an IQ test that measures something that can't be done by n log n
people (allowing for some organizational overhead).
-- Matt Mahoney, matmaho...@yahoo.com
five or six years after they were [up and running].
of the field.
We've attacked from a different direction, but we had a wide range of
targets to choose, believe me.
The short version of the overall story is that neuroscience is out of
control as far as overinflated claims go.
Richard Loosemore
Cool,
this idea has already been applied successfully to some areas of AI, such as
ant-colony algorithms and swarm intelligence algorithms. But I was thinking
that it would be interesting to apply it at a high level. For example,
consider that you create the best AGI agent you can come up
Chris: Problems with IQ notwithstanding, I'm confident that, were my silly
IQ
of 145 merely doubled,..
Chris/Matt: Hasn't anyone ever told you - it's not the size of it, it's what
you do with it that counts?
I guess something like this is in the plan of many, if not all, AGI
projects. For NARS, see
http://nars.wang.googlepages.com/wang.roadmap.pdf , under (4)
Socialization in page 11.
It is just that to attempt any non-trivial multi-agent experiment, the
work in single agent needs to be mature enough
and/or have other ideas for encouraging empathy (assuming you see empathy as
a good goal)?
From: Pei Wang mail.peiw...@gmail.com
Reply-To: agi@v2.listbox.com
To: agi@v2.listbox.com
Subject: Re: [agi] just a thought
Date: Wed, 14 Jan 2009 16:21:23 -0500
I guess something like this is in the plan
http://causalityrelay.wordpress.com/
little to do with neuroscience. The field as a whole is hardly
mortally afflicted with that problem (whether it's even real or not).
If you look at any field large enough, there will be bad science. How
is it relevant to study of AGI?
--
Vladimir Nesov
robot...@gmail.com
http
work.
Ron
to study of AGI?
Your child comes home and says they made a zero on the big test.
A child says they made 80 on the test and failed; the reason: they missed
80 questions out of 100.
A child says they had a grade of 98 right and the teacher gave them a B;
the reason: there were 110
questions
is it relevant to study of AGI?
People here are sometimes interested in cognitive science matters, and
some are interested in the concept of building an AGI by brain
emulation. Neuroscience is relevant to that.
Beyond that, this is just an FYI.
I really do not care to put much effort
substantiation, mere 50
papers that got confused with statistics don't do it justice.
--
Vladimir Nesov
robot...@gmail.com
http://causalityrelay.wordpress.com/
of) doesn't strike me as a v. big deal, since
emotions are so vague anyway. If you have criticisms of the lack of
correlation with more precise cognitive observations, like words or sights,
that would be v. interesting.
Bayesian surprise attracts human attention
http://tinyurl.com/77p9xo
.
Kyle Kidd
kylek...@gmail.com
to
reason about,
Next thing I'll work on is the planning module. That's where the AGI
interacts with the environment.
... about why and how a given approach to reasoning is
expected to be powerful.
I think if PZ logic can express a great variety of uncertain
phenomena, that's good enough
2009/1/9 Ben Goertzel b...@goertzel.org:
This is an attempt to articulate a virtual world infrastructure that
will be adequate for the development of human-level AGI
http://www.goertzel.org/papers/BlocksNBeadsWorld.pdf
goertzel.org seems to be down. So I can't refresh my memory of the paper
Yes, I'm expecting the AI to make tools from blocks and beads
No, I'm not attempting to make a detailed simulation of the human
brain/body, just trying to use vaguely humanlike embodiment and
high-level mind-architecture together with computer science
algorithms, to achieve AGI
On Tue, Jan 13
algorithms, to achieve AGI
I wasn't suggesting you were/should. The comment about ones own
changing body was simply one of the many examples of things that
happen in the world that we have to try and cope with and adjust to,
making our brains flexible and leading to development rather than
of the BlocksNBeadsWorld, and I think it's an acceptable
one...
ben
Melting and boiling at least should be doable: assign every bead a
temperature, and let solid interbead bonds turn liquid above a certain
temperature and disappear completely above some higher temperature.
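The melting-and-boiling suggestion above can be sketched in a few lines. This is a minimal illustration only: the thresholds and the function name are my assumptions, not part of the BlocksNBeadsWorld proposal.

```python
# Sketch of the suggested bead-bond model: each bead carries a temperature;
# the bond between two beads is solid below a melting threshold, liquid
# between the melting and boiling thresholds, and gone above the boiling
# threshold. Threshold values here are arbitrary placeholders.

T_MELT = 273.0   # assumed melting threshold for a bond
T_BOIL = 373.0   # assumed boiling threshold for a bond

def bond_state(temp_a: float, temp_b: float) -> str:
    """Return the state of the bond between two beads, judged by the
    mean of their temperatures."""
    t = (temp_a + temp_b) / 2.0
    if t < T_MELT:
        return "solid"
    if t < T_BOIL:
        return "liquid"
    return "gone"
```

Raising a bead's temperature then changes the state of all bonds it participates in, which gives the world melting and boiling behaviour without simulating any real thermodynamics.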
2009/1/12 Ben Goertzel b...@goertzel.org:
The problem with simulations that run slower than real time is that
they aren't much good for running AIs interactively with humans... and
for AGI we want the combination of social and physical interaction
There's plenty you can do with real-time
input
the AGI would have.
E.g. you might specify that its vision system would consist of 2
pixelmaps (binocular vision) each 1000x1000 pixels, in three colours
and 16 bits of intensity, updated 20 times per second.
Of course, you may want to specify the visual system differently, but
it's useful
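As a sanity check on the example specification above, the raw input bandwidth it implies can be computed directly. The figures come from the spec itself; the variable names are mine, and this is a back-of-envelope sketch, not a proposed interface.

```python
# Bandwidth implied by the example vision spec: two 1000x1000 pixelmaps
# (binocular vision), three colour channels, 16 bits per channel,
# updated 20 times per second.

EYES = 2
WIDTH = HEIGHT = 1000
CHANNELS = 3
BITS_PER_CHANNEL = 16
FRAMES_PER_SECOND = 20

bits_per_frame = EYES * WIDTH * HEIGHT * CHANNELS * BITS_PER_CHANNEL
bits_per_second = bits_per_frame * FRAMES_PER_SECOND
megabytes_per_second = bits_per_second / 8 / 1_000_000

print(megabytes_per_second)  # 240.0
```

So even this modest spec delivers about 240 MB/s of uncompressed input, which is worth keeping in mind when arguing about frame rates and simulation speed.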
Actually, I view that as a matter for the AGI system, not the world.
Different AGI systems hooked up to the same world may choose to
receive different inputs from it
Binocular vision, for instance, is not necessary in a virtual world,
and some AGIs might want to use it whereas others don't
. Nothing is easy about what you did -
for either AI or AGI. And no one in AGI has ever attempted creative
problems. Perhaps you can show me wrong.
1. THE CENTRAL ISSUE - I suggest, to put it v. v. broadly at first, is
this:
*are there general logical procedures that can tackle creative problems,
esp
My response to Ben's paper is to be cautious about drawing conclusions from
simulated environments. Human level AGI has an algorithmic complexity of 10^9
bits (as estimated by Landauer). It is not possible to learn this much
information from an environment that is less complex. If a baby AI did
of houses and to pictures of flying, would have the
ability to eventually draw a picture of a flying house (along with a
lot of other creative efforts that you have not) even thought of. But
the thing is, that I can do this without using advanced AGI
techniques!
So, I must retain the recognition that I
I think what Mike is saying is that I could draw what I think a flying house
would look like, and you could look at my picture and say it was a flying
house, even though neither of us has ever seen one. Therefore, AGI should be
able to solve the same kind of problems, and why aren't we
-
which is: having an incomplete domain set and an incomplete set of rules,
proceed to construct something in an altogether new domain, and make up the
rules as you go. That's the problem for - and whole challenge of - AGI.
-
You're kind of illustrating my central thesis of creative