I would point out that our legal frameworks are designed under the
assumption that there is rough parity in intelligence between all
actors in the system. The system breaks badly when you have extreme
disparities in the intelligence of the actors because you are breaking
one of the
I was thinking about the so-called parallelism of the brain, which is a
poorly fitting metaphor at best... To explain the high resilience of
neural circuits to minor variations in structure, the term redundant
seems more appropriate...
The computers we build can be viewed, in this context, as Many
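To make the redundancy point concrete, here is a toy sketch of my own
(the 101-unit count and 20-unit failure rate are arbitrary): a bit
copied across many unreliable units survives the loss of a minority of
them.

    import random

    def decode(units):
        return sum(units) > len(units) / 2   # majority vote

    signal = True
    units = [signal] * 101                   # 101 redundant copies of one bit
    for i in random.sample(range(101), 20):  # knock out 20 units at random
        units[i] = not units[i]
    print(decode(units))                     # still True: minor damage tolerated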
Jonathan Standley wrote:
That approach went out with the introduction of the 4004.
Imagine a motherboard that acted as the physical layer for a
TCP/IP-based mesh network.
TCP/IP is a bit too heavy...
I heard of a system once that used ATM as its bus protocol... Today
there is 3GIO and
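For a feel of why TCP/IP seems heavy at bus scale, a rough comparison
(the 64-byte payload is an arbitrary choice; the header sizes are the
protocol minimums):

    import math

    payload = 64                         # an arbitrary small message, in bytes
    tcp_ip = 20 + 20                     # minimum IPv4 header + minimum TCP header
    print(payload / (payload + tcp_ip))  # ~0.62 useful fraction, before framing

    cells = math.ceil(payload / 48)      # ATM: fixed 53-byte cells, 48-byte payload
    print(payload / (cells * 53))        # ~0.60, but in tiny fixed-size cells

ATM's appeal was never raw efficiency; it was fixed-size cells that are
cheap to switch in hardware, which is the property a bus protocol cares
about.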
From recent comments here I can see there are still a lot of people out
there who think that building an AGI is a relatively modest-size
project, and the key to success is simply uncovering some new insight
or technique that has been overlooked thus far.
I would agree with that, though the
Brad Wyble wrote:
Heck, even the underlying PC hardware is more complex in a number of
ways than the brain, it seems...
The brain is very RISCy... using a relatively simple processing
pattern and then repeating it millions of times.
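In the spirit of that analogy, a toy sketch (mine, not the poster's;
the threshold and input values are made up): one trivially simple unit,
stamped out across the whole input, so any complexity comes from
repetition rather than from the unit itself.

    def unit(window):
        # one trivially simple operation: fire if the local sum crosses a threshold
        return 1 if sum(window) > 1.5 else 0

    signal = [0.4, 0.9, 0.7, 0.1, 0.8, 0.9, 0.2, 0.3]
    # the same unit applied at every position
    layer = [unit(signal[i:i+2]) for i in range(len(signal) - 1)]
    print(layer)   # [0, 1, 0, 0, 1, 0, 0]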
Alan, I strongly suggest you increase your
OTOH, at least Novamente has enough internal complexity to reach
territory that hasn't already been explored by classical AI research. I
don't expect it to wake up, but I expect it will be a lot more
productive than those One True Simple Formula For Intelligence-type
projects.
Yes and
[META: please turn line-wrap on; for each of these responses, my own
standards for outgoing mail require that I go through each line and
ensure all quotations are properly formatted...]
Brad Wyble wrote:
The situation for understanding a single neuron is somewhat disastrous.
...
I'm just
Higher-order function representations are not robust in the sense that
neural representations probably are: they aren't redundant at all; one
error will totally change the meaning. They're not brainlike in any
sense. But maybe (if my hypothesis is right) they provide a great
foundation
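A toy sketch of that fragility (my own illustration; the particular
functions are arbitrary): in a composed-function representation there
is no redundancy, so corrupting a single slot rewrites the meaning of
the whole.

    def compose(*fs):
        def run(x):
            for f in fs:
                x = f(x)
            return x
        return run

    double = lambda x: 2 * x
    inc    = lambda x: x + 1
    dec    = lambda x: x - 1        # the single "error": inc replaced by dec
    square = lambda x: x * x

    intact    = compose(double, inc, square)   # computes (2x + 1)^2
    corrupted = compose(double, dec, square)   # computes (2x - 1)^2
    print(intact(3), corrupted(3))             # 49 25: one slip, new meaning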
Jonathan Standley wrote:
Dedicated-purpose hardware provides task-specific performance orders of
magnitude higher than that of a general-purpose CPU. And task-specific
hardware need not be inordinately expensive. Look at graphics and
sound boards as an example of this.
There is no reason
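A rough software analogy for the same effect (my sketch; the exact
timings will vary by machine): the same reduction done by a
general-purpose interpreted loop versus a specialized built-in routine.

    import timeit

    data = list(range(1_000_000))

    def general(xs):              # general-purpose path: interpreted, stepwise
        total = 0
        for x in xs:
            total += x
        return total

    t_gen = timeit.timeit(lambda: general(data), number=10)
    t_ded = timeit.timeit(lambda: sum(data), number=10)  # one specialized C routine
    print(f"general: {t_gen:.2f}s  dedicated: {t_ded:.2f}s")

The specialized path is typically several times faster; dedicated
silicon widens that gap much further.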
Eliezer S. Yudkowsky wrote:
Let's imagine I'm a superintelligent magician, sitting in my castle,
Dyson Sphere, what-have-you. I want to allow sentient beings some way
to visit me, but I'm tired of all these wandering AIXI-tl spambots that
script kiddies code up to brute-force my entrance
Eliezer S. Yudkowsky wrote:
1) AI morality is an extremely deep and nonobvious challenge which has
no significant probability of going right by accident.
2) If you get the deep theory wrong, there is a strong possibility of
a silent catastrophic failure: the AI appears to be learning
This is slightly off-topic but no more so than the rest of the thread...
1) That it is selfishly pragmatic for a superintelligence to deal with
humans economically rather than converting them to computronium.
For convenience, let's rephrase this
the majority of arbitrarily generated
Jonathan Standley wrote:
Now here is my question; it's going to sound silly, but there is
quite a bit behind it:
Of what use is computronium to a superintelligence?
If the superintelligence perceives a need for vast computational
resources, then computronium would indeed be very useful.
Ben Goertzel wrote:
you really test my tolerance as list moderator.
My apologies.
Please, please, no personal insults. And no anti-Semitism or racism of
any kind.
ACK.
I guess that your reference to Eliezer as "the rabbi" may have been
meant to be amusing,
It is not at all amusing, nor
om
Since I'm too lazy (not superhuman enough) to master the x86 PC enough
to write an operating system for it, I started looking into what it would
take to write an AI to do it for me. ;)
(Specifically, I have an OS-TEST machine that I'm trying to scrape
together an OS for... I am having trouble
I'm not sure that Black & White would be good training for an AGI. Do
we really want it to limber up as a dominating god - maybe benevolent
and maybe not??
Obviously not... but still, it might make for an interesting test of
character.
--
I WANT A DEC ALPHA!!! =)
21364: THE UNDISPUTED GOD OF
Olivier - (JConcept / Cetip) wrote:
But why is it necessary to reproduce our brain's internal way of working
to build an intelligent system?
On one level, it would be very advantageous to replace biological
architectures with much more powerful/scalable/reliable/efficient
approaches.
On the
om
I seem to have fallen into the list-ecological niche of good discussion
starter. In that capacity I write the following.
I attended my first session of CS480: Introduction to Artificial
Intelligence this morning, and it got me thinking about something
that has started to bug me...
What
Ben Goertzel wrote:
Since I'm too busy studying neuroscience, I simply don't have any
time for learning operating systems. I will therefore either use the
systems I know or the systems that require the least amount of effort
to learn, regardless of their features.
Alan, that sounds like a
I say this as someone who just burned half a week setting up a Linux
network in his study.
Ditto...
The Windows 3.11 machine took 10 minutes.
The Leenooks machine took 3 days...
Yeah, that stuff is a pain. But compared to designing,
programming and testing a thinking machine, it's cake,
This type of training should be given to the AGI as early as it can
understand it, in order to ensure proper consideration of the welfare
of its creators.
Not so simple:
The human brain has evolved a special agent-modeling circuit that
exists in the frontal lobe (probably having a
Damien Sullivan wrote:
You _MIGHT_ be able to produce a proof of concept that way...
However, a practical working AI, such as the one which could help me
design my next body, would need to be quite a bit more. =\
Why? Why should such a thing require replacing the original
Ben Goertzel wrote:
I think that is a VERY bad approach!!!
I don't want a superhuman AGI to destroy us by accident or through
indifference... which are possibilities just as real as aggression.
Positive action requires positive motivation.
--
Linux programmers: the only people in the
Neither of these arguments is particularly persuasive, though, based on
what I've developed to date.
!+ d03$n'7 vv0rk b3cuz $uch 4 $!st3m c4n'+ r34d m! 31337 +3x+.
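For what it's worth, the "31337" line above mostly yields to a trivial
substitution table (a toy decoder of my own); what survives ("becuz",
"sistem") is phonetic spelling, which really does need more than a
lookup.

    LEET = str.maketrans("!+370$41", "itetosal")
    line = "!+ d03$n'7 vv0rk b3cuz $uch 4 $!st3m c4n'+ r34d m! 31337 +3x+."
    print(line.translate(LEET).replace("vv", "w"))
    # -> "it doesn't work becuz such a sistem can't read mi eleet text."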
I am involved in such a project and certainly don't wish to be
wasting my time!
It would be out of place for me to say anything
Damien Sullivan wrote:
A human-level intelligence requires arbitrary access to
visual/phonetic/other faculties in order to be intelligent.
I'm sure all those blind and deaf people appreciate being considered
unintelligent.
It depends.
If their brains are intact, they are no less intelligent
Gary Miller wrote:
AG A human level intelligence requires arbitrary access to
AG visual/phonetic/other faculties in order to be intelligent.
By this definition of intelligence, we must conclude that Helen
Keller was totally lacking in intelligence.
You are confusing the visual faculty (a
According to my rule of thumb ("if it has a natural language database,
it is wrong"), many of the proposed early AGI apps are rather infeasible.
However, there is a very interesting application which goes straight to
the heart of the main AI problem and also provides a very valuable tool
for
[motivation problem].
No, human euphoria is much more than simple neural reinforcement. It is
a result of special neurochemicals such as dopamine that are released
when the midbrain is happy about something.
You see, the cortex has no opinion about anything whatsoever. It is
merely a
In 1986 Nintendo released a game called The Legend of Zelda.
It remained on the top-10 list for the next five years.
So why do I mention this totally irrelevant game on this list?
Well, it's become apparent that I am well suited for a niche in the
list ecology that is responsible for throwing up a
Ben Goertzel wrote:
This is not a matter of principle; it's a matter of pragmatics. I
think that a perceptual-motor domain in which a variety of cognitively
simple patterns are simply expressed will make world-grounded early
language learning much easier...
If anyone has the software
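One classical way to make that concrete is cross-situational learning;
here is a toy sketch (my own illustration, not Novamente's mechanism;
the scenes are invented): word/percept co-occurrence counting over
simple scenes.

    from collections import Counter

    scenes = [({"red", "ball"},   "ball"),    # (percepts in view, word heard)
              ({"red", "block"},  "block"),
              ({"blue", "ball"},  "ball"),
              ({"blue", "block"}, "block")]

    cooc = Counter()
    for percepts, word in scenes:
        for p in percepts:
            cooc[(word, p)] += 1

    # "ball" ends up bound to the percept 'ball', not to either color
    best = max(["red", "blue", "ball", "block"], key=lambda p: cooc[("ball", p)])
    print(best)   # -> ball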
The functional unit of the cerebral cortex is the cortical column.
A cortical column is roughly 0.5-0.6 mm in diameter (let's say that 4
can fit in a square millimeter).
The cerebral cortex is around the size of four sheets of regular paper.
Let's say a sheet is 216 x 280 mm = 60,480 square millimeters.
The
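Carrying the arithmetic above through (a quick sketch; the
four-per-square-millimeter density and four-sheet area are the
estimates just quoted):

    sheet_mm2 = 216 * 280         # one sheet: 60,480 mm^2 (figure from above)
    cortex_mm2 = 4 * sheet_mm2    # cortex ~ four sheets: 241,920 mm^2
    columns_per_mm2 = 4           # ~0.5 mm columns -> ~4 per mm^2 (from above)
    print(cortex_mm2 * columns_per_mm2)   # 967680

So the estimate lands at roughly a million cortical columns.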
For instance, in relation to memory capacity: let's say I could live
for the age of the universe, roughly 15 billion years. I believe the
human mind (without enhancement of any kind) is capable of remembering
every detail of every day for that entire lifespan.
That is contrary to actual
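For scale, a quick sketch of what the quoted claim implies (my own
arithmetic; the 1 MB-per-day figure is an arbitrary assumption):

    years = 15e9                  # the quoted lifespan
    days = years * 365.25
    print(f"{days:.2e}")          # ~5.48e12 distinct days to remember
    # even at an assumed 1 MB of "detail" per day, that is ~5.5e18 bytes,
    # i.e. about 5.5 exabytes of storage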
We have a team of computational linguists who have added the
vocabulary to make Cyc able to represent lexical concepts.
But it's still not the meta-vocabulary/meta-ontology that is required to
whack the problem.
In 2003, our data entry activities will be emphasized as a result of
our