See
http://www.topix.net/content/ap/2007/09/techies-ponder-computers-smarter-than-us-4.
It's from the Associated Press, so it's written once
and then copy-pasted to news sources all over the
world.
- Tom
--- [EMAIL PROTECTED] wrote:
Near the beginning of this discussion, reference is
made to
--- Artificial Stupidity [EMAIL PROTECTED] wrote:
Who cares? Really, who does? You can't create an
AGI that is friendly or
unfriendly. It's like having a friendly or
unfriendly baby.
No, it is not. A baby comes pre-designed by evolution
and genetics. An AGI can be custom-written to spec.
--- Quasar Strider [EMAIL PROTECTED] wrote:
Hello,
I see several possible avenues for implementing a
self-aware machine which
can pass the Turing test: i.e., human-level AI.
Mechanical and electronic.
However, I see little purpose in doing this. Fact
is, we already have self-aware
--- Quasar Strider [EMAIL PROTECTED] wrote:
On 9/7/07, Matt Mahoney [EMAIL PROTECTED]
wrote:
--- Quasar Strider [EMAIL PROTECTED]
wrote:
Hello,
I see several possible avenues for implementing
a self-aware machine which
can pass the Turing test: i.e., human-level AI.
You've just admitted that computers can perform a
logical operation other than addition (taking a
negation).
- Tom
--- Alan Grimes [EMAIL PROTECTED] wrote:
Charles D Hixson wrote:
Alan Grimes wrote:
Think of asserting that "All computers will be,
at their core, adding
machines."
to get
--- Mike Tintner [EMAIL PROTECTED] wrote:
AG: The mid-point of the singularity
window could be as close as 2009. A ridiculously
pessimistic prediction
would put it around 2012.
We're pretty far off from having any kind of
Singularity as it stands now. What do you think is
going to happen in
from its original
programming. The capacity, say, to find a new kind
of path through a maze or
forest.
Tom McCabe: Pathfinding programs, to my knowledge,
are actually
quite advanced (due primarily to commercial
investment). Open up a copy of Warcraft III or any
other modern computer
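A minimal sketch of the kind of grid search such games typically build on (plain A* with a Manhattan heuristic; the grid, start/goal, and names below are illustrative, not taken from any actual game engine):

import heapq
import itertools

def astar(grid, start, goal):
    # Shortest 4-connected path on a grid of 0 (walkable) / 1 (blocked) cells; None if unreachable.
    rows, cols = len(grid), len(grid[0])
    def h(cell):
        # Manhattan distance: an admissible heuristic for 4-connected movement
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])
    tie = itertools.count()          # tie-breaker so the heap never has to compare cells
    frontier = [(h(start), next(tie), start)]
    came_from = {start: None}
    cost = {start: 0}
    while frontier:
        _, _, cur = heapq.heappop(frontier)
        if cur == goal:
            path = []                # walk parent links back to the start
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nb
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                g = cost[cur] + 1
                if g < cost.get(nb, float("inf")):
                    cost[nb] = g
                    came_from[nb] = cur
                    heapq.heappush(frontier, (g + h(nb), next(tie), nb))
    return None

# Toy map: route from the top-left corner around a wall to the bottom-left corner.
grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(astar(grid, (0, 0), (3, 0)))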
--- Alan Grimes [EMAIL PROTECTED] wrote:
om
In this article I will quote and address some of the
issues raised
against my previous posting. I will then continue
with the planned
discussion of the current state of AI, I will also
survey some of the
choices available to the
--- Alan Grimes [EMAIL PROTECTED] wrote:
om
Today, I'm going to attempt to present an argument
in favor of a theory
that has resulted from my studies relating to AI.
While this is one of
the only things I have to show for my time spent on
AI, I am reasonably
confident in its validity
Is this a moderated list or not?
- Tom
--- Alan Grimes [EMAIL PROTECTED] wrote:
Jey Kottalam wrote:
On 7/12/07, Alan Grimes [EMAIL PROTECTED]
wrote:
White on black text, which I have to manually set
my X-term for every
time I open a fucking window on Linux, is the best
compromise
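For reference, a standard way to make that setting persistent instead of per-window (assuming a stock xterm; the file location is the conventional one):

! in ~/.Xresources, loaded with: xrdb -merge ~/.Xresources
XTerm*foreground: white
XTerm*background: black

or, for a single window, "xterm -fg white -bg black".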
--- Randall Randall [EMAIL PROTECTED]
wrote:
On Jul 4, 2007, at 1:14 AM, Tom McCabe wrote:
That definition isn't accurate, because it doesn't
match what we intuitively see as 'death'. 'Death'
is
actually fairly easy to define, compared to good
or
even truth; I would define
--- Sergey A. Novitsky [EMAIL PROTECTED]
wrote:
Are these questions, statements, opinions, sound
bites or what? It seems a
bit of a stew.
Yes. A bit of everything indeed. Thanks for noting
the incoherency.
* As it already happened with nuclear
weapons, there may be
treaties
Using that definition, everyone would die at an age of
a few months, because the brain's matter is regularly
replaced by new organic chemicals.
- Tom
--- Stathis Papaioannou [EMAIL PROTECTED] wrote:
On 30/06/07, Heartland [EMAIL PROTECTED]
wrote:
Objective observers care only about the
--- Sergey Novitsky [EMAIL PROTECTED] wrote:
Governments do not have a history of realizing the
power of technology before it comes on the market.
But this was not so with nuclear weapons...
It was the physicists who first became aware of the
power of nukes, and the physicists had to
- Tom
--- Stathis Papaioannou [EMAIL PROTECTED] wrote:
On 04/07/07, Tom McCabe [EMAIL PROTECTED]
wrote:
Using that definition, everyone would die at an
age of
a few months, because the brain's matter is
regularly
replaced by new organic chemicals.
I know that, which is why I asked
--- Jef Allbright [EMAIL PROTECTED] wrote:
On 7/1/07, Stathis Papaioannou [EMAIL PROTECTED]
wrote:
If its top level goal is to allow its other goals
to vary randomly,
then evolution will favour those AI's which decide
to spread and
multiply, perhaps consuming humans in the process.
--- Samantha Atkins [EMAIL PROTECTED] wrote:
Tom McCabe wrote:
--- Samantha Atkins [EMAIL PROTECTED] wrote:
Out of the bazillions of possible ways to configure
matter, only a ridiculously tiny fraction are more
intelligent than a cockroach. Yet
it did not take any
True, but an AGI can do all of that stuff a lot faster
and easier than humans can, and I believe the original
question was what are the benefits of AGI?
- Tom
--- Samantha Atkins [EMAIL PROTECTED] wrote:
Tom McCabe wrote:
Okay, to start with:
- Total control over the structure of our
--- Stathis Papaioannou [EMAIL PROTECTED] wrote:
On 02/07/07, Tom McCabe [EMAIL PROTECTED]
wrote:
Are you suggesting that the AI won't be smart
enough
to understand
what people mean when they ask for a banana?
It's not a question of intelligence; it's a
question
of selecting
It is very coherent; however, I'm not sure how you
would judge a goal's arbitrariness. From the human
perspective it is rather arbitrary, since it's
unrelated to most human desires.
--- Stathis Papaioannou [EMAIL PROTECTED] wrote:
On 02/07/07, Jef Allbright [EMAIL PROTECTED]
wrote:
While I
I am not sure you are capable of following an argument
in a manner that makes it worth my while to continue.
- s
So, you're saying that I have no idea what I'm talking
about, and therefore you're not going to bother arguing
with me anymore. This is a classic example of an ad
hominem argument. To
--- Stathis Papaioannou [EMAIL PROTECTED] wrote:
On 02/07/07, Tom McCabe [EMAIL PROTECTED]
wrote:
The goals will be designed by humans, but the huge
prior probability against the goals leading to an
AGI
that does what people want means that it takes a
heck
of a lot of design effort
--- BillK [EMAIL PROTECTED] wrote:
On 7/2/07, Tom McCabe wrote:
AGIs do not work in a sensible manner, because
they
have no constraints that will force them to stay
within the bounds of behavior that a human would
consider sensible.
If you really mean the above, then I don't
--- Jef Allbright [EMAIL PROTECTED] wrote:
On 7/2/07, Tom McCabe [EMAIL PROTECTED]
wrote:
--- Jef Allbright [EMAIL PROTECTED] wrote:
I hope that my response to Stathis might further
elucidate.
Er, okay. I read this email first.
Shaka...when the walls fell
Might I suggest
--- Jef Allbright [EMAIL PROTECTED] wrote:
On 7/2/07, Tom McCabe [EMAIL PROTECTED]
wrote:
I think we're getting terms mixed up here. By
values, do you mean the ends, the ultimate
moral
objectives that the AGI has, things that the AGI
thinks are good across all possible situations
--- Charles D Hixson [EMAIL PROTECTED]
wrote:
Tom McCabe wrote:
The problem isn't that the AGI will violate its
original goals; it's that the AGI will eventually
do
something that will destroy something really
important
in such a way as to satisfy all of its
constraints. By
setting
--- Eugen Leitl [EMAIL PROTECTED] wrote:
On Sun, Jul 01, 2007 at 10:24:17AM +1000, Stathis
Papaioannou wrote:
Nuclear weapons need a lot of capital and
resources to construct,
They also need knowledge, which is still largely
secret.
Knowledge of *what*? How to build a crude gun to
--- Eugen Leitl [EMAIL PROTECTED] wrote:
On Sun, Jul 01, 2007 at 12:45:20AM -0700, Tom McCabe
wrote:
Do you have any actual evidence for this? History
has
shown that numbers made up on the spot with no
experimental verification whatsoever don't work
well.
You need 10^17 bits
the opponent
exists only in your head; it doesn't exist in any
chess rulebook and isn't automatically transferred to
the AGI.
- Tom
--- Stathis Papaioannou [EMAIL PROTECTED] wrote:
On 01/07/07, Tom McCabe [EMAIL PROTECTED]
wrote:
If its goal is to achieve x using whatever means
necessary
The constraints of "don't shoot the opponent" aren't
written into the formal rules of chess; they exist
only in your mind. If you claim otherwise, please give
me one chess tutorial that explicitly says "don't
shoot the opponent."
- Tom
--- Stathis Papaioannou [EMAIL PROTECTED] wrote:
On 01/07/07,
--- BillK [EMAIL PROTECTED] wrote:
On 7/1/07, Tom McCabe wrote:
The constraints of "don't shoot the opponent"
aren't
written into the formal rules of chess; they exist
only in your mind. If you claim otherwise, please
give
me one chess tutorial that explicitly says "don't
shoot
--- BillK [EMAIL PROTECTED] wrote:
On 7/1/07, Tom McCabe wrote:
These rules exist only in your head. They aren't
written down anywhere, and they will not be
transferred via osmosis into the AGI.
They *are* written down.
I just quoted from the FIDE laws of chess.
And they would
--- Stathis Papaioannou [EMAIL PROTECTED] wrote:
On 02/07/07, Tom McCabe [EMAIL PROTECTED]
wrote:
The AGI doesn't care what any human, human
committee,
or human government thinks; it simply follows its
own
internal rules.
Sure, but its internal rules and goals might be
specified
possibly do in
advance, and that isn't going to work.
- Tom
--- Stathis Papaioannou [EMAIL PROTECTED] wrote:
On 02/07/07, Tom McCabe [EMAIL PROTECTED]
wrote:
But killing someone and then beating them on the
chessboard due to the lack of opposition does
count as
winning under the formal
--- Stathis Papaioannou [EMAIL PROTECTED] wrote:
On 02/07/07, Tom McCabe [EMAIL PROTECTED]
wrote:
For
example, if it has as its most important goal
obeying the commands of
humans, that's what it will do.
Yup. For example, if a human said "I want a
banana",
the fastest way
--- Stathis Papaioannou [EMAIL PROTECTED] wrote:
On 01/07/07, Tom McCabe [EMAIL PROTECTED]
wrote:
An excellent analogy to a superintelligent AGI is
a
really good chess-playing computer program. The
computer program doesn't realize you're there, it
doesn't know you're human
--- Stathis Papaioannou [EMAIL PROTECTED] wrote:
On 01/07/07, Tom McCabe [EMAIL PROTECTED]
wrote:
But Deep Blue wouldn't try to poison Kasparov in
order to win the
game. This isn't because it isn't intelligent
enough
to figure out
that disabling your opponent would be helpful
What does Vista have to do with hardware development?
Vista merely exploits hardware; it doesn't build it.
If you want to measure hardware progress, you can just
use some benchmarking program; you don't have to use
OS hardware requirements as a proxy.
- Tom
--- Charles D Hixson [EMAIL
--- Randall Randall [EMAIL PROTECTED]
wrote:
On Jun 28, 2007, at 11:26 PM, Tom McCabe wrote:
--- Randall Randall [EMAIL PROTECTED]
wrote:
and
What should a person before a copying experiment
expect to remember, after the experiment? That
is,
what should he anticipate?
Waking
I'm going to let the zombie thread die.
- Tom
--- Stathis Papaioannou [EMAIL PROTECTED] wrote:
On 29/06/07, Tom McCabe [EMAIL PROTECTED]
wrote:
But when you talk about "yourself," you mean the
"yourself" of the copy, not the "yourself" of the
original person. While all the copied selves
--- Niels-Jeroen Vandamme
[EMAIL PROTECTED] wrote:
From: Charles D Hixson [EMAIL PROTECTED]
Reply-To: singularity@v2.listbox.com
To: singularity@v2.listbox.com
Subject: Re: [singularity] critiques of Eliezer's
views on AI
Date: Thu, 28 Jun 2007 09:56:12 -0700
Stathis Papaioannou wrote:
How do you get the 50% chance? There is a 100%
chance of a mind waking up who has been uploaded, and
also a 100% chance of a mind waking up who hasn't.
This doesn't violate the laws of probability because
these aren't mutually exclusive. Asking which one was
you is silly, because we're assuming
--- Niels-Jeroen Vandamme
[EMAIL PROTECTED] wrote:
This is a textbook case of what Eliezer calls
"worshipping a sacred mystery." People tend to act
like a theoretical problem is some kind of God,
something above them in the social order, and since
it's beaten others before you it would be
--- Randall Randall [EMAIL PROTECTED]
wrote:
On Jun 28, 2007, at 5:18 PM, Tom McCabe wrote:
How do you get the 50% chance? There is a 100%
chance of a mind waking up who has been uploaded,
and
also a 100% chance of a mind waking up who hasn't.
This doesn't violate the laws
--- Randall Randall [EMAIL PROTECTED]
wrote:
On Jun 28, 2007, at 7:35 PM, Tom McCabe wrote:
You're assuming again that consciousness is
conserved.
I have no idea why you think so. I would say that
I think that each copy is conscious only of their
own particular existence, and if that's
--- Stathis Papaioannou [EMAIL PROTECTED] wrote:
On 29/06/07, Charles D Hixson
[EMAIL PROTECTED] wrote:
Yes, you would live on in one of the copies as
if uploaded, and yes
the selection of which copy would be purely
random, dependent on the
relative frequency of each copy (you can
--- Alan Grimes [EMAIL PROTECTED] wrote:
;)
Seriously now, why do people insist there is a
necessary connection (as
in A implies B) between the singularity and brain
uploading?
Why is it that anyone who thinks the singularity
happens and most
people remain humanoid is automatically
--- Stathis Papaioannou [EMAIL PROTECTED] wrote:
On 29/06/07, Niels-Jeroen Vandamme
Personally, I do not believe in coincidence.
Everything in the universe
might seem stochastic, but it all has a logical
explanation. I believe the
same applies to quantum chaos, though quantum
--- Stathis Papaioannou [EMAIL PROTECTED] wrote:
On 29/06/07, Tom McCabe [EMAIL PROTECTED]
wrote:
I think
it works better to look at it from the perspective
of
the guy doing the upload rather than the guy being
uploaded. If you magically inserted yourself into
the
brain of a copy
--- Randall Randall [EMAIL PROTECTED]
wrote:
On Jun 28, 2007, at 9:08 PM, Tom McCabe wrote:
--- Randall Randall [EMAIL PROTECTED]
wrote:
On Jun 28, 2007, at 7:35 PM, Tom McCabe wrote:
You're assuming again that consciousness is
conserved.
I have no idea why you think so. I would say
Not so much anesthetic as liquid helium, I think,
to be quadruply sure that all brain activity has
stopped and the physical self and virtual self don't
diverge. People do have brain activity even while
unconscious.
- Tom
--- Jey Kottalam [EMAIL PROTECTED] wrote:
On 6/25/07, Papiewski, John
Because otherwise it would be a copy and not a
transfer. Transfer implies that it is moved from one
place to another and so only one being can exist when
the process is finished.
- Tom
--- Jey Kottalam [EMAIL PROTECTED] wrote:
On 6/25/07, Matt Mahoney [EMAIL PROTECTED]
wrote:
You can
(sigh) That's not the point. What Gene Roddenberry
thought, and whether Star Trek is real or not, are
totally irrelevant to the ethical issue of whether
transportation would be a good thing, and how it
should be done to minimize any possible harmful
effects.
- Tom
--- Colin Tate-Majcher [EMAIL
--- Eugen Leitl [EMAIL PROTECTED] wrote:
On Mon, Jun 25, 2007 at 11:53:09PM -0700, Tom McCabe
wrote:
Not so much anesthetic as liquid helium, I
think,
How about 20-30 sec of stopped blood flow. Instant
flat EEG. Or, hypothermia. Or, anaesthesia (barbies
are nice)
This is human life
--- Michael LaTorra [EMAIL PROTECTED] wrote:
Bill Hibbard (author of _Super-Intelligent Machines_
and researcher in the
Machine Intelligence Project at the U. of Wisconsin)
wrote (see
http://www.ssec.wisc.edu:80/~billh/visfiles.html):
Currently, according to theory, every pair of
people
These questions, although important, have little to do
with the feasibility of FAI. I think we can all agree
that the space of possible universe configurations
without sentient life of *any kind* is vastly larger
than the space of possible configurations with
sentient life, and designing an AGI to
--- Samantha Atkins [EMAIL PROTECTED] wrote:
On Jun 21, 2007, at 8:14 AM, Tom McCabe wrote:
We can't know it in the sense of a mathematical
proof, but it is a trivial observation that out of
the
bazillions of possible ways to configure matter,
only
a ridiculously tiny fraction
--- Panu Horsmalahti [EMAIL PROTECTED] wrote:
An AGI is not selected at random from all possible
minds; it is designed
by humans, therefore you can't apply the probability
from the assumption
that most AIs are unfriendly.
True; there is likely some bias towards Friendliness
in AIs
--- Matt Mahoney [EMAIL PROTECTED] wrote:
--- Mike Tintner [EMAIL PROTECTED] wrote:
Perhaps you've been through this - but I'd like to
know people's ideas about
what exact physical form a Singularitarian or
near-Singul. AGI will take. And
I'd like to know people's automatic
--- Eugen Leitl [EMAIL PROTECTED] wrote:
On Thu, Jun 14, 2007 at 11:05:23PM -0300, Lúcio de
Souza Coelho wrote:
Check your energetics. Asteroid mining is
promising for space-based
construction. Otherwise you'd better at least
have controllable fusion
rockets.
It is quite useful
is roughly inversely
proportional to its size, because inertia goes up with
r^3 while surface area (and hence drag) only goes up
with r^2.
- Tom
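To make the scaling explicit (a sketch, assuming the
quantity being discussed is the deceleration a body of
radius r suffers from atmospheric drag):
a = F_drag / m ∝ r^2 / r^3 = 1/r, so the larger the
object, the smaller the fraction of its velocity the
atmosphere can strip away.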
--- Lúcio de Souza Coelho [EMAIL PROTECTED] wrote:
On 6/15/07, Tom McCabe [EMAIL PROTECTED]
wrote:
(...)
Also, simply crashing an asteroid onto the planet
--- Lúcio de Souza Coelho [EMAIL PROTECTED] wrote:
On 6/15/07, Tom McCabe [EMAIL PROTECTED]
wrote:
How exactly do you control a megaton-size hunk of
metal flying through the air at 10,000+ m/s?
Clarifying this point on speed, in my view the
asteroid would not hit
Earth directly. Instead
That would be nice, but unfortunately it's
unrealistic. Just look at what medical science has
done over the past millennium:
1. Totally wiped out smallpox, a huge killer.
2. Effectively wiped out many more diseases, such as
measles, mumps, rubella, typhus, diphtheria, cholera,
tetanus and many
--- Michael Anissimov [EMAIL PROTECTED]
wrote:
On 6/7/07, Eugen Leitl [EMAIL PROTECTED] wrote:
You've been sounding like a broken record for a
while. It's because
speed kills. What or who is doing the killing is
not important.
Who needs politeness or respect for your fellow man
when
--- Charles D Hixson [EMAIL PROTECTED]
wrote:
Tom McCabe wrote:
--- Eugen Leitl [EMAIL PROTECTED] wrote:
On Tue, Jun 05, 2007 at 01:24:04PM -0700, Tom
McCabe
wrote:
Unless, of course, that human turns out to be
evil
and
That's why you need to screen them
--- Eugen Leitl [EMAIL PROTECTED] wrote:
On Thu, Jun 07, 2007 at 07:24:32AM -0700, Michael
Anissimov wrote:
You've been sounding like a broken record for a
while. It's because
speed kills. What or who is doing the killing is
not important.
Who needs politeness or respect for your
Unless, of course, that human turns out to be evil and
proceeds to use his power to create The Holocaust Part
II. Seriously, out of all the people in positions of
power, a very large number are nasty jerks who abuse
that power. I can't think of a single great world
power that has not committed
<1980>Of all the work done on computers over the past
forty years, why don't we have a hard drive or even a
description of a hard drive that can store as much
information as even a child? And here people are
talking about a gigantic worldwide knowledge database
in 20 years. Feh!</1980>
- Tom
---
So there's your problem! You're demanding a system
that works, however badly. Any computer programmer can
tell you that you will not get a system that works at
all without doing a large percentage of the work
needed to implement a system that works *well*. So you
can see a model of the human
Of those that do, 80% don't believe that artificial,
human-level intelligence is possible - either ever, or
for a long, long time.
Does this apply to other futuristic technologies, like
interstellar travel, nanotechnology or genetics? I
remember a professor of nanoengineering's speech, in
which
--- Papiewski, John [EMAIL PROTECTED] wrote:
I disagree. If even a half-baked, partial, buggy,
slow simulation of a
human mind were available the captains of industry
would jump on it in a
second.
True. But then again, the first half-baked, partial,
buggy, slow HTML browser came out in
--- Lúcio de Souza Coelho [EMAIL PROTECTED] wrote:
On 6/4/07, Tom McCabe [EMAIL PROTECTED]
wrote:
So there's your problem! You're demanding a
system
that works, however badly. Any computer programmer
can
tell you that you will not get a system that works
at
all without doing
--- Eugen Leitl [EMAIL PROTECTED] wrote:
On Mon, May 14, 2007 at 08:21:45PM -0700, Tom McCabe
wrote:
Hmmm, this is true. However, if these techniques
were
powerful enough to design new, useful AI
algorithms,
why is writing algorithms almost universally done
by
programmers instead
Thank you for that. It would be an interesting problem
to build a box AGI without morality, which
paperclips everything within a given radius of some
fixed position and then stops without disturbing the
matter outside. It would obviously be far simpler to
build such an AGI than a true FAI, and it
needs a
suitable supercomputer?
- Tom
--- Matt Mahoney [EMAIL PROTECTED] wrote:
--- Tom McCabe [EMAIL PROTECTED] wrote:
--- Matt Mahoney [EMAIL PROTECTED] wrote:
Personally, I would experiment with
neural language models that I can't currently
implement because I lack the
computing
--- Matt Mahoney [EMAIL PROTECTED] wrote:
--- Eugen Leitl [EMAIL PROTECTED] wrote:
On Mon, May 14, 2007 at 06:05:01AM -0700, Matt
Mahoney wrote:
I assumed you knew that the human brain has a
volume of 1000 to 1500 cm^3.
If
you divide this among 10^5 processors then each
--- Matt Mahoney [EMAIL PROTECTED] wrote:
--- Tom McCabe [EMAIL PROTECTED] wrote:
You cannot get large amounts of computing power
simply
by hooking up a hundred thousand PCs for problems
that
are not easily parallelized, because you very
quickly
run into bandwidth limitations
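The message gives no figures, but the familiar
Amdahl's-law bound captures the limit for poorly
parallelizable workloads: with parallelizable fraction
p and N machines,
Speedup(N) = 1 / ((1 - p) + p/N) <= 1 / (1 - p).
For example, at p = 0.9 even an unlimited number of
networked PCs yields at most a 10x speedup, before
counting any bandwidth overhead.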
--- Matt Mahoney [EMAIL PROTECTED] wrote:
--- Tom McCabe [EMAIL PROTECTED] wrote:
--- Matt Mahoney [EMAIL PROTECTED] wrote:
Language and vision are prerequisites to AGI.
No, they aren't, unless you care to suggest that
someone with a defect who can't see and can't form
--- Matt Mahoney [EMAIL PROTECTED] wrote:
--- Tom McCabe [EMAIL PROTECTED] wrote:
--- Matt Mahoney [EMAIL PROTECTED] wrote:
--- Tom McCabe [EMAIL PROTECTED] wrote:
You cannot get large amounts of computing
power
simply
by hooking up a hundred thousand PCs
--- Matt Mahoney [EMAIL PROTECTED] wrote:
I posted some comments on DIGG and looked at the
videos by Thiel and
Yudkowsky. I'm not sure I understand the push to
build AGI with private
donations when companies like Google are already
pouring billions into the
problem.
Private companies
Saying that X or Y could be evidence of a simulation
is silly. Why would X be more likely to be evidence of
a simulation than ~X? Seeing as how anyone who could
design anything so sophisticated could easily pick
either for most Xs, and since we know nothing about
their motivations, we're rather
Why would a simulating alien race want to create a
universe with fluctuating constants as opposed to
fixed constants? To drop us a subtle hint? Why a
subtle hint, and not an obvious hint or no hint at
all?
--- Matt Mahoney [EMAIL PROTECTED] wrote:
--- Eugen Leitl [EMAIL PROTECTED] wrote: