On Sat, Apr 26, 2008 at 7:48 AM, Thomas McCabe [EMAIL PROTECTED] wrote:
On Thu, Apr 24, 2008 at 3:16 AM, Samantha Atkins [EMAIL PROTECTED] wrote:
Thomas McCabe wrote:
Popularity is irrelevant.
Popularity, of course, is not the ultimate judge of accuracy. But are
you seriously claiming
J. Andrew Rogers wrote:
On Apr 6, 2008, at 9:38 AM, Ben Goertzel wrote:
That's surely part of it ... but investors have put big $$ into much
LESS
mature projects in areas such as nanotech and quantum computing.
This is because nanotech and quantum computing can be readily and
easily
Ben Goertzel wrote:
Much of this discussion is very abstract, which is, I guess, how you
think about these issues when you don't have a specific AGI design in mind.
My view is a little different.
If the Novamente design is basically correct, there's no way it can possibly
take thousands or
Mike Tintner wrote:
Samantha: From what you said above, $50M will do the entire job. If
that is all
that is standing between us and AGI then surely we can get on with it in
all haste.
Oh for gawdsake, this is such a tedious discussion. I would suggest
the following is a reasonable *framework*
On Apr 9, 2008, at 12:33 PM, Derek Zahn wrote:
Matt Mahoney writes:
Just what do you want out of AGI? Something that thinks like a
person or
something that does what you ask it to?
The 'or' is interesting. If it really thinks like a person, and at
least at a human level, then I doubt very
Arguably many of the problems of Vista, including its legendary
slippages, were the direct result of having thousands of merely human
programmers involved. That complex monkey interaction is enough to
kill almost anything interesting. *shudder*
- samantha
Panu Horsmalahti wrote:
Just because
Richard Loosemore wrote:
John K Clark wrote:
And I will define consciousness just as soon as you define "define".
Ah, but that is exactly my approach.
Thus, the subtitle I gave to my 2006 conference paper was "Explaining
Consciousness by Explaining That You Cannot Explain It, Because Your
On Jan 28, 2008, at 6:43 AM, Mike Tintner wrote:
Stathis: Are you simply arguing that an embodied AI that can
interact with the
real world will find it easier to learn and develop, or are you
arguing that there is a fundamental reason why an AI can't develop in
a purely virtual
WTF does this have to do with AGI or Singularity? I hope the AGI
gets here soon. We Stupid Monkeys get damn tiresome.
- samantha
On Jan 29, 2008, at 7:06 AM, gifting wrote:
On 29 Jan 2008, at 14:13, Vladimir Nesov wrote:
On Jan 29, 2008 11:49 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
On Jan 27, 2008, at 6:18 AM, Mike Tintner wrote:
Samantha: MT: You've been fooled by the puppet. It doesn't work
without the puppeteer. Samantha: What's that, élan vital, a soul, a
consciousness that is independent of the puppet?
It's significant that you make quite the wrong
On Jan 26, 2008, at 2:36 PM, Mike Tintner wrote:
Gudrun: I am an artist who is interested in science, in utopia and
seemingly
impossible
projects. I also came across a lot of artists with OC traits. ...
The OCAP, actually the obsessive compulsive 'arctificial' project ..
These new OCA
On Jan 26, 2008, at 3:59 PM, Mike Tintner wrote:
Ben,
Thanks for the reply. I think, though, that Samantha may be more
representative - i.e., most here simply aren't interested in
non-computer alternatives. Which is fine.
The Singularity Institute exists for one purpose. That I point that
On Jan 26, 2008, at 6:07 PM, Mike Tintner wrote:
Tom: A computer is not disembodied any more than you are. Silicon,
as a
substrate, is fully equivalent to biological neurons in terms of
theoretical problem-solving ability.
You've been fooled by the puppet. It doesn't work without the
On Jan 25, 2008, at 10:14 AM, Natasha Vita-More wrote:
The idea of useless technology is developed in wearables more than
in bioart. Steve's perspective is more political than artistic with
regard to uselessness, don't you think? My paper, which includes an
interview with him, is
Alan Grimes wrote:
If it's a problem, may I suggest you use a more user-friendly
terminal such as gnome-terminal or konsole. They have profiles that
can be edited through the GUI.
Not a bad suggestion, lemme see if my distro will let me kill Xterm...
crap, it's depended on by xinit, which
Tom McCabe wrote:
--- Samantha Atkins [EMAIL PROTECTED] wrote:
Out of the bazillions of possible ways to configure matter, only a
ridiculously tiny fraction are more intelligent than a cockroach. Yet
it did not take any grand design effort upfront to arrive at a world
overrun when
Colin Tate-Majcher wrote:
When you talk about uploading, are you referring to creating a copy
of your consciousness? If that's the case, then what do you do after
uploading - continue on with a mediocre existence while your
cyber-duplicate shoots past you? Sure, it would have all of those
Alan Grimes wrote:
Samantha Atkins wrote:
Alan Grimes wrote:
Available computing power doesn't yet match that of the human brain,
but I see your point,
What makes you so sure of that?
It has been computed countless times here and elsewhere, as I am sure
you
Tom McCabe wrote:
--- Samantha Atkins [EMAIL PROTECTED] wrote:
Tom McCabe wrote:
--- Samantha Atkins [EMAIL PROTECTED] wrote:
Out of the bazillions of possible ways to configure matter, only a
ridiculously tiny fraction are more intelligent than
Sergey A. Novitsky wrote:
Dear all,
Perhaps the questions below have already been touched on numerous
times in the past.
Could someone kindly point to discussion threads and/or articles where
these concerns were addressed or discussed?
Kind regards,
Serge
Charles D Hixson wrote:
Stathis Papaioannou wrote:
Available computing power doesn't yet match that of the human brain,
but I see your point: software (in general) isn't getting better
nearly as quickly as hardware is getting better.
Well, not at the personally accessible level. I understand
Alan Grimes wrote:
Available computing power doesn't yet match that of the human brain,
but I see your point,
What makes you so sure of that?
It has been computed countless times here and elsewhere, as I am sure
you are aware, so why do you ask?
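For anyone who hasn't seen it, a common back-of-envelope version of
that calculation (the constants below are rough order-of-magnitude
figures, assumptions on my part rather than numbers from this thread)
goes like this:

# Back-of-envelope estimate of the brain's raw processing rate.
# All constants are order-of-magnitude assumptions, not measurements.
NEURONS = 1e11              # ~10^11 neurons in a human brain
SYNAPSES_PER_NEURON = 1e3   # ~10^3-10^4 synapses per neuron (low end)
FIRING_RATE_HZ = 1e2        # ~10^2 signals per second, generously

ops_per_sec = NEURONS * SYNAPSES_PER_NEURON * FIRING_RATE_HZ
print(f"~{ops_per_sec:.0e} synaptic events/sec")   # prints ~1e+16

On those assumptions the brain comes out around 10^16 synaptic events
per second, which is why commodity hardware of the time was usually
judged to fall short.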
On Jun 21, 2007, at 8:14 AM, Tom McCabe wrote:
We can't know it in the sense of a mathematical
proof, but it is a trivial observation that out of the
bazillions of possible ways to configure matter, only
a ridiculously tiny fraction are Friendly, and so it
is highly unlikely that a selected
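A toy numerical sketch of that counting argument (my own illustration:
"acceptable" below is an arbitrary narrow stand-in predicate, not a
definition of Friendliness): if acceptable outcomes are a narrow
predicate on configurations, the odds of hitting one without selection
pressure fall off exponentially with configuration size.

# Toy model: configurations are n-bit strings; "acceptable" means one
# arbitrary target configuration (a stand-in assumption).
def p_acceptable(n_bits: int) -> float:
    # 1 acceptable configuration out of 2**n_bits possibilities
    return 2.0 ** -n_bits

for n in (10, 50, 100):
    print(f"n={n:3d} bits: P = {p_acceptable(n):.1e}")
# n=100 already gives P ~ 7.9e-31; an unconstrained selection
# essentially never hits it.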
Mike Tintner wrote:
Perhaps you've been through this - but I'd like to know people's ideas
about what exact physical form a Singularitarian or near-Singularity
AGI will take. And I'd like to know people's automatic associations
even if they don't have thought-through ideas - just what does a
While I have my own doubts about Eliezer's approach and likelihood of
success and about the extent of his biases and limitations, I don't
consider it fruitful to continue to bash Eliezer on various lists
once you feel seriously slighted by him or convinced that he is
hopelessly mired or
On May 29, 2007, at 11:36 AM, Jonathan H. Hinck wrote:
Indeed, displacement of the human labor force has been underway since
the beginning of the industrial revolution (if not before). This is
the definition of technology. And, indeed, the jump from a labor-based
to an automation-based economy
On May 29, 2007, at 4:22 PM, Jonathan H. Hinck wrote:
But does there need to be consensus among the experts for a public
issue to be raised? Regarding other topics that have been on the
public discussion plate for a while, how often has this been the
case? Perhaps with regard to issues
Shane Legg wrote:
http://www.youtube.com/watch?v=WGoi1MSGu64
Which got me thinking. It seems reasonable to think that killing a
human is worse than killing a mouse because a human is more
intelligent/complex/conscious/...etc... (use whatever measure you
prefer) than a mouse.
So, would
Keith Elis wrote:
Shane Legg wrote:
If a machine was more intelligent/complex/conscious/...etc... than
all of humanity combined, would killing it be worse than killing all of
humanity?
You're asking a rhetorical question but let's just get the
On May 28, 2007, at 3:32 PM, Russell Wallace wrote:
On 5/28/07, Shane Legg [EMAIL PROTECTED] wrote:
If one accepts that there is, then the question becomes:
Where should we put a superhumanly intelligent machine
on the list? If it's not at the top, then where is it and why?
I don't claim to
On May 28, 2007, at 4:29 PM, Joel Pitt wrote:
On 5/29/07, Keith Elis [EMAIL PROTECTED] wrote:
In the end, my advice is pragmatic: anytime you post publicly on
topics such as these, where the stakes are very, very high, ask
yourself: Can I be taken out of context here? Is this position,
On May 26, 2007, at 4:16 AM, John Ku wrote:
I think maximization of negative entropy is a poor goal to have.
Although life perhaps has some intrinsic value, I think the primary
thing we care about is not life per se, but beings that are conscious
and capable of well-being. Under your
On Oct 23, 2006, at 7:39 AM, Ben Goertzel wrote:
Michael,
I think your summary of the situation is in many respects accurate,
but an interesting aspect you don't mention has to do with the
disclosure of technical details...
In the case of Novamente, we have sufficient academic
On Oct 20, 2006, at 2:14 AM, Michael Anissimov wrote:
Sometimes, Samantha, it seems like you have little faith in any
possible form of intelligence, and that the only way for one to be
safe/happy is to be isolated from everything. I sometimes get this
impression from libertarians (not to say that I'm
On Oct 17, 2006, at 2:45 PM, Michael Anissimov wrote:
Mike,
On 10/10/06, deering [EMAIL PROTECTED] wrote:
Going beyond the definition of Singularity, we can make some
educated guesses about the most likely conditions under which the
Singularity will occur.
Due to technological synergy,