Linas,
On 7/7/08, Linas Vepstas [EMAIL PROTECTED] wrote:
Thus, I personally conclude that:
1) the singularity has already happened
2) it was explosive
3) we are living in a simulation, created by the singularity,
in order to better understand what the hell just happened.
4) It's turtles all
Linas Vepstas wrote:
Reposting, sorry if this is a dupe.
--linas
-- Forwarded message --
2008/6/22 William Pearson [EMAIL PROTECTED]:
Well since intelligence explosions haven't happened previously in our
light cone, it can't be a simple physical pattern, so I think
Reposting, sorry if this is a dupe.
--linas
-- Forwarded message --
2008/6/22 William Pearson [EMAIL PROTECTED]:
Well since intelligence explosions haven't happened previously in our
light cone, it can't be a simple physical pattern, so I think
non-exploding intelligences
2008/6/22 William Pearson [EMAIL PROTECTED]:
Well since intelligence explosions haven't happened previously in our
light cone, it can't be a simple physical pattern, so I think
non-exploding intelligences have the evidence for being simpler on
their side.
Familiar with Bostrom's simulation
On Sun, Jun 22, 2008 at 10:12 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
I find the absence of such models troubling. One problem is that there are no
provably hard problems. Problems like tic-tac-toe and chess are known to be
easy, in the sense that they can be fully analyzed with sufficient
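To make that sense of "easy" concrete: tic-tac-toe's full game tree (fewer than 9! = 362,880 move sequences) can be exhaustively searched by plain minimax. A minimal illustrative sketch, not taken from the thread (the board encoding and function names are assumptions):

from functools import lru_cache

# All eight winning lines on a 3x3 board indexed 0..8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    # Game-theoretic value from X's point of view: +1 X wins, 0 draw, -1 O wins.
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    if '.' not in board:
        return 0
    nxt = 'O' if player == 'X' else 'X'
    results = [value(board[:i] + player + board[i + 1:], nxt)
               for i, cell in enumerate(board) if cell == '.']
    return max(results) if player == 'X' else min(results)

print(value('.' * 9, 'X'))   # prints 0: optimal play from the empty board is a draw

Chess admits the same kind of exhaustive analysis in principle; only the amount of computation required differs.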
--- On Wed, 6/25/08, Abram Demski [EMAIL PROTECTED] wrote:
On Sun, Jun 22, 2008 at 10:12 PM, Matt Mahoney
[EMAIL PROTECTED] wrote:
I find the absence of such models troubling. One
problem is that there are no provably hard problems.
Problems like tic-tac-toe and chess are known to be
On 6/23/08, William Pearson [EMAIL PROTECTED] wrote:
The base beliefs shared between the group would be something like
- The entities will not have goals/motivations inherent to their
form. That is, robots aren't likely to band together to fight humans,
or try to take over the world for
2008/6/22 William Pearson [EMAIL PROTECTED]:
2008/6/22 Vladimir Nesov [EMAIL PROTECTED]:
Well since intelligence explosions haven't happened previously in our
light cone, it can't be a simple physical pattern
Probably the last intelligence explosion - a relatively rapid
increase in the degree
2008/6/23 Bob Mottram [EMAIL PROTECTED]:
2008/6/22 William Pearson [EMAIL PROTECTED]:
2008/6/22 Vladimir Nesov [EMAIL PROTECTED]:
Well since intelligence explosions haven't happened previously in our
light cone, it can't be a simple physical pattern
Probably the last intelligence explosion -
On Mon, Jun 23, 2008 at 12:50 AM, William Pearson [EMAIL PROTECTED] wrote:
2008/6/22 Vladimir Nesov [EMAIL PROTECTED]:
Two questions:
1) Do you know enough to estimate which scenario is more likely?
Well since intelligence explosions haven't happened previously in our
light cone, it can't
Philosophically, intelligence explosion in the sense being discussed
here is akin to ritual magic - the primary fallacy is the attribution
to symbols alone of powers they simply do not possess.
The argument is that an initially somewhat intelligent program A can
generate a more intelligent
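Whether such a chain explodes or levels off turns entirely on the assumed per-generation improvement. The toy recurrence below is only an illustration of that dependence (the parameters and the decay model are invented for the sketch, not taken from anyone's argument): compounding gains diverge, diminishing returns converge.

def self_improvement_chain(initial=1.0, gain=1.1, decay=1.0, generations=20):
    # Each generation multiplies "capability" by the current gain; the gain
    # itself shrinks toward 1 by `decay` each step (decay < 1 models
    # diminishing returns, decay = 1 models compounding returns).
    capability, step_gain = initial, gain
    history = [capability]
    for _ in range(generations):
        capability *= step_gain
        step_gain = 1.0 + (step_gain - 1.0) * decay
        history.append(capability)
    return history

print(self_improvement_chain(gain=1.1, decay=1.0)[-1])  # ~6.7: keeps compounding ("explosion")
print(self_improvement_chain(gain=1.1, decay=0.5)[-1])  # ~1.2: levels off (non-exploding)

The sketch decides nothing by itself; it just locates the disagreement in the choice of parameters.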
Russell: The mistake of trying to reach truth by pure armchair thought was
understandable in ancient Greece. We now know better. So attractive as the
image of a Transcendent Power popping out of a basement may be to us geeks,
it doesn't have anything to do with
reality. Making smarter machines
On Mon, Jun 23, 2008 at 5:22 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
If we step back and think about it, we really knew this already. In
every case where humans, machines or biological systems exhibit
anything that could be called an intelligence improvement - biological
evolution, a
On Mon, Jun 23, 2008 at 3:43 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
We are very inefficient at processing evidence; there is plenty of
room at the bottom in this sense alone. Knowledge doesn't come from
just feeding the system with data - try to read machine learning
textbooks to a chimp,
2008/6/23 Vladimir Nesov [EMAIL PROTECTED]:
On Mon, Jun 23, 2008 at 12:50 AM, William Pearson [EMAIL PROTECTED] wrote:
2008/6/22 Vladimir Nesov [EMAIL PROTECTED]:
Two questions:
1) Do you know enough to estimate which scenario is more likely?
Well since intelligence explosions haven't
On Mon, Jun 23, 2008 at 6:52 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
On Mon, Jun 23, 2008 at 3:43 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
We are very inefficient at processing evidence; there is plenty of
room at the bottom in this sense alone. Knowledge doesn't come from
just feeding
On Mon, Jun 23, 2008 at 4:34 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
On Mon, Jun 23, 2008 at 6:52 PM, Russell Wallace
Indeed, but becoming more efficient at processing evidence is
something that requires being embedded in the environment to which the
evidence pertains.
Why is that?
For
William Pearson wrote:
While SIAI fills that niche somewhat, it concentrates on the
Intelligence explosion scenario. Is there a sufficient group of
researchers/thinkers with a shared vision of the future of AI coherent
enough to form an organisation? This organisation would discuss,
explore and
On Mon, Jun 23, 2008 at 7:52 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
On Mon, Jun 23, 2008 at 4:34 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
On Mon, Jun 23, 2008 at 6:52 PM, Russell Wallace
Indeed, but becoming more efficient at processing evidence is
something that requires being
On Mon, Jun 23, 2008 at 5:22 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
But it can just work with a static corpus. When you need to figure out
efficient learning, you only need to know a little about the overall
structure of your data (which can be described by a reasonably small
number of
On Mon, Jun 23, 2008 at 8:32 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
On Mon, Jun 23, 2008 at 5:22 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
But it can just work with a static corpus. When you need to figure out
efficient learning, you only need to know a little about the overall
On Mon, Jun 23, 2008 at 5:58 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
On Mon, Jun 23, 2008 at 8:32 PM, Russell Wallace
Why do you think that? All the evidence is to the contrary - the
examples we have of figuring out efficient learning, from evolution to
childhood play to formal education
On Mon, Jun 23, 2008 at 9:35 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
On Mon, Jun 23, 2008 at 5:58 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
Evidence is an indication that depends on the
referred event: evidence is there when the referred event is there, but
evidence is not there when
Vlad,
You seem to be arguing in a logical vacuum in denying the essential nature
of evidence to most real-world problem-solving.
Let's keep it real, bro.
Science - bear in mind science deals with every part of the world - from the
cosmos to the earth to living organisms, animals, humans,
On Mon, Jun 23, 2008 at 8:48 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
There are only evolution-built animals, which is a very limited
repertoire of intelligences. You are saying that because no apple tastes
like a banana, no fruit tastes like a banana, not even a banana.
I'm saying if no
On Tue, Jun 24, 2008 at 1:29 AM, Russell Wallace
[EMAIL PROTECTED] wrote:
On Mon, Jun 23, 2008 at 8:48 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
There are only evolution-built animals, which is a very limited
repertoire of intelligences. You are saying that because no apple tastes
like a banana,
Russell: quite a few very smart people
(myself among them) have tried hard to design something that could
enhance its intelligence divorced from the real world, and all such
attempts have failed. Obviously I can't _prove_ the impossibility of this -
in the same way
that I can't prove the
On Mon, Jun 23, 2008 at 11:57 PM, Mike Tintner [EMAIL PROTECTED] wrote:
Oh yes, it can be proven. It requires an extended argument to do so
properly, which I won't attempt here.
Fair enough, I'd be interested to see your attempted proof if you ever
get it written up.
While SIAI fills that niche somewhat, it concentrates on the
Intelligence explosion scenario. Is there a sufficient group of
researchers/thinkers with a shared vision of the future of AI coherent
enough to form an organisation? This organisation would discuss,
explore and disseminate what can be
On Sun, Jun 22, 2008 at 8:38 PM, William Pearson [EMAIL PROTECTED] wrote:
While SIAI fills that niche somewhat, it concentrates on the
Intelligence explosion scenario. Is there a sufficient group of
researchers/thinkers with a shared vision of the future of AI coherent
enough to form an
2008/6/22 Vladimir Nesov [EMAIL PROTECTED]:
Two questions:
1) Do you know enough to estimate which scenario is more likely?
Well since intelligence explosions haven't happened previously in our
light cone, it can't be a simple physical pattern, so I think
non-exploding intelligences have the
--- On Sun, 6/22/08, William Pearson [EMAIL PROTECTED] wrote:
From: William Pearson [EMAIL PROTECTED]
Two questions:
1) Do you know enough to estimate which scenario is
more likely?
Well since intelligence explosions haven't happened previously in our
light cone, it can't be a simple