2008/6/22 William Pearson [EMAIL PROTECTED]:
2008/6/22 Vladimir Nesov [EMAIL PROTECTED]:
Well, since intelligence explosions haven't happened previously in our
light cone, it can't be a simple physical pattern
Probably the last intelligence explosion - a relatively rapid
increase in the degree
2008/6/23 Bob Mottram [EMAIL PROTECTED]:
2008/6/22 William Pearson [EMAIL PROTECTED]:
2008/6/22 Vladimir Nesov [EMAIL PROTECTED]:
Well, since intelligence explosions haven't happened previously in our
light cone, it can't be a simple physical pattern
Probably the last intelligence explosion -
On Mon, Jun 23, 2008 at 12:50 AM, William Pearson [EMAIL PROTECTED] wrote:
2008/6/22 Vladimir Nesov [EMAIL PROTECTED]:
Two questions:
1) Do you know enough to estimate which scenario is more likely?
Well, since intelligence explosions haven't happened previously in our
light cone, it can't
On 6/23/08, Matt Mahoney [EMAIL PROTECTED] wrote:
--- On Sun, 6/22/08, Kaj Sotala [EMAIL PROTECTED] wrote:
On 6/21/08, Matt Mahoney [EMAIL PROTECTED] wrote:
Eliezer asked a similar question on SL4. If an agent
flips a fair quantum coin and is copied 10 times if it
comes up heads,
Philosophically, intelligence explosion in the sense being discussed
here is akin to ritual magic - the primary fallacy is the attribution
to symbols alone of powers they simply do not possess.
The argument is that an initially somewhat intelligent program A can
generate a more intelligent
Russell: The mistake of trying to reach truth by pure armchair thought was
understandable in ancient Greece. We now know better. So attractive as the
image of a Transcendent Power popping out of a basement may be to us geeks,
it doesn't have anything to do with reality. Making smarter machines
On Mon, Jun 23, 2008 at 5:22 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
If we step back and think about it, we really knew this already. In
every case where humans, machines or biological systems exhibit
anything that could be called an intelligence improvement - biological
evolution, a
On Mon, Jun 23, 2008 at 3:43 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
We are very inefficient in processing evidence; there is plenty of
room at the bottom in this sense alone. Knowledge doesn't come from
just feeding the system with data - try to read machine learning
textbooks to a chimp,
2008/6/23 Vladimir Nesov [EMAIL PROTECTED]:
On Mon, Jun 23, 2008 at 12:50 AM, William Pearson [EMAIL PROTECTED] wrote:
2008/6/22 Vladimir Nesov [EMAIL PROTECTED]:
Two questions:
1) Do you know enough to estimate which scenario is more likely?
Well, since intelligence explosions haven't
On Mon, Jun 23, 2008 at 6:52 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
On Mon, Jun 23, 2008 at 3:43 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
We are very inefficient in processing evidence; there is plenty of
room at the bottom in this sense alone. Knowledge doesn't come from
just feeding
Abram Demski wrote:
To be honest, I am not completely satisfied with my conclusion on the
post you refer to. I'm not so sure now that the fundamental split
between logical/messy methods should occur at the line between perfect and
approximate methods. This is one type of messiness, but only one. I
Since combinatorial search problems are so common in artificial
intelligence, such an algorithm has obvious applications. If it can be
made, it seems like it could be used *everywhere* inside an AGI:
deduction (solve for cases consistent with constraints), induction
(search for the best model),
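A minimal sketch of the idea, assuming details the excerpt truncates: one
generic backtracking search serving both deduction (enumerate assignments
consistent with constraints) and induction (score the consistent assignments
and keep the best model). The variables, constraint, and scoring rule below
are toy assumptions for illustration, not anything specified in the thread:

    def backtrack(variables, domains, consistent, assignment=None):
        """Yield every complete assignment accepted by `consistent`."""
        if assignment is None:
            assignment = {}
        if len(assignment) == len(variables):
            yield dict(assignment)
            return
        var = variables[len(assignment)]
        for value in domains[var]:
            assignment[var] = value
            if consistent(assignment):  # prune inconsistent branches early
                yield from backtrack(variables, domains, consistent, assignment)
            del assignment[var]

    # "Deduction": solve for cases consistent with a constraint (x + y == z).
    variables = ["x", "y", "z"]
    domains = {v: range(4) for v in variables}

    def consistent(a):
        return "z" not in a or a["x"] + a["y"] == a["z"]

    solutions = list(backtrack(variables, domains, consistent))

    # "Induction": the same search output, now scored; keep the best "model"
    # under a toy scoring function (here, the largest z).
    best = max(solutions, key=lambda a: a["z"])
    print(len(solutions), best)

The point of the sketch is only that the same enumerate-and-prune loop
underlies both readings; anything at AGI scale would of course need far
better pruning and ordering heuristics.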
On Mon, Jun 23, 2008 at 4:34 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
On Mon, Jun 23, 2008 at 6:52 PM, Russell Wallace
Indeed, but becoming more efficient at processing evidence is
something that requires being embedded in the environment to which the
evidence pertains.
Why is that?
For
William Pearson wrote:
While SIAI fills that niche somewhat, it concentrates on the
intelligence explosion scenario. Is there a sufficient group of
researchers/thinkers with a shared vision of the future of AI coherent
enough to form an organisation? This organisation would discuss,
explore and
On Mon, Jun 23, 2008 at 7:52 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
On Mon, Jun 23, 2008 at 4:34 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
On Mon, Jun 23, 2008 at 6:52 PM, Russell Wallace
Indeed, but becoming more efficient at processing evidence is
something that requires being
On Mon, Jun 23, 2008 at 5:22 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
But it can just work with a static corpus. When you need to figure out
efficient learning, you only need to know a little about the overall
structure of your data (which can be described by a reasonably small
number of
On Mon, Jun 23, 2008 at 8:32 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
On Mon, Jun 23, 2008 at 5:22 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
But it can just work with a static corpus. When you need to figure out
efficient learning, you only need to know a little about the overall
On Mon, Jun 23, 2008 at 5:58 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
On Mon, Jun 23, 2008 at 8:32 PM, Russell Wallace
Why do you think that? All the evidence is to the contrary - the
examples we have of figuring out efficient learning, from evolution to
childhood play to formal education
Thanks for the comments. My replies:
It does happen to be the case that I
believe that logic-based methods are mistaken, but I could be wrong about
that, and it could turn out that the best way to build an AGI is with a
completely logic-based AGI, along with just one small mechanism that
On Mon, Jun 23, 2008 at 9:35 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
On Mon, Jun 23, 2008 at 5:58 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
Evidence is an indication that depends on the
referred event: evidence is there when the referred event is there, but
evidence is not there when
Vlad,
You seem to be arguing in a logical vacuum in denying that evidence is
essential to most real-world problem-solving.
Let's keep it real, bro.
Science - bear in mind science deals with every part of the world - from the
cosmos to the earth to living organisms, animals, humans,
On Mon, Jun 23, 2008 at 8:48 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
There are only evolution-built animals, which is a very limited
repertoire of intelligences. You are saying that if no apple tastes
like a banana, then no fruit tastes like a banana, not even a banana.
I'm saying if no
--- On Mon, 6/23/08, Kaj Sotala [EMAIL PROTECTED] wrote:
a) Perform the experiment several times. If, on any of the trials,
copies are created, then have all of them partake in the next trial as
well, flipping a new coin and possibly being duplicated again (and
quickly leading to an
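The quoted procedure is concrete enough to simulate. A small sketch under
explicit assumptions (the excerpt breaks off before the details): each
existing copy flips an independent fair coin per trial; heads replaces that
copy with 10 copies (one reading of "copied 10 times"), tails leaves it
unchanged. Under those assumptions the expected multiplication per trial is
0.5 * 10 + 0.5 * 1 = 5.5, which is presumably the rapid growth the snippet
breaks off describing. The function name and parameters are illustrative:

    import random

    def run_trials(trials, copies_on_heads=10, seed=0):
        """Simulate the duplication experiment; return the final population."""
        random.seed(seed)
        population = 1
        for t in range(trials):
            # Each current copy flips its own fair coin.
            heads = sum(random.random() < 0.5 for _ in range(population))
            tails = population - heads
            # Assumption: a heads-flipper is replaced by `copies_on_heads` copies.
            population = copies_on_heads * heads + tails
            print(f"trial {t + 1}: population = {population}")
        return population

    run_trials(5)

Varying the reading (e.g. heads adds 10 copies rather than replacing the
original, a factor of 0.5 * 11 + 0.5 * 1 = 6) changes the growth rate but
not the qualitative explosion.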
Loosemore said,
It is very important to understand that the paper I wrote was about the
methodology of AGI research, not about specific theories/models/systems
within AGI. It is about the way that we come up with ideas for systems
and the way that we explore those systems, not about the
On Tue, Jun 24, 2008 at 1:29 AM, Russell Wallace
[EMAIL PROTECTED] wrote:
On Mon, Jun 23, 2008 at 8:48 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
There are only evolution-built animals, which is a very limited
repertoire of intelligences. You are saying that if no apple tastes
like a banana,
Russell: quite a few very smart people
(myself among them) have tried hard to design something that could
enhance its intelligence divorced from the real world, and all such
attempts have failed. Obviously I can't _prove_ the impossibility of this -
in the same way
that I can't prove the
On Mon, Jun 23, 2008 at 11:57 PM, Mike Tintner [EMAIL PROTECTED] wrote:
Oh yes, it can be proven. It requires an extended argument to do so
properly, which I won't attempt here.
Fair enough, I'd be interested to see your attempted proof if you ever
get it written up.
I just realised - how can you really understand what I'm talking about -
without supplementary images/evidence?
So here's simple evidence - look at the following photo - and note that you
can distinguish each individual in it immediately. And you can only do it
imagistically. No maths, no
Abram Demski wrote:
Thanks for the comments. My replies:
It does happen to be the case that I
believe that logic-based methods are mistaken, but I could be wrong about
that, and it could turn out that the best way to build an AGI is with a
completely logic-based AGI, along with just one
Jim Bromer wrote:
Loosemore said,
It is very important to understand that the paper I wrote was about the
methodology of AGI research, not about specific theories/models/systems
within AGI. It is about the way that we come up with ideas for systems
and the way that we explore those systems,
Andy,
This is a PERFECT post, because it so perfectly illustrates a particular
point of detachment from reality that is common among AGIers. In the real
world we do certain things to achieve a good result, but when we design
politically correct AGIs, we banish the very logic that allows us to
On Jun 23, 2008, at 7:53 PM, Steve Richfield wrote:
Andy,
The use of diminutives is considered rude in many parts of anglo-culture
if the individual does not use it to identify themselves,
though I realize it is common practice in some regions of the US. When
in doubt, use the given