Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Bob Mottram
2008/6/22 William Pearson [EMAIL PROTECTED]:
 2008/6/22 Vladimir Nesov [EMAIL PROTECTED]:
 Well since intelligence explosions haven't happened previously in our
 light cone, it can't be a simple physical pattern

Probably the last intelligence explosion - a relatively rapid
increase in the degree of adaptability an organism is capable of
exhibiting - was the appearance of the first Homo sapiens.  The
number and variety of tools created by Homo sapiens compared to
earlier hominids indicate that this was one of the great leaps forward
in history (probably greatly facilitated by a more elaborate language
ability).


 If you take
 the intelligence explosion scenario seriously you won't write anything
 in public forums that might help other people make AI. As bad/ignorant
 people might get hold of it and cause the first explosion.


I don't fear intelligence, only ignorance.




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread William Pearson
2008/6/23 Bob Mottram [EMAIL PROTECTED]:
 2008/6/22 William Pearson [EMAIL PROTECTED]:
 2008/6/22 Vladimir Nesov [EMAIL PROTECTED]:
 Well since intelligence explosions haven't happened previously in our
 light cone, it can't be a simple physical pattern

 Probably the last intelligence explosion - a relatively rapid
 increase in the degree of adaptability an organism is capable of
 exhibiting - was the appearance of the first Homo sapiens.  The
 number and variety of tools created by Homo sapiens compared to
 earlier hominids indicate that this was one of the great leaps forward
 in history (probably greatly facilitated by a more elaborate language
 ability).

I am using "intelligence explosion" to mean what Eliezer would mean by it. See

http://www.overcomingbias.com/2008/06/optimization-an.html#more

I.e. something never seen on this planet.

I am sceptical of whether such a process is theoretically possible.

Will Pearson




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Vladimir Nesov
On Mon, Jun 23, 2008 at 12:50 AM, William Pearson [EMAIL PROTECTED] wrote:
 2008/6/22 Vladimir Nesov [EMAIL PROTECTED]:


 Two questions:
 1) Do you know enough to estimate which scenario is more likely?

 Well since intelligence explosions haven't happened previously in our
 light cone, it can't be a simple physical pattern, so I think
 non-exploding intelligences have the evidence for being simpler on
 their side.

This message that I'm currently writing hasn't happened previously in
our light cone. By your argument, it is evidence for it being more
difficult to write than to recreate life on Earth and human
intellect, which is clearly false for all practical purposes. You
should state that argument more carefully in order for it to make
sense.


 So we might find them more easily. I also think I have
 solid reasoning to think intelligence exploding is unlikely, which
 requires paper length rather than post length. So I think I do, but
 should I trust my own rationality?

But not too much, especially when the argument is not technical (which
is clearly the case for questions such as this one). If the argument is
sound, you should be able to convince the seed AI crowd too, even against
their confirmation bias. If you can't convince them, then either they
are idiots, or the argument is not good enough, which means that it's
probably wrong, and so you yourself shouldn't place too high stakes on
it.


 Getting a bunch of people together to argue for both paths seems like
 a good bet at the moment.

Yes, if it leads to a good estimate of which methodology is more
likely to succeed.


 2) What does this difference change for research at this stage?

 It changes the focus of research from looking for simple principles of
 intelligence (that can be improved easily on the fly), to one that
 expects intelligence creation to be a societal process over decades.

 It also makes secrecy no longer be the default position. If you take
 the intelligence explosion scenario seriously you won't write anything
 in public forums that might help other people make AI. As bad/ignorant
 people might get hold of it and cause the first explosion.


I agree, but it works only if you know that the answer is correct, and
(which you didn't address, and which is critical for these issues) that
you won't build a doomsday machine as a result of your efforts, even if
this particular path turns out to be more feasible.

If you want to achieve artificial flight, you can start a research
project that will try to figure out the fundamental principles of
flying and will last a thousand years, or you can take a shortcut by
climbing the highest cliff in the world (which is no easy feat either)
and jumping from it, thus achieving limited flight. Even if you have a
good argument that cliff-climbing is a simpler technology than
aerodynamics, choosing to climb is the wrong conclusion.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




[agi] Coin-flipping duplicates (was: Breaking Solomonoff induction (really))

2008-06-23 Thread Kaj Sotala
On 6/23/08, Matt Mahoney [EMAIL PROTECTED] wrote:
 --- On Sun, 6/22/08, Kaj Sotala [EMAIL PROTECTED] wrote:
   On 6/21/08, Matt Mahoney [EMAIL PROTECTED] wrote:
   
Eliezer asked a similar question on SL4. If an agent
   flips a fair quantum coin and is copied 10 times if it
   comes up heads, what should be the agent's subjective
   probability that the coin will come up heads? By the
   anthropic principle, it should be 0.9. That is because if
   you repeat the experiment many times and you randomly
   sample one of the resulting agents, it is highly likely
   that it will have seen heads about 90% of the time.
  
   That's the wrong answer, though (as I believe I pointed out when the
   question was asked over on SL4). The copying is just a red
   herring, it doesn't affect the probability at all.
  
   Since this question seems to confuse many people, I wrote a
   short Python program simulating it:
   http://www.saunalahti.fi/~tspro1/Random/copies.py

 The question was about subjective anticipation, not the actual outcome. It 
 depends on how the agent is programmed. If you extend your experiment so that 
 agents perform repeated, independent trials and remember the results, you 
 will find that on average agents will remember the coin coming up heads 99% 
 of the time. The agents have to reconcile this evidence with their knowledge 
 that the coin is fair.


If the agent is rational, then its subjective anticipation should
match the most likely outcome, no?

Define "perform repeated, independent trials". That's vague wording
- I can come up with at least two different interpretations:

a) Perform the experiment several times. If, on any of the trials,
copies are created, then have all of them partake in the next trial as
well, flipping a new coin and possibly being duplicated again (and
quickly leading to an exponentially increasing number of copies).
Carry out enough trials to eliminate the effect of random chance.
Since every agent is flipping a fair coin each time, by the time you
finish running the trials, all of them will remember seeing a roughly
equal amount of heads and tails. Knowing this, a rational agent should
anticipate this result, and not a 99% ratio.

b) Perform the experiment several times. If, on any of the trials,
copies are created, leave most of them be and have only one of them
partake in the repeat trials. This will eventually result in a large
number of copies who've most recently seen heads and at most one copy
at a time who's most recently seen tails. But this doesn't tell us
anything about the original question! The original question was "if
you flip a coin and get copied on seeing heads, what result should you
anticipate seeing?", not "if you flip a coin several times, and each
time heads turns up copies of you get made and most are set aside
while one keeps flipping the coin, should you anticipate eventually
ending up in a group that has most recently seen heads?". Yes, there is
a high chance of ending up in such a group, but we again have a
situation where the copying doesn't really affect things - this kind
of wording is effectively the same as asking "if you flip a coin and
stop flipping once you see heads, should you, over enough trials,
anticipate that the outcome you most recently saw was heads?" - the
copying only gives you a small chance to keep flipping anyway. The
agent should still anticipate seeing an equal ratio of tails and heads
beforehand, since that's what it will see, up to the point where it
ends up in a position where it stops flipping the coin.

  It is a trickier question without multiple trials. The agent then needs to 
 model its own thought process (which is impossible for any Turing computable 
 agent to do with 100% accuracy). If the agent knows that it is programmed so 
 that if it observes an outcome R times out of N it would expect the 
 probability to be R/N, then it would conclude "I know that I would observe 
 heads 99% of the time and therefore I would expect heads with probability 
 0.99". But this programming would not make sense in a scenario with 
 conditional copying.

That's right, it doesn't.

  Here is an equivalent question. If you flip a fair quantum coin, and you are 
 killed with 99% probability conditional on the coin coming up tails, then, 
 when you look at the coin, what is your subjective anticipation of seeing 
 heads?

What sense of "equivalent" do you mean? It isn't directly equivalent,
since it will produce a somewhat different outcome in the single-trial
(or repeated single-trial) case. Previously all the possible outcomes
would have been in either the "seen heads" or the "seen tails"
category; this question adds the "hasn't seen anything, is dead"
category.

In the original experiment my expectation would have been 50:50 - here
I have a 50% subjective anticipation of seeing heads, a 0.5%
anticipation of seeing tails, and a 49.5% anticipation of not seeing
anything at all.




-- 
http://www.saunalahti.fi/~tspro1/ | 

Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Russell Wallace
Philosophically, intelligence explosion in the sense being discussed
here is akin to ritual magic - the primary fallacy is the attribution
to symbols alone of powers they simply do not possess.

The argument is that an initially somewhat intelligent program A can
generate a more intelligent program B, which in turn can generate...
so on to Z.

Let's stop and consider that first step, A to B. Clearly A cannot
already have B encoded within itself, or the process is mere
installation of already-existing software. So it must generate and
evaluate candidates B1, B2 etc and choose the best one.

On what basis does it choose? Most intelligent? But there's no such
function as Intelligence(S) where S is a symbol system. There are
functions F(S, E) where E is the environment, denoting the ability of
S to produce useful results in that environment; intelligence is the
word we use to refer to a family of such functions.

So A must evaluate Bx in the context of the environment in which B is
intended to operate. Furthermore, A can't evaluate by comparing Bx's
answers in each potential situation to the correct ones - if A knew
the correct answers in all situations, it would already be as
intelligent as B. It has to work by feedback from the environment.
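
To make the preceding point concrete, here is a minimal sketch of the generate-and-test loop being described (an illustration only, with invented function names such as environment and improve, not anyone's actual design). The key feature is that candidates can only be ranked by trying them against an environment the improver can query but cannot fully predict.

import random

def environment(x):
    # Stand-in for the real world: the improver can query it, but has no
    # closed-form description of it (here it is secretly a noisy quadratic).
    return -(x - 3.7) ** 2 + random.gauss(0, 0.1)

def improve(candidate_generator, score_by_trial, generations=20, pool=30):
    # Generate candidate "programs" (here just parameters) and keep the one
    # that scores best when actually tried out against the environment.
    best = 0.0
    for _ in range(generations):
        candidates = [candidate_generator(best) for _ in range(pool)]
        # A cannot rank B1, B2, ... by comparing them with the correct
        # answers (it doesn't know them); it can only try each candidate
        # and observe the feedback.
        best = max(candidates, key=score_by_trial)
    return best

random.seed(1)
result = improve(
    candidate_generator=lambda parent: parent + random.gauss(0, 1.0),
    score_by_trial=lambda c: sum(environment(c) for _ in range(5)),
)
print('best candidate found by trial against the environment:', round(result, 2))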

If we step back and think about it, we really knew this already. In
every case where humans, machines or biological systems exhibit
anything that could be called an intelligence improvement - biological
evolution, a child learning to talk, a scientific community improving
its theories, engineers building better aeroplanes, programmers
improving their software - it involves feedback from the environment.
The mistake of trying to reach truth by pure armchair thought was
understandable in ancient Greece. We now know better.

So attractive as the image of a Transcendent Power popping out of a
basement may be to us geeks, it doesn't have anything to do with
reality. Making smarter machines in the real world is, like every
other engineering activity, a process that has to take place _in_ the
real world.




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Mike Tintner

Russell: "The mistake of trying to reach truth by pure armchair thought was 
understandable in ancient Greece. We now know better. So attractive as the 
image of a Transcendent Power popping out of a basement may be to us geeks, 
it doesn't have anything to do with reality. Making smarter machines in the 
real world is, like every other engineering activity, a process that has to 
take place _in_ the real world."

Just so. I called it the Bookroom Fantasy (you're almost calling it the 
Armchair Fallacy), and it does go back philosophically to the Greeks.  It 
all depends on what the Greeks started - the era of rationality (in the 
technical sense of the rational sign systems of logic, maths, and, to an 
extent, language). In rational systems it IS possible to reach truth to a 
great extent by pure armchair thought - but only truths about rational 
systems themselves. And you geeks (your word) don't seem to have noticed 
that these systems, while extremely valuable, are only used in strictly 
limited ways in the real world and real-world problem solving - and actually 
pride themselves on being somewhat divorced from reality.


I like mathematics because it is not human and has nothing particular to do 
with this planet or with the whole accidental universe.

Bertrand Russell

Mathematics may be defined as the subject in which we never know what we are 
talking about, nor whether what we are saying is true.

Bertrand Russell

As far as the laws of mathematics refer to reality, they are not certain; 
and as far as they are certain, they do not refer to reality.

Einstein

The fantasy of super-accelerating intelligence is based on such a simplistic 
armchair fallacy. And it's ironic because it's cropping up just as the era 
of rationality is ending.  I haven't seen its equivalent, though, in any 
other area of our culture besides AGI. Roboticists don't seem to have it.


What's replacing rationality? I'm still thinking about the best term. I 
think it's probably *creativity*.


The rational era believed in humans as rational animals using pure reason - 
and especially rational systems - to think about the world.


The new creative era is recognizing that thinking about the world, or indeed 
anything, involves


Reason + Emotion + Imagination [Reflective] + Enactment/Embodied Thought + 
Imagination [Direct Sensory]


Reason + Generativity + Research + Investigation.

Science + Technology + Arts + History. (The last two are totally ignored by 
rationalists although they are of equal weight in the real-world 
intellectual economy.)


Rationality is fragmented, specialised (incl. narrow AI) thinking. 
Creativity is unified, general thinking. 







Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Vladimir Nesov
On Mon, Jun 23, 2008 at 5:22 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
 If we step back and think about it, we really knew this already. In
 every case where humans, machines or biological systems exhibit
 anything that could be called an intelligence improvement - biological
 evolution, a child learning to talk, a scientific community improving
 its theories, engineers building better aeroplanes, programmers
 improving their software - it involves feedback from the environment.
 The mistake of trying to reach truth by pure armchair thought was
 understandable in ancient Greece. We now know better.


We are very inefficient in processing evidence, there is plenty of
room at the bottom in this sense alone. Knowledge doesn't come from
just feeding the system with data - try to read machine learning
textbooks to a chimp, nothing will stick. Intelligence is, among other
things, an ability to absorb the data and use it to deftly manipulate
the world to your ends, by nudging it here and there.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Russell Wallace
On Mon, Jun 23, 2008 at 3:43 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 We are very inefficient in processing evidence, there is plenty of
 room at the bottom in this sense alone. Knowledge doesn't come from
 just feeding the system with data - try to read machine learning
 textbooks to a chimp, nothing will stick.

Indeed, but becoming more efficient at processing evidence is
something that requires being embedded in the environment to which the
evidence pertains. A chimp did not acquire the ability to read
textbooks by sitting in a cave and pondering deep thoughts for a
million years.




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread William Pearson
2008/6/23 Vladimir Nesov [EMAIL PROTECTED]:
 On Mon, Jun 23, 2008 at 12:50 AM, William Pearson [EMAIL PROTECTED] wrote:
 2008/6/22 Vladimir Nesov [EMAIL PROTECTED]:


 Two questions:
 1) Do you know enough to estimate which scenario is more likely?

 Well since intelligence explosions haven't happened previously in our
 light cone, it can't be a simple physical pattern, so I think
 non-exploding intelligences have the evidence for being simpler on
 their side.

 This message that I'm currently writing hasn't happened previously in
 our light cone. By your argument, it is evidence for it being more
 difficult to write, than to recreate life on Earth and human
 intellect, which is clearly false, for all practical purposes. You
 should state that argument more carefully, in order for it to make
 sense.

If your message were an intelligent entity then you would have a point.
I'm looking at classes of technologies and their natural or current
human-created analogues.

Let me give you an example. You have two people claiming to be able to
give you an improved TSP solver. One person claims to be able to do
all examples in polynomial time; the other simply has a better
algorithm which can do certain types of graphs in polynomial time, but
resorts to exponential time for random graphs.

Which would you consider more likely, if neither of them has detailed
proofs, and why?


 So we might find them more easily. I also think I have
 solid reasoning to think intelligence exploding is unlikely, which
 requires paper length rather than post length. So I think I do, but
 should I trust my own rationality?

 But not too much, especially when the argument is not technical (which
 is clearly the case for questions such as this one).

The question is one of theoretical computer science and should be
decidable as definitively as the halting problem was.
I'm leaning towards something like Russell Wallace's resolution, but
there may be some complications when you have a program that learns
from the environment. I would like to see it done formally at some
point.

 If argument is
 sound, you should be able to convince seed AI crowd too

Since the concept is their idea they have to be the ones to define it.
They won't accept any arguments against it otherwise. They haven't as
yet formally defined it, or if they have I haven't seen it.


 I agree, but it works only if you know that the answer is correct, and
 (which you didn't address and which is critical for these issues) you
 won't build a doomsday machine as a result of your efforts, even if
 this particular path turns out to be more feasible.

I don't think a doomsday machine is possible. But considering I would
be doing my best to make the system incapable of modifying its own
source code *in the fashion that Eliezer wants/is afraid of* anyway, I
am not too worried. See http://www.sl4.org/archive/0606/15131.html

 Will Pearson




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Vladimir Nesov
On Mon, Jun 23, 2008 at 6:52 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
 On Mon, Jun 23, 2008 at 3:43 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 We are very inefficient in processing evidence, there is plenty of
 room at the bottom in this sense alone. Knowledge doesn't come from
 just feeding the system with data - try to read machine learning
 textbooks to a chimp, nothing will stick.

 Indeed, but becoming more efficient at processing evidence is
 something that requires being embedded in the environment to which the
 evidence pertains.

Why is that?

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Approximations of Knowledge

2008-06-23 Thread Richard Loosemore

Abram Demski wrote:

To be honest, I am not completely satisfied with my conclusion on the
post you refer to. I'm not so sure now that the fundamental split
between logical/messy methods should occur at the line between perfect
and approximate methods. This is one type of messiness, but one only. I
think you are referring to a related but different messiness: not
knowing what kind of environment your AI is dealing with. Since we
don't know which kinds of models will fit best with the world, we
should (1) trust our intuitions to some extent, and (2) try things and
see how well they work. This is as Loosemore suggests.

On the other hand, I do not want to agree with Loosemore too strongly.
Mathematics and mathematical proof is a very important tool, and I
feel like he wants to reject it. His image of an AGI seems to be a
system built up out of totally dumb pieces, with intelligence emerging
unexpectedly. Mine is a system built out of somewhat smart pieces,
cooperating to build somewhat smarter pieces, and so on. Each piece
has provable smarts.


Okay, let me try to make some kind of reply to your comments here and in 
your original blog post.


It is very important to understand that the paper I wrote was about the 
methodology of AGI research, not about specific theories/models/systems 
within AGI.  It is about the way that we come up with ideas for systems 
and the way that we explore those systems, not about the content of 
anyone's particular ideas.


So, in the above text you refer to a split between logical and messy 
methods - now, it may well be that my paper would lead someone to 
embrace 'messy' methods and reject 'logical' ones, but that is a side 
effect of the argument, not the argument itself.  It does happen to be 
the case that I believe that logic-based methods are mistaken, but I 
could be wrong about that, and it could turn out that the best way to 
build an AGI is with a completely logic-based AGI, along with just one 
small mechanism that was Complex.  That would be perfectly consistent 
with my argument (though a little surprising, for other reasons).


Similarly, you suggest that I have an image of an AGI that is built out 
of totally dumb pieces, with intelligence emerging unexpectedly.  Some 
people have suggested that that is my view of AGI, but whether or not 
those people are correct in saying that [aside:  they are not!], that 
does not relate to the argument I presented, because it is all about 
specific AGI design preferences, whereas the thing that I have called 
the Complex Systems Problem is fairly neutral on most design decisions.


In your original blog post, also, you mention the way that AGI planning 
mechanisms can be built in such a way that they contain a logical 
substrate, but with heuristics that force the systems to make 
'sub-optimal' choices.  This is a specific instance of a more general 
design pattern:  logical engines that have 'inference control 
mechanisms' riding on their backs, preventing them from deducing 
everything in the universe whilst trying to come to a simple decision. 
The problem is that you have portrayed the distinction between 'pure' 
logical mechanisms and 'messy' systems that have heuristics riding on 
their backs, as equivalent to a distinction that you thought I was 
making between non-complex and complex AGI systems.  I hope you can see 
now that this is not what I was trying to argue.  My target would be the 
methodologies that people use to decide such questions as which 
heuristics to use in a planning mechanism, whether the representation 
used by the planning mechanism can co-exist with the learning 
mechanisms, and so on.
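
For readers who want a concrete picture of a logical engine with an inference control mechanism riding on its back, here is a minimal sketch (a toy illustration with invented rules and an invented heuristic, not a description of any particular AGI system): forward-chaining deduction whose agenda is ordered by a heuristic score, so that it does not try to deduce everything while answering a simple query.

import heapq

def forward_chain(facts, rules, goal, heuristic, budget=50):
    # The "logical engine": forward chaining over (premises, conclusion) rules.
    # The "inference control mechanism": an agenda ordered by a heuristic
    # score, expanded only up to a fixed budget of inferences.
    known = set(facts)
    agenda = []  # entries are (heuristic score, conclusion)

    def push_applicable():
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                heapq.heappush(agenda, (heuristic(conclusion, goal), conclusion))

    push_applicable()
    for _ in range(budget):
        if goal in known or not agenda:
            break
        _, conclusion = heapq.heappop(agenda)
        if conclusion in known:
            continue
        known.add(conclusion)
        push_applicable()
    return goal in known

def prefer_goal_like(conclusion, goal):
    # Toy heuristic: prefer conclusions whose first letter matches the goal's.
    return 0 if conclusion[0] == goal[0] else 1

rules = [(('a',), 'b'), (('b',), 'c'), (('a',), 'x1'), (('x1',), 'x2'), (('x2',), 'x3')]
print(forward_chain({'a'}, rules, 'c', prefer_goal_like))  # True, without ever chasing the x-chain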


Now, having said all of that, what does the argument actually say, and 
does it make *any* claims at all about what sort of content to put in an 
AGI design?


The argument says that IF intelligent systems belong to the 'complex 
systems' class, THEN it would be a dreadful mistake to use a certain 
type of scientific or engineering approach to build intelligent systems. 
 I tried to capture this with an analogy at one point:  if you were John 
Horton Conway, sitting down on Day 1 of your project to find a cellular 
automaton with certain global properties, you would not be able to use 
any standard scientific, engineering or mathematical tools to discover 
the rules that should go into your system - you would, in fact, have no 
option but to try rules at random until you found rules that gave the 
global behavior that you desired.
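
(For readers unfamiliar with the example: Conway's rule is trivial to state locally, yet its interesting global behaviour - gliders, glider guns, and so on - was found by running it and watching, not by deriving it from the rule. A minimal sketch of the rule itself, included purely as an illustration:)

from collections import Counter

def life_step(live_cells):
    # One step of Conway's Game of Life; live_cells is a set of (x, y) pairs.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

# A glider: its existence follows from the rule above, but nobody read it off
# the rule - it was discovered by experimenting with the system.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = life_step(glider)
print(sorted(glider))  # the same shape, shifted one cell diagonally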


My point was that a modified form of that same problem (that inability 
to use our scientific intuitions to just go from a desired global 
behavior to the mechanisms that will generate that global behavior) 
could apply to the question of building an AGI.  I do not suggest that 
the problem will manifest itself in exactly the same way (it is not that 
we would make zero progress with current techniques, and have to use 
completely random trial and error, like Conway had to), but 

Re: [agi] Approximations of Knowledge

2008-06-23 Thread Abram Demski
Since combinatorial search problems are so common in artificial
intelligence, such an algorithm would have obvious applications. If it can be
made, it seems like it could be used *everywhere* inside an AGI:
deduction (solve for cases consistent with constraints), induction
(search for the best model), planning... particularly if there is a
generalization to soft constraint problems.
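
To make the kind of algorithm under discussion concrete, here is a minimal backtracking satisfiability search - a generic, textbook-style sketch for illustration, not Jim's (unpublished) method:

def satisfiable(clauses, assignment=None):
    # Backtracking search for a satisfying assignment of a CNF formula.
    # clauses is a list of clauses; each clause is a list of ints, where 3
    # means "variable 3 is true" and -3 means "variable 3 is false".
    if assignment is None:
        assignment = {}
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(lit)) == (lit > 0) for lit in clause):
            continue  # clause already satisfied under the current assignment
        remaining = [lit for lit in clause if abs(lit) not in assignment]
        if not remaining:
            return None  # clause falsified: dead end, backtrack
        simplified.append(remaining)
    if not simplified:
        return assignment  # every clause satisfied
    var = abs(simplified[0][0])  # branch on some unassigned variable
    for value in (True, False):
        result = satisfiable(simplified, {**assignment, var: value})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(satisfiable([[1, 2], [-1, 3], [-2, -3]]))  # a satisfying assignment, e.g. {1: True, 3: True, 2: False}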

On 6/22/08, Jim Bromer [EMAIL PROTECTED] wrote:
 Abram,
 I did not group you with probability buffs.  One of the errors I feel that
 writers make when their field is controversial is that they begin
 representing their own opinions from the vantage of countering critics.
 Unfortunately, I am one of those writers, (or perhaps I am just projecting).
  But my comment about the probability buffs wasn't directed toward you, I
 was just using it as an exemplar (of something or another).

 Your comments seem to make sense to me although I don't know where you are
 heading.  You said:
 what should be hoped for is convergence to (nearly) correct models of
 (small parts of) the universe. So I suppose that rather than asking for
 meaning in a fuzzy logic, I should be asking for clear accounts of
 convergence properties...

 When you have to find a way to tie components of knowledge together
 you typically have to achieve another kind of convergence.  Even if these
 'components' of knowledge are reliable, they cannot usually be converged
 easily due to the complexity that their interrelations with other kinds of
 knowledge (other 'components' of knowledge) will cause.

 To follow up on what I previously said, if my logic program works it will
 mean that I can combine and test logical formulas of up to a few hundred
 distinct variables and find satisfiable values for these combinations in a
 relatively short period of time.  I think this will be an important method
 to test whether AI can be advanced by advancements in handling complexity
 even though some people do not feel that logical methods are appropriate to
 use on multiple source complexity.  As you seem to appreciate, logic can
 still be brought to the field even though it is not a purely logical game
 that is to be played.

 When I begin to develop some simple theories about a subject matter, I will
 typically create hundreds of minor variations concerning those theories over
 a period of time.  I cannot hold all those variations of the conjecture in
 consciousness at any one moment, but I do feel that they can come to mind in
 response to a set of conditions for which that particular set of variations
 was created.  So while a simple logical theory (about some subject) may
 be expressible with only a few terms, when you examine all of the possible
 variations that can be brought into conscious consideration in response to a
 particular set of stimuli, I think you may find that the theories could be
 more accurately expressed using hundreds of distinct logical values.

 If this conjecture of mine turns out to be true, and if I can actually get
 my new logical methods to work, then I believe that this new range of
 logical methods may show whether advancements in complexity can make a
 difference to AI even if its application does not immediately result in
 human level of intelligence.

 Jim Bromer


 - Original Message 
 From: Abram Demski [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Sunday, June 22, 2008 4:38:02 PM
 Subject: Re: [agi] Approximations of Knowledge

 Well, since you found my blog, you probably are grouping me somewhat
 with the probability buffs. I have stated that I will not be
 interested in any other fuzzy logic unless it is accompanied by a
 careful account of the meaning of the numbers.

 You have stated that it is unrealistic to expect a logical model to
 reflect the world perfectly. The intuition behind this seems clear.
 Instead, what should be hoped for is convergence to (nearly) correct
 models of (small parts of) the universe. So I suppose that rather than
 asking for meaning in a fuzzy logic, I should be asking for clear
 accounts of convergence properties... but my intuition says that from
 clear meaning, everything else follows.











Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Russell Wallace
On Mon, Jun 23, 2008 at 4:34 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 On Mon, Jun 23, 2008 at 6:52 PM, Russell Wallace
 Indeed, but becoming more efficient at processing evidence is
 something that requires being embedded in the environment to which the
 evidence pertains.

 Why is that?

For the reason I explained earlier. Suppose program A generates
candidate programs B1, B2... that are conjectured to be more efficient
at processing evidence. It can't just compare their processing of
evidence with the correct version, because if it knew the correct
results in all cases, it would already be that efficient itself. It
has to try them out.




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Richard Loosemore

William Pearson wrote:

While SIAI fills that niche somewhat, it concentrates on the
Intelligence explosion scenario. Is there a sufficient group of
researchers/thinkers with a shared vision of the future of AI coherent
enough to form an organisation? This organisation would discus,
explore and disseminate what can be done to make the introduction as
painless as possible.

The base beliefs shared between the group would be something like

 - The entities will not have goals/motivations inherent to their
form. That is, robots aren't likely to band together to fight humans,
or try to take over the world for their own means.  These would have
to be programmed into them, as evolution has programmed group loyalty
and selfishness into humans.
- The entities will not be capable of full wrap-around recursive
self-improvement. They will improve in fits and starts in a wider
economy/ecology like most developments in the world *
- The goals and motivations of the entities that we will likely see in
the real world will be shaped over the long term by the forces in the
world, e.g. evolutionary, economic and physics.

Basically an organisation trying to prepare for a world where AIs
aren't sufficiently advanced technology or magic genies, but still
dangerous and a potentially destabilising world change. Could a
coherent message be articulated by the subset of the people that agree
with these points? Or are we all still too fractured?

  Will Pearson

* I will attempt to give an inside view of why I take this view, at a
later date.


The Bulletin of the Atomic Scientists is an organization that started 
with a precise idea, based on extremely well-established theory, of the 
dangers of nuclear technology.


At this time there is nothing like a coherent theory from which we could 
draw conclusions about the (possible) dangers of AGI.


Such an organization would be pointless.  It is bad enough that SIAI is 
50% community mouthpiece and 50% megaphone for Yudkowsky's ravings. 
More mouthpieces we don't need.




Richard Loosemore




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Vladimir Nesov
On Mon, Jun 23, 2008 at 7:52 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
 On Mon, Jun 23, 2008 at 4:34 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 On Mon, Jun 23, 2008 at 6:52 PM, Russell Wallace
 Indeed, but becoming more efficient at processing evidence is
 something that requires being embedded in the environment to which the
 evidence pertains.

 Why is that?

 For the reason I explained earlier. Suppose program A generates
 candidate programs B1, B2... that are conjectured to be more efficient
 at processing evidence. It can't just compare their processing of
 evidence with the correct version, because if it knew the correct
 results in all cases, it would already be that efficient itself. It
 has to try them out.


But it can just work with a static corpus. When you need to figure out
efficient learning, you only need to know a little about the overall
structure of your data (which can be described by a reasonably small
number of exemplars), you don't need much of the data itself.
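
A small sketch of what working from a static corpus can look like in practice (my illustration of the point, with an invented toy dataset and two invented learners, not Vladimir's wording): candidate learners are compared on a fixed, held-out slice of the corpus, with no further interaction with the world.

import random

def make_corpus(n=400):
    # A fixed corpus, gathered once: points labelled by a hidden threshold.
    random.seed(0)
    corpus = []
    for _ in range(n):
        x = random.uniform(0, 1)
        corpus.append((x, x > 0.6))
    return corpus

def train_threshold(data):
    # Candidate learner 1: pick the threshold that best separates the data.
    _, best_t = min((sum((x > t) != y for x, y in data), t)
                    for t in [i / 100 for i in range(101)])
    return lambda x, t=best_t: x > t

def train_majority(data):
    # Candidate learner 2: ignore x and always predict the majority label.
    majority = sum(y for _, y in data) * 2 > len(data)
    return lambda x: majority

corpus = make_corpus()
training, held_out = corpus[:300], corpus[300:]
for trainer in (train_threshold, train_majority):
    model = trainer(training)
    accuracy = sum(model(x) == y for x, y in held_out) / len(held_out)
    print(trainer.__name__, round(accuracy, 2))
# The better learner is identified from the static corpus alone; no new
# interaction with the environment was needed to make the comparison.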

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Russell Wallace
On Mon, Jun 23, 2008 at 5:22 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 But it can just work with a static corpus. When you need to figure out
 efficient learning, you only need to know a little about the overall
 structure of your data (which can be described by a reasonably small
 number of exemplars), you don't need much of the data itself.

Why do you think that? All the evidence is to the contrary - the
examples we have of figuring out efficient learning, from evolution to
childhood play to formal education and training to science to hardware
and software engineering, do not work with just a static corpus.




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Vladimir Nesov
On Mon, Jun 23, 2008 at 8:32 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
 On Mon, Jun 23, 2008 at 5:22 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 But it can just work with a static corpus. When you need to figure out
 efficient learning, you only need to know a little about the overall
 structure of your data (which can be described by a reasonably small
 number of exemplars), you don't need much of the data itself.

 Why do you think that? All the evidence is to the contrary - the
 examples we have of figuring out efficient learning, from evolution to
 childhood play to formal education and training to science to hardware
 and software engineering, do not work with just a static corpus.


It is not evidence. Evidence is an indication that depends on the
referred event: evidence is there when the referred event is there, but
evidence is not there when the referred event is absent. What would you
expect to see, depending on the correctness of your assumption? Literally,
it translates to animals having a phase where they sit cross-legged
and meditate on accumulated evidence, until they gain enlightenment,
become extremely efficient learners and launch a Singularity...
Evolution just didn't figure it out, just like it didn't figure out
transistors, and had to work with legacy 100Hz neurons.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Russell Wallace
On Mon, Jun 23, 2008 at 5:58 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 On Mon, Jun 23, 2008 at 8:32 PM, Russell Wallace
 Why do you think that? All the evidence is to the contrary - the
 examples we have of figuring out efficient learning, from evolution to
  childhood play to formal education and training to science to hardware
 and software engineering, do not work with just a static corpus.

 It is not evidence.

Yes it is.

 Evidence is an indication that depends on the
 referred event: evidence is there when referred event is there, but
 evidence is not there when the referred event is absent.

And if the referred thing (entities acquiring intelligence from static
corpus in the absence of environment) existed we would expect to see
it happening; if (as I claim) it does not exist then we would expect
to see all intelligence-acquiring entities needing interaction with an
environment; we observe the latter, which by the above criterion is
evidence for my theory.

 What would you
 expect to see, depending on correctness of your assumption? Literally,
 it translates to animals having a phase where they sit cross-legged
 and meditate on accumulated evidence, until they gain enlightenment,
 become extremely efficient learners and launch Singularity...

...er, I think there's a miscommunication here - I'm claiming this is
_not_ possible. I thought you were claiming it is?




Re: [agi] Approximations of Knowledge

2008-06-23 Thread Abram Demski
Thanks for the comments. My replies:



 It does happen to be the case that I
 believe that logic-based methods are mistaken, but I could be wrong about
 that, and it could turn out that the best way to build an AGI is with a
 completely logic-based AGI, along with just one small mechanism that was
 Complex.

Logical methods are quite Complex. This was part of my point. Logical
deduction in any sufficiently complicated formalism satisfies both
types of global-local disconnect that I mentioned (undecidability, and
computational irreducibility). If this were not the case, it seems
your argument would be much less appealing. (In particular, there
would be one less argument for the mind being complex; we could not
say "logic has some subset of the mind's capabilities; a brute-force
theorem prover is a complex system; therefore the mind is probably a
complex system.")

 Similarly, you suggest that I have an image of an AGI that is built out of
 totally dumb pieces, with intelligence emerging unexpectedly.  Some people
 have suggested that that is my view of AGI, but whether or not those people
 are correct in saying that [aside:  they are not!]

Apologies. But your arguments do appear to point in that direction.

 In your original blog post, also, you mention the way that AGI planning
 [...] The problem is that you have portrayed the
 distinction between 'pure' logical mechanisms and 'messy' systems that have
 heuristics riding on their backs, as equivalent to a distinction that you
 thought I was making between non-complex and complex AGI systems.  I hope
 you can see now that this is not what I was trying to argue.

You are right, this characterization is quite bad. I think that is
part of what was making me uneasy about my conclusion. My intention
was not that approximation should always equal a logical search with
messy heuristics stacked upon it. In fact, I had two conflicting
images in mind:

- A logical search with logical heuristics (such as greedy methods for
NP-complete problems, which are guaranteed to be fairly near optimal - see
the sketch below)

- A messy method (such as a neural net or swarm) that somehow gives
you an answer without precise logic
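
The guarantee mentioned in the first item can be made concrete with a standard example (mine, not Abram's): repeatedly pick an uncovered edge and take both of its endpoints. The chosen edges form a matching, and any optimal vertex cover must contain at least one endpoint of each of them, so the result is provably at most twice the optimum.

def vertex_cover_2approx(edges):
    # Greedy-style 2-approximation for vertex cover: take both endpoints of
    # every edge that is not yet covered.
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

edges = [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5)]
print(vertex_cover_2approx(edges))  # e.g. {1, 2, 3, 4}, versus an optimum such as {1, 4}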

A revised version of my argument would run something like this. As the
approximation problem gets more demanding, it gets more difficult to
devise logical heuristics. Increasingly, we must rely on intuitions
tested by experiments. There then comes a point when making the
distinction between the heuristic and the underlying search becomes
unimportant; the method is all heuristic, so to speak. At this point
we are simply using messy methods.

I'm still not really satisfied, though, because I would personally
stop at the stage when the heuristic started to get messy, and say,
"The problem is starting to become AI-complete, so at this point I
should include a meta-level search to find a good heuristic for me,
rather than trying to hard-code one..."

 Finally, I should mention one general misunderstanding about mathematics.
  This argument has a superficial similarity to Godel's theorem, but you
 should not be deceived by that.  Godel was talking about formal deductive
 systems, and the fact that there are unreachable truths within such systems.
  My argument is about the feasibility of scientific discovery, when applied
 to systems of different sorts.  These are two very different domains.

I think it is fair to say that I accounted for this. In particular, I
said: "It's this second kind of irreducibility, computational
irreducibility, that I see as more relevant to AI." (Actually, I do
see Godel's theorem as relevant to AI; I should have been more
specific and said "relevant to AI's global-local disconnect".)




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Vladimir Nesov
On Mon, Jun 23, 2008 at 9:35 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
 On Mon, Jun 23, 2008 at 5:58 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:

 Evidence is an indication that depends on the
 referred event: evidence is there when referred event is there, but
 evidence is not there when the referred event is absent.

 And if the referred thing (entities acquiring intelligence from static
 corpus in the absence of environment) existed we would expect to see
 it happening, if (as I claim) it does not exist then we would expect
 to see all intelligence-acquiring entities needing interaction with an
 environment; we observe the latter, which by the above criterion is
 evidence for my theory.


There are only evolution-built animals, which is a very limited
repertoire of intelligences. You are saying that if no apple tastes
like a banana, then no fruit tastes like a banana, not even a banana.
Whether a design is possible or not, you expect to see the same
result if it was never attempted. And so, the absence of an
implementation of a design that was never attempted is not evidence of
the impossibility of that design.
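
One way to make this notion of evidence concrete, in the usual Bayesian odds form (my gloss, with purely illustrative numbers, not Vladimir's wording): an observation shifts belief only to the extent that it is more probable under one hypothesis than under the other.

def posterior_odds(prior_odds, p_obs_if_true, p_obs_if_false):
    # Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
    return prior_odds * (p_obs_if_true / p_obs_if_false)

# Observation: no corpus-only intelligence has ever appeared. If the design
# was never seriously attempted, that observation is about equally probable
# whether or not such a design is possible (the numbers below are made up),
# so the likelihood ratio is close to 1 and the observation barely moves the odds.
print(posterior_odds(prior_odds=1.0, p_obs_if_true=1.0, p_obs_if_false=0.98))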

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Mike Tintner

Vlad,

You seem to be arguing in a logical vacuum in denying the essential role 
of evidence in most real-world problem-solving.


Let's keep it real, bro.

Science - bear in mind science deals with every part of the world, from the 
cosmos to the earth to living organisms, animals, humans, societies etc. 
Which branch of science can solve problems about the world without evidence 
and without physically interacting with the subject matter?


Technology - which branch of technology can solve problems without evidence 
and without interacting with machines and artefacts and the real world? Ditto: 
which branch of AI or AGI can solve problems without interacting with 
real-world computers? (Some purely logical, mathematical problems yes, but 
overwhelmingly, no).


Real-world technology - i.e. business etc - which branch can solve problems 
without interacting with real products and real customers?


History/journalism ...etc. etc.

If you think AGI's can somehow magically transcend the requirement to have 
physical, personal experience and evidence of a subject in order to solve 
problems about that subject, you must explain how. Preferably with reference 
to the real world, and not just by using logical argument.


As Zeno's paradox shows, logic can prove anything, no matter how absurd. 
Science and real world intelligence, which are tied to evidence, can't.














Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Russell Wallace
On Mon, Jun 23, 2008 at 8:48 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 There are only evolution-built animals, which is a very limited
 repertoire of intelligences. You are saying that if no apple tastes
 like a banana, therefore no fruit tastes like a banana, even banana.

I'm saying if no fruit anyone has ever tasted confers magical powers,
and theory says fruit can't do so, and there's no evidence whatsoever
that it can, then we should accept that eating fruit does not confer
magical powers.

 Whether a design is possible or not, you expect to see the same
 result, if it was never attempted. And so, the absence of an
 implementation of design that was never attempted is not evidence of
 impossibility of design.

But it has been attempted. I cited not only biological evolution and
learning within the lifetime of individuals, but all fields of science
and engineering - including AI, where quite a few very smart people
(myself among them) have tried hard to design something that could
enhance its intelligence divorced from the real world, and all such
attempts have failed.

Obviously I can't _prove_ the impossibility of this - in the same way
that I can't prove the impossibility of summoning demons by chanting
the right phrases in Latin; you can always say, "well, maybe there's
some incantation nobody has yet tried".

But here's a question for you: Is the possibility of intelligence
enhancement in a vacuum a matter of absolute faith, or is there some
point at which you would accept it's impossible after all? If the
latter, when will you accept its futility? Ten years from now? Twenty?
Thirty?




Re: [agi] Coin-flipping duplicates (was: Breaking Solomonoff induction (really))

2008-06-23 Thread Matt Mahoney
--- On Mon, 6/23/08, Kaj Sotala [EMAIL PROTECTED] wrote:
 a) Perform the experiment several times. If, on any of the trials,
 copies are created, then have all of them partake in the next trial as
 well, flipping a new coin and possibly being duplicated again (and
 quickly leading to an exponentially increasing number of copies).
 Carry out enough trials to eliminate the effect of random chance.
 Since every agent is flipping a fair coin each time, by the time you
 finish running the trials, all of them will remember seeing a roughly
 equal amount of heads and tails. Knowing this, a rational agent should
 anticipate this result, and not a 99% ratio.

That is my meaning. But you can run a simulation yourself. The agents that see 
heads get copied, so you have more agents remembering heads than remembering 
tails.
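
A minimal sketch of such a simulation (my own illustration, not Kaj's copies.py, with an invented round count): it shows the counting fact stated above - copying on heads leaves far more agents whose memory ends in heads - while leaving open the disputed question of what any one agent should subjectively anticipate.

import random

def run_copying_experiment(rounds=6, copies_on_heads=10):
    # Every agent flips a fair coin each round; an agent that sees heads is
    # copied copies_on_heads times, and every copy inherits its full memory.
    agents = [[]]  # one initial agent with an empty memory
    for _ in range(rounds):
        next_generation = []
        for memory in agents:
            flip = random.choice('HT')
            new_memory = memory + [flip]
            copies = 1 + (copies_on_heads if flip == 'H' else 0)
            next_generation.extend([list(new_memory) for _ in range(copies)])
        agents = next_generation
    return agents

random.seed(0)
agents = run_copying_experiment()
heads_last = sum(m[-1] == 'H' for m in agents)
print(len(agents), 'agents;', heads_last, 'of them most recently saw heads')
# Counting agents, far more remember heads, because heads-heavy lineages get
# multiplied; whether that count is the right measure of subjective
# anticipation is exactly what is being debated in this thread.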


-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Approximations of Knowledge

2008-06-23 Thread Jim Bromer
Loosemore said,
It is very important to understand that the paper I wrote was about the 
methodology of AGI research, not about specific theories/models/systems 
within AGI.  It is about the way that we come up with ideas for systems 
and the way that we explore those systems, not about the content of 
anyone's particular ideas.

And Abram said,
A revised version of my argument would run something like this. As the
approximation problem gets more demanding, it gets more difficult to
devise logical heuristics. Increasingly, we must rely on intuitions
tested by experiments. There then comes a point when making the
distinction between the heuristic and the underlying search becomes
unimportant; the method is all heuristic, so to speak. At this point
we are simply using messy methods,

I wondered if Abram was talking about the way an AI program should work or the 
way research into AI should work, or the way AI programs and research into AI 
should work?
Jim Bromer


- Original Message 
From: Abram Demski [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, June 23, 2008 3:11:16 PM
Subject: Re: [agi] Approximations of Knowledge

Thanks for the comments. My replies:



 It does happen to be the case that I
 believe that logic-based methods are mistaken, but I could be wrong about
 that, and it could turn out that the best way to build an AGI is with a
 completely logic-based AGI, along with just one small mechanism that was
 Complex.

Logical methods are quite Complex. This was part of my point. Logical
deduction in any sufficiently complicated formalism satisfies both
types of global-local disconnect that I mentioned (undecidability, and
computational irreducibility). If this were not the case, it seems
your argument would be much less appealing. (In particular, there
would be one less argument for the mind being complex; we could not
say logic has some subset of the mind's capabilities; a brute-force
theorem prover is a complex system; therefore the mind is probably a
complex system.)

 Similarly, you suggest that I have an image of an AGI that is built out of
 totally dumb pieces, with intelligence emerging unexpectedly.  Some people
 have suggested that that is my view of AGI, but whether or not those people
 are correct in saying that [aside:  they are not!]

Apologies. But your arguments do appear to point in that direction.

 In your original blog post, also, you mention the way that AGI planning
 The problem is that you have portrayed the
 distinction between 'pure' logical mechanisms and 'messy' systems that have
 heuristics riding on their backs, as equivalent to a distinction that you
 thought I was making between non-complex and complex AGI systems.  I hope
 you can see now that this is not what I was trying to argue.

You are right, this characterization is quite bad. I think that is
part of what was making me uneasy about my conclusion. My intention
was not that approximation should always equal a logical search with
messy heuristics stacked upon it. In fact, I had two conflicting
images in mind:

-A logical search with logical heuristics (such as greedy methods for
NP-complete problems, which are guaranteed to be fairly near optimal)

-A messy method (such as a neural net or swarm) that somehow gives
you an answer without precise logic

A revised version of my argument would run something like this. As the
approximation problem gets more demanding, it gets more difficult to
devise logical heuristics. Increasingly, we must rely on intuitions
tested by experiments. There then comes a point when making the
distinction between the heuristic and the underlying search becomes
unimportant; the method is all heuristic, so to speak. At this point
we are simply using messy methods.
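
As a concrete, standard example of the first image above - a "logical heuristic"
with a provable guarantee, not anything taken from this thread - here is a minimal
sketch of greedy set cover: the decision problem is NP-complete, yet the greedy
choice is known to come within roughly a factor of ln(n) of the optimal cover:

def greedy_set_cover(universe, subsets):
    """Greedy approximation for set cover: repeatedly pick the subset that
    covers the most still-uncovered elements.  The classical analysis bounds
    the result at about H(n) ~ ln(n) + 1 times the optimal cover size."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            raise ValueError("the subsets do not cover the universe")
        cover.append(best)
        uncovered -= best
    return cover

if __name__ == "__main__":
    universe = range(1, 11)
    subsets = [{1, 2, 3, 8}, {1, 2, 3, 4, 5}, {4, 5, 7}, {5, 6, 7}, {6, 7, 8, 9, 10}]
    for chosen in greedy_set_cover(universe, subsets):
        print(sorted(chosen))

The point is just that here the heuristic itself is still "logical": its distance
from the exact answer is something you can prove rather than merely observe.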

I'm still not really satisfied, though, because I would personally
stop at the stage when the heuristic started to get messy, and say,
The problem is starting to become AI-complete, so at this point I
should include a meta-level search to find a good heuristic for me,
rather than trying to hard-code one...
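
A toy sketch of what such a meta-level search might look like (again my own
illustration, not anything proposed in the thread; the heuristic family is made
up - greedy 0/1 knapsack packing ranked by value/weight**alpha, with alpha being
the knob the meta-level search tunes on random training instances):

import random

def greedy_knapsack(items, capacity, alpha):
    """One member of a parameterised family of greedy heuristics:
    rank items by value / weight**alpha and pack greedily."""
    order = sorted(items, key=lambda it: it[0] / (it[1] ** alpha), reverse=True)
    total_value = used = 0
    for value, weight in order:
        if used + weight <= capacity:
            total_value += value
            used += weight
    return total_value

def meta_search(candidates=200, instances=30, seed=1):
    """Meta-level search: rather than hard-coding alpha, score candidate
    alphas on random training instances and keep the best performer."""
    random.seed(seed)
    train = [[(random.randint(1, 100), random.randint(1, 100)) for _ in range(50)]
             for _ in range(instances)]
    best_alpha, best_score = None, float("-inf")
    for _ in range(candidates):
        alpha = random.uniform(0.0, 2.0)
        score = sum(greedy_knapsack(items, 500, alpha) for items in train)
        if score > best_score:
            best_alpha, best_score = alpha, score
    return best_alpha

if __name__ == "__main__":
    print("alpha chosen by the meta-level search:", round(meta_search(), 3))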

 Finally, I should mention one general misunderstanding about mathematics.
  This argument has a superficial similarity to Godel's theorem, but you
 should not be deceived by that.  Godel was talking about formal deductive
 systems, and the fact that there are unreachable truths within such systems.
  My argument is about the feasibility of scientific discovery, when applied
 to systems of different sorts.  These are two very different domains.

I think it is fair to say that I accounted for this. In particular, I
said: It's this second kind of irreducibility, computational
irreducibility, that I see as more relevant to AI. (Actually, I do
see Godel's theorem as relevant to AI; I should have been more
specific and said relevant to AI's global-local disconnect.)



Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Vladimir Nesov
On Tue, Jun 24, 2008 at 1:29 AM, Russell Wallace
[EMAIL PROTECTED] wrote:
 On Mon, Jun 23, 2008 at 8:48 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 There are only evolution-built animals, which is a very limited
 repertoire of intelligences. You are saying that if no apple tastes
 like a banana, then no fruit tastes like a banana, not even a banana.

 I'm saying if no fruit anyone has ever tasted confers magical powers,
 and theory says fruit can't do so, and there's no evidence whatsoever
 that it can, then we should accept that eating fruit does not confer
 magical powers.

Yes, we are discussing the theory that banana taste (magical power)
doesn't exist. But this theory mustn't consist merely in asserting
that there are no precedents, passing the absence of precedents off as
evidence. If there is more to the theory, what is the point of
hand-picking this weak argument?


 Whether a design is possible or not, you expect to see the same
 result, if it was never attempted. And so, the absence of an
 implementation of design that was never attempted is not evidence of
 impossibility of design.

 But it has been attempted. I cited not only biological evolution and
 learning within the lifetime of individuals, but all fields of science
 and engineering - including AI, where quite a few very smart people
 (myself among them) have tried hard to design something that could
 enhance its intelligence divorced from the real world, and all such
 attempts have failed.

I have only a very vague idea about what you mean by intelligence
divorced from the real world. Without justification, it looks like a
scapegoat.


 Obviously I can't _prove_ the impossibility of this - in the same way
 that I can't prove the impossibility of summoning demons by chanting
 the right phrases in Latin; you can always say, well maybe there's
 some incantation nobody has yet tried.

Maybe there is, but we don't have any hints about the processes that
would produce such an effect, much less a prototype demon-summoning
device at any level of obfuscation, so there is little prior probability
behind that endeavor. With intelligence, by contrast, we have a prototype
and plenty of theory that seems to grope for the process without quite
capturing it.


 But here's a question for you: Is the possibility of intelligence
 enhancement in a vacuum a matter of absolute faith, or is there some
 point at which you would accept it's impossible after all? If the
 latter, when will you accept its futility? Ten years from now? Twenty?
 Thirty?

As I said earlier, I don't see any inherent dichotomy between the
search for the fundamental process and the understanding of existing
biological brains. It doesn't need to be a political decision: if at
some point brain-inspired technology turns out to be a better path,
or, more likely, informs the theory, let's take it. For now, it
looks like cliff-jumping.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Mike Tintner


Russell: quite a few very smart people
(myself among them) have tried hard to design something that could
enhance its intelligence divorced from the real world, and all such
attempts have failed. Obviously I can't _prove_ the impossibility of this -
in the same way that I can't prove the impossibility of summoning demons
by chanting the right phrases in Latin; you can always say, well maybe
there's some incantation nobody has yet tried.

Oh yes, it can be proven. It requires an extended argument to do so 
properly, which I won't attempt here.


But it all comes down, if you think about it, to different forms of 
sign/representation. The AGI-ers who think knowledge can be superaccelerated 
are almost exclusively talking about knowledge in the form of symbols - 
logical, mathematical, linguistic.


When you or I talk about gathering evidence and personal experience, we are 
talking about knowledge gathered in the form of sensory images - and I am 
also talking about embodied images -  which involve your whole body 
(that's what mirror neurons are referring to - when you mimic someone or 
something, you do it with your whole body, not just your senses).


The proof lies in the direction of thinking of the world as consisting of 
bodies - and then asking what each kind of sign can and can't tell you and 
show you of bodies: symbols (words, numbers, algebraic-logical variables), 
then image schemas (geometric figures etc.), and then images (sensory 
impressions, photographs, movies etc.).


Each form of sign/representation has strictly very limited powers, and can 
only show certain dimensions of bodies. All the symbols and schemas in 
existence cannot tell you what Russell Wallace or Vladimir Nesov look like - 
i.e. cannot show you their distinctive, individual bodies. Only images (or, 
if you like, evidence) can do that - and do it in a second. (And that can 
be proven, scientifically.) And since the real world consists, in the final 
analysis, of nothing but individual bodies like Russell and Vlad, each of 
which is different from all the others - even that iPod over there is actually 
different from this iPod here - then you'd better have images if you want 
to be intelligent about the real world of real individuals, and be able to 
deal with all their idiosyncrasies - or make fresh generalisations about 
them.


Which is why evolution went to the extraordinary trouble of founding real 
AGIs on the continuous set of moving images we call consciousness - in 
order to be able to deal with the real world of individuals, and not just 
the rational world of abstract general classes that we call logic, maths and 
language.*


But, as I said, this requires an extended argument to demonstrate properly. 
But, yes, it can be proven.


*In case that's confusing, language and logic can refer to individuals like 
Russell Wallace - but only in general terms. They can't show what 
distinguishes those individuals.










Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Russell Wallace
On Mon, Jun 23, 2008 at 11:57 PM, Mike Tintner [EMAIL PROTECTED] wrote:
 Oh yes, it can be proven. It requires an extended argument to do so
 properly, which I won't attempt here.

Fair enough, I'd be interested to see your attempted proof if you ever
get it written up.




Re: [agi] Equivalent ... P.S. I just realised - how can you really understand what I'm talking about - without supplementary images/evidence?

2008-06-23 Thread Mike Tintner
I just realised - how can you really understand what I'm talking about - 
without supplementary images/evidence?


So here's simple evidence - look at the following photo - and note that you 
can distinguish each individual in it immediately. And you can only do it 
imagistically. No maths, no language, no algebraic variables, no programming 
languages can tell you what makes each one of those people individual/different. 
Just images. So uniquely powerful.


http://www.cdomusic.com/MilitaryImages/AALL%20CASUALTIES%20IN%20ONE%20POSTER.jpg

Every thing in the world is individual. 







Re: [agi] Approximations of Knowledge

2008-06-23 Thread Richard Loosemore

Abram Demski wrote:

Thanks for the comments. My replies:




It does happen to be the case that I
believe that logic-based methods are mistaken, but I could be wrong about
that, and it could turn out that the best way to build an AGI is with a
completely logic-based AGI, along with just one small mechanism that was
Complex.


Logical methods are quite Complex. This was part of my point. Logical
deduction in any sufficiently complicated formalism satisfies both
types of global-local disconnect that I mentioned (undecidability, and
computational irreducibility). If this were not the case, it seems
your argument would be much less appealing. (In particular, there
would be one less argument for the mind being complex; we could not
say logic has some subset of the mind's capabilities; a brute-force
theorem prover is a complex system; therefore the mind is probably a
complex system.)


Okay, I made a mistake in my choice of words (I knew it when I wrote 
them, but neglected to go back and correct!).


I did not mean to imply that I *require* some complexity in an AGI 
formalism, and that finding some complexity would be a good thing, end 
of story, problem solved, etc.  So for example, you are correct to point 
out that most 'logical' systems do exhibit complexity, provided they do 
something realistically approximating intelligence.


Instead, what I meant to say was that we are not setting up our research 
procedures to cope with the complexity.  So, it might turn out that a 
good, robust AGI can be built with something like a regular logic-based 
formalism, BUT with just a few small aspects that are complex - but 
unfortunately we are currently not able to discover what those complex 
parts should be like, because our current methodology is to use blind 
hunch and intuition (i.e. heuristics that look as though they will 
work).  Going back to your planning system example, it might be the case 
that only one choice of heuristic control mechanism will actually make a 
given logical formalism converge on fully intelligent behavior, but 
there might be 10^100 choices of possible control mechanism, and our 
current method for searching through the possibilities is to use 
intuition to pick likely candidates.


The point here is that a small number of the factors that give rise to 
complexity can actually have a massive effect on the behavior of the 
system, but people today are acting as if a small number of 
complexity-inducing characteristics means only a small amount of 
unpredictability in the behavior.  This is simply not the case.








Similarly, you suggest that I have an image of an AGI that is built out of
totally dumb pieces, with intelligence emerging unexpectedly.  Some people
have suggested that that is my view of AGI, but whether or not those people
are correct in saying that [aside:  they are not!]


Apologies. But your arguments do appear to point in that direction.


In your original blog post, also, you mention the way that AGI planning [...]
The problem is that you have portrayed the
distinction between 'pure' logical mechanisms and 'messy' systems that have
heuristics riding on their backs, as equivalent to a distinction that you
thought I was making between non-complex and complex AGI systems.  I hope
you can see now that this is not what I was trying to argue.


You are right, this characterization is quite bad. I think that is
part of what was making me uneasy about my conclusion. My intention
was not that approximation should always equal a logical search with
messy heuristics stacked upon it. In fact, I had two conflicting
images in mind:

-A logical search with logical heuristics (such as greedy methods for
NP-complete problems, which are guaranteed to be fairly near optimal)

-A messy method (such as a neural net or swarm) that somehow gives
you an answer without precise logic

A revised version of my argument would run something like this. As the
approximation problem gets more demanding, it gets more difficult to
devise logical heuristics. Increasingly, we must rely on intuitions
tested by experiments. There then comes a point when making the
distinction between the heuristic and the underlying search becomes
unimportant; the method is all heuristic, so to speak. At this point
we are simply using messy methods.


Ah, I agree completely here.  We are talking about a Wag The Dog 
scenario, where everyone focusses on the pristine beauty of the logical 
formalism, but turns a blind eye to the (assumed-to-be) trivial 
heuristic control mechanisms - but in the end it is the heuristic 
control mechanism that is responsible for almost all of the actual behavior.






I'm still not really satisfied, though, because I would personally
stop at the stage when the heuristic started to get messy, and say,
The problem is starting to become AI-complete, so at this point I
should include a meta-level search to find a good heuristic for me,
rather than trying to hard-code one...


And at that point, 

Re: [agi] Approximations of Knowledge

2008-06-23 Thread Richard Loosemore

Jim Bromer wrote:

Loosemore said,
It is very important to understand that the paper I wrote was about the 
methodology of AGI research, not about specific theories/models/systems 
within AGI.  It is about the way that we come up with ideas for systems 
and the way that we explore those systems, not about the content of 
anyone's particular ideas.


And Abram said,
A revised version of my argument would run something like this. As the
approximation problem gets more demanding, it gets more difficult to
devise logical heuristics. Increasingly, we must rely on intuitions
tested by experiments. There then comes a point when making the
distinction between the heuristic and the underlying search becomes
unimportant; the method is all heuristic, so to speak. At this point
we are simply using messy methods,

I wondered whether Abram was talking about the way an AI program should work, the 
way research into AI should work, or both.
Jim Bromer


I interpreted him (see parallel post) to be referring still to the 
question of how to deal with planning systems, where there is a 
formalism (the logic substructure) which cannot be allowed to run its 
methods to completion (because they would take too long) and which 
therefore has to use approximation methods, or heuristics, to guess 
which planning choices are most likely to be best.  When the system is 
required to do more real-world-type performance (as in an AGI, rather 
than a narrow AI), its behavior will be dominated by the heuristics.


He then went on to talk about methodology:  do we just use intuitions to 
pick heuristics, or do we make the methodology more systematic by 
engaging in automatic searches of the space of possible heuristics?


My perspective on that question would back up one step:  if it is a 
complex system we are dealing with, we should have been using 
systematic, automatic searches of the design space BEFORE, when we were 
choosing whether or not to do planning with a Logic+Heuristics design!


But of course, that would be wildly, extravagantly infeasible.  So, 
instead, I propose to start from a basic design that is as similar as 
possible to the human design, and then do our systematic, automatic 
search (of the space of mechanism-designs) in an outward direction from 
that human-cognition baseline.  If intelligence involves even a small 
amount of complexity, it could well be that this is the only feasible 
way to ever get an intelligence up and running.


Treat it, in other words, as a calculus of variations problem.
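
A toy sketch of what such an outward search could look like (my own
illustration, not Richard's proposal in any detail: the "design" is reduced to
a parameter vector, the baseline stands in for the human-cognition starting
point, and evaluate() is a hypothetical stand-in for whatever empirical test
of intelligent behavior one actually has):

import random

def outward_search(baseline, evaluate, step=0.05, rings=10, samples_per_ring=100, seed=0):
    """Search outward from a baseline design, ring by ring: sample candidate
    designs at increasing distance from the baseline and keep the best one
    found according to `evaluate`."""
    random.seed(seed)
    best, best_score = list(baseline), evaluate(baseline)
    for ring in range(1, rings + 1):
        radius = ring * step
        for _ in range(samples_per_ring):
            # random direction on the sphere of this radius around the baseline
            direction = [random.gauss(0.0, 1.0) for _ in baseline]
            norm = sum(d * d for d in direction) ** 0.5 or 1.0
            candidate = [b + radius * d / norm for b, d in zip(baseline, direction)]
            score = evaluate(candidate)
            if score > best_score:
                best, best_score = candidate, score
    return best, best_score

if __name__ == "__main__":
    # toy stand-in for "how close to working intelligence is this design":
    # a peak a short distance from the baseline, so there is something to find
    target = [0.3, -0.1, 0.2]
    toy_evaluate = lambda design: -sum((x - t) ** 2 for x, t in zip(design, target))
    best, score = outward_search([0.0, 0.0, 0.0], toy_evaluate)
    print("best design found:", [round(x, 2) for x in best], "score:", round(score, 4))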




Richard Loosemore.




Re: [agi] Approximations of Knowledge

2008-06-23 Thread Steve Richfield
Andy,

This is a PERFECT post, because it so perfectly illustrates a particular
point of detachment from reality that is common among AGIers. In the real
world we do certain things to achieve a good result, but when we design
politically correct AGIs, we banish the very logic that allows us to
function. For example, if you see a black man walking behind you at night,
you rightly worry, but if you include that in your AGI design, you would be
dismissed as a racist.

Effectively solving VERY VERY difficult problems, like working out why a
particular corporation is failing after other experts have failed to do so, is
a multiple-step process that starts with narrowing down the vast field of
possibilities. As others have already pointed out here, this is often done in a
rather summary and non-probabilistic way. Perhaps all of the really successful
programmers that you have known have had long hair, so if the programming
effort is failing and the programmer has short hair, then maybe there is an
attitude issue to look into. Of course this does NOT necessarily mean that
there is any linkage at all - just another of many points to focus some
attention on.

Similarly, over the course of 100 projects I have developed a long list of
rules that help me find the problems with a tractable amount of effort.
No, I don't usually tell others my poorly-formed rules, because they prove
absolutely NOTHING; they only focus further effort. I have a special assortment
of rules to apply whenever God is mentioned. After all, not everyone thinks
that God has the same motivations, so SOME approach is needed to paradigm-shift
one person's statements so that they can be understood by another
person. The posting you responded to was expressing one such rule. That
having been said...

On 6/22/08, J. Andrew Rogers [EMAIL PROTECTED] wrote:


 Somewhere in the world, there is a PhD chemist and a born-again Christian
 on another mailing list ...the project had hit a serious snag, and so the
 investors brought in a consultant that would explain why the project was
 broken by defectively reasoning about dubious generalizations he pulled out
 of his ass...


Of course I don't offer any such generalizations (which I freely admit are
dubious) to investors. However, I immediately drill down to find out exactly why
THEY SAY that they didn't stop and reconsider their direction when it should have
been obvious that things had gone off track. When I hear about how God just
couldn't have led them astray, I quote what they said in my report and
suggest that perhaps the problem is that God isn't also underwriting the
investment with limitless funds.

How would YOU (or your AGI) handle such situations? Would you (or your AGI)
ignore past empirical evidence because of lack of proof or political
incorrectness?

Steve Richfield





Re: [agi] Approximations of Knowledge

2008-06-23 Thread J. Andrew Rogers


On Jun 23, 2008, at 7:53 PM, Steve Richfield wrote:

Andy,



The use of diminutives is considered rude in many parts of Anglo
culture if the individual does not use it to identify themselves,
though I realize it is common practice in some regions of the US. When
in doubt, use the given form.



 This is a PERFECT post, because it so perfectly illustrates a  
particular point of detachment from reality that is common among  
AGIers. In the real world we do certain things to achieve a good  
result, but when we design politically correct AGIs, we banish the  
very logic that allows us to function. For example, if you see a  
black man walking behind you at night, you rightly worry, but if you  
include that in your AGI design, you would be dismissed as a racist.



You have clearly confused me with someone else.


Effectively solving VERY VERY difficult problems, like why a  
particular corporation is failing after other experts have failed,  
is a multiple-step process that starts with narrowing down the vast  
field of possibilities. As others have already pointed out here,  
this is often done in a rather summary and non-probabilistic way.  
Perhaps all of the really successful programmers that you have known  
have had long hair, so if the programming is failing and the  
programmer has short hair, then maybe there is an attitude issue to  
look into. Of course this does NOT necessarily mean that there is  
any linkage at all - just another of many points to focus some  
attention to.



Or it could simply mean that the vast majority of programmers and
software monkeys are mediocre at best, such that the handful of people
you will meet with deep talent won't constitute a useful sample size.
Hell, even Brooks suggested as much, and he was charitable. In all my
years in software, I've only met a small number of people who were
unambiguously wicked smart when it came to software, and while none of
them could be confused with a completely mundane person, they also did
not have many other traits in common (though I will acknowledge they
tend to be rational and self-analytical to a degree that is rare in most
people, though this is not a trait exclusive to these people). Of
course, *my* sample size is also small, and so it does not count for
much.




Similarly, over the course of 100 projects...



Eh? Over 100 projects?  These were either very small projects or you  
are older than Methuselah.  I've worked on a lot of projects, but  
nowhere near 100 and I was a consultant for many years.



J. Andrew Rogers

