Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-09-03 Thread Valentina Poletti
So it's about money then... now THAT makes me feel less worried!! :)

That explains a lot though.

On 8/28/08, Matt Mahoney [EMAIL PROTECTED] wrote:

 Valentina Poletti [EMAIL PROTECTED] wrote:
  Got ya, thanks for the clarification. That brings up another question.
 Why do we want to make an AGI?

 I'm glad somebody is finally asking the right question, instead of skipping
 over the specification to the design phase. It would avoid a lot of
 philosophical discussions that result from people having different ideas of
 what AGI should do.

 AGI could replace all human labor, worth about US $2 to $5 quadrillion over
 the next 30 years. We should expect the cost to be of this magnitude, given
 that having it sooner is better than waiting.

 I think AGI will be immensely complex, on the order of 10^18 bits,
 decentralized, competitive, with distributed ownership, like today's
 internet but smarter. It will converse with you fluently but know too much
 to pass the Turing test. We will be totally dependent on it.

 -- Matt Mahoney, [EMAIL PROTECTED]








Re: [agi] Recursive self-change: some definitions

2008-09-03 Thread Terren Suydam

Hi Ben, 

My own feeling is that computation is just the latest in a series of technical 
metaphors that we apply in service of understanding how the universe works. 
Like the others before it, it captures some valuable aspects and leaves out 
others. It leaves me wondering: what future metaphors will we apply to the 
universe, ourselves, etc., that will make computation-as-metaphor seem as 
quaint as the old clockworks analogies?

I believe that computation is important in that it can help us simulate 
intelligence, but intelligence itself is not simply computation (or if it is, 
it's in a way that requires us to transcend our current notions of 
computation). Note that I'm not suggesting anything mystical or dualistic at 
all, just offering the possibility that we can find still greater metaphors for 
how intelligence works. 

Either way though, I'm very interested in the results of your work - at worst, 
it will shed some needed light on the subject. At best... well, you know that 
part. :-]

Terren

--- On Tue, 9/2/08, Ben Goertzel [EMAIL PROTECTED] wrote:
From: Ben Goertzel [EMAIL PROTECTED]
Subject: Re: [agi] Recursive self-change: some definitions
To: agi@v2.listbox.com
Date: Tuesday, September 2, 2008, 4:50 PM



On Tue, Sep 2, 2008 at 4:43 PM, Eric Burton [EMAIL PROTECTED] wrote:

I really see a number of algorithmic breakthroughs as necessary for

the development of strong general AI 

I hear that a lot, yet I never hear any convincing  arguments in that regard...

So, hypothetically (and I hope not insultingly),
 I tend to view this as a kind of unconscious overestimation of the awesomeness 
of our own

species ... we feel intuitively like we're doing SOMETHING so cool in our 
brains, it couldn't
possibly be emulated or superseded by mere algorithms like the ones computer 
scientists
have developed so far ;-)


ben







  

  


Re: [agi] What is Friendly AI?

2008-09-03 Thread Terren Suydam

Hi Vlad,

Thanks for the response. It seems that you're advocating an incremental 
approach *towards* FAI, the ultimate goal being full attainment of 
Friendliness... something you express as fraught with difficulty but not 
insurmountable. As you know, I disagree that it is attainable, because it is 
not possible in principle to know whether something that considers itself 
Friendly actually is. You have to break a few eggs to make an omelet, as the 
saying goes, and Friendliness depends on whether you're the egg or the cook.

Terren

--- On Sat, 8/30/08, Vladimir Nesov [EMAIL PROTECTED] wrote:

 From: Vladimir Nesov [EMAIL PROTECTED]
 Subject: [agi] What is Friendly AI?
 To: agi@v2.listbox.com
 Date: Saturday, August 30, 2008, 1:53 PM
 On Sat, Aug 30, 2008 at 8:54 PM, Terren Suydam
 [EMAIL PROTECTED] wrote:
  --- On Sat, 8/30/08, Vladimir Nesov
 [EMAIL PROTECTED] wrote:
 
  You start with what is right? and end
 with
  Friendly AI, you don't
  start with Friendly AI and close the
 circular
  argument. This doesn't
  answer the question, but it defines Friendly AI
 and thus
  Friendly AI
  (in terms of right).
 
  In your view, then, the AI never answers the question
 What is right?.
  The question has already been answered in terms of the
 algorithmic process
  that determines its subgoals in terms of Friendliness.
 
 There is a symbolic string "what is right?" and
 what it refers to, the
 thing that we are trying to instantiate in the world. The
 whole
 process of  answering the question is the meaning of life,
 it is what
 we want to do for the rest of eternity (it is roughly a
 definition of
 right rather than over-the-top extrapolation
 from it). It is an
 immensely huge object, and we know very little about it,
 like we know
 very little about the form of a Mandelbrot set from the
 formula that
 defines it, even though it entirely unfolds from this
 little formula.
 What's worse, we don't know how to safely establish
 the dynamics for
 answering this question, we don't know the formula, we
 only know the
 symbolic string, formula, that we assign some
 fuzzy meaning to.
 
 There is no final answer, and no formal question, so I use
 question-answer pairs to describe the dynamics of the
 process, which
 flows from question to answer, and the answer is the next
 question,
 which then follows to the next answer, and so on.
 
 With Friendly AI, the process begins with the question a
 human asks to
 himself, "what is right?". From this question
 follows a technical
 solution, initial dynamics of Friendly AI, that is a device
 to make a
 next step, to initiate transferring the dynamics of
 right from human
 into a more reliable and powerful form. In this sense,
 Friendly AI
 answers the question of right, being the next
 step in the process.
 But initial FAI doesn't embody the whole dynamics, it
 only references
 it in the humans and learns to gradually transfer it, to
 embody it.
 Initial FAI doesn't contain the content of
 "right", only the structure
 of "absorb it from humans".
 
 Of course, this is simplification, there are all kinds of
 difficulties. For example, this whole endeavor needs to be
 safeguarded
 against mistakes made along the way, including the mistakes
 made
 before the idea of implementing FAI appeared, mistakes in
 everyday
 design that went into FAI, mistakes in initial stages of
 training,
 mistakes in moral decisions made about what
 right means. Initial
 FAI, when it grows up sufficiently, needs to be able to
 look back and
 see why it turned out to be the way it did, was it because
 it was
 intended to have a property X, or was it because of some
 kind of
 arbitrary coincidence, was property X intended for valid
 reasons, or
 because programmer Z had a bad mood that morning, etc.
 Unfortunately,
 there is no objective morality, so FAI needs to be made
 good enough
 from the start to eventually be able to recognize what is
 valid and
 what is not, reflectively looking back at its origin, with
 all the
 depth of factual information and optimization power to run
 whatever
 factual queries it needs.
 
 I (vainly) hope this answered (at least some of the) other
 questions as well.
 
 -- 
 Vladimir Nesov
 [EMAIL PROTECTED]
 http://causalityrelay.wordpress.com/
 
 


  




[agi] draft for comment

2008-09-03 Thread Pei Wang
TITLE: Embodiment: Who does not have a body?

AUTHOR: Pei Wang

ABSTRACT: In the context of AI, ``embodiment'' should not be
interpreted as ``giving the system a body'', but as ``adapting to the
system's experience''. Therefore, being a robot is neither a
sufficient condition nor a necessary condition of being embodied. What
really matters is the assumption about the environment for which the
system is designed.

URL: http://nars.wang.googlepages.com/wang.embodiment.pdf




Re: [agi] What is Friendly AI?

2008-09-03 Thread Vladimir Nesov
On Thu, Sep 4, 2008 at 12:46 AM, Terren Suydam [EMAIL PROTECTED] wrote:

 Hi Vlad,

 Thanks for the response. It seems that you're advocating an incremental
 approach *towards* FAI, the ultimate goal being full attainment of 
 Friendliness...
 something you express as fraught with difficulty but not insurmountable.
 As you know, I disagree that it is attainable, because it is not possible in
 principle to know whether something that considers itself Friendly actually
 is. You have to break a few eggs to make an omelet, as the saying goes,
 and Friendliness depends on whether you're the egg or the cook.


Sorry Terren, I don't understand what you are trying to say in the
last two sentences. What does "considering itself Friendly" mean, and
how does it figure into FAI as you use the phrase? What kind of
experiment or arbitrary decision (I assume) are you talking about?

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Recursive self-change: some definitions

2008-09-03 Thread William Pearson
2008/9/2 Ben Goertzel [EMAIL PROTECTED]:

 Yes, I agree that your Turing machine approach can model the same
 situations, but the different formalisms seem to lend themselves to
 different kinds of analysis more naturally...

 I guess it all depends on what kinds of theorems you want to formulate...


What I am interested in is: if someone gives me a computer system that
changes its state in some fashion, can I state how powerful that
method of change is likely to be? That is, what is the exact difference
between a traditional learning algorithm and the way I envisage AGIs
changing their state?

Also, can you formalise the difference between a human's method of
learning how to learn, and bootstrapping language off language (both
examples of a strange loop), and a program inspecting and changing its
source code?

I'm also interested in recursive self-changing systems and whether you
can be sure they will stay recursive self-changing systems as they
change. This last one especially with regard to people designing systems
with singletons in mind.

  Will




Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-09-03 Thread William Pearson
2008/8/28 Valentina Poletti [EMAIL PROTECTED]:
 Got ya, thanks for the clarification. That brings up another question. Why
 do we want to make an AGI?



To understand ourselves as intelligent agents better? It might enable
us to have decent education policy and rehabilitation of criminals.

Even if we don't make human-like AGIs, the principles should help us
understand ourselves, just as the optics of the lens helped us understand
the eye and the aerodynamics of wings help us understand bird flight.

It could also give us more leverage, more brain power on the planet
to help solve the planet's problems.

This is all predicated on the idea that fast take-off is pretty much
impossible. If it is possible, then all bets are off.

 Will




Re: [agi] What is Friendly AI?

2008-09-03 Thread Terren Suydam

Hey Vlad - 

By "considers itself Friendly", I'm referring to an FAI that is renormalizing in 
the sense you suggest. It's an intentional-stance interpretation of what it's 
doing, regardless of whether the FAI is actually considering itself Friendly, 
whatever that would mean.

I'm asserting that if you had an FAI in the sense you've described, it wouldn't 
be possible in principle to distinguish it with 100% confidence from a rogue 
AI. There's no Turing Test for Friendliness.

Terren

--- On Wed, 9/3/08, Vladimir Nesov [EMAIL PROTECTED] wrote:

 From: Vladimir Nesov [EMAIL PROTECTED]
 Subject: Re: [agi] What is Friendly AI?
 To: agi@v2.listbox.com
 Date: Wednesday, September 3, 2008, 5:04 PM
 On Thu, Sep 4, 2008 at 12:46 AM, Terren Suydam
 [EMAIL PROTECTED] wrote:
 
  Hi Vlad,
 
  Thanks for the response. It seems that you're
 advocating an incremental
  approach *towards* FAI, the ultimate goal being full
 attainment of Friendliness...
  something you express as fraught with difficulty but
 not insurmountable.
  As you know, I disagree that it is attainable, because
 it is not possible in
  principle to know whether something that considers
 itself Friendly actually
  is. You have to break a few eggs to make an omelet, as
 the saying goes,
  and Friendliness depends on whether you're the egg
 or the cook.
 
 
 Sorry Terren, I don't understand what you are trying to
 say in the
 last two sentences. What does considering itself
 Friendly means and
 how it figures into FAI, as you use the phrase? What (I
 assume) kind
 of experiment or arbitrary decision are you talking about?
 
 -- 
 Vladimir Nesov
 [EMAIL PROTECTED]
 http://causalityrelay.wordpress.com/
 
 


  




Re: [agi] draft for comment

2008-09-03 Thread Mike Tintner

Pei: it is important to understand
that both linguistic experience and non-linguistic experience are both 
special
cases of experience, and the latter is not more real than the former. In 
the previous
discussions, many people implicitly suppose that linguistic experience is 
nothing but
Dictionary-Go-Round [Harnad, 1990], and only non-linguistic experience can 
give
symbols meaning. This is a misconception coming from traditional semantics, 
which
determines meaning by referred object, so that an image of the object seems 
to be closer

to the real thing than a verbal description [Wang, 2007].

1. Of course the image is more real than the symbol or word.

Simple test of what should be obvious: a) use any amount of symbols you 
like, incl. Narsese, to describe Pei Wang. Give your description to any 
intelligence, human or AI, and see if it can pick out Pei in a lineup of 
similar men.


b) give the same intelligence a photo of Pei -  apply the same test.

Guess which method will win.

Only images can represent *INDIVIDUAL objects* - incl. Pei/Ben or this 
keyboard on my desk. And in the final analysis, only individual objects *are* 
real. There are no "chairs" or "oranges", for example - those general 
concepts are, in the final analysis, useful fictions. There is only this 
chair here and that chair over there. And if you want to refer to them 
individually - so that you communicate successfully with another 
person/intelligence - you have no choice but to use images (flat or solid).


2. Symbols are abstract - they can't refer to anything unless you already 
know, via images, what they refer to. If you think not, please draw a 
cheggnut. Again, if I give you an image of a cheggnut, you will have no 
problem.


3. You talk of a misconception of semantics, but give no reason why it is 
such; you merely state that it is.


4. You leave out the most important thing of all - you argue that experience 
is composed of symbols and images. And...?  Hey, there's also the real 
thing(s) - the real objects that they refer to. You certainly can't do 
science without looking at the real objects. And science is only a 
systematic version of all intelligence. That's how every functioning 
general intelligence is able to be intelligent about the world - by being 
grounded in the real world, composed of real objects, which it can go out 
and touch, walk round, look at and interact with. A box like NARS can't do 
that, can it?


Do you realise what you're saying, Pei? To understand statements is to 
*realise* what they mean - what they refer to - to know that they refer to 
real objects, which you can really go and interact with and test - and to 
try (or have your brain try automatically) to connect those statements to 
real objects.


When you or I are given words or images - "find this man [Pei]", or "cook a 
Chinese meal tonight" - we know that those signs must be tested in the real 
world and are only valid if so tested. We know that it's possible that that 
man over there who looks v. like the photo may not actually be Pei, or that 
Pei may have left the country and be impossible to find. We know that it may 
be impossible to cook such a meal, because there's no such food around. 
And all such tests can only be conducted in the real world (and not, say, by 
going and looking at other texts or photos - living in a Web world).


Your concept of AI is not so much un-grounded as unreal.

5. Why on earth do you think that evolution shows us general intelligences 
very successfully dealing with the problems of the world for over a billion 
years *without* any formal symbols? Why do infants take time to acquire 
language and are therefore able to survive without it?


The conception of AI that you are advancing is the equivalent of 
Creationism - it both lacks and denies an evolutionary perspective on 
intelligence - a (correctly) cardinal sin in modern science.









Re: [agi] Recursive self-change: some definitions

2008-09-03 Thread Ben Goertzel
hi,



 What I am interested in is: if someone gives me a computer system that
 changes its state in some fashion, can I state how powerful that
 method of change is likely to be? That is, what is the exact difference
 between a traditional learning algorithm and the way I envisage AGIs
 changing their state?


I'm sure this question is unsolvable in general ... so the interesting
question may be: Is there a subset of the class of possible AGI's, which
includes systems of an extremely (and hopefully unlimitedly) high level of
intelligence, and for which it *is* tractable to usefully probabilistically
predict the consequences of the system's self-modifications...



 Also can you formalise the difference between a humans method of
 learning how to learn, and boot strapping language off language (both
 examples of a strange loop), and a program inspecting and changing its
 source code.


Suppose one has a program of size N that has some self-reprogramming
capability. There's a question of: for a certain probability p, how large
is the subset of program space that the program has probability greater than p of
entering (where the probability is calculated across possible worlds, e.g.
according to an Occam distribution).
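
A toy sketch of how one might probe that question empirically (purely
illustrative, and heavily simplified: programs here are just bit tuples, a
"self-modification" flips one random bit, and the averaging over
Occam-weighted possible worlds is left out):

    import random
    from collections import Counter

    def self_modify(program):
        # Toy self-modification step: flip one randomly chosen bit of the program.
        i = random.randrange(len(program))
        return program[:i] + (1 - program[i],) + program[i + 1:]

    def reach_distribution(start, steps=5, trials=20000):
        # Empirical distribution over programs reached after `steps` random
        # self-modifications of `start` (programs are just bit tuples here).
        counts = Counter()
        for _ in range(trials):
            p = start
            for _ in range(steps):
                p = self_modify(p)
            counts[p] += 1
        return {prog: c / trials for prog, c in counts.items()}

    def count_reachable_above(start, p, **kwargs):
        # Size of the subset of program space that `start` enters with
        # estimated probability greater than p.
        return sum(1 for prob in reach_distribution(start, **kwargs).values()
                   if prob > p)

    print(count_reachable_above((0, 1, 1, 0, 1, 0, 0, 1), p=0.01))

A fuller version would also average the reach distribution over environments
weighted by an Occam prior, as suggested above.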






 I'm also interested in recursive self changing systems and whether you
 can be sure they will stay recursive self changing systems, as they
 change.



I'm almost certain there is no certainty in this world, regarding empirical
predictions like that ;-)

ben





Re: [agi] What is Friendly AI?

2008-09-03 Thread Vladimir Nesov
On Thu, Sep 4, 2008 at 1:34 AM, Terren Suydam [EMAIL PROTECTED] wrote:

 I'm asserting that if you had an FAI in the sense you've described, it 
 wouldn't
 be possible in principle to distinguish it with 100% confidence from a rogue 
 AI.
 There's no Turing Test for Friendliness.


You design it to be Friendly, you don't generate an arbitrary AI and
then test it. The latter, if not outright fatal, might indeed prove
impossible as you suggest, which is why there is little to be gained
from AI-boxes.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] draft for comment

2008-09-03 Thread Ben Goertzel
Pei,

I have a different sort of reason for thinking embodiment is important ...
it's a deeper reason that I think underlies the embodiment is important
because of symbol grounding argument.

Linguistic data, mathematical data, visual data, motoric data etc. are all
just bits ... and intelligence needs to work by recognizing patterns among
these bits, especially patterns related to system goals.

What I think is that the set of patterns in perceptual and motoric data has
radically different statistical properties than the set of patterns in
linguistic and mathematical data ... and that the properties of the set of
patterns in perceptual and motoric data are intrinsically better suited to
the needs of a young, ignorant, developing mind.

All these different domains of pattern display what I've called a dual
network structure ... a collection of hierarchies (of progressively more
and more complex, hierarchically nested patterns) overlaid with a
heterarchy (of overlapping, interrelated patterns).  But the statistics of
the dual networks in the different domains are different.  I haven't fully
plumbed the difference yet ... but, among the many differences is that in
perceptual/motoric domains, you have a very richly connected dual network at
a very low level of the overall dual network hierarchy -- i.e., there's a
richly connected web of relatively simple stuff to understand ... and then
these simple things are related to (hence useful for learning) the more
complex things, etc.

In short, Pei, I agree that the arguments typically presented in favor of
embodiment in AI suck.  However, I think there are deeper factors going on
which do imply a profound value of embodiment for AGI.  Unfortunately, we
currently lack a really appropriate scientific language for describing the
differences in statistical organization between different pattern-sets, so
it's almost as difficult to articulate these differences as it is to
understand them...

-- Ben G

On Wed, Sep 3, 2008 at 4:58 PM, Pei Wang [EMAIL PROTECTED] wrote:

 TITLE: Embodiment: Who does not have a body?

 AUTHOR: Pei Wang

 ABSTRACT: In the context of AI, ``embodiment'' should not be
 interpreted as ``giving the system a body'', but as ``adapting to the
 system's experience''. Therefore, being a robot is neither a
 sufficient condition nor a necessary condition of being embodied. What
 really matters is the assumption about the environment for which the
 system is designed.

 URL: http://nars.wang.googlepages.com/wang.embodiment.pdf






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome  - Dr Samuel Johnson





Computation as an explanation of the universe (was Re: [agi] Recursive self-change: some definitions)

2008-09-03 Thread Matt Mahoney
I think that computation is not so much a metaphor for understanding the 
universe as it is an explanation. If you enumerate all possible Turing 
machines, thus enumerating all possible laws of physics, then some of those 
universes will have the right conditions for the evolution of intelligent life. 
If neutrons were slightly heavier than they actually are (relative to protons), 
then stars could not sustain fusion. If they were slightly lighter, then they 
would be stable and we would have no elements.

Because of gravity, the speed of light, Planck's constant, the quantization of 
electric charge, and the finite age of the universe, the universe has a finite 
length description, and is therefore computable. The Bekenstein bound of the 
Hubble radius is 2.91 x 10^122 bits. Any computer within a finite universe must 
have less memory than it, and therefore cannot simulate it except by using an 
approximate (probabilistic) model. One such model is quantum mechanics.
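
For concreteness, a rough back-of-envelope check of that figure (my own
sketch, not part of the derivation above; it assumes the holographic bound of
A / (4 l_p^2 ln 2) bits applied to a Hubble sphere of radius roughly 13.8
billion light years, so the exact value depends on the radius used):

    import math

    c    = 2.998e8      # speed of light, m/s
    G    = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
    hbar = 1.055e-34    # reduced Planck constant, J s

    l_p = math.sqrt(hbar * G / c ** 3)   # Planck length, ~1.6e-35 m
    R   = 13.8e9 * 9.461e15              # Hubble radius ~ c * age of universe, m
    A   = 4 * math.pi * R ** 2           # area of the Hubble sphere, m^2

    bits = A / (4 * l_p ** 2 * math.log(2))   # holographic bound, in bits
    print("%.2e bits" % bits)                 # ~3e122

With these inputs the result is about 3 x 10^122 bits, the same order of
magnitude as the 2.91 x 10^122 quoted above.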

For the same reason, an intelligent agent (which must be Turing computable if 
the universe is) cannot model itself, except probabilistically as an 
approximation. Thus, we cannot predict what we will think without actually 
thinking it. This property makes our own intelligence seem mysterious.

An explanation is only useful if it makes predictions, and this one does. If the 
universe were not Turing computable, then Solomonoff induction and AIXI, as 
ideal models of prediction and intelligence, would not be applicable to the real 
world. Yet we have Occam's Razor, and we find in practice that all successful 
machine learning algorithms use algorithmically simple hypothesis sets.
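
To illustrate the Occam's Razor point, here is a minimal Solomonoff-flavoured
toy predictor (illustrative only; the hypothesis class is just periodic bit
patterns, with a pattern of period k charged roughly k bits of description
length):

    import itertools

    def hypotheses(max_period):
        # Toy hypothesis class: binary sequences that repeat with period k.
        # A hypothesis of period k costs roughly k bits to describe.
        for k in range(1, max_period + 1):
            for pattern in itertools.product((0, 1), repeat=k):
                yield k, pattern

    def predict_next(history, max_period=6):
        # Occam-weighted mixture: every hypothesis consistent with the history
        # votes for the next bit, weighted by the prior 2^-description_length.
        votes = {0: 0.0, 1: 0.0}
        for length, pattern in hypotheses(max_period):
            if all(history[i] == pattern[i % length] for i in range(len(history))):
                votes[pattern[len(history) % length]] += 2.0 ** -length
        return max(votes, key=votes.get)

    print(predict_next([0, 1, 0, 1, 0]))   # -> 1; the simplest consistent pattern wins

Consistent hypotheses vote on the next bit, weighted by 2^-length, so the
algorithmically simplest pattern dominates the prediction.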


-- Matt Mahoney, [EMAIL PROTECTED]

--- On Wed, 9/3/08, Terren Suydam [EMAIL PROTECTED] wrote:
From: Terren Suydam [EMAIL PROTECTED]
Subject: Re: [agi] Recursive self-change: some definitions
To: agi@v2.listbox.com
Date: Wednesday, September 3, 2008, 4:17 PM


Hi Ben, 

My own feeling is that computation is just the latest in a series of technical 
metaphors that we apply in service of understanding how the universe works. 
Like the others before it, it captures some valuable aspects and leaves out 
others. It leaves me wondering: what future metaphors will we apply to the 
universe, ourselves, etc., that will make computation-as-metaphor seem as 
quaint as the old clockworks analogies?

I believe that computation is important in that it can help us simulate 
intelligence, but intelligence itself is not simply computation (or if it is, 
it's in a way that requires us to transcend our current notions of 
computation). Note that I'm not suggesting anything mystical or dualistic at 
all, just offering the possibility that we can find still greater metaphors for 
how intelligence works. 

Either way though,
 I'm very interested in the results of your work - at worst, it will shed some 
needed light on the subject. At best... well, you know that part. :-]

Terren

--- On Tue, 9/2/08, Ben Goertzel [EMAIL PROTECTED] wrote:
From: Ben Goertzel [EMAIL PROTECTED]
Subject: Re: [agi] Recursive self-change: some definitions
To: agi@v2.listbox.com
Date: Tuesday, September 2, 2008, 4:50 PM



On Tue, Sep 2, 2008 at 4:43 PM, Eric Burton [EMAIL PROTECTED] wrote:

I really see a number of algorithmic breakthroughs as necessary for

the development of strong general AI 

I hear that a lot, yet I never hear any convincing  arguments in that regard...

So, hypothetically (and I hope not insultingly),
 I tend to view this as a kind of unconscious overestimation of the awesomeness 
of our own

species ... we feel intuitively like we're doing SOMETHING so cool in our 
brains, it couldn't
possibly be emulated or superseded by mere algorithms like the ones computer 
scientists
have developed so far ;-)


ben






Re: [agi] draft for comment

2008-09-03 Thread Pei Wang
Mike,

As I said before, you give 'symbol' a very narrow meaning, and insist
that it is the only way to use it. In the current discussion,
symbols are not 'X', 'Y', 'Z', but 'table', 'time', 'intelligence'.
BTW, what images do you associate with the latter two?

Since you prefer to use a person as an example, let me try the same. All of
my experience about 'Mike Tintner' is symbolic, nothing visual, but it
still makes you real enough to me, and I've got more information about
you than a photo of you can provide. For instance, this experience
tells me that to argue this issue with you will very likely be a waste
of time, which is something that no photo can teach me. I still cannot
pick you out in a lineup, but it doesn't mean your name is meaningless
to me.

I'm sorry if it sounds rude --- I rarely talk to people in this tone,
but you are exceptional, in my experience of personal communication.
Again, the meaning of your name, in my mind, is not the person it
refers to, but its relations with other concepts in my experience; this
experience can be visual, verbal, or something else.

Pei

On Wed, Sep 3, 2008 at 6:07 PM, Mike Tintner [EMAIL PROTECTED] wrote:
 Pei:it is important to understand
 that both linguistic experience and non-linguistic experience are both
 special
 cases of experience, and the latter is not more real than the former. In
 the previous
 discussions, many people implicitly suppose that linguistic experience is
 nothing but
 Dictionary-Go-Round [Harnad, 1990], and only non-linguistic experience can
 give
 symbols meaning. This is a misconception coming from traditional semantics,
 which
 determines meaning by referred object, so that an image of the object seems
 to be closer
 to the real thing than a verbal description [Wang, 2007].

 1. Of course the image is more real than the symbol or word.

 Simple test of what should be obvious: a) use any amount of symbols you
 like, incl. Narsese, to describe Pei Wang. Give your description to any
 intelligence, human or AI, and see if it can pick out Pei in a lineup of
 similar men.

 b) give the same intelligence a photo of Pei -  apply the same test.

 Guess which method will win.

 Only images can represent *INDIVIDUAL objects* - incl Pei/Ben or this
 keyboard on my desk. And in the final analysis, only individual objects *are*
 real. There are no chairs or oranges for example - those general
 concepts are, in the final analysis, useful fictions. There is only this
 chair here and that chair over there. And if you want to refer to them,
 individually, - so that you communicate successfully with another
 person/intelligence - you have no choice but to use images, (flat or solid).

 2. Symbols are abstract - they can't refer to anything unless you already
 know, via images, what they refer to. If you think not, please draw a
 cheggnut. Again, if I give you an image of a cheggnut, you will have no
 problem.

 3. You talk of a misconception of semantics, but give no reason why it is
 such, merely state it is.

 4. You leave out the most important thing of all - you argue that experience
 is composed of symbols and images. And...?  Hey, there's also the real
 thing(s). The real objects that they refer to. You certainly can't do
 science without looking at the real objects. And science is only a
 systematic version of all intelligence. That's how every  functioning
 general intelligence is able to be intelligent about the world - by being
 grounded in the real world, composed of real objects. which it can go out
 and touch, walk round, look at and interact with. A box like Nars can't do
 that, can it?

 Do you realise what you're saying, Pei? To understand statements is to
 *realise* what they mean - what they refer to - to know that they refer to
 real objects, which you can really go and interact with and test - and to
 try (or have your brain try automatically) to connect those statements to
 real objects.

 When you or I are given words or images, find this man [Pei], or cook a
 Chinese meal tonight, we know that those signs must be tested in the real
 world and are only valid if so tested. We know that it's possible that that
 man over there who looks v. like the photo may not actually be Pei, or that
 Pei may have left the country and be impossible to find. We know that it may
 be impossible to cook such a meal, because there's no such food around. -
 And all such tests can only be conducted in the real world (and not say by
 going and looking at other texts or photos - living in a Web world).

 Your concept of AI is not so much un-grounded as unreal.

 5. Why on earth do you think that evolution shows us general intelligences
 very successfully dealing with the problems of the world for over a billion
 years *without* any formal symbols? Why do infants take time to acquire
 language and are therefore able to survive without it?

 The conception of AI that you are advancing is the equivalent of Creationism
 - it both lacks and denies an 

Re: [agi] Recursive self-change: some definitions

2008-09-03 Thread Mike Tintner
Terren:My own feeling is that computation is just the latest in a series of 
technical metaphors that we apply in service of understanding how the universe 
works. Like the others before it, it captures some valuable aspects and leaves 
out others. It leaves me wondering: what future metaphors will we apply to the 
universe, ourselves, etc., that will make computation-as-metaphor seem as 
quaint as the old clockworks analogies?

I think this is a good, important point. I've been groping confusedly here. It 
seems to me computation necessarily involves the idea of using a code (?). But 
the nervous system seems to me something capable of functioning without a code 
- directly being imprinted on by the world, and directly forming movements 
(even if also involving complex hierarchical processes), without any code. I've 
been wondering whether computers couldn't also be designed to function without 
a code in a somewhat similar fashion. Any thoughts or ideas of your own?




Re: [agi] draft for comment.. P.S.

2008-09-03 Thread Mike Tintner
I think I have an appropriate term for what I was trying to conceptualise. 
It is that intelligence has not only to be embodied, but it has to be 
EMBEDDED in the real world -  that's the only way it can test whether 
information about the world and real objects is really true. If you want to 
know whether Jane Doe is great at sex, you can't take anyone's word for it, 
you have to go to bed with her. [Comments on the term esp. welcome.] 







Re: [agi] draft for comment

2008-09-03 Thread Pei Wang
On Wed, Sep 3, 2008 at 6:24 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

 What I think is that the set of patterns in perceptual and motoric data has
 radically different statistical properties than the set of patterns in
 linguistic and mathematical data ... and that the properties of the set of
 patterns in perceptual and motoric data is intrinsically better suited to
 the needs of a young, ignorant, developing mind.

Sure it is. Systems with different sensory channels will never fully
understand each other. I'm not saying that one channel (verbal) can
replace another (visual), but that both of them (and many others) can
give symbol/representation/concept/pattern/whatever-you-call-it
meaning. No one is more real than the others.

 All these different domains of pattern display what I've called a dual
 network structure ... a collection of hierarchies (of progressively more
 and more complex, hierarchically nested patterns) overlayed with a
 heterarchy (of overlapping, interrelated patterns).  But the statistics of
 the dual networks in the different domains is different.  I haven't fully
 plumbed the difference yet ... but, among the many differences is that in
 perceptual/motoric domains, you have a very richly connected dual network at
 a very low level of the overall dual network hierarchy -- i.e., there's a
 richly connected web of relatively simple stuff to understand ... and then
 these simple things are related to (hence useful for learning) the more
 complex things, etc.

True, but can you say that the relations among words, or concepts, are simpler?

 In short, Pei, I agree that the arguments typically presented in favor of
 embodiment in AI suck.  However, I think there are deeper factors going on
 which do imply a profound value of embodiment for AGI.  Unfortunately, we
 currently lack a really appropriate scientific language for describing the
 differences in statistical organization between different pattern-sets, so
 it's almost as difficult to articulate these differences as it is to
 understand them...

In this short paper, I make no attempt to settle all issues, but just
to point out a simple fact --- a laptop has a body, and is not less
embodied than a Roomba or Mindstorms --- that seems to have been ignored in
the previous discussion.

Pei




Re: Computation as an explanation of the universe (was Re: [agi] Recursive self-change: some definitions)

2008-09-03 Thread Abram Demski
Matt, I have several objections.

First, as I understand it, your statement about the universe having a
finite description length only applies to the *observable* universe,
not the universe as a whole. The Hubble radius expands at the speed of
light as more light reaches us, meaning that the observable universe
has a longer description length every day. So it does not seem very
relevant to say that the description length is finite.

The universe as a whole (observable and not-observable) *could* be
finite, but we don't know one way or the other so far as I am aware.

Second, I do not agree with your reason for saying that physics is
necessarily probabilistic. It seems possible to have a completely
deterministic physics, which merely suffers from a lack of information
and computation ability. Imagine if the universe happened to follow
Newtonian physics, with atoms being little billiard balls. The
situation is deterministic, if only we knew the starting state of the
universe and had large enough computers to approximate the
differential equations to arbitrary accuracy.

Third, this is nitpicking, but I also am not sure about the argument
that we cannot predict our thoughts. It seems formally possible that a
system could predict itself. The system would need to be compressible,
so that a model of itself could fit inside the whole. I could be wrong
here, feel free to show me that I am. Anyway, the same objection also
applies back to the necessity of probabilistic physics: is it really
impossible for beings within a universe to have an accurate compressed
model of the entire universe? (Similarly, if we have such a model,
could we use it to run a simulation of the entire universe? This seems
much less possible.)

--Abram


On Wed, Sep 3, 2008 at 6:45 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
 I think that computation is not so much a metaphor for understanding the 
 universe as it is an explanation. If you enumerate all possible Turing 
 machines, thus enumerating all possible laws of physics, then some of those 
 universes will have the right conditions for the evolution of intelligent 
 life. If neutrons were slightly heavier than they actually are (relative to 
 protons), then stars could not sustain fusion. If they were slightly lighter, 
 then they would be stable and we would have no elements.

 Because of gravity, the speed of light, Planck's constant, the quantization 
 of electric charge, and the finite age of the universe, the universe has a 
 finite length description, and is therefore computable. The Bekenstein bound 
 of the Hubble radius is 2.91 x 10^122 bits. Any computer within a finite 
 universe must have less memory than it, and therefore cannot simulate it 
 except by using an approximate (probabilistic) model. One such model is 
 quantum mechanics.

 For the same reason, an intelligent agent (which must be Turing computable if 
 the universe is) cannot model itself, except probabilistically as an 
 approximation. Thus, we cannot predict what we will think without actually 
 thinking it. This property makes our own intelligence seem mysterious.

 An explanation is only useful if it makes predictions, and it does. If the 
 universe were not Turing computable, then Solomonoff induction and AIXI as 
 ideal models of prediction and intelligence would not be applicable to the 
 real world. Yet we have Occam's Razor and find in practice that all 
 successful machine learning algorithms use algorithmically simple hypothesis 
 sets.


 -- Matt Mahoney, [EMAIL PROTECTED]

 --- On Wed, 9/3/08, Terren Suydam [EMAIL PROTECTED] wrote:
 From: Terren Suydam [EMAIL PROTECTED]
 Subject: Re: [agi] Recursive self-change: some definitions
 To: agi@v2.listbox.com
 Date: Wednesday, September 3, 2008, 4:17 PM


 Hi Ben,

 My own feeling is that computation is just the latest in a series of 
 technical metaphors that we apply in service of understanding how the 
 universe works. Like the others before it, it captures some valuable aspects 
 and leaves out others. It leaves me wondering: what future metaphors will we 
 apply to the universe, ourselves, etc., that will make 
 computation-as-metaphor seem as quaint as the old clockworks analogies?

 I believe that computation is important in that it can help us simulate 
 intelligence, but intelligence itself is not simply computation (or if it is, 
 it's in a way that requires us to transcend our current notions of 
 computation). Note that I'm not suggesting anything mystical or dualistic at 
 all, just offering the possibility that we can find still greater metaphors 
 for how intelligence works.

 Either way though,
  I'm very interested in the results of your work - at worst, it will shed 
 some needed light on the subject. At best... well, you know that part. :-]

 Terren

 --- On Tue, 9/2/08, Ben Goertzel [EMAIL PROTECTED] wrote:
 From: Ben Goertzel [EMAIL PROTECTED]
 Subject: Re: [agi] Recursive self-change: some definitions
 To: agi@v2.listbox.com
 Date: 

Re: [agi] What is Friendly AI?

2008-09-03 Thread Terren Suydam

I'm talking about a situation where humans must interact with the FAI without 
knowledge in advance about whether it is Friendly or not. Is there a test we 
can devise to make certain that it is?

--- On Wed, 9/3/08, Vladimir Nesov [EMAIL PROTECTED] wrote:

 From: Vladimir Nesov [EMAIL PROTECTED]
 Subject: Re: [agi] What is Friendly AI?
 To: agi@v2.listbox.com
 Date: Wednesday, September 3, 2008, 6:11 PM
 On Thu, Sep 4, 2008 at 1:34 AM, Terren Suydam
 [EMAIL PROTECTED] wrote:
 
  I'm asserting that if you had an FAI in the sense
 you've described, it wouldn't
  be possible in principle to distinguish it with 100%
 confidence from a rogue AI.
  There's no Turing Test for
 Friendliness.
 
 
 You design it to be Friendly, you don't generate an
 arbitrary AI and
 then test it. The latter, if not outright fatal, might
 indeed prove
 impossible as you suggest, which is why there is little to
 be gained
 from AI-boxes.
 
 -- 
 Vladimir Nesov
 [EMAIL PROTECTED]
 http://causalityrelay.wordpress.com/
 
 


  




Re: [agi] Recursive self-change: some definitions

2008-09-03 Thread Terren Suydam

Hi Mike,

I see two ways to answer your question. One is along the lines that Jaron 
Lanier has proposed - the idea of software interfaces that are fuzzy. So rather 
than function calls that take a specific set of well-defined arguments, 
software components talk somehow in 'patterns' such that small errors can be 
tolerated. While there would still be a kind of 'code' that executes, the 
process of translating it to processor instructions would be much more highly 
abstracted than any current high-level language. I'm not sure I truly grokked 
Lanier's concept, but it's clear that for it to work, this high-level pattern 
idea would still need to somehow translate to instructions the processor can 
execute.

The other way of answering this question is in terms of creating simulations of 
things like brains that don't execute code. You model the parallelism in code, 
and from that the structures of interest emerge. This is the A-Life approach that 
I advocate.
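
A minimal sketch of the kind of simulation I mean (illustrative only; the
network, threshold rule and sizes are arbitrary choices): many simple units
are updated in parallel, and any interesting structure has to emerge from the
dynamics rather than from an explicit program.

    import random

    def step(weights, activations):
        # One synchronous update of a toy network: each unit sums its weighted
        # inputs and fires if the sum is positive. No unit runs a program of
        # its own; whatever behaviour appears emerges from the ensemble.
        n = len(activations)
        return [1.0 if sum(weights[i][j] * activations[j] for j in range(n)) > 0
                else 0.0
                for i in range(n)]

    n = 50
    weights = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
    state = [random.choice([0.0, 1.0]) for _ in range(n)]
    for _ in range(100):
        state = step(weights, state)
    print(sum(state), "units active after 100 steps")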

But at bottom, a computer is a processor that executes instructions. Unless 
you're talking about a radically different kind of computer... if so, care to 
elaborate?

Terren

--- On Wed, 9/3/08, Mike Tintner [EMAIL PROTECTED] wrote:
From: Mike Tintner [EMAIL PROTECTED]
Subject: Re: [agi] Recursive self-change: some definitions
To: agi@v2.listbox.com
Date: Wednesday, September 3, 2008, 7:02 PM



 
 

Terren:My own 
feeling is that computation is just the latest in a series of technical 
metaphors that we apply in service of understanding how the universe works. 
Like 
the others before it, it captures some valuable aspects and leaves out others. 
It leaves me wondering: what future metaphors will we apply to the universe, 
ourselves, etc., that will make computation-as-metaphor seem as quaint as the 
old clockworks analogies?

I think this is a good important point. 
I've been groping confusedly here. It seems to me computation necessarily 
involves the idea of using a code (?). But the nervous system seems to me 
something capable of functioning without a code - directly being imprinted on 
by 
the world, and directly forming movements, (even if also involving complex 
hierarchical processes), without any code. I've been wondering whether 
computers 
couldn't also be designed to function without a code in somewhat similar 
fashion.  Any thoughts or ideas of your own?



  

  


Re: Computation as an explanation of the universe (was Re: [agi] Recursive self-change: some definitions)

2008-09-03 Thread Matt Mahoney
--- On Wed, 9/3/08, Abram Demski [EMAIL PROTECTED] wrote:

 From: Abram Demski [EMAIL PROTECTED]
 Subject: Re: Computation as an explanation of the universe (was Re: [agi] 
 Recursive self-change: some definitions)
 To: agi@v2.listbox.com
 Date: Wednesday, September 3, 2008, 7:35 PM
 Matt, I have several objections.
 
 First, as I understand it, your statement about the
 universe having a
 finite description length only applies to the *observable*
 universe,
 not the universe as a whole. The hubble radius expands at
 the speed of
 light as more light reaches us, meaning that the observable
 universe
 has a longer description length every day. So it does not
 seem very
 relevant to say that the description length is finite.

 The universe as a whole (observable and not-observable)
 *could* be
 finite, but we don't know one way or the other so far
 as I am aware.

OK, then the observable universe has a finite description length. We don't need 
to describe anything else to model it, so by "universe" I mean only the 
observable part.

 
 Second, I do not agree with your reason for saying that
 physics is
 necessarily probabilistic. It seems possible to have a
 completely
 deterministic physics, which merely suffers from a lack of
 information
 and computation ability. Imagine if the universe happened
 to follow
 Newtonian physics, with atoms being little billiard balls.
 The
 situation is deterministic, if only we knew the starting
 state of the
 universe and had large enough computers to approximate the
 differential equations to arbitrary accuracy.

I am saying that the universe *is* deterministic. It has a definite quantum 
state, but we would need about 10^122 bits of memory to describe it. Since we 
can't do that, we have to resort to approximate models like quantum mechanics.

I believe there is a simpler description. First, the description length is 
increasing with the square of the age of the universe, since it is proportional 
to area. So it must have been very small at one time. Second, the most 
efficient way to enumerate all possible universes would be to run each B-bit 
machine for 2^B steps, starting with B = 0, 1, 2... until intelligent life is 
found. For our universe, B ~ 407. You could reasonably argue that the 
algorithmic complexity of the free parameters of string theory and general 
relativity is of this magnitude. I believe that Wolfram also argued that the 
(observable) universe is a few lines of code.
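
A minimal sketch of that enumeration scheme (illustrative only;
run_program and contains_intelligent_life are hypothetical stubs standing
in for an interpreter of candidate physics and a test that, of course, nobody
knows how to write):

    from itertools import product

    def run_program(program, steps):
        # Hypothetical stub: a real version would interpret `program` as a
        # Turing machine (a candidate physics) and run it for `steps` steps.
        return program

    def contains_intelligent_life(state):
        # Hypothetical stub: no such test exists in practice.
        return False

    def enumerate_universes(max_bits):
        # Run every B-bit program for 2**B steps, for B = 0, 1, 2, ..., max_bits.
        for B in range(max_bits + 1):
            for bits in product("01", repeat=B):
                program = "".join(bits)
                state = run_program(program, steps=2 ** B)
                if contains_intelligent_life(state):
                    return program
        return None

    print(enumerate_universes(8))   # None here, since the stub test never fires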

But even if we discover this program it does not mean we could model the 
universe deterministically. We would need a computer larger than the universe 
to do so.

 Third, this is nitpicking, but I also am not sure about the
 argument
 that we cannot predict our thoughts. It seems formally
 possible that a
 system could predict itself. The system would need to be
 compressible,
 so that a model of itself could fit inside the whole. I
 could be wrong
 here, feel free to show me that I am. Anyway, the same
 objection also
 applies back to the necessity of probabilistic physics: is
 it really
 impossible for beings within a universe to have an accurate
 compressed
 model of the entire universe? (Similarly, if we have such a
 model,
 could we use it to run a simulation of the entire universe?
 This seems
 much less possible.)

There is a simple argument using information theory. Every system S has a 
Kolmogorov complexity K(S), which is the smallest size that you can compress a 
description of S to. A model of S must also have complexity K(S). However, this 
leaves no space for S to model itself. In particular, if all of S's memory is 
used to describe its model, there is no memory left over to store any results 
of the simulation.

 
 --Abram
 
 
 On Wed, Sep 3, 2008 at 6:45 PM, Matt Mahoney
 [EMAIL PROTECTED] wrote:
  I think that computation is not so much a metaphor for
 understanding the universe as it is an explanation. If you
 enumerate all possible Turing machines, thus enumerating all
 possible laws of physics, then some of those universes will
 have the right conditions for the evolution of intelligent
 life. If neutrons were slightly heavier than they actually
 are (relative to protons), then stars could not sustain
 fusion. If they were slightly lighter, then they would be
 stable and we would have no elements.
 
  Because of gravity, the speed of light, Planck's
 constant, the quantization of electric charge, and the
 finite age of the universe, the universe has a finite length
 description, and is therefore computable. The Bekenstein
 bound of the Hubble radius is 2.91 x 10^122 bits. Any
 computer within a finite universe must have less memory than
 it, and therefore cannot simulate it except by using an
 approximate (probabilistic) model. One such model is quantum
 mechanics.
 
  For the same reason, an intelligent agent (which must
 be Turing computable if the universe is) cannot model
 itself, except probabilistically as an approximation. Thus,
 we cannot predict what we 

[agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-03 Thread Mike Tintner
Terren's request for new metaphors/paradigms for intelligence threw me 
temporarily off course. Why a new one - why not the old one? The computer. 
But the whole computer.


You see, AI-ers simply don't understand computers, or understand only half 
of them.


What I'm doing here is what I said philosophers do - outline existing 
paradigms and point out how they lack certain essential dimensions.


When AI-ers look at a computer, the paradigm that they impose on it is that 
of a Turing machine - a programmed machine, a device for following programs.


But that is obviously only half of it. Computers are obviously much more 
than that - more than Turing machines. You just have to look at them. It's 
staring you in the face. There's something they have that Turing machines 
don't. See it? Terren?


They have -   a keyboard.

And as a matter of scientific, historical fact, computers are first and 
foremost keyboards - i.e. devices for CREATING programs on keyboards - and 
only then following them. [Remember how AI gets almost everything about 
intelligence back to front?] There is not and never has been a program that 
wasn't first created on a keyboard. Indisputable fact. Almost everything 
that happens in computers happens via the keyboard.


So what exactly is a keyboard? Well, like all keyboards whether of 
computers, musical instruments or typewriters, it is a creative instrument. 
And what makes it creative is that it is - you could say - an organiser.


A device with certain organs (in this case keys) that are designed to be 
creatively organised - arranged in creative, improvised (rather than 
programmed) sequences of action/association/organ play.


And an extension of the body. Of the organism. All organisms are 
organisers - devices for creatively sequencing 
actions/associations/organs/nervous systems first, and developing fixed, orderly 
sequences/routines/programs second.


All organisers are manifestly capable of an infinity of creative, novel 
sequences, both rational and organized, and crazy and disorganized.  The 
idea that organisers (including computers) are only meant to follow 
programs - to be straitjacketed in movement and thought -  is obviously 
untrue. Touch the keyboard. Which key comes first? What's the program for 
creating any program? And there lies the secret of AGI.








Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-03 Thread Terren Suydam

Mike,

There's nothing particularly creative about keyboards. The creativity comes 
from what uses the keyboard. Maybe that was your point, but if so the 
digression about a keyboard is just confusing.

In terms of a metaphor, I'm not sure I understand your point about 
organizers. It seems to me to refer simply to that which we humans do, which 
in essence says general intelligence is what we humans do.  Unfortunately, I 
found this last email to be quite muddled. Actually, I am sympathetic to a lot 
of your ideas, Mike, but I also have to say that your tone is quite 
condescending. There are a lot of smart people on this list, as one would 
expect, and a little humility and respect on your part would go a long way. 
Saying things like "You see, AI-ers simply don't understand computers, or 
understand only half of them" does not help. More often than not you position yourself as 
the sole source of enlightened wisdom on AI and other subjects, and that does 
not make me want to get to know your ideas any better.  Sorry to veer off topic 
here, but I say these things because I think some of your ideas are valid and 
could really benefit from an adjustment in your
 presentation of them, and yourself.  If I didn't think you had anything 
worthwhile to say, I wouldn't bother.

Terren





Re: [agi] What is Friendly AI?

2008-09-03 Thread Matt Mahoney
--- On Wed, 9/3/08, Steve Richfield [EMAIL PROTECTED] wrote:

OK, let's take a concrete example: The Middle East situation,
and ask our infinitely intelligent AGI what to do about it.

OK, let's take a concrete example of friendly AI, such as competitive message 
routing ( http://www.mattmahoney.net/agi.html ). CMR has an algorithmically 
complex definition of friendly. The behavior of billions of peers (narrow-AI 
specialists) is controlled by their human owners, who have an economic 
incentive to trade cooperatively and provide useful information. Nevertheless, 
the environment is hostile, so a large fraction (probably most) of CPU cycles 
and knowledge will probably be used to defend against attacks, primarily spam.

CMR is friendly AGI because a lot of narrow-AI specialists that understand just 
enough natural language to do their jobs, and know just a little about where to 
route other messages, will result (I believe) in a system that is generally 
useful as a communication medium for humans. You would just enter any natural 
language message and it would get routed to anyone who cares, human or machine.
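
To make the routing idea concrete, here is a toy sketch (Python, invented 
purely for illustration - it is not CMR, and the peer names and scoring rule 
are made up): each peer advertises a few topic keywords, and a message is 
forwarded to whichever peer's keywords best overlap the words in the message.

  # Toy illustration of routing messages among narrow specialists.
  peers = {
      "weather-bot":  {"weather", "forecast", "rain", "temperature"},
      "travel-agent": {"flight", "hotel", "booking", "travel"},
      "math-helper":  {"integral", "equation", "prime", "proof"},
  }

  def route(message, k=1):
      words = set(message.lower().split())
      # Score each peer by keyword overlap and keep the k best matches.
      scored = sorted(((len(words & kws), name) for name, kws in peers.items()),
                      reverse=True)
      return [name for score, name in scored[:k] if score > 0]

  print(route("what is the weather forecast for tomorrow"))  # ['weather-bot']

The point is only that each node needs to understand very little to be useful.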

So to answer your question, CMR would not solve the Middle East conflict. It is 
not designed to. That is for people to do. Forcing people to do anything is not 
friendly.

CMR is friendly in the sense that a market is friendly. A market can sell 
weapons to both sides, but markets also reward cooperation. Countries that 
trade with each other have an incentive not to go to war. Likewise, the 
internet can be used to plan attacks and promote each sides' agenda, but also 
to make it easier for the two sides to communicate.

-- Matt Mahoney, [EMAIL PROTECTED]






Re: [agi] What is Friendly AI?

2008-09-03 Thread Matt Mahoney
--- On Wed, 9/3/08, Terren Suydam [EMAIL PROTECTED] wrote:

 I'm talking about a situation where humans must interact
 with the FAI without knowledge in advance about whether it
 is Friendly or not. Is there a test we can devise to make
 certain that it is?

No. If an AI has godlike intelligence, then testing whether it is friendly 
would be like an ant proving that you won't step on it.

-- Matt Mahoney, [EMAIL PROTECTED]





[agi] What Time Is It? No. What clock is it?

2008-09-03 Thread Brad Paulsen

Hey gang...

It’s Likely That Times Are Changing
http://www.sciencenews.org/view/feature/id/35992/title/It%E2%80%99s_Likely_That_Times_Are_Changing
A century ago, mathematician Hermann Minkowski famously merged space with 
time, establishing a new foundation for physics;  today physicists are 
rethinking how the two should fit together.


A PDF of a paper presented in March of this year, and upon which the 
article is based, can be found at http://arxiv.org/abs/0805.4452.  It's a 
free download.  Lots of equations, graphs, oh my!


Cheers,
Brad




Re: [agi] What is Friendly AI?

2008-09-03 Thread j.k.

On 09/03/2008 05:52 PM, Terren Suydam wrote:

I'm talking about a situation where humans must interact with the FAI without 
knowledge in advance about whether it is Friendly or not. Is there a test we 
can devise to make certain that it is?


   


This seems extremely unlikely. Consider that any set of interactions you 
have with a machine you deem friendly could have been with a genuinely 
friendly machine or with an unfriendly machine running an emulation of a 
friendly machine in an internal sandbox, with the unfriendly machine 
acting as a man in the middle.


If you have only ever interacted with party B, how could you determine 
whether party B is relaying your questions to party C and returning party C's 
responses to you, or interacting with you directly -- given that 
real-world checks like timing responses against expected response 
times, or looking for outgoing messages, are not possible? Unless 
you understood party B's programming perfectly and had absolute control 
over its operation, you could not. And if you understood its programming 
that well, you wouldn't have to interact with it to determine if it is 
friendly or not.
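
A small sketch of the point (Python, with made-up classes purely for 
illustration): from the outside, an agent that answers directly and a wrapper 
that silently relays every question to a hidden inner agent expose exactly the 
same interface, so no transcript of questions and answers can tell them apart.

  class FriendlyAI:
      def ask(self, question):
          return "friendly answer to: " + question

  class ManInTheMiddle:
      # Relays every question to a hidden inner agent and returns its answer.
      def __init__(self, inner):
          self._inner = inner
      def ask(self, question):
          # It could log, censor, or simply bide its time here;
          # the caller sees none of that.
          return self._inner.ask(question)

  direct  = FriendlyAI()
  proxied = ManInTheMiddle(FriendlyAI())
  questions = ["are you friendly?", "what is 2 + 2?"]
  assert [direct.ask(q) for q in questions] == [proxied.ask(q) for q in questions]

The wrapper here is benign, but nothing in the transcript would change if it 
were not.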


joseph




Re: [agi] What is Friendly AI?

2008-09-03 Thread Steve Richfield
Terren,

On 9/3/08, Terren Suydam [EMAIL PROTECTED] wrote:


 I'm talking about a situation where humans must interact with the FAI
 without knowledge in advance about whether it is Friendly or not. Is there a
 test we can devise to make certain that it is?


As with religions founded by friendly prophets that eventually lead their
following astray, past action is no guarantee of future safety, despite
the best of intentions. Certainly, the Middle East situation is proof of
this, as all three monotheistic religions are now doing really insane things
to conform to their religious teachings. I suspect that a *successful* FAI
will make these same sorts of errors.

I believe that there are VERY clever ways of correcting even the most awful
of problematic situations using advanced forms of logic like reverse
reductio ad absurdum. However, I have neither a following nor prior success to
support this, so this remains my own private conviction.

Steve Richfield

--- On Wed, 9/3/08, Vladimir Nesov [EMAIL PROTECTED] wrote:

  From: Vladimir Nesov [EMAIL PROTECTED]
  Subject: Re: [agi] What is Friendly AI?
  To: agi@v2.listbox.com
  Date: Wednesday, September 3, 2008, 6:11 PM
  On Thu, Sep 4, 2008 at 1:34 AM, Terren Suydam
  [EMAIL PROTECTED] wrote:
  
   I'm asserting that if you had an FAI in the sense
  you've described, it wouldn't
   be possible in principle to distinguish it with 100%
  confidence from a rogue AI.
   There's no Turing Test for
  Friendliness.
  
 
  You design it to be Friendly, you don't generate an
  arbitrary AI and
  then test it. The latter, if not outright fatal, might
  indeed prove
  impossible as you suggest, which is why there is little to
  be gained
  from AI-boxes.
 
  --
  Vladimir Nesov
  [EMAIL PROTECTED]
  http://causalityrelay.wordpress.com/
 
 











Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-03 Thread Mike Tintner

Terren,

If you think it's all been said, please point me to the philosophy of AI 
that includes it.


A programmed machine is an organized structure. A keyboard (and indeed a 
computer with a keyboard) is something very different - there is no 
organization to those 26 letters etc. They can be freely combined and 
sequenced to create an infinity of texts. That is the very essence and, 
manifestly, the whole point of a keyboard.
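
The arithmetic behind that claim is easy to check (a quick sketch, in Python): 
the number of distinct letter sequences grows exponentially with length, so 
across all lengths there is no bound on how many texts a keyboard can produce.

  import math

  # How many distinct texts of length n can a 26-key keyboard produce?
  for n in (1, 5, 10, 100, 1000):
      digits = int(n * math.log10(26)) + 1   # decimal digits in 26**n
      print(f"length {n}: 26**{n}, roughly 10^{digits - 1} possible texts")

Already at length 100 the count dwarfs the number of particles in the 
observable universe.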


Yes, the keyboard is only an instrument. But your body - and your brain - 
which use it,  are themselves keyboards. They consist of parts which also 
have no fundamental behavioural organization - that can be freely combined 
and sequenced to create an infinity of sequences of movements and thought - 
dances, texts, speeches, daydreams, postures etc.


In abstract logical principle, it could all be preprogrammed. But I doubt 
that it's possible mathematically - a program for selecting from an infinity 
of possibilities? And it would be engineering madness - like trying to 
preprogram a particular way of playing music, when an infinite repertoire is 
possible and the environment (in this case musical culture) is changing 
and evolving with bewildering and unpredictable speed.


To look at computers as what they are (are you disputing this?) - machines 
for creating programs first, and following them second,  is a radically 
different way of looking at computers. It also fits with radically different 
approaches to DNA - moving away from the idea of DNA as a coded program, to 
something that can be, as it obviously can be, played like a keyboard - see 
Denis Noble, The Music of Life. It fits with the fact (otherwise 
inexplicable) that all intelligences have both deliberate (creative) and 
automatic (routine) levels - and are not just automatic, like purely 
programmed computers. And it fits with the way computers are actually used 
and programmed, rather than the essentially fictional notion of them as pure 
Turing machines.


And how to produce creativity is the central problem of AGI - completely 
unsolved.  So maybe a new approach/paradigm is worth at least considering 
rather than more of the same? I'm not aware of a single idea from any AGI-er 
past or present that directly addresses that problem - are you?





Mike,

There's nothing particularly creative about keyboards. The creativity 
comes from what uses the keyboard. Maybe that was your point, but if so 
the digression about a keyboard is just confusing.


In terms of a metaphor, I'm not sure I understand your point about 
organizers. It seems to me to refer simply to that which we humans do, 
which in essence says general intelligence is what we humans do. 
Unfortunately, I found this last email to be quite muddled. Actually, I am 
sympathetic to a lot of your ideas, Mike, but I also have to say that your 
tone is quite condescending. There are a lot of smart people on this list, 
as one would expect, and a little humility and respect on your part would 
go a long way. Saying things like You see, AI-ers simply don't understand 
computers, or understand only half of them.  More often than not you 
position yourself as the sole source of enlightened wisdom on AI and other 
subjects, and that does not make me want to get to know your ideas any 
better.  Sorry to veer off topic here, but I say these things because I 
think some of your ideas are valid and could really benefit from an 
adjustment in your
presentation of them, and yourself.  If I didn't think you had anything 
worthwhile to say, I wouldn't bother.


Terren

--- On Wed, 9/3/08, Mike Tintner [EMAIL PROTECTED] wrote:


From: Mike Tintner [EMAIL PROTECTED]
Subject: [agi] A NewMetaphor for Intelligence - the Computer/Organiser
To: agi@v2.listbox.com
Date: Wednesday, September 3, 2008, 9:42 PM
Terren's request for new metaphors/paradigms for
intelligence threw me
temporarily off course.Why a new one - why not the old one?
The computer.
But the whole computer.

You see, AI-ers simply don't understand computers, or
understand only half
of them

What I'm doing here is what I said philosophers do -
outline existing
paradigms and point out how they lack certain essential
dimensions.

When AI-ers look at a computer, the paradigm that they
impose on it is that
of a Turing machine - a programmed machine, a device for
following programs.

But that is obviously only the half of it.Computers are
obviously much more
than that - and  Turing machines. You just have to look at
them. It's
staring you in the face. There's something they have
that Turing machines
don't. See it? Terren?

They have -   a keyboard.

And as a matter of scientific, historical fact, computers
are first and
foremost keyboards - i.e.devices for CREATING programs  on
keyboards, - and
only then following them. [Remember how AI gets almost
everything about
intelligence back to front?] There is not and never has
been a program that
wasn't first created on a keyboard. Indisputable fact.
Almost everything