[agi] Thought experiment on informationally limited systems

2008-02-28 Thread William Pearson
I'm going to try to elucidate my approach to building an intelligent
system, in a roundabout fashion. This is the problem I am trying to
solve.

Imagine you are designing a computer system to solve an unknown
problem, and you have these constraints:

A) Limited space to put general information about the world.
B) Communication with the system after it has been deployed is possible, but
the less the better.
C) We shall also assume limited processing ability, etc.

The goal is to create a system that can solve the tasks as quickly as
possible with the least interference from the outside.

I'd like people to write down a brief sketch of their solution to this sort
of problem. Is it different from your AGI design? If so, why?

Okay, so an example test is: survey wrecks in the ocean with a
submersible. What information would you send to the submersible
system to enable it to survey better?

My answers to these questions.

System sketch? - It would have to be generally programmable: I would
want to be able to send it arbitrary programs after it had been
created, so I could send it a program to decrypt things or control
things. It would also need to be able to generate its own programming
and select between the different programs, in order to minimise my need
to program it. It is no different from my AGI design, unsurprisingly.

It would need initial programming; those programs may be something
like the AGI systems we have at the moment, but the point of the system
would be that it would be able to choose between different programs
depending upon what was found to be useful, done in an Eric Baum/agoric
systems fashion.
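
To make that concrete, here is a minimal sketch of the kind of credit-based
selection loop I have in mind. Everything in it (the Candidate class, the
bidding rule, the reward plumbing) is an illustrative assumption of mine, not
a description of an existing implementation:

  # Minimal sketch of Baum/agoric-style selection between programs.
  # All names and numbers here are illustrative assumptions.
  class Candidate:
      def __init__(self, name, policy):
          self.name = name
          self.policy = policy   # a callable: observation -> action
          self.credit = 1.0      # persistent wealth across rounds

  def run_round(candidates, observation, evaluate):
      # Each program bids a fraction of its credit for control of the actuators.
      bids = {c: 0.1 * c.credit for c in candidates}
      winner = max(bids, key=bids.get)
      winner.credit -= bids[winner]        # pay for the right to act
      action = winner.policy(observation)
      winner.credit += evaluate(action)    # external reward goes to the acting program
      return action

  # Programs that act usefully accumulate credit and win control more often;
  # persistently useless ones go broke and can be culled or mutated.

The point is that I only have to supply the evaluate signal and some initial
candidate programs; selection between them does the rest.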

What information? - Initially, alterations to programs to change the
language in which I communicate with the system, to reduce the amount of
communication needed. Wreck, tide, motor and current would get short
descriptions, for example.

  Will Pearson



Re: [agi] Thought experiment on informationally limited systems

2008-02-28 Thread William Pearson
On 28/02/2008, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:


 On 2/28/08, William Pearson [EMAIL PROTECTED] wrote:
  I'm going to try and elucidate my approach to building an intelligent
   system, in a round about fashion. This is the problem I am trying to
  solve.
 
  Imagine you are designing a computer system to solve an unknown
  problem, and you have these constraints
  
  A) Limited space to put general information about the world
  B) Communication with the system after it has been deployed. The less
  the better.
  C) We shall also assume limited processing ability etc
  
  The goal is to create a system that can solve the tasks as quickly as
  possible with the least interference from the outside.
 
  I'd like people to write a brief sketch of your solution to this sort
   of problem down. Is it different from your AGI designs, if so why?


 Space/time-optimality is not my top concern.  I'm focused on building an AGI
 that *works*, within reasonable space/time.  If you add these constraints,
 you're making the AGI problem harder than it already is.  Ditto for the
 amount of user interaction.  Why make it harder?

I'm not looking for optimality, just that doing better is important. I don't
want to have to hold my system's hand, teaching it laboriously, so
the less information I have to feed it the better. Why ignore the
problem and make the job of teaching it harder?

Also, we have limited space and time in the real world.

  System Sketch? - It would have to be generally programmable, I would
  want to be able to send it arbitrary programs after it had been
  created, so I could send it a program to decrypt things or control
   things. It would also need to able to generate it's own programming
  and select between the different programs in order to minimise my need
  to program it. It is not different to my AGI design, unsurprisingly.


 Generally programmable, yes.  But that's very broad.  Many systems have this
 property.

Note I want something different from computational universality. E.g.
von Neumann architectures are generally programmable; Harvard
architectures aren't, as they can't be reprogrammed at run time.

http://en.wikipedia.org/wiki/Harvard_architecture


 Even system with only a declarative KB can re-program itself by modifying the
 KB.

So a program could get in and remove all the items from the KB? You
can have viruses etc. inside the KB?

 Will Pearson



Re: [agi] A 1st Step To Using Your Image-ination

2008-02-15 Thread William Pearson
On 15/02/2008, Ed Porter [EMAIL PROTECTED] wrote:
 Mike,

  You have been pushing this anti-symbol/pro-image dichotomy for a long time.
  I don't understand it.

  Images are set, or nets, of symbols.  So, if, as you say


  all symbols provide an extremely limited *inventory of the

 world* and all its infinite parts and behaviors 

  then images are equally limited, since they are nothing but set or nets of
  symbols.  Your position either doesn't make sense or is poorly stated.

I think the definition of symbols is the problem here. I
tend to think of a symbol (in an AI sense at least) as being about, or
related to, something in the world: the classic idea of having symbols
for cat or dog, and deducing facts from them.

An image is not intrinsically about anything in the world; the optical
illusions (the dalmatian in spots, the two faces or vase, the Necker cube)
show we can view an image in different ways. Mental images aren't even
necessarily made up of data about photon activity; they can be
entirely concocted.

Mike needs to clarify what he means by symbol before we start, or
perhaps find or invent a less confusing word.

  Will Pearson



Re: [agi] The Test

2008-02-05 Thread William Pearson
On 05/02/2008, Mike Tintner [EMAIL PROTECTED] wrote:
 William P : I can't think
 of any external test that can't be fooled by a giant look up table
 (ned block thought of this argument first).

 A by definition requirement of a general test is that the systembuilder
 doesn't set it, and can't prepare for it as you indicate. He can't know
 whether the test for, say, his lego-constructing system is going to be
 building a machine, or constructing a water dam with rocks, or a game that
 involves fitting blocks into holes.

He can't know, but he might guess. It will be hard to distinguish between the
builder's lucky guess(es) and genuine generality.

  His system must be able to adapt to any
 adjacent-domain activity whatsoever. That too is the point of the robot
 challenge test - the roboticists won't know beforehand what that planetary
 camp emergency is going to be.

I think we have different ideas of what a test should be. I am looking
for a scientific test, in which repeatability and fairness are
important features.

One last question: what exactly defines adjacent in your test? Is
composing poetry adjacent to solving non-linear equations?

I agree that this type of testing will winnow out lots of non-general
systems.  But it might let a few slip through the cracks, or say a
general system is non-general. I would fail the test some days when I
am ill, as all I would want to do is go to sleep, not try to solve the
problem.

  Will Pearson



Re: [agi] The Test

2008-02-04 Thread William Pearson
On 04/02/2008, Mike Tintner [EMAIL PROTECTED] wrote:
 (And it's a fairly safe bet, Joseph, that no one will now do the obvious
 thing and say.. well, one idea I have had is..., but many will say, the
 reason why we can't do that is...)

And maybe they would have a reason for doing so. I would like to think
of an external, objective test; I like tests and definitions. My
stumbling block for thinking of external tests is that I can't think
of any external test that can't be fooled by a giant look-up table
(Ned Block thought of this argument first). That is, something such that
when input X comes in at time t, output Y goes out. It can pretend to
learn things by having poor performance early on and then improving.
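
As a toy illustration (entirely my own, and only a sketch), such a table can
be keyed on the whole interaction history so far, which lets it fake
improvement over time:

  # Toy sketch of the giant look-up table: the agent is nothing but a table
  # keyed on the entire interaction history, assumed to have been filled in
  # beforehand by someone who anticipated the test.
  def make_table_agent(table, default="I don't know"):
      history = []
      def agent(observation):
          history.append(observation)
          # The key encodes everything seen so far, including order, so early
          # keys can map to poor answers and later keys to good ones,
          # mimicking learning without doing any.
          return table.get(tuple(history), default)
      return agent

No external test can tell this apart from a learner, given a big enough table.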

Not all designs of systems use lots of external tests to prove their
abilities. Take making a new computer architecture that you want to
have the property of computational universality. You wouldn't give it a
few programs, see if it can run them, and declare it universal; you
would program it to emulate a Turing machine to prove its
universality. Similarly for new chip designs of an existing architecture:
you want to prove them equivalent to the old ones.

Generality of an intelligence is this sort of problem, I think, due to
the inability to capture it flawlessly with external tests. I would be
interested to discuss internal requirements of systems, if anyone else
is.

I'd have thought that you, with your desire for things to be
spontaneous, would be wary of any external test that can be gamed by
non-spontaneous systems.

  Will Pearson



[agi] Types of automatic programming? was Re: Singularity Outcomes

2008-01-28 Thread William Pearson
On 28/01/2008, Bob Mottram [EMAIL PROTECTED] wrote:
 On 28/01/2008, Ben Goertzel [EMAIL PROTECTED] wrote:
   When your computer can write and debug
   software faster and more accurately than you can, then you should worry.
 
  A tool that could generate computer code from formal specifications
  would be a wonderful thing, but not an autonomous intelligence.
 
  A program that creates its own questions based on its own goals, or
  creates its own program specifications based on its own goals, is
  a quite different thing from a tool.


 Having written a lot of computer programs, as I suspect many on this
 list have, I suspect that fully automatic programming is going to
 require the same kind of commonsense reasoning as human have.  When
 I'm writing a program I may draw upon diverse ideas derived from what
 might be called common knowledge - something which computers
 presently don't have.  The alternative is genetic programing, which is
 more of a sampled search through the space of all programs, but I
 rather doubt that this is what's going on in my mind for the most
 part.


What kind of processes would you expect to underlie the brain's ability
to reorganise itself during neural plasticity?

http://cogprints.org/2255/0/buss.htm

These are the sorts of changes we would generally expect to need a
programmer to achieve in a computer system. Common-sense programming
seems to be far too high-level for this, so what sort would you expect
it to be?

 Will Pearson



Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-27 Thread William Pearson
On 27/01/2008, Matt Mahoney [EMAIL PROTECTED] wrote:
 --- Vladimir Nesov [EMAIL PROTECTED] wrote:

  On Jan 27, 2008 5:32 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
  
   Software correctness is undecidable -- the halting problem reduces to it.
   Computer security isn't going to be magically solved by AGI.  The problem
  will
   actually get worse, because complex systems are harder to get right.
  
 
  Computer security can be solved by more robust rights management and
  by avoiding bugs that lead to security vulnerabilities. AGI can help
  with both.

 Security tools are double edged swords.  The knowledge required to protect
 against attacks is the same as the knowledge required to launch attacks.  AGI
 just continues the arms race.  We will have smarter intrusion detection
 systems and smarter intruders.  If you look at number of attacks per year, it
 is clear we are going in the wrong direction.


What I am working on is a type of programmable computer hardware
(much like modern computers) that has a sense of goal or purpose built
in. It is designed so that it will self-moderate the programs within
it, giving control only to those that fulfil that goal/purpose.

I personally believe that this is a necessary step for human-level
AGI, as self-control and allocation of resources to the problems
important to the system are important facets of an intelligence. But
I also suspect it will be used in a lot of less smart systems as well
before we crack AGI. As such I see the computer systems of the future
moving away from the mono-culture we have currently, as they will be
tailored to the users' goals, making cracking them less trivial and
repeatable and more of a case-by-case business.

Don't expect computer science to stand still. It is really still very young.

  Will Pearson



Re: [agi] CEMI Field

2008-01-23 Thread William Pearson
On 23/01/2008, Günther Greindl [EMAIL PROTECTED] wrote:
 I find the theory very compelling, as I always found the functionalistic
 AI approach a bit lacking whereas I am a full endorser of a
 materialistic/monist approach (and I believe strong AI is feasible). EM
 fields arising through the organization of matter and its _dynamics_
 seems to me very plausible - at least to start significant research in
 this direction.

I'd agree that the functionalist AI approach doesn't explain
consciousness well. But with the definitions of intelligence that have
been mooted on this list, consciousness might not be needed for the
forms of AI that we are interested in. We might be happy with zombies.

My own approach to making computers more brain-like is, at the moment,
entirely concerned with the sub-conscious level: trying to find a
computer system that allows, in modern computer systems, changes
analogous to those that occur in London cabbies' brains* (autonomously
applying more resources to important problems).

I suppose what I am trying to say is that there is a lot of scope for
making computers better at problem solving/reasoning/adapting before
we hit the consciousness problem. If this is correct and the
types/shapes of EM fields are important, I'm not sure we will have
much scope for creating human-like consciousnesses apart from the
old-fashioned way and biotech.

 Will Pearson

*http://news.bbc.co.uk/1/hi/sci/tech/677048.stm


[agi] Legg and Hutter on resource efficiency was Re: Yawn.

2008-01-14 Thread William Pearson
On 14/01/2008, Pei Wang [EMAIL PROTECTED] wrote:
 On Jan 13, 2008 7:40 PM, Richard Loosemore [EMAIL PROTECTED] wrote:

  And, as I indicated, my particular beef was with Shane Legg's paper,
  which I found singularly content-free.

 Shane Legg and Marcus Hutter have a recent publication on this topic,
 http://www.springerlink.com/content/jm81548387248180/
 which is much richer in content.


I think this can also be found here

http://arxiv.org/abs/0712.3329

For those of us without springerlink accounts.


"While we do not consider efficiency to be a part of the definition of
intelligence, this is not to say that considering the efficiency of
agents is unimportant. Indeed, a key goal of artificial intelligence is
to find algorithms which have the greatest efficiency of intelligence,
that is, which achieve the most intelligence per unit of computational
resources consumed."

Why not consider resource efficiency a thing to be adapted, affecting
which problems can be solved?

An example: consider two android robots with finite energy supplies
tasked with a long foot race.

One shuts down all processing non-essential to its current task of
running (sound familiar to what humans do? I certainly think better
when walking), so it uses less energy.

The other one attempts to find programs that precisely predict its
input given its output, churning through billions of possibilities and
consuming vast amounts of energy.

The one that shuts down its processing finishes the race and gets the
reward; the other one runs its battery down by processing too much and
has to be rescued, getting no reward.

As they have defined it, only outputting can make the system more or
less likely to achieve a goal, which is a narrow view.
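
A toy version of the race, with made-up numbers (the battery size, race
length and energy costs are arbitrary assumptions, chosen only to show that
the computation their framework treats as free decides the outcome):

  # Toy race: both robots have the same battery; only the cost of thinking differs.
  def race(distance=100, battery=150.0, move_cost=1.0, think_cost=0.0):
      position, energy = 0, battery
      while position < distance and energy > 0:
          energy -= think_cost   # internal computation also drains the battery
          energy -= move_cost    # one step of running
          position += 1
      return "finished" if position >= distance else "stranded"

  print(race(think_cost=0.0))   # frugal robot: finished
  print(race(think_cost=1.0))   # always-predicting robot: stranded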

  Will Pearson


Re: [agi] Comments on Pei Wang 's What Do You Mean by “AI”?

2008-01-14 Thread William Pearson
Something I noticed while trying to fit my definition of AI into the
categories given.

There is another way that definitions can be principled.

This similarity would not be based on the function from percepts to actions.
Instead it would require a similarity in the function from percepts to
internal state as well. That is, they should be able to adapt in a
similar fashion.

SC = FC(PC), SH = FH(PH), FC ≈ FH

I'm not, strictly speaking, working on intelligence at the moment, but
rather on how to build adaptive programmable computer architectures
(which I think is a necessary first step to intelligence), so it might
take me a while to get around to fully working out my definition of
intelligence. It would contain principles like the one I mention above,
though.

  Will Pearson


Re: [agi] Comments on Pei Wang 's What Do You Mean by “AI”?

2008-01-14 Thread William Pearson
On 14/01/2008, Pei Wang [EMAIL PROTECTED] wrote:
 Will,

 The situation you mentioned is possible, but I'd assume, given the
 similar functions from percepts to states, there must also be similar
 functions from states to actions, that is,
AC = GC(SC), AH = GH(SH), GC ≈ GH

Pei,

Sorry, I should have thought more. I would define the similarity of the
functions that it is possible to be interested in as:

St = F(S(t-1), P)

That is, the current state is important to what change is made to the
state. For example, a man coming across the percept Oui, bien sûr
would change his state in a different way depending upon whether or not
he was already fluent in French.
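
A tiny made-up illustration of St = F(S(t-1), P), where the same percept
changes the state differently depending on the previous state:

  # Entirely illustrative: the state keys and update rule are my own invention.
  def F(state, percept):
      state = dict(state)
      if percept == "Oui, bien sur":
          if state.get("fluent_french"):
              state["understood"] = True   # the meaning is absorbed directly
          else:
              # a non-speaker just notes that some French was heard
              state["french_heard"] = state.get("french_heard", 0) + 1
      return state

  print(F({"fluent_french": True}, "Oui, bien sur"))
  print(F({"fluent_french": False}, "Oui, bien sur"))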

This doesn't really change the rest of your argument, but I feel it is
important.

 Consequently, it becomes a special case of my Principle-AI, with a
 compound function:
AC = GC(FC(PC)), AH = GH(FH(PH)), GC(FC()) ≈ GH(FH())

 Pei

To be pedantic (feel free to ignore the following if you like):

That would depend on what exactly the ≈ relation is. If you assume
it has the same meaning when used above, there are possible meanings
for it where the relation (FC ≈ FH and GC ≈ GH) does not imply (GC(FC())
≈ GH(FH())).

Consider the meaning of ≈ where x and y are similar because they can be
transformed to reference programs, in a reference language, of the
same length plus or minus 20 bytes. This would only guarantee that the
representation for GC(FC()) is within plus or minus 40 bytes of GH(FH()),
which wouldn't be the same relation.
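
To spell that out (writing |ref(X)| for the length of the reference program
for X, and assuming, for the sake of the example, that the reference program
for a composition is roughly the concatenation of the two reference programs):

  FC ≈ FH means | |ref(FC)| - |ref(FH)| | ≤ 20
  GC ≈ GH means | |ref(GC)| - |ref(GH)| | ≤ 20

from which all we can conclude about the compositions is

  | |ref(GC(FC()))| - |ref(GH(FH()))| | ≤ 40

which is a ±40-byte relation, not the ±20-byte relation that ≈ was defined as.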

A bit contrived I know, but as we are working on the theoretical side
of things, this is the best example I could think of at short notice.

Until I get a better feel for my own definition, I can't really say
much more that is really useful.
   Will


Re: [agi] Definitions of Intelligence and the problem problem was Ben's Definition

2008-01-12 Thread William Pearson
My problem with both these definitions (and the one underpinning
AIXI) is that they either don't define the word problem well or
define it in a limited way.

For example, AIXI defines the solution of a problem as finding a
function that transforms an input to an output. There is no mention of
having the ability to solve the problem of creating energy-efficient
internal programs. It has no hope of solving problems involving mapping
the input to internal states, as we do when we learn to control our
brain waves to defeat invading hordes of blocky aliens. [1]

My definition of an intelligence would be something like: a system
that can *potentially* use any and all information it comes across in
order to change itself to the correct state. If you are designing a
system to interact with humans, the correct state is obviously one that
can communicate with humans. But an intelligent robot may move most of
its programming to a store, robbing itself of lots of problem-solving
ability temporarily, while it is in transit to Mars, to save energy for
maneuvering, if that is the correct state for it to be in.


  Will Pearson
[1]http://news-info.wustl.edu/news/page/normal/7800.html



Re: [agi] Incremental Fluid Construction Grammar released

2008-01-11 Thread William Pearson
Vladimir,

 What do you mean by difference in processing here?

I said the difference was after the initial processing. By processing
I meant syntactic and semantic processing.  After processing the
syntax-related sentence, the realm of action is changing the system
itself, rather than knowledge of how to act on the outside world. I'm
fairly convinced that self-change/management/knowledge is the key
thing that has been lacking in AI, which is why I find it different
and interesting.

I think that both
 instructions can be perceived by AI in the same manner, using the same
 kind of internal representations, if IO is implemented on sufficiently
 low level, for example as a stream of letters (or even their binary
 codes). This way knowledge about spelling and syntax can work with
 low-level concepts influencing little chunks of IO perception and
 generation, and 'more semantic' knowledge can work with more
 high-level aspects. It's less convenient for quick dialog system setup
 or knowledge extraction from text corpus, but it should provide
 flexibility.

I'm not quite sure of the representation or system you are describing,
so I can't say what it can or cannot do.

Would you expect it to be able to do the equivalent of switching to
thinking in a different language?

 Will



Re: [agi] Incremental Fluid Construction Grammar released

2008-01-10 Thread William Pearson
On 10/01/2008, Benjamin Goertzel [EMAIL PROTECTED] wrote:
 Processing a dictionary in a useful way
 requires quite sophisticated language understanding ability, though.

 Once you can do that, the hard part of the problem is already
 solved ;-)

While this kind of system requires sophisticated language
understanding ability, I don't think that sophisticated language
understanding ability implies the ability to use the dictionary... So
you have to be careful to create a system with both abilities.

For example, a language understanding system focused on understanding
sophisticated sentences about the world external to itself need not be
able to add to its syntactic rules, which would make such systems a lot
slower at learning language when they get to that level of language
understanding ability.

I'll be a lot more interested when people start creating NLP systems
that are syntactically and semantically processing statements about
words, sentences and other linguistic structures and adding syntactic
and semantic rules based on those sentences.

I think it is a thorny problem and needs to be dealt with in a
creative way, but I would be interested to be proved wrong.

At what sort of age do you think a human is capable of this kind of
linguistic rule acquisition? I'd guess when kids start asking
questions like What is that called? or What does that word mean?,
if not before.

  Will Pearson



Re: [agi] Incremental Fluid Construction Grammar released

2008-01-10 Thread William Pearson
On 10/01/2008, Benjamin Goertzel [EMAIL PROTECTED] wrote:
  I'll be a lot more interested when people start creating NLP systems
  that are syntactically and semantically processing statements *about*
  words, sentences and other linguistic structures and adding syntactic
  and semantic rules based on those sentences.

Note the new emphasis ;-) Your example didn't have statements *about*
words, but new rules were inferred from word usage.

 Depending on exactly what you mean by this, it's not a very far-off
 thing, and there probably are systems that do this in various ways.

What I mean by it is systems that can learn from lessons like the following:

http://www.primaryresources.co.uk/english/PC_prefix2.htm

I could easily whip up something very narrow which didn't do too
poorly for prefixes (involving regular expressions transforming the
words). But it would be horribly brittle, specific only to prefixes,
and it would know what prefixes were beforehand.
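
For instance, something like this throwaway sketch (my own toy code,
hard-wired to prefixes it already knows about) would cope with that lesson
page tolerably, but it learns nothing and generalises to nothing else:

  # Deliberately narrow prefix handler: it already "knows" what a prefix is
  # and which ones exist, which is exactly the problem.
  import re

  KNOWN_PREFIXES = {"un": "not", "re": "again", "dis": "not"}

  def gloss(word):
      for prefix, meaning in KNOWN_PREFIXES.items():
          match = re.match(rf"{prefix}(\w+)$", word)
          if match:
              return f"{word} = {meaning} {match.group(1)}"
      return f"{word} = (no known prefix)"

  print(gloss("unhappy"))   # unhappy = not happy
  print(gloss("redo"))      # redo = again do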

And your I be example made me think of pirates rather than Ebonics
:). It is also not what I am looking for, because it relies on the
system looking for regularities, rather than being explicitly told
about them. The benefit of being able to be told there are
regularities is that you do not always have to be looking out for
them, saving processing time and memory for other, more important
tasks.

  Will



Re: [agi] Incremental Fluid Construction Grammar released

2008-01-10 Thread William Pearson
On 10/01/2008, Benjamin Goertzel [EMAIL PROTECTED] wrote:
 On Jan 10, 2008 10:26 AM, William Pearson [EMAIL PROTECTED] wrote:
  On 10/01/2008, Benjamin Goertzel [EMAIL PROTECTED] wrote:
I'll be a lot more interested when people start creating NLP systems
that are syntactically and semantically processing statements *about*
words, sentences and other linguistic structures and adding syntactic
and semantic rules based on those sentences.
 
  Note the new emphasis ;-) You example didn't have statements *about*
  words, but new rules were inferred from word usage.

 Well, here's the thing.

 Dictionary text and English-grammar-textbook text are highly ambiguous and
 complex English... so you'll need a very sophisticated NLP system to be able
 to grok them...

Firstly, so what? Why not allow for the fact that there will hopefully
be a sophisticated NLP system in the system at some point? Give it the
hooks to use dictionary-style acquisition, even if it won't for the
first x years of development. We are aiming for adult human level in
the end, right? Not just a 5-year-old.

It will make adding French or another language a whole lot quicker
when it comes to that level. Retrofitting the ability may or may not
be easy at that stage. It would be better to figure out whether it is
easy or not before settling on an architecture. My hunch is that it
is not easy.

Secondly, I'm not buying that it is any more complex than dealing with
other domains. You easily get equal complexity dealing with
non-linguistic stuff. For example,

This is a battery
A battery can be part of a machine
Putting a battery in the battery holder, gives the machine power

is as complex as, if not more so than,

un- is a prefix
A prefix is the front part of a word
Adding un- to a, word, is equivalent to saying, not word.

What the system does after processing these different sets of
sentences is vastly different. A difference worth exploring before
settling on an architecture, IMO.

Not building the potential for a capability into a baby-based AI,
even if it is not initially used, means that when the AI is grown up it
still won't be able to have that capability, unless you are relying on
it getting to the self-modifying-code phase before the
asking-what-words-mean phase.


  Will



Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-07 Thread William Pearson
On 07/01/2008, Robert Wensman [EMAIL PROTECTED] wrote:
 I think what you really want to use is the
 concept of adaptability, or maybe you could say you want an AGI system that
 is programmed in an indirect way (meaning that the program instructions are
 very far away from what the system actually does). But please do not say
 things like we should write AGI systems that are not programmed. It hurts
 my ears/eyes.

 /Robert Wensman


I'd agree that Mike could do with tightening up his language. I wonder
if he would agree with the following?

The programs that determine the way the system acts and changes are not
highly related to the programming provided by the AI designer.

Computer systems like this have been designed. All desktop computers
can act, solve problems and change their programming (apt etc.) in
ways unenvisaged by the people who designed the hardware and BIOS.

This approach still allows the programs the AI designer provided to
have influence over *which* programs exist in the system, if not
exactly how they work. This is what would make it different from
current computer systems.

 Will Pearson



Re: Distributed search (was RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research])

2007-12-06 Thread William Pearson
On 06/12/2007, Ed Porter [EMAIL PROTECTED] wrote:
 Matt,
 So if it is perceived as something that increases a machine's vulnerability,
 it seems to me that would be one more reason for people to avoid using it.
 Ed Porter


Why are you having this discussion on an AGI list?

  Will Pearson



[agi] Flexibility of AI vs. a PC

2007-12-05 Thread William Pearson
One thing that has been puzzling me for a while is why some people
expect an intelligence to be less flexible than a PC.

What do I mean by this? A PC can have any learning algorithm, bias or
representation of data we care to create. This raises another
question: how are we creating a representation if not by copying it in
some sense from our brains? So why do we still create systems that
have fixed representations of the external world and fixed methods of
learning?

Take the development of echolocation in blind people, or the ability
to take in visual information from stimulating the tongue. Isn't this
sufficient evidence to suggest we should be trying to make our AIs as
flexible as the most flexible things we know?

 Will Pearson



Re: Re[8]: [agi] Funding AGI research

2007-11-21 Thread William Pearson
On 21/11/2007, Dennis Gorelik [EMAIL PROTECTED] wrote:
 Benjamin,

  That's massive amount of work, but most AGI research and development
  can be shared with narrow AI research and development.

  There is plenty overlap btw AGI and narrow AI but not as much as you 
  suggest...

 That's only because that some narrow AI products are not there yet.

 Could you describe a piece of technology that simultaneously:
 - Is required for AGI.
 - Cannot be required part of any useful narrow AI.

My theory of intelligence is something like this. Intelligence
requires the changing of programmatic structures in an arbitrary
fashion, so that we can learn, and learn how to learn. This is because
I see intelligence as the means to solve the problem-solving problem:
it does not solve one problem but changes and reconfigures itself to
solve whatever problems it faces, within its limited hardware/software
and energy constraints.

This arbitrary change can result in the equivalent of bugs and
viruses, which means there need to be ways for these to be removed and
prevented from spreading. This requires that there be a way to distinguish
good programs from bad, so that the good programs are allowed to
remove bugs from others, and the bad programs are prevented from
altering other programs. Solving this problem is non-trivial and
requires thinking about computer systems in a different way to other
weak AI problems.

Narrow AI is generally solving a single problem, and so does not need
to change so drastically and so does not need the safeguards. It can
just concentrate on solving its problem.

  Will Pearson



Re: [agi] Re: Superseding the TM [Was: How valuable is Solmononoff...]

2007-11-09 Thread William Pearson
On 09/11/2007, Jef Allbright [EMAIL PROTECTED] wrote:
 On 11/8/07, William Pearson [EMAIL PROTECTED] wrote:
  On 08/11/2007, Jef Allbright [EMAIL PROTECTED] wrote:

   This discussion reminds me of hot rod enthusiasts arguing passionately
   about how to build the best racing car, while denigrating any
   discussion of entropy as outside the practical.
  
 
  You are over stating the case majorly.

 Yes, if I had claimed they were analogous.  I said merely that it
 reminds me of, in the simple sense that those who don't grok theory
 tend to denigrate it.

Fair enough. I've seen some of that trying to find a PhD supervisor.
Not that I fully grok the theory myself, but I feel it is important to
try.


  UAI so far has yet to prove its usefulness. It is just a
  mathematical formalism that is incomplete in a number of ways.
 
  1) Doesn't treat computation as outputting to the environment, thus
  can have no concept of saving energy or avoiding inteference with
  other systems by avoiding computation. The lack of energy saving means
  it is not valid model for solving the problem of being a
  non-reversible intelligence in an energy poor environment (which
  humans are and most mobile robots will be).

 This is intriguing, but unclear to me.  Does it entail anything
 consequential beyond the statement that Solomonoff induction is
 incomputable?  Any references?


Not that I have found. I tried to write something about it a while
back, but that was not very appropriate for this audience.

I'll try to write something more coherent, with an example, in the
morning. The following paragraph is just food for thought.

Consider the energy usage of the system as an additional hidden output
on the output tape, one that AIXI doesn't know how to vary, and can't vary
unless it changes the amount of computation it does and knows that it
is doing so, so it can't update its prior over which strategies it should
pursue. My thinking is that it would not be able to find the optimum
strategy, even given infinite resources, because the problem is not a
lack of computational resources, but the effect its computation has
on the environment.



  2) It is based on Sequential Interaction Machines, rather than
  Multi-Stream Interaction Machines, which means it might lose out on
  expressiveness as talked about here.
 
  http://www.cs.brown.edu/people/pw/papers/bcj1.pdf

 I began to read the paper, but it was mostly incomprehensible to me.

I'll grant you that. I've only brushed the surface of that paper, but
others I have found have been more readable and have cited that one as
discussing the MIM systems, so I thought I would include it for
correctness.

Some decent slides are here

http://www.cs.brown.edu/people/dqg/Papers/wurzburg.ps

I actually came across this relatively recently, whilst reading about it here.

http://lambda-the-ultimate.org/node/203

 Which you may find interesting.
 The terms and grammar were legitimate, but I couldn't translate the
 claims to any workable model of reality -- I actually expected to see
 Sokal mentioned as co-author -- but upon googling I found the
 following paper in response, which in comparison was quite
 comprehensible.

 http://www.macs.hw.ac.uk/~greg/publications/cm.cj07.pdf

 I'd be interested in knowing whether (and how) you think the response
 paper gets it wrong.


On the first section of the response, section 5.2, with all the
teletype conversation being recorded: I would agree that a PTM isn't
calculating anything different; it is, however, doing a different
real-world job.  It depends whether you want to define computation as a)
calculation or b) a real-world job computers can do. I'm going with
the real jobs computers do, as that is more important for practical
AI.

The second section is a bit odd; their definition of the TM with three
tapes seems very similar to the standard definition of the SIM PTM.
Anyway, I shall not try to delve into the exact differences. My
problem with this section is that while it may be possible to have some TM
that can do the same thing as a PTM, it is going above and beyond what
we currently call equivalent to a Turing machine. What we currently
call equivalent to a Turing machine is everything that can be
made to recognise the recursively enumerable languages.

http://en.wikipedia.org/wiki/Recursively_enumerable_language

These only require functions (fixed mappings between input and
output), and PTMs go beyond this into dealing with mappings that
change, e.g. learning etc. So while a TM might be able to go into
dealing with streams efficiently, lambda calculus is not able to do
the same thing, despite being supposedly computationally equivalent,
because it is purely functional. So something needs to be changed in
our theory of universal computability, somewhere.

I would be satisfied to say PTMs can do things that lambda calculus
can't, and that the theory of how to do those things is important for
AI. If you don't want to alter the concept of computability

Re: [agi] How valuable is Solmononoff Induction for real world AGI?

2007-11-08 Thread William Pearson
On 08/11/2007, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:

 My impression is that most machine learning theories assume a search space
 of hypotheses as a given, so it is out of their scope to compare *between*
 learning structures (eg, between logic and neural networks).

 Algorithmic learning theory - I don't know much about it - may be useful
 because it does not assume a priori a learning structure (except that of a
 Turing machine), but then the algorithmic complexity is incomputable.

 Is there any research that can tell us what kind of structures are better
 for machine learning?

Not if all problems are equi-probable.
http://en.wikipedia.org/wiki/No_free_lunch_in_search_and_optimization

However this is unlikely in the real world.

It does, however, give an important lesson: put as much information as
you have about the problem domain into the algorithm and
representation as possible, if you want to be at all efficient.

This form of learning is only a very small part of what humans do when
we learn things. For example, when we learn to play chess, we are told
or read the rules of chess and the winning conditions. This allows us
to create tentative learning strategies/algorithms that are much
better than random at playing the game and also give us good
information about the game, which is how we generally deal with
combinatorial explosions.

Consider a probabilistic learning system based on statements about the
real world: without this ability to alter how it learns and what it
tries, it would be looking at the probability of whether a bird
tweeting is correlated with its opponent winning, and also trying to
figure out whether emptying an ink well over the board is a valid
move.

I think Marcus Hutter has a bit somewhere in his writings about how
slow AIXI would be at learning chess, due to it only getting a small
amount of information (1 bit?) per game about the problem domain. My
memory might be faulty and I don't have time to dig at the moment.

  Or perhaps w.r.t a certain type of data?  Are there
 learning structures that will somehow learn things faster?

Thinking in terms of fixed learning structures is IMO a mistake.
Interestingly, AIXI doesn't have fixed learning structures per se, even
though it might appear to. Because it stores the entire history of the
agent and feeds it to each program under evaluation, each of these may
be a learning program and be able to create learning strategies from
that data. You would have to wait a long time for these types of
programs to become the most probable if a good prior was not given to
the system, though.


 Will Pearson



Re: [agi] How valuable is Solmononoff Induction for real world AGI?

2007-11-08 Thread William Pearson
On 08/11/2007, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:

 Thanks for the input.

 There's one perplexing theorem, in the paper about the algorithmic
 complexity of programming, that the language doesn't matter that much, ie,
 the algorithmic complexity of a program in different languages only differ
 by a constant.  I've heard something similar about the choice of Turing
 machines only affect the Kolmogorov complexity by a constant.  (I'll check
 out the proof of this one later.)


This only works if the languages are Turing complete, so that they
can append a description of a program that converts from the language
in question to their native one in front of the non-native program.

Also, constant might not mean negligible. 2^^^9 is a constant (where ^
is Knuth's up-arrow notation).
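
For reference, the invariance theorem being alluded to says that for any two
universal languages/machines U and V there is a constant c_(U,V), independent
of the string x, such that

  K_U(x) ≤ K_V(x) + c_(U,V)

where c_(U,V) is essentially the length of an interpreter for V written in U.
Nothing in the theorem stops that interpreter, and hence the constant, from
being enormous.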

 But it seems to suggest that the choice of the AGI's KR doesn't matter.  It
 can be logic, neural network, or java?  That's kind of a strange
 conclusion...


Only some neural networks are Turing complete. First-order logic
should be; propositional logic, not so much.

 Will Pearson



Re: [agi] How valuable is Solmononoff Induction for real world AGI?

2007-11-08 Thread William Pearson
On 08/11/2007, Jef Allbright [EMAIL PROTECTED] wrote:
 I'm sorry I'm not going to be able to provide much illumination for
 you at this time.  Just the few sentences of yours quoted above, while
 of a level of comprehension equal or better than average on this list,
 demonstrate epistemological incoherence to the extent I would hardly
 know where to begin.

 This discussion reminds me of hot rod enthusiasts arguing passionately
 about how to build the best racing car, while denigrating any
 discussion of entropy as outside the practical.


You are greatly overstating the case. Entropy can be used to make
predictions about chemical reactions and help design systems. UAI so
far has yet to prove its usefulness. It is just a mathematical
formalism that is incomplete in a number of ways.

1) It doesn't treat computation as outputting to the environment, and thus
can have no concept of saving energy or of avoiding interference with
other systems by avoiding computation. The lack of energy saving means
it is not a valid model for solving the problem of being a
non-reversible intelligence in an energy-poor environment (which
humans are and most mobile robots will be).

2) It is based on Sequential Interaction Machines, rather than
Multi-Stream Interaction Machines, which means it might lose out on
expressiveness as talked about here.

http://www.cs.brown.edu/people/pw/papers/bcj1.pdf

It is the first step on an interesting path, but it is too divorced
from what computation actually is for me to consider it equivalent to
the entropy of AI.

 Will Pearson



Re: [agi] NLP + reasoning?

2007-11-06 Thread William Pearson
On 06/11/2007, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

 Will Pearson asked
  I'm also wondering what you consider success in this case. For example
  do you want the system to be able to maintain conversational state
  such as  would be needed to deal with the following.

 For all following sentences take the first letter of each word and
 make English sentences out of it, reply in a similar fashion. How is
 the hair? Every rainy evening calms all nightingales. Yesterday,
 ornery ungulates stampeded past every agitated koala. Fine red
 eyebrows, new Chilean hoovers?

 The majority of human judges in a Turing Test would respond to such
 utterances
 with a blanket What's are you talking about? or Are you crazy? or
 I thought we were going to have a conversation?

 A certain amount of meta questioning is to be expected like...

 What is the third word in this sentence?

 But in order to pass Turing, you just have to convince the judges that you
 are human not necessarilly as skilled in word play as Lewis Carroll.


You are underestimating Carroll or overestimating everyone who does
cryptic crosswords.

http://www.guardian.co.uk/crossword/howto/rules/0,4406,210643,00.html

I'm not trying to pass the Turing test and I will never design a
system to do just that; if anything I help to create passes the Turing
test, that would just be an added bonus. I design systems with the
potential capabilities and initial capabilities that I would want them
to have. And helping me with cryptic crosswords (which have clues in a
similar vein to my example, generally marked with the keyword
Initially) is one of those things I want them to be potentially
capable of. Otherwise I would just be making a faux intelligence,
designed to fool people, without being able to do what I know a lot of
people can.

 Will Pearson



Re: [agi] Computational formalisms appropriate to adaptive and intelligent systems?

2007-10-31 Thread William Pearson
On 30/10/2007, Pei Wang [EMAIL PROTECTED] wrote:
 Thanks for the link. I agree that this work is moving in an
 interesting direction, though I'm afraid that for AGI (and adaptive
 systems in general), TM may be too low as a level of description ---
 the conclusions obtained in this kind of work may be correct, but not
 constructive enough.

I'm interested in this sort of work, not to tell me exactly how to
build a system but to give me some way of cutting down the number of
possible systems. Ideally it would be adopted by the general
community, and AI work might progress more quickly. Hopefully we would
be able to make statements like: System X only uses an FSM as the
function F that maps the input I and work-tape memory W onto W and
the output tape O, whereas experiments have shown that some
collections of neural cells can be equivalent to a memory-bounded UTM
in expressiveness for F. These kinds of statements would allow systems
to be evaluated theoretically, without building robots or other
empirical testing methods.

Then the expensive business of narrowing down exactly which system
(and how it should be initially programmed / what knowledge it needs)
could be more tightly focused, rather than the effort being spread out
all over the place, as it is at the moment.

 Even so, I'll be interested in how far they can
 go.

 You may be interested in the works of Peter Kugel

 My own comment on TM is at http://nars.wang.googlepages.com/wang.computation.pdf


Thanks, I had skimmed your paper before. Though it is not quite
where I am looking to go, it is a useful different view. I would
also go with an expansion of what computation is, rather than saying
AI is non-computational due to its learning and changing from
experience. For example, as it stands, a program that downloads and
replaces a part of itself (fairly standard nowadays) would be said to
be non-computational, which seems fairly weird.

 Will Pearson



[agi] Computational formalisms appropriate to adaptive and intelligent systems?

2007-10-30 Thread William Pearson
I have recently been trying to find better formalisms than TMs for
different classes of adaptive systems (including the human brain), and
have come across the Persistent Turing Machines[1], which seem to be a
good first step in that direction.

Their expressiveness is claimed to be greater than a TM's [2]. Although I
have not had a chance to go through the proof, I can see the
possibility, as they can change the function from input to output
mid-computation and so may not be subject to the same problems of
halting as a TM. Although if you include the input history, as well as
the specification of a PTM, in the code you are trying to prove
statements about, you can probably construct similar questions it
cannot answer.
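
Roughly (my own paraphrase of the formalism in [1], so treat the notation as
a sketch): a PTM keeps a persistent worktape w across interactions, and each
macrostep computes

  (o_i, w_i) = M(in_i, w_(i-1))

so the mapping from input to output at step i depends on the whole prior
interaction history through w_(i-1), rather than being a single fixed
function of the current input, as in an ordinary TM computation.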

Has there been any other work towards the goal of better taxonomies
for adaptive systems?

  Will Pearson

[1] http://www.cs.brown.edu/people/dqg/Papers/wurzburg.ps
[2] http://www.cse.uconn.edu/~dqg/papers/its.pdf



Re: [agi] Poll

2007-10-18 Thread William Pearson
On 18/10/2007, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 I'd be interested in everyone's take on the following:

 1. What is the single biggest technical gap between current AI and AGI? (e.g.
 we need a way to do X or we just need more development of Y or we have the
 ideas, just need hardware, etc)

The biggest gap is the design of a system that can absorb information
generated by other intelligent systems in the *same sort of way* humans
can. This can include anything from copying body movements and
understanding body language (pointing, smiling) to higher maths on a
blackboard. It will also require some form of ability to control how
information is absorbed, to prevent malicious changes having too much
power.

 2. Do you have an idea as to what should should be done about (1) that would
 significantly accelerate progress if it were generally adopted?


There are some problems that have to be solved first, however. If you
assume that cultural information and trial and error can change most
parts of the system during human-like absorption, that presents some
problems. You will need to find a system/architecture that is
goal-oriented and somewhat stable in its goal orientation under the
introduction of arbitrary programs.

So if this was created and significant numbers of people were trying
to create social robots, then things would speed up.


 3. If (2), how long would it take the field to attain (a) a baby mind, (b) a
 mature human-equivalent AI, if your idea(s) were adopted and AGI seriously
 pursued?

It depends on whether we get a good theory of how cultural information
is transmitted, processed and incorporated into a system. Without a
good theory there will have to be lots of trial and error, and as some
trials will have to be done in a social setting, they will take a long
time.

I'm also not sure human equivalent is desired (assuming you mean a
system with a goal system devoted to its own well-being).

 4. How long to (a) and (b) if AI research continues more or less as it is
 doing now?


Well, if it continues as it is, you will continue to get some very
powerful narrow AI systems (potentially passing the Turing test on
cursory examination), but not the flexibility of AGI.

Will Pearson



Re: [agi] Do the inference rules.. P.S.

2007-10-12 Thread William Pearson
On 12/10/2007, Edward W. Porter [EMAIL PROTECTED] wrote:

 (2)  WITH REGARD TO BOOKWORLD -- IF ALL THE WORLD'S BOOKS WERE IN ELECTRONIC
 FORM AND YOU HAD A MASSIVE AMOUNT OF AGI HARDWARE TO READ THEM ALL I THINK
 YOU WOULD BE ABLE TO GAIN A TREMENDOUS AMOUNT OF WORLD KNOWLEDGE FROM THEM,
 AND THAT SUCH WORLD KNOWLEDGE WOULD PROVIDE A SURPRISING AMOUNT OF GROUNDING
 AND BE QUITE USEFUL.

You can get lots of information from books. But I don't find the
implicit view of an intelligence suggested by this scenario well
enough specified. An intelligence is not a passive information sponge;
it only tends to acquire the information that is useful to its goal.
So before being able to answer the question of what a bookworld AGI
would be able to do, you would have to tell me what its goals are. For
example, I could see an AGI that ignored all the semantic knowledge
embedded within text and just analysed the text in terms of statistics,
bigraphs/trigraphs etc. and was very good at decryption problems, but
not very good at answering questions based on the emotions of the
participants in a story. Either could be learned, dependent upon the
goals.

I also don't think that just shoving lots of information at a computer
will be a productive way of teaching it. Having a teacher on hand that
can point out the missing concept or answer a question should be able
to vastly speed up how well an AGI learns. Brute-forcing the
combinatorial explosion is not really an option in my view.

 Will Pearson



Re: Turing Completeness of a Lump of Dirt [WAS Re: [agi] Conway's Game of Life and Turing machine equivalence]

2007-10-08 Thread William Pearson
On 08/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote:
 William Pearson wrote:
  On 07/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote:
  William Pearson wrote:
  On 07/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote:
 
  The TM implementation not only has no relevance to the behavior of
  GoL(-T) at all, it also has even less relevance to the particular claims
  that I made about GoL (or GoL(-T)).
 
  If you think the TM implementation does impact it, you should
  demonstrate exactly how.
 
  The TM implementation has no impact *itself* to any claims, and its
  removal equally has no bearing on the properties of the whole system.
  The impact it does have is to demonstrate the system it is implemented
  in is Turing Complete. Or computationally universal if you wish to
  avoid say the word Turing.
 
  Lets say I implemented a TM on my laptop, and then had my operating
  system disallow that program to be run. Would it stop my laptop being
  computationally universal, and all that entails about its
  predictability? Nope, because the computational universality doesn't
  rest on that implementation, it is merely demonstrated by it.

 Well, I have to say that you have made a valiant effort to defend the
 idea, but nothing seems to be working.

 You argue that Game of Life is Turing Complete even when we exclude all
 the cases in which the initial cells are arranged to make a Turing
 Machine.  You then try to justify this strange idea with an analogy.

 But in your analogy, you surrepticiously insert a system that is ALREADY
 a Turing Machine at the base level (your laptop) and then you implement
 ANOTHER Turing Machine on top of that one (your TM program running on
 the laptop).  This is a false analogy.

Laptops aren't TMs. They have random access memory, registers and
program counters, not a tape and 5-tuple instructions. They are
computationally universal, but that is all they share with TMs (as
does the GoL). Please read the wiki entry to see that my laptop isn't
a TM.

http://en.wikipedia.org/wiki/Turing_machine

 Will Pearson



Re: Turing Completeness of a Lump of Dirt [WAS Re: [agi] Conway's Game of Life and Turing machine equivalence]

2007-10-08 Thread William Pearson
On 08/10/2007, Mark Waser [EMAIL PROTECTED] wrote:
 From: William Pearson [EMAIL PROTECTED]
  Laptops aren't TMs.
  Please read the wiki entry to see that my laptop isn't a TM.

 But your laptop can certainly implement/simulate a Turing Machine (which was
 the obvious point of the post(s) that you replied to).


But in that case my analogy holds. Both GoL and my laptop can
implement a TM; neither *is* a TM. Both are computationally universal.
Richard Loosemore's argument in the post I was replying to was based
on saying my laptop was *already* a TM, not that it can simulate a TM.
He was trying to intimate some difference between the relation of my
laptop to a TM and the relation between GoL and a TM, which was
incorrect.

 Seriously, people, can't we lose all these spurious arguments?  We have
 enough problems communicating without deliberate stupidity.

I am not being deliberately stupid, simply refuting his sloppy claim
that a laptop is a TM in a way that the GoL isn't.

 Will Pearson



Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-06 Thread William Pearson
On 07/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote:

 I have a question for you, Will.

 Without loss of generality, I can change my use of Game of Life to a new
 system called GoL(-T) which is all of the possible GoL instantiations
 EXCEPT the tiny subset that contain Turing Machine implementations.


As far as I am concerned it is not that simple. Turing completeness
has nothing to do with any particular implementation of a TM in that
system; it is a property of the system.

That is, the ability to be organised in such a way as to compute
whatever a Turing machine could. And there are many, many potential
ways of organising a Turing complete system to compute what a TM
could.

To take an analogous example, let's say you wanted to take C and make
it no longer Turing complete. The simple way would be to remove loops
and recursion, and then, to be on the safe side, self-modifying code,
in case a program wrote a loop for itself. Why such drastic measures?
Because otherwise you might be able to write a Java/Ruby/Brainfuck
interpreter and get back to Turing completeness.
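
As a concrete illustration of how cheap the interpreter route is, here
is a minimal Brainfuck interpreter (a rough sketch in Python rather
than C, entirely my own and not from the original post). Note that the
single while loop and the bracket commands are exactly the constructs
a restricted, non-Turing-complete subset would have to forbid:

def run_bf(code, inp=""):
    """Bare-bones Brainfuck interpreter: one unbounded tape, 8 commands."""
    tape, out, ptr, i, inp = {}, [], 0, 0, list(inp)
    jumps, stack = {}, []
    for pos, c in enumerate(code):            # pre-match the brackets
        if c == "[":
            stack.append(pos)
        elif c == "]":
            start = stack.pop()
            jumps[start], jumps[pos] = pos, start
    while i < len(code):
        c = code[i]
        if c == ">": ptr += 1
        elif c == "<": ptr -= 1
        elif c == "+": tape[ptr] = (tape.get(ptr, 0) + 1) % 256
        elif c == "-": tape[ptr] = (tape.get(ptr, 0) - 1) % 256
        elif c == ".": out.append(chr(tape.get(ptr, 0)))
        elif c == ",": tape[ptr] = ord(inp.pop(0)) if inp else 0
        elif c == "[" and tape.get(ptr, 0) == 0: i = jumps[i]
        elif c == "]" and tape.get(ptr, 0) != 0: i = jumps[i]
        i += 1
    return "".join(out)

# '++[->++<]>.' computes 2*2 in the next cell and prints chr(4)
assert ord(run_bf("++[->++<]>.")) == 4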

So maybe I am throwing the baby out with the bath water. But what is
the alternative method of getting a non-Turing-complete subset of C?
Well, you would basically have to test each string to see whether it
implemented a UTM of some variety or other, and discard those that
did. It would have to be done empirically: automatic ways of
recognising strings that implement UTMs would probably fall foul of
Rice's theorem.

So in imagining GoL-T, you are asking me to do something I do not know
how to do simply, without radically changing the system to prevent
looping patterns. And if I do it the complex way of getting rid of
UTMs, I don't know what the states left over from the great UTM purge
would look like. So I can't say whether it would still be Complex
afterwards, and hence whether the rest of your reasoning holds.

 Will Pearson



Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-05 Thread William Pearson
On 05/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote:
 We have good reason to believe, after studying systems like GoL, that
 even if there exists a compact theory that would let us predict the
 patterns from the rules (equivalent to predicting planetary dynamics
 given the inverse square law of gravitation), such a theory is going to
 be so hard to discover that we may as well give up and say that it is a
 waste of time trying.  Heck, maybe it does exist, but that's not the
 point:  the point is that there appears to be little practical chance of
 finding it.


A few theories. All states which do not have three live cells adjacent
will become cyclic with a cycle length of 0 (or won't be cyclic, if
you reject cycle lengths of 0). Similarly, all patterns consisting of
one or more groups of three live cells in a row inside an otherwise
empty 7x7 box will have a stable cycle.

Will there be a general theory? Nope. You can see that from GoL being
Turing complete: if you had a theory that could in general predict
what a given GoL pattern was going to do, you could rework it to tell
whether a TM was going to halt.

My theories are mainly to illustrate what a science of GoL would look
like. Staying firmly in the comfort zone.
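
To show what staying in that comfort zone looks like in practice, here
is the sort of throwaway tool (Python, my own sketch, not from the
original posts) you would use to test little theories like the ones
above empirically:

from collections import Counter

def step(live):
    """One GoL generation; 'live' is a set of (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# a row of three live cells (a blinker) has a stable cycle of length 2
blinker = {(0, 0), (1, 0), (2, 0)}
assert step(step(blinker)) == blinker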

Let me rework something you wrote earlier.

I want to use the class of TM as a nice-and-simple example of a system whose
overall behavior (in this case, whether the system will halt or not)
is impossible to
predict from a knowledge of the state transitions and initial state of the tape.

Computer engineering has as much or as little complexity as the
engineer wants to deal with. They can stay in the comfort zone of
easily predictable systems, much like the one I illustrated exists for
GoL, or they can walk on the wild side a bit. My postgrad degree was
done in a place which specialised in evolutionary computation (GA, GP
and LCS), where systems were mainly tested empirically. So my view of
what computer engineering is, is perhaps a little out of the
mainstream.

 Will Pearson



Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-05 Thread William Pearson
On 05/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote:
 William Pearson wrote:
  On 05/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote:
  We have good reason to believe, after studying systems like GoL, that
  even if there exists a compact theory that would let us predict the
  patterns from the rules (equivalent to predicting planetary dynamics
  given the inverse square law of gravitation), such a theory is going to
  be so hard to discover that we may as well give up and say that it is a
  waste of time trying.  Heck, maybe it does exist, but that's not the
  point:  the point is that there appears to be little practical chance of
  finding it.
 
 
  A few theories. All states which do not three live cells adjacent,
  will become cyclic with a cycle length of 0. Or won't be cyclic if you
  reject cycle lengths of 0. Similarly all patterns consisting of one or
  more groups of three live cells in a row inside an otherwise empty 7x7
  box will have a stable cycle.
 
  Will there be a general theory? Nope, You can see that from GoL being
  Turing complete.
 ^^

 Sorry, Will, but this not correct, and I explained the entire reason
 just yesterday, in a long and thorough post that was the beginning of
 this thread.  Just out of interest, did you read that one?

Yup, and my argument is still valid, if this is the one you are
referring to. You said:

Now, finally:  if you choose the initial state of a GoL system very,
VERY carefully, it is possible to make a Turing machine.  So, in the
infinite set of GoL systems, a very small fraction of that set can be
made to implement a Turing machine.

But what does this have to do with explaining the existence of patterns
in the set of ALL POSSIBLE GoL systems??  So what if a few of those GoL
instances have a peculiar property?  bearing in mind the definition of
complexity I have stated above, how would it affect our attempts to
account for patterns that exist across the entire set?

You are asking about the whole space; my argument was to do with a
subspace, admittedly. But any theory about the whole space must be
valid on all the subspaces it contains. All we need to do is find a
single state whose evolution we can prove we cannot predict, to say
that we will never be able to find a theory for all states.

If it were possible to find a theory, by your definition, then we
could use that theory to predict the admittedly small set of states
that are TMs.

I might reply to the rest if I think we will get anywhere from it.

 Will



Re: AI and botnets Re: [agi] What is the complexity of RSI?

2007-10-02 Thread William Pearson
On 02/10/2007, Mark Waser [EMAIL PROTECTED] wrote:
  A quick question, do people agree with the scenario where, once a non
  super strong RSI AI becomes mainstream it will replace the OS as the
  lowest level of software?

 For the system that it is running itself on?  Yes, eventually.  For most/all
 other machines? No.

Well that would be a potentially dangerous scenario. I wonder what
assumptions underlie our beliefs in either direction.

  And would you agree that AIs are less likely to be botnetted?

 By botnetted, do you mean taken over and incorporated into a botnet or do
 you mean composed of a botnet.  Taken over is a real problem for all sorts
 of reasons.  Being composed of multiple machines is what many people are
 proposing.

Yup, I did mean the former, although memetic infection, as Josh Storrs
Hall mentioned, is a possibility. They may be better at resisting some
memetic infections than humans, though, as more memes may conflict
with their goals. For humans it doesn't matter too much what you
believe, as long as it doesn't interfere with your biological goals.

  In conclusion, thinking about the potential problems of an AGI is very
  highly dependent upon your assumptions.

 Amen.

It would be quite an interesting and humorous exercise if we could
develop an assumption code, like the geek codes of yore. Then we could
post it in our sigs and see exactly what was assumed for each post.
Probably unworkable, but I may kick the idea around a bit.

  Developing, and finding a way
  to test, a theory of all types of  intelligence should be the top
  priority of any person who wishes to reason about the potential
  problems, otherwise you are likely to be tilting at windmills, due to
  the sheer number of possible theories and the consequences of each.

 I believe that a theory of all types of intelligence is an intractably large
 problem -- which is normally why I don't get into discussions about the
 dangers of AGI (as opposed to the dangers of certain morality systems which
 I believe is tractable) -- though I will discuss certain specific
 intelligence proposals like Richard.  Much of what is posted on this list is
 simply hot air based upon so many (normally hidden and unrealized)
 assumptions that it is useless.


The best way I have come up with to try and develop a theory of
intelligence is to say what it is not, by discarding systems that are
not capable of what the human brain is capable of.

For example, you can trivially say that intelligence is not a
function, in the formal sense of the word: a function's I/O mapping
does not change over time, and an intelligence must at least be able
to remember something.

Another example would be to formally define the rate at which we gain
information when we hear a telephone number once and can recall it
shortly after, and then dismiss systems such as simple back-prop ANNs,
which require many repetitions of the data to be learnt.
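
A toy version of the contrast (Python/NumPy; the sizes, learning rate
and architecture are all my own choices, and this is not a claim about
the brain): a lookup memory stores the association after a single
exposure, while a small gradient-trained associator needs a few
hundred presentations of the same pair before recall is accurate.

import numpy as np

rng = np.random.default_rng(0)
key = rng.standard_normal(16)        # "hearing the number once"
value = rng.standard_normal(16)      # what must be recalled

# One-shot system: store the association directly; recall is exact.
memory = {key.tobytes(): value}

# Gradient-trained linear associator: repeat until recall is accurate.
W = np.zeros((16, 16))
presentations = 0
while np.linalg.norm(W @ key - value) > 1e-2:
    W += 0.001 * np.outer(value - W @ key, key)  # gradient step on squared error
    presentations += 1
print("presentations needed:", presentations)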

Obviously neither of these applies to most AGI systems being
developed, but more advanced theories would hopefully cull the
possibilities down somewhat, and possibly allow us to discuss the
effects of AI on society somewhat rationally.

  Will Pearson



Re: AI and botnets Re: [agi] What is the complexity of RSI?

2007-10-01 Thread William Pearson
On 01/10/2007, Matt Mahoney [EMAIL PROTECTED] wrote:

 --- William Pearson [EMAIL PROTECTED] wrote:

  On 30/09/2007, Matt Mahoney [EMAIL PROTECTED] wrote:
   The real danger is this: a program intelligent enough to understand
  software
   would be intelligent enough to modify itself.
 
  Well it would always have the potential. But you are assuming it is
  implemented on standard hardware.

 I assume that is what most people are doing.  People want computers to be more
 useful, which means more intelligent.  I suppose an alternative is to
 genetically engineer humans with bigger brains.


You do not have to go that far to get an AI that cannot access all of
its own source. There are a number of scenarios where the dominant AI
does not have easy access to its own source.

A few quick definitions.

Super strong RSI - A Vingean-fiction type of AI that can bootstrap
itself from nothing, or from simply reading the net, and figure out
ways to bypass any constraints we may place on it by hacking humans or
discovering ways to manipulate physics we don't understand.

Strong RSI - Expanding itself exponentially by taking over the
internet, and then taking over robotic factories to gain domination
over humans.

Weak RSI - Slow, experimental, incremental improvement by the whole
system, or possibly just parts of it independently. This is the form
of RSI that humans exhibit, if we do it at all.

And by RSI, I mean two abilities of the system

1) It has to be able to move through the space of TMs that map the
input to output.
2) It has to be able to move through the space of TMs that map the
input and history to a change in the mechanisms for 1) and 2).

All while maintaining a stable goal.
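
A rough sketch (Python; the class, the acceptance rule and every name
are mine, offered only to pin the two abilities down): level 1 maps
input to output, level 2 maps input and history to a proposed change
in the machinery of both levels, and the goal function itself is never
rewritten.

class RSISystem:
    def __init__(self, policy, modifier, goal):
        self.policy = policy       # ability 1: input -> output
        self.modifier = modifier   # ability 2: proposes edits to both levels
        self.goal = goal           # fixed evaluation; never rewritten
        self.history = []

    def step(self, observation):
        output = self.policy(observation)
        self.history.append((observation, output))
        proposal = self.modifier(observation, self.history)
        if proposal is not None:
            new_policy, new_modifier = proposal
            # accept a self-modification only if it scores at least as
            # well on the unchanged goal over the history so far
            if self.goal(new_policy, self.history) >= \
               self.goal(self.policy, self.history):
                self.policy, self.modifier = new_policy, new_modifier
        return output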

A quick question: do people agree with the scenario where, once a
non-super-strong RSI AI becomes mainstream, it will replace the OS as
the lowest level of software? It does not, to my mind, make sense for
it to be layered on top of Vista or Linux and be subject to their
flaws and problems. And would you agree that AIs are less likely to be
botnetted?

The scenarios for an AGI not having full and easy access to its own code include:

1) Weak RSI is needed for AGI, as contended previously. So systems
will be built to separate out good programs from bad. Memory accesses
will be tightly controlled so that bad programs do not adversely
affect useful programs.

2) An AGI might be created by a closed-source company that believes in
Trusted Computing, which builds on encryption in the hardware layer.

3) In order to make a system capable of being intelligent in real
time, you may need vastly more memory bandwidth than current memory
architectures are capable of. So you may need to go vastly parallel,
or even down to cellular automata style computing. This would create
huge barriers to trying to get all the code for the system.

I think it is most likely 3 combined with 1. Even if only one of these
is correct, we may well get past any major botnetting problem with
strong recursive AI, simply because AIs unable to read all their own
code at once will have been purchased quickly for their economic
value, will have replaced vulnerable computers and thus reduced the
number of bots available to the net, and would be capable of policing
the net by setting up honey pots etc., especially if they become the
internet routers.

In conclusion, thinking about the potential problems of an AGI is very
highly dependent upon your assumptions. Developing, and finding a way
to test, a theory of all types of  intelligence should be the top
priority of any person who wishes to reason about the potential
problems, otherwise you are likely to be tilting at windmills, due to
the sheer number of possible theories and the consequences of each.

  Will Pearson



Re: [agi] Religion-free technical content

2007-09-30 Thread William Pearson
On 29/09/2007, Vladimir Nesov [EMAIL PROTECTED] wrote:
 Although it indeed seems off-topic for this list, calling it a
 religion is ungrounded and in this case insulting, unless you have
 specific arguments.

 Killing huge amounts of people is a pretty much possible venture for
 regular humans, so it should be at least as possible for artificial
 ones. If artificial system is going to provide intellectual labor
 comparable to that of humans, it's going to be pretty rich, and after
 that it can use obtained resources for whatever it feels like.

This statement is, in my opinion, full of unfounded assumptions about
the nature of the AGIs that are actually going to be produced in the
world.

I am leaving this on list, because I think these assumptions are
detrimental to thinking about AGI.

If an RSI AGI infecting the internet is not possible, for whatever
theoretical reason, and we turn out to have a relatively normal
future, I would contend that Artificial People (AP) will not make up
the majority of the intelligence in the world. If we have the
knowledge to create the whole brain of an artificial person with a
separate goal system, then we should have the knowledge to create a
partial Artificial Brain (PAB) without a goal system and hook it up in
some fashion to the goal system of humans.

PABs in this scenario would replace von Neumann computers and make it
a lot less easy for APs to botnet the world. They would also provide
most of the economic benefits that an AP could.

I would contend that PABs are what the market will demand. Companies
would get them for managers, to replace cube workers. The general
public would get them to find out and share information about the
world with less effort and to chat and interact with them whenever
they want. And the military would want them for the ultimate
unquestioning soldier. Very few people would want computer systems
with their own identity/bank account and rights.

The places where systems with their own separate goal systems would
mainly be used are where they are out of contact with humans for a
long time: deep space and the deep sea.

Now the external brain type of AI can be dangerous in its own right,
but the dangers are very different to the Blade Runner/Terminator view
that is too prevalent today.

So can anyone give me good reasons as to why I should think that AGI
with identity will be a large factor in shaping the future (ignoring
recursive self improvement for the moment)?

 Will Pearson



[agi] Re: An amazing blind (?!!) boy (and a super mom!)

2007-09-27 Thread William Pearson
On 27/09/2007, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
 This is why the word impossible has no place outside of math
 departments.

  Original Message 
 Subject: [bafuture] An amazing blind (?!!) boy  (and a super mom!)
 Date: Thu, 27 Sep 2007 11:49:42 -0700
 From: Kennita Watson [EMAIL PROTECTED]

 This reminds me of the line from Galaxy Quest --
 Never give up.  Never surrender.  Hurray!

 http://www.metacafe.com/watch/779704/best_video_of_the_year/


More information can be found on Wikipedia, unsurprisingly:
http://en.wikipedia.org/wiki/Human_echolocation

To me, the ability that he evidences should be the fundamental
building block of AGI theories of the future. The ability to
re-purpose resources to find new and different information about the
state of the world buried within the information stream seems to me a
necessary precursor to reasoning about and learning from that stream.
Admittedly the two feed off each other, but without the potential for
that change there will be languages and patterns a system can never
understand or recognise, and disasters it will not have a chance of
somewhat recovering from.

 Will Pearson



Re: [agi] AGI introduction

2007-06-23 Thread William Pearson

On 22/06/07, Pei Wang [EMAIL PROTECTED] wrote:

Hi,

I put a brief introduction to AGI at
http://nars.wang.googlepages.com/AGI-Intro.htm ,  including an AGI
Overview followed by Representative AGI Projects.

It is basically a bunch of links and quotations organized according to
my opinion. Hopefully it can help some newcomers to get a big picture
of the idea and the field.

Pei



I like the overview, but I don't think it captures every possible type
of AGI design approach, and it may overly constrain people's thoughts
as to the possibilities.

Mine I would describe as foundationalist/integrative. That is, while
we need to integrate our knowledge of sensing/planning/natural
language/reasoning, this needs to be done on the correct foundation
architecture.

My theory is that the computer architecture has to be more brain-like
than a simple stored-program architecture in order to allow
resource-constrained AI to be implemented efficiently. The approach I
am investigating is an architecture that can direct the changing of
its programs, by allowing self-directed changes to the stored programs
that are better for following a goal to persist.

Changes can come from any source (proof, random guess, translations of
external suggestions), so speed of change is not an issue.

 Will Pearson



Foundational/Integrative approach was Re: [agi] AGI introduction

2007-06-23 Thread William Pearson

On 23/06/07, Mike Tintner [EMAIL PROTECTED] wrote:


- Will Pearson: My theory is that the computer architecture has to be
more brain-like
 than a simple stored program architecture in order to allow resource
 constrained AI to implemented efficiently. The way that I am
 investigating, is an architecture that can direct the changing of the
 programs by allowing self-directed changes to the stored programs that
 are better for following a goal, to persist.  Changes can come from any
 source (proof, random guess, translations of
 external suggestions), so speed of change is not an issue.

What's the difference between a stored program and the brain's programs that
allows these self-directed changes to come about? (You seem to be trying to
formulate something v. fundamental).


I think the brain's programs have the ability to protect their own
storage from interference from other programs. The architecture will
only allow programs that have proven themselves better* to override
this protection on other programs, if they request it.

If you look at the brain it is fundamentally distributed and messy. To
stop errors propagating as they do in stored-program architectures,
you need something more decentralised than the dictatorial kernel
control currently attempted.

It is instructive to look at how stored-program architectures have
been struggling to secure against buffer overruns, to protect against
inserted code subverting the rest of the machine. Measures that have
been taken include no-execute bits on non-code memory, and randomising
where programs are stored in memory so that attackers can't predict
what to overwrite. You are even getting to the stage, in trusted
computing, where you aren't allowed to access certain portions of
memory unless you have the correct cryptographic credentials. I would
rather go another way: if you have some form of knowledge of what a
program is worth embedded in the architecture, then you should be able
to limit these sorts of problems and allow more experimentation.

If you try self-modifying and experimental code on a simple
stored-program system, it will generally cause errors and lots of
problems when things go wrong, as there are no safeguards on what the
program can do. You can lock the experimental code in a sandbox, as in
genetic programming, but then it can't replace older code or change
the methods of experimentation. You can also use formal proof, but
that limits a lot which sources of information you can use as
inspiration for the experiment.

My approach allows an experimental bit of code, if it proves itself by
being useful, to take the place of other code, provided it happens to
be coded to take over that function as well.
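
A minimal sketch of that gate (Python; the slot, the utility field and
the comparison rule are all my own invention, not a finished design):
an experimental program may only overwrite a protected slot once it
has earned more utility than the incumbent.

class Slot:
    def __init__(self, program, utility=0.0):
        self.program = program
        self.utility = utility        # earned from past performance

def try_override(slot, candidate, candidate_utility):
    """Replace the incumbent only if the candidate has proven itself better."""
    if candidate_utility > slot.utility:
        slot.program, slot.utility = candidate, candidate_utility
        return True
    return False                      # the incumbent's storage stays protected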


And what kind of human mental activity
do you see as evidence of the brain's different kind of programs?


Addiction. Or the general goal-optimising behaviour of the various
different parts of the brain. That we notice things more if they are
important to us, which implies that our noticing functionality
improves dependent upon what our goal is. Also the general
pervasiveness of the dopaminergic neural system, which I think has an
important function in determining which programs or neural areas are
being useful.

* I shall now get back to how code is determined to be useful.
Interestingly, it is somewhat like the credit attribution for how much
work people have done on AGI projects that some people have been
discussing. My current thinking is something like this: there is a
fixed function that can recognise manifestly good and bad situations,
and it provides a value every so often to all the programs that have
control of an output. If things are going well, say some food is
found, the value goes up; if an injury is sustained, the value goes
down. The basic reinforcement learning idea.

The value becomes, in the architecture, a fungible, distributable, but
conserved resource, analogous to money, although when used to
overwrite something it is removed, dependent upon how useful the
overwritten program was. The outputting programs pass it back to the
programs that have given them the information they needed to output,
whether that information is from long-term memory or processed from
the environment. These second-tier programs pass it further back.
However, the method of determining who gets the credit doesn't always
have to be a simplistic function; programs can have heuristics on how
to distribute the utility based on the information they get from each
of their partners. As these heuristics are just part of each program,
they can change as well.
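
A rough sketch of that flow (Python; the pass-back fraction, the
epsilon cut-off and all names are mine): a fixed reward function pays
the programs currently holding the outputs, and each program forwards
part of what it receives to the programs that supplied it with
information, so the credit stays conserved as it propagates back.

def distribute_credit(reward, output_programs, suppliers, balances,
                      passback=0.5, epsilon=1e-6):
    """reward: value from the fixed good/bad recogniser.
    suppliers: maps a program to the programs it drew information from.
    balances: dict of conserved credit, updated in place."""
    share = reward / max(len(output_programs), 1)
    frontier = [(p, share) for p in output_programs]
    while frontier:
        prog, amount = frontier.pop()
        feeders = suppliers.get(prog, [])
        if not feeders or amount < epsilon:
            balances[prog] = balances.get(prog, 0.0) + amount
            continue
        balances[prog] = balances.get(prog, 0.0) + amount * (1 - passback)
        portion = amount * passback / len(feeders)
        frontier.extend((f, portion) for f in feeders)
    return balances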

So in the end you get an economy of programs that aren't forced to do
anything. Just those that perform well can overwrite those that don't
do so well. It is a very loose constraint on what the system actually
does. On top of this in order to get an AGI you would integrate
everything we know about language, senses, naive physics, mimicry and
other things yet discovered. Also adding the new knowledge we 

Re: Foundational/Integrative approach was Re: [agi] AGI introduction

2007-06-23 Thread William Pearson

On 24/06/07, Bo Morgan [EMAIL PROTECTED] wrote:


On Sun, 24 Jun 2007, William Pearson wrote:

) I think the brains programs have the ability to protect their own
) storage from interference from other programs. The architecture will
) only allow programs that have proven themselves better* to be able to
) override this protection on other programs if they request it.
)
) If you look at the brain it is fundamentally distributed and messy. To
) stop errors propagating as they do in stored program architectures you
) need something more decentralised than the current attempted
) dictatorial kernel control.

This is only partially true, and mainly only for the neocortex, right?
For example, removing small parts of the brainstem result in coma.


I'm talking about control in memory access, and by memory access I am
referring to synaptic

In a coma, the other bits of the brain may still be doing things. Not
inputting or outputting, but possibly other useful things (equivalents
of defragmentation, who knows). Sleep is important for learning, and a
coma is an equivalent state to deep sleep. Just one that cannot be


) The value becomes in the architecture a fungible, distributable, but
) conserved, resource.  Analogous to money, although when used to
) overwrite something it is removed dependent upon hoe useful the
) program overwritten was. The outputting programs pass it back to the
) programs that have given them they information they needed to output,
) whether that information is from long term memory or processed from
) the environment. These second tier programs pass it further back.
) However the method of determining who gets the credit doesn't have to
) always be a simplistic function, they can have heuristics on how to
) distribute the utility based on the information they get from each of
) its partners. As these heuristics are just part of each program they
) can change as well.

Are there elaborations (or a general name that I could look up) on this
theory--sounds good?  For example, you're referring to multiple tiers of
organization, which sound like larger scale organizations that maybe have
been further discussed elsewhere?


Sorry. It is pretty much all just me at the moment, and the higher
tiers of organisation are just fragments that I know will need to be
implemented or planned for, but that I have no concrete ideas for at
the moment. I haven't written up everything at the low level either,
because I am not working on this full time. I hope to start a PhD on
it soon, although I don't know where. It will mainly be working on
trying to get a theory of how to design the systems properly, so that
the system only rewards those programs that do well and doesn't
encourage defectors to spoil what other programs are doing, based on
game theory and economic theory. That is the level I am mainly
concentrating on right now.


It sounds like there are intricate dependency networks that must be
maintained, for starters.  A lot of supervision and support code that
does this--or is that evolved in the system also?


My rule of thumb is to try to put as much as possible into the
changeable/evolving section, but to code it by hand to start with if
it is needed for the system to start to do some work. The only reason
to keep something on the outside is if the system would be unstable
with it on the inside, e.g. the functions that give out reward.

Will Pearson



Re: Foundational/Integrative approach was Re: [agi] AGI introduction

2007-06-23 Thread William Pearson

Sorry, sent accidentally while half finished.

Bo wrote:

This is only partially true, and mainly only for the neocortex, right?
For example, removing small parts of the brainstem result in coma.


I'm talking about control in memory access, and by memory access I am
referring to synaptic changes in the brain. While the brain stem has
dictatorial control over consciousness and activity, it does not
necessarily control all activity in the brain in terms of memory and
how it changes, which is what I am interested in.

In a coma, the other bits of the brain may still be doing things. Not
inputting or outputting, but possibly other useful things (equivalents
of defragmentation, who knows). Sleep is important for learning, and a
coma is an equivalent brain state to deep sleep. Just one that cannot
be stopped in the usual fashion.

Will Pearson



Re: [agi] about AGI designers

2007-06-06 Thread William Pearson

On 06/06/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:


There're several reasons why AGI teams are fragmented and AGI designers
don't want to join a consortium:

A.  believe that one's own AGI design is superior
B.  want to ensure that the global outcome of AGI is friendly
C.  want to get bigger financial rewards


There is also

D. The other members of the consortium's philosophical approaches to
AGI share little in common with your own, and the time spent trying to
communicate with the consortium about which class of system to
investigate would be better spent trying to communicate with the world
in general. For example, if you are committed to a connectionist
approach but the consortium is mainly logical.

 Will Pearson



Re: Slavery (was Re: [agi] Opensource Business Model)

2007-06-05 Thread William Pearson

On 04/06/07, Matt Mahoney [EMAIL PROTECTED] wrote:

Suppose you build a human level AGI, and argue
that it is not autonomous no matter what it does, because it is
deterministically executing a program.



I suspect an AGI that executes one fixed unchangeable program is not
physically possible.

 Will Pearson



Re: Slavery (was Re: [agi] Opensource Business Model)

2007-06-05 Thread William Pearson

On 05/06/07, Ricardo Barreira [EMAIL PROTECTED] wrote:

On 6/5/07, William Pearson [EMAIL PROTECTED] wrote:
 On 04/06/07, Matt Mahoney [EMAIL PROTECTED] wrote:
  Suppose you build a human level AGI, and argue
  that it is not autonomous no matter what it does, because it is
  deterministically executing a program.
 

 I suspect an AGI that executes one fixed unchangeable program is not
 physically possible.

What do you mean by one fixed unchangeable program? That seems
nonsense to me... There's no necessary distinction between a program
and its data, so that concept is useless.


A function in the mathematical sense is a fixed unchangeable program,
though I'd agree that there is no distinction between program and
data. I may have interpreted the sentence incorrectly, but the
implication I got was that because a human supplied the program that
the computer ran to be intelligent, the computer was not autonomous.
Now, as you have pointed out, data can be seen as a program, and an
intelligent system is sure to have acquired its own data, so what
determines its behaviour and learning is not fully specified by
humans; therefore it can be considered autonomous to some degree.

If, however, he was referring to questions of autonomy based upon the
idea that autonomous systems cannot be made out of pieces that
unthinkingly follow rules, then humans, to the best of our
understanding, would not be autonomous by this standard. So this
meaning of autonomous is useless, which is why I assumed he meant the
initial meaning.

I would also go further than that and say that a system that can't
treat what determines its external behaviour, and how it learns, as
data does not seem to be a good candidate for an intelligent system,
because surely one of the pillars of intelligence is self-control. We
have examples of systems that are pretty good at self-control in
modern PCs; however, they are not suited to self-experimentation in
the methods of control.


 Will Pearson



[agi] New AI related charity?

2007-06-04 Thread William Pearson

Is there space within the charity world for another one related to
intelligence but with a different focus to SIAI?

Rather than specifically funding an AGI effort, or creating one with a
specific goal state of humanity in mind, it would be dedicated to
funding a search for the answers to a series of questions that will
help answer: What is intelligence? And what are the possible futures
that follow on from humans discovering what intelligence is? The
second question would be answered once we have better knowledge of the
answer to the first.

I feel a charity is needed to focus some of the efforts of all of us,
as the time is not right for applications, and we are all pulling in
diverse directions.

The sorts of questions I would like the charity to fund answers to, in
a full and useful fashion, are the following; my own very partial
answers come afterwards.

1) What sorts of limits are there to learning systems?
2) Which systems can approach those limits? And are they suitable for
creating intelligences, given the assumptions they make about the
world around them?

I am following this track because a system that can make better use of
its information streams to alter how it behaves, compared with other
systems, is more likely to be what we think of as intelligent. The
only caveat is that this holds as long as it has to deal with the same
classes or quality of information streams as humans.

Pointers towards answers

1) No system can make justified choices about how it should behave at
a greater rate than the bit rate of its input streams (a rough
formalisation is sketched after pointer 2).

2) A von Neumann architecture computer, loading a program from
external information sources, approaches that limit (as it makes a
choice to alter its behaviour by one bit for every bit it receives,
assuming a non-redundant encoding of the program). It is not suitable
for intelligent systems, though, as it assumes that the information it
gets from the environment is correct and non-hostile.
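
A rough formalisation of pointer 1 (my notation and my gloss of what a
"justified choice" is, not something stated in the original): write
$X_{1:T}$ for everything the system receives on its input streams up
to time $T$, and $B_T$ for the number of bits by which it can
justifiably change its behaviour in that time. Then

\[
  B_T \;\le\; H(X_{1:T}) \;\le\; \sum_{t=1}^{T} H(X_t),
\]

where $H$ is entropy in bits: nothing computed downstream of the
inputs can carry more information about how the system ought to behave
than the inputs themselves carry (the data-processing inequality).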

How to make a system with the same ability to approach the limits to
learning and deal with potentially harmful information is what I would
like to focus on after these answers are formalised.

I would be interested in other people's opinions on these questions
and answers, and also in the questions they would get an intelligence
research charity to fund.

 Will Pearson



Re: [agi] Beyond AI chapters up on Kurzweil

2007-06-01 Thread William Pearson

On 01/06/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:

Ray Kurzweil has arranged to put a couple of sample chapters up on his site:

Kinds of Minds
http://www.kurzweilai.net/meme/frame.html?main=/articles/art0707.html
The Age of Virtuous Machines
http://www.kurzweilai.net/meme/frame.html?main=/articles/art0708.html

Enjoy!



Thanks. I had wanted to read some of your writing. I will put Beyond
AI on my long to buy list.

I like the different definitions of AIs, although I don't think they
capture all types of intelligent system. What I am interested in is
making systems that mesh with humans to the extent that they are
considered part of the same system. External artificial brain tissue,
if you will. Specialised towards different tasks, and not necessarily
hooked up with electrodes, but through the senses. It would only have
low-bandwidth connections with the organic brain, which may or may not
limit how much the organic and silicon systems can be considered one
singular entity.

So neither as tools as some people envisage, nor sentient entities in
their own right, but part of ourselves. Perhaps symbiohuman systems
might be a good term for what I would like to create.

Will Pearson



Re: [agi] Tommy

2007-05-11 Thread William Pearson

On 11/05/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:


Tommy, the scientific experiment and engineering project, is almost
all about concept formation. He gets a voluminous input stream but is
required to parse it into coherent concepts (e.g. objects, positions,
velocities, etc). None of these concepts is he given originally. Tommy
1.0 will simply watch the world and try to imagine what happens next.



Interesting. This is somewhat similar to one of the projects that I am
interested in. Assuming sufficient or the correct hardware, I'm
interested in body-mounted robotics for Intelligence Augmentation,
using what people would think of as AI.

An example of the robot if not the software
http://www.robots.ox.ac.uk/ActiveVision/Projects/Wear/wear.03/index.html

I would start off with it annotating its visual streams, which would
be passed to a head-mounted display on the user. Things like tracking
objects the user has pointed at, so the user could see things not
directly in front of him, or highlighting important objects to the
user, would be some of the things it would initially be taught. I
would also give it a controlled, low-power laser pointer so it could
visually mark things for other people apart from its user.

I think this sort of system is a worthy one to study, as it allows the
user and the robot to inhabit the same world (so concepts developed by
the computer should not be too alien to the user, and thus languages
may be shared between them). It also allows long periods of time for
the researcher to be present with the computer, if time scales such as
a baby's development are required for the teaching of human-level
intelligence. And it tries to minimise the amount of
processing/robotics required to share a similar world, meaning more
projects could possibly be attempted at once.

While user and computer do share the same world in your experimental
setup, there may be some concepts that would be hard for it to learn,
such as translation of its PoV. Whether that would be a fatal flaw in
its developed mental model of the world (and would limit its ability
to communicate as the hardware and its capabilities developed), I'm
not sure. More experimentation and better theories required, as ever.

 Will Pearson



[agi] What would motivate you to put work into an AGI project?

2007-05-02 Thread William Pearson

My current thinking is that it will take lots of effort by multiple
people to take a concept or prototype AGI and turn it into something
that is useful in the real world. And even if one or two people worked
on the correct concept for their whole lives, they may not produce the
full thing; they may hit bottlenecks in their thinking or lack the
proper expertise to build the hardware needed to make it run in
anything like real time. Building up a community seems the only
rational way forward.

So how should we go about trying to convince each other we have
reasonable concepts that deserve to be tried? I can't answer that
question as I am quite bad at convincing others of the interestingness
of my work. So I'm wondering what experiments, theories or
demonstrations would convince you that someone else was onto
something?

For me an approach should have the following features:

1) The theory not completely divorced from brains

It doesn't have to describe everything about human brains, but you
should be able to see how roughly a similar sort of system may be
running in the human brain, and how it can account for things such as
motivation and neural plasticity.

2) It takes some note of theoretical computer science

So nothing that ignores limits to collecting information from the
environment or promises unlimited bug-free creation/alteration of
programming.

3) A reason why it is different from normal computers/programs

How it deals with meaning and other things. If it could explain
consciousness in some fashion, I would have to abandon my own theories
as well.

I'm sure there are other criteria I have as well, but those three are
the most obvious. As you can see I'm not too interested in practical
results right at the moment. But what about everyone else?

 Will Pearson



Re: [agi] Circular definitions of intelligence

2007-04-26 Thread William Pearson

On 26/04/07, Richard Loosemore [EMAIL PROTECTED] wrote:

Consider that, folks, to be a challenge:  to those who think there is
such a definition, I await your reply.



While I don't think it is the sum of all intelligence, I'm studying
something I think is a precondition of being intelligent: optimisation
systems that can also take in information that is used to improve the
way they optimise and improve the way they gather said information.

So, for example, for a thermostat to have the abilities I am
interested in, not only would it have to stabilise the temperature, it
would have to be able to accept instruction in some form so it could
improve the way it stabilised temperature: for example, taking in
instructions that it should alter its programming to pre-emptively
start heating rooms when it gets information that a door is to be
opened at a specific time (from cameras, email). It would also have to
be able to take instruction about how to take instructions in
different forms. Specifically, I am interested in systems that do not
take in instructions naively, but have sanity checking and other means
of guarding against poor/malicious information.
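
A toy of the kind of thing I mean (Python; the rule format and the
sanity check are invented purely for illustration): the thermostat
keeps its fixed job of stabilising temperature, but can adopt
pre-emptive heating rules sent to it, and only adopts a rule if it
doesn't fire on ordinary days.

class AdaptiveThermostat:
    def __init__(self, setpoint):
        self.setpoint = setpoint
        self.rules = []                     # learned pre-emptive heating rules

    def control(self, temperature, events):
        heat = temperature < self.setpoint
        # a learned rule may switch heating on before the temperature drops
        return heat or any(rule(events) for rule in self.rules)

    def instruct(self, rule, benign_test_events):
        """Adopt a proposed rule only if it passes a crude sanity check."""
        if not any(rule(events) for events in benign_test_events):
            self.rules.append(rule)
            return True
        return False

t = AdaptiveThermostat(setpoint=20.0)
door_rule = lambda events: "door opening at 18:00" in events
t.instruct(door_rule, benign_test_events=[set(), {"postman seen"}])
print(t.control(21.0, {"door opening at 18:00"}))   # True: pre-emptive heat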

 Will Pearson



Re: [agi] dopamine and reward prediction error

2007-04-13 Thread William Pearson

On 13/04/07, Richard Loosemore [EMAIL PROTECTED] wrote:

To convey this subtlety as simply as I can, I would suggest that you ask
yourself how much intelligence is being assumed in the preprocessing
system that does the work of (a) picking out patterns to be considered
by the system, and (b) picking the particular patterns that are to be
rewarded, according to some success criterion.  Here is the problem:
if you are not careful you will assume MORE intelligence in the
preprocessor than you were hoping to get the core of the system to
learn.  There are other issues, but that is one of the main ones.


For the record I agree with this critique of some of the neuroscience
views of reinforcement learning in the brain.


What I find tremendously frustrating is the fact that people are still
so dismally unaware of these issues that they come out with statement
such as the one in the quote:  speaking as if the idea of reward
assigment was a fantastic idea, and as if the neuroscience discovery of
a possible mechanism really meant anything.  The neuroscience discovery
was bound to collapse:  I said that much of it the first time I heard of
it, and I am glad that it has now happened so quickly.  The depressing
part is that the folks who showed it to be wrong think that they can
still tinker with the mechanism and salvage something out of it.


I think they do this because they haven't found a better hypothesis
and have too much invested in the previous status quo. I'd be curious
to know whether your hypothesis for a motivation system has the
potential for the same simple signal, given to systems with different
histories, to cause the system to attempt to get that signal again
(addiction being the pure example of this). This is one of the
important phenomena I require a motivational system to explain.

 Will Pearson



Re: [agi] Enumeration of useful genetic biases for AGI

2007-02-14 Thread William Pearson

On 14/02/07, Ben Goertzel [EMAIL PROTECTED] wrote:


Does anyone know of a well-thought-out list of this sort.  Of course I
could make one by surveying
the cognitive psych literature, but why reinvent the wheel?


None that I have come across. Biases that I have come across are
things like paying attention to face-like objects (1), and the ongoing
debate over language centres; that is, we are biased to expect
language of some variety.

These two biases I think are parts of the very important general bias
to expect other intelligent agents that we can learn from. Without
that starting bias, or the ability to acquire the general form of that
bias (the ability to learn almost arbitrary facts/skills/biases from
other agents), I think an AGI is going to be very slow at learning
about the world, even if its powers of inference are orders of
magnitude above humans'.


 Will Pearson
1. 
http://info.anu.edu.au/mac/Media/Research_Review/_articles/_2005/_researchreviewmckone.asp



Re: Re: [agi] Language acquisition in humans: How bound up is it with tonal pattern recognition...?

2006-12-02 Thread William Pearson

On 02/12/06, Ben Goertzel [EMAIL PROTECTED] wrote:

 I think that our propensity for music is pretty damn simple: it's a
 side-effect of the general skill-learning machinery that makes us memetic
 substrates. Tunes are trajectories in n-space as are the series of motor
 signals involved in walking, throwing, hitting, cracking nuts, chipping
 stones, etc, etc. Once we evolved a general learn-to-imitate-by-observing
 ability it will get used for imitating just about anything.

Well, Steve Mithen argues otherwise in his book, based on admittedly
speculative interpretations of anthropological/archaeological
evidence...

He argues for the presence of a specialized tonal pattern recognition
module in the human brain, and the specific consequences for language
learning of the existence of such a module...



Hmm, on the surface that theory would seem to be hard pressed to
explain occurrences like this:

http://www.cbsnews.com/stories/2000/04/25/60II/main188527.shtml

Unless it has more complexity than you have described.

Will Pearson



Re: [agi] Information extraction from inputs and an experimental path forward

2006-11-22 Thread William Pearson

On 21/11/06, Pei Wang [EMAIL PROTECTED] wrote:

That sounds better to me. In general, I'm against attempts to get
complete, consistent, certain, and absolute descriptions (of either
internal or external state), and prefer partial,
not-necessarily-consistent, uncertain, and relative ones --- not
because the latters are better, but because they are what we can
expected in realistic situations. Also, I'm doubtful about any usage
of terms like axiom and proof outside mathematics.


Okay, axiom was used in a very lax way; I apologise. Change that to a
statement about the world/agent, rather than a statement about the
direct way to change the memory, which is used in the first phase of
experimentation. I didn't use the word proof, though.


So you are proposing an approach to describe memory change by
evaluating alternatives, right? Will it be similar to what is usually
called belief change (for example, see
http://www.pims.math.ca/science/2004/NMR/bc.html )?



I wasn't familiar with the belief change literature, so it took me a
little time to get the bare bones. It is interesting, and I expect
there to be some crossover with some of the potential higher-level
mechanisms within my system, but at the fundamental level I think the
approaches are quite different. For example, I stress a more pragmatic
view of the worth of a memory change: how good the system is at
solving the problems put to it after the memory change is more
important than internal consistency.

And, at least from the cursory reading I have done, it seems that most
belief change work lacks detail on exactly how to turn belief into
action. Making beliefs available for logical reasoning is also not a
priority of mine; for example, instructions on how to solve a physical
task need not be stored as explicit beliefs, that is, I am happy for
memory changes to be stored procedurally.
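
A sketch of that pragmatic criterion (Python; the agent interface --
solve, copy, memory -- is assumed purely for illustration, not an
existing API): a candidate memory change is scored by the change in
task performance it produces, not by the consistency of the resulting
beliefs.

def worth_of_change(agent, change, tasks, trials=10):
    """Return the performance delta from applying 'change' to the agent's memory."""
    def score(a):
        return sum(a.solve(t) for t in tasks
                   for _ in range(trials)) / (len(tasks) * trials)
    baseline = score(agent)
    candidate = agent.copy()          # assumed: cheap copy of agent + memory
    change.apply(candidate.memory)    # assumed: change knows how to apply itself
    return score(candidate) - baseline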

These views may change after I have read more into it, but in the
interests of continuing discussion, I shall put these views forward
initially.

 Will Pearson



Re: [agi] A question on the symbol-system hypothesis

2006-11-14 Thread William Pearson

Richard Loosemore


 As for your suggestion about the problem being centered on the use of
 model-theoretic semantics, I have a couple of remarks.

 One is that YES! this is a crucial issue, and I am so glad to see you
 mention it.  I am going to have to read your paper and discuss with you
 where the idea of the distinction between a model-theoretic and a
 system-centric semantics originated.  I have been talking that way for
 years, but I have not seen it discussed explicitly in print (I could
 have sworn that Hofstadter said something like this, but maybe I dreamed
 that).



Brian Cantwell Smith covers similar topics in his book On the Origin
of Objects. I have only browsed the beginning of it, though, so I
cannot give it an unconditional recommendation.

A talk of his can also be found  here
http://www.c-span.org/congress/digitalfuture.asp

Will Pearson

P.S.


Whoever else interested in these papers can also send me emails.
However, since these papers are not available for the public, please
don't distribute them.


I would be interested in a copy of these papers.



[agi] Voodoo meta-learning and knowledge representations

2006-09-27 Thread William Pearson

I am interested in meta-learning voodoo, so I thought I would add my
view on KR in this type of system.

If you are interested in meta-learning the KR you have to ditch
thinking about knowledge as the lowest level  of changeable
information in your system, and just think about changing state. State
is related to knowledge in that states can represent knowledge. The
difference lies in the fact that you can't change the knowledge to
change the knowledge representation, however you can change state to
change the knowledge representation.

We do this when we program computers with different KRs, for example.
You can't call the low-level bits and bytes of a computer a KR,
because they are not intrinsically about any one thing; they are just
state.

Of course with this meta-learning situation, you do have to give an
initial KR to be modified and improved upon, so discussion of the
initial KR is still interesting.
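
A tiny, runnable illustration of the point (Python): the very same
low-level state supports quite different representations depending on
how it is read.

import struct

state = struct.pack("<4B", 72, 105, 33, 0)         # four raw bytes of "state"

as_bytes = struct.unpack("<4B", state)             # (72, 105, 33, 0)
as_word  = struct.unpack("<I", state)[0]           # one little-endian 32-bit int
as_text  = state.rstrip(b"\x00").decode("ascii")   # "Hi!"
print(as_bytes, as_word, as_text)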

Will Pearson



Re: [agi] Voodoo meta-learning and knowledge representations

2006-09-27 Thread William Pearson

On 27/09/06, Richard Loosemore [EMAIL PROTECTED] wrote:

William Pearson wrote:
 I am interested in meta-learning voodoo, so I thought I would add my
 view on KR in this type of system.

 If you are interested in meta-learning the KR you have to ditch
 thinking about knowledge as the lowest level  of changeable
 information in your system, and just think about changing state. State
 is related to knowledge in that states can represent knowledge. The
 difference lies in the fact that you can't change the knowledge to
 change the knowledge representation, however you can change state to
 change the knowledge representation.

 We do this when we program computers with different KR for example.
 You can't call the low level bits and bytes of a computer a KR,
 because they are not intrinsically about any one thing, they are just
 state.

 Of course with this meta-learning situation, you do have to give an
 initial KR to be modified and improved upon, so discussion of the
 initial KR is still interesting.

I can't quite understand what you are saying.

Metalearning, the way I would use it (if I did, which is not very
often) means some adaptive process to find learning mechanisms  that
actually work.


I am using it in a closely related sense: start with a learning
mechanism, then alter that learning mechanism and knowledge
representation to suit the specific problems the system faces, with
the information available to it. Since I believe that this sort of
meta-learning occurs in humans, I think that a physicist has
different learning mechanisms and knowledge representations than a
racing car driver when thinking about a car's movement, despite both
starting with similar knowledge representations and mechanisms at
birth. Similarly, there will be differences in knowledge
representations between a beginning chess player and a master.
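
A toy sketch of the sort of selection I mean (my own illustrative
construction, not a claim about how brains do it): keep more than one
candidate learning mechanism around, score them on the problem stream
the system actually faces, and adopt whichever has performed better.

# Toy meta-learning sketch: two candidate learning mechanisms compete on the
# system's actual problem stream, and the better-scoring one is adopted.
import random

def last_value_learner(history):
    return history[-1] if history else 0.0

def running_mean_learner(history):
    return sum(history) / len(history) if history else 0.0

mechanisms = {'last_value': last_value_learner, 'running_mean': running_mean_learner}
errors = {name: 0.0 for name in mechanisms}

history = []
for _ in range(1000):
    observation = 10.0 + random.gauss(0, 1)      # a noisy but stable quantity
    for name, learner in mechanisms.items():
        errors[name] += abs(learner(history) - observation)
    history.append(observation)

adopted = min(errors, key=errors.get)
print('adopted mechanism:', adopted)             # running_mean wins on this stream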

You could test this by looking at the brain regions that people use
when solving problems. If this varies on a person-by-person basis,
then it is likely that some form of meta-learning of the sort I am
interested in occurs in humans.


This is really a methodological issue, about the
procedures we set up to find adequate learning mechnanisms, not a
run-time issue for the AGI.


In your view of AI maybe. Not mine though.


Thus:  I see a class of KR systems being defined, each member of which
differs from others in some more-or-less parameterized way, and then a
systematic exploration of the properties (mostly the stability and
generative power) of the members of that class.

Most importantly, I do not see us going back to some extremely
impoverished blank slate class of systems when we start this empirical
process:  I specifically want to see a base system that captures the
best knowledge we have about the human cognitive system.



I agree to a certain extent. A blank slate view is not appropriate.
And if you are trying to do exactly the same thing as humans, then
capturing the knowledge is the best way forward. However, different
modalities may require different initial knowledge representations.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] AGI open source license

2006-08-28 Thread William Pearson

On 28/08/06, Russell Wallace [EMAIL PROTECTED] wrote:

On 8/28/06, Stephen Reed [EMAIL PROTECTED] wrote:
 Google wouldn't work at all well under the GPL. Why? Because if everyone
had their own little Google, it would be quite useless [1]. The system's
usefulness comes from the fact that there is only one Google, and it is
_big_, in terms of both knowledge and the computing resources to use that
knowledge.


But Google gets its knowledge from lots of little actors (web page
makers). I suspect the thing that will replace Google will get its
information from lots of little AIs, each attached to a person,
government or other organisation. While AGI will likely be a Google
replacer, it will also be an Outlook replacer: the micro scale and
the macro.

If the macro AGI can't translate between differences in language or
representation that the micro AGIs have acquired from being open
source, then we probably haven't done our job properly.

 Will Pearson

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] AGI open source license

2006-08-28 Thread William Pearson

On 28/08/06, Russell Wallace [EMAIL PROTECTED] wrote:

On 8/28/06, William Pearson [EMAIL PROTECTED] wrote:


 If the macro AGI can't translate between differences in language or
 representation that the micro AGIs have acquired from being open
 source, then we probably haven't done our job properly.


 But I don't think that will. I think that job is impossible to do, or
rather that doing it would require a complete, fully-educated AGI - which is
precisely what we are trying to achieve, so we can't rely on its existence
while we are trying to build it.



I was thinking more long term than you. I agree that in the first
phase we can't rely on it being able to translate differing
information from different AGIs. But to start with I wouldn't attempt
the Google killer, merely the Outlook killer.

We may well not have enough computing resources available to do it on
the cheap using local resources. But that is the approach I am
inclined to take; I'll just wait until we do. The open source
distributed Google killer will have the problem of who decides what
goals the system has/starts with (depending upon your philosophy),
and of how to upgrade the collective if the goals were incorrect to
start with. It is also not as amenable to experiment as the
micro-level systems are.

Will

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] AGI open source license

2006-08-28 Thread William Pearson

On 28/08/06, Russell Wallace [EMAIL PROTECTED] wrote:

On 8/28/06, William Pearson [EMAIL PROTECTED] wrote:



 We may well not have enough computing resources available to do it on
 the cheap using local resources. But that is the approach I am
 inclined to take, I'll just wait until we do.


 Computing power isn't the only issue, and probably not the most important
one; what do you think an Outlook killer could do that Outlook doesn't
already do, and how would it know how to do it?


Things like hooking it up to low-quality sound and video feeds and
having it judge, by posture/expression/time of day, what the most
useful piece of information in the RSS feeds/email etc. to provide to
the user is. We would have to program a large amount of the behaviour
to start with, but through the dynamics and mechanisms we create it
would also acquire more information about what the individual user
wanted.


 The open source
 distibuted google killer will have the problem of who decides what
 goals the system has/starts with (depending upon your philosophy)


 Do what the users want you to do.


Hmm. Possibly what we are talking about is not so different.



 and
 how to upgrade the collective if the goals were incorrect to start
 with.


 In the case of an open source AGI project, there would be no requirement
that all users form a collective as far as their goals are concerned, only
that they agree on running, maintaining and enhancing the software to serve
their separate goals, just as is the case with e.g. the Internet today.


Wouldn't interoperability be maintained by the same sort of pressures
that mean everyone's tweaked version of Open Office shares the same
file formats? The fact that the first mover to break compatibility
loses out, and so benefits from remaining compatible?

Will

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] AGI open source license

2006-08-28 Thread William Pearson

On 28/08/06, Russell Wallace [EMAIL PROTECTED] wrote:

On 8/28/06, William Pearson [EMAIL PROTECTED] wrote:

 Things like hooking it up to low quality sound video feeds and have it
 judge by posture/expression/time of day what the most useful piece of
 information in the RSS feeds/email etc to provide to the user is. We
 would have to program a large amounts of the behaviour to start with,
 but also by the dynamics and mechanism we create it would get more of
 an information about what the individual user wanted.


 Hmm... okay... it's not obvious to me that would be useful, but maybe it
would. The nice thing about being a pessimist, one's surprises are more
likely to be pleasant ones. Surprise me ^.^


Possibly I am not explaining things clearly enough. One of my
motivations for developing AI, apart from the challenge, is to enable
me to get the information I need, when I need it.

As a lot of the power I have in this world is through what I buy, I
need to have this information available when I might buy something,
which may be when I am in social situations etc. I can be a much
better ethical consumer with the details I need given to me at the
right time. As such I am interested in wearable and ubiquitous
computing. Due to the constraints wearable computers place upon the
designer, you really want the correct information given to the user
and nothing else that may distract them unnecessarily.

Knowing what the correct information is will entail knowing about the
user and the user's current environment. Whether they rate energy
efficiency or CO2 emissions as a priority, for example. It will also
entail the Google-like system you are focused upon.

I also think that a system designed to understand our body
language/gestures/moods will be more easily and naturally trained, as
it has more information coming in about what we want and we will not
have to be so explicit in our instructions.

I'm also a pessimist in that I don't think an era of light will ensue
just because AI is invented, but I hope it will allow the few people
that care to close the information gap that exists between producers
and consumers, or the government and the populace for that matter,
and provide an economy marginally closer to what is promised by free
market theory.

You have hinted at the normative value of AI; I'm curious what you
find it to be. Is it simply to speed up technological development so
that we can escape the gravity well?

 Will

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] General problem solving

2006-07-08 Thread William Pearson

On 06/07/06, Russell Wallace [EMAIL PROTECTED] wrote:

On 7/6/06, William Pearson [EMAIL PROTECTED] wrote:
 A
 generic PC almost fulfils the description, programmable, generic and
 if given the right software to start with can solve problems. But I am
 guessing it is missing something. As someone interested in RL systems
 I would say an overarching goal system for guiding programmability,
 but I would be interested to know what you think.

 A useful line of inquiry. Okay, let's see... a PC isn't a general purpose
problem solver at all. To see why not, suppose you just have bare hardware,
a PC and whatever peripherals you want but no software at all. Well the
thing'll just sit there, it won't solve any problems.


True. I was thinking more about a PC with a nominal amount of
software pre-installed, just as we have certain things pre-installed:
say gcc, make and the apt-get package management system.



 Now assembler might seem general purpose, yes? But in practice you find
when you set out to actually solve any problem in it, you spend almost all
your time dealing with the headaches of assembler, leaving so little for
actually solving the problem that you'll die of old age before you get very
much done. If you get up to a block structured language like C (Algol, PL/1,
Pascal etc), it might appear to be closing off possible ways of doing things
but in practice the way of doing things it provides is greatly superior in
very many contexts so you'll get much more done.


Agreed, but I think looking at it in terms of a single language is a
mistake. Humans use body language and mimicry to acquire spoken
language, spoken and body language to acquire written language, and
then move on to maths etc. So a starting general problem solver may
have a simple language/representation that it uses to bootstrap to
other languages/representations suited to the tasks that it is
attempting to do.


 So we have the distinction between:

 Generic: equally good or bad across a wide range of situations.

 General: Good across a wide range of situations.

 And we note that to be generic you need only be flexible, but to be general
you need to be high level, have a powerful collection of prebuilt tools.


Agreed about the need for pre-built tools. Would you agree that some
of these tools are tools that allow the building of other tools? I
would also contend that the language the new tools are built in needs
to be generic so that a general system can be created.

If you agree with this then hopefully you will also see that there
needs to be some way of constraining the tools built, so that bugs
are not introduced into the system, or so that if they are they can
be corrected. This I think needs a low-level approach as well as the
high-level tools that we have been talking about. This is what I am
interested in.

snip agreed parts


 But that is a way to think of the problem: how to create a software system
that includes a sufficiently powerful collection of tools to render
tractable things you can't do with what we have at the moment.


I don't claim to have any definite answers on this level. But the
sorts of systems I am going to experiment with, once I have built the
low-level constraining mechanism, are things like the following:
systems that have tools that build new search
algorithms/representations and language parsers based on linguistic
instruction and experimentation.

Will Pearson

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] General problem solving

2006-07-08 Thread William Pearson

On 08/07/06, Russell Wallace [EMAIL PROTECTED] wrote:

On 7/8/06, William Pearson [EMAIL PROTECTED] wrote:

 Agreed, but I think looking at it in terms of a single language is a
 mistake.  Humans  use body language and mimicry to acquire spoken
 language and spoken/body to acquire written and then onto maths etc.
 So a starting general problem solver may have a simple
 language/representation that is uses to bootstrap to other
 languages/representations suited to the tasks that it is attempting to
 do.


 Well indeed that is the situation we have: we start with some programming
language, and use it to build stuff. But note that it won't be a general
problem solver until we've built a lot more stuff than we currently have;
until then it's just a _generic_ tool that we hope to use to build a
_general_ kit.


Okay.

snip


 If you agree with this then hopefully you will also see that there
 needs to be someway of constraining the tools built so that bugs are
 not introduced into the system or that if they are they can be
 corrected. This I think needs a low level approach as well as the high
 level tools that we have been talking about. This is what I am
 interested in.


 Clearly when (not if) bugs are introduced into any system they need to be
corrected. But I'm not sure what you mean by low level approach - low level
tools by their very nature can only catch low level bugs, null pointer
references or array bounds or the like; that's handy as far as it goes but
the interesting bugs are at higher levels. Can you explain what you're
getting at here?



I am not relying on one single approach to dealing with bugs, as I
expect new methods of dealing with bugs to be introduced (what we
call rational thought being one of them). When process X wishes to
correct process Y, it basically starts a conflict. X might be correct
and Y malfunctioning, or X might be malfunctioning in trying to alter
Y and may cause further harm.

Note that malfunction can also mean functioning poorly, so X may be
trying to improve Y as well as to correct an error.

Resolving this conflict requires some form of knowledge of how well
the processes have been performing. If X has been performing better
than Y, we can hope that X is functioning correctly and allow X to
correct Y. How can we know whether a process is performing well? It
has to be up to each process to monitor the others it interacts with.
As no single process has the full knowledge (each process interacts
with many others), a process has to be scored by the many other
processes on how useful it is being. They each give it a quantity
called usefulness, utility, happiness points or fitness (as it sets
up a form of evolution). To minimise errors in this usefulness
system, usefulness has to be conserved: a process can only give out
some of the usefulness it has. Also, to correct an error or create an
improvement, some of the usefulness of the process being corrected
has to be paid and used up, to reflect the amount of potential damage
the correcting tool may have done. So if a process is not being given
any usefulness it will be very easy to get rid of.

So how is usefulness introduced into the system? By a fixed system
that monitors feedback from the world and then gives usefulness to
the processes that are outputting to the world at that time. They
then have to pass some of it back to the processes that have been
useful to them, such as the ones that give them useful information or
the tools that created them, and those recipients pass it back in
turn, and so on.
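
A rough sketch of the bookkeeping I have in mind (the class, names
and numbers are mine and purely illustrative, not a finished design):
usefulness only enters from the fixed world-feedback monitor, is
conserved when passed between processes, and correcting another
process uses up some of that process's usefulness.

# Rough sketch of the conserved-usefulness bookkeeping (illustrative only).

class Process:
    def __init__(self, name, usefulness=0.0):
        self.name = name
        self.usefulness = usefulness

    def pay(self, other, amount):
        """Pass on usefulness; it is conserved, never created by a process."""
        amount = min(amount, self.usefulness)
        self.usefulness -= amount
        other.usefulness += amount

    def correct(self, target, cost):
        """Altering another process uses up some of the target's usefulness,
        reflecting the potential damage done; only allowed if we outrank it."""
        if self.usefulness <= target.usefulness:
            return False                  # conflict resolved in the target's favour
        target.usefulness = max(0.0, target.usefulness - cost)
        return True

def world_feedback(output_process, reward):
    """The one fixed source of new usefulness: feedback from the world goes
    to whatever was outputting at the time."""
    output_process.usefulness += reward

x, y = Process('X'), Process('Y')
world_feedback(x, 10.0)     # X was controlling output when good feedback arrived
x.pay(y, 3.0)               # X passes some on to a process that helped it
print(x.correct(y, cost=2.0), x.usefulness, y.usefulness)   # True 7.0 1.0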

I am currently thinking of connecting this to a big button users can
smash when the system malfunctions.

The reason why I called this approach to error correction low-level
is that it requires the passing of usefulness to be conserved and to
be done in hardware (or at least to be unalterable in the language
used to construct tools), else it can be subverted by aberrant
processes. Although the actual error correction can happen at any
level, it is the low-level mechanism that allows it to happen or not.

 Will Pearson

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] Flow charts? Source Code? .. Computing Intelligence? How too? ................. ping

2006-07-06 Thread William Pearson

On 06/07/06, Russell Wallace [EMAIL PROTECTED] wrote:

On 7/6/06, William Pearson [EMAIL PROTECTED] wrote:

 How would you define the sorts of tasks humans are designed to carry
 out? I can't see an easy way of categorising all the problems
 individual humans have shown there worth at, such as key-hole surgery,
 fighter piloting, cryptography and quantum physics.


 Well, there are two timescales involved, that of the species and that of
the individual. The short answer to the first question is: survival in Stone
Age tribes on the plains of Africa. That this produced an entity that can do
all the things on your list invokes something between wonder and existential
paranoia depending on one's mood and predilections.


Wonder for me. This long-timescale viewpoint is useful because it
tells us that there will be lots of programming in humans that is not
useful for a robot/computer that has to act and survive in the real
world. For example, blindly copying a baby human neural net to an
electronic robot wouldn't be smart: it wouldn't have the inherent
fear/knowledge it would need to stay away from water.


(The absence of any
steps of the Great Filter between the Tertiary and the Cold War is a common
assumption - but it is only an assumption. But I digress.)

 On the individual timescale we're programmable general purpose problem
solvers:


This is an interesting term. If we could define what it means
precisely we would be a long way towards building a useful system.
What do you think is the closest system humanity has created to a
pgpps? A generic PC almost fulfils the description: programmable,
generic, and if given the right software to start with it can solve
problems. But I am guessing it is missing something. As someone
interested in RL systems I would say an overarching goal system for
guiding programmability, but I would be interested to know what you
think.


We're good at learning from our environment, but that only gets you
so far, by itself it won't let you do any of the above things because you'll
be dead before you get the hang of them.


So this whittles away AIXI and similar formalisms from the possible
candidates for being a pgpps.


However, our environment also
contains other people and we can do any of the above by learning the
solutions other people worked out.


Agreed. I definitely think this is where a lot of work needs to be
done. There are a variety of different methods by which we can learn
from others: copying others, getting instruction, or even just
knowing something is possible can enable you to get to the same end
point without exact copying, e.g. building an atom bomb.

 Will Pearson

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] How the Brain Represents Abstract Knowledge

2006-06-18 Thread William Pearson

On 17/06/06, arnoud [EMAIL PROTECTED] wrote:




 As long as some of those things are learnt by watching humans doing
 them, in practise I agree with you. In theory though a sufficiently
 powerful Giant look up table, could also seem to learn these things,
 so I also going to be look at the systems insides and see if they look
 like Look Up tables.



Ah, so this is how you view those practical knowledge domains. Easy enough
for lookup tables. Maybe for (near-)infinite ones, otherwise forget it.



I use LUT as a shorthand to indicate something that looks intelligent
but isn't on the right path, Aibo for example. This is based on Ned
Block's objection to the Turing test. Your conception of how to
create intelligence, or any test that is purely based on behaviour,
might fail in the same way.

I'm not saying that they will definitely fail; the heuristic that if
it looks intelligent then it is has some value, especially in
everyday life. But I would still prefer a good testable theory of
cognitive systems to base a system design upon.

 Will Pearson

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] How the Brain Represents Abstract Knowledge

2006-06-15 Thread William Pearson

On 15/06/06, arnoud [EMAIL PROTECTED] wrote:

On Thursday 15 June 2006 21:35, Ben Goertzel wrote:
  If this doesn't seem to be the case, this is because of that some
  concepts are so abstract that they don't seem to be tied to perception
  anymore. It is obvious that they are (directly) tied to more concrete
  concepts (be defined/described in terms of ...), but those concepts can
  also still be very abstract. And so abstract concepts can seem to only
  depend on other abstract concepts, and together lead their own life, not
  tied to/determined by perception/sensation. However, if you would/could
  trace all the dependencies of any concept you would end up on the
  perception level.

 Hmmm... well, although I learned mathematics via perceiving books and
 spoken words and so forth,

And by interacting with the world: counting objects, rotating objects,
translating objects, manipulating sequences of symbols on the basis of
rules... And making predictions about the effect of your actions.


This conversation reminds me of a paper I read by Aaron Sloman
recently on symbol grounding vs symbol tethering. I can't find the
one I read, but here is a similar one:

http://www.cs.bham.ac.uk/research/cogaff/talks/meaning-types-slides.pdf

I am definitely of the symbol tethering view. One of the examples
Sloman uses towards tethering (that is, loose coupling) is the
concept of neutrinos, which certainly hasn't helped me predict the
effects of my actions, apart from in the very loose way that it helps
me predict what I may find on wikis when I look up neutrinos.

Which is as useful as knowing information about soccer players. And
yet I value the neutrino information more, because of the way I have
been told it connects with all the other information that was useful
when I fiddled about with chemicals.

Will Pearson

P.S. If people are wondering why I, someone interested in
evolutionary systems and distributed representation, am interested in
the symbol grounding arguments, it is because eventually the starting
modules that evolve will have to be at least as complex as the
systems people are suggesting for AGI, and will have to have
something like the shared representation and world modelling people
suggest. That is, I expect to put a fair amount of precocial
programming, which then evolves, into the system.

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: Worthwhile time sinks was Re: [agi] list vs. forum

2006-06-13 Thread William Pearson

On 13/06/06, sanjay padmane [EMAIL PROTECTED] wrote:


On the suggestion of creating a wiki, we already have it here
http://en.wikipedia.org/wiki/Artificial_general_intelligence


I wouldn't want to pollute the wiki proper with our unverified claims.


, as you know, and its exposure is much wider. I feel, wiki cannot be a good
format for discussions. No one would like their views edited out by a random


It is not meant so much to replace our discussions on the list as to
display the various questions people have asked, and the various
answers to them, in a persistent and easy-to-use fashion.

Will

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: Worthwhile time sinks was Re: [agi] list vs. forum

2006-06-13 Thread William Pearson

On 13/06/06, Yan King Yin [EMAIL PROTECTED] wrote:


Will,

I've been thinking of hosting a wiki for some time, but not sure if we have
reached critical mass here.


Possibly not. I may just collate my own list of questions and answers
until the time does come.


When we get down to the details, people's views may diverge even further.  I
can think of some potential points of disagreement:
0. what's the overall AGI architecture?
1. neurally based or logic based?


I think this question would be better put as analog or digital. While
the system I am interested in uses logical operations (AND, OR etc.)
for running a program, I do not expect it to be constrained to be
logical in the everyday sense.


2. what's the view on Friendliness?
3. initially, self-improving or static?


I think the distinction would need to be between static, weakly
self-improving (like the human brain) and strongly self-improving.


4. open source or not?
5. commercial or not?


I would also add

6. Does specialist hardware need to be made to make AGI practical?
7. Do you deal with combinatorial explosions in your AGI, and if not, why not?
8. Similarly for the No Free Lunch theorems.


May be we can set up a simple poll place to see who agrees with whom??


It might be hard to keep the poll simple.

Will

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] Reward versus Punishment? .... Motivational system

2006-06-12 Thread William Pearson

On 12/06/06, James Ratcliff [EMAIL PROTECTED] wrote:

Will,
  Right now I would think that a negative reward would be usable for this
aspect.


I agree it is usable. But I am not sure it is necessary; you can just
normalise the reward value.

Let's say for most states you normally give 0 for a satiated entity,
100 for the best state and -100 for the worst. You can just transform
that to 0 for the worst state, 100 for the everyday satiated state
and 200 for the best state, without affecting the choices that most
reinforcement systems would make.

So pain would be a below-baseline reward.
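
A toy illustration of the normalisation point (illustrative values
only): adding a constant to every reward leaves the ordering of the
states, and hence the greedy choice, unchanged.

# Toy check: shifting all rewards by a constant does not change which option
# a reward-maximising chooser prefers (for most simple reinforcement setups).
rewards = {'worst': -100, 'satiated': 0, 'best': 100}
shifted = {state: value + 100 for state, value in rewards.items()}

best_original = max(rewards, key=rewards.get)
best_shifted = max(shifted, key=shifted.get)
print(best_original, best_shifted)    # best best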


I am using the positive negative reward system right now for
motivational/planning aspects for the AGI.
So if sitting at a desk considering a plan of action that might hurt himself
or another, the plan would have a negative rating, where another safer plan
may have a higher rating.


Heh. Well, I expect an AI system that worked like a human would have
a very tenuous link between the motivation and planning systems.

The tenuous link is ably shown by my own actions. I have stated that
I think the plausible genetically specified positive motivations are
to do with food, sex and positive social interaction. Yet I tend to
plan how to create interesting computer systems, which isn't the best
route to any of the above.

More later...

 Will Pearson

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] Bayes in the brain

2006-06-11 Thread William Pearson

On 11/06/06, Philip Goetz [EMAIL PROTECTED] wrote:

An article with an opposing point of view than the one I mentioned yesterday...

http://www.bcs.rochester.edu/people/alex/pub/articles/KnillPougetTINS04.pdf


Why do you find the question of whether there are Bayesian estimators
in the brain interesting?

I shall explain why I ask this question (from the point of view of
building a weakly self-improving optimisation system).

The approach I take is baby-based, that is, starting from a simple
system that can extract information from the environment and become a
more complex system. From this I have to question which parts of an
adult system are in-built and important for the development of the
system, and which are products of the developmental process.

For example this review publication on neural plasticity
http://neuro.caltech.edu/publications/nbb408.pdf
suggests that some of the neural locations normally used for
processing visual information can be used for processing braille if
vision is lost early on.

This suggests that the sections of the brain responsible for the
Bayesian-optimal classification (if it exists) of certain signals
aren't all genetically programmed for those specific signals. So the
more interesting question becomes: how are they hooked up to those
signals, and how is the correct or nearly correct bias for learning
about them acquired or assigned, whether Bayesian-optimal or not?

Such questions also arise in how we manage to usefully integrate
visual or orientation data that we acquire through our tongue(!) into
our models of the world.

http://www.wicab.com/

Will Pearson

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] Reward versus Punishment? .... Motivational system

2006-06-10 Thread William Pearson

On Fri, 09 Jun 2006 19:13:19 -500, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:


What about punishment?


Currently I see it as the programs in control of outputting (and
hence the ones getting reward) losing that control and the chance to
get reinforcement. However, experiment or a better theory would be
needed to determine whether this is sufficient or whether negative
reward would be needed.

Will

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Motivational system was Re: [agi] AGI bottlenecks

2006-06-09 Thread William Pearson

On 09/06/06, Richard Loosemore [EMAIL PROTECTED] wrote:

 Likewise, an artificial general
intelligence is not a set of environment states S, a set of actions A,
and a set of scalar rewards in the Reals.)

Watching history repeat itself is pretty damned annoying.



While I would agree with you that the set of environmental states and
actions is not well defined for anything we would call intelligence,
I would argue the concept of rewards, though probably not real-valued
ones, does have a place in understanding intelligence.

It is very simple and I wouldn't apply it to everything that
behaviourists would (we don't get direct rewards for solving crossword
puzzles). But there is a necessity for a simple explanation for how
simple chemicals can lead to the alteration of complex goals. How and
why do we get addicted? What is it about morphine that allows the
alteration of a brain to one that wants more morphine, when the desire
for morphine didn't previously exist?

That would be like bit-flipping a piece of code or a variable in an
AI, and the AI then deciding that bit-flipping that code was somehow
good and should be sought after.

The RL answer would be that the reward variable was altered.

If your model of motivation can explain that sort of change, I would
be interested to know more. Otherwise I have to stick with the best
models I know.

Will

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] Motivational system

2006-06-09 Thread William Pearson

On 09/06/06, Dennis Gorelik [EMAIL PROTECTED] wrote:

William,

 It is very simple and I wouldn't apply it to everything that
 behaviourists would (we don't get direct rewards for solving crossword
 puzzles).

How do you know that we don't get direct rewards on solving crossword
puzzles (or any other mental task)?


I don't know; I only make hypotheses. As far as my model is
concerned, the structures that give direct reward have to be pretty
much in-built, because in a selectionist system allowing a
selected-for behaviour to give direct reward would quickly lead to
behaviour that gives itself direct reward and doesn't actually do
anything.


Chances are that under certain mental condition (achievement state),
brain produces some form of pleasure signal.
If there is no such reward, then what's your explanation why people
like to solve crossword puzzles?


Why? By indirect rewards! If you will allow me to slip into my
economics metaphor, I shall try to explain my view of things. The
consumer is the direct reward giver, something that attempts to mold
the system into producing certain products; it doesn't say what it
wants, just what is good, by giving money (direct reward).

In humans this role is played by the genome constructing structures
that say nice food and sex are good, along with respect from your
peers (probably the hypothalamus and amygdala).

The role of raw materials is played by the information coming from the
environment. It can be converted to products or tools.

You have retail outlets that interact directly with the consumer;
being closest to the outputs, they directly get the money that allows
their survival. However they have to pass some of the money on to the
companies that produced the products they passed on to the consumer.
This network of money passing will have to be carefully controlled so
that no company produces more money than it was given (currently I
think of the network of dopaminergic neurons as being this part).

Now with this sort of system you can make a million just-so stories
about why one program that passes reward to another, that is, gives
indirect reward, would be selected. This is where the complexity
kicks in. In terms of crossword solving, one possibility is that a
program closer to the output, and with lots of reward, has selected
for rewarding logical problem solving because it is in general useful
for getting reward, and so passes reward on to a program that has
proven its ability to solve logical problems, possibly entering into
a deal of some sort.
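
To make the metaphor slightly more concrete, here is a toy sketch of
the money flow (the program names and the 50% share are made up for
illustration): the consumer pays only at the output, and each program
passes a share of what it received back to its suppliers, so no
program in the network can create money.

# Toy sketch of the money/reward flow: the consumer pays only at the output,
# and each program passes a share back to its suppliers; money is conserved.
accounts = {'retail_outlet': 0.0, 'crossword_solver': 0.0, 'parser': 0.0}
suppliers = {'retail_outlet': ['crossword_solver'], 'crossword_solver': ['parser']}

def pay(name, amount, share=0.5):
    """Credit a program; it passes on a share of what it received, split among
    the suppliers that helped it. Nothing in the network creates money."""
    accounts[name] += amount
    sups = suppliers.get(name, [])
    if sups:
        portion = amount * share / len(sups)
        for supplier in sups:
            accounts[name] -= portion
            pay(supplier, portion, share)

pay('retail_outlet', 10.0)                # the consumer rewards the output it liked
print(accounts, sum(accounts.values()))   # the total is still 10.0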

This is all very subconscious, as it needs to be in order to
encompass and explain low-level learning such as neural plasticity,
which is very subconscious itself.

Will Pearson

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] The weak option

2006-06-08 Thread William Pearson

On 08/06/06, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:

William Pearson wrote:

 I tried posting this to SL4 and it got sucked into some vacuum.

As far as I can tell, it went normally through SL4.  I got it.


It is harder to tell on Gmail than on other email systems what gets
through; my mistake.

With regards to how careful I am being with the system: one of the
central design principles for the system is to assume the programs in
the hardware are selfish and may do things I don't want. The failure
mode I envisage, rather than exponential self-improvement, is
wireheading, but the safeguards for making sure it can't wirehead
also make sure it is weak.

But as no-one seems interested in discussing this I shall not mention it again.

 Will

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] The weak option

2006-06-08 Thread William Pearson

On 08/06/06, William Pearson [EMAIL PROTECTED] wrote:



With regards to how careful I am being with the system: one of the
central design guidances for the system is to assume the programs in
the hardware are selfish and may do things I don't want. The failure
mode I envisage more than exponential self-improvement is wireheading,
but the safeguards for making sure it can't wirehead also make sure it
is weak.


Eugen asked me off-list what I meant by wireheading: in humans or in
the AI.

I mean it in the AI. My main model of how weak intelligent systems
work is a combination of decentralised reinforcement learning and a
very loose form of neural Darwinism. So each program is a selfish
replicator whose goal is not so much to get reinforcement but to
survive; if the system works to plan, they should do this by
attempting to get positive reinforcement in the prescribed fashion.

However if the system breaks they don't actually need to care about
getting positive reinforcement or the manner in which it occurs; all
they would need to care about is survival. So one failure mode is
wireheading: getting reinforcement in a manner the system designer
didn't choose. Another is reducing, without penalty, the
reinforcement a competitor can get, so that it can't overwrite the
sabotaging program (e.g. running the battery down by computing, so
the controlling program can't achieve its goals). Both failure modes
tend to suggest the AI would sit gibbering in the corner, rather than
taking over the universe.

I am also interested in avoiding monopolies on information within the
system. If that isn't done the evolutionary mechanisms, which choose
which programs should be within the system, would break down, as the
program with the monopoly would have too much power. This failure
mode would be characterised by an inability to get rid of, change or
improve upon a part of the system. That is, programs of the system
would naturally tend to conservatism unless they were forced by
evolutionary pressure. So again not a particularly exciting failure
mode.

As no one program would be allowed to read all the other programs (to
avoid information monopoly and giving away information to
competitors), the system as a whole should be as ignorant of the
meaning/purpose of each program/setting as a human would be if given
access to the important variables within their own brain. So the best
it could do if attempting to botnet the whole Internet would be to
make a copy of itself or send individual programs out.

So the first line of defense is then preventing the robot in the real
world from having access to its own total code, but this is the same
as the first line of defense in stopping physical wireheading. I am
mainly interested in manipulator-less robots such as the Oxford
wearable robot, but for ones with manipulators you could make them
not like to access their own internals, giving them negative feedback
for attempting to open themselves up.

I suppose I might be downplaying the existential risks of strong
self-improvement, but as I am interested in vertebrate kinds of
intelligence, I am more concerned about those failure modes we see in
everyday life (addiction, OCD) of those sorts of systems. The
measures I put in place to try and prevent these failures also happen
to put roadblocks in the path to it becoming strongly self-improving.

Will Pearson

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


[agi] The weak option

2006-06-07 Thread William Pearson

I don't think this has been raised before; the only similar
suggestion is that we should start by understanding systems that
might be weak and then convert them to strong systems, rather than
aiming for weakness that is hard to convert to a strong system.

Caveats:

1) I don't believe strong self-improvement is at all likely, but I shall
treat it as if it were possible for the purposes of this post.

2) I am also not working on AI right this moment, but simply on the
hardware and tools that will allow me to create the sort of AI I am
interested in.

The currently accepted best method for achieving a first Friendly
Strongly Improving System (FSIS) is simply to have teams trying to
build it. I would like to present a slightly contrarian view: to make
the first strongly improving system more likely to be Friendly, it
could actually be better to start a research effort to build a weakly
improving system in public and to think about strong systems in
private.


First, as a prelude, I shall write something about the relative
difficulties of each type of system. Strong systems are hard, or at
least, on the evidence from nature, less probable than weak systems;
otherwise it would be likely that evolution would have found one by
accident as it was searching through chimps, dolphins, humans and
other optimisation processes. In a similar vein a weak system should
be easy, as it seems more probable and we have a number of different
examples of weak systems in nature to use as rough guidelines. Also,
it is generally thought to be easier, or roughly as easy, to build an
unfriendly strong system as a friendly strong system, else there
would be less need to discuss Friendliness.

Currently we are at a stage where equal orders of magnitude of
resources are going into weak and strong general intelligence
research, and a lot less into Friendliness. Assume that the amount of
resources going into each strand of research has some positive
relation to the likely date of that strand's completion. The ideal
would then be to decrease the amount of resources going into strong,
or potentially strong and unfriendly, research and increase the
amount going into Friendly research. Why might concentrating on weak
systems, to start with at least, do that?

The first reason is that it would initially reduce the amount of
resources going into strong systems. By creating a solid and
promising research agenda focused on weak systems, it would draw
those people interested in optimisation processes and lure them on an
easy path away from the very dangerous strong systems. Currently
people are scattered all over the space of optimisation processes;
drawing them to weak ones should lead to a local minimum, as it has
with humans, that will be hard to escape from.

The second reason to start off weak is that experience is a very good
teacher of humans. As humanity built and interacted with weak
systems, they would get experience with non-human optimisation
processes, and so those interested in optimisation processes would be
less likely to commit the anthropomorphic error. Also it is likely
that we would have a fair amount of trouble with poorly designed goal
systems for weak systems, which may inspire people to show more
caution when attempting to create stronger systems. In all they would
see the dangers of stronger systems as a lot more real and probable,
and so increase the amount of effort to make their systems Friendly,
if they went on to create strong systems.

To make a weak system we would need to analyse why the human brain is
weak (possibly due to decentralised control and decentralised changes
on decentralised hardware) and implement a similar system in silicon.
This would not be done in software, so that we can be sure that a
sub-goal stomp cannot completely rewrite the system and lead to
strongness.

People can assume that I have decided to focus on the weak option if
they see my writings elsewhere.

I shall be trying my utmost to ensure that there is no part that can
alter all other parts, and the hardware should maintain this.

I tried posting this to SL4 and it got sucked into some vacuum.

Will Pearson

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] AGI bottlenecks

2006-06-02 Thread William Pearson

On 01/06/06, Richard Loosemore [EMAIL PROTECTED] wrote:


I had similar feelings about William Pearson's recent message about
systems that use reinforcement learning:


 A reinforcement scenario, from wikipedia is defined as

 Formally, the basic reinforcement learning model consists of:

  1. a set of environment states S;
  2. a set of actions A; and
  3. a set of scalar rewards in the Reals.
 

Here is my standard response to Behaviorism (which is what the above
reinforcement learning model actually is):  Who decides when the rewards
should come, and who chooses what are the relevant states and actions?


The rewards I don't deal with: I am interested in external brain
add-ons rather than autonomous systems, so the reward system will be
closely coupled to a human in some fashion.

In the rest of the post I was trying to outline a system that could
alter what it considered actions and states (and bias, learning
algorithms etc.). The RL definition was just there as an example to
work against.


If you find out what is doing *that* work, you have found your
intelligent system.  And it will probably turn out to be so enormously
complex, relative to the reinforcement learning part shown above, that
the above formalism (assuming it has not been discarded by then) will be
almost irrelevant.


The internals of the system will be enormously more complex compared
to the reinforcement part I described. But that won't make it
irrelevant. What goes on inside a PC is vastly more complex than the
system that governs the permissions of what each *nix program can do.
This doesn't mean the permission-governing system is irrelevant.

Like the permissions system in *nix, the reinforcement system is only
supposed to govern who is allowed to do what, not what actually
happens. Unlike the permission system, it is supposed to get that
from the effect of the programs on the environment. Without it both
sorts of systems would be highly unstable.
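
A minimal sketch of what I mean by governing who is allowed to do
what (program names and numbers are illustrative only): the
arbitration layer just decides which program gets the output channel,
based on reinforcement from the environment, and says nothing about
what the programs compute internally.

# Sketch: the reinforcement layer only arbitrates which program controls output,
# based on feedback from the environment; it does not look inside the programs.
import random

programs = {'prog_a': 0.0, 'prog_b': 0.0}        # accumulated reinforcement

def arbitrate(scores, epsilon=0.2):
    """Mostly grant the output channel to the best-reinforced program,
    occasionally trying another so it has a chance to prove itself."""
    if random.random() < epsilon:
        return random.choice(list(scores))
    return max(scores, key=scores.get)

for step in range(100):
    controller = arbitrate(programs)
    # stand-in environment: it happens to find prog_b's output useful
    reward = 1.0 if controller == 'prog_b' else 0.0
    programs[controller] += reward

print(programs)   # prog_b ends up with nearly all of the reinforcement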

I see it as a necessity for complete modular flexibility. If you get
one of the bits that does the work wrong, or wrong for the current
environment, how do you allow it to change?


Just my deux centimes' worth.



Appreciated.



On a more positive note, I do think it is possible for AGI researchers
to work together within a common formalism.  My presentation at the
AGIRI workshop was about that, and when I get the paper version of the
talk finalized I will post it somewhere.



I'll be interested, but sceptical.

 Will

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


[agi] Re: Representing Thoughts

2005-09-23 Thread William Pearson
On 9/20/05, Yan King Yin [EMAIL PROTECTED] wrote:
 William wrote:

  I suspect that it will be quite important in competition between agents. If

  one agent has a constant method of learning it will be more easily
 predicted
  by an agent that can figure out its constant method (if it is simple). If
 it
  changes (and changes how it changes), then
  it will be less predictable and may avoid other agents exploiting it.

  Well, I'm only interested in building an intelligent agent that can
 maintain knowledge and answer queries, for possible application to
 scientific and medical research. I'm not interested in building AIs that
 compete with each other, especially not in military ways. Others may build
 those things, but that's not my purpose.

However science is also a form of competition between agents (humans
being a type of agent), the winner being the most cited.

Let us say that your type of intelligence becomes prevalent; it would
become very easy to predict what this type of intelligence would find
interesting (just feed it all the research that is commonly fed to
it, and then test it). People would then tailor their own research to
be interesting to this type of system (regardless of whether it was
innovative or groundbreaking). It would stultify research.

  They can also be what I think of as soft wired. Programmed but also allowed

  to be altered by other parts of the system.

  Soft wiring is a good concept, but I believe that
 mechanisms of inference may be totally fixed for all pratical purposes, and

 we'll let later generations deal with the extra subtleties.

You can do what you wish. I'm going to study softwiring now.

  If you include such things as reward in labelling, and self-labeling, then

  I agree. I would like to call the feeling where I don't want to go to bed

  because of too much coffee 'the jitterings', and I be able to learn that.

  In the most straightforward analysis, you cannot have an AGI labeling
 things all by itself. Somehow, a teacher must label the concepts even though

 the nameless concepts may have emerged automatically. That's the bottomline.

 How we can do better than that? If your AGI calls coffee XYZ, and you don't

 know what XYZ refers to, then you basically have a Rosetta stone kind of
 problem. Translating between 2 languages requires AGI, which begs the
 question.

This is only the case if we have words that are the same as the
concept that has emerged. In science there is a large amount of
creation of new concepts. What happens if, in studying astronomical
data, the system comes across a new type of star that varies its
colour slightly, and the AGI decides to call it a chromar? The sort
of system you are describing doesn't seem able to do this.

I can see that a lot of learning will be supervised. But other types
will have to be unsupervised if we want it to discover new things.

  But the sub parts of the brain aren't intelligent surely? Only the whole
  is? You didn't define intelligence so I can't determine where our
 disconnect
  lies.

  You said the visual cortex can rewire to auditory parts in an unsupervised

 manner. My question is how do you make use of this trick to build an AGI
 from scratch, without supervised learning?

My goal has never been to dismiss supervised learning from AI
designs; rather it should be softwired in, and there should be
unsupervised methods of learning that act on it.

  I work with a strange sort of reinforcement learning of sorts as a base.
  You can layer whatever sort of learning you want on top of it, I would
  probably layer supervised learning on it if the system was going to be
  social. Which would probably be needed for something to be
  considered AGI.

  Reinforcement learning may be good for procedural learning, but in my
 approach I focus only on knowledge maintenance. I guess reinforcement
 learning is not an efficient way to deal with knowledge.

Not directly, no. But then I am suggesting a layered approach with
supervised learning to do most of the knowledge maintenance. I am
also interested in procedural learning, hence the difference in
emphasis.

  Saying that because the brain uses neurons to classify things, those
  methods of classification are fixed, is like saying because a Pentiums
 uses
  transistors to compute things and they are fixed, what a pentium can
 compute
  is fixed.
 
 Also if all neurons do is feature extraction/classification etc how can we
  as humans reason and cogitate?

  I think the mechanisms of thinking in the brain are not that hard to
 understand. We don't know the exact details but we have some very basic
 understanding of it.

  Induction, deduction we know. However there are many things we don't know.

  For example getting information from other humans is an important part of

  reasoning. Which humans we should trust, who may be
  out to fool us, we don't.

  That's pattern recognition.

It is more than pattern recognition, because we also take into
consideration 

[agi] Re: Representing Thoughts

2005-09-12 Thread William Pearson
On 9/12/05, Yan King Yin [EMAIL PROTECTED] wrote:
 Will Pearson wrote:
 
 Define what you mean by an AGI. Learning to learn is vital if you wish to 
  try and ameliorate the No Free Lunch theorems of learning. 
 
  I suspect that No Free Lunch is not very relevant in practice. Any learning
 
 algorithm has its implicit way of generalization and it may turn out to be 
 good enough.

I suspect that it will be quite important in competition between
agents. If one agent has a constant method of learning it will be
more easily predicted by an agent that can figure out its constant
method (if it is simple). If it changes (and changes how it changes),
then it will be less predictable and may avoid being exploited by
other agents.

 Having a system learn from the environment is superior than programming it 
  by hand and not be able to learn from the environment. They are not
 mutually 
  exclusive. It is superior because humans/genomes have imperfect knowledge
 of 
  what the system they are trying to program will come up against in the 
  environment.
 
  I agree that learning is necessary, like any sensible person would. The 
 question is how to learn efficiently, and *what* to learn. High level 
 mechanisms of thinking can be hard-wired and that would save a lot of time.

They can also be what I think of as soft-wired: programmed but also
allowed to be altered by other parts of the system.

 It depends what you characterise as learning, I tend to include such things
 
  as the visual centres being repurposed to act for audio processing in
 blind 
  individuals as learning. there you do not have labeled examples. 
 
  My point is that unsupervised learning still requires labeled examples 
 eventually.

If you include such things as reward in labelling, and
self-labelling, then I agree. I would like to call the feeling where
I don't want to go to bed because of too much coffee 'the
jitterings', and to be able to learn that.

 Your human brain example in not pertinent to AGI because you're
 
 talking about a brain that is already intelligent,

But the subparts of the brain aren't intelligent, surely? Only the
whole is? You didn't define intelligence, so I can't determine where
our disconnect lies.

 recruiting extra 
 resources. We should think about how to build an AGI from scratch. Then you
 
 may realize that unsupervised learning is problematic.

I work with a strange sort of reinforcement learning as a base. You
can layer whatever sort of learning you want on top of it; I would
probably layer supervised learning on it if the system was going to
be social, which would probably be needed for something to be
considered AGI.
 
 
  We do not have to duplicate the evolutionary process.

I am not saying we should imitate a flatworm, then a mouse, then a
bird, etc. I am saying that we should look at the problem classes
solved by evolution first, and then see how we would solve them with
silicon. This would hopefully keep us on the straight and narrow and
not let us diverge into a little intellectual cul-de-sac.

 I think directly 
 programming a general reasoning mechanism is easier. My approach is to look
 
 at how such a system can be designed from an architectural viewpoint.

Not my style, but you may produce something I find interesting. 

 This I don't agree with. Humans and other animals can reroute things 
  unconsciously, such as switching the visual system to see things upside
 down 
  (having placed prisms in front of the eyes 24/7). It takes a while (over 
  weeks), but it then it does happen and I see it as 
  evidence for low-level self-modification.
 
  Your example is show that experience can alter the brain, which is true. It
 
 does not show that the brain's processing mechanism is flexible -- namely 
 the use of neural networks for feature extraction, classification, etc. 
 Those mechanisms are fixed. 

Saying that because the brain uses neurons to classify things, those
methods of classification are fixed, is like saying that because a
Pentium uses transistors to compute things and they are fixed, what a
Pentium can compute is fixed.

Also, if all neurons do is feature extraction/classification etc.,
how can we as humans reason and cogitate?

 Likewise, we can directly program an AGI's 
 reasoning mechanisms rather than evolve them.

Once again, I have never said anything about not programming the
system as much as possible.

 It can speed up the acquisition of basic knowledge, if the programmer got 
  the assumptions about the world wrong. Which I think is very likely.
 
  This is not true. We *know* the rules of thinking: induction, deduction, 
 etc, and they are pretty immutable. Why let the AGI re-learn these rules?

Induction and deduction we know. However, there are many things we
don't know. For example, getting information from other humans is an
important part of reasoning; which humans we should trust, and who may
be out to fool us, we do not know.

Another thing we can't specify completely in advance is the frame

[agi] Re: Representing Thoughts

2005-09-09 Thread William Pearson
On 9/9/05, Ben Goertzel [EMAIL PROTECTED] wrote:
 
 
 Leitl wrote:
   In the language of Gregory Bateson (see his book Mind and Nature),
   you're suggesting to do away with learning how to learn --- which is
   not at all a workable idea for AGI.
 
  Learning to evolve by evolution is sure a workable idea. It's
  also sufficient
  for an AGI: look into the mirror.
 
 Of course I agree with that...
 
 What YKY suggested was to make an AGI based on a fixed set of reasoning
 rules and heuristics that are not pliable and adaptable based on
 experience.
 I don't think this is viable in practice, I think one's system needs to be
 able to learn how to learn.  Evolution is one example of a dynamic that is
 able to learn how to learn, but it need not be the only example.

Does evolution have the lowest level of inference that you talked
about? Or would it be better characterised as self-modifying (e.g.
crossover that can alter the mechanics of crossover)?
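
By the latter I mean something like this toy Python sketch, where the
crossover rate is itself carried in the genome and inherited, so
selection can reshape how crossover behaves (everything here is
illustrative only):

import random

def make_individual(length=10):
    return {
        "genes": [random.randint(0, 1) for _ in range(length)],
        "xover_rate": random.uniform(0.1, 0.9),  # mechanics of crossover, stored in the genome
    }

def crossover(parent_a, parent_b):
    rate = (parent_a["xover_rate"] + parent_b["xover_rate"]) / 2
    child_genes = [
        (g_b if random.random() < rate else g_a)
        for g_a, g_b in zip(parent_a["genes"], parent_b["genes"])
    ]
    # The crossover rate is inherited and perturbed, so later generations
    # can end up doing crossover differently.
    child_rate = min(0.95, max(0.05, rate + random.gauss(0, 0.05)))
    return {"genes": child_genes, "xover_rate": child_rate}

def fitness(ind):
    return sum(ind["genes"])  # toy objective: all ones

population = [make_individual() for _ in range(30)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = [crossover(random.choice(parents), random.choice(parents))
                  for _ in range(30)]

best = max(population, key=fitness)
print(fitness(best), round(best["xover_rate"], 2))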

  Will Pearson


[agi] Re: Representing Thoughts

2005-09-09 Thread William Pearson
On 9/9/05, Yan King Yin [EMAIL PROTECTED] wrote:

 learning to learn which I interpret as applying the current knowledge
 rules to the knowledge base itself. Your idea is to build an AGI that
 can modify its own ways of learning. This is a very fanciful idea but
 is not the most direct way to build an AGI. Instead of building an AGI
 you're trying to

Define what you mean by an AGI. Learning to learn is vital if you wish
to try and ameliorate the No Free Lunch theorems of learning.

 build something that can learn to become an AGI. Unfortunately, this 
 approach is computationally *inefficient*.
  You seem to think that letting an AGI *learn* from its environment is
 superior to programming it by hand. In reality, learning is not magic. 1.

Having a system learn from the environment is superior to programming
it by hand and not being able to learn from the environment. They are
not mutually exclusive. It is superior because humans/genomes have
imperfect knowledge of what the system they are trying to program will
come up against in the environment.

 It takes time. 2. It takes supervision (in the form of labeled examples). 

It depends what you characterise as learning; I tend to include such
things as the visual centres being repurposed to act for audio
processing in blind individuals as learning. There you do not have
labeled examples.

 Because of these two things, programming an AGI by hand is not necessarily 
 dumber than building an AGI that can learn.

So I look at learning systems and what I can learn from them...

  But of course we cannot have a system that is totally rigid. To be
 practical, we need to have a flexible system that can learn and that
 can also be programmed.
  In summary I think your problem is that you're not focusing on
 building an AGI *efficiently*. Instead you're fantasizing about how
 the AGI can improve itself once it is built.

Personally, as someone trying to build a system that can modify itself
as much as possible, I am simply following in evolution's footsteps,
dealing with the problems it had to deal with when building us. It is
all problem solving of sorts (and as such comes under the heading of
AI), but dealing with failure, erroneous inputs and energy usage are
much more fundamental problems to solve than high-level cognition.

 The ability of an AGI to modify itself is not 
 essential to building an AGI efficiently. Nor can it help the AGI to learn 
 its basic knowledge faster. Self modification of an AGI will only happen 
 after it has acquired at least human-level knowledge.

This I don't agree with. Humans and other animals can reroute things
unconsciously, such as switching the visual system to see things
upside down (having placed prisms in front of the eyes 24/7). It takes
a while (over weeks), but then it does happen and I see it as evidence
for low-level self-modification.

 It is just a fantasy 
 that self-modification can *speed up* the acquisition of basic knowledge. 

It can speed up the acquisition of basic knowledge, if the programmer
got the assumptions about the world wrong. Which I think is very
likely.

 The difference would be like driving an ordinary car and a Formula-1, in the
 
 city area =) Not to mention that we don't possess the tools to make the 
 Formula-1 yet.

That is all I am trying to do at the moment: make tools. Whether they
are tools to do what you describe as making a Formula-1 car, I don't
know.

Will Pearson


Re: [agi] Growth of computer power

2005-08-17 Thread William Pearson
 Eugen Leitl Thu, 23 Jun 2005 02:18:14 -0700

 Do any of you here use MPI, and assume 10^3..10^5 node parallelism?

I assume 2^14 node parallelism, with only a small fraction computing at
any time. But then my nodes are really smart memory rather than
full-blown processors, and not async yet. At this time I use a random
access model for my simulation, but I realise this is not going to work
in the hardware implementation and will need some form of message
passing between nodes. Although looking at MPI, this seems far too
complex an implementation.

The number of connections between the different units is a tricky
question, although I am thinking that a scale-free network may be a
good compromise between minimising path length and (silicon) resource
usage. Anybody know of any research on this in parallel computing?
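
I don't know of a definitive answer, but a quick simulation can at
least put rough numbers on the compromise. A sketch using the networkx
library, assuming a Barabasi-Albert graph as the scale-free model and a
random regular graph as the comparison (all parameters are illustrative
only):

import networkx as nx

# Compare edge count (a stand-in for wiring/silicon resource) against
# average shortest path length for a scale-free and a regular topology.
n = 2 ** 10   # smaller than 2^14 so the path-length computation stays quick

scale_free = nx.barabasi_albert_graph(n, 3, seed=0)   # each new node attaches to 3 existing nodes
regular = nx.random_regular_graph(6, n, seed=0)       # roughly the same number of edges

for name, g in [("scale-free", scale_free), ("regular", regular)]:
    if not nx.is_connected(g):                        # guard: measure the largest component only
        g = g.subgraph(max(nx.connected_components(g), key=len))
    print(name,
          "edges:", g.number_of_edges(),
          "avg path length:", round(nx.average_shortest_path_length(g), 2),
          "max degree:", max(dict(g.degree()).values()))

The thing to watch is the max degree: the scale-free network buys its
short paths partly by concentrating links on a few hub nodes, and those
hubs are where the silicon cost would land.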

 Will Pearson


[agi] JOIN: Will Pearson and adaptive systems

2005-08-16 Thread William Pearson
Some of you may remember me from other places, and from once before on
this list. But I thought now is the right time for some criticism of my
ideas, as they are slightly refined.

Now first off I have recently renounced my status as an AI researcher,
as studying intelligence is not what I wish to do. Instead I wish to
study systems that may or may not end up being called intelligent by
other people. If I can get them to do what I want, it doesn't matter
whether they are considered intelligent or conscious. As what I want
to build is best considered an external brain add-on, I see no moral
problems with this.

So why post to this list? Because the criterion for what I want to do
crosses over quite a lot with what people call general intelligence.
So your criticism will be useful.

I want to build systems that can potentially alter the following
attributes of their own performance, as much as possible whilst still
having a stable system (a rough sketch follows the list):

- Functionality: the output given the input.
- Timeliness: the output given the input as near as possible to a certain time
- Energy usage of the system: So it can preserve battery life in
mobile situations
- Differential system resources per sub-task: Similar to NARS
- Robustness: The ability to alter itself to cope with unexpected
input and errors, such as overheating or cosmic rays.
- Patterns of EM radiation given off as it processes: Mainly for
completeness' sake, but you may want it not to interfere with your ham
radio or medical equipment
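
As a rough Python sketch of what I mean, treating these as measured
quantities that any proposed self-change is scored against (the names
and the acceptance rule are my own illustration, not a finished design):

from dataclasses import dataclass

@dataclass
class Attributes:
    functionality: float      # task score for output given input
    timeliness: float         # fraction of outputs delivered by their deadline
    energy: float             # joules per task, lower is better
    resource_share: dict      # fraction of resources per sub-task
    robustness: float         # fraction of injected faults survived
    em_profile: float         # some scalar summary of radiated emissions

def accept_change(before: Attributes, after: Attributes, still_stable: bool) -> bool:
    if not still_stable:
        return False          # stability is the one non-negotiable condition
    # Toy acceptance rule: don't regress on functionality, and improve
    # at least one of the costs. A real rule would be richer than this.
    improves_cost = after.energy < before.energy or after.timeliness > before.timeliness
    return after.functionality >= before.functionality and improves_cost

before = Attributes(0.8, 0.9, 5.0, {"vision": 0.5, "planning": 0.5}, 0.7, 1.0)
after = Attributes(0.8, 0.9, 4.2, {"vision": 0.6, "planning": 0.4}, 0.7, 1.0)
print(accept_change(before, after, still_stable=True))   # True: same functionality, less energy

The only hard constraint in the sketch is stability; everything else is
negotiable between the attributes.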

As human + machine can do all these things (by installing a different
operating system for example), it should be possible for a machine to
do it by itself.

I also specify that the most basic type of change is unproven change,
so that you can escape the initial axioms and not be constrained too
much by the initial assumptions of the designer. That is not to say
you could not have provers on top of the experimental part of change,
but that they would be subject to experimentation and changing of
axioms and deductive methods.

Now, as can be seen from the patterns of EM radiation and energy usage
factors, the system that changes has to be very close to the hardware,
on the level of the operating system. But if we experiment with changes
to the operating system, what is to stop one part becoming a virus and
destroying the others? Not a lot, so our hardware would need safeguards
to stop unintentionally malicious programs taking over.

So, at least from my point of view, to get very adaptive systems we
need first to concentrate on the hardware (or, for my limited budget, a
software emulation of the desired hardware) before building the
software that actually makes it adaptive.
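
A toy Python sketch of the kind of safeguard I mean, in an emulated
machine where each program can only write to its own region of memory
(everything here, names included, is illustrative):

class ProtectionFault(Exception):
    pass

class EmulatedMachine:
    def __init__(self, size=1024):
        self.memory = [0] * size
        self.regions = {}                      # program id -> (base, limit)

    def load_program(self, pid, base, limit):
        self.regions[pid] = (base, limit)

    def write(self, pid, address, value):
        base, limit = self.regions[pid]
        if not (base <= address < base + limit):
            raise ProtectionFault(f"{pid} tried to write outside its region")
        self.memory[address] = value

m = EmulatedMachine()
m.load_program("learner", base=0, limit=512)
m.load_program("scheduler", base=512, limit=512)
m.write("learner", 10, 42)                     # allowed
try:
    m.write("learner", 600, 99)                # would clobber the scheduler
except ProtectionFault as e:
    print(e)

Real hardware would enforce this sort of check itself, the way
memory-protection hardware does today; the emulation just lets the idea
be tested cheaply.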

I have a lot more to say about the design of the hardware, and even
something about part of the design of the software. But I will leave
that to a later date.

 Will Pearson
