Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread William Pearson
2009/1/9 Ben Goertzel b...@goertzel.org:
 This is an attempt to articulate a virtual world infrastructure that
 will be adequate for the development of human-level AGI

 http://www.goertzel.org/papers/BlocksNBeadsWorld.pdf

goertzel.org seems to be down. So I can't refresh my memory of the paper.

 Most of the paper is taken up by conceptual and requirements issues,
 but at the end specific world-design proposals are made.

 This complements my earlier paper on AGI Preschool.  It attempts to
 define what kind of underlying virtual world infrastructure an
 effective AGI preschool would minimally require.


In some ways this question is under-defined: it depends on what the
learning system is like. If it is like a human brain, it would need a
sufficiently (lawfully) changing world to stimulate its neural
plasticity (rain, seasons, new buildings, the death of pets, the growth
of its own body): a never-ending series of connectible but new
situations to push the brain in different directions. Cats' eyes
deprived of stimulation go blind, so a brain in an unstimulating
environment might fail to develop.

So I would say that not only are certain dynamics important, but there
should also be a large variety of externally presented examples.
Consider, for example, learning electronics: the metaphor of rivers and
dams is often used to teach it, but if the only example of fluid
dynamics you have come across is a flat pool of beads, you might not
get the metaphor. Similarly, a kettle boiling dry might be used to
teach part of the water cycle.

There may be lots of other subconscious analogies of this sort that
have to be made when we are young and that we don't know about. That
would be my worry when implementing a virtual world for AI development.

If it is not like a human brain (in this respect), then the question
is a lot harder. Also, are you expecting the AIs to make tools out of
the blocks and beads?

  Will




Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread William Pearson
2009/1/13 Ben Goertzel b...@goertzel.org:
 Yes, I'm expecting the AI to make tools from blocks and beads

 No, i'm not attempting to make a detailed simulation of the human
 brain/body, just trying to use vaguely humanlike embodiment and
 high-level mind-architecture together with computer science
 algorithms, to achieve AGI

I wasn't suggesting you were/should. The comment about one's own
changing body was simply one of the many examples of things that
happen in the world that we have to try to cope with and adjust to,
making our brains flexible and leading to development rather than
stagnation.

As we don't have a formal specification for all the MindAgents in
OpenCog, it is hard to know how it will actually learn. The question
is how humanlike you have to be before a lack of varied stimulation
leads to developmental problems. If you emphasised that you were going
to make the world the AI exists in alive, that is, not just play pens
for the AIs/humans to do things in and see the results, but some sort
of consistent ecology, I would be happier. Humans managed to develop
fairly well before there was such a thing as structured pre-school;
replicating that sort of environment seems more important for AI
growth, as humans still develop there as well as in structured,
teacher-led pre-school.

Since I can now get to the paper, some further thoughts. Concepts that
would seem hard to form in your world are organic growth and phase
changes of materials. Also, naive chemistry would seem to be somewhat
important (cooking, dissolving materials, burning: things that a
pre-schooler would come into contact with more at home than in
structured pre-school).

  Will




Re: [agi] Hypercomputation and AGI

2008-12-30 Thread William Pearson
2008/12/29 Ben Goertzel b...@goertzel.org:

 Hi,

 I expanded a previous blog entry of mine on hypercomputation and AGI into a
 conference paper on the topic ... here is a rough draft, on which I'd
 appreciate commentary from anyone who's knowledgeable on the subject:

 http://goertzel.org/papers/CognitiveInformaticsHypercomputationPaper.pdf

I'm still a bit fuzzy about your argument. So I am going to ask a
question to hopefully clarify things somewhat.

Couldn't you use similar arguments to say that we can't use science to
distinguish between finite state machines and Turing machines, and
thus question the usefulness of Turing machines for science? After
all, if you are talking about finite data sets, these can always be
represented by a compressed giant look-up table.
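
To make the finite-data point concrete, here is a toy sketch in Python
(my own illustration, not from your paper): a finite set of
observations is reproduced equally well by a generating rule and by a
look-up table, so only a simplicity/compression criterion, not the
data itself, can favour one over the other.

def rule(n: int) -> int:
    """A 'law-like' explanation: doubling."""
    return 2 * n

observations = {n: rule(n) for n in range(5)}   # the finite data set

def lookup(n: int) -> int:
    """A giant (here, tiny) look-up table covering the same data."""
    return observations[n]

# Both accounts agree on every observation we actually have.
assert all(rule(n) == lookup(n) for n in observations)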

 Will




Re: [agi] Hypercomputation and AGI

2008-12-30 Thread William Pearson
2008/12/30 Ben Goertzel b...@goertzel.org:

 It seems to come down to the simplicity measure... if you can have

 simplicity(Turing program P that generates lookup table T)
  <
 simplicity(compressed lookup table T)

 then the Turing program P can be considered part of a scientific
 explanation...


Can you clarify what type of language this is in? You mention
L-expressions, but it is not very clear what that means; lambda
expressions, I'm guessing.

If you start with a language that has infinity built into its fabric,
TMs will be simple. However, if you start with a language that only
allows FSMs to be specified, e.g. regular expressions, you won't be
able to specify TMs simply, as you need to represent an infinitely
long tape in order to define a TM.
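
To make that concrete, a toy sketch in Python (my own example, not from
the paper): the set { a^n b^n } has a short description in a
Turing-complete language, but a regular expression (an FSM-only
language) can only cover finitely many cases, so which description
counts as "simple" depends on the language you start with.

import re

def balanced(s: str) -> bool:
    """Short 'TM-language' description of a^n b^n."""
    n = len(s) // 2
    return s == "a" * n + "b" * n

finite_regex = re.compile(r"ab|aabb|aaabbb")   # an FSM can only enumerate cases

samples = ["ab", "aabb", "aaabbb", "aaaabbbb"]
print([balanced(s) for s in samples])                      # [True, True, True, True]
print([bool(finite_regex.fullmatch(s)) for s in samples])  # last entry is False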

Is this analogous to the argument at the end of section 3? It is that
bit that is the least clear as far as I am concerned.

  Will




Re: [agi] Taleb on Probability

2008-11-08 Thread William Pearson
You can read the full essay online here

http://www.edge.org/3rd_culture/taleb08/taleb08.1_index.html

  Will

2008/11/8 Mike Tintner [EMAIL PROTECTED]:
 REAL LIFE IS NOT A CASINO
 By Nassim Nicholas Taleb

 On New Years day I received a a prescient essay from Nassim Taleb, author of
 The Black Swan, as his response to the 2008 Edge Question: What Have You
 Change Your Mind About? In Real Life Is Not A Casino, he wrote:

 I've shown that institutions that are exposed to negative black swans-such
 as banks and some classes of insurance ventures-have almost never been
 profitable over long periods. The problem of the illustrative current
 subprime mortgage mess is not so much that the quants and other
 pseudo-experts in bank risk-management were wrong about the probabilities
 (they were) but that they were severely wrong about the different layers of
 depth of potential negative outcomes.

 Taleb had changed his mind about his belief in the centrality of
 probability in life, and advocating that we should express everything in
 terms of degrees of credence, with unitary probabilities as a special case
 for total certainties and null for total implausibility.

 Critical thinking, knowledge, beliefs-everything needed to be probabilized.
 Until I came to realize, twelve years ago, that I was wrong in this notion
 that the calculus of probability could be a guide to life and help society.
 Indeed, it is only in very rare circumstances that probability (by itself)
 is a guide to decision making. It is a clumsy academic construction,
 extremely artificial, and nonobservable. Probability is backed out of
 decisions; it is not a construct to be handled in a stand-alone way in
 real-life decision making. It has caused harm in many fields.

 The essay is one of more than one hundred that have been edited for a new
 book What Have You Changed Your Mind About? (forthcoming, Harper Collins,
 January 9th).

 




Re: [agi] Occam's Razor and its abuse

2008-10-28 Thread William Pearson
2008/10/28 Ben Goertzel [EMAIL PROTECTED]:

 On the other hand, I just want to point out that to get around Hume's
 complaint you do need to make *some* kind of assumption about the regularity
 of the world.  What kind of assumption of this nature underlies your work on
 NARS (if any)?

Not directed to me, but here is my take on this interesting question.
The initial architecture would have limited assumptions about the
world. Then the programming in the architecture would form the bias.

Initially the system would divide up the world into the simple
(inanimate) and the highly complex (animate). Why should the system
expect animate things to be complex? Because it applies the intentional
stance and thinks that they are optimal problem solvers. Optimal
problem solvers in a social environment tend towards high complexity,
as there is an arms race over who can predict the others but not be
predicted and exploited by the others.

Thinking "there are other things like me out here", when you are a
complex entity, entails thinking that things are complex, even when
there might be simpler explanations, e.g. what causes the weather.

  Will Pearson




On architecture was Re: [agi] On programming languages

2008-10-24 Thread William Pearson
2008/10/24 Mark Waser [EMAIL PROTECTED]:
 But I thought I'd mention that for OpenCog we are planning on a
 cross-language approach.  The core system is C++, for scalability and
 efficiency reasons, but the MindAgent objects that do the actual AI
 algorithms should be creatable in various languages, including Scheme or
 LISP.

 *nods* As you know, I'm of the opinion that C++ is literally the worst
 possible choice in this context. However...

 ROTFL.  OpenCog is dead-set on reinventing the wheel while developing the
 car.

  They may eventually create a better product for doing so -- but many
 of us software engineers contend that the car could be more quickly and
 easily developed without going that far back (while the OpenCog folk contend
 that the current wheel is insufficient).


Perhaps we don't need wheels? Perhaps we need a machine that can
retrofit different propulsion systems as and when they are needed.
We don't seem to be getting much of anywhere with wheeled prototypes,
towards generality anyway.


 (To be clear, the specific wheels in this case are things like memory
 management, garbage collection, etc. -- all those things that need to be
 written in C++ and are baked into more modern languages and platforms).

I'd go further back and throw out the dumb VMM, at least eventually.
Who wants a robot that, while catching something you threw to it,
pauses for half a second because it has to move information between
hard disk and memory? The whole edifice of most operating
systems/programming languages isn't well suited to real-time operation.
We have real-time kernels and systems to deal with that (and .NET is
not one of them, AFAIK).

Although, to be fair, my throwing out of the architecture is not based
on the real-time argument: if you have any sort of experimental
self-modifying code, you really want an architecture with vastly more
nuanced security capabilities to prevent accidents spreading too far.

You could go to a POLA architecture like one of the capability-security
ones (E, KeyKOS), yet they all require a user to manage security
rather than allowing the system to control what the code does.

In brief, my long-term road map:

1) A VM with security and real-time potential
2) High-level languages to make use of those features
3) Write code to solve problems and to rewrite code

We are interested in generality of intelligence, so we must be prepared
to go back to the roots of generality in computing.

AI, to me, has been a series of premature optimisations: people saying
"I'm going to create a system to solve problem X", with no thought
about how a system that solves X can become one that solves Y. There is
always a human in the loop to program the next generation; we need to
break that cycle and move to one where the systems can look after
themselves.

I can't see a way to retrofit current systems to allow them to try out
a new kernel and revert to the previous one if the new one is worse or
malicious, without a human having to be involved.
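
Here is a toy sketch in Python of the automatic try-and-revert pattern
I have in mind, at the level of a single replaceable component rather
than a whole kernel (my own illustration, with made-up names, not a
real design):

from typing import Callable

def try_candidate(current: Callable[[int], int],
                  candidate: Callable[[int], int],
                  tests: list[tuple[int, int]]) -> Callable[[int], int]:
    """Adopt `candidate` only if it passes every regression test;
    otherwise keep `current`. No human in the loop."""
    try:
        ok = all(candidate(x) == expected for x, expected in tests)
    except Exception:
        ok = False                      # a crashing candidate counts as worse
    return candidate if ok else current

# The system proposes a rewrite of one of its own functions:
double = lambda x: 2 * x
proposed = lambda x: x << 1             # candidate replacement
double = try_candidate(double, proposed, [(0, 0), (3, 6), (10, 20)])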

  Will Pearson




Re: [agi] Re: Value of philosophy

2008-10-20 Thread William Pearson
2008/10/20 Mike Tintner [EMAIL PROTECTED]:
 (There is a separate, philosophical discussion,  about feasibility in a
 different sense -  the lack of  a culture of feasibility, which is perhaps,
 subconsciously what Ben was also referring to  -  no one, but no one, in
 AGI, including Ben,  seems willing to expose their AGI ideas and proposals
 to any kind of feasibility discussion at all  -  i.e. how can this or that
 method solve any of the problem of general intelligence?

This is because you define GI to be totally about creativity, analogy,
etc. Now that is part of GI, but by no means all. I'm a firm believer
in splitting tasks down and people specialising in those tasks, so I am
not worrying about creativity at the moment, apart from making sure
that any architecture I build doesn't constrain the people working on
it in the types of creativity they can produce.

Many useful advances in computer technology (operating systems,
networks including the internet) have come about by not assuming too
much about what will be done with them. I think the first layer of a
GI system can be done the same way.

My self-selected speciality is resource allocation (RA). There are
times when certain forms of creativity are not a good option, e.g.
flying a passenger jet. When shouldn't humans be creative? How should
creativity and X other systems be managed?

Looking at OpenCog, the RA is not baked into the architecture, so I
have doubts about how well it would survive in its current state under
recursive self-change. It will probably be fine for what the OpenCog
team is doing at the moment, but getting the low-level architecture
wrong, or not fit for the next stage, is a good way to waste work.

 Will Pearson




Re: AW: [agi] Re: Defining AGI

2008-10-19 Thread William Pearson
2008/10/19 Dr. Matthias Heger [EMAIL PROTECTED]:
 The process of outwardly expressing meaning may be fundamental to any social
 intelligence but the process itself needs not much intelligence.

 Every email program can receive meaning, store meaning and it can express it
 outwardly in order to send it to another computer. It even can do it without
 loss of any information. Regarding this point, it even outperforms humans
 already who have no conscious access to the full meaning (information) in
 their brains.

 The only thing which needs much intelligence from the nowadays point of view
 is the learning of the process of outwardly expressing meaning, i.e. the
 learning of language. The understanding of language itself is simple.

I'd disagree; there is another part of dealing with language that we
don't have a good idea of how to do: deciding whether to assimilate it
and, if so, how.

If I specify in a language to a computer that it should do something,
it will do it no matter what (as long as I have sufficient authority).
Tell a human to do something, e.g. wave your hands in the air and
shout, and the human will decide whether to do it based on how much
they trust you and whether they think it is a good idea. It is
generally a good idea in a situation where you are attracting the
attention of rescuers, and otherwise likely to make you look silly.

I'm generally in favour of getting some NLU into AIs, mainly because a
lot of the information we have about the world is still in that form,
so an AI without access to that information would have to reinvent it,
which I think would take a long time. Even mathematical proofs are
still somewhat in natural language. Other than that, you could work on
machine-language understanding, where information is taken in
selectively and judged on its merits, not its security credentials.

  Will Pearson




Re: [agi] Twice as smart (was Re: RSI without input...) v2.1))

2008-10-18 Thread William Pearson
2008/10/18 Ben Goertzel [EMAIL PROTECTED]:

 1)
 There definitely IS such a thing as a better algorithm for intelligence in
 general.  For instance, compare AIXI with an algorithm called AIXI_frog,
 that works exactly like AIXI, but inbetween each two of AIXI's computational
 operations, it internally produces and then deletes the word frog one
 billion times.  Clearly AIXI is better than AIXI_frog, according to many
 reasonable quantitative intelligence measures.

Hi Ben,

First off, the quantitative measure of intelligence of both systems
would be the same! Neither of them can exist :-p

Brief definition: a system is intelligent in world X if it achieves its
goals.

Over most worlds, maybe AIXI is more intelligent. Over all worlds I'd
say definitely not. The theory behind AIXI doesn't account for death:
there is nothing that the system can do to the environment that makes
the system stop computing. One world that could easily exist is one
where very fast computers were cracked down on and eliminated with
extreme prejudice. A system that slowed down its apparent ability to
process might not incur the wrath of the anti-seed-AI police. If you
reject that scenario I can construct a real-world one, with me, a
debugger and the dreaded kill command, where I can make the froggy AI
more able to do whatever it is trying to do, because it still exists.
Arbitrary, sure, but humans are strange and arbitrary.

So it might be the right thing for the non-froggy AI to change itself
into a froggy AI to better achieve its goals in the long term. This
could be considered an improvement, but it won't help it improve more
quickly in the future.


 2)
 More relevantly, there is definitely such a thing as a better algorithm for
 intelligence about, say, configuring matter into various forms rapidly.  Or
 you can substitute any other broad goal here.

Let me put it this way: as system designers we are often tasked with
choosing between accuracy and speed. Say the problem is generating
realistic pictures. Sure, we could render everything with radiosity,
but you will get far, far fewer frames in a given time period than with
ray casting, even with today's hardware. If you want the best possible
picture you use radiosity and ray tracing; if you want real-time
motion pictures you just project the polygons onto the screen with a
z-buffer.

I see no reason why AIs won't have to make the same decision between
speed and accuracy, for whatever is best for their intelligence, in
the future. Therefore the "better" algorithm is context-dependent.

In terms of reconfiguring matter, if you are sending an expensive
probe into the reaches of space you would probably want to model
everything down to the last angstrom; less so if you were just making
a towel quickly. These require different ways of thinking. Or would
your idea of best be one that could do both? But then you might be
wasting computer resources if you were never asked to do one or the
other of these types of tasks.

 3)
 Anyway, I think it's reasonable to doubt my story about how RSI will be
 achieved.  All I have is a plausibility argument, not a proof.  What got my
 dander up about Matt's argument was that he was claiming to have a
 debunking of the RSI ... a proof that it is impossible or infeasible.  I
 do not think he presented any such thing; I think he presented an opinion in
 the guise of a proof  It may be a reasonable opinion but that's very
 different from a proof.

He defines RSI too tightly for it to be of much use in the real world.
However, similar definitions have been made for intelligence, and then
things proved about the overly tight definition.

It worries me that we don't have a way of settling disputes like
these. Whether RSI is likely is an important fact for the future of
humanity; surely we should be able to pick out some thread of reality
that we can experiment on without building full AI. Theories of
intelligence should guide us enough to say, "If we exist in a world
where RSI is unlikely, X should also be unlikely", and vice versa, just
as the possibility of faster-than-light travel was squashed without
having to try travelling faster than the speed of light.

  Will Pearson




Re: [agi] Twice as smart (was Re: RSI without input...) v2.1))

2008-10-17 Thread William Pearson
2008/10/17 Ben Goertzel [EMAIL PROTECTED]:


 The difficulty of rigorously defining practical intelligence doesn't tell
 you ANYTHING about the possibility of RSI ... it just tells you something
 about the possibility of rigorously proving useful theorems about RSI ...

 More importantly, you haven't dealt with my counterargument that the posited
 AGI that is qualitatively intellectually superior to humans in every way
 would

 a) be able to clone itself N times for large N

 b) have the full knowledge-base and infrastructure of human society at its
 disposal

 Surely these facts will help it to self-improve far more quickly than would
 otherwise be the case...

 I'm not thinking about this so abstractly, really.  I'm thinking,
 qualitatively, that

 1-- The members of this list, collectively, could solve algorithmic problems
 that a team of one million people with IQ 100 would not be able to solve in
 a feasible period of time

 2-- an AGI that was created by, say, the members of this list, would be
 architected based on **our** algorithms

 3-- so, if we could create an AGI that was qualitatively intellectually
 superior to **us** (even if only moderately so), this AGI (or a team of
 such) could probably solve algorithmic problems that one million of **us**
 would not be able to solve in a feasible period of time

 4--thus, this AGI we created would be able to create another AGI that was
 qualitatively much smarter than **it**

 5--etc.


I don't buy the five-step plan either, for a few reasons. Apologies for
the rather disjointed nature of this message; it is rather late, and I
want to finish it before I am busy again.

I don't think there is such a thing as a better algorithm for
intelligence; there are algorithms suited to certain problems. Human
intelligences seem to adapt their main reasoning algorithms in an
experimental, self-changing fashion at a subconscious level. Different
biases are appropriate for different problems, including at the
meta-level; see deceptive functions in genetic algorithms for
examples. And deceptive functions can always appear in the world, as
humans can create whatever problems are needed to fool the other
agents around them.

What intelligence generally measures in day-to-day life is the ability
to adopt other people's mental machinery for your own purposes. It
gives no guarantee of finding new solutions to problems. The search
spaces are so huge that you can easily lose yourself trying to hit a
tiny point. You might have the correct biases to get to point A, but
that doesn't mean you have the right biases to get to point B. True
innovation is very hard.

It is not hard to be Bayesian-optimal if you know what data you should
be looking at to solve a problem; the hard part is knowing what data is
pertinent. This is not always obvious, and it requires trial, error and
the correct bias to keep this within reasonable time scales.

Copying yourself doesn't get you different biases. You would all try
the same approach to start with, or, if you purposely set it so that
you didn't, you would all still rate certain things/approaches as very
unlikely to be any good when they might well be what you need to do.

  Will




Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread William Pearson
2008/10/14 Terren Suydam [EMAIL PROTECTED]:


 --- On Tue, 10/14/08, Matt Mahoney [EMAIL PROTECTED] wrote:
 An AI that is twice as smart as a
 human can make no more progress than 2 humans.

 Spoken like someone who has never worked with engineers. A genius engineer 
 can outproduce 20 ordinary engineers in the same timeframe.

 Do you really believe the relationship between intelligence and output is 
 linear?

I'm going to use this post as a place to grind one of my axes; apologies, Terren.

The relationship between processing power and results is not
necessarily linear, or even positively correlated, and an increase in
intelligence above a certain level requires increased processing
power (or perhaps not? does anyone disagree?).

When the cost of adding more computational power outweighs the amount
of money or energy that you acquire from adding it, there is not much
point adding the computational power, apart from when you are in
competition with other agents that can outsmart you. Some of the
traditional views of RSI neglect this and assume that increased
intelligence is always a useful thing. It is not.

There is a reason why a lot of the planet's biomass has stayed as
bacteria: it does perfectly well like that. It survives.

Too much processing power is a bad thing; it means less for
self-preservation and affecting the world. Balancing them is a tricky
proposition indeed.

  Will Pearson




Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread William Pearson
Hi Terren,

 I think humans provide ample evidence that intelligence is not necessarily 
 correlated with processing power. The genius engineer in my example solves a 
 given problem with *much less* overall processing than the ordinary engineer, 
 so in this case intelligence is correlated with some measure of cognitive 
 efficiency (which I will leave undefined). Likewise, a grandmaster chess 
 player looks at a given position and can calculate a better move in one 
 second than you or me could come up with if we studied the board for an hour. 
 Grandmasters often do publicity events where they play dozens of people 
 simultaneously, spending just a few seconds on each board, and winning most 
 of the games.


What I meant was that at processing power/memory Z there is a
problem-solving ability Y which is the maximum. To increase the
problem-solving ability above Y you would have to increase processing
power/memory. That is when cognitive efficiency reaches one, in your
terminology. Efficiency is normally measured as a ratio, so that seems
natural.

There are things you can't model within a given limit of processing
power/memory, which restricts your ability to solve them.

 Of course, you were referring to intelligence above a certain level, but if 
 that level is high above human intelligence, there isn't much we can assume 
 about that since it is by definition unknowable by humans.


Not quite what I meant.

  Will




Re: [agi] COMP = false

2008-10-05 Thread William Pearson
2008/10/4 Colin Hales [EMAIL PROTECTED]:
 Hi Will,
 It's not an easy thing to fully internalise the implications of quantum
 degeneracy. I find physicists and chemists have no trouble accepting it, but
 in the disciplines above that various levels of mental brick walls are in
 place. Unfortunately physicists and chemists aren't usually asked to create
 vision!... I inhabit an extreme multidisciplinary zone. This kind of mental
 resistance comes with the territory. All I can say is 'resistance is futile,
 you will be assimilated' ... eventually. :-) It's part of my job to enact
 the necessary advocacy. In respect of your comments I can offer the
 following:

I started off doing chemistry at Uni, but I didn't like all the wet
experiments. There are things like the bonds in graphite sheets that
are degenerate, but that is of a completely different nature to the
electrical signals in the brain.

 You are exactly right: humans don't encounter the world directly (naive
 realism). Nor are we entirely operating from a cartoon visual fantasy(naive
 solipsism). You are also exactly right in that vision is not 'perfect'. It
 has more than just a level of indirectness in representation, it can
 malfunction and be fooled - just as you say. In the benchmark behaviour:
 scientific behaviour, we know scientists have to enact procedures (all based
 around the behaviour called 'objectivity') which minimises the impact of
 these aspects of our scientific observation system.

 However, this has nothing to say about the need for an extra information
 source. necessary for there is not enough information in the signals to do
 the job. This is what you cannot see. It took me a long while to discard the
 tendency to project my mental capacity  into the job the brain has when it
 encounters a retinal data stream. In vision processing using computing we
 know the structure of the distal natural world. We imagine the photon/CCD
 camera chip measurements to be the same as that of the retina. It looks like
 a simple reconstruction job.

I've never thought computer vision to be simple...

 But it is not like that at all. It is impossible to tell, from the signals
 in their natural state in the brain, whether they are about vision or sound
 or smell. They all look the same. So I did not completely reveal the extent
 of the retinal impact/visual scene degeneracy in my post. The degeneracy
 operates on multiple levels. Signal encoding into standardised action
 potentials is another level.

The locations that the signals travel through would be a strong
indication of what they are about.

It also seems likely that the different signals would have different
statistics. For example, somehow the human brain can learn to get
visual data from the tongue with a BrainPort:
http://vision.wicab.com/index.php

I'm not entirely sure what you are getting at; do you think we are in
superposition with the environment? Would you expect a camera plus
signals going through your tongue to preserve that?

 Maybe I can just paint a mental picture of the job the brain has to do.
 Imagine this:

 You have no phenomenal consciousness at all. Your internal life is of a
 dreamless  sleep.
 Except ... for a new perceptual mode called Wision.
 Looming in front of you embedded in a roughly hemispherical blackness is a
 gigantic array of numbers.
 The numbers change.

 Now:
 a) make a visual scene out of it representing the world outside: convert
 Wision into Vision.
 b) do this without any information other than the numbers in front of you
 and without assuming you have any a-priori knowledge of the outside world.

 That is the job the brain has. Resist the attempt to project your own
 knowledge into the circumstance. You will find the attempt futile.

The brain starts with at least some structure that has implicit
knowledge of the outside world (just as bone shows that the genome
stores information about what is strong in the world). The blank slate
does not seem a viable hypothesis.

There are no numbers in the brain, or even in a computer; it is all
electrical signals, distributed spatially and temporally and with
different statistics that allow them to be distinguished.

I'd be curious to read your thoughts in a bit more of a structured
format, but I can't get a grasp of what you are trying to say at the
moment; it seems degenerate with other signals :P

  Will




Re: [agi] COMP = false

2008-10-04 Thread William Pearson
Hi Colin,

I'm not entirely sure that computers can implement consciousness, but
your arguments don't sway me one way or the other. A brief reply
follows.

2008/10/4 Colin Hales [EMAIL PROTECTED]:
 Next empirical fact:
 (v) When  you create a turing-COMP substrate the interface with space is
 completely destroyed and replaced with the randomised machinations of the
 matter of the computer manipulating a model of the distal world. All actual
 relationships with the real distal external world are destroyed. In that
 circumstance the COMP substrate is implementing the science of an encounter
 with a model, not an encounter with the actual distal natural world.

 No amount of computation can make up for that loss, because you are in a
 circumstance of an intrinsically unknown distal natural world, (the novelty
 of an act of scientific observation).
 .

But humans don't encounter the world directly, or else optical
illusions wouldn't exist: we would know exactly what was going on.

Take this site, for example: http://www.michaelbach.de/ot/

It is impossible, by physics, to do vision perfectly without extra
information, but we do not do vision by any means perfectly, so I see
no need to posit an extra information source.

  Will




[agi] Waiting to gain information before acting

2008-09-21 Thread William Pearson
I've started to wander away from my normal sub-cognitive level of AI
and have been thinking about reasoning systems. One scenario I have
come up with is the "foresight of extra knowledge" scenario.

Suppose Alice and Bob have decided to bet $10 on the weather in Alaska
in 10 days' time, whether it will be warmer or colder than average, and
it is Bob's turn to pick his side. He already thinks it is going to be
warmer than average (p = 0.6), based on global warming and prevailing
conditions. But he also knows that the weather in Russia 5 days before
is a good indicator of the conditions; that is, he has p = 0.9 that if
the Russian weather is colder than average on day x, the Alaskan
weather will be colder than average on day x+5, and likewise for
warmer. He has to pick his side of the bet 3 days before the due date,
so he can afford to wait.

My question is: are currently proposed reasoning systems able to act so
that Bob doesn't bet straight away, but waits for the extra
information from Russia before making the bet?

Let's try some backward chaining.

Make money <- Win bet <- Pick most likely side <- Get more information
about the most likely side

The probability that a warm Russia implies a warm Alaska does not
intrinsically indicate that it gives you more information, allowing
you to make a better bet.

So this is where I come to a halt, somewhat. How do you proceed with
the inference from here? It would seem you would have to do something
special and treat every possible event that increases your ability to
make a good guess on this bet as implying you have got more
information (and some you don't?). You would also need to carry a
meta-probability, or some other indication of how good an estimate is,
so that "more information" could be quantified.
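
As a rough illustration of how the waiting could be quantified, here is
a toy value-of-information calculation in Python for the scenario above
(my own numbers for the even-money stake and for treating the Russian
weather as a symmetric 0.9-accurate signal):

p_warm = 0.6       # Bob's prior that Alaska will be warmer than average
accuracy = 0.9     # P(Alaska outcome matches the Russian indicator)
stake = 10.0       # even-money bet

# Betting now on the more likely side (warmer):
ev_now = p_warm * stake - (1 - p_warm) * stake

# P(the Russian indicator says "warm"), chosen so the signal model is
# consistent with the prior: p_warm = p_sig*acc + (1 - p_sig)*(1 - acc)
p_sig_warm = (p_warm - (1 - accuracy)) / (2 * accuracy - 1)

# After seeing the indicator Bob bets with it, winning with probability
# `accuracy` whichever way it points:
ev_after_signal = accuracy * stake - (1 - accuracy) * stake
ev_wait = p_sig_warm * ev_after_signal + (1 - p_sig_warm) * ev_after_signal

print(f"bet now: {ev_now:.2f}  wait: {ev_wait:.2f}  "
      f"value of waiting: {ev_wait - ev_now:.2f}")
# bet now: 2.00  wait: 8.00  value of waiting: 6.00 -> Bob should wait.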

There are also more esoteric examples of waiting for more information.
For example, suppose Bob doesn't know about the Russia-Alaska
connection but knows that a piece of software is going to be released
that improves weather predictions in general. Can we still hook up
that knowledge somehow?

 Will Pearson




Re: [agi] self organization

2008-09-17 Thread William Pearson
2008/9/16 Terren Suydam [EMAIL PROTECTED]:

 Hi Will,

 Such an interesting example in light of a recent paper, which deals with 
 measuring the difference between activation of the visual cortex and blood 
 flow to the area, depending on whether the stimulus was subjectively 
 invisible. If the result can be trusted, it shows that blood flow to the 
 cortex is correlated with whether the stimulus is being perceived or not, as 
 opposed to the neural activity, which does not change... see a discussion 
 here:

 http://network.nature.com/groups/bpcc/forum/topics/2974

 In this case then the reward that the cortex receives in the form of 
 nutrients is based somehow on feedback from other parts of the brain involved 
 with attention. It's like a heuristic that says, if we're paying attention 
 to something, we're probably going to keep paying attention to it.


 Maier A, Wilke M, Aura C, Zhu C, Ye FQ, Leopold DA.  Nat Neurosci. 2008 Aug 
 24. [Epub ahead of print], Divergence of fMRI and neural signals in V1 during 
 perceptual suppression in the awake monkey.


Interesting, I'll have to check it out. Thanks. I really need to keep
up with brain research a little better.

 Will




Re: [agi] self organization

2008-09-16 Thread William Pearson
2008/9/15 Vladimir Nesov [EMAIL PROTECTED]:
 I guess that intuitively, argument goes like this:
 1) economy is more powerful than individual agents, it allows to
 increase the power of intelligence in individual agents;
 2) therefore, economy has an intelligence-increasing potency;
 3) so, we can take stupid agents, apply the economy potion to them and
 get powerful intelligence as a result.

I like economies. They are not a magic bullet, but they are part of the
solution at the low level.

Let me see if I can explain why. Note, please, that I am explaining why
economies might be used in human-type fallible systems; systems that
fail only very improbably (the sort required for Friendliness) seem
improbable to me.

Take the problem of blood flow in the human brain. You only have a
certain amount of blood to distribute across the parts of the brain.
Now there are three obvious things that you can do:

1) Have a centralised bit of the brain that makes choices about how to
distribute the blood flow. This in turn would need a large blood flow
to it. It would also need to be vastly complex and know what was going
on in all parts of the brain, so that as they changed it could make
sensible decisions about how to direct the blood flow. It might end up
taking a large proportion of the blood flow, and if there were any
errors in it they would not self-correct.

2) Each bit of the brain indicates how much blood it needs at the
moment. However, if any bit of the brain thinks it needs more blood
than it actually does, it could sit there stuck in a loop, wasting
lots of oxygen.

3) Each bit of the brain pays a bit of non-forgeable credit for the
blood flow it gets. It earns credit by participating in an economy
with the amygdala as the ultimate money source. If a bit pays out more
credit than it earns, it tends to go bankrupt, so it can't foul up the
system any more.

Now, this is a very hypothetical view of the brain, but I think the
answer to how the brain decides to allocate blood flow would be more
similar to the third case: expecting errors but self-correcting, and
taking up minimal resources.
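
To make the third case concrete, here is a toy sketch in Python (my own
illustration, with made-up numbers, not a model of the brain): modules
pay non-forgeable credit for a scarce resource, and modules that keep
paying out more than they earn go bankrupt and drop out, so errors are
self-correcting without a central controller.

import random

class Module:
    def __init__(self, name, competence):
        self.name = name
        self.credit = 10.0
        self.competence = competence   # chance its work earns a payout

modules = [Module("visual", 0.8), Module("motor", 0.6), Module("noisy", 0.1)]
budget_per_tick = 2                    # units of "blood flow" available

for tick in range(200):
    random.shuffle(modules)            # crude tie-breaking among bidders
    for m in modules[:budget_per_tick]:
        m.credit -= 1.0                # pay one credit for the resource
        if random.random() < m.competence:
            m.credit += 2.0            # payout from the ultimate money source
    modules = [m for m in modules if m.credit >= 1.0]   # bankrupts drop out

print(sorted((m.name, round(m.credit)) for m in modules))
# Typically the "noisy" module goes bankrupt while the useful ones persist.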

However, despite this having nothing to do with Bayesian reasoning or
rational decision making, if we didn't have a good way of allocating
blood flow in our brains we really couldn't do very much of use at all
(as blood would be directed to the wrong parts at the wrong times).

Decentralised economies of dumb things can be somewhat useful; see, for
example, Learning Classifier Systems. Personally I would prefer to
create economies of things as smart as our best systems. They would
then work on solving different parts of how to win.

  Will Pearson




Re: [agi] Does prior knowledge/learning cause GAs to converge too fast on sub-optimal solutions?

2008-09-09 Thread William Pearson
2008/9/8 Benjamin Johnston [EMAIL PROTECTED]:

 Does this issue actually crop up in GA-based AGI work? If so, how did you
 get around it? If not, would you have any comments about what makes AGI
 special so that this doesn't happen?


Does it also happen in humans? I'd say yes; therefore it might be a
problem we can't avoid, only mitigate, by having communities of
intelligences sharing ideas so that they can shake each other out of
their maxima, assuming they settle in different ones (different search
landscapes and priors help with this). The community might reach a
maximum as well, but the world isn't constant, so good ideas might not
always stay good, changing the search landscapes and meaning a maximum
may not be a maximum any longer.
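
A toy illustration in Python of that mitigation (my own example, not
anyone's GA): several hill-climbers with different starting points
("priors") search a deceptive landscape, then share their best find,
which pulls the ones stuck on the local peak over to the global one.

import random

def fitness(x):
    # A deceptive landscape: a local peak at x=2 and a global peak at x=8.
    return -(x - 2) ** 2 + 4 if x < 5 else -(x - 8) ** 2 + 9

def hill_climb(x, steps=200):
    for _ in range(steps):
        candidate = x + random.uniform(-0.3, 0.3)
        if fitness(candidate) > fitness(x):
            x = candidate
    return x

community = [hill_climb(x0) for x0 in (0.0, 3.0, 6.0)]   # different priors
best = max(community, key=fitness)                       # share the best idea
community = [hill_climb(best) for _ in community]        # everyone restarts there
print(round(max(fitness(x) for x in community), 2))      # close to 9, the global peak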

  Will




Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-06 Thread William Pearson
2008/9/5 Mike Tintner [EMAIL PROTECTED]:
 MT:By contrast, all deterministic/programmed machines and computers are

 guaranteed to complete any task they begin.

 Will:If only such could be guaranteed! We would never have system hangs,
 dead locks. Even if it could be made so, computer systems would not
 always want to do so.

 Will,

 That's a legalistic, not a valid objection, (although heartfelt!).In the
 above case, the computer is guaranteed to hang - and it does, strictly,
 complete its task.

Not necessarily; the task could be interrupted and that process stopped
or paused indefinitely.

 What's happened is that you have had imperfect knowledge of the program's
 operations. Had you known more, you would have known that it would hang.

If it hung because of multi-process issues, you would need perfect
knowledge of the environment to know about the possible timing issues
as well.

 Were your computer like a human mind, it would have been able to say (as
 you/we all do) - well if that part of the problem is going to be difficult,
 I'll ignore it  or.. I'll just make up an answer... or by God I'll keep
 trying other ways until I do solve this.. or... ..  or ...
 Computers, currently, aren't free thinkers.


Computers aren't free thinkers, but that does not follow from an
inability to switch, cancel, pause and restart, or modify tasks, all
of which they can do admirably. They just don't tend to do so, because
they aren't smart enough (and cannot change themselves to be so) to
know when it might be appropriate for what they are trying to do, so
it is left up to the human operator.

I'm very interested in computers that self-maintain, that is, reduce
(or eliminate) the need for a human to be in the loop or to know much
about the internal workings of the computer. However, this doesn't need
a vastly different computing paradigm; it just needs a different way of
thinking about the systems, e.g. how can you design a system that does
not need a human around to fix mistakes, upgrade it or maintain it in
general?

As they change their own system I will not know what they are going to
do, because they can get information from the environment about how to
act. This will make them 'free thinkers' of sorts. Whether that will be
enough to get what you want is an empirical matter, as far as I am
concerned.

 Will




Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-06 Thread William Pearson
2008/9/6 Mike Tintner [EMAIL PROTECTED]:
 Will,

 Yes, humans are manifestly a RADICALLY different machine paradigm- if you
 care to stand back and look at the big picture.

 Employ a machine of any kind and in general, you know what you're getting -
 some glitches (esp. with complex programs) etc sure - but basically, in
 general,  it will do its job.

What exactly is a desktop computer's job?

 Humans are only human, not a machine. Employ one of those, incl. yourself,
 and, by comparison, you have only a v. limited idea of what you're getting -
 whether they'll do the job at all, to what extent, how well. Employ a
 programmer, a plumber etc etc.. Can you get a good one these days?... VAST
 difference.

If I find a new computer and do not know how it has been programmed
(whether it has Linux or Windows, and which version), then I also lack
knowledge of what it is going to do. An Aibo is a computer as well! It
follows a program.

 And that's the negative side of our positive side - the fact that we're 1)
 supremely adaptable, and 2) can tackle those problems that no machine or
 current AGI  - (actually of course, there is no such thing at the mo, only
 pretenders) - can even *begin* to tackle.

 Our unreliability
 .

 That, I suggest, only comes from having no set structure - no computer
 program - no program of action in the first place. (Hey, good  idea, who
 needs a program?)

You equate set structure with a computer program. A computer program is
not set! There is set structure of some sort in the brain, at the
neural level anyway, so you would have to be more precise about what
you mean by a lack of set structure.

Wait, "program of action"? You don't think computer programs are like
lists of things to do in the real world, do you? That is just
something cooked up by the language writers to make things easier to
deal with; a computer program is really only about memory
manipulation. Some of the memory locations might be hooked up to the
real world, but at the end of the day the computer treats it all as
semanticless memory manipulation. Since the things that control the
memory manipulations are themselves in memory, they too can be
manipulated!

 Here's a simple, extreme example.

 Will,  I want you to take up to an hour, and come up with a dance, called
 the Keyboard Shuffle. (A very ill-structured problem.)

How about you go learn about self-modifying assembly language,
preferably with real-time interrupts. That would be a better use of
the time, I think.


 Will Pearson




Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-05 Thread William Pearson
2008/9/5 Mike Tintner [EMAIL PROTECTED]:
 By contrast, all deterministic/programmed machines and computers are
 guaranteed to complete any task they begin.

If only that could be guaranteed! We would never have system hangs or
deadlocks. Even if it could be made so, computer systems would not
always want to do so. Have you ever had a programmed computer system
say to you, "This program is not responding, do you wish to terminate
it?" There is no reason in principle why the decision to terminate the
program couldn't be made automatically.

 (Zero procrastination or
 deviation).

Multi-tasking systems deviate all the time...

 Very different kinds of machines to us. Very different paradigm.
 (No?)

We commonly talk about single-program systems because they are
generally interesting and can be analysed simply. My discussion of
self-modifying systems ignored the interrupt-driven, multi-tasking
nature of the system I want to build, because that makes analysis a
lot harder. I will still be building an interrupt-driven,
multi-tasking system.

  Will Pearson




Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread William Pearson
2008/9/4 Mike Tintner [EMAIL PROTECTED]:
 Terren,

 If you think it's all been said, please point me to the philosophy of AI
 that includes it.

 A programmed machine is an organized structure. A keyboard (and indeed a
 computer with keyboard) are something very different - there is no
 organization to those 26 letters etc.   They can be freely combined and
 sequenced to create an infinity of texts. That is the very essence and
 manifestly, the whole point, of a keyboard.

 Yes, the keyboard is only an instrument. But your body - and your brain -
 which use it,  are themselves keyboards. They consist of parts which also
 have no fundamental behavioural organization - that can be freely combined
 and sequenced to create an infinity of sequences of movements and thought -
 dances, texts, speeches, daydreams, postures etc.

 In abstract logical principle, it could all be preprogrammed. But I doubt
 that it's possible mathematically - a program for selecting from an infinity
 of possibilities? And it would be engineering madness - like trying to
 preprogram a particular way of playing music, when an infinite repertoire is
 possible and the environment, (in this case musical culture), is changing
 and evolving with bewildering and unpredictable speed.

 To look at computers as what they are (are you disputing this?) - machines
 for creating programs first, and following them second,  is a radically
 different way of looking at computers. It also fits with radically different
 approaches to DNA - moving away from the idea of DNA as coded program, to
 something that can be, as it obviously can be, played like a keyboard  - see
 Dennis Noble, The Music of Life. It fits with the fact (otherwise
 inexplicable) that all intelligences have both deliberate (creative) and
 automatic (routine) levels - and are not just automatic, like purely
 programmed computers. And it fits with the way computers are actually used
 and programmed, rather than the essentially fictional notion of them as pure
 turing machines.

 And how to produce creativity is the central problem of AGI - completely
 unsolved.  So maybe a new approach/paradigm is worth at least considering
 rather than more of the same? I'm not aware of a single idea from any AGI-er
 past or present that directly addresses that problem - are you?


You can't create a program out of thin air, so you have to have some
sort of program to start with. You probably want to change the initial
program in some way, as well as perhaps adding more programming. This
leads you to recursive self-change and its subset, RSI, which is a very
tricky business even if you don't think it is going to go FOOM and
take over the world.

So this very list has been discussing, in abstract terms, the very
thing you want it to be discussing!

  Will




Re: [agi] Recursive self-change: some definitions

2008-09-03 Thread William Pearson
2008/9/2 Ben Goertzel [EMAIL PROTECTED]:

 Yes, I agree that your Turing machine approach can model the same
 situations, but the different formalisms seem to lend themselves to
 different kinds of analysis more naturally...

 I guess it all depends on what kinds of theorems you want to formulate...


What I am interested in is: if someone gives me a computer system that
changes its state in some fashion, can I state how powerful that
method of change is likely to be? That is, what is the exact difference
between a traditional learning algorithm and the way I envisage AGIs
changing their state?

Also, can you formalise the difference between a human's method of
learning how to learn, and bootstrapping language off language (both
examples of a strange loop), and a program inspecting and changing its
source code?

I'm also interested in recursively self-changing systems and whether
you can be sure they will stay recursively self-changing systems as
they change. This last one is especially relevant with regard to people
designing systems with singletons in mind.

  Will




Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-09-03 Thread William Pearson
2008/8/28 Valentina Poletti [EMAIL PROTECTED]:
 Got ya, thanks for the clarification. That brings up another question. Why
 do we want to make an AGI?



To understand ourselves as intelligent agents better? It might enable
us to have decent education policy and rehabilitation of criminals.

Even if we don't make human-like AGIs, the principles should help us
understand ourselves, just as the optics of the lens helped us
understand the eye and the aerodynamics of wings helps us understand
bird flight.

It could also give us more leverage, more brain power on the planet
to help solve the planet's problems.

This is all predicated on the idea that a fast take-off is pretty much
impossible. If it is possible then all bets are off.

 Will




[agi] Recursive self-change: some definitions

2008-09-02 Thread William Pearson
I've put up a short fairly dense un-referenced paper (basically an
email but in a pdf to allow for maths) here.

http://codesoup.sourceforge.net/RSC.pdf

Any thoughts/feedback welcomed. I'll try and make it more accessible
at some point, but I don't want to spend too much time on it at the
moment.

 Will




Re: [agi] Recursive self-change: some definitions

2008-09-02 Thread William Pearson
2008/9/2 Ben Goertzel [EMAIL PROTECTED]:

 Hmmm..

 Rather, I would prefer to model a self-modifying AGI system as something
 like

 F(t+1) =  (F(t))( F(t), E(t) )

 where E(t) is the environment at time t and F(t) is the system at time t

Are you assuming the system knows the environment totally? Or did you
mean the input the system gets from the environment? Would you have to
assume the environment was deterministic as well in order to construct
a hyperset? Unless you can construct a hyperset tree kind of thing,
with branches for each possible environmental state?

 This is a hyperset equation, but it seems to nicely and directly capture the
 fact that the system is actually acting on and modifying itself...


I'll use _ to indicate subscript for now.

I think s_{n+1} = g_{s_n}(x) encompasses the same idea of
self-modification, as the function that g performs on x is determined
by the state. If you consider g to be a UTM and s to be a program it
becomes a bit clearer. Consider g() and f() to be the hardware or
physics of the system.
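
To make that concrete, here is a toy sketch (Python, my own
illustration only, with a trivial interpreter standing in for a real
UTM): g is the fixed hardware that interprets whatever program the
current state holds, and the program itself decides its successor.

    # g: fixed "hardware"/physics; the state s is just data that g
    # happens to interpret as a program.
    def g(s, x):
        env = {}
        exec(s, env)                # the hardware only knows how to run s
        return env["step"](s, x)    # s is handed its own source plus the input

    s = (
        "def step(source, x):\n"
        "    n = 0   # a constant the program rewrites in its own source\n"
        "    return source.replace('n = %d' % n, 'n = %d' % (n + x), 1)\n"
    )

    for x in [1, 2, 3]:
        s = g(s, x)                 # s_{n+1} = g_{s_n}(x)
    print('n = 6' in s)             # True: the program has rewritten itself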

  Will




Re: [agi] Re: Goedel machines ..PS

2008-08-30 Thread William Pearson
2008/8/29 Ben Goertzel [EMAIL PROTECTED]:

 About recursive self-improvement ... yes, I have thought a lot about it, but
 don't have time to write a huge discourse on it here

 One point is that if you have a system with N interconnected modules, you
 can approach RSI by having the system separately think about how to improve
 each module.  I.e. if there are modules A1, A2,..., AN ... then you can for
 instance hold A1,...,A(N-1) constant while you think about how to improve
 AN.  One can then iterate through all the modules and improve them in
 sequence.   (Note that the modules are then doing the improving of each
 other.)

I'm not sure what you are getting at here...

Is the modification system implemented in a module (Ai)? If so, how
would you evaluate whether a modification of Ai, call it Ai', did a
better job?

What I am trying to figure out is whether the system you are
describing could change into one in which modules A1 to A10 were
modified twice as often as the other modules. Can it change itself so
that it could remove a module altogether, or duplicate a module and
specialise each copy to a different purpose?
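
My reading of the scheme, as a toy sketch (Python, purely illustrative,
not OpenCog code): a fixed round-robin over modules, improving one while
the others are held constant. Note that the schedule, the set of modules
and the evaluator are hard-wired here, which is where the questions
above bite.

    def round_robin_rsi(modules, improve, evaluate, rounds=3):
        for _ in range(rounds):
            for i in range(len(modules)):
                others = modules[:i] + modules[i+1:]      # held constant
                candidate = improve(modules[i], others)   # the others do the improving
                trial = modules[:i] + [candidate] + modules[i+1:]
                if evaluate(trial) > evaluate(modules):
                    modules[i] = candidate                # accept the improvement
        return modules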

  Will Pearson






Re: [agi] Re: Goedel machines ..PS

2008-08-30 Thread William Pearson
2008/8/30 Ben Goertzel [EMAIL PROTECTED]:


 On Sat, Aug 30, 2008 at 10:06 AM, William Pearson [EMAIL PROTECTED]
 wrote:

 2008/8/29 Ben Goertzel [EMAIL PROTECTED]:
 
  About recursive self-improvement ... yes, I have thought a lot about it,
  but
  don't have time to write a huge discourse on it here
 
  One point is that if you have a system with N interconnected modules,
  you
  can approach RSI by having the system separately think about how to
  improve
  each module.  I.e. if there are modules A1, A2,..., AN ... then you can
  for
  instance hold A1,...,A(N-1) constant while you think about how to
  improve
  AN.  One can then iterate through all the modules and improve them in
  sequence.   (Note that the modules are then doing the improving of each
  other.)

 I'm not sure what you are getting at here...

 Is modification system implemented in a module (Ai)? If so how would
 evaluate whether a modification Ai, call it AI' did a better job?

 The modification system is implemented in a module (subject to
 modification), but this is a small module,
 which does most of its work by calling on other AI modules (also subject to
 modification)...


Isn't it an evolutionarily stable strategy for the modification system
module to change to a state where it does not change itself? [1] Let me
give you a just-so story and you can tell me whether you think it
likely; if not, I'd be curious as to why.

Let us say the AI is trying to learn a different language (say French,
with its genders), so the system finds it is better to concentrate its
changes on the language modules, as these need the most updating. A
modification to the modification module that completely concentrates
the modifications on the language module should then be the best at
that time. But it would then be frozen forever, and once the need to
vary the language module had passed it wouldn't be able to go back to
modifying other modules. Short-sighted, I know, but I have yet to come
across an RSI system that isn't either short-sighted or limited to
what it can prove.

  Will

[1] Assuming there is no pressure on it for variation.




Re: [agi] Re: Goedel machines ..PS

2008-08-30 Thread William Pearson
2008/8/30 Ben Goertzel [EMAIL PROTECTED]:



 Isn't it an evolutionary stable strategy for the modification system
 module to change to a state where it does not change itself?1

 Not if the top-level goals are weighted toward long-term growth


 Let me
 give you a just so story and you can tell me whether you think it
 likely. I'd be curious as to why you don't.

 Let us say the AI is trying to learn a different language (say french
 with its genders), so the system finds it is better to concentrate its
 change on the language modules as these need the most updating. So a
 modification to the modification module that completely concentrates
 the modifications on the language module should be the best at that
 time. But then it would be frozen forever and once the need to vary
 the language module was past it wouldn't be able to go back to
 modifying other modules. Short sighted I know, but I have yet to come
 across an RSI system that isn't either short sighted or limited to
 what it can prove.

 You seem to be assuming that subgoal alienation will occur, and the
 long-term goal of dramatically increasing intelligence will be forgotten
 in favor of the subgoal of improving NLP.  But I don't see why you
 make this assumption; this seems an easy problem to avoid in a
 rationally-designed AGI system, although not so easy in the context
 of human psychology.

Have you implemented a long term growth goal atom yet? Don't they have
to specify a specific state? Or am I reading
http://opencog.org/wiki/OpenCogPrime:GoalAtom wrong?

Also, do you have any information on how the top-level goal will play a
part in assigning fitness in MOSES? How can you evaluate how good a
change to a module will be for long-term growth without allowing the
system to run for a long time and measuring its growth?

  Will




Re: [agi] Re: Goedel machines ..PS

2008-08-30 Thread William Pearson
2008/8/30 Ben Goertzel [EMAIL PROTECTED]:


 Don't they have
 to specify a specific state? Or am I reading
 http://opencog.org/wiki/OpenCogPrime:GoalAtom wrong?

 They don't have to specify a specific state.  A goal could
 be some PredicateNode P expressing an abstract evaluation of
 state, programmed in Combo (a general purpose programming
 language)...

So it could be a specific set of states? To specify long term growth
as a goal, wouldn't you need to be able to do an abstract evaluation
of how the state *changes* rather than just the current state?
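
For example (a sketch of my own in Python, illustrative only, nothing
to do with Combo or an actual goal atom), a growth goal seems to need a
window of past states rather than a single state:

    def growth_goal(history):
        # history: competence scores over time; score sustained improvement
        if len(history) < 2:
            return 0.0
        return sum(b - a for a, b in zip(history, history[1:])) / (len(history) - 1)

    print(growth_goal([0.10, 0.15, 0.30]))   # positive: the system is growing
    print(growth_goal([0.30, 0.30, 0.30]))   # zero: same state, no growth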



 Also do you have any information on how the top level goal will play a
 part in assigning a fitness in Moses?

 That comes down to the basic triad

 Context  Procedure == Goal

 The aim of the Ai mind is to understand the context it's in, then learn
 or select a procedure that it estimates (infers) will have a high
 probability
 of helping it achieve its goal in the relevant context.

 MOSES is a procedure learning algorithm...

 This is described in the chapter on goal-oriented cognition in the OCP
 wikibook...


Searching for goal in the wikibook got me a whole lot of pages, none
of them with goal in the title. Is there any way to de-wiki the titles
so that a search for goal would pick up
http://opencog.org/wiki/OpenCogPrime:SchemaContextGoalTriad in its
title? A plain text search for goal picks up way too many matches.

I'll have a read of it.



 How can you evaluate how good a
 change to a module will be for long term growth, without allowing the
 system to run for a long time and measure its growth?

 By inference...

 ... at least, that's the theory ;-)


What false positive rate would you expect when classifying whether a
change to the modification module leads to long-term growth?

  Will Pearson




Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread William Pearson
2008/8/29 j.k. [EMAIL PROTECTED]:
 On 08/28/2008 04:47 PM, Matt Mahoney wrote:

 The premise is that if humans can create agents with above human
 intelligence, then so can they. What I am questioning is whether agents at
 any intelligence level can do this. I don't believe that agents at any level
 can recognize higher intelligence, and therefore cannot test their
 creations.

 The premise is not necessary to arrive at greater than human intelligence.
 If a human can create an agent of equal intelligence, it will rapidly become
 more intelligent (in practical terms) if advances in computing technologies
 continue to occur.

 An AGI with an intelligence the equivalent of a 99.-percentile human
 might be creatable, recognizable and testable by a human (or group of
 humans) of comparable intelligence. That same AGI at some later point in
 time, doing nothing differently except running 31 million times faster, will
 accomplish one genius-year of work every second.

Will it? It might be starved for lack of interaction with the world
and other intelligences, and so be a lot less productive than
something working at normal speeds.

For most learning systems (AIXI excepted), how long it takes to learn
something is constrained not by a lack of processing power but by the
speed of running experiments.
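
A back-of-envelope sketch (Python, with numbers I have made up purely
for illustration): if only the thinking part of a discovery loop is sped
up and the experiments still run at real-world speed, the overall gain
saturates.

    def overall_speedup(think_fraction, think_speedup):
        # Amdahl-style: the experimental part (1 - think_fraction) is not accelerated
        return 1.0 / ((1.0 - think_fraction) + think_fraction / think_speedup)

    print(overall_speedup(0.50, 10**6))   # ~2x overall, not a million-fold
    print(overall_speedup(0.99, 10**6))   # ~100x at best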

  Will Pearson




Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread William Pearson
2008/8/29 j.k. [EMAIL PROTECTED]:
 On 08/29/2008 01:29 PM, William Pearson wrote:

 2008/8/29 j.k.[EMAIL PROTECTED]:


 An AGI with an intelligence the equivalent of a 99.-percentile human
 might be creatable, recognizable and testable by a human (or group of
 humans) of comparable intelligence. That same AGI at some later point in
 time, doing nothing differently except running 31 million times faster,
 will
 accomplish one genius-year of work every second.


 Will it? It might be starved for lack of interaction with the world
 and other intelligences, and so be a lot less productive than
 something working at normal speeds.



 Yes, you're right. It doesn't follow that its productivity will necessarily
 scale linearly, but the larger point I was trying to make was that it would
 be much faster and that being much faster would represent an improvement
 that improves its ability to make future improvements.

 The numbers are unimportant, but I'd argue that even if there were just one
 such human-level AGI running 1 million times normal speed and even if it did
 require regular interaction just like most humans do, that it would still be
 hugely productive and would represent a phase-shift in intelligence in terms
 of what it accomplishes. Solving one difficult problem is probably not
 highly parallelizable in general (many are not at all parallelizable), but
 solving tens of thousands of such problems across many domains over the
 course of a year or so probably is. The human-level AGI running a million
 times faster could simultaneously interact with tens of thousands of
 scientists at their pace, so there is no reason to believe it need be
 starved for interaction to the point that its productivity would be limited
 to near human levels of productivity.

Only if it had millions of times normal human storage capacity and
memory bandwidth (otherwise it couldn't keep track of all the
conversations), and sufficient bandwidth for ten thousand VoIP calls at
once.

We should perhaps clarify what you mean by speed here. The speed of
the transistor is not all of what makes a system useful. It is worth
noting that processor clock speed hasn't gone up appreciably since the
heady days of 3.8 GHz Pentium 4s in 2005.

Improvements have come from other directions (better memory bandwidth,
better pipelines and multiple cores). The hard disk is probably what is
holding current computers back at the moment.


  Will Pearson










Re: [agi] The Necessity of Embodiment

2008-08-25 Thread William Pearson
2008/8/25 Terren Suydam [EMAIL PROTECTED]:

 --- On Sun, 8/24/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
 On Sun, Aug 24, 2008 at 5:51 PM, Terren Suydam
 wrong. This ability might be an end in itself, the whole point of
 building an AI, when considered as applying to the dynamics of the
 world as a whole and not just AI aspect of it. After all, we may make
 mistakes or be swayed by unlucky happenstance in all matters, not just
 in a particular self-vacuous matter of building AI.

 I don't deny the possibility of disaster. But my stance is, if the only 
 approach you have to mitigate disaster is being able to control the AI 
 itself, well, the game is over before you even start it. It seems profoundly 
 naive to me that anyone could, even in principle, guarantee a 
 super-intelligent AI to renormalize, in whatever sense that means. Then you 
 have the difference between theory and practice... just forget it. Why would 
 anyone want to gamble on that?

You may be interested in Goedel machines. I think this roughly fits
the template that Eliezer is looking for: something that reliably
self-modifies to be better.

http://www.idsia.ch/~juergen/goedelmachine.html

Although he doesn't like explicit utility functions, the provably
better part is something he wants. What you would accept as axioms
for proofs upon which humanity's fate rests, I really don't know.

Personally I think strong self-modification is not going to be useful:
the very act of trying to understand the way the code for an
intelligence is assembled will change the way that some of that code
is assembled. That is, I think intelligences have to be weakly
self-modifying. In the same way that bits of the brain rewire themselves
locally and subconsciously, AI will need the same sort of changes in
order to keep up with humans. Computers at the moment can do lots of
things better than humans (logic, Bayesian stats), but are really lousy
at adapting and managing themselves, so the blind spots of infallible
computers are always exploited by slow and error-prone, but changeable,
humans.

  Will Pearson




Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)

2008-08-23 Thread William Pearson
2008/8/23 Matt Mahoney [EMAIL PROTECTED]:
 Valentina Poletti [EMAIL PROTECTED] wrote:
 I was wondering why no-one had brought up the information-theoretic aspect 
 of this yet.

 It has been studied. For example, Hutter proved that the optimal strategy of 
 a rational goal seeking agent in an unknown computable environment is AIXI: 
 to guess that the environment is simulated by the shortest program consistent 
 with observation so far [1].

By my understanding, I would qualify this as: Hutter proved that *one
of the* optimal strategies of a rational, error-free, goal-seeking
agent, which has no impact on the environment beyond its explicit
output, in an unknown computable environment is AIXI: to guess that
the environment is simulated by the shortest program consistent with
observation so far.
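
As a toy sketch of that guessing rule (Python, my own, with a tiny
hand-written hypothesis list standing in for the space of all
programs):

    HYPOTHESES = [   # (description length, predictor over the history so far)
        (3, lambda history: 0),                              # "always 0"
        (5, lambda history: len(history) % 2),               # "alternate 0,1"
        (9, lambda history: history[-1] if history else 0),  # "repeat last symbol"
    ]

    def guess(history):
        # Keep the hypotheses that reproduce the history, pick the shortest.
        consistent = [(length, h) for length, h in HYPOTHESES
                      if all(h(history[:i]) == history[i] for i in range(len(history)))]
        length, best = min(consistent)
        return best(history)        # predicted next observation

    print(guess([0, 1, 0, 1]))      # the "alternate" hypothesis wins: predicts 0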


  Will Pearson




Re: [agi] The Necessity of Embodiment

2008-08-11 Thread William Pearson
2008/8/11 Mike Tintner [EMAIL PROTECTED]:
 Will: thought you meant rational as applied to the system builder :P
 Consistency of systems is overrated, as far as I am concerned.
 Consistency is only important if it ever the lack becomes exploited. A
 system that alter itself to be consistent after the fact is
 sufficient.

 Do you remember when I wrote this?

 http://www.mail-archive.com/agi@v2.listbox.com/msg07233.html

 What parts of it suggest a fixed and totalitarian system to you?


 WIll,

 I didn't  still don't quite understand your ideas there. You need to give
 some examples of how they might apply to particular problems.The fact that a
 program/set of programs can change v. radically - and even engage opposite
 POV's - doesn't necessarily mean it isn't still a totalitarian system.

My ideas, at present, don't as such apply to particular problems. They
apply to the shaping of the system. Asking how they apply would make as
much sense as asking how the method of voting in a country applies to
solving the national debt, or how the monetary system applies to
transporting a person from A to B. Now, at some point I or someone else
will have to try to solve the practical problems, but if the system
allows the analogues of vote rigging or free money in some fashion,
then even if I set it up with the right non-totalitarian methods it
could still all go horribly wrong. So the shaping system is more
fundamental and needs to be solved first.

  Will Pearson




Re: [agi] The Necessity of Embodiment

2008-08-10 Thread William Pearson
2008/8/10 Mike Tintner [EMAIL PROTECTED]:
 Just as you are in a rational, specialist way picking off isolated features,
 so, similarly, rational, totalitarian thinkers used to object to the crazy,
 contradictory complications of the democratic, conflict system of
 decisionmaking by contrast with their pure ideals. And hey, there *are*
 crazy and inefficient features - it's a real, messy system. But, as a
 whole, it works better than any rational, totalitarian, non-conflict system.
 Cog sci can't yet explain why, though, can it? (You guys, without realising
 it, are all rational, totalitarian systembuilders).



All? I'm a rational, economically minded system builder, thank you
very much. I can't answer the questions you want answered, like how my
system will reason with imagination, precisely because I am not a
totalitarian. If you wish to be non-totalitarian you have to set up a
system in a certain way and let the dynamics you set up potentially
transform the system into something that can reason as you want.

Theoretically the system could be set up to reason as you want
straight away. But setting up a baby-level system seems orders of
magnitude easier than expecting it to solve problems straight away, and
it doesn't require exact knowledge of the inner workings of mature
imagination.

The more you ask for early results from systems, the more likely you
are to get totalitarians building your machines, because they can get
results quickly.

  Will Pearson




[agi] Definition of Pattern?

2008-08-03 Thread William Pearson
Is there a mathematical, Wikipedia-sized definition of a Goertzelian
pattern out there?

It would make assessing the underpinnings of OpenCog Prime easier.

  Will Pearson




Re: [agi] The exact locus of the supposed 'complexity'

2008-08-03 Thread William Pearson
2008/8/3 Richard Loosemore [EMAIL PROTECTED]:
 I probably don't need to labor the rest of the story, because you have heard
 it before.  If there is a brick wall between the overall behavior of the
 system and the design choices that go into it  -  if it is impossible to go
 from 'I want the system to behave like [that]' to 'therefore I need to make
 [this] choice of design at the low level'  - then all the stuff about using
 intuition to sense the right design would go out the window.  This is why
 the conversation yesterday about what John Conway actually did when he came
 up with Game of Life was so important:  the documentary evidence suggests
 that what he and his team did was just blind search.  Other people have
 tried to assert that he used mathematical intuition.  The complex systems
 community would say that in almost all projects like the one Conway
 undertook, there would be absolutely no choice whatsoever but to do a blind
 search.


Might it be worth setting people a challenge? Set them the task of
building a complex system with a certain property, or maybe a few
(nothing too bad, perhaps selecting a rule number from something akin
to Wolfram's numbering). They give reasons why they picked the rules
they did, and you see whether they do better than an RNG at picking the
correct number. You appear to be going against a strong intuition here,
so giving people a practical experiment they can run on themselves
might be worthwhile.
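
For reference, a minimal sketch (Python, my own, illustrative only) of
the Wolfram-style numbering of elementary cellular automaton rules,
the kind of space such a challenge could draw from:

    def step(cells, rule_number):
        rule = [(rule_number >> i) & 1 for i in range(8)]   # the 8-bit rule table
        n = len(cells)
        return [rule[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
                for i in range(n)]

    row = [0] * 15 + [1] + [0] * 15
    for _ in range(5):
        print("".join("#" if c else "." for c in row))
        row = step(row, 110)        # rule 110, a famously complex rule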

  Will Pearson




What does it do? useful in AGI? Re: [agi] US PATENT ISSUED for the TEN ETHICAL LAWS OF ROBOTICS

2008-07-23 Thread William Pearson
2008/7/22 Mike Archbold [EMAIL PROTECTED]:
 It looks to me to be borrowed from Aristotle's ethics.  Back in my college
 days, I was trying to explain my project and the professor kept
 interrupting me to ask:  What does it do?  Tell me what it does.  I don't
 understand what your system does.  What he wanted was
 input-function-output.
 He didn't care about my fancy data structure or architecture goals, he
 just wanted to know what it DID.


I have come across this a lot. And while it is a very useful heuristic
for sniffing out bad ideas that don't do anything, I also think it is
harmful to certain other endeavours. Imagine this hypothetical
conversation between Turing and someone else (please ignore all
historical inaccuracies).

Sceptic: Hey Turing, how is it going. Hmm, what are you working on at
the moment?
Turing: A general purpose computing machine.
Sceptic: I'm not really sure what you mean by computing. Can you give
me an example of something it does?
Turing: Well you can use it to calculate differential equations.
Sceptic: So it is a calculator, we already have machines that can do that.
Turing: Well it can also be a chess player.
Sceptic: Wait, what? How can something be a chess player and a calculator?
Turing: Well it isn't both at the same time, but you can reconfigure
it to do one then the other.
Sceptic: If you can reconfigure something, that means it doesn't
intrinsically do one or the other. So what does the machine do itself?
Turing: Well, err, nothing.

I think the quest for general intelligence (if we are to keep any
meaning in the word general) will be hindered by trying to pin
down what candidate systems do, in the same way general computing
would be.

I think the question required in AGI, to fill the gap formed by not
allowing that one, is: How does it change?

  Will




Re: Location of goal/purpose was Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-15 Thread William Pearson
2008/7/14 Terren Suydam [EMAIL PROTECTED]:

 Will,

 --- On Fri, 7/11/08, William Pearson [EMAIL PROTECTED] wrote:
 Purpose and goal are not intrinsic to systems.

 I agree this is true with designed systems.

And I would also say the same of evolved systems. My fingers' purpose
could equally well be said to be picking ticks out of the hair of my kin
or touch typing. E.g. why do I keep my fingernails short? So that
they do not impede my typing. The purpose of gut bacteria is to help
me digest my food. The purpose of part of my brain is to do
differentiation of functions, because I have .

 The designed system is ultimately an extension of the designer's mind, 
 wherein lies the purpose.

Oddly enough, that is what I want the system to be: an extension
of my brain.

 Of course, as you note, the system in question can serve multiple purposes,
 each of which lies in the mind of some other observer. The same is true of
 your system, even though its behavior may evolve. Your button is what tethers
 its purpose to your mind.


 On the other hand, we can create simulations in which purpose is truly 
 emergent. To support emergence our design must support large-scale, (global) 
 interactions of locally specified entities. Conway's Game of Life is an 
 example of such a system - what is its purpose?

To provide an interesting system for researchers studying cellular
automata? ;) I think I can see your point: it has no practical purpose
as such, just a research purpose.

It certainly wasn't specified.

And neither am I specifying the purpose of mine! I'm quite happy to
hook up the button to something I press when I feel like it. I could
decide the purpose of the system was to learn and be good at
backgammon one day, in which case my presses would reflect that, or I
could decide the purpose of the system was to search the web.

If you want a good analogy for how emergent I want the system to be,
imagine someone came along to one of your life simulations and
interfered with it to give some more food to some of the entities that
he liked the look of. This wouldn't be anything so crude as specifying
the fitness function or doing artificial breeding, but it would tilt the
scales in favour of entities that he liked, all else being equal. Would
this invalidate the whole simulation because he interfered and brought
some of his purpose into it? If so, I don't see why.

 The simplest answer is probably that it has none. But what if our design of 
 the local level was a little more interesting, such that at the global level, 
 we would eventually see self-sustaining entities that reproduced, competed 
 for resources, evolved, etc, and became more complex over a large number of 
 iterations?

Then the system itself still wouldn't have a practical purpose. For a
system Y to have a purpose, you have to be able to say that part X is
the way it is so that Y can perform its function. Internal state
corresponding to the entities might be said to have purpose, but not
the system as a whole.

 Whether that's possible is another matter, but assuming for the moment it 
 was, the purpose of that system could be defined in roughly the same way as 
 trying to define the purpose of life itself.

We have to be careful here. Which meaning of the word life are you using?

1) The biosphere + evolution
2) An individual's existence

The first has no purpose. You can never look at the biosphere and
figure out what bits are for what in the grander scheme of things, or
ask yourself what mutations are likely to be thrown up to better
achieve its goal. That we have some self-regulation on the Gaian scale
is purely anthropic; biospheres without it would likely have driven
themselves to a state unable to support life. An individual entity
has a purpose, though. So to that extent the purposeless can create
the purposeful.

 So unless you believe that life was designed by God (in which case the 
 purpose of life would lie in the mind of God), the purpose of the system is 
 indeed intrinsic to the system itself.


I think I would still say it didn't have a purpose. If I get your meaning right.

   Will




Re: [agi] Is clustering fundamental?

2008-07-09 Thread William Pearson
2008/7/6 Abram Demski [EMAIL PROTECTED]:
 In fact, adding hidden predicates and entities in the case of Markov
 logic makes the space of models Turing-complete (and even bigger than
 that if higher-order logic is used). But if I am not mistaken the
 clustering used in the paper I refer to is not that powerful. So the
 question is: is clustering in general powerful enough for AGI? Is it
 fundamental to how minds can and should work?


I would say very important, but not fundamental.

Consider the square/rectangle problem.

You are given a number of pairs of numbers and you want to somehow say
that pairs with two equal numbers are in one class (squares) and
pairs with different numbers are rectangles. Imagine you have to learn
to eat squares but not rectangles.

However, most clustering methods, while they could represent the cluster
of squares, would require a lot of samples to get a long thin cluster
running up the x = y line. If there were another dimension called z,
which was 1 when x equalled y, clustering would be very easy. Where
does z come from? And why not z = 1 if x - 9 = y^2? Finding decent
dimensions to cluster on is a tricky problem.

So I think the process by which the dimensions of clustering problems
are created is more fundamental than clustering itself.
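
To make the square example above concrete, a toy sketch (Python, my
own, illustrative only): the invented dimension z makes the split
trivial, and the hard problem is where z comes from, not using it.

    import random

    shapes = []
    for _ in range(10):
        x = random.randint(1, 90)
        shapes.append((x, x))                          # a square
        shapes.append((x, x + random.randint(1, 10)))  # a rectangle, y != x

    for x, y in shapes:
        z = 1 if x == y else 0                         # the extra dimension
        print((x, y, z), "square" if z else "rectangle")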

  Will Pearson




Re: Formal proved code change vs experimental was Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-07 Thread William Pearson
2008/7/3 Steve Richfield [EMAIL PROTECTED]:
 William and Vladimir,

 IMHO this discussion is based entirely on the absence of any sort of
 interface spec. Such a spec is absolutely necessary for a large AGI project
 to ever succeed, and such a spec could (hopefully) be wrung out to at least
 avoid the worst of the potential traps.

And if you want the interface to be upgradeable or alterable, what
then? This conversation was based on the ability to change as much of
the functional and learning parts of the system as possible.

 Will Pearson




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-04 Thread William Pearson
Terren,

 Remember when I said that a purpose is not the same thing
 as a goal?
 The purpose that the system might be said to have embedded
 is
 attempting to maximise a certain signal. This purpose
 presupposes no
 ontology. The fact that this signal is attached to a human
 means the
 system as a whole might form the goal to try and please the
 human. Or
 depending on what the human does it might develop other
 goals. Goals
 are not the same as purposes. Goals require the intentional
 stance,
 purposes the design.

 To the extent that purpose is not related to goals, it is a meaningless term. 
 In what possible sense is it worthwhile to talk about purpose if it doesn't 
 somehow impact what an intelligent actually does?

Does the following make sense? The purpose embedded within the system
will be to try to keep the system from decreasing in its ability to
receive some abstract number.

The way I connect up the abstract number to the real world will then
govern what goals the system is likely to develop (along with the
initial programming). That is, there is some connection, but it is
tenuous and I don't have to specify an ontology.
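
As an illustrative sketch only (Python, my own reading of the idea,
hypothetical names): the only thing built in is a scalar signal, and
what it is wired to (a human's button, a backgammon score, ...) stays
outside the system.

    class System:
        def __init__(self, behaviours):
            self.credit = {b: 1.0 for b in behaviours}
            self.last = None

        def step(self, observation, signal):
            # Route the raw number to whatever behaviour acted last; no
            # assumption anywhere about what the signal "means".
            if self.last is not None:
                self.credit[self.last] += signal
            self.last = max(self.credit, key=self.credit.get)
            return self.last(observation)

    agent = System([lambda obs: "explore", lambda obs: "exploit"])
    print(agent.step("anything", 0.0), agent.step("anything", 1.0))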

  Will




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-03 Thread William Pearson
2008/7/3 Terren Suydam [EMAIL PROTECTED]:

 --- On Wed, 7/2/08, William Pearson [EMAIL PROTECTED] wrote:
 Evolution! I'm not saying your way can't work, just
 saying why I short
 cut where I do. Note a thing has a purpose if it is useful
 to apply
 the design stance* to it. There are two things to
 differentiate
 between, having a purpose and having some feedback of a
 purpose built
 in to the system.

 I don't believe evolution has a purpose. See Hod Lipson's TED talk for an 
 intriguing experiment in which replication is an inevitable outcome for a 
 system of building blocks explicitly set up in a random fashion. In other 
 words, purpose is emergent and ultimately in the mind of the beholder.

 See this article for an interesting take that increasing complexity is a 
 property of our laws of thermodynamics for non-equilibrium systems:

 http://biology.plosjournals.org/perlserv/?request=get-documentdoi=10.1371/journal.pbio.0050142ct=1

 In other words, Darwinian evolution is a special case of a more basic kind of 
 selection based on the laws of physics. This would deprive evolution of any 
 notion of purpose.


Evolution doesn't have a purpose; it creates things with purpose,
where having a purpose means it is useful to apply the design stance
to the thing, e.g. to ask what a frog's eye is for.

 It is the second I meant, I should have been more specific.
 That is to
 apply the intentional stance to something successfully, I
 think a
 sense of its own purpose is needed to be embedded in that
 entity (this
 may only be a very crude approximation to the purpose we
 might assign
 something looking from an evolution eye view).

 Specifying a system's goals is limiting in the sense that we don't force the 
 agent to construct its own goals based on it own constructions. In other 
 words, this is just a different way of creating an ontology. It narrows the 
 domain of applicability. That may be exactly what you want to do, but for AGI 
 researchers, it is a mistake.

Remember when I said that a purpose is not the same thing as a goal?
The purpose that the system might be said to have embedded is
attempting to maximise a certain signal. This purpose presupposes no
ontology. The fact that this signal is attached to a human means the
system as a whole might form the goal of trying to please the human,
or, depending on what the human does, it might develop other goals.
Goals are not the same as purposes: goals require the intentional
stance, purposes the design stance.

 Also your way we will end up with entities that may not be
 useful to
 us, which I think of as a negative for a long costly
 research program.

  Will

 Usefulness, again, is in the eye of the beholder. What appears not useful 
 today may be absolutely critical to an evolved descendant. This is a popular 
 explanation for how diversity emerges in nature, that a virus or bacteria 
 does some kind of horizontal transfer of its genes into a host genome, and 
 that gene becomes the basis for a future adaptation.

 When William Burroughs said language is a virus, he may have been more 
 correct than he knew. :-]



Possibly, but it will be another huge research topic to actually talk
to the things that evolve in the artificial universe, as they will
share very little background knowledge or ontology with us. I wish you
luck and will be interested to see where you go, but the alife route is
just too slow and resource-intensive for my liking.

  Will




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-03 Thread William Pearson
2008/7/2 Vladimir Nesov [EMAIL PROTECTED]:
 On Thu, Jul 3, 2008 at 12:59 AM, William Pearson [EMAIL PROTECTED] wrote:
 2008/7/2 Vladimir Nesov [EMAIL PROTECTED]:
 On Wed, Jul 2, 2008 at 9:09 PM, William Pearson [EMAIL PROTECTED] wrote:
 They would get less credit from the human supervisor. Let me expand on
 what I meant about the economic competition. Let us say vmprogram A
 makes a copy of itself, called A', with some purposeful tweaks, trying
 to make itself more efficient.

 So, this process performs optimization, A has a goal that it tries to
 express in form of A'. What is the problem with the algorithm that A
 uses? If this algorithm is stupid (in a technical sense), A' is worse
 than A and we can detect that. But this means that in fact, A' doesn't
 do its job and all the search pressure comes from program B that ranks
 the performance of A or A'. This
 generate-blindly-or-even-stupidly-and-check is a very inefficient
 algorithm. If, on the other hand, A happens to be a good program, then
 A' has a good change of being better than A, and anyway A has some
 understanding of what 'better' means, then what is the role of B? B
 adds almost no additional pressure, almost everything is done by A.

 How do you distribute the optimization pressure between generating
 programs (A) and checking programs (B)? Why do you need to do that at
 all, what is the benefit of generating and checking separately,
 compared to reliably generating from the same point (A alone)? If
 generation is not reliable enough, it probably won't be useful as
 optimization pressure anyway.


 The point of A and A' is that A', if better, may one day completely
 replace A. What is very good? Is 1 in 100 chances of making a mistake
 when generating its successor very good? If you want A' to be able to
 replace A, that is only 100 generations before you have made a bad
 mistake, and then where do you go? You have a bugged program and
 nothing to act as a watchdog.

 Also if A' is better than time A at time t, there is no guarantee that
 it will stay that way. Changes in the environment might favour one
 optimisation over another. If they both do things well, but different
 things then both A and A' might survive in different niches.


 I suggest you read ( http://sl4.org/wiki/KnowabilityOfFAI )
 If your program is a faulty optimizer that can't pump the reliability
 out of its optimization, you are doomed. I assume you argue that you
 don't want to include B in A, because a descendant of A may start to
 fail unexpectedly.

Nope. I don't include B in A because if A' is faulty it can cause
problems for whatever is in the same vmprogram as it, by overwriting
memory locations. A' being a separate vmprogram means it is insulated
from B and A, and can only have a limited impact on them.

I don't get what your obsession with having everything be in one
program is anyway. Why is that better? I'll read Knowability of FAI
again, but I have read it before and I don't think it will enlighten
me. I'll come back to the rest of your email once I have done that.

  Will Pearson




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-03 Thread William Pearson
2008/7/3 Vladimir Nesov [EMAIL PROTECTED]:
 On Thu, Jul 3, 2008 at 10:45 AM, William Pearson [EMAIL PROTECTED] wrote:

 Nope. I don't include B in A because if A' is faulty it can cause
 problems to whatever is in the same vmprogram as it, by overwriting
 memory locations. A' being a separate vmprogram means it is insulated
 from the B and A, and can only have limited impact on them.

 Why does it need to be THIS faulty? If there is a known method to
 prevent such faultiness, it can be reliably implemented in A, so that
 all its descendants keep it, unless they are fairly sure it's not
 needed anymore or there is a better alternative.

Because it is dealing with powerful stuff: when it gets it wrong, it
goes wrong powerfully. You could lock the experimental code away in a
sandbox inside A, but then it would be a separate program, just one
inside A, and it might not be able to interact with other programs in a
way that lets it do its job.

There are two grades of faultiness: frequency and severity. You cannot
predict the severity of faults of arbitrary programs (and accepting
arbitrary programs from the outside world is something I want the
system to be able to do, after vetting etc.).


 I don't get what your obsession is with having things all be in one
 program is anyway. Why is that better? I'll read knowability of FAI
 again, but I have read it before and I don't think it will enlighten
 me. I'll come back to the rest of your email once I have done that.

 It's not necessarily better, but I'm trying to make explicit in what
 sense is it worse, that is what is the contribution of your framework
 to the overall problem, if virtually the same thing can be done
 without it.


I'm not sure why you see this distinction as being important, though. I
call the vmprograms separate because they have some protection around
them, but you could see them as all one big program if you wanted. The
instructions don't care whether we call the whole set of operations a
program or not. From one point of view that is true, at least while
it is being simulated: the whole VM is one program inside a larger
system.

  Will Pearson




Formal proved code change vs experimental was Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-03 Thread William Pearson
Sorry about the long thread jack

2008/7/3 Vladimir Nesov [EMAIL PROTECTED]:
 On Thu, Jul 3, 2008 at 4:05 PM, William Pearson [EMAIL PROTECTED] wrote:
 Because it is dealing with powerful stuff, when it gets it wrong it
 goes wrong powerfully. You could lock the experimental code away in a
 sand box inside A, but then it would be a separate program just one
 inside A, but it might not be able to interact with programs in a way
 that it can do its job.

 There are two grades of faultiness. frequency and severity. You cannot
 predict the severity of faults of arbitrary programs (and accepting
 arbitrary programs from the outside world is something I want the
 system to be able to do, after vetting etc).


 You can't prove any interesting thing about an arbitrary program. It
 can behave like a Friendly AI before February 25, 2317, and like a
 Giant Cheesecake AI after that.

Whoever said you could? The whole system is designed around the
ability to take in or create arbitrary code, give it only minimal
access to other programs (access it can earn), and lock it out of that
ability when it does something bad.

By arbitrary code I don't mean random code; I mean code that has not
formally been proven to have the properties you want. Formal proof is
too high a burden to place on things that you want to win. You might
not have the right axioms to prove that the changes you want are right.

Instead you can see the internals of the system as a form of
continuous experiment. B is always testing a property of A or A'; if
at any time it stops having the property that B looks for, then B flags
it as buggy.
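
A sketch of what I mean (Python, my own, illustrative only): B is a
runtime monitor, not a prover. It never certifies A' correct; it just
keeps checking a property and pulls the plug when the property is lost.

    def watchdog_B(candidate_A, has_property, trials):
        for trial in trials:
            if not has_property(trial, candidate_A(trial)):
                return "flagged as buggy"
        return "still trusted (so far)"

    # e.g. the property "the output is a sorted version of the input"
    print(watchdog_B(sorted, lambda xs, ys: ys == sorted(xs), [[3, 1, 2], [5, 4]]))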

I know this doesn't have the properties you would look for in a
Friendly AI set to dominate the world. But I think it is similar to
the way humans work, and will be as chaotic and hard to grok as our
neural structure. So it is about as likely as humans are to explode
intelligently.

  Will Pearson




Re: [agi] Re: Theoretic estimation of reliability vs experimental

2008-07-03 Thread William Pearson
2008/7/3 Vladimir Nesov [EMAIL PROTECTED]:
 On Thu, Jul 3, 2008 at 9:36 PM, William Pearson [EMAIL PROTECTED] wrote:
 Sorry about the long thread jack

 2008/7/3 Vladimir Nesov [EMAIL PROTECTED]:
 On Thu, Jul 3, 2008 at 4:05 PM, William Pearson [EMAIL PROTECTED] wrote:
 Because it is dealing with powerful stuff, when it gets it wrong it
 goes wrong powerfully. You could lock the experimental code away in a
 sand box inside A, but then it would be a separate program just one
 inside A, but it might not be able to interact with programs in a way
 that it can do its job.

 There are two grades of faultiness. frequency and severity. You cannot
 predict the severity of faults of arbitrary programs (and accepting
 arbitrary programs from the outside world is something I want the
 system to be able to do, after vetting etc).


 You can't prove any interesting thing about an arbitrary program. It
 can behave like a Friendly AI before February 25, 2317, and like a
 Giant Cheesecake AI after that.

 Whoever said you could? The whole system is designed around the
 ability to take in or create arbitrary code, give it only minimal
 access to other programs that it can earn and lock it out from that
 ability when it does something bad.

 By arbitrary code I don't mean random, I mean stuff that has not
 formally been proven to have the properties you want. Formal proof is
 too high a burden to place on things that you want to win. You might
 not have the right axioms to prove the changes you want are right.

 Instead you can see the internals of the system as a form of
 continuous experiments. B is always testing a property of A or  A', if
 at any time it stops having the property that B looks for then B flags
 it as buggy.

 The point isn't particularly about formal proof, but more about any
 theoretic estimation of reliability and optimality. If you produce an
 artifact A' and theoretically estimate that probability of it working
 correctly is such that you don't expect it to fail in 10^9 years, you
 can't beat this reliability with a result of experimental testing.
 Thus, if theoretic estimation is possible (and it's much more feasible
 for purposefully designed A' than for arbitrary A'), experimental
 testing has vanishingly small relevance.

This, I think, is a wild goose chase, which is why I am not following
it. Why won't the estimation system run out of steam, like Lenat's
Automated Mathematician?


 I know this doesn't have the properties you would look for in a
 friendly AI set to dominate the world. But I think it is similar to
 the way humans work, and will be as chaotic and hard to grok as our
 neural structure. So as likely as humans are to explode intelligently.


 Yes, one can argue that AGI of minimal reliability is sufficient to
 jump-start singularity (it's my current position anyway, Oracle AI),
 but the problem with faulty design is not only that it's not going to
 be Friendly, but that it isn't going to work at all.

By what principles do you think humans develop their intellects? I
don't seem to be made of processes that probabilistically guarantee
that I will work better tomorrow than I did today. How do you explain
blind people developing echolocation, or specific brain areas becoming
specialised for reading braille?

  Will




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-02 Thread William Pearson
Sorry about the late reply.

snip some stuff sorted out

2008/6/30 Vladimir Nesov [EMAIL PROTECTED]:
 On Tue, Jul 1, 2008 at 2:02 AM, William Pearson [EMAIL PROTECTED] wrote:
 2008/6/30 Vladimir Nesov [EMAIL PROTECTED]:

 If internals are programmed by humans, why do you need automatic
 system to assess them? It would be useful if you needed to construct
 and test some kind of combination/setting automatically, but not if
 you just test manually-programmed systems. How does the assessment
 platform help in improving/accelerating the research?


Because to be interesting the human-specified programs need to be
autogenous, in Josh Storrs Hall's terminology, which means
self-building: capable of altering the stuff they are made of, in this
case the machine-code equivalent. So you need the human to assess the
improvements the system makes, for whatever purpose the human wants
the system to perform.


 Altering the stuff they are made of is instrumental to achieving the
 goal, and should be performed where necessary, but it doesn't happen,
 for example, with individual brains.

I think it happens at the level of neural structures. I.e. I think
neural structures control the development of other neural structures.

 (I was planning to do the next
 blog post on this theme, maybe tomorrow.) Do you mean to create
 population of altered initial designs and somehow select from them (I
 hope not, it is orthogonal to what modification is for in the first
 place)? Otherwise, why do you still need automated testing? Could you
 present a more detailed use case?


I'll try and give a fuller explanation later on.


 This means he needs to use a bunch more resources to get a singular
 useful system. Also the system might not do what he wants, but I don't
 think he minds about that.

 I'm allowing humans to design everything, just allowing the very low
 level to vary. Is this clearer?

 What do you mean by varying low level, especially in human-designed systems?

 The machine code the program is written in. Or in a java VM, the java 
 bytecode.


 This still didn't make this point clearer. You can't vary the
 semantics of low-level elements from which software is built, and if
 you don't modify the semantics, any other modification is superficial
 and irrelevant. If it's not quite 'software' that you are running, and
 it is able to survive the modification of lower level, using the terms
 like 'machine code' and 'software' is misleading. And in any case,
 it's not clear what this modification of low level achieves. You can't
 extract work from obfuscation and tinkering, the optimization comes
 from the lawful and consistent pressure in the same direction.


Okay, let us clear things up. There are two things that need to be
designed: a computer architecture or virtual machine, and the programs
that form the initial set of programs within the system. Let us call the
internal programs vmprograms to avoid confusion. The vmprograms should
do all the heavy lifting (reasoning, creating new programs); this is
where the lawful and consistent pressure would come from.

It is the source code of the vmprograms that all needs to be changeable.

However, the pressure will have to be somewhat experimental to be
powerful; you don't know what bugs a new program will have (if you are
doing a non-tight proof search through the space of programs). So the
point of the VM is to provide a safety net. If an experiment goes
awry, the VM should allow each program to limit the bugged vmprogram's
ability to affect it, and eventually to have it removed and the
resources applied to it reclaimed.

Here is a toy scenario where the system needs this ability. *Note it
is not anything like a full AI, but it illustrates a facet of something
a full AI needs, IMO.*

Consider a system trying to solve a task, e.g. navigating a maze, where
there are also a number of different people out there giving helpful
hints on how to solve it. These hints are in the form of patches to
the vmprograms, e.g. changing the representation to a 6-dimensional
one, or supplying another patch language that allows better patches. So
the system would make copies of the part of itself to be patched and
then apply the patch. Now you could add a patch-evaluation module to see
which patch works best, but what would happen if the vmprogram that
implemented that evaluation itself wanted to be patched? My solution to
the problem is to allow the patched and non-patched versions to compete
in the ad hoc economic arena, and see which one wins.
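
A toy sketch of the arena (Python, my own, hypothetical names, not the
actual design): both versions stay in, paying rent for resources and
banking the scalar reward; whichever goes broke loses its resources.

    def run_arena(vmprograms, trials, reward, starting_credit=10.0, rent=1.0):
        credit = {name: starting_credit for name in vmprograms}
        for trial in trials:
            for name, solver in vmprograms.items():
                if credit[name] <= 0:
                    continue                    # broke programs get no more cycles
                credit[name] += reward(solver(trial)) - rent
        return {name: c for name, c in credit.items() if c > 0}   # the survivors

    # e.g. survivors = run_arena({"original": solve, "patched": solve_patched},
    #                            mazes, reward)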

Does this clear things up?

 Will Pearson




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-02 Thread William Pearson
2008/7/2 Terren Suydam [EMAIL PROTECTED]:

 Mike,

 This is going too far. We can reconstruct to a considerable
 extent how  humans think about problems - their conscious thoughts.

 Why is it going too far?  I agree with you that we can reconstruct thinking, 
 to a point. I notice you didn't say we can completely reconstruct how humans 
 think about problems. Why not?

 We have two primary means for understanding thought, and both are deeply 
 flawed:

 1. Introspection. Introspection allows us to analyze our mental life in a 
 reflective way. This is possible because we are able to construct mental 
 models of our mental models. There are three flaws with introspection. The 
 first, least serious flaw is that we only have access to that which is 
 present in our conscious awareness. We cannot introspect about unconscious 
 processes, by definition.

 This is a less serious objection because it's possible in practice to become 
 conscious of phenomena that were previously unconscious, by developing our 
 meta-mental-models. The question here becomes, is there any reason in 
 principle that we cannot become conscious of *all* mental processes?

 The second flaw is that, because introspection relies on the meta-models we 
 need to make sense of our internal, mental life, the possibility is always 
 present that our meta-models themselves are flawed. Worse, we have no way of 
 knowing if they are wrong, because we often unconsciously, unwittingly deny 
 evidence contrary to our conception of our own cognition, particularly when 
 it runs counter to a positive account of our self-image.

 Harvard's Project Implicit experiment 
 (https://implicit.harvard.edu/implicit/) is a great way to demonstrate how we 
 remain ignorant of deep, unconscious biases. Another example is how little we 
 understand the contribution of emotion to our decision-making. Joseph Ledoux 
 and others have shown fairly convincingly that emotion is a crucial part of 
 human cognition, but most of us (particularly us men) deny the influence of 
 emotion on our decision making.

 The final flaw is the most serious. It says there is a fundamental limit to 
 what introspection has access to. This is the 'an eye cannot see itself' 
 objection. 'But I can see my eyes in the mirror,' says the devil's advocate. Of 
 course, a mirror lets us observe a reflected version of our eye, and this is 
 what introspection is. But we cannot see inside our own eye, directly - it's 
 a fundamental limitation of any observational apparatus. Likewise, we cannot 
 see inside the very act of model-simulation that enables introspection. 
 Introspection relies on meta-models, or models about models, which are 
 activated/simulated *after the fact*. We might observe ourselves in the act 
 of introspection, but that is nothing but a meta-meta-model. Each 
 introspectional act by necessity is one step (at least) removed from the 
 direct, in-the-present flow of cognition. This means that we can never 
 observe the cognitive machinery that enables the act of introspection itself.

 And if you don't believe that introspection relies on cognitive machinery 
 (maybe you're a dualist, but then why are you on an AI list? :-), ask 
 yourself why we can't introspect about ourselves before a certain point in 
 our young lives. It relies on a sufficiently sophisticated toolset that 
 requires a certain amount of development before it is even possible.

 2. Theory. Our theories of cognition are another path to understanding, and 
 much of theory is directly or indirectly informed by introspection. When 
 introspection fails (as in language acquisition), we rely completely on 
 theory. The flaw with theory should be obvious. We have no direct way of 
 testing theories of cognition, since we don't understand the connection 
 between the mental and the physical. At best, we can use clever indirect 
 means for generating evidence, and we usually have to accept the limits of 
 reliability of subjective reports.


My plan is to go for 3) Usefulness. Cognition is useful from an
evolutionary point of view; if we try to create systems that are useful
in the same situations (social interaction, building world models), then
we might one day stumble upon cognition.

To expand on usefulness in social contexts, you have to ask yourself
what the point of language is, and why it is useful in an evolutionary
setting. One thing the point of language is not is fooling humans into
thinking you are human, which is why all the chatbots that get coverage
as AI annoy me.

I'll write more on this later.

This, by the way, is why I don't self-organise purpose. I am pretty sure
a specified purpose (not at all the same thing as a goal) is needed for
an intelligence.

  Will



Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-02 Thread William Pearson
2008/7/2 Vladimir Nesov [EMAIL PROTECTED]:
 On Wed, Jul 2, 2008 at 2:48 PM, William Pearson [EMAIL PROTECTED] wrote:

 Okay let us clear things up. There are two things that need to be
 designed, a computer architecture or virtual machine and programs that
 form the initial set of programs within the system. Let us call the
 internal programs vmprograms to avoid confusion.The vmprograms should
 do all the heavy lifting (reasoning, creating new programs), this is
 where the lawlful and consistent pressure would come from.

 It is at source code of vmprograms that all needs to be changeable.

 However the pressure will have to be somewhat experimental to be
 powerful, you don't know what bugs a new program will have (if you are
 doing a non-tight proof search through the space of programs). So the
 point of the VM is to provide a safety net. If an experiment goes
 awry, then the VM should allow each program to limit the bugged
 vmprograms ability to affect it and eventually have it removed and the
 resources applied to it.

 Here is a toy scenario where the system needs this ability. *Note it
 is not anything that is like a full AI but illustrates a facet of
 something a full AI needs IMO*.

 Consider a system trying to solve a task, e.g. navigate a maze, that
 also has a number of different people out there giving helpful hints
 on how to solve the maze. These hints are in the form of patches to
 the vmprograms, e.g. changing the representation to 6-dimensional,
 giving another patch language that has better patches. So the system
 would make copies of the part of it to be patched and then patch it.
 Now you could give a patch evaluation module to see which patch works
 best, but what would happen if the module that implemented that
 vmprogram wanted to be patched? My solution to the problem is to allow
 the patch and non-patched version compete in the adhoc economic arena,
 and see which one wins.


 What are the criteria that VM applies to vmprograms? If VM just
 shortcircuits the economic pressure of agents to one another, it in
 itself doesn't specify the direction of the search. The human economy
 works to efficiently satisfy the goals of human beings who already
 have their moral complexity. It propagates the decisions that
 customers make, and fuels the allocation of resources based on these
 decisions. Efficiency of economy is in efficiency of responding to
 information about human goals. If your VM just feeds the decisions on
 themselves, what stops the economy from focusing on efficiently doing
 nothing?

They would get less credit from the human supervisor. Let me expand on
what I meant about the economic competition. Let us say vmprogram A
makes a copy of itself, called A', with some purposeful tweaks intended
to make it more efficient.

A' has some bugs, such that the human notices something wrong with the
system and gives less credit on average each time A' is helping out
rather than A.

Now A and A' both have to bid for the chance to help program B, which is
closer to the output (due to the programming of B), and B pays back a
proportion of the credit it gets. The credit B gets will be lower when
A' is helping than when A is helping, so A' will in general get less
than A. There are a few scenarios, ordered from quickest acting to
slowest.

1) B keeps records of who helps him and sees that A' is not helping him
as well as the average, so he no longer lets A' bid. The resources of A'
get taken over when it can't keep up the bidding for them.
2) A' keeps bidding a lot, to outbid A. However, the average amount A'
pays out in bids is more than it gets back from B. A' bankrupts itself
and other programs use its resources.
3) A' doesn't manage to outbid A after a fair few trials, so it meets
the same fate as in scenario 1).

If you start with a bunch of stupid vmprograms, you won't get anywhere;
the system can just decay to nothing. You do have to design them fairly
well, just in such a way that the design can change later.
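
As a rough numerical illustration of the second scenario, here is a
small Python sketch. The bid size, the payback proportion and the
'quality' numbers are all made up; the point is only that a helper whose
payback falls below its bid bankrupts itself.

class Helper:
    def __init__(self, name, quality, credit=100.0):
        self.name = name
        self.quality = quality   # how useful its help is to B (standing in
                                 # for the human supervisor's judgement)
        self.credit = credit

def help_b(helper, b_income_per_task=12.0, payback=0.5, bid=5.0):
    # The helper pays its bid for the chance to help B; B's income
    # depends on how good the help was, and B pays a proportion back.
    if helper.credit < bid:
        return False             # can no longer afford to bid
    helper.credit -= bid
    helper.credit += payback * b_income_per_task * helper.quality
    return True

a = Helper("A", quality=1.0)         # earns 6.0 back per 5.0 bid: thrives
a_prime = Helper("A'", quality=0.6)  # earns 3.6 back per 5.0 bid: drains

for _ in range(100):
    help_b(a)
    help_b(a_prime)

print(a.name, round(a.credit, 1), "|", a_prime.name, round(a_prime.credit, 1))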

  Will




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-02 Thread William Pearson
2008/7/2 Abram Demski [EMAIL PROTECTED]:
 How do you assign credit to programs that are good at generating good
 children?

I never directly assign credit, apart from the first stage. The rest
of the credit assignment is handled by the vmprograms, er,
programming.


 Particularly, could a program specialize in this, so that it
 doesn't do anything useful directly but always through making highly
 useful children?

As the parent controls the code of its offspring, it could embed code in
the offspring to pass a small portion of the credit they get back to it.
It would have to be careful how much to skim off, so that the offspring
could still thrive.
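
A toy illustration of that arrangement, assuming the VM provides some
instruction by which one vmprogram can transfer credit to another (the
class and the 10% skim rate are invented for the example):

class VMProgram:
    def __init__(self, name, parent=None, skim=0.0):
        self.name = name
        self.parent = parent   # who, if anyone, gets a cut of earnings
        self.skim = skim       # fraction of income passed back to parent
        self.credit = 0.0

    def receive_credit(self, amount):
        # The parent embedded this pass-back when it wrote the child's
        # code; skim too much and the child cannot pay its own bids.
        if self.parent is not None and self.skim > 0.0:
            cut = amount * self.skim
            self.parent.credit += cut
            amount -= cut
        self.credit += amount

breeder = VMProgram("breeder")
child = VMProgram("useful-child", parent=breeder, skim=0.1)
child.receive_credit(50.0)
print(breeder.credit, child.credit)   # 5.0 45.0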

  Will




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-02 Thread William Pearson
2008/7/2 Vladimir Nesov [EMAIL PROTECTED]:
 On Wed, Jul 2, 2008 at 9:09 PM, William Pearson [EMAIL PROTECTED] wrote:
 They would get less credit from the human supervisor. Let me expand on
 what I meant about the economic competition. Let us say vmprogram A
 makes a copy of itself, called A', with some purposeful tweaks, trying
 to make itself more efficient.

 So, this process performs optimization, A has a goal that it tries to
 express in form of A'. What is the problem with the algorithm that A
 uses? If this algorithm is stupid (in a technical sense), A' is worse
 than A and we can detect that. But this means that in fact, A' doesn't
 do its job and all the search pressure comes from program B that ranks
 the performance of A or A'. This
 generate-blindly-or-even-stupidly-and-check is a very inefficient
 algorithm. If, on the other hand, A happens to be a good program, then
 A' has a good change of being better than A, and anyway A has some
 understanding of what 'better' means, then what is the role of B? B
 adds almost no additional pressure, almost everything is done by A.

 How do you distribute the optimization pressure between generating
 programs (A) and checking programs (B)? Why do you need to do that at
 all, what is the benefit of generating and checking separately,
 compared to reliably generating from the same point (A alone)? If
 generation is not reliable enough, it probably won't be useful as
 optimization pressure anyway.


The point of A and A' is that A', if better, may one day completely
replace A. What counts as very good? Is a 1 in 100 chance of making a
mistake when generating its successor very good? If you want A' to be
able to replace A, that is on average only 100 generations before you
have made a bad mistake, and then where do you go? You have a bugged
program and nothing to act as a watchdog.

Also, if A' is better than A at time t, there is no guarantee that it
will stay that way. Changes in the environment might favour one
optimisation over another. If they both do things well, but different
things, then A and A' might both survive in different niches.

I would also be interested in why you think we have programmers and
system testers in the real world.

Also worth noting is that most optimisation will be done inside the
vmprograms; this process is only for very fundamental code changes, e.g.
changing representations, biases, or ways of creating offspring, things
that cannot easily be tested any other way. I'm quite happy for it to be
slow, because this process is not where the majority of the system's
speed will rest. But the process is needed for intelligence, else you
will be stuck with certain ways of doing things when they are no longer
useful.

  Will Pearson




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-01 Thread William Pearson
2008/6/30 Terren Suydam [EMAIL PROTECTED]:

 Hi Will,

 --- On Mon, 6/30/08, William Pearson [EMAIL PROTECTED] wrote:
 The only way to talk coherently about purpose within
 the computation is to simulate self-organized, embodied
 systems.

 I don't think you are quite getting my system. If you
 had a bunch of
 programs that did the following

 1) created new programs, by trial and error and taking
 statistics of
 variables or getting arbitrary code from the outside.
 2) communicated with each other to try and find programs
 that perform
 services they need.
 3) Bid for computer resources, if a program loses its
 memory resources
 it is selected against, in a way.

 Would this be sufficiently self-organised? If not, why not?
 And the
 computer programs would be as embodied as your virtual
 creatures. They
 would just be embodied within a tacit economy, rather than
 an
 artificial chemistry.

 It boils down to your answer to the question: how are the resources 
 ultimately allocated to the programs?  If you're the one specifying it, via 
 some heuristic or rule, then the purpose is driven by you. If resource 
 allocation is handled by some self-organizing method (this wasn't clear in 
 the article you provided), then I'd say that the system's purpose is 
 self-defined.

I'm not sure how the system qualifies. It seems to be halfway between
the two definitions you gave. The programs can have special instructions
in them that bid for a specific resource with as much credit as they
want (see my recent message replying to Vladimir Nesov for more
information about banks, bidding and credit). The instructions can be
removed or not executed, and the amount of credit bid can be changed.
The credit is given to some programs by a fixed function, but they have
instructions they can execute (or not) to give it to other programs,
forming an economy. What say you, self-organised or not?

 As for embodiment, my question is, how do your programs receive input?  
 Embodiment, as I define it, requires that inputs are merely reflections of 
 state variables, and not even labeled in any way... i.e. we can't pre-define 
 ontologies. The embodied entity starts from the most unstructured state 
 possible and self-structures whatever inputs it receives.

Bits and bytes from the outside world, or bits and bytes from reading
other programs' programming and data. No particular ontology.

 That said, you may very well be doing that and be creating embodied programs 
 in this way... if so, that's cool because I hadn't considered that 
 possibility and I'll be interested to see how you fare.

It is going to take a while; virtual machine writing is very unrewarding
programming. I have other things to do right now, so I'll get back to
the rest of the message in a bit.

  Will Pearson




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread William Pearson
2008/6/30 Terren Suydam [EMAIL PROTECTED]:

 Ben,

 I agree, an evolved design has limits too, but the key difference between a 
 contrived design and one that is allowed to evolve is that the evolved
 critter's intelligence is grounded in the context of its own 'experience', 
 whereas the contrived one's intelligence is grounded in the experience of its
 creator, and subject to the limitations built into that conception of 
 intelligence. For example, we really have no idea how we arrive at spontaneous
 insights (in the shower, for example). A chess master suddenly sees the 
 game-winning move. We can be fairly certain that often, these insights are not
 the product of logical analysis. So if our conception of intelligence fails 
 to explain these important aspects, our designs based on those conceptions 
 will
 fail to exhibit them. An evolved intelligence, on the other hand, is not 
 limited in this way, and has the potential to exhibit intelligence in ways 
 we're not
 capable of comprehending.

I'm seeking to do something halfway between what you suggest (from
bacterial systems to human alife) and AI. I'd be curious to know whether
you think it would suffer from the same problems.

First, are we agreed that the von Neumann model of computing has no
hidden bias in its problem-solving capabilities? It might be able to do
some jobs more efficiently than others and need lots of memory to do
others, but it is not particularly suited to learning chess or to
running down a gazelle, which means it can be reprogrammed to do either.

However, it has no guide to what it should be doing, so it can become
virus-infested or subverted. It has a purpose, but we can't explicitly
define it. So let us try to put in the most minimal guide that we can,
so that we don't give it a specific goal, just a tendency to favour
certain activities or programs. How to do this? Form an economy based on
reinforcement signals, where those that get more reinforcement signals
can outbid the others for control of system resources.

This is obviously reminiscent of Tierra and a million and one other
alife systems. The difference is that I want the whole system to exhibit
intelligence. Any form of variation is allowed, from random mutation to
getting programs in from the outside. It should be able to change the
whole, from the OS level up, based on that variation.

I agree that we want the systems we make to be free of our design
constraints long term, that is, to eventually correct all the errors,
oversimplifications and gaps we left. But I don't see the need to go all
the way back to bacteria. Even then you would need to design the system
correctly in terms of chemical concentrations. I think both would count
as the passive approach* to helping solve the problem; yours is just
more indirect than is needed, I think.

  Will Pearson

* http://www.mail-archive.com/agi@v2.listbox.com/msg11399.html




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread William Pearson
Hello Terren

 A Von Neumann computer is just a machine. Its only purpose is to compute.
 When you get into higher-level purpose, you have to go up a level to the 
 stuff being computed. Even then, the purpose is in the mind of the programmer.

What I don't see is why your simulation gets away from this, whereas my
architecture doesn't. Read the linked post in the previous message if
you want to understand more about the philosophy of the system.

The only way to talk coherently about purpose within the computation is to 
simulate self-organized, embodied systems.

I don't think you are quite getting my system. If you had a bunch of
programs that did the following

1) created new programs, by trial and error and taking statistics of
variables or getting arbitrary code from the outside.
2) communicated with each other to try and find programs that perform
services they need.
3) bid for computer resources; if a program loses its memory resources
it is selected against, in a way.

Would this be sufficiently self-organised? If not, why not? And the
computer programs would be as embodied as your virtual creatures. They
would just be embodied within a tacit economy, rather than an
artificial chemistry.

 And I applaud your intuition to make the whole system intelligent. One of my 
 biggest criticisms of traditional AI philosophy is over-emphasis on the 
 agent. Indeed, the ideal simulation, in my mind, is one in which the boundary 
 between agent and environment is blurry.  In nature, for example, at 
 low-enough levels of description it is impossible to find a boundary between 
 the two, because the entities at that level are freely exchanged.

 You are right that starting with bacteria is too indirect, if your goal is to 
 achieve AGI in something like decades. It would certainly take an enormous 
 amount of time and computation to get from there to human-level AI and 
 beyond, perhaps a hundred years or more. But you're asking, aren't there 
 shortcuts we can take that don't limit the field of potential intelligence in 
 important ways.

If you take this attitude you have to ask yourself whether implementing
your simulation on a classical computer is not itself cutting off the
ability to create intelligence. Perhaps quantum effects are important to
whether a system can produce intelligence; protein folding probably
wouldn't be the same without them.

You have to simplify at some point. I'm going to have my system have as
many degrees of freedom to vary as a stored-program computer (or as near
as I can make it), whilst having the internal programs self-organise and
vary in ways that would make a normal stored-program computer become
unstable. Any simulation you do on a computer cannot have any more
degrees of freedom than that.

 For example, starting with bacteria means we have to let multi-cellular 
 organisms evolve on their own in a virtual geometry. That project alone is an 
 enormous challenge. So let's skip it and go right to the multi-cellular 
 design. The trouble is, our design of the multi-cellular organism is 
 limiting. Alternative designs become impossible.

What do you mean by design here? Do you mean an abstract multicellular
model, or design in the sense of what Tom Ray (you do know Tierra,
right? I'll use it as a common language) did with his first
self-replicator, by creating an artificial genome? I can see problems
with the first in restricting degrees of freedom, but with the second
the degrees of freedom are still there to be acted on by the pressures
of variation within the system. Even though Tom Ray built a certain type
of replicator, his creatures still managed to replicate in other ways;
the one I can remember is stealing other creatures' replication
machinery, as parasites do.

Let's say you started with an artificial chemistry. You could then
design a replicator within that chemistry, test it, and see if the
variation is working okay. Then design a multicellular variant by
changing its genome. It could still slip back to single-cellularity and
find a different route to multicellularity. The degrees of freedom do
not go away the second a human starts to design something (else
genetically modified foods would not be such a thorny issue); you just
have to allow the forces of variation to act upon them.

 The question at that point is, are we excluding any important possibilities 
 for intelligence if we build in our assumptions about what is necessary to 
 support it, on a low-level basis. In what ways is our designed brain leaving 
 out some key to adapting to unforeseen domains?

Just apply a patch :P Or have an architecture that is capable of
supporting a self-patching system. I have no fixed design for an AI
myself. Intelligence means winning, and winning requires flexibility.

 One of the basic threads of scientific progress is the ceaseless denigration 
 of the idea that there is something special about humans. Pretending that we 
 can solve AGI by mimicking top-down high-level human 

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread William Pearson
2008/6/30 Vladimir Nesov [EMAIL PROTECTED]:
 On Mon, Jun 30, 2008 at 10:34 PM, William Pearson [EMAIL PROTECTED] wrote:

 I'm seeking to do something half way between what you suggest (from
 bacterial systems to human alife) and AI. I'd be curious to know
 whether you think it would suffer from the same problems.

 First are we agreed that the von Neumann model of computing has no
 hidden bias to its problem solving capabilities. It might be able to
 do some jobs more efficiently than other and need lots of memory to do
 others but it is not particularly suited to learning chess or running
 down a gazelle. Which means it can be reprogrammed to do either.

 However it has no guide to what it should be doing, so can become
 virus infested or subverted. It has a purpose but we can't explicitly
 define it. So let us try and put in the most minimal guide that we can
 so we don't give it a specific goal, just a tendency to favour certain
 activities or programs.

 It is a wrong level of organization: computing hardware is the physics
 of computation, it isn't meant to implement specific algorithms, so I
 don't quite see what you are arguing.


I'm not implementing a specific algorithm; I am controlling how
resources are allocated. Currently the architecture does whatever the
kernel says, from memory allocation to IRQ allocation. Instead of this,
my architecture would allow any program to bid credit for a resource.
The one that bids the most wins and spends its credit. Certain
resources, like output memory space (i.e. if the program is controlling
the display or an arm or something), allow the program to specify a
bank, and give the program income.

A bank is a special variable that can't normally be edited by programs
but can be spent. The bank of an outputting program will be given credit
depending upon how well the system as a whole is performing. If it is
doing well, the amount of credit it gets will be above average; if
poorly, below. After a certain time the resources will need to be bid
for again, so credit is continually coming into the system and
continually being sunk.

The system will be seeded with programs that perform rudimentarily well.
E.g. you will have programs that know how to deal with visual input, and
they will bid for the video camera interrupt. They will then sell their
services for credit (so that they can bid for the interrupt again) to a
program that correlates visual and auditory responses, which sells its
services to a high-level planning module, and so on down to the arm
program that actually gets the credit.

All these modules are subject to change and re-evaluation. They merely
suggest one possible way for it to be used. It is supposed to be
ultimately flexible. You could seed it with a self-replicating neural
simulator that tried to hook its inputs and outputs up to other
neurons. Neurons would die out if they couldn't find anything to do.
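
A minimal sketch of that resource auction, with invented names and
numbers (the bank top-up rule, the resource list and the seed programs
are illustrative assumptions, not a specification):

class Program:
    def __init__(self, name, credit):
        self.name = name
        self.credit = credit   # ordinary spendable credit
        self.bank = 0.0        # special variable: only the VM tops it up

def auction(bids):
    # Give the resource to the highest affordable bid and sink the credit.
    bids = [(amount, prog) for amount, prog in bids if prog.credit >= amount]
    if not bids:
        return None
    amount, winner = max(bids, key=lambda b: b[0])
    winner.credit -= amount    # credit is sunk, not redistributed
    return winner

def reward_outputs(output_programs, system_performance, baseline=10.0):
    # Credit flows in via the banks of outputting programs, scaled by how
    # well the whole system is judged (e.g. by a human) to be doing.
    for prog in output_programs:
        prog.bank += baseline * system_performance
        prog.credit += prog.bank   # here the bank is spent immediately
        prog.bank = 0.0

vision = Program("vision", credit=20.0)
planner = Program("planner", credit=30.0)
arm = Program("arm-driver", credit=15.0)

camera_owner = auction([(8.0, vision), (3.0, planner)])   # camera interrupt
arm_owner = auction([(5.0, arm)])                         # arm output space
reward_outputs([arm_owner], system_performance=1.2)
# The arm program can now buy plans from the planner, which buys percepts
# from the vision program, and so on back up the chain.
print(camera_owner.name, arm_owner.name, round(arm_owner.credit, 1))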

 How to do this? Form and economy based on
 reinforcement signals, those that get more reinforcement signals can
 outbid the others for control of system resources.

 Where do reinforcement signals come from? What does this specification
 improve over natural evolution that needed billions of years to get
 here (that is, why do you expect any results in the forseable future)?

Most of the internals are programmed by humans, and they can be
arbitrarily complex. The feedback comes from a human, or from a utility
function, although those are harder to define. The architecture simply
doesn't restrict the degrees of freedom that the programs inside it can
explore.


 This is obviously reminiscent of tierra and a million and one other
 alife system. The difference being is that I want the whole system to
 exhibit intelligence. Any form of variation is allowed, from random to
 getting in programs from the outside. It should be able to change the
 whole from the OS level up based on the variation.

 What is your meaning of `intelligence'? I now see it as merely the
 efficiency of optimization process that drives the environment towards
 higher utility, according to whatever criterion (reinforcement, in
 your case). In this view, how does I'll do the same, but with
 intelligence differ from I'll do the same, but better?

Terren's artificial chemistry as a whole could not be said to have a
goal. Or, to put it another way, applying the intentional stance to it
probably wouldn't help you predict what it did next. Applying the
intentional stance to my system should help you predict what it does.

This means he needs to use a bunch more resources to get a single useful
system. Also, the system might not do what he wants, but I don't think
he minds about that.

I'm allowing humans to design everything, just allowing the very low
level to vary. Is this clearer?

  Will Pearson



Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread William Pearson
2008/6/30 Vladimir Nesov [EMAIL PROTECTED]:
 On Tue, Jul 1, 2008 at 1:31 AM, William Pearson [EMAIL PROTECTED] wrote:
 2008/6/30 Vladimir Nesov [EMAIL PROTECTED]:

 It is a wrong level of organization: computing hardware is the physics
 of computation, it isn't meant to implement specific algorithms, so I
 don't quite see what you are arguing.


 I'm not implementing a specific algorithm I am controlling how
 resources are allocated. Currently architecture does whatever the
 kernel says, from memory allocation to irq allocation. Instead of this
 my architecture would allow any program to bid credit for a resource.
 The one that bids the most wins and spends its credit. Certain
 resources like output memory space, (i.e if the program is controlling
 the display or an arm or something) allow the program to specify a
 bank, and give the program income.

 A bank is a special variable that can't be edited by programs normally
 but can be spent. The bank of an outputing program  will be given
 credit depending upon how well the system as whole is performing . If
 it is doing well the amount of credit it gets would be above average,
 poorly it would be below. After a certain time the resources will need
 to be bid for again. So credit is coming into the system and
 continually being sunk.

 The system will be seeded with programs that can perform rudimentarily
 well. E.g. you will have programs that know how to deal with visual
 input and they will bid for the video camera interupt. They will then
 sell their services for credit (so that they can bid for the interrupt
 again), to a program that correlates visual and auditory responses.
 Who sell their services to a high level planning module etc, on down
 to the arm that actually gets the credit.

 All these modules are subject to change and re-evaluation. They merely
 suggest one possible way for it to be used. It is supposed to be
 ultimately flexible. You could seed it with a self-replicating neural
 simulator that tried to hook its inputs and outputs up to other
 neurons. Neurons would die out if they couldn't find anything to do.

 Well, yes, you implement some functionality, but why would you
 contrast it with underlying levels (hardware, OS)?

 Like Java virtual
 machine, your system is a platform, and it does some things not
 handled by lower levels, or, in this case, by any superficially
 analogous platforms.

Because I want it done in silicon at some stage. It is also assumed to
be the whole system, that is, with no other significant programs on it.
Machines that run Lisp natively have been made; this makes the most
sense as the whole computer, rather than as a component.


 How to do this? Form and economy based on
 reinforcement signals, those that get more reinforcement signals can
 outbid the others for control of system resources.

 Where do reinforcement signals come from? What does this specification
 improve over natural evolution that needed billions of years to get
 here (that is, why do you expect any results in the forseable future)?

 Most of the internals are programmed by humans, and they can be
 arbitrarily complex. The feedback comes from a human, or from a
 utility function although those are harder to define. The architecture
 simply doesn't restrict the degrees of freedom that the programs
 inside it can explore.

 If internals are programmed by humans, why do you need automatic
 system to assess them? It would be useful if you needed to construct
 and test some kind of combination/setting automatically, but not if
 you just test manually-programmed systems. How does the assessment
 platform help in improving/accelerating the research?


Because to be interesting the human-specified programs need to be
autogenous, in Josh Storrs Hall's terminology, which means
self-building: capable of altering the stuff they are made of, in this
case the machine-code equivalent. So you need the human to assess the
improvements the system makes, for whatever purpose the human wants the
system to serve.

 This is obviously reminiscent of tierra and a million and one other
 alife system. The difference being is that I want the whole system to
 exhibit intelligence. Any form of variation is allowed, from random to
 getting in programs from the outside. It should be able to change the
 whole from the OS level up based on the variation.

 What is your meaning of `intelligence'? I now see it as merely the
 efficiency of optimization process that drives the environment towards
 higher utility, according to whatever criterion (reinforcement, in
 your case). In this view, how does I'll do the same, but with
 intelligence differ from I'll do the same, but better?

 Terran's artificial chemistry as whole could not be said to have a
 goal. Or to put it another way applying the intentional stance to it
 probably wouldn't help you predict what it did next. Applying the
 intentional stance to what my system does should help you predict what
 it does.

 What

Re: [agi] Can We Start Somewhere was Approximations of Knowledge

2008-06-27 Thread William Pearson
2008/6/27 Steve Richfield [EMAIL PROTECTED]:
 Russell and William,

 OK, I think that I am finally beginning to get it. No one here is really
 planning to do wonderful things that people can't reasonably do, though
 Russell has pointed out some improvements which I will comment on
 separately.

I still don't think you do. The 'general', as far as I am concerned,
means it can reconfigure itself to do things it couldn't do previously,
just as a human learns differentiation. So when you ask for a shopping
list of things for it to do, you will get our first steps and things we
know can be done (because they have been done by humans) for testing,
etc.

Consider a human-computer team. The human can code/configure the machine
to help him or her do pretty much anything. I just want to shift that
coding/configuring work to the machine. That is hard to convey in
concrete examples, although I have tried.

 I am interested in things that people can NOT reasonably do. Note that many
 computer programs have been written to way outperform people in specific
 tasks, and my own Dr. Eliza would seem to far exceed human capability in
 handling large amounts of qualitative knowledge that work within its
 paradigm limits. Hence, it would seem that I may have stumbled into the
 wrong group (opinions invited).

Probably so ;) The solution of a specific task is not within the remit
of the study of generality. You would be like someone going up to Turing
and asking him what specific tasks the ACE was going to solve; if he
said cryptography, you would go on about the Bombe cracking Enigma.

 Unfortunately, no one here appears to be interested in understanding this
 landscape of solving future hyper-complex problems, but instead apparently
 everyone wishes to leave this work to some future AGI, that cannot possibly
 be constructed in the short time frame that I have in mind. Of course,
 future AGIs are doomed to fail at such efforts, just as people have failed
 for the last million years or so.

If humans and AGIs are doomed to fail at the task, perhaps it is impossible?

  Will Pearson




Re: [agi] Can We Start Somewhere was Approximations of Knowledge

2008-06-27 Thread William Pearson
I'm going to ignore the oversimplifications of a variety of people's positions.


 But no one in AGI knows how to design or instruct a machine to work without
 algorithms - or, to be more precise, *complete* algorithms. It's unthinkable
 - it seems like asking someone not to breathe...  until, like every
 problem,. you start thinking about it.

Have you managed to create/design a toy system that can do this, even
very basically? If so, do share.

 Will Pearson




Re: [agi] Can We Start Somewhere was Approximations of Knowledge

2008-06-26 Thread William Pearson
2008/6/26 Steve Richfield [EMAIL PROTECTED]:

 Jiri previously noted that perhaps AGIs would best be used to manage the
 affairs of humans so that we can do as we please without bothering with the
 complex details of life. Of course, people and some (communist) governments
 now already perform this function, so while this might be a
 potential application, it doesn't count for this posting, as I am looking
 for things that people either can not do at all, or can not do adequately
 well.
snip

 Thanks in advance for your concrete examples.


Personally I concentrate on things humans could do, but that they
don't have the time to do. Mostly I want to do Intelligence
Augmentation through augmented reality.

Highlight, on a heads-up display:
 - food that meets certain health guidelines/ethical standards, by
object recognition and searching on-line information
 - books that might be interesting (again by searching on-line
information), or that other people the user knows have read.

None of these should have to be explicitly programmed/configured by the
user; the system should pick them up by interacting with the user and
other machines. They should also only be done in contexts where the user
is looking at the items involved (in a book store/library), and not just
all the time.

  Will Pearson




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread William Pearson
2008/6/23 Bob Mottram [EMAIL PROTECTED]:
 2008/6/22 William Pearson [EMAIL PROTECTED]:
 2008/6/22 Vladimir Nesov [EMAIL PROTECTED]:
 Well since intelligence explosions haven't happened previously in our
 light cone, it can't be a simple physical pattern

 Probably the last intelligence explosion - a relatively rapid
 increase in the degree of adaptability capabile of being exhibited by
 an organism - was the appearance of the first Homo sapiens.  The
 number and variety of tools created by Homo sapiens compared to
 earlier hominids indicate that this was one of the great leaps forward
 in history (probably greatly facilitated by a more elaborate language
 ability).

I am using intelligence explosion to mean what Eliezer would mean by it. See

http://www.overcomingbias.com/2008/06/optimization-an.html#more

I.e. something never seen on this planet.

I am sceptical about whether such a process is even theoretically possible.

Will Pearson




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread William Pearson
2008/6/23 Vladimir Nesov [EMAIL PROTECTED]:
 On Mon, Jun 23, 2008 at 12:50 AM, William Pearson [EMAIL PROTECTED] wrote:
 2008/6/22 Vladimir Nesov [EMAIL PROTECTED]:


 Two questions:
 1) Do you know enough to estimate which scenario is more likely?

 Well since intelligence explosions haven't happened previously in our
 light cone, it can't be a simple physical pattern, so I think
 non-exploding intelligences have the evidence for being simpler on
 their side.

 This message that I'm currently writing hasn't happened previously in
 out light code. By your argument, it is evidence for it being more
 difficult to write, than to recreate life on Earth and human
 intellect, which is clearly false, for all practical purposes. You
 should state that argument more carefully, in order for it to make
 sense.

If your message were an intelligent entity then you would have a point.
I'm looking at classes of technologies and their natural or current
human-created analogues.

Let me give you an example. You have two people claiming to be able to
give you an improved TSP solver. One claims to be able to do all
instances in polynomial time; the other simply has a better algorithm
which can do certain types of graphs in polynomial time, but resorts to
exponential time for random graphs.

Which would you consider more likely, if neither of them has a detailed
proof, and why?


 So we might find them more easily. I also think I have
 solid reasoning to think intelligence exploding is unlikely, which
 requires paper length rather than post length. So it I think I do, but
 should I trust my own rationality?

 But not too much, especially when the argument is not technical (which
 is clearly the case for questions such as this one).

The question is one of theoretical computer science, and should admit as
definite a resolution as the halting problem did. I'm leaning towards
something like Russell Wallace's resolution, but there may be some
complications when you have a program that learns from the environment.
I would like to see it done formally at some point.

 If argument is
 sound, you should be able to convince seed AI crowd too

Since the concept is their idea they have to be the ones to define it.
They won't accept any arguments against it otherwise. They haven't as
yet formally defined it, or if they have I haven't seen it.


 I agree, but it works only if you know that the answer is correct, and
 (which you didn't address and which is critical for these issues) you
 won't build a doomsday machine as a result of your efforts, even if
 this particular path turns out to be more feasible.

I don't think a doomsday machine is possible. But considering I would be
doing my best to make the system incapable of modifying its own source
code *in the fashion that Eliezer wants/is afraid of* anyway, I am not
too worried. See http://www.sl4.org/archive/0606/15131.html

 Will Pearson




[agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-22 Thread William Pearson
While SIAI fills that niche somewhat, it concentrates on the
intelligence explosion scenario. Is there a sufficiently large group of
researchers/thinkers with a shared vision of the future of AI, coherent
enough to form an organisation? This organisation would discuss, explore
and disseminate what can be done to make the introduction of AI as
painless as possible.

The base beliefs shared between the group would be something like:

- The entities will not have goals/motivations inherent to their form.
That is, robots aren't likely to band together to fight humans, or to
try to take over the world for their own ends. These would have to be
programmed into them, as evolution has programmed group loyalty and
selfishness into humans.
- The entities will not be capable of full wrap-around recursive
self-improvement. They will improve in fits and starts within a wider
economy/ecology, like most developments in the world.*
- The goals and motivations of the entities that we will likely see in
the real world will be shaped over the long term by the forces in the
world, e.g. evolutionary, economic and physical.

Basically, an organisation trying to prepare for a world where AIs
aren't 'sufficiently advanced technology' or magic genies, but are still
dangerous and a potentially destabilising change to the world. Could a
coherent message be articulated by the subset of people who agree with
these points, or are we all still too fractured?

  Will Pearson

* I will attempt to give an inside view of why I take this view, at a
later date.




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-22 Thread William Pearson
2008/6/22 Vladimir Nesov [EMAIL PROTECTED]:


 Two questions:
 1) Do you know enough to estimate which scenario is more likely?

Well, since intelligence explosions haven't happened previously in our
light cone, it can't be a simple physical pattern, so I think
non-exploding intelligences have the evidence for being simpler on their
side, and so we might find them more easily. I also think I have solid
reasoning for believing an intelligence explosion is unlikely, but it
requires paper length rather than post length. So I think I do, but
should I trust my own rationality?

Getting a bunch of people together to argue for both paths seems like
a good bet at the moment.

 2) What does this difference change for research at this stage?

It changes the focus of research from looking for simple principles of
intelligence (that can be improved easily on the fly) to expecting
intelligence creation to be a societal process over decades.

It also means that secrecy is no longer the default position. If you
take the intelligence explosion scenario seriously, you won't write
anything in public forums that might help other people make AI, as
bad/ignorant people might get hold of it and cause the first explosion.

  Otherwise it sounds like you are just calling to start a cult that
 believes in this particular unsupported thing, for no good reason. ;-)


Hope that gives you some reasons. Let me know if I have misunderstood
your questions.

  Will Pearson




Re: [agi] Breaking Solomonoff induction (really)

2008-06-21 Thread William Pearson
2008/6/21 Wei Dai [EMAIL PROTECTED]:
 A different way to break Solomonoff Induction takes advantage of the fact
 that it restricts Bayesian reasoning to computable models. I wrote about
 this in is induction unformalizable? [2] on the everything mailing list.
 Abram Demski also made similar points in recent posts on this mailing list.


I think this is a much stronger objection when you actually implement an
implementable variant of Solomonoff induction (it has started to make me
chuckle that a model of induction makes assumptions about the universe
that would have to be broken for it to be implemented). When you
restrict the memory space of a system, a lot more functions become
uncomputable with respect to that system. It is not a safe assumption
that the world is computable in this restricted notion of computable,
i.e. computable with respect to a finite system.

Also, Solomonoff induction ignores any potential physical effects of the
computation, as does all probability theory. See section 5 of this
attempted paper of mine for a formalised example of where things could
go wrong:

http://codesoup.sourceforge.net/easa.pdf

It is not quite an anthropic problem, but it is closely related. I'll
tentatively label it the observer-world interaction problem: the exact
nature of the world you see is altered depending on the type of system
you happen to be.

All of these are problems with the tacit (à la Dennett) representations
of beliefs embedded within the Solomonoff induction formalism.

  Will Pearson




Re: [agi] Nirvana

2008-06-12 Thread William Pearson
2008/6/12 J Storrs Hall, PhD [EMAIL PROTECTED]:
 I'm getting several replies to this that indicate that people don't understand
 what a utility function is.

 If you are an AI (or a person) there will be occasions where you have to make
 choices. In fact, pretty much everything you do involves making choices. You
 can choose to reply to this or to go have a beer. You can choose to spend
 your time on AGI or take flying lessons. Even in the middle of typing a word,
 you have to choose which key to hit next.

 One way of formalizing the process of making choices is to take all the
 actions you could possibly do at a given point, predict as best you can the
 state the world will be in after taking such actions, and assign a value to
 each of them.  Then simply do the one with the best resulting value.

 It gets a bit more complex when you consider sequences of actions and delayed
 values, but that's a technicality. Basically you have a function U(x) that
 rank-orders ALL possible states of the world (but you only have to evaluate
 the ones you can get to at any one time).


We do mean slightly different things, then. By U(x) I am just talking
about a function that generates the scalar rewards for the actions a
reinforcement learning algorithm performs, not one that evaluates every
potential action from where the current system is (since I consider
computation an action, in order to take energy efficiency into
consideration, this would be a massive space).

 Economists may crudely approximate it, but it's there whether they study it
 or not, as gravity is to physicists.

 ANY way of making decisions can either be reduced to a utility function, or
 it's irrational -- i.e. you would prefer A to B, B to C, and C to A. The math
 for this stuff is older than I am. If you talk about building a machine that
 makes choices -- ANY kind of choices -- without understanding it, you're
 talking about building moon rockets without understanding the laws of
 gravity, or building heat engines without understanding the laws of
 thermodynamics.

The kinds of choices I am interested in designing for at the moment are
of the form: should program X or program Y get control of this bit of
memory or this IRQ for the next time period? X and Y can also make
choices, and you would need to nail those down as well in order to get
the entire U(x) as you talk about it.

As the function I am interested in is only concerned with programmatic
changes, call it PCU(x).
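
To make the separation concrete, a minimal sketch in Python (the
function name pcu, the program names and the numbers are all invented
for illustration):

def pcu(bids, credit):
    # Program-control utility: it only ranks the extant programs bidding
    # for one resource (a memory region, an IRQ) for the next time
    # period, rather than rank-ordering every possible state of the
    # world. What X and Y do internally is their own business.
    affordable = {name: bid for name, bid in bids.items()
                  if credit[name] >= bid}
    return max(affordable, key=affordable.get)

credit = {"X": 40.0, "Y": 25.0}
bids = {"X": 10.0, "Y": 12.0}
winner = pcu(bids, credit)
credit[winner] -= bids[winner]
print(winner, "controls the IRQ this period; remaining credit:", credit)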

Can you give me a reason why the utility function can't be separated
out this way?

  Will Pearson




Re: [agi] Nirvana

2008-06-12 Thread William Pearson
2008/6/12 J Storrs Hall, PhD [EMAIL PROTECTED]:
 On Thursday 12 June 2008 02:48:19 am, William Pearson wrote:

 The kinds of choices I am interested in designing for at the moment
 are should program X or program Y get control of this bit of memory or
 IRQ for the next time period. X and Y can also make choices and you
 would need to nail them down as well in order to get the entire U(x)
 as you talk about it.

 As the function I am interested in is only concerned about
 programmatic changes call it PCU(x).

 Can you give me a reason why the utility function can't be separated
 out this way?


 This is roughly equivalent to a function where the highest-level arbitrator
 gets to set the most significant digit, the programs X,Y the next most, and
 so forth. As long as the possibility space is partitioned at each stage, the
 whole business is rational -- doesn't contradict itself.

Modulo special cases, agreed.
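
To check we mean the same thing, a tiny sketch of that
most-significant-digit-first (lexicographic) ordering, using Python's
built-in tuple comparison; the scores are made up:

def combined_utility(arbitrator_score, program_score, finer_score=0.0):
    # Tuples compare lexicographically, so the arbitrator's digit
    # dominates and the programs only break ties further down.
    return (arbitrator_score, program_score, finer_score)

options = {
    "give memory to X": combined_utility(2, 7),
    "give memory to Y": combined_utility(2, 9),
    "do nothing":       combined_utility(1, 99),  # can't override level above
}
print(max(options, key=options.get))   # "give memory to Y"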

 Allowing the program to play around with the less significant digits, i.e. to
 make finer distinctions, is probably pretty safe (and the way many AIers
 envisioning doing it). It's also reminiscent of the way Maslow's hierarchy
 works.

 But it doesn't work for full fledged AGI.

It is the best design I have at the moment; whether it can do what you
want is another matter. I'll continue to try to think of better ones. It
should get me a useful system if nothing else, and hopefully more people
interested in the full AGI problem, even if it proves inadequate.

What path are you going to continue down?

 Suppose you are a young man who's
 always been taught not to get yourself killed, and not to kill people (as top
 priorities). You are confronted with your country being invaded and faced
 with the decision to join the defense with a high liklihood of both.

The system I am thinking of can get stuck in positions that aren't
optimal, as the program-control utility function only chooses from the
extant programs in the system. It is possible for the system to be
dominated by a monopoly or cartel of programs, such that the program
chooser doesn't have a choice. This would only happen after a long
period of stasis with a very powerful/useful set of programs, such as,
in this case, patriotism or the protection of other sentients, which are
very useful during peacetime.

This does seem like something you would consider a bug, and it might be.
It is not one I can currently see a guard against.

  Will Pearson




Re: [agi] Nirvana

2008-06-11 Thread William Pearson
2008/6/11 J Storrs Hall, PhD [EMAIL PROTECTED]:
 Vladimir,

 You seem to be assuming that there is some objective utility for which the
 AI's internal utility function is merely the indicator, and that if the
 indicator is changed it is thus objectively wrong and irrational.

 There are two answers to this. First is to assume that there is such an
 objective utility, e.g. the utility of the AI's creator. I implicitly assumed
 such a point of view when I described this as the real problem. But
 consider: Any AI who believes this must realize that there may be errors and
 approximations in its own utility function as judged by the real utility,
 and must thus have as a first priority fixing and upgrading its own utility
 function. Thus it turns into a moral philosopher and it never does anything
 useful -- exactly the kind of Nirvana attractor I'm talking about.

 On the other hand, it might take its utility function for granted, i.e. assume
 (or agree to act as if) there were no objective utility. It's pretty much
 going to have to act this way just to get on with life, as indeed most people
 (except moral philosophers) do.

 But this leaves it vulnerable to modifications to its own U(x), as in my
 message. You could always say that you'll build in U(x) and make it fixed,
 which not only solves my problem but friendliness -- but leaves the AI unable
 to learn utility. I.e. the most important part of the AI mind is forced to
 remain brittle GOFAI construct. Solution unsatisfactory.

I'm not quite sure what you find unsatisfactory. I think humans have a
fixed U(x), but it is not a hard goal for the system, rather an implicit
tendency for the internal programs not to self-modify away from it (an
agoric economy of programs is not obliged to find better ways of
getting credit, but a good set of programs is hard to dislodge by a
bad set). I also think that part of humanity's U(x) relies on social
interaction, which can be a very complex function and can lead to
very complex behaviour.

Imagine if we were trying to raise children the way we teach computers:
we wouldn't reward them socially for playing with balls or saying their
first words, but would put them straight into designing electronic
circuits.

Hence I think that having one or more humans act as part of the
U(x) of a system is necessary for interesting behaviour. If there is
only one human acting as the input to the U(x), then I think the system
and human should be considered part of a larger intentional system, as
it will be trying to optimise one goal. The exception is if the human
decides to try and teach it to think for itself, with its own goals,
which would be odd for an intentional system.


 I claim that there's plenty of historical evidence that people fall into this
 kind of attractor, as the word nirvana indicates (and you'll find similar
 attractors at the core of many religions).

I don't know of many people who have actively wasted away due to
self-modification of their goals. Hunger strikes are the closest
example, but not many people undertake them.

Our U(x) is quite limited, and easily satisfied in the current
economy (food, sexual stimulation, warmth, positive social
indicators). This leaves the rest of our software free to range all over
the place as long as these are satisfied.

  Will Pearson


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


[Humour of some sort] Re: Are rocks conscious? (was RE: [agi] Did this message get completely lost?)

2008-06-04 Thread William Pearson
2008/6/4 Bob Mottram [EMAIL PROTECTED]:
 2008/6/4 J Storrs Hall, PhD [EMAIL PROTECTED]:
 What is the rock thinking?

  T h i s   i s   w a a a y   o f f   t o p i c . . . 



 Rocks are obviously superintelligences.  By behaving like inert matter
 and letting us build monuments and gravel pathways out of them they're
 just lulling us into a false sense of security.


Nope, they are just seed AIs which started off with the goal of being
a rock, and recursively self-improved themselves to fulfil their
goal.

  Will


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Re: [agi] Re: Merging - or: Multiplicity

2008-05-28 Thread William Pearson
2008/5/27 Mike Tintner [EMAIL PROTECTED]:
 Will:And you are part of the problem insisting that an AGI should be tested
 by its ability to learn on its own and not get instruction/help from
 other agents be they human or other artificial intelligences.

 I insist[ed] that an AGI should be tested on its ability to solve some
 *problems* on its own - cross-domain problems - just as we do. Of course, it
 should learn from others, and get help on other problems, as we do too.

But you don't test for that, and as the Loebner prize shows, you only
tend to get what you test for.

 But
 if it can't solve many general problems on its own - which seemed OK by you
 (after setting up your initially appealing submersible problem - solutio
 interrupta!) - then it's only a narrow AI.

I am happy for the baby machine (which is what we will be dealing with
to start with) not to be able to solve general problems on its own.
Later on I would be disappointed if it still couldn't.

  Will Pearson


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Re: [agi] Design Phase Announce - VRRM project

2008-05-28 Thread William Pearson
2008/5/27 Steve Richfield [EMAIL PROTECTED]:
 William,

 This sounds like you should be announcing the analysis phase! Detailed
 comments follow...

Design/research/analysis, call it what you will.

 On 5/26/08, William Pearson [EMAIL PROTECTED] wrote:

 VRRM  - Virtual Reinforcement Resource Managing Machine

 Overview

 This is a virtual machine designed to allow non-catastrophic
 unconstrained experimentation of programs in a system as close to the
 hardware as possible.


 There have been some interesting real machines in the past, e.g. the
 Burroughs 5000 and 6000 series computers that seldom crashed. When they did,
 it was presumed to be either an OS or a hardware problem. At Remote
 Time-Sharing we extended these in a virtual machine, to make a commercial
 time-sharing system that NEVER EVER crashed after initial debugging. This
 while servicing secondary schools in the Seattle Area with many hackers,
 including a very young Bill Gates and Paul Allen.

 Systems now crash NOT because of the lack of some whiz-bang technology, but
 because architectural development has been in a state of arrested
 development for the last ~35 years.

It is not just crashes that I worry about but memory corruption and
other forms of subversion.


 This should allow the system to change as much
 as is possible and needed for the application under consideration.
 Currently the project expects to go to the operating system level
 (including experimentation on schedulers and device drivers).  A
 separate sub-system supplies information on how well the experiment is
 going.  The information is made affective by making it a form of
 credit periodically used to bid for computational system resources and
 to pass around between programs.


 This sounds like a problem for real-time applications.

In what sense?


 Expected deployment scenarios

 - Research and possible small scale applications on the following
 - Autonomous Self-managing robotics
 - A Smart operating system that customises itself to the users
 preferences without extensive knowledge on the users part

 Language - C


 Whoops, there are SERIOUS limitations to what can be made reliable in C.

C is purely the language for the VRRM itself; what language the programs
inside the VM are implemented in is completely up to the people who
write them.


 Progress

 Currently I am hacking/designing my own, but I am open to going to a
 standard machine emulator if that seems easy at any point. I expect to
 heavily re-factor. I am focussing on the architectural registers,
 memory space and memory protection first and will get on to the actual
 instruction set last.


 This effort would most usefully be merged with the 10K architectures that I
 have discussed on this forum. Merging disparate concerns might actually
 result in a design that someone actually constructs.

Possibly, after I have completed the VRRM and tested it to see if it
works the way I think it does. But silicon implementation is not on the
agenda at the moment.


 I'm also in parallel trying to design a high level language for this
 architecture so the internals initial programs can be cross-compiled
 for it more easily.


 Does this require a new language, or just some cleverly-named subroutines?

A different set of system calls at the least. Some indication of how
important the memory is when it is dynamically allocated is needed, for
example.
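
As a rough sketch of what I mean (the call names and eviction policy here
are invented, not a fixed design), an allocation call might carry an
importance hint so the system knows what it can evict or push to slow
storage first:

    # Hypothetical allocation interface with an importance hint.
    TRANSIENT = 0      # can be discarded or recomputed cheaply
    NORMAL    = 1
    CRITICAL  = 2      # losing this degrades the program badly

    class Allocator:
        def __init__(self, capacity):
            self.capacity = capacity
            self.blocks = []                       # (importance, size, tag)

        def alloc(self, size, importance, tag):
            while sum(b[1] for b in self.blocks) + size > self.capacity:
                self.blocks.sort()                 # evict least important first
                if not self.blocks or self.blocks[0][0] >= importance:
                    raise MemoryError("nothing less important to evict")
                self.blocks.pop(0)
            self.blocks.append((importance, size, tag))

    alloc = Allocator(capacity=1024)
    alloc.alloc(512, CRITICAL, "model weights")
    alloc.alloc(400, TRANSIENT, "scratch buffer")
    alloc.alloc(300, NORMAL, "episode log")        # evicts the scratch buffer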

 Current Feature plans

 - Differentiation between transient and long term storage to avoid
 unwanted disk thrashing


 Based on the obsolete concept of virtual memory rather than limitless RAM.

We don't have limitless RAM, and I won't be implementing virtual memory.
snip because I don't have time



 - Specialised Capability registers as well as floating point and integers


 Have you seen my/our proposed improvements to IEEE-754 floating point, that
 itself incorporates a capability register?! Perhaps we should look at a
 common design?

Do you mean capability in the same sense as I do?

http://en.wikipedia.org/wiki/Capability-based_security
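
By capability I mean the object-capability sense: a program can only touch
a resource if it holds an unforgeable token for it, and it can delegate by
handing an attenuated token on. A minimal sketch (Python, all names
invented):

    # Minimal object-capability sketch: possession of the token *is* the authority.
    class Capability:
        def __init__(self, resource, rights):
            self._resource = resource
            self._rights = frozenset(rights)

        def invoke(self, op, *args):
            if op not in self._rights:
                raise PermissionError(f"capability does not grant '{op}'")
            return getattr(self._resource, op)(*args)

        def attenuate(self, rights):
            # Delegate a weaker capability (e.g. read-only) to another program.
            return Capability(self._resource, self._rights & set(rights))

    class Store:
        def __init__(self): self._data = {}
        def read(self, k): return self._data.get(k)
        def write(self, k, v): self._data[k] = v

    full = Capability(Store(), {"read", "write"})
    full.invoke("write", "credit", 42)
    read_only = full.attenuate({"read"})
    print(read_only.invoke("read", "credit"))      # 42
    try:
        read_only.invoke("write", "credit", 0)
    except PermissionError as e:
        print(e)                                   # does not grant 'write'
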
  Will Pearson


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Re: [agi] Re: Merging - or: Multiplicity

2008-05-27 Thread William Pearson
2008/5/27 Mike Tintner [EMAIL PROTECTED]:

 Actually, that's an absurdity. The whole story of evolution tells us that
 the problems of living in this world for any species of
 creature/intelligence at any level can only be solved by a SOCIETY of
 individuals. This whole dimension seems to be entirely missing from AGI.


And you are part of the problem, insisting that an AGI should be tested
by its ability to learn on its own and not get instruction/help from
other agents, be they human or other artificial intelligences.

The social aspect of mimicry has been picked up by Ben Goertzel, at least
in the initial stages of development of his AGI; he may think it will
evolve beyond that eventually.

I don't think it will, as every mind is capable of getting stuck in a
rut (ruts are attractor states), and getting out of that rut is easier
with other intelligences to show the way out (themselves getting stuck
in different ruts). Societies can get stuck in their own ruts but
generally have bigger spaces to explore, so might find their way out
eventually.

  Will


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


[agi] Design Phase Announce - VRRM project

2008-05-26 Thread William Pearson
VRRM  - Virtual Reinforcement Resource Managing Machine

Overview

This is a virtual machine designed to allow non-catastrophic
unconstrained experimentation of programs in a system as close to the
hardware as possible. This should allow the system to change as much
as is possible and needed for the application under consideration.
Currently the project expects to go to the operating system level
(including experimentation on schedulers and device drivers).  A
separate sub-system supplies information on how well the experiment is
going.  The information is made affective by making it a form of
credit periodically used to bid for computational system resources and
to pass around between programs.
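
To make the credit mechanism concrete, here is a minimal sketch of one
round (Python, with hypothetical names and numbers): programs spend credit
to bid for slices of a resource, and the separate scoring sub-system tops
up the credit of whichever programs the experiment says are doing well.

    # Hypothetical sketch of one round of credit-based resource bidding.
    def run_auction(programs, cpu_slices):
        # Highest bidders win slices; winners actually pay their bids.
        bids = sorted(programs, key=lambda p: p["bid"], reverse=True)
        winners = bids[:cpu_slices]
        for p in winners:
            p["credit"] -= p["bid"]
        return [p["name"] for p in winners]

    def reward(programs, scores):
        # A separate sub-system reports how well the experiment is going and
        # pays credit accordingly; programs may also pass credit to each other.
        for p in programs:
            p["credit"] += scores.get(p["name"], 0)

    programs = [
        {"name": "scheduler_v2", "credit": 100, "bid": 30},
        {"name": "scheduler_v1", "credit": 100, "bid": 10},
        {"name": "driver_exp",   "credit": 100, "bid": 20},
    ]
    print(run_auction(programs, cpu_slices=2))   # ['scheduler_v2', 'driver_exp']
    reward(programs, {"scheduler_v2": 25})       # the experiment favoured v2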

Expected deployment scenarios

 - Research and possible small scale applications on the following
 - Autonomous Self-managing robotics
 - A smart operating system that customises itself to the user's
preferences without extensive knowledge on the user's part

Language - C

Progress

Currently I am hacking/designing my own, but I am open to going to a
standard machine emulator if that seems easy at any point. I expect to
heavily re-factor. I am focussing on the architectural registers,
memory space and memory protection first and will get on to the actual
instruction set last.

I'm also, in parallel, trying to design a high level language for this
architecture so the initial internal programs can be cross-compiled
for it more easily.

Current Feature plans

 - Differentiation between transient and long term storage to avoid
unwanted disk thrashing
 - Unified memory space
 - Capability based security between programs
 - Specialised Capability registers as well as floating point and integers
 - Keyboard, mouse and display virtual devices as well as extensible
models for people to build their own

Comments and criticisms welcomed.

  Will Pearson


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Different problem types was Re: [agi] AGI and Wiki, was Understanding a sick puppy

2008-05-16 Thread William Pearson
2008/5/16 Steve Richfield [EMAIL PROTECTED]:
 Does anyone else here share my dream of a worldwide AI with all of the
 knowledge of the human race to support it - built with EXISTING Wikipedia
 and Dr. Eliza software and a little glue to hold it all together?


I'm taking this as a jumping off point to try and describe and expand
upon something I have been mulling over whilst reading your messages.

I think you and Matt are interested in solving the oracle problem,
that is, going to one entity for answers to general questions. I am
interested in solving more personal problems; that is, problems that
are unique to the individual at each point in time.

The search problem is a good example. To present the optimal search
for an individual you must have as much data about the individual as
possible. For example, if the search engine knew I had been talking to
you, it would return different results when I searched for Dr. Eliza
(assuming Google knows anything about your system). As I would not be
comfortable with this level of information being known about me, a
centralized search oracle will not work (I will have to stop using
Gmail when AI gets too advanced).

The problem's essence is finding pertinent information and presenting
it at the right time to the user. I shall call it the whisperer class
of problems for the moment. I am also strongly interested in Augmented
Reality, where knowing when to interrupt you with emails and other
communications is an important thing for the system to do.

Both types of system are important. I don't think I can do a decent
whisperer system with current technologies, including Dr. Eliza. That is
not to denigrate your approach, but to acknowledge that there are more
types of problems out there to be solved.

  Will Pearson

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Re: [agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-14 Thread William Pearson
Matt mahoney:
  I am not sure what you mean by AGI.  I consider a measure of intelligence
  to be the degree to which goals are satisfied in a range of environments.
  It does not matter what the goals are.  They may seem irrational to you.
  The goal of a smart bomb is to blow itself up at a given target.  I would
  consider bombs that hit their targets more often to be more intelligent.

  I consider understanding to mean intelligence in this context.  You
  can't say that a robot that does nothing is unintelligent unless you
  specify its goals.

  We may consider intelligence as a measure and AGI as a threshold.  AGI is
  not required for understanding.  You can measure the degree to which
  various search engines understand your query, spam filters understand your
  email, language translators understand your document, vision systems
  understand images, intrusion detection systems understand network traffic,
  etc.  Each system was designed with a goal and can be evaluated according
  to how well that goal is met.

  AIXI allows us to evaluate intelligence independent of goals.  An agent
  understands its input if it can predict it.  This can be measured
  precisely.


I think you are thinking of Solomonoff induction; AIXI won't answer
your questions unless it has the goal of getting reward from you for
answering the question. It will do what it predicts will get it
reward, not try to output the end of all strings given to it.

  I propose prediction as a general test of understanding.  For example, do
  you understand the sequence 0101010101010101 ?  If I asked you to predict
  the next bit and you did so correctly, then I would say you understand it.

What would happen if I said, "I don't have time for silly games,
please stop emailing me"? Would you consider that I understood it?

  If I want to test your understanding of X, I can describe X, give you part
  of the description, and test if you can predict the rest.  If I want to
  test if you understand a picture, I can cover part of it and ask you to
  predict what might be there.

This only works if my goal includes revealing my understanding to you.

  Will Pearson

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] Self-maintaining Architecture first for AI

2008-05-11 Thread William Pearson
2008/5/11 Russell Wallace [EMAIL PROTECTED]:
 On Sat, May 10, 2008 at 10:10 PM, William Pearson [EMAIL PROTECTED] wrote:
 It depends on the system you are designing on. I think you can easily
 create as many types of sand box as you want in programming language E
 (1) for example. If the principle of least authority (2) is embedded
 in the system, then you shouldn't have any problems.

 Sure, I'm talking about much lower-level concepts though. For example,
 on a system with 8 gigabytes of memory, a candidate program has
 computed a 5 gigabyte string. For its next operation, it appends that
 string to itself, thereby crashing the VM due to running out of
 memory. How _exactly_ do you prevent this from happening (while
 meeting all the other requirements for an AI platform)? It's a
 trickier problem than it sounds like it ought to be.


I'm starting to mod qemu (it is not a straightforward process) to add
capabilities. The VM will have a set amount of memory, and if a
location outside this memory is referenced, it will throw a page fault
inside the VM rather than crash it directly. The system will be able to
deal with it how it wants to, hopefully something smarter than "Oh no,
I have done a bad memory reference, I must stop all my work and lose
everything!".

In the greater scheme of things the model that a computer has
unlimited virtual memory has to go as well. Otherwise you might get
important things on the hard disk, with much thrashing, and ephemera
in main memory. You could still make high level abstractions, but the
virtual memory one is not the one to present to the low level
programs.
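
As a rough model of the behaviour I am after (a conceptual sketch in
Python, not the actual qemu code), an out-of-range reference raises a fault
that software inside the VM gets to handle, rather than killing the VM
process:

    # Conceptual sketch only: a guest-visible fault instead of a host crash.
    class GuestPageFault(Exception):
        """Delivered to the guest's own fault handler, not to the host."""

    class GuestMemory:
        def __init__(self, size, fault_handler):
            self.mem = bytearray(size)
            self.fault_handler = fault_handler     # code running inside the VM

        def read(self, addr):
            if not 0 <= addr < len(self.mem):
                # The guest decides what to do: map a page, kill the offending
                # internal program, or substitute a default value.
                return self.fault_handler(GuestPageFault(addr))
            return self.mem[addr]

    mem = GuestMemory(4096, fault_handler=lambda fault: 0)  # guest picks a policy
    print(mem.read(10))      # normal access
    print(mem.read(100000))  # fault handled inside the VM, returns 0
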
  Will Pearson

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] Self-maintaining Architecture first for AI

2008-05-10 Thread William Pearson
2008/5/10 Richard Loosemore [EMAIL PROTECTED]:


 This is still quite ambiguous on a number of levels, so would it be possible
 for you to give us a road map of where the argument is going? At the moment
 I am not sure what the theme is.

That is because I am still unsure what the later levels are.
A fair amount depends upon the application. For example, if you are
trying to build an AI for a jet plane you need to be a lot more careful
about how the system explores.

I have in my mind a number of things I can't do with current computers
and would like to experiment with.

1) A system that has programs that can generate a new learning
program dependent upon the inputs it receives. The learning program
would have inputs either from the outside, the outputs of other
learning programs, or the value of other learning programs. It would
use a variable number of resources. Its outputs could go to other
learning programs, or it could generate a new learning program.
Self-maintenance is required to rate and whittle down the learning
programs. The learning program could be anything from an SVM to a
genetic programming system.

2) A system similar to automatic programming that takes descriptions
in a formal language, given from outside and potentially malicious
sources, and generates a program from them. The language would be
sufficient to specify new generative elements in, and so extensible in
that fashion. A system that cannot maintain itself while trying to do
this would quickly get swamped by viruses and the like.

I'd probably create a hybrid of the two, but I am fully aware that I
don't have enough knowledge to discount other approaches. Once I have
got the system working to my satisfaction (both experimentally and by
showing that being good is evolutionarily stable and I have minimised
tragedy of the commons type failures), I'll go on to study more about
higher level problems. I have the (slight) hope that other people will
pick up my system and take it places I can't currently imagine.

  Will Pearson

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


[agi] Self-maintaining Architecture first for AI

2008-05-09 Thread William Pearson
After getting off completely on the wrong foot last time I posted
something, and not having had time to read the papers I should have, I
have decided to try and start afresh and outline where I am coming
from. I'll get around to doing a proper paper later.

There are two possible modes for designing a computer system. I shall
characterise them as the active and the passive. The active approach
attempts to solve the problem directly, whereas the passive approach
gives a framework under which the problem and related ones can
be more easily solved. The passive approach is generally less
efficient but much more reconfigurable.

The passive approach is used when there is a large number of related
possible problems, with a large variety of solutions. Examples of the
passive approach are mainly architectures, programming languages and
operating systems, with a variety of different goals. They are not
always completely passive; for example, automatic garbage collection
impacts the system somewhat. One illuminating example is the variety
of security systems that have been built along these lines.
Security in this sense means that the computer system is composed of
domains, not all of which are equally trusted or allowed
resources. Now it is possible to set up a passive system designed with
security in mind insecurely, by allowing all domains to access every
file on the hard disk. Passive systems do not guarantee the solution
they are aiming to aid; the most they can do is allow as many things
as possible to be represented and permit the prevention of certain
things. A passive security system allows the prevention of one domain
lowering the security of a part of another domain.

The set of problems that I intend to help solve is the set of
self-maintaining computer systems. Self-maintenance is basically
reconfiguring the computer to be suited to the environment it finds
itself in. The reasons why I think it needs to be solved before
AI is attempted are 1) humans self-maintain, and 2) otherwise the very
complex computer systems we build for AI will have to be maintained by
ourselves, which may become increasingly difficult as they approach
human level.

It is worth noting that I am using AI in the pure sense of being able
to solve problems. It is entirely possible to get very high complexity
problem solvers (including ones potentially passing the Turing test) that
cannot self-maintain.

There a large variety of possible AIs (different
bodies/environments/computational resources/goals) as can be seen from
the variety of humans and (proto?) intelligences of animals, so a
passive approach is not unreasonable.

In the case of a self-maintaining system, what is it that we wish the
architecture to prevent? About the only thing we can prevent is a
useless program being able to degrade the system from its current
level of operation by taking control of resources. However we also
want to enable useful programs to be able to control more resources.
To do this we must protect the resources and make sure the correct
programs can somehow get the correct resources; the internal programs
should do the rest. So it is a resource management problem. Any active
force for better levels of operation has to come from the internal
programs of the architecture, and once a higher level of operation
has been reached the architecture should act as a ratchet to prevent
it from slipping down again.
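
A toy version of the ratchet (Python, with invented scoring): a
reconfiguration proposed by the internal programs is only committed if the
measured level of operation does not drop, so useful changes accumulate and
useless ones cannot drag the system back down.

    # Toy ratchet: accept internal reconfigurations only if performance
    # does not fall below the best level already reached.
    import random

    def ratchet(config, best_score, propose, evaluate, rounds=10):
        for _ in range(rounds):
            candidate = propose(config)        # active force: internal programs
            score = evaluate(candidate)
            if score >= best_score:            # passive force: the ratchet
                config, best_score = candidate, score
        return config, best_score

    # Illustrative stand-ins for the internal programs and the evaluator.
    propose = lambda cfg: cfg + random.choice([-2, -1, 1, 2])
    evaluate = lambda cfg: -abs(cfg - 7)       # best level of operation at 7
    # The score never decreases across rounds, however bad the proposals are.
    print(ratchet(config=0, best_score=-7, propose=propose, evaluate=evaluate))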

Protecting resources amounts to the security problem, which we have a
fair amount of literature on, and the only passive form of resource
allocation we know of is an economic system.

... to be continued

I might go into further detail about what I mean by resource but that
will have to wait for a further post.

  Will Pearson

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] How general can be and should be AGI?

2008-04-27 Thread William Pearson
2008/4/27 Dr. Matthias Heger [EMAIL PROTECTED]:

   Ben Goertzel [mailto:[EMAIL PROTECTED] wrote 26. April 2008 19:54


   Yes, truly general AI is only possible in the case of infinite
   processing power, which is
   likely not physically realizable.
   How much generality can be achieved with how much
   Processing power, is not yet known -- math hasn't advanced that far yet.


  My point is not only that  'general intelligence without any limits' would
  need infinite resources of time and memory.
  This is trivial of course. What I wanted to say is that any intelligence has
  to be narrow in a sense if it wants be powerful and useful. There must
  always be strong assumptions of the world deep in any algorithm of useful
  intelligence.

I am probably the one on this list closest to the position you
think AGI implies.

I would agree. Any algorithm needs to be very specific to be useful.
However the *architecture* of an AGI needs to be general (by this I
mean capable of instantiating any TM-equivalent function from input
and current state to output and current state).

So I think the lowest level of the system space should be massive,
as you argue against. However I would not make it a search space, as
such, with a fixed method searching it. On its own, it should be
passive; however, it is able to have active programs within it. As
these are programs in their own right they can search the space of
possible programs. These programs could search subspaces of the entire
space, or get information from the outside about which subspaces to
search. However, there is no limit to which subspaces they do actually
search.

What makes my approach different to a bog-standard computer system is
that it would guide the searching of the programs within it by acting
as a reinforcement-based ratchet. Those programs with the most
reinforcement, that act sensibly, will be able to protect and expand
the influence they have over the system. With the right internal
programs and environment, this will look as if the system has a goal
for what it is trying to become.

See this post for more details.
http://www.mail-archive.com/agi@v2.listbox.com/msg02892.html


  Every recursive procedure has to have a non-reducible base and it is clear,
  that the overall performance and abilities depend crucially on that basic
  non-reducible procedure. If this procedure is too general, the performance
  slows exponentially with the space with which this  basic procedure works.

There are recursive procedures that abandon the base; see, for example,
booting a machine.

 Will Pearson

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] How general can be and should be AGI?

2008-04-26 Thread William Pearson
2008/4/26 Dr. Matthias Heger [EMAIL PROTECTED]:
 How general should be AGI?

My answer: as *potentially* general as possible, in a similar fashion
to the way a UTM is as potentially general as possible, but with more
purpose.

There are plenty of problems you can define that are impossible to
solve without needing the halting problem, e.g. remembering a number
with more digits than the potential states of the universe.

Some other comments: have you looked at the literature on neuroplasticity?

This Wired article is a good introduction:

http://www.wired.com/wired/archive/15.04/esp_pr.html

There are also more academic papers out there; a Google search can find them.

 Will Pearson

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-21 Thread William Pearson
On 21/04/2008, Ed Porter [EMAIL PROTECTED] wrote:
  So when people are given a sentence such as the one you quoted about verbs,
  pronouns, and nouns, presuming they have some knowledge of most of the words
  in the sentence, they will understand the concept that verbs are doing
  words.   This is because of the groupings of words that tend to occur in
  certain syntactical linguistic contexts, the ones that would be most
  associated with the types of experiences the mind would associates with
  doing would be largely word senses that are verbs and that the mind's
  experience and learned patterns most often proceeds by nouns or pronouns.
  So all this stuff falls out of the magic of spreading activation in a
  Novamente-like hierarchical experiential memories (with the help of a
  considerable control structure such as that envisioned for Novamente).

  Declarative information learned by NL gets projected into the same type of
  activations in the hierarchical memory

How does this happen?  What happens when you try to project "This
sentence is false." into the activations of the hierarchical memory?
And consider that the whole of the English understanding is likely to
be in the hierarchical memory; that is, the projection must be learnt.

 as would actual experiences that
  teaches the same thing, but at least as episodes, and in some patterns
  generalized from episodes, such declarative information would remain linked
  to the experience of having been learned from reading or hearing from other
  humans.
  So in summary, a Novamete-like system should be able to handle this alleged
  problem, and at the moment it does not appear to provide an major unanswered
  conceptual problem. 

My conversation with Ben about a similar subject (words acting on the
knowledge of words) didn't get anywhere.

The conversation starting here -
http://www.mail-archive.com/agi@v2.listbox.com/msg09485.html

And I consider him the authority on Novamente-like systems, for now at least.

Will

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?

2008-04-20 Thread William Pearson
On 19/04/2008, Ed Porter [EMAIL PROTECTED] wrote:
 WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?


I'm not quite sure how to describe it, but this brief sketch will have
to do until I get some more time. These may be in some new AI
material, but I haven't had the chance to read up much recently.

Linguistic information and other non-inductive information integrated
into learning/modelling strategies, including the learning of
linguistic rules.

Consider an AI learning chess: it is told in plain English that
knights move two hops in one direction and one hop at 90 degrees to
that.

Now our AI has learnt English, so how do we hook this knowledge into
our modelling system, so that it can predict when it might lose or
take a piece because of the position of a knight?

Consider also the sentence, "There are words such as verbs, that are
doing words; you need to put a pronoun or noun before the verb."

People are given this sort of information when learning languages, it
seems to help them. How and why does it help them?

 Will Pearson

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] The resource allocation problem

2008-04-05 Thread William Pearson
On 05/04/2008, Vladimir Nesov [EMAIL PROTECTED] wrote:
 On Sat, Apr 5, 2008 at 12:24 AM, William Pearson [EMAIL PROTECTED] wrote:
   On 01/04/2008, Vladimir Nesov [EMAIL PROTECTED] wrote:


This question supposes a specific kind of architecture, where these
  things are in some sense separate from each other.
  
I am agnostic to how much things are separate. At any particular time
a machine can be doing less or more of each of these things. For
example in humans it is quite common to talk of concentration.
  
E.g. I'm sorry I wasn't concentrating on what you said, could you repeat 
 it.
Stop thinking about the girl, concentrate on the problem at hand.
  
Do you think this is meaningful?


 It is in some sense, but you need to distinguish levels of
  description. Implementation of system doesn't have a
  thinking-about-the-girl component,

Who ever said it did? All I have said is that there need to be the
mechanisms for an economy, not exactly what the economic agents are. I
don't know what they should be. It is body/environment specific, most
likely.

 but when system obtains certain
  behaviors, you can say that this process that is going now is a
  thinking-about-the-girl process. If, along with learning this
  process, you form a mechanism for moving attention elsewhere, you can
  evoke that mechanism by, for example, sending a phrase Stop thinking
  about the girl to sensory input. But these specific mechanisms are
  learned, what you need as a system designer is provide ways of their
  formation in general case.

You also need a way to decide that something should get more attention
than something else. Being told to attend to something is not always
enough.

  Also, your list contained 'reasoning', 'seeing past experiences and
  how they apply to the current one', 'searching for new ways of doing
  things' and 'applying each heuristic'. Only in some architectures will
  these things be explicit parts of system design.

I don't have them as explicit parts of the system design; I have nothing
that people would call a cognitive design at the moment. I am not so
interested in thinking at the moment as in building a more *useful*
system (although under some circumstances a thinking system will be a
useful one).

 From my perspective,
  it's analogous to adding special machine instructions for handling
  'Internet browsing' in general-purpose processor, where browser is
  just one of thousands of applications that can run on it, and it would
  be inadequately complex for processor anyway.

I'd agree; I'm just adding a very loose economy. Any actor is allowed
to exist in an economy, and I was just giving some examples of potential
ways to separate things. If they don't fit in your system, ignore them
and add what does fit.

  You need to ration resources, but these are anonymous modelling
  resources that don't have inherent 'bicycle-properties' or
  'language-processing-properties'.

So does whatever allows your system to differentiate between
bicycle and non-bicycle somehow manage not to take up resources when
not being used?

 Some of them happen to correlate
  with things we want them to, by virtue of being placed in contact with
  sensory input that can communicate structure of those things.
  Resources are used to build inference structures within the system
  that allow it to model hidden processes, which in turn should allow it
  to achieve its goals.

I'm still not seeing why it should model the right hidden processes.
Stick your system in the real world: which processes (from other
people, the weather, fluid dynamics, itself) should it try to model?
Why do some people have much more elaborate models of these things
than other people?

 If there are high-level resource allocation
  rules to be discovered, these rules will look at goals and formed
  inference structures and determine that certain changes are good for
  overall performance.

What happens if two rules conflict? Which rule wins? What happens if
rules can only be discovered experimentally?

 Discussion of such rules needs at least some
  notions about makeup of inference process and its relation to goals.

I'm not creating rules to determine how resources are distributed.
That would not be a free market economy. I agree the creation of the
rules will come about when the cognitive system is being designed, but
they would be local to each agent.

  Even worse, goals can be implicit in inference system itself and be
  learned starting from a blank slate,

There is no useful system that is a blank slate. All learning systems
have bias as you well know, and so have implicit information about the
world.

I would view an economy as having an implicit goal. The closest thing
to an explicit goal for an agent in my economy is "to survive", but
it is in no way hard binding. To survive, credit is needed to purchase
resources (including memory to stay in and processing power to earn
more credit), for which you need to please

Re: [agi] The resource allocation problem

2008-04-04 Thread William Pearson
On 01/04/2008, Vladimir Nesov [EMAIL PROTECTED] wrote:
 On Tue, Apr 1, 2008 at 6:30 PM, William Pearson [EMAIL PROTECTED] wrote:
   The resource allocation problem and why it needs to be solved first
  
How much memory and processing power should you apply to the following 
 things?:
  
Visual Processing
Reasoning
Sound Processing
Seeing past experiences and how they apply to the current one
Searching for new ways of doing things
Applying each heuristic
  


 This question supposes a specific kind of architecture, where these
  things are in some sense separate from each other.

I am agnostic to how much things are separate. At any particular time
a machine can be doing less or more of each of these things. For
example in humans it is quite common to talk of concentration.

E.g. I'm sorry I wasn't concentrating on what you said, could you repeat it.
Stop thinking about the girl, concentrate on the problem at hand.

Do you think this is meaningful?

 If they are but
  aspects of the same process, with modalities integrated parts of
  reasoning, resources can't be rationed on such a high level. Rather
  underlying low-level elements should be globally restricted and
  differentiate to support different high-level processes (so that
  certain portion of them gets mainly devoted to visual processing,
  high-level reasoning, language, etc.).


It boils down to the same thing. If more low level elements
(neuron-equivalents?) are devoted to a task, it is in general being given
more potential memory, processing power and bandwidth. How is that
decided? In some connectionist systems, I would associate it with the
stability-plasticity problem described in section 6 here:

http://www.cns.bu.edu/Profiles/Grossberg/Gro1987CogSci.pdf

In a nutshell: if you have new learning, should it overwrite the old?
If so, which information? If not, please can I have your infinite
memory system.

Assuming you are implementing this on a normal computer you can easily
see that it all boils down to these resources.

In the brain not all elements can work at peak effectiveness at the
same time (consider the perils of driving while on the mobile *). So
even if you had devoted the elements, further decisions still need to
be made at run time. Different regions of elements become the
resources to be rationed. For example, short-term memory becomes a
resource you have to decide how to use.

Also oxygen would seem to be a resource in short supply within the brain.

The question remains the same: how should a system choose what to do
or what to be?

  Will

* http://www.bmj.com/cgi/content/abstract/bmj.38537.397512.55v1

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


[agi] The resource allocation problem

2008-04-01 Thread William Pearson
The resource allocation problem and why it needs to be solved first

How much memory and processing power should you apply to the following things?:

Visual Processing
Reasoning
Sound Processing
Seeing past experiences and how they apply to the current one
Searching for new ways of doing things
Applying each heuristic

Is there one right way of deciding these things when you have limited
resources? At time A you might want more reasoning done (while in a
debate) and at time B more visual processing (while driving).

There is also the long term memory problem: should you remember your
first kiss or the first Star Trek episode you saw? Which is more
important?

An intelligent system needs to solve this problem for itself, as only
it will know what is important for the problems it faces. That is, it
is a local problem. It also requires resources itself. If resources
are tight then only very approximate methods of determining how many
resources to spend on each activity can be afforded.

Due to this, the resource management should not be algorithmic, but
free to adapt to the amount of resources at hand. I'm intent on an
economic solution to the problem, where each activity is an economic
actor.

This approach needs to be at the lowest level because each activity
has to be programmed with the knowledge of how to act in an economic
setting as well as how to perform its job. How much should it pay for
the other activities of the programs around it?

I'll attempt to write a paper on this, with proper references (Baum,
Mark Miller et al.), but I would be interested in feedback at this
stage.

  Will Pearson

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Re: [agi] Instead of an AGI Textbook

2008-03-31 Thread William Pearson
On 26/03/2008, Ben Goertzel [EMAIL PROTECTED] wrote:
 Hi all,

  A lot of students email me asking me what to read to get up to speed on AGI.

  So I started a wiki page called Instead of an AGI Textbook,

  
 http://www.agiri.org/wiki/Instead_of_an_AGI_Textbook#Computational_Linguistics


I've decided to go my own way and have started a new annotated
textbook, trying to link in all the topics I think relevant to my
current state of work.

http://www.agiri.org/wiki/AACA_Textbook

I'll try putting in content for each of those links. But coding for
the architecture is probably more worthwhile at this point. Once I have
it up and running on QEMU, I'll try to devote more time to education.

  Will Pearson

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread William Pearson
On 30/03/2008, Kingma, D.P. [EMAIL PROTECTED] wrote:
 Although I sympathize with some of Hawkins' general ideas about unsupervised
learning, his current HTM framework is unimpressive in comparison with
state-of-the-art techniques such as Hinton's RBMs, LeCun's
convolutional nets and the promising low-entropy coding variants.

 But it should be quite clear that such methods could eventually be very handy
 for AGI. For example, many of you would agree that a reliable, computationally
 affordable solution to Vision is a crucial factor for AGI: much of the world's
 information, even on the internet, is encoded in audiovisual information.
 Extracting (sub)symbolic semantics from these sources would open a world of
 learning data to symbolic systems.

 An audiovisual perception layer generates semantic interpretation on the
 (sub)symbolic level. How could a symbolic engine ever reason about the real
 world without access to such information?

So a deafblind person couldn't reason about the real world? Put ear
muffs and a blindfold on, and see what you can figure out about the world
around you. Less, certainly, but then you could figure out more about
the world if you had a magnetic sense like pigeons.

Intelligence is not about the modalities of the data you get; it is
about what you do with the data you do get.

All of the data on the web is encoded in electronic form; it is only
because of our comfort with incoming photons and phonons that it is
translated to video and sound. This fascination with A/V is useful,
but does not help us figure out the core issues that are holding us up
whilst trying to create AGI.

  Will Pearson

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread William Pearson
On 30/03/2008, Kingma, D.P. [EMAIL PROTECTED] wrote:

 Intelligence is not *only* about the modalities of the data you get,
  but modalities are certainly important. A deafblind person can still
  learn a lot about the world with taste, smell, and touch, but the
  senses one has access to defines the limits to the world model one can
  build.

As long as you have one high bandwidth modality you should be able to
add on technological gizmos to convert information to that modality,
and thus be able to model the phenomena in that part of the world.

Humans manage to convert between modalities, e.g.

http://www.engadget.com/2006/04/25/the-brain-port-neural-tongue-interface-of-the-future/
using touch on the tongue.

We don't do it very well, but that is mainly because we don't have to
do it very often.

AIs that are designed to have new modalities added to them, using
their major modality of memory space + interrupts (or another
computational modality), may be even more flexible than humans and
able to adapt to a new modality as quickly as a current computer
is able to add a new device.


  If I put on ear muffs and a blind fold right now, I can still reason
  quite well using touch, since I have access to a world model build
  using e.g. vision. If you were deafblind and paralysed since your
  birth, would you have any possibility of spatial reasoning? No, maybe
  except for some extremely crude genetically coded heuristics.

Sure, if you don't get any spatial information you won't be able to
model spatially. But getting the information is different from having
a dedicated modality. My point was that audiovisual is not the only
way to get spatial information. It may not even be the best way for
what we happen to want to do. So don't get too hung up on any
specific modality when discussing intelligence.

  Sure, you could argue that an intelligence purely based on text,
  disconnected from the physical world, could be intelligent, but it
  would have a very hard time reasoning about interaction of entities in
  the physicial world. It would be unable to understand humans in many
  aspects: I wouldn't call that generally intelligent.

I'm not so much interested in this case, but what about the case where
you have a robot with sonar, radar and other sensors, but not the
normal two-cameras-plus-two-microphones setup people imply when they say
audiovisual?

  Will Pearson

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Re: [agi] The Effect of Application of an Idea

2008-03-26 Thread William Pearson
On 25/03/2008, Vladimir Nesov [EMAIL PROTECTED] wrote

 Simple systems can be computationally universal, so it's not an issue
  in itself. On the other hand, no learning algorithm is universal,
  there are always distributions that given algorithms will learn
  miserably. The problem is to find a learning algorithm/representation
  that has the right kind of bias to implement human-like performance.

First a riddle: What can be all learning algorithms, but is none?

I'd disagree. Okay, simple systems can be computationally universal,
but what does that really mean?

Computational universality means being able to represent any
computable function, where the range and domain of this function are
assumed to be the natural numbers.

Most AI formulations, when they say they are computationally universal,
are only talking about a function F: I → O, where I is the input and O
is the output. These include the formulations of neural networks, GAs,
etc. that I have seen. However there are lots of interesting programs
in computers that do more than map the input to the output. Humans also
do not just map the input to the output; we also think, ruminate, model
and remember. This does not affect the range of functions from the
input to the output, but it does change how quickly they can be moved
between. What I am interested in is systems where the ranges and
domains of the functions are entities inside the system.

That is, the functions F: I → S, F: S → O, and F: S → S are important
and should be potentially computationally universal, where S is the
internal memory of the system. This allows the system to be all possible
learning algorithms (although only one at any time), but also it is no
single algorithm (else F: I x S → S would be fixed).

General purpose desktop computers are these kinds of systems. If they
weren't, how else could we implement any type of learning system on
them? Thus the answer to my riddle.
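
A PC-flavoured sketch of the distinction (Python, purely illustrative): the
step rule below is fixed, but because the functions mapping input to state,
state to state and state to output are themselves stored in the state, the
system can rewrite itself at run time into a different learner.

    # Sketch: the learning functions live inside the state S, so F: I->S,
    # F: S->S and F: S->O can all be replaced at run time.
    def step(state, inp):
        state = state["absorb"](state, inp)     # F: I -> S
        state = state["think"](state)           # F: S -> S (may rewrite itself)
        return state, state["emit"](state)      # F: S -> O

    # One possible initial program: count inputs, and after three of them
    # install a completely different "emit" function.
    def absorb(state, inp):
        state["last"] = inp
        state["count"] = state.get("count", 0) + 1
        return state

    def think(state):
        if state["count"] == 3:
            state["emit"] = lambda s: f"echo: {s['last']}"   # self-modification
        return state

    state = {"absorb": absorb, "think": think, "emit": lambda s: "silent"}
    for x in ["a", "b", "c", "d"]:
        state, out = step(state, x)
        print(out)          # silent, silent, echo: c, echo: d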

The question I have been trying to answer precisely is how to govern
these sorts of systems so they roughly do what you want, without you
having to give precise instructions.

  Will Pearson

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Re: I know, I KNOW :-) WAS Re: [agi] The Effect of Application of an Idea

2008-03-26 Thread William Pearson
On 26/03/2008, Mark Waser [EMAIL PROTECTED] wrote:
  First a riddle: What can be all learning algorithms, but is none?

  A human being!


Well, my answer was a common PC, which I hope is more illuminating
because we know it well.

But a human being works, as does any future AI design, as far as I am concerned.

  Will Pearson

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Re: [agi] The Effect of Application of an Idea

2008-03-25 Thread William Pearson
On 24/03/2008, Jim Bromer [EMAIL PROTECTED] wrote:



 To try to understand what I am talking about, start by imagining a
 simulation of some physical operation, like a part of a complex factory in a
 Sim City kind of game.  In this kind of high-level model no one would ever
 imagine all of the objects should interact in one stereotypical way,
 different objects would interact with other objects in different kinds of
 ways.  And no one would imagine that the machines that operated on other
 objects in the simulation were not also objects in their own right.  For
 instance the machines used in production might require the use of other
 machines to fix or enhance them.  And the machines might produce or
 operate on objects that were themselves machines.  When you think about a
 simulation of some complicated physical systems it becomes very obvious that
 different kinds of objects can have different effects on other objects.  And
 yet, when it comes to AI, people go on an on about systems that totally
 disregard this seemingly obvious divergence of effect that is so typical of
 nature.  Instead most theories see insight as if it could be funneled
 through some narrow rational system or other less rational field operations
 where the objects of the operations are only seen as the ineffective object
 of the pre-defined operations of the program.



How would this differ from the sorts of computational systems I have been
muttering about, where an active bit of code
or program is equivalent to an object in the above paragraph? Also have a
look at Eurisko by Doug Lenat.

   Will Pearson

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Re: [agi] Flies Neural Networks

2008-03-16 Thread William Pearson
On 16/03/2008, Ed Porter [EMAIL PROTECTED] wrote:




 I am not an expert on neural nets, but from my limited understanding it is far
 from clear exactly what the new insight into neural nets referred to in this 
 article
 is, other than that timing neuron firings is important in the brain, which is
 something multiple people have been saying for years.



 So I fail to understand what is new here, other than a reiteration of the 
 view that
 most of the traditional neural net models used in machine learning are gross
 simplifications of the neural nets in the brain, particularly with regard to 
 their
 simplification of the temporal complexity of brain networks.


Disclaimer: I am not a neuro scientist either.

The spiking neuron model, as I understand it, has the neuron, when it
is activated, give not one signal but a series of signals. This is
called a spike train. It was thought that the exact timing of this
series of signals was unimportant, just the number of spikes, and so the
neurons simply acted as integrators getting an average value.

So a train like this

A) _|_|_|

was thought to be the same as this

B) __||_|

since they have the same number of spikes on average.

So people simulating real neural networks (not backprop etc.), i.e.
computational neuroscientists, the sorts of people working on Blue
Brain*, used this as the model for their systems.

Now it turns out the spike train is more like a message in a packet
than a number, so A and B are different signals and may be
treated differently.

It might also have implications for the computational capacity of the
brain. I think the max bandwidth calculations would have gone up.

* I don't know what model of neuron they are using, but it is that
sort of neural network researcher I am talking about.
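
To illustrate the difference (a toy sketch in Python, not a claim about any
particular neuron model): a pure rate-coding readout sees A and B as
identical because it only counts spikes, while a timing-sensitive readout
treats them as different messages.

    # Toy contrast between rate coding and temporal coding of a spike train.
    A = [1, 0, 1, 0, 1, 0, 0, 0]     # spikes spread out
    B = [0, 0, 0, 0, 1, 1, 0, 1]     # same spike count, different timing

    def rate_code(train):
        return sum(train)             # integrator: only the count survives

    def temporal_code(train):
        return tuple(i for i, s in enumerate(train) if s)   # spike times matter

    print(rate_code(A) == rate_code(B))           # True  -- same "average"
    print(temporal_code(A) == temporal_code(B))   # False -- different message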

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


[agi] AGI 08 blogging?

2008-03-02 Thread William Pearson
Anyone blogging what they are finding interesting in AGI 08?

  Will Pearson



Re: [agi] Thought experiment on informationally limited systems

2008-03-02 Thread William Pearson
On 28/02/2008, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:


 On 2/28/08, William Pearson [EMAIL PROTECTED] wrote:
  Note I want something different from computational universality. E.g.
  Von Neumann architectures are generally programmable; Harvard
  architectures aren't, as they can't be reprogrammed at run time.

 It seems that you want to build the AGI from the programming level.
 This is in contrast to John McCarthy's declarative paradigm.  Your
 approach offers more flexibility (perhaps maximum flexibility), but may not
 make AGI easier to build.  Learning, in your case, is a matter of
 algorithmic learning.  It may be harder / less efficient than logic-based
 learning.


Algorithmic learning is hard. But just because the system is based upon
programs as its lowest-level representation does not mean that all learning
is going to be algorithmic learning. It is possible to have programs that
learn in any fashion within the system. If it makes sense in the system, you
could have a logic-based learning program; it will just be in competition
with other learners to see which is the most useful for the system.
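
Roughly, something like the following sketch (my own toy illustration, not a
specification of my system): several learner programs sit inside the system,
and whichever proves most useful gets selected more often.

import random

def logic_learner(task):
    return random.random() < 0.7        # placeholder: succeeds on 70% of tasks

def other_learner(task):
    return random.random() < 0.5        # placeholder: a different learner, different strengths

learners = {"logic": logic_learner, "other": other_learner}
credit = {name: 1.0 for name in learners}

for task in range(200):
    # pick a learner in proportion to the credit it has earned so far
    name = random.choices(list(learners), weights=[credit[n] for n in learners])[0]
    if learners[name](task):            # the learner turned out to be useful
        credit[name] += 1.0             # so it gets selected more often in future

print(credit)                           # the more useful learner ends up dominating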

  Will Pearson



Re: [agi] Thought experiment on informationally limited systems

2008-03-02 Thread William Pearson
On 28/02/2008, Mike Tintner [EMAIL PROTECTED] wrote:

  You must first define its existing skills, then define the new challenge
  with some degree of precision - then explain the principles by which it will
  extend its skills. It's those principles of extension/generalization that
  are the be-all and end-all (and NOT, btw, as you suggest, any helpful info
  that the robot will receive - that, sir, is cheating - it has to work these
  things out for itself - although perhaps it could *ask* for info).


Why is that cheating? Would you never give instructions to a child about
what to do? Taking instructions is something that all intelligences need to
be able to do, though it should be kept to a minimum. I'm not saying the
system should take instructions unquestioningly either; ideally it should
figure out whether the instructions you give are of any use to it.

  Will Pearson



Re: [agi] Thought experiment on informationally limited systems

2008-03-02 Thread William Pearson
On 02/03/2008, Mike Tintner [EMAIL PROTECTED] wrote:
 Jeez, Will, the point of Artificial General Intelligence is that it can
  start adapting to an unfamiliar situation and domain BY ITSELF.  And your
  FIRST and only response to the problem you set was to say: I'll get someone
  to tell it what to do.

Nothing we ever do is entirely by ourselves; we have a wealth of examples to
draw from that we have acquired from family, friends and teachers. The
situation I described was like throwing a baby into a completely unfamiliar
problem, without the wealth of experience we have built up over the years, so
some hand-holding is to be expected. Also, I'm not planning to have a full AI
made any time soon; I'm merely laying the groundwork for many other people to
build upon. I may get animal-level adaptivity/intelligence myself; it depends
how quickly I can build the first layer and the tools I need for the next.

This is also why I concentrate on the most flexible system possible: I do not
wish to constrain the system to do any more than needs to be done to achieve
my current goal. This goal is to add a way of selecting between the programs
within a computer system depending on what the system needs to do.

It is more fundamental than your cross-over idea, in that it is a lower-level
phenomenon, but not in the sense of being more important for acting
intelligently.

  IOW you simply avoided the problem and thought only of cheating. What a
  solution, or merest idea for a solution, must do is tell me how that
  intelligence will start adapting by itself  - will generalize from its
  existing skills to cross over domains.

I'm not building the solution, merely a framework which I think will enable
people to build the solution. I think this needs to be done first; in essence
I am trying to deal with the problem of developing and acquiring skills.

  Will Pearson



Re: [agi] Solomonoff Induction Question

2008-03-01 Thread William Pearson
On 29/02/2008, Abram Demski [EMAIL PROTECTED] wrote:
 I'm an undergrad who's been lurking here for about a year. It seems to me
 that many people on this list take Solomonoff Induction to be the ideal
 learning technique (for unrestricted computational resources). I'm wondering
 what justification there is for the restriction to Turing-machine models of
 the universe that Solomonoff Induction uses. Restricting an AI to computable
 models will obviously make it more realistically manageable. However,
 Solomonoff Induction needs infinite computational resources, so this clearly
 isn't a justification.

There is a gotcha here, at least when you are trying to get to a computable
solution (one that doesn't require infinite memory).

When you go down to an FSM (which all our real computers are), a whole range
of things opens up that the FSM in question cannot compute, including a whole
raft of FSMs more complex than it.
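
A toy illustration of the point (my own sketch, not part of the original
argument): with only n internal states, a machine driven by a constant input
must revisit a state within n steps, so its output becomes periodic, and it
cannot track a regularity whose memory requirement keeps growing.

def run_fsm(transition, output, state, steps):
    # drive a finite-state machine on a constant input and record its predictions
    preds = []
    for _ in range(steps):
        preds.append(output[state])
        state = transition[state]
    return preds

# any 3-state machine (here a simple cycle) repeats with period at most 3...
transition = {0: 1, 1: 2, 2: 0}
output = {0: 1, 1: 0, 2: 0}
print(run_fsm(transition, output, 0, 12))   # [1, 0, 0, 1, 0, 0, ...]

# ...but a sequence with ever-growing gaps between the 1s is not eventually
# periodic, so no fixed finite-state machine can predict it correctly forever
target, gap = [], 1
while len(target) < 12:
    target += [0] * gap + [1]
    gap += 1
print(target[:12])                          # [0, 1, 0, 0, 1, 0, 0, 0, 1, ...]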

Keeping the same general shape of the system (trying to account for all the
detail) means we are likely to overfit: we end up trying to model systems
that are too complex for us to model, while also trying to model the noise in
our systems.

This would make the most probable TM more complex than it needs to be,
without actually improving its predictive power.

Not quite what you were worried about, but it might add weight to your call
to have uncomputability included in general models of intelligence.

  Will



Re: [agi] Solomonoff Induction Question

2008-03-01 Thread William Pearson
On 01/03/2008, Jey Kottalam [EMAIL PROTECTED] wrote:
 On Sat, Mar 1, 2008 at 3:10 AM, William Pearson [EMAIL PROTECTED] wrote:
  
Keeping the same general shape of the system (trying to account for all the
detail) means we are likely to overfit: we end up trying to model systems
that are too complex for us to model, while also trying to model the noise in
our systems.


 Could you explain this further? I followed what you're saying up to
  this paragraph. How and why does the overfitting happen?


Let's say you have a sine wave on an electrical cable with some noise, and
you are trying to predict its next value. The noise isn't actually
unpredictable; its main components come from ionospheric radiation and
sunspot activity (or magnetic field variations from the earth's crust, or
whatever).

The Turing machine needed to represent such noise precisely (that is, to
model the sun/ionosphere in enough detail) is more complex than your inductor
can represent. However, as the inductor is trying to find a program that
*exactly* predicts the sequence, sin x will be discarded for some other
program that achieves greater accuracy on the training set. Not that it would
be bad overfitting; it would likely just be sin x plus a function generating
pseudo-random noise. Real, infinite-resource Solomonoff induction avoids this
by eventually being able to predict what we generally think of as noise.
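
As a rough illustration of the overfitting point (my own sketch, not anything
in the original exchange): fit noisy samples of sin x with a simple model and
with a much more flexible one; the flexible model matches the training data
more closely, but part of what it has "learned" is just the noise.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 30)
y = np.sin(x) + 0.1 * rng.standard_normal(x.size)        # signal plus "noise" we cannot model

simple = np.polynomial.Polynomial.fit(x, y, deg=3)       # stands in for plain sin x
flexible = np.polynomial.Polynomial.fit(x, y, deg=25)    # stands in for sin x + a noise model

x_test = np.linspace(0, 2 * np.pi, 300)
truth = np.sin(x_test)
print("train error, simple  :", np.mean((simple(x) - y) ** 2))
print("train error, flexible:", np.mean((flexible(x) - y) ** 2))           # smaller: it fits the noise
print("test error,  simple  :", np.mean((simple(x_test) - truth) ** 2))
print("test error,  flexible:", np.mean((flexible(x_test) - truth) ** 2))  # typically larger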

Hope this clears up what I mean.

  Will Pearson


