Re: [agi] Recursive self-change: some definitions

2008-09-07 Thread Bryan Bishop
On Thursday 04 September 2008, Mike Tintner wrote:
> Bryan,
>
> How do you know the brain has a code? Why can't it be entirely
> "impression-istic" - a system for literally forming, storing and
> associating sensory impressions (including abstracted, simplified,
> hierarchical impressions of other impressions)?
>
> 1). FWIW some comments from a cortically knowledgeable robotics
> friend:
>
> "The issue mentioned below is a major factor for die-hard
> card-carrying Turing-istas, and to me is also their greatest
> stumbling-block.
>
> You called it a "code", but I see computation basically involves
> setting up a "model" or "description" of something, but many people
> think this is actually "synonymous" with the real thing. It's not,
> but many people are in denial about this. All models involve tons of
> simplifying assumptions.
>
> EG, XXX is adamant that the visual cortex performs sparse-coded
> [whatever that means] wavelet transforms, and not edge-detection. To
> me, a wavelet transform is just "one" possible - and extremely
> simplistic (meaning subject to myriad assumptions) - mathematical
> description of how some cells in the VC appear to operate.

No, this is just a confusion of terminologies. I most certainly was not 
talking about 'code' in the sense of "sparse-coded wavelet transform". 
I'm talking about code in the sense of source code. Sorry.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] Recursive self-change: some definitions

2008-09-04 Thread Mike Tintner

Bryan,

How do you know the brain has a code? Why can't it be entirely 
"impression-istic" - a system for literally forming, storing and associating 
sensory impressions (including abstracted, simplified, hierarchical 
impressions of other impressions)?


1). FWIW some comments from a cortically knowledgeable robotics friend:

"The issue mentioned below is a major factor for die-hard card-carrying 
Turing-istas, and to me is also their greatest stumbling-block.


You called it a "code", but I see computation basically involves setting up 
a "model" or "description" of something, but many people think this is 
actually "synonymous" with the real thing. It's not, but many people are in 
denial about this. All models involve tons of simplifying assumptions.


EG, XXX is adamant that the visual cortex performs sparse-coded [whatever 
that means] wavelet transforms, and not edge-detection. To me, a wavelet 
transform is just "one" possible - and extremely simplistic (meaning subject 
to myriad assumptions) - mathematical description of how some cells in the 
VC appear to operate.


Real biological systems are immensely more complex than our simple models. 
Eg, every single cell in the body contains the entire genome, and genes are 
being turned on+off continually during normal operation, and based upon an 
immense number of feedback loops in the cells, and not just during reproduction. On 
and on."


2) I vaguely recall de Bono having a model of an imprintable surface that 
was non-coded:


http://en.wikipedia.org/wiki/The_Mechanism_of_the_Mind

(But I think you may have to read the book. Forgive me if I'm wrong).

3) Do you know anyone who has thought of using or designing some kind of 
computer as an imprintable rather than just a codable medium? Perhaps that 
is somehow possible.


PS Go to bed. :)


Bryan/MT:

I think this is a good important point. I've been groping confusedly
here. It seems to me computation necessarily involves the idea of
using a code (?). But the nervous system seems to me something
capable of functioning without a code - directly being imprinted on
by the world, and directly forming movements, (even if also involving
complex hierarchical processes), without any code. I've been
wondering whether computers couldn't also be designed to function
without a code in somewhat similar fashion. Any thoughts or ideas of
your own?


Hold on there -- the brain most certainly has "a code", if you
remember gene expression and the general neurophysical nature of it
all. I think part of the difference you're seeing here is how much
more complex and grown the brain is compared to our somewhat fragile
circuits, plus the ecological difference between the WWW and the
combined evolutionary history keeping your neurons healthy each day.

Anyway, because of the quantized nature of energy in general, the brain 
must be doing something physical and "operating on a code", i.e., 
have an actual physical nature to it. I would like to see alternatives to this 
line of reasoning, of course.

As for computers that don't have to be executing code all of the time: 
I've been wondering about machines that could also imitate the 
biological ability to recover from "errors" and not spontaneously burst
into flames when something goes wrong in the Source. Clearly there's
something of interest here.

- 







Re: [agi] Recursive self-change: some definitions

2008-09-04 Thread Bryan Bishop
On Wednesday 03 September 2008, Ben Goertzel wrote:
>  I'm also interested in recursive self changing systems and whether
> you can be sure they will stay recursive self changing systems, as
> they change.
>
>
> I'm almost certain there is no certainty in this world, regarding
> empirical predictions like that ;-)

One of the issues I would expect to pop up in an analysis like that 
is the typical identity problem. I've been looking around for a 
dissociative approach to philosophy after a chat with Natasha last 
month, and frankly neither of us has found much of anything at all.

Oh, except drug users. They might count. Maybe not.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] Recursive self-change: some definitions

2008-09-04 Thread Bryan Bishop
On Wednesday 03 September 2008, Mike Tintner wrote:
> I think this is a good important point. I've been groping confusedly
> here. It seems to me computation necessarily involves the idea of
> using a code (?). But the nervous system seems to me something
> capable of functioning without a code - directly being imprinted on
> by the world, and directly forming movements, (even if also involving
> complex hierarchical processes), without any code. I've been
> wondering whether computers couldn't also be designed to function
> without a code in somewhat similar fashion.  Any thoughts or ideas of
> your own?

Hold on there -- the brain most certainly has "a code", if you
remember gene expression and the general neurophysical nature of it
all. I think part of the difference you're seeing here is how much
more complex and grown the brain is compared to our somewhat fragile
circuits, plus the ecological difference between the WWW and the
combined evolutionary history keeping your neurons healthy each day.

Anyway, because of the quantized nature of energy in general, the brain 
must be doing something physical and "operating on a code", i.e., 
have an actual physical nature to it. I would like to see alternatives to this 
line of reasoning, of course.

As for computers that don't have to be executing code all of the time: 
I've been wondering about machines that could also imitate the 
biological ability to recover from "errors" and not spontaneously burst 
into flames when something goes wrong in the Source. Clearly there's 
something of interest here.
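[Editor's aside: the "recover instead of bursting into flames" idea can be sketched as a toy supervisor that retries a failing computation from its last good state. This is my illustration, not Bryan's design; `flaky_step` and `supervised_run` are hypothetical names, and the random fault is a stand-in for "something going wrong in the Source".]

```python
import random

def flaky_step(state):
    """A computation step that occasionally fails -- a stand-in for
    'something going wrong in the Source'."""
    if random.random() < 0.3:
        raise RuntimeError("transient fault")
    return state + 1

def supervised_run(step, state, goal, max_retries=1000):
    """Retry failed steps from the last good state instead of halting --
    a crude software analogue of biological error recovery."""
    retries = 0
    while state < goal:
        try:
            state = step(state)
        except RuntimeError:
            retries += 1  # absorb the fault; the last good state survives
            if retries > max_retries:
                raise
    return state

random.seed(0)
print(supervised_run(flaky_step, 0, 10))  # reaches 10 despite transient faults
```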

- Bryan
who has gone 36 hours without sleep. Why am I here?

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: Computation as an explanation of the universe (was Re: [agi] Recursive self-change: some definitions)

2008-09-04 Thread Matt Mahoney
--- On Thu, 9/4/08, Abram Demski <[EMAIL PROTECTED]> wrote:

> So, my only remaining objection is that while the universe
> *could* be
> computable, it seems unwise to me to totally rule out the
> alternative.

You're right. We cannot prove that the universe is computable. We have evidence 
like Occam's Razor (if the universe is computable, then algorithmically simple 
models are to be preferred), but that is not proof.

At one time our models of physics were not computable. Then we discovered 
atoms, the quantization of electric charge, general relativity (which bounds 
density and velocity), the big bang (history is finite), and quantum mechanics. 
Had any one of these discoveries not occurred, our models would still not be 
computable (they would require an infinite description length).

-- Matt Mahoney, [EMAIL PROTECTED]





Re: Computation as an explanation of the universe (was Re: [agi] Recursive self-change: some definitions)

2008-09-04 Thread Abram Demski
On Thu, Sep 4, 2008 at 10:53 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> To clarify what I mean by "observable universe", I am including any part that 
> could be observed in the future, and therefore must be modeled to make 
> accurate predictions. For example, if our universe is computed by one of an 
> enumeration of Turing machines, then the other enumerations are outside our 
> observable universe.
>
> -- Matt Mahoney, [EMAIL PROTECTED]


OK, that works. But, you cannot invoke current physics to argue that
this sort of observable universe is finite (so far as I know).

Of course, that is not central to your point anyway. The universe
might be spatially infinite while still having a finite description
length.

So, my only remaining objection is that while the universe *could* be
computable, it seems unwise to me to totally rule out the alternative.
As you said, the idea is something that makes testable predictions.
So, it is something to be decided experimentally, not philosophically.

-Abram




Re: Computation as an explanation of the universe (was Re: [agi] Recursive self-change: some definitions)

2008-09-04 Thread Matt Mahoney
To clarify what I mean by "observable universe", I am including any part that 
could be observed in the future, and therefore must be modeled to make accurate 
predictions. For example, if our universe is computed by one of an enumeration 
of Turing machines, then the other enumerations are outside our observable 
universe.

-- Matt Mahoney, [EMAIL PROTECTED]


--- On Thu, 9/4/08, Abram Demski <[EMAIL PROTECTED]> wrote:

> From: Abram Demski <[EMAIL PROTECTED]>
> Subject: Re: Computation as an explanation of the universe (was Re: [agi]
> Recursive self-change: some definitions)
> To: agi@v2.listbox.com
> Date: Thursday, September 4, 2008, 9:43 AM
>
> > OK, then the observable universe has a finite description length. We
> > don't need to describe anything else to model it, so by "universe" I
> > mean only the observable part.
>
> But, what good is it to only have a finite description of the observable
> part, since new portions of the universe enter the observable portion
> continually? Physics cannot then be modeled as a computer program,
> because computer programs do not increase in Kolmogorov complexity as
> they run (except by a logarithmic term to count how long it has been
> running).
>
> > I am saying that the universe *is* deterministic. It has a definite
> > quantum state, but we would need about 10^122 bits of memory to
> > describe it. Since we can't do that, we have to resort to approximate
> > models like quantum mechanics.
>
> Yes, I understood that you were suggesting a deterministic universe.
> What I'm saying is that it seems plausible for us to be able to have
> an accurate knowledge of that deterministic physics, lacking only the
> exact knowledge of particle locations et cetera. We would be forced to
> use probabilistic methods as you argue, but they would not necessarily
> be built into our physical theories; instead, our physical theories
> act as a deterministic function that is given probabilistic input and
> therefore yields probabilistic output.
>
> > I believe there is a simpler description. First, the description length
> > is increasing with the square of the age of the universe, since it is
> > proportional to area. So it must have been very small at one time.
> > Second, the most efficient way to enumerate all possible universes
> > would be to run each B-bit machine for 2^B steps, starting with B = 0,
> > 1, 2... until intelligent life is found. For our universe, B ~ 407. You
> > could reasonably argue that the algorithmic complexity of the free
> > parameters of string theory and general relativity is of this
> > magnitude. I believe that Wolfram also argued that the (observable)
> > universe is a few lines of code.
>
> I really do not understand your willingness to restrict "universe" to
> "observable universe". The description length of the observable
> universe was very small at one time because at that time none of the
> basic stuffs of the universe had yet interacted, so by definition the
> description length of the observable universe for each basic entity is
> just the description length of that entity. As time moves forward, the
> entities interact and the description lengths of their observable
> universes increase. Similarly, today, one might say that the
> observable universe for each person is slightly different, and indeed
> the universe observable from my right hand would be slightly different
> than the one observable from my left. They could have differing
> description lengths.
>
> In short, I think you really want to apply your argument to the
> "actual" universe, not merely observable subsets... or if you don't,
> you should, because otherwise it seems like a very strange argument.
>
> > But even if we discover this program it does not mean we could model
> > the universe deterministically. We would need a computer larger than
> > the universe to do so.
>
> Agreed... partly thanks to your argument below.
>
> > There is a simple argument using information theory. Every system S has
> > a Kolmogorov complexity K(S), which is the smallest size that you can
> > compress a description of S to. A model of S must also have complexity
> > K(S). However, this leaves no space for S to model itself. In
> > particular, if all of S's memory is used to describe its model, there
> > is no memory left over to store any results of the simulation.
>
> Point conceded.
>
> --Abram

Re: Computation as an explanation of the universe (was Re: [agi] Recursive self-change: some definitions)

2008-09-04 Thread Abram Demski
> OK, then the observable universe has a finite description length. We don't 
> need to describe anything else to model it, so by "universe" I mean only the 
> observable part.
>

But, what good is it to only have a finite description of the observable
part, since new portions of the universe enter the observable portion
continually? Physics cannot then be modeled as a computer program,
because computer programs do not increase in Kolmogorov complexity as
they run (except by a logarithmic term to count how long it has been
running).

> I am saying that the universe *is* deterministic. It has a definite quantum 
> state, but we would need about 10^122 bits of memory to describe it. Since we 
> can't do that, we have to resort to approximate models like quantum mechanics.
>

Yes, I understood that you were suggesting a deterministic universe.
What I'm saying is that it seems plausible for us to be able to have
an accurate knowledge of that deterministic physics, lacking only the
exact knowledge of particle locations et cetera. We would be forced to
use probabilistic methods as you argue, but they would not necessarily
be built into our physical theories; instead, our physical theories
act as a deterministic function that is given probabilistic input and
therefore yields probabilistic output.
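[Editor's aside: the "deterministic function, probabilistic input, probabilistic output" picture can be illustrated with a minimal Monte Carlo sketch (my example, not Abram's): the update law is known exactly, only the initial position is uncertain, and the uncertainty is simply carried through the deterministic dynamics.]

```python
import random

def step(x, v, dt=0.1):
    """A toy deterministic 'physics': a free particle with an exact update rule."""
    return x + v * dt, v

def propagate(ensemble, n_steps=10):
    """Push an ensemble of uncertain initial states through the deterministic
    law; the output is again a distribution, not a single state."""
    for _ in range(n_steps):
        ensemble = [step(x, v) for x, v in ensemble]
    return ensemble

random.seed(1)
# Probabilistic input: the law is exact, the initial position is not.
ensemble = [(random.gauss(0.0, 1.0), 1.0) for _ in range(10000)]
out = propagate(ensemble)
mean_x = sum(x for x, _ in out) / len(out)
print(round(mean_x, 1))  # mean drifts to ~1.0 after 10 steps of v=1, dt=0.1
```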

> I believe there is a simpler description. First, the description length is 
> increasing with the square of the age of the universe, since it is 
> proportional to area. So it must have been very small at one time. Second, 
> the most efficient way to enumerate all possible universes would be to run 
> each B-bit machine for 2^B steps, starting with B = 0, 1, 2... until 
> intelligent life is found. For our universe, B ~ 407. You could reasonably 
> argue that the algorithmic complexity of the free parameters of string theory 
> and general relativity is of this magnitude. I believe that Wolfram also 
> argued that the (observable) universe is a few lines of code.
>

I really do not understand your willingness to restrict "universe" to
"observable universe". The description length of the observable
universe was very small at one time because at that time none of the
basic stuffs of the universe had yet interacted, so by definition the
description length of the observable universe for each basic entity is
just the description length of that entity. As time moves forward, the
entities interact and the description lengths of their observable
universes increase. Similarly, today, one might say that the
observable universe for each person is slightly different, and indeed
the universe observable from my right hand would be slightly different
than the one observable from my left. They could have differing
description lengths.

In short, I think you really want to apply your argument to the
"actual" universe, not merely observable subsets... or if you don't,
you should, because otherwise it seems like a very strange argument.

> But even if we discover this program it does not mean we could model the 
> universe deterministically. We would need a computer larger than the universe 
> to do so.

Agreed... partly thanks to your argument below.

> There is a simple argument using information theory. Every system S has a 
> Kolmogorov complexity K(S), which is the smallest size that you can compress 
> a description of S to. A model of S must also have complexity K(S). However, 
> this leaves no space for S to model itself. In particular, if all of S's 
> memory is used to describe its model, there is no memory left over to store 
> any results of the simulation.

Point conceded.


--Abram




Re: Computation as an explanation of the universe (was Re: [agi] Recursive self-change: some definitions)

2008-09-03 Thread Matt Mahoney
--- On Wed, 9/3/08, Abram Demski <[EMAIL PROTECTED]> wrote:

> From: Abram Demski <[EMAIL PROTECTED]>
> Subject: Re: Computation as an explanation of the universe (was Re: [agi] 
> Recursive self-change: some definitions)
> To: agi@v2.listbox.com
> Date: Wednesday, September 3, 2008, 7:35 PM
> Matt, I have several objections.
> 
> First, as I understand it, your statement about the universe having a
> finite description length only applies to the *observable* universe,
> not the universe as a whole. The Hubble radius expands at the speed of
> light as more light reaches us, meaning that the observable universe
> has a longer description length every day. So it does not seem very
> relevant to say that the description length is finite.
>
> The universe as a whole (observable and not-observable) *could* be
> finite, but we don't know one way or the other so far as I am aware.

OK, then the observable universe has a finite description length. We don't need 
to describe anything else to model it, so by "universe" I mean only the 
observable part.

> 
> Second, I do not agree with your reason for saying that physics is
> necessarily probabilistic. It seems possible to have a completely
> deterministic physics, which merely suffers from a lack of information
> and computation ability. Imagine if the universe happened to follow
> Newtonian physics, with atoms being little billiard balls. The
> situation is deterministic, if only we knew the starting state of the
> universe and had large enough computers to approximate the
> differential equations to arbitrary accuracy.

I am saying that the universe *is* deterministic. It has a definite quantum 
state, but we would need about 10^122 bits of memory to describe it. Since we 
can't do that, we have to resort to approximate models like quantum mechanics.

I believe there is a simpler description. First, the description length is 
increasing with the square of the age of the universe, since it is proportional 
to area. So it must have been very small at one time. Second, the most 
efficient way to enumerate all possible universes would be to run each B-bit 
machine for 2^B steps, starting with B = 0, 1, 2... until intelligent life is 
found. For our universe, B ~ 407. You could reasonably argue that the 
algorithmic complexity of the free parameters of string theory and general 
relativity is of this magnitude. I believe that Wolfram also argued that the 
(observable) universe is a few lines of code.

But even if we discover this program it does not mean we could model the 
universe deterministically. We would need a computer larger than the universe 
to do so.
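[Editor's aside: the enumeration schedule Matt describes -- run each B-bit machine for 2^B steps, B = 0, 1, 2, ... -- can be sketched directly. This is my illustration; `run` is a hypothetical placeholder that "finds life" in one particular program, standing in for an actual step-bounded Turing machine interpreter.]

```python
def run(program, steps):
    """Hypothetical stand-in for executing a B-bit machine for a bounded
    number of steps; here it simply 'finds life' in one fixed program."""
    return "life" if program == "101" else None

def enumerate_universes(run, max_B=16):
    """Run each B-bit program for 2**B steps, for B = 0, 1, 2, ...,
    stopping at the first program whose bounded run succeeds."""
    for B in range(max_B + 1):
        for n in range(2 ** B):
            program = format(n, "b").zfill(B) if B else ""
            result = run(program, steps=2 ** B)
            if result is not None:
                return B, program, result
    return None

print(enumerate_universes(run))  # (3, '101', 'life')
```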

> Third, this is nitpicking, but I also am not sure about the argument
> that we cannot predict our thoughts. It seems formally possible that a
> system could predict itself. The system would need to be compressible,
> so that a model of itself could fit inside the whole. I could be wrong
> here, feel free to show me that I am. Anyway, the same objection also
> applies back to the necessity of probabilistic physics: is it really
> impossible for beings within a universe to have an accurate compressed
> model of the entire universe? (Similarly, if we have such a model,
> could we use it to run a simulation of the entire universe? This seems
> much less possible.)

There is a simple argument using information theory. Every system S has a 
Kolmogorov complexity K(S), which is the smallest size that you can compress a 
description of S to. A model of S must also have complexity K(S). However, this 
leaves no space for S to model itself. In particular, if all of S's memory is 
used to describe its model, there is no memory left over to store any results 
of the simulation.
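[Editor's aside: Matt's counting argument can be written out a little more formally. This is a sketch of the reasoning as stated, with K denoting Kolmogorov complexity, |S| the memory of system S, and M an exact model of S stored inside S.]

```latex
% Sketch of the self-modeling argument.
K(S) \le |S|            % a system's memory bounds its description length
K(M) \ge K(S)           % an exact model of S is itself a description of S
|M|  \ge K(M) \ge K(S)  % so the stored model alone occupies at least K(S) bits
|S| - |M| \le |S| - K(S) % at most |S| - K(S) bits remain for anything else
% If S is incompressible, |S| = K(S) and the remainder is zero:
% no memory is left to store the results of running the model,
% so S cannot both hold and simulate an exact model of itself.
```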

> 
> --Abram
> 
> 
> On Wed, Sep 3, 2008 at 6:45 PM, Matt Mahoney
> <[EMAIL PROTECTED]> wrote:
> > I think that computation is not so much a metaphor for understanding
> > the universe as it is an explanation. If you enumerate all possible
> > Turing machines, thus enumerating all possible laws of physics, then
> > some of those universes will have the right conditions for the
> > evolution of intelligent life. If neutrons were slightly heavier than
> > they actually are (relative to protons), then stars could not sustain
> > fusion. If they were slightly lighter, then they would be stable and
> > we would have no elements.
> >
> > Because of gravity, the speed of light, Planck's constant, the
> > quantization of electric charge, and the finite age of the universe,
> > the universe has a finite length description, and is therefore
> > computable. The Bekenstein bound of the Hubble radius is 2.91 x 10^122
> > bits. Any compute

Re: [agi] Recursive self-change: some definitions

2008-09-03 Thread Terren Suydam

Hi Mike,

I see two ways to answer your question. One is along the lines that Jaron 
Lanier has proposed - the idea of software interfaces that are fuzzy. So rather 
than function calls that take a specific set of well defined arguments, 
software components talk somehow in 'patterns' such that small errors can be 
tolerated. While there would still be a kind of 'code' that executes, the 
process of translating it to processor instructions would be much more highly 
abstracted than any current high level language. I'm not sure I truly grokked 
Lanier's concept, but it's clear that for it to work, this high-level pattern 
idea would still need to somehow translate to instructions the processor can 
execute.
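[Editor's aside: one toy reading of the fuzzy-interface idea (my sketch, not Lanier's actual proposal) is a component that dispatches on the *closest* known pattern rather than an exact signature, so a slightly corrupted message still reaches the right handler. `FuzzyInterface` and `similarity` are hypothetical names, and the example assumes messages and patterns are equal-length bit strings.]

```python
def similarity(a, b):
    """Fraction of positions where two equal-length pattern strings agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

class FuzzyInterface:
    """Dispatch on the closest registered pattern instead of an exact
    signature, so small errors in the message are tolerated."""
    def __init__(self):
        self.handlers = {}

    def register(self, pattern, handler):
        self.handlers[pattern] = handler

    def call(self, message, threshold=0.7):
        pattern = max(self.handlers, key=lambda p: similarity(p, message))
        if similarity(pattern, message) < threshold:
            raise ValueError("no pattern close enough")
        return self.handlers[pattern](message)

iface = FuzzyInterface()
iface.register("11110000", lambda m: "open")
iface.register("00001111", lambda m: "close")
print(iface.call("11010000"))  # one flipped bit still routes to "open"
```

As the surrounding text notes, this high-level matching still bottoms out in ordinary processor instructions; the fuzziness lives in the dispatch layer, not the hardware.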

The other way of answering this question is in terms of creating simulations of 
things like brains that don't execute code. You model the parallelism in code 
from which emerges the structures of interest. This is the A-Life approach that 
I advocate.

But at bottom, a computer is a processor that executes instructions. Unless 
you're talking about a radically different kind of computer... if so, care to 
elaborate?

Terren

--- On Wed, 9/3/08, Mike Tintner <[EMAIL PROTECTED]> wrote:
From: Mike Tintner <[EMAIL PROTECTED]>
Subject: Re: [agi] Recursive self-change: some definitions
To: agi@v2.listbox.com
Date: Wednesday, September 3, 2008, 7:02 PM



 
 

Terren: My own feeling is that computation is just the latest in a series of 
technical metaphors that we apply in service of understanding how the universe 
works. Like the others before it, it captures some valuable aspects and leaves 
out others. It leaves me wondering: what future metaphors will we apply to the 
universe, ourselves, etc., that will make computation-as-metaphor seem as 
quaint as the old clockworks analogies?

I think this is a good important point. I've been groping confusedly here. It 
seems to me computation necessarily involves the idea of using a code (?). But 
the nervous system seems to me something capable of functioning without a code 
- directly being imprinted on by the world, and directly forming movements, 
(even if also involving complex hierarchical processes), without any code. I've 
been wondering whether computers couldn't also be designed to function without 
a code in somewhat similar fashion. Any thoughts or ideas of your own?



  

  



Re: Computation as an explanation of the universe (was Re: [agi] Recursive self-change: some definitions)

2008-09-03 Thread Abram Demski
Matt, I have several objections.

First, as I understand it, your statement about the universe having a
finite description length only applies to the *observable* universe,
not the universe as a whole. The Hubble radius expands at the speed of
light as more light reaches us, meaning that the observable universe
has a longer description length every day. So it does not seem very
relevant to say that the description length is finite.

The universe as a whole (observable and not-observable) *could* be
finite, but we don't know one way or the other so far as I am aware.

Second, I do not agree with your reason for saying that physics is
necessarily probabilistic. It seems possible to have a completely
deterministic physics, which merely suffers from a lack of information
and computation ability. Imagine if the universe happened to follow
Newtonian physics, with atoms being little billiard balls. The
situation is deterministic, if only we knew the starting state of the
universe and had large enough computers to approximate the
differential equations to arbitrary accuracy.

Third, this is nitpicking, but I also am not sure about the argument
that we cannot predict our thoughts. It seems formally possible that a
system could predict itself. The system would need to be compressible,
so that a model of itself could fit inside the whole. I could be wrong
here, feel free to show me that I am. Anyway, the same objection also
applies back to the necessity of probabilistic physics: is it really
impossible for beings within a universe to have an accurate compressed
model of the entire universe? (Similarly, if we have such a model,
could we use it to run a simulation of the entire universe? This seems
much less possible.)

--Abram


On Wed, Sep 3, 2008 at 6:45 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> I think that computation is not so much a metaphor for understanding the 
> universe as it is an explanation. If you enumerate all possible Turing 
> machines, thus enumerating all possible laws of physics, then some of those 
> universes will have the right conditions for the evolution of intelligent 
> life. If neutrons were slightly heavier than they actually are (relative to 
> protons), then stars could not sustain fusion. If they were slightly lighter, 
> then they would be stable and we would have no elements.
>
> Because of gravity, the speed of light, Planck's constant, the quantization 
> of electric charge, and the finite age of the universe, the universe has a 
> finite length description, and is therefore computable. The Bekenstein bound 
> of the Hubble radius is 2.91 x 10^122 bits. Any computer within a finite 
> universe must have less memory than it, and therefore cannot simulate it 
> except by using an approximate (probabilistic) model. One such model is 
> quantum mechanics.
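[Editor's aside: the 2.91 x 10^122 figure quoted above can be sanity-checked with a back-of-the-envelope holographic-bound computation. This is my sketch; the exact value depends on the assumed Hubble constant, taken here as roughly 70 km/s/Mpc.]

```python
import math

# Physical constants (SI units)
c = 2.998e8            # speed of light, m/s
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34       # reduced Planck constant, J s
H0 = 70e3 / 3.086e22   # Hubble constant: ~70 km/s/Mpc, in 1/s

R = c / H0                          # Hubble radius, m
A = 4 * math.pi * R**2              # area of the Hubble sphere, m^2
lp2 = hbar * G / c**3               # Planck length squared, m^2
bits = A / (4 * lp2 * math.log(2))  # holographic bound, in bits

print(f"{bits:.2e}")  # ~3e122 bits, the same order as the quoted figure
```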
>
> For the same reason, an intelligent agent (which must be Turing computable if 
> the universe is) cannot model itself, except probabilistically as an 
> approximation. Thus, we cannot predict what we will think without actually 
> thinking it. This property makes our own intelligence seem mysterious.
>
> An explanation is only useful if it makes predictions, and it does. If the 
> universe were not Turing computable, then Solomonoff induction and AIXI as 
> ideal models of prediction and intelligence would not be applicable to the 
> real world. Yet we have Occam's Razor and find in practice that all 
> successful machine learning algorithms use algorithmically simple hypothesis 
> sets.
>
>
> -- Matt Mahoney, [EMAIL PROTECTED]
>
> --- On Wed, 9/3/08, Terren Suydam <[EMAIL PROTECTED]> wrote:
> From: Terren Suydam <[EMAIL PROTECTED]>
> Subject: Re: [agi] Recursive self-change: some definitions
> To: agi@v2.listbox.com
> Date: Wednesday, September 3, 2008, 4:17 PM
>
>
> Hi Ben,
>
> My own feeling is that computation is just the latest in a series of 
> technical metaphors that we apply in service of understanding how the 
> universe works. Like the others before it, it captures some valuable aspects 
> and leaves out others. It leaves me wondering: what future metaphors will we 
> apply to the universe, ourselves, etc., that will make 
> computation-as-metaphor seem as quaint as the old clockworks analogies?
>
> I believe that computation is important in that it can help us simulate 
> intelligence, but intelligence itself is not simply computation (or if it is, 
> it's in a way that requires us to transcend our current notions of 
> computation). Note that I'm not suggesting anything mystical or dualistic at 
> all, just offering the possibility that we can find still greater metaphors 
> for how intelligence works.
>
> Either way though, I'm very interested in the results of your work - at
> worst, it will shed some needed light on the subject. At best... well, you
> know that part. :-]

Re: [agi] Recursive self-change: some definitions

2008-09-03 Thread Mike Tintner
Terren:My own feeling is that computation is just the latest in a series of 
technical metaphors that we apply in service of understanding how the universe 
works. Like the others before it, it captures some valuable aspects and leaves 
out others. It leaves me wondering: what future metaphors will we apply to the 
universe, ourselves, etc., that will make computation-as-metaphor seem as 
quaint as the old clockworks analogies?

I think this is a good, important point. I've been groping confusedly here. It 
seems to me computation necessarily involves the idea of using a code (?). But 
the nervous system seems to me something capable of functioning without a code 
- directly being imprinted on by the world, and directly forming movements 
(even if also involving complex hierarchical processes), without any code. I've 
been wondering whether computers couldn't also be designed to function without 
a code in somewhat similar fashion. Any thoughts or ideas of your own?


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Computation as an explanation of the universe (was Re: [agi] Recursive self-change: some definitions)

2008-09-03 Thread Matt Mahoney
I think that computation is not so much a metaphor for understanding the 
universe as it is an explanation. If you enumerate all possible Turing 
machines, thus enumerating all possible laws of physics, then some of those 
universes will have the right conditions for the evolution of intelligent life. 
If neutrons were slightly heavier than they actually are (relative to protons), 
then stars could not sustain fusion. If they were slightly lighter, then they 
would be stable and we would have no elements.

Because of gravity, the speed of light, Planck's constant, the quantization of 
electric charge, and the finite age of the universe, the universe has a 
finite-length description, and is therefore computable. The Bekenstein bound of 
the Hubble radius is 2.91 x 10^122 bits. Any computer within a finite universe 
must have less memory than it, and therefore cannot simulate it except by using 
an approximate (probabilistic) model. One such model is quantum mechanics.
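The 2.91 x 10^122 figure can be checked with a back-of-envelope calculation: horizon entropy pi R^2 / l_p^2 nats, converted to bits. This is only a sketch; the Hubble constant value (here H0 ~ 70 km/s/Mpc) is an assumption, and the exact prefactor depends on it.

```python
import math

# Rough check of the Bekenstein bound of the Hubble radius, in bits.
# Assumed inputs: H0 ~ 70 km/s/Mpc; horizon entropy pi*R^2/l_p^2 nats,
# divided by ln 2 to convert to bits.
c   = 2.998e8                 # speed of light, m/s
H0  = 70e3 / 3.086e22         # Hubble constant, 1/s
l_p = 1.616e-35               # Planck length, m

R    = c / H0                 # Hubble radius, m
bits = math.pi * R**2 / (l_p**2 * math.log(2))
print(f"{bits:.2e}")          # ~10^122 bits; exact prefactor depends on H0
```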

For the same reason, an intelligent agent (which must be Turing computable if 
the universe is) cannot model itself, except probabilistically as an 
approximation. Thus, we cannot predict what we will think without actually 
thinking it. This property makes our own intelligence seem mysterious.

An explanation is only useful if it makes predictions, and it does. If the 
universe were not Turing computable, then Solomonoff induction and AIXI as 
ideal models of prediction and intelligence would not be applicable to the real 
world. Yet we have Occam's Razor and find in practice that all successful 
machine learning algorithms use algorithmically simple hypothesis sets.
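That last point can be illustrated with a toy Occam prior, weighting each hypothesis h by 2^(-L(h)). This is a sketch under invented description lengths, not any particular learning algorithm:

```python
from math import fsum

# Toy Occam prior (description lengths L(h) are made up for illustration):
# among hypotheses consistent with the data, the algorithmically simplest
# one dominates the posterior.
data = [0, 1, 0, 1, 0, 1]                      # observed bits
hypotheses = {                                  # name: (L(h) in bits, predictor)
    "alternating":  (3,  lambda n: n % 2),
    "all zeros":    (2,  lambda n: 0),
    "lookup table": (12, lambda n: data[n]),    # memorizes the data verbatim
}
consistent = {name: 2.0 ** -L
              for name, (L, f) in hypotheses.items()
              if all(f(n) == b for n, b in enumerate(data))}
Z = fsum(consistent.values())
posterior = {name: w / Z for name, w in consistent.items()}
print(posterior)   # "alternating" gets almost all of the mass
```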


-- Matt Mahoney, [EMAIL PROTECTED]

--- On Wed, 9/3/08, Terren Suydam <[EMAIL PROTECTED]> wrote:
From: Terren Suydam <[EMAIL PROTECTED]>
Subject: Re: [agi] Recursive self-change: some definitions
To: agi@v2.listbox.com
Date: Wednesday, September 3, 2008, 4:17 PM


Hi Ben, 

My own feeling is that computation is just the latest in a series of technical 
metaphors that we apply in service of understanding how the universe works. 
Like the others before it, it captures some valuable aspects and leaves out 
others. It leaves me wondering: what future metaphors will we apply to the 
universe, ourselves, etc., that will make computation-as-metaphor seem as 
quaint as the old clockworks analogies?

I believe that computation is important in that it can help us simulate 
intelligence, but intelligence itself is not simply computation (or if it is, 
it's in a way that requires us to transcend our current notions of 
computation). Note that I'm not suggesting anything mystical or dualistic at 
all, just offering the possibility that we can find still greater metaphors for 
how intelligence works. 

Either way though, I'm very interested in the results of your work - at worst, 
it will shed some needed light on the subject. At best... well, you know that 
part. :-]

Terren

--- On Tue, 9/2/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
From: Ben Goertzel <[EMAIL PROTECTED]>
Subject: Re: [agi] Recursive self-change: some definitions
To: agi@v2.listbox.com
Date: Tuesday, September 2, 2008, 4:50 PM



On Tue, Sep 2, 2008 at 4:43 PM, Eric Burton <[EMAIL PROTECTED]> wrote:

I really see a number of algorithmic breakthroughs as necessary for
the development of strong general AI

I hear that a lot, yet I never hear any convincing arguments in that regard...

So, hypothetically (and I hope not insultingly), I tend to view this as a kind 
of unconscious overestimation of the awesomeness of our own species ... we feel 
intuitively like we're doing SOMETHING so cool in our brains, it couldn't 
possibly be emulated or superseded by mere algorithms like the ones computer 
scientists have developed so far ;-)


ben






Re: [agi] Recursive self-change: some definitions

2008-09-03 Thread Ben Goertzel
hi,


>
> What I am interested in is: if someone gives me a computer system that
> changes its state in some fashion, can I state how powerful that
> method of change is likely to be? That is, what is the exact difference
> between a traditional learning algorithm and the way I envisage AGIs
> changing their state?
>

I'm sure this question is unsolvable in general ... so the interesting
question may be: Is there a subset of the class of possible AGIs, which
includes systems of an extremely (and hopefully unlimitedly) high level of
intelligence, and for which it *is* tractable to usefully probabilistically
predict the consequences of the system's self-modifications...


>
> Also, can you formalise the difference between a human's method of
> learning how to learn, and bootstrapping language off language (both
> examples of a strange loop), and a program inspecting and changing its
> source code?
>

Suppose one has a program of size N that has some self-reprogramming
capability. There's a question of: for a certain probability p, how large
is the subset of program space that the program has probability > p of
entering (where the probability is calculated across possible worlds, e.g.
according to an Occam distribution).
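This question can at least be probed numerically in a toy setting. The following is a sketch with an invented 4-bit "program" space and uniformly random modifications, not an Occam distribution proper:

```python
import random

# Toy probe of the question above (invented setting): a 4-bit "program"
# flips one of its own bits per step, with the flipped bit chosen by the
# environment.  Averaging over many random "possible worlds", we estimate
# what fraction of the 16-program space a run of k self-modification
# steps enters.
def reachable_fraction(k, trials=2000, seed=0):
    rng = random.Random(seed)
    total = 0
    space = 16                            # 2**4 possible programs
    for _ in range(trials):
        p = 0                             # start from the all-zeros program
        seen = {p}
        for _ in range(k):
            p ^= 1 << rng.randrange(4)    # one self-modification step
            seen.add(p)
        total += len(seen)
    return total / (trials * space)

print(reachable_fraction(8))
```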





>
> I'm also interested in recursive self-changing systems and whether you
> can be sure they will stay recursive self-changing systems, as they
> change.



I'm almost certain there is no certainty in this world, regarding empirical
predictions like that ;-)

ben





Re: [agi] Recursive self-change: some definitions

2008-09-03 Thread William Pearson
2008/9/2 Ben Goertzel <[EMAIL PROTECTED]>:
>
> Yes, I agree that your Turing machine approach can model the same
> situations, but the different formalisms seem to lend themselves to
> different kinds of analysis more naturally...
>
> I guess it all depends on what kinds of theorems you want to formulate...
>

What I am interested in is: if someone gives me a computer system that
changes its state in some fashion, can I state how powerful that
method of change is likely to be? That is, what is the exact difference
between a traditional learning algorithm and the way I envisage AGIs
changing their state?

Also, can you formalise the difference between a human's method of
learning how to learn, and bootstrapping language off language (both
examples of a strange loop), and a program inspecting and changing its
source code?

I'm also interested in recursive self-changing systems and whether you
can be sure they will stay recursive self-changing systems as they
change. This last one especially with regard to people designing systems
with singletons in mind.

  Will




Re: [agi] Recursive self-change: some definitions

2008-09-03 Thread Terren Suydam

Hi Ben, 

My own feeling is that computation is just the latest in a series of technical 
metaphors that we apply in service of understanding how the universe works. 
Like the others before it, it captures some valuable aspects and leaves out 
others. It leaves me wondering: what future metaphors will we apply to the 
universe, ourselves, etc., that will make computation-as-metaphor seem as 
quaint as the old clockworks analogies?

I believe that computation is important in that it can help us simulate 
intelligence, but intelligence itself is not simply computation (or if it is, 
it's in a way that requires us to transcend our current notions of 
computation). Note that I'm not suggesting anything mystical or dualistic at 
all, just offering the possibility that we can find still greater metaphors for 
how intelligence works. 

Either way though, I'm very interested in the results of your work - at worst, 
it will shed some needed light on the subject. At best... well, you know that 
part. :-]

Terren

--- On Tue, 9/2/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
From: Ben Goertzel <[EMAIL PROTECTED]>
Subject: Re: [agi] Recursive self-change: some definitions
To: agi@v2.listbox.com
Date: Tuesday, September 2, 2008, 4:50 PM



On Tue, Sep 2, 2008 at 4:43 PM, Eric Burton <[EMAIL PROTECTED]> wrote:

I really see a number of algorithmic breakthroughs as necessary for
the development of strong general AI

I hear that a lot, yet I never hear any convincing arguments in that regard...

So, hypothetically (and I hope not insultingly), I tend to view this as a kind 
of unconscious overestimation of the awesomeness of our own species ... we feel 
intuitively like we're doing SOMETHING so cool in our brains, it couldn't 
possibly be emulated or superseded by mere algorithms like the ones computer 
scientists have developed so far ;-)


ben







  

  


Re: [agi] Recursive self-change: some definitions

2008-09-02 Thread Ben Goertzel
On Tue, Sep 2, 2008 at 4:43 PM, Eric Burton <[EMAIL PROTECTED]> wrote:

> I really see a number of algorithmic breakthroughs as necessary for
> the development of strong general AI



I hear that a lot, yet I never hear any convincing arguments in that regard...

So, hypothetically (and I hope not insultingly), I tend to view this as a kind
of unconscious overestimation of the awesomeness of our own species ... we feel
intuitively like we're doing SOMETHING so cool in our brains, it couldn't
possibly be emulated or superseded by mere algorithms like the ones computer
scientists have developed so far ;-)

ben





Re: [agi] Recursive self-change: some definitions

2008-09-02 Thread Ben Goertzel
On Tue, Sep 2, 2008 at 3:00 PM, William Pearson <[EMAIL PROTECTED]>wrote:

> 2008/9/2 Ben Goertzel <[EMAIL PROTECTED]>:
> >
> > Hmmm..
> >
> > Rather, I would prefer to model a self-modifying AGI system as something
> > like
> >
> > F(t+1) =  (F(t))( F(t), E(t) )
> >
> > where E(t) is the environment at time t and F(t) is the system at time t
>
> Are you assuming the system knows the environment totally?


no, that is not implied by the formalism...


> Or did you
> mean the input the system gets from the environment? Would you have to
> assume the environment was deterministic as well in order to construct
> a hyperset? Unless you can construct a hyperset tree kind of thing,
> with branches for each possible environmental state?


the hyperset formalism can encompass stochastic as well as deterministic
sets...

ben





Re: [agi] Recursive self-change: some definitions

2008-09-02 Thread Eric Burton
I really see a number of algorithmic breakthroughs as necessary for
the development of strong general AI, but it seems like an imminent
event to me regardless. Nonetheless, much of what we learn about the
brain in the meantime may be nonsense until we fundamentally grok the
mind.




Re: [agi] Recursive self-change: some definitions

2008-09-02 Thread Eric Burton
I don't understand how mimicry specifically occurs without some kind of
Turing-complete GA spawning a huge number of possible paths. I'm
thinking of humanoid robots mapping the movements of a human trainer
onto their motor cortex. I've certainly heard somewhere that this is
one way to do it and I don't see a simpler way. GAs are not a fast or
deterministic kind of search, and I think a good AI would be fast and
deterministic in most regards...

On 9/2/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> Yes, I agree that your Turing machine approach can model the same
> situations, but the different formalisms seem to lend themselves to
> different kinds of analysis more naturally...
>
> I guess it all depends on what kinds of theorems you want to formulate...
>
> ben
>
> On Tue, Sep 2, 2008 at 3:00 PM, William Pearson
> <[EMAIL PROTECTED]>wrote:
>
>> 2008/9/2 Ben Goertzel <[EMAIL PROTECTED]>:
>> >
>> > Hmmm..
>> >
>> > Rather, I would prefer to model a self-modifying AGI system as something
>> > like
>> >
>> > F(t+1) =  (F(t))( F(t), E(t) )
>> >
>> > where E(t) is the environment at time t and F(t) is the system at time t
>>
>> Are you assuming the system knows the environment totally? Or did you
>> mean the input the system gets from the environment? Would you have to
>> assume the environment was deterministic as well in order to construct
>> a hyperset? Unless you can construct a hyperset tree kind of thing,
>> with branches for each possible environmental state?
>>
>> > This is a hyperset equation, but it seems to nicely and directly capture
>> the
>> > fact that the system is actually acting on and modifying itself...
>> >
>>
>> I'll use _ to indicate subscript for now.
>>
>> I think s_n+1 = g_s_n(x) encompasses the same idea of
>> self-modification, as the function that g performs on x is determined
>> by the state. If you consider g to be a UTM and s to be a program, it
>> becomes a bit clearer. Consider g() and f() to be the hardware or
>> physics of the system.
>>
>>  Will
>>
>>
>>
>
>
>
> --
> Ben Goertzel, PhD
> CEO, Novamente LLC and Biomind LLC
> Director of Research, SIAI
> [EMAIL PROTECTED]
>
> "Nothing will ever be attempted if all possible objections must be first
> overcome " - Dr Samuel Johnson
>
>
>
>




Re: [agi] Recursive self-change: some definitions

2008-09-02 Thread Ben Goertzel
Yes, I agree that your Turing machine approach can model the same
situations, but the different formalisms seem to lend themselves to
different kinds of analysis more naturally...

I guess it all depends on what kinds of theorems you want to formulate...

ben

On Tue, Sep 2, 2008 at 3:00 PM, William Pearson <[EMAIL PROTECTED]>wrote:

> 2008/9/2 Ben Goertzel <[EMAIL PROTECTED]>:
> >
> > Hmmm..
> >
> > Rather, I would prefer to model a self-modifying AGI system as something
> > like
> >
> > F(t+1) =  (F(t))( F(t), E(t) )
> >
> > where E(t) is the environment at time t and F(t) is the system at time t
>
> Are you assuming the system knows the environment totally? Or did you
> mean the input the system gets from the environment? Would you have to
> assume the environment was deterministic as well in order to construct
> a hyperset? Unless you can construct a hyperset tree kind of thing,
> with branches for each possible environmental state?
>
> > This is a hyperset equation, but it seems to nicely and directly capture
> the
> > fact that the system is actually acting on and modifying itself...
> >
>
> I'll use _ to indicate subscript for now.
>
> I think s_n+1 = g_s_n(x) encompasses the same idea of
> self-modification, as the function that g performs on x is determined
> by the state. If you consider g to be a UTM and s to be a program, it
> becomes a bit clearer. Consider g() and f() to be the hardware or
> physics of the system.
>
>  Will
>
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome " - Dr Samuel Johnson





Re: [agi] Recursive self-change: some definitions

2008-09-02 Thread William Pearson
2008/9/2 Ben Goertzel <[EMAIL PROTECTED]>:
>
> Hmmm..
>
> Rather, I would prefer to model a self-modifying AGI system as something
> like
>
> F(t+1) =  (F(t))( F(t), E(t) )
>
> where E(t) is the environment at time t and F(t) is the system at time t

Are you assuming the system knows the environment totally? Or did you
mean the input the system gets from the environment? Would you have to
assume the environment was deterministic as well in order to construct
a hyperset? Unless you can construct a hyperset tree kind of thing,
with branches for each possible environmental state?

> This is a hyperset equation, but it seems to nicely and directly capture the
> fact that the system is actually acting on and modifying itself...
>

I'll use _ to indicate subscript for now.

I think s_n+1 = g_s_n(x) encompasses the same idea of
self-modification, as the function that g performs on x is determined
by the state. If you consider g to be a UTM and s to be a program, it
becomes a bit clearer. Consider g() and f() to be the hardware or
physics of the system.
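Will's formulation can be sketched concretely: a fixed interpreter g ("hardware") runs a program s that is pure data, and each step yields the successor program, so s_n+1 = g_s_n(x). The encoding below (a running total plus a rewritable rule) is invented purely for illustration:

```python
# Toy version of the s_n+1 = g_s_n(x) formulation.  g is the fixed
# "hardware"/UTM; the program s is data that both processes the input x
# and determines its own successor.
def g(s, x):
    total, rule = s                      # s encodes state + its own rule
    total = 2 * total + x if rule == "double-add" else total + x
    # self-modification: the program rewrites its rule past a threshold
    rule = "double-add" if total > 10 else rule
    return (total, rule)

s = (0, "add")
for x in (4, 8, 1):
    s = g(s, x)                          # s_n+1 = g_s_n(x)
print(s)   # -> (25, 'double-add')
```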

  Will




Re: [agi] Recursive self-change: some definitions

2008-09-02 Thread Ben Goertzel
Hmmm..

Rather, I would prefer to model a self-modifying AGI system as something
like

F(t+1) =  (F(t))( F(t), E(t) )

where E(t) is the environment at time t and F(t) is the system at time t

This is a hyperset equation, but it seems to nicely and directly capture the
fact that the system is actually acting on and modifying itself...
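In a language with first-class functions, the self-application in F(t+1) = (F(t))( F(t), E(t) ) can be written down directly. This is a minimal sketch; the counter payload standing in for the system's content is invented for illustration:

```python
# Minimal sketch of F(t+1) = (F(t))(F(t), E(t)): the state F is itself a
# function that, applied to itself and the environment, yields the next
# state.  The counter attribute n is a stand-in for the system's content.
def make_state(n):
    def F(self, env):
        # "acting on and modifying itself": build a successor from self + env
        return make_state(self.n + env)
    F.n = n
    return F

F = make_state(0)
for E in (1, 2, 3):        # environment signal at times t = 0, 1, 2
    F = F(F, E)            # F(t+1) = (F(t))(F(t), E(t))
print(F.n)                 # -> 6
```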

-- Ben

On Tue, Sep 2, 2008 at 9:30 AM, William Pearson <[EMAIL PROTECTED]>wrote:

> I've put up a short fairly dense un-referenced paper (basically an
> email but in a pdf to allow for maths) here.
>
> http://codesoup.sourceforge.net/RSC.pdf
>
> Any thoughts/ feed back welcomed. I'll try and make it more accessible
> at some point, but I don't want to spend too much time on it at the
> moment.
>
>  Will
>
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome " - Dr Samuel Johnson


