Re: [agi] Determinism

2007-05-15 Thread Christian Mauduit
Hi,

On Mon, May 14, 2007 4:57 pm, David Clark wrote:
 Some people take Mathematics and their so called proofs as the gospel
 when it comes to programming and AGI.  Even though I have a Math minor
 from University, I have used next to no Mathematics in my 30 year
 programming/design career.  I have never been impressed by complicated
 formulas and I have been many slick (Math) talking people who couldn't
 produce anything that worked in the real world.
I don't think the question is whether the program itself should rely on
math or not. The point is that math and computing are deeply linked, and
math theory does say that some things are possible, and some are not.

It's a fact that a computer with 1GB of memory will have much trouble
simulating a computer with 1GB of memory. Maybe a computer with 1GB + 1 bit
could do the job of simulating the same 1GB computer, but well, I
imagine it would still need a little more than that. For other reasons -
but still math-related - you'll never ever find an algorithm which
compresses *every* file. Of course you'll find an algorithm which
compresses most real-world files (we use them every day) but there's a
slight difference between all files and most files.
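The counting argument behind this claim can be checked directly; the sketch below (the function name is mine, purely for illustration) just counts pigeonholes:

```python
# Pigeonhole sketch: no lossless compressor can shrink *every* file.
# There are 2**n bit-strings of length n, but only 2**n - 1 strings
# strictly shorter than n bits, so some n-bit input must map to an
# output at least n bits long.
def count_strings_up_to(length):
    """Number of distinct bit-strings of length 0..length inclusive."""
    return 2 ** (length + 1) - 1

n = 8
inputs = 2 ** n                               # all 8-bit files
shorter_outputs = count_strings_up_to(n - 1)  # all files shorter than 8 bits
assert shorter_outputs < inputs               # 255 < 256: one file can't shrink
print(inputs, shorter_outputs)                # 256 255
```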

Of course real-world programs that do useful stuff are what we need, but
when theory says this kind of program / computer / foo / bar cannot be
built, it's wise to pay attention. I'm happy Gödel discovered that it
wasn't possible to answer some questions, or else I might still be
searching for the answers.

Now if a computer with 1.1GB can simulate a computer with 1GB, this is
probably, from your real-world point of view, pretty much enough, since
someone capable of building a 1GB computer can probably build a 1.1GB
computer for that purpose. Scaling being much easier on computers than on
a human brain, this property might by itself justify the interest in AGI.
But a computer with 1.1GB is not the same as a computer with 1GB. Or
else I could prove to you that no matter how much memory you put in a
computer, they are all the same. The conclusion is that a real-world
computer as we know them today (a finite-state machine, despite the
"Turing machine" shorthand) can't simulate itself. But it can
simulate something that is so close to itself that in most cases it
won't make any difference. Most. Not all.

I suspect you consider math-related all activities that are linked to
formulas, calculus, probability, numbers, and such things. It happens that
knowing that ((not a) and (not b)) is equivalent to (not (a or b)) is just
plain math. And that's the kind of code you find everywhere: in web
servers, arcade games, cryptographic algorithms, regexp engines, well,
anywhere.
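That De Morgan equivalence can be verified exhaustively over all boolean assignments; a minimal check:

```python
from itertools import product

# Exhaustive check of De Morgan's law:
#   (not a) and (not b)  ==  not (a or b)
# Two variables, so only four cases to test.
for a, b in product([False, True], repeat=2):
    assert ((not a) and (not b)) == (not (a or b))
print("De Morgan holds for all 4 cases")
```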

Of course, we can debate what math is, but IMHO most computer-related
concepts and skills are derived from math.

Have a nice day,

Christian.

-- 
Christian Mauduit [EMAIL PROTECTED]
http://www.ufoot.org/
http://www.ufoot.org/ufoot.pub (GnuPG)

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=fabd7936


Re: [agi] Determinism

2007-05-14 Thread David Clark
- Original Message - 
  From: Derek Zahn 
  To: agi@v2.listbox.com 
  Sent: Sunday, May 13, 2007 1:49 PM
  Subject: RE: [agi] Determinism


  Matt Mahoney writes:
   
   (sigh)
   
  http://en.wikipedia.org/wiki/Scruffies

I don't think my disagreement with Matt is about Neats vs Scruffies!

Some people take Mathematics and their so called proofs as the gospel when it 
comes to programming and AGI.  Even though I have a Math minor from University, 
I have used next to no Mathematics in my 30 year programming/design career.  I 
have never been impressed by complicated formulas and I have been many slick 
(Math) talking people who couldn't produce anything that worked in the real 
world.

I'm not saying that some Math isn't useful for AGI in limited instances, but I 
have yet to see any great program that relied very heavily on Math.  Most of 
the examples I have seen of explaining simple concepts using Math on this list 
have resulted in less accurate communication rather than more.

Turing machines and Chinese room experiments are fine for the philosophers that 
create nothing but hot air but I respect real designs that work in the real 
world.

There are many algorithmic systems that are not solid-state, one-for-one, real 
time simulations or models.  Humans use what we call models all the time, and 
we have a tiny short-term memory to work with.

Saying that you don't trade time for memory in models or any computer program 
just shows a lack of real world experience on the part of Matt IMHO.

David Clark


Re: [agi] Determinism

2007-05-14 Thread Benjamin Goertzel



Saying that you don't trade time for memory in models or any computer
program just shows a lack of real world experience on the part of Matt IMHO.

David Clark




Of course CS is full of time/memory tradeoffs, but I think Matt's point was
that a finite-state machine just can't completely simulate itself internally
for basic algorithmic-information-theory reasons.

However, it can **approximately** simulate itself, which is the important
thing and is what we all do every day -- via the approximate simulations of
ourselves that we call our selves...

-- Ben G


Re: [agi] Determinism

2007-05-14 Thread Shane Legg

On 5/14/07, David Clark [EMAIL PROTECTED] wrote:



Even though I have a Math minor from University, I have used next to no
Mathematics in my 30 year programming/design career.



Yes, but what do you program?

I've been programming for 24 years and I use math all the time.
Recently I've been working with Marcus Hutter on a new learning
algorithm based on a rather nasty mathematical derivation.  The
results kick butt.  Another tricky derivation that Hutter did a few
years back is now producing good results in processing gene
expression data for cancer research.  I could list many more...

Anyway, my point is, whether you need math in your programming
or not all depends on what it is that you are trying to program.

Shane


RE: [agi] Determinism

2007-05-14 Thread John G. Rose
Have to blurb on this as it irks me - 

 

Even if you write a Hello World app it is a mathematical entity expressed
through a mathematical medium.  Software layers from source to binary to OS
to drivers are a gradation from the mathematically abstract to the physical
world, as with painting an abstract image with oils on canvas.  The
electronics hierarchy that the software runs on is just more binary and
systematic than oils on canvas.

 

Great programs that heavily use math - to name a few I've used:

Engineering - AutoCAD, Ansys, SolidWorks

3D design - 3D Studio Max, LightWave, Maya, SoftImage

Chemical design - ChemOffice

Electronics - Electronics Workbench, VisualSpice

Audio processing - Audition, SpectraPlus

Accounting - QuickBooks; spreadsheets - Excel; data analysis - MATLAB

the list is unfathomable - there are too many categories, and these are just
some off-the-shelf greats, not custom-built...

 

The closer software can get to mathematics, the better and more powerful it
is.  It's as if the world can be described with data instead of mass and
energy: the fundamental interactions will probably be united under a model
described with data only, as data is the unifying force.  Expressions and
operations on data are just re-expressions of data; the data and the
operations are the same.  Software is a very good utilitarian expression
virtualized on this unifying data space.

 

For AGI there are the pure knowledge-cognition engines that exist only in
theory, engines which approach infinite intelligence.  The problem is
breaking chunks of these models off, or coming up with simpler models, and
fitting the models into the real world, running on practical engineering
systems - software running on computers that are contemporary or will be
contemporary in the near future.

 

For AGI design we tend to leave the mathematical proofs up to the experts
and rely on their results, using that knowledge as a supply of tools and
components for building data-interaction systems.  The library of
mathematical results from proofs is so large and untapped that it is really
hard for me to imagine an AGI is not running right now somewhere on
someone's computer.  That is the opportunity in this sort of thing.
Software building in general is a continuous struggle to materialize into
software the non-materialized mathematics that is applicable and can
generate or enhance a revenue model, or interest, for its survival.

 

As for classifying all software: it could be done based on source code as
DNA, but I'm now thinking it should be done based on the incorporated
and/or utilized mathematics.  Source code is just a mathematical structure,
and the function of the software is a type of emergent behavior of the
source code; it can be distilled into mathematical descriptor trees for
different software categories and lineages.

 

John

 

 

From: David Clark [mailto:[EMAIL PROTECTED] 



Some people take Mathematics and their so called proofs as the gospel when
it comes to programming and AGI.  Even though I have a Math minor from
University, I have used next to no Mathematics in my 30 year
programming/design career.  I have never been impressed by complicated
formulas and I have been many slick (Math) talking people who couldn't
produce anything that worked in the real world.

 

I'm not saying that some Math isn't useful for AGI in limited instances, but
I have yet to see any great program that relied very heavily on Math.  Most
of the examples I have seen of explaining simple concepts using Math on this
list have resulted in less accurate communication rather than more.

 

Turing machines and Chinese room experiments are fine for the philosophers
that create nothing but hot air but I respect real designs that work in the
real world.

 


Re: [agi] Determinism

2007-05-14 Thread J Storrs Hall, PhD
On Monday 14 May 2007 11:02:33 am Benjamin Goertzel wrote:

 We use some probability theory ... and some of the theory of rewriting
 systems, lambda calculus, etc.   This stuff is in a subordinate role to a
 cognitive-systems-theory-based design, but is still very useful...

ditto -- and for my part, quite a lot of linear algebra and calculus thru ODEs 
(but no PDEs so far).

Josh



Re: [agi] Determinism

2007-05-14 Thread David Clark

  - Original Message - 
  From: Benjamin Goertzel 
  To: agi@v2.listbox.com 
  Sent: Monday, May 14, 2007 8:02 AM
  Subject: Re: [agi] Determinism




 I have never been impressed by complicated formulas and I have been many 
slick (Math) talking people who couldn't produce anything that worked in the 
real world.

  A fascinating Freudian slip!  ;-)

You caught me for not checking my wording better.  It should have read "I have 
seen", not "I have been". 


I'm not saying that some Math isn't useful for AGI in limited instances, 
but I have yet to see any great program that relied very heavily on Math.  Most 
of the examples I have seen of explaining simple concepts using Math on this 
list have resulted in less accurate communication rather than more.


  Well, if Novamente succeeds, it will be a disproof of your above statement.  
The basic NM design is not motivated by mathematics, but plenty of math has 
been used in deriving the details of various aspects of the system.   Mostly 
math at the advanced undergraduate level though --  no algebraic topology, 
several complex variables, etc. 

  We use some probability theory ... and some of the theory of rewriting 
systems, lambda calculus, etc.   This stuff is in a subordinate role to a 
cognitive-systems-theory-based design, but is still very useful...

You might be correct for your project, but I doubt that the Math contained in 
your project is more than a small fraction of the code.  Not all algorithms 
are Math, and most code has to do with Computer Science techniques, not Math.

Some people view all computer code as a kind of Math but I don't see giving 
Math such a broad definition very useful.

I didn't say Math was useless for AGI, just not as relevant as other Computer 
Science techniques.

David Clark


Re: [agi] Determinism

2007-05-14 Thread Mark Waser
 Well, since we are using C++, most of our code has to do with neither 
 Mathematics nor Computer Science but just C++ language muckery 

Consider me to be tweaking you again for this . . . . :-)

Have you considered a real development language and platform?  (No need to 
reply . . . . I'm just abusing you . . . . :-)

 In terms of Novamente, it doesn't really make sense to prioritize 
 mathematics versus computer science.  We'd have no Novamente without 
 probability theory, but we'd also have no Novamente without basic 
 algorithms & data structures stuff.  Both are absolutely necessary given 
 the type of approach that we're taking. 

You mean that probability theory is math instead of computer science? :-)

Seriously though, I'd have a really hard time drawing a Venn diagram for 
mathematics and computer science, and I'm sure that anything that I did do 
would be up for *serious* debate (making operating system holy wars look like 
a minor skirmish in comparison :-)


  - Original Message - 
  From: Benjamin Goertzel 
  To: agi@v2.listbox.com 
  Sent: Monday, May 14, 2007 2:59 PM
  Subject: Re: [agi] Determinism






You might be correct for your project, but I doubt that the Math contained 
in your project is more than a small fraction of the code.  Not all algorithms 
are Math, and most code has to do with Computer Science techniques, not Math.


  Well, since we are using C++, most of our code has to do with neither 
Mathematics nor Computer Science but just C++ language muckery 
   
  
I didn't say Math was useless for AGI, just not as relevant as other 
Computer Science techniques.

  Well, it seems to me that for AGI using methods other than human brain 
emulation, math is pretty important.

  In terms of Novamente, it doesn't really make sense to prioritize mathematics 
versus computer science.  We'd have no Novamente without probability theory, 
but we'd also have no Novamente without basic algorithms & data structures 
stuff.  Both are absolutely necessary given the type of approach that we're 
taking. 

  -- Ben



Re: [agi] Determinism

2007-05-14 Thread Mike Tintner
dc: I have never been impressed by complicated formulas and I have been many 
slick (Math) talking people who couldn't produce anything that worked in the 
real world.

Ben: A fascinating Freudian slip!  ;-)

Wow - you're the first AI person I've come across with any Freudian 
perspective. Minsky made a similar slip. I'd argued that he had never really 
defined the problem of AGI. His response was:

MM: As E. just said, "There are many defs but few help".  Making definitions 
often does more hard than good when you don't understand situations

I pointed out that the Freudian slip - substituting "hard" for "harm" - was 
revealing of the real reason for his lack of a good definition.  But I think it 
all passed him by.


Re: [agi] Determinism

2007-05-13 Thread David Clark

- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, May 11, 2007 4:15 PM
Subject: Re: [agi] Determinism


 Suppose machine A has 1 MB of memory and machine B has 2 MB.  They may
 have different instruction sets.  You have a program written for A but
 you want to test it on B to see if it will work on A.  So you write a
 program on B that simulates A.  Your simulator has to include a 1 MB
 array to represent A's memory.  You load the test program in this
 array, simulate running A's instructions and get the output that you
 would have gotten on A.

 If you reversed the roles, you could not do it because you would need
 to declare a 2 MB array on a computer with only 1 MB of memory.  The
 best you could do is simulate a machine like B but with a smaller
 memory.  For some test programs you will get the same answer, but for
 others your simulation will get an out of memory error, whereas the
 real program would not.  This is a probabilistic model.  It is useful,
 but not 100% accurate.

As I said already, this is not true.  If you have off-line memory, you can
simulate a much larger memory than you have physical memory.  All current
PCs use paged memory, for instance, but if you have to swap real memory out
to a disk drive, it slows the program down a lot.  This is why I said that
you can trade time for memory.
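The scenario Matt describes of simulating a fixed-memory machine can be sketched as a toy interpreter; the one-instruction machine model below is invented purely for illustration:

```python
# Toy sketch of simulating a fixed-memory machine A inside another program.
# The instruction set is invented for illustration: each instruction is
# ("store", addr, value), and the simulated machine has `mem_size` cells.
def simulate(program, mem_size):
    memory = [0] * mem_size              # array representing A's entire RAM
    for op, addr, value in program:
        if op == "store":
            if addr >= mem_size:
                return "out of memory"   # faithfully report A's limit
            memory[addr] = value
    return memory

# A 2-cell program runs fine when the simulated machine has 4 cells...
assert simulate([("store", 0, 7), ("store", 1, 9)], 4)[:2] == [7, 9]
# ...and the simulator reports failure when the simulated machine is smaller.
assert simulate([("store", 3, 1)], 2) == "out of memory"
```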

 Now suppose you wanted to simulate A on A.  (You may suspect a program
 has a virus and want to see what it would do without actually running
 it).  Now you have the same problem.  You need an array to represent
 your own memory, and it would use all of your memory with no space left
 over for your simulator program.  This is true even if you count disk
 and virtual memory, because that has to be part of your simulation too.

 Why is this important to AGI?  Because the brain is a computer with
 finite memory.  When you think about how you think, you are simulating
 your own brain.  Whatever model you use must be a simplified
 approximation, because you don't have enough memory to model it
 exactly.  Any such model cannot give the right answer every time.  So
 the result is we perceive our own thoughts as having some randomness,
 and this must be true whether the brain is deterministic or not.

As I said before, a full complete real-time model as you describe is rarely
needed.  Humans simulate their brains and others' all the time.  The part
they simulate obviously is at a much higher level than the physical neurons
in our brains, but it is a simulation nonetheless.  We never simulate
anything but a tiny aspect of our own or others' brains because we don't
need to for it to be useful, and we don't have the mental tools in any case.

Some models can give the right answer all the time if the model is either
perfectly known or simple enough.  In the case of humans, we model many
situations at a very high level so that we also can get exactly the same
answer from the model every time.  "Simulate" and "model" aren't words that
refer to just one low level.  They can be correctly and usefully used at many
levels, and all levels of modeling must be accommodated if you want to make
sweeping generalizations about whether humans/AGI are deterministic or not.

I fully understand what you are saying but I disagree with your narrow usage
of model and simulate.

David Clark




Re: [agi] Determinism

2007-05-13 Thread Matt Mahoney

--- David Clark [EMAIL PROTECTED] wrote:

  If you reversed the roles, you could not do it because you would need to
  declare a 2 MB array on a computer with only 1 MB of memory.  The best you
  could do is simulate a machine like B but with a smaller memory.  For some
  test programs you will get the same answer, but for others your simulation
  will get an out of memory error, whereas the real program would not.  This
  is a probabilistic model.  It is useful, but not 100% accurate.
 
 As I said already, this is not true.  If you have off-line memory, you can
 simulate a much larger memory than you have physical memory.  All current
 PCs use paged memory, for instance, but if you have to swap real memory out
 to a disk drive, it slows the program down a lot.  This is why I said that
 you can trade time for memory.

(sigh)  This has nothing to do with speed.  Offline memory is still memory. 
It does not change the fact that a finite state machine cannot simulate itself
no matter how much time you give it.  You can simulate some programs and get
the right answer, but you can't do it for every case.

I am trying to explain this using a minimum of mathematics, because if you
don't understand the math, you won't believe a proof.  If you want a formal
proof of the more general case of Turing machines bounded by algorithmic
complexity, see Legg's paper, http://www.vetta.org/documents/IDSIA-12-06-1.pdf


-- Matt Mahoney, [EMAIL PROTECTED]



RE: [agi] Determinism

2007-05-13 Thread Derek Zahn
Matt Mahoney writes:
 
 (sigh)
 
http://en.wikipedia.org/wiki/Scruffies


Re: [agi] Determinism

2007-05-11 Thread Vladimir Nesov
Saturday, May 12, 2007, Matt Mahoney wrote:

MM Now suppose you wanted to simulate A on A.  (You may suspect a program
MM has a virus and want to see what it would do without actually running
MM it).  Now you have the same problem.  You need an array to represent
MM your own memory, and it would use all of your memory with no space
MM left over for your simulator program.

If a system simulates itself from its current state to a future state, it only
needs additional memory for about the amount of memory changed during
such simulation. So simulation is not very different from actual execution,
unless actual execution comes very near to an out-of-memory condition, which I
believe is not the case with AGI systems anyone wants to consider.
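Vladimir's point can be sketched as a copy-on-write memory: the simulation stores only a delta of changed cells on top of the real state, so the overhead scales with what changed, not with total memory. The class and names below are illustrative, not from any actual system:

```python
# Sketch of self-simulation via a delta: the simulated run shares the real
# state read-only and records only its own writes in a small dictionary.
class DeltaMemory:
    def __init__(self, base):
        self.base = base      # shared, read-only view of the current state
        self.delta = {}       # only cells the simulation has changed

    def read(self, addr):
        return self.delta.get(addr, self.base[addr])

    def write(self, addr, value):
        self.delta[addr] = value

base_state = [0] * 1_000_000     # "the machine's" full memory
sim = DeltaMemory(base_state)
sim.write(42, 7)                 # simulate a single write
assert sim.read(42) == 7         # the simulation sees its change
assert base_state[42] == 0       # the real state is untouched
assert len(sim.delta) == 1       # overhead ~ amount changed, not 1M cells
```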

-- 
 Vladimir Nesov  mailto:[EMAIL PROTECTED]



RE: [agi] Determinism

2007-05-10 Thread Derek Zahn




David Clark writes:
 I can predict with high accuracy what I will think on almost any topic. 
 People that can't, either don't know much about the principles they use to 
 think or aren't very rational. I don't use emotion or the current room 
 temperature to make decisions. (No implication that you might ;)  Our 
 brains on the microscopic scale might be lossy or non-deterministic but 
 thinking people fix this at the macroscopic level by removing these design 
 defects as quickly as possible from their higher level thinking.
Despite rereading this thread I'm not certain what it is even about -- perhaps 
it is about the difference between a theoretical capability (usually the case 
when the phrase Turing Machine pops up) and a practical application.
 
However, I don't know if I am the only one, but I do not know with high 
accuracy what I will think on almost any topic.  Just trying to understand what 
that sentence even means makes my head hurt.  I can't predict what I will want 
for lunch, much less which of the current crop of presidential candidates I 
prefer (something I have not thought about) or any other nontrivial thing.
 
That may imply that I am irrational; I'd accept that.  It feels like squeezing 
out logical rational inferences is usually difficult and is rarely the way I go 
about my daily existence.  I bother posting this only so you know that not 
every GI thinks the way you apparently do.
 


Re: [agi] Determinism

2007-05-10 Thread Matt Mahoney
--- David Clark [EMAIL PROTECTED] wrote:

 
 - Original Message - 
 From: Matt Mahoney [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Wednesday, May 09, 2007 7:10 PM
 Subject: Re: [agi] Determinism
 
 
  By simulate, I mean in the formal sense, as a universal Turing machine
  can simulate any other Turing machine; for example, you can write a
  program in C that runs programs written in Pascal (e.g. a compiler or
  interpreter).  Thus, you can predict what the Pascal program will do.
 
  Languages like Pascal and C define Turing machines.  They have
  unlimited memory.  Real machines have finite memory, so to do the
  simulation properly you need to also define the hardware limits of the
  target machine.  So if the real program reports an out of memory
  error, the simulation should too, at precisely the same point.  Now if
  the target machine (running Pascal) has 2 MB memory, and your machine
  (running C) has 1 MB, then you can't do it.  Your simulator will run
  out of memory first.
 
 My first computer had 32 KB of memory.  I know all about doing a lot in very
 little memory.  Lack of memory is about losing time, not about the size of
 physical memory.  Virtual memory can be used so that small real memories can
 simulate much bigger memories.  People can simulate absolutely huge systems
 by just running and looking at small parts of a system at a time.
 Your argument might be true IF you talked about full REAL TIME simulation,
 but in general your argument is false.

Perhaps I did not state clearly.  I assume you are familiar with the concept
of a universal Turing machine.  Suppose a machine M produces for each input x
the output M(x) (or {} if it runs forever).  We say a machine U simulates M if
for all x, U(m,x) = M(x), where m is a description of M.  One may construct
universal Turing machines, such that this is true for all M and all x.  You
can think of U as predicting what M will output for x, without actually
running M.   In this sense, U can predict its own computations, e.g. U(u,x) =
U(x) for all x, where u is a description of U.  In other words, U can simulate
itself.
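As a loose illustration (not a formal construction), Python's ability to execute source code it is handed can play the role of U(m,x) = M(x), where the description m is a string of source defining the machine's function:

```python
# Rough sketch of a universal machine U: given a description m of machine M
# (here, Python source defining a function f) and an input x, U reproduces
# M's output without M existing as a separate program.
def U(m, x):
    scope = {}
    exec(m, scope)         # "load" the described machine into a fresh scope
    return scope["f"](x)   # run it on input x

m = "def f(x): return x * x"   # description of a squaring machine M
assert U(m, 5) == 25           # U computes M(5) from the description alone
```

Real computers differ from this picture exactly as Matt says: U here leans on memory beyond what m itself occupies.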

Turing machines have infinite memory.  Real computers have finite memory. 
There is no such thing as a universal finite state machine.  If a machine M
has n states (or log2(n) bits of memory), it is not possible to construct a
machine U with less than n states such that U(m,x) = M(x) for all x.  For some
x, yes.  I assume that is what you mean.  This has nothing to do with speed,
and makes no distinction between memory in RAM or on disk.

 My example used the output from the formula of a line which produces
 infinite results from only a Y intercept and a slope.  You didn't bother to
 show how my analogy was incorrect.

I didn't understand how it was relevant.

 I can predict with high accuracy what I will think on almost any topic.
 People that can't, either don't know much about the principles they use to
 think or aren't very rational.

You can't predict when you will next think of something, because then you are
thinking of it right now.  Maybe you can predict some of your future thoughts,
but not all of them.  Your brain has finite memory.  The best you can do is
use a probabilistic approximation of your own thought processes.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Determinism

2007-05-10 Thread David Clark

- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, May 10, 2007 6:04 PM
Subject: Re: [agi] Determinism


 Perhaps I did not state clearly.  I assume you are familiar with the
 concept of a universal Turing machine.  Suppose a machine M produces
 for each input x the output M(x) (or {} if it runs forever).  We say a
 machine U simulates M if for all x, U(m,x) = M(x), where m is a
 description of M.  One may construct universal Turing machines, such
 that this is true for all M and all x.  You can think of U as
 predicting what M will output for x, without actually running M.  In
 this sense, U can predict its own computations, e.g. U(u,x) = U(x) for
 all x, where u is a description of U.  In other words, U can simulate
 itself.

I could care less about Turing machines or infinite memories.  If you want
to create an AGI, you will have to use real life computers and real life
software, not imaginary musings.

 Turing machines have infinite memory.  Real computers have finite
 memory.  There is no such thing as a universal finite state machine.
 If a machine M has n states (or log2(n) bits of memory), it is not
 possible to construct a machine U with less than n states such that
 U(m,x) = M(x) for all x.  For some x, yes.  I assume that is what you
 mean.  This has nothing to do with speed, and makes no distinction
 between memory in RAM or on disk.

  My example used the output from the formula of a line which produces
  infinite results from only a Y intercept and a slope.  You didn't
  bother to show how my analogy was incorrect.

 I didn't understand how it was relevant.

A simulation can be as simple as the formula for a line.  The formula is the
algorithm that defines the simulation, and the input uses this formula to
produce results.  This seems pretty simple to me.  With the correct
algorithm that has minimal memory or storage requirements, you get an
infinite set of answers.  This is certainly a class of models.  The human
brain is defined by a small set of instructions encoded in DNA, and this
produces the hugely complex brain.  Small input, huge output.  The memory
requirement for a simulation is not proportional to the volume of output.
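David's line example in code: the whole model is two numbers, yet it yields an answer for every input (the helper name is illustrative):

```python
# A model as small as two parameters: slope and intercept define a rule
# that produces an output for any input -- small description, unbounded
# range of answers.
def line(slope, intercept):
    return lambda x: slope * x + intercept

f = line(2, 3)                  # the entire "model" is just (2, 3)
assert [f(x) for x in range(4)] == [3, 5, 7, 9]
```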

  I can predict with high accuracy what I will think on almost any topic.
  People that can't, either don't know much about the principles they use
  to think or aren't very rational.

 You can't predict when you will next think of something, because then
 you are thinking of it right now.  Maybe you can predict some of your
 future thoughts, but not all of them.  Your brain has finite memory.
 The best you can do is use a probabilistic approximation of your own
 thought processes.

I never said I could PREDICT what I would think at some time in the future,
only what I would conclude if I thought about some particular problem.  If
you told me I said XYZ 5 years ago, I could tell you with absolute accuracy
if in fact I did say XYZ or not.  The reason is that I am meticulously
consistent in the conclusions I draw based on the information I have.  This
knowledge of what I know and how I think is not probabilistic or
approximate.  It is totally deterministic and intentional regardless of the
inherent non-determinism of my human brain.

David Clark


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=fabd7936


Re: [agi] Determinism

2007-05-09 Thread David Clark
I work very hard to produce the exact same answer to the same question.  If
some humans don't actually do that, then they are just exhibiting the flaws
that exist in our design.  This is not to be confused with answering better
over time, based on more and better information.  The exact same information
should always produce the exact same result in human or AGI.

Irrational thought could be simulated by an AGI so that a better model of
some humans could be had, but the fewer intentional defects built into the
AGI the better.

 A computer with finite memory can only model (predict) a computer with
 less memory.  No computer can simulate itself.  When we introspect on our
 own brains, we must simplify the model to a probabilistic one, whether or
 not it is actually deterministic.

This is NOT true.  How many answers can be had by the formula for a single
straight line?  The answer is infinite.  A computer CAN model/simulate
anything including itself (whatever that means) given enough time.  If the
model has understanding (formulas or algorithms) then any amount of
simulated detail can be realized.

David Clark

- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, May 08, 2007 12:47 PM
Subject: Re: [agi] Determinism


 I really hate to get into this endless discussion.  I think everyone
 agrees that some randomness in AGI decision making is good (e.g. learning
 through exploration).  Also it does not matter if the source of randomness
 is a true random source, such as thermal noise in neurons, or a
 deterministic pseudo random number generator, such as iterating a
 cryptographic hash function with a secret seed.

 I think what is confusing Mike (and I am sure he will correct me) is that
 the inability of humans to predict their own thoughts (what will I later
 decide to have for dinner?) is something that needs to be programmed into
 an AGI.  There is actually no other way to program it.  A computer with
 finite memory can only model (predict) a computer with less memory.  No
 computer can simulate itself.  When we introspect on our own brains, we
 must simplify the model to a probabilistic one, whether or not it is
 actually deterministic.


 -- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Determinism

2007-05-09 Thread Matt Mahoney

--- David Clark [EMAIL PROTECTED] wrote:
  A computer with finite memory can only model (predict) a computer with
  less memory.  No computer can simulate itself.  When we introspect on
  our own brains, we must simplify the model to a probabilistic one,
  whether or not it is actually deterministic.

 This is NOT true.  How many answers can be had by the formula for a single
 straight line?  The answer is infinite.  A computer CAN model/simulate
 anything including itself (whatever that means) given enough time.  If the
 model has understanding (formulas or algorithms) then any amount of
 simulated detail can be realized.

By simulate, I mean in the formal sense, as a universal Turing machine can
simulate any other Turing machine, for example, you can write a program in C
that runs programs written in Pascal (e.g. a compiler or interpreter).  Thus,
you can predict what the Pascal program will do. 

Languages like Pascal and C define Turing machines.  They have unlimited
memory.  Real machines have finite memory, so to do the simulation properly
you need to also define the hardware limits of the target machine.  So if the
real program reports an out of memory error, the simulation should too, at
precisely the same point.  Now if the target machine (running Pascal) has 2 MB
memory, and your machine (running C) has 1 MB, then you can't do it.  Your
simulator will run out of memory first.
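The memory argument can be made concrete with a toy feasibility check (Python, illustrative only; the function name and the one-cell overhead figure are my choices): a faithful simulator must hold the target machine's entire memory image plus at least some bookkeeping state of its own.

```python
def can_simulate(target_cells, host_cells):
    """Can a host with host_cells of memory faithfully simulate a
    target machine with target_cells of memory?

    The simulator must store the target's full memory image plus its
    own bookkeeping (program counter, decoder state, ...), so it always
    needs strictly more memory than the target has.
    """
    SIMULATOR_OVERHEAD = 1  # at least one extra cell, however small
    return target_cells + SIMULATOR_OVERHEAD <= host_cells

print(can_simulate(2_000_000, 1_000_000))  # 2 MB target, 1 MB host: False
print(can_simulate(1_000_000, 1_000_000))  # a machine "simulating itself": False
print(can_simulate(1_000_000, 2_000_000))  # True
```

The self-simulation case fails for the same reason as the undersized host: the overhead term makes the required memory strictly larger than the target's.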

Likewise, you can't simulate your own machine, because you need additional
memory to run the simulator.

When we lack the memory for an exact simulation, we can use an approximation,
one that usually but not always gives the right answer.  For example, we
forecast the weather using an approximation of the state of the Earth's
atmosphere and get an approximate answer.  We can do the same with programs. 
For example, if a program outputs a string of bits according to some
algorithm, then you can often predict most of the bits by looking up the last
few bits of context in a table and predicting whatever bit was last output in
this context.  The cache and branch prediction logic in your CPU do something
like this.  This is an example of your computer simulating itself using a
simplified, probabilistic model.  A more accurate model would analyze the
entire program and make exact predictions, but this is not only impractical
but also impossible.  So we must have some cache misses and branch
mispredictions.
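The table-lookup predictor Matt describes can be sketched in a few lines (Python, illustrative; the function name and the order-2 context size are my choices): predict each bit as whatever bit last followed the same short context, like a one-entry-per-slot branch predictor.

```python
def context_predict(bits, k=2):
    """Predict each bit as the last bit observed after the same k-bit
    context, defaulting to 0 for unseen contexts.  Returns the fraction
    of correct predictions."""
    table = {}  # context tuple -> last bit observed after it
    correct = 0
    for i, b in enumerate(bits):
        ctx = tuple(bits[max(0, i - k):i])
        correct += (table.get(ctx, 0) == b)  # predict before seeing b
        table[ctx] = b                       # then update, like a predictor slot
    return correct / len(bits)

# A strongly patterned stream is almost entirely predictable after warm-up:
print(context_predict([0, 1] * 50))  # 0.98 (only the first cycle is missed)
```

Accuracy is high but never guaranteed: the approximation "usually but not always gives the right answer", which is exactly the cache-miss / branch-misprediction situation.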

In the same way, the brain cannot predict itself.  The brain has finite
memory.  Even if the brain were deterministic (no neuron noise), this would
still be the case.  If a powerful enough computer knew the exact state of your
brain, it could predict what you would think next, but you could not predict
what that computer would output.  I know in theory you could follow the
computer's algorithm on pencil and paper, but even then you would still not
know the result of that manual computation until you did it.  No matter what
you do, you cannot predict your own thoughts with 100% accuracy.  Your mental
model must be probabilistic, whether the hardware is deterministic or not.




-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Determinism

2007-05-08 Thread Pei Wang

On 5/8/07, James Ratcliff [EMAIL PROTECTED] wrote:

Pei,
  The only problem I see with choosing the first one Pei, is that given the
3 choices, and taking #1, if the system does not learn anything extra that
would help it make a decision, it would be forever stuck in that loop, and
never able to break free.
  If given the choice again, it would always choose path one, though path 2
or 3 may be a better choice instead.  Random would be a better choice there.


Choosing the first one the system locates is different from always
taking the same #1. If #1, #2, and #3 are evaluated similarly in the
system, they will have the same priority, and in the long run, get the
same chance.

I do use random choice here, but it is a very bad idea to think
non-determinism = random choice. As I explained before, (biased)
random choice is used in NARS for resource distribution, while
non-determinism of the system comes from context-sensitive processing.
These two are related, but not the same idea at all. It is actually
possible to get this type of non-determinism without any randomness.
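Pei's distinction can be sketched as follows (Python, not actual NARS code; class and option names are mine): a fully deterministic system, with no random source anywhere, can still answer the same query differently on repetition, because answering the query changes the internal context that the next answer depends on.

```python
class ContextSensitiveChooser:
    """Deterministic chooser with no randomness: repeated identical
    queries get different answers because each query shifts the
    internal context (here, a recency record)."""

    def __init__(self, options):
        self.options = list(options)
        self.recency = {o: 0 for o in self.options}  # internal context
        self.clock = 0

    def choose(self):
        # Deterministically pick the least recently chosen option
        # (ties broken by list order).
        pick = min(self.options, key=lambda o: self.recency[o])
        self.clock += 1
        self.recency[pick] = self.clock  # processing the query changes context
        return pick

c = ContextSensitiveChooser(["path1", "path2", "path3"])
print([c.choose() for _ in range(6)])
# ['path1', 'path2', 'path3', 'path1', 'path2', 'path3']
```

Each call is a pure function of the system's state, yet no option is ever "forever stuck": equally evaluated options get the same chance in the long run, without a random number generator.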

Pei



Re: [agi] Determinism

2007-05-08 Thread Matt Mahoney
I really hate to get into this endless discussion.  I think everyone agrees
that some randomness in AGI decision making is good (e.g. learning through
exploration).  Also it does not matter if the source of randomness is a true
random source, such as thermal noise in neurons, or a deterministic pseudo
random number generator, such as iterating a cryptographic hash function with
a secret seed.

I think what is confusing Mike (and I am sure he will correct me) is that the
inability of humans to predict their own thoughts (what will I later decide to
have for dinner?) is something that needs to be programmed into an AGI.  There
is actually no other way to program it.  A computer with finite memory can
only model (predict) a computer with less memory.  No computer can simulate
itself.  When we introspect on our own brains, we must simplify the model to a
probabilistic one, whether or not it is actually deterministic.


-- Matt Mahoney, [EMAIL PROTECTED]
