Re: [agi] How do we Score Hypotheses?

2010-07-14 Thread Jim Bromer
On Tue, Jul 13, 2010 at 9:05 PM, Jim Bromer jimbro...@gmail.com wrote:
Even if you refined your model until it was just right, you would have only
caught up to everyone else with a solution to a narrow AI problem.


I did not mean that you would just have a solution to a narrow AI problem,
but that your solution, if put in the form of scoring points on the basis
of the observation of *definitive* events, would constitute a narrow AI
method.  The central unsolved problem, in my view, is: how can hypotheses
be conceptually integrated with the observable definitive events of the
problem to form good explanatory connections that mesh well with other
knowledge about the problem that is considered reliable?  The second
problem is finding efficient ways to represent this complexity of knowledge
so that the program can use it efficiently.





Re: [agi] How do we Score Hypotheses?

2010-07-14 Thread David Jones
What do you mean by "definitive events"?

I guess the first problem I see with my approach is that the movement of the
window is also a hypothesis. I need to analyze it in more detail and see how
the tree of hypotheses affects the hypotheses regarding the e's on the
windows.

What I believe is that these problems can be broken down into types of
hypotheses, types of events, and types of relationships. Then those types
can be reasoned about in a general way. If that is possible, then you have a
method for reasoning about any object that is covered by the types of
hypotheses, events, and relationships that you have defined.

How to reason about specific objects should not be preprogrammed. But I
think the solution to this part of AGI is to find general ways to reason
about a small set of concepts that can be combined to describe specific
objects and situations.
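
As a rough illustration of what reasoning over types, rather than over
specific objects, might look like (a minimal sketch; every name and the
scoring rule here are hypothetical, not a description of any actual system):

#include <stdio.h>

/* Hypothetical types: a hypothesis is generic over what it explains. */
typedef enum { E_APPEARED, E_DISAPPEARED, E_MOVED } EventType;

typedef struct {
    const char *name;
    EventType   explains;  /* which type of event this hypothesis accounts for */
    double      score;     /* evidence accumulated so far */
} Hypothesis;

/* One generic rule, applied regardless of which specific object is involved:
   a hypothesis gains support when an observed event matches its prediction. */
void observe(Hypothesis *h, EventType observed) {
    h->score += (h->explains == observed) ? 1.0 : -0.5;
}

int main(void) {
    Hypothesis move = { "object moved", E_MOVED, 0.0 };
    observe(&move, E_MOVED);
    printf("%s: %.1f\n", move.name, move.score);  /* object moved: 1.0 */
    return 0;
}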

There are other parts of AGI that I am not considering yet. I believe the
problem has to be broken down into separate pieces and understood before
putting it back together into a complete system. I have not covered
inductive learning, for example, which would be an important part of AGI. I
have also not yet incorporated learned experience into the algorithm, which
is also important.

The general AI problem is way too complicated to consider all at once. I
simply can't solve hypothesis generation, comparison, and disambiguation
while at the same time solving induction and experience-based reasoning. It
becomes unwieldy. So I'm starting where I can, and I'll work my way up to
the full complexity of the problem.

I don't really understand what you mean here: "The central unsolved
problem, in my view, is: how can hypotheses be conceptually integrated with
the observable definitive events of the problem to form good explanatory
connections that mesh well with other knowledge about the problem that is
considered reliable? The second problem is finding efficient ways to
represent this complexity of knowledge so that the program can use it
efficiently."

You might also want to include concrete problems to analyze for your
central problem suggestions; that would help define the problem a bit
better for analysis.

Dave



Re: [agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-14 Thread Matt Mahoney
Actually, Fibonacci numbers can be computed without loops or recursion:

#include <math.h>  /* for round(), pow(), sqrt() */

int fib(int x) {
  return (int)round(pow((1 + sqrt(5)) / 2, x) / sqrt(5));
}

unless you argue that loops are needed to compute sqrt() and pow().
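
As a quick sanity check of the closed form against the loop quoted below (a
sketch; the caveat is mine: fib(47) already overflows a 32-bit int, and
double-precision rounding error breaks Binet's formula well before the
100th term, so the loop is still the right tool for that job):

#include <math.h>
#include <stdio.h>

long fib_closed(int x) {
    /* Binet's formula: round(phi^x / sqrt(5)) */
    return (long)llround(pow((1.0 + sqrt(5.0)) / 2.0, x) / sqrt(5.0));
}

int main(void) {
    long a = 0, b = 1;  /* iterative reference values */
    for (int i = 0; i < 10; i++) {
        printf("fib(%d): loop=%ld closed=%ld\n", i, a, fib_closed(i));
        long next = a + b;
        a = b;
        b = next;
    }
    return 0;
}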

The brain and DNA use redundancy and parallelism and don't use loops because 
their operations are slow and unreliable. This is not necessarily the best 
strategy for computers because computers are fast and reliable but don't have a 
lot of parallelism.

 -- Matt Mahoney, matmaho...@yahoo.com



- Original Message 
From: Michael Swan ms...@voyagergaming.com
To: agi agi@v2.listbox.com
Sent: Wed, July 14, 2010 12:18:40 AM
Subject: Re: [agi] What is the smallest set of operations that can potentially  
define everything and how do you combine them ?

Brain loops:


Premise:
Biological brain code does not contain looping constructs, or the ability
to create looping code (they are extremely dangerous on unreliable
hardware), except for one global loop that fires about 200 times a second.

Hypothesis:
Brains cannot calculate iterative problems quickly, where calculations
from the previous iteration are needed for the next iteration and where
brute-force computation is the only valid option.

Proof:
Take Fibonacci numbers as an example:
http://en.wikipedia.org/wiki/Fibonacci_number

What are the first 100 Fibonacci numbers?

int Fibonacci[102];
Fibonacci[0] = 0;
Fibonacci[1] = 1;
for (int i = 0; i < 100; i++)
{
    // Getting the next Fibonacci number relies on the previous values
    Fibonacci[i+2] = Fibonacci[i] + Fibonacci[i+1];
}

My brain knows the process to solve this problem but it can't directly
write a looping construct into itself. And so it solves it very slowly
compared to a computer. 

The brain probably consists of vast repeating look-up tables. Of course,
run in parallel these seem fast.


DNA has vast tracts of repeating data. Why would DNA contain repeating
data, instead of storing the data once along with the number of times it is
repeated, as a loop would? One explanation is that DNA can't do looping
constructs either.



On Wed, 2010-07-14 at 02:43 +0100, Mike Tintner wrote:
 Michael: We can't do operations that
 require 1,000,000 loop iterations.  I wish someone would give me a PHD
 for discovering this ;) It far better describes our differences than any
 other theory.
 
 Michael,
 
 This isn't a competitive point - but I think I've made that point several 
 times (and so of course has Hawkins). Quite obviously (unless you think the 
 brain has fabulous hidden powers), it conducts searches and other operations 
 in extremely few steps, nothing remotely like the routine millions to 
 billions of steps of current computers.  It must therefore work v. 
 fundamentally differently.
 
 Are you saying anything significantly different to that?
 
 --
 From: Michael Swan ms...@voyagergaming.com
 Sent: Wednesday, July 14, 2010 1:34 AM
 To: agi agi@v2.listbox.com
 Subject: Re: [agi] What is the smallest set of operations that can 
 potentially  define everything and how do you combine them ?
 
 
  On Tue, 2010-07-13 at 07:00 -0400, Ben Goertzel wrote:
  Well, if you want a simple but complete operator set, you can go with
 
  -- Schonfinkel combinator plus two parentheses
 
  I'll check this out soon.
  or
 
  -- S and K combinator plus two parentheses
 
  and I suppose you could add
 
  -- input
  -- output
  -- forget
 
  statements to this, but I'm not sure what this gets you...
 
  Actually, adding other operators doesn't necessarily
  increase the search space your AI faces -- rather, it
  **decreases** the search space **if** you choose the right operators, 
  that
  encapsulate regularities in the environment faced by the AI
 
  Unfortunately, an AGI needs to be absolutely general. You are right that
  higher-level concepts reduce combinations; however, using them will
  increase combinations for the simpler operator combinations, and if you
  miss a necessary operator, then some concepts will be impossible to
  achieve. The smallest set can define higher-level concepts, and these
  concepts can later be integrated as single operations - which means that
  using operators that can be understood in terms of smaller operators in
  the beginning will definitely increase your combinations later on.
 
  The smallest operator set is like absolute zero. It has a defined end - a
  defined way of finding out what the operators are.
 
 
 
 
  Exemplifying this, writing programs doing humanly simple things
  using S and K is a pain and involves piling a lot of S and K and 
  parentheses
  on top of each other, whereas if we introduce loops and conditionals and
  such, these programs get shorter.  Because loops and conditionals happen
  to match the stuff that our human-written programs need to do...
  Loops are evil in most situations.
 
  Let me show you why:
  Draw a square using put_pixel(x,y)
  // loops are more scalable, but, damage this code 
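
For anyone who, like Michael, wants to check out the S and K combinators Ben
mentions above: the only two rules are K x y -> x and S x y z -> x z (y z),
and those alone are Turing-complete. A minimal illustrative reducer (a
sketch, not anyone's production code; it only reduces at the head of the
term):

#include <stdio.h>
#include <stdlib.h>

/* A term is S, K, or an application of one term to another. */
typedef struct Term { struct Term *f, *x; char tag; } Term;  /* 'S','K','@' */

Term *mk(char tag, Term *f, Term *x) {
    Term *t = malloc(sizeof *t);
    t->tag = tag; t->f = f; t->x = x;
    return t;
}
Term *S(void)              { return mk('S', NULL, NULL); }
Term *K(void)              { return mk('K', NULL, NULL); }
Term *ap(Term *f, Term *x) { return mk('@', f, x); }

/* Repeatedly apply the two rules at the head until neither fires. */
Term *reduce(Term *t) {
    for (;;) {
        if (t->tag == '@' && t->f->tag == '@' && t->f->f->tag == 'K') {
            t = t->f->x;                    /* K a b -> a */
        } else if (t->tag == '@' && t->f->tag == '@' &&
                   t->f->f->tag == '@' && t->f->f->f->tag == 'S') {
            Term *a = t->f->f->x, *b = t->f->x, *c = t->x;
            t = ap(ap(a, c), ap(b, c));     /* S a b c -> a c (b c) */
        } else {
            return t;
        }
    }
}

void show(Term *t) {
    if (t->tag == '@') { putchar('('); show(t->f); show(t->x); putchar(')'); }
    else putchar(t->tag);
}

int main(void) {
    /* The identity combinator is I = S K K, so (S K K) S reduces to S. */
    Term *I = ap(ap(S(), K()), K());
    show(reduce(ap(I, S())));
    putchar('\n');                          /* prints: S */
    return 0;
}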

[agi] Comments On My Skepticism of Solomonoff Induction

2010-07-14 Thread Jim Bromer
Last week I came up with a sketch that I felt showed that Solomonoff
Induction was incomputable *in practice* using a variation of Cantor's
Diagonal Argument.  I wondered if my argument made sense or not.  I will
explain why I think it did.



First of all, I should have started out by saying something like, "Suppose
Solomonoff Induction was computable," since there is some reason why people
feel that it isn't.



Secondly, I don't think I needed to use Cantor's Diagonal Argument (for the
*in practice* case), because it would be sufficient to point out that since
it is impossible to say whether or not the probabilities ever approach any
sustained (collared) limits, due to the lack of an adequate mathematical
definition of the concept of "all programs," it would be impossible to
claim that they were actual representations of the probabilities of all
programs that could produce certain strings.



But before I start to explain why I think my variation of the Diagonal
Argument was valid, I would like to make another comment about what was
being claimed.



Take a look at the n-ary expansion of the square root of 2 (such as the
decimal expansion or the binary expansion): it is an infinite string.  To
say that the algorithm that produces the value is "predicting" the value is
a tortured use of the word 'prediction'.  Now, I have less than perfect
grammar, but the idea of prediction is so important in the field of
intelligence that I do not feel that this kind of reduction of the concept
of prediction is illuminating.



Incidentally, there are infinitely many ways to produce the square root of
2 (sqrt 2 + 1 - 1, sqrt 2 + 2 - 2, sqrt 2 + 3 - 3, ...).  So the idea that
the square root of 2 is unlikely is another stretch of conventional
thinking.  But since there are infinitely many ways for a program to
produce any number (that can be produced by a program), we would imagine
that the probability of any one of the infinitely many ways to produce the
square root of 2 approaches 0 but never reaches it.  We can imagine it, but
we cannot prove that this occurs in Solomonoff Induction, because
Solomonoff Induction is not limited to just this class of programs (which
could be proven to approach a limit).  For example, we could make a similar
argument for any number, including a 0 or a 1, which would mean that the
infinite string of digits for the square root of 2 is just as likely as the
string 0.



But the reason why I think a variation of the diagonal argument can work in
my argument is this: since we cannot prove that the infinite computations
of the probabilities that a program will produce a string will ever
approach a limit, then to use the probabilities reliably (even as an
infinite theoretical method) we would have to find some way to rearrange
the computations of the probabilities so that they do.  While the number of
ways to rearrange the ordering of a finite number of things is finite no
matter how large that number is, the number of possible ways to rearrange
an infinite number of things is infinite.  I believe that this problem - of
finding the right rearrangement of an infinite list of computations of
values after the calculation of the list is finished - qualifies for an
infinite-to-infinite diagonal argument.


I want to add one more thing to this in a few days.

Jim Bromer





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-14 Thread Matt Mahoney
Jim Bromer wrote:
 Last week I came up with a sketch that I felt showed that Solomonoff 
 Induction 
was incomputable in practice using a variation of Cantor's Diagonal Argument.

Cantor proved that there are more sequences (infinite length strings) than 
there 
are (finite length) strings, even though both sets are infinite. This means 
that 
some, but not all, sequences have finite length descriptions or are the output 
of finite length programs (which is the same thing in a more formal sense). For 
example, the digits of pi or sqrt(2) are infinite length sequences that have 
finite descriptions (or finite programs that output them). There are many more 
sequences that don't have finite length descriptions, but unfortunately I can't 
describe any of them except to say they contain infinite amounts of random data.

Cantor does not prove that Solomonoff induction is not computable. That was
proved by Kolmogorov (and also by Solomonoff). Solomonoff induction says to
use the shortest program that outputs the observed sequence to predict the
next symbol. However, there is no procedure for finding the length of the
shortest description. The proof sketch is that if there were, then I could
describe "the first string that cannot be described in less than a million
bits" even though I just did. The formal proof is at
http://en.wikipedia.org/wiki/Kolmogorov_complexity#Incomputability_of_Kolmogorov_complexity
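
For reference, the standard definitions under discussion, written out (here
U is a universal prefix machine, |p| is the length of program p in bits,
and U(p) = x* means p's output begins with x):

    M(x) = \sum_{p : U(p) = x*} 2^{-|p|}

    P(x_{n+1} | x_1 ... x_n) = M(x_1 ... x_n x_{n+1}) / M(x_1 ... x_n)

M is only lower semi-computable: it can be approximated from below by
running ever more programs ever longer, but there is no point at which the
approximation is known to have converged.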


I think your confusion is using the uncomputability of Solomonoff induction
to question its applicability. That is an experimental question, not one of
mathematics. The validity of using the shortest or simplest explanation of
the past to predict the future was first observed by William of Ockham in
the 1300s. It is standard practice in all fields of science. The minimum
description length principle is applicable to all branches of machine
learning.

However, in the conclusion of
http://mattmahoney.net/dc/dce.html#Section_Conclusion I argue for
Solomonoff induction on the basis of physics. Solomonoff induction supposes
that all observable strings are finite prefixes of computable sequences.
Occam's Razor might not hold if it were possible for the universe to
produce uncomputable sequences, i.e. infinite sources of random data. I
argue that this is not possible because the observable universe is finitely
computable according to the laws of physics as they are now understood.

 -- Matt Mahoney, matmaho...@yahoo.com





From: Jim Bromer jimbro...@gmail.com
To: agi agi@v2.listbox.com
Sent: Wed, July 14, 2010 11:29:13 AM
Subject: [agi] Comments On My Skepticism of Solomonoff Induction


Last week I came up with a sketch that I felt showed that Solomonoff Induction 
was incomputable in practice using a variation of Cantor's Diagonal Argument.  
I 
wondered if my argument made sense or not.  I will explain why I think it did.
 
First of all, I should have started out by saying something like, Suppose 
Solomonoff Induction was computable, since there is some reason why people feel 
that it isn't.
 
Secondly I don't think I needed to use Cantor's Diagonal Argument (for the in 
practice case), because it would be sufficient to point out that since it was 
impossible to say whether or not the probabilities ever approached any 
sustained 
(collared) limits due to the lack of adequate mathematical definition of the 
concept all programs, it would be impossible to make the claim that they were 
actual representations of the probabilities of all programs that could produce 
certain strings.
 
But before I start to explain why I think my variation of the Diagonal Argument 
was valid, I would like to make another comment about what was being claimed.
 
Take a look at the n-ary expansion of the square root of 2 (such as the decimal 
expansion or the binary expansion).  The decimal expansion or the binary 
expansion of the square root of 2 is an infinite string.  To say that the 
algorithm that produces the value is predicting the value is a torturous use 
of meaning of the word 'prediction'.  Now I have less than perfect grammar, but 
the idea of prediction is so important in the field of intelligence that I do 
not feel that this kind of reduction of the concept of prediction is 
illuminating.  

 
Incidentally, There are infinite ways to produce the square root of 2 (sqrt 2 
+1-1, sqrt2 +2-2, sqrt2 +3-3,...).  So the idea that the square root of 2 is 
unlikely is another stretch of conventional thinking.  But since there are an 
infinite ways for a program to produce any number (that can be produced by a 
program) we would imagine that the probability that one of the infinite ways to 
produce the square root of 2 approaches 0 but never reaches it.  We can imagine 
it, but we cannot prove that this occurs in Solomonoff Induction because 
Solomonoff Induction is not limited to just this class of programs (which could 
be proven to approach a limit).  For example, we could make a 

Re: [agi] How do we Score Hypotheses?

2010-07-14 Thread David Jones
Actually, I just realized that there is a way to include inductive
knowledge and experience in this algorithm. Inductive knowledge and
experience about a specific object or object type can be exploited to know
which hypotheses were successful in the past, and therefore which
hypothesis is most likely now. By choosing the most likely hypothesis
first, we skip a lot of messy hypothesis comparison processing and
analysis. If we choose the right hypothesis first, all we really have to do
is verify that this hypothesis reveals in the data what we expect to be
there. If we confirm what we expect, that is reason enough not to look for
other hypotheses, because the data is explained by what we originally
believed to be likely. We only look for additional hypotheses when we find
something unexplained. And even then, we don't look at the whole problem;
we only look at what we have to in order to explain the unexplained data.
In fact, we could even ignore the unexplained data if we believe, from
experience, that it isn't pertinent.

I discovered this because I'm analyzing how a series of hypotheses is
navigated when analyzing images. It seems to me that it is done very
similarly to the way we do it. We sort of confirm what we expect and try to
explain what we don't expect. We try out hypotheses in a sort of
trial-and-error manner and see how each hypothesis affects what we find in
the image. If a hypothesis lets us confirm things, we are likely to keep
it. We keep going, navigating the tree of hypotheses, conflicts, and
unexpected observations until we find a good hypothesis. Something like
that. I'm attempting to construct an algorithm for doing this as I analyze
specific problems.
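
A minimal sketch of that control flow (the names, the success counts, and
the matching functions here are all hypothetical placeholders, not the
actual algorithm):

#include <stdio.h>
#include <string.h>

/* A hypothesis carries its past success count (the "experience") and a
   test for whether the data confirms what the hypothesis expects. */
typedef struct {
    const char *name;
    int past_successes;
    int (*confirmed)(const char *data);
} Hypothesis;

static int expects_motion(const char *d)     { return strstr(d, "moved") != NULL; }
static int expects_new_object(const char *d) { return strstr(d, "appeared") != NULL; }

const char *explain(const char *data, Hypothesis *hs, int n) {
    /* Try the historically most successful hypothesis first. */
    Hypothesis *best = &hs[0];
    for (int i = 1; i < n; i++)
        if (hs[i].past_successes > best->past_successes) best = &hs[i];

    if (best->confirmed(data)) {  /* expectation confirmed: stop looking */
        best->past_successes++;
        return best->name;
    }
    for (int i = 0; i < n; i++)   /* otherwise fall back to the others */
        if (&hs[i] != best && hs[i].confirmed(data)) {
            hs[i].past_successes++;
            return hs[i].name;
        }
    return "unexplained";         /* only now is a wider search needed */
}

int main(void) {
    Hypothesis hs[] = {
        { "object moved",    5, expects_motion },
        { "object appeared", 1, expects_new_object },
    };
    printf("%s\n", explain("window moved left", hs, 2));  /* object moved */
    return 0;
}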

Dave


Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-14 Thread Abram Demski
Jim,

There is a simple proof of convergence for the sum involved in defining the
probability of a given string in the Solomonoff distribution:

At its greatest, a particular string would be output by *all* programs. In
this case, its sum would come to 1. This puts an upper bound on the sum.
Since there is no subtraction, there is a lower bound at 0 and the sum
monotonically increases as we take the limit. Knowing these facts, suppose
it *didn't* converge. It must then increase without bound, since it cannot
fluctuate back and forth (it can only go up). But this contradicts the upper
bound of 1. So, the sum must stop at 1 or below (and in fact we can prove it
stops below 1, though we can't say where precisely without the infinite
computing power required to compute the limit).
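
In symbols (assuming, as is standard, a prefix-free encoding of programs,
which is what bounds the total weight via the Kraft inequality):

    S_N = \sum_{i=1}^{N} 2^{-|p_i|},   S_1 <= S_2 <= ...,   S_N <= \sum_p 2^{-|p|} <= 1

A monotonically non-decreasing sequence with an upper bound converges,
which is exactly the argument above.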

--Abram





-- 
Abram Demski
http://lo-tho.blogspot.com/
http://groups.google.com/group/one-logic





Re: [agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-14 Thread Michael Swan
On Wed, 2010-07-14 at 07:48 -0700, Matt Mahoney wrote:
 Actually, Fibonacci numbers can be computed without loops or recursion.
 
 int fib(int x) {
   return round(pow((1+sqrt(5))/2, x)/sqrt(5));
 }
;) I know. I was wondering if someone would pick up on it. This won't
prove that brains have loops though, so I wasn't concerned about the
shortcuts. 
 unless you argue that loops are needed to compute sqrt() and pow().
 
I would find it extremely unlikely that brains have *, /, and even more
unlikely that they have sqrt and pow inbuilt. Even more unlikely, even if
they did have them, that they could figure out how to combine them into
round(pow((1+sqrt(5))/2, x)/sqrt(5)).

Does this mean we should discount all maths that use any complex
operations?

I suspect the brain is mainly full of look-up tables, with some fairly
primitive methods of combining the data.

e.g. What's 6/3?
ans = 2. Most people would get that because it's been rote learnt; it's a
common problem.

What's 3456/6?
We don't know, at least not off the top of our head.


 The brain and DNA use redundancy and parallelism and don't use loops because 
 their operations are slow and unreliable. This is not necessarily the best 
 strategy for computers because computers are fast and reliable but don't have 
 a 
 lot of parallelism.

The brain's slow and unreliable methods, I think, are the price paid for
generality and innately unreliable hardware. Imagine writing a computer
program that runs for 120 years without crashing and survives damage the
way a brain can. I suspect the perfect AGI program is a rigorous
combination of the two.


 

Re: [agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-14 Thread Matt Mahoney
Michael Swan wrote:
 What 3456/6 ?
 we don't know, at least not from the top of our head.

No, it took me about 10 or 20 seconds to get 576. Starting with the first 
digit, 
3/6 = 1/2 (from long term memory) and 3 is in the thousands place, so 1/2 of 
1000 is 500 (1/2 = .5 from LTM). I write 500 into short term memory (STM), 
which 
only has enough space to hold about 7 digits. Then to divide 45/6 I get 42/6 = 
7 
with a remainder of 3, or 7.5, but since this is in the tens place I get 75. I 
put 75 in STM, add to 500 to get 575, put the result back in STM replacing 500 
and 75 for which there is no longer room. Finally, 6/6 = 1, which I add to 575 
to get 576. I hold this number in STM long enough to check with a calculator.

One could argue that this calculation in my head uses a loop iterator (in STM) 
to keep track of which digit I am working on. It definitely involves a sequence 
of instructions with intermediate results being stored temporarily. The brain 
can only execute 2 or 3 sequential instructions per second and has very limited 
short term memory, so it needs to draw from a large database of rules to 
perform 
calculations like this. A calculator, being faster and having more RAM, is able 
to use simpler but more tedious algorithms such as converting to binary, 
division by shift and subtract, and converting back to decimal. Doing this
with a carbon-based computer would require pencil and paper to make up for
the lack of STM, and it would require enough steps to have a high
probability of making a mistake.
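
For concreteness, the shift-and-subtract method looks roughly like this (a
sketch of restoring binary division, assuming d > 0; not taken from any
particular calculator):

#include <stdint.h>
#include <stdio.h>

/* Restoring (shift-and-subtract) division: simple, tedious, and exactly
   the kind of algorithm a fast, reliable machine can afford to run. */
void divmod(uint32_t n, uint32_t d, uint32_t *q, uint32_t *r) {
    uint32_t quot = 0, rem = 0;
    for (int i = 31; i >= 0; i--) {
        rem = (rem << 1) | ((n >> i) & 1);  /* bring down the next bit of n */
        if (rem >= d) {                     /* subtract d wherever it fits */
            rem -= d;
            quot |= 1u << i;
        }
    }
    *q = quot;
    *r = rem;
}

int main(void) {
    uint32_t q, r;
    divmod(3456, 6, &q, &r);
    printf("3456 / 6 = %u remainder %u\n", q, r);  /* 576 remainder 0 */
    return 0;
}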

Intelligence = knowledge + computing power. The human brain has a lot of 
knowledge. The calculator has less knowledge, but makes up for it in speed and 
memory.

 -- Matt Mahoney, matmaho...@yahoo.com




Re: [agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-14 Thread Mike Tintner
Michael: The brain's slow and unreliable methods I think are the price
paid for generality and innately unreliable hardware.

Yes to one - nice to see an AGI-er finally starting to join up the dots, 
instead of simply dismissing the brain's massive difficulties in maintaining 
a train of thought.


No to two - innately unreliable hardware is the price of innately 
*adaptable* hardware - that can radically grow and rewire (wh. is the other 
advantage the brain has over computers).  Any thoughts about that, and what 
in more detail are the advantages of an organic computer?


In addition, the unreliable hardware is also a price of "global hardware" - 
that has the basic capacity to connect more or less any bit of information 
in any part of the brain with any bit of information in any other part of 
the brain - as distinct from the local hardware of computers, wh. have to 
go through limited local channels to limited local stores of information to 
make v. limited local kinds of connections. Well, that's my tech-ignorant 
take on it - but perhaps you can expand on the idea.  I would imagine, v. 
broadly, that the brain is globally connected vs the computer, wh. is 
locally connected.







Re: [agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-14 Thread Mike Tintner

A demonstration of global connectedness is - associate with an "O".

I get:
number, sun, dish, disk, ball, letter, mouth, two fingers, oh, circle, 
wheel, wire coil, outline, station on metro, hole, Kenneth Noland painting, 
ring, coin, roundabout


connecting among other things - language, numbers, geometry, food, cartoons, 
paintings, speech, sports, science, technology, art, transport, 
transportation system, money.


Note though the other crucial weakness of the brain wh. impairs global 
connections - fatigue. To maintain any piece of information in 
consciousness for long is a strain (unless it's sexual?).


But the above demonstrates IMO why the brain is and has to be an image 
processor. 







Re: [agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-14 Thread Michael Swan

On Wed, 2010-07-14 at 17:51 -0700, Matt Mahoney wrote:
 Michael Swan wrote:
  What 3456/6 ?
  we don't know, at least not from the top of our head.
 
 No, it took me about 10 or 20 seconds to get 576. Starting with the first 
 digit, 
 3/6 = 1/2 (from long term memory) and 3 is in the thousands place, so 1/2 of 
 1000 is 500 (1/2 = .5 from LTM). I write 500 into short term memory (STM), 
 which 
 only has enough space to hold about 7 digits. Then to divide 45/6 I get 42/6 
 = 7 
 with a remainder of 3, or 7.5, but since this is in the tens place I get 75. 
 I 
 put 75 in STM, add to 500 to get 575, put the result back in STM replacing 
 500 
 and 75 for which there is no longer room. Finally, 6/6 = 1, which I add to 
 575 
 to get 576. I hold this number in STM long enough to check with a calculator.
The brain does have one global loop, which I think runs at about 100-200
hertz. I would argue that you're using that. Also note, brains are unlikely
to use RAM. Memory is most likely stored very locally to the process, as
the brain probably can't access memory frivolously the way a computer can.
So the processes that require going backwards have to wait for the next
global loop to get the data, causing a massive loss in time.
So about (~10 sec * ~100 hertz) = 1000+ loops, which I suspect is about
right.


 
 One could argue that this calculation in my head uses a loop iterator (in 
 STM) 
 to keep track of which digit I am working on. It definitely involves a 
 sequence 
 of instructions with intermediate results being stored temporarily. The brain 
 can only execute 2 or 3 sequential instructions per second and has very 
 limited 
 short term memory, so it needs to draw from a large database of rules to 
 perform 
 calculations like this. A calculator, being faster and having more RAM, is 
 able 
 to use simpler but more tedious algorithms such as converting to binary, 
 division by shift and subtract, and converting back to decimal. Doing this 
 with 
 a carbon based computer would require pencil and paper to make up for lack of 
 STM, and it would require enough steps to have a high probability of making a 
 mistake.
 
 Intelligence = knowledge + computing power.
+ a clever way of using that computing power

  The human brain has a lot of 
 knowledge. The calculator has less knowledge, but makes up for it in speed 
 and 
 memory.

 

Re: [agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-14 Thread Robert Picone
On Wed, Jul 14, 2010 at 4:53 PM, Michael Swan ms...@voyagergaming.com wrote:

 I would find it extremely unlikely that brains have *, /, and even more
 unlikely that they have sqrt and pow inbuilt.

I'd argue that mathematical operations are unnecessary; we don't even have
integer support inbuilt.  The number meme is a bit of a hack on top of
language that has been modified throughout the years.  We have a peripheral
that gives us decent support for the numbers 1-10, but beyond that, numbers
are basically words to which several different finicky grammars can be
applied, as far as our brains are concerned.





Re: [agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-14 Thread Michael Swan

On Thu, 2010-07-15 at 01:37 +0100, Mike Tintner wrote:
 Michael :The brains slow and unreliable methods I think are the price paid 
 for
 generality and innately unreliable hardware
 
 Yes to one - nice to see an AGI-er finally starting to join up the dots, 
 instead of simply dismissing the brain's massive difficulties in maintaining 
 a train of thought.
 
 No to two -innately unreliable hardware is the price of innately 
 *adaptable* hardware - that can radically grow and rewire (wh. is the other 
 advantage the brain has over computers).  Any thoughts about that and what 
 in more detail are the advantages of an organic computer?
Programs can rewire themselves in some senses: one creates virtual
hardware inside the program as though it were real hardware.
But it's extremely rare to find ones that are purely general, so much so
that I doubt purely general ones even exist. Are NNs purely general? Are
GAs purely general? I thought perhaps code that writes code could
potentially reach such a lofty goal (as it can turn into a GA or an NN or,
well, anything). Then I thought: the code writing the code restricts what
the written code can be.

So then I ran some simple experiments with code modifying itself.
The end result was (at least I suspect it was) surprisingly similar to
DNA.

I still had a large section of code whose purpose was to read part of
itself and modify it, and this large piece of code had no bearing on what
the modified code actually did.

DNA has two sections: a coding section, which does most of the hard work,
and the poorly named "junk" DNA (or non-coding DNA), which most biologists
thought did nothing - until they discovered it doing things all over the
place, in a somewhat discreet, subtle fashion.

So, is my experiment 6
(http://codegenerationdesign.webs.com/index.htm)
the first ever program to roughly mimic the programming of DNA?

I find this really hard to prove, but I think it remains a possibility.

Apparently, biologists don't think much of my degree in biology from the
University of Wikipedia, nature docs, and other random stuff you read on
the internet.


 
 In addition, the unreliable hardware is also a price of  global 
 ardware  - that has the basic capacity to connect more or less any bit of 
 information in any part of the brain with any bit of information in any 
 other part of the brain - as distinct from the local hardware of computers 
 wh. have to go through limited local channels to limited local stores of 
 information to make v. limited local kinds of connections. Well, that's my 
 tech-ignorant take on it - but perhaps you can expand on the idea.  I would 
 imagine v. broadly the brain is globally connected vs the computer wh. is 
 locally connected. 
Yep, the ability to grab memory from anywhere is called RAM - Random
Access Memory. A single neuron can only access data from its 25,000
connections, which sounds like a lot but isn't, because a computer can
address a practically unlimited set of data.

Given that the program in a brain can only go forward, how does it tell
other neurons that it wants data about X that is behind it?

One theory is that certain neurons detect that they need more data and
create a greater positive charge to attract more of the negatively charged
data. So, in a sense, they suck more data into themselves, effectively
sending a different, non-dangerous backward-running signal. (Author's
note: I can't prove this at all; it is just a possibility.)




 
 
 
 





Re: [agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-14 Thread Michael Swan

 
 
 I'd argue that mathematical operations are unnecessary; we don't even
 have integer support inbuilt.

I'd disagree. ">" is a mathematical operation, and in combination such
operations can become an enormous number of concepts.

Sure, I think the brain is more sensibly understood in a programmatic
sense than a mathematical one.

I say programmatic because it probably has 100 billion or so conditional
statements - a difficult thing to represent mathematically. Even so, each
conditional is going to have maths constructs in it.


   The number meme is a bit of a hack on top of language that has been
 modified throughout the years.
   We have a peripheral that allows us decent support for the numbers
 1-10, but beyond that numbers are basically words to which several
 different finicky grammars can be applied as far as our brains are
 concerned.

True, but numbers' awesomeness lies in their power to represent relative
differences between any concepts. With this power, numbers are a universal
language - a language that can represent any other language - and hence
the ideal language, and probably the only real choice, for an AGI.





