Re: [agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-15 Thread Robert Picone
On Wed, Jul 14, 2010 at 10:35 PM, Michael Swan ms...@voyagergaming.com wrote:


 
 
  I'd argue that mathematical operations are unnecessary,
   we don't even have integer support inbuilt.
 I'd disagree. > is a mathematical operation, and in combination can
 become an enormous number of concepts.

 Sure, I think the brain is more sensibly understood in a
 programmatic sense than a mathematical one.

 I say programmatic because it probably has 100 billion or so
 conditional statements, a difficult thing to represent mathematically.
 Even so, each conditional is going to have maths constructs in it.


Sorry, I meant unnecessary to demonstrate that particular point.  There's no
need to say you have no innate ability to know what 3456/6 is when you are
unlikely to have an innate concept of the number 3456, or of any other arbitrary
number greater than a few hundred, to begin with. You can get by with a few
lookup tables that give you a vague idea of what 3456 of something would
be, but if I were to show you a sheet of paper with 3000-4000 dots on it,
you would be unlikely to be able to tell me whether the count was greater or less
than 3456.

I don't see any way an evaluator of some sort wouldn't be completely
necessary for an AGI, sorry for the confusion.  Though, you do bring to mind
the point that while > can be an extremely useful tool for composing other
concepts, our internal comparisons do seem to tend more towards the analog
than towards the binary, and while you can compose those analog outputs with
> and - easily enough, you probably want concepts supported as close to
natively as is possible.  Remember, there are Turing-complete
one-dimensional systems of cellular automata, but that doesn't make it
feasible to port the Linux kernel to them.
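As a concrete, purely hypothetical illustration of that analog-vs-binary point (the function names and the linear ramp are my own choices, nothing Robert specifies): a graded greater-than can report how strongly one quantity exceeds another instead of a single bit, and such outputs compose with plain subtraction.

#include <stdio.h>

/* Hypothetical sketch: instead of a binary x > y, return a graded value in
   [0,1] saying how strongly x exceeds y, and compose such outputs with
   ordinary arithmetic rather than boolean logic. */
double analog_gt(double x, double y, double softness) {
    double d = (x - y) / softness;   /* scaled difference */
    if (d <= 0.0) return 0.0;
    if (d >= 1.0) return 1.0;
    return d;                        /* linear ramp between 0 and 1 */
}

int main(void) {
    /* "is this pile clearly bigger than that one?" as a degree, not a bit */
    printf("%.2f\n", analog_gt(3500.0, 3456.0, 500.0));  /* ~0.09: barely  */
    printf("%.2f\n", analog_gt(5000.0, 3456.0, 500.0));  /* 1.00: clearly  */
    return 0;
}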





Re: [agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-15 Thread Mike Tintner
And yet you dream dreams wh. are broad-ranging in subject matter, unlike all 
programs wh. are extremely narrow-ranging.


--
From: Michael Swan ms...@voyagergaming.com
Sent: Thursday, July 15, 2010 5:16 AM
To: agi agi@v2.listbox.com
Subject: Re: [agi] What is the smallest set of operations that can 
potentially  define everything and how do you combine them ?




I watched a brain experiment last night that proved that connections
between major parts of the brain stop when you are asleep.

They put electricity at different brain points, and it went everywhere
when the person was awake, and dissipated when they were asleep.


On Thu, 2010-07-15 at 02:13 +0100, Mike Tintner wrote:

A demonstration of global connectedness is - associate with an O

I get:
number, sun, dish, disk, ball, letter, mouth, two fingers, oh, circle,
wheel, wire coil, outline, station on metro, hole, Kenneth Noland painting,
ring, coin, roundabout

connecting among other things - language, numbers, geometry, food, cartoons,
paintings, speech, sports, science, technology, art, transport,
transportation system, money.

Note though the other crucial weakness of the brain wh. impairs global
connections - fatigue. To maintain any piece of information in consciousness
for long is a strain, (unless it's sexual?).

But the above demonstrates IMO why the brain is and has to be an image
processor.






Re: [agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-14 Thread Matt Mahoney
Actually, Fibonacci numbers can be computed without loops or recursion.

int fib(int x) {
  return round(pow((1+sqrt(5))/2, x)/sqrt(5));
}

unless you argue that loops are needed to compute sqrt() and pow().
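For anyone who wants to run that closed form (Binet's formula), a self-contained version might look like the sketch below; the 64-bit return type, llround, and the note about double precision are additions for illustration, not part of the original post.

#include <math.h>
#include <stdio.h>
#include <stdint.h>

/* Binet's formula: fib(n) = round(phi^n / sqrt(5)), phi = (1+sqrt(5))/2.
   With IEEE doubles the rounding stays exact only up to roughly fib(70);
   past that the accumulated error can exceed the 0.5 rounding margin. */
int64_t fib_closed(int n) {
    const double sqrt5 = sqrt(5.0);
    return (int64_t)llround(pow((1.0 + sqrt5) / 2.0, n) / sqrt5);
}

int main(void) {
    for (int n = 0; n <= 10; n++)
        printf("fib(%d) = %lld\n", n, (long long)fib_closed(n));
    return 0;
}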

The brain and DNA use redundancy and parallelism and don't use loops because 
their operations are slow and unreliable. This is not necessarily the best 
strategy for computers because computers are fast and reliable but don't have a 
lot of parallelism.

 -- Matt Mahoney, matmaho...@yahoo.com



- Original Message 
From: Michael Swan ms...@voyagergaming.com
To: agi agi@v2.listbox.com
Sent: Wed, July 14, 2010 12:18:40 AM
Subject: Re: [agi] What is the smallest set of operations that can potentially  
define everything and how do you combine them ?

Brain loops:


Premise:
Biological brain code does not contain looping constructs, or the
ability to create looping code, (due to the fact they are extremely
dangerous on unreliable hardware) except for 1 global loop that fires
about 200 times a second.

Hypothesis:
Brains cannot calculate iterative problems quickly, where calculations
in the previous iteration are needed for the next iteration and, where
brute force operations are the only valid option.

Proof:
Take as an example, Fibonacci numbers
http://en.wikipedia.org/wiki/Fibonacci_number

What are the first 100 Fibonacci numbers?

int Fibonacci[102];
Fibonacci[0] = 0;
Fibonacci[1] = 1;
for(int i = 0; i < 100; i++)
{
// Getting the next Fibonacci number relies on the previous values
Fibonacci[i+2] = Fibonacci[i] + Fibonacci[i+1];
}  

My brain knows the process to solve this problem but it can't directly
write a looping construct into itself. And so it solves it very slowly
compared to a computer. 

The brain probably consists of vast repeating look-up tables. Of course,
run in parallel these seem fast.


DNA has vast tracts of repeating data. Why would DNA contain repeating
data, instead of just having the data once and the number of times it's
repeated, like in a loop? One explanation is that DNA can't do looping
constructs either.



On Wed, 2010-07-14 at 02:43 +0100, Mike Tintner wrote:
 Michael: We can't do operations that
 require 1,000,000 loop iterations.  I wish someone would give me a PHD
 for discovering this ;) It far better describes our differences than any
 other theory.
 
 Michael,
 
 This isn't a competitive point - but I think I've made that point several 
 times (and so of course has Hawkins). Quite obviously, (unless you think the 
 brain has fabulous hidden powers), it conducts searches and other operations 
 with extremely few limited steps, and nothing remotely like the routine 
 millions to billions of current computers.  It must therefore work v. 
 fundamentally differently.
 
 Are you saying anything significantly different to that?
 
 --
 From: Michael Swan ms...@voyagergaming.com
 Sent: Wednesday, July 14, 2010 1:34 AM
 To: agi agi@v2.listbox.com
 Subject: Re: [agi] What is the smallest set of operations that can 
 potentially  define everything and how do you combine them ?
 
 
  On Tue, 2010-07-13 at 07:00 -0400, Ben Goertzel wrote:
  Well, if you want a simple but complete operator set, you can go with
 
  -- Schonfinkel combinator plus two parentheses
 
  I'll check this out soon.
  or
 
  -- S and K combinator plus two parentheses
 
  and I suppose you could add
 
  -- input
  -- output
  -- forget
 
  statements to this, but I'm not sure what this gets you...
 
  Actually, adding other operators doesn't necessarily
  increase the search space your AI faces -- rather, it
  **decreases** the search space **if** you choose the right operators, 
  that
  encapsulate regularities in the environment faced by the AI
 
  Unfortunately, an AGI needs to be absolutely general. You are right that
  higher level concepts reduce combinations, however, using them, will
  increase combinations for simpler operator combinations, and if you
  miss a necessary operator, then some concepts will be impossible to
  achieve. The smallest set can define higher level concepts, these
  concepts can be later integrated as single operations, which means
  using operators that can be understood in terms of smaller operators
  in the beginning will definitely increase your combinations later on.
 
  The smallest operator set is like absolute zero. It has a defined end. A
  defined way of finding out what they are.
 
 
 
 
  Exemplifying this, writing programs doing humanly simple things
  using S and K is a pain and involves piling a lot of S and K and 
  parentheses
  on top of each other, whereas if we introduce loops and conditionals and
  such, these programs get shorter.  Because loops and conditionals happen
  to match the stuff that our human-written programs need to do...
  Loops are evil in most situations.
 
  Let me show you why:
  Draw a square using put_pixel(x,y)
  // loops are more scalable, but, damage this code 

Re: [agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-14 Thread Michael Swan
On Wed, 2010-07-14 at 07:48 -0700, Matt Mahoney wrote:
 Actually, Fibonacci numbers can be computed without loops or recursion.
 
 int fib(int x) {
   return round(pow((1+sqrt(5))/2, x)/sqrt(5));
 }
;) I know. I was wondering if someone would pick up on it. This won't
prove that brains have loops though, so I wasn't concerned about the
shortcuts. 
 unless you argue that loops are needed to compute sqrt() and pow().
 
I would find it extremely unlikely that brains have *, /, and even more
unlikely to have sqrt and pow inbuilt. Even more unlikely, even if it
did have them, to figure out how to combine them to round(pow((1
+sqrt(5))/2, x)/sqrt(5)). 

Does this mean we should discount all maths that use any complex
operations ? 

I suspect the brain is full of look-up tables mainly, with some fairly
primitive methods of combining the data. 

eg What's 6 / 3 ?
ans = 2. Most people would get that because it's been rote learnt, a
common problem.

What's 3456/6 ?
We don't know, at least not off the top of our head.
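A throwaway sketch of that look-up-table picture (an illustration only, not code from the thread): rote-learnt facts answer instantly, and anything outside the table simply comes back as unknown, which is the 6/3 versus 3456/6 contrast above.

#include <stdio.h>

/* Hypothetical sketch: rote-learnt division facts as a small lookup table,
   with no general division algorithm behind them. Facts outside the table
   come back as "unknown". */
typedef struct { int a, b, q; } Fact;

static const Fact rote[] = {
    {6, 3, 2}, {8, 2, 4}, {9, 3, 3}, {10, 5, 2}, {12, 4, 3},
};

int lookup_div(int a, int b, int *q) {
    for (int i = 0; i < (int)(sizeof(rote) / sizeof(rote[0])); i++) {
        if (rote[i].a == a && rote[i].b == b) { *q = rote[i].q; return 1; }
    }
    return 0;  /* not memorised */
}

int main(void) {
    int q;
    printf("6/3    -> %s\n", lookup_div(6, 3, &q) ? "known" : "unknown");
    printf("3456/6 -> %s\n", lookup_div(3456, 6, &q) ? "known" : "unknown");
    return 0;
}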


 The brain and DNA use redundancy and parallelism and don't use loops because 
 their operations are slow and unreliable. This is not necessarily the best 
 strategy for computers because computers are fast and reliable but don't have 
 a 
 lot of parallelism.

The brain's slow and unreliable methods I think are the price paid for
generality and innately unreliable hardware. Imagine writing a computer
program that runs for 120 years without crashing and survives damage
like a brain can. I suspect the perfect AGI program is a rigorous
combination of the two.


 
  -- Matt Mahoney, matmaho...@yahoo.com
 
 
 
 - Original Message 
 From: Michael Swan ms...@voyagergaming.com
 To: agi agi@v2.listbox.com
 Sent: Wed, July 14, 2010 12:18:40 AM
 Subject: Re: [agi] What is the smallest set of operations that can 
 potentially  
 define everything and how do you combine them ?
 
 Brain loops:
 
 
 Premise:
 Biological brain code does not contain looping constructs, or the
 ability to create looping code, (due to the fact they are extremely
 dangerous on unreliable hardware) except for 1 global loop that fires
 about 200 times a second.
 
 Hypothesis:
 Brains cannot calculate iterative problems quickly, where calculations
 in the previous iteration are needed for the next iteration and, where
 brute force operations are the only valid option.
 
 Proof:
 Take as an example, Fibonacci numbers
 http://en.wikipedia.org/wiki/Fibonacci_number
 
 What are the first 100 Fibonacci numbers?
 
 int Fibonacci[102];
 Fibonacci[0] = 0;
 Fibonacci[1] = 1;
 for(int i = 0; i < 100; i++)
 {
 // Getting the next Fibonacci number relies on the previous values
 Fibonacci[i+2] = Fibonacci[i] + Fibonacci[i+1];
 }  
 
 My brain knows the process to solve this problem but it can't directly
 write a looping construct into itself. And so it solves it very slowly
 compared to a computer. 
 
 The brain probably consists of vast repeating look-up tables. Of course,
 run in parallel these seem fast.
 
 
 DNA has vast tracts of repeating data. Why would DNA contain repeating
 data, instead of just having the data once and the number of times it's
 repeated, like in a loop? One explanation is that DNA can't do looping
 constructs either.
 
 
 
 On Wed, 2010-07-14 at 02:43 +0100, Mike Tintner wrote:
  Michael: We can't do operations that
  require 1,000,000 loop iterations.  I wish someone would give me a PHD
  for discovering this ;) It far better describes our differences than any
  other theory.
  
  Michael,
  
  This isn't a competitive point - but I think I've made that point several 
  times (and so of course has Hawkins). Quite obviously, (unless you think 
  the 
  brain has fabulous hidden powers), it conducts searches and other 
  operations 
  with extremely few limited steps, and nothing remotely like the routine 
  millions to billions of current computers.  It must therefore work v. 
  fundamentally differently.
  
  Are you saying anything significantly different to that?
  
  --
  From: Michael Swan ms...@voyagergaming.com
  Sent: Wednesday, July 14, 2010 1:34 AM
  To: agi agi@v2.listbox.com
  Subject: Re: [agi] What is the smallest set of operations that can 
  potentially  define everything and how do you combine them ?
  
  
   On Tue, 2010-07-13 at 07:00 -0400, Ben Goertzel wrote:
   Well, if you want a simple but complete operator set, you can go with
  
   -- Schonfinkel combinator plus two parentheses
  
   I'll check this out soon.
   or
  
   -- S and K combinator plus two parentheses
  
   and I suppose you could add
  
   -- input
   -- output
   -- forget
  
   statements to this, but I'm not sure what this gets you...
  
   Actually, adding other operators doesn't necessarily
   increase the search space your AI faces -- rather, it
   **decreases** the search space **if** you choose the right operators, 
   that
   encapsulate regularities in the 

Re: [agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-14 Thread Matt Mahoney
Michael Swan wrote:
 What's 3456/6 ?
 We don't know, at least not off the top of our head.

No, it took me about 10 or 20 seconds to get 576. Starting with the first 
digit, 
3/6 = 1/2 (from long term memory) and 3 is in the thousands place, so 1/2 of 
1000 is 500 (1/2 = .5 from LTM). I write 500 into short term memory (STM), 
which 
only has enough space to hold about 7 digits. Then to divide 45/6 I get 42/6 = 
7 
with a remainder of 3, or 7.5, but since this is in the tens place I get 75. I 
put 75 in STM, add to 500 to get 575, put the result back in STM replacing 500 
and 75 for which there is no longer room. Finally, 6/6 = 1, which I add to 575 
to get 576. I hold this number in STM long enough to check with a calculator.

One could argue that this calculation in my head uses a loop iterator (in STM) 
to keep track of which digit I am working on. It definitely involves a sequence 
of instructions with intermediate results being stored temporarily. The brain 
can only execute 2 or 3 sequential instructions per second and has very limited 
short term memory, so it needs to draw from a large database of rules to 
perform 
calculations like this. A calculator, being faster and having more RAM, is able 
to use simpler but more tedious algorithms such as converting to binary, 
division by shift and subtract, and converting back to decimal. Doing this with 
a carbon based computer would require pencil and paper to make up for lack of 
STM, and it would require enough steps to have a high probability of making a 
mistake.
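Matt's mental procedure is essentially left-to-right long division with a carried remainder; a compact version of that rule might look like the following (a paraphrase of his steps, not code he posted).

#include <stdio.h>

/* Left-to-right long division: divide each leading chunk, keep the
   remainder, accumulate partial quotients -- roughly the place-value
   procedure described above. */
int long_divide(int dividend, int divisor) {
    int digits[16], n = 0;
    for (int d = dividend; d > 0; d /= 10)   /* split into decimal digits */
        digits[n++] = d % 10;

    int quotient = 0, remainder = 0;
    for (int i = n - 1; i >= 0; i--) {       /* most significant digit first */
        int chunk = remainder * 10 + digits[i];
        quotient = quotient * 10 + chunk / divisor;
        remainder = chunk % divisor;         /* carried into the next place */
    }
    return quotient;
}

int main(void) {
    printf("3456 / 6 = %d\n", long_divide(3456, 6));  /* prints 576 */
    return 0;
}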

Intelligence = knowledge + computing power. The human brain has a lot of 
knowledge. The calculator has less knowledge, but makes up for it in speed and 
memory.

 -- Matt Mahoney, matmaho...@yahoo.com



- Original Message 
From: Michael Swan ms...@voyagergaming.com
To: agi agi@v2.listbox.com
Sent: Wed, July 14, 2010 7:53:33 PM
Subject: Re: [agi] What is the smallest set of operations that can potentially  
define everything and how do you combine them ?

On Wed, 2010-07-14 at 07:48 -0700, Matt Mahoney wrote:
 Actually, Fibonacci numbers can be computed without loops or recursion.
 
 int fib(int x) {
   return round(pow((1+sqrt(5))/2, x)/sqrt(5));
 }
;) I know. I was wondering if someone would pick up on it. This won't
prove that brains have loops though, so I wasn't concerned about the
shortcuts. 
 unless you argue that loops are needed to compute sqrt() and pow().
 
I would find it extremely unlikely that brains have *, /, and even more
unlikely to have sqrt and pow inbuilt. Even more unlikely, even if it
did have them, to figure out how to combine them to round(pow((1
+sqrt(5))/2, x)/sqrt(5)). 

Does this mean we should discount all maths that use any complex
operations ? 

I suspect the brain is full of look-up tables mainly, with some fairly
primitive methods of combining the data. 

 eg What's 6 / 3 ?
 ans = 2. Most people would get that because it's been rote learnt, a
 common problem.

 What's 3456/6 ?
 We don't know, at least not off the top of our head.


 The brain and DNA use redundancy and parallelism and don't use loops because 
 their operations are slow and unreliable. This is not necessarily the best 
 strategy for computers because computers are fast and reliable but don't have 
 a 

 lot of parallelism.

The brain's slow and unreliable methods I think are the price paid for
generality and innately unreliable hardware. Imagine writing a computer
program that runs for 120 years without crashing and surviving damage
like a brain can. I suspect the perfect AGI program is a rigorous
combination of the 2. 


 
  -- Matt Mahoney, matmaho...@yahoo.com
 
 
 
 - Original Message 
 From: Michael Swan ms...@voyagergaming.com
 To: agi agi@v2.listbox.com
 Sent: Wed, July 14, 2010 12:18:40 AM
 Subject: Re: [agi] What is the smallest set of operations that can 
 potentially  

 define everything and how do you combine them ?
 
 Brain loops:
 
 
 Premise:
 Biological brain code does not contain looping constructs, or the
 ability to create looping code, (due to the fact they are extremely
 dangerous on unreliable hardware) except for 1 global loop that fires
 about 200 times a second.
 
 Hypothesis:
 Brains cannot calculate iterative problems quickly, where calculations
 in the previous iteration are needed for the next iteration and, where
 brute force operations are the only valid option.
 
 Proof:
 Take as an example, Fibonacci numbers
 http://en.wikipedia.org/wiki/Fibonacci_number
 
 What are the first 100 Fibonacci numbers?
 
 int Fibonacci[102];
 Fibonacci[0] = 0;
 Fibonacci[1] = 1;
 for(int i = 0; i < 100; i++)
 {
 // Getting the next Fibonacci number relies on the previous values
 Fibonacci[i+2] = Fibonacci[i] + Fibonacci[i+1];
 }  
 
 My brain knows the process to solve this problem but it can't directly
 write a looping construct into itself. And so it solves it very slowly
 compared to a computer. 
 
 The brain probably consists of vast repeating look-up 

Re: [agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-14 Thread Mike Tintner
Michael: The brain's slow and unreliable methods I think are the price paid
for generality and innately unreliable hardware

Yes to one - nice to see an AGI-er finally starting to join up the dots, 
instead of simply dismissing the brain's massive difficulties in maintaining 
a train of thought.


No to two -innately unreliable hardware is the price of innately 
*adaptable* hardware - that can radically grow and rewire (wh. is the other 
advantage the brain has over computers).  Any thoughts about that and what 
in more detail are the advantages of an organic computer?


In addition, the unreliable hardware is also a price of global 
hardware - that has the basic capacity to connect more or less any bit of 
information in any part of the brain with any bit of information in any 
other part of the brain - as distinct from the local hardware of computers 
wh. have to go through limited local channels to limited local stores of 
information to make v. limited local kinds of connections. Well, that's my 
tech-ignorant take on it - but perhaps you can expand on the idea.  I would 
imagine v. broadly the brain is globally connected vs the computer wh. is 
locally connected. 







Re: [agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-14 Thread Mike Tintner

A demonstration of global connectedness is - associate with an O

I get:
number, sun, dish, disk, ball, letter, mouth, two fingers, oh, circle, 
wheel, wire coil, outline, station on metro, hole, Kenneth Noland painting, 
ring, coin, roundabout


connecting among other things - language, numbers, geometry, food, cartoons, 
paintings, speech, sports, science, technology, art, transport, 
transportation system, money.


Note though the other crucial weakness of the brain wh. impairs global 
connections - fatigue. To maintain any piece of information in consciousness 
for long is a strain,  (unless it's sexual?).


But the above demonstrates IMO why the brain is and has to be an image 
processor. 







Re: [agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-14 Thread Michael Swan

On Wed, 2010-07-14 at 17:51 -0700, Matt Mahoney wrote:
 Michael Swan wrote:
  What's 3456/6 ?
  We don't know, at least not off the top of our head.
 
 No, it took me about 10 or 20 seconds to get 576. Starting with the first 
 digit, 
 3/6 = 1/2 (from long term memory) and 3 is in the thousands place, so 1/2 of 
 1000 is 500 (1/2 = .5 from LTM). I write 500 into short term memory (STM), 
 which 
 only has enough space to hold about 7 digits. Then to divide 45/6 I get 42/6 
 = 7 
 with a remainder of 3, or 7.5, but since this is in the tens place I get 75. 
 I 
 put 75 in STM, add to 500 to get 575, put the result back in STM replacing 
 500 
 and 75 for which there is no longer room. Finally, 6/6 = 1, which I add to 
 575 
 to get 576. I hold this number in STM long enough to check with a calculator.
The brain does have one global loop, which I think runs at about 100~200
hertz. I would argue that you're using that. Also note, brains are unlikely
to use RAM. Memory is most likely stored very locally to the process, as
the brain probably can't access memory frivolously like a computer can. So
the processes that require going backwards have to wait for the next
global loop to get the data, causing a massive loss of time.
So about (~10 sec * ~100 hertz) = 1000+ loops, which I suspect is about right.


 
 One could argue that this calculation in my head uses a loop iterator (in 
 STM) 
 to keep track of which digit I am working on. It definitely involves a 
 sequence 
 of instructions with intermediate results being stored temporarily. The brain 
 can only execute 2 or 3 sequential instructions per second and has very 
 limited 
 short term memory, so it needs to draw from a large database of rules to 
 perform 
 calculations like this. A calculator, being faster and having more RAM, is 
 able 
 to use simpler but more tedious algorithms such as converting to binary, 
 division by shift and subtract, and converting back to decimal. Doing this 
 with 
 a carbon based computer would require pencil and paper to make up for lack of 
 STM, and it would require enough steps to have a high probability of making a 
 mistake.
 
 Intelligence = knowledge + computing power.
+ a clever way of using that computing power

  The human brain has a lot of 
 knowledge. The calculator has less knowledge, but makes up for it in speed 
 and 
 memory.

 
  -- Matt Mahoney, matmaho...@yahoo.com
 
 
 
 - Original Message 
 From: Michael Swan ms...@voyagergaming.com
 To: agi agi@v2.listbox.com
 Sent: Wed, July 14, 2010 7:53:33 PM
 Subject: Re: [agi] What is the smallest set of operations that can 
 potentially  
 define everything and how do you combine them ?
 
 On Wed, 2010-07-14 at 07:48 -0700, Matt Mahoney wrote:
  Actually, Fibonacci numbers can be computed without loops or recursion.
  
  int fib(int x) {
return round(pow((1+sqrt(5))/2, x)/sqrt(5));
  }
 ;) I know. I was wondering if someone would pick up on it. This won't
 prove that brains have loops though, so I wasn't concerned about the
 shortcuts. 
  unless you argue that loops are needed to compute sqrt() and pow().
  
 I would find it extremely unlikely that brains have *, /, and even more
 unlikely to have sqrt and pow inbuilt. Even more unlikely, even if it
 did have them, to figure out how to combine them to round(pow((1
 +sqrt(5))/2, x)/sqrt(5)). 
 
 Does this mean we should discount all maths that use any complex
 operations ? 
 
 I suspect the brain is full of look-up tables mainly, with some fairly
 primitive methods of combining the data. 
 
 eg What's 6 / 3 ?
 ans = 2. Most people would get that because it's been rote learnt, a
 common problem.

 What's 3456/6 ?
 We don't know, at least not off the top of our head.
 
 
  The brain and DNA use redundancy and parallelism and don't use loops 
  because 
  their operations are slow and unreliable. This is not necessarily the best 
  strategy for computers because computers are fast and reliable but don't 
  have a 
 
  lot of parallelism.
 
 The brain's slow and unreliable methods I think are the price paid for
 generality and innately unreliable hardware. Imagine writing a computer
 program that runs for 120 years without crashing and surviving damage
 like a brain can. I suspect the perfect AGI program is a rigorous
 combination of the 2. 
 
 
  
   -- Matt Mahoney, matmaho...@yahoo.com
  
  
  
  - Original Message 
  From: Michael Swan ms...@voyagergaming.com
  To: agi agi@v2.listbox.com
  Sent: Wed, July 14, 2010 12:18:40 AM
  Subject: Re: [agi] What is the smallest set of operations that can 
  potentially  
 
  define everything and how do you combine them ?
  
  Brain loops:
  
  
  Premise:
  Biological brain code does not contain looping constructs, or the
  ability to creating looping code, (due to the fact they are extremely
  dangerous on unreliable hardware) except for 1 global loop that fires
  about 200 times a second.
  
  Hypothesis:
  Brains cannot calculate iterative problems quickly, where 

Re: [agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-14 Thread Robert Picone
On Wed, Jul 14, 2010 at 4:53 PM, Michael Swan ms...@voyagergaming.com wrote:

 On Wed, 2010-07-14 at 07:48 -0700, Matt Mahoney wrote:
  Actually, Fibonacci numbers can be computed without loops or recursion.
 
  int fib(int x) {
return round(pow((1+sqrt(5))/2, x)/sqrt(5));
  }
 ;) I know. I was wondering if someone would pick up on it. This won't
 prove that brains have loops though, so I wasn't concerned about the
 shortcuts.
  unless you argue that loops are needed to compute sqrt() and pow().
 
 I would find it extremely unlikely that brains have *, /, and even more
 unlikely to have sqrt and pow inbuilt. Even more unlikely, even if it
 did have them, to figure out how to combine them to round(pow((1
 +sqrt(5))/2, x)/sqrt(5)).

 Does this mean we should discount all maths that use any complex
 operations ?

 I suspect the brain is full of look-up tables mainly, with some fairly
 primitive methods of combining the data.

 eg What's 6 / 3 ?
 ans = 2. Most people would get that because it's been rote learnt, a
 common problem.

 What's 3456/6 ?
 We don't know, at least not off the top of our head.




I'd argue that mathematical operations are unnecessary, we don't even have
integer support inbuilt.  The number meme is a bit of a hack on top of
language that has been modified throughout the years.  We have a peripheral
that allows us decent support for the numbers 1-10, but beyond that numbers
are basically words to which several different finicky grammars can be
applied as far as our brains are concerned.





Re: [agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-14 Thread Michael Swan

On Thu, 2010-07-15 at 01:37 +0100, Mike Tintner wrote:
 Michael: The brain's slow and unreliable methods I think are the price paid
 for generality and innately unreliable hardware
 
 Yes to one - nice to see an AGI-er finally starting to join up the dots, 
 instead of simply dismissing the brain's massive difficulties in maintaining 
 a train of thought.
 
 No to two -innately unreliable hardware is the price of innately 
 *adaptable* hardware - that can radically grow and rewire (wh. is the other 
 advantage the brain has over computers).  Any thoughts about that and what 
 in more detail are the advantages of an organic computer?
Programs can rewire themselves in some senses; one creates
virtual hardware inside the program as though it was real hardware.
But it's extremely rare to find ones that are purely general, so much so
that I doubt purely general ones even exist. Are NNs purely general? Are
GAs purely general? I thought perhaps code that writes code could
potentially reach such a lofty goal (as it can turn into a GA or NN or,
well, anything). Then I thought the code writing the code restricts what
the written code can be.

So, then I made some simple experiments with code modifying itself.
The end result was surprisingly (at least I suspect it was) similar to
DNA.

I still had a large section of code whose purpose was to read part of
itself and modify it, and this large piece of code had no bearing on
what the modified code actually did.

DNA has 2 sections: a coding section, which actually does most of the hard
work, and the poorly named junk DNA (or non-coding DNA), which most
biologists thought did nothing, until they discovered it doing stuff all
over the place, but in a somewhat discreet, subtle fashion.

So, is my experiment 6
http://codegenerationdesign.webs.com/index.htm
the first ever program to roughly mimic the programming of DNA ?

I find this really hard to prove, but I think it remains a possibility.

Apparently, biologists don't think much of my degree in biology from the
University of Wikipedia, nature docs, and other random stuff you read
on the internet.


 
 In addition, the unreliable hardware is also a price of global 
 hardware - that has the basic capacity to connect more or less any bit of 
 information in any part of the brain with any bit of information in any 
 other part of the brain - as distinct from the local hardware of computers 
 wh. have to go through limited local channels to limited local stores of 
 information to make v. limited local kinds of connections. Well, that's my 
 tech-ignorant take on it - but perhaps you can expand on the idea.  I would 
 imagine v. broadly the brain is globally connected vs the computer wh. is 
 locally connected. 
Yep, the ability to grab memory from anywhere is called RAM - Random
Access Memory. A single neuron can only access data from its 25,000
connections, which sounds like a lot, but isn't, because computers can
access a theoretically infinite set of data.

Given that the program in a brain can only go forward, how does it tell
other neurons that it wants data about X that is behind it?

One theory is that certain neurons detect that they need more data and
create a greater positive charge to attract more of the negatively charged
data. So in a sense they suck more data into themselves, effectively
sending a different, non-dangerous, backward-running signal. (Author
note: I can't prove this at all; it is just a possibility.)




 
 
 
 





Re: [agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-14 Thread Michael Swan

 
 
 I'd argue that mathematical operations are unnecessary,
  we don't even have integer support inbuilt.
I'd disagree. > is a mathematical operation, and in combination can
become an enormous number of concepts.

Sure, I think the brain is more sensibly understood in a
programmatic sense than a mathematical one.

I say programmatic because it probably has 100 billion or so
conditional statements, a difficult thing to represent mathematically.
Even so, each conditional is going to have maths constructs in it.


   The number meme is a bit of a hack on top of language that has been
 modified throughout the years.
   We have a peripheral that allows us decent support for the numbers
 1-10, but beyond that numbers are basically words to which several
 different finicky grammars can be applied as far as our brains are
 concerned.

True, but numbers' awesomeness lies in their power to represent relative
differences between any concepts. With this power, numbers are a
universal language, a language that can represent any other language,
and hence the ideal language and probably the only real choice for an AGI.








Re: [agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-13 Thread Ben Goertzel
Well, if you want a simple but complete operator set, you can go with

-- Schonfinkel combinator plus two parentheses

or

-- S and K combinator plus two parentheses

and I suppose you could add

-- input
-- output
-- forget

statements to this, but I'm not sure what this gets you...

Actually, adding other operators doesn't necessarily
increase the search space your AI faces -- rather, it
**decreases** the search space **if** you choose the right operators, that
encapsulate regularities in the environment faced by the AI

Exemplifying this, writing programs doing humanly simple things
using S and K is a pain and involves piling a lot of S and K and parentheses
on top of each other, whereas if we introduce loops and conditionals and
such, these programs get shorter.  Because loops and conditionals happen
to match the stuff that our human-written programs need to do...
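For readers who have not met them: terms are built from just S, K, variables and application ("two parentheses"), and reduced by the rules K x y -> x and S x y z -> x z (y z). The little C representation below is an illustration only (Ben just names the combinators); as a sanity check, S K K applied to anything behaves as the identity.

#include <stdio.h>
#include <stdlib.h>

/* Minimal sketch of S/K combinator terms and their reduction.
   Reduction rules:  K x y -> x,   S x y z -> x z (y z). */
typedef struct Term {
    char name;               /* 'S', 'K', a variable letter, or 0 for apply */
    struct Term *fun, *arg;  /* used when name == 0 */
} Term;

static Term *leaf(char c) {
    Term *t = calloc(1, sizeof *t); t->name = c; return t;
}
static Term *ap(Term *f, Term *x) {
    Term *t = calloc(1, sizeof *t); t->fun = f; t->arg = x; return t;
}

/* Recursive reduction; small, and deliberately not garbage-collected. */
static Term *reduce(Term *t) {
    if (t->name) return t;
    Term *f = reduce(t->fun);
    if (!f->name && f->fun->name == 'K')                  /* K x y -> x */
        return reduce(f->arg);
    if (!f->name && !f->fun->name && f->fun->fun->name == 'S') {
        Term *x = f->fun->arg, *y = f->arg, *z = t->arg;  /* S x y z */
        return reduce(ap(ap(x, z), ap(y, z)));            /* -> x z (y z) */
    }
    return ap(f, reduce(t->arg));
}

static void show(const Term *t) {
    if (t->name) { putchar(t->name); return; }
    putchar('('); show(t->fun); show(t->arg); putchar(')');
}

int main(void) {
    /* ((S K) K) a  should reduce to  a */
    Term *skk_a = ap(ap(ap(leaf('S'), leaf('K')), leaf('K')), leaf('a'));
    show(reduce(skk_a)); putchar('\n');   /* prints: a */
    return 0;
}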

A better question IMO is what set of operators and structures has the
property that the compact expressions tend to be the ones that are useful
for survival and problem-solving in the environments that humans and human-
like AIs need to cope with...

-- Ben G

On Tue, Jul 13, 2010 at 1:43 AM, Michael Swan ms...@voyagergaming.com wrote:
 Hi,

 I'm interested in combining the simplest, most derivable operations
 ( eg operations that cannot be defined by other operations) for creating
 seed AGI's. The simplest operations combined in a multitude ways can
 form extremely complex patterns, but the underlying logic may be
 simple.

 I wonder if varying combinations of the smallest set of operations:

 { >, memory (= for memory assignment), ==, (a logical way to
 combine them), (input, output), () brackets }

 can potentially learn and define everything.

 Assume all input is from numbers.

 We want the smallest set of elements, because less elements mean less
 combinations which mean less chance of hitting combinatorial explosion.

 > helps for generalisation, reducing combinations.

 memory(=) is for hash look ups, what should one remember? What can be
 discarded?

 == This does a comparison between 2 values x == y is 1 if x and y are
 exactly the same. Returns 0 if they are not the same.

 (a logical way to combine them) Any non-narrow algorithm that reduces
 the raw data into a simpler state will do. Philosophically like
 Solomonoff Induction. This is the hardest part. What is the most optimal
 way of combining the above set of operations?

 () brackets are used to order operations.




 Conditionals (only if statements) + memory assignment are the only valid
 form of logic - ie no loops. Just repeat code if you want loops.


 If you think that the set above cannot define everything, then what is
 the smallest set of operations that can potentially define everything?

 --
 Some proofs / Thought experiments :

 1) Can >, ==, (), and memory define other logical operations like &&
 (AND gate) ?

 I propose that x==y==1 defines x&&y

 x&&y           x==y==1
 0&&0 = 0       0==0==1 = 0
 1&&0 = 0       1==0==1 = 0
 0&&1 = 0       0==1==1 = 0
 1&&1 = 1       1==1==1 = 1

 It means && can be completely defined using == therefore && is not
 one of the smallest possible general concepts. && can be potentially
 learnt from ==.

 -

 2) Write an algorithm that can define 1 using only >, ==, ().

 Multiple answers
 a) discrete 1 could use
 x == 1

 b) continuous 1.0 could use this rule
 For those not familiar with C++, ! means not
 (x > 0.9) && !(x > 1.1)   expanding gives (getting rid of ! and &&)
 (x > 0.9) == ((x > 1.1) == 0) == 1    note !x can be defined in terms
 of == like so x == 0.

 (b) is a generalisation, and expansion of the definition of (a) and can
 be scaled by changing the values 0.9 and 1.1 to fit what others
 would generally define as being 1.








-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
CTO, Genescient Corp
Vice Chairman, Humanity+
Advisor, Singularity University and Singularity Institute
External Research Professor, Xiamen University, China
b...@goertzel.org

I admit that two times two makes four is an excellent thing, but if
we are to give everything its due, two times two makes five is
sometimes a very charming thing too. -- Fyodor Dostoevsky




Re: [agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-13 Thread Michael Swan

On Tue, 2010-07-13 at 07:00 -0400, Ben Goertzel wrote:
 Well, if you want a simple but complete operator set, you can go with
 
 -- Schonfinkel combinator plus two parentheses
 
I'll check this out soon.
 or
 
 -- S and K combinator plus two parentheses
 
 and I suppose you could add
 
 -- input
 -- output
 -- forget
 
 statements to this, but I'm not sure what this gets you...
 
 Actually, adding other operators doesn't necessarily
 increase the search space your AI faces -- rather, it
 **decreases** the search space **if** you choose the right operators, that
 encapsulate regularities in the environment faced by the AI

Unfortunately, an AGI needs to be absolutely general. You are right that
higher level concepts reduce combinations; however, using them will
increase combinations for simpler operator combinations, and if you
miss a necessary operator, then some concepts will be impossible to
achieve. The smallest set can define higher level concepts, and these
concepts can later be integrated as single operations, which means
using operators that can be understood in terms of smaller operators
in the beginning will definitely increase your combinations later on.

The smallest operator set is like absolute zero. It has a defined end. A
defined way of finding out what they are.



 
 Exemplifying this, writing programs doing humanly simple things
 using S and K is a pain and involves piling a lot of S and K and parentheses
 on top of each other, whereas if we introduce loops and conditionals and
 such, these programs get shorter.  Because loops and conditionals happen
 to match the stuff that our human-written programs need to do...
Loops are evil in most situations.

Let me show you why:
Draw a square using put_pixel(x,y)
// loops are more scalable, but, damage this code anywhere and it can
potentially kill every other process, not just itself. This is why
computers die all the time.

for (int x = 0; x < 2; x++)
{
    for (int y = 0; y < 2; y++)
    {
        put_pixel(x,y);
    }
}

opposed to
/* The below is faster (even on single-step instructions), can be
run in parallel, and is damage resistant (ie destroy put_pixel(0,1); and the
rest of the code will still run), but it is less scalable (more code is
required for larger operations) */

put_pixel(0,0);
put_pixel(0,1);
put_pixel(1,0);
put_pixel(1,1);

The lack of loops in the brain is a fundamental difference between
computers and brains. Think about it. We can't do operations that
require 1,000,000 loop iterations.  I wish someone would give me a PHD
for discovering this ;) It far better describes our differences than any
other theory.


 A better question IMO is what set of operators and structures has the
 property that the compact expressions tend to be the ones that are useful
 for survival and problem-solving in the environments that humans and human-
 like AIs need to cope with...

For me that is stage 2.

 
 -- Ben G
 
 On Tue, Jul 13, 2010 at 1:43 AM, Michael Swan ms...@voyagergaming.com wrote:
  Hi,
 
  I'm interested in combining the simplest, most derivable operations
  ( eg operations that cannot be defined by other operations) for creating
  seed AGI's. The simplest operations combined in a multitude ways can
  form extremely complex patterns, but the underlying logic may be
  simple.
 
  I wonder if varying combinations of the smallest set of operations:
 
  { >, memory (= for memory assignment), ==, (a logical way to
  combine them), (input, output), () brackets }
 
  can potentially learn and define everything.
 
  Assume all input is from numbers.
 
  We want the smallest set of elements, because less elements mean less
  combinations which mean less chance of hitting combinatorial explosion.
 
  > helps for generalisation, reducing combinations.
 
  memory(=) is for hash look ups, what should one remember? What can be
  discarded?
 
  == This does a comparison between 2 values x == y is 1 if x and y are
  exactly the same. Returns 0 if they are not the same.
 
  (a logical way to combine them) Any non-narrow algorithm that reduces
  the raw data into a simpler state will do. Philosophically like
  Solomonoff Induction. This is the hardest part. What is the most optimal
  way of combining the above set of operations?
 
  () brackets are used to order operations.
 
 
 
 
  Conditionals (only if statements) + memory assignment are the only valid
  form of logic - ie no loops. Just repeat code if you want loops.
 
 
  If you think that the set above cannot define everything, then what is
  the smallest set of operations that can potentially define everything?
 
  --
  Some proofs / Thought experiments :
 
  1) Can >, ==, (), and memory define other logical operations like &&
  (AND gate) ?
 
  I propose that x==y==1 defines x&&y
 
  x&&y   x==y==1
  0&&0 = 0   0==0==1 = 0
  1&&0 = 0   1==0==1 = 0
  0&&1 = 0   0==1==1 = 0
  1&&1 = 1   1==1==1 

Re: [agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-13 Thread Mike Tintner

Michael: We can't do operations that
require 1,000,000 loop iterations.  I wish someone would give me a PHD
for discovering this ;) It far better describes our differences than any
other theory.

Michael,

This isn't a competitive point - but I think I've made that point several 
times (and so of course has Hawkins). Quite obviously, (unless you think the 
brain has fabulous hidden powers), it conducts searches and other operations 
with extremely few limited steps, and nothing remotely like the routine 
millions to billions of current computers.  It must therefore work v. 
fundamentally differently.


Are you saying anything significantly different to that?

--
From: Michael Swan ms...@voyagergaming.com
Sent: Wednesday, July 14, 2010 1:34 AM
To: agi agi@v2.listbox.com
Subject: Re: [agi] What is the smallest set of operations that can 
potentially  define everything and how do you combine them ?




On Tue, 2010-07-13 at 07:00 -0400, Ben Goertzel wrote:

Well, if you want a simple but complete operator set, you can go with

-- Schonfinkel combinator plus two parentheses


I'll check this out soon.

or

-- S and K combinator plus two parentheses

and I suppose you could add

-- input
-- output
-- forget

statements to this, but I'm not sure what this gets you...

Actually, adding other operators doesn't necessarily
increase the search space your AI faces -- rather, it
**decreases** the search space **if** you choose the right operators, 
that

encapsulate regularities in the environment faced by the AI


Unfortunately, an AGI needs to be absolutely general. You are right that
higher level concepts reduce combinations, however, using them, will
increase combinations for simpler operator combinations, and if you
miss a necessary operator, then some concepts will be impossible to
achieve. The smallest set can define higher level concepts, these
concepts can be later integrated as single operations, which means
using operators that can be understood in terms of smaller operators
in the beginning will definitely increase your combinations later on.

The smallest operator set is like absolute zero. It has a defined end. A
defined way of finding out what they are.





Exemplifying this, writing programs doing humanly simple things
using S and K is a pain and involves piling a lot of S and K and 
parentheses

on top of each other, whereas if we introduce loops and conditionals and
such, these programs get shorter.  Because loops and conditionals happen
to match the stuff that our human-written programs need to do...

Loops are evil in most situations.

Let me show you why:
Draw a square using put_pixel(x,y)
// loops are more scalable, but, damage this code anywhere and it can
potentially kill every other process, not just itself. This is why
computers die all the time.

for (int x = 0; x < 2; x++)
{
    for (int y = 0; y < 2; y++)
    {
        put_pixel(x,y);
    }
}

opposed to
/* The below is faster (even on single-step instructions), can be
run in parallel, and is damage resistant (ie destroy put_pixel(0,1); and the
rest of the code will still run), but it is less scalable (more code is
required for larger operations) */

put_pixel(0,0);
put_pixel(0,1);
put_pixel(1,0);
put_pixel(1,1);

The lack of loops in the brain is a fundamental difference between
computers and brains. Think about it. We can't do operations that
require 1,000,000 loop iterations.  I wish someone would give me a PHD
for discovering this ;) It far better describes our differences than any
other theory.



A better question IMO is what set of operators and structures has the
property that the compact expressions tend to be the ones that are useful
for survival and problem-solving in the environments that humans and 
human-

like AIs need to cope with...


For me that is stage 2.



-- Ben G

On Tue, Jul 13, 2010 at 1:43 AM, Michael Swan ms...@voyagergaming.com 
wrote:

 Hi,

 I'm interested in combining the simplest, most derivable operations
 ( eg operations that cannot be defined by other operations) for 
 creating

 seed AGI's. The simplest operations combined in a multitude ways can
 form extremely complex patterns, but the underlying logic may be
 simple.

 I wonder if varying combinations of the smallest set of operations:

  { >, memory (= for memory assignment), ==, (a logical way to
  combine them), (input, output), () brackets }

 can potentially learn and define everything.

 Assume all input is from numbers.

 We want the smallest set of elements, because less elements mean less
 combinations which mean less chance of hitting combinatorial explosion.

  > helps for generalisation, reducing combinations.

 memory(=) is for hash look ups, what should one remember? What can be
 discarded?

 == This does a comparison between 2 values x == y is 1 if x and y are
 exactly the same. Returns 0 if they are not the same.

 (a logical way to combine them) Any non-narrow algorithm that reduces
 the raw data into a simpler state 

Re: [agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-13 Thread Michael Swan
Brain loops:


Premise:
Biological brain code does not contain looping constructs, or the
ability to create looping code, (due to the fact they are extremely
dangerous on unreliable hardware) except for 1 global loop that fires
about 200 times a second.

Hypothesis:
Brains cannot calculate iterative problems quickly, where calculations
in the previous iteration are needed for the next iteration and, where
brute force operations are the only valid option.

Proof:
Take as an example, Fibonacci numbers
http://en.wikipedia.org/wiki/Fibonacci_number

What are the first 100 Fibonacci numbers?

int Fibonacci[102];
Fibonacci[0] = 0;
Fibonacci[1] = 1;
for(int i = 0; i < 100; i++)
{
// Getting the next Fibonacci number relies on the previous values
Fibonacci[i+2] = Fibonacci[i] + Fibonacci[i+1];
}  

My brain knows the process to solve this problem but it can't directly
write a looping construct into itself. And so it solves it very slowly
compared to a computer. 

The brain probably consists of vast repeating look-up tables. Of course,
run in parallel these seem fast.


DNA has vast tracts of repeating data. Why would DNA contain repeating
data, instead of just having the data once and the number of times it's
repeated, like in a loop? One explanation is that DNA can't do looping
constructs either.
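To make the "repeat code if you want loops" idea concrete, the same Fibonacci table can be filled without any looping construct by literally repeating the line, which is the analogy being drawn with DNA's repeated data. This is an illustration only, with just the first few repetitions shown.

#include <stdio.h>

int Fibonacci[102];

/* Loop-free version: one hard-wired line per value. A full version would
   repeat the pattern 100 times. Damage to any single line only corrupts the
   values downstream of it, which is the robustness argument made for the
   put_pixel example elsewhere in the thread. */
void fill_fib_unrolled(void) {
    Fibonacci[0] = 0;
    Fibonacci[1] = 1;
    Fibonacci[2] = Fibonacci[0] + Fibonacci[1];
    Fibonacci[3] = Fibonacci[1] + Fibonacci[2];
    Fibonacci[4] = Fibonacci[2] + Fibonacci[3];
    Fibonacci[5] = Fibonacci[3] + Fibonacci[4];
    /* ...and so on */
}

int main(void) {
    fill_fib_unrolled();
    printf("%d %d %d %d %d %d\n", Fibonacci[0], Fibonacci[1], Fibonacci[2],
           Fibonacci[3], Fibonacci[4], Fibonacci[5]);   /* 0 1 1 2 3 5 */
    return 0;
}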



On Wed, 2010-07-14 at 02:43 +0100, Mike Tintner wrote:
 Michael: We can't do operations that
 require 1,000,000 loop iterations.  I wish someone would give me a PHD
 for discovering this ;) It far better describes our differences than any
 other theory.
 
 Michael,
 
 This isn't a competitive point - but I think I've made that point several 
 times (and so of course has Hawkins). Quite obviously, (unless you think the 
 brain has fabulous hidden powers), it conducts searches and other operations 
 with extremely few limited steps, and nothing remotely like the routine 
 millions to billions of current computers.  It must therefore work v. 
 fundamentally differently.
 
 Are you saying anything significantly different to that?
 
 --
 From: Michael Swan ms...@voyagergaming.com
 Sent: Wednesday, July 14, 2010 1:34 AM
 To: agi agi@v2.listbox.com
 Subject: Re: [agi] What is the smallest set of operations that can 
 potentially  define everything and how do you combine them ?
 
 
  On Tue, 2010-07-13 at 07:00 -0400, Ben Goertzel wrote:
  Well, if you want a simple but complete operator set, you can go with
 
  -- Schonfinkel combinator plus two parentheses
 
  I'll check this out soon.
  or
 
  -- S and K combinator plus two parentheses
 
  and I suppose you could add
 
  -- input
  -- output
  -- forget
 
  statements to this, but I'm not sure what this gets you...
 
  Actually, adding other operators doesn't necessarily
  increase the search space your AI faces -- rather, it
  **decreases** the search space **if** you choose the right operators, 
  that
  encapsulate regularities in the environment faced by the AI
 
  Unfortunately, an AGI needs to be absolutely general. You are right that
  higher level concepts reduce combinations, however, using them, will
  increase combinations for simpler operator combinations, and if you
  miss a necessary operator, then some concepts will be impossible to
  achieve. The smallest set can define higher level concepts, these
  concepts can be later integrated as single operations, which means
  using operators that can be understood in terms of smaller operators
  in the beginning will definitely increase your combinations later on.
 
  The smallest operator set is like absolute zero. It has a defined end. A
  defined way of finding out what they are.
 
 
 
 
  Exemplifying this, writing programs doing humanly simple things
  using S and K is a pain and involves piling a lot of S and K and 
  parentheses
  on top of each other, whereas if we introduce loops and conditionals and
  such, these programs get shorter.  Because loops and conditionals happen
  to match the stuff that our human-written programs need to do...
  Loops are evil in most situations.
 
  Let me show you why:
  Draw a square using put_pixel(x,y)
  // loops are more scalable, but, damage this code anywhere and it can
  potentially kill every other process, not just itself. This is why
  computers die all the time.
 
  for (int x = 0; x < 2; x++)
  {
      for (int y = 0; y < 2; y++)
      {
          put_pixel(x,y);
      }
  }
 
  opposed to
  /* The below is faster (even on single-step instructions), can be
  run in parallel, and is damage resistant (ie destroy put_pixel(0,1); and the
  rest of the code will still run), but it is less scalable (more code is
  required for larger operations) */
 
  put_pixel(0,0);
  put_pixel(0,1);
  put_pixel(1,0);
  put_pixel(1,1);
 
  The lack of loops in the brain is a fundamental difference between
  computers and brains. Think about it. We can't do operations that
  require 1,000,000 loop iterations.  I wish someone would give me a PHD
  for 

[agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-12 Thread Michael Swan
Hi,

I'm interested in combining the simplest, most derivable operations
( eg operations that cannot be defined by other operations) for creating
seed AGI's. The simplest operations combined in a multitude ways can
form extremely complex patterns, but the underlying logic may be
simple. 

I wonder if varying combinations of the smallest set of operations:

{ >, memory (= for memory assignment), ==, (a logical way to
combine them), (input, output), () brackets } 

can potentially learn and define everything. 

Assume all input is from numbers.

We want the smallest set of elements, because less elements mean less
combinations which mean less chance of hitting combinatorial explosion.

> helps for generalisation, reducing combinations. 

memory(=) is for hash look ups, what should one remember? What can be
discarded? 

== This does a comparison between 2 values x == y is 1 if x and y are
exactly the same. Returns 0 if they are not the same.

(a logical way to combine them) Any non-narrow algorithm that reduces
the raw data into a simpler state will do. Philosophically like
Solomonoff Induction. This is the hardest part. What is the most optimal
way of combining the above set of operations?

() brackets are used to order operations. 




Conditionals (only if statements) + memory assignment are the only valid
form of logic - ie no loops. Just repeat code if you want loops. 


If you think that the set above cannot define everything, then what is
the smallest set of operations that can potentially define everything? 

--
Some proofs / Thought experiments :

1) Can >, ==, (), and memory define other logical operations like &&
(AND gate) ?

I propose that x==y==1 defines x&&y

x&&y       x==y==1
0&&0 = 0   0==0==1 = 0
1&&0 = 0   1==0==1 = 0
0&&1 = 0   0==1==1 = 0
1&&1 = 1   1==1==1 = 1

It means && can be completely defined using == therefore && is not
one of the smallest possible general concepts. && can be potentially
learnt from ==.
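One note on reading proof (1): the table comes out as shown only if x==y==1 is read as a chained comparison (x equals y, and y equals 1), not as C's left-associative ((x==y)==1), which would give 1 for x = y = 0. The quick check below spells the chaining out; it leans on && purely to verify the table and is an added illustration, not part of the argument.

#include <stdio.h>

/* Verify that the chained reading of x == y == 1 matches the AND table. */
static int eq_chain(int x, int y, int z) {
    return (x == y) && (y == z);   /* the intended reading of x == y == z */
}

int main(void) {
    for (int x = 0; x <= 1; x++)
        for (int y = 0; y <= 1; y++)
            printf("x=%d y=%d   x AND y = %d   x==y==1 -> %d\n",
                   x, y, x && y, eq_chain(x, y, 1));
    return 0;
}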

-

2) Write an algorithm that can define 1 using only >, ==, ().

Multiple answers
a) discrete 1 could use
x == 1

b) continuous 1.0 could use this rule 
For those not familiar with C++, ! means not 
(x > 0.9) && !(x > 1.1)   expanding gives (getting rid of ! and &&)
(x > 0.9) == ((x > 1.1) == 0) == 1    note !x can be defined in terms
of == like so x == 0.

(b) is a generalisation, and expansion of the definition of (a) and can
be scaled by changing the values 0.9 and 1.1 to fit what others
would generally define as being 1.
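A quick spot check of definition (2b), again an added illustration: with the 0.9 and 1.1 thresholds as given, a value counts as 1 exactly when it falls in that band, whether written with ! and && or rewritten with == (the rewrite still chains its two conditions with &&, in the same sense as proof (1)).

#include <stdio.h>

static int is_one(double x) {
    return (x > 0.9) && !(x > 1.1);                /* original form */
}
static int is_one_eq(double x) {
    return ((x > 0.9) == ((x > 1.1) == 0)) == 1    /* !a written as a == 0 */
           && ((x > 1.1) == 0);
}

int main(void) {
    double xs[] = {0.5, 0.95, 1.0, 1.1, 1.2};
    for (int i = 0; i < 5; i++)
        printf("x=%.2f  is_one=%d  via ==: %d\n",
               xs[i], is_one(xs[i]), is_one_eq(xs[i]));
    return 0;
}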



