Re: [agi] How do we hear music

2010-07-25 Thread Michael Swan

On Fri, 2010-07-23 at 23:38 +0100, Mike Tintner wrote:
 Michael:but those things do have patterns.. A mushroom (A) is like a cloud
  mushroom (B).
 
  if ( (input_source_A == An_image) AND ( input_source_B == An_image ))
 
  One pattern is that they both came from an image source, and I just used
  maths + logic to prove it
 
 Michael,
 
 This is a bit desperate isn't it?
It's a common misconception that high-level queries aren't very good.
Imagine 5 senses: sight, touch, taste, etc.

We confirm the input is from sight. By doing this we potentially reduce
the combinations of what it could be by 4/5 ~ 80%, which is pretty
awesome.

Computer programs know nothing. You have to tell them everything (narrow
AI) or allow the mechanics to find things out for themselves.

 
 They both come from image sources. So do a zillion other images, from 
 Obama to dung - so they're all alike? Everything in the world is alike and 
 metaphorical for everything else?
 
 And their images must be alike because they both have an 'o' and a 'u' in 
 their words, (not their images)-  unless you're a Chinese speaker.
 
 Pace Lear, that way madness lies.
 
 Why don't you apply your animation side to the problem - and analyse the 
 images per images, and how to compare them as images? Some people in AGI 
 although not AFAIK on this forum are actually addressing the problem. I'm 
 sure *you* can too.
 
 
 
 --
 From: Michael Swan ms...@voyagergaming.com
 Sent: Friday, July 23, 2010 8:28 AM
 To: agi agi@v2.listbox.com
 Subject: Re: [agi] How do we hear music
 
 
 
 
 
 
  On Fri, 2010-07-23 at 03:45 +0100, Mike Tintner wrote:
  Let's crystallise the problem   - all the unsolved problems of AGI - 
  visual
  object recognition, conceptualisation, analogy, metaphor, creativity,
  language understanding and generation -  are problems where you're 
  dealing
  with freeform, irregular patchwork objects - objects which clearly do not
  fit any *patterns* -   the raison d'etre of maths .
 
  To focus that , these objects do not have common parts in more or less
  precisely repeating structures - i.e. fit patterns.
 
  A cartoon and a photo of the same face may have no parts or structure in
  common.
  Ditto different versions of the Google logo. Zero common parts or 
  structure
 
  Ditto cloud and mushroom - no common parts, or common structure.
 
  Yet the mind amazingly can see likenesses between all these things.
 
  Just about all the natural objects in the world , with some obvious
  exceptions, do not fit common patterns - they do not have the same parts 
  in
  precisely the same places/structures.  They may  have common loose
  organizations of parts - e.g. mouths, eyes, noses, lips  - but they are
  not precisely patterned.
 
  So you must explain how a mathematical approach, wh. is all about
  recognizing patterns, can apply to objects wh. do not fit patterns.
 
  You won't be able to - because if you could bring yourselves to look at 
  the
  real world or any depictions of it other than geometric, (metacognitively
  speaking),you would see for yourself that these objects don't have 
  precise
  patterns.
 
  It's obvious also that when the mind likens a cloud to a mushroom, it 
  cannot
  be using any math. techniques.
 
  .. but those things do have patterns.. A mushroom (A) is like a cloud
  mushroom (B).
 
  if ( (input_source_A == An_image) AND ( input_source_B == An_image ))
 
  One pattern is that they both came from an image source, and I just used
  maths + logic to prove it.
 
  But we have to understand how the mind does do that - because it's fairly
  clearly  the same technique the mind also uses to conceptualise even more
  vastly different forms such as those of  chair, tree,  dog, cat.
 
  And that technique - like concepts themselves -  is at the heart of AGI.
 
  And you can sit down and analyse the problem visually, physically and see
  also pretty obviously that if the mind can liken such physically 
  different
  objects as cloud and mushroom, then it HAS to do that with something like 
  a
  fluid schema. There's broadly no other way but to fluidly squash the 
  objects
  to match each other (there could certainly be different techniques of
  achieving that  - but the broad principles are fairly self evident). 
  Cloud
  and mushroom certainly don't match formulaically, mathematically. Neither 
  do
  those different versions of a tune. Or the different faces of Madonna.
 
  But what we've got here is people who don't in the final analysis give a
  damn about how to solve AGI - if it's a choice between doing maths and
  failing, and having some kind of artistic solution to AGI that actually
  works, most people here will happily fail forever. Mathematical AI has
  indeed consistently failed at AGI. You have to realise, mathematicians 
  have
  a certain kind of madness. Artists don't go around saying God is an 
  artist,
  or everything is art. Only mathematicians have that compulsion to reduce
  everything to maths, when the overwhelming majority of representations are
  clearly not mathematical - or claim that the obviously irregular abstract
  arts (think Pollock) are mathematical.

Re: [agi] How do we hear music

2010-07-23 Thread Michael Swan





On Fri, 2010-07-23 at 03:45 +0100, Mike Tintner wrote:
 Let's crystallise the problem   - all the unsolved problems of AGI -  visual 
 object recognition, conceptualisation, analogy, metaphor, creativity, 
 language understanding and generation -  are problems where you're dealing 
 with freeform, irregular patchwork objects - objects which clearly do not 
 fit any *patterns* -   the raison d'etre of maths .
 
 To focus that , these objects do not have common parts in more or less 
 precisely repeating structures - i.e. fit patterns.
 
 A cartoon and a photo of the same face may have no parts or structure in 
 common.
 Ditto different versions of the Google logo. Zero common parts or structure
 
 Ditto cloud and mushroom - no common parts, or common structure.
 
 Yet the mind amazingly can see likenesses between all these things.
 
 Just about all the natural objects in the world , with some obvious 
 exceptions, do not fit common patterns - they do not have the same parts in 
 precisely the same places/structures.  They may  have common loose 
 organizations of parts - e.g. mouths, eyes, noses, lips  - but they are 
 not precisely patterned.
 
 So you must explain how a mathematical approach, wh. is all about 
 recognizing patterns, can apply to objects wh. do not fit patterns.
 
 You won't be able to - because if you could bring yourselves to look at the 
 real world or any depictions of it other than geometric, (metacognitively 
 speaking),you would see for yourself that these objects don't have precise 
 patterns.
 
 It's obvious also that when the mind likens a cloud to a mushroom, it cannot 
 be using any math. techniques.

.. but those things do have patterns.. A mushroom (A) is like a cloud
mushroom (B).

if ( (input_source_A == An_image) AND ( input_source_B == An_image ))

One pattern is that they both came from an image source, and I just used
maths + logic to prove it.
 
 But we have to understand how the mind does do that - because it's fairly 
 clearly  the same technique the mind also uses to conceptualise even more 
 vastly different forms such as those of  chair, tree,  dog, cat.
 
 And that technique - like concepts themselves -  is at the heart of AGI.
 
 And you can sit down and analyse the problem visually, physically and see 
 also pretty obviously that if the mind can liken such physically different 
 objects as cloud and mushroom, then it HAS to do that with something like a 
 fluid schema. There's broadly no other way but to fluidly squash the objects 
 to match each other (there could certainly be different techniques of 
 achieving that  - but the broad principles are fairly self evident). Cloud 
 and mushroom certainly don't match formulaically, mathematically. Neither do 
 those different versions of a tune. Or the different faces of Madonna.
 
 But what we've got here is people who don't in the final analysis give a 
 damn about how to solve AGI - if it's a choice between doing maths and 
 failing, and having some kind of artistic solution to AGI that actually 
 works, most people here will happily fail forever. Mathematical AI has 
 indeed consistently failed at AGI. You have to realise, mathematicians have 
 a certain kind of madness. Artists don't go around saying God is an artist, 
 or everything is art. Only mathematicians have that compulsion to reduce 
 everything to maths, when the overwhelming majority of representations are 
 clearly not mathematical - or claim that the obviously irregular abstract 
 arts (think Pollock) are mathematical. You're in good company - Wolfram, a 
 brilliant fellow, thinks his patterns constitute a new kind of science, when 
 the vast majority of scientists can see they only constitute a new  kind of 
 pattern, and do not apply to the real world.
 
 Look again - the brain is primarily a patchwork adapted to a patchwork, 
 very extensively unpatterned world -  incl. the internet itself - adapted 
 primarily not to neat, patterned networks, but  to  tangled, patchwork, 
 non-mathematical webs. See fotos.
 
 The outrageous one here is not me.
 
 
 
 
 --
 From: Michael Swan ms...@voyagergaming.com
 Sent: Friday, July 23, 2010 2:19 AM
 To: agi agi@v2.listbox.com
 Subject: Re: [agi] How do we hear music
 
  Hi,
 
  Sometimes outrageous comments are a catalyst for better ideas.
 
  On Fri, 2010-07-23 at 01:48 +0200, Jan Klauck wrote:
  Mike Tintner trolled
 
   And maths will handle the examples given :
  
   same tunes - different scales, different instruments
   same face -  cartoon, photo
   same logo  - different parts [buildings/ fruits/ human figures]
 
  Unfortunately I forgot. The answer is somewhere down there:
 
  http://en.wikipedia.org/wiki/Eigenvalue,_eigenvector_and_eigenspace
  http://en.wikipedia.org/wiki/Pattern_recognition
  http://en.wikipedia.org/wiki/Curve_fitting
  http://en.wikipedia.org/wiki/System_identification
 
  No-one has successfully integrated these concepts into a working AGI,
  despite numerous attempts.

Re: [agi] How do we hear music

2010-07-22 Thread Michael Swan
Hi,

Sometimes outrageous comments are a catalyst for better ideas. 

On Fri, 2010-07-23 at 01:48 +0200, Jan Klauck wrote:
 Mike Tintner trolled
 
  And maths will handle the examples given :
 
  same tunes - different scales, different instruments
  same face -  cartoon, photo
  same logo  - different parts [buildings/ fruits/ human figures]
 
 Unfortunately I forgot. The answer is somewhere down there:
 
 http://en.wikipedia.org/wiki/Eigenvalue,_eigenvector_and_eigenspace
 http://en.wikipedia.org/wiki/Pattern_recognition
 http://en.wikipedia.org/wiki/Curve_fitting
 http://en.wikipedia.org/wiki/System_identification
 
No-one has successfully integrated these concepts into a working AGI,
despite numerous attempts. Even though these concepts feel general, their
implementations have turned out either narrow or overwhelmed by
combinatorial explosion.
 
  revealing them to be the same  -   how exactly?
 
 Why should anybody explain that mystery to you? You are not an
 accepted member of the Grand Lodge of AGI Masons or its affiliates.
 
  Or you could take two arseholes -  same kind of object, but radically
  different configurations - maths will show them to belong to the same
  category, how?
 
 How will you do it? By licking them?

Personal attacks only weaken your arguments.

 
 
 
 
 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription: https://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com





Re: [agi] The Collective Brain

2010-07-20 Thread Michael Swan

On Wed, 2010-07-21 at 02:25 +0100, Mike Tintner wrote:

 By implicitly pretending that artificial brains exist - in the form of 
 computer programs -  you (and most AGI-ers), deflect attention away from all 
 the unsolved dimensions of what is required for an independent 
 brain-cum-living system,
I for one would like to see this brain-cum-living system. Its erotic
intelligence would be astronomical!

  natural or artificial. One of those dimensions is a 
 society of brains/systems. Another is a body. And there are more., none of 
 wh. are incorporated in computer programs - they only represent one 
 dimension of what is needed for a brain.

 --
 From: Jan Klauck jkla...@uni-osnabrueck.de
 Sent: Wednesday, July 21, 2010 1:56 AM
 To: agi agi@v2.listbox.com
 Subject: Re: [agi] The Collective Brain
 
  Mike Tintner wrote
 
  No, the collective brain is actually a somewhat distinctive idea.
 
  Just a way of looking at social support networks. Even social
  philosophers centuries ago had similar ideas--they were lacking our
  technical understanding and used analogies from biology (organicism)
  instead.
 
  more like interdependently functioning with society
 
  As I said it's long known to economists and sociologists. There's even
  an African proverb pointing at this: It takes a village to raise a
  child.
  System researchers have investigated those interdependencies for decades.
 
  Did you watch the talk?
 
  No flash here. I just answer on what you're writing.
 
  The evidence of the idea's newness is precisely the discussions of
  superAGI's and AGI futures by the groups here
 
  We talked about the social dimensions some times. It's not the most
  important topic around here, but that doesn't mean we're all ignorant.
 
  In case you haven't noticed I'm not building an AGI, I'm interested
  in the stuff around, e.g., tests, implementation strategies etc. by
  the means of social simulation.
 
  Your last question is also an example of cocooned-AGI thinking? Which
  brains?  The only real AGI brains are those of living systems
 
  A for Artificial. Living systems don't qualify for A.
 
  My question was about certain attributes of brains (whether natural or
  artificial). Societies are constrained by their members' capacities.
  A higher individual capacity can lead to different dependencies and
  new ways groups and societies are working.
 
 
 





Re: [agi] The Collective Brain

2010-07-20 Thread Michael Swan

The most powerful concept in the universe is working together.

If atoms didn't attract and repel each other, then we'd have a universe
where nothing ever happened.

So, the Collective Brain is a subset of the collective intelligence of
the universe.


On Wed, 2010-07-21 at 02:25 +0100, Mike Tintner wrote:
 You partly illustrate my point - you talk of artificial brains as if they 
 actually exist  - there aren't any; there are only glorified, extremely 
 complex calculators/computer programs  - extensions/augmentations of 
 individual faculties of human brains.  To obviously exaggerate, it's 
 somewhat as if you were to talk of cameras as brains.
 
 By implicitly pretending that artificial brains exist - in the form of 
 computer programs -  you (and most AGI-ers), deflect attention away from all 
 the unsolved dimensions of what is required for an independent 
 brain-cum-living system, natural or artificial. One of those dimensions is a 
 society of brains/systems. Another is a body. And there are more., none of 
 wh. are incorporated in computer programs - they only represent one 
 dimension of what is needed for a brain.
 
 Yes you may know these things some times as you say, but most of the time 
 they're forgotten.
 
 --
 From: Jan Klauck jkla...@uni-osnabrueck.de
 Sent: Wednesday, July 21, 2010 1:56 AM
 To: agi agi@v2.listbox.com
 Subject: Re: [agi] The Collective Brain
 
 
 
 
 





Re: [agi] Is there any Contest or test to ensure that a System is AGI?

2010-07-18 Thread Michael Swan

Numbers combined together are a form of language that can form every
other language.

and...

If you insist on using a natural language, why don't you use the
language most natural to computers - i.e. code (which translates
directly to numbers - machine language ...)

Code is better because you can automatically test, then observe, whether
your new code combination works. It's also more pedantic and doesn't
allow ambiguity.




On Sun, 2010-07-18 at 21:28 +0100, Ian Parker wrote:
 In my view the main obstacle to AGI is the understanding of Natural
 Language. If we have NL comprehension we have the basis for doing a
 whole host of marvellous things.
 
 
 There is the Turing test. A good question to ask is What is the
 difference between laying concrete at 50C and fighting Israel. Google
 translated wsT jw AlmErkp or وسط جو المعركة  as central air
 battle. Correct is the climatic environmental battle or a more free
 translation would be the battle against climate and environment. In
 Turing competitions no one ever asks the questions that really would
 tell AGI apart from a brand X chatterbox.
 
 
 http://sites.google.com/site/aitranslationproject/Home/formalmethods
 
 
 We can I think say that anything which can carry out the program of my
 blog would be well on its way. AGI will also be the link between NL
 and formal mathematics. Let me take yet another example.
 
 
 http://sites.google.com/site/aitranslationproject/deepknowled
 
 
 Google translated it as 4 times the temperature. Ponder this, you have
 in fact 3 chances to get this right.
 
 
 1)  درجة means degree. GT has not translated this word. In this
 context it means power.
 
 
 2) If you search for Stefan Boltzmann or Black Body Google gives
 you the correct law.
 
 
 3) The translation is obviously mathematically incorrect from the
 dimensional stand-point.
 
 
 This 3 things in fact represent different aspects of knowledge. In AGI
 they all have to be present.
 
 
 The other interesting point is that there are programs in existence
 now that will address the last two questions. A translator that
 produces OWL solves 2.
 
 
 If we match up AGI to Mizar we can put dimensions into the proof
 engine.
 
 
 There are a great many things on the Web which will solve specific
 problems. NL is THE problem since it will allow navigation between the
 different programs on the Web.
 
 
 MOLTO BTW does have its mathematical parts even though it is
 primerally billed as a translator.
 
 
 
 
   - Ian Parker
 
 
 On 18 July 2010 14:41, deepakjnath deepakjn...@gmail.com wrote:
 Yes, but is there a competition like the XPrize or something
 that we can work towards. ?
 
 On Sun, Jul 18, 2010 at 6:40 PM, Panu Horsmalahti
 nawi...@gmail.com wrote:
 2010/7/18 deepakjnath deepakjn...@gmail.com
 
 I wanted to know if there is any bench mark
 test that can really convince majority of
 today's AGIers that a System is true AGI?
 
 Is there some real prize like the XPrize for
 AGI or AI in general?
 
 thanks,
 Deepak
 
 Have you heard about the Turing test?
 
 - Panu Horsmalahti 
 
 
 
 
 
 -- 
 cheers,
 Deepak
 
 
 
 





Re: [agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-14 Thread Michael Swan
On Wed, 2010-07-14 at 07:48 -0700, Matt Mahoney wrote:
 Actually, Fibonacci numbers can be computed without loops or recursion.
 
 int fib(int x) {
   return round(pow((1+sqrt(5))/2, x)/sqrt(5));
 }
;) I know. I was wondering if someone would pick up on it. This won't
prove that brains have loops though, so I wasn't concerned about the
shortcuts. 
 unless you argue that loops are needed to compute sqrt() and pow().
 
I would find it extremely unlikely that brains have *, /, and even more
unlikely sqrt and pow, built in. Even more unlikely, if it did have
them, is that it could figure out how to combine them into round(pow((1
+sqrt(5))/2, x)/sqrt(5)).

Does this mean we should discount all maths that uses any complex
operations?

I suspect the brain is mainly full of look-up tables, with some fairly
primitive methods of combining the data.

eg What's 6 / 3?
ans = 2. Most people would get that because it's been rote learnt - it's
a common problem.

What's 3456/6?
We don't know, at least not off the top of our head.

 The brain and DNA use redundancy and parallelism and don't use loops because 
 their operations are slow and unreliable. This is not necessarily the best 
 strategy for computers because computers are fast and reliable but don't have 
 a 
 lot of parallelism.

The brain's slow and unreliable methods, I think, are the price paid for
generality and innately unreliable hardware. Imagine writing a computer
program that runs for 120 years without crashing and survives damage
like a brain can. I suspect the perfect AGI program is a rigorous
combination of the two.


 
  -- Matt Mahoney, matmaho...@yahoo.com
 
 
 
 - Original Message 
 From: Michael Swan ms...@voyagergaming.com
 To: agi agi@v2.listbox.com
 Sent: Wed, July 14, 2010 12:18:40 AM
 Subject: Re: [agi] What is the smallest set of operations that can 
 potentially  
 define everything and how do you combine them ?
 
 Brain loops:
 
 
 Premise:
 Biological brain code does not contain looping constructs, or the
  ability to create looping code (due to the fact that they are extremely
  dangerous on unreliable hardware), except for 1 global loop that fires
 about 200 times a second.
 
 Hypothesis:
 Brains cannot calculate iterative problems quickly, where calculations
 in the previous iteration are needed for the next iteration and, where
 brute force operations are the only valid option.
 
 Proof:
 Take as an example, Fibonacci numbers
 http://en.wikipedia.org/wiki/Fibonacci_number
 
 What are the first 100 Fibonacci numbers?
 
  // Note: even 64-bit integers overflow past Fibonacci[93], so later
  // entries wrap around; the point here is the loop structure, not the values.
  unsigned long long Fibonacci[102];
  Fibonacci[0] = 0;
  Fibonacci[1] = 1;
  for(int i = 0; i < 100; i++)
  {
      // Getting the next Fibonacci number relies on the previous values
      Fibonacci[i+2] = Fibonacci[i] + Fibonacci[i+1];
  }
 
 My brain knows the process to solve this problem but it can't directly
 write a looping construct into itself. And so it solves it very slowly
 compared to a computer. 
 
 The brain probably consists of vast repeating look-up tables. Of course,
 run in parallel these seem fast.
 
 
  DNA has vast tracts of repeating data. Why would DNA contain repeating
  data, instead of just having the data once and the number of times it's
  repeated, like in a loop? One explanation is that DNA can't do looping
  constructs either.
 
 
 
 On Wed, 2010-07-14 at 02:43 +0100, Mike Tintner wrote:
  Michael: We can't do operations that
  require 1,000,000 loop iterations.  I wish someone would give me a PHD
  for discovering this ;) It far better describes our differences than any
  other theory.
  
  Michael,
  
  This isn't a competitive point - but I think I've made that point several 
  times (and so of course has Hawkins). Quite obviously, (unless you think 
  the 
  brain has fabulous hidden powers), it conducts searches and other 
  operations 
  with extremely few limited steps, and nothing remotely like the routine 
  millions to billions of current computers.  It must therefore work v. 
  fundamentally differently.
  
  Are you saying anything significantly different to that?
  
  --
  From: Michael Swan ms...@voyagergaming.com
  Sent: Wednesday, July 14, 2010 1:34 AM
  To: agi agi@v2.listbox.com
  Subject: Re: [agi] What is the smallest set of operations that can 
  potentially  define everything and how do you combine them ?
  
  
   On Tue, 2010-07-13 at 07:00 -0400, Ben Goertzel wrote:
   Well, if you want a simple but complete operator set, you can go with
  
   -- Schonfinkel combinator plus two parentheses
  
   I'll check this out soon.
   or
  
   -- S and K combinator plus two parentheses
  
   and I suppose you could add
  
   -- input
   -- output
   -- forget
  
   statements to this, but I'm not sure what this gets you...
  
   Actually, adding other operators doesn't necessarily
   increase the search space your AI faces -- rather, it
   **decreases** the search space **if** you choose the right operators, 
   that
   encapsulate regularities

Re: [agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-14 Thread Michael Swan

On Wed, 2010-07-14 at 17:51 -0700, Matt Mahoney wrote:
 Michael Swan wrote:
  What 3456/6 ?
  we don't know, at least not from the top of our head.
 
 No, it took me about 10 or 20 seconds to get 576. Starting with the first 
 digit, 
 3/6 = 1/2 (from long term memory) and 3 is in the thousands place, so 1/2 of 
 1000 is 500 (1/2 = .5 from LTM). I write 500 into short term memory (STM), 
 which 
 only has enough space to hold about 7 digits. Then to divide 45/6 I get 42/6 
 = 7 
 with a remainder of 3, or 7.5, but since this is in the tens place I get 75. 
 I 
 put 75 in STM, add to 500 to get 575, put the result back in STM replacing 
 500 
 and 75 for which there is no longer room. Finally, 6/6 = 1, which I add to 
 575 
 to get 576. I hold this number in STM long enough to check with a calculator.
The brain does have one global loop, which I think runs at about 100~200
hertz. I would argue that you're using that. Also note, brains are unlikely
to use RAM. Memory is most likely stored very locally to the process, as
the brain probably can't access memory frivolously the way a computer can.
So processes that require going backwards have to wait for the next
global loop to get the data, causing a massive loss of time.
So about (~10 sec * ~100 hertz) = 1000+ loops, which I suspect is about
right.


 
 One could argue that this calculation in my head uses a loop iterator (in 
 STM) 
 to keep track of which digit I am working on. It definitely involves a 
 sequence 
 of instructions with intermediate results being stored temporarily. The brain 
 can only execute 2 or 3 sequential instructions per second and has very 
 limited 
 short term memory, so it needs to draw from a large database of rules to 
 perform 
 calculations like this. A calculator, being faster and having more RAM, is 
 able 
 to use simpler but more tedious algorithms such as converting to binary, 
 division by shift and subtract, and converting back to decimal. Doing this 
 with 
 a carbon based computer would require pencil and paper to make up for lack of 
 STM, and it would require enough steps to have a high probability of making a 
 mistake.
 
 Intelligence = knowledge + computing power.
+ a clever way of using that computing power

  The human brain has a lot of 
 knowledge. The calculator has less knowledge, but makes up for it in speed 
 and 
 memory.

 
  -- Matt Mahoney, matmaho...@yahoo.com
 
 
 
 - Original Message 
 From: Michael Swan ms...@voyagergaming.com
 To: agi agi@v2.listbox.com
 Sent: Wed, July 14, 2010 7:53:33 PM
 Subject: Re: [agi] What is the smallest set of operations that can 
 potentially  
 define everything and how do you combine them ?
 
 On Wed, 2010-07-14 at 07:48 -0700, Matt Mahoney wrote:
  Actually, Fibonacci numbers can be computed without loops or recursion.
  
  int fib(int x) {
return round(pow((1+sqrt(5))/2, x)/sqrt(5));
  }
 ;) I know. I was wondering if someone would pick up on it. This won't
 prove that brains have loops though, so I wasn't concerned about the
 shortcuts. 
  unless you argue that loops are needed to compute sqrt() and pow().
  
 I would find it extremely unlikely that brains have *, /, and even more
 unlikely to have sqrt and pow inbuilt. Even more unlikely, even if it
 did have them, to figure out how to combine them to round(pow((1
 +sqrt(5))/2, x)/sqrt(5)). 
 
 Does this mean we should discount all maths that use any complex
 operations ? 
 
 I suspect the brain is full of look-up tables mainly, with some fairly
 primitive methods of combining the data. 
 
 eg What's 6 / 3 ?
 ans = 2 most people would get that because it's been wrote learnt, a
 common problem.
 
 What 3456/6 ?
 we don't know, at least not from the top of our head.
 
 
  The brain and DNA use redundancy and parallelism and don't use loops 
  because 
  their operations are slow and unreliable. This is not necessarily the best 
  strategy for computers because computers are fast and reliable but don't 
  have a 
 
  lot of parallelism.
 
The brain's slow and unreliable methods are, I think, the price paid for
generality and innately unreliable hardware. Imagine writing a computer
program that runs for 120 years without crashing and survives damage
like a brain can. I suspect the perfect AGI program is a rigorous
combination of the 2. 
 
 
  
   -- Matt Mahoney, matmaho...@yahoo.com
  
  
  
  - Original Message 
  From: Michael Swan ms...@voyagergaming.com
  To: agi agi@v2.listbox.com
  Sent: Wed, July 14, 2010 12:18:40 AM
  Subject: Re: [agi] What is the smallest set of operations that can 
  potentially  
 
  define everything and how do you combine them ?
  
  Brain loops:
  
  
  Premise:
 Biological brain code does not contain looping constructs, or the
 ability to create looping code (due to the fact that they are extremely
 dangerous on unreliable hardware), except for 1 global loop that fires
 about 200 times a second.
  
  Hypothesis:
  Brains cannot calculate iterative problems quickly, where

Re: [agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-14 Thread Michael Swan

On Thu, 2010-07-15 at 01:37 +0100, Mike Tintner wrote:
 Michael :The brains slow and unreliable methods I think are the price paid 
 for
 generality and innately unreliable hardware
 
 Yes to one - nice to see an AGI-er finally starting to join up the dots, 
 instead of simply dismissing the brain's massive difficulties in maintaining 
 a train of thought.
 
 No to two -innately unreliable hardware is the price of innately 
 *adaptable* hardware - that can radically grow and rewire (wh. is the other 
 advantage the brain has over computers).  Any thoughts about that and what 
 in more detail are the advantages of an organic computer?
Program software can rewire itself in some senses: one creates
virtual hardware inside the program as though it were real hardware.
But it's extremely rare to find programs that are purely general, so much
so that I doubt purely general ones even exist. Are NNs purely general? Are
GAs purely general? I thought perhaps code that writes code could
potentially reach such a lofty goal (as it can turn into a GA or NN or,
well, anything). Then I thought the code writing the code restricts what
the written code can be. 

So then I made some simple experiments with code modifying itself.
The end result was (at least I suspect it was) surprisingly similar to
DNA. 

I still had a large section of code whose purpose was to read part of
itself and modify it, and this large piece of code had no bearing on
what the modified code actually did. 

DNA has 2 sections: a coding section, which actually does most of the hard
work, and the poorly named junk DNA (or non-coding DNA), which most
biologists thought did nothing, until they discovered it doing stuff all
over the place, in a somewhat discrete, subtle fashion.

So, is my experiment 6
http://codegenerationdesign.webs.com/index.htm
the first ever program to roughly mimic the programming of DNA ?

I find this really hard to prove, but I think it remains a possibility.

Apparently, biologists don't think much of my degree in biology from the
University of Wikipedia, nature docs, and other random stuff you read
on the internet.


 
 In addition, the unreliable hardware is also a price of global 
 hardware - that has the basic capacity to connect more or less any bit of 
 information in any part of the brain with any bit of information in any 
 other part of the brain - as distinct from the local hardware of computers 
 wh. have to go through limited local channels to limited local stores of 
 information to make v. limited local kinds of connections. Well, that's my 
 tech-ignorant take on it - but perhaps you can expand on the idea.  I would 
 imagine v. broadly the brain is globally connected vs the computer wh. is 
 locally connected. 
Yep, the ability to grab memory from anywhere is called RAM - Random
Access Memory. A single neuron can only access data from its 25,000
connections, which sounds like a lot, but isn't, because computers can
access a theoretically infinite set of data. 

Given that the program in a brain can only go forward, how does it tell
other neurons that it wants data about X that is behind it?

One theory is that certain neurons detect that they need more data, and
create a greater positive charge to attract more of the negatively charged
data. So in a sense they suck more data into themselves, effectively
sending a different, non-dangerous backward-running signal. (Author's
note: I can't prove this at all; it is just a possibility.)




 
 
 
 
 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription: https://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com





Re: [agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-14 Thread Michael Swan

 
 
 I'd argue that mathematical operations are unnecessary,
  we don't even have integer support inbuilt.
I'd disagree. > is a mathematical operation, and in combination can
become an enormous number of concepts.

Sure, I think the brain is more sensibly understood in a
programmatic sense than a mathematical one.

I say programmatic because it probably has 100 billion or so
conditional statements, a difficult thing to represent mathematically.
Even so, each conditional is going to have maths constructs in it.


   The number meme is a bit of a hack on top of language that has been
 modified throughout the years.
   We have a peripheral that allows us decent support for the numbers
 1-10, but beyond that numbers are basically words to which several
 different finicky grammars can be applied as far as our brains are
 concerned.

True, but numbers' awesomeness lies in their power to represent relative
differences between any concepts. With this power, numbers are a
universal language, a language that can represent any other language,
and hence the ideal language, and probably the only real choice, for an AGI. 








Re: [agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-13 Thread Michael Swan

On Tue, 2010-07-13 at 07:00 -0400, Ben Goertzel wrote:
 Well, if you want a simple but complete operator set, you can go with
 
 -- Schonfinkel combinator plus two parentheses
 
I'll check this out soon.
 or
 
 -- S and K combinator plus two parentheses
 
 and I suppose you could add
 
 -- input
 -- output
 -- forget
 
 statements to this, but I'm not sure what this gets you...
 
 Actually, adding other operators doesn't necessarily
 increase the search space your AI faces -- rather, it
 **decreases** the search space **if** you choose the right operators, that
 encapsulate regularities in the environment faced by the AI

Unfortunately, an AGI needs to be absolutely general. You are right that
higher-level concepts reduce combinations; however, using them will
increase combinations for simpler operator combinations, and if you
miss a necessary operator, then some concepts will be impossible to
achieve. The smallest set can define higher-level concepts, and these
concepts can later be integrated as single operations, which means that
using operators that can be understood in terms of smaller operators
in the beginning will definitely increase your combinations later on.

The smallest operator set is like absolute zero. It has a defined end, a
defined way of finding out what the operators are.



 
 Exemplifying this, writing programs doing humanly simple things
 using S and K is a pain and involves piling a lot of S and K and parentheses
 on top of each other, whereas if we introduce loops and conditionals and
 such, these programs get shorter.  Because loops and conditionals happen
 to match the stuff that our human-written programs need to do...
Loops are evil in most situations.

Let me show you why:
Draw a square using put_pixel(x,y)

// Loops are more scalable, but damage this code anywhere and it can
// potentially kill every other process, not just itself. This is why
// computers die all the time.
for (int x = 0; x < 2; x++)
{   
    for (int y = 0; y < 2; y++)
    {
        put_pixel(x,y);
    }
}

as opposed to

/* The below is faster (even on single-step instructions), can be run
in parallel, and is damage resistant (ie destroy put_pixel(0,1); and the
rest of the code will still run), but is less scalable (more code is
required for larger operations). */
put_pixel(0,0);
put_pixel(0,1);
put_pixel(1,0);
put_pixel(1,1);

The lack of loops in the brain is a fundamental difference between
computers and brains. Think about it: we can't do operations that
require 1,000,000 loop iterations. I wish someone would give me a PhD
for discovering this ;) It far better describes our differences than any
other theory.


 A better question IMO is what set of operators and structures has the
 property that the compact expressions tend to be the ones that are useful
 for survival and problem-solving in the environments that humans and human-
 like AIs need to cope with...

For me that is stage 2.

 
 -- Ben G
 
 On Tue, Jul 13, 2010 at 1:43 AM, Michael Swan ms...@voyagergaming.com wrote:
  Hi,
 
  I'm interested in combining the simplest, most derivable operations
  ( eg operations that cannot be defined by other operations) for creating
  seed AGI's. The simplest operations combined in a multitude ways can
  form extremely complex patterns, but the underlying logic may be
  simple.
 
  I wonder if varying combinations of the smallest set of operations:
 
  { >, memory (= for memory assignment), ==, (a logical way to
  combine them), (input, output), () brackets }
 
  can potentially learn and define everything.
 
  Assume all input is from numbers.
 
  We want the smallest set of elements, because less elements mean less
  combinations which mean less chance of hitting combinatorial explosion.
 
   > helps for generalisation, reducing combinations.
 
  memory(=) is for hash look ups, what should one remember? What can be
  discarded?
 
  == This does a comparison between 2 values x == y is 1 if x and y are
  exactly the same. Returns 0 if they are not the same.
 
  (a logical way to combine them) Any non-narrow algorithm that reduces
  the raw data into a simpler state will do. Philosophically like
  Solomonoff Induction. This is the hardest part. What is the most optimal
  way of combining the above set of operations?
 
  () brackets are used to order operations.
 
 
 
 
  Conditionals (only if statements) + memory assignment are the only valid
  form of logic - ie no loops. Just repeat code if you want loops.
 
 
  If you think that the set above cannot define everything, then what is
  the smallest set of operations that can potentially define everything?
 
  --
  Some proofs / Thought experiments :
 
  1) Can >, ==, (), and memory define other logical operations like &&
  (AND gate)?
 
  I propose that x==y==1 defines x&&y
 
  x&&y       x==y==1
  0&&0 = 0   0==0==1 = 0
  1&&0 = 0   1==0==1 = 0
  0&&1 = 0   0==1==1 = 0
  1&&1 = 1   1==1==1 = 1

Re: [agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-13 Thread Michael Swan
Brain loops:


Premise:
Biological brain code does not contain looping constructs, or the
ability to create looping code (due to the fact that they are extremely
dangerous on unreliable hardware), except for 1 global loop that fires
about 200 times a second.

Hypothesis:
Brains cannot calculate iterative problems quickly, where calculations
in the previous iteration are needed for the next iteration and, where
brute force operations are the only valid option.

Proof:
Take as an example, Fibonacci numbers
http://en.wikipedia.org/wiki/Fibonacci_number

What are the first 100 Fibonacci numbers?

int Fibonacci[102];
Fibonacci[0] = 0;
Fibonacci[1] = 1;
for(int i = 0; i < 100; i++)
{
// Getting the next Fibonacci number relies on the previous values
Fibonacci[i+2] = Fibonacci[i] + Fibonacci[i+1];
}  

My brain knows the process to solve this problem but it can't directly
write a looping construct into itself. And so it solves it very slowly
compared to a computer. 

The brain probably consists of vast repeating look-up tables. Of course,
run in parallel these seem fast.


DNA has vast tracts of repeating data. Why would DNA contain repeating
data, instead of just having the data once plus the number of times it's
repeated, like in a loop? One explanation is that DNA can't do looping
constructs either.



On Wed, 2010-07-14 at 02:43 +0100, Mike Tintner wrote:
 Michael: We can't do operations that
 require 1,000,000 loop iterations.  I wish someone would give me a PHD
 for discovering this ;) It far better describes our differences than any
 other theory.
 
 Michael,
 
 This isn't a competitive point - but I think I've made that point several 
 times (and so of course has Hawkins). Quite obviously, (unless you think the 
 brain has fabulous hidden powers), it conducts searches and other operations 
 with extremely few limited steps, and nothing remotely like the routine 
 millions to billions of current computers.  It must therefore work v. 
 fundamentally differently.
 
 Are you saying anything significantly different to that?
 
 --
 From: Michael Swan ms...@voyagergaming.com
 Sent: Wednesday, July 14, 2010 1:34 AM
 To: agi agi@v2.listbox.com
 Subject: Re: [agi] What is the smallest set of operations that can 
 potentially  define everything and how do you combine them ?
 
 
  On Tue, 2010-07-13 at 07:00 -0400, Ben Goertzel wrote:
  Well, if you want a simple but complete operator set, you can go with
 
  -- Schonfinkel combinator plus two parentheses
 
  I'll check this out soon.
  or
 
  -- S and K combinator plus two parentheses
 
  and I suppose you could add
 
  -- input
  -- output
  -- forget
 
  statements to this, but I'm not sure what this gets you...
 
  Actually, adding other operators doesn't necessarily
  increase the search space your AI faces -- rather, it
  **decreases** the search space **if** you choose the right operators, 
  that
  encapsulate regularities in the environment faced by the AI
 
  Unfortunately, an AGI needs to be absolutely general. You are right that
  higher level concepts reduce combinations, however, using them, will
  increase combinations for simpler operator combinations, and if you
  miss a necessary operator, then some concepts will be impossible to
  achieve. The smallest set can define higher level concepts, these
  concepts can be later integrated as single operations, which means
  using operators than can be understood in terms of smaller operators
  in the beginning, will definitely increase you combinations later on.
 
  The smallest operator set is like absolute zero. It has a defined end. A
  defined way of finding out what they are.
 
 
 
 
  Exemplifying this, writing programs doing humanly simple things
  using S and K is a pain and involves piling a lot of S and K and 
  parentheses
  on top of each other, whereas if we introduce loops and conditionals and
  such, these programs get shorter.  Because loops and conditionals happen
  to match the stuff that our human-written programs need to do...
  Loops are evil in most situations.
 
 Let me show you why:
 Draw a square using put_pixel(x,y)

 // Loops are more scalable, but damage this code anywhere and it can
 // potentially kill every other process, not just itself. This is why
 // computers die all the time.
 for (int x = 0; x < 2; x++)
 {
 for (int y = 0; y < 2; y++)
 {
 put_pixel(x,y);
 }
 }

 as opposed to

 /* The below is faster (even on single-step instructions), can be run
 in parallel, and is damage resistant (ie destroy put_pixel(0,1); and the
 rest of the code will still run), but is less scalable (more code is
 required for larger operations). */
 put_pixel(0,0);
 put_pixel(0,1);
 put_pixel(1,0);
 put_pixel(1,1);
 
  The lack of loops in the brain is a fundamental difference between
  computers and brains. Think about it. We can't do operations that
  require 1,000,000 loop iterations.  I wish someone would give me a PHD

Re: [agi] Mechanical Analogy for Neural Operation!

2010-07-12 Thread Michael Swan
Hi,

I pretty much always think of a NN as a physical device.

I think the first binary computer was dreamt up with balls going through
the system, with balls representing 1s and 0s. The idea was written down
but never built.

Jamming balls that give way at a certain point is the same as using >.

ie When more than 6 balls jam up, the pressure is released, sending a 1
or a value > 6 balls.

Addition can be a little different in such systems.

ie a value > 6 + a value > 3 = a value > 9. 

On Sun, 2010-07-11 at 23:02 -0700, Steve Richfield wrote:
 Everyone has heard about the water analogy for electrical operation. I
 have a mechanical analogy for neural operation that just might be
 solid enough to compute at least some characteristics optimally.
 
 No, I am NOT proposing building mechanical contraptions, just using
 the concept to compute neuronal characteristics (or AGI formulas for
 learning).
 
 Suppose neurons were mechanical contraptions, that receive inputs and
 communicate outputs via mechanical movements. If one or more of the
 neurons connected to an output of a neuron, can't make sense of a
 given input given its other inputs, then its mechanism would
 physically resist the several inputs that didn't make mutual sense
 because its mechanism would jam, with the resistance possibly coming
 from some downstream neuron.
 
 This would utilize position to resolve opposing forces, e.g. one
 force being the observed inputs, and the other force being that
 they don't make sense, suggest some painful outcome, etc. In short,
 this would enforce the sort of equation over the present formulaic
 view of neurons (and AGI coding) that I have suggested in past
 postings may be present, and show that the math may not be all that
 challenging.
 
 Uncertainty would be expressed in stiffness/flexibility, computed
 limitations would be handled with over-running clutches, etc.
 
 Propagation of forces would come close (perfect?) to being able to
 identify just where in a complex network something should change to
 learn as efficiently as possible.
 
 Once the force concentrates at some point, it then gives, something
 slips or bends, to unjam the mechanism. Thus, learning is effected.
 
 Note that this suggests little difference between forward propagation
 and backwards propagation, though real-world wet design considerations
 would clearly prefer fast mechanisms for forward propagation, and
 compact mechanisms for backwards propagation.
 
 Epiphany or mania?
 
 Any thoughts?
 
 Steve
 





[agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-12 Thread Michael Swan
Hi,

I'm interested in combining the simplest, most derivable operations
( eg operations that cannot be defined by other operations) for creating
seed AGI's. The simplest operations combined in a multitude ways can
form extremely complex patterns, but the underlying logic may be
simple. 

I wonder if varying combinations of the smallest set of operations:

{ >, memory (= for memory assignment), ==, (a logical way to
combine them), (input, output), () brackets } 

can potentially learn and define everything. 

Assume all input is from numbers.

We want the smallest set of elements, because fewer elements mean fewer
combinations, which mean less chance of hitting combinatorial explosion.

> helps for generalisation, reducing combinations. 

memory (=) is for hash look-ups: what should one remember? What can be
discarded? 

== does a comparison between 2 values: x == y is 1 if x and y are
exactly the same, and 0 if they are not.

(a logical way to combine them) Any non-narrow algorithm that reduces
the raw data into a simpler state will do. Philosophically like
Solomonoff Induction. This is the hardest part. What is the most optimal
way of combining the above set of operations?

() brackets are used to order operations. 




Conditionals (only if statements) + memory assignment are the only valid
form of logic - ie no loops. Just repeat code if you want loops. 


If you think that the set above cannot define everything, then what is
the smallest set of operations that can potentially define everything? 

--
Some proofs / Thought experiments :

1) Can >, ==, (), and memory define other logical operations like &&
(AND gate)?

I propose that x==y==1 defines x&&y

x&&y       x==y==1
0&&0 = 0   0==0==1 = 0
1&&0 = 0   1==0==1 = 0
0&&1 = 0   0==1==1 = 0
1&&1 = 1   1==1==1 = 1

It means && can be completely defined using ==, therefore && is not
one of the smallest possible general concepts. && can be potentially
learnt from ==.

-

2) Write an algorithm that can define 1 using only >, ==, ().

Multiple answers:
a) discrete 1 could use
x == 1

b) continuous 1.0 could use this rule 
For those not familiar with C++, ! means not 
(x > 0.9) && !(x > 1.1)   expanding gives (getting rid of ! and &&)
(x > 0.9) == ((x > 1.1) == 0) == 1    note: !x can be defined in terms
of == like so: x == 0.

(b) is a generalisation and expansion of the definition of (a), and can
be scaled by changing the values 0.9 and 1.1 to fit what others
would generally define as being 1.






Re: [agi] Huge Progress on the Core of AGI

2010-06-28 Thread Michael Swan

On Mon, 2010-06-28 at 13:21 +0100, Mike Tintner wrote:
 MS: I'm solving this by using an algorithm + exceptions routines.
 
 You're saying there are predictable patterns to human and animal behaviour 
 in their activities, (like sports and investing) - and in this instance how 
 humans change tactics?
 
 What empirical evidence do you have for this, apart from zero, and over 300 
 years of scientific failure to produce any such laws or patterns of 
 behaviour?
 
 What evidence in the slightest do you have for your algorithm working?
Still in the testing phase. It's more complicated than just (algorithm +
exceptions); there are multiple levels of accuracy of data, and of how
you combine the multiple levels of data.
 

 
 The evidence to the contrary, that human and animal behaviour, are not 
 predictable is pretty overwhelming.
 
 Taking into account the above, how would you mathematically assess the cases 
 for proceeding on the basis that a) living organisms  ARE predictable vs b) 
 living organisms are NOT predictable?  Roughly about the same as a) you WILL 
 win the lottery vs b) you WON'T win? Actually that is almost certainly being 
 extremely kind - you do have a chance of winning the lottery.
 
 --
 From: Michael Swan ms...@voyagergaming.com
 Sent: Monday, June 28, 2010 4:17 AM
 To: agi agi@v2.listbox.com
 Subject: Re: [agi] Huge Progress on the Core of AGI
 
 
  On Sun, 2010-06-27 at 19:38 -0400, Ben Goertzel wrote:
 
  Humans may use sophisticated tactics to play Pong, but that doesn't
  mean it's the only way to win
 
  Humans use subtle and sophisticated methods to play chess also, right?
  But Deep Blue still kicks their ass...
 
  If the rules of chess changed slightly, without being reprogrammed deep
  blue sux.
  And also there is anti deep blue chess. Play chess where you avoid
  losing and taking pieces for as long as possible to maintain high
  combination of possible outcomes, and avoid moving pieces in known
  arrangements.
 
  Playing against another human player like this you would more than
  likely lose.
 
 
  The stock market is another situation where narrow-AI algorithms may
  already outperform humans ... certainly they outperform all except the
  very best humans...
 
  ... ben g
 
  On Sun, Jun 27, 2010 at 7:33 PM, Mike Tintner
  tint...@blueyonder.co.uk wrote:
  Oh well that settles it...
 
  How do you know then when the opponent has changed his
  tactics?
 
  How do you know when he's switched from a predominantly
  baseline game say to a net-rushing game?
 
  And how do you know when the market has changed from bull to
  bear or vice versa, and I can start going short or long? Why
  is there any difference between the tennis  market
  situations?
 
 
  I'm solving this by using an algorithm + exceptions routines.
 
  eg Input 100 numbers - write an algorithm that generalises/compresses
  the input.
 
  ans may be
  (input_is_always > 0)  // highly general
 
  (if fail try exceptions)
  // exceptions
  // highly accurate exceptions
  (input35 == -4)
  (input75 == -50)
  ..
  more generalised exceptions, etc
 
  I believe such a system is similar to the way we remember things. eg -
  We tend to have highly detailed memory for exceptions - we tend to
  remember things about white whales more than ordinary whales. In
  fact, there was a news story the other night on a returning white whale
  in Brisbane, and there are additional laws to stay way from this whale
  in particular, rather than all whales in general.
 
 
 
 
 
 
 
 
 
 
  From: Ben Goertzel
  Sent: Monday, June 28, 2010 12:03 AM
 
  To: agi
  Subject: Re: [agi] Huge Progress on the Core of AGI
 
 
 
  Even with the variations you mention, I remain highly
  confident this is not a difficult problem for narrow-AI
  machine learning methods
 
  -- Ben G
 
  On Sun, Jun 27, 2010 at 6:24 PM, Mike Tintner
  tint...@blueyonder.co.uk wrote:
  I think you're thinking of a plodding limited-movement
  classic Pong line.
 
  I'm thinking of a line that can like a human
  player move with varying speed and pauses to more or
  less any part of its court to hit the ball, and then
  hit it with varying speed to more or less any part of
  the opposite court. I think you'll find that bumps up
  the variables if not unknowns massively.
 
  Plus just about every shot exchange presents you with
  dilemmas of how to place your shot and then move in
  anticipation of your opponent's return .
 
  Remember the object here is to present a would-be AGI
  with a simple but *unpredictable* object to deal with,
  reflecting

Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread Michael Swan

On Mon, 2010-06-28 at 16:15 +0100, Mike Tintner wrote:
 That's why Michael can't bear to even contemplate a world in which
 things 
 and people behave unpredictably. (And Ben can't bear to contemplate a 
 stockmarket that is obviously unpredictable).
 
 If he were an artist his instincts would be the opposite - he'd go for
 the 
 irregular and patchy and unpredictable twists. If he were drawing a
 box 
 going across a screen, he would have to put some irregularity in 
 omewhere  - put in some fits and starts and stops - there's always an 
 irregular twist in the picture or the tale. An artist has to put some 
 surprise and life into what he does -

You patternise the things that are patternisable - like an erratic
waving arm is still an arm, and its pattern is erratic. Also, note I
used to be an art and animation lecturer for 2 years ;) 





Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Michael Swan

On Sun, 2010-06-27 at 19:38 -0400, Ben Goertzel wrote:
 
 Humans may use sophisticated tactics to play Pong, but that doesn't
 mean it's the only way to win
 
 Humans use subtle and sophisticated methods to play chess also, right?
 But Deep Blue still kicks their ass...

If the rules of chess changed slightly, Deep Blue would suck without
being reprogrammed.
And there is also anti-Deep-Blue chess: play chess where you avoid
losing and taking pieces for as long as possible, to maintain a high
combination of possible outcomes, and avoid moving pieces into known
arrangements.

Playing like this against another human player, you would more than
likely lose.

 
 The stock market is another situation where narrow-AI algorithms may
 already outperform humans ... certainly they outperform all except the
 very best humans...
 
 ... ben g
 
 On Sun, Jun 27, 2010 at 7:33 PM, Mike Tintner
 tint...@blueyonder.co.uk wrote:
 Oh well that settles it...
  
 How do you know then when the opponent has changed his
 tactics?
  
 How do you know when he's switched from a predominantly
 baseline game say to a net-rushing game?
  
 And how do you know when the market has changed from bull to
 bear or vice versa, and I can start going short or long? Why
 is there any difference between the tennis  market
 situations?


I'm solving this by using an algorithm + exceptions routines.

eg Input 100 numbers - write an algorithm that generalises/compresses
the input.

ans may be
(input_is_always > 0)  // highly general

(if fail, try exceptions)
// exceptions   
// highly accurate exceptions
(input35 == -4) 
(input75 == -50)
..
more generalised exceptions, etc

I believe such a system is similar to the way we remember things. eg -
We tend to have highly detailed memory for exceptions - we tend to
remember things about white whales more than ordinary whales. In
fact, there was a news story the other night on a returning white whale
in Brisbane, and there are additional laws to stay away from this whale
in particular, rather than all whales in general.

  
  
  
  
  
  
  
 
 
 From: Ben Goertzel 
 Sent: Monday, June 28, 2010 12:03 AM
 
 To: agi 
 Subject: Re: [agi] Huge Progress on the Core of AGI
 
 
 
 Even with the variations you mention, I remain highly
 confident this is not a difficult problem for narrow-AI
 machine learning methods
 
 -- Ben G
 
 On Sun, Jun 27, 2010 at 6:24 PM, Mike Tintner
 tint...@blueyonder.co.uk wrote:
 I think you're thinking of a plodding limited-movement
 classic Pong line.
  
 I'm thinking of a line that can like a human
 player move with varying speed and pauses to more or
 less any part of its court to hit the ball, and then
 hit it with varying speed to more or less any part of
 the opposite court. I think you'll find that bumps up
 the variables if not unknowns massively.
  
 Plus just about every shot exchange presents you with
 dilemmas of how to place your shot and then move in
 anticipation of your opponent's return .
  
 Remember the object here is to present a would-be AGI
 with a simple but *unpredictable* object to deal with,
 reflecting the realities of there being a great many
 such objects in the real world - as distinct from
 Dave's all too predictable objects.
  
 The possible weakness of this pong example is that
 there might at some point cease to be unknowns, as
 there always are in real world situations, incl
 tennis. One could always introduce them if necessary -
 allowing say creative spins on the ball.
  
 But I doubt that it will be necessary here for the
 purposes of anyone like Dave -  and v. offhand and
 with no doubt extreme license this strikes me as not a
 million miles from a hyper version of the TSP problem,
 where the towns can move around, and you can't be sure
 whether they'll be there when you arrive.  Or is there
 an obviously true solution for that problem too?
 [Very convenient these obviously true solutions].
  
 
 
 From: Jim Bromer 
 Sent: Sunday, June 27, 2010 8:53 PM
 
 To: agi 
 Subject: Re: [agi] Huge Progress on the Core 

Re: [agi] High Frame Rates Reduce Uncertainty

2010-06-21 Thread Michael Swan
Hi,
* AGI should be scalable - more data just means the potential for more
accurate results.
* More data can chew up more computation time without a benefit, i.e. if
all you want to do is identify a bird, it's still a bird at 1 fps and at
1000 fps.
* Don't aim for precision, aim for generality. E.g. the AGI KNOWS 1000
objects. If you test whether your object is a bird, and it is not, you
still have 999 possible objects. If you test whether it is an animal,
you can split your search space in half - you've reduced the
possibilities to 500. Successive generalisations produce accuracy,
sometimes referred to as a hierarchical approach.
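
The halving argument above can be sketched in a few lines (the object
list and the yes/no tests below are made-up illustrations, not a real
knowledge base): each general test prunes the candidate set, so k
binary tests narrow 1000 objects to roughly 1000 / 2**k.

```python
# Sketch of successive generalisation: apply broad yes/no tests
# first, keeping only the candidates that pass each one.
# The attributes "animal" and "flies" are hypothetical.

def narrow(candidates, tests):
    """Filter candidates through a sequence of general predicates."""
    for test in tests:
        candidates = [c for c in candidates if test(c)]
    return candidates

objects = [{"name": f"obj{i}", "animal": i % 2 == 0, "flies": i % 4 == 0}
           for i in range(1000)]

# Two binary tests cut the space from 1000 to 250 candidates.
remaining = narrow(objects, [lambda o: o["animal"], lambda o: o["flies"]])
print(len(objects), "->", len(remaining))  # 1000 -> 250
```

Each test is cheap and general; precision comes out of the accumulated
narrowing rather than from any single precise check.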

On Fri, 2010-06-18 at 14:19 -0400, David Jones wrote:
 I just came up with an awesome idea. I just realized that the brain
 takes advantage of high frame rates to reduce uncertainty when it is
 estimating motion. The slower the frame rate, the more uncertainty
 there is because objects may have traveled too far between images to
 match with high certainty using simple techniques. 
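
Dave's displacement argument can be put in toy numerical terms (the
spacing, speed, and frame rates below are invented for illustration,
not taken from his post): nearest-neighbour matching across frames is
only unambiguous while the per-frame displacement stays below half the
spacing between distinct objects, and raising the frame rate shrinks
that displacement directly.

```python
# Toy model of matching uncertainty vs frame rate: two objects
# SPACING units apart, both moving at SPEED units per second.

SPACING = 10.0   # distance between the two objects
SPEED = 30.0     # units per second

def per_frame_displacement(fps):
    """How far an object moves between consecutive frames."""
    return SPEED / fps

def match_is_unambiguous(fps):
    # Nearest-neighbour matching is safe while objects move less
    # than half the inter-object spacing between frames.
    return per_frame_displacement(fps) < SPACING / 2

for fps in (5, 30, 120):
    print(fps, "fps:", per_frame_displacement(fps), "units/frame ->",
          "unambiguous" if match_is_unambiguous(fps) else "ambiguous")
```

At 5 fps the objects jump 6 units per frame and the simple matcher can
confuse them; at 30 fps and above the motion per frame is small enough
that the nearest candidate is always the right one.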
 
 So, this made me think: what if the secret to the brain's ability to
 learn generally stems from this high frame rate trick? What if we made
 a system that could process even higher frame rates than the brain can?
 By doing this you can reduce the uncertainty of matches to a very low
 level (well, in my theory so far). If you can do that, then you can
 learn about the objects in a video, and how they move together or
 separately, with very high certainty. 
 
 You see, matching is the main barrier when learning about objects. But
 with a very high frame rate, we can use a fast algorithm and could
 potentially reduce the uncertainty to almost nothing. Once we learn
 about objects, matching gets easier because now we have training data
 and experience to take advantage of. 
 
 In addition, you can also gain knowledge about lighting, color
 variation, noise, etc. With that knowledge, you can then automatically
 create a model of the object with extremely high confidence. You will
 also be able to determine the effects of light and noise on the
 object's appearance, which will help match the object invariantly in
 the future. It allows you to determine what is expected and unexpected
 for the object's appearance with much higher confidence. 
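
One standard way to sketch "learn what is expected for the object's
appearance" - not necessarily what Dave has in mind - is a per-pixel
running mean and variance over many frames (Welford's online
algorithm), flagging any value that falls far outside the learned
range. The intensity values below are invented.

```python
# Online appearance model for a single pixel: track mean and
# variance incrementally, then test new values against the model.

def update(stats, value):
    """Welford's online update for (count, mean, sum-of-squares)."""
    n, mean, m2 = stats
    n += 1
    delta = value - mean
    mean += delta / n
    m2 += delta * (value - mean)
    return (n, mean, m2)

def is_unexpected(stats, value, k=3.0):
    """Flag values more than k standard deviations from the mean."""
    n, mean, m2 = stats
    var = m2 / (n - 1) if n > 1 else 0.0
    return abs(value - mean) > k * max(var, 1e-6) ** 0.5

stats = (0, 0.0, 0.0)
for v in [100, 102, 99, 101, 100, 98]:   # observed pixel intensities
    stats = update(stats, v)

print(is_unexpected(stats, 101))  # small variation: expected -> False
print(is_unexpected(stats, 180))  # large jump: unexpected -> True
```

With more frames the variance estimate tightens, which is exactly the
sense in which a high frame rate buys confidence about what counts as
normal lighting or noise for the object.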
 
 Pretty cool idea huh?
 
 Dave



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com