[agi] Mechanical Analogy for Neural Operation!

2010-07-12 Thread Steve Richfield
Everyone has heard about the water analogy for electrical operation. I have
a mechanical analogy for neural operation that just might be solid enough
to compute at least some characteristics optimally.

No, I am NOT proposing building mechanical contraptions, just using the
concept to compute neuronal characteristics (or AGI formulas for learning).

Suppose neurons were mechanical contraptions that received inputs and
communicated outputs via mechanical movements. If one or more of the neurons
connected to an output of a neuron couldn't make sense of a given input,
given its other inputs, then it would physically resist the several
inputs that didn't make mutual sense, because its mechanism would jam, with
the resistance possibly coming from some downstream neuron.

This would utilize position to resolve opposing forces, e.g. one force
being the observed inputs, and the other force being that they don't make
sense, suggest some painful outcome, etc. In short, this would enforce the
sort of equation, over the present formulaic view of neurons (and AGI coding),
that I have suggested in past postings may be present, and show that the
math may not be all that challenging.

Uncertainty would be expressed as stiffness/flexibility; computed
limitations would be handled with over-running clutches; etc.

Propagation of forces would come close (perfectly?) to identifying
just where in a complex network something should change in order to learn as
efficiently as possible.

Once the force concentrates at some point, something gives: it slips or
bends to unjam the mechanism. Thus, learning is effected.

Note that this suggests little difference between forward propagation and
backwards propagation, though real-world wet design considerations would
clearly prefer fast mechanisms for forward propagation, and compact
mechanisms for backwards propagation.

Epiphany or mania?

Any thoughts?

Steve



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] New KurzweilAI.net site... with my silly article & sillier chatbot ;-p ;) ....

2010-07-12 Thread The Wizard
Have you guys talked to the army's artificial intelligence chat bot yet?
http://sgtstar.goarmy.com/ActiveAgentUI/Welcome.aspx

Nothing really special, other than that the voice sounds really natural.

On Thu, Jul 8, 2010 at 11:09 PM, Mike Archbold jazzbo...@gmail.com wrote:

 The concept of citizen science sounds great, Ben -- especially in
 this age.  From my own perspective I feel like my ideas are good but
 it falls short always of the rigor of a proper scientist, so I don't
 have that pretense.  The internet obviously helps out a lot. The
 plight of the solitary laborer is better than it used to be, I think,
 due to the availability of information/research.

 Mike Archbold

 On Mon, Jul 5, 2010 at 8:52 PM, Ben Goertzel b...@goertzel.org wrote:
  Check out my article on the H+ Summit
 
 
 http://www.kurzweilai.net/h-summit-harvard-the-rise-of-the-citizen-scientist
 
  and also the Ramona4 chatbot that Novamente LLC built for Ray Kurzweil
  a while back
 
  http://www.kurzweilai.net/ramona4/ramona.html
 
  It's not AGI at all; but it's pretty funny ;-)
 
  -- Ben
 
 
 
  --
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  CTO, Genescient Corp
  Vice Chairman, Humanity+
  Advisor, Singularity University and Singularity Institute
  External Research Professor, Xiamen University, China
  b...@goertzel.org
 
  
  “When nothing seems to help, I go look at a stonecutter hammering away
  at his rock, perhaps a hundred times without as much as a crack
  showing in it. Yet at the hundred and first blow it will split in two,
  and I know it was not that blow that did it, but all that had gone
  before.”
 
 
 






-- 
Carlos A Mejia

Taking life one singularity at a time.
www.Transalchemy.com





Re: [agi] Mechanical Analogy for Neural Operation!

2010-07-12 Thread Mike Tintner
One tangential comment.

You're still thinking linearly. Machines are linear chains of parts. 
Cause-and-effect thinking made flesh/metal.

With organisms, however, you have whole webs of parts acting more or less 
simultaneously.

We will probably need to bring that organic thinking/framework (field vs. 
chain thinking?) into the design of AGI machines and robots.

In relation to your subject: incoming information is actually analysed 
by the human system on multiple levels, and often in terms of multiple domain 
associations simultaneously.

And that's why we often get confused, and don't always know what we don't 
understand. Sometimes we do know clearly what we don't understand: what does 
that word [actually] mean? But sometimes we attend to a complex argument and 
know it doesn't really make sense to us, yet we don't know which part[s] of it 
don't make sense, or why, and we have to patiently and gradually unravel that 
knot of confusion.







Re: [agi] Mechanical Analogy for Neural Operation!

2010-07-12 Thread Matt Mahoney
Steve Richfield wrote:
 No, I am NOT proposing building mechanical contraptions, just using the 
 concept to compute neuronal characteristics (or AGI formulas for learning).

Funny you should mention that. Ross Ashby actually built such a device in 1948, 
called a homeostat ( http://en.wikipedia.org/wiki/Homeostat ): a fully 
interconnected neural network with 4 neurons, using mechanical components and 
vacuum tubes. Synaptic weights were implemented by motor-driven, water-filled 
potentiometers in which electrodes moved through a tank to vary the electrical 
resistance. It implemented a type of learning algorithm in which weights were 
varied using a rotating switch wired randomly using the RAND book of a million 
random digits. He described the device in his 1960 book, Design for a Brain.
 -- Matt Mahoney, matmaho...@yahoo.com









[agi] Cash in on robots

2010-07-12 Thread Mike Tintner
http://www.moneyweek.com/investment-advice/cash-in-on-the-robot-revolution-49024.aspx?utm_source=newsletter&utm_medium=email&utm_campaign=Money%2BMorning

http://www.moneyweek.com/investment-advice/share-tips-five-ways-into-the-robotics-sector-49025.aspx




Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-12 Thread Abram Demski
David,

I tend to think of probability theory and statistics as different things.
I'd agree that statistics is not enough for AGI, but in contrast I think
probability theory is a pretty good foundation. Bayesianism to me provides a
sound way of integrating the elegance/utility tradeoff of explanation-based
reasoning into the basic fabric of the uncertainty calculus. Others advocate
different sorts of uncertainty than probabilities, but so far what I've seen
indicates more a lack of ability to apply probability theory than a need for
a new type of uncertainty. What other methods do you favor for dealing with
these things?

--Abram

On Sun, Jul 11, 2010 at 12:30 PM, David Jones davidher...@gmail.com wrote:

 Thanks Abram,

 I know that probability is one approach. But there are many problems with
 using it in actual implementations. I know a lot of people will be angered
 by that statement and retort with all the successes that they have had using
 probability. But, the truth is that you can solve the problems many ways and
 every way has its pros and cons. I personally believe that probability has
 unacceptable cons if used all by itself. It must only be used when it is the
 best tool for the task.

 I do plan to use some probability within my approach. But only when it
 makes sense to do so. I do not believe in completely statistical solutions
 or completely Bayesian machine learning alone.

 A good example of when I might use it is when a particular hypothesis
 predicts something with 70% accuracy: it may be better than any other
 hypothesis we can come up with so far, so we may use that hypothesis. But
 the 30% of unexplained errors should, if at all possible, be explained with
 the resources and algorithms available. This is where my method differs
 from statistical methods: I want to build algorithms that resolve the 30%
 and explain it. For many problems, there are rules and knowledge that will
 solve them effectively. Probability should only be used when you cannot
 find a more accurate solution.

 Basically, we should use probability when we don't know the factors
 involved, can't find any rules to explain the phenomena, or don't have the
 time and resources to figure it out. Then you must simply guess at the most
 probable event, without any rules for figuring out which event is more
 applicable under the current circumstances.

 So, in summary, probability definitely has its place. I just think that
 explanatory reasoning and other more accurate methods should be preferred
 whenever possible.

 Regarding learning the knowledge being the bigger problem, I completely
 agree. That is why I think it is so important to develop machine learning
 that can learn by direct observation of the environment. Without that, it is
 practically impossible to gather the knowledge required for AGI-type
 applications. We can learn this knowledge by analyzing the world
 automatically and generally through video.

 My step-by-step approach for learning and then applying the knowledge for
 AGI is as follows:
 1) Understand and learn about the environment (through Computer Vision for
 now, and other sensory perceptions in the future)
 2) Learn about your own actions and how they affect the environment
 3) Learn about language and how it is associated with or related to the
 environment
 4) Learn goals from language (such as through dedicated inputs)
 5) Goal pursuit
 6) Other miscellaneous capabilities as needed

 Dave

 On Sat, Jul 10, 2010 at 8:40 PM, Abram Demski abramdem...@gmail.comwrote:

 David,

 Sorry for the slow response.

 I agree completely about expectations vs predictions, though I wouldn't
 use that terminology to make the distinction (since the two terms are
 near-synonyms in English, and I'm not aware of any technical definitions
 that are common in the literature). This is why I think probability theory
 is necessary: to formalize this idea of expectations.

 I also agree that it's good to utilize previous knowledge. However, I
 think existing AI research has tackled this over and over; learning that
 knowledge is the bigger problem.

 --Abram





-- 
Abram Demski
http://lo-tho.blogspot.com/
http://groups.google.com/group/one-logic





RE: [agi] New KurzweilAI.net site... with my silly article & sillier chatbot ;-p ;) ....

2010-07-12 Thread John G. Rose
These video/rendered chatbots have huge potential and will be taken in many
different directions.

 

They are gradually approaching a p-zombie-esque situation.

 

They add multi-modal communication: body/facial language/expression and
prosody. So even if the text alone is not too good, the simultaneous rendering
of the multi-channel information adds some sort of legitimacy. Though in
these simple cases the bot only takes text as input, so much of the
communication complexity
( http://en.wikipedia.org/wiki/Communication_complexity ) is running semi
half-duplex.

 

John

 







Re: [agi] Mechanical Analogy for Neural Operation!

2010-07-12 Thread Michael Swan
Hi,

I pretty much always think of an NN as a physical device.

I think the first binary computer was dreamt up with balls going through
the system, with balls representing 1's and 0's. The idea was written down
but never built.

Jamming balls that give way at a certain point is the same as using >.

i.e. when more than 6 balls jam up, the pressure is released, sending a 1
or a value > 6 balls.

Addition can be a little different in such systems.

i.e. a value > 6 + a value > 3 = a value > 9.
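A toy version of the ball-jam gate described above; this is a sketch under my reading that the symbol stripped from the post was '>', and the reset-on-release behaviour is an assumption:

```python
def jam_gate(stream, threshold=6):
    """Emit 1 on each step where the running count of jammed balls exceeds threshold."""
    held, out = 0, []
    for balls in stream:
        held += balls              # balls pile up behind the jam
        if held > threshold:       # pressure exceeds what the jam can hold
            out.append(1)          # the mechanism gives way: send a 1
            held = 0               # releasing the balls clears the jam
        else:
            out.append(0)
    return out

print(jam_gate([3, 2, 2, 1, 5]))   # → [0, 0, 1, 0, 0]
```

The gate fires exactly on the step where the running total first exceeds 6, which is the thresholded release the post describes.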






[agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-12 Thread Michael Swan
Hi,

I'm interested in combining the simplest, most primitive operations
(e.g. operations that cannot be defined by other operations) for creating
seed AGIs. The simplest operations, combined in a multitude of ways, can
form extremely complex patterns, but the underlying logic may be
simple.

I wonder if varying combinations of the smallest set of operations:

{ >, memory (= for memory assignment), ==, (a logical way to
combine them), (input, output), () brackets }

can potentially learn and define everything.

Assume all input is from numbers.

We want the smallest set of elements, because fewer elements mean fewer
combinations, which means less chance of hitting combinatorial explosion.

> helps for generalisation, reducing combinations.

memory (=) is for hash look-ups: what should one remember? What can be
discarded?

== does a comparison between 2 values: x == y is 1 if x and y are
exactly the same. Returns 0 if they are not the same.

(a logical way to combine them): any non-narrow algorithm that reduces
the raw data into a simpler state will do. Philosophically like
Solomonoff Induction. This is the hardest part: what is the most optimal
way of combining the above set of operations?

() brackets are used to order operations. 




Conditionals (only if statements) + memory assignment are the only valid
form of logic, i.e. no loops. Just repeat code if you want loops.

If you think that the set above cannot define everything, then what is
the smallest set of operations that can potentially define everything? 

--
Some proofs / Thought experiments :

1) Can >, ==, (), and memory define other logical operations like &&
(AND gate)?

I propose that x==y==1 defines x&&y

x y | x&&y | x==y==1
0 0 |  0   | 0==0==1 = 0
1 0 |  0   | 1==0==1 = 0
0 1 |  0   | 0==1==1 = 0
1 1 |  1   | 1==1==1 = 1

It means && can be completely defined using ==, therefore && is not
one of the smallest possible general concepts. && can potentially be
learnt from ==.

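The AND-via-== identity in the truth table above can be checked mechanically. One caveat worth noting: the table evaluates x==y==1 as a chained comparison, (x==y) and (y==1), which is Python's reading; C/C++ parse it left-associatively as (x==y)==1, which yields 1, not 0, when x and y are both 0.

```python
# Verify: x && y agrees with the chained reading of x == y == 1
# for all boolean inputs (Python chains comparisons natively).
for x in (0, 1):
    for y in (0, 1):
        and_gate = 1 if (x and y) else 0
        proposed = 1 if (x == y == 1) else 0
        assert and_gate == proposed

# The C-style left-associative parse breaks the table's first row:
# (0 == 0) == 1 evaluates to 1, whereas the table requires 0.
assert ((0 == 0) == 1) == 1
```

So the reduction of && to == holds, but only under chained-comparison semantics; a C implementation would need explicit grouping.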
-

2) Write an algorithm that can define 1 using only >, ==, ().

Multiple answers:

a) discrete 1 could use
x == 1

b) continuous 1.0 could use this rule
(for those not familiar with C++, ! means not):
(x > 0.9) && !(x > 1.1)   expanding gives (getting rid of ! and &&):
(x > 0.9) == ((x > 1.1) == 0) == 1    note: !x can be defined in terms
of == like so: x == 0.

(b) is a generalisation and expansion of the definition of (a), and can
be scaled by changing the values 0.9 and 1.1 to fit what others
would generally define as being 1.
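The continuous-1 predicate in (b), with the stripped operators restored as I read them ('>' and '&&'), can be checked directly; note that it accepts the half-open interval 0.9 < x <= 1.1:

```python
def is_about_one(x):
    # (x > 0.9) && !(x > 1.1), written with == only as in the post:
    # !p becomes (p == 0), and the whole expression is a chained comparison.
    return int((x > 0.9) == ((x > 1.1) == 0) == 1)

print([is_about_one(v) for v in (0.5, 1.0, 1.2)])  # → [0, 1, 0]
```

Widening or narrowing the 0.9 and 1.1 bounds rescales what counts as "being 1", exactly as the post suggests.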



