Re: [agi] just a thought

2009-01-14 Thread Christopher Carr

Vladimir Nesov wrote:

On Wed, Jan 14, 2009 at 4:40 AM, Matt Mahoney matmaho...@yahoo.com wrote:
  

--- On Tue, 1/13/09, Valentina Poletti jamwa...@gmail.com wrote:



Anyways my point is, the reason why we have achieved so much technology, so much 
knowledge in this time is precisely the "we"; it's the union of several 
individuals together with their ability to communicate with one another that has 
made us advance so much.
  

I agree. A machine that is 10 times as smart as a human in every way could not 
achieve much more than hiring 10 more people. In order to automate the economy, 
we have to replicate the capabilities of not one human mind, but a system of 
10^10 minds. That is why my AGI proposal is so hideously expensive.
http://www.mattmahoney.net/agi2.html




Let's fire Matt and hire 10 chimps instead.

  
Problems with IQ notwithstanding, I'm confident that, were my silly IQ 
of 145 merely doubled, I could convince Dr. Goertzel to give me the 
majority of his assets, including control of his businesses. And if he 
were to really meet someone that bright, he would be a fool or 
super-human not to do so, which he isn't (a fool, that is).


Even a simpleton such as myself can see that Mr. Mahoney is quite on the 
wrong track with this line of thought. Matt can know little or nothing 
about the strategies of such a bright, single agent.


-Christopher Carr





Re: [agi] just a thought

2009-01-14 Thread Mike Tintner
Valentina,

It's a v.g. point and does refer. You're saying there is no individual without 
social intelligence, and we'll need a society of AGIs. It does refer to 
creativity. Every creative problem we face is actually, strictly, inseparable 
from a whole body of creative problems. In trying to solve the engram problem, 
Richard is not a lone hero, but part of the vast collective enterprise of 
science/scientists trying to understand the brain as a whole, and his eventual 
discovery will have to dovetail with others' efforts. So not just one AGI, Ben, 
but a whole society of them. He's on to it.

(And every problem we solve - just using language - is also interdependent with 
the whole society's efforts/problems - and use of language.) Of course, only a 
woman would think about other people here :).
  Valentina:

  Not in reference to any specific current discussion, 


  I find it interesting that when people talk of human-like intelligence in the 
realm of AGI, they refer to the ability of a human individual, or human brain 
if you like. It just occurred to me that human beings are not that intelligent. 
Well, of course we are super intelligent compared to a frog (as some would say), 
but then again a frog is super intelligent compared to an ant. 




  Anyways my point is, the reason why we have achieved so much technology, so 
much knowledge in this time is precisely the "we"; it's the union of several 
individuals together with their ability to communicate with one another that has 
made us advance so much. In a sense we are a single being with millions of 
eyes, ears, hands, brains, which altogether can create amazing things. But 
take any human being alone, isolate him/her from any contact with any other 
human being and rest assured he/she will not achieve a single artifact of 
technology. In fact he/she might not survive long.




  So that's why I think it is important to put emphasis on this when talking 
about super-human intelligence. 




  That was my 2-in-the-morning thought. I guess I should sleep now.




Re: [agi] just a thought

2009-01-14 Thread Ronald C. Blue





On Wed, Jan 14, 2009 at 4:40 AM, Matt Mahoney matmaho...@yahoo.com 
wrote:

--- On Tue, 1/13/09, Valentina Poletti jamwa...@gmail.com wrote:

Anyways my point is, the reason why we have achieved so much technology, 
so much knowledge in this time is precisely the "we"; it's the union of 
several individuals together with their ability to communicate with 
one another that has made us advance so much.


I agree. A machine that is 10 times as smart as a human in every way 
could not achieve much more than hiring 10 more people. In order to 
automate the economy, we have to replicate the capabilities of not one 
human mind, but a system of 10^10 minds. That is why my AGI proposal is 
so hideously expensive.

http://www.mattmahoney.net/agi2.html




Not really expensive if quantum entanglement is in fact present in a hybrid 
of quantum circuits stored in carbon tetrachloride functioning as a 
capacitor.  In principle 420 billion human minds, or about 84 octillion 
qubits, can be stored entangled in 8 mayonnaise jars of carbon tetrachloride. 
Carbon tetrachloride causes cancer and requires a government permit to use. 






Re: [agi] just a thought

2009-01-14 Thread Bob Mottram
2009/1/14 Valentina Poletti jamwa...@gmail.com:
 Anyways my point is, the reason why we have achieved so much technology, so
 much knowledge in this time is precisely the "we"; it's the union of several
 individuals together with their ability to communicate with one another that
 has made us advance so much. In a sense we are a single being with millions
 of eyes, ears, hands, brains, which altogether can create amazing things.
 But take any human being alone, isolate him/her from any contact with any
 other human being and rest assured he/she will not achieve a single artifact
 of technology. In fact he/she might not survive long.


Yes.  I think Ben made a similar point in The Hidden Pattern.  People
studying human intelligence - psychologists, psychiatrists, cognitive
scientists, etc. - tend to focus narrowly on the individual brain, but
human intelligence is more of an emergent networked phenomenon
populated by strange meta-entities such as archetypes and memes.  Even
the greatest individuals from the world of science or art didn't make
their achievements in a vacuum, and were influenced by earlier works.

Years ago I was chatting with someone who was about to patent some
piece of machinery.  He had his name on the patent, but was pointing
out that it's very difficult to be able to say exactly who made the
invention - who was the guiding mind.  In this case many individuals
within his company had some creative input, and there was really no
one inventor as such.  I think many human-made artifacts are like
this.




Re: [agi] just a thought

2009-01-14 Thread Mike Tintner

Ron et al:

I suspect that you guys are thinking like classic nerds - ok, we just 
*multiply* the number of minds - a purely mathematical operation. But 
actually social thinking is much more than that - the minds will really need 
to *interact* - they can't just *parallel process* in the current meaning of 
the term. They would need to do the equivalent of what happens here and in 
every organization and society - adopt competing positions, form into 
opposing schools of thought, refuse to talk to each other and go off in 
theatrical huffs like some who shall be nameless here, conspire against each 
other, form alliances - and of course have sex with each other.


Ron & co:


Anyways my point is, the reason why we have achieved so much 
technology, so much knowledge in this time is precisely the "we"; it's 
the union of several individuals together with their ability to 
communicate with one another that has made us advance so much.


I agree. A machine that is 10 times as smart as a human in every way 
could not achieve much more than hiring 10 more people. In order to 
automate the economy, we have to replicate the capabilities of not one 
human mind, but a system of 10^10 minds. That is why my AGI proposal is 
so hideously expensive.

http://www.mattmahoney.net/agi2.html




Not really expensive if quantum entanglement is in fact present in a 
hybrid of quantum circuits stored in carbon tetrachloride functioning as a 
capacitor.  In principle 420 billion human minds, or about 84 octillion 
qubits, can be stored entangled in 8 mayonnaise jars of carbon 
tetrachloride. Carbon tetrachloride causes cancer and requires a 
government permit to use.





Re: [agi] just a thought

2009-01-14 Thread Matt Mahoney
--- On Wed, 1/14/09, Christopher Carr cac...@pdx.edu wrote:

 Problems with IQ notwithstanding, I'm confident that, were my silly IQ
of 145 merely doubled, I could convince Dr. Goertzel to give me the
majority of his assets, including control of his businesses. And if he
were to really meet someone that bright, he would be a fool or
super-human not to do so, which he isn't (a fool, that is).

First, if you knew what you would do if you were twice as smart, you would 
already be that smart. Therefore you don't know.

Second, you have never even met anyone with an IQ of 290. How do you know what 
they would do?

How do you measure an IQ of 100n?

- Ability to remember n times as much?
- Ability to learn n times faster?
- Ability to solve problems n times faster?
- Ability to do the work of n people?
- Ability to make n times as much money?
- Ability to communicate with n people at once?

Please give me an IQ test that measures something that can't be done by n log n 
people (allowing for some organizational overhead).
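
A toy illustration of the organizational-overhead point (the log2 coordination 
cost and the 0.1 constant below are illustrative assumptions for the sketch, not 
figures from Matt's proposal):

import math

# Toy model: effective throughput of a group of n communicating workers,
# where each worker contributes 1 unit and coordination costs grow like log2(n).
def group_capability(n: int, overhead: float = 0.1) -> float:
    """Effective output of n workers with a log-scaled coordination cost."""
    if n < 1:
        raise ValueError("need at least one worker")
    coordination = overhead * math.log2(n) if n > 1 else 0.0
    return n / (1.0 + coordination)

# How many average people does it take to match a hypothetical "10x human"?
target = 10.0
n = 1
while group_capability(n) < target:
    n += 1
print(n, round(group_capability(n), 2))  # 14 people once overhead is counted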

-- Matt Mahoney, matmaho...@yahoo.com





RE: [agi] just a thought

2009-01-14 Thread John G. Rose
 From: Matt Mahoney [mailto:matmaho...@yahoo.com]
 --- On Wed, 1/14/09, Christopher Carr cac...@pdx.edu wrote:
 
  Problems with IQ notwithstanding, I'm confident that, were my silly IQ
 of 145 merely doubled, I could convince Dr. Goertzel to give me the
 majority of his assets, including control of his businesses. And if he
 were to really meet someone that bright, he would be a fool or
 super-human not to do so, which he isn't (a fool, that is).
 
 First, if you knew what you would do if you were twice as smart, you
 would already be that smart. Therefore you don't know.
 
 Second, you have never even met anyone with an IQ of 290. How do you
 know what they would do?
 
 How do you measure an IQ of 100n?
 
 - Ability to remember n times as much?
 - Ability to learn n times faster?
 - Ability to solve problems n times faster?
 - Ability to do the work of n people?
 - Ability to make n times as much money?
 - Ability to communicate with n people at once?
 
 Please give me an IQ test that measures something that can't be done by
 n log n people (allowing for some organizational overhead).
 

How do you measure the collective IQ of humanity? Individual IQs are just a
subset.

John





[agi] Encouraging?

2009-01-14 Thread Mike Tintner
You have talked about past recessions being real opportunities for 
business. But in past recessions, wasn't business able to get lending? And 
doesn't the tightness of the credit market today inhibit some opportunities?


Typically not. Most new innovations are started without access to credit in 
good times or bad. Microsoft (MSFT) was started without any access to 
credit. It's only in crazy times that people lend money to people who are 
experimenting with innovations. Most of the great businesses today were 
started with neither a lot of venture capital nor with any bank lending 
until five or six years after they were [up and running]. 







Re: [agi] Encouraging?

2009-01-14 Thread Steve Richfield
Mike,

On 1/14/09, Mike Tintner tint...@blueyonder.co.uk wrote:

 You have talked about past recessions being real opportunities for
 business. But in past recessions, wasn't business able to get lending? And
 doesn't the tightness of the credit market today inhibit some opportunities?


It definitely changes things. This could all change in a heartbeat. Suppose
for a moment that Saudi Arabia decided to secure its Riyal (their dollar)
with a liter of oil. The Trillions now sitting in Federal Reserve accounts
without interest and at risk of the dollar collapsing could simply be
transferred into Riyals instead and be secure. Of course, this would
instantly bankrupt the U.S. government and many of those Trillions would be
lost, but it WOULD instantly restore whatever survived of the worldwide
monetary system.

Typically not. Most new innovations are started without access to credit in
 good times or bad.


Only because business can't recognize a good thing when they see it, e.g.
Xerox not seeing the value of their early version of Windows.

Microsoft (MSFT) was started without any access to credit.


Unless you count the millions that his parents had to help him over the
rough spots.

It's only in crazy times that people lend money to people who are
 experimenting with innovations.


In ordinary times, they want stock instead, with its MUCH greater upside
potential.

Most of the great businesses today were started with neither a lot of
 venture capital nor with any bank lending until five or six years after they
 were [up and running].


This really gets down to what "up and running" is. For HP, they were making
light-bulb-stabilized audio oscillators in their garage.

Because of numerous possibilities like a secured Riyal mentioned above, I
suspect that things will instantly change one way or another as quickly as
they came down from Credit Default Swaps, a completely hidden boondoggle
until it went off.

Note how Zaire solved their monetary problems years ago. They closed their
borders over the Christmas holidays and exchanged new dollars for old. Then
they re-opened their borders, leaving the worthless old dollars held by
foreigners as someone ELSE's problem.

Mexico went through a sudden 1000:1 devaluation to solve their problems. In
one stroke this wiped out their foreign debt.

Expect something REALLY dramatic in the relatively near future. I suspect
that our bribe-o-cratic form of government will prohibit our taking
preemptive action, and thereby leave us at the (nonexistent) mercy of other
powers that aren't so inhibited.

I have NEVER seen a desperate action based on a simple lack of alternatives,
like the various proposed stimulus plans, ever work. Never expect a problem
to be solved by the same mindset that created it (Einstein). The lack of
investment money will soon be seen as the least of our problems.

On a side note, there hasn't been $20 spent on real genuine industrial
research in the last decade. This means that you can own the field of your
choice by simply investing a low level of research effort, and waiting for
things to change. I have selected 3 narrow disjoint areas and now appear to
be a/the leader in each. I am just waiting for the world to recognize that
it desperately needs one of them.

Any thoughts?

Steve Richfield





[agi] Doubts raised over brain scan findings

2009-01-14 Thread Richard Loosemore


For anyone interested in recent discussions of neuroscience and the 
level of scientific validity of the various brain-scan claims, the 
study by Vul et al., discussed here:


http://www.newscientist.com/article/mg20126914.700-doubts-raised-over-brain-scan-findings.html

and available here:

http://www.pashler.com/Articles/Vul_etal_2008inpress.pdf

... is a welcome complement to the papers by Trevor Harley (and myself).


The title of the paper is "Voodoo Correlations in Social Neuroscience", 
and that use of the word "voodoo" pretty much sums up the attitude of a 
number of critics of the field.


We've attacked from a different direction, but we had a wide range of 
targets to choose from, believe me.


The short version of the overall story is that neuroscience is out of 
control as far as overinflated claims go.
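
(For anyone who wants to see the statistical issue Vul et al. describe - 
selecting voxels by their correlation with a behavioural measure and then 
reporting the correlation computed on those same voxels - here is a minimal 
simulation sketch. The subject count, voxel count and selection size are 
arbitrary illustrative assumptions, not values from the paper.)

import numpy as np

# Minimal sketch of the non-independence ("double dipping") issue: even when
# every voxel is pure noise, picking the voxels that correlate best with a
# behavioural score and then re-reporting that correlation on the same data
# yields impressively high, but meaningless, numbers.
rng = np.random.default_rng(0)
n_subjects, n_voxels = 16, 10000                      # illustrative sizes
behaviour = rng.standard_normal(n_subjects)           # e.g. a personality score
voxels = rng.standard_normal((n_subjects, n_voxels))  # pure-noise "activations"

correlations = np.array([np.corrcoef(voxels[:, v], behaviour)[0, 1]
                         for v in range(n_voxels)])

# Non-independent analysis: select the best voxels, report their mean
# correlation on the SAME data.
selected = np.argsort(correlations)[-20:]
print("reported correlation:", correlations[selected].mean())   # well above 0.6

# An independent replication (fresh noise for the same "region") tells the truth.
fresh = rng.standard_normal((n_subjects, selected.size))
fresh_r = np.array([np.corrcoef(fresh[:, i], behaviour)[0, 1]
                    for i in range(selected.size)])
print("replication correlation:", fresh_r.mean())               # close to 0.0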





Richard Loosemore




Re: [agi] just a thought

2009-01-14 Thread Valentina Poletti
Cool,

this idea has already been applied successfully to some areas of AI, such as
ant-colony algorithms and swarm intelligence algorithms. But I was thinking
that it would be interesting to apply it at a high level. For example,
consider that you create the best AGI agent you can come up with and,
instead of running just one, you create several copies of it (perhaps with
slight variations), and you initiate each in a different part of your
reality or environment for such agents, after letting them have the ability
to communicate. In this way, whenever one such agent learns anything
meaningful it passes the information to all the other agents as well; that is,
it not only modifies its own policy but it also affects the others' to some
extent (determined by some constant and/or by how much the other agent likes
this one, that is, how useful learning from it has been in the past and so
on). This way not only would each agent learn much faster, but the agents
could also learn to use this communication ability to their advantage to
improve. I just think it would be interesting to implement this, not that
I am capable of it right now.
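
A minimal sketch of how that sharing could look (a toy interpretation only - the 
value-table representation, the trust constant and all names below are 
illustrative assumptions, not part of any existing AGI project): each agent 
keeps a table of estimated values, and whenever it learns from direct experience 
it broadcasts the update, which its peers blend in weighted by how much they 
trust the sender.

from collections import defaultdict

class SharingAgent:
    """Toy agent that learns locally and shares its updates with peers."""

    def __init__(self, name, learning_rate=0.5):
        self.name = name
        self.learning_rate = learning_rate
        self.values = defaultdict(float)        # (situation, action) -> estimated value
        self.trust = defaultdict(lambda: 0.2)   # peer name -> weight given to its reports
        self.peers = []

    def learn(self, key, observed_value):
        # Update own policy from direct experience, then tell the peers.
        self.values[key] += self.learning_rate * (observed_value - self.values[key])
        for peer in self.peers:
            peer.receive(self.name, key, self.values[key])

    def receive(self, sender, key, value):
        # Blend a peer's estimate into our own, weighted by trust in that peer.
        # (Trust could itself be adjusted by how useful the peer's reports prove to be.)
        w = self.trust[sender]
        self.values[key] = (1 - w) * self.values[key] + w * value

# Wire up a few copies and let one of them learn something.
agents = [SharingAgent("agent-%d" % i) for i in range(3)]
for a in agents:
    a.peers = [p for p in agents if p is not a]
agents[0].learn(("room-1", "open-door"), 1.0)
print(agents[1].values[("room-1", "open-door")])   # nonzero: the knowledge propagated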


On Wed, Jan 14, 2009 at 2:34 PM, Bob Mottram fuzz...@gmail.com wrote:

 2009/1/14 Valentina Poletti jamwa...@gmail.com:
  Anyways my point is, the reason why we have achieved so much technology,
 so
  much knowledge in this time is precisely the we, it's the union of
 several
  individuals together with their ability to communicate with one-other
 that
  has made us advance so much. In a sense we are a single being with
 millions
  of eyes, ears, hands, brains, which alltogether can create amazing
 things.
  But take any human being alone, isolate him/her from any contact with any
  other human being and rest assured he/she will not achieve a single
 artifact
  of technology. In fact he/she might not survive long.


 Yes.  I think Ben made a similar point in The Hidden Pattern.  People
 studying human intelligence - psychologists, psychiatrists, cognitive
 scientists, etc - tend to focus narrowly on the individual brain, but
 human intelligence is more of an emergent networked phenomena
 populated by strange meta-entities such as archetypes and memes.  Even
 the greatest individuals from the world of science or art didn't make
 their achievements in a vacuum, and were influenced by earlier works.

 Years ago I was chatting with someone who was about to patent some
 piece of machinery.  He had his name on the patent, but was pointing
 out that it's very difficult to be able to say exactly who made the
 invention - who was the guiding mind.  In this case many individuals
 within his company had some creative input, and there was really no
 one inventor as such.  I think many human-made artifacts are like
 this.






-- 
A true friend stabs you in the front. - O. Wilde

Einstein once thought he was wrong; then he discovered he was wrong.

For every complex problem, there is an answer which is short, simple and
wrong. - H.L. Mencken





Re: [agi] just a thought

2009-01-14 Thread Mike Tintner
Chris: Problems with IQ notwithstanding, I'm confident that, were my silly 
IQ of 145 merely doubled...

Chris/Matt: Hasn't anyone ever told you - it's not the size of it, it's what 
you do with it that counts? 







Re: [agi] just a thought

2009-01-14 Thread Pei Wang
I guess something like this is in the plan of many, if not all, AGI
projects. For NARS, see
http://nars.wang.googlepages.com/wang.roadmap.pdf , under "(4)
Socialization" on page 11.

It is just that to attempt any non-trivial multi-agent experiment, the
work on a single agent needs to be mature enough. The AGI projects are
not there yet.

Pei

On Wed, Jan 14, 2009 at 4:10 PM, Valentina Poletti jamwa...@gmail.com wrote:
 Cool,

 this idea has already been applied successfully to some areas of AI, such as
 ant-colony algorithms and swarm intelligence algorithms. But I was thinking
 that it would be interesting to apply it at a high level. For example,
 consider that you create the best AGI agent you can come up with and,
 instead of running just one, you create several copies of it (perhaps with
 slight variations), and you initiate each in a different part of your
 reality or environment for such agents, after letting them have the ability
 to communicate. In this way whenever one such agents learns anything
 meaningful he passes the information to all other agents as well, that is,
 it not only modifies its own policy but it also affects the other's to some
 extent (determined by some constant or/and by how much the other agent likes
 this one, that is how useful learning from it has been in the past and so
 on). This way not only each agent would learn much faster, but also the
 agents could learn to use this communication ability to their advantage to
 ameliorate. I just think it would be interesting to implement this, not that
 I am capable of right now.





 --
 A true friend stabs you in the front. - O. Wilde

 Einstein once thought he was wrong; then he discovered he was wrong.

 For every complex problem, there is an answer which is short, simple and
 wrong. - H.L. Mencken
 




Re: [agi] just a thought

2009-01-14 Thread Joshua Cowan
Is having a strong sense of self one aspect of "mature enough"? Also, Dr. 
Wang, do you see this as a primary way of teaching empathy? I believe Ben 
has written about hardwiring the desire to work with other agents as a 
possible means of encouraging empathy. Do you agree with this approach 
and/or have other ideas for encouraging empathy (assuming you see empathy as 
a good goal)?




From: Pei Wang mail.peiw...@gmail.com
Reply-To: agi@v2.listbox.com
To: agi@v2.listbox.com
Subject: Re: [agi] just a thought
Date: Wed, 14 Jan 2009 16:21:23 -0500

I guess something like this is in the plan of many, if not all, AGI
projects. For NARS, see
http://nars.wang.googlepages.com/wang.roadmap.pdf , under (4)
Socialization in page 11.

It is just that to attempt any non-trivial multi-agent experiment, the
work in single agent needs to be mature enough. The AGI projects are
not there yet.

Pei








Re: [agi] Doubts raised over brain scan findings

2009-01-14 Thread Vladimir Nesov
On Wed, Jan 14, 2009 at 10:59 PM, Richard Loosemore r...@lightlink.com wrote:

 For anyone interested in recent discussions of neuroscience and the level of
 scientific validity to the various brain-scann claims, the study by Vul et
 al, discussed here:

 http://www.newscientist.com/article/mg20126914.700-doubts-raised-over-brain-scan-findings.html

 and available here:

 http://www.pashler.com/Articles/Vul_etal_2008inpress.pdf

 ... is a welcome complement to the papers by Trevor Harley (and myself).


 The title of the paper is Voodoo Correlations in Social Neuroscience, and
 that use of the word voodoo pretty much sums up the attitude of a number
 of critics of the field.

 We've attacked from a different direction, but we had a wide range of
 targets to choose, believe me.

 The short version of the overall story is that neuroscience is out of
 control as far as overinflated claims go.


Richard, even if your concerns are somewhat valid, why is it
interesting here? It's not as though neuroscience is dominated by
discussions of (mis)interpretation of results; they are collecting
data, and with that they are steadily getting somewhere.

-- 
Vladimir Nesov
robot...@gmail.com
http://causalityrelay.wordpress.com/




Re: [agi] Doubts raised over brain scan findings

2009-01-14 Thread Richard Loosemore

Vladimir Nesov wrote:

On Wed, Jan 14, 2009 at 10:59 PM, Richard Loosemore r...@lightlink.com wrote:

For anyone interested in recent discussions of neuroscience and the level of
scientific validity to the various brain-scann claims, the study by Vul et
al, discussed here:

http://www.newscientist.com/article/mg20126914.700-doubts-raised-over-brain-scan-findings.html

and available here:

http://www.pashler.com/Articles/Vul_etal_2008inpress.pdf

... is a welcome complement to the papers by Trevor Harley (and myself).


The title of the paper is Voodoo Correlations in Social Neuroscience, and
that use of the word voodoo pretty much sums up the attitude of a number
of critics of the field.

We've attacked from a different direction, but we had a wide range of
targets to choose, believe me.

The short version of the overall story is that neuroscience is out of
control as far as overinflated claims go.



Richard, even if your concerns are somewhat valid, why is it
interesting here? It's not like neuroscience is dominated by
discussions of (mis)interpretation of results, they are collecting
data, and with that they are steadily getting somewhere.


I don't understand.

The whole point about the paper referenced above is that they are 
collecting (in a large number of cases) data that is just random noise.


And in the work that I did, analyzing several neuroscience papers, the 
conclusion was that many of their conclusions were unfounded.


That is exactly the opposite of what you just said: they are not "steadily 
getting somewhere"; they are filling the research world with noise.  I do 
not understand how you can see what was said in the above paper, and yet 
say what you just said.


Bear in mind that we are targeting the (extremely large number of) 
claims of psychological validity that are coming out of the 
neuroscience community.  If they collect data and do not make 
psychological claims, all power to them.



I don't particularly want to get into an argument about it.  It was just 
a little backup information for what I said before.





Richard Loosemore




Re: [agi] Doubts raised over brain scan findings

2009-01-14 Thread Vladimir Nesov
On Thu, Jan 15, 2009 at 3:03 AM, Richard Loosemore r...@lightlink.com wrote:

 The whole point about the paper referenced above is that they are collecting
 (in a large number of cases) data that is just random noise.


So what? The paper points out a methodological problem that in itself
has little to do with neuroscience. The field as a whole is hardly
mortally afflicted with that problem (whether it's even real or not).
If you look at any field large enough, there will be bad science. How
is it relevant to the study of AGI?

-- 
Vladimir Nesov
robot...@gmail.com
http://causalityrelay.wordpress.com/




Re: [agi] Doubts raised over brain scan findings

2009-01-14 Thread Ronald C. Blue



The whole point about the paper referenced above is that they are
collecting (in a large number of cases) data that is just random noise.



Agreed that they could be random noise, but it could also be related to 
protein production, which is related to nerve cell firings and increased 
oxygen consumption.  I am of the opinion that a high-activity area does not 
mean this is where a particular signal is being processed or created.

Example:

http://vsg.quasihome.com/interfer.htm

Notice the interference pattern projected on the back of the wall.  The 
pattern is a holographic reading.  Nerve cells are not like photographic 
plates, where a positive picture generates a negative picture.  They are more 
like a saxophone.  The resonance from a vibrating reed causes sound to 
reverberate in chambers.  So if you did a brain scan of a saxophone you 
might miss the importance of the reed and air flow in making the resonant 
sounds.  In my opinion the areas of the brain that are not firing are more 
important than those that are.  How else can you explain low brain activity 
for a master chess player and massive brain firings for a novice chess 
player?


To know means less neurological work.

Ron





Re: [agi] Doubts raised over brain scan findings

2009-01-14 Thread Ronald C. Blue

So what? The paper points out a methodological problem that in itself
has little to do with neuroscience. The field as a whole is hardly
mortally afflicted with that problem (whether it's even real or not).
If you look at any field large enough, there will be bad science. How
is it relevant to study of AGI?



Your child comes home and says they made a zero on the big test.

A child says they made 80 on the test and failed.  The reason: they missed 
80 questions out of 100.

A child says they had a grade of 98 right and the teacher gave them a B.  
The reason: there were 110 questions on the test.

The value of data is not the data itself but its meaning in the 
global/local system.

The question, then, is whether you focus on data production in an AGI system 
or focus your attention on the relativistic meaning of the information, and 
whether that can be created in an electronic, sound, light, liquid or wavelet 
system.  Which system will give you the best performance?






Re: [agi] Doubts raised over brain scan findings

2009-01-14 Thread Richard Loosemore

Vladimir Nesov wrote:

On Thu, Jan 15, 2009 at 3:03 AM, Richard Loosemore r...@lightlink.com wrote:

The whole point about the paper referenced above is that they are collecting
(in a large number of cases) data that is just random noise.



So what? The paper points out a methodological problem that in itself
has little to do with neuroscience.


Not correct at all:  this *is* neuroscience.  I don't understand why you 
say that it is not.



The field as a whole is hardly
mortally afflicted with that problem


I mentioned it because there is a context in which this sits.  The 
context is that an entire area - which might be called "deriving 
psychological conclusions from brain scan data" - is getting massive 
funding and massive attention, and yet it is quite arguably in an 
Emperor's New Clothes state.  In other words, the conclusions being 
drawn are (for a variety of reasons) of very dubious quality.


(whether it's even real or not).

It is real.


If you look at any field large enough, there will be bad science.


According to a significant number of people who criticize it, this 
field appears to be dominated by bad science.  This is not just an 
isolated case.



How
is it relevant to study of AGI?


People here are sometimes interested in cognitive science matters, and 
some are interested in the concept of building an AGI by brain 
emulation.  Neuroscience is relevant to that.


Beyond that, this is just an FYI.

I really do not care to put much effort into this.  If people are 
interested, they can read the paper.  But if they doubt the validity of 
the entire idea that there is a problem with neuroscience claims about 
psychological processes, I'm afraid I do not have the time to argue, 
simply because the level of general expertise here is not such that I 
can discuss it without explaining the whole critique from scratch.


As you say, it is not important enough, in an AGI context, to spend much 
time on.





Richard Loosemore




Re: [agi] Doubts raised over brain scan findings

2009-01-14 Thread Vladimir Nesov
On Thu, Jan 15, 2009 at 4:34 AM, Richard Loosemore r...@lightlink.com wrote:
 Vladimir Nesov wrote:

 On Thu, Jan 15, 2009 at 3:03 AM, Richard Loosemore r...@lightlink.com
 wrote:

 The whole point about the paper referenced above is that they are
 collecting
 (in a large number of cases) data that is just random noise.


 So what? The paper points out a methodological problem that in itself
 has little to do with neuroscience.

 Not correct at all:  this *is* neuroscience.  I don't understand why you say
 that it is not.

From what I got from the abstract and by skimming the paper, it's a
methodological problem in handling data from neuroscience experiments
(bad statistics).


 The field as a whole is hardly
 mortally afflicted with that problem

 I mentioned it because there is a context in which this sits.  The context
 is that an entire area - which might be called deriving psychological
 conclusions from barin scan data - is getting massive funding and massive
 attention, and yet it is quite arguably in an Emperor's New Clothes state.
  In other words, the conclusions being drawn are (for a variety of reasons)
 of very dubious quality.

 If you look at any field large enough, there will be bad science.

 According to the significant number of people who criticize it, this field
 appears to be dominated by bad science.  This is not just an isolated case.


That's a whole new level of alarm, relevant for anyone trying to learn
from neuroscience, but it requires stronger substantiation; a mere 50
papers that got their statistics confused don't do it justice.

-- 
Vladimir Nesov
robot...@gmail.com
http://causalityrelay.wordpress.com/




Re: [agi] Doubts raised over brain scan findings

2009-01-14 Thread Mike Tintner


Richard: I'm afraid I do not have the time to argue,
simply because the level of general expertise here is not such that I
can discuss it without explaining the whole critique from scratch.

Thanks for refs.  It is all important. But an imprecise neuronal correlation 
with emotions (or awareness of them) doesn't strike me as a v. big deal, since 
emotions are so vague anyway.  If you have criticisms of the lack of 
correlation with more precise cognitive observations, like words or sights, 
that would be v. interesting.







[agi] Bayesian surprise attracts human attention

2009-01-14 Thread Ronald C. Blue
Bayesian surprise attracts human attention 
http://tinyurl.com/77p9xo
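
(Assuming the link points to Itti and Baldi's paper of that title, "surprise" 
there is the KL divergence between the observer's posterior and prior beliefs 
after seeing new data. A minimal sketch with a Beta-Bernoulli observer; the 
prior parameters and the data below are arbitrary illustrative choices, not 
taken from the paper.)

from scipy.special import betaln, digamma

def kl_beta(a1, b1, a0, b0):
    # KL divergence KL(Beta(a1, b1) || Beta(a0, b0)).
    return (betaln(a0, b0) - betaln(a1, b1)
            + (a1 - a0) * digamma(a1)
            + (b1 - b0) * digamma(b1)
            + (a0 - a1 + b0 - b1) * digamma(a1 + b1))

# Observer believes a coin is roughly fair: Beta(5, 5) prior on P(heads).
a, b = 5.0, 5.0
for flip in [1, 1, 1, 1, 1]:                  # a run of heads
    a_new, b_new = a + flip, b + (1 - flip)
    print("surprise:", kl_beta(a_new, b_new, a, b))
    a, b = a_new, b_new
# Each successive head is a little less surprising as the run becomes expected.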





Re: [agi] Encouraging?

2009-01-14 Thread Kyle Kidd
Mexico went through a sudden 1000:1 devaluation to solve their problems. In
one stroke this wiped out their foreign debt.

I expect something similar to happen here.  Maybe buying precious metal or
oil securities would shield from most of the fallout; that is if the
government does not decide to pursue a stiff windfall tax or confiscate
these assets.

On a side note, There hasn't been $20 spent on real genuine industrial
research in the last decade. This means that you can own the field of your
choice by simply investing a low level of research effort, and waiting for
things to change. I have selected 3 narrow disjoint areas and now appear to
be a/the leader in each. I am just waiting for the world to recognize that
it desperately needs one of them.

This particularly piqued my interest.  Can you be more specific on fields
worthy of investment?

You can contact me directly if you like.

Kyle Kidd
kylek...@gmail.com


