Re: [agi] Encouraging?

2009-01-14 Thread Kyle Kidd
Mexico went through a sudden 1000:1 devaluation to solve its problems. In
one stroke this wiped out its foreign debt.

I expect something similar to happen here.  Maybe buying precious-metal or
oil securities would shield you from most of the fallout; that is, if the
government does not decide to pursue a stiff windfall tax or confiscate
those assets.

On a side note, there hasn't been $20 spent on real, genuine industrial
research in the last decade. This means that you can own the field of your
choice by simply investing a low level of research effort and waiting for
things to change. I have selected three narrow, disjoint areas and now appear
to be a/the leader in each. I am just waiting for "the world" to recognize
that it desperately needs one of them.

This particularly piqued my interest.  Can you be more specific about which
fields are worthy of investment?

You can contact me directly if you like.

Kyle Kidd
kylek...@gmail.com



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=126863270-d7b0b0
Powered by Listbox: http://www.listbox.com


[agi] Bayesian surprise attracts human attention

2009-01-14 Thread Ronald C. Blue
Bayesian surprise attracts human attention 
http://tinyurl.com/77p9xo





[agi] Synaptic depression enables neuronal gain control

2009-01-14 Thread Ronald C. Blue
Nature advance online publication 14 January 2009 | doi:10.1038/nature07604; 
Received 18 July 2008; Accepted 30 October 2008; Published online 14 January 
2009


Synaptic depression enables neuronal gain control
Jason S. Rothman 1, Laurence Cathala 1,2, Volker Steuber 1,2 & R. Angus 
Silver 1


1. Department of Neuroscience, Physiology and Pharmacology, University
College London, Gower Street, London WC1E 6BT, UK

2. These authors contributed equally to this work.

Correspondence and requests for materials should be addressed to R. Angus
Silver (Email: a.sil...@ucl.ac.uk).


To act as computational devices, neurons must perform mathematical 
operations as they transform synaptic and modulatory input into output 
firing rate. Experiments and theory indicate that neuronal firing typically 
represents the sum of synaptic inputs, an additive operation, but 
multiplication of inputs is essential for many computations. Multiplication 
by a constant produces a change in the slope, or gain, of the input-output 
relationship, amplifying or scaling down the sensitivity of the neuron to 
changes in its input. Such gain modulation occurs in vivo, during contrast 
invariance of orientation tuning, attentional scaling, translation-invariant 
object recognition, auditory processing and coordinate transformations. 
Moreover, theoretical studies highlight the necessity of gain modulation in 
several of these tasks. Although potential cellular mechanisms for gain 
modulation have been identified, they often rely on membrane noise and 
require restrictive conditions to work. Because nonlinear components are 
used to scale signals in electronics, we examined whether synaptic 
nonlinearities are involved in neuronal gain modulation. We used synaptic 
stimulation and the dynamic-clamp technique to investigate gain modulation 
in granule cells in acute slices of rat cerebellum. Here we show that when 
excitation is mediated by synapses with short-term depression (STD), 
neuronal gain is controlled by an inhibitory conductance in a 
noise-independent manner, allowing driving and modulatory inputs to be 
multiplied together. The nonlinearity introduced by STD transforms 
inhibition-mediated additive shifts in the input-output relationship into 
multiplicative gain changes. When granule cells were driven with bursts of 
high-frequency mossy fibre input, as observed in vivo, larger 
inhibition-mediated gain changes were observed, as expected with greater 
STD. Simulations of synaptic integration in more complex neocortical neurons 
suggest that STD-based gain modulation can also operate in neurons with 
large dendritic trees. Our results establish that neurons receiving 
depressing excitatory inputs can act as powerful multiplicative devices even 
when integration of postsynaptic conductances is linear.


Source: Nature
http://www.nature.com/nature/journal/vaop/ncurrent/abs/nature07604.html?lang=en 
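The additive-versus-multiplicative distinction in the abstract can be illustrated numerically. The sketch below is a toy, not the paper's granule-cell model: a saturating drive function stands in for short-term depression, and inhibition is simply a subtracted constant. With a linear synapse, inhibition shifts the input-output curve sideways (same slope); with a depressing synapse, the same inhibition reduces the curve's average slope, i.e. its gain.

```python
import numpy as np

rates = np.linspace(0.0, 200.0, 401)   # presynaptic input rate (arbitrary Hz)
inhibition = 20.0                      # toy tonic inhibitory drive

def io_curve(r, inh, depressing):
    # Linear synapse: drive grows proportionally with input rate.
    # Depressing synapse (STD): drive saturates as resources deplete,
    # modelled here by the toy form r / (1 + r / r0).
    r0 = 50.0
    drive = r / (1.0 + r / r0) if depressing else 0.2 * r
    return np.maximum(0.0, drive - inh)   # rectified output rate

def gain(out):
    # Average slope of the input-output curve over the suprathreshold region.
    active = out > 0
    return np.polyfit(rates[active], out[active], 1)[0]

g_lin_0 = gain(io_curve(rates, 0.0, depressing=False))
g_lin_i = gain(io_curve(rates, inhibition, depressing=False))
g_dep_0 = gain(io_curve(rates, 0.0, depressing=True))
g_dep_i = gain(io_curve(rates, inhibition, depressing=True))

# Linear synapse: inhibition shifts the curve additively (slope unchanged).
# Depressing synapse: inhibition scales the slope (a gain change).
print(g_lin_0, g_lin_i, g_dep_0, g_dep_i)
```

The parameters (r0 = 50, the 0.2 weight, the inhibition level) are made up for illustration; the qualitative effect is the point.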






Re: [agi] Doubts raised over brain scan findings

2009-01-14 Thread Mike Tintner


Richard: I'm afraid I do not have the time to argue,
simply because the level of general expertise here is not such that I
can discuss it without explaining the whole critique from scratch.

Thanks for the refs.   It is all important. But an imprecise neuronal
correlation with emotions (or awareness of them) doesn't strike me as a very
big deal, since emotions are so vague anyway.  If you have criticisms of the
lack of correlation with more precise cognitive observations, like words or
sights, that would be very interesting.







Re: [agi] Doubts raised over brain scan findings

2009-01-14 Thread Vladimir Nesov
On Thu, Jan 15, 2009 at 4:34 AM, Richard Loosemore  wrote:
> Vladimir Nesov wrote:
>>
>> On Thu, Jan 15, 2009 at 3:03 AM, Richard Loosemore 
>> wrote:
>>>
>>> The whole point about the paper referenced above is that they are
>>> collecting
>>> (in a large number of cases) data that is just random noise.
>>>
>>
>> So what? The paper points out a methodological problem that in itself
>> has little to do with neuroscience.
>
> Not correct at all:  this *is* neuroscience.  I don't understand why you say
> that it is not.

From what I got from the abstract and by skimming the paper, it's a
methodological problem in handling data from neuroscience experiments
(bad statistics).
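The statistical failure mode at issue in Vul et al. is non-independence: selecting measures because they correlate strongly, then reporting that same correlation as the finding. A toy simulation of this (hypothetical numbers and procedure, not the paper's actual data) shows how pure noise can yield an impressive-looking result:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_voxels = 16, 10_000

# Pure noise: simulated voxel activations and a behavioural score
# with NO real relationship at all.
voxels = rng.standard_normal((n_voxels, n_subjects))
behaviour = rng.standard_normal(n_subjects)

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

all_r = np.array([corr(v, behaviour) for v in voxels])

# Circular step: pick the voxel BECAUSE it correlates best,
# then report that same correlation as the finding.
best = int(np.argmax(np.abs(all_r)))
reported_r = all_r[best]

# Honest step: re-measure the selected voxel against fresh data.
replication_r = corr(voxels[best], rng.standard_normal(n_subjects))

print(reported_r, replication_r)
```

With thousands of voxels and few subjects, the selected correlation is large even though every number is noise, while the average correlation across all voxels stays near zero.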

>
>> The field as a whole is hardly
>> mortally afflicted with that problem
>
> I mentioned it because there is a context in which this sits.  The context
> is that an entire area - which might be called "deriving psychological
> conclusions from brain scan data" - is getting massive funding and massive
> attention, and yet it is quite arguably in an Emperor's New Clothes state.
>  In other words, the conclusions being drawn are (for a variety of reasons)
> of very dubious quality.
>
>> If you look at any field large enough, there will be bad science.
>
> According to the significant number of people who criticize it, this field
> appears to be dominated by bad science.  This is not just an isolated case.
>

That's a whole new level of alarm, relevant for anyone trying to learn
from neuroscience, but it requires stronger substantiation; a mere 50
papers with confused statistics don't establish it.

-- 
Vladimir Nesov
robot...@gmail.com
http://causalityrelay.wordpress.com/




Re: [agi] Doubts raised over brain scan findings

2009-01-14 Thread Richard Loosemore

Vladimir Nesov wrote:

On Thu, Jan 15, 2009 at 3:03 AM, Richard Loosemore  wrote:

The whole point about the paper referenced above is that they are collecting
(in a large number of cases) data that is just random noise.



So what? The paper points out a methodological problem that in itself
has little to do with neuroscience.


Not correct at all:  this *is* neuroscience.  I don't understand why you 
say that it is not.



The field as a whole is hardly
mortally afflicted with that problem


I mentioned it because there is a context in which this sits.  The 
context is that an entire area - which might be called "deriving 
psychological conclusions from brain scan data" - is getting massive 
funding and massive attention, and yet it is quite arguably in an 
Emperor's New Clothes state.  In other words, the conclusions being 
drawn are (for a variety of reasons) of very dubious quality.


(whether it's even real or not).

It is real.


If you look at any field large enough, there will be bad science.


According to the significant number of people who criticize it, this 
field appears to be dominated by bad science.  This is not just an 
isolated case.



How
is it relevant to study of AGI?


People here are sometimes interested in cognitive science matters, and 
some are interested in the concept of building an AGI by brain 
emulation.  Neuroscience is relevant to that.


Beyond that, this is just an FYI.

I really do not care to put much effort into this.  If people are 
interested, they can read the paper.  But if they doubt the validity of 
the entire idea that there is a problem with neuroscience claims about 
psychological processes, I'm afraid I do not have the time to argue, 
simply because the level of general expertise here is not such that I 
can discuss it without explaining the whole critique from scratch.


As you say, it is not important enough, in an AGI context, to spend much 
time on.





Richard Loosemore




Re: [agi] Doubts raised over brain scan findings

2009-01-14 Thread Ronald C. Blue

So what? The paper points out a methodological problem that in itself
has little to do with neuroscience. The field as a whole is hardly
mortally afflicted with that problem (whether it's even real or not).
If you look at any field large enough, there will be bad science. How
is it relevant to study of AGI?



Your child comes home and says they made a zero on the big test.

A child says they made 80 on the test and failed.  The reason: the 80 was 
the number of questions they missed out of 100.

A child says they got 98 right and the teacher gave them a B.  The reason: 
there were 110 questions on the test.

The value of data is not the data itself but the meaning in the 
global/local system.


The question, then: do you focus on data production in an AGI system, or 
focus your attention on the relativistic meaning of the information?  And 
can that meaning be created in an electronic, sound, light, liquid or 
wavelet system?  Which system will give you the best performance?






Re: [agi] Doubts raised over brain scan findings

2009-01-14 Thread Ronald C. Blue



The whole point about the paper referenced above is that they are
collecting (in a large number of cases) data that is just random noise.



Agreed that it could be random noise, but it could also be related to 
protein production, which is related to nerve cell firings and increased 
oxygen consumption.  I am of the opinion that a high-activity area does not 
necessarily mean this is where a particular signal is being processed or 
created.

Example:

http://vsg.quasihome.com/interfer.htm

Notice the interference pattern projected on the back wall.  The pattern is 
a holographic reading.  Nerve cells are not like photographic plates, where 
a positive picture generates a negative picture.  They are more like a 
saxophone: the resonance from a vibrating reed causes sound to reverberate 
in chambers.  So if you did a "brain scan" of a saxophone, you might miss 
the importance of the reed and the air flow in making the resonant sounds.  
In my opinion, the areas of the brain that are not firing are more important 
than those that are.  How else can you explain low brain activity in a 
master chess player and massive brain firings in a novice chess player?


To know means less neurological work.

Ron





Re: [agi] Doubts raised over brain scan findings

2009-01-14 Thread Vladimir Nesov
On Thu, Jan 15, 2009 at 3:03 AM, Richard Loosemore  wrote:
>
> The whole point about the paper referenced above is that they are collecting
> (in a large number of cases) data that is just random noise.
>

So what? The paper points out a methodological problem that in itself
has little to do with neuroscience. The field as a whole is hardly
mortally afflicted with that problem (whether it's even real or not).
If you look at any field large enough, there will be bad science. How
is it relevant to study of AGI?

-- 
Vladimir Nesov
robot...@gmail.com
http://causalityrelay.wordpress.com/




Re: [agi] Doubts raised over brain scan findings

2009-01-14 Thread Richard Loosemore

Vladimir Nesov wrote:

On Wed, Jan 14, 2009 at 10:59 PM, Richard Loosemore  wrote:

For anyone interested in recent discussions of neuroscience and the level of
scientific validity of the various brain-scan claims, the study by Vul et
al, discussed here:

http://www.newscientist.com/article/mg20126914.700-doubts-raised-over-brain-scan-findings.html

and available here:

http://www.pashler.com/Articles/Vul_etal_2008inpress.pdf

... is a welcome complement to the papers by Trevor Harley (and myself).


The title of the paper is "Voodoo Correlations in Social Neuroscience", and
that use of the word "voodoo" pretty much sums up the attitude of a number
of critics of the field.

We've attacked from a different direction, but we had a wide range of
targets to choose, believe me.

The short version of the overall story is that neuroscience is out of
control as far as overinflated claims go.



Richard, even if your concerns are somewhat valid, why is it
interesting here? It's not like neuroscience is dominated by
discussions of (mis)interpretation of results, they are collecting
data, and with that they are steadily getting somewhere.


I don't understand.

The whole point about the paper referenced above is that they are 
collecting (in a large number of cases) data that is just random noise.


And in the work that I did, analyzing several neuroscience papers, the 
conclusion was that many of their conclusions were unfounded.


That is exactly the opposite of what you just said:  they are not 
"steadily getting somewhere"; they are filling the research world with 
noise.  I do not understand how you can see what was said in the above 
paper and yet say what you just said.


Bear in mind that we are targeting the (extremely large number of) 
claims of "psychological validity" that are coming out of the 
neuroscience community.  If they collect data and do not make 
psychological claims, all power to them.



I don't particularly want to get into an argument about it.  It was just 
a little backup information for what I said before.





Richard Loosemore




Re: [agi] just a thought

2009-01-14 Thread Pei Wang
On Wed, Jan 14, 2009 at 4:40 PM, Joshua Cowan  wrote:
> Is having a strong sense of self one aspect of "mature enough"?

I meant something more basic --- you need to have an individual system
complete and running, before you can have a society of individuals.

> Also, Dr. Wang, do you see this as a primary way of teaching empathy?

Yes, as well as everything else that depends on social experience.

> I believe Ben
> has written about hardwiring the desire to work with other agents as a
> possible means of encouraging empathy. Do you agree with this approach
> and/or have other ideas for encouraging empathy (assuming you see empathy as
> a good goal)?

It is too big a topic for me to explain at the moment, but you can
take my abstract at http://nars.wang.googlepages.com/gti-5 as a
starting point.

Pei

>
>> From: "Pei Wang" 
>> Reply-To: agi@v2.listbox.com
>> To: agi@v2.listbox.com
>> Subject: Re: [agi] just a thought
>> Date: Wed, 14 Jan 2009 16:21:23 -0500
>>
>> I guess something like this is in the plan of many, if not all, AGI
>> projects. For NARS, see
>> http://nars.wang.googlepages.com/wang.roadmap.pdf , under "(4)
>> Socialization" in page 11.
>>
>> It is just that to attempt any non-trivial multi-agent experiment, the
>> work in single agent needs to be mature enough. The AGI projects are
>> not there yet.
>>
>> Pei
>>
>> On Wed, Jan 14, 2009 at 4:10 PM, Valentina Poletti 
>> wrote:
>> > Cool,
>> >
>> > this idea has already been applied successfully to some areas of AI,
>> > such as
>> > ant-colony algorithms and swarm intelligence algorithms. But I was
>> > thinking
>> > that it would be interesting to apply it at a high level. For example,
>> > consider that you create the best AGI agent you can come up with and,
>> > instead of running just one, you create several copies of it (perhaps
>> > with
>> > slight variations), and you initiate each in a different part of your
>> > reality or environment for such agents, after letting them have the
>> > ability
>> > to communicate. In this way whenever one such agents learns anything
>> > meaningful he passes the information to all other agents as well, that
>> > is,
>> > it not only modifies its own policy but it also affects the other's to
>> > some
>> > extent (determined by some constant or/and by how much the other agent
>> > likes
>> > this one, that is how useful learning from it has been in the past and
>> > so
>> > on). This way not only each agent would learn much faster, but also the
>> > agents could learn to use this communication ability to their advantage
>> > to
>> > ameliorate. I just think it would be interesting to implement this, not
>> > that
>> > I am capable of right now.
>> >
>> >
>> > On Wed, Jan 14, 2009 at 2:34 PM, Bob Mottram  wrote:
>> >>
>> >> 2009/1/14 Valentina Poletti :
>> >> > Anyways my point is, the reason why we have achieved so much
>> >> > technology,
>> >> > so
>> >> > much knowledge in this time is precisely the "we", it's the union of
>> >> > several
>> >> > individuals together with their ability to communicate with one-other
>> >> > that
>> >> > has made us advance so much. In a sense we are a single being with
>> >> > millions
>> >> > of eyes, ears, hands, brains, which alltogether can create amazing
>> >> > things.
>> >> > But take any human being alone, isolate him/her from any contact with
>> >> > any
>> >> > other human being and rest assured he/she will not achieve a single
>> >> > artifact
>> >> > of technology. In fact he/she might not survive long.
>> >>
>> >>
>> >> Yes.  I think Ben made a similar point in The Hidden Pattern.  People
>> >> studying human intelligence - psychologists, psychiatrists, cognitive
>> >> scientists, etc - tend to focus narrowly on the individual brain, but
>> >> human intelligence is more of an emergent networked phenomena
>> >> populated by strange meta-entities such as archetypes and memes.  Even
>> >> the greatest individuals from the world of science or art didn't make
>> >> their achievements in a vacuum, and were influenced by earlier works.
>> >>
>> >> Years ago I was chatting with someone who was about to patent some
>> >> piece of machinery.  He had his name on the patent, but was pointing
>> >> out that it's very difficult to be able to say exactly who made the
>> >> invention - who was the "guiding mind".  In this case many individuals
>> >> within his company had some creative input, and there was really no
>> >> one "inventor" as such.  I think many human-made artifacts are like
>> >> this.
>> >>
>> >>
>> >> ---
>> >> agi
>> >> Archives: https://www.listbox.com/member/archive/303/=now
>> >> RSS Feed: https://www.listbox.com/member/archive/rss/303/
>> >> Modify Your Subscription: https://www.listbox.com/member/?&;
>> >> Powered by Listbox: http://www.listbox.com
>> >
>> >
>> >
>> > --
>> > A true friend stabs you in the front. - O. Wilde
>> >
>> > Einstein once thought he was wrong; then he discovered he was wrong

Re: [agi] Doubts raised over brain scan findings

2009-01-14 Thread Vladimir Nesov
On Wed, Jan 14, 2009 at 10:59 PM, Richard Loosemore  wrote:
>
> For anyone interested in recent discussions of neuroscience and the level of
> scientific validity of the various brain-scan claims, the study by Vul et
> al, discussed here:
>
> http://www.newscientist.com/article/mg20126914.700-doubts-raised-over-brain-scan-findings.html
>
> and available here:
>
> http://www.pashler.com/Articles/Vul_etal_2008inpress.pdf
>
> ... is a welcome complement to the papers by Trevor Harley (and myself).
>
>
> The title of the paper is "Voodoo Correlations in Social Neuroscience", and
> that use of the word "voodoo" pretty much sums up the attitude of a number
> of critics of the field.
>
> We've attacked from a different direction, but we had a wide range of
> targets to choose, believe me.
>
> The short version of the overall story is that neuroscience is out of
> control as far as overinflated claims go.
>

Richard, even if your concerns are somewhat valid, why is it
interesting here? It's not like neuroscience is dominated by
discussions of (mis)interpretation of results, they are collecting
data, and with that they are steadily getting somewhere.

-- 
Vladimir Nesov
robot...@gmail.com
http://causalityrelay.wordpress.com/




Re: [agi] just a thought

2009-01-14 Thread Joshua Cowan
Is having a strong sense of self one aspect of "mature enough"? Also, Dr. 
Wang, do you see this as a primary way of teaching empathy? I believe Ben 
has written about hardwiring the desire to work with other agents as a 
possible means of encouraging empathy. Do you agree with this approach 
and/or have other ideas for encouraging empathy (assuming you see empathy as 
a good goal)?




From: "Pei Wang" 
Reply-To: agi@v2.listbox.com
To: agi@v2.listbox.com
Subject: Re: [agi] just a thought
Date: Wed, 14 Jan 2009 16:21:23 -0500

I guess something like this is in the plan of many, if not all, AGI
projects. For NARS, see
http://nars.wang.googlepages.com/wang.roadmap.pdf , under "(4)
Socialization" in page 11.

It is just that to attempt any non-trivial multi-agent experiment, the
work in single agent needs to be mature enough. The AGI projects are
not there yet.

Pei

On Wed, Jan 14, 2009 at 4:10 PM, Valentina Poletti  
wrote:

> Cool,
>
> this idea has already been applied successfully to some areas of AI, 
such as
> ant-colony algorithms and swarm intelligence algorithms. But I was 
thinking

> that it would be interesting to apply it at a high level. For example,
> consider that you create the best AGI agent you can come up with and,
> instead of running just one, you create several copies of it (perhaps 
with

> slight variations), and you initiate each in a different part of your
> reality or environment for such agents, after letting them have the 
ability

> to communicate. In this way whenever one such agents learns anything
> meaningful he passes the information to all other agents as well, that 
is,
> it not only modifies its own policy but it also affects the other's to 
some
> extent (determined by some constant or/and by how much the other agent 
likes
> this one, that is how useful learning from it has been in the past and 
so

> on). This way not only each agent would learn much faster, but also the
> agents could learn to use this communication ability to their advantage 
to
> ameliorate. I just think it would be interesting to implement this, not 
that

> I am capable of right now.
>
>
> On Wed, Jan 14, 2009 at 2:34 PM, Bob Mottram  wrote:
>>
>> 2009/1/14 Valentina Poletti :
>> > Anyways my point is, the reason why we have achieved so much 
technology,

>> > so
>> > much knowledge in this time is precisely the "we", it's the union of
>> > several
>> > individuals together with their ability to communicate with one-other
>> > that
>> > has made us advance so much. In a sense we are a single being with
>> > millions
>> > of eyes, ears, hands, brains, which alltogether can create amazing
>> > things.
>> > But take any human being alone, isolate him/her from any contact with
>> > any
>> > other human being and rest assured he/she will not achieve a single
>> > artifact
>> > of technology. In fact he/she might not survive long.
>>
>>
>> Yes.  I think Ben made a similar point in The Hidden Pattern.  People
>> studying human intelligence - psychologists, psychiatrists, cognitive
>> scientists, etc - tend to focus narrowly on the individual brain, but
>> human intelligence is more of an emergent networked phenomena
>> populated by strange meta-entities such as archetypes and memes.  Even
>> the greatest individuals from the world of science or art didn't make
>> their achievements in a vacuum, and were influenced by earlier works.
>>
>> Years ago I was chatting with someone who was about to patent some
>> piece of machinery.  He had his name on the patent, but was pointing
>> out that it's very difficult to be able to say exactly who made the
>> invention - who was the "guiding mind".  In this case many individuals
>> within his company had some creative input, and there was really no
>> one "inventor" as such.  I think many human-made artifacts are like
>> this.
>>
>>
>
>
>
> --
> A true friend stabs you in the front. - O. Wilde
>
> Einstein once thought he was wrong; then he discovered he was wrong.
>
> For every complex problem, there is an answer which is short, simple and
> wrong. - H.L. Mencken








Re: [agi] just a thought

2009-01-14 Thread Pei Wang
I guess something like this is in the plan of many, if not all, AGI
projects. For NARS, see
http://nars.wang.googlepages.com/wang.roadmap.pdf , under "(4)
Socialization" on page 11.

It is just that to attempt any non-trivial multi-agent experiment, the
work on a single agent needs to be mature enough. The AGI projects are
not there yet.

Pei

On Wed, Jan 14, 2009 at 4:10 PM, Valentina Poletti  wrote:
> Cool,
>
> this idea has already been applied successfully to some areas of AI, such as
> ant-colony algorithms and swarm intelligence algorithms. But I was thinking
> that it would be interesting to apply it at a high level. For example,
> consider that you create the best AGI agent you can come up with and,
> instead of running just one, you create several copies of it (perhaps with
> slight variations), and you initiate each in a different part of your
> reality or environment for such agents, after letting them have the ability
> to communicate. In this way whenever one such agents learns anything
> meaningful he passes the information to all other agents as well, that is,
> it not only modifies its own policy but it also affects the other's to some
> extent (determined by some constant or/and by how much the other agent likes
> this one, that is how useful learning from it has been in the past and so
> on). This way not only each agent would learn much faster, but also the
> agents could learn to use this communication ability to their advantage to
> ameliorate. I just think it would be interesting to implement this, not that
> I am capable of right now.
>
>
> On Wed, Jan 14, 2009 at 2:34 PM, Bob Mottram  wrote:
>>
>> 2009/1/14 Valentina Poletti :
>> > Anyways my point is, the reason why we have achieved so much technology,
>> > so
>> > much knowledge in this time is precisely the "we", it's the union of
>> > several
>> > individuals together with their ability to communicate with one-other
>> > that
>> > has made us advance so much. In a sense we are a single being with
>> > millions
>> > of eyes, ears, hands, brains, which alltogether can create amazing
>> > things.
>> > But take any human being alone, isolate him/her from any contact with
>> > any
>> > other human being and rest assured he/she will not achieve a single
>> > artifact
>> > of technology. In fact he/she might not survive long.
>>
>>
>> Yes.  I think Ben made a similar point in The Hidden Pattern.  People
>> studying human intelligence - psychologists, psychiatrists, cognitive
>> scientists, etc - tend to focus narrowly on the individual brain, but
>> human intelligence is more of an emergent networked phenomena
>> populated by strange meta-entities such as archetypes and memes.  Even
>> the greatest individuals from the world of science or art didn't make
>> their achievements in a vacuum, and were influenced by earlier works.
>>
>> Years ago I was chatting with someone who was about to patent some
>> piece of machinery.  He had his name on the patent, but was pointing
>> out that it's very difficult to be able to say exactly who made the
>> invention - who was the "guiding mind".  In this case many individuals
>> within his company had some creative input, and there was really no
>> one "inventor" as such.  I think many human-made artifacts are like
>> this.
>>
>>
>
>
>
> --
> A true friend stabs you in the front. - O. Wilde
>
> Einstein once thought he was wrong; then he discovered he was wrong.
>
> For every complex problem, there is an answer which is short, simple and
> wrong. - H.L. Mencken




Re: [agi] just a thought

2009-01-14 Thread Mike Tintner
Chris: Problems with IQ notwithstanding, I'm confident that, were my silly IQ
of 145 merely doubled, ...


Chris/Matt: Hasn't anyone ever told you - it's not the size of it, it's what 
you do with it that counts? 







Re: [agi] just a thought

2009-01-14 Thread Valentina Poletti
Cool,

this idea has already been applied successfully to some areas of AI, such as
ant-colony and swarm-intelligence algorithms. But I was thinking that it would
be interesting to apply it at a higher level. For example, suppose you create
the best AGI agent you can come up with and, instead of running just one, you
create several copies of it (perhaps with slight variations), initiate each in
a different part of the agents' reality or environment, and give them the
ability to communicate. Whenever one such agent learns anything meaningful, it
passes the information to all the other agents as well; that is, it not only
modifies its own policy but also affects the others' to some extent
(determined by some constant and/or by how much the other agent trusts this
one, i.e. how useful learning from it has been in the past, and so on). Not
only would each agent learn much faster, but the agents could also learn to
use this communication ability to their advantage and improve themselves. I
just think it would be interesting to implement this, not that I am capable of
it right now.
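A minimal sketch of what such policy-sharing might look like (the tabular
value update, the class names, and the trust constant are my own illustrative
assumptions, not a real AGI design): each agent keeps its own value table, and
whenever it learns from a reward it pushes a discounted copy of the same
update to every peer.

```python
from collections import defaultdict

class Agent:
    """Tabular learner that can absorb lessons from its peers."""

    def __init__(self, alpha=0.5, peer_weight=0.1):
        self.q = defaultdict(float)     # (state, action) -> estimated value
        self.alpha = alpha              # learning rate for own experience
        self.peer_weight = peer_weight  # trust constant for peers' lessons

    def learn(self, state, action, reward, peers=()):
        key = (state, action)
        # Update own policy from direct experience...
        self.q[key] += self.alpha * (reward - self.q[key])
        # ...then broadcast the same lesson, discounted by each peer's trust.
        for peer in peers:
            peer.q[key] += peer.peer_weight * (reward - peer.q[key])

agents = [Agent() for _ in range(5)]
# Agent 0 alone discovers that action "b" pays off in state "s";
# after a few repetitions every copy comes to value it too.
for _ in range(20):
    agents[0].learn("s", "b", reward=1.0, peers=agents[1:])
```

Here `peer_weight` plays the role of the "constant and/or trust" factor
above; making it depend on how useful a peer's past lessons were would give
the adaptive variant.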


On Wed, Jan 14, 2009 at 2:34 PM, Bob Mottram  wrote:

> 2009/1/14 Valentina Poletti :
> > Anyways my point is, the reason why we have achieved so much technology,
> so
> > much knowledge in this time is precisely the "we", it's the union of
> several
> > individuals together with their ability to communicate with one another
> that
> > has made us advance so much. In a sense we are a single being with
> millions
> > of eyes, ears, hands, brains, which all together can create amazing
> things.
> > But take any human being alone, isolate him/her from any contact with any
> > other human being and rest assured he/she will not achieve a single
> artifact
> > of technology. In fact he/she might not survive long.
>
>
> Yes.  I think Ben made a similar point in The Hidden Pattern.  People
> studying human intelligence - psychologists, psychiatrists, cognitive
> scientists, etc - tend to focus narrowly on the individual brain, but
> human intelligence is more of an emergent networked phenomenon
> populated by strange meta-entities such as archetypes and memes.  Even
> the greatest individuals from the world of science or art didn't make
> their achievements in a vacuum, and were influenced by earlier works.
>
> Years ago I was chatting with someone who was about to patent some
> piece of machinery.  He had his name on the patent, but was pointing
> out that it's very difficult to be able to say exactly who made the
> invention - who was the "guiding mind".  In this case many individuals
> within his company had some creative input, and there was really no
> one "inventor" as such.  I think many human-made artifacts are like
> this.
>
>
>



-- 
A true friend stabs you in the front. - O. Wilde

Einstein once thought he was wrong; then he discovered he was wrong.

For every complex problem, there is an answer which is short, simple and
wrong. - H.L. Mencken





[agi] Doubts raised over brain scan findings

2009-01-14 Thread Richard Loosemore


For anyone interested in recent discussions of neuroscience and the 
level of scientific validity to the various brain-scan claims, the 
study by Vul et al, discussed here:


http://www.newscientist.com/article/mg20126914.700-doubts-raised-over-brain-scan-findings.html

and available here:

http://www.pashler.com/Articles/Vul_etal_2008inpress.pdf

... is a welcome complement to the papers by Trevor Harley (and myself).


The title of the paper is "Voodoo Correlations in Social Neuroscience", 
and that use of the word "voodoo" pretty much sums up the attitude of a 
number of critics of the field.


We've attacked from a different direction, but we had a wide range of 
targets to choose from, believe me.


The short version of the overall story is that neuroscience is out of 
control as far as overinflated claims go.





Richard Loosemore




Re: [agi] Encouraging?

2009-01-14 Thread Matt Mahoney
--- On Wed, 1/14/09, Mike Tintner  wrote:

> "You have talked about past recessions being real
> opportunities for business. But in past recessions,
> wasn't business able to get lending? And doesn't the
> tightness of the credit market today inhibit some
> opportunities?
> 
> Typically not. Most new innovations are started without
> access to credit in good times or bad. Microsoft (MSFT) was
> started without any access to credit. It's only in crazy
> times that people lend money to people who are experimenting
> with innovations. Most of the great businesses today were
> started with neither a lot of venture capital nor with any
> bank lending until five or six years after they were [up and
> running]." 

This is the IQ testing problem again. The genius of Socrates wasn't recognized 
until after he was executed. Modern Nobel prize winners are awarded for work 
done decades ago. How do you distinguish one genius from millions of cranks? 
You wait until the rest of society catches up in intelligence.

-- Matt Mahoney, matmaho...@yahoo.com





Re: [agi] Encouraging?

2009-01-14 Thread Steve Richfield
Mike,

On 1/14/09, Mike Tintner  wrote:
>
> "You have talked about past recessions being real opportunities for
> business. But in past recessions, wasn't business able to get lending? And
> doesn't the tightness of the credit market today inhibit some opportunities?


It definitely changes things. This could all change in a heartbeat. Suppose
for a moment that Saudi Arabia decided to secure its Riyal (their dollar)
with a liter of oil. The Trillions now sitting in Federal Reserve accounts
without interest, and at risk of the dollar collapsing, could simply be
transferred into Riyals instead and be secure. Of course, this would
instantly bankrupt the U.S. government and many of those Trillions would be
lost, but it WOULD instantly restore whatever survived of the worldwide
monetary system.

Typically not. Most new innovations are started without access to credit in
> good times or bad.


Only because businesses can't recognize a good thing when they see it, e.g.
Xerox not seeing the value of its early windowing interface.

Microsoft (MSFT) was started without any access to credit.


Unless you count the millions his parents had available to help him over the
rough spots.

It's only in crazy times that people lend money to people who are
> experimenting with innovations.


In ordinary times, they want stock instead, with its MUCH greater upside
potential.

Most of the great businesses today were started with neither a lot of
> venture capital nor with any bank lending until five or six years after they
> were [up and running]."


This really gets down to what "up and running" is. For HP, they were making
light bulb stabilized audio oscillators in their garage.

Because of numerous possibilities like a secured Riyal mentioned above, I
suspect that things will instantly change one way or another as quickly as
they came down from Credit Default Swaps, a completely hidden boondoggle
until it "went off".

Note how Zaire solved their monetary problems years ago. They closed their
borders over the Christmas holidays and exchanged new dollars for old. Then
they re-opened their borders, leaving the worthless old dollars held by
foreigners as someone ELSE's problem.

Mexico went through a sudden 1000:1 devaluation to solve their problems. In
one stroke this wiped out their foreign debt.

Expect something REALLY dramatic in the relatively near future. I suspect
that our bribe-o-cratic form of government will prohibit our taking
preemptive action, and thereby leave us at the (nonexistent) mercy of other
powers that aren't so inhibited.

I have NEVER seen a desperate action based on a simple lack of alternatives,
like the various proposed stimulus plans, ever work. "Never expect a problem
to be solved by the same mindset that created it" (Einstein). The lack of
investment money will soon be seen as the least of our problems.

On a side note, there hasn't been $20 spent on real genuine industrial
research in the last decade. This means that you can own the field of your
choice by simply investing a low level of research effort, and waiting for
things to change. I have selected 3 narrow disjoint areas and now appear to
be a/the leader in each. I am just waiting for "the world" to recognize that
it desperately needs one of them.

Any thoughts?

Steve Richfield





[agi] Encouraging?

2009-01-14 Thread Mike Tintner
"You have talked about past recessions being real opportunities for 
business. But in past recessions, wasn't business able to get lending? And 
doesn't the tightness of the credit market today inhibit some opportunities?


Typically not. Most new innovations are started without access to credit in 
good times or bad. Microsoft (MSFT) was started without any access to 
credit. It's only in crazy times that people lend money to people who are 
experimenting with innovations. Most of the great businesses today were 
started with neither a lot of venture capital nor with any bank lending 
until five or six years after they were [up and running]." 







RE: [agi] just a thought

2009-01-14 Thread John G. Rose
> From: Matt Mahoney [mailto:matmaho...@yahoo.com]
> 
> --- On Wed, 1/14/09, John G. Rose  wrote:
> 
> > How do you measure the collective IQ of humanity?
> > Individual IQ's are just a subset.
> 
> Good question. Some possibilities:
> 
> - World GDP ($54 trillion in 2007).
> - Size of the population that can be supported (> 6 billion).
> - Average life expectancy (66 years).
> - Number of bits of recorded information.
> - Combined processing power of brains and computers in OPS.
> 

Here's one: a change in the persistence rate of new-meme generation can be
correlated with a change in collective intelligence. IOW, fewer new ideas,
less increase in intelligence for that particular component of collective
intelligence... there may be other components. OR, if that component has
reached some sort of local maximum, the new-meme persistence rate will
decrease, because it can only get so smart due to the incomputability of
Kolmogorov complexity...
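One crude way to operationalize that persistence rate, purely as a sketch
(the meme sets below are invented examples, not real data): count what
fraction of memes coined in one period still appear in a later corpus.

```python
def persistence_rate(new_memes, later_corpus):
    """Fraction of newly coined memes still in circulation later on."""
    if not new_memes:
        return 0.0
    surviving = sum(1 for meme in new_memes if meme in later_corpus)
    return surviving / len(new_memes)

# Hypothetical data: which of 2008's new memes survive into 2009?
coined_2008 = {"credit default swap", "swarm intelligence", "flash mob"}
corpus_2009 = {"credit default swap", "swarm intelligence", "bailout"}
rate = persistence_rate(coined_2008, corpus_2009)
print(rate)  # 2 of the 3 new memes persist
```

Tracking how this number moves over successive periods would give the
trend that the paragraph above proposes as one component of collective
intelligence.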

John





RE: [agi] just a thought

2009-01-14 Thread Matt Mahoney
--- On Wed, 1/14/09, John G. Rose  wrote:

> How do you measure the collective IQ of humanity?
> Individual IQ's are just a subset.

Good question. Some possibilities:

- World GDP ($54 trillion in 2007).
- Size of the population that can be supported (> 6 billion).
- Average life expectancy (66 years).
- Number of bits of recorded information.
- Combined processing power of brains and computers in OPS.

-- Matt Mahoney, matmaho...@yahoo.com





RE: [agi] just a thought

2009-01-14 Thread John G. Rose
> From: Matt Mahoney [mailto:matmaho...@yahoo.com]
> --- On Wed, 1/14/09, Christopher Carr  wrote:
> 
> > Problems with IQ notwithstanding, I'm confident that, were my silly IQ
> of 145 merely doubled, I could convince Dr. Goertzel to give me the
> majority of his assets, including control of his businesses. And if he
> were to really meet someone that bright, he would be a fool or
> super-human not to do so, which he isn't (a fool, that is).
> 
> First, if you knew what you would do if you were twice as smart, you
> would already be that smart. Therefore you don't know.
> 
> Second, you have never even met anyone with an IQ of 290. How do you
> know what they would do?
> 
> How do you measure an IQ of 100n?
> 
> - Ability to remember n times as much?
> - Ability to learn n times faster?
> - Ability to solve problems n times faster?
> - Ability to do the work of n people?
> - Ability to make n times as much money?
> - Ability to communicate with n people at once?
> 
> Please give me an IQ test that measures something that can't be done by
> n log n people (allowing for some organizational overhead).
> 

How do you measure the collective IQ of humanity? Individual IQ's are just a
subset.

John





Re: [agi] just a thought

2009-01-14 Thread Matt Mahoney
--- On Wed, 1/14/09, Christopher Carr  wrote:

> Problems with IQ notwithstanding, I'm confident that, were my silly IQ
> of 145 merely doubled, I could convince Dr. Goertzel to give me the
> majority of his assets, including control of his businesses. And if he
> were to really meet someone that bright, he would be a fool or
> super-human not to do so, which he isn't (a fool, that is).

First, if you knew what you would do if you were twice as smart, you would 
already be that smart. Therefore you don't know.

Second, you have never even met anyone with an IQ of 290. How do you know what 
they would do?

How do you measure an IQ of 100n?

- Ability to remember n times as much?
- Ability to learn n times faster?
- Ability to solve problems n times faster?
- Ability to do the work of n people?
- Ability to make n times as much money?
- Ability to communicate with n people at once?

Please give me an IQ test that measures something that can't be done by n log n 
people (allowing for some organizational overhead).

-- Matt Mahoney, matmaho...@yahoo.com





Re: [agi] just a thought

2009-01-14 Thread Mike Tintner

Ron et al:

I suspect that you guys are thinking like classic nerds -  "ok we just 
*multiply* the number of minds" - a purely mathematical operation. But 
actually social thinking is much more than that - the minds will really need 
to *interact* - they can't just *parallel process* in the current meaning of 
the term. They would need to do the equivalent of what happens here and in 
every organization and society - adopt competing positions, form into 
opposing schools of thought, refuse to talk to each other and go off in 
theatrical huffs like some who shall be nameless here, conspire against each 
other, form alliances - & of course have sex with each other.


Ron & co:


Anyways my point is, the reason why we have achieved so much 
technology, so much knowledge in this time is precisely the "we", it's 
the union of several individuals together with their ability to 
communicate with one another that has made us advance so much.


I agree. A machine that is 10 times as smart as a human in every way 
could not achieve much more than hiring 10 more people. In order to 
automate the economy, we have to replicate the capabilities of not one 
human mind, but a system of 10^10 minds. That is why my AGI proposal is 
so hideously expensive.

http://www.mattmahoney.net/agi2.html




Now really expensive if quantum entanglement is in fact present in a 
hybrid of quantum circuits stored in carbon tetrachloride functioning as a 
capacitor.  In principle 420 billion human minds or about  84 octillion 
qubits can be stored entangled in 8 Mayonnaise jars of carbon 
tetrachloride. Carbon tetrachloride causes cancer and requires a 
government permit to use.











Re: [agi] just a thought

2009-01-14 Thread Bob Mottram
2009/1/14 Valentina Poletti :
> Anyways my point is, the reason why we have achieved so much technology, so
> much knowledge in this time is precisely the "we", it's the union of several
> individuals together with their ability to communicate with one another that
> has made us advance so much. In a sense we are a single being with millions
> of eyes, ears, hands, brains, which all together can create amazing things.
> But take any human being alone, isolate him/her from any contact with any
> other human being and rest assured he/she will not achieve a single artifact
> of technology. In fact he/she might not survive long.


Yes.  I think Ben made a similar point in The Hidden Pattern.  People
studying human intelligence - psychologists, psychiatrists, cognitive
scientists, etc - tend to focus narrowly on the individual brain, but
human intelligence is more of an emergent networked phenomenon
populated by strange meta-entities such as archetypes and memes.  Even
the greatest individuals from the world of science or art didn't make
their achievements in a vacuum, and were influenced by earlier works.

Years ago I was chatting with someone who was about to patent some
piece of machinery.  He had his name on the patent, but was pointing
out that it's very difficult to be able to say exactly who made the
invention - who was the "guiding mind".  In this case many individuals
within his company had some creative input, and there was really no
one "inventor" as such.  I think many human-made artifacts are like
this.




Re: [agi] just a thought

2009-01-14 Thread Ronald C. Blue





On Wed, Jan 14, 2009 at 4:40 AM, Matt Mahoney  
wrote:

--- On Tue, 1/13/09, Valentina Poletti  wrote:

Anyways my point is, the reason why we have achieved so much technology, 
so much knowledge in this time is precisely the "we", it's the union of 
several individuals together with their ability to communicate with 
one another that has made us advance so much.


I agree. A machine that is 10 times as smart as a human in every way 
could not achieve much more than hiring 10 more people. In order to 
automate the economy, we have to replicate the capabilities of not one 
human mind, but a system of 10^10 minds. That is why my AGI proposal is 
so hideously expensive.

http://www.mattmahoney.net/agi2.html




Now really expensive if quantum entanglement is in fact present in a hybrid 
of quantum circuits stored in carbon tetrachloride functioning as a 
capacitor.  In principle 420 billion human minds or about  84 octillion 
qubits can be stored entangled in 8 Mayonnaise jars of carbon tetrachloride. 
Carbon tetrachloride causes cancer and requires a government permit to use. 






Re: [agi] just a thought

2009-01-14 Thread Mike Tintner
Valentina,

It's a v.g. point & does refer. You're saying there is no individual without
social intelligence, and we'll need a society of AGIs. It does refer to
creativity. Every creative problem we face is actually, strictly, inseparable
from a whole body of creative problems. In trying to solve the engram problem,
Richard is not a lone hero, but part of the vast collective enterprise of
science/scientists trying to understand the brain as a whole, and his eventual
discovery will have to dovetail with others' efforts. So not just one AGI, Ben,
but a whole society of them. He's on to it.

(And every problem we solve - just using language - is also interdependent with
the whole society's efforts/problems - and use of language.) Of course, only a
woman would think about other people here :).
  Valentina:

  Not in reference to any specific current discussion, 


  I find it interesting that when people talk of human like intelligence in the 
realm of AGI, they refer to the ability of a human individual, or human brain 
if you like. It just occurred to me that human beings are not that intelligent. 
Well, of course we are super intelligent compared to a frog (as some would say) 
but then again a frog is super intelligent compared to an ant. 




  Anyways my point is, the reason why we have achieved so much technology, so 
much knowledge in this time is precisely the "we", it's the union of several 
individuals together with their ability to communicate with one another that has 
made us advance so much. In a sense we are a single being with millions of 
eyes, ears, hands, brains, which all together can create amazing things. But 
take any human being alone, isolate him/her from any contact with any other 
human being and rest assured he/she will not achieve a single artifact of 
technology. In fact he/she might not survive long.




  So that's why I think it is important to put emphasis on this when talking 
about super-human intelligence. 




  That was my 2-in-the-morning thought. I guess I should sleep now.







Re: [agi] just a thought

2009-01-14 Thread Christopher Carr

Vladimir Nesov wrote:

On Wed, Jan 14, 2009 at 4:40 AM, Matt Mahoney  wrote:
  

--- On Tue, 1/13/09, Valentina Poletti  wrote:



Anyways my point is, the reason why we have achieved so much technology, so much 
knowledge in this time is precisely the "we", it's the union of several 
individuals together with their ability to communicate with one another that has made us 
advance so much.
  

I agree. A machine that is 10 times as smart as a human in every way could not 
achieve much more than hiring 10 more people. In order to automate the economy, 
we have to replicate the capabilities of not one human mind, but a system of 
10^10 minds. That is why my AGI proposal is so hideously expensive.
http://www.mattmahoney.net/agi2.html




Let's fire Matt and hire 10 chimps instead.

  
Problems with IQ notwithstanding, I'm confident that, were my silly IQ 
of 145 merely doubled, I could convince Dr. Goertzel to give me the 
majority of his assets, including control of his businesses. And if he 
were to really meet someone that bright, he would be a fool or 
super-human not to do so, which he isn't (a fool, that is).


Even a simpleton such as myself can see that Mr. Mahoney is quite on a 
wrong track with this line of thought. Matt can know little or nothing 
about the strategies of such a bright, single agent.


-Christopher Carr


