Re: Risks of competitive message routing (was Re: [agi] Let's face it, this is just dumb.)

2008-10-04 Thread Ben Goertzel
On Fri, Oct 3, 2008 at 9:57 PM, Matt Mahoney [EMAIL PROTECTED] wrote:

 --- On Fri, 10/3/08, Ben Goertzel [EMAIL PROTECTED] wrote:
  You seem to misunderstand the notion of a Global Brain, see
 
  http://pespmc1.vub.ac.be/GBRAIFAQ.html
 
  http://en.wikipedia.org/wiki/Global_brain

 You are right. That is exactly what I am proposing.



It's too bad you missed the Global Brain 0 workshop that Francis Heylighen
and I organized in Brussels in 2001 ...

Some larger follow-up Global Brain conferences were planned, but Francis and
I both got distracted by other things

It would be an exaggeration to say that any real collective conclusions were
arrived at during the workshop, but it was certainly
interesting...





 I am open to alternative suggestions.



Well, what I suggested in my 2002 book Creating Internet Intelligence was
essentially a global brain based on a hybrid model:

-- a human-plus-computer-network global brain along the lines of what you
and Heylighen suggest

coupled with

-- a superhuman AI mind, that interacts with and is coupled with this global
brain

To use a simplistic metaphor,

-- the superhuman AI mind at the center of the hybrid global brain would
provide an overall goal system and attentional-focus, and

-- the human-plus-computer-network portion of the hybrid global brain would
serve as a sort of unconscious for the hybrid global brain...

This is one way that humans may come to, en masse, interact with superhuman
non-human AI

Anyway, this was a fun line of thinking, but since then I have diverted
myself more towards the creation of the superhuman-AI component...

At the time I had a lot of ideas about how to modify Internet infrastructure
so as to make it more conducive to the emergence of a
human-plus-computer-network, collective-intelligence type global brain.  I
think many of those ideas could have worked, but they are not the direction in
which the development of the Net actually went, and obviously I (like you) lack
the influence to nudge the Net-masters in that direction.  Keeping a
build-a-superhuman-AI project moving is not easy either, but it's a more
tractable task...

-- Ben G



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=114414975-3c8e69
Powered by Listbox: http://www.listbox.com


Re: [agi] Let's face it, this is just dumb.

2008-10-03 Thread Brad Paulsen
Wow, that's a pretty strong response there, Matt.  Friends of yours?

If I were in control of such things, I wouldn't DARE walk out of a lab and
announce results like that.  So I have no fear of being the one to bring
that type of criticism on myself.  But, I'm just as vulnerable as any of us
to having colleagues do it for (to) me.

So, yeah.  I have a problem with premature release, or announcement, of a
technology that's associated with an industry in which I work.  It's
irresponsible science when scientists do it.  It's irresponsible marketing
(now, there's a redundant phrase for you) when company management does it.

And, it's irresponsible for you to defend such practices.  That stuff
deserved to be mocked.  Get over it.

Cheers,
Brad


Matt Mahoney wrote:
 So here is another step toward AGI, a hard image classification problem
 solved with near human-level ability, and all I hear is criticism.
 Sheesh! I hope your own work is not attacked like this.
 
 I would understand if the researchers had proposed something stupid like
 using the software in court to distinguish adult and child pornography.
 Please try to distinguish between the research and the commentary by the
 reporters. A legitimate application could be estimating the average age
 plus or minus 2 months of a group of 1000 shoppers in a marketing study.
 
 
 In any case, machine surveillance is here to stay. Get used to it.
 
 -- Matt Mahoney, [EMAIL PROTECTED]
 
 
 --- On Thu, 10/2/08, Bob Mottram [EMAIL PROTECTED] wrote:
 
 From: Bob Mottram [EMAIL PROTECTED]
 Subject: Re: [agi] Let's face it, this is just dumb.
 To: agi@v2.listbox.com
 Date: Thursday, October 2, 2008, 6:21 AM
 2008/10/2 Brad Paulsen [EMAIL PROTECTED]:
  It boasts a 50% recognition accuracy rate +/-5 years and an 80%
  recognition accuracy rate +/-10 years.  Unless, of course, the subject is
  wearing a big floppy hat, makeup or has had Botox treatment recently.  Or
  found his dad's Ronald Reagan mask.  'Nuf said.
 
 
 Yes.  This kind of accuracy would not be good enough to enforce
 age-related rules surrounding the buying of certain products, nor does it
 seem likely to me that refinements of the technique will give the
 needed accuracy.  As you point out, people have been trying to fool
 others about their age for millennia, and this trend is only going to
 complicate matters further.  In the future, if De Grey gets his way, this
 kind of recognition will be useless anyway.
 
 
  P.S. Oh, yeah, and the guy responsible for this project claims it doesn't
  violate anyone's privacy because it can't be used to identify individuals.
  Right.  They don't say who sponsored this research, but I sincerely doubt
  it was the vending machine companies or purveyors of Internet porn.
 
 
 It's good to question the true motives behind something like this, and
  where the funding comes from.  I do a lot of stuff with computer 
 vision, and if someone came to me saying they wanted something to 
 visually recognise the age of a person I'd tell them that they're 
 probably wasting their time, and that indicators other than visual 
 ones would be more likely to give a reliable result.
 
 
 
 




Re: Risks of competitive message routing (was Re: [agi] Let's face it, this is just dumb.)

2008-10-03 Thread Ben Goertzel
Hi,


 CMR (my proposal) has no centralized control (global brain). It is a
 competitive market in which information has negative value. The environment
 is a peer-to-peer network where peers receive messages in natural language,
 cache a copy, and route them to appropriate experts based on content.


You seem to misunderstand the notion of a Global Brain, see

http://pespmc1.vub.ac.be/GBRAIFAQ.html

http://en.wikipedia.org/wiki/Global_brain

It does not require centralized control, but is in fact more focused on
emergent dynamical control mechanisms.



 I believe that CMR is initially friendly in the sense that a market is
 friendly.



Which is to say: dangerous, volatile, hard to predict ... and often not
friendly at all!!!


 A market is the most efficient way to satisfy the collective goals of its
 participants. It is fair, but not benevolent.


I believe this is an extremely oversimplified and dangerous view of
economics ;-)

Traditional economic theory, which argues that free markets are optimally
efficient, is based on a patently false assumption of infinitely rational
economic actors.  This assumption is **particularly** poor when the
economic actors are largely **humans**, who are highly nonrational.

As a single isolated example, note that in the US right now, many people are
withdrawing their $$ from banks even if they have less than $100K in their
accounts ... even though the government insures bank accounts up to $100K.
What are they doing?  Insuring themselves against a total collapse of the US
economic system?  If so they should be buying gold with their $$, but only a
few of them are doing that.  People are in large part emotional, not rational,
actors, and for this reason pure free markets involving humans are far from
the most efficient way to satisfy the collective goals of a set of humans.

Anyway a deep discussion of economics would likely be too big of a
digression, though it may be pertinent insofar as it's a metaphor for the
internal dynamics of an AGI ... (for instance Eric Baum, who is a fairly
hardcore libertarian politically, is in favor of free markets as a model for
credit assignment in AI systems ... and OpenCog/NCE contains an economic
attention allocation component...)
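As a minimal illustration of market-based credit assignment in the spirit of Baum's work (all module names, bid values, and rewards below are invented for this sketch, not taken from any actual system): modules hold wealth, bid for control of the system, and the winning module pays its bid and collects whatever reward follows, so profitable modules accumulate influence over time.

```python
def run_auction(wealth, bids, reward):
    """One round: the highest bidder pays its bid and collects the reward."""
    winner = max(bids, key=bids.get)
    wealth[winner] -= bids[winner]
    wealth[winner] += reward
    return winner

# Invented modules, bids, and reward, purely for illustration.
wealth = {"module_a": 10.0, "module_b": 10.0}
bids = {"module_a": 2.0, "module_b": 1.0}

winner = run_auction(wealth, bids, reward=3.0)
print(winner)              # module_a
print(wealth["module_a"])  # 11.0 -- modules that earn reward accumulate wealth
```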

ben g





Re: [agi] Let's face it, this is just dumb.

2008-10-03 Thread Gabriel Recchia
I remember reading a while back that certain Japanese vending machines
dispensing adult-only materials actually employed such age-estimation
software for a short time, but quickly pulled it after discovering that
teens were thwarting it by holding magazine covers up to the camera. No
floppy hat or Ronald Reagan mask necessary.

On Fri, Oct 3, 2008 at 6:00 AM, Brad Paulsen [EMAIL PROTECTED] wrote:

 Wow, that's a pretty strong response there, Matt.  Friends of yours?

 If I were in control of such things, I wouldn't DARE walk out of a lab and
 announce results like that.  So I have no fear of being the one to bring
 that type of criticism on myself.  But, I'm just as vulnerable as any of us
 to having colleagues do it for (to) me.

 So, yeah.  I have a problem with premature release, or announcement, of a
 technology that's associated with an industry in which I work.  It's
 irresponsible science when scientists do it.  It's irresponsible marketing
 (now, there's a redundant phrase for you) when company management does it.

 And, it's irresponsible for you to defend such practices.  That stuff
 deserved to be mocked.  Get over it.

 Cheers,
 Brad


 Matt Mahoney wrote:
  So here is another step toward AGI, a hard image classification problem
  solved with near human-level ability, and all I hear is criticism.
  Sheesh! I hope your own work is not attacked like this.
 
  I would understand if the researchers had proposed something stupid like
  using the software in court to distinguish adult and child pornography.
  Please try to distinguish between the research and the commentary by the
  reporters. A legitimate application could be estimating the average age
  plus or minus 2 months of a group of 1000 shoppers in a marketing study.
 
 
  In any case, machine surveillance is here to stay. Get used to it.
 
  -- Matt Mahoney, [EMAIL PROTECTED]
 
 
  --- On Thu, 10/2/08, Bob Mottram [EMAIL PROTECTED] wrote:
 
  From: Bob Mottram [EMAIL PROTECTED]
  Subject: Re: [agi] Let's face it, this is just dumb.
  To: agi@v2.listbox.com
  Date: Thursday, October 2, 2008, 6:21 AM
  2008/10/2 Brad Paulsen [EMAIL PROTECTED]:
   It boasts a 50% recognition accuracy rate +/-5 years and an 80%
   recognition accuracy rate +/-10 years.  Unless, of course, the subject is
   wearing a big floppy hat, makeup or has had Botox treatment recently.  Or
   found his dad's Ronald Reagan mask.  'Nuf said.
 
 
  Yes.  This kind of accuracy would not be good enough to enforce
  age-related rules surrounding the buying of certain products, nor does it
  seem likely to me that refinements of the technique will give the
  needed accuracy.  As you point out, people have been trying to fool
  others about their age for millennia, and this trend is only going to
  complicate matters further.  In the future, if De Grey gets his way, this
  kind of recognition will be useless anyway.
 
 
   P.S. Oh, yeah, and the guy responsible for this project claims it doesn't
   violate anyone's privacy because it can't be used to identify individuals.
   Right.  They don't say who sponsored this research, but I sincerely doubt
   it was the vending machine companies or purveyors of Internet porn.
 
 
  It's good to question the true motives behind something like this, and
   where the funding comes from.  I do a lot of stuff with computer
  vision, and if someone came to me saying they wanted something to
  visually recognise the age of a person I'd tell them that they're
  probably wasting their time, and that indicators other than visual
  ones would be more likely to give a reliable result.
 
 
 
 








Re: Risks of competitive message routing (was Re: [agi] Let's face it, this is just dumb.)

2008-10-03 Thread Matt Mahoney
--- On Fri, 10/3/08, Ben Goertzel [EMAIL PROTECTED] wrote:
 You seem to misunderstand the notion of a Global Brain, see

 http://pespmc1.vub.ac.be/GBRAIFAQ.html
 
 http://en.wikipedia.org/wiki/Global_brain

You are right. That is exactly what I am proposing.

I believe that CMR is initially friendly in the sense that a market is 
friendly.

Which is to say: dangerous, volatile, hard to predict ... and often not 
friendly at all!!!

I am open to alternative suggestions.
 
 A market is the most efficient way to satisfy the collective goals of its 
 participants. It is fair, but not benevolent. 

I believe this is an extremely oversimplistic and dangerous view of economics 
;-)

 Traditional economic theory which argues that free markets are optimally 
 efficient, is based on a patently false assumption of infinitely rational 
 economic actors.    This assumption is **particularly** poor when the 
 economic actors are largely **humans**, who are highly nonrational.

I think that CMR will make markets more rational. Humans will have more access 
to information, which will enable them to make more rational decisions. I 
believe that AGI will result in pervasive public surveillance of everyone. All 
of your movements, communication, and financial transactions will be public and 
instantly accessible to anyone. We will demand it, and AGI will make it cheap.
Sure, you could have secrets, but nobody will hire you, loan you money, or buy
from or sell anything to you without knowing everything about you.

Anyway a deep discussion of economics would likely be too big of a digression, 
though it may be pertinent insofar as it's a metaphor for the internal 
dynamics of an AGI ... (for instance Eric Baum, who is a fairly hardcore 
libertarian politically, is in favor of free markets as a model for credit 
assignment in AI systems ... and OpenCog/NCE contains an economic attention 
allocation component...)

Economics is not a metaphor, but is central to the design of distributed AGI. 
There are hard problems that need to be solved. Economic systems have positive 
feedback loops such as speculative investment that are unstable and can crash. 
AGI and instant communication can lead to events where most of the world's 
wealth can disappear in a wave of panic selling traveling at the speed of 
light. I don't believe that competition for resources and a market where
information has negative value have positive feedback loops, but this is
something that needs to be studied.

My concern is that trust networks are unstable. They may lead to monopolies, 
and rare but catastrophic failures when a peer with high reputation decides to 
cheat. This is not just a problem for CMR, but any AGI where knowledge comes 
from many people. How do you know which information to trust?
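The failure mode above can be seen even in a toy reputation model. The sketch below is illustrative only (the update rule and numbers are invented; CMR specifies no particular scheme): a peer accumulates trust through honest behavior, then a single defection exploits all of it before any penalty lands.

```python
def update(reputation, honest, gain=1.0, penalty=0.5):
    """Additive gain for honest behavior, multiplicative loss when caught cheating."""
    return reputation + gain if honest else reputation * penalty

# A peer builds trust over many honest interactions...
rep = 0.0
for _ in range(100):
    rep = update(rep, honest=True)

# ...then defects once.  The harm done scales with the trust already
# accumulated, and the penalty lands only *after* the cheat succeeds.
damage = rep  # victims weighted the bad message by the peer's reputation
rep = update(rep, honest=False)

print(damage)  # 100.0 -- a single defection exploits the full accumulated trust
print(rep)     # 50.0  -- the cheater still retains substantial reputation
```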

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Let's face it, this is just dumb.

2008-10-02 Thread Bob Mottram
2008/10/2 Brad Paulsen [EMAIL PROTECTED]:
 It boasts a 50% recognition accuracy rate +/-5 years and an 80%
 recognition accuracy rate +/-10 years.  Unless, of course, the subject is
 wearing a big floppy hat, makeup or has had Botox treatment recently.  Or
 found his dad's Ronald Reagan mask.  'Nuf said.


Yes.  This kind of accuracy would not be good enough to enforce
age-related rules surrounding the buying of certain products, nor does it
seem likely to me that refinements of the technique will give the
needed accuracy.  As you point out, people have been trying to fool
others about their age for millennia, and this trend is only going to
complicate matters further.  In the future, if De Grey gets his way, this
kind of recognition will be useless anyway.
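For concreteness, the figures quoted earlier in the thread ("50% within +/-5 years, 80% within +/-10 years") are tolerance accuracies, which can be computed as in the sketch below. The sample ages are invented for illustration; this is not the researchers' actual evaluation code.

```python
def tolerance_accuracy(predicted, actual, tolerance):
    """Fraction of predictions whose absolute error is <= tolerance (in years)."""
    hits = sum(1 for p, a in zip(predicted, actual) if abs(p - a) <= tolerance)
    return hits / len(predicted)

# Invented sample data for illustration only (not from the study).
predicted_ages = [23, 31, 45, 19, 60, 38, 52, 27, 41, 35]
actual_ages    = [25, 34, 44, 23, 55, 45, 60, 36, 53, 50]

print(tolerance_accuracy(predicted_ages, actual_ages, 5))   # 0.5 (within +/-5 years)
print(tolerance_accuracy(predicted_ages, actual_ages, 10))  # 0.8 (within +/-10 years)
```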


 P.S. Oh, yeah, and the guy responsible for this project claims it doesn't
 violate anyone's privacy because it can't be used to identify individuals.
  Right.  They don't say who sponsored this research, but I sincerely doubt
 it was the vending machine companies or purveyors of Internet porn.


It's good to question the true motives behind something like this, and
where the funding comes from.  I do a lot of stuff with computer
vision, and if someone came to me saying they wanted something to
visually recognise the age of a person I'd tell them that they're
probably wasting their time, and that indicators other than visual
ones would be more likely to give a reliable result.




Re: [agi] Let's face it, this is just dumb.

2008-10-02 Thread Matt Mahoney
So here is another step toward AGI, a hard image classification problem solved 
with near human-level ability, and all I hear is criticism. Sheesh! I hope your 
own work is not attacked like this.

I would understand if the researchers had proposed something stupid like using 
the software in court to distinguish adult and child pornography. Please try to 
distinguish between the research and the commentary by the reporters. A 
legitimate application could be estimating the average age plus or minus 2 
months of a group of 1000 shoppers in a marketing study.

In any case, machine surveillance is here to stay. Get used to it.

-- Matt Mahoney, [EMAIL PROTECTED]


--- On Thu, 10/2/08, Bob Mottram [EMAIL PROTECTED] wrote:

 From: Bob Mottram [EMAIL PROTECTED]
 Subject: Re: [agi] Let's face it, this is just dumb.
 To: agi@v2.listbox.com
 Date: Thursday, October 2, 2008, 6:21 AM
 2008/10/2 Brad Paulsen [EMAIL PROTECTED]:
  It boasts a 50% recognition accuracy rate +/-5 years and an 80%
  recognition accuracy rate +/-10 years.  Unless, of course, the subject is
  wearing a big floppy hat, makeup or has had Botox treatment recently.  Or
  found his dad's Ronald Reagan mask.  'Nuf said.
 
 
 Yes.  This kind of accuracy would not be good enough to enforce
 age-related rules surrounding the buying of certain products, nor does it
 seem likely to me that refinements of the technique will give the
 needed accuracy.  As you point out, people have been trying to fool
 others about their age for millennia, and this trend is only going to
 complicate matters further.  In the future, if De Grey gets his way,
 this kind of recognition will be useless anyway.
 
 
  P.S. Oh, yeah, and the guy responsible for this project claims it doesn't
  violate anyone's privacy because it can't be used to identify individuals.
  Right.  They don't say who sponsored this research, but I sincerely doubt
  it was the vending machine companies or purveyors of Internet porn.
 
 
 It's good to question the true motives behind something like this, and
 where the funding comes from.  I do a lot of stuff with computer
 vision, and if someone came to me saying they wanted something to
 visually recognise the age of a person I'd tell them that they're
 probably wasting their time, and that indicators other than visual
 ones would be more likely to give a reliable result.





Re: [agi] Let's face it, this is just dumb.

2008-10-02 Thread Ben Goertzel
I hope not to sound like a broken record here ... but ... not every narrow
AI advance is actually a step toward AGI ...

On Thu, Oct 2, 2008 at 12:35 PM, Matt Mahoney [EMAIL PROTECTED] wrote:

 So here is another step toward AGI, a hard image classification problem
 solved with near human-level ability, and all I hear is criticism. Sheesh! I
 hope your own work is not attacked like this.

 I would understand if the researchers had proposed something stupid like
 using the software in court to distinguish adult and child pornography.
 Please try to distinguish between the research and the commentary by the
 reporters. A legitimate application could be estimating the average age plus
 or minus 2 months of a group of 1000 shoppers in a marketing study.

 In any case, machine surveillance is here to stay. Get used to it.

 -- Matt Mahoney, [EMAIL PROTECTED]


 --- On Thu, 10/2/08, Bob Mottram [EMAIL PROTECTED] wrote:

  From: Bob Mottram [EMAIL PROTECTED]
  Subject: Re: [agi] Let's face it, this is just dumb.
  To: agi@v2.listbox.com
  Date: Thursday, October 2, 2008, 6:21 AM
  2008/10/2 Brad Paulsen [EMAIL PROTECTED]:
   It boasts a 50% recognition accuracy rate +/-5 years and an 80%
   recognition accuracy rate +/-10 years.  Unless, of course, the subject is
   wearing a big floppy hat, makeup or has had Botox treatment recently.  Or
   found his dad's Ronald Reagan mask.  'Nuf said.
 
 
  Yes.  This kind of accuracy would not be good enough to enforce
  age-related rules surrounding the buying of certain products, nor does it
  seem likely to me that refinements of the technique will give the
  needed accuracy.  As you point out, people have been trying to fool
  others about their age for millennia, and this trend is only going to
  complicate matters further.  In the future, if De Grey gets his way,
  this kind of recognition will be useless anyway.
 
 
   P.S. Oh, yeah, and the guy responsible for this project claims it doesn't
   violate anyone's privacy because it can't be used to identify individuals.
   Right.  They don't say who sponsored this research, but I sincerely doubt
   it was the vending machine companies or purveyors of Internet porn.
 
 
  It's good to question the true motives behind something like this, and
  where the funding comes from.  I do a lot of stuff with computer
  vision, and if someone came to me saying they wanted something to
  visually recognise the age of a person I'd tell them that they're
  probably wasting their time, and that indicators other than visual
  ones would be more likely to give a reliable result.







-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome   - Dr Samuel Johnson





Re: [agi] Let's face it, this is just dumb.

2008-10-02 Thread Matt Mahoney
--- On Thu, 10/2/08, Ben Goertzel [EMAIL PROTECTED] wrote:

I hope not to sound like a broken record here ... but ... not every
narrow AI advance is actually a step toward AGI ...

It is if AGI is billions of narrow experts and a distributed index to get your 
messages to the right ones.

I understand your objection that it is way too expensive ($1 quadrillion), even 
if it does pay for itself. I would like to be proved wrong...

-- Matt Mahoney, [EMAIL PROTECTED]






Re: [agi] Let's face it, this is just dumb.

2008-10-02 Thread Ben Goertzel
On Thu, Oct 2, 2008 at 2:02 PM, Matt Mahoney [EMAIL PROTECTED] wrote:

 --- On Thu, 10/2/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 I hope not to sound like a broken record here ... but ... not every
 narrow AI advance is actually a step toward AGI ...

 It is if AGI is billions of narrow experts and a distributed index to get
 your messages to the right ones.

 I understand your objection that it is way too expensive ($1 quadrillion),
 even if it does pay for itself. I would like to be proved wrong...


IMO, that would be a very interesting AGI, yet not the **most** interesting
kind due to its primarily heterarchical nature ... the human mind has this
sort of self-organized, widely-distributed aspect, but also a more
centralized, coordinated control aspect.  I think an AGI which similarly
combines these two aspects will be much more interesting and powerful.  For
instance, your proposed AGI would have no explicit self-model, and no
capacity to coordinate a large percentage of its resources into a single
deliberative process.  It's much like what Francis Heylighen envisions
as the Global Brain.  Very interesting, yet IMO not the way to get the
maximum intelligence out of a given amount of computational substrate...


ben g





Re: [agi] Let's face it, this is just dumb.

2008-10-02 Thread Matt Mahoney
 For instance, your proposed AGI would have no explicit self-model, and
no capacity to coordinate a large percentage of its resources into a
single deliberative process.

That's a feature, not a bug. If an AGI could do this, I would regard it as 
dangerous. Who decides what it should do? In my proposal, resources are owned 
by humans who can trade them on a market. Either a large number of people or a 
smaller group with a lot of money would have to be convinced that the problem 
was important. However, the AGI would also make it easy to form complex 
organizations quickly.

-- Matt Mahoney, [EMAIL PROTECTED]

--- On Thu, 10/2/08, Ben Goertzel [EMAIL PROTECTED] wrote:
From: Ben Goertzel [EMAIL PROTECTED]
Subject: Re: [agi] Let's face it, this is just dumb.
To: agi@v2.listbox.com
Date: Thursday, October 2, 2008, 2:08 PM



On Thu, Oct 2, 2008 at 2:02 PM, Matt Mahoney [EMAIL PROTECTED] wrote:

--- On Thu, 10/2/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 I hope not to sound like a broken record here ... but ... not every
 narrow AI advance is actually a step toward AGI ...

 It is if AGI is billions of narrow experts and a distributed index to get
 your messages to the right ones.

 I understand your objection that it is way too expensive ($1 quadrillion),
 even if it does pay for itself. I would like to be proved wrong...

 IMO, that would be a very interesting AGI, yet not the **most** interesting
 kind due to its primarily heterarchical nature ... the human mind has this
 sort of self-organized, widely-distributed aspect, but also a more
 centralized, coordinated control aspect.  I think an AGI which similarly
 combines these two aspects will be much more interesting and powerful.  For
 instance, your proposed AGI would have no explicit self-model, and no
 capacity to coordinate a large percentage of its resources into a single
 deliberative process.  It's much like what Francis Heylighen envisions
 as the Global Brain.  Very interesting, yet IMO not the way to get the
 maximum intelligence out of a given amount of computational substrate...

 ben g
 






  

  


  

  







Re: [agi] Let's face it, this is just dumb.

2008-10-02 Thread Ben Goertzel
More powerful, more interesting, and if done badly quite dangerous,
indeed...

OTOH a global brain coordinating humans and narrow AIs can **also** be
quite dangerous ... and arguably more so, because it's **definitely** very
unpredictable in almost every aspect ... whereas a system with a dual
hierarchical/heterarchical structure and a well-defined goal system may
perhaps be predictable in certain important aspects, if it is designed with
this sort of predictability in mind...

ben

On Thu, Oct 2, 2008 at 2:48 PM, Matt Mahoney [EMAIL PROTECTED] wrote:

  For instance, your proposed AGI would have no explicit self-model, and no
 capacity to coordinate a large percentage of its resources into a single
 deliberative process.

 That's a feature, not a bug. If an AGI could do this, I would regard it as
 dangerous. Who decides what it should do? In my proposal, resources are
 owned by humans who can trade them on a market. Either a large number of
 people or a smaller group with a lot of money would have to be convinced
 that the problem was important. However, the AGI would also make it easy to
 form complex organizations quickly.

 -- Matt Mahoney, [EMAIL PROTECTED]

 --- On *Thu, 10/2/08, Ben Goertzel [EMAIL PROTECTED]* wrote:

 From: Ben Goertzel [EMAIL PROTECTED]
 Subject: Re: [agi] Let's face it, this is just dumb.
 To: agi@v2.listbox.com
 Date: Thursday, October 2, 2008, 2:08 PM



 On Thu, Oct 2, 2008 at 2:02 PM, Matt Mahoney [EMAIL PROTECTED] wrote:

 --- On Thu, 10/2/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 I hope not to sound like a broken record here ... but ... not every
 narrow AI advance is actually a step toward AGI ...

 It is if AGI is billions of narrow experts and a distributed index to get
 your messages to the right ones.

 I understand your objection that it is way too expensive ($1 quadrillion),
 even if it does pay for itself. I would like to be proved wrong...


 IMO, that would be a very interesting AGI, yet not the **most** interesting
 kind due to its primarily heterarchical nature ... the human mind has this
 sort of self-organized, widely-distributed aspect, but also a more
 centralized, coordinated control aspect.  I think an AGI which similarly
 combines these two aspects will be much  more interesting and powerful.  For
 instance, your proposed AGI would have no explicit self-model, and no
 capacity to coordinate a large percentage of its resources into a single
 deliberative process.  It's much like what Francis Heylighen envisions
 as the Global Brain.  Very interesting, yet IMO not the way to get the
 maximum intelligence out of a given amount of computational substrate...


 ben g


  --
   *agi* | Archives: https://www.listbox.com/member/archive/303/=now
  RSS Feed: https://www.listbox.com/member/archive/rss/303/
  Modify Your Subscription: https://www.listbox.com/member/?;
  http://www.listbox.com





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome   - Dr Samuel Johnson



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=114414975-3c8e69
Powered by Listbox: http://www.listbox.com


Risks of competitive message routing (was Re: [agi] Let's face it, this is just dumb.)

2008-10-02 Thread Matt Mahoney
--- On Thu, 10/2/08, Ben Goertzel [EMAIL PROTECTED] wrote:
 OTOH a global brain coordinating humans and narrow-AI's can **also** be quite 
 dangerous ... and arguably more so, because it's **definitely** very 
 unpredictable in almost every aspect ... whereas a system with a dual 
 hierarchical/heterarchical structure and a well-defined goal system, may 
 perhaps be predictable in certain important aspects, if it is designed with 
 this sort of predictability in mind...

CMR (my proposal) has no centralized control (global brain). It is a 
competitive market in which information has negative value. The environment is 
a peer-to-peer network where peers receive messages in natural language, cache 
a copy, and route them to appropriate experts based on content.

Peers have incomplete knowledge of the network, so messages may need to be 
routed via multiple hops through redundant paths to multiple experts. Each 
message identifies the sender and time sent. The receiver is responsible for 
authenticating the sender, e.g. by password and registration via an encrypted 
channel. The sender is a peer, not tied to a human. A human may manage multiple 
identities and be anonymous. Peer owners can set their own policies with regard 
to which messages to keep, route, or discard.
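
To make the identity scheme above concrete: a minimal sketch (my illustration, not part of the CMR proposal itself) of a message whose sender is authenticated with a per-neighbor shared secret, using a keyed hash rather than sending the password itself. The `Message` fields and the `sign`/`verify` helpers are hypothetical names, assuming Python's standard `hmac` module:

```python
import hashlib
import hmac
import time
from dataclasses import dataclass


@dataclass
class Message:
    sender: str      # peer identity; not necessarily tied to a human
    timestamp: float  # time sent, as required by the protocol sketch
    body: str         # natural-language content
    tag: str          # keyed hash over (sender, timestamp, body)


def sign(sender: str, body: str, secret: bytes) -> Message:
    """Build a message carrying an HMAC the receiver can check."""
    ts = time.time()
    payload = f"{sender}|{ts}|{body}".encode()
    tag = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return Message(sender, ts, body, tag)


def verify(msg: Message, secret: bytes) -> bool:
    """Receiver-side authentication: recompute the tag and compare."""
    payload = f"{msg.sender}|{msg.timestamp}|{msg.body}".encode()
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg.tag)
```

Registering a different secret with each neighbor, as suggested, means a neighbor who leaks its secret can only forge messages on its own link, which fits the reputation-protection incentive described here.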

Initially, peers can be simple. When a peer receives a message, it matches 
terms to words in its cache, and forwards the message to the authors identified 
in the headers of the cached matches. A peer's domain of expertise is simply 
those messages posted by the author which are kept permanently in the cache. 
Peers can be more intelligent than this, of course. For example, they may match 
messages with attached pictures or video based on content.
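
The simple peer described above can be sketched in a few lines. This is my own toy illustration under stated assumptions (word-overlap as the term-matching rule, a `SimplePeer` class name of my invention), not Matt's implementation:

```python
class SimplePeer:
    """Toy CMR peer: routes an incoming message to the authors of
    cached messages that share the most words with it."""

    def __init__(self):
        # (author, text) pairs kept permanently; these define the
        # peer's domain of expertise per the proposal
        self.cache = []

    def store(self, author: str, text: str) -> None:
        self.cache.append((author, text))

    def route(self, text: str, top_k: int = 2) -> list:
        """Return the authors this message should be forwarded to,
        ranked by crude word overlap with cached messages."""
        words = set(text.lower().split())
        scored = []
        for author, cached in self.cache:
            overlap = len(words & set(cached.lower().split()))
            if overlap:
                scored.append((overlap, author))
        scored.sort(reverse=True)  # best matches first
        return [author for _, author in scored[:top_k]]
```

A real peer would of course need stemming, spam filtering, and the reputation tracking discussed below; the point here is only that the baseline behavior is cheap to implement.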

The network's behavior can only be predicted in terms of market incentives. The 
network is hostile. Peers may be flooded with spam, so they will need some 
intelligence to decide which messages to route and which to discard. Resource 
owners (humans) compete for attention, which requires resources (storage and 
bandwidth) on other people's peers. Peers (or their owners) thus have an 
incentive to provide useful information so that they can sell advertising and 
are not blocked. Peers have an incentive to protect their reputations by 
preventing their identities from being forged. Thus, they have an incentive to 
keep passwords secret by e.g. registering with each neighbor using a different 
password.

I believe that CMR is initially friendly in the sense that a market is 
friendly. A market is the most efficient way to satisfy the collective goals of 
its participants. It is fair, but not benevolent. There is an incentive to 
cheat, but also an incentive to protect one's reputation by being honest. There 
is an incentive for peers to become more intelligent, as measured by earnings. 
Peers need to be selective in routing messages or else they will be exploited 
by spammers. Likewise, spammers have an incentive to outsmart weaker peers.

I believe that CMR becomes more dangerous as peers get smarter. We will rely on 
peers with high reputations to sort truth from lies and to rank the reputations 
of other peers. The problem is that we have to train these machines, for 
example, by clicking the spam button. But when machines are smarter than us, 
we can no longer make that distinction. I believe that eventually we will no 
longer know what our computers are doing as they acquire all available 
resources.

Although CMR is a specific proposal, I think it is clear that the internet is 
headed in this direction, even if it is not adopted as I described. We already 
depend on trust networks, like Google rankings alongside sponsored links, 
seller ratings on eBay, etc. Intelligent machines in any form will have to 
compete in this environment.

-- Matt Mahoney, [EMAIL PROTECTED]






[agi] Let's face it, this is just dumb.

2008-10-01 Thread Brad Paulsen
This is probably a tad off-topic, but I couldn't help myself.

From the Technology-We-Could-Probably-Do-Without files:

STEP RIGHT UP, LET THE COMPUTER LOOK AT YOUR FACE AND TELL YOU YOUR AGE
http://www.physorg.com/news141394850.html

From the article:

...age-recognition algorithms could ... prevent minors from purchasing
tobacco products from vending machines, and deny children access to adult
Web sites.

Sixteen-year-old-male's inner dialog: I need a smoke and some porn.  Let
me think... Where did dad put that Ronald Reagan Halloween mask?

It boasts a 50% recognition accuracy rate +/-5 years and an 80%
recognition accuracy rate +/-10 years.  Unless, of course, the subject is
wearing a big floppy hat, makeup or has had Botox treatment recently.  Or
found his dad's Ronald Reagan mask.  'Nuf said.

Cheers,
Brad

P.S. Oh, yeah, and the guy responsible for this project claims it doesn't
violate anyone's privacy because it can't be used to identify individuals.
 Right.  They don't say who sponsored this research, but I sincerely doubt
it was the vending machine companies or purveyors of Internet porn.

