Re: Risks of competitive message routing (was Re: [agi] Let's face it, this is just dumb.)

2008-10-04 Thread Ben Goertzel
On Fri, Oct 3, 2008 at 9:57 PM, Matt Mahoney [EMAIL PROTECTED] wrote:

 --- On Fri, 10/3/08, Ben Goertzel [EMAIL PROTECTED] wrote:
  You seem to misunderstand the notion of a Global Brain, see
 
  http://pespmc1.vub.ac.be/GBRAIFAQ.html
 
  http://en.wikipedia.org/wiki/Global_brain

 You are right. That is exactly what I am proposing.



It's too bad you missed the Global Brain 0 workshop that Francis Heylighen
and I organized in Brussels in 2001 ...

Some larger follow-up Global Brain conferences were planned, but Francis and
I both got distracted by other things

It would be an exaggeration to say that any real collective conclusions were
arrived at during the workshop, but it was certainly
interesting...





 I am open to alternative suggestions.



Well, what I suggested in my 2002 book Creating Internet Intelligence was
essentially a global brain based on a hybrid model:

-- a human-plus-computer-network global brain along the lines of what you
and Heylighen suggest

coupled with

-- a superhuman AI mind, that interacts with and is coupled with this global
brain

To use a simplistic metaphor,

-- the superhuman AI mind at the center of the hybrid global brain would
provide an overall goal system and attentional-focus, and

-- the human-plus-computer-network portion of the hybrid global brain would
serve as a sort of unconscious for the hybrid global brain...

This is one way that humans may come to, en masse, interact with superhuman
non-human AI

Anyway, this was a fun line of thinking, but since then I have diverted
myself more toward the creation of the superhuman-AI component

At the time I had a lot of ideas about how to modify Internet infrastructure
so as to make it more conducive to the emergence of a
human-plus-computer-network, collective-intelligence type global brain.  I
think many of those ideas could have worked, but they are not the direction
the Net's development actually took, and obviously I (like you) lack the
influence to nudge the Net-masters in that direction.  Keeping a
build-a-superhuman-AI project moving is not easy either, but it's a more
tractable task...

-- Ben G



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=114414975-3c8e69
Powered by Listbox: http://www.listbox.com


Re: Risks of competitive message routing (was Re: [agi] Let's face it, this is just dumb.)

2008-10-03 Thread Ben Goertzel
Hi,


 CMR (my proposal) has no centralized control (global brain). It is a
 competitive market in which information has negative value. The environment
 is a peer-to-peer network where peers receive messages in natural language,
 cache a copy, and route them to appropriate experts based on content.


You seem to misunderstand the notion of a Global Brain, see

http://pespmc1.vub.ac.be/GBRAIFAQ.html

http://en.wikipedia.org/wiki/Global_brain

It does not require centralized control, but is in fact more focused on
emergent dynamical control mechanisms.



 I believe that CMR is initially friendly in the sense that a market is
 friendly.



Which is to say: dangerous, volatile, hard to predict ... and often not
friendly at all!!!


 A market is the most efficient way to satisfy the collective goals of its
 participants. It is fair, but not benevolent.


I believe this is an extremely oversimplistic and dangerous view of
economics ;-)

Traditional economic theory, which argues that free markets are optimally
efficient, is based on the patently false assumption of infinitely rational
economic actors.  This assumption is **particularly** poor when the
economic actors are largely **humans**, who are highly nonrational.

As a single isolated example, note that in the US right now, many people are
withdrawing their $$ from banks even if they have less than $100K in their
accounts ... even though the government insures bank accounts up to $100K.
What are they doing?  Insuring themselves against a total collapse of the US
economic system?  If so they should be buying gold with their $$, but only a
few of them are doing that.  People are in large part emotional rather than
rational actors, and for this reason pure free markets involving humans are far
from the most efficient way to satisfy the collective goals of a set of humans.

Anyway a deep discussion of economics would likely be too big of a
digression, though it may be pertinent insofar as it's a metaphor for the
internal dynamics of an AGI ... (for instance Eric Baum, who is a fairly
hardcore libertarian politically, is in favor of free markets as a model for
credit assignment in AI systems ... and OpenCog/NCE contains an economic
attention allocation component...)
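The market-as-credit-assignment idea Ben mentions can be illustrated with a toy simulation; this is purely an invented sketch (agent names, bid fractions, and reward multiplier are all made up for the example), not OpenCog's actual attention-allocation mechanism. Agents bid currency for attention, and only the agent whose work earns external reward recoups more than its bid, so useful agents compound wealth and come to dominate attention:

```python
# Toy market-based credit assignment: agents bid "currency" for attention;
# the agent whose work produces a reward is paid back more than its bid,
# so useful agents get richer and win attention more reliably over time.
agents = {"useful": 10.0, "noisy": 10.0}

def step():
    # Each agent bids a fixed fraction of its wealth for attention.
    bids = {name: wealth * 0.1 for name, wealth in agents.items()}
    winner = max(bids, key=bids.get)   # highest bidder gets the attention
    agents[winner] -= bids[winner]     # pay the bid
    # Only the "useful" agent's work actually earns a reward back.
    if winner == "useful":
        agents[winner] += bids[winner] * 1.5

for _ in range(50):
    step()

print(agents["useful"] > agents["noisy"])   # → True
```

After 50 rounds the useful agent's wealth has compounded while the noisy agent's has stayed flat, which is the basic stabilizing property such schemes aim for.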

ben g





Re: Risks of competitive message routing (was Re: [agi] Let's face it, this is just dumb.)

2008-10-03 Thread Matt Mahoney
--- On Fri, 10/3/08, Ben Goertzel [EMAIL PROTECTED] wrote:
 You seem to misunderstand the notion of a Global Brain, see

 http://pespmc1.vub.ac.be/GBRAIFAQ.html
 
 http://en.wikipedia.org/wiki/Global_brain

You are right. That is exactly what I am proposing.

I believe that CMR is initially friendly in the sense that a market is 
friendly.

Which is to say: dangerous, volatile, hard to predict ... and often not 
friendly at all!!!

I am open to alternative suggestions.
 
 A market is the most efficient way to satisfy the collective goals of its 
 participants. It is fair, but not benevolent. 

I believe this is an extremely oversimplistic and dangerous view of economics 
;-)

 Traditional economic theory which argues that free markets are optimally
 efficient, is based on a patently false assumption of infinitely rational
 economic actors. This assumption is **particularly** poor when the
 economic actors are largely **humans**, who are highly nonrational.

I think that CMR will make markets more rational. Humans will have more access 
to information, which will enable them to make more rational decisions. I 
believe that AGI will result in pervasive public surveillance of everyone. All 
of your movements, communication, and financial transactions will be public and 
instantly accessible to anyone. We will demand it, and AGI will make it cheap. 
Sure you could have secrets, but nobody will hire you, loan you money, or buy 
or sell you anything without knowing everything about you.

Anyway a deep discussion of economics would likely be too big of a digression, 
though it may be pertinent insofar as it's a metaphor for the internal 
dynamics of an AGI ... (for instance Eric Baum, who is a fairly hardcore 
libertarian politically, is in favor of free markets as a model for credit 
assignment in AI systems ... and OpenCog/NCE contains an economic attention 
allocation component...)

Economics is not a metaphor, but is central to the design of distributed AGI. 
There are hard problems that need to be solved. Economic systems have positive 
feedback loops such as speculative investment that are unstable and can crash. 
AGI and instant communication can lead to events where most of the world's 
wealth can disappear in a wave of panic selling traveling at the speed of 
light. I don't believe that competition for resources and a market where 
information has negative value have positive feedback loops, but this is 
something that needs to be studied.

My concern is that trust networks are unstable. They may lead to monopolies, 
and rare but catastrophic failures when a peer with high reputation decides to 
cheat. This is not just a problem for CMR, but any AGI where knowledge comes 
from many people. How do you know which information to trust?
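The failure mode Matt describes can be made concrete with a toy reputation scheme; the scheme and all the numbers here are invented for illustration, not part of the CMR proposal. If reputation is a running average of past interaction outcomes, a peer can behave honestly for a long time and then defect once, and the defection is barely visible in its score:

```python
# Toy reputation: the average of past interaction outcomes
# (1 = honest interaction, 0 = cheat).
def reputation(history):
    return sum(history) / len(history)

history = [1] * 100                   # 100 honest interactions
print(reputation(history))            # → 1.0

history.append(0)                     # one catastrophic cheat
print(round(reputation(history), 3))  # → 0.99
```

A single defection by a high-reputation peer leaves its score at 0.99, so naive averaging gives exactly the "rare but catastrophic failure" mode described above: the incentive to cheat grows with accumulated reputation.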

-- Matt Mahoney, [EMAIL PROTECTED]





Risks of competitive message routing (was Re: [agi] Let's face it, this is just dumb.)

2008-10-02 Thread Matt Mahoney
--- On Thu, 10/2/08, Ben Goertzel [EMAIL PROTECTED] wrote:
 OTOH a global brain coordinating humans and narrow-AI's can **also** be quite 
 dangerous ... and arguably more so, because it's **definitely** very 
 unpredictable in almost every aspect ... whereas a system with a dual 
 hierarchical/heterarchical structure and a well-defined goal system, may 
 perhaps be predictable in certain important aspects, if it is designed with 
 this sort of predictability in mind...

CMR (my proposal) has no centralized control (global brain). It is a 
competitive market in which information has negative value. The environment is 
a peer-to-peer network where peers receive messages in natural language, cache 
a copy, and route them to appropriate experts based on content.

Peers have incomplete knowledge of the network, so messages may need to be 
routed via multiple hops through redundant paths to multiple experts. Each 
message identifies the sender and time sent. The receiver is responsible for 
authenticating the sender, e.g. by password and registration via an encrypted 
channel. The sender is a peer, not tied to a human. A human may manage multiple 
identities and be anonymous. Peer owners can set their own policies with regard 
to which messages to keep, route, or discard.

Initially, peers can be simple. When a peer receives a message, it matches 
terms to words in its cache, and forwards the message to the authors identified 
in the headers of the cached matches. A peer's domain of expertise is simply 
those messages posted by the author which are kept permanently in the cache. 
Peers can be more intelligent than this, of course. For example, they may match 
messages with attached pictures or video based on content.
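The simple peer described in the previous paragraph can be sketched in a few lines; this is a toy illustration under invented names (`Peer`, `Message`, the term-overlap scoring), not the actual CMR specification:

```python
from collections import Counter

class Message:
    def __init__(self, sender, text):
        self.sender = sender   # a peer identity, not necessarily a human
        self.text = text

class Peer:
    """Toy CMR peer: caches messages and routes by term overlap."""

    def __init__(self, name):
        self.name = name
        self.cache = []        # permanently kept messages: its "expertise"

    def store(self, message):
        self.cache.append(message)

    def route(self, message, top_k=2):
        """Return the cached authors whose messages best match the incoming one."""
        incoming = set(message.text.lower().split())
        scores = Counter()
        for cached in self.cache:
            overlap = incoming & set(cached.text.lower().split())
            if overlap:
                scores[cached.sender] += len(overlap)
        # Forward to the top-scoring experts (here we just return their names).
        return [sender for sender, _ in scores.most_common(top_k)]

peer = Peer("p1")
peer.store(Message("alice", "neural networks and machine learning"))
peer.store(Message("bob", "stock market economics"))
print(peer.route(Message("carol", "question about machine learning")))  # → ['alice']
```

A real peer would of course forward the message over the network rather than return author names, and would apply the spam-filtering and authentication policies described below; the sketch only shows the term-matching routing step.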

The network's behavior can only be predicted in terms of market incentives. The 
network is hostile. Peers may be flooded with spam, so they will need some 
intelligence to decide which messages to route and which to discard. Resource 
owners (humans) compete for attention, which requires resources (storage and 
bandwidth) on other people's peers. Peers (or their owners) thus have an 
incentive to provide useful information so that they can sell advertising and 
are not blocked. Peers have an incentive to protect their reputations by 
preventing their identities from being forged. Thus, they have an incentive to 
keep passwords secret by e.g. registering with each neighbor using a different 
password.

I believe that CMR is initially friendly in the sense that a market is 
friendly. A market is the most efficient way to satisfy the collective goals of 
its participants. It is fair, but not benevolent. There is an incentive to 
cheat, but also an incentive to protect one's reputation by being honest. There 
is an incentive for peers to become more intelligent, as measured by earnings. 
Peers need to be selective in routing messages or else they will be exploited 
by spammers. Likewise, spammers have an incentive to outsmart weaker peers.

I believe that CMR becomes more dangerous as peers get smarter. We will rely on 
peers with high reputations to sort truth from lies and to rank the reputations 
of other peers. The problem is that we have to train these machines, for 
example, by clicking the spam button. But when machines are smarter than us, 
we can no longer make that distinction. I believe that eventually we will no 
longer know what our computers are doing as they acquire all available 
resources.

Although CMR is a specific proposal, I think it is clear that the internet is 
headed in this direction, even if it is not adopted as I described. We already 
depend on trust networks, like Google rankings alongside sponsored links, 
seller ratings on eBay, etc. Intelligent machines in any form will have to 
compete in this environment.

-- Matt Mahoney, [EMAIL PROTECTED]



