RE: [agi] An idea for promoting AI development.

2002-12-02 Thread Bill Hibbard
Hi Ben,

I think that true machine intelligence will be computationally
demanding and will initially appear on expensive hardware
available only to wealthy institutions like the government or
corporations. Even when it is possible on commodity hardware,
expensive hardware will still support much greater intelligence.

I also think it is not realistic to imagine a small group
creating machine intelligence in total secrecy.

The right approach is to educate the public as widely and loudly
as possible about the nature and dangers of machine intelligence.
In the modern world, wide public education seems to be the best
way to resist public dangers.

As you said in another message, this is going to be a very
difficult issue as time goes on. But I think military applications
present a good opportunity for public education, since people
already accept the idea that biological, chemical and nuclear
weapons should not be used.

Cheers,
Bill

On Mon, 2 Dec 2002, Ben Goertzel wrote:



 Regarding being wary about military apps of AI technology, it seems to me
 there are two different objectives one might pursue:

 1) to avoid militarization of one's technology

 2) to avoid the military achieving *exclusive* control over one's technology

 It seems to me that the first objective is very hard, regardless of whether
 one accepts military funding or not.  The only ways that I can think of to
 achieve 1) would be

 1a) total secrecy in one's project all the way

 1b) extremely rapid ascendancy from proto-AGI to superhuman AGI -- i.e.
 reach the end goal before the military notices one's project.  This relies
 on security through simply being ignored up to the proto-AGI phase...

 On the other hand, the second objective seems to me relatively easy.  If one
 publishes one's work and involves a wide variety of developers in it, no one
 is going to achieve exclusive power to create AGI.  AGI is not like nuclear
 weapons, at least not if a software-on-commodity-hardware approach works (as
 I think it will).  Only commodity hardware is required, programming skills
 are common, and math/cog-sci skills are not all *that* rare...

 -- Ben G





  -Original Message-
  From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
  Behalf Of Alexander E. Richter
  Sent: Monday, December 02, 2002 7:48 AM
  To: [EMAIL PROTECTED]
  Subject: RE: [agi] An idea for promoting AI development.
 
 
  At 07:18 02.12.02 -0500, Ben wrote:
  
  Can one use military funding for early-stage AGI work and then somehow
  demilitarize one's research once it reaches a certain point?
  One can try, but will one succeed?
 
  They will squeeze you out, like Lillian Reynolds and Michael Brace in
  BRAINSTORM (1983) (Christopher Walken, Natalie Wood)
 
  cu Alex
 



Re: [agi] An idea for promoting AI development.

2002-12-02 Thread maitri
It is true that eventually this technology will be in the public domain and
be available to DARPA.

The important thing is to avoid DARPA getting it before everyone else does.
The ***only*** way to do this is to avoid accepting funding from them.  If
this means that it takes 5 more years to develop, then so be it.  If it
means that you have to flip burgers by day, and code by night, then so be
it.  Anyone who makes a deal with the devil should expect a bad result.

Some people want to delude themselves that they are doing something good,
but their real motives may lie $$$elsewhere$$$ (not referring to the
Novamente team).

Peace,
Kevin


- Original Message -
From: Ben Goertzel [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, December 02, 2002 9:09 AM
Subject: RE: [agi] An idea for promoting AI development.







RE: [agi] An idea for promoting AI development.

2002-12-02 Thread Ben Goertzel

Stephen Reed wrote:
 As Cycorp is the best funded company among those organizations with AGI as
 their primary goal, I would state that for us enrichment is not the
 motive.

Steve, I accept this as an honest statement of your personal motivations.

However, I'm not sure that Cycorp's investors would endorse such a
statement, would they?

I'm sure they would prefer to endorse a statement such as "Cycorp's mission
is to make a healthy profit via powerful AI technology."

If given the choice between more profits and better AI, I am betting they
would choose the former...

In the best of scenarios, profitability and scientific progress go hand in
hand; but there are times in the history of almost any technology company
when the two conflict as well.

Having been on both the tech and biz sides of technology firms, I am aware
of the difficulty of juggling one's responsibility to science with one's
responsibility to shareholders.

I'm not trying to say anything negative about Cycorp here -- I have the same
issue with my own business pursuits, e.g. Biomind LLC, the company we've set
up to pursue bioinformatics applications of Novamente technology.  The
Biomind investors love AGI and Novamente, but they also expect Biomind to
provide a significant ROI.  And building AGI while making money along the
way is certainly a harder problem than building AGI -- which is a rather
hard problem in itself!!


-- Ben G




RE: [agi] An idea for promoting AI development.

2002-12-02 Thread Gary Miller
On 12/01 Ben Goertzel said:

 2) to avoid the military achieving *exclusive* control over one's technology

What I am about to say may sound blasphemous, but the military may be
the group with the resources to protect the technology!

By publicizing AGI technology and making it generally available, we may
lead hostile governments and militaries to see AGI as a potential weapon
and to resort to traditional methods of acquiring it: industrial
espionage, kidnappings of key scientists.  Or they may fear it as yet
another potential tool to rein in their aggression and target it for
destruction, which means targeting facilities and people.  If this sounds
farfetched, just look at the lengths to which certain countries go to
acquire plutonium.

If the potential for AGI is seen as great and world-changing, who better
to protect it, or at least offer stewardship, than the US or NATO
militaries?  What private company or non-profit is prepared and qualified
to protect the technology when it starts to get really interesting?

There are a few possible reasons why the DoD is currently a prime
contributor to AI research.

1. They fear it (they saw Terminator and WarGames), so they'd damn well
better keep an eye on it (their paranoia).

2. They may already have it in the NSA basements and want to control
the direction of other research to keep their tactical advantage (my
paranoia).

3. They are starting to see the results of years of research in robotics
and drone warriors in keeping casualty counts down and the American
public happy, and they look at AI as another way to continue this trend.

4. They are good at fumbling the ball when it comes to acting in a
timely manner on terrorist threats, and it's much easier to blame an AI
when they screw up than to risk their cushy jobs.


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
Behalf Of Ben Goertzel
Sent: Monday, December 02, 2002 9:09 AM
To: [EMAIL PROTECTED]
Subject: RE: [agi] An idea for promoting AI development.









Re: [agi] An idea for promoting AI development.

2002-12-01 Thread Alan Grimes
 We have a  team of computational linguists who have added the 
 vocabulary to make Cyc able to represent lexical concepts.

But it's still not the meta-vocabulary/meta-ontology that is required to
whack the problem.


 In 2003, our data entry activities will be emphasized as a result of
 our participation in DARPA's Total Information Awareness program, for
 which we will construct a Terrorism Knowledge Base containing all the
 open-source terrorist individuals, organizations and events that we and
 our sub-contractor experts can input.  It is hoped that the TKB will
 prove useful, by answering interesting parameterized questions, for the
 US defense and intelligence communities.

Working for the Ministry of Information, I see... 
I DON'T EVEN WANT TO KNOW YOU. =\

This is serious stuff. I would say that you have an ethical duty _NOT_
to aid in this monstrous abuse of power.

-- 
pain (n): see Linux.
http://users.rcn.com/alangrimes/




RE: [agi] An idea for promoting AI development.

2002-12-01 Thread Ben Goertzel

Alan Grimes wrote:
  In 2003, our data entry activities will be emphasized as a result of
  our participation in DARPA's Total Information Awareness program, for
  which we will construct a Terrorism Knowledge Base containing all the
  open-source terrorist individuals, organizations and events that we and
  our sub-contractor experts can input.  It is hoped that the TKB will
  prove useful, by answering interesting parameterized questions, for the
  US defense and intelligence communities.

 Working for the Ministry of Information, I see...
 I DON'T EVEN WANT TO KNOW YOU. =\

 This is serious stuff. I would say that you have an ethical duty _NOT_
 to aid in this monstrous abuse of power.

Alan,

I agree that the TIA program is an ethical minefield.  However, the
situation doesn't seem as clear to me as it apparently does to you.

On the one hand, terrorism is a real threat, and there's a real value in
using information technology -- including AI -- to combat it.

On the other hand, once a terrorism-focused DB is created, it's hard to
argue that the intelligence community will necessarily ALWAYS use it ONLY
for good purposes focused on defense against terrorism.

I think that if TIA databases and AI programs are constructed responsibly,
they can serve as a net force for good, even though abuses will always be
possible.

-- Ben G




Re: [agi] An idea for promoting AI development.

2002-11-29 Thread Arthur T. Murray


On Fri, 29 Nov 2002, Alan Grimes wrote:

 Jeremy Smith wrote: [...]
 
  He also seems to be just asking for a huge sum of money to implement 
  it!!!

Mentifex/Arthur here with an announcement.  I'm asking for $17.95 U.S.

The Mentifex AI textbook was published today, Thurs. 29 Nov. 2002, by
iUniverse.com as "AI4U: Mind-1.1 Programmer's Manual", on the Web at
http://www.iuniverse.com/bookstore/book_detail.asp?isbn=0595259227 (q.v.).

It would probably cost less to buy the print-on-demand (POD) textbook
than to print out all the associated Mentifex pages on the Web.

In a few weeks it should be possible for interested or curious parties
to track AI4U on Amazon and see how many millions down it is ranked!

/End interrupt mode -- Arthur T. Murray

 
 Perspective:
 The latest release of MS Windows cost $2 billion...
 
 A typical Internet start-up would receive anywhere from $20 to $50 million
 in VC.
 
 Heck, in the VC world you need to ask for large sums of money just to
 get people's attention.




Re: [agi] An idea for promoting AI development.

2002-11-29 Thread Arthur T. Murray


On Fri, 29 Nov 2002, Alan Grimes wrote:

 Arthur T. Murray wrote:
  Mentifex/Arthur here with an announcement.  I'm asking for $17.95 U.S.
 
 While a mind-forth isn't too far from where I am in my current
 thinking, I must ask you: Have you ever tested this idea on an actual
 robotic platform? Does it behave anything like you would expect it to?
ATM:
No, I have not had the opportunity to test the AI on a robot.

http://books.iuniverse.com/viewbooks.asp?isbn=0595259227&page=20 is the
Motorium module, with plans and ideas for a robotic implementation.

 
 By all accounts the JavaScript applet that 9/10ths of the links on the
 Mentifex site point to is broken and simply doesn't work. Unless you can
 demonstrate a physical or virtual robot performing significantly
 non-trivial behaviors in a complex and dynamic environment, I don't think
 you can justify writing a textbook on it at this juncture.

http://www.scn.org/~mentifex/jsaimind.html is not broken, but it
requires Microsoft Internet Explorer to work properly.
 
 I do want to see AGI move forward and I have no bias whatsoever against
 the Mentifex model. I just haven't seen any real evidence that your
 software has met any of its design goals.

http://www.iuniverse.com/bookstore/book_detail.asp?isbn=0595259227 AI4U
is not only AI software; it is primarily an AI Theory of Mind, as
shown in the 34 brain-mind diagrams that start all 34 chapters.
 
 (I have made similar responses directly to you over the course of the
 years). 
ATM:
Yes, and I appreciate them.  Bye for now. -Arthur
 
 -- 
 pain (n): see Linux.
 http://users.rcn.com/alangrimes/




RE: [agi] An idea for promoting AI development.

2002-11-29 Thread Gary Miller
FYI Arthur T. Murray

I just tried to order your Mentifex book at iUniverse, but the site was
bombing at the checkout screen.

I'll try again later but just wanted to let you know you might be losing
orders!
 


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
Behalf Of Arthur T. Murray
Sent: Friday, November 29, 2002 11:38 AM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: [agi] An idea for promoting AI development.





