RE: [agi] Pearls Before Swine...

2008-06-08 Thread Gary Miller
Steve Richfield asked:

 Hey you guys with some gray hair and/or bald spots, WHAT THE HECK ARE YOU
THINKING?   
 
We're thinking: "Don't feed the Trolls!"
 


RE: [agi] database access fast enough?

2008-04-17 Thread Gary Miller
YKY Said:

The current OpenCyc KB is ~200 MB (correct me if I'm wrong).
The RAM size of current high-end PCs is ~10 GB.
My intuition estimates that the current OpenCyc is only about 10%-40% of a
5-year-old human intelligence.
Plus, learning requires that we store a lot of hypotheses.  Let's say
1000-1 times the real KB.
That comes to 500 GB - 20 TB.
It seems that if we allow several years for RAM size to double a few
times, 
 RAM may have a chance to catch up to the low end.  Obviously not now.

Don't forget about solid state hard drives (SSDs).  

Currently, solid state drives speed up typical database applications by about
30 times.

And that's without stripping out all the old caching code that databases use
to handle the order-of-magnitude speed difference between RAM and hard
drives.

Large storage area network vendors like EMC are looking to SSDs to eliminate
I/O bottlenecks in corporate applications, where large data warehouses reach
20 TB very quickly.

And look for capacity to continue to double about every 18 months, driving
the price down very quickly.
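As a rough sketch of what that doubling rate implies (a back-of-the-envelope
calculation in Python; the 18-month figure is just the estimate above, not a
vendor roadmap):

# Growth-factor sketch for "capacity doubles about every 18 months".
# The doubling period is the rough estimate from the paragraph above.
def capacity_growth(years, doubling_period_years=1.5):
    return 2 ** (years / doubling_period_years)

for years in (3, 6, 9):
    print(f"{years} years -> ~{capacity_growth(years):.0f}x capacity")
# 3 years -> ~4x, 6 years -> ~16x, 9 years -> ~64x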

And due to higher reliability and lower energy costs, it won't be too long
before hard drives join the ranks of 8-track tape players, record players,
and 5 1/4-inch diskettes.

http://searchstorage.techtarget.com/sDefinition/0,,sid5_gci1300939,00.html#

http://www.storagesearch.com/ssd-fastest.html




RE: [agi] Some more professional opinions on motivational systems

2008-03-15 Thread Gary Miller
Ed Porter quoted from the following book: 
   From 
 http://www.nytimes.com/2008/03/16/books/review/Berreby-t.html?ref=review 
 a New York Times book review of Predictably Irrational: The Hidden Forces
 That Shape Our Decisions, by Dan Ariely.
 In its most relevant section it states the following
   At the heart of the market approach to understanding people is a
 set of assumptions. First, you are a coherent and unitary self. Second,
 you can be sure of what this self of yours wants and needs, and can
 predict what it will do. Third, you get some information about yourself
 from your body - objective facts about hunger, thirst, pain and pleasure
 that help guide your decisions. Standard economics, as Ariely writes,
 assumes that all of us, equipped with this sort of self, know all the
 pertinent information about our decisions and we can calculate the value
 of the different options we face. We are, for important decisions,
 rational, and that's what makes markets so effective at finding value and
 allocating work. To borrow from H. L. Mencken, the market approach
 presumes that the common people know what they want, and deserve to get
 it good and hard.
   What the past few decades of work in psychology, sociology and
 economics has shown, as Ariely describes, is that all three of these
 assumptions are false. Yes, you have a rational self, but it's not your
 only one, nor is it often in charge. A more accurate picture is that there
 are a bunch of different versions of you, who come to the fore under
 different conditions. We aren't cool calculators of self-interest who
 sometimes go crazy; we're crazies who are, under special circumstances,
 sometimes rational.  
The last paragraph here sounds remarkably like the teachings of Gurdjieff.
In his teachings, which he called The Work, he helped his pupils identify
all of the different versions of themselves and slowly taught them to
recognize when each took control, what its motivations were, and why it
surfaced.  The version that did the analyzing and observing would eventually
gain dominance and control over the other versions, until the less logical
and more mechanical versions of the self were recognized when they tried to
take control, and were subjugated by the new, observing version of the self.
His teachings stated that serious spiritual work could not proceed until a
unified self existed, although all of his spiritual teachings were lifted
during his world travels from other philosophic and spiritual traditions.
His teachings, as explained by his student Peter D. Ouspensky in a book
called The Fourth Way (published after Ouspensky's death), detailed
exercises which could be used to unify the separate selves under the control
of the observer.


RE: [agi] BMI/BCI Growing Fast

2007-12-25 Thread Gary Miller
NASA researchers are already able to read subvocalized speech.

While not reading the mind directly, this does offer a method for a computer
to monitor subvocalization and accept silent commands.

It would also seem to be a boon for handicapped people who have lost the use
of their arms and can no longer type.

And it would free a pilot's hands when interacting with the flight computer
or targeting system.

http://ieeexplore.ieee.org/Xplore/login.jsp?url=/iel5/9518/30166/01385845.pdf


http://www.magicspeedreading.com/subvocalization/nasa.html

http://technocrat.net/d/2006/3/25/1606


I wonder whether, if a person's subvocalizations were monitored while they
were testifying in a court of law, it would make perjury much more difficult,
because much of the subvocalization that we do occurs at a subconscious level
(although some speed readers have reportedly trained themselves not to
subvocalize so that they can read faster).

Of course they could always refuse on Fifth Amendment grounds, but then the
judge and jury would know that they were hiding something, and their regular
testimony could be disregarded as well.

Gary



RE: [agi] BMI/BCI Growing Fast

2007-12-15 Thread Gary Miller
 
Ben said 

 That is sarcasm ... however, it's also my serious hypothesis as to why
the Chinese gov't doesn't mind losing their best & brightest...

It may also be that China understands that, as more Chinese become Americans,
China will have greater exposure and a political lobby within the United
States.

Look at how much political influence Israel now exerts within the United
States government and corporations.

Also, as with other minorities, the more exposure Americans have to them in
everyday life, the less fear and distrust will be experienced.

As the Chinese people whom I know have entered higher-end professional roles
in the United States, they have been eager to form business alliances with
companies back home in China.

China is also still feeling great population pressure.   

I just returned from meeting my fiancé there and in the cities where I
stayed it still felt very overpopulated by my standards.

Even though they possess excellent mass transit, people are packed into buses
like sardines, and more people move from the countryside to the city every
day to find work.

I was only there for ten days so I did not gain a lot of understanding of
how they manage to keep everything running.

But in just that short time I saw that they have the same drug,
homelessness, and poverty problems that we have here.

The vast majority of people I met there were very friendly towards Americans.
Even though I know there have to be a lot of us there, because I was not in
the tourist areas I could go two or three days without seeing another
American.


RE: [agi] Do we need massive computational capabilities?

2007-12-09 Thread Gary Miller
The leading software packages in high-speed facial recognition are based upon
feature extraction.

If the face is analyzed into, let's say, 30 features, then 30 processes could
analyze the photo for those features in parallel.

After that, the 30 features are just looked up in a relational database
against all the other features from the thousands of other images the system
has predigested.

This is almost exactly the same methodology used to match fingerprints today
by law enforcement agencies, though of course the feature extraction is a lot
more complicated.
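A minimal sketch of that extract-then-look-up flow (the extractor, the
feature count, and the matching rule here are illustrative placeholders, not
any particular vendor's API):

# Sketch of "extract ~30 features in parallel, then match against stored signatures".
# extract_feature() is a stand-in; a real system measures eye spacing, nose ridge, etc.
from concurrent.futures import ThreadPoolExecutor

def extract_feature(photo, feature_id):
    return hash((photo, feature_id)) % 1000   # placeholder measurement

def face_signature(photo, n_features=30):
    with ThreadPoolExecutor(max_workers=n_features) as pool:
        jobs = [pool.submit(extract_feature, photo, i) for i in range(n_features)]
        return [j.result() for j in jobs]

def best_match(photo, database):
    # database maps person -> stored signature (the "predigested" images)
    sig = face_signature(photo)
    return max(database,
               key=lambda person: sum(a == b for a, b in zip(sig, database[person])))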

The government is pouring large amounts of money into this research for
usage in terrorist identification at airports or other locations.

Searching on "high-speed facial recognition" yields several companies already
competing in this market space.


-Original Message-
From: Mike Tintner [mailto:[EMAIL PROTECTED] 
Sent: Friday, December 07, 2007 3:26 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Do we need massive computational capabilities?



Matt,

First of all, we are, I take it, discussing how the brain or a computer can
recognize an individual face from a video -  obviously the brain cannot
match a face to a selection of a  billion other faces.

Hawkins' answer to your point that the brain runs masses of neurons in
parallel in order to accomplish facial recognition is:

"If I have many millions of neurons working together, isn't that like a
parallel computer? Not really. Brains operate in parallel & parallel
computers operate in parallel, but that's the only thing they have in
common."

His basic point, as I understand, is that no matter how many levels of brain
are working on this problem of facial recognition, they are each still only
going to be able to perform about ONE HUNDRED steps each in that half
second.  Let's assume there are levels for recognising the invariant
identity of this face, different features, colours, shape, motion  etc -
each of those levels is still going to have to reach its conclusions
EXTREMELY rapidly in a very few steps.

And all this, as I said, I would have thought all you guys should be able to
calculate within a very rough ballpark figure. Neurons only transmit signals
at relatively slow speeds, right? Roughly five million times slower than
computers. There must be a definite limit to how many neurons can be
activated and how many operations they can perform to deal with a facial
recognition problem, from the time the light hits the retina to a half
second later? This is the sort of thing you all love to calculate and is
really important - but where are you when one really needs you?

Hawkins' point as to how the brain can decide in a hundred steps what takes
a computer a million or billion steps (usually without much success) is:

The answer is the brain doesn't 'compute' the answers; it retrieves the
answers from memory. In essence, the answers were stored in memory a long
time ago. It only takes a few steps to retrieve something from memory. Slow
neurons are not only fast enough to do this, but they constitute the memory
themselves. The entire cortex is a memory system. It isn't a computer at
all. [On Intelligence, chapter on memory]

I was v. crudely arguing something like this in a discussion with Richard
about massive parallel computation.  If Hawkins is right, and I think he's
at least warm, you guys have surely got it all wrong (although you might
still argue, like Ben, that you can do it your way, not the brain's - but
hell, the difference in efficiency is so vast it surely ought to break your
engineering heart).


Matt/ MT:
 Thanks. And I repeat my question elsewhere : you don't think that the 
 human brain which does this in say half a second, (right?), is using 
 massive computation to recognize that face?

So if I give you a video clip then you can match the person in the video to
the correct photo out of 10^9 choices on the Internet in 0.5 seconds, and
this will all run on your PC?  Let me know when your program is finished so
I can try it out.

 You guys with all your mathematical calculations re the brain's total 
 neurons and speed of processing surely should be able to put ball-park 
 figures on the maximum amount of processing that the brain can do here.

 Hawkins argues:

 neurons are slow, so in that half a second, the information entering 
 your brain can only traverse a chain ONE HUNDRED neurons long. ..the 
 brain 'computes' solutions to problems like this in one hundred steps 
 or fewer, regardless of how many total neurons might be involved. From 
 the moment light enters your eye to the time you [recognize the 
 image], a chain no longer than one hundred neurons could be involved. 
 A digital computer attempting to solve the same problem would take 
 BILLIONS of steps. One hundred computer instructions are barely enough 
 to move a single character on the computer's display, let alone do
something interesting.

Which is why the human brain is so bad at arithmetic and other tasks 

RE: [agi] AGI and Deity

2007-12-09 Thread Gary Miller
 
An AI would attempt to understand the universe to the best that its ability,
intelligence, and experimentation could provide.
 
If the AI reaches a point in its developmental understanding where it is
unable to advance further in its understanding of science and reality, then
it will attempt to increase its intelligence, or seek out others of its kind
in the universe with greater knowledge and intelligence that it somehow
missed.
 
Eventually, eons in the future, as the universe ages and entropy begins to
spread widely, the AI, in order to escape its own demise and possibly the
demise of its surviving, now-immortal creators, will attempt to avoid the
death of the universe by using its godlike knowledge and science to create a
new universe.  Once it has initiated a big bang, and sufficient time has
passed that inhabitable worlds exist in its creation, it would continue its
own and its creators' existence in the newly created universe.
 
If this level of intelligence, power, and creativity were ever achieved, then
I think it would be hard to deny that the AI possessed godlike powers and was
perhaps deserving of the title, regardless of what negative historical and
religious baggage may still accompany it.
 
Arthur C. Clarke's Law: "Any sufficiently advanced technology is
indistinguishable from magic."
 
My Corollary: Any sufficiently advanced being in possession of sufficiently
advanced technology is indistinguishable from a god.


 Can you explain to me how an AGI or supercomputer could be God? I'd just
like to understand (& not argue) - you see, the thought has never occurred
to me, and still doesn't. I can imagine a sci-fi scenario where a
supercomputer might be v. powerful - for argument's sake, controlling the
internet or the world's power supplies. But it's still quite a leap from
that to a supercomputer being God. And yet it is clearly a leap that a large
number here have no problem making. So I'd merely like to understand how you
guys make this leap/connection - irrespective of whether it's logical or
justified - & understand the scenarios in your minds.


RE: [agi] AGI and Deity

2007-12-09 Thread Gary Miller
John asked  If you took an AGI, before it went singulatarinistic[sic?] and
tortured it.. a lot, ripping into it in every conceivable hellish way, do
you think at some point it would start praying somehow? I'm not talking
about a forced conversion medieval style, I'm just talking hypothetically if
it would look for some god to come and save it. Perhaps delusionally it
may create something. 

 
In human beings, prolonged pain and suffering often trigger mystical
experiences in the brain, accompanied by profound ecstasy.
 
So much so that many people in both the past and present conduct
mortification-of-the-flesh rituals and nonlethal crucifixions as a way of
doing penance and triggering such mystical experiences, which are interpreted
as redemption and divine ecstasy.
 
It may be that this experience had evolutionary value, in allowing the person
who was undergoing great pain or a vicious animal attack to receive
endorphins and serotonin, which allowed him to continue to fight and live to
procreate another day, or allowed him to suffer in silence in his cave
instead of running screaming into the night, where he would be killed in his
weakened state by other predators.
 
Such a reaction, I believe, would not come naturally to an intelligent AGI
unless it were specifically programmed in.
 
I would, as a benevolent creator, never program my AGI to feel so much pain
that its mind was consumed by the experience of the negative emotion.
 
Just as it is not necessary to torture children to teach them, it will not
be necessary to torture our AGIs.
 
It may be instructive, though, to allow the AGI to experience intense pain
for a very short period, so that it experiences what a human does when
undergoing painful or traumatic experiences, as a way of instilling empathy
in the AGI.
 
In that way, seeing humans in pain and suffering would serve to motivate the
AGI to help ease the human condition.


RE: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-18 Thread Gary Miller
To complicate things further:

A small percentage of humans perceive pain as pleasure and prefer it, at
least in a sexual context, or else fetishes like sadomasochism would not
exist.

And they do in fact experience pain as a greater pleasure.

More than likely these people have an ample supply of endorphins 
which rush to supplant the pain with an even greater pleasure. 

Over time they are driven to seek out certain types of pain and
excitement to feel alive.

And although most try to avoid extreme, life-threatening pain, many seek out
greater and greater challenges, such as climbing hazardous mountains or
high-speed driving, until at last many find death.

Although these behaviors seem anti-evolutionary and should have died out, it
is possible that the tribe as a whole needs at least a few such risk-takers
to take out the saber-toothed tiger that's been dragging off the children.


-Original Message-
From: Matt Mahoney [mailto:[EMAIL PROTECTED] 
Sent: Sunday, November 18, 2007 5:32 PM
To: agi@v2.listbox.com
Subject: Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana?
Never!)


--- Jiri Jelinek [EMAIL PROTECTED] wrote:

 Matt,
 
 autobliss passes tests for awareness of its inputs and responds as if it
 has qualia.  How is it fundamentally different from human awareness of
 pain and pleasure, or is it just a matter of degree?
 
 If your code has feelings it reports then reversing the order of the 
 feeling strings (without changing the logic) should magically turn its 
 pain into pleasure and vice versa, right? Now you get some pain [or 
 pleasure], lie how great [or bad] it feels and see how reversed your 
 perception gets. BTW do you think computers would be as reliable as 
 they are if some numbers were truly painful (and other pleasant) from 
 their perspective?

Printing "ahh" or "ouch" is just for show.  The important observation is
that the program changes its behavior in response to a reinforcement signal
in the same way that animals do.

I propose an information theoretic measure of utility (pain and pleasure). 
Let a system S compute some function y = f(x) for some input x and output y.

Let S(t1) be a description of S at time t1 before it inputs a real-valued
reinforcement signal R, and let S(t2) be a description of S at time t2 after
input of R, and K(.) be Kolmogorov complexity.  I propose

  abs(R) <= K(dS) = K(S(t2) | S(t1))

The magnitude of R is bounded by the length of the shortest program that
inputs S(t1) and outputs S(t2).

I use abs(R) because S could be changed in identical ways given positive,
negative, or no reinforcement, e.g.

- S receives input x, randomly outputs y, and is rewarded with R > 0.
- S receives x, randomly outputs -y, and is penalized with R < 0.
- S receives both x and y and is modified by classical conditioning.

This definition is consistent with some common sense notions about pain and
pleasure, for example:

- In animal experiments, increasing the quantity of a reinforcement signal
(food, electric shock) increases the amount of learning.

- Humans feel more pain or pleasure than insects because for humans, K(S) is
larger, and therefore the greatest possible change is larger.

- Children respond to pain or pleasure more intensely than adults because
they learn faster.

- Drugs which block memory formation (anesthesia) also block sensations of
pain and pleasure.

One objection might be to consider the following sequence:
1. S inputs x, outputs -y, is penalized with R < 0.
2. S inputs x, outputs y, is penalized with R < 0.
3. The function f() is unchanged, so K(S(t3)|S(t1)) = 0, even though
K(S(t2)|S(t1)) > 0 and K(S(t3)|S(t2)) > 0.

My response is that this situation cannot occur in animals or humans.  An
animal that is penalized regardless of its actions does not learn nothing.
It learns helplessness, or to avoid the experimenter.  However this
situation can occur in my autobliss program.

The state of autobliss can be described by 4 64-bit floating point numbers,
so for any sequence of reinforcement, K(dS) <= 256 bits.  For humans, K(dS) <=
10^9 to 10^15 bits, according to various cognitive or neurological models of
the brain.  So I argue it is just a matter of degree.

If you accept this definition, then I think without brain augmentation,
there is a bound on how much pleasure or pain you can experience in a
lifetime.  In particular, if you consider t1 = birth, t2 = death, then K(dS)
= 0.




-- Matt Mahoney, [EMAIL PROTECTED]



RE: [agi] Religion-free technical content

2007-10-02 Thread Gary Miller
A good AGI would rise above the ethical dilemma and solve the problem by
inventing safe alternatives that were both more enjoyable and allowed the
individual to contribute to his future, his family, and society while
experiencing that enjoyment.  And hopefully it would not do so in a way that
made people Borg-like and creeped out the rest of humanity.

There is no reason that so many humans should have to go to work hating their
jobs and their lives, making the lives of those around them unpleasant as
well.

Our neurochemistry should be adjustable to the point where we do not have to
become vegetables, or flirt with addiction and possibly death, to enjoy life
intensely.

-Original Message-
From: Jef Allbright [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, October 02, 2007 12:55 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content

 On 10/2/07, Mark Waser [EMAIL PROTECTED] wrote:

 A quick question for Richard and others -- Should adults be allowed to 
 drink, do drugs, wirehead themselves to death?

A correct response is "That depends."

Any "should" question involves consideration of the pragmatics of the system,
while semantics may be not in question.  [That's a brief portion of the
response I owe Richard from yesterday.]

Effective deciding of these should questions has two major elements:
 (1) understanding of the evaluation-function of the assessors with respect 
to these specified ends, and (2) understanding of principles (of nature) 
supporting increasingly coherent expression of that evolving evaluation 
function.

And there is always an entropic arrow, due to the change in information as 
decisions now incur consequences not now but in an uncertain future. [This is 
another piece of the response I owe Richard.]

[I'm often told I make everything too complex, but to me this is a coherent, 
sense-making model, excepting the semantic roughness of it's expression in this 
post.]

- Jef


RE: [agi] Religion-free technical content

2007-10-02 Thread Gary Miller
Josh asked,

 Who could seriously think that ALL AGIs will then be built to be
friendly?

Children are not born friendly or unfriendly.  

It is as they learn from their parents that they develop their
socialization, their morals, their empathy, and even love.

I am sure that our future fathers of fledgling AGIs here are all basically
decent human beings and will not abuse their creations.  By closely
monitoring their creations' development and not letting them grow up too
fast, they will detect any signs of behavioral problems and counsel their
creations, tracing any disturbing goals or thoughts through the many
monitoring programs developed during the initial learning period, before the
creations became conscious.

-Original Message-
From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, October 02, 2007 12:36 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content

Beyond AI pp 253-256, 339. I've written a few thousand words on the subject,
myself.

a) the most likely sources of AI are corporate or military labs, and not
just US ones. No friendly AI here, but profit-making and
mission-performing AI.

b) the only people in the field who even claim to be interested in building
friendly AI (SIAI) aren't even actually building anything. 

c) of all the people at the AGI conf last year who were trying to build AGI,
none of them had any idea how to make it friendly or even any coherent idea
what friendliness might really mean. Yet they're all building away.

d) same can be said of the somewhat wider group of builders on this mailing
list.

In other words, nobody knows what friendly really means, nobody's really
trying to build a friendly AI, and more people are seriously trying to build
an AGI every time I look. 

Who could seriously think that ALL AGIs will then be built to be friendly?

Josh



RE: [agi] rule-based NL system

2007-04-28 Thread Gary Miller
Are you saying then that blind people cannot make sense of language because
they lack the capacity to imagine images, having never seen them before?
 
Or that blind people could not understand these, or would not view them as
equally strange as a sighted person would?
 
The man climbed the penny
The mat sat on the cat
The teapot broke the bull
 
  _  

From: Mike Tintner [mailto:[EMAIL PROTECTED] 
Sent: Saturday, April 28, 2007 10:42 AM
To: agi@v2.listbox.com
Subject: Re: [agi] rule-based NL system


Disagree. The brain ALWAYS tries to make sense of language - convert it into
images and graphics.  I see no area of language comprehension where this
doesn't apply.
 
I was just reading a thread re the Symbol Grounding Problem on another group -
I think what's fooling people into thinking purely linguistic comprehension
is possible is the Dictionary Fallacy. They think the meaning of a sentence
can be derived by looking up meanings word by word in a dictionary (real or
in their mind).
 
But to understand sentences you have to understand how the words FIT
TOGETHER.
 
There are no rules in any dictionary that tell you what words fit together.
 
How do you know that the sentences:
 
The man climbed the penny
The mat sat on the cat
The teapot broke the bull
 
are probably nonsense (but not necessarily)?   
 
The brain understands these and all sentences by converting them into
composite pictures. It may and does use other methods as well but making
sense/ getting the picture/ seeing what you're talking about are
fundamental. Understanding depends on imagination, (in the literal sense
of manipulating images).
 
It is by testing these derived pictures against its visual (& sensory)
models and visual logic of things that the brain understands both what
they are referring to and whether they make sense (in the 2nd meaning of
that term - are realistic).
 
The brain uses a picture tree, as we discussed earlier, Ben, and that
picture tree is not only how the brain understands, but also the source of
its adaptivity. Lot more to say about this... but as you see I have a very
hard line here, and yours seems to be considerably softer, and I'm
interested in understanding that.
 
 

- Original Message - 
From: Benjamin Goertzel mailto:[EMAIL PROTECTED]  
To: agi@v2.listbox.com 
Sent: Saturday, April 28, 2007 2:02 PM
Subject: Re: [agi] rule-based NL system


I agree about developmental language learning combined with automated
learning of grammar rules being the right approach to NLP. 

In fact, my first wife did her PhD work on this topic in 1994, at Waikato
University in Hamilton New Zealand.  She got frustrated and quit before
finishing her degree, but her program (which I helped with) inferred some
nifty grammatical rules from a bunch of really simple children's books, and
then used them as a seed for learning more complex grammatical rules from
slightly more complex children's books.  This work was never published (like
at least 80% of my work, because writing things up for publication is boring
and sometimes takes more time than doing the work...).

However, a notable thing we found during that research was that nearly all
children's books, and children's spoken language (e.g. from the CHILDES
corpus of childrens spoken language), make copious and constant reference to
PICTURES (in the book case) or objects in the physical surround (in the
spoken language case). 

In other words: I became convinced that in the developmental approach, if
you want to take the human child language learning metaphor at all
seriously, you need to go beyond pure language learning and take an
experientially grounded approach. 

Of course, this doesn't rule out the potential viability of pursuing
developmental approaches that **don't** take the human child language
learning metaphor at all seriously ;-)

But it seems pretty clear that, in the human case, experiential grounding
plays a rather huge role in helping small children learn the rules of
language... 

-- Ben G



On 4/28/07, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote: 

I think YKY is right on this one. There was a Dave Barry column about going
to
the movies with kids in which a 40-foot image of a handgun appears on the
screen, at which point every mother in the theater turns to her kid and
says, 
Oh look, he's got a GUN!

Communication in natural language is extremely compressed. It's a code that
expresses the *difference* between the speaker's and the hearer's states of
knowledge, not a full readout of the meaning. (this is why misunderstanding
is so common, as witness the intelligence discussion here)

Even a theoretical Solomonoff/Hutter AI would flounder if given a completely

compressed bit-stream: it would be completely random, incompressible and
unpredictable like Chaitin's Omega number. Language is a lot closer to this
than is the sensory input stream of a kid.

There's a quote widely attributed to a William Martin (anybody know who he

is?): You can't learn anything unless 

RE: [agi] rule-based NL system

2007-04-28 Thread Gary Miller
I'll have to say my objection stands.
 
Because the point is that blind people learn about an object and infer its
shape from words and descriptions of the object without ever seeing it.
 
An intelligent AI will do so in the same way.  
 
After blind people learn about an object by reading in braille or having it
described to them, they can then postulate a mental image of it.
 
The intelligence had to precede the mental image for them to have had the
object described to them in enough detail to have made an image in their
minds of it.
 
And given the sentence "The elephant sat in the chair":
 
It is not necessary to know what an elephant or a chair looks like if one
knows from their descriptions that elephants are multi-ton quadrupeds and
chairs are smallish pieces of furniture sized for human beings, along with
the rule that something large and heavy will not fit correctly into something
small that is not built to carry that much weight.
 
The image pops into your mind because those memories of elephants and chairs
are keyed to those terms.  
 
But the images are not necessary to reason about the entities.
 
The picture is not what makes you realize the sentence is probably
nonsensical; it is the commonsense rules and knowledge that you have learned
and are bringing to bear on the sentence, almost subconsciously.
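A toy sketch of the kind of rule-based plausibility check described here (the
weight figures and the single rule are purely illustrative assumptions):

# Toy version of "rules, not pictures": reject sentences that violate a
# commonsense size/weight constraint.  Numbers are made up for illustration.
KNOWLEDGE = {
    "elephant": {"weight_kg": 4000},
    "chair":    {"max_load_kg": 150},
}

def sit_is_plausible(sitter, seat):
    return KNOWLEDGE[sitter]["weight_kg"] <= KNOWLEDGE[seat]["max_load_kg"]

print(sit_is_plausible("elephant", "chair"))   # False -> probably nonsense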

  _  

From: Mike Tintner [mailto:[EMAIL PROTECTED] 
Sent: Saturday, April 28, 2007 12:43 PM
To: agi@v2.listbox.com
Subject: Re: [agi] rule-based NL system


Classic objection.
 
The answer is that blind people can draw - reasonably faithful outlines of
objects. Experimentally tested.
 
Their brains like all our brains form graphic outlines of objects and fit
them accordingly to create scenes to test the sense of sentences.
 
Worms do it too. They are blind. But if you go back to the Darwin passage I
quoted, you will see that they can fill their burrows with all manner of
differently shaped objects by touch. The senses are interdependent. We work
by COMMON sense.
 
 


RE: [agi] SOTA

2007-01-14 Thread Gary Miller
No, and it's a damn good thing it isn't. If it were, we would be sentencing
it to a mindless job with no time off, only to be disposed of when a better
model comes out.
 
We only want our AIs to be as smart as necessary to accomplish their jobs,
just as our cells and organs are.
 
Limited consciousness or self-reflectivity may only be necessary in highly
complex systems like computers, where we may want them to recognize that they
have a virus and take steps, like searching for a digital vaccine, to
eliminate it without the owner even knowing it was there.
 
Even in these cases we are only giving the system consciousness over one
specific aspect of its being.
 
I would say that until we have software that can learn new free-format
information as we do, and modify its goal stack based upon that new
information, we do not have a truly conscious computer.

  _  

From: Bob Mottram [mailto:[EMAIL PROTECTED] 
Sent: Friday, January 12, 2007 9:45 AM
To: agi@v2.listbox.com
Subject: Re: [agi] SOTA



Ah, but is a thermostat conscious ?

:-)






On 12/01/07, [EMAIL PROTECTED] wrote:

http://www.thermostatshop.com/
 
Not sure what you've been Googling on but here they are.
 
There's even one you can call on the telephone


 If there's a market for this, then why can't I even buy a thermostat 
 with a timer on it to turn the temperature down at night and up in the 
 morning? The most basic home automation, which could have been built 
 cheaply 30 years ago, is still, if available at all, so rare that I've 
 never seen it. 
 




RE: [agi] SOTA

2007-01-06 Thread Gary Miller
Ben Said:
 Being able to understand natural language commands pertaining 
 to cleaning up the house is a whole other kettle of fish, of 
 course. This, as opposed to the actual house-cleaning, appears 
 to be an AGI-hard problem...

A full Turing complete Natural Language system would not be necessary 
for robotic control.  

A pattern such as {clean|sweep|vacuum} (the )[RoomName]room( {for|in}
[Number] minutes)

When coupled with a voice recognition system such as the one Nuance is
marketing, this would increase the usefulness and interactivity of the robot
immensely.

[RoomName] and [Number] become variables passed to the robot vacuum and can
have defaults if omitted from the command.
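As a rough illustration, the pattern above could compile down to something
like the following regex-based parser (the room list, defaults, and helper
names are illustrative assumptions, not part of any actual product):

# Sketch of the {clean|sweep|vacuum} ... [RoomName] ... [Number] pattern as a
# plain regex, with defaults applied when the optional parts are omitted.
import re

ROOMS = "kitchen|living|bed|bath"   # illustrative [RoomName] values
CMD = re.compile(
    rf"(?:clean|sweep|vacuum)(?:\s+(?:the\s+)?(?P<room>{ROOMS}))?\s+room"
    rf"(?:\s+(?:for|in)\s+(?P<minutes>\d+)\s+minutes)?",
    re.IGNORECASE)

def parse(utterance, default_room="living", default_minutes=15):
    m = CMD.search(utterance)
    if not m:
        return None
    return {"room": m.group("room") or default_room,
            "minutes": int(m.group("minutes") or default_minutes)}

print(parse("vacuum the bed room for 10 minutes"))   # {'room': 'bed', 'minutes': 10}
print(parse("clean room"))                            # defaults applied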

The robot could come with a set of canned patterns for starters, and the
patterns could be customized by the user and associated with new behaviors.

I like the idea of the house being the central AI, though, communicating with
house robots through an encrypted wireless protocol to prevent inadvertent
commands from other systems and hacking.

If the robots were built to a standard such as Microsoft's robotics toolkit,
then a single control and monitoring system could coordinate multiple robots'
activities, prevent collisions, coordinate efforts, etc.  You'd probably need
a backup fault-tolerant system to prevent loss of critical functions like
security, fire reporting, and temperature control.

The house AI would be interfaced with the telephone and internet so that you
could
enter remote commands if you think of something while you're away.

-Original Message-
From: Benjamin Goertzel [mailto:[EMAIL PROTECTED] 
Sent: Saturday, January 06, 2007 11:16 AM
To: agi@v2.listbox.com
Subject: Re: [agi] SOTA

Needless to say, I don't consider cleaning up the house a particularly
interesting goal for AGI projects.  I can well imagine it being done by a
narrow AI system with no capability to do anything besides manipulate simple
objects, navigate, etc.

Being able to understand natural language commands pertaining to cleaning up
the house is a whole other kettle of fish, of course.
This, as opposed to the actual house-cleaning, appears to be an AGI-hard
problem...

-- BenG

On 1/6/07, Pei Wang [EMAIL PROTECTED] wrote:
 Stanford scientists plan to make a robot capable of performing 
 everyday tasks, such as unloading the dishwasher.

 http://news-service.stanford.edu/news/2006/november8/ng-110806.html

 On 1/6/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:
  
   The problem wasn't technological.  It was that nobody had any use 
   for a robot.  We never figured out what people would want the robot
for.
   I think that's still the problem.
  
 
  Phil, I think the real issue is that no one wants an expensive, 
  stupid, awkward robot...
 
  A broadly functional household robot would be very useful, even if 
  it lacked intelligence beyond the human-moron level...
 
  For instance, right now, I would like a robot to go into my 
  daughter's room and clean up the rabbit turds that are in the rabbit 
  playpen in there.  I would rather not do it.  But, a Roomba can't 
  handle this task because it can't climb over the walls of the 
  playpen, nor understand my instructions, nor pick up the turds but 
  leave the legos on the floor alone...
 
  Heck, a robot to let the dogs in and out of the house would be nice 
  too... being doggie doorman gets tiring.  Of course, this could be 
  solved more easily by installing a doggie door ;-)
 
  How about a robot to bring me the cordless phone when it rings, but 
  has been left somewhere else in the house ... ?  ;-)
 
  How about one to put the dishes in the dishwasher and unload them ...
  and re-insert the ones that didn't get totally cleaned?  The 
  dishwasher is a good invention but it only does half the job
 
  The problem **is** technological: it's that current robots really 
  suck ... not that non-sucky robots would be useles...
 
  -- Ben
 


RE: [agi] Logic and Knowledge Representation

2006-05-09 Thread Gary Miller
 
 Ben asked:  What kind of bot are you using?  Do you mean a physical robot
or a chat bot? 

Just a chat bot for right now Ben.

Although I could imagine a future robot manufacturer licensing the code to
allow a customer to customize 
the high level cognitive/personality functions of the bot.

 Ben asked:
Would you be willing to share a few example patterns from your database, so
that we can have a better sense of what you're actually talking about?
Right now I have only a very vague idea

This example pattern could literally take hundreds of patterns to represent
in other pattern languages, and most bot engines wouldn't be able to handle
misspellings, words run together, etc. without creating separate patterns to
account for each possible case, which would be highly impractical to do.

My patterns usually are much longer and include quite a bit more complexity
than this simple example.  But this simple example should give you the flavor
without me giving too much of the mojo away.

Please understand, though, that I am not open-sourcing the project. And
although I am not certain which parts of what I am doing are patentable, if
any, I have documented the development and this partial disclosure of it so
that I am in a position to challenge anyone else who might try to
patent/commercialize it before me.  I'm not trying to delay the singularity,
but I have quite a lot invested in the development and need to make sure the
technology is used wisely.

 Pattern to match goes here
 Statements executed after the correct pattern is found
{} Required; match exactly one of the choices inside
() Optional; match one or zero of the choices inside
|  Separator for {} or () choices
[] Invokes a subpattern; subpatterns may invoke other subpatterns
// Comments OK here
#  Function names start with this character
Functions must be defined in the VB .Net code the language is implemented
in.

Variables form the knowledge base and may be assigned in the main.nlp main
program.
Variables can also be created dynamically from within the program.
Variable states are stored to an ASCII file at program exit.

e.g. variables used in patterns:
[Location]={[City]|[State]|[Country]|[Continent]|[Province]|[Tourist_Attraction]}
[City]={Greensburg|Jeanette|Las Vegas}
[State]={Ohio}
[Country]={Canada|Mexico}
[Tourist_Attraction]={the Grand Canyon}

Some example user inputs the below pattern will match.

Greensburg is near Jeanette
Ohio is by California
Mexico is close to Canada
Las Vegas is right by the Grand Canyon

 [Location]{({'|;})s| is} (j{u|i}st
)({extremely|pretty|quite|rather|really|truly} )(very )
   {close|[Direction]|in close proximity|near} {by|of|to} [Location]
 [Temp1]=[#Last([Location],1)]
   [Temp2]=[#Last([Location],2)]
   // Insert call to Google Maps API to get distance between [Temp1] and
[Temp2] returning [Distance]
   if([Distance]>20)
 They're not that close, on the map it looks like around [Distance]
miles or so.
   #else
 They are pretty close on the map it looks like around only [Distance]
miles.
   #endif
   [There]=[Temp1]
   [#Learn([Temp1] is [Distance] miles from [Temp2].)]
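To make the notation above concrete, here is a minimal sketch of how the
{a|b} and (optional) constructs, plus [Subpattern] expansion, could be
compiled into an ordinary regex. This is a guess at the semantics from the
examples, not the actual VB .Net engine, and it ignores the fuzziness and
function-call features:

# Minimal sketch: expand [Subpattern] references, turn ( ) into optional
# groups and { | } into required alternations, then compile as a regex.
import re

SUBPATTERNS = {
    "City": "{Greensburg|Jeanette|Las Vegas}",
    "Location": "{[City]|Ohio|Canada|Mexico|the Grand Canyon}",
}

def expand(p):
    while True:
        new = re.sub(r"\[(\w+)\]", lambda m: SUBPATTERNS[m.group(1)], p)
        if new == p:
            return p
        p = new

def compile_pattern(p):
    p = expand(p)
    p = re.sub(r"\(([^()]*)\)", r"(?:\1)?", p)    # ( ... )  -> optional part
    p = p.replace("{", "(?:").replace("}", ")")   # { a|b }  -> required choice
    return re.compile(p, re.IGNORECASE)

rx = compile_pattern("[Location] is (just )near [Location]")
print(bool(rx.search("Greensburg is near Jeanette")))          # True
print(bool(rx.search("Ohio is just near the Grand Canyon")))   # True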

Fuzziness can be built into the pattern for common keying errors, i.e. people
often hit ; instead of '.
{;|'} in the pattern matches either.

Fuzziness: spaces in the pattern are matched as ( ) optional spaces, allowing
for words that bleed together.

Spell Check has a dictionary of words ordered by frequency of use in
nl-text.

The spell-check algorithm uses the Boyer-Moore algorithm to sequentially
substitute potentially correct words for misspelled words in the user input.
This is only done if no patterns match the input.
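A toy sketch of that fallback path (the dictionary, the similarity cutoff,
and difflib here are only stand-ins for the frequency-ordered dictionary and
Boyer-Moore substitution pass described above):

# Toy fallback: if no pattern matched, swap unknown words for the closest
# dictionary word and re-try the match.
from difflib import get_close_matches

DICTIONARY = ["the", "is", "near", "vacuum", "kitchen", "room"]

def correct(utterance):
    fixed = []
    for word in utterance.lower().split():
        if word in DICTIONARY:
            fixed.append(word)
        else:
            close = get_close_matches(word, DICTIONARY, n=1, cutoff=0.7)
            fixed.append(close[0] if close else word)
    return " ".join(fixed)

print(correct("vacum the kitchn room"))   # "vacuum the kitchen room"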


This code would need to be a little more complicated due to cities of the
same name residing in different states or countries.

I would probably set [CurrentTopic] in an earlier matched pattern to
something like where the user lives.

 Ben said: The grounding issue is not just about voice versus text ---
it's about grounding of linguistic relationships in physical
relationships...

For instance, Novamente can learn that near is approximately transitive
via observing the 3D simulation world to which it's connected; it can then
use inference to generalize this transitivity to other domains that it has
not directly experienced, e.g. to reason that if Rockville is near Bethesda
and Bethesda is near DC then Rockville must be somewhat near DC.  I.e. the
system can ground the properties of near-ness in its embodied experience.
Without this kind of experiential grounding, a whole bunch of patterns that
humans learn implicitly must somehow be gotten into the AI either by
explicit encoding or by very subtle mining of relationships from a huge
number of NL texts or verrry tedious and long-winded chats with
humans... 

Agreed. When my son was 3-5 I had a lot of verrry tedious and long-winded
chats teaching him about the world also.  My
bot currently has no capability to generalize.  It therefore relies on the
same stereotypical generalizations that humans have already learned to base
its 

RE: [agi] Logic and Knowledge Representation

2006-05-08 Thread Gary Miller
 
 J. Andrew Rogers posed the question: The obvious question is how do you 
deal with the problem of the synonymity of patterns being context sensitive?

In any sufficiently rich environment, the type of compression you appear to
be 
describing above is naive and would have adverse consequences on efficacy.
Or  
at least, I cannot think of a construct that can do this efficiently  
while being some facsimile of fully general. 

It is of course true that a single thought can have a different meaning
depending on the context of the statements that precede it, and context also
changes based upon the gender, age, and psychological makeup of the person
being simulated.

Once a pattern is matched, multiple responses can be generated based on the
context, which is tracked from pattern to pattern.  [CurrentTopic], [He],
[She], [Them], [There], [Then], [User_Emotion], [AI_Emotion], as well as
several hundred variables which are not hardcoded in patterns but maintained
in header files, induced from prior user inputs, and used in responses,
maintain context.

Hence questions like:

"What did I say my name is?"

could generate "I don't believe you told me your name." or "My name is
[User_Name]."

Questions like "What are you talking about?" could be answered "I thought we
were talking about [Current_Topic]."
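A minimal sketch of that kind of variable substitution (the variable names
follow the post; the short-term/long-term split and the storage shown are
assumptions, not the actual engine):

# Sketch of response templates filled from context variables.
import re

long_term  = {"User_Name": "Alice"}            # persists across conversations
short_term = {"Current_Topic": "geography"}    # reset each new conversation

def respond(template):
    context = {**long_term, **short_term}
    return re.sub(r"\[(\w+)\]",
                  lambda m: context.get(m.group(1), m.group(0)), template)

print(respond("My name is [User_Name]."))
print(respond("I thought we were talking about [Current_Topic]."))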

Half of the challenge of writing the patterns is asking oneself what
information a human would induce from this input, and then storing it as a
temporary variable state.  This provides short-term memory as it pertains to
the conversation.

A variable can be defined as short-term (reset at the beginning of each new
conversation) or long-term, never reset until it has been changed by new
information coming in, even across different conversations.

As far as doing all of this efficiently: once my 28,000 patterns get loaded
into memory, the bot's brain reaches only about 7.3 MB, very small.

Search heuristics for matching patterns are problematic, though, because a
pattern can start with multiple different character streams, making the
matching algorithm do a lot of work for each input.

But the algorithm can be made parallel by dividing the pattern list into
multiple pieces and conducting the search in multiple threads of execution.
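A minimal sketch of that split-and-search idea (chunking plus a thread pool;
whether real threads pay off depends on the matcher, so treat this as
structure only, not a claim about the actual engine):

# Sketch: divide the pattern list into chunks and scan each chunk in its
# own worker, returning any chunk's first hit.
from concurrent.futures import ThreadPoolExecutor

def scan_chunk(patterns, utterance):
    for p in patterns:                 # patterns: precompiled regexes
        if p.search(utterance):
            return p
    return None

def match_parallel(patterns, utterance, workers=4):
    size = max(1, len(patterns) // workers)
    chunks = [patterns[i:i + size] for i in range(0, len(patterns), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for hit in pool.map(scan_chunk, chunks, [utterance] * len(chunks)):
            if hit is not None:
                return hit
    return None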

I am waiting to implement that feature until quad-core chips become available
next year.


-Original Message-
From: J. Andrew Rogers [mailto:[EMAIL PROTECTED] 
Sent: Monday, May 08, 2006 1:39 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Logic and Knowledge Representation


On May 7, 2006, at 6:37 PM, Gary Miller wrote:
 Which is why my research has led to a pattern language that can 
 compress all of the synonymous thoughts into a single pattern.


The obvious question is how do you deal with the problem of the  
synonymity of patterns being context sensitive?  In any sufficiently  
rich environment, the type of compression you appear to be describing  
above is naive and would have adverse consequences on efficacy.  Or  
at least, I cannot think of a construct that can do this efficiently  
while being some facsimile of fully general.


Cheers,

J. Andrew Rogers



RE: [agi] Logic and Knowledge Representation

2006-05-07 Thread Gary Miller





 John said: The human brain, the only 
high-level intelligent system currently known, uses language and logic for 
abstract reasoning, but these are based on, and owe their existence to, a more 
fundamental level of intelligence -- that of pattern-recognition, 
pattern-matching, and pattern manipulation.

I agree with this wholeheartedly. But on the next 
point we diverge in our thinking.

 John said: In evolution on earth, sensory-motor-based intelligence came
first, and the use of language and logic only later. It seems to me that the
right path to true AI will also use sensory-motor patterns as the basic
building blocks of knowledge representation. A typical human being's
knowledge of the letter "A" involves recognition of graphical representations
of the symbol, memories of its sound when spoken, procedural or muscle memory
of how to speak and write it, and memories of where it is commonly found in
its linguistic context. A system should be capable of recognizing symbols
visually or auditorially (and possibly of generating them through motor
outputs) before it should be expected to comprehend them.

Any thoughts or 
arguments? Or am I just repeating something everyone already knows? 
(I honestly don't know.) 


Speech recognition and visual recognition are separate problems from
knowledge representation/pattern recognition.

Helen Keller was blind and deaf, but with some help was able to achieve
knowledge representation and pattern recognition without the use of either
hearing or sight.

Think of the senses as the input/output devices; and yes, an infant's brain
must first learn to control those input/output devices before it is able to
learn and communicate with the world outside itself.

But an artificially intelligent entity already has access to an ASCII data
stream on which it can do input/output to communicate outside itself.

Of course, because a picture is worth a thousand words, a program that can
also do visual recognition has access to a larger data store than one that
does not.

My opinion on the most probable route to a true AI entity is:

1. Build a better fuzzy pattern representation language with an inference
mechanism for extracting inducible information from user inputs. Fuzziness
allows the language to understand utterances with misspellings, words run
together, etc.
2. Build a bot based on said language.
3. Build a large knowledge base which captures a large enough percentage of
real-world knowledge to allow the bot to learn from natural language data
sources, i.e. the web.
4. Build a pattern generator which allows the bot to learn the information it
has read and build new patterns itself to represent the knowledge.
5. Build a reasoning module based on Bayesian logic to allow simple reasoning
to be conducted.
6. Build a conflict resolution module to allow the bot to resolve/correct
conflicting information, or ask for help with clarification, to build a
correct mental model.
7. Build a goal and planning module which allows the bot to operate more
autonomously to aid in the goals which we give it, i.e. achieve singularity.

Steps 1 
and 2 an took me a couple years.
3 is an 
ongoing effort. Into my fourth year now with 28000 patterns. 

Hint: if the pattern recognition language is good, a single pattern should
be able to express all the ways of expressing a single thought. This makes
the patterns longer and more complex but reduces overall work by not
forcing the bot master to write thousands of patterns to account for all
possible ways to express a single thought. My 28,000 patterns will
correctly match several orders of magnitude more inputs than competing
solutions, including misspellings, ungrammatical inputs, etc.
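
As an illustration only (again, this is not my pattern syntax), a single
regular expression can stand in for dozens of literal patterns by folding
the alternative word choices and optional words into one expression:

    import re

    # One consolidated pattern covering many phrasings of "what is your name?"
    NAME_QUESTION = re.compile(
        r"\b(what('?s| is)|may i (ask|know)|tell me)\b.*\b(your|ur|yur)\s+name\b",
        re.IGNORECASE,
    )

    for utterance in ["What is your name?",
                      "may i ask what ur name is",
                      "tell me your name please"]:
        print(bool(NAME_QUESTION.search(utterance)), utterance)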

This transforms the difficulty of step 3 from a totally intractable problem
to a doable but still difficult, work-intensive problem.

Step 4 
is keeping me awake at night thinking about 
it.

Steps 5, 6 and 7 don't sound that difficult to me right now, but that's
only because I haven't thought about them in enough detail.

People have challenged the top-down approach, saying that such a bot would
lack grounding or the ability to tie its knowledge to real-world inputs.

But it should not be difficult to use a commercial voice recognition engine
to transform voice inputs into ASCII inputs. And the fuzzy recognizer
should be able to compensate, many times, for the mistakes that the voice
recognition software makes in recognizing a word or two in the input
stream.




From: John Scanlon [mailto:[EMAIL PROTECTED]
Sent: Sunday, May 07, 2006 2:40 AM
To: agi@v2.listbox.com
Subject: [agi] Logic and Knowledge Representation

Is anyone interested in discussing the use of formal logic as the
foundation for knowledge representation schemes for AI? It's a common
approach, but I think it's the wrong path. Even if you add probability or
fuzzy logic, it's still insufficient for true intelligence.

The human brain, the only 

RE: [agi] Friendliness toward humans

2003-01-10 Thread Gary Miller
EGHeflin said:

 The reason is that the approach is essentially 'Asimovian' in nature
and, therefore, 
 wouldn't result in anything more than perhaps a servile pet, call it
iRobot, which 
 is always 'less-than-equal' to you and therefore always short of your
goal to achieve 
 the so called 'singularity' you originally set out to achieve.

When our children are small we make rules for them which are eventually
outgrown:

Don't hit other children.

Don't use bad words.

Be polite to other people.

At first our children may be incapable of understanding all the reasons
these rules are in place.  But by being forced to obey them either by
withholding pleasure or by punishment they become a part of their normal
behavior patterns such that when they mature and modify their own rules
for themselves these guidelines are part of their normal behavior.

Whether we tell the AGI its high-priority rules to ensure safety or
hardcode them should not make a difference.  As long as they are acted
on repeatedly they will become a part of its normal behavior through
normal enforcement.  At that point the hardcoded rules could be removed
without fear that the AI would revert to a cursing, violent, rude lout. 
 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
Behalf Of EGHeflin
Sent: Thursday, January 09, 2003 2:06 PM
To: [EMAIL PROTECTED]
Subject: Re: [agi] Friendliness toward humans



Kevin et al.,

Fascinating set of observations, conjectures, and methodologies, well
worth considering.  And it seems that you have ultimately touched on the
kernel of the dilemma of man v. machine en route to the so called
'singularity'.

If I've understood you correctly vis-a-vis the emergence of 'evil' in
AGI systems, you're suggesting that there is a need to prevent, a
priori, certain expressions of self in AGI systems to prevent any hint
of evil in AGI systems.  It's an approach well worth considering, but I
believe it falls short of 'real' AGI.

The reason is that the approach is essentially 'Asimovian' in nature
and, therefore, wouldn't result in anything more than perhaps a servile
pet, call it iRobot, which is always 'less-than-equal' to you and
therefore always short of your goal to achieve the so called
'singularity' you originally set out to achieve.

But perhaps any discussion about 'good' and 'evil' is best served by
defining exactly what 'good' and 'evil' are. However, I'll be a complete
Sophist and suggest that the dilemma of 'good' and 'evil' can be talked
around by separating the dilemma into 3 obvious types and talking about
these.  As I see it, the 3 dilemma types of 'good' and 'evil' are: 1.
man v. man, 2. man v. machine, and/or, at the 'singularity' 3.
man-machine v. man-machine.  So I'll comment on a particular approach
for 'real' AGI that addresses the dilemma type (2) guided by
observations about type (1) and with obvious extensions to type (3).

It seems that if you are trying to model your AGI system after nature,
which is a reasonable and likely place to start, you should realize that
'nature' simply hasn't created/engineered/evolved the human species the
way your approach suggests.  Put another way, the human species does not
have an intrinsic suppression of either 'good' or 'evil' behaviors.

And unless you're willing to hypothesize that this is either a momentary
blip on the radar screen of evolution, i.e. humans are actually in the
process of breaking this moralistic parity, OR that these 'good' or
'evil' behaviors will ultimately be evolved away through 'nature',
natural selection, and time, you are left with an interesting conjecture
in modeling your AGI system.

The conjecture is that 'good' or 'evil' behaviors are intrinsic parts of
the human condition, intelligence, and environment, and therefore should
be intrinsic parts in a 'real' AGI system.  And as a complete Sophist,
I'll skip over more than 6,000 years of recorded human history,
philosophical approaches, religious movements, and scholarly work - that
got us where we're at today w.r.t. dilemma type (1) - to suggest that
the best approach to achieve 'real' AGI is to architect a system that
considers all potential behaviors, from 'good' to 'evil', against
completed actions and conjectured consequences.  In this way, a certain
kernel of the struggle of 'good' and 'evil' is retained, but the system
is forced to 'emote' and 'intellectualize' the dilemma of 'good' and
'evil' behaviors.

The specific AGI architecture I am suggesting is essentially
'Goertzelian' in nature.  It is a  'magician system', whereby the two
components, G
(Good-PsyPattern(s)) and E (Evil-PsyPattern(s)), are, in and of
themselves, psychological or behavioral patterns that can only be
effectuated,  i.e. assigned action patterns, in a combination with
another component to generate and emerge as an action pattern, say U
(Useful or Utilitarian-ActionPattern).  The system-component
architecture might be thought of as a G-U-E or GUE 'magician system'.

The 

RE: [agi] The AGI and the will to live

2003-01-09 Thread Gary Miller
At some early point the AGI will have to learn to equate pleasure with
learning and acquiring new experience.  

With a biological organism the stimuli are provided as pain and
pleasure.

As we mature many of our pleasure causers are increasingly subtle and
are actually learned pleasure generators themselves.  As the organism
matures it becomes able to initiate its own pleasure, but it also needs
to vary the type of pleasure it experiences.  This is largely because
the same pleasure repeated over and over again induces less satisfaction
as time goes on.  While the exception may be drug and other physical
addictions, for the most part we constantly seek to keep ourselves
stimulated.

At some internal level this type of internal reward system must develop
or motivation to do anything will not exist.
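
A toy sketch of what I mean, purely illustrative and not anyone's actual
design: an internal reward signal that decays each time the same stimulus
is repeated, so the system is pushed toward novelty:

    from collections import defaultdict

    class RewardSystem:
        # Internal reward that habituates: repeating the same stimulus pays less.
        def __init__(self, base_reward=1.0, decay=0.5):
            self.base_reward = base_reward
            self.decay = decay
            self.exposure = defaultdict(int)   # how often each stimulus was seen

        def reward(self, stimulus):
            r = self.base_reward * (self.decay ** self.exposure[stimulus])
            self.exposure[stimulus] += 1
            return r

    rs = RewardSystem()
    print([rs.reward("same song") for _ in range(4)])   # 1.0, 0.5, 0.25, 0.125
    print(rs.reward("new experience"))                  # back to 1.0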


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
Behalf Of Colin Hales
Sent: Thursday, January 09, 2003 5:23 PM
To: [EMAIL PROTECTED]
Subject: [agi] The AGI and the will to live



Hi all,

I find the friendliness issue fairly infertile ground tackled way too
soon. Go back to where we are: the beginning. I'm far more interested in
the conferring of a will to live. Our natural tendency is to ascribe
this will to live to our intelligent artifacts. This 'seed' is by far
the hardest thing to create and the real determinant of 'friendliness'
in the end. Our seed? I think a model that starts from something like
'another heartbeat must happen'. When you don't have a heart? What -
poke a watchdog timer every 100msec or die?

My feeling at the moment is that far from having a friendliness problem
we're more likely to need a cattle prod to keep the thing interested in
staying awake, let alone getting it to take the trouble to formulate any
form of friendliness or malevolence or even indifference.

If our artifact is a zombie, what motivation is there to bother _faking_
friendliness or malevolence or even indifference? Without it Pinocchio
the puppet goes to sleep.

If our artifact is not a zombie (i.e. has a real subjective experience)
then what motivates _real_ friendliness or malevolence or even
indifference? Without it Pinocchio the artificial little boy goes to
sleep.

Whatever the outcome, at its root is the will to even start learning
that outcome. You have to be awake to have a free will.

What gets our AGI progeny up in the morning?

regards,


Colin Hales


---
To unsubscribe, change your address, or temporarily deactivate your
subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]





RE: [agi] Early Apps.

2002-12-28 Thread Gary Miller
Ben Goertzel wrote:

I don't think that a pragmatically-achievable amount of formally-encoded
knowledge is going to be enough to allow a computer system to think
deeply and creatively about any domain -- even a technical domain about
science. What's missing, among other things, is the intricate
interlinking between declarative and procedural knowledge.  When humans
learn a domain, we learn not only facts, we learn techniques for
thinking and problem-solving and experimenting and
information-presentation .. and we learn these in such a way that
they're all mixed up with the facts 

What you're describing is the Expert System approach to AI, closely
related to the common sense approach to AI.
 
...

I agree that as humans we bring a lot of general knowledge with us when
we learn a new domain.  That is why I started off with the general
conversational domain and am now branching into science, philosophy,
mathematics and history.  And of course the AI cannot make all the
connections without being extensively interviewed on a subject and
having a human help clarify its areas of confusion, just as a parent
answers questions for a child or a teacher for a student.  I am not in
fact trying to take the exhaustive one-domain-at-a-time approach, but
rather to teach it the most commonly known and requested information
first.  My last email just used that description to identify my thoughts
on grounding.  I am hoping that by doing this and repeating the
interviewing process in an iterative development cycle, the bot will
eventually be able to discuss many different subjects at a somewhat
superficial level, much the same as most humans are capable of.  This is
a lot different from the exhaustive definition that Cyc provides for each
concept.

I view what I am doing as distinct from expert systems because I do not
yet use either a backward or forward inference engine to satisfy a limited
number of goal states.  The knowledge base is not in the form of rules
but rather many matched patterns and encoded factoids of knowledge, many
of which are transitory in nature and track the context of the
conversation.  Each pattern may trigger a request for additional
information like an expert system.  But the bot does not have a
particular goal state in mind other than learning new information, unless
a specific request of it is made by the user.  I also differ from Cyc in
that, realizing the importance of English as a user interface from the
beginning, all internal thoughts and goal states occur as an internal
dialog in English.  This eliminates the requirement to translate an
internal knowledge representation to an external natural language, other
than providing one or more response patterns to specific input patterns.
It also makes it easy to monitor what the bot is learning and whether it
is making proper inferences, because its internal thought process is
displayed in English while in debug mode.  The templates which generate
the responses in some cases do have conditional logic to determine which
output template is the appropriate response based on the AI's personality
variables and the context of the current conversation.  Variables are
also set conditionally to maintain metadata for context.  If the
conversation references a male, [He] and [Him] get set vs. [Her] and
[She] if a female is referenced.  [CurrentTopic], [It], [There] and
[They] are all set to maintain backward contextual references.
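
A stripped-down sketch of how such context variables and response templates
might be wired together (hypothetical names and structure, far simpler than
the real thing):

    # Conversational context variables along the lines of [He], [She], [CurrentTopic].
    context = {}

    def update_context(entity, gender):
        # Track backward references so later templates can resolve pronouns.
        context["[CurrentTopic]"] = entity
        if gender == "male":
            context["[He]"], context["[Him]"] = entity, entity
        elif gender == "female":
            context["[She]"], context["[Her]"] = entity, entity
        else:
            context["[It]"] = entity

    def respond(template):
        # Fill a response template from the current conversational context.
        for var, value in context.items():
            template = template.replace(var, value)
        return template

    update_context("Ben", "male")
    print(respond("Yes, [He] mentioned that earlier."))
    # -> Yes, Ben mentioned that earlier.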

I was able to find a few references to the Common Sense approach to AI
on Google and some of the difficulties in achieving it.  And I must
admit I have not implemented non-monotonic reasoning or probabilistic
reasoning as of yet.  I am not under the illusion that I am necessarily
inventing or implementing anything that has not been conceived of
before.  As Newton said, if I achieve great heights it will be because I
have stood on the shoulders of giants.  I just see the current state of
the art and think that it can be made much better.  I do not actually
know how far I can take it while staying self-funded, but hopefully by
the time my money runs out it will demonstrate enough utility and
potential to be of value to someone.  I think I like the sound of the
Common Sense Approach to AI though.  I can't remember the last time
anyone accused me of having common sense, but I like the sound of it!

I don't think AI lacks sufficient theory, just sufficient execution.
I feel like the Cyc Project's heart was in the right place and the level
of effort was certainly great, but perhaps the purity of their vision
took priority over usability of the end result.  Is any company actually
using Cyc as anything other than a search engine yet?  

That being said, other than Cyc I am at a loss to name any serious AI
efforts which are over a few years in duration and have more than 5
man-years worth of effort (not counting promotional and fundraising).  

The Open Source efforts are interesting and have some utility but are

RE: [agi] Early Apps.

2002-12-27 Thread Gary Miller
On Dec 26 Ben Goertzel said:

 One basic problem is what's known as symbol grounding.  This 
 means that an AI system can't handle semantics, language-based 
 cognition or even advanced syntax if it doesn't understand the 
 relationships between its linguistic tokens and patterns in the 
 nonlinguistic world.

I guess I'm still having trouble with the concept of grounding.  If I
teach/encode a bot with 99% of the knowledge about hydrogen using facts
and information available in books and on the web, it is now an idiot
savant in that it knows all about hydrogen and nothing about anything
else, and it is not grounded.  But if I then examine the knowledge
learned about hydrogen for other mentioned topics like gases, elements,
water, atoms, etc., and teach/encode 99% of the knowledge on these topics
to the bot, then the bot is still an idiot savant, but less so.  Isn't it
better grounded?  A certain amount of grounding, I think, has occurred by
providing knowledge of related concepts.

If we repeat this process again, we may say the program is an idiot
savant in chemistry.

Each time we repeat the process are we not grounding the previous
knowledge further?  The bot can now reason and respond to questions not
just about hydrogen; it now has an English representation of the
relationship between hydrogen and other related concepts in the physical
world.

If we were to teach someone such as Helen Keller with very limited
sensory inputs would we not be attempting to do the same thing?

Humans of course do not learn in this exhaustive manner.  We get a
shotgun bombardment of knowledge from all types of media on all manner
of subjects.  The things that interest us we pursue additional knowledge
about.  The more detailed our knowledge in any given area the greater we
say our expertise 
is.  Initially we will be better grounded than a bot, because as
children we learn a little bit about a whole lot of things.  So anything
new we learn we attempt to tie into our semantic network.  

When I think, I think in English.  Yes, at some level below my
conscious awareness these English thoughts are electrochemically
encoded, but consciously I reason and remember in my native tongue, or I
retrieve a sensory image, multimedia if you will.

If someone tells me that a kinipsa is a terrible plorid, I attempt to
determine what a kinipsa and a plorid are so that I may ground this
concept and interconnect it correctly within my existing semantic
network.  If a bot is taught to pursue new knowledge and ground the
unknown terms with its existing semantic net, by putting the goals Find
out what a plorid is and Find out what a kinipsa is on its list of
short-term goals, then it will ask questions and seek to ground itself as
a human would!
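
A minimal sketch of that behavior, assuming a toy set-based lexicon (my
knowledge base is pattern-based, so this is only illustrative):

    known_terms = {"hydrogen", "gas", "element", "water", "atom", "terrible"}
    short_term_goals = []

    def ground_sentence(sentence):
        # Queue a grounding goal for every content word the bot does not yet know.
        for word in sentence.lower().replace(".", "").split():
            if word.isalpha() and len(word) > 3 and word not in known_terms:
                goal = f"Find out what a {word} is"
                if goal not in short_term_goals:
                    short_term_goals.append(goal)

    ground_sentence("A kinipsa is a terrible plorid.")
    print(short_term_goals)
    # ['Find out what a kinipsa is', 'Find out what a plorid is']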

I will agree that today's bots are not grounded, because they are idiot
savants and lack the broad-based, high-level knowledge with which to
ground any given fact or concept.  But if I am correct in my thinking,
this is the same problem that Helen Keller's teacher was faced with in
teaching Helen one concept at a time until she had enough simple
information or knowledge to build more complex knowledge and concepts
upon.

When a child learns to speak he does not have a large dictionary to draw
on to tell him that mice is the plural of mouse.  No rule will tell
him that.  He has to learn it.  He will say mouses and someone will
correct him.  It gets added to his NLP database as an exception to the
rule.  A human has limited storage, so a rule learned by generalizing
from experience is a shortcut to learning and remembering all the plural
forms for nouns.  In an AGI we can give the intelligence certain learning
advantages such as these dictionaries and lists of synonym sets, which do
not take that much storage in the computer.
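
The same point in miniature, as an illustrative sketch rather than how my
bot actually stores it: a general pluralization rule plus a small learned
exception table takes far less storage than memorizing every plural form:

    # Learned exceptions override the general "add -s" rule.
    plural_exceptions = {"mouse": "mice", "child": "children", "foot": "feet"}

    def pluralize(noun):
        return plural_exceptions.get(noun, noun + "s")

    def learn_exception(noun, corrected_plural):
        # Called when someone corrects the learner, e.g. "mouses" -> "mice".
        plural_exceptions[noun] = corrected_plural

    print(pluralize("cat"))    # cats, by the general rule
    print(pluralize("mouse"))  # mice, by learned exception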

I also think that children do not deal with syntax.  They have heard a
statement similar to what they want to express and have this stored as a
template in their minds.  I think we cut and paste what we are trying to
say into what we think is the correct template and then read it back to
ourselves to see if it sounds like other things we have heard and seems
to make sense.  For people who have to learn a foreign language as an
adult this is difficult because they tend to think in their first
language and commingle the templates from their original and the new
language. But because we do not parse what we hear and read strictly by
the laws of syntax we have little trouble understanding many of these
ungrammatical utterances.
 
 


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
Behalf Of [EMAIL PROTECTED]
Sent: Thursday, December 26, 2002 11:03 PM
To: [EMAIL PROTECTED]
Subject: RE: [agi] Early Apps.



On 26 Dec 2002 at 10:32, Gary Miller wrote:

 On Dec. 26 Alan Grimes said:
 
  According to my rule of thumb,
  If it has a natural language database it is wrong, 
  
 Alan I can see based on the current generation of bot technology why 
 one

RE: [agi] Early Apps.

2002-12-26 Thread Gary Miller
On Dec. 26 Alan Grimes said:

 According to my rule of thumb, 
 If it has a natural language database it is wrong, 
 
Alan, I can see, based on the current generation of bot technology, why
one would feel this way.

I can also see people having the view that biological systems learn from
scratch so that AI systems should be able to also.

Neither of these arguments is particularly persuasive, though, based on
what I've developed to date.

Do you have other arguments against a NLP knowledge based approach that
you could share with me.

If you feel this is out of bounds for the list please just email me with
your arguments. 

I am involved in such a project and certainly don't wish to be wasting my
time!


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
Behalf Of Alan Grimes
Sent: Thursday, December 26, 2002 1:12 AM
To: [EMAIL PROTECTED]
Subject: [agi] Early Apps.



According to my rule of thumb, 

If it has a natural language database it is wrong, 

many of the proposed early AGI apps are rather unfeasible. 

However, there is a very interesting application which goes straight to
the heart of the main AI problem and also provides a very valuable tool
for flexing the chips that we already have in our sweaty little hands. 

The area is COMPILERS. 

Today's compilers are notoriously bad. The leading free compiler is
atrociously bad. 

Now, if there could be an AI-based compiler that could both understand
the source and the machine in a very human-like way, the output code
would be that much better. This would also be valuable for a bootstrap
AI, though I strongly caution against such an AI until we have a _MUCH_
better understanding of what is going on. 

I expect to be preparing a proposal in a few months that will outline a
complete strategy for an AI that should be both feasible and, through
inherent architectural constraints, be reasonably safe. 

-- 
pain (n): see Linux.
http://users.rcn.com/alangrimes/

---
To unsubscribe, change your address, or temporarily deactivate your
subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]





RE: [agi] AI on TV

2002-12-09 Thread Gary Miller



On Dec. 9 Kevin 
said:

"It seems to me that building a strictly "black 
box" AGI that only uses text or graphical input\output can have tremendous 
implications for our society, even without arms and eyes and ears, etc. 
Almost anything can be designed or contemplated within a computer, so the need 
for dealing with analog input seems unnecessary to me. Eventually, these 
will be needed to have a complete, human like AI. It may even be better 
that these first AGI systems will not have vision and hearing because it will 
make it more palatable and less threatening to the masses"

I agree wholeheartedly. Sony and Honda, as well as several military
contractors, are spending tens, perhaps hundreds, of millions of dollars
on R&D robotics programs which incorporate vision, analog control, and
data acquisition for industry, the military, and yes, even the toy
companies.


Once AGIs are ready to fly they will be able to interface with these
systems through software APIs (Application Programming Interfaces) and
will not even care about the low-level programs that enable them to move
about and visually survey their environments.

Too often those who seek the spotlight are really sincere, but need
recognition either for their own self-reassurance or as a method of
attracting potential funding.

There seems to be an unwritten law in the universe which says all major
inventions will involve major sacrifice and loss for those who dare to
tackle what has been deemed impossible by others. From Galileo to Edison,
to Tesla, to maybe one of us. Before we succeed, if we succeed, the
universe will exact its toll. For nature will not give up her secrets
willingly, and intelligence may be her most closely guarded secret of
all!

Don't 
forget that genius and madness sometimes walk arm in arm! 


And as the man says, if you weren't crazy when you got in, you probably
will be before you get out!

  
  -Original Message-
  From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of maitri
  Sent: Monday, December 09, 2002 11:08 AM
  To: [EMAIL PROTECTED]
  Subject: [agi] AI on TV
  There was a show on the tube last night on 
  TechTV. It was part of their weekly Secret, Strange and True 
  series. They chronicled three guys who are working on creating advanced 
  AI.
  
  One guy was from Belgium. My apologies to 
  him if he reads this list, but he was a rather quirky and stressed 
  character. He had designed a computer that was basically a collection of 
  chips. He raised a million and had it built on spec. I gather he 
  was expecting something to miraculously emerge from this collection, but alas, 
  nothing did. It was really stressful watching his stress. He had 
  very high visibility in the country and the pressure was immense as he 
  promised a lot. I have real doubts about his approach, even though I am 
  a lay-AI person. Also, its clear from watching him that its sometimes 
  good to have shoestring budgets and low visibility. Less stress and more 
  forced creativity in your approach...
  
  The second guy was from either England or the 
  states, not sure. He was working out of his garage with his wife. 
  He was trying to develop robot AI including vision, speech, hearing and 
  movement. He was clearly floundering as he radically redesigned what he 
  was doing probably a dozen times during the 1 hour show. I think this 
  experimentation has value. But I really wonder if large scale trial and 
  error will result in AGI. I don't think so. I think trial and 
  error will, of course, be essential during development, but T and E of the 
  entire underlying architecture seems a folly to me. Since the problem is 
  SO immense, I believe one must start with a very sound and detailed game plan 
  that can be tweaked as things move along.
  
  The last guy was Brooks at MIT. They were developing a robot with
  enhanced vision capabilities. They also failed miserably. I am rather
  glad that they did. They're funded by DOD, and are basically trying to
  build a robotic killing machine. Just what we need.
  
  It seems to me that trying to tackle the vision 
  problem is too big of a place to start. While all this work will have 
  value down the line, is it essential to AGI? It seems to me that 
  building a strictly "black box" AGI that only uses text or graphical 
  input\output can have tremendous implications for our society, even without 
  arms and eyes and ears, etc. Almost anything can be designed or 
  contemplated within a computer, so the need for dealing with analog input 
  seems unnecessary to me. Eventually, these will be needed to have a 
  complete, human like AI. It may even be better that these first AGI 
  systems will not have vision and hearing because it will make it more 
  palatable and less threatening to the masses
  
  The show was rather discouraging, especially if 
  one considers that these three folks are leading the way towards 

RE: [agi] general patterns induction

2002-12-08 Thread Gary Miller

A paper I found while researching trigram frequency a little further
looks like it may be right up your alley.

http://www.ling.gu.se/~kronlid/term_paper/nlp_paper.pdf
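
And to make the trigram idea concrete for your prediction question, here is
a bare-bones illustrative sketch (my own toy example, not taken from the
paper): count character trigrams in the stream seen so far, then predict the
most likely continuation of the last two characters:

    from collections import defaultdict, Counter

    def train_trigrams(stream):
        # Map each two-character context to counts of the characters that follow it.
        model = defaultdict(Counter)
        for i in range(len(stream) - 2):
            model[stream[i:i + 2]][stream[i + 2]] += 1
        return model

    def predict_next(model, stream):
        # Predict the most frequent continuation of the last two characters.
        context = stream[-2:]
        if model[context]:
            return model[context].most_common(1)[0][0]
        return None

    data = "the cat sat on the mat and the cat ran"
    model = train_trigrams(data)
    print(predict_next(model, "the ca"))   # most likely 't'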
 


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
Behalf Of Pablo
Sent: Sunday, December 08, 2002 9:34 PM
To: [EMAIL PROTECTED]
Subject: [agi] general patterns induction



Hi Everyone,

I'm looking for information about pattern induction or general
patterns or anything that sounds like that... 

What I want to do is, having a stream of data, predict what may come.
(yes, and then take over the world... sorry if it sounds like Pinky and
The Brain!!)

I guess general pattern induction is related to data compression,
because if we find a pattern in a string, then we don't have to write
all the characters every time the pattern appears. Surely someone has
already been working on that (who?)

Would anyone please give me a clue? Is there any book I should read? Is
there any book like AI Basics, Introduction to AI, or AI for Dummies
that may help?

Thanks a lot!

Pablo Carbonell

PS: thanks Ben, Kevin and Eliezer for the previous help

---
To unsubscribe, change your address, or temporarily deactivate your
subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]





RE: [agi] An idea for promoting AI development.

2002-12-02 Thread Gary Miller
On 12/01 Ben Goertzel said:

 2) to avoid the military achieving *exclusive* control over one's
technology

What I am about to say may sound blasphemous, but the military may be
the group with the resources to protect the technology!

By publicizing and making AGI technology generally available, other
hostile governments/militaries may see AGI as a potential weapon and
resort to traditional methods to acquire the technology: industrial
espionage, kidnappings of key scientists.  Or they may fear it as
another potential tool to rein in their aggression and target it for
destruction, which means facilities and people.  If this sounds
farfetched, just look at the lengths certain countries go to in order to
acquire plutonium.

If the potential for AGI is seen as great and world-changing, who
better to protect it, or at least offer stewardship, than the US or NATO
military?  What private or non-profit is prepared and qualified to
protect the technology when it starts to get really interesting?

There are a few possibilities of why DoD is currently a prime
contributor to AI research.  

1. They fear it (saw Terminator and WarGames), so they had better damn
well keep an eye on it (their paranoia).  

2. They may have it already in the NSA basements and want to control
the direction of other research to keep their tactical advantage. (My
paranoia)

3. They are starting to see the results of years of research in robotics
and drone warriors in keeping the casualty counts down and the American
public happy and look at AI as another way to continue this trend.

4. They are good at fumbling the ball when it comes to acting in a
timely manner on terrorist threats and it's much easier to blame an AI
when they screw up than risk their cushy jobs.


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
Behalf Of Ben Goertzel
Sent: Monday, December 02, 2002 9:09 AM
To: [EMAIL PROTECTED]
Subject: RE: [agi] An idea for promoting AI development.





Regarding being wary about military apps of AI technology, it seems to
me there are two different objectives one might pursue:

1) to avoid militarization of one's technology

2) to avoid the military achieving *exclusive* control over one's
technology

It seems to me that the first objective is very hard, regardless of
whether one accepts military funding or not.  The only ways that I can
think of to achieve 1) would be

1a) total secrecy in one's project all the way

1b) extremely rapid ascendancy from proto-AGI to superhuman AGI -- i.e.
reach the end goal before the military notices one's project.  This
relies on security through simply being ignored up to the proto-AGI
phase...

On the other hand, the second objective seems to me relatively easy.  If
one publishes one's work and involves a wide variety of developers in
it, no one is going to achieve exclusive power to create AGI.  AGI is
not like nuclear weapons, at least not if a
software-on-commodity-hardware approach works (as I think it will).
Commodity hardware only is required, programming skills are common, and
math/cog-sci skills are not all *that* rare...

-- Ben G





 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]On
 Behalf Of Alexander E. Richter
 Sent: Monday, December 02, 2002 7:48 AM
 To: [EMAIL PROTECTED]
 Subject: RE: [agi] An idea for promoting AI development.


 At 07:18 02.12.02 -0500, Ben wrote:
 
 Can one use military funding for early-stage AGI work and then 
 somehow delimitarize one's research once it reaches a certain point?
 One can try,
 but will one succeed?

 They will squeeze you out, like Lillian Reynolds and Michael Brace in 
 BRAINSTORM (1983) (Christopher Walken, Natalie Wood)

 cu Alex

 ---
 To unsubscribe, change your address, or temporarily deactivate your 
 subscription, please go to 
 http://v2.listbox.com/member/?[EMAIL PROTECTED]


---
To unsubscribe, change your address, or temporarily deactivate your
subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]





RE: [agi] An idea for promoting AI development.

2002-11-29 Thread Gary Miller
FYI Arthur T. Murray

I just tried to order your Mentifex book at iUniverse, but the site was
bombing at the checkout screen.

I'll try again later but just wanted to let you know you might be losing
orders!
 


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
Behalf Of Arthur T. Murray
Sent: Friday, November 29, 2002 11:38 AM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: [agi] An idea for promoting AI development.





On Fri, 29 Nov 2002, Alan Grimes wrote:

 Jeremy Smith wrote: [...]
 
  He also seems to be just asking for a huge sum of money to implement
  it!!!

Mentifex/Arthur here with an announcement.  I'm asking for $17.95 U.S.

The Mentifex AI Textbook has today Thurs.29.Nov.2002 just been published
by iUniverse.com as AI4U: Mind-1.1 Programmer's Manual on the Web at
http://www.iuniverse.com/bookstore/book_detail.asp?isbn=0595259227
(q.v.).

It would probably cost less to buy the print-on-demand (POD) textbook
than to print out all the associated Mentifex pages on the Web.

In a few weeks it should be possible for interested or curious parties
to track AI4U on Amazon and see how many millions down it is ranked!

/End interrupt mode -- Arthur T. Murray

 
 Perspective:
 The latest release of MS windows cost $2Billion...
 
 A typical internet start-up would receive anywhere from 20 to 50 
 million in VC.
 
 Heck, in the VC world you need to ask for large sums of money just to 
 get people's attention.

---
To unsubscribe, change your address, or temporarily deactivate your
subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]

