FYI Arthur T. Murray,
I just tried to order your Metifex book at iUniverse, but the site was
bombing at the checkout screen.
I'll try again later, but I wanted to let you know you might be losing
orders!
On 12/01 Ben Goertzel said:
"2) to avoid the military achieving *exclusive* control over one's
technology"
What I am about to say may sound blasphemous, but the military may be
the group with the resources to protect the technology!
By publicizing and making AGI technology generally available other ...
While researching trigram frequency a little further, I found a paper
that looks like it may be right up your alley:
http://www.ling.gu.se/~kronlid/term_paper/nlp_paper.pdf
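For anyone unfamiliar with the term, a trigram frequency count is easy to sketch. This minimal Python example (my illustration, not taken from the paper) counts character trigrams in a string:

```python
from collections import Counter

def trigram_frequencies(text):
    """Count overlapping character trigrams in a text, ignoring case."""
    text = text.lower()
    trigrams = [text[i:i + 3] for i in range(len(text) - 2)]
    return Counter(trigrams)

freqs = trigram_frequencies("banana")
print(freqs["ana"])  # 2 ("banana" contains "ana" twice)
```

The same approach works for word trigrams by splitting the text into tokens first and sliding the window over the token list instead of the characters.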
-Original Message-
From: Pablo
Sent: Sunday, December 08
On Dec. 9 Kevin said:
"It seems to me that building a strictly 'black box' AGI that only uses
text or graphical input/output can have tremendous implications for our
society, even without arms and eyes and ears, etc. Almost anything can
be designed or contemplated within a ..."
On Dec. 26 Alan Grimes said:
"According to my rule of thumb: if it has a natural language database,
it is wrong."
Alan, I can see, based on the current generation of bot technology, why
one would feel this way.
I can also see people having the view that biological systems learn from
scratch, so ... understanding many of these ungrammatical utterances.
-Original Message-
From: [EMAIL PROTECTED]
Sent: Thursday, December 26, 2002 11:03 PM
To: [EMAIL PROTECTED]
Subject: RE: [agi] Early Apps.
On 26 Dec 2002 at 10:32, Gary Miller wrote:
Ben Goertzel wrote:
"I don't think that a pragmatically-achievable amount of formally-encoded
knowledge is going to be enough to allow a computer system to think
deeply and creatively about any domain -- even a technical domain about
science. What's missing, among other things, is the intricate ..."
At some early point the AGI will have to learn to equate pleasure with
learning and acquiring new experience.
With a biological organism the stimuli are provided as pain and
pleasure.
As we mature, many of our pleasure triggers become increasingly subtle
and are actually learned pleasure generators.
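One way to make the idea concrete (a sketch of my own, not anything proposed in the thread) is an agent whose internal reward signal is a novelty bonus, so that acquiring new experience is itself "pleasurable" and familiar stimuli pay less over time:

```python
from collections import Counter

class CuriousAgent:
    """Toy agent whose 'pleasure' signal decays with familiarity."""

    def __init__(self):
        self.seen = Counter()  # how often each observation has occurred

    def reward(self, observation):
        # Novelty bonus: first exposure pays 1.0, then 1/2, 1/3, ...
        self.seen[observation] += 1
        return 1.0 / self.seen[observation]

agent = CuriousAgent()
print(agent.reward("red ball"))  # first exposure: 1.0
print(agent.reward("red ball"))  # second exposure: 0.5
```

A real system would of course need a far richer notion of novelty than exact-match counting, but the decaying-reward shape is the point of the sketch.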
EGHeflin said:
"The reason is that the approach is essentially 'Asimovian' in nature
and, therefore, wouldn't result in anything more than perhaps a servile
pet, call it iRobot, which is always 'less-than-equal' to you and
therefore always short of your goal to achieve the so-called ..."
John said:
"The human brain, the only high-level intelligent system currently
known, uses language and logic for abstract reasoning, but these are
based on, and owe their existence to, a more fundamental level of
intelligence -- that of pattern-recognition, pattern-matching, and
pattern ..."
Sent: Monday, May 08, 2006 1:39 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Logic and Knowledge Representation
On May 7, 2006, at 6:37 PM, Gary Miller wrote:
Which is why my research has led to a pattern language that can
compress all of the synonymous thoughts into a single pattern.
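As an illustration of the general idea (the synonym table and function names below are hypothetical, not Gary's actual system), many synonymous surface forms can be collapsed into one canonical pattern before matching:

```python
import re

# Hypothetical synonym table: each regex of surface forms maps to one
# canonical token, so synonymous utterances become a single pattern.
SYNONYMS = {
    r"\b(hi|hello|hey|greetings)\b": "GREETING",
    r"\b(goodbye|bye|farewell)\b": "FAREWELL",
}

def to_pattern(utterance):
    """Normalize an utterance into its canonical pattern form."""
    text = utterance.lower()
    for pattern, token in SYNONYMS.items():
        text = re.sub(pattern, token, text)
    return text

# Two synonymous inputs compress to the same pattern:
assert to_pattern("Hello there") == to_pattern("Hi there")  # "GREETING there"
```

With this normalization in place, a bot only needs one stored response per canonical pattern instead of one per phrasing.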
Ben asked: What kind of bot are you using? Do you mean a physical robot
or a chat bot?
Just a chat bot for right now, Ben.
Although I could imagine a future robot manufacturer licensing the code
to allow a customer to customize the high-level cognitive/personality
functions of the bot.
Ben said:
"Being able to understand natural language commands pertaining
to cleaning up the house is a whole other kettle of fish, of
course. This, as opposed to the actual house-cleaning, appears
to be an AGI-hard problem..."
A full Turing-complete Natural Language system would not be ...
No, and it's a damn good thing it isn't. If it were, we would be
sentencing it to a mindless job with no time off, only to be disposed of
when a better model comes out.
We only want our AIs to be as smart as necessary to accomplish their
jobs, just as our cells and organs are. Limited ...
Are you saying then that blind people cannot make sense of language
because they lack the capacity to imagine images, having never seen them
before?
Or that blind people could not understand these, or would not view these
as equally strange as a sighted person would?
"The man climbed the penny."
"The mat ..."
... back to the Darwin passage I quoted, you will see that they can fill
their burrows with all manner of differently shaped objects by touch.
The senses are interdependent. We work by COMMON sense.
- Original Message -
From: Gary Miller
To: agi@v2.listbox.com
A good AGI would rise above the ethical dilemma and solve the problem by
inventing safe alternatives that were both more enjoyable and allowed
the individual to contribute to his future, his family, and society
while experiencing that enjoyment. And hopefully not doing so in a
way ...
Josh asked:
"Who could seriously think that ALL AGIs will then be built to be
friendly?"
Children are not born friendly or unfriendly.
It is as they learn from their parents that they develop their
socialization, their morals, their empathy, and even love.
I am sure that our future fathers ...
To complicate things further:
A small percentage of humans perceive pain as pleasure and prefer it,
at least in a sexual context, or else fetishes like sadomasochism would
not exist.
And they do in fact experience pain as a greater pleasure.
More than likely these people have an ample supply of ...
The leading software packages in high-speed facial recognition are based
upon feature extraction.
If the face is analyzed into, let's say, 30 features, then 30 processes
could analyze the photo for these features in parallel.
After that, the 30 features are just looked up in a relational ...
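The parallel pipeline described above might be sketched like this (the extractors are dummies standing in for real image analysis; all names are illustrative, not from any actual package):

```python
from concurrent.futures import ThreadPoolExecutor

def make_extractor(feature_id):
    """Build a dummy feature extractor; a real one would measure e.g.
    eye spacing or nose width from the image pixels."""
    def extract(photo):
        # Fake a bounded feature score from the photo bytes and feature id.
        return (feature_id, sum(photo) % 100 + feature_id)
    return extract

def extract_features(photo, n_features=30):
    """Run all feature extractors over the photo in parallel."""
    extractors = [make_extractor(i) for i in range(n_features)]
    with ThreadPoolExecutor(max_workers=n_features) as pool:
        results = list(pool.map(lambda f: f(photo), extractors))
    # The resulting feature vector would then be looked up in a database
    # indexed by (feature_id, value) to retrieve candidate matches.
    return dict(results)

photo = bytes([1, 2, 3, 4])
vector = extract_features(photo)
print(len(vector))  # 30
```

Since each feature is independent of the others, the fan-out/fan-in shape is what matters; whether the workers are threads, processes, or machines is an implementation detail.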
An AI would attempt to understand the universe to the best of the
ability its intelligence and experimentation could provide.
If the AI reaches a point in its developmental understanding where it is
unable to advance further in its understanding of science and reality,
then it will attempt to ...
John asked: If you took an AGI, before it went singulatarinistic [sic],
and tortured it ... a lot, ripping into it in every conceivable hellish
way, do you think at some point it would start praying somehow? I'm not
talking about a forced conversion, medieval style; I'm just talking
hypothetically, if ...
Ben said:
That is sarcasm ... however, it's also my serious hypothesis as to why
the Chinese gov't doesn't mind losing their best and brightest...
It may also be that China understands too that as more Chinese become
Americans, China will have a greater exposure and political lobby within
the ...
NASA researchers et al. are already able to read voice subvocalizations.
While not reading the mind directly, this does offer a method for a
computer to monitor any subvocalization and accept silent commands.
It would also seem to be a boon for handicapped people who have lost the
use of their arms, for ...
Ed Porter quoted from the following book:
From
http://www.nytimes.com/2008/03/16/books/review/Berreby-t.html?ref=review
a New York Times book review of Predictably Irrational: The Hidden
Forces That Shape Our Decisions, by Dan Ariely.
In its most relevant section it states the following ...
YKY said:
The current OpenCyc KB is ~200 MB (correct me if I'm wrong).
The RAM size of current high-end PCs is ~10 GB.
My intuition estimates that the current OpenCyc is only about 10%-40% of
a 5-year-old human's intelligence.
Plus, learning requires that we store a lot of hypotheses. Let's say ...
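Taking YKY's figures at face value, a quick sanity check: if 200 MB is 10%-40% of the target knowledge base, the full KB would be roughly 0.5-2 GB, well within the quoted 10 GB of RAM (before counting the extra hypothesis storage he mentions):

```python
# Back-of-envelope check using the figures quoted in the post.
opencyc_kb_mb = 200        # ~200 MB, as quoted
ram_mb = 10 * 1024         # ~10 GB of RAM, as quoted
share_low_pct, share_high_pct = 10, 40  # OpenCyc as % of a 5-year-old's knowledge

# If 200 MB is 10%-40% of the target, the full KB would span:
full_kb_min_mb = opencyc_kb_mb * 100 / share_high_pct  # 500.0 MB
full_kb_max_mb = opencyc_kb_mb * 100 / share_low_pct   # 2000.0 MB

print(full_kb_min_mb, full_kb_max_mb)  # 500.0 2000.0
print(full_kb_max_mb <= ram_mb)        # True: even the high estimate fits
```

The headroom left over (8 GB or more) would be what's available for the hypothesis storage the post goes on to discuss.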
Steve Richfield asked:
"Hey you guys with some gray hair and/or bald spots, WHAT THE HECK ARE
YOU THINKING?"
We're thinking: Don't feed the Trolls!