Re: [agi] Second Life and the Gaza Conflict

2009-01-04 Thread BillK
On Sun, Jan 4, 2009 at 8:23 PM, Mike Tintner  wrote:
 I thought there might possibly be some interest in this (perhaps
 explanations) - the news item doesn't really explain how or why Second Life
 is being used:



This has nothing to do with AGI.

It is propaganda. Some Palestinians have set up a protest site in Second Life.

There is a lot of propaganda activity (on both sides) if you do a google search.

BillK




Re: [agi] Creativity and Rationality (was: Re: Should I get a PhD?)

2008-12-19 Thread BillK
On Fri, Dec 19, 2008 at 3:55 PM, Mike Tintner wrote:

 (On the contrary, Pei, you can't get more narrow-minded than rational
 thinking. That's its strength and its weakness).



Pei

In case you haven't noticed, you won't gain anything from trying to
engage with the troll.

Mike does not discuss anything. He states his opinions in many
different ways, pretending to respond to those who waste their time
talking to him. But no matter what points are raised in discussion
with him, they will only be used as an excuse to produce yet another
variation of his unchanged opinions.  He doesn't have any technical
programming or AI background, so he can't understand that type of
argument.

He is against the whole basis of AGI research. He believes that
rationality is a dead end, a dying culture, so deep-down, rational
arguments mean little to him.

Don't feed the troll!
(Unless you really, really, think he might say something useful to you
instead of just wasting your time).


BillK




Re: [agi] Creativity and Rationality (was: Re: Should I get a PhD?)

2008-12-19 Thread BillK
On Fri, Dec 19, 2008 at 6:40 PM, Ben Goertzel wrote:

 IMHO, Mike Tintner is not often rude, and is not exactly a troll because I
 feel he is genuinely trying to understand the deeper issues related to AGI,
 rather than mainly trying to stir up trouble or cause irritation

 However, I find conversing with him generally frustrating because he
 combines
 A)
 extremely strong intuitive opinions about AGI topics
 with
 B)
 almost utter ignorance of the details of AGI (or standard AI), or the
 background knowledge needed to appreciate these details when compactly
 communicated

 This means that discussions with Mike never seem to get anywhere... and,
 frankly, I usually regret getting into them



In my opinion you are being too generous and your generosity is being
taken advantage of.
As well as trying to be nice to Mike, you have to bear list quality in
mind and decide whether his ramblings are of some benefit to all the
other list members.

There are many types of trolls; some can be quite sophisticated.
See 'The definitive guide to Trolls':
http://ubuntuforums.org/showthread.php?p=1032102

A classic troll tries to make us believe that he is a skeptic. He is
divisive and argumentative with need-to-be-right attitude, searching
for the truth, flaming discussion, and sometimes insulting people or
provoking people to insult him. A troll is usually an expert in
reusing the same words of its opponents and in turning it against
them.

The Contrarian Troll. A sophisticated breed, Contrarian Trolls
frequent boards whose predominant opinions are contrary to their own.
A forum dominated by those who support firearms and knife rights, for
example, will invariably be visited by Contrarian Trolls espousing
their beliefs in the benefits of gun control.


BillK




[agi] Re: [sl4] Join me on Bebo

2008-12-04 Thread BillK
On Thu, Dec 4, 2008 at 12:42 AM, Thomas McCabe wrote:
 I'm not a moderator, but as a fellow list-user I do ask that you
 please refrain from sending such things to SL4 in future.




This Bebo invite spam is more Bebo's fault than Ryan's.

Bebo (like many social networking sites) by default assumes that new
users want to send an invite to everyone in their address book. It
operates an opt-in-by-default policy.  If you just quickly click
through the buttons, you send an invite to everyone in your address
book, then sit and panic when you realize what you've done. :)  Ryan
isn't the only one who has been caught out by this system.


BillK




Re: [agi] Entheogens, understanding the brain, and AGI

2008-11-24 Thread BillK
On Mon, Nov 24, 2008 at 7:51 PM, Eric Burton  wrote:
 This is a really good avenue of discussion for me.
snip

Y'all probably should join rec.drugs.psychedelic

http://groups.google.com/group/rec.drugs.psychedelic/topics

People are still posting there, so the black helicopters haven't taken
them all away yet.
(Of course they might all be FBI agents. It's happened before)  :)

There are many similar interest groups for you to choose from.


BillK




[agi] Professor Asim Roy Finally Publishes Controversial Brain Theory

2008-11-20 Thread BillK
Nobody has mentioned this yet.

http://www.physorg.com/news146319784.html

Quotes:

 However, Roy's controversial ideas on how the brain works and learns
probably won't immediately win over many of his colleagues, who have
spent decades teaching robots and artificial intelligence (AI) systems
how to think using the classic connectionist theory of the brain.
Connectionists propose that the brain consists of an interacting
network of neurons and cells, and that it solves problems based on how
these components are connected. In this theory, there are no separate
controllers for higher level brain functions, but all control is local
and distributed fairly equally among all the parts.

In his paper, Roy argues for a controller theory of the brain. In this
view, there are some parts of the brain that control other parts,
making it a hierarchical system. In the controller theory, which fits
with the so-called computational theory, the brain learns lots of
rules and uses them in a top-down processing method to operate.

In his paper, Roy shows that the connectionist theory actually is
controller-based, using a logical argument and neurological evidence.
He explains that some of the simplest connectionist systems use
controllers to execute operations, and, since more complex
connectionist systems are based on simpler ones, these too use
controllers. If Roy's logic correctly describes how the brain
functions, it could help AI researchers overcome some inherent
limitations in connectionist algorithms.

"Connectionism can never create autonomous learning machines, and
that's where its flaw is," Roy told PhysOrg.com. "Connectionism
requires human babysitting of their learning algorithms, and that's
not very brain-like. We don't guide and control the learning inside
our head."
etc.

BillK




Re: [agi] Professor Asim Roy Finally Publishes Controversial Brain Theory

2008-11-20 Thread BillK
On Thu, Nov 20, 2008 at 3:06 PM, Mark Waser [EMAIL PROTECTED] wrote:
 Yeah.  Great headline -- Man beats dead horse beyond death!

 I'm sure that there will be more details at 11.

 Though I am curious . . . .  BillK, why did you think that this was worth
 posting?



???  Did you read the article?

---
Quote:
In the late '90s, Asim Roy, a professor of information systems at
Arizona State University, began to write a paper on a new brain
theory. Now, 10 years later and after several rejections and
resubmissions, the paper "Connectionism, Controllers, and a Brain
Theory" has finally been published in the November issue of IEEE
Transactions on Systems, Man, and Cybernetics – Part A: Systems and
Humans.

Roy's theory undermines the roots of connectionism, and that's why his
ideas have experienced a tremendous amount of resistance from the
cognitive science community. For the past 15 years, Roy has engaged
researchers in public debates, in which it's usually him arguing
against a dozen or so connectionist researchers. Roy says he wasn't
surprised at the resistance, though.

"I was attempting to take down their whole body of science," he
explained. "So I would probably have behaved the same way if I were in
their shoes."

No matter exactly where or what the brain controllers are, Roy hopes
that his theory will enable research on new kinds of learning
algorithms. Currently, restrictions such as local and memoryless
learning have limited AI designers, but these concepts are derived
directly from the idea that control is local, not high-level.
Possibly, a controller-based theory could lead to the development of
truly autonomous learning systems, and a next generation of
intelligent robots.

The sentiment that the science is stuck is becoming common among AI
researchers. In July 2007, the National Science Foundation (NSF)
hosted a workshop on the "Future Challenges for the Science and
Engineering of Learning". The NSF's summary of the "Open Questions in
Both Biological and Machine Learning" [see below] from the workshop
emphasizes the limitations in current approaches to machine learning,
especially when compared with biological learners' ability to learn
autonomously under their own self-supervision:

Virtually all current approaches to machine learning typically
require a human supervisor to design the learning architecture, select
the training examples, design the form of the representation of the
training examples, choose the learning algorithm, set the learning
parameters, decide when to stop learning, and choose the way in which
the performance of the learning algorithm is evaluated. This strong
dependence on human supervision is greatly retarding the development
and ubiquitous deployment of autonomous artificial learning systems.
Although we are beginning to understand some of the learning systems
used by brains, many aspects of autonomous learning have not yet been
identified.

Roy sees the NSF's call for a new science as an open door for a new
theory, and he plans to work hard to ensure that his colleagues
realize the potential of the controller model. Next April, he will
present a four-hour workshop on autonomous machine learning, having
been invited by the Program Committee of the International Joint
Conference on Neural Networks (IJCNN).
-


Now his 'new' theory may be old hat to you personally, but apparently
not to the majority of AI researchers (according to the article). He
must be saying something a bit unusual to have spent ten years
fighting to get it published, and to now be accepted enough to be
invited to give a workshop on his theory.


BillK




Re: [agi] Professor Asim Roy Finally Publishes Controversial Brain Theory

2008-11-20 Thread BillK
On Thu, Nov 20, 2008 at 3:52 PM, Ben Goertzel wrote:

 I skimmed over the paper at
 http://wpcarey.asu.edu/pubs/index.cfm
 and I have to say I agree with the skeptics.

 I don't doubt that this guy has made significant contributions in
 other areas of science and engineering, but this paper displeases me a
 great deal, due to making big claims of originality for ideas that are
 actually very old hat, and bolstering these claims via attacking a
 straw man of simplistic connectionism.

snip

 Double thumbs down: not for wrongheadedness, but for excessive claims
 of originality plus egregious straw man arguments...




So, basically, you don't disagree with his paper too much.
You just don't like his attitude. ;)

Danged AI researchers that think they know it all!   ;)

Don't you think excessive PR is forgivable when someone is trying to
dislodge an entrenched view?


BillK




Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread BillK
On Tue, Nov 18, 2008 at 1:22 PM, Richard Loosemore wrote:

 I see how this would work:  crazy people never tell lies, so you'd be able
 to nail 'em when they gave the wrong answers.



Yup. That's how they pass lie detector tests as well.

They sincerely believe the garbage they spread around.


BillK




Re: AW: AW: [agi] Language learning (was Re: Defining AGI)

2008-10-23 Thread BillK
On Thu, Oct 23, 2008 at 12:55 AM, Matt Mahoney wrote:


 I suppose you are right. Instead of encoding mathematical rules as a
 grammar, with enough training data you can just code all possible
 instances that are likely to be encountered. For example, instead of
 a grammar rule to encode the commutative law of addition,

  5 + 3 = a + b = b + a = 3 + 5

 a model with a much larger training data set could just encode
 instances with no generalization:

  12 + 7 = 7 + 12
  92 + 0.5 = 0.5 + 92
  etc.

 I believe this is how Google gets away with brute force n-gram
 statistics instead of more sophisticated grammars. Its language model
 is probably 10^5 times larger than a human model (10^14 bits vs 10^9
 bits). Shannon observed in 1949 that random strings generated by
 n-gram models of English (where n is the number of either letters or
 words) look like natural language up to length 2n. For a typical
 human-sized model (1 GB of text), n is about 3 words. To model
 strings longer than 6 words we would need more sophisticated grammar
 rules. Google can model 5-grams (see
 http://googleresearch.blogspot.com/2006/08/all-our-n-gram-are-belong-to-you.html
 ), so it is able to generate and recognize (thus appear to
 understand) sentences up to about 10 words.
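
For concreteness, here is a minimal sketch of a word n-gram generator
(Python; the toy corpus is invented and nothing like Google's scale,
but it shows why the output stays locally coherent for only a few
words at a time):

import random
from collections import defaultdict

def train(words, n=3):
    # Map each (n-1)-word context to the words observed after it.
    model = defaultdict(list)
    for i in range(len(words) - n + 1):
        model[tuple(words[i:i + n - 1])].append(words[i + n - 1])
    return model

def generate(model, seed, length=20):
    out = list(seed)
    for _ in range(length):
        candidates = model.get(tuple(out[-len(seed):]))
        if not candidates:
            break
        out.append(random.choice(candidates))
    return ' '.join(out)

corpus = "the cat sat on the mat and the dog sat on the rug".split()
print(generate(train(corpus), ('the', 'cat')))

Each step conditions only on the previous n-1 words, which is the
2n-word limit Matt mentions, in action.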



Gigantic databases are indeed Google's secret sauce.
See:
http://googleresearch.blogspot.com/2008/09/doubling-up.html

Quote:
Monday, September 29, 2008   Posted by Franz Josef Och

Machine translation is hard. Natural languages are so complex and have
so many ambiguities and exceptions that teaching a computer to
translate between them turned out to be a much harder problem than
people thought when the field of machine translation was born over 50
years ago. At Google Research, our approach is to have the machines
learn to translate by using learning algorithms on gigantic amounts of
monolingual and translated data. Another knowledge source is user
suggestions. This approach allows us to constantly improve the
quality of machine translations as we mine more data and
get more and more feedback from users.

A nice property of the learning algorithms that we use is that they
are largely language independent -- we use the same set of core
algorithms for all languages. So this means if we find a lot of
translated data for a new language, we can just run our algorithms and
build a new translation system for that language.

As a result, we were recently able to significantly increase the number of
languages on translate.google.com. Last week, we launched eleven new
languages: Catalan, Filipino, Hebrew, Indonesian, Latvian, Lithuanian, Serbian,
Slovak, Slovenian, Ukrainian, Vietnamese. This increases the
total number of languages from 23 to 34.  Since we offer translation
between any of those languages this increases the number of language
pairs from 506 to 1122 (well, depending on how you count simplified
and traditional Chinese you might get even larger numbers).
-
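
(The language-pair arithmetic is just n*(n-1) ordered pairs for n
languages: 23*22 = 506 and 34*33 = 1122.)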


BillK




Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread BillK
On Tue, Oct 21, 2008 at 10:31 PM, Ben Goertzel wrote:

 Incorrect things are wrapped up with correct things in peoples' minds

 However, pure slowness at learning is another part of the problem ...



Mark seems to be thinking of something like the checklist that the ISP
technician walks through when you call with a problem. Even when you
know what the problem is, the tech won't listen. He insists on working
through his checklist, making you do all the irrelevant checks, and
eventually, by a process of elimination, ending up with what you knew
was wrong all along. Very little GI required.

But Ben is saying that for evaluating science, there ain't no such checklist.
The circumstances are too variable; you would need checklists to infinity.

I go along with Ben.

BillK




Re: [agi] Will Wright's Five Artificial Intelligence Prophecies

2008-10-18 Thread BillK
On Sat, Oct 18, 2008 at 8:28 PM, Bob Mottram wrote:
 Some thoughts on this:
 http://streebgreebling.blogspot.com/2008/10/will-wright-on-ai.html



I like his first point:
MACHINES WILL NEVER ACHIEVE HUMAN INTELLIGENCE
According to Wright, one of the main benefits of the quest for AI is a
better definition of human intelligence. "Intelligence is whatever we
can do that computers can't," says Wright.

This reminds me of Mike Tintner.
Even when these non-human intelligences are building space habitats,
roaming the solar system and sending probes out to the stars, Mike
will still be sitting there saying 'Ah, but they can't write poetry,
so they are not really intelligent'.

BillK




Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-15 Thread BillK
On Wed, Oct 15, 2008 at 9:46 AM, Eric Burton wrote:
 My mistake I guess. I'm going to try harder to understand what you're
 saying from now on.


Colin's profile on Nature says:

I am a mature age PhD student with the sole intent of getting a novel
chip technology and derivative products into commercial production.
The chip technology facilitates natural learning of the kind biology
uses to adapt to novelty. The artifacts will have an internal life.

My mission is to create artifacts (machines) that learn like biology
learns and that have an internal life. Currently that goal requires
lipid bilayer membrane molecular dynamics simulation.

Publications
  Colin Hales. "AI and Science's Lost Realm", IEEE Intelligent
Systems 21, 76-81 (2006)
  Colin Hales. "Physiology meets consciousness", a review of The
Primordial Emotions: The Dawning of Consciousness by Derek Denton,
TRAFFIC EIGHT (2006)
  Hales, C. "Qualia", Ockham's Razor, Radio National, Australia, 17 April (2005)
  Colin Hales. "The 10 point framework and the altogether too hard
basket", Science and Consciousness Review (2003)
---


BillK




Re: [agi] META: A possible re-focusing of this list

2008-10-15 Thread BillK
On Wed, Oct 15, 2008 at 7:44 PM, John G. Rose wrote:
 I'd go for 2 lists. Sometimes after working intensely on something concrete
 and specific one wants to step back and theorize. And then particular AGI
 approaches may be going down the wrong trail and need to step back and look
 at things from a different perspective.

 Even so, with all this the messages in the one list still are grouped by
 subject... I mean people can parse. But to simplify moderation and
 organization, etc..



I agree. I support more type 1 discussions.

I have felt for some time that an awful lot of time-wasting has been
going on here.

I think this list should mostly be for computer tech discussion about
methods of achieving specific results on the path(s) to AGI.

I agree that there should be a place for philosophical discussion,
either on a separate list, or uniquely identified in the Subject so
that technicians can filter off such discussions.

Some people may need to discuss philosophic alternative paths to AGI,
to help clarify their thoughts. But if so, they are probably many
years away from producing working code and might be hindering others
who are further down the path of their own design.

Two lists are probably best. Then if technicians want a break from
coding, they can dip into the philosophy list, to offer advice or
maybe find new ideas to play with.
And, as John said, it would save on moderation time.


BillK




Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread BillK
On Tue, Oct 14, 2008 at 2:41 PM, Matt Mahoney wrote:
 But no matter. Whichever definition you accept, RSI is not a viable
 path to AGI. An AI that is twice as smart as a human can make no more
 progress than 2 humans.


I can't say I've noticed two dogs being smarter than one dog.
Admittedly, a pack of dogs can do hunting better, but they are not 'smarter'.
Numbers just increase capabilities.

Two humans can lift a heavier object than one human, but they are not
twice as smart.

As Ben says, I don't see a necessary connection between RSI and 'smarts'.
It's a technique applicable from very basic levels.


BillK




Re: [agi] Where the Future of AGI Lies

2008-09-20 Thread BillK
On Fri, Sep 19, 2008 at 10:05 PM, Matt Mahoney wrote:
 From http://en.wikipedia.org/wiki/Yeltsin

 Boris Yeltsin studied at Pushkin High School in Berezniki in Perm
 Krai. He was fond of sports (in particular skiing, gymnastics,
 volleyball, track and field, boxing and wrestling) despite losing the
 thumb and index finger of his left hand when he and some friends
 sneaked into a Red Army supply depot, stole several grenades, and
 tried to dissect them.[5]

 But to be fair, Google didn't find it either.



I've had a play with this.
I think you are asking the wrong question.   See - It's your fault!  :)

The Yeltsin article doesn't say that he was a world leader.
It says he was President of Russia.

The article doesn't say he lost 2 fingers.
It says he lost a thumb and index finger.

So I think you are expecting quite a high level of understanding to
match your query with these statements.  If you ask "Which president
has lost a thumb and finger?", then Powerset matches on the second
page of results but Google matches on the first page. (Google is very
good at keyword matching!) Cognition is still confused, as it cannot
find 'concepts' to match on.


The Powerset FAQ says that it analyses your query and tries to extract
a 'subject-relation-object' triple, which it then tries to match. They
give examples of the type of query they like:
"what was banned by the fda"
"what caused the great depression"


The Cognition FAQ says that they try to find 'concepts' in your query
and match on the 'concept' rather than the actual words. i.e. the text
"Did they adopt the bill?" is known by Cognition to relate to
information about the approval of Proposition A, because "adopt" in
the text means "to approve", and "bill" in the text means "a proposed
law".
So it looks like they don't have concepts for 'world leader =
president' or 'thumb and index finger = 2 fingers'.
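
As a toy illustration of the two approaches (Python; the verb list and
concept table are invented, and real systems use full parsers rather
than anything this naive):

RELATIONS = {'banned', 'caused', 'lost'}
CONCEPTS = {'president': 'world leader', 'thumb': 'finger'}

def extract_triple(query):
    # Powerset-style: split the query into subject-relation-object.
    words = query.lower().rstrip('?').split()
    for i, w in enumerate(words):
        if w in RELATIONS:
            return (' '.join(words[:i]), w, ' '.join(words[i + 1:]))
    return None

def conceptualize(words):
    # Cognition-style: map words onto broader concepts before matching.
    return [CONCEPTS.get(w, w) for w in words]

print(extract_triple("which president has lost a thumb and finger?"))
# ('which president has', 'lost', 'a thumb and finger')
print(conceptualize("president lost thumb".split()))
# ['world leader', 'lost', 'finger']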


NLP isn't as easy as it looks!  :)


BillK




Re: [agi] Where the Future of AGI Lies

2008-09-19 Thread BillK
On Fri, Sep 19, 2008 at 3:15 PM, Jiri Jelinek wrote:
 There is a difference between being good at
 a) finding problem-related info/pages, and
 b) finding functional solutions (through reasoning), especially when
 all the needed data is available.

 Google cannot handle even trivial answer-embedded questions.


Last I heard Peter Norvig was saying that Google had no interest in
putting a natural language front-end on Google.
http://slashdot.org/article.pl?sid=07/12/18/1530209

But other companies are interested. The main two are:
Powerset http://www.powerset.com/
and
Cognition http://www.cognition.com/

A new startup Eeggi is also interesting. http://www.eeggi.com/


BillK




Re: [agi] Artificial humor

2008-09-11 Thread BillK
On Thu, Sep 11, 2008 at 2:28 PM, Jiri Jelinek wrote:
 If you talk to a program about changing 3D scene and the program then
 correctly answers questions about [basic] spatial relationships
 between the objects then I would say it understands 3D. Of course the
 program needs to work with a queriable 3D representation but it
 doesn't need a body. I mean it doesn't need to be a real-world
 robot, it doesn't need to associate self with any particular 3D
 object (real-world or simulated) and it doesn't need to be self-aware.
 It just needs to be the 3D-scene-aware and the scene may contain just
 a few basic 3D objects (e.g. the Shrdlu stuff).



Surely the DARPA autonomous vehicles driving themselves around the
desert and in traffic show that computers can cope quite well with a
3D environment, including other objects moving around them as well?

BillK




Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread BillK
On Wed, Aug 27, 2008 at 8:43 PM, Abram Demski  wrote:
snip
 By the way, where does this term wireheading come from? I assume
 from context that it simply means self-stimulation.


Science Fiction novels.

http://en.wikipedia.org/wiki/Wirehead
In Larry Niven's Known Space stories, a wirehead is someone who has
been fitted with an electronic brain implant (called a droud in the
stories) to stimulate the pleasure centers of their brain.

In 2006, The Guardian reported that trials of Deep brain stimulation
with electric current, via wires inserted into the brain, had
successfully lifted the mood of depression sufferers.[1] This is
exactly the method used by wireheads in the earlier Niven stories
(such as the 'Gil the Arm' story 'Death by Ecstasy').

In the Shaper/Mechanist stories of Bruce Sterling, wirehead is the
Mechanist term for a human who has given up corporeal existence and
become an infomorph.
--


BillK




Re: [agi] AGI's Philosophy of Learning

2008-08-20 Thread BillK
On Tue, Aug 19, 2008 at 2:56 PM, Abram Demski [EMAIL PROTECTED] wrote:
 Wow, sorry about that. I am using firefox and had no problems. The
 site was just the first reference I was able to find using  google.

 Wikipedia references the same fact:

 http://en.wikipedia.org/wiki/Feedforward_neural_network#Multi-layer_perceptron



I've done a bit more investigation.

The web site is probably clean.

These attacks are probably coming from a compromised ad server.
ScanSafe Quote:
Online ads have become a primary target for malware authors because
they offer a stealthy way to distribute malware to a wide audience. In
many instances, the malware perpetrator can leverage the distributed
nature of online advertising and the decentralization of website
content to spread malware to hundreds of sites.


So you might encounter these attacks at any site, because almost all
sites serve up ads to you.
And you're correct that Firefox with AdBlock Plus and NoScript is safe
from these attacks.

Using a Linux or Apple operating system is even safer.

I dual-boot to use Linux for browsing and only go into Windows when necessary.
Nowadays you can also use virtualization to run several operating
systems at once.
Cooperative Linux also runs happily alongside Windows.


BillK




Re: [agi] AGI's Philosophy of Learning

2008-08-19 Thread BillK
On Tue, Aug 19, 2008 at 8:42 AM, Brad Paulsen wrote:
 Abram,

 Just FYI... When I attempted to access the Web page in your message,
 http://www.learnartificialneuralnetworks.com/ (that's without the
 backpropagation.html part), my virus checker, AVG, blocked the attempt
 with a message similar to the following:

 Threat detected!
 Virus found: JS/Downloader.Agent
 Detected on open

 Quarantined

 On a second attempt, I also got the IE 7.0 warning banner:

 This website wants to run the following add-on: Microsoft Data Access -
 Remote Data Services Dat...' from 'Microsoft Corporation'.  If you trust the
 website and the add-on and want to allow it to run, click... (of course, I
 didn't click).

 This time, AVG gave me the option to heal the virus.  I took this option.

 It may be nothing, but it also could be a drive by download attempt of
 which the owners of that site may not be aware.



Yes, the possibility that the site has been hacked should always be
considered as javascript injection attacks are becoming more and more
common. Because of this, the latest version of AVG has been made to be
very suspicious about javascript. This is causing some false
detections when AVG encounters very complicated javascript as it errs
on the side of safety. And, looking at the source code for that page,
there is one large function near the top that might well have confused
AVG (or it could be a hack; I'm not a javascript expert!).

However, I scanned the site with Dr Web antivirus and it said the site
was clean and the javascript was ok.
This site has not yet been scanned by McAfee Site Advisor, but I have
submitted it to them to be scanned soon.

Of course, if you use the Mozilla Firefox browser you are protected
from many drive-by infections,
especially if you use the AdBlock Plus and NoScript add-ons.

BillK




Re: [agi] TOE -- US PATENT ISSUED for the TEN ETHICAL LAWS OF ROBOTICS

2008-07-23 Thread BillK
On Wed, Jul 23, 2008 at 1:13 AM, Mike Archbold wrote:
 It seems to me like to be real AGI you have skipped over the parts of
 Aristotle more applicable to AGI, like his metaphysics and logic.  For
 example in the metaphysics he talks about beginning and end, causes,
 continuous/discrete, and this type of thing.   At first glance it looks
 like your invention starts with ethics; why not build atop a metaphysics
 base?  I'm not going to pass a judgement on your work but it seems like
 it's not going over well here with the crowd that has dealt with patent
 law.  From my perspective I guess I don't like the idea of patenting some
 automation of Aristotle unless it was in a kind of production-ready state
 (ie., beyond mere concept stage).



His invention is ethics, because that's what his field of work is.


See his list of books here:
http://www.allbookstores.com/author/John_E_Lamuth.html

* A Diagnostic Classification of the Emotions : A Three-digit
Coding System for Affective Language
  by Jay D. Edwards (Illustrator), John E. Lamuth
  April 2005, Paperback  List Price: $34.95

* Character Values : Promoting a Virtuous Lifestyle
  by Jay D. Edwards (Illustrator), John E. Lamuth (Editor)
  April 2005, Paperback  List Price: $28.95

* Communication Breakdown : Decoding The Riddle Of Mental Illness
  by Jay D. Edwards (Introduction by), John E. Lamuth (Editor)
  June 2004, Paperback  List Price: $28.95

* A Revolution in Family Values : Tradition Vs. Technology
  by Jay D. Edwards (Illustrator), John E. Lamuth
  April 2002, Paperback  List Price: $19.95

* A Revolution in Family Values : Spirituality for a New Millennium
  by John E. Lamuth
  March 2001, Hardcover  List Price: $24.95

* The Ultimate Guide to Family Values : A Grand Unified Theory of
Ethics and Morality
  by John E. Lamuth
  September 1999, Hardcover  List Price: $19.95


and his author profile here:
http://www.angelfire.com/rnb/fairhaven/Contact_Fairhaven_Books.html

Biography
John E. LaMuth is a 52 year-old counselor and author, native to the
Southern California area. Credentials include a Bachelor of Science
Degree in Biological Sciences from University of California, Irvine:
followed by a Master of Science Degree in Counseling from California
State University, Fullerton; with an emphasis in Marriage, Family, and
Child Counseling. John is currently engaged in private practice in
Divorce and Family Mediation Counseling in the San Bernardino County
area - JLM Mediation Service - Lucerne Valley, CA 92356 USA. John also
serves as an Adjunct Faculty Member at Victor Valley College,
Victorville, CA.


BillK




Re: [agi] US PATENT ISSUED for the TEN ETHICAL LAWS OF ROBOTICS

2008-07-21 Thread BillK
On Mon, Jul 21, 2008 at 12:59 PM, Matt Mahoney wrote:
 This is a real patent, unfortunately...
 http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F6587846

 But I think it will expire before anyone has the technology to implement it. 
 :-)



I prefer Warren Ellis's angry, profane Three Laws of Robotics.
(linked from BoingBoing)

http://www.warrenellis.com/?p=5426

BillK




Re: [agi] DARPA looking for 'Cutting Edge' AI projects

2008-06-24 Thread BillK
On Tue, Jun 24, 2008 at 11:52 AM, Stefan Pernar wrote:
 I'm a consultant with DARPA, and I'm working on an initiative to push the
 boundaries of neuromorphic computing (i.e. artificial intelligence). The
 project is designed to advance ideas on all fronts, including measuring and
 understanding biological brains, creating AI systems, and investigating the
 fundamental nature of intelligence. I'm conducting a wide search of these
 fields, but I wanted to know if any in this community know of neat projects
 along those lines that I might overlook. Maybe you're working on a project
 like that and want to talk it up? No promises (seriously), but interesting
 work will be brought to the attention of the project manager I'm working
 with. If you want to start up a dialog, send me an email, and we'll see
 where it goes. I'll also be reading the comments for the story.



This sounds like the DARPA SyNAPSE program.


Here is a blog post about the program with a link to the full
specification document:
http://p9.hostingprod.com/@modha.org/blog/2008/04/
April 25, 2008
SyNAPSE: Systems of Neuromorphic Adaptive Plastic Scalable Electronics
DARPA's Defense Sciences Office (DSO) has recently issued a Broad
Agency Announcement entitled SyNAPSE. The program is led by Dr. Todd
Hylton.
---

This gives an idea of the people and projects already bidding.
http://www.eventmakeronline.com/dso/View/presenter.asp?MeetingID=561
DARPA SyNAPSE Bidders Workshop  March 4, 2008
--

This is an overview of the meeting from one of the participants.
http://www.ine-news.org/view.php?article=rss-248category=Workshops%3AGeneral
Report on the DARPA SyNAPSE Teaming Workshop
Leslie Smith 14 March 2008
---


BillK




Re: [agi] More Info Please

2008-05-27 Thread BillK
On Tue, May 27, 2008 at 2:20 PM, Mark Waser wrote:
 Geez.  What the heck is wrong with you people and your seriously bogus
 stats?

 Try a real recognized neutral tracking service like Netcraft
 (http://news.netcraft.com/archives/web_server_survey.html)

 Does anyone believe that they are biased and cooking their data?

 Developer   March 2008  Percent  April 2008  Percent  Change
 Apache      82,454,415   50.69%  83,554,638   50.42%   -0.27
 Microsoft   57,698,503   35.47%  58,547,355   35.33%   -0.14
 Google       9,012,004    5.54%  10,079,333    6.08%    0.54
 lighttpd     1,552,650    0.95%   1,495,308    0.90%   -0.05
 Sun            546,581    0.34%     547,873    0.33%   -0.01


As I understand it, Netcraft's results are based on web sites, or more
precisely, hostnames, rather than actual web servers.  This introduces
a bias because some servers run a large number of low-volume (or zero
volume) web sites.

This company attempts to survey web *servers* only
(Note: Total is about 5% of Netcraft total)

http://www.securityspace.com/s_survey/data/200804/
More detail here:
http://www.securityspace.com/s_survey/data/200804/servers.html

This gives 73% for Apache and 19% for Microsoft.


BillK




Re: [agi] More Info Please

2008-05-27 Thread BillK
On Tue, May 27, 2008 at 3:53 PM, Mark Waser wrote:
 No.  You are not correct.  Read their methodology
 (http://www.securityspace.com/s_survey/faq.html?mondir=/200804domdir=domain=)
 which I have copied and pasted below

 We visit what we consider well-known sites. In our case, we define a
 well-known site as a site that had a link to it from at least one other 
 site
 that we consider well-known. So, if we are visiting you, it means we know
 about you through a link from another site.

 If a site stops responding to our request for 3 consecutive months, we
 automatically remove it from the survey. In this fashion, our list of known
 servers remains up to date.

 Because of this technique, we find that we actually only visit about 10%
 of the web sites out on the web. This is because approximately 90% of all
 web sites are fringe sites, such as domain squatters, personal web sites,
 etc., that are considered unimportant by the rest of the web community
 (because no-one considers them important enough to link to.)




That's fine by me. They are trying to survey the web servers that are
actually *used* on the internet, ignoring the millions of parked
domains on IIS servers run by some major registrars.
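
That rule is effectively a breadth-first closure over the link graph.
A minimal sketch (Python; the seed set and link graph here are
invented for illustration):

from collections import deque

links = {
    'seed.com': ['a.com', 'b.com'],
    'a.com': ['c.com'],
    'b.com': [],
    'c.com': [],
    'island.com': ['d.com'],  # never linked from a well-known site
}

def well_known(seeds, links):
    # A site is well-known if some already-well-known site links to it.
    known, queue = set(seeds), deque(seeds)
    while queue:
        for target in links.get(queue.popleft(), []):
            if target not in known:
                known.add(target)
                queue.append(target)
    return known

print(sorted(well_known({'seed.com'}, links)))
# ['a.com', 'b.com', 'c.com', 'seed.com'] -- island.com and d.com are
# the unreached 'fringe' that never gets surveyed.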

Their overall figure of 73% for Apache and 19% for Microsoft IIS
sounds reasonable to me.
As J. Andrew Rogers said, Apache is probably a larger % than this in
Silicon Valley.

BillK




Re: [agi] How general can be and should be AGI?

2008-04-26 Thread BillK
On Sat, Apr 26, 2008 at 8:09 PM, Mike Tintner wrote:
  So what you must tell me is how your or any geometrical system of analysis
 is going to be able to take a rorschach and come up similarly with a
 recognizable object or creature. Bear in mind, your system will be given  no
 initial clues as to what objects or creatures are suitable as potential
 comparisons. It can by all means have a large set of visual images in
 memory, as we do. But you must tell me how your system will connect the
 rorschach with any of those images, such as a bat,  - by *geometrical*
 means.

snip


This is called content-based image retrieval (CBIR). Also known as
query by image content (QBIC) or content-based visual information
retrieval (CBVIR), it is the application of computer vision to the
image retrieval problem, that is, the problem of searching for digital
images in large databases.
http://en.wikipedia.org/wiki/CBIR

This is a hot area of computer research, with many test systems (see
the article).
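
The classic baseline gives the flavour: compare images by colour
histograms rather than keywords. A minimal sketch (Python; the
'images' here are invented pixel lists rather than real files):

from collections import Counter

def histogram(pixels, bins=4):
    # Quantise each RGB channel into `bins` levels and count the cells.
    step = 256 // bins
    h = Counter((r // step, g // step, b // step) for r, g, b in pixels)
    total = sum(h.values())
    return {cell: n / total for cell, n in h.items()}

def similarity(h1, h2):
    # Histogram intersection: 1.0 means identical colour distributions.
    return sum(min(h1.get(c, 0), h2.get(c, 0)) for c in set(h1) | set(h2))

query    = [(200, 30, 30)] * 90 + [(180, 60, 40)] * 10   # mostly red
also_red = [(210, 40, 20)] * 95 + [(60, 60, 200)] * 5
blue     = [(30, 30, 220)] * 100

q = histogram(query)
for name, img in [('also_red', also_red), ('blue', blue)]:
    print(name, round(similarity(q, histogram(img)), 2))
# also_red scores ~0.9 and blue scores 0.0, so also_red is 'retrieved' first.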

Nothing to do with AGI, of course.

Every post from Mike seems to be yet another different way of saying
'You're all wrong!'
Are you sure you want to be on this list, Mike?

BillK



Re: [agi] Symbols

2008-03-31 Thread BillK
On Mon, Mar 31, 2008 at 10:56 AM, Mike Tintner wrote:
snip
 You guys probably think this is all rather peripheral and unimportant - they
 don't teach this in AI courses, so it can't be important.



No. It means you're on the wrong list.


 But if you can't see things whole, then you can't see or connect with the
 real world. And, in case you haven't noticed, no AGI  can connect with the
 real world. In fact, there is no such thing as an AGI at the moment. And
 there never will be if machines can't do what the brain does - which is,
 first and last, and all the time, look at the world in images as wholes.




This list is not trying to duplicate a human brain.

If you are the only person on the list who is correct, then you're
wasting your time and our time here. It can't be much fun for you to
spend all your time repeatedly telling everyone else that they've got
it all wrong. Once or twice should be sufficient, else you turn into a
troll.


BillK



Re: [agi] Instead of an AGI Textbook

2008-03-26 Thread BillK
On Wed, Mar 26, 2008 at 12:47 PM, Ben Goertzel wrote:
 Is there some kind of online software that lets a group of people
  update a Mind Map diagram collaboratively, in the manner of a Wiki page?

  This would seem critical if a Mind Map is to really be useful for the purpose
  you suggest...



Here is a recent review of online mind mapping software:
http://usableworld.terapad.com/index.cfm?fa=contentNews.newsDetails&newsID=41870&from=list&directoryId=14375

Online mindmap tools - Updated!
By James Breeze in Mind Maps
Published: Saturday, 08 March 08


BillK



Re: [agi] reasoning knowledge

2008-02-26 Thread BillK
On Tue, Feb 26, 2008 at 8:29 PM, Ben Goertzel wrote:
snip

  I don't think that formal logic is a suitably convenient language
  for describing motor movements or dealing with motor learning.

  But still, I strongly suspect one can produce software programs that
  do handle motor movement and learning effectively.  They are
  symbolic at the level of the programming language, but not symbolic
  at the level of the deliberative, reflective component of the
  artificial mind doing the learning.

  A symbol is a symbol **to some system**.  Just because a hunk of
  program code contains symbols to the programmer, doesn't mean it
  contains symbols to the mind it helps implement.  Any more than a
  neuron being a symbol to a neuroscientist, implies that neuron is a
  symbol to the mind it helps implement.

  Anyway, I agree with you that formal logical rules and inference are
  not the end-all of AGI and are not the right tool for handling
  visual imagination or motor learning.  But I do think they have an
  important role to play even so.


Asimo has a motor movement program.
Obviously he didn't 'learn' it himself. But once written, it seems
likely that similar sub-routines can be taken advantage of by later
robots.


BillK



Re: [agi] Where are the women?

2007-11-30 Thread BillK
On Nov 30, 2007 2:37 PM, James Ratcliff wrote:
 More Women:

 Kokoro (image attached)



So that's what a woman is!  I wondered...


BillK

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=70777441-ffcff3


Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-11-29 Thread BillK
On Nov 29, 2007 8:33 AM, Bob Mottram wrote:
 My own opinion of all this, for what it's worth, is that the smart
 hackers don't waste their time writing viruses/botnets.  There are
 many harder problems to which an intelligent mind can be applied.




This discussion is a bit out of date. Nowadays no hackers (except for
script kiddies) are interested in wiping hard disks or damaging your
PC.  Hackers want to *use* your PC and the data on it. Mostly the
general public don't even notice their PC is working for someone else.
When it slows down sufficiently, they either buy a new PC or take it
to the shop to get several hundred infections cleaned off. But some
infections (like rootkits) need a disk wipe to remove them completely.

See:
http://blogs.zdnet.com/BTL/?p=7160&tag=nl.e589

Quote-
On Wednesday, the SANS Institute released its top 20 security risks
update for 2007. It's pretty bleak across the board. There are client
vulnerabilities in browsers, Office software (especially the Microsoft
variety), email clients and media players. On the server side, Web
applications are a joke, Windows Services are a big target, Unix and
Mac operating systems have holes, backup software is an issue as are
databases and management servers. Even anti-virus software is a
target.

And assuming you button down all of those parts–good luck folks–you
have policies to be implemented (rights, access, encrypted laptops
etc.) just so people can elude them. Meanwhile, instant messaging,
peer-to-peer programs and your VOIP system are vulnerable. The star of
the security show is the infamous zero day attack.
--

Original SANS report here -
http://www.sans.org/top20/?portal=bf37a5aa487a5aacf91e0785b7f739a4#c2
---

And, of course, all the old viruses are still floating around the net
and have to be protected against.

BillK

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=70081689-300ee8

Re: [agi] Nirvana? Manyana? Never!

2007-11-02 Thread BillK
On 11/2/07, Eliezer S. Yudkowsky wrote:
 I didn't ask whether it's possible.  I'm quite aware that it's
 possible.  I'm asking if this is what you want for yourself.  Not what
 you think that you ought to logically want, but what you really want.

 Is this what you lived for?  Is this the most that Jiri Jelinek wants
 to be, wants to aspire to?  Forget, for the moment, what you think is
 possible - if you could have anything you wanted, is this the end you
 would wish for yourself, more than anything else?



Well, almost.
Absolute Power over others and being worshipped as a God would be neat as well.

Getting a dog is probably the nearest most humans can get to this.

BillK

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=60258273-c65ec9


Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]]

2007-10-06 Thread BillK
On 10/6/07, a wrote:
 I am skeptical that economies follow the self-organized criticality
 behavior.
 There aren't any examples. Some would cite the Great Depression, but it
 was caused by the malinvestment created by Central Banks. e.g. The
 Federal Reserve System. See the Austrian Business Cycle Theory for details.
 In conclusion, economics is a bad analogy with complex systems.


My objection to economic libertarianism is that it's not a free
market. A 'free' market is an impossibility. There will always be
somebody who is bigger than me or cleverer than me or better educated
than me, etc. A regulatory environment attempts to reduce the
victimisation of the weaker members of the population and introduces
another set of biases to the economy.

A free market is just a nice intellectual theory that is of no use in
the real world.
(Unless you are in the Mafia, of course).

BillK

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=50792589-4d8a77


Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-04 Thread BillK
On 10/4/07, Bob Mottram [EMAIL PROTECTED] wrote:
 To me this seems like elevating that status of nanotech to magic.
 Even given RSI and the ability of the AGI to manufacture new computing
 resources it doesn't seem clear to me how this would enable it to
 prevent other AGIs from also reaching RSI capability.  Presumably
 lesser techniques means black hat activity, or traditional forms of
 despotism.  There seems to be a clarity gap in the theory here.



The first true AGI may be friendly, as suggested by Richard Loosemore.
But if the military are working on developing an intelligent weapons
system, then a sub-project will be a narrow AI project designed
specifically to seek out and attack the competition *before* it
becomes a true AGI.  The Chinese are already constantly probing and
attacking western internet sites.

BillK

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=49977621-104d4e


Re: [agi] Religion-free technical content

2007-10-02 Thread BillK
On 10/2/07, Mark Waser wrote:
 A quick question for Richard and others -- Should adults be allowed to
 drink, do drugs, wirehead themselves to death?



This is part of what I was pointing at in an earlier post.

Richard's proposal was that humans would be asked in advance by the
AGI what level of protection they required.

So presumably Richard is thinking along the lines of a non-interfering
AGI, unless specifically requested.

There are obvious problems here.

Humans don't know what is 'best' for them.
Humans frequently ask for what they want, only later to discover that
they really didn't want that.  Humans change their mind all the time.
Humans don't know 'in advance' what level of protection they would
like. Death and/or mutilation comes very quickly at times.

If I was intending to be evil, say, commit mass murder, I would
request a lot of protection from the AGI, as other humans would be
trying to stop me.

--

I think the AGI will have great problems interfacing with these
mixed-up argumentative humans.  The AGI will probably have to do a lot
of 'brain correction' to straighten out humanity.
Let's hope that it knows what it is doing.

BillK

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=48932774-c6db65


Re: [agi] Religion-free technical content

2007-09-30 Thread BillK
On 9/30/07, Edward W. Porter wrote:

 I think you, Don Detrich, and many others on this list believe that, for at
 least a couple of years, it's still pretty safe to go full speed ahead on
 AGI research and development.  It appears from the below post that both you
 and Don agree AGI can potentially present grave problems (which
 distinguished Don from some on this list who make fun of anyone who even
 considers such dangers).  It appears the major distinction between the two
 of you is whether, and how much, we should talk and think about the
 potential dangers of AGI in the next few years.



Take the Internet, WWW and Usenet as an example.

Nobody gave a thought to security while they were being developed.
They were delighted and amazed that the thing worked at all.

Now look at the swamp we have now.

Botnets, viruses, trojans, phishing, DOS attacks, illegal software,
illegal films, illegal music, pornography of every kind, etc.

(Just wish I had a pornograph to play it on).


BillK

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=48269918-e87cb0


Re: [agi] Minimally ambiguous languages

2007-06-05 Thread BillK

On 6/5/07, Bob Mottram [EMAIL PROTECTED] wrote:

I remember last year there was some talk about possibly using Lojban
as a possible language use to teach an AGI in a minimally ambiguous
way.  Does anyone know if the same level of ambiguity found in
ordinary English language also applies to sign language?  I know very
little about sign language, but it seems possible that the constraints
applied by the relatively long time periods needed to produce gestures
with arms/hands compared to the time required to produce vocalizations
may mean that sign language communication is more compact and maybe
less ambiguous.

Also, comparing the way that the same concepts are represented using
spoken and sign language might reveal something about how we normally
parse sentences.



http://en.wikipedia.org/wiki/Basic_English

Ogden's rules of grammar for Basic English allow people to use the
850 words to talk about things and events in the normal English way.
Ogden did not put any words into Basic English that could be
paraphrased with other words, and he strove to make the words work for
speakers of any other language. He put his set of words through a
large number of tests and adjustments. He also simplified the grammar
but tried to keep it normal for English users.

More recently, it has influenced the creation of Simplified English, a
standardized version of English intended for the writing of technical
manuals.


BillK



Re: [agi] My proposal for an AGI agenda

2007-04-10 Thread BillK

On 4/10/07, Eric Baum wrote:

I'd commend to the LISP hackers' attention the compiler Stalin
by Jeff Siskind, who last I knew was at Purdue.
I'm uncertain of the extent to which the compiler is available,
but I imagine if you look around (for example, find Siskind's home page)
you will find papers or pointers. My erstwhile collaborator
Kevin Lang, initially a skeptic on the subject, ran extensive tests
on Stalin and concluded the compiled code was substantially faster
than compiled C and C++, even on problems where this was quite
surprising. It's possible Kevin published something on these tests.



http://community.schemewiki.org/?Stalin
http://en.wikipedia.org/wiki/Stalin_(Scheme_implementation)
http://cobweb.ecn.purdue.edu/~qobi/software.html

Stalin is an aggressively optimizing Scheme compiler. It is the most
highly optimizing Scheme compiler, and in fact one of the most highly
optimizing compilers of any sort for any language. Stalin is publicly
and freely available, licensed under the GNU GPL. It was written by
Jeffrey M. Siskind.

In detail, Stalin is a whole-program compiler that uses advanced flow
analysis and closure conversion techniques. It compiles Scheme to highly
optimized C. Stalin has a few very significant limitations, however:

* it takes a *long* time to compile anything including the compiler
* it is not designed for interactive use or debugging
* it does not support R4RS/R5RS high-level macros
* it does not support the full numeric tower

The compiler itself does lifetime analysis and hence does not generate
as much garbage as might be expected, but global reclamation of
storage is done using the Boehm garbage collector.
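
As an aside, here is what 'closure conversion' means, as a toy sketch in
Python rather than Scheme (the idea is language-independent; this is not
Stalin's actual machinery):

# A closure: adder() captures n from its enclosing scope.
def make_adder(n):
    def adder(x):
        return x + n
    return adder

# Closure-converted form: the captured environment becomes an explicit
# record passed to a top-level function, which a whole-program compiler
# can then treat as ordinary data and optimize aggressively.
def adder_flat(env, x):
    return x + env["n"]

def make_adder_flat(n):
    return (adder_flat, {"n": n})

add5 = make_adder(5)
fn, env = make_adder_flat(5)
assert add5(3) == fn(env, 3) == 8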



Scheme
http://cbbrowne.com/info/scheme.html
http://en.wikipedia.org/wiki/Scheme_(programming_language)

 Scheme is a LISP dialect that is relatively small, nicely supports
tail recursion, provides block structure and lexical scoping, and
gives a variety of object types first-class status (e.g. - first
class objects are namable and can be passed around as function
arguments, results, or as list elements).

If Common LISP is considered analogous to C++ (which is not entirely
unreasonable), Scheme would comparatively be analogous to C. Where
Common LISP requires that the rich functionality (such as a wide
variety of data structures and the Common Lisp Object System (CLOS))
come as an intrinsic part of the language, Scheme encourages the use
of libraries to provide these sorts of additional functionality.

The Scheme libraries are small enough that programmers commonly
construct functionality using primitive functions, where a LISP system
might have something already defined. This gives additional
opportunity either to tune performance or shoot oneself in the foot by
reimplementing it poorly...



BillK



Re: [agi] small code small hardware

2007-03-29 Thread BillK

On 3/29/07, kevin osborne wrote:
snip

You could argue that a lot of all this is the same kind of functions
just operating in 'parallel' with a lot of 'redundancy'.

I'm not sure I buy that. Evolution is a miserly mistress. If thinking
could have been achieved with less, it would have been, and any
'extra' would have no means of selection.

The (also ridiculously large) number of years involved in mammalian
brain evolution all led towards what we bobble around with us today.

I think there is an untold host of support functions necessary to take
a Von Neumann machine to a tipping-point|critical-mass where it can
truly think for itself, or even begin to match the generalised
abilities of an imbecile.



I think you have too high an opinion of Evolution.
Evolution is kludge piled upon kludge.
This is because evolution via natural selection cannot construct
traits from scratch. New traits must be modifications of previously
existing traits. This is called historical constraint.

There are many examples available in nature of bad design.

So it is not unlikely that a lot of the human brain processing is a
redundant hangover from earlier designs.  Of course, it is not a
trivial problem to decide which functions are not required to create
AGI.   :)

BillK



[agi] Mind mapping software

2007-03-06 Thread BillK

I thought this free software opportunity might be of interest to some here.

ConceptDraw MINDMAP 4 is a mind-mapping and team brainstorming tool
with extended drawing capabilities.

Use it to efficiently organize your ideas and tasks with the help of
the Mind Mapping technique. ConceptDraw MINDMAP 4 supports extra file
formats and multi-page documents. It offers a rich collection of
pre-drawn shapes. ConceptDraw MINDMAP 4 has extended capabilities for
creating web sites and PowerPoint presentations.

This software is temporarily available for free.
** But you must download and install it within the next 19 hours. **

Restrictions for the free edition.
1. No free technical support
2. No free upgrades to future versions
3. Strictly non-commercial usage

Normal price 119 USD.

http://www.giveawayoftheday.com/conceptdraw-mindmap-personal/


BillK



Re: [agi] SOTA

2007-01-06 Thread BillK

On 1/6/07, Bob Mottram wrote:

This is the way it's going to go in my opinion.  In a house or office the
robots would really be dumb actuators - puppets - being controlled from a
central AI which integrates multiple systems together.  That way you can
keep the cost and maintenance requirements of the robot to a bare minimum.
Such a system also future-proofs the robot in a rapidly changing software
world, and allows intelligence to be provided as an internet based service.
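
A minimal sketch of that 'puppet' split, with the intelligence injected
as a swappable service (all names here are hypothetical, for
illustration only):

# Toy sketch: a dumb robot client that forwards sensor readings to a
# central AI service and executes whatever commands come back.
def central_ai(sensor_reading):
    # Stand-in for the remote intelligence service.
    if sensor_reading["obstacle_cm"] < 30:
        return {"action": "turn", "degrees": 90}
    return {"action": "forward", "speed": 0.5}

class PuppetRobot:
    def __init__(self, brain):
        # The brain is injected, so the mind can be upgraded or replaced
        # without touching the hardware.
        self.brain = brain

    def step(self, sensor_reading):
        command = self.brain(sensor_reading)
        return command  # a real robot would drive actuators here

robot = PuppetRobot(central_ai)
print(robot.step({"obstacle_cm": 12}))  # {'action': 'turn', 'degrees': 90}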



http://www.pinktentacle.com/2006/12/top-10-robots-selected-for-robot-award-2006/

Robotic building cleaning system (Fuji Heavy Industries/ Sumitomo)

- This autonomous robot roams the hallways of buildings, performing
cleaning operations along the way. Capable of controlling elevators,
the robot can move from floor to floor unsupervised, and it returns to
its start location once it has finished cleaning. The robot is
currently employed as a janitor at 10 high-rise buildings in Japan,
including Harumi Triton Square and Roppongi Hills.

BillK



Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread BillK

On 12/4/06, Mark Waser  wrote:


Explaining our actions is the reflective part of our minds evaluating the
reflexive part of our mind.  The reflexive part of our minds, though,
operates analogously to a machine running on compiled code with the
compilation of code being largely *not* under the control of our conscious
mind (though some degree of this *can* be changed by our conscious minds).
The more we can correctly interpret and affect/program the reflexive part of
our mind with the reflective part, the more intelligent we are.  And,
translating this back to the machine realm circles back to my initial point,
the better the machine can explain its reasoning and use its explanation
to improve its future actions, the more intelligent the machine is (or, in
reverse, no explanation = no intelligence).



Your reasoning is getting surreal.

As Ben tried to explain to you, 'explaining our actions' is our
consciousness dreaming up excuses for what we want to do anyway.  Are
you saying that the more excuses we can think up, the more intelligent
we are? (Actually there might be something in that!).

You seem to have a real difficulty in admitting that humans behave
irrationally for a lot (most?) of the time. Don't you read newspapers?
You can redefine rationality if you like to say that all the crazy
people are behaving rationally within their limited scope, but what's
the point? Just admit their behaviour is not rational.

Every time someone (subconsciously) decides to do something, their
brain presents a list of reasons to go ahead. The reasons against are
ignored, or weighted down to be less preferred. This applies to
everything from deciding to get a new job to deciding to sleep with
your best friend's wife. Sometimes a case arises when you really,
really want to do something that you *know* is going to end in
disaster, ruined lives, ruined career, etc. and it is impossible to
think of good reasons to proceed. But you still go ahead anyway,
saying that maybe it won't be so bad, maybe nobody will find out, it's
not all my fault anyway, and so on.
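
As a toy model of that biased weighing (the weights and the 'desire'
discount are illustrative only, not empirical):

# Once the subconscious wants an action, reasons against get discounted.
def decide(pros, cons, desire=0.8):
    # pros/cons are lists of (reason, weight); desire is in [0, 1].
    score_for = sum(w for _, w in pros)
    score_against = (1.0 - desire) * sum(w for _, w in cons)
    return "go ahead" if score_for > score_against else "hold back"

pros = [("maybe nobody will find out", 0.4), ("it won't be so bad", 0.3)]
cons = [("ruined career", 0.9), ("ruined lives", 0.9)]

print(decide(pros, cons, desire=0.9))  # "go ahead", despite heavier cons
print(decide(pros, cons, desire=0.0))  # "hold back" with honest weighting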

Human decisions and activities are mostly emotional and irrational.
That's the way life is. Because life is uncertain and unpredictable,
human decisions are based on best guesses, gambles and basic
subconscious desires.

An AGI will have to cope with this mess. Basing an AGI on iron logic
and 'rationality' alone will lead to what we call 'inhuman'
ruthlessness.


BillK



Re: Marvin and The Emotion Machine [WAS Re: [agi] A question on the symbol-system hypothesis]

2006-12-05 Thread BillK
to guess both how our brains function as well as they do and why they
evolved in the ways that they did, until we have had more experience
at trying to build such systems ourselves, to learn which kinds of
bugs are likely to appear and to find ways to keep them from disabling
us.

In the coming decades, many researchers will try to develop machines
with Artificial Intelligence. And every system that they build will
keep surprising us with their flaws (that is, until those machines
become clever enough to conceal their faults from us). In some cases,
we'll be able to diagnose specific errors in those designs and then be
able to remedy them. But whenever we fail to find any such simple fix,
we will have little choice except to add more checks and balances—for
example, by adding increasingly elaborate Critics. And through all
this, we can never expect to find any foolproof strategy to balance
the advantage of immediate action against the benefit of more careful,
reflective thought. Whatever we do, we can be sure that the road
toward designing 'post-human minds' will be rough.



BillK



Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread BillK

On 12/5/06, Charles D Hixson wrote:

BillK wrote:
 ...

 Every time someone (subconsciously) decides to do something, their
 brain presents a list of reasons to go ahead. The reasons against are
 ignored, or weighted down to be less preferred. This applies to
 everything from deciding to get a new job to deciding to sleep with
 your best friend's wife. Sometimes a case arises when you really,
 really want to do something that you *know* is going to end in
 disaster, ruined lives, ruined career, etc. and it is impossible to
 think of good reasons to proceed. But you still go ahead anyway,
 saying that maybe it won't be so bad, maybe nobody will find out, it's
 not all my fault anyway, and so on.
 ...

 BillK
I think you've got a time inversion here.  The list of reasons to go
ahead is frequently, or even usually, created AFTER the action has been
done.  If the list is being created BEFORE the decision, the list of
reasons not to go ahead isn't ignored.  Both lists are weighed, a
decision is made, and AFTER the decision is made the reasons decided
against have their weights reduced.  If, OTOH, the decision is made
BEFORE the list of reasons is created, then the list doesn't *get*
created until one starts trying to justify the action, and for
justification obviously reasons not to have done the thing are
useless...except as a layer of whitewash to prove that all
eventualities were considered.

For most decisions one never bothers to verbalize why it was, or was
not, done.



No time inversion intended. What I intended to say was that most
(all?) decisions are made subconsciously before the conscious mind
starts its reason / excuse generation process. The conscious mind
pretending to weigh various reasons is just a human conceit. This
feature was necessary in early evolution for survival. When danger
threatened, immediate action was required. Flee or fight!  No time to
consider options with the new-fangled consciousness brain mechanism
that evolution was developing.

With the luxury of having plenty of time to reason about decisions,
our consciousness can now play its reasoning games to justify what
subconsciously has already been decided.

NOTE: This is probably an exaggeration / simplification. ;)


BillK



Re: [agi] A question on the symbol-system hypothesis

2006-12-02 Thread BillK

On 12/2/06, Mark Waser wrote:


My contention is that the pattern that it found was simply not translated
into terms you could understand and/or explained.

Further, and more importantly, the pattern matcher *doesn't* understand its
results either and certainly couldn't build upon them -- thus, it *fails* the
test as far as being the central component of an RSIAI or being able to
provide evidence as to the required behavior of such.



Mark, I think you are making two very basic wrong assumptions.

1) That humans are able to understand everything if it is explained to
them simply enough and they are given unlimited time.

2) That it is even possible to explain some very complex ideas in a
simple enough fashion.

Consider teaching the sub-normal. After much repetition they can be
trained to do simple tasks; they may not understand 'why', but they can
remember the instructions eventually. Even high-IQ humans have the same
equipment, just a bit better. They still have limits to how much they
can remember, how much information they can hold in their heads and
access. If you can't remember all the factors at once, then you can't
understand the result. You can write down the steps, all the different
data that affect the result, but you can't assemble it in your brain
to get a result.

And I think chess and Go are good examples. People who
think that they can look through the game records and understand why
they lost are just not trained chess or Go players. There is a good
reason some people are called 'Go masters' or 'chess masters'. I used to
play competitive chess and I can assure you that when our top board
player consistently beat us lesser mortals we could rarely point at
move 23 and say 'we shouldn't have done that'. It is *far* more subtle
than that. If you think you can do that, then you just don't
understand the problem.

BillK



Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-14 Thread BillK

On 11/14/06, James Ratcliff wrote:

If the contents of a knowledge base for AGI will be beyond our ability to
comprehend, then it is probably not human-level AGI; it is something
entirely new, and it will be alien and completely foreign and unable to
interact with us at all, correct?
  If you mean it will have more knowledge than we do, and do things somewhat
differently, I agree on the point.
  You can't look inside the box because it's 10^9 bits.
Size is not an acceptable barrier to looking inside.  Wiki is huge and will
get infinitely huge, yet I can look inside it, and see that poison ivy
causes rashes or whatnot.
The AGI will have enormous complexity, I agree, but you should ALWAYS be
able to look inside it.  Not in the traditional sense of pages of code maybe
or simple set of rules, but the AGI itself HAS to be able to generalize and
tell what it is doing.
  So something like, I see these leaves that look like this, supply picture,
can I pick them up safely, will generate a human-readable output that can
itself be debugged. Or asking about the process of doing something, will
generate a possible plan that the AI would follow, and a human could say, no
thats not right, and cause the AI to go back and reconsider with new
possible information.
  We can always look inside the 'logic' of what the AGI is doing, we may not
be able to directly change that ourselves easily.




Doesn't that statement cease to apply as soon as the AGI starts
optimizing its own code?
If the AGI is redesigning itself it will be changing before our eyes,
faster than we can inspect it.

You must be assuming a strictly controlled development system where
the AGI proposes a change, humans inspect it for a week then tell the
AGI to proceed with that change.
I suspect you will only be able to do that in the very early development stages.


BillK



Re: [agi] Natural versus formal AI interface languages

2006-11-06 Thread BillK

On 11/6/06, James Ratcliff wrote:

  In some form or another we are going to HAVE to have a natural language
interface, either a translation program that can convert our English to the
machine-understandable form, or a simplified form of English that is
trivial for a person to quickly understand and write.
  Humans use natural speech to communicate, and to have an effective AGI that
we can interact with, it will have to have easy communication with us.  That
has been a critical problem with all software since the beginning, a
difficulty in the human-computer interface.

I go further to propose that as much knowledge as possible should be stored
in easily recognizable natural language as well, only devolving into more
complex forms where the cases warrant it, such as complex motor-sensor data
sets, and some lower logic levels.



Anybody remember shortwave radio?

The Voice of America does worldwide broadcasts in Special English.
http://www.voanews.com/specialenglish/about_special_english.cfm

Special English has a core vocabulary of 1500 words.  Most are simple
words that describe objects, actions or emotions.  Some words are more
difficult.  They are used for reporting world events and describing
discoveries in medicine and science.

Special English writers use short, simple sentences that contain only
one idea. They use active voice.  They do not use idioms.
--

There is also Basic English:
http://en.wikipedia.org/wiki/Basic_English
Basic English is a constructed language with a small number of words
created by Charles Kay Ogden and described in his book Basic English:
A General Introduction with Rules and Grammar (1930). The language is
based on a simplified version of English, in essence a subset of it.

Ogden said that it would take seven years to learn English, seven
months for Esperanto, and seven weeks for Basic English, comparable
with Ido. Thus Basic English is used by companies who need to make
complex books for international use, and by language schools that need
to give people some knowledge of English in a short time.

Also see:
http://www.basic-english.org/
Basic English is a selection of 850 English words, used in simple
structural patterns, which is both an international auxiliary language
and a self-contained first stage for the teaching of any form of wider
or Standard English. A subset, no unlearning.
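
To make the idea concrete, here is a minimal sketch of the vocabulary
gate such controlled languages imply (the word list is a tiny
illustrative sample, not Ogden's actual 850):

# Toy checker in the spirit of Basic/Special English.
CORE = {
    "a", "an", "and", "the", "is", "are", "to", "of", "in", "on",
    "man", "woman", "water", "food", "good", "bad", "go", "come",
    "make", "take", "give", "get", "see", "say",
}

def check_sentence(sentence, vocabulary=CORE):
    # Return the words that fall outside the permitted vocabulary.
    words = [w.strip(".,!?;:").lower() for w in sentence.split()]
    return [w for w in words if w and w not in vocabulary]

print(check_sentence("The man is good."))            # []
print(check_sentence("The ontology is recursive."))  # ['ontology', 'recursive']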


BillK



Re: [agi] Natural versus formal AI interface languages

2006-11-01 Thread BillK

On 11/1/06, Charles D Hixson wrote:

So.  Lojban++ might be a good language for humans to communicate to an
AI with, but it would be a lousy language in which to implement that
same AI.  But even for this purpose the language needs a verifier to
ensure that the correct forms are being followed.  Ideally such a
verifier would paraphrase the statement that it was parsing and emit
back to the sender either an error message, or the paraphrased
sentence.  Then the sender would check that the received sentence
matched in meaning the sentence that was sent.  (N.B.:  The verifier
only checks the formal properties of the language to ensure that they
are followed.  It has no understanding, so it can't check the meaning.)
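
A minimal sketch of that echo-check loop (parse() and paraphrase() below
are hypothetical stand-ins, not a real Lojban++ grammar):

# Toy protocol: parse, paraphrase, and let the sender confirm the meaning.
def parse(sentence):
    # Accept only toy sentences of the form "<subject> <verb> <object>."
    words = sentence.rstrip(".").split()
    if len(words) != 3:
        raise ValueError("does not match subject-verb-object form")
    return dict(zip(("subject", "verb", "object"), words))

def paraphrase(tree):
    return "{subject} does '{verb}' to {object}.".format(**tree)

def verify(sentence):
    # Return (ok, reply): a paraphrase on success, an error otherwise.
    try:
        return True, paraphrase(parse(sentence))
    except ValueError as err:
        return False, "ERROR: {}".format(err)

print(verify("robot moves box."))              # (True, "robot does 'moves' to box.")
print(verify("robot quickly moves the box."))  # (False, "ERROR: ...")

As the quoted passage notes, this checks form only; the sender still has
to compare the paraphrase against the meaning actually intended.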




This discussion reminds me of a story about the United Nations
assembly meetings.
Normally when a representative is speaking, all the translation staff
are jabbering away in tandem with the speaker.
But when the German representative starts speaking they all fall
silent and sit staring at him.

The reason is that they are waiting for the verb to come along.   :)

BillK



Re: [agi] SOTA

2006-10-24 Thread BillK

On 10/20/06, Richard Loosemore wrote:

I would *love* to see those IBM folks put a couple of jabbering
four-year-old children in front of that translation system, to see how
it likes their 'low-intelligence' language. :-)  Does anyone have any
contacts on the team, so we could ask?



I sent an email to Liang Gu on the IBM MASTOR project team (not really
expecting a reply) :)  and have just received this response. Sounds
hopeful.

BillK
-

Bill,

Thanks for your interest in MASTOR. And your suggestion of MASTOR for
children is really great! It is definitely much more meaningful if
MASTOR can not only help adults but also children communicate with
each other around the world using different languages!
Although recognizing children's voices has proved a very
challenging task, the translation and text-to-speech techniques thus
involved should be very similar to what we have now. We will seriously
investigate the possibility of this approach and will send you a test
link if we later develop a pilot system on the web.

Regards and thanks again for your enthusiasm about MASTOR,
Liang



Re: [agi] SOTA

2006-10-19 Thread BillK

On 10/19/06, Matt Mahoney wrote:


- NLP components such as parsers, translators, grammar-checkers


Parsing is unsolved.  Translators like Babelfish have progressed little
since the 1959 Russian-English project.  Microsoft Word's grammar checker
catches some mistakes but is clearly not AI.




http://www.charlotte.com/mld/charlotte/news/nation/15783022.htm

American soldiers bound for Iraq equipped with laptop translators
Called the Two Way Speech-to-Speech Program, it's a translator that
uses a computer to convert spoken English to Iraqi Arabic and vice
versa.
-

If it is life-or-death, it must work pretty well.   :)

I believe this is based on the IBM MASTOR project.
http://domino.watson.ibm.com/comm/research.nsf/pages/r.uit.innovation.html

MASTOR's innovations include: methods that automatically extract the
most likely meaning of the spoken utterance and store it in a
tree-structured set of concepts like actions and needs; methods that
take the tree-based output of a statistical semantic parser and
transform the semantic concepts in the tree to express the same set of
concepts in a way appropriate for another language; methods for
statistical natural language generation that take the resultant set of
transformed concepts and generate a sentence for the target language;
generation of proper inflections by filtering hypotheses with an
n-gram statistical language model; etc.
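
Read as a pipeline, that description looks roughly like the sketch below
(every function is a hypothetical stand-in for illustration, not IBM's
code):

# Toy sketch of a MASTOR-style interlingua pipeline.
def semantic_parse(utterance):
    # Stand-in: map a source-language utterance to a concept tree.
    return {"action": "need", "agent": "I", "theme": "water"}

def transfer(tree, target_lang):
    # Re-express the same concepts in a form suited to the target language.
    return dict(tree, lang=target_lang)

def generate_candidates(tree):
    # Statistical generation would yield many hypotheses; fake two here.
    return ["I need water.", "Water need I."]

def ngram_score(sentence):
    # Stand-in for the n-gram language model used to rank hypotheses.
    return 1.0 if sentence.startswith("I need") else 0.1

def translate(utterance, target_lang="en"):
    tree = transfer(semantic_parse(utterance), target_lang)
    return max(generate_candidates(tree), key=ngram_score)

print(translate("<source-language utterance>"))  # -> "I need water."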


BillK



Re: [agi] SOTA

2006-10-19 Thread BillK

On 10/19/06, Richard Loosemore [EMAIL PROTECTED] wrote:


Sorry, but IMO large databases, fast hardware, and cheap memory ain't
got nothing to do with it.

Anyone who doubts this get a copy of Pim Levelt's Speaking, read and
digest the whole thing, and then meditate on the fact that that book is
a mere scratch on the surface (IMO a scratch in the wrong direction,
too, but that's neither here nor there).

I saw a recent talk about an NLP system which left me stupefied that so
little progress has been made in the last 20 years.

Having a clue about just what a complex thing intelligence is, has
everything to do with it.




Most normal speaking requires relatively little 'intelligence'.

Adults who take young children on foreign holidays are amazed at how
quickly the children appear to be chattering away to other children in
a foreign language.
They manage it for several reasons:
1) they don't have the other interests and priorities that adults have.
2) they use simple sentence structures and smallish vocabularies.
3) they discuss simple subjects of interest to children.

The new IBM MASTOR system seems to be better than Babelfish. IBM are
just starting on widespread commercial marketing of the system. Aiming
at business travellers, apparently.

MASTOR project description
http://domino.watson.ibm.com/comm/research.nsf/pages/r.uit.innovation.html

Here is a pdf file describing the MASTOR system in more detail
http://acl.ldc.upenn.edu/W/W06/W06-3711.pdf

Here is a 12MB mpg download of the system in use. Simple speech, but impressive.
http://www.research.ibm.com/jam/speech_to_speech.mpg

BillK



Re: [agi] Numenta: article on Jeff Hawkins' AGI approach

2006-06-02 Thread BillK

On 6/2/06, Ben Goertzel wrote:

Mike

You note that Numenta's approach seems oriented toward implementing an
animal-level mind...

I agree, and I do think this is a fascinating project, and an approach
that can ultimately succeed...  but I think that for it to succeed
Hawkins will have to introduce a LOT of deep concepts that he is
currently ignoring in his approach.  Most critically he ignores the
complex, chaotic dynamics of brain systems...

I suppose part of the motivation for starting with animal mind is that
the human mind is just a minor adjustment to the animal mind, which is
sorta true genetically and evolutionarily

But on the other hand, just because animal brains evolved into human
brains, doesn't mean that every system with animal-brain functionality
has similar evolve-into-human-brain potentiality


snip

Just from a computer systems design perspective, I think this project
is admirable.

I think it is safe to claim that all the big computer design disasters
occurred because they tried to do too much all at once. 'We want it
all, and we want it now!'

Ben may be correct in claiming that major elements are being omitted,
but if they even get an animal-level intelligence running, this will
be a remarkable achievement. They will be world leaders and will learn
a lot about designing such systems.

Even if it cannot progress to higher levels of intelligence, the
experience gained will set their technicians well on the road to the
next-generation design.

BillK
