Anyone blogging what they are finding interesting in AGI 08?
Will Pearson
---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
On 28/02/2008, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
On 2/28/08, William Pearson [EMAIL PROTECTED] wrote:
Note I want something different from computational universality. E.g.
Von Neumann architectures are generally programmable, Harvard
architectures aren't. As they can't be
On 28/02/2008, Mike Tintner [EMAIL PROTECTED] wrote:
You must first define its existing skills, then define the new challenge
with some degree of precision - then explain the principles by which it will
extend its skills. It's those principles of extension/generalization that
are the
Jeez, Will, the point of Artificial General Intelligence is that it can
start adapting to an unfamiliar situation and domain BY ITSELF. And your
FIRST and only response to the problem you set was to say: I'll get someone
to tell it what to do.
IOW you simply avoided the problem and thought
One thing I would expect from an AGI is that at least it would be able
to Google for something that might talk about how to do whatever it
needs, and to have library references on the subject available. Being
able to follow and interpret written instructions takes a lot of
intelligence in
Yes of course an AGI will make mistakes - and sometimes fail - in adapting.
I say that v. explicitly.
But your other point also skirts the problem - which is that the AGI must
first identify what it needs to adapt to, before it can start
googling/asking for advice.
I think we need better to
Gentlemen,
For guys interested in vision, neural nets and the like, there's a very
interesting talk by Geoffrey Hinton about unsupervised learning of
low-dimensional codes:
It's been on Youtube since December, but somehow it escaped my attention for
some months.
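For anyone who wants a feel for what "unsupervised learning of low-dimensional codes" means in practice, here is a minimal sketch: a tiny linear autoencoder trained by plain gradient descent on synthetic data that actually lies on a 2-D subspace. All dimensions, names, and hyperparameters are my own illustrative choices, not anything from the talk itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data that really lives on a 2-D subspace of R^8.
basis = rng.normal(size=(2, 8))
X = rng.normal(size=(200, 2)) @ basis   # 200 samples, 8 features

d_in, d_code = 8, 2
W_enc = rng.normal(scale=0.1, size=(d_in, d_code))   # encoder weights
W_dec = rng.normal(scale=0.1, size=(d_code, d_in))   # decoder weights

lr = 0.01
for _ in range(2000):
    code = X @ W_enc            # low-dimensional code
    X_hat = code @ W_dec        # reconstruction from the code
    err = X_hat - X
    # Gradients of mean squared reconstruction error.
    g_dec = code.T @ err / len(X)
    g_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

mse = float(np.mean((X @ W_enc @ W_dec - X) ** 2))
print(f"reconstruction MSE: {mse:.4f}")
```

Because the data is exactly 2-dimensional and the code has 2 units, the reconstruction error should fall close to zero; the deep, nonlinear version in the talk applies the same idea where no linear subspace fits.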
Using informal words, how would you describe the metaphysics or
biases currently encoded into the Novamente system?
/Robert Wensman
This is a good question, and unfortunately I don't have a
systematic answer. Biases are encoded in many different
aspects of the design, e.g.
-- the knowledge
On 2/16/08, Richard Loosemore [EMAIL PROTECTED] wrote:
Kaj Sotala wrote:
Well, the basic gist was this: you say that AGIs can't be constructed
with built-in goals, because a newborn AGI hasn't yet built up
the concepts needed to represent the goal. Yet humans seem to tend to
have
On 02/03/2008, Mike Tintner [EMAIL PROTECTED] wrote:
Jeez, Will, the point of Artificial General Intelligence is that it can
start adapting to an unfamiliar situation and domain BY ITSELF. And your
FIRST and only response to the problem you set was to say: I'll get someone
to tell it what
Thanks for that.
Don't you see that the way to go on neural nets is to hybridize them with
genetic algorithms in mass amounts?
eldras
- Original Message -
From: Kingma, D.P. [EMAIL PROTECTED]
To: agi@v2.listbox.com
Subject: [agi] interesting Google Tech Talk about Neural Nets
Date: Sun, 2 Mar 2008
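For concreteness, one common form of the "neural nets hybrid with genetic algorithms" idea is neuroevolution: evolve the weights of a fixed-topology network with a plain GA instead of gradient descent. The sketch below is only illustrative; the task (XOR), the 2-4-1 topology, and all GA parameters are my own assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# The classic XOR task: not linearly separable, so a hidden layer is needed.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

N_W = 2 * 4 + 4 + 4 + 1   # 2-4-1 tanh net: weights + biases = 17 genes

def forward(w, x):
    """Run the 2-4-1 tanh network encoded by the flat gene vector w."""
    W1 = w[:8].reshape(2, 4); b1 = w[8:12]
    W2 = w[12:16]; b2 = w[16]
    h = np.tanh(x @ W1 + b1)
    return np.tanh(h @ W2 + b2)

def fitness(w):
    """Negative total squared error on XOR: higher is better."""
    return -float(np.sum((forward(w, X) - y) ** 2))

pop = rng.normal(size=(50, N_W))          # random initial population
for gen in range(200):
    scores = np.array([fitness(w) for w in pop])
    order = np.argsort(scores)[::-1]
    elite = pop[order[:10]]               # keep the 10 fittest (elitism)
    # Offspring: mutated copies of randomly chosen elite parents.
    parents = elite[rng.integers(0, 10, size=40)]
    children = parents + rng.normal(scale=0.2, size=parents.shape)
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(w) for w in pop])]
print("best fitness:", fitness(best))
```

Elitism makes the best fitness monotonically non-decreasing, so even this crude mutation-only GA reliably climbs toward a working XOR net; crossover and topology evolution are the usual next refinements.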
Although top-down approaches should continue being researched and tried, the
complexity is still monumental.
We KNOW that bottom-up delivers AGI, and Turing's view was that heuristics are
enough to build it.
That is only doable at the mass speeds assumed possible in e.g. quantum computing.
eldras
-
Interesting that you're attempting that via goals, because goals will mutate; one
alternative is to control the infrastructure, e.g. have systems that die when
they've run a certain course, and watcher systems that check for mutations.
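The die-after-a-course plus watcher idea above can be rendered as a toy: each worker carries a fixed lifespan and a checksum of its code, and a watcher retires workers whose lifespan is spent or whose code no longer matches its checksum (a crude stand-in for "checking mutations"). Everything here is an illustrative assumption, not anyone's actual design.

```python
import hashlib

class Worker:
    def __init__(self, name, code, lifespan):
        self.name = name
        self.code = code                  # the "program" being run
        self.lifespan = lifespan          # steps before forced death
        self.checksum = hashlib.sha256(code.encode()).hexdigest()

    def step(self):
        self.lifespan -= 1

class Watcher:
    def cull(self, workers):
        """Retire workers that have expired or whose code has mutated."""
        survivors = []
        for w in workers:
            mutated = hashlib.sha256(w.code.encode()).hexdigest() != w.checksum
            if w.lifespan <= 0 or mutated:
                continue                  # retired: expired or mutated
            survivors.append(w)
        return survivors

workers = [Worker("a", "do_x()", lifespan=2), Worker("b", "do_y()", lifespan=5)]
workers[1].code = "do_evil()"             # simulate a mutation in worker b

watcher = Watcher()
for _ in range(3):
    for w in workers:
        w.step()
    workers = watcher.cull(workers)

print([w.name for w in workers])          # both workers end up retired
```

Worker b is culled immediately for mutating; worker a simply runs out its two-step course. The point of the toy is that neither control depends on inspecting goals at all.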
- Original Message -
From: Kaj Sotala [EMAIL PROTECTED]
To: