Some of you may remember me from other places, and once before on this
list. But I thought now is the right time to seek some criticism of my
ideas, as they have been slightly refined.
First off, I have recently renounced my status as an AI researcher,
as studying intelligence is not what I wish to do.
Eugen Leitl Thu, 23 Jun 2005 02:18:14 -0700
Do any of you here use MPI, and assume 10^3..10^5 node parallelism?
I assume 2^14-node parallelism with only a small fraction computing at
any time. But then my nodes are really smart memory rather than
full-blown processors, and are not async yet. At
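For concreteness, a minimal sketch of that style of setup, assuming mpi4py; the 1% active fraction and the broadcast-a-seed scheme are illustrative choices of mine, not Eugen's design:

```python
# A sketch, not anyone's actual design: many MPI ranks acting as "smart
# memory" nodes, with only a small fraction computing on any given step.
from mpi4py import MPI
import random

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()            # e.g. 2**14 ranks in the scenario above

ACTIVE_FRACTION = 0.01            # hypothetical fraction computing per step
local_store = {}                  # a node is mostly memory, not a full CPU

for step in range(100):
    # Root broadcasts a seed so every rank agrees on which nodes are active.
    seed = comm.bcast(step if rank == 0 else None, root=0)
    rng = random.Random(seed)
    active = set(rng.sample(range(size), max(1, int(size * ACTIVE_FRACTION))))
    if rank in active:
        # "Smart memory": a small local update rather than a big program.
        local_store[step] = step * rank
```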
On 9/9/05, Ben Goertzel [EMAIL PROTECTED] wrote:
Leitl wrote:
In the language of Gregory Bateson (see his book Mind and Nature),
you're suggesting to do away with learning how to learn --- which is
not at all a workable idea for AGI.
Learning to evolve by evolution is surely a
On 9/9/05, Yan King Yin [EMAIL PROTECTED] wrote:
learning to learn, which I interpret as applying the current knowledge
rules to the knowledge base itself. Your idea is to build an AGI that can
modify its own ways of learning. This is a very fanciful idea but is not the
most direct way to
On 9/12/05, Yan King Yin [EMAIL PROTECTED] wrote:
Will Pearson wrote:
Define what you mean by an AGI. Learning to learn is vital if you wish to
try and ameliorate the No Free Lunch theorems of learning.
I suspect that No Free Lunch is not very relevant in practice. Any learning
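For reference, the precise claim being debated is Wolpert and Macready's (1997) No Free Lunch theorem for search: for any two algorithms a1 and a2, performance summed over all objective functions f is identical,

```latex
% Wolpert & Macready (1997), NFL for search: d^y_m is the sequence of m
% cost values an algorithm observes while searching objective function f.
\sum_{f} P(d^{y}_{m} \mid f, m, a_1) \;=\; \sum_{f} P(d^{y}_{m} \mid f, m, a_2)
```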
On 9/20/05, Yan King Yin [EMAIL PROTECTED] wrote:
William wrote:
I suspect that it will be quite important in competition between agents. If
one agent has a constant method of learning it will be more easily
predicted
by an agent that can figure out its constant method (if it is simple).
On 01/06/06, Richard Loosemore [EMAIL PROTECTED] wrote:
I had similar feelings about William Pearson's recent message about
systems that use reinforcement learning:
A reinforcement scenario, from Wikipedia, is defined as:
Formally, the basic reinforcement learning model consists of:
1. a set of environment states S;
2. a set of actions A; and
3. a set of scalar rewards in the Reals.
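A toy instance of that formal model, to show the loop concretely; the dynamics and policy here are stand-ins of my own, purely illustrative:

```python
# States S, actions A, scalar rewards in the Reals, per the model above.
import random

S = range(4)                       # 1. a set of environment states S
A = ["left", "right"]              # 2. a set of actions A

def environment(s, a):
    """Return (next_state, reward); a stand-in for real dynamics."""
    s2 = (s + (1 if a == "right" else -1)) % len(S)
    return s2, (1.0 if s2 == 0 else 0.0)   # 3. scalar reward in the Reals

def policy(s):
    return random.choice(A)        # trivial policy, for illustration only

s = 0
for t in range(10):
    a = policy(s)
    s, r = environment(s, a)
```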
I don't think this has been raised before; the only similar suggestion
is that we should start by understanding systems that might be weak
and then convert them to a strong system, rather than aiming for weakness
that is hard to convert to a strong system.
Caveats:
1) I don't believe strong
On 08/06/06, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
William Pearson wrote:
I tried posting this to SL4 and it got sucked into some vacuum.
As far as I can tell, it went normally through SL4. I got it.
It is harder to tell on Gmail than on other email systems what gets
through, my
On 08/06/06, William Pearson [EMAIL PROTECTED] wrote:
With regards to how careful I am being with the system: one of the
central design principles for the system is to assume the programs in
the hardware are selfish and may do things I don't want. The failure
mode I envisage more than
On 09/06/06, Richard Loosemore [EMAIL PROTECTED] wrote:
Likewise, an artificial general
intelligence is not a set of environment states S, a set of actions A,
and a set of scalar rewards in the Reals.)
Watching history repeat itself is pretty damned annoying.
While I would agree with you
On 09/06/06, Dennis Gorelik [EMAIL PROTECTED] wrote:
William,
It is very simple and I wouldn't apply it to everything that
behaviourists would (we don't get direct rewards for solving crossword
puzzles).
How do you know that we don't get direct rewards for solving crossword
puzzles (or any
On Fri, 09 Jun 2006 19:13:19 -0500, [EMAIL PROTECTED] wrote:
What about punishment?
Currently I see it as the programs in control of outputting (and hence
the ones that get reward) losing that control, and with it the chance
to get reinforcement. However, experiment or better theory
On 11/06/06, Philip Goetz [EMAIL PROTECTED] wrote:
An article with a point of view opposing the one I mentioned yesterday...
http://www.bcs.rochester.edu/people/alex/pub/articles/KnillPougetTINS04.pdf
Why do you find the question of whether there are Bayesian estimators
in the brain an interesting
On 12/06/06, James Ratcliff [EMAIL PROTECTED] wrote:
Will,
Right now I would think that a negative reward would be usable for this
aspect.
I agree it is usable. But I am not sure it is necessary; you can just
normalise the reward value.
Let's say for most states you normally give 0 for a
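To illustrate the normalisation point, a sketch under my own assumption that raw rewards live in [-1, 1]:

```python
def normalise(r, r_min=-1.0, r_max=1.0):
    """Affine map of r from [r_min, r_max] to [0, 1]. 'Punishment'
    (r < 0) simply becomes a reward below the baseline, rather than
    a separate signal."""
    return (r - r_min) / (r_max - r_min)

assert normalise(-1.0) == 0.0   # harshest punishment -> lowest reward
assert normalise(0.0) == 0.5    # the usual "0 for most states" baseline
assert normalise(1.0) == 1.0
```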
On 13/06/06, sanjay padmane [EMAIL PROTECTED] wrote:
On the suggestion of creating a wiki, we already have it here
http://en.wikipedia.org/wiki/Artificial_general_intelligence
I wouldn't want to pollute the wiki proper with our unverified claims.
As you know, its exposure is much
On 13/06/06, Yan King Yin [EMAIL PROTECTED] wrote:
Will,
I've been thinking of hosting a wiki for some time, but not sure if we have
reached critical mass here.
Possibly not. I may just collate my own list of questions and answers
until the time does come.
When we get down to the details,
On 15/06/06, arnoud [EMAIL PROTECTED] wrote:
On Thursday 15 June 2006 21:35, Ben Goertzel wrote:
If this doesn't seem to be the case, this is because some
concepts are so abstract that they don't seem to be tied to perception
anymore. It is obvious that they are (directly) tied to
On 17/06/06, arnoud [EMAIL PROTECTED] wrote:
As long as some of those things are learnt by watching humans doing
them, in practice I agree with you. In theory, though, a sufficiently
powerful giant look-up table could also seem to learn these things,
so I am also going to be looking at the
On 06/07/06, Russell Wallace [EMAIL PROTECTED] wrote:
On 7/6/06, William Pearson [EMAIL PROTECTED] wrote:
How would you define the sorts of tasks humans are designed to carry
out? I can't see an easy way of categorising all the problems
individual humans have shown their worth
On 06/07/06, Russell Wallace [EMAIL PROTECTED] wrote:
On 7/6/06, William Pearson [EMAIL PROTECTED] wrote:
A generic PC almost fulfils the description: programmable, generic, and
able to solve problems if given the right software to start with. But I am
guessing it is missing something. As someone
On 08/07/06, Russell Wallace [EMAIL PROTECTED] wrote:
On 7/8/06, William Pearson [EMAIL PROTECTED] wrote:
Agreed, but I think looking at it in terms of a single language is a
mistake. Humans use body language and mimicry to acquire spoken
language and spoken/body to acquire written
On 28/08/06, Russell Wallace [EMAIL PROTECTED] wrote:
On 8/28/06, Stephen Reed [EMAIL PROTECTED] wrote:
Google wouldn't work at all well under the GPL. Why? Because if everyone
had their own little Google, it would be quite useless [1]. The system's
usefulness comes from the fact that there is
On 28/08/06, Russell Wallace [EMAIL PROTECTED] wrote:
On 8/28/06, William Pearson [EMAIL PROTECTED] wrote:
If the macro AGI can't translate between differences in language or
representation that the micro AGIs have acquired from being open
source, then we probably haven't done our job
On 28/08/06, Russell Wallace [EMAIL PROTECTED] wrote:
On 8/28/06, William Pearson [EMAIL PROTECTED] wrote:
We may well not have enough computing resources available to do it on
the cheap using local resources. But that is the approach I am
inclined to take; I'll just wait until we do
On 28/08/06, Russell Wallace [EMAIL PROTECTED] wrote:
On 8/28/06, William Pearson [EMAIL PROTECTED] wrote:
Things like hooking it up to low-quality sound/video feeds and having it
judge by posture/expression/time of day what the most useful piece of
information in the RSS feeds/email etc
I am interested in meta-learning voodoo, so I thought I would add my
view on KR in this type of system.
If you are interested in meta-learning the KR you have to ditch
thinking about knowledge as the lowest level of changeable
information in your system, and just think about changing state.
On 27/09/06, Richard Loosemore [EMAIL PROTECTED] wrote:
William Pearson wrote:
I am interested in meta-learning voodoo, so I thought I would add my
view on KR in this type of system.
If you are interested in meta-learning the KR you have to ditch
thinking about knowledge as the lowest level
Richard Loosemore
As for your suggestion about the problem being centered on the use of
model-theoretic semantics, I have a couple of remarks.
One is that YES! this is a crucial issue, and I am so glad to see you
mention it. I am going to have to read your paper and discuss with you
On 21/11/06, Pei Wang [EMAIL PROTECTED] wrote:
That sounds better to me. In general, I'm against attempts to get
complete, consistent, certain, and absolute descriptions (of either
internal or external state), and prefer partial,
not-necessarily-consistent, uncertain, and relative ones --- not
On 02/12/06, Ben Goertzel [EMAIL PROTECTED] wrote:
I think that our propensity for music is pretty damn simple: it's a
side-effect of the general skill-learning machinery that makes us memetic
substrates. Tunes are trajectories in n-space as are the series of motor
signals involved in
On 14/02/07, Ben Goertzel [EMAIL PROTECTED] wrote:
Does anyone know of a well-thought-out list of this sort? Of course I
could make one by surveying the cognitive psych literature, but why
reinvent the wheel?
None that I have come across. Biases that I have come across are
things like paying
On 13/04/07, Richard Loosemore [EMAIL PROTECTED] wrote:
To convey this subtlety as simply as I can, I would suggest that you ask
yourself how much intelligence is being assumed in the preprocessing
system that does the work of (a) picking out patterns to be considered
by the system, and (b)
On 26/04/07, Richard Loosemore [EMAIL PROTECTED] wrote:
Consider that, folks, to be a challenge: to those who think there is
such a definition, I await your reply.
While I don't think it is the sum of all intelligence, I'm studying
something I think is a precondition of being intelligent. That
My current thinking is that it will take lots of effort by multiple
people to take a concept or prototype AGI and turn it into something
that is useful in the real world. And even if one or two people worked on
the correct concept for their whole lives it may not produce the full
thing; they may hit
On 11/05/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
Tommy, the scientific experiment and engineering project, is almost
all about concept formation. He gets a voluminous input stream but is
required to parse it into coherent concepts (e.g. objects, positions,
velocities, etc). None of
On 01/06/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
Ray Kurzweil has arranged to put a couple of sample chapters up on his site:
Kinds of Minds
http://www.kurzweilai.net/meme/frame.html?main=/articles/art0707.html
The Age of Virtuous Machines
Is there space within the charity world for another one related to
intelligence but with a different focus from SIAI?
Rather than specifically funding an AGI effort, or creating one with
a specific goal state of humanity in mind, it would be dedicated to
funding a search for the
On 04/06/07, Matt Mahoney [EMAIL PROTECTED] wrote:
Suppose you build a human level AGI, and argue
that it is not autonomous no matter what it does, because it is
deterministically executing a program.
I suspect an AGI that executes one fixed unchangeable program is not
physically possible.
On 05/06/07, Ricardo Barreira [EMAIL PROTECTED] wrote:
On 6/5/07, William Pearson [EMAIL PROTECTED] wrote:
On 04/06/07, Matt Mahoney [EMAIL PROTECTED] wrote:
Suppose you build a human level AGI, and argue
that it is not autonomous no matter what it does, because it is
deterministically
On 06/06/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
There're several reasons why AGI teams are fragmented and AGI designers
don't want to join a consortium:
A. believe that one's own AGI design is superior
B. want to ensure that the global outcome of AGI is friendly
C. want to get
On 22/06/07, Pei Wang [EMAIL PROTECTED] wrote:
Hi,
I put a brief introduction to AGI at
http://nars.wang.googlepages.com/AGI-Intro.htm , including an AGI
Overview followed by Representative AGI Projects.
It is basically a bunch of links and quotations organized according to
my opinion.
On 23/06/07, Mike Tintner [EMAIL PROTECTED] wrote:
- Will Pearson: My theory is that the computer architecture has to be
more brain-like
than a simple stored program architecture in order to allow resource-
constrained AI to be implemented efficiently. The way that I am
investigating is an
On 24/06/07, Bo Morgan [EMAIL PROTECTED] wrote:
On Sun, 24 Jun 2007, William Pearson wrote:
) I think the brain's programs have the ability to protect their own
) storage from interference from other programs. The architecture will
) only allow programs that have proven themselves better
Sorry, sent accidentally while half finished.
Bo wrote:
This is only partially true, and mainly only for the neocortex, right?
For example, removing small parts of the brainstem results in coma.
I'm talking about control in memory access, and by memory access I am
referring to synaptic changes
On 27/09/2007, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
This is why the word impossible has no place outside of math
departments.
Original Message
Subject: [bafuture] An amazing blind (?!!) boy (and a super mom!)
Date: Thu, 27 Sep 2007 11:49:42 -0700
From: Kennita
On 29/09/2007, Vladimir Nesov [EMAIL PROTECTED] wrote:
Although it indeed seems off-topic for this list, calling it a
religion is ungrounded and in this case insulting, unless you have
specific arguments.
Killing huge numbers of people is pretty much a possible venture for
regular humans, so
On 01/10/2007, Matt Mahoney [EMAIL PROTECTED] wrote:
--- William Pearson [EMAIL PROTECTED] wrote:
On 30/09/2007, Matt Mahoney [EMAIL PROTECTED] wrote:
The real danger is this: a program intelligent enough to understand
software
would be intelligent enough to modify itself.
Well
On 02/10/2007, Mark Waser [EMAIL PROTECTED] wrote:
A quick question: do people agree with the scenario where, once a
non-super-strong RSI AI becomes mainstream, it will replace the OS as
the lowest level of software?
For the system that it is running itself on? Yes, eventually. For
On 05/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote:
We have good reason to believe, after studying systems like GoL, that
even if there exists a compact theory that would let us predict the
patterns from the rules (equivalent to predicting planetary dynamics
given the inverse square law
On 05/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote:
William Pearson wrote:
On 05/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote:
We have good reason to believe, after studying systems like GoL, that
even if there exists a compact theory that would let us predict the
patterns
On 07/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote:
I have a question for you, Will.
Without loss of generality, I can change my use of Game of Life to a new
system called GoL(-T) which is all of the possible GoL instantiations
EXCEPT the tiny subset that contain Turing Machine
On 08/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote:
William Pearson wrote:
On 07/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote:
William Pearson wrote:
On 07/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote:
The TM implementation not only has no relevance to the behavior
On 08/10/2007, Mark Waser [EMAIL PROTECTED] wrote:
From: William Pearson [EMAIL PROTECTED]
Laptops aren't TMs.
Please read the wiki entry to see that my laptop isn't a TM.
But your laptop can certainly implement/simulate a Turing Machine (which was
the obvious point of the post(s) that you
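To make the implement/simulate point concrete, here is a minimal TM simulator; the machine is a toy of my own (it writes "111" and halts). With finite RAM the laptop is strictly a very large finite-state machine, which is presumably the sense in which it "isn't" a TM:

```python
from collections import defaultdict

def run_tm(delta, state="q0", halt="qH", steps=1000):
    """Simulate a 1-tape TM. delta: (state, symbol) -> (state, symbol, move)."""
    tape, head = defaultdict(lambda: "_"), 0
    for _ in range(steps):
        if state == halt:
            break
        state, tape[head], move = delta[(state, tape[head])]
        head += 1 if move == "R" else -1
    return state, tape

# A toy machine: write "111" on a blank tape, then halt.
delta = {
    ("q0", "_"): ("q1", "1", "R"),
    ("q1", "_"): ("q2", "1", "R"),
    ("q2", "_"): ("qH", "1", "R"),
}
final_state, tape = run_tm(delta)
assert "".join(tape[i] for i in range(3)) == "111"
```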
On 12/10/2007, Edward W. Porter [EMAIL PROTECTED] wrote:
(2) WITH REGARD TO BOOKWORLD -- IF ALL THE WORLD'S BOOKS WERE IN
ELECTRONIC FORM AND YOU HAD A MASSIVE AMOUNT OF AGI HARDWARE TO READ
THEM ALL, I THINK YOU WOULD BE ABLE TO GAIN A TREMENDOUS AMOUNT OF
WORLD KNOWLEDGE FROM THEM, AND
On 18/10/2007, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
I'd be interested in everyone's take on the following:
1. What is the single biggest technical gap between current AI and AGI? (e.g.
we need a way to do X or we just need more development of Y or we have the
ideas, just need hardware,
I have recently been trying to find better formalisms than TMs for
different classes of adaptive systems (including the human brain), and
have come across Persistent Turing Machines [1], which seem to be a
good first step in that direction.
They are claimed to have expressiveness greater than
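As I read Goldin and Wegner, the distinction is roughly the following; the toy code is mine, not from the paper. A classical TM computes a one-shot function, while a persistent TM carries worktape contents across an unbounded interaction stream:

```python
def classical(x):
    """Classical TM picture: one input, one output, no persistent state."""
    return x * 2

def persistent(stream):
    """PTM picture: a persistent worktape survives between interactions,
    so the i-th output can depend on the whole history of inputs."""
    memory = []                     # persists across macrosteps
    for x in stream:
        memory.append(x)
        yield sum(memory)           # output depends on history, not just x

assert list(persistent([1, 2, 3])) == [1, 3, 6]
```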
On 30/10/2007, Pei Wang [EMAIL PROTECTED] wrote:
Thanks for the link. I agree that this work is moving in an
interesting direction, though I'm afraid that for AGI (and adaptive
systems in general), TM may be too low as a level of description ---
the conclusions obtained in this kind of work
On 06/11/2007, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
Will Pearson asked
I'm also wondering what you consider success in this case. For example
do you want the system to be able to maintain conversational state
such as would be needed to deal with the following.
For all following
On 08/11/2007, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
My impression is that most machine learning theories assume a search space
of hypotheses as a given, so it is out of their scope to compare *between*
learning structures (eg, between logic and neural networks).
Algorithmic learning
On 08/11/2007, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
Thanks for the input.
There's one perplexing theorem, in the paper about the algorithmic
complexity of programming, that the language doesn't matter that much, i.e.,
the algorithmic complexity of a program in different languages only
differs by a constant
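The theorem being referred to is presumably the invariance theorem of algorithmic information theory: for any two universal languages L1 and L2 there is a constant, depending on the pair of languages but not on the string being described, such that

```latex
% Invariance theorem: c_{L_1 L_2} depends only on the two languages,
% not on x, so complexity is language-independent up to a constant.
\left| K_{L_1}(x) - K_{L_2}(x) \right| \le c_{L_1 L_2} \quad \text{for all } x
```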
On 08/11/2007, Jef Allbright [EMAIL PROTECTED] wrote:
I'm sorry I'm not going to be able to provide much illumination for
you at this time. Just the few sentences of yours quoted above, while
of a level of comprehension equal to or better than the average on this
list, demonstrate epistemological
On 09/11/2007, Jef Allbright [EMAIL PROTECTED] wrote:
On 11/8/07, William Pearson [EMAIL PROTECTED] wrote:
On 08/11/2007, Jef Allbright [EMAIL PROTECTED] wrote:
This discussion reminds me of hot rod enthusiasts arguing passionately
about how to build the best racing car, while
On 21/11/2007, Dennis Gorelik [EMAIL PROTECTED] wrote:
Benjamin,
That's a massive amount of work, but most AGI research and development
can be shared with narrow AI research and development.
There is plenty of overlap between AGI and narrow AI, but not as much
as you suggest...
That's only
One thing that has been puzzling me for a while is why some people
expect an intelligence to be less flexible than a PC.
What do I mean by this? A PC can have any learning algorithm, bias or
representation of data we care to create. This raises another
question: how are we creating a
On 06/12/2007, Ed Porter [EMAIL PROTECTED] wrote:
Matt,
So if it is perceived as something that increases a machine's vulnerability,
it seems to me that would be one more reason for people to avoid using it.
Ed Porter
Why are you having this discussion on an AGI list?
Will Pearson
On 07/01/2008, Robert Wensman [EMAIL PROTECTED] wrote:
I think what you really want to use is the
concept of adaptability, or maybe you could say you want an AGI system that
is programmed in an indirect way (meaning that the program instructions are
very far away from what the system actually
On 10/01/2008, Benjamin Goertzel [EMAIL PROTECTED] wrote:
Processing a dictionary in a useful way
requires quite sophisticated language understanding ability, though.
Once you can do that, the hard part of the problem is already
solved ;-)
While this kind of system requires sophisticated
On 10/01/2008, Benjamin Goertzel [EMAIL PROTECTED] wrote:
I'll be a lot more interested when people start creating NLP systems
that are syntactically and semantically processing statements *about*
words, sentences and other linguistic structures and adding syntactic
and semantic rules
On 10/01/2008, Benjamin Goertzel [EMAIL PROTECTED] wrote:
On Jan 10, 2008 10:26 AM, William Pearson [EMAIL PROTECTED] wrote:
On 10/01/2008, Benjamin Goertzel [EMAIL PROTECTED] wrote:
I'll be a lot more interested when people start creating NLP systems
that are syntactically
Vladimir,
What do you mean by difference in processing here?
I said the difference was after the initial processing. By processing
I meant syntactic and semantic processing. After processing the
syntax-related sentence, the realm of action is changing the system
itself, rather than knowledge of
My problem with both these definitions (and the one underpinning
AIXI) is that they either don't define the word problem well or
define it in a limited way.
For example, AIXI defines the solution of a problem as finding a
function that transforms an input to an output. No mention of having
On 14/01/2008, Pei Wang [EMAIL PROTECTED] wrote:
On Jan 13, 2008 7:40 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
And, as I indicated, my particular beef was with Shane Legg's paper,
which I found singularly content-free.
Shane Legg and Marcus Hutter have a recent publication on this
Something I noticed while trying to fit my definition of AI into the
categories given: there is another way that definitions can be principled.
This similarity would not be based on the function from percepts to actions.
Instead it would require a similarity in the function from percepts to
internal state
On 14/01/2008, Pei Wang [EMAIL PROTECTED] wrote:
Will,
The situation you mentioned is possible, but I'd assume, given the
similar functions from percepts to states, there must also be similar
functions from states to actions, that is,
A_C = G_C(S_C), A_H = G_H(S_H), G_C ≈ G_H
Pei,
Sorry I
On 23/01/2008, Günther Greindl [EMAIL PROTECTED] wrote:
I find the theory very compelling, as I always found the functionalistic
AI approach a bit lacking whereas I am a full endorser of a
materialistic/monist approach (and I believe strong AI is feasible). EM
fields arising through the
On 27/01/2008, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Vladimir Nesov [EMAIL PROTECTED] wrote:
On Jan 27, 2008 5:32 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
Software correctness is undecidable -- the halting problem reduces to it.
Computer security isn't going to be magically
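The reduction Matt alludes to is the standard diagonal argument; sketched below in Python, where halts() is hypothetical by construction:

```python
# Suppose halts(p, x) perfectly decided whether program p halts on input x.
def halts(program, data):
    """Hypothetical perfect halting decider; cannot actually exist."""
    raise NotImplementedError

def trouble(program):
    # Do the opposite of whatever the decider predicts about
    # (program, program).
    if halts(program, program):
        while True:
            pass                 # loop forever if predicted to halt
    # ...otherwise halt immediately.

# trouble(trouble) halts iff halts(trouble, trouble) says it doesn't:
# a contradiction, so no such halts() can exist, and general software
# correctness checking inherits the undecidability.
```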
On 28/01/2008, Bob Mottram [EMAIL PROTECTED] wrote:
On 28/01/2008, Ben Goertzel [EMAIL PROTECTED] wrote:
When your computer can write and debug
software faster and more accurately than you can, then you should worry.
A tool that could generate computer code from formal specifications
On 04/02/2008, Mike Tintner [EMAIL PROTECTED] wrote:
(And it's a fairly safe bet, Joseph, that no one will now do the obvious
thing and say.. well, one idea I have had is..., but many will say, the
reason why we can't do that is...)
And maybe they would have a reason for doing so. I would like
On 05/02/2008, Mike Tintner [EMAIL PROTECTED] wrote:
William P: I can't think
of any external test that can't be fooled by a giant look-up table
(Ned Block thought of this argument first).
A by-definition requirement of a general test is that the system builder
doesn't set it, and can't
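Block's table, in miniature (my own toy illustration): a machine that maps every possible conversation history to a canned reply passes any external, behavioural test, yet is nothing but a table.

```python
# A giant look-up table in miniature: the key is the entire input history,
# the value is the reply. Scaled up, no external test distinguishes it
# from "real" understanding.
TABLE = {
    ("Hello?",): "Hi there.",
    ("Hello?", "What is 2+2?"): "4.",
    # ...one entry per possible history; astronomically many in general
}

def glut_reply(history):
    return TABLE.get(tuple(history), "I don't know.")

assert glut_reply(["Hello?", "What is 2+2?"]) == "4."
```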
On 15/02/2008, Ed Porter [EMAIL PROTECTED] wrote:
Mike,
You have been pushing this anti-symbol/pro-image dichotomy for a long time.
I don't understand it.
Images are sets, or nets, of symbols. So, if, as you say
all symbols provide an extremely limited *inventory of
I'm going to try and elucidate my approach to building an intelligent
system, in a roundabout fashion. This is the problem I am trying to
solve.
Imagine you are designing a computer system to solve an unknown
problem, and you have these constraints
A) Limited space to put general information
On 28/02/2008, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
On 2/28/08, William Pearson [EMAIL PROTECTED] wrote:
I'm going to try and elucidate my approach to building an intelligent
system, in a roundabout fashion. This is the problem I am trying to
solve.
Imagine you are designing
On 29/02/2008, Abram Demski [EMAIL PROTECTED] wrote:
I'm an undergrad who's been lurking here for about a year. It seems to me that
many people on this list take Solomonoff Induction to be the ideal learning
technique (for unrestricted computational resources). I'm wondering what
justification
On 01/03/2008, Jey Kottalam [EMAIL PROTECTED] wrote:
On Sat, Mar 1, 2008 at 3:10 AM, William Pearson [EMAIL PROTECTED] wrote:
Keeping the same general shape of the system (trying to account for
all the detail) means we are likely to overfit, due to trying to model
systems
Anyone blogging what they are finding interesting in AGI 08?
Will Pearson
On 28/02/2008, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
On 2/28/08, William Pearson [EMAIL PROTECTED] wrote:
Note I want something different than computational universality. E.g.
Von Neumann architectures are generally programmable, Harvard
architectures aren't. As they can't
On 28/02/2008, Mike Tintner [EMAIL PROTECTED] wrote:
You must first define its existing skills, then define the new challenge
with some degree of precision - then explain the principles by which it will
extend its skills. It's those principles of extension/generalization that
are the
On 02/03/2008, Mike Tintner [EMAIL PROTECTED] wrote:
Jeez, Will, the point of Artificial General Intelligence is that it can
start adapting to an unfamiliar situation and domain BY ITSELF. And your
FIRST and only response to the problem you set was to say: I'll get someone
to tell it what
On 16/03/2008, Ed Porter [EMAIL PROTECTED] wrote:
I am not an expert on neural nets, but from my limited understanding it is far
from clear exactly what the new insight into neural nets referred to in this
article is, other than that the timing of neuron firings is important in the
brain, which
On 24/03/2008, Jim Bromer [EMAIL PROTECTED] wrote:
To try to understand what I am talking about, start by imagining a
simulation of some physical operation, like a part of a complex factory in a
Sim City kind of game. In this kind of high-level model no one would ever
imagine all of the
On 25/03/2008, Vladimir Nesov [EMAIL PROTECTED] wrote:
Simple systems can be computationally universal, so it's not an issue
in itself. On the other hand, no learning algorithm is universal,
there are always distributions that given algorithms will learn
miserably. The problem is to find a
On 26/03/2008, Mark Waser [EMAIL PROTECTED] wrote:
First a riddle: What can be all learning algorithms, but is none?
A human being!
Well, my answer was a common PC, which I hope is more illuminating
because we know it well.
But a human being works, as does any future AI design, as far as I am
On 30/03/2008, Kingma, D.P. [EMAIL PROTECTED] wrote:
Although I sympathize with some of Hawkins's general ideas about unsupervised
learning, his current HTM framework is unimpressive in comparison with
state-of-the-art techniques such as Hinton's RBMs, LeCun's
convolutional nets and the
On 30/03/2008, Kingma, D.P. [EMAIL PROTECTED] wrote:
Intelligence is not *only* about the modalities of the data you get,
but modalities are certainly important. A deafblind person can still
learn a lot about the world with taste, smell, and touch, but the
senses one has access to define
On 26/03/2008, Ben Goertzel [EMAIL PROTECTED] wrote:
Hi all,
A lot of students email me asking me what to read to get up to speed on AGI.
So I started a wiki page called Instead of an AGI Textbook,
http://www.agiri.org/wiki/Instead_of_an_AGI_Textbook#Computational_Linguistics
I've
The resource allocation problem and why it needs to be solved first
How much memory and processing power should you apply to the following
things? (A toy allocation sketch follows the list.)
Visual Processing
Reasoning
Sound Processing
Seeing past experiences and how they apply to the current one
Searching for new ways of doing things
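One toy way to make the question concrete; this is my own sketch, not a worked-out design, and the module names and numbers are purely illustrative. Each consumer gets a share of memory and CPU proportional to the reinforcement it has recently earned:

```python
# Toy allocator: shares of memory/CPU proportional to recent reward.
TOTAL_CPU_MS = 1000
TOTAL_MEM_MB = 512

recent_reward = {
    "visual_processing": 4.0,
    "reasoning": 2.0,
    "sound_processing": 1.0,
    "episodic_recall": 2.0,    # past experiences applied to the present
    "exploration": 1.0,        # searching for new ways of doing things
}

def allocate(total, rewards):
    z = sum(rewards.values())
    return {name: total * r / z for name, r in rewards.items()}

cpu = allocate(TOTAL_CPU_MS, recent_reward)
mem = allocate(TOTAL_MEM_MB, recent_reward)
assert abs(sum(cpu.values()) - TOTAL_CPU_MS) < 1e-6
```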
On 01/04/2008, Vladimir Nesov [EMAIL PROTECTED] wrote:
On Tue, Apr 1, 2008 at 6:30 PM, William Pearson [EMAIL PROTECTED] wrote:
The resource allocation problem and why it needs to be solved first
How much memory and processing power should you apply to the following
things
On 05/04/2008, Vladimir Nesov [EMAIL PROTECTED] wrote:
On Sat, Apr 5, 2008 at 12:24 AM, William Pearson [EMAIL PROTECTED] wrote:
On 01/04/2008, Vladimir Nesov [EMAIL PROTECTED] wrote:
This question supposes a specific kind of architecture, where these
things are in some sense
On 19/04/2008, Ed Porter [EMAIL PROTECTED] wrote:
WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?
I'm not quite sure how to describe it, but this brief sketch will have
to do until I get some more time. These ideas may already be in some
newer AI material, but I haven't had the chance to read much recently.