2009/1/9 Ben Goertzel b...@goertzel.org:
This is an attempt to articulate a virtual world infrastructure that
will be adequate for the development of human-level AGI
http://www.goertzel.org/papers/BlocksNBeadsWorld.pdf
goertzel.org seems to be down, so I can't refresh my memory of the paper.
2009/1/13 Ben Goertzel b...@goertzel.org:
Yes, I'm expecting the AI to make tools from blocks and beads
No, I'm not attempting to make a detailed simulation of the human
brain/body, just trying to use vaguely humanlike embodiment and
high-level mind-architecture together with computer science
2008/12/29 Ben Goertzel b...@goertzel.org:
Hi,
I expanded a previous blog entry of mine on hypercomputation and AGI into a
conference paper on the topic ... here is a rough draft, on which I'd
appreciate commentary from anyone who's knowledgeable on the subject:
2008/12/30 Ben Goertzel b...@goertzel.org:
It seems to come down to the simplicity measure... if you can have
simplicity(Turing program P that generates lookup table T) <
simplicity(compressed lookup table T)
then the Turing program P can be considered part of a scientific
explanation...
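A toy illustration of that comparison (my sketch, not Ben's: it uses
zlib-compressed length as a crude stand-in for the simplicity measure,
which the argument itself leaves abstract):

    import zlib

    def simplicity(description: bytes) -> int:
        # Crude proxy: smaller compressed size ~ simpler description.
        return len(zlib.compress(description))

    # Hypothetical case: a lookup table of squares versus a short
    # program that generates the same table.
    table = ",".join(str(n * n) for n in range(1000)).encode()
    program = b'",".join(str(n * n) for n in range(1000))'

    # If the generating program is simpler than the compressed table,
    # it counts as part of a scientific explanation, per the argument.
    print(simplicity(program) < simplicity(table))  # True here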
You can read the full essay online here
http://www.edge.org/3rd_culture/taleb08/taleb08.1_index.html
Will
2008/11/8 Mike Tintner [EMAIL PROTECTED]:
REAL LIFE IS NOT A CASINO
By Nassim Nicholas Taleb
On New Year's Day I received a prescient essay from Nassim Taleb, author of
The Black
2008/10/28 Ben Goertzel [EMAIL PROTECTED]:
On the other hand, I just want to point out that to get around Hume's
complaint you do need to make *some* kind of assumption about the regularity
of the world. What kind of assumption of this nature underlies your work on
NARS (if any)?
Not
2008/10/24 Mark Waser [EMAIL PROTECTED]:
But I thought I'd mention that for OpenCog we are planning on a
cross-language approach. The core system is C++, for scalability and
efficiency reasons, but the MindAgent objects that do the actual AI
algorithms should be creatable in various
2008/10/20 Mike Tintner [EMAIL PROTECTED]:
(There is a separate, philosophical discussion, about feasibility in a
different sense - the lack of a culture of feasibility, which is perhaps,
subconsciously, what Ben was also referring to - no one, but no one, in
AGI, including Ben, seems
2008/10/19 Dr. Matthias Heger [EMAIL PROTECTED]:
The process of outwardly expressing meaning may be fundamental to any social
intelligence, but the process itself does not need much intelligence.
Every email program can receive meaning, store meaning, and express it
outwardly in order to
2008/10/18 Ben Goertzel [EMAIL PROTECTED]:
1)
There definitely IS such a thing as a better algorithm for intelligence in
general. For instance, compare AIXI with an algorithm called AIXI_frog,
that works exactly like AIXI, but in between each two of AIXI's computational
operations, it
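The structure of the example, sketched schematically (real AIXI is
incomputable, so aixi_step below is a dummy placeholder; the point is
only that wedging wasted work between operations leaves behaviour
identical while making the algorithm strictly worse per unit of
computation):

    def aixi_step(history):
        # Dummy stand-in for one computational operation of the agent.
        return hash(tuple(history)) % 2

    def aixi(history, ops=10):
        actions = []
        for _ in range(ops):
            actions.append(aixi_step(history + actions))
        return actions

    def aixi_frog(history, ops=10, waste=1000):
        actions = []
        for _ in range(ops):
            for _ in range(waste):  # burn cycles between operations
                pass
            actions.append(aixi_step(history + actions))
        return actions

    # Same choices, more computation: a strictly worse general algorithm.
    assert aixi([0, 1]) == aixi_frog([0, 1])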
2008/10/17 Ben Goertzel [EMAIL PROTECTED]:
The difficulty of rigorously defining practical intelligence doesn't tell
you ANYTHING about the possibility of RSI ... it just tells you something
about the possibility of rigorously proving useful theorems about RSI ...
More importantly, you
2008/10/14 Terren Suydam [EMAIL PROTECTED]:
--- On Tue, 10/14/08, Matt Mahoney [EMAIL PROTECTED] wrote:
An AI that is twice as smart as a
human can make no more progress than 2 humans.
Spoken like someone who has never worked with engineers. A genius engineer
can outproduce 20 ordinary
Hi Terren,
I think humans provide ample evidence that intelligence is not necessarily
correlated with processing power. The genius engineer in my example solves a
given problem with *much less* overall processing than the ordinary engineer,
so in this case intelligence is correlated with
2008/10/4 Colin Hales [EMAIL PROTECTED]:
Hi Will,
It's not an easy thing to fully internalise the implications of quantum
degeneracy. I find physicists and chemists have no trouble accepting it, but
in the disciplines above that, various levels of mental brick walls are in
place. Unfortunately
Hi Colin,
I'm not entirely sure that computers can implement consciousness. But
I don't find your arguments sway me one way or the other. A brief
reply follows.
2008/10/4 Colin Hales [EMAIL PROTECTED]:
Next empirical fact:
(v) When you create a turing-COMP substrate the interface with space
I've started to wander away from my normal sub-cognitive level of AI,
and have been thinking about reasoning systems. One scenario I have
come up with is the 'foresight of extra knowledge' scenario.
Suppose Alice and Bob have decided to bet $10 on the weather in Alaska
in 10 days' time, whether
2008/9/16 Terren Suydam [EMAIL PROTECTED]:
Hi Will,
Such an interesting example in light of a recent paper, which deals with
measuring the difference between activation of the visual cortex and blood
flow to the area, depending on whether the stimulus was subjectively
invisible. If the
2008/9/15 Vladimir Nesov [EMAIL PROTECTED]:
I guess that, intuitively, the argument goes like this:
1) the economy is more powerful than individual agents; it allows the
power of intelligence in individual agents to be increased;
2) therefore, the economy has an intelligence-increasing potency;
3) so, we can
2008/9/8 Benjamin Johnston [EMAIL PROTECTED]:
Does this issue actually crop up in GA-based AGI work? If so, how did you
get around it? If not, would you have any comments about what makes AGI
special so that this doesn't happen?
Does it also happen in humans? I'd say yes, therefore it might
2008/9/5 Mike Tintner [EMAIL PROTECTED]:
MT: By contrast, all deterministic/programmed machines and computers are
guaranteed to complete any task they begin.
Will: If only such could be guaranteed! We would never have system hangs,
deadlocks. Even if it could be made so, computer systems
2008/9/6 Mike Tintner [EMAIL PROTECTED]:
Will,
Yes, humans are manifestly a RADICALLY different machine paradigm - if you
care to stand back and look at the big picture.
Employ a machine of any kind and in general, you know what you're getting -
some glitches (esp. with complex programs) etc
2008/9/5 Mike Tintner [EMAIL PROTECTED]:
By contrast, all deterministic/programmed machines and computers are
guaranteed to complete any task they begin.
If only such could be guaranteed! We would never have system hangs,
deadlocks. Even if it could be made so, computer systems would not
2008/9/4 Mike Tintner [EMAIL PROTECTED]:
Terren,
If you think it's all been said, please point me to the philosophy of AI
that includes it.
A programmed machine is an organized structure. A keyboard (and indeed a
computer with keyboard) are something very different - there is no
2008/9/2 Ben Goertzel [EMAIL PROTECTED]:
Yes, I agree that your Turing machine approach can model the same
situations, but the different formalisms seem to lend themselves to
different kinds of analysis more naturally...
I guess it all depends on what kinds of theorems you want to
2008/8/28 Valentina Poletti [EMAIL PROTECTED]:
Got ya, thanks for the clarification. That brings up another question. Why
do we want to make an AGI?
To understand ourselves as intelligent agents better? It might enable
us to have decent education policy, rehabilitation of criminals.
Even if
I've put up a short, fairly dense, unreferenced paper (basically an
email, but in a PDF to allow for maths) here.
http://codesoup.sourceforge.net/RSC.pdf
Any thoughts/ feed back welcomed. I'll try and make it more accessible
at some point, but I don't want to spend too much time on it at the
2008/9/2 Ben Goertzel [EMAIL PROTECTED]:
Hmmm..
Rather, I would prefer to model a self-modifying AGI system as something
like
F(t+1) = (F(t))( F(t), E(t) )
where E(t) is the environment at time t and F(t) is the system at time t
Are you assuming the system knows the environment totally?
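For concreteness, a minimal sketch of that recurrence (mine, with toy
semantics): the system at time t is a function applied to (itself, the
environment input at t), returning the system at t+1. Nothing in the
form forces E(t) to be the total environment state; it can equally be
the partial observation the system actually receives.

    def initial_system(self_fn, observation):
        # Trivial self-modification: each step returns a successor that
        # remembers how many rewrites have happened.
        count = getattr(self_fn, "count", 0) + 1
        def successor(inner_self, obs):
            return initial_system(inner_self, obs)
        successor.count = count
        return successor

    F = initial_system
    for E_t in ["obs0", "obs1", "obs2"]:  # partial observations, not all of E
        F = F(F, E_t)                     # F(t+1) = (F(t))(F(t), E(t))
    print(F.count)                        # 3 rewrites, one per step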
2008/8/29 Ben Goertzel [EMAIL PROTECTED]:
About recursive self-improvement ... yes, I have thought a lot about it, but
don't have time to write a huge discourse on it here
One point is that if you have a system with N interconnected modules, you
can approach RSI by having the system
2008/8/30 Ben Goertzel [EMAIL PROTECTED]:
On Sat, Aug 30, 2008 at 10:06 AM, William Pearson [EMAIL PROTECTED]
wrote:
2008/8/29 Ben Goertzel [EMAIL PROTECTED]:
About recursive self-improvement ... yes, I have thought a lot about it,
but
don't have time to write a huge discourse
2008/8/30 Ben Goertzel [EMAIL PROTECTED]:
Isn't it an evolutionarily stable strategy for the modification-system
module to change to a state where it does not change itself?
Not if the top-level goals are weighted toward long-term growth
Let me
give you a just-so story and you can tell
2008/8/30 Ben Goertzel [EMAIL PROTECTED]:
Don't they have
to specify a specific state? Or am I reading
http://opencog.org/wiki/OpenCogPrime:GoalAtom wrong?
They don't have to specify a specific state. A goal could
be some PredicateNode P expressing an abstract evaluation of
state,
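A sketch of that distinction, in toy form (not OpenCog's actual API):
the goal is a predicate that scores any state, rather than a pointer to
one specific target state.

    from dataclasses import dataclass
    from typing import Callable, Dict

    State = Dict[str, float]

    @dataclass
    class Goal:
        # A goal as an abstract evaluation of state, like a PredicateNode P.
        evaluate: Callable[[State], float]

    # A "long-term growth" flavoured goal (attribute names are made up):
    # it prefers states with more knowledge, whatever else they contain.
    growth = Goal(evaluate=lambda s: s.get("knowledge", 0.0))

    s1 = {"knowledge": 0.2, "energy": 0.9}
    s2 = {"knowledge": 0.7, "energy": 0.1}
    print(growth.evaluate(s2) > growth.evaluate(s1))  # True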
2008/8/29 j.k. [EMAIL PROTECTED]:
On 08/28/2008 04:47 PM, Matt Mahoney wrote:
The premise is that if humans can create agents with above human
intelligence, then so can they. What I am questioning is whether agents at
any intelligence level can do this. I don't believe that agents at any
2008/8/29 j.k. [EMAIL PROTECTED]:
On 08/29/2008 01:29 PM, William Pearson wrote:
2008/8/29 j.k.[EMAIL PROTECTED]:
An AGI with an intelligence the equivalent of a 99th-percentile human
might be creatable, recognizable and testable by a human (or group of
humans) of comparable
2008/8/25 Terren Suydam [EMAIL PROTECTED]:
--- On Sun, 8/24/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
On Sun, Aug 24, 2008 at 5:51 PM, Terren Suydam
wrong. This ability might be an end in itself, the whole point of
building an AI, when considered as applying to the dynamics of the world
2008/8/23 Matt Mahoney [EMAIL PROTECTED]:
Valentina Poletti [EMAIL PROTECTED] wrote:
I was wondering why no-one had brought up the information-theoretic aspect
of this yet.
It has been studied. For example, Hutter proved that the optimal strategy of
a rational goal seeking agent in an
2008/8/11 Mike Tintner [EMAIL PROTECTED]:
Will: thought you meant rational as applied to the system builder :P
Consistency of systems is overrated, as far as I am concerned.
Consistency is only important if the lack of it ever becomes exploited. A
system that alters itself to be consistent after
2008/8/10 Mike Tintner [EMAIL PROTECTED]:
Just as you are in a rational, specialist way picking off isolated features,
so, similarly, rational, totalitarian thinkers used to object to the crazy,
contradictory complications of the democratic, conflict system of
decisionmaking by contrast with
Is there a mathematical, Wikipedia-sized definition of a Goertzelian
pattern out there?
It would make assessing the underpinnings of OpenCog Prime easier.
Will Pearson
2008/8/3 Richard Loosemore [EMAIL PROTECTED]:
I probably don't need to labor the rest of the story, because you have heard
it before. If there is a brick wall between the overall behavior of the
system and the design choices that go into it - if it is impossible to go
from 'I want the
2008/7/22 Mike Archbold [EMAIL PROTECTED]:
It looks to me to be borrowed from Aristotle's ethics. Back in my college
days, I was trying to explain my project and the professor kept
interrupting me to ask: What does it do? Tell me what it does. I don't
understand what your system does.
2008/7/14 Terren Suydam [EMAIL PROTECTED]:
Will,
--- On Fri, 7/11/08, William Pearson [EMAIL PROTECTED] wrote:
Purpose and goal are not intrinsic to systems.
I agree this is true with designed systems.
And I would also say of evolved systems. My finger's purpose could
equally well be said
2008/7/6 Abram Demski [EMAIL PROTECTED]:
In fact, adding hidden predicates and entities in the case of Markov
logic makes the space of models Turing-complete (and even bigger than
that if higher-order logic is used). But if I am not mistaken the
clustering used in the paper I refer to is not
2008/7/3 Steve Richfield [EMAIL PROTECTED]:
William and Vladimir,
IMHO this discussion is based entirely on the absence of any sort of
interface spec. Such a spec is absolutely necessary for a large AGI project
to ever succeed, and such a spec could (hopefully) be wrung out to at least
avoid
Terren,
Remember when I said that a purpose is not the same thing as a goal?
The purpose that the system might be said to have embedded is
attempting to maximise a certain signal. This purpose presupposes no
ontology. The fact that this signal is attached to a human means the
system as a
2008/7/3 Terren Suydam [EMAIL PROTECTED]:
--- On Wed, 7/2/08, William Pearson [EMAIL PROTECTED] wrote:
Evolution! I'm not saying your way can't work, just saying why I
short-cut where I do. Note a thing has a purpose if it is useful to
apply the design stance* to it. There are two things
2008/7/2 Vladimir Nesov [EMAIL PROTECTED]:
On Thu, Jul 3, 2008 at 12:59 AM, William Pearson [EMAIL PROTECTED] wrote:
2008/7/2 Vladimir Nesov [EMAIL PROTECTED]:
On Wed, Jul 2, 2008 at 9:09 PM, William Pearson [EMAIL PROTECTED] wrote:
They would get less credit from the human supervisor. Let me
2008/7/3 Vladimir Nesov [EMAIL PROTECTED]:
On Thu, Jul 3, 2008 at 10:45 AM, William Pearson [EMAIL PROTECTED] wrote:
Nope. I don't include B in A because if A' is faulty it can cause
problems to whatever is in the same vmprogram as it, by overwriting
memory locations. A' being a separate
Sorry about the long thread jack
2008/7/3 Vladimir Nesov [EMAIL PROTECTED]:
On Thu, Jul 3, 2008 at 4:05 PM, William Pearson [EMAIL PROTECTED] wrote:
Because it is dealing with powerful stuff, when it gets it wrong it
goes wrong powerfully. You could lock the experimental code away in a
sand
2008/7/3 Vladimir Nesov [EMAIL PROTECTED]:
On Thu, Jul 3, 2008 at 9:36 PM, William Pearson [EMAIL PROTECTED] wrote:
Sorry about the long thread jack
2008/7/3 Vladimir Nesov [EMAIL PROTECTED]:
On Thu, Jul 3, 2008 at 4:05 PM, William Pearson [EMAIL PROTECTED] wrote:
Because it is dealing
Sorry about the late reply.
[snip: some stuff sorted out]
2008/6/30 Vladimir Nesov [EMAIL PROTECTED]:
On Tue, Jul 1, 2008 at 2:02 AM, William Pearson [EMAIL PROTECTED] wrote:
2008/6/30 Vladimir Nesov [EMAIL PROTECTED]:
If internals are programmed by humans, why do you need automatic
system
2008/7/2 Terren Suydam [EMAIL PROTECTED]:
Mike,
This is going too far. We can reconstruct to a considerable
extent how humans think about problems - their conscious thoughts.
Why is it going too far? I agree with you that we can reconstruct thinking,
to a point. I notice you didn't say
2008/7/2 Vladimir Nesov [EMAIL PROTECTED]:
On Wed, Jul 2, 2008 at 2:48 PM, William Pearson [EMAIL PROTECTED] wrote:
Okay, let us clear things up. There are two things that need to be
designed: a computer architecture or virtual machine, and the programs
that form the initial set of programs within
2008/7/2 Abram Demski [EMAIL PROTECTED]:
How do you assign credit to programs that are good at generating good
children?
I never directly assign credit, apart from the first stage. The rest
of the credit assignment is handled by the vmprograms, er,
programming.
Particularly, could a program
2008/7/2 Vladimir Nesov [EMAIL PROTECTED]:
On Wed, Jul 2, 2008 at 9:09 PM, William Pearson [EMAIL PROTECTED] wrote:
They would get less credit from the human supervisor. Let me expand on
what I meant about the economic competition. Let us say vmprogram A
makes a copy of itself, called
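A toy sketch of the economic competition being described (all numbers
and names made up): the supervisor pays credit to useful vmprograms,
every program pays rent for the resources it holds, and an unhelpful
copy simply goes broke rather than damaging the system.

    class VMProgram:
        def __init__(self, name, credit):
            self.name, self.credit = name, credit

    def run_period(programs, rent, rewards):
        survivors = []
        for p in programs:
            p.credit += rewards.get(p.name, 0)  # supervisor's payments
            p.credit -= rent                    # cost of memory/CPU held
            if p.credit > 0:
                survivors.append(p)             # broke programs lose resources
        return survivors

    programs = [VMProgram("A", 10), VMProgram("A_copy", 10)]
    for _ in range(5):
        # The supervisor finds A useful and the copy useless.
        programs = run_period(programs, rent=3, rewards={"A": 5})
    print([p.name for p in programs])  # ['A']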
2008/6/30 Terren Suydam [EMAIL PROTECTED]:
Hi Will,
--- On Mon, 6/30/08, William Pearson [EMAIL PROTECTED] wrote:
The only way to talk coherently about purpose within
the computation is to simulate self-organized, embodied
systems.
I don't think you are quite getting my system. If you
2008/6/30 Terren Suydam [EMAIL PROTECTED]:
Ben,
I agree, an evolved design has limits too, but the key difference between a
contrived design and one that is allowed to evolve is that the evolved
critter's intelligence is grounded in the context of its own 'experience',
whereas the
Hello Terren
A Von Neumann computer is just a machine. Its only purpose is to compute.
When you get into higher-level purpose, you have to go up a level to the
stuff being computed. Even then, the purpose is in the mind of the programmer.
What I don't see is why your simulation gets away
2008/6/30 Vladimir Nesov [EMAIL PROTECTED]:
On Mon, Jun 30, 2008 at 10:34 PM, William Pearson [EMAIL PROTECTED] wrote:
I'm seeking to do something half way between what you suggest (from
bacterial systems to human alife) and AI. I'd be curious to know
whether you think it would suffer from
2008/6/30 Vladimir Nesov [EMAIL PROTECTED]:
On Tue, Jul 1, 2008 at 1:31 AM, William Pearson [EMAIL PROTECTED] wrote:
2008/6/30 Vladimir Nesov [EMAIL PROTECTED]:
It is the wrong level of organization: computing hardware is the physics
of computation; it isn't meant to implement specific
2008/6/27 Steve Richfield [EMAIL PROTECTED]:
Russell and William,
OK, I think that I am finally beginning to get it. No one here is really
planning to do wonderful things that people can't reasonably do, though
Russell has pointed out some improvements which I will comment on
separately.
I
I'm going to ignore the oversimplifications of a variety of people's positions.
But no one in AGI knows how to design or instruct a machine to work without
algorithms - or, to be more precise, *complete* algorithms. It's unthinkable
- it seems like asking someone not to breathe... until, like
2008/6/26 Steve Richfield [EMAIL PROTECTED]:
Jiri previously noted that perhaps AGIs would best be used to manage the
affairs of humans so that we can do as we please without bothering with the
complex details of life. Of course, people and some (communist) governments
now already perform
2008/6/23 Bob Mottram [EMAIL PROTECTED]:
2008/6/22 William Pearson [EMAIL PROTECTED]:
2008/6/22 Vladimir Nesov [EMAIL PROTECTED]:
Well, since intelligence explosions haven't happened previously in our
light cone, they can't be a simple physical pattern
Probably the last intelligence explosion
2008/6/23 Vladimir Nesov [EMAIL PROTECTED]:
On Mon, Jun 23, 2008 at 12:50 AM, William Pearson [EMAIL PROTECTED] wrote:
2008/6/22 Vladimir Nesov [EMAIL PROTECTED]:
Two questions:
1) Do you know enough to estimate which scenario is more likely?
Well since intelligence explosions haven't
While SIAI fills that niche somewhat, it concentrates on the
Intelligence explosion scenario. Is there a sufficient group of
researchers/thinkers with a shared vision of the future of AI coherent
enough to form an organisation? This organisation would discuss,
explore and disseminate what can be
2008/6/22 Vladimir Nesov [EMAIL PROTECTED]:
Two questions:
1) Do you know enough to estimate which scenario is more likely?
Well, since intelligence explosions haven't happened previously in our
light cone, they can't be a simple physical pattern, so I think
non-exploding intelligences have the
2008/6/21 Wei Dai [EMAIL PROTECTED]:
A different way to break Solomonoff Induction takes advantage of the fact
that it restricts Bayesian reasoning to computable models. I wrote about
this in "is induction unformalizable?" [2] on the everything mailing list.
Abram Demski also made similar points
2008/6/12 J Storrs Hall, PhD [EMAIL PROTECTED]:
I'm getting several replies to this that indicate that people don't understand
what a utility function is.
If you are an AI (or a person) there will be occasions where you have to make
choices. In fact, pretty much everything you do involves
2008/6/12 J Storrs Hall, PhD [EMAIL PROTECTED]:
On Thursday 12 June 2008 02:48:19 am, William Pearson wrote:
The kinds of choices I am interested in designing for at the moment
are: should program X or program Y get control of this bit of memory or
IRQ for the next time period? X and Y can
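A minimal sketch of that kind of choice (assumed bookkeeping,
hypothetical numbers): the machine compares the expected utility of
giving each program the resource for the next period and hands it to
the higher bidder.

    def allocate(resource, bids):
        # bids: program name -> expected utility of holding `resource`
        # for the next time period.
        return max(bids, key=bids.get)

    bids = {"X": 0.42, "Y": 0.57}   # hypothetical utility estimates
    print(allocate("IRQ7", bids))   # Y gets the IRQ next period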
2008/6/11 J Storrs Hall, PhD [EMAIL PROTECTED]:
Vladimir,
You seem to be assuming that there is some objective utility for which the
AI's internal utility function is merely the indicator, and that if the
indicator is changed it is thus objectively wrong and irrational.
There are two
2008/6/4 Bob Mottram [EMAIL PROTECTED]:
2008/6/4 J Storrs Hall, PhD [EMAIL PROTECTED]:
What is the rock thinking?
T h i s i s w a a a y o f f t o p i c . . .
Rocks are obviously superintelligences. By behaving like inert matter
and letting us build monuments and gravel pathways
2008/5/27 Mike Tintner [EMAIL PROTECTED]:
Will: And you are part of the problem, insisting that an AGI should be
tested by its ability to learn on its own and not get instruction/help
from other agents, be they human or other artificial intelligences.
I insist[ed] that an AGI should be tested on
2008/5/27 Steve Richfield [EMAIL PROTECTED]:
William,
This sounds like you should be announcing the analysis phase! Detailed
comments follow...
Design/research/analysis, call it what you will.
On 5/26/08, William Pearson [EMAIL PROTECTED] wrote:
VRRM - Virtual Reinforcement Resource
2008/5/27 Mike Tintner [EMAIL PROTECTED]:
Actually, that's an absurdity. The whole story of evolution tells us that
the problems of living in this world for any species of
creature/intelligence at any level can only be solved by a SOCIETY of
individuals. This whole dimension seems to be
VRRM - Virtual Reinforcement Resource Managing Machine
Overview
This is a virtual machine designed to allow non-catastrophic,
unconstrained experimentation with programs in a system as close to the
hardware as possible. This should allow the system to change as much
as is possible and needed for
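One way to read "non-catastrophic, unconstrained experimentation", as a
structural sketch (my assumptions, not the actual VRRM design):
programs can rewrite their own slots freely, but the resource ledger
sits outside their reach, so a failed experiment costs the experimenter
rather than the system.

    class VRRM:
        def __init__(self):
            self.slots = {}    # slot -> program code, freely rewritable
            self.ledger = {}   # slot -> credit, managed by the VM only

        def install(self, slot, code, credit):
            self.slots[slot] = code
            self.ledger[slot] = credit

        def step(self, slot):
            try:
                # Not a real sandbox, just the shape of one: the program
                # runs in its own namespace and cannot touch the ledger.
                exec(self.slots[slot], {"__builtins__": {}}, {})
            except Exception:
                self.ledger[slot] -= 1  # failure costs credit, nothing more

    vm = VRRM()
    vm.install("exp1", "1/0", credit=3)  # a deliberately broken experiment
    vm.step("exp1")
    print(vm.ledger["exp1"])             # 2: the VM itself is unharmed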
2008/5/16 Steve Richfield [EMAIL PROTECTED]:
Does anyone else here share my dream of a worldwide AI with all of the
knowledge of the human race to support it - built with EXISTING Wikipedia
and Dr. Eliza software and a little glue to hold it all together?
I'm taking this as a jumping off
Matt Mahoney:
I am not sure what you mean by AGI. I consider a measure of intelligence
to be the degree to which goals are satisfied in a range of environments.
It does not matter what the goals are. They may seem irrational to you.
The goal of a smart bomb is to blow itself up at a
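Matt's measure, in a toy sketch with made-up numbers: intelligence as
expected goal-satisfaction over a weighted range of environments, with
no constraint on what the goals are.

    def intelligence(agent_scores, env_weights):
        # agent_scores[e]: degree to which the agent satisfied its goal
        # in environment e; env_weights: how the range of environments
        # is weighted (both hypothetical here).
        return sum(env_weights[e] * s for e, s in agent_scores.items())

    scores = {"maze": 0.9, "market": 0.4, "dialogue": 0.6}
    weights = {"maze": 0.2, "market": 0.5, "dialogue": 0.3}
    print(intelligence(scores, weights))  # roughly 0.56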
2008/5/11 Russell Wallace [EMAIL PROTECTED]:
On Sat, May 10, 2008 at 10:10 PM, William Pearson [EMAIL PROTECTED] wrote:
It depends on the system you are designing on. I think you can easily
create as many types of sandbox as you want in programming language E
(1) for example. If the principle
2008/5/10 Richard Loosemore [EMAIL PROTECTED]:
This is still quite ambiguous on a number of levels, so would it be possible
for you to give us a road map of where the argument is going? At the moment
I am not sure what the theme is.
That is because I am still ambiguous as to what the later
After getting off completely on the wrong foot the last time I posted
something, and not having had time to read the papers I should have, I
have decided to try and start afresh and outline where I am coming
from. I'll get around to doing a proper paper later.
There are two possible modes for designing a
2008/4/27 Dr. Matthias Heger [EMAIL PROTECTED]:
Ben Goertzel [mailto:[EMAIL PROTECTED]] wrote 26 April 2008 19:54
Yes, truly general AI is only possible in the case of infinite
processing power, which is
likely not physically realizable.
How much generality can be achieved with
2008/4/26 Dr. Matthias Heger [EMAIL PROTECTED]:
How general should be AGI?
My answer: as *potentially* general as possible, in a similar fashion
to the way a UTM is potentially as general as possible, but with more
purpose.
There are plenty of problems you can define that don't need the
halting
On 21/04/2008, Ed Porter [EMAIL PROTECTED] wrote:
So when people are given a sentence such as the one you quoted about verbs,
pronouns, and nouns, presuming they have some knowledge of most of the words
in the sentence, they will understand the concept that verbs are 'doing
words'. This is
On 19/04/2008, Ed Porter [EMAIL PROTECTED] wrote:
WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?
I'm not quite sure how to describe it, but this brief sketch will have
to do until I get some more time. These may be in some new AI
material, but I haven't had the chance to read up much recently.
On 05/04/2008, Vladimir Nesov [EMAIL PROTECTED] wrote:
On Sat, Apr 5, 2008 at 12:24 AM, William Pearson [EMAIL PROTECTED] wrote:
On 01/04/2008, Vladimir Nesov [EMAIL PROTECTED] wrote:
This question supposes a specific kind of architecture, where these
things are in some sense
On 01/04/2008, Vladimir Nesov [EMAIL PROTECTED] wrote:
On Tue, Apr 1, 2008 at 6:30 PM, William Pearson [EMAIL PROTECTED] wrote:
The resource allocation problem and why it needs to be solved first
How much memory and processing power should you apply to the following
things
The resource allocation problem and why it needs to be solved first
How much memory and processing power should you apply to the following things?:
Visual Processing
Reasoning
Sound Processing
Seeing past experiences and how they apply to the current one
Searching for new ways of doing things
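One way to make the question concrete, as a toy sketch with made-up
weights: divide a fixed budget across the tasks above in proportion to
learned estimates of their current value.

    budget_ms = 100  # processing budget per tick (hypothetical)
    value = {        # learned usefulness estimates (hypothetical)
        "visual processing": 0.30,
        "reasoning": 0.25,
        "sound processing": 0.10,
        "recalling past experiences": 0.20,
        "searching for new methods": 0.15,
    }
    total = sum(value.values())
    for task, v in value.items():
        print(f"{task}: {budget_ms * v / total:.0f} ms")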
On 26/03/2008, Ben Goertzel [EMAIL PROTECTED] wrote:
Hi all,
A lot of students email me asking me what to read to get up to speed on AGI.
So I started a wiki page called Instead of an AGI Textbook,
http://www.agiri.org/wiki/Instead_of_an_AGI_Textbook#Computational_Linguistics
I've
On 30/03/2008, Kingma, D.P. [EMAIL PROTECTED] wrote:
Although I sympathize with some of Hawkins' general ideas about unsupervised
learning, his current HTM framework is unimpressive in comparison with
state-of-the-art techniques such as Hinton's RBMs, LeCun's
convolutional nets and the
On 30/03/2008, Kingma, D.P. [EMAIL PROTECTED] wrote:
Intelligence is not *only* about the modalities of the data you get,
but modalities are certainly important. A deafblind person can still
learn a lot about the world with taste, smell, and touch, but the
senses one has access to define
On 25/03/2008, Vladimir Nesov [EMAIL PROTECTED] wrote:
Simple systems can be computationally universal, so it's not an issue
in itself. On the other hand, no learning algorithm is universal,
there are always distributions that given algorithms will learn
miserably. The problem is to find a
On 26/03/2008, Mark Waser [EMAIL PROTECTED] wrote:
First a riddle: What can be all learning algorithms, but is none?
A human being!
Well, my answer was a common PC, which I hope is more illuminating
because we know it well.
But a human being works, as does any future AI design, as far as I am
On 24/03/2008, Jim Bromer [EMAIL PROTECTED] wrote:
To try to understand what I am talking about, start by imagining a
simulation of some physical operation, like a part of a complex factory in a
Sim City kind of game. In this kind of high-level model no one would ever
imagine all of the
On 16/03/2008, Ed Porter [EMAIL PROTECTED] wrote:
I am not an expert on neural nets, but from my limited understanding it is far
from clear exactly what the new insight into neural nets referred to in this
article
is, other than that the timing of neuron firings is important in the brain, which
Anyone blogging what they are finding interesting in AGI-08?
Will Pearson
On 28/02/2008, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
On 2/28/08, William Pearson [EMAIL PROTECTED] wrote:
Note I want something different than computational universality. E.g.
Von Neumann architectures are generally programmable, Harvard
architectures aren't. As they can't
On 28/02/2008, Mike Tintner [EMAIL PROTECTED] wrote:
You must first define its existing skills, then define the new challenge
with some degree of precision - then explain the principles by which it will
extend its skills. It's those principles of extension/generalization that
are the
On 02/03/2008, Mike Tintner [EMAIL PROTECTED] wrote:
Jeez, Will, the point of Artificial General Intelligence is that it can
start adapting to an unfamiliar situation and domain BY ITSELF. And your
FIRST and only response to the problem you set was to say: I'll get someone
to tell it what
On 29/02/2008, Abram Demski [EMAIL PROTECTED] wrote:
I'm an undergrad who's been lurking here for about a year. It seems to me that
many people on this list take Solomonoff Induction to be the ideal learning
technique (for unrestricted computational resources). I'm wondering what
justification
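For readers wanting the object under discussion, a toy sketch of the
Solomonoff-style prior (real Solomonoff induction is incomputable; this
enumerates an invented, trivial program space where a "program" is a
bit pattern that repeats):

    from itertools import product

    def programs(max_len):
        # Toy "programs": bit strings, interpreted as repeating patterns.
        for n in range(1, max_len + 1):
            for bits in product("01", repeat=n):
                yield "".join(bits)

    def prior(p):
        return 2 ** -len(p)  # the 2^-length weighting

    def predicts(p, data):
        return all(data[i] == p[i % len(p)] for i in range(len(data)))

    data = "010101"
    for p in (q for q in programs(4) if predicts(q, data)):
        print(p, prior(p))  # shorter consistent programs dominate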
On 01/03/2008, Jey Kottalam [EMAIL PROTECTED] wrote:
On Sat, Mar 1, 2008 at 3:10 AM, William Pearson [EMAIL PROTECTED] wrote:
Keeping the same general shape of the system (trying to account for
all the detail) means we are likely to overfit, due to trying to model
systems