Consider that, folks, to be a challenge: to those who think there is
such a definition, I await your reply.
Richard Loosemore
Russell Wallace wrote:
On 4/26/07, *Richard Loosemore* [EMAIL PROTECTED] wrote:
Russell Wallace wrote:
I disagree. The human cognitive system is very closely tied to the
hardware it runs on. Understanding it in anywhere near the level of
detail
positives)?
If a proposed definition fails that test, it goes squarely in my (b)
category above.
And if all the proposed definitions go in either (a), (b) or (c), then
they are all, as I said before, pointless.
Richard Loosemore.
that no formal definition is possible, and
that the best we can do is produce informal definitions, that's GREAT.
I think that informal lists of characteristics can sometimes be a help.
But there is a big difference between that and claiming formal
definitions.
Richard Loosemore
might not be disagreeing.
Richard Loosemore
Pragmatically, we can posit a particular pattern-recognition system S, and
talk about all the patterns in the graph of f that S can recognize.
If we let S = a typical human, then I suggest that the above definition of
intelligence captures a lot
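A minimal sketch of how that definition could be operationalized, assuming "the graph of f" means the set of (x, f(x)) pairs over a finite domain, and taking a toy stand-in for the recognizer S. Every name below, and the choice of recognizer, is an illustrative assumption rather than anything proposed in this thread:

from itertools import combinations

def graph_of(f, domain):
    # The graph of f: all (x, f(x)) pairs over a finite domain.
    return [(x, f(x)) for x in domain]

def S_collinear(points):
    # Toy recognizer S: "recognizes" a pattern when the points lie on one line.
    (x0, y0), (x1, y1) = points[0], points[1]
    return all((y - y0) * (x1 - x0) == (y1 - y0) * (x - x0) for x, y in points)

def patterns_recognized(f, domain, S, size=3):
    # Crude score: how many size-k pieces of the graph of f does S find a pattern in?
    return sum(1 for piece in combinations(graph_of(f, domain), size)
               if S(list(piece)))

print(patterns_recognized(lambda x: 2 * x + 1, range(6), S_collinear))   # 20
print(patterns_recognized(lambda x: x * 4 % 11, range(6), S_collinear))  # 2

With S fixed, a regular f scores high and a scrambled f scores low, which is the intuition: intelligence is relativized to the pattern-recognition power of S.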
it was brainless of me to continue ;-)
Richard Loosemore.
% of the AI research
person-hours might be getting poured down the toilet because some people
are claiming scientific validity for what they do, when in fact they are
just pissing into the wind. This is no joke.
Richard Loosemore.
My definition covers your point:
If I drop this pencil, does
in terms of agents, goals etc. But when
you dissect the meanings of terms like agents and goals you find the
same surreptitious dependence on subjective terms. Pure nonsense. Sham
science.
Richard Loosemore.
DEREK ZAHN wrote:
Richard Loosemore writes:
The best we can do is to use the human design as a close inspiration
-- we do not have to make an exact copy, we just need to get close
enough to build something in the same family of systems, that's all --
and set up progress criteria based on how
the only response I have gotten from anyone is the same 'don't
really think it is a problem'.
Richard Loosemore
DEREK ZAHN wrote:
Richard Loosemore writes:
I am talking about distilling the essential facts uncovered by
cognitive science into a unified formalism.
Just imagine all of your favorite models and theories in Cog Sci,
integrated in such a way that they become an actual system
never use these to build a system.
Certainly this is not the kind of stuff I was talking about.
Richard Loosemore
intelligences.
The trick is to find a non-circular definition that leaves out the
thermostats, cuddly toys, Conway's Game of Life and the little program I
once wrote in Fortran that printed out HAPPY BIRTHDAY.
Very old argument.
Richard Loosemore.
that the argument is
wrong, that you cannot divulge?
Richard Loosemore
is not
on trying (hard) to actually do the integration. Mine is.
Richard Loosemore.
On 4/25/07, *Richard Loosemore* [EMAIL PROTECTED] wrote:
Benjamin Goertzel wrote:
Derek --
As examples I'd vote for
-- Bernard Baars' global workspace
close to the kind of approach that I am talking about anyway, so your
conclusions about your own effort may not carry over.
Richard Loosemore.
Eugen Leitl wrote:
On Wed, Apr 25, 2007 at 02:02:44PM -0400, Richard Loosemore wrote:
I am the one who is actually getting on with the job and doing it, and I
say that not only is it doable, but as far as I can see it is converging
on an extremely powerful, consistent and usable model
on the claim that I actually made.
Richard Loosemore.
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Hmm... I think my point may have gotten lost in the confusion here.
What I was trying to say was *suppose* I produced an AGI design that
used pretty much the same principles as those that operate in the human
cognitive
= capability to compress text and
video better than any technology that now exists. The existence of such
compression, proved by a simple test, would prove the existence of AGI.
Would an AGI with exactly my (human) intelligence be able to pass your
compression test?
Richard Loosemore
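As a concrete illustration of what the proposed test actually measures, here is a minimal sketch using the Python standard library's compressors as a stand-in for "technology that now exists." The corpus and the baselines are illustrative assumptions; the real benchmarks of this kind run on large text corpora:

import bz2
import zlib

corpus = b"the cat sat on the mat. " * 1000  # stand-in for a real text corpus

for name, compress in (("zlib", zlib.compress), ("bz2", bz2.compress)):
    size = len(compress(corpus))
    print(f"{name}: {size} bytes, ratio {size / len(corpus):.4f}")

A candidate AGI would have to beat such baselines, and the decompressor must reproduce the input byte for byte, which is why determinism matters in the exchange that follows.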
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Would an AGI with exactly my (human) intelligence be able to pass your
compression test?
Only if your intelligence was uploaded to a deterministic machine. The human
brain is not deterministic.
Then your test is surely
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Would an AGI with exactly my (human) intelligence be able to pass your
compression test?
Only if your intelligence was uploaded to a deterministic machine
that I would not want that.
Richard Loosemore
Eric B. Ramsay wrote:
Actually Richard, these are the things you imagine you would like to do
given your current level of intelligence. I suspect very much that the
moment you went super intelligent there would be a paradigm change in
what you
William Pearson wrote:
On 13/04/07, Richard Loosemore [EMAIL PROTECTED] wrote:
To convey this subtlety as simply as I can, I would suggest that you ask
yourself how much intelligence is being assumed in the preprocessing
system that does the work of (a) picking out patterns to be considered
them.
Or, to be precise, it is not at all obvious that such a situation will
ever exist.
Richard Loosemore.
Eric Baum wrote:
Richard efforts (some people seem to think that there is something
Richard inherently impossible about a human being able to design
Richard something smarter than itself, but that idea is really just
Richard science-fiction hearsay, not grounded in any real
Richard limitations).
, and who keep
repeating mistakes and running around in circles, we are going to get
absolutely nowhere.
That makes me think of something else I meant to say, but I will put
that in a separate message.
Richard Loosemore
Eugen Leitl wrote:
http://scienceblogs.com
What if I organised a summer school to do such a thing?
This is just my spontaneous first thought about the idea. If there was
enough initial interest, I would be happy to scope it out more thoroughly.
Richard Loosemore
was valuable/not valuable in each one. Still, something should be
squeezable into a short course.
I'm going to check out some local possible venues (e.g. by the edge of a
lake in Upstate NY in the summer time).
Will get back soon.
Richard Loosemore.
only 90 days of use.
Alas, I fear that any attempt to build a horse (sorry, an AI) by making
a committee of all the existing AI techniques is going to produce
something that smells exactly as sweet as a camel
Richard Loosemore
who thinks that that last bit is also encoded in the human genome
has got a heck of a lot of work to do ... and only 14,000 genes to
write it down on.
Richard Loosemore
Eugen Leitl wrote:
On Thu, Apr 05, 2007 at 02:03:32PM +0200, Shane Legg wrote:
I didn't mean to imply that all
our minds to it.
I am very impatient to get things done, and this sometimes comes out in
a brusque tone that nobody should take seriously. ;-)
Richard Loosemore.
David Clark wrote:
- Original Message -
From: Richard Loosemore [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Saturday, March 24, 2007 1:46 PM
Subject: Environments and Languages for AGI [WAS Re: [agi] My proposal for
an AGI agenda]
As for all the other talk on this list, recently
a clue about what they are
trying to build, and why, the question of what language (or environment)
they need to use will answer itself.
Richard Loosemore.
Ben Goertzel wrote:
Richard Loosemore wrote:
As for all the other talk on this list, recently, about programming
languages and the need for math, etc., I find myself amused by the
irrelevance of most of it: when someone gets a clue about what they
are trying to build, and why, the question
the things that work? Even if it did not use DNA-like
strings of symbols?
If they were doing this, I'd have to pay more attention.
I don't think this is happening, but I can't be sure.
If any one has any more perspective on this, I'd be interested.
Richard Loosemore
against 'emergence' and apply them in the
context of this one example, a lot of them, I believe, will start to
look rather silly.
Richard Loosemore.
Ben Goertzel wrote:
Like so many other terms relevant to AGI, emergence has a lot of
different meanings.
Some have used a very strong
;-) ) it
is the fact that you try to paint some people as extremists when in fact
they are just the same as you.
Richard Loosemore.
P.S. About Daniel Amit:
I haven't read the book, but are you saying he demonstrates coherent,
*meaningful* symbol processing as the transition of the dynamics
Russell Wallace wrote:
On 3/12/07, *Richard Loosemore* [EMAIL PROTECTED] wrote:
I'm still not quite sure if what I said came across clearly, because
some of what you just said is so far away from what I intended that I
have to make some kind of response
Russell Wallace wrote:
On 3/13/07, *Richard Loosemore* [EMAIL PROTECTED] wrote:
Good god no. It *is* the program. It is the architecture of an AI.
So it is part of the AI then, like I said.
Regarding the use of readable names. The atomic units
to resist that temptation. The more
concrete stuff, when it arrives, is going to take some of this stuff as
a course prerequisite.
Richard Loosemore.
how the
motivational system is affecting the behavior. For that, we need
automatic triggers that attach to the things we consider bad.
Richard Loosemore.
too close to the material to be able to see where it is confusing, but I
also do not want to release the full text until it is in proper shape.
Richard Loosemore.
am not sure what is left that
really deserves to be called a module any more.
My feeling is that there is a continuum, rather than a module versus
non-module way of looking at things.
Richard Loosemore
it exactly
fits with the way I see the processes of conceptual combination
happening, anyway.
Richard Loosemore.
P.S. Off topic: Does anyone know of a really reliable brand of salad
spinner? Hate the damn things: they keep breaking on me.
J. Storrs Hall, PhD. wrote:
On Monday 12 March
Russell Wallace wrote:
On 3/12/07, *Richard Loosemore* [EMAIL PROTECTED] wrote:
I'm not sure if you're just summarizing what someone would mean if they
were talking about 'logical representation,' or advocating it.
I'm saying there are 5 different things
Russell Wallace wrote:
On 3/12/07, *Richard Loosemore* [EMAIL PROTECTED] wrote:
This is puzzling, in a way, because this is my ammunition that you are
using here! That is exactly what I am trying to do: invent an AIXML.
I am a little baffled because you
J. Storrs Hall, PhD. wrote:
On Monday 12 March 2007 10:42, Richard Loosemore wrote:
P.S. Off topic: Does anyone know of a really reliable brand of salad
spinner? Hate the damn things: they keep breaking on me.
http://www.gmi-inc.com/Products/beckman%20tl100.htm
Josh
Aw, heck, I
J. Storrs Hall, PhD. wrote:
On Monday 12 March 2007 10:42, Richard Loosemore wrote:
... Overlooking the practical deficiencies of actual Lego as
a material for dealing with food, one could imagine a kind of neoLego
that really was adequate for making all the tools in my kitchen. Grant
me
names?
Huh?
That sounds like, after all, I communicated nothing whatsoever. I don't
know if that is supposed to be a serious point or not. I will assume not.
Richard Loosemore.
Russell Wallace wrote:
Ah! That makes your position much clearer, thanks. To paraphrase to make
sure I understand
-systems/complexity
approach, Ben has his eclectic approach, Pei has his NARS approach and
Peter Voss has something else again (does it make sense to call it a
neural-gas approach, Peter?).
Richard Loosemore.
Bo Morgan wrote:
On Mon, 5 Mar 2007, Richard Loosemore wrote:
) Rowan Cox wrote:
) Hey all,
)
) Just thought I'd briefly delurk to post a link (or three...). I
) believe this is a talk from 2001, so everyone else has probably heard
) it already ;)
)
) Part 1:
) http
new theme that I missed?
Richard Loosemore.
Mark Waser wrote:
I think that it's also very important/interesting to note that his
subject headings exactly specify the development environment that
Richard Loosemore and others are pushing for (i.e. An Infrastructure
to Support
construction of AI
systems.
Richard Loosemore
Eric Baum wrote:
Josh The other idea in OI worth noting is Mountcastle's Principle,
Josh that all of the cortex seems to be doing the same thing. Hawkins
Josh gets credit for pointing it out, but of course it was a
Josh published observation
banging the rocks together.
Having said that, there is an element of truth in what Hawkins says. My
personal opinion is that he has only a fragment of the truth, however,
and is mistaking it for the whole deal.
Richard Loosemore.
Chuck Esterbrook wrote:
On 2/19/07, Richard Loosemore [EMAIL PROTECTED] wrote:
Wow, I leave off email for two days and a 55-message Religious War
breaks out! ;-)
I promise this is nothing to do with languages I do or do not like (i.e.
it is non-religious...).
As many people pointed out
the general
problem. Again, apologies for coyness: possible patent pending and all
that.
Richard Loosemore.
an
alternative approach.
Richard Loosemore.
Bo Morgan wrote:
On Mon, 19 Feb 2007, Richard Loosemore wrote:
) Bo Morgan wrote:
)
) On Mon, 19 Feb 2007, John Scanlon wrote:
)
) ) Is there anyone out there who has a sense that most of the work being
) ) done in AI is still following the same track that has failed for
) ) fifty years
of the symbols being encoded at that hardware-dependent
level. I haven't seen any neuroscientists who talk that way show any
indication that they have a clue that there are even problems with it,
let alone that they have good answers to those problems.
In other words, I don't think I buy it.
Richard
, it was different. Lisp and Prolog, for example,
represented particular ways of thinking about the task of building an
AI. The framework for those paradigms was strongly represented by the
language itself.
Richard Loosemore.
of that machinery.
And what is the boundary between an ontological bias and a lesser
tendency to learn a certain kind of thing, which can nevertheless be
overridden through experience?
Richard Loosemore.
Ben Goertzel wrote:
Hi,
In a recent offlist email dialogue with an AI researcher, he made
gts wrote:
On Sun, 11 Feb 2007 11:41:31 -0500, Richard Loosemore
[EMAIL PROTECTED] wrote:
P.S. This isn't the first time this topic has come up. For a now
famous example, see my essay at http://sl4.org/archive/0605/14748.html
and the follow-up at http://sl4.org/archive/0605/14773.html
gts wrote:
On Sat, 10 Feb 2007 13:41:33 -0500, Richard Loosemore
[EMAIL PROTECTED] wrote:
The meat of this argument is all in what exact type of AGI you claim
is the best, of the two suggested above.
The best AGI in this context would be one capable of avoiding the
conjunction fallacy
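For readers who want the fallacy pinned down: the conjunction fallacy (Tversky and Kahneman's "Linda" problem is the classic case) is judging P(A and B) > P(A), which probability theory rules out, since P(A and B) = P(A)P(B|A) <= P(A). A minimal numeric check, with purely illustrative numbers:

p_teller = 0.05           # P(Linda is a bank teller) -- illustrative value
p_fem_given_teller = 0.3  # P(feminist | bank teller) -- illustrative value
p_both = p_teller * p_fem_given_teller
assert p_both <= p_teller  # holds for any choice of the two numbers in [0, 1]
print(p_both)              # 0.015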
is
the best, of the two suggested above.
Hint: don't go for the dumb one, because it is not really smart enough
to be an Artificial GENERAL Intelligence.
Regards
Richard Loosemore.
goal.
Just a thought.
Richard Loosemore.
Charles D Hixson wrote:
That's not what I meant. I don't think that people really operate on
the basis of probabilistic calculations, but rather on short-range
attractors. What I see them being motivated by is the dream of
riches, which feels closer
, is that the
possibility I raised is still completely open.
Richard Loosemore.
unreasonable
position, that's all ;-).
Richard Loosemore.
cognitive system is a direct rejection of the idea that I was asking
you to consider as a hypothesis.
I *know* you don't believe it to be true! ;-) What I was trying to do
was to ask on what grounds you reject it.
Richard Loosemore.
the type of my question is).
Richard Loosemore.
Pei Wang wrote:
Richard,
The assumption is that the underlying dynamics of things at the concept
level (or logical term level, if concept is not to your liking) can
be meaningfully described by things that look something like
probabilities.
I
Pei Wang wrote:
On 2/4/07, Richard Loosemore [EMAIL PROTECTED] wrote:
I fully accept that you don't care if the human mind does it that way,
because you want NARS to do it differently. My question was at a higher
level. If we knew for sure that the human mind was using something like
interpretation of
Oaksford and Chater is that it is actually caused by too much of it.
Richard Loosemore.
Generation Project and (Naive)
Neural Networks.
Richard Loosemore.
the time.
Hope that helps.
Richard Loosemore
just stated).
Richard Loosemore.
SUBGOAL PROMOTION AND ALIENATION
One very common phenomenon is when a supergoal is erased, but one of
its subgoals is promoted to the level of supergoal. For instance,
originally one may become
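A toy sketch of the promotion mechanism being described, on the assumption that goals are held in an explicit tree of supergoals and subgoals. All class and method names below are hypothetical, for illustration only:

class Goal:
    def __init__(self, name, subgoals=()):
        self.name = name
        self.subgoals = list(subgoals)

class GoalSystem:
    def __init__(self):
        self.supergoals = []

    def erase(self, goal):
        # Erasing a supergoal promotes its subgoals to supergoal status,
        # so they persist as ends in themselves.
        self.supergoals.remove(goal)
        self.supergoals.extend(goal.subgoals)

gs = GoalSystem()
living = Goal("earn a living", [Goal("accumulate money")])
gs.supergoals.append(living)
gs.erase(living)
print([g.name for g in gs.supergoals])  # ['accumulate money']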
of repetitions of the same ideological
statement).
Richard Loosemore.
By discussing goals, I was not trying to imply that all aspects of a
mind (or even most) need to, or should, operate according to an
explicit goal hierarchy.
I believe that the human mind incorporates **both** a set of goal
stacks (mainly useful in deliberative thought
is the present approach to AI then I
tend to agree with you John: ludicrous.
Richard Loosemore
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
I am disputing the very idea that monkeys (or rats or pigeons or humans)
have a part of the brain which generates the reward/punishment signal
for operant conditioning.
This is behaviorism. I find myself completely
J. Storrs Hall, PhD. wrote:
On Friday 01 December 2006 23:42, Richard Loosemore wrote:
It's a lot easier than you suppose. The system would be built in two
parts: the motivational system, which would not change substantially
during RSI, and the thinking part (for want of a better term
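A minimal sketch of that two-part split, assuming RSI is allowed to replace the thinking part while the motivational system stays fixed. Every name here is hypothetical; this illustrates the division of labor, not the actual design:

class MotivationalSystem:
    # Held constant across self-improvement steps.
    def evaluate(self, action):
        return -abs(action)  # toy preference ordering over actions

class Agent:
    def __init__(self, motivation, thinker):
        self.motivation = motivation  # never replaced
        self.thinker = thinker        # the only part RSI may swap out

    def act(self, observation):
        candidates = self.thinker(observation)
        return max(candidates, key=self.motivation.evaluate)

    def self_improve(self, better_thinker):
        self.thinker = better_thinker

agent = Agent(MotivationalSystem(), lambda obs: [obs - 1, obs + 1])
print(agent.act(5))                                      # 4
agent.self_improve(lambda obs: [obs - 2, obs, obs + 2])
print(agent.act(5))                                      # 3

The point of the split: however much the thinker improves, action selection still passes through the same unchanged evaluation.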
Philip Goetz wrote:
On 12/1/06, Richard Loosemore [EMAIL PROTECTED] wrote:
The questions you asked above are predicated on a goal stack approach.
You are repeating the same mistakes that I already dealt with.
Some people would call it repeating the same mistakes I already dealt
with.
Some
arguments.
Does that make sense?
Richard Loosemore
, in other words, is in the details.
Richard Loosemore.
*Philip Goetz* [EMAIL PROTECTED] wrote:
On 11/19/06, Richard Loosemore wrote:
The goal-stack AI might very well turn out simply not to be a
workable
design at all! I really do mean that: it won't become
Samantha Atkins wrote:
On Nov 30, 2006, at 12:21 PM, Richard Loosemore wrote:
Recursive Self-Improvement?
The answer is yes, but with some qualifications.
In general RSI would be useful to the system IF it were done in such a
way as to preserve its existing motivational priorities
at
least thirty years ago (with the exception of a few diehards in North
Wales and Cambridge).
Richard Loosemore
[With apologies to Fergus, Nick and Ian, who may someday come across
this message and start flaming me].
on a goal stack approach.
You are repeating the same mistakes that I already dealt with.
Richard Loosemore
, at some point in the future.
Richard Loosemore wrote:
The point I am heading towards, in all of this, is that we need to
unpack some of these ideas in great detail in order to come to sensible
conclusions.
I think the best way would be in a full length paper, although I did
Philip Goetz wrote:
On 11/17/06, Richard Loosemore [EMAIL PROTECTED] wrote:
I was saying that *because* (for independent reasons) these people's
usage of terms like intelligence is so disconnected from commonsense
usage (they idealize so extremely that the sense of the word no longer
bears
no reason to suppose
that such a framework heads in the direction of a system that is
intelligent. You could build an entire system using the framework, and
then do some experiments, and then I'd be convinced. But short of that
I don't see any reason to be optimistic.
Richard Loosemore
to human
language really was? It sounds like Immerman is putting the
significance of complexity classes on firmer ground, but not changing
the nature of what they are saying.
Richard Loosemore
-- Ben
On 11/24/06, Richard Loosemore [EMAIL PROTECTED] wrote:
Ben Goertzel wrote
are making with respect to the computational
complexity of processes like grammar induction and the evolutionary
construction of learning systems.
We are coming from similar points of view, but reaching diametrically
opposed conclusions.
Richard Loosemore.
something that was already stretched.
But maybe that was not what you meant. I stand ready to be corrected,
if it turns out I have goofed.
Richard Loosemore.
here before (Levelt's *Speaking*) in which the author takes apart a
single conversational exchange consisting of a couple of short sentences.
Richard Loosemore
J. Storrs Hall, PhD. wrote:
It was a true solar-plexus blow, and completely knocked out, Perkins
staggered back against the instrument
, said Marvin and trudged away.
Richard Loosemore
Ben Goertzel wrote:
Rings and Models are appropriated terms, but the mathematicians
involved would never be so stupid as to confuse them with the real
things. Marcus Hutter and yourself are doing precisely that.
I rest my case.
Richard Loosemore
Please, let us avoid explicitly insulting one
to infinity... a spurious argument, of
course, because they can go in any direction.
Richard Loosemore
IMO these analogies are not fair
depend on any special assumptions about
the nature of learning.
Richard Loosemore wrote:
I beg to differ. IIRC the sense of learning they require is
induction over example sentences. They exclude the use of
real world knowledge, in spite of the fact that such knowledge
(or at least primitives