John: Our brains are good - I mean, they are us - but aren't they just
biological blobs of goop that are half-assed excuses for intelligence? I mean,
why are AGIs coming about anyway? Is it because our brains are awesome and
fulfill all of our needs? No. We need to be uploaded, otherwise we
How will it handle the Mid-East crisis?
God comes crying to me every night about that one. I tell Him to shut up, be
a Man and get on with it.
Or the Iraq crisis?
As for humanising the US gun laws - even God doesn't go there.
How will it sell more Coke, or get Yahoo back on top of Google?
I like the thoughts here.
My hunch is that the human ability to learn new activities is based on
conceiving all of them as goal-seeking journeys, in which we have to try to find
the way to our goals, using a set of basic paths and series of basic steps
[literally steps, if you're walking
Nobody came back on my suggestion for a much simpler AGI test. Let's call it
the Neo-Maze Test.
You program a robot rover or simulation robot with flexible rules to run
fairly basic-type mazes, including mazes with multiple solutions.
The test is then whether it can run very different kinds
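To make the test concrete, here is a minimal sketch of such a harness, under
assumptions of my own (a grid-maze encoding, and plain breadth-first search
standing in for the rover's "flexible rules"; none of this is specified in the
thread):

from collections import deque

def solve(maze, start, goal):
    # maze: list of equal-length strings; '#' = wall, '.' = open cell
    rows, cols = len(maze), len(maze[0])
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        (r, c), path = frontier.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(((nr, nc), path + [(nr, nc)]))
    return None  # no route exists

# "Very different kinds" of mazes: the same agent, no per-maze tuning.
mazes = {
    "open_room": ["....", "....", "...."],  # many alternative solutions
    "detour":    [".#..", ".#.#", "...."],  # a single winding solution
    "blocked":   ["....", "####", "...."],  # no solution at all
}
for name, maze in mazes.items():
    path = solve(maze, start=(0, 0), goal=(2, 3))
    print(name, "->", "solved" if path else "failed")

The point of the test would then be to swap in maze families the agent was
never tuned for and see whether the same unmodified solver still copes.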
Test [was: Why do you think
your AGI design will work?]
Mike Tintner writes:
Let's call it the Neo-Maze Test.
I think this type of test is pretty interesting; the objection
if any is whether the capabilities of this robot are really
getting toward what we would like to consider general
Ben,
People need a clear goal for your activity, so that they can decide whether
it's worth pursuing even if only from the sidelines. It's a fundamental need
for every activity we engage in... to answer: what's the point? And it's
also a fundamental tendency when formulating goals
for a domain, or chatbots?
#3 Video game characters or robots are AGIs as well, but how limiting are
they if they don't have complex language skills with which to be given, and to
learn, ever more complex tasks to do.
Mike Tintner [EMAIL PROTECTED] wrote:
Nobody came back on my suggestion for a much simpler AGI test. Let's call it
the Neo-Maze Test.
You program a robot rover or simulation robot with flexible rules to run
fairly basic-type mazes, including mazes with multiple
You guys are driving me nuts.
Jumping in at the middle, here goes:
Intelligence is the capacity to solve problems.
(An intelligent agent solves problems in order to reach its goals)
Problems occur when an agent must select between two or more paths to reach
its goals.
Cognitive problems (in
vagueness..
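Read literally, that definition is easy to make executable; a toy sketch
(the mapping from wording to code is my own assumption):

def select_path(paths, evaluate):
    # A problem, in the sense above, exists exactly when the agent
    # faces two or more candidate paths and must select one.
    if not paths:
        return None          # goal unreachable: nothing to select
    if len(paths) == 1:
        return paths[0]      # a single path: nothing to solve
    return max(paths, key=evaluate)

paths = [["walk", "bridge"], ["swim"], ["walk", "tunnel", "climb"]]
# One possible evaluation: prefer journeys with fewer steps.
print(select_path(paths, evaluate=lambda p: -len(p)))  # -> ['swim']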
- Original Message -
From: Richard Loosemore [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, April 26, 2007 9:58 PM
Subject: Re: [agi] Circular definitions of intelligence
Mike Tintner wrote:
You guys are driving me nuts.
Jumping in at the middle, here goes
- Original Message -
From: Richard Loosemore [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, April 27, 2007 12:34 AM
Subject: Re: [agi] Circular definitions of intelligence
Mike Tintner wrote:
It's driving me nuts because it's basically simple.
Mike, you are getting the wrong end
Yes, all intelligent systems (i.e. living creatures) have a psychoeconomy
of goals - important point.
But in solving any particular problem, they may be dealing with only one or
two goals at a time.
The measures of intelligence mentioned have included:
1) the depth of the problem - the number
) as a team in adequately subtle and complicated ways.
For instance in World of Warcraft, an individual NPC can emulate an
individual human player, much better than a team of NPCs can emulate a team of
human players.
-- BenG
On 4/27/07, Mike Tintner [EMAIL PROTECTED] wrote:
Er
Interesting but, if I've understood the Universal Intelligence paper, there are
3 major flaws.
1) It seems to assume that intelligence is based on a rational, deterministic
program - is that right? Adaptive intelligence, I would argue, definitely
isn't. There isn't a rational, right way to
pray tell which errors [The Emotion Machine BTW contains a v. rough proposal
for an AGI, human-like system which was formally put together in a separate
paper]
- Original Message -
From: Benjamin Goertzel
To: agi@v2.listbox.com
Sent: Friday, April 27, 2007 8:42 PM
Subject:
Disagree. The brain ALWAYS tries to make sense of language - convert it into
images and graphics. I see no area of language comprehension where this
doesn't apply.
I was just reading a thread re the Symbol Grounding Problem on another group -
I think what's fooling people into thinking purely
these as
equally strange as a sighted person?
The man climbed the penny
The mat sat on the cat
The teapot broke the bull
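For what it's worth, a purely symbolic system can flag all three sentences
with hand-coded selectional restrictions - which is exactly the imagery-free
account in dispute here. A sketch (the feature categories are my assumptions):

# Each noun carries semantic features; each verb demands features
# of its subject and object. A sentence is anomalous when a demand fails.
features = {"man": {"animate", "climber"}, "penny": {"small", "inanimate"},
            "mat": {"inanimate"}, "cat": {"animate"},
            "teapot": {"inanimate"}, "bull": {"animate"}}
verbs = {"climbed": ({"climber"}, {"climbable"}),
         "sat":     ({"animate"}, set()),
         "broke":   ({"animate"}, {"fragile"})}

def anomalous(subj, verb, obj):
    need_subj, need_obj = verbs[verb]
    return not (need_subj <= features[subj] and need_obj <= features[obj])

for s, v, o in [("man", "climbed", "penny"),
                ("mat", "sat", "cat"),
                ("teapot", "broke", "bull")]:
    print(s, v, o, "->", "anomalous" if anomalous(s, v, o) else "fine")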
--
From: Mike Tintner [mailto:[EMAIL PROTECTED]
Sent: Saturday, April 28, 2007 10:42 AM
Shane,
Little bit confusing here - perhaps too general and unfocussed to pursue
really
But interestingly while you deny that the given conception of intelligence is
rational and deterministic.. you then proceed to argue rationally and
deterministically. First, that there IS a right way to
--
From: Mike Tintner [mailto:[EMAIL PROTECTED]
Sent: Saturday, April 28, 2007 12:43 PM
To: agi@v2.listbox.com
Subject: Re: [agi] rule-based NL system
Classic objection.
The answer is that blind people can draw reasonably faithful outlines of
objects. This has been experimentally tested
OK, here I think is the simplest test of adaptivity that cuts right to the
heart of the matter, and provides the most central measure of ADAPTIVE (i.e.
DIVERGENT) intelligence as distinct from CONVERGENT, algorithmic intelligence.
My assumption: your system has this agent/guy moving around a
Ah.. I didn't get the test quite right. It isn't simply how many alternative
ways can you find of achieving any goal?
You might have pre-specified a vast number of ways of moving from A to B for
the system.
The test is: how many NEW (non-specified) alternative ways can you find of
achieving
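A sketch of how that score might be computed, on my own assumptions (a small
graph of moves, rather than any actual AGI system): enumerate every simple
route from A to B and count the ones outside the pre-specified repertoire.

def all_routes(graph, node, goal, path=None):
    # Yield every simple (non-repeating) route from node to goal.
    path = (path or []) + [node]
    if node == goal:
        yield tuple(path)
        return
    for nxt in graph.get(node, []):
        if nxt not in path:
            yield from all_routes(graph, nxt, goal, path)

graph = {"A": ["B", "C"], "C": ["B", "D"], "D": ["B"]}
prespecified = {("A", "B"), ("A", "C", "B")}
novel = set(all_routes(graph, "A", "B")) - prespecified
print(len(novel), "new route(s):", sorted(novel))
# -> 1 new route(s): [('A', 'C', 'D', 'B')]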
pedantic or literal.
- Original Message -
From: Mike Dougherty [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, April 29, 2007 6:21 PM
Subject: Re: [agi] HOW ADAPTIVE ARE YOU [NOVAMENTE] BEN?
On 4/29/07, Mike Tintner [EMAIL PROTECTED] wrote:
He has a simple task: Move from
Mike,
There is something fascinating going on here - if you could suspend your
desire for precision, you might see that you are at least half-consciously
offering contributions as well as objections. (Tune in to your constructive
side).
I remember thinking that you were probably
This exchange below with Mike D focussed another key issue of AGI which
I'd like comments back on.
My impression is: AI has been strangled by a rationalistic desire to be RIGHT -
to get the right answer every time. This can be called psychological/
behavioural monism. This desire is
obvious rejoinder: how can you have correct handling of uncertainty?
Perhaps you mean effective/ most effective available. But it's worth picking
up on, because there is a fundamental contradiction here in many thinkers -
i.e. it may well be that people are still caught between two eras
Yes, you are very right. And my point is that there are absolutely major
philosophical issues here - both the general philosophy of mind and
epistemology, and the more specific philosophy of AI. In fact, I think my
characterisation of the issue as one of monism [general - behavioural as
well
I should point out something amazing that has gone on here in all these
conversations re language images.
No one seems to understand the basic semiotic fact that language has no
intrinsic reference or relation to the real world WHATSOEVER.
The linguistic sign bears NO RELATION WHATSOEVER to
MD: What does warm look like? How about angry or happy? Can you
draw a picture of abstract or indeterminate? I understand (I
think) where you are coming from, and I agree wholeheartedly - up to
the point where you seem to imply that a picture of something is the
totality of its character. I
a program that can deal with uncertainty
and is adaptive and can think irrationally at times.. Seems like
an awful lot of things.. how should we organize all this? How do we
take existing solutions for some of these problems and make sure new ones
can get added ..
--- Mike Tintner [EMAIL PROTECTED
with words - but it doesn't work.
- Original Message -
From: Derek Zahn [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, May 01, 2007 2:32 PM
Subject: Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI
Mike Tintner writes:
It goes ALL THE WAY. Language
In the final analysis, Ben, you're giving me excuses rather than solutions.
Your pet control program is a start - at least I have a vague, still v. vague
idea of what you might be doing.
You could (I'm guessing) say : this AGI is designed to control a pet which will
have to solve adaptive
Nah, analogy doesn't quite work - though could be useful.
An engine is used to move things... many different things - wheels, levers,
etc. So if you've got an engine that is twenty times more powerful, sure you
don't need to tell me what particular things it is going to move. It's
generally
No, I keep saying - I'm not asking for the odd narrowly-defined task - but
rather defining CLASSES of specific problems that your/an AGI will be able to
tackle. Part of the definition task should be to explain how if you can solve
one kind of problem, then you will be able to solve other
From: Pei Wang [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, May 01, 2007 7:08 PM
Subject: Re: [agi] The role of uncertainty
On 5/1/07, Mike Tintner [EMAIL PROTECTED] wrote:
The difficulty here is that the problems to be solved by an AI or AGI
machine are NOT accepted, well-defined. We cannot
of uncertainty
On 5/1/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:
On 5/1/07, Mike Tintner [EMAIL PROTECTED] wrote:
No, I keep saying - I'm not asking for the odd narrowly-defined task -
but rather defining CLASSES of specific problems that your/an AGI will be able
to tackle
information.
Please take the time to try to understand what I'm saying rather than putting
up ridiculous strawmen.
- Original Message -
From: Mike Tintner
To: agi@v2.listbox.com
Sent: Wednesday, May 02, 2007 6:05 PM
Subject: Re: [agi] rule-based NL system
Bo M:
- A way to switch between representations and thinking processes when one
set of methods fails. This would keep expert knowledge in one domain
connected to expert knowledge from other domains.
What if any approaches to MULTI-DOMAIN thinking actually exist, or have
been tried?
generation AGI accomplishing other than the ones I mentioned?
James Ratcliff
Mike Tintner [EMAIL PROTECTED] wrote:
I think you're thinking in too limited ways about the physical tasks,
simulated or embodied - although that may be the fault of my definition.
You don't, I would
The University of Phoenix Test [was: Why do you think your
AGI design will work?]
Mike Tintner wrote:
James,
It's interesting - there is a general huge block here - and
culture-wide - to thinking about intelligence in terms of problems as
opposed to the means and media of solution (or if you like the tasks vs
the tools).
Your list is all
well, the AGI's or brain's sign or representation system is the heart of the
matter - that's what will enable it to be adaptive (or not) and connect
hitherto different ways of reaching, or moving (if it uses that concept)
towards, goals.
That seems to be the way Jeff Hawkins is thinking -
Well, obviously, it's up to you what you want to discuss - but I don't think
that what I'm asking for - the basic elements of the AGI's sign system -
requires a lengthy paper. Jeff Hawkins' site has a page devoted to showing how
his system derives its templates of dog - that gives you an idea
Derek,
Yes. Thanks. I had seen that. (And I still have to fully understand his
system). But my question remains: where did you get your information about
his system's FAILURES to recognize basic types?
Wow. Really? He can't recognize a basic dog / cat etc? Are you sure?
Depends on
, 2007 5:20 PM
Subject: Re: [agi] The University of Phoenix Test [was: Why do you think
your AGI design will work?]
Mike Tintner wrote:
I have REPEATEDLY said I am talking about defining general problem
classes, rather than setting narrow AI specialised problems. (Show me BTW
where there has
To: agi@v2.listbox.com
Sent: Friday, May 04, 2007 3:14 PM
Subject: Re: [agi] The University of Phoenix Test [was: Why do you think
your AGI design will work?]
Mike Tintner wrote:
Er Richard, you are opening too many too large areas - we could be here
till the end of the month.
It seems to me you
Is there any already existing competition in this area - virtual adaptive pets
- that we can look at?
- Original Message -
From: Benjamin Goertzel
To: agi@v2.listbox.com
Sent: Friday, May 04, 2007 4:46 PM
Subject: Re: [agi] The role of uncertainty
2. More specific
, okay. ;-) ]
Richard Loosemore.
Mike Tintner wrote:
Richard,
My apologies for coming on too strong.
Re psychosemiotics, if there were such a science, yes, it would probably
come under cognitive psychology. Is there a need for such a science? Yes.
See my picture theory below. But regardless
figure out under the rug (or behind the curtain
:-).
Mark
- Original Message -
From: Mike Tintner [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Saturday, May 05, 2007 7:09 AM
Subject: [agi] The Picture Tree
Richard,
My apologies for coming on too strong.
Re psychosemiotics
On 5/6/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
On 5/6/07, Mike Tintner [EMAIL PROTECTED] wrote:
YKY: Consciousness is not central to AGI .
The human mind consists of a two-tier structure. On top, you have this
conscious, executive mind that takes most
Consider a ship. From one point of view, you could separate the people
aboard
into two groups: the captain and the crew. But another just as reasonable
point of view is that captain is just one member of the crew, albeit one
distinguished in some ways.
Really? Bush? Browne [BP, just
- Original Message -
From: Richard Loosemore [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, May 06, 2007 4:45 PM
Subject: Re: [agi] The Advantages of a Conscious Mind
Mike Tintner wrote:
And if you're a betting man, pay attention to Dennett. He wrote about
Consciousness in the early 90's, together
From: Richard Loosemore [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, May 06, 2007 4:38 PM
Subject: Re: [agi] The Advantages of a Conscious Mind
Mike Tintner wrote:
Now to the rational philosopher and scientist and to the classical AI
person, this is all terrible (as well as flatly contradicting
Subject: Re: [agi] The Advantages of a Conscious Mind
Mike Tintner wrote:
There is a crashingly obvious difference between a rational computer and
a human mind - and the only way cognitive science has managed not to see
it is by resolutely refusing to look at it, just as it resolutely refused
to look
be taken as confused philosophical
understanding on your side. ;-)
Pei
On 5/6/07, Mike Tintner [EMAIL PROTECTED] wrote:
Well, there obviously IS a conscious, executive mind, separate from the
unconscious mind, whatever the enormous difficulties cognitive scientists
had in first admitting its
Richard,
I don't think I'm not getting it at all.
What you have here is a lot of good questions about how the graphics level
of processing that I am proposing, might work. And I don't have the answers,
and haven't really thought about them yet. What I have proposed is a general
idea loosely
input tasks according to a
task-specific algorithm.
[If the above description still sounds confusing or contradictory,
you'll have to read my relevant publications. I don't have the
intelligence to explain everything by email.]
Pei
On 5/6/07, Mike Tintner [EMAIL PROTECTED] wrote:
Pei,
Thanks
are not
entitled to take that tone.
It's OK, you don't need to reply.
My comment stemmed from my experience as a professional cognitive
scientist. Please don't pull this kind of stunt.
Mike Tintner wrote:
Richard,
Welcome to the Virtual Home for
the NCSU Cognitive Science Program!
Cognitive
Sent: Sunday, May 06, 2007 11:45 PM
Subject: Re: [agi] The Advantages of a Conscious Mind
On 5/6/07, Mike Tintner [EMAIL PROTECTED] wrote:
Pei,
I don't think there's any confusion here. Your system as you describe it
IS
deterministic. Whether an observer might be confused by it is irrelevant.
Equally
I should have added -- the difference between options can be much greater
than 5% - humans and, offhand, I imagine, most AGIs couldn't begin to
measure and compare options with that degree of precision... for most
decisions (not, of course, all)
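That point is easy to demonstrate numerically. A sketch (the gap and noise
figures are my own assumptions): once the noise in an agent's utility
estimates exceeds the true gap between options, picking the better option
collapses toward a coin flip.

import random

def pick_better(gap, noise, trials=100_000):
    # Fraction of trials in which the truly better option is chosen.
    wins = 0
    for _ in range(trials):
        a = 1.00 + random.gauss(0, noise)        # true utility 1.00
        b = 1.00 + gap + random.gauss(0, noise)  # truly better by `gap`
        wins += b > a
    return wins / trials

random.seed(0)
print(pick_better(gap=0.05, noise=0.01))  # ~1.00: sharp estimates
print(pick_better(gap=0.05, noise=0.50))  # ~0.53: human-scale fuzziness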
Sent: Tuesday, May 08, 2007 6:36 PM
Subject: Re: [agi] The Advantages of a Conscious Mind
Mike Tintner [EMAIL PROTECTED] wrote:
That would indeed be free, nondeterministic choice, which, as I understood,
Pei ruled out for his system.
The only qualifications are:
* choosing randomly
Just been looking at the vids. of last year's AGI conference. One thing
really hit me from the panel talk - and that was: but, of course, only
open-source AGI will ever work. Sorry, but all these ideas of individual
systems, produced by teams of - what? - say, twenty individuals at most -
From: A. T. Murray [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, May 11, 2007 4:55 AM
Subject: Re: [agi] Open-Source AGI
Mike Tintner wrote:
The greatest challenge - and these are my first,
very stumbling thoughts here - is to find ways that
people can work together on the overall problem
I should add that part of the creative challenge of developing an
integrational structure for AGI is to develop one that will allow CREATIVE
minds to work together - and not just hacks a la Wikip. - and enable them
to integrate whole sets of major new inventions and innovations.
And that too,
Josh: Thus Tommy. My robotics project discards a major component of robotics
that is apparently dear to the embodiment crowd: Tommy is stationary
and not autonomous
As Daniel Wolpert will tell you, the sea squirt devours its brain as soon as
it stops moving. In the final and the first analysis,
Josh,
Since the 90s there has been a strand in AI research that claims that
robotics is necessary to the enterprise, based on the notion that
having a body is necessary to intelligence. Symbols, it is said, must
be grounded in physical experience to have meaning. Without such
grounding AI
intelligence)
logic vs analogy
SIGN SYSTEMS / MEDIA
literacy vs artistic education
(symbolic vs image media)
What
Josh: On Friday 11 May 2007 03:06:52 pm Mike Tintner wrote:
... the mind/body era inaugurated by Descartes (the first scientific
revolution) is coming to an end right across our culture? ... is still playing
out, including in the current battles of AI..
Dualism was intellectually
, taking humans out of the
loop doesn't sound like a good choice for AGI development at the
moment, unless you have a concrete design to show us otherwise.
Pei
On 5/11/07, Mike Tintner [EMAIL PROTECTED] wrote:
Josh,
Since the 90s there has been a strand in AI research that claims that
robotics
/robot that is only fractionally as exploratory.
- Original Message -
From: J Storrs Hall, PhD [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Saturday, May 12, 2007 12:41 PM
Subject: Re: [agi] Tommy
On Friday 11 May 2007 08:55:12 pm Mike Tintner wrote:
...All these machines you
Josh:My major hobby-horse in this area is that a concept has to be an active
machine, capable of recognition, generation, inference, and prediction.
This sounds very like Jeff Hawkins, (just reading On Intelligence now). Do
you see your position as generally accepted, or at the forefront of
Bob M: Minsky
says that one of the key things which an intelligent system ought to
be able to do is reason by analogy.
His thoughts tumbled in his head, making and breaking alliances
like underpants in a dryer without Cling Free.
I agree - analogy is central to AGI, hugely important. Ben
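For concreteness, analogy is often modelled computationally as structure
mapping: carry the relations of a base domain over to a target, then read
predictions off whatever doesn't yet have a counterpart. A toy sketch (my
illustration, not Minsky's proposal or anything in Novamente):

# Facts are (relation, arg1, arg2) triples; the classic solar-system/atom
# analogy. The entity mapping is given here rather than searched for.
base = {("orbits", "planet", "sun"), ("attracts", "sun", "planet")}
target = {("orbits", "electron", "nucleus")}
mapping = {"planet": "electron", "sun": "nucleus"}

def project(facts, mapping):
    return {(rel, mapping.get(a, a), mapping.get(b, b))
            for rel, a, b in facts}

candidates = project(base, mapping)
print("already known:       ", candidates & target)
print("analogical inference:", candidates - target)
# -> predicts ('attracts', 'nucleus', 'electron')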
, broadly, to support my ideas..
- Original Message -
From: Benjamin Goertzel
To: agi@v2.listbox.com
Sent: Saturday, May 12, 2007 7:22 PM
Subject: Re: [agi] All these moments will be lost in time, like tears in rain
Mike Tintner,
Firstly, to ground your discussion of analogy
Here's a link to a lecture of his that's clearer than anything I've read (incl.
the book):
http://mitworld.mit.edu/video/316/
Bottom line: his system can recognize simple objects from outline drawings -
like dog, cup. That seems to be the only concrete claim he's making right
now. There's no
- Original Message -
From: Benjamin Goertzel
To: agi@v2.listbox.com
Sent: Monday, May 14, 2007 3:58 AM
Subject: Re: [agi] Quick Hawkins Questions
On 5/13/07, Mike Tintner [EMAIL PROTECTED] wrote:
Here's a link to a lecture of his that's clearer than anything I've read
(incl. the book):
http://mitworld.mit.edu/video/316/
dc: I have never been impressed by complicated formulas and I have been many
slick (Math) talking people who couldn't produce anything that worked in the
real world.
Ben: A fascinating Freudian slip! ;-)
Wow - you're the first AI person I've come across with any Freudian
perspective. Minsky
I too very largely and strongly agree with what Pei says below.
But in all this discussion, it looks like one thing is being missed (please
correct me).
The task is to define TWO kinds of intelligence not just one - you need a
dual not just a single definition of intelligence. Everyone seems
- Original Message -
From: Pei Wang [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, May 16, 2007 12:17 AM
Subject: Re: [agi] definitions of intelligence, again?!
On 5/15/07, Mike Tintner [EMAIL PROTECTED] wrote:
I am suggesting
Ben,
Am a little confused here - not that we're not talking very roughly along the
same lines and about the same areas. It's just that for me conceptual blending
is simply a form of analogy, which we've just discussed (and one that works by
sensory/imaginative rather than symbolic analogy).
Josh : A blend is more like designing a helicopter by combining a dragonfly
and a
car. You take the general shape and behavior of the dragonfly, and the size,
interior seats, driver controls, etc, from a car.
In general in a blend you start with B and C without an A. Both relations
B-D and
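Josh's example drops almost directly into code; a sketch (which slots project
from which input is my guess at his intent) - the blend selectively projects
features from two inputs B and C, with no pre-existing common frame A:

# Selective projection: named slots only, not a full merge of both inputs.
dragonfly = {"shape": "long thin body", "lift": "rotating/beating wings",
             "behavior": "hover and dart", "size": "5 cm"}
car = {"size": "4 m", "interior": "seats", "controls": "driver controls",
       "power": "engine"}

def blend(b, c, from_b, from_c):
    return {**{k: b[k] for k in from_b}, **{k: c[k] for k in from_c}}

helicopter = blend(dragonfly, car,
                   from_b=("shape", "lift", "behavior"),
                   from_c=("size", "interior", "controls", "power"))
print(helicopter)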
Josh
If you'd read the archives, you'd see that I've advocated constructive solid
geometry in Hilbert spaces as the basic representational primitive.
Would you like to say more re your representational primitives? Sounds
interesting. The archives have no reference to constructive solid
Pei:
This just shows the complexity of the usual meaning of the word
intelligence --- many people do associate it with the ability to solve
hard problems, but at the same time, many people (often the same
people!) don't think a brute-force solution shows any intelligence.
Shane: I think
Pei,
I don't think these distinctions between terms really matter in the final
analysis - right, optimal etc. What I'm assuming, however you define it,
is that you are saying that AI can find one solution that is better than
others under conditions of insufficient knowledge/uncertainty - and
On 5/17/07, Mike Tintner [EMAIL PROTECTED] wrote:
Pei: AI is about what is the best solution to a problem if
the system has INSUFFICIENT knowledge and resources.
Just so. I have just spent the last hour thinking about this area, and you
have spoken the line I allotted
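One standard way to cash out "best solution under insufficient knowledge and
resources" is anytime computation: always hold a best-so-far answer, and
refine it only while the budget lasts. A minimal sketch (my illustration;
this is not how NARS actually allocates resources):

import itertools, random

def anytime_best(candidates, score, budget):
    # Examine at most `budget` candidates; return the best seen so far.
    best, best_score = None, float("-inf")
    for cand in itertools.islice(candidates, budget):
        s = score(cand)
        if s > best_score:
            best, best_score = cand, s
    return best, best_score

random.seed(1)
stream = (random.random() for _ in itertools.count())  # endless candidates
print(anytime_best(stream, score=lambda x: x, budget=10))    # decent answer
print(anytime_best(stream, score=lambda x: x, budget=1000))  # better answer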
P.S. Eric, I haven't forgotten your question to me, will try to address it
in time - the answer is complex.
Eric,
The point is simply that you can only fully simulate emotions with a body as
well as a brain. And emotions, while identified by the conscious brain, are
felt with the body
I don't find it at all hard to understand - I fully agree - that emotions
are generated as a result of
Marvin Minsky:Memes. Once you have a species with social transmission
of information, then evolution continues on a higher level.
The idea here is that the emotions of Pride and Shame
evolved as a result of the competition -- inside our species --
between different sets of contagious ideas.
MT: I
Derek Zahn: you have to have a theory of mind
One of the interesting things about current AGI projects like Ben's Stan F's
(is there any other?) is that they do indeed constitute not so much theories as
models of mind - illustrated by charts in Ben's essay on Kurzweil's site. In
essence,
Just read s.o. arguing en passant that digital processors are really analog,
and disguised to be digital. What does that mean?
Does it mean that what computers really do is compare patterns of electrons
rather than discrete symbols with meaning? - that it's only when symbols
(like words and
Except that Ogden only included a very few verbs [be , have , come - go , put -
take , give - get , make , keep , let , do , say , see , send , cause and
because are occasionally used as operators; seem was later added.] So in
practice people use about 60 of the nouns as verbs diminishing the
Peter Voss: Our goal is to create full AGI...
Do you have a proof-of-concept, to use your term?
Hawkins has a simple one for his HTM - he shows his system can recognize
objects from simple line drawings. Something that simple will do to begin with..
Novamente, from what I've seen, doesn't have one -
Ben,
I'd be looking for a totally different proof-of-concept for AGI. (I brought in
Hawkins - not to rehash our arguments - because I consider him an example of
good proof-of-concept practice).
I'd be looking for proof of higher adaptivity (to use Peter's term). If Peter
Voss' Maze Explorer
Eric B asked me for a test to prove free thinking - I'm getting to it -
but I realised it raised a much deeper philosophical, presumably
mathematical, issue which some of you guys, especially the
supermathematicians, may well have given thought to, and I certainly
haven't.
The issue is
Josh: If you want to understand why existing approaches to AI haven't
worked, try
Beyond AI by yours truly
Any major point or points worth raising here?
Josh:
Most AI (including a lot of what gets talked about here) is the equivalent of
trying to implement the mail-reader directly in machine code (or transistors,
for connectionists). Why people can't get the notion that the brain is going
to be at least as ontologically deep as a desktop GUI
[Further to the symbol grounding discussion, you might like to look at (pass
on) this trendsetting science video-journal, which is just one of many signs of
the new multimedia [vs the old literate, symbolic] culture]
Dear Scientist,
The 4th issue of JoVE, a video-based publication on
Sergio:This is because in order to *create* knowledge
(and it's all about self-creation, not of external insertion), it
is imperative to use statistical (inductive) methods of some sort.
In my way of seeing things, any architecture based solely on logical
(deductive) grounds is doomed to fail.
Eric: I claim that it is the very fact that you are making decisions about
whether to suppress pain for higher goals that is the reason you are
conscious of pain. Your consciousness is the computation of a
top-level decision making module (or perhaps system). If you were not
making decisions