Steve,
I'm not sure I did miss your point. Mine is the same as Descartes':
As for Logic, its syllogisms and the majority of its other precepts are of
avail rather in the communication of what we already know, or... even in
speaking without judgment of things of which we are ignorant, than in
June 10, 2008
Brainpower May Lie in Complexity of Synapses
By NICHOLAS WADE
Evolution's recipe for making a brain more complex has long seemed simple
enough. Just increase the number of nerve cells, or neurons, and the
interconnections between them. A human brain, for instance, is three
http://www.nytimes.com/2008/06/10/science/10plant.html?pagewanted=2_r=1ei=5087emen=484cb
A really interesting article about plant sensing. A bit O/T here but I'm
posting it after the recent neurons discussion, because it all suggests that
the control systems of living systems may indeed be
different branches, like philosophy of science, can tell you something about
the problems of acquiring knowledge in particular areas. But there is no
super-branch that can generalise about all the different problems in different
areas.
Anyway, I'll stop there for now...
Mike Tintner, et al
Ben: No one knows which brain functions rely on emergence to which extents
...
we're still puzzling this out even in relatively well-understood brain
regions
like visual cortex. ... But, the neural structures that carry out
object-recognition may well emerge
as a result of complex nonlinear
Here's the v. impressive thought-controlled Dean Kamen robotic arm:
http://blog.wired.com/gadgets/2008/05/dean-kamens-rob.html
The question is - given our recent discussion on the validity of experiments
showing how words activated appropriate physical movement areas in the
brain - how
Thanks. Excellent site. And here is a talk about advanced fmri - given our
recent discussion:
http://www.ted.com/talks/view/id/236
David H: An excellent 20-minute TED talk from Susan Blackmore (she's a
brilliant speaker!)
http://www.ted.com/talks/view/id/269
I considered posting to the
Thanks. I must confess to my usual confusion/ignorance here - but perhaps I
should really have talked of solid rather than 3-D mapping.
When you sit in a familiar chair, you have, I presume, a solid mapping (or
perhaps the word should be moulding) - distributed over your body, of how
it can
I suddenly realised that here are AGI-ers having all this very philosophical
and ethereal conversation about consciousness, when actually consciousness -
and my iworld-movie model of it, or you could call it a POV-movie model - is
instantiated in a very practical way in video games.
In case
This is what we've just been discussing and Richard was criticising as
highly fallible. Your article adds pictures of the predictions, which is
helpful.
But all this raises the question presumably of just how much can be told
from fmri images generally. Does anyone have views about this - or
Thanks. It would be nice to have an explanation of Friston's claims, e.g:
Meanwhile, Friston claims that the free-energy principle also gives plausible
explanations for other important features of the cortex. These include
adaptation effects, in which neurons stop firing after prolonged
John:Just conscious is too simple. It's too umbrella. A rock is conscious.
Is there an agent specific uniqueness to consciousness? No one is conscious
like me. And they all are unique as I am not conscious as they are... The
uniqueness may be a defining factor. Unreplicable and non-simulatable.
John, The reason why people are thinking about all this stuff in terms of
maths is
because it is not all just fluffy philosophizing you have to have at least
minimalistic math models in order to build software. So when you say
iTheatre or iMovie I'm thinking bits per second, compression, color
John:When you describe this you have to be careful how much computation your
mind
is doing and taking for granted. You make many assumptions just by looking
at the pic and saying these are signs that this man is conscious. And saying
that a handheld TV is some sort of model, ya that's making
Richard:Interesting, but I am afraid that whenever I see someone report a
project
to collect all the world's knowledge in a nice, centralized format (Cyc,
and Daughters-of-Cyc) I cannot help but think of one of the early
chapters in Neal Stephenson's Quicksilver, where Wilkins, Leibniz and
others
Steve: I have been advocating fixing the brain shorts that lead to problems,
rather than jerking the entire world around to make brain shorted people happy.
Which brain shorts? IMO the brain's capacity for shorts in one situation is
almost always a capacity for short-cuts in another - and
Steve/Stephen: I am planning to archive all conversations. This is pretty
simple with text, but when things move into real-time moving images from which
to understand the world, this takes a little more storage.
No one's yet actually trying to develop movie AI/AGI - an intelligence that
Bob: I'm doing stuff with robotics which is mostly about processing
sequences of images (I call the offline playbacks used for parameter
optimisation dream sequences), although probably what I'm doing
doesn't qualify as AGI in a strict sense - it's more reminiscent of
the Grand/Urban Challenge
Steve:Presuming that you do NOT want to store all of history and repeatedly
analyze all of it as your future AGI operates, you must accept MULTIPLE
potentially-useful paradigms, adding new ones and trashing old ones as more
information comes in. Our own very personal ideas of learning and
Will:And you are part of the problem insisting that an AGI should be tested
by its ability to learn on its own and not get instruction/help from
other agents be they human or other artificial intelligences.
I insist[ed] that an AGI should be tested on its ability to solve some
*problems* on its
Mark Waser:Several comments . . . .
First, this work is hideously outdated. The author cites his own reading
for some chapters he produced in 1992.
His claim that the dominant paradigms for studying language comprehension
imply that it is an archival process is *at best* hideously outdated --
Preparation for Situated Action
http://psychology.emory.edu/cognition/barsalou/papers/Barsalou_DP_1999_situated_comprehension.pdf
This is what Stephen and I were discussing a while back - but it neatly
names the alternative approaches to language. Most AGI language
comprehension treats it as
... on this:
http://www.adaptiveai.com/news/index.htm
Towards Commercialization
It's been a while. We've been busy. A good kind of busy.
At the end of March we completed an important milestone: a demo system
consolidating our prior 10 months' work. This was followed by my annual
John G: human musical pattern extrapolation fidelity is a sort of an
averaging of the human mind's full
capability of an astonishingly robust pattern recognizing ability...I feel
that our modern audial
pattern recognition ability has been extremely dumbed down
The arts as seen by a
John:The synchronous melodies of the crickets strumming their legs, changes
harmony as the wind moves warmth. The reeds vibrate; the birds, fearing
the snake, break their rhythmic falsetto polyphonies and flutter away to new
pastures.
But with humans, pattern-breaking and the seeking of
Steve:You can read the same passage and completely understand what it says.
However, when YOU (and not some imperfect computer program that you might
design and write) sit down and carefully identify parts of speech, construct
the potential diagrams for the sentence, go through domain-specific
Unsolicited, specially for me personally, today:
Artificial General Intelligence (Cognitive Technologies)
by Ben Goertzel
RRP: £46.00
Price: £30.36
You Save: £15.64 (34%)
Steve,
Briefly, my thought re a super-medi-wiki is that it only presents
theories/contenders rather than definitive answers - and there must be some
ratings/voting system. Yes, that favours conservative thinking, which may become
out-of-date. But users will still look for outsider ideas, and it
Steve/MT:
My off-the-cuff thought here is that a central database, organised on some
open source basis getting medical professionals continually to contribute and
update, which would enable people to immediately get a run-down of the major
possible causes (and indeed minor possible
be that if one has tons of data, he can derive pretty good
predictions.
Mike Tintner wrote:
Steve/MT:
My off-the-cuff thought here is that a central database,
organised on some open source basis getting medical
professionals continually to contribute and update
Steve,
This is more or less where I came into this group. You've picked a, if not the,
classic AGI problem. The problem that distinguishes it from narrow AI.
Problematic, no right answer. And every option could often be wrong. I tried to
open a similar problem for discussion way back - how do
, computer hardware,
nuclear power stations, sick plants etc.?
Mike,
On 5/14/08, Mike Tintner [EMAIL PROTECTED] wrote:
This is more or less where I came into this group. You've picked a, if not
the, classic AGI problem. The problem that distinguishes it from narrow AI.
Problematic
Joseph H:
Mike, what is your stance on vector images?
--
Hi Joe,
What's the point of this question? Is it something like: geometry can be used
to analyse any shapes?
Joe,
Thanks for reply - yes, I thought you meant something like this, but it's good
to have it spelled out.
I think you're making what seems to me to be a v. common mistake among AGI-ers.
Yes, you can reduce any image whatsoever on a computer screen, to some set of
mathematical
Joe,
And here's a perhaps easier [?], certainly more commonplace set - how will
your system recognize each is Madonna:
http://www.the-planets.com/madonna/madonna_200.jpg
http://www.ouvre.com/wp-content/banniere-itms-madonna.png
Boris: I define intelligence as an ability to predict/plan by discovering
projecting patterns within an input flow.
IOW a capacity to generalize. A general intelligence is something that
generalizes from incoming info. about the world.
Well, no it can't be just that. Look at what you write
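As a minimal illustration of the definition under debate here - this is my own sketch, not Boris's system - "discovering projecting patterns within an input flow" can be as simple as a first-order frequency model over a symbol stream:

```python
from collections import Counter, defaultdict

# Illustrative sketch only: "predict by discovering patterns within an
# input flow" as a first-order successor-frequency model.
class FlowPredictor:
    def __init__(self):
        self.counts = defaultdict(Counter)

    def observe(self, flow):
        # Learn which symbol tends to follow which in the input flow.
        for a, b in zip(flow, flow[1:]):
            self.counts[a][b] += 1

    def predict(self, symbol):
        # Generalize from past input: the most frequent successor.
        following = self.counts[symbol]
        return following.most_common(1)[0][0] if following else None

p = FlowPredictor()
p.observe("abababac")
print(p.predict("a"))  # "a" was most often followed by "b"
```

Whether generalization of this statistical kind exhausts "general intelligence" is, of course, exactly the point being contested in the exchange above.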
Right on. Everything I've read, esp. Grandin, strongly suggests that autism is
crucially hypersensitivity rather than an emotional disorder. If every time
the normal person touched someone, they got the equivalent of an electric
shock, they'd stay away from people too. [Thanks for your previous
Jim,
I doubt that your specification equals my individualization.
If I want to be able to recognize the individuals, Curtis/Brian/Carl/ and
Billi Bromer, only images will do it:
http://www.dunningmotorsales.com/IMAGES/people/Curtis%20Bromer.jpg
http://www.dundee.ac.uk/psychology/taharley/pcgn_harley_review.pdf
Richard's cowriter above reviews the state of cognitive neuropsychology,
[and the Handbook of Cognitive Neuropsychology] painting a picture of v.
considerable disagreement in the discipline. I'd be interested if anyone can
Actually, the sound of language isn't just a subtle thing - it's
foundational. Language is sounds first, and letters second (or third/fourth
historically).
And the sounds aren't just sounds - they express emotions about what is
being said. Not just emphases per one earlier post.
You could
A nice analogy occurs to me for NLP - processing language without the
sounds.
It's like processing songs without the music.
---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
) meaning of by rather a
wide margin. That probably explains your puzzlement in this case.
Richard Loosemore
Mike Tintner wrote:
I'm not quite sure why Richard would want to quote Harnad. Harnad's idea
of how the brain works depends on it first processing our immediate
sensory images as iconic
Hi Jim,
Funny, I was just thinking re the reply to your point, the second before I
read it. What I was going to say was: I read a lot of Harnad many years ago,
and I was a bit confused then about exactly what he was positing re the
intermediate levels of processing - iconic/categorical.
YKY : Logic can deal with almost everything, depending on how much effort
you put in it =)
LES sanglots longs. des violons. de l'automne.
Blessent mon cour d'une langueur monotone.
You don't just read those words, (and most words), you hear them. How's
logic going to hear them?
YOY YKY?
You
Ah mon dieu - c'est Blessent mon COEUR..
Nathan/MT:
You only need emotions when you're dealing with problems that are
problematic, ill-structured, and involving potentially infinite reasoning.
(Chess qualifies as that for a human being, not for a program).
Those with severed connections from the amygdala (the emotional machine of
I'm not quite sure why Richard would want to quote Harnad. Harnad's idea of
how the brain works depends on it first processing our immediate sensory
images as iconic representations - not 1m miles from Lakoff's image
schemas. He sees the brain as first developing some kind of horse graphics,
Matthias,
Your remarks stimulated some interesting thoughts for me re concept
organisation. I agree with what you seem to be implying that every concept
must be a cluster of different POV images, and/or image schemas in the
brain.
But that cluster must have a normal organisation.
The
MH: Since we cannot explain qualia, we can also never answer the question
whether qualia is necessary for AGI
Well, clearly you do need emotions, continually evaluating the
worthwhileness of your current activity and its goals/ risks and costs - as
set against the other goals of your
exclusively about humans - and other animals.
I'm not aware, offhand, of any AGI's that do deal with problematic
problems. Are you? And what problems?
From: Mike Tintner [mailto:[EMAIL PROTECTED]] wrote
Well, clearly you do need emotions, continually evaluating the
worthwhileness of your current
Charles,
We're still a few million miles apart :). But perhaps we can focus on
something constructive here. On the one hand, while, yes, I'm talking about
extremely sophisticated behaviour in essaywriting, it has generalizable
features that characterise all life. (And I think BTW that a dog
So what are the principles that enable animated characters and materials
here to react/move in individual continually different ways, where previous
characters reacted typically and consistently?
Ben Now this looks like a fairly AGI-friendly approach to controlling
animated characters ...
..or is it just that these figures respond differently to the slightest
difference in angle and force of impact?
Charles: as far as I can tell ALL modes of human thought
only operate within restricted domains.
I literally can't conceive where you got this idea from :). Writing an
essay - about, say, the French Revolution, future of AGI, flaws in Hamlet,
what you did in the zoo, or any of the other
The link from Lukas seems to suggest that applying this technology is
something of an art (is that right?):
As a side note, the fickle nature of the evolutionary approach is the
primary reason why euphoria isn't middleware; the team at NaturalMotion
helps you integrate it. Most often, you
Charles: Flaws in Hamlet: I don't think of this as involving general
intelligence. Specialized intelligence, yes, but if you see general
intelligence at work there you'll need to be more explicit for me to
understand what you mean. Now determining whether a particular deviation
from iambic
JAR/Russell:
This seems to be an example of what I was talking about in the other
thread - AI-ers starting with the set of sign systems and tools - and here
the kinds of intelligence - they know of personally, professionally, and
assume that they are the only kind, and encompass all types
Bob: Particularly I'd be interested in having
the robot learn a model of its own body kinematics - the beginnings of
a sense of self - based on data mining its sensory data and also using
experimental movements to confirm or refute hypotheses, which might to
a naive observer look like play.
Russell,
This is a definite start and I'm just trying to put together a reasoned
thesis on this area. You're absolutely right that this is essential to
understanding AGI - General Intelligence - and literally no one does have
other than tiny fragments of understanding here, either in AI/AGI
Moving on from my previous post, the key distinction in mentality between
the literate and the new multimediate mentality is between PRE-SEMIOTIC and
SEMIOTIC.
The presemiotic person starts from the POV of his specialist sign system
and medium, when thinking about solving particular
appreciate the project,
this list, and its contributors.
Mike Tintner wrote:
Matthias: a state description could be:
..I am in a kitchen. The door is open. It has two windows. There is a
sink. And three cupboards. Two chairs. A fly is on
the right window. The sun is shining. The color
-modal interaction, and
understanding the details of how the heuristics arise in the first place
from
the pressures of real-time processing constraints and deliberative
modelling.
Josh
On Tuesday 29 April 2008 11:12:28 am, Mike Tintner wrote:
Josh:You can't do it without using
both
Bob: I'm not totally convinced that having a high number of degrees of
freedom is actually necessary for the development of intelligence. Of
greater importance is the sensory capability, and the ways in which
that data is processed. A bird's beak is a far less elaborate tool
than a human hand or
Matthias: a state description could be:
...I am in a kitchen. The door is open. It has two windows. There is a
sink. And three cupboards. Two chairs. A fly is on
the right window. The sun is shining. The color of the chair is... etc.
etc.
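Matthias's verbal state description could be rendered, purely for illustration (my sketch, not his actual representation), as a minimal symbolic structure:

```python
# Illustrative sketch: the kitchen state description as flat symbolic facts.
state = {
    "location": "kitchen",
    "door": {"open": True},
    "windows": 2,
    "objects": ["sink", "cupboard", "cupboard", "cupboard", "chair", "chair"],
    "fly": {"on": "right window"},
    "sun": "shining",
}

# Such a description enumerates discrete facts, but carries none of the
# spatial and visual relations a POV image of the same kitchen would.
assert state["door"]["open"]
assert state["objects"].count("cupboard") == 3
```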
MT: http://honolulu.hawaii.edu/distance/sci122/Programs/p3/Rorschach.gif
(Oh - and a, linas, Bob, Mark, et al - can we agree that there is no way
for maths to process that image, period?)
Mark:No. I strongly disagree with your assertion. What you believe you are
processing (w)holistically can
a basically
adaptive program that pace Ben's could develop something like hide-and-seek
independently, after learning to fetch, is hard enough - or a maze-running
creature that could, say, learn to climb over maze walls and not just run round
them..
Mike,
On 4/24/08, Mike Tintner [EMAIL
BillK: MT: So what you must tell me is how your or any geometrical system
of analysis
is going to be able to take a rorschach and come up similarly with a
recognizable object or creature. Bear in mind, your system will be given
no
initial clues as to what objects or creatures are suitable as
Stephen:Mike, have you given any thought to how deaf and blind humans become
mentally competent?
Certainly. By using their touch, smell, kinaesthetic and the other
sensorimotor sensations of their own body to get to know the world. Blind
people can draw - they can draw outlines of
experience of its subject
matter. So my question is: what's the difference? (Unless it really is
just holding back those claims of text understanding w/o real-world
sensory data-- which is a fine point.)
On Wed, Apr 23, 2008 at 9:07 PM, Mike Tintner [EMAIL PROTECTED]
wrote:
Abram,
Just
Vlad: I agree that some kind of simulation is necessary, probably something
equivalent on high level to a 3D vector sketch of the events
developing in time, containing actors, where necessary structural
schemes of their bodies interacting with structure of the scene, etc.
The recurrent, but
MW: I see all your references are reinforcing the need for grounding and
some
showing how grounding *can* be accomplished by images (among many other
methods :-), but I have yet to find any of your references clearly saying
all meanings must be grounded BY IMAGES. That was the basis for my last
Ben/Joshua:
How do you think the AI and AGI fields relate to the embodied grounded
cognition movements in cog. sci? My impression is that the majority of
people here (excluding you) still have only limited awareness of them -
are still operating in totally doomed defiance of their
I think one can now present a convincing case why any symbolic/linguistic
approach to AGI, that is not backed by imaginative simulation, simply will
not work. For example, any attempt to build an AGI with a purely symbolic
database of knowledge mined from the Net or other texts, is doomed.
Abram,
Both to-the-point responses. One: how much, you're asking, are statements
about movement central to language? Extremely central. That's precisely why
we have this core general activity/movement language that we all share -
all those very basic movement words - we use them so often.
Abram,
Just to illustrate further, here's the opening lines of today's Times sports
report on a football match.[Liverpool v Chelsea] How on earth could this be
understood without massive imaginative simulation? [Stephen?] And without
mainly imaginative memories of football matches?
John
[Sci Am] May 08
The first generation of World Wide Web capabilities
rapidly transformed retailing and information
search. More recent attributes such as blogging,
tagging and social networking, dubbed Web 2.0,
have just as quickly expanded people's ability not just
to consume online information
Current Directions in Psychological Science - April 2008 - In Press
http://www.psychologicalscience.org/journals/cd/17_2_inpress/Barsalou_completed.pdf
THE DOMINANT THEORY IN COGNITIVE SCIENCE
Across diverse areas of psychology, computer science, linguistics, and
philosophy, the
dominant
holistic images.
Stephen L. Reed
Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860
- Original Message
From: Mike Tintner [EMAIL PROTECTED]
To: agi@v2.listbox.com
Cc: dan michaels
Pei: I believe AGI is basically a theoretical problem, which will be solved
by a single person or a small group, with little funding
How do you define that problem?
Pei: I don't really want
a big gang at now (that will only waste the time of mine and the
others), but a small-but-good gang, plus more time for myself ---
which means less group debates, I guess. ;-)
Alternatively, you could open your problems for group discussion
think-tanking... I'm
Steve:If you've
got a messy real-world problem, you know little, if you have an
algorithm giving the solution, you know all.
This is the bit where, like most, you skip over the nature of AGI - messy
real-world problems. What you're saying is: hey if you've got a messy problem,
it's great, nay
Brad:What's really impressive is how natural the leg movements are
So natural, I wondered whether it wasn't a hoax with real people in there.
MW/MT: Correct me, but I haven't seen any awareness in AI of the huge
difficulties that result from the problem of: how do you test acquired
knowledge?
MW:You're missing seeing it. It's generally phrased as converting data to
knowledge or concept formulation and it's currently generally
My broad point is that there is only one way to test knowledge ultimately -
physically.
Science demands physical evidence for everything.
It then has in effect a graded system of veracity (although there is no
formalised system). The truest knowledge comes from direct physical observation
Richard:the idea
that perception is [the] fairly passive reception of impressions... is
so old and out of date that if you pick up a textbook on cognitive
psychology printed 30 years ago you will find it dismissed as wrong.
This is the issue of top-down vs bottom-up processing
No it isn't.
Richard:Now, if what you *meant* to talk about was links between action and
perception, all well and good, but I was just addressing the above
comment of yours.
I'm certainly not reiterating an ancient debate. This has been from the
start an exploratory thread. Prinz summarises fairly well
Richard:Personally, I think that embodiment makes the development process
vastly
easier, but this black and white declaration of IMPOSSIBLE! that you
shout seems to go too far.
Well, that's the point of discussing this - yes, the culture still allows
your position. But the new cog sci
Richard,
Just an addendum to my question - I'm quite happy to take just one
disembodied subject area. But - and this is an interesting point - since
we're talking A*General*I - there should really be at least two.
MW: I believe that I was also quite clear with my follow-on comment of a
cart
before the horse problem. Once we know how to acquire and store
knowledge, then we can develop metrics for testing it -- but, for now,
it's too early to go after the problem as well.
You're basically agreeing with
Impressive. Especially their Rhex robot - v. resilient in v. different
terrains:
http://www.youtube.com/watch?v=wIuRVr8z_WE&feature=related
Peruse the video:
http://www.youtube.com/watch?v=W1czBcnX1Ww&feature=related
Of course, they are only showing the best stuff. And I am sure there
is
I want to return to what seems to me the high-school-naive idea of how an AGI's
or any body of knowledge can and/or does grow - i.e. linearly, mathematically
and logically.
Correct me, but I haven't seen any awareness in AI of the huge difficulties
that result from the problem of: how do you
of his system.
If you want to send me something, I'll gladly look at it and reply offline -
although I'm real busy at the mo. answering *your* last question!
Best
Mike Tintner wrote:
Richard,
I can't swear that I did read it. I read a paper of more or less exactly
that length some time ago
use s.o. else's title for your
site. It doesn't bespeak originality.
Mike Tintner wrote:
Richard: I already did publish a paper doing exactly that ... haven't
you read it?
Yep. And I'm still mystified. I should have added that I have a vague
idea of what you mean by complex system and its
going to get the best brains
[or computers] that money can buy - loads loads of them. And get them to
come up with an idea. Pretty original, huh?
That's not an idea, Richard. *You* have to come up with that..
Mike Tintner wrote:
Richard,
Again, reread me precisely.
Saying your system
the best brains
[or computers] that money can buy - loads loads of them. And get them to
come up with an idea. Pretty original, huh?
That's not an idea, Richard. *You* have to come up with that..
P.S. Yes I did read it. I have it in a folder.
Mike Tintner wrote:
Richard,
Again, reread me
http://www.sciencedaily.com/releases/2008/03/080329122121.htm
http://cordis.europa.eu/ictresults/index.cfm/section/news/tpl/article/BrowsingType/Features/ID/89632
New Breed Of Cognitive Robot Is A Lot Like A Puppy
ScienceDaily (Mar. 31, 2008) - Designers of artificial cognitive systems
have
Charles H: Due to this, the resource management should not be algorithmic,
but
free to adapt to the amount of resources at hand. I'm intent on a
economic solution to the problem, where each activity is an economic
actor.
The idea of economics is v. interesting and important. I think - I'm
, there is no such thing as an AGI at the moment. And there never
will be if machines can't do what the brain does - which is, first and last,
and all the time, look at the world in images as wholes.
MW: Mike Tintner
Well, guys, if the only difference between an image and, say, a symbolic
- verbal
it to the
main AGI consciousness while telling that consciousness that the picture is
what it actually sees?
- Original Message -
From: Mike Tintner
To: agi@v2.listbox.com
Cc: dan michaels
Sent: Monday, March 31, 2008 5:56 AM
Subject: Re: [agi] Symbols
You're
401 - 500 of 788 matches