DZ: AGI researchers do not think of intelligence as what you think of as a
computer program -- some rigid sequence of logical operations programmed by a
designer to mimic intelligent behavior.
1. Sequence/Structure. The concept I've been using is not that a program is a
"sequence" of operations
hat images you associate with the latter two?
Since you prefer to use person as example, let me try the same. All of
my experience about 'Mike Tintner' is symbolic, nothing visual, but it
still makes you real enough to me...
I'm sorry if it sounds rude
Pei,
You attribute to symb
Jiri: Mike,
If you think your AGI know-how is superior to the know-how of those
who already built testable thinking machines then why don't you try to
build one yourself?
Jiri,
I don't think I know much at all about machines or software & never claim
to. I think I know certain, only certain, t
or in some other way abolish
your ignorance about technical subjects - exactly what you're asking
others to do.
Otherwise, you have to admit the folly of trying to compel any such folks
to move from their hard-earned perspectives, if you're not willing to do
that yourself.
Terren
Matt,
This issue is so fundamental - and AGI-ers/ rational thinkers simply can't
absorb it.
The answer is - in part, yes. You can always analyse existing scripts/
jokes/ types of modern art etc. etc. and arrive at underlying formulae. And
there is a great deal of computer art out there in pr
Matt: Suppose you write a program that inputs jokes or cartoons and outputs
whether or not they are funny
Narrow AI : Stereotypical/ Patterned/ Rational
AGI : Stereotype-/Pattern-breaking/Creative
"What you rebellin' against?"
"Whatcha got?"
Marlon Brando. The Wild One (1953) On screen
Matt,
Humor is dependent not on inductive reasoning by association, reversed or
otherwise, but on the crossing of whole matrices/ spaces/ scripts .. and
that good old AGI standby, domains. See Koestler esp. for how it's one
version of all creativity -
http://www.casbs.org/~turner/art/deacon_
Here you go - should be dead simple to analyse formula - and produce
program:)
http://www.energyquest.ca.gov/games/jokes/light_bulb.html
How many software engineers does it take to change a light bulb? Two. One
always leaves in the middle of the project.
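Tintner's "produce program :)" quip can be taken literally for this one narrow case. A minimal sketch of the light-bulb joke "formula": a fixed template plus a lookup table of stereotype punchlines. The template and all punchlines here are invented for illustration.

```python
# A minimal sketch of the light-bulb joke "formula": a fixed template
# plus a table of (profession, stereotype-punchline) pairs.
# All entries are invented for illustration.
TEMPLATE = "How many {who} does it take to change a light bulb? {punchline}"

JOKES = {
    "software engineers": "None. That's a hardware problem.",
    "psychiatrists": "Only one, but the bulb has to really want to change.",
    "surrealists": "A fish.",
}

def light_bulb_joke(who):
    """Fill the template for a known profession, or admit defeat."""
    punchline = JOKES.get(who)
    if punchline is None:
        return None  # the formula only covers stereotypes it already knows
    return TEMPLATE.format(who=who, punchline=punchline)

for who in JOKES:
    print(light_bulb_joke(who))
```

The point cuts both ways, of course: the template reproduces existing jokes trivially, but inventing a *new* entry for the table is precisely the creative step the program does not perform.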
-
Matt: Humor detection obviously requires a sophisticated language model and
knowledge of popular culture, current events, and what jokes have been told
before. Since entertainment is a big sector of the economy, an AGI needs all
human knowledge, not just knowledge that is work related.
In many
Obviously you have no plans for endowing your computer with a self and a
body, that has emotions and can shake with laughter. Or tears.
Actually, many of us do. And this is why your posts are so problematical.
You invent what *we* believe and what we intend to do. And then you
criticize your
t recognize it.
You're looking at the blueprints of an F-14 Tomcat and arguing that the wings
don't move right for a bird and, besides, it's too unstable for a human to
fly (unassisted :-).
Read the papers in the first link and *maybe* we can have a useful
conversation . . . .
-
There is no computer or robot that keeps getting physically excited or
depressed by its computations. (But it would be a good idea).
you don't even realize that laptops (and many other computers -- not to
mention appliances) currently do precisely what you claim that no computer
or robot does.
Emotional laptops? On 2nd thoughts it's like Thomas the Tank Engine... If
s.o. hasn't done it already, there is big money here. Even bigger than you
earn, if that's humanly possible. Lenny the Laptop...? A really personal
computer. Whatddya think? Ideas? [Shh, darling, Lenny's thinking...]
-
Matt,
Yes embodiment is essential to the unconscious brain's recognizing the joke
in the first place. If the joke is, say, about a guy getting his cock caught
in a zipper, it's your embodied identification with him, that has you
doubling up and clutching your guts with laughter. Without a bod
[n.b. my posts are arriving in a weird order]
Jiri: MT>>Without a body, you couldn't understand the joke.
False. Would you also say that without a body, you couldn't understand
3D space ?
Jiri,
You have to offer a reason why something is "False" :). You're saying that
"3D space *can* be understood without a body"?
Er, false.
http://en.wikipedia.org/wiki/SHRDLU
And SHRDLU can generally recognize whether any object is "in" any other
object - whether a doll is in a box or lying between two walls, whether a
box is in another box,
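In a SHRDLU-style blocks world, "in" can be checked geometrically because every object is (roughly) an axis-aligned shape. A toy sketch of that kind of containment test follows; the Box class and the coordinates are illustrative inventions, not SHRDLU's actual code.

```python
from dataclasses import dataclass

# A sketch of the kind of hard-coded spatial test a blocks-world program
# like SHRDLU can do: "in" reduces to axis-aligned box containment.
# The Box class and example coordinates are illustrative, not SHRDLU's.
@dataclass
class Box:
    x0: float; y0: float; z0: float  # min corner
    x1: float; y1: float; z1: float  # max corner

def is_in(inner: Box, outer: Box) -> bool:
    """True if inner lies entirely within outer, axis by axis."""
    return (outer.x0 <= inner.x0 and inner.x1 <= outer.x1 and
            outer.y0 <= inner.y0 and inner.y1 <= outer.y1 and
            outer.z0 <= inner.z0 and inner.z1 <= outer.z1)

doll = Box(1, 1, 0, 2, 2, 1)
toybox = Box(0, 0, 0, 5, 5, 3)
print(is_in(doll, toybox))   # True: the doll is in the box
print(is_in(toybox, doll))   # False
```

Which is also why Tintner's rejoinder below has bite: the test works only while "in" means box containment, and says nothing about a key "in" a lock or a thought "in" a mind.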
Jiri,
Quick answer because in rush. Notice your "if" ... Which programs actually
do understand any *general* concepts of orientation? SHRDLU I will gladly
bet, didn't...and neither do any others.
The v. word "orientation" indicates the reality that every picture has a
point of view, and refe
data than when playing with 3D
toy-worlds, but in principle, it's possible to understand 3D without
having a body.
Jiri
Otherwise, you could always claim
that a machine doesn't understand anything because only humans can do
that.
-- Matt Mahoney, [EMAIL PROTECTED]
--- On Thu, 9/11/08, Mike Tintner <[EMAIL PROTECTED]> wrote:
From: Mike Tintner <[EMAIL PROTECTED]>
Subject: Re: [agi] Artificial
Mike Tintner <[EMAIL PROTECTED]> wrote:
To "understand" is to "REALISE" what [on earth, or
in the [real] world] is being talked about.
Matt: Nice dodge. How do you distinguish between when a computer realizes
something and when it just reacts as if it realizes
Matt,
"To understand/realise" is to be distinguished from (I would argue) "to
comprehend" statements.
The one is to be able to point to the real objects referred to. The other is
merely to be able to offer or find an alternative or dictionary definition
of the statements. A translation. Like the
Jiri and Matt et al,
I'm getting v. confident about the approach I've just barely begun to
outline. Let's call it "realistics" - the title for a new, foundational
branch of metacognition, that will oversee all forms of information, incl.
esp. language, logic, and maths, and also all image for
Matt,
What are you being so tetchy about? The issue is what it takes for any
agent, human or machine, to understand information.
You give me an extremely complicated and ultimately weird test/paper, which
presupposes that machines, humans and everyone else can only exhibit, and be
tested o
Matt: How are you going to understand the issues behind programming a
computer for human intelligence if you have never programmed a computer?
Matt,
We simply have a big difference of opinion. I'm saying there is no way a
computer [or agent, period] can understand language if it can't basic
Terren: I send this along because it's a great example of how systems that
self-organize can result in structures and dynamics that are more complex
and efficient than anything we can purposefully design. The applicability
to
the realm of designed intelligence is obvious.
Vlad: Even if
Steve:View #2 (mine, stated from your approximate viewpoint) is that simple
programs (like Dr. Eliza) have in the past and will in the future do things
that people aren't good at. This includes tasks that encroach on
"intelligence", e.g. modeling complex phenomena and refining designs.
Steve,
In
TITLE: Case-by-case Problem Solving (draft)
AUTHOR: Pei Wang
ABSTRACT: Case-by-case Problem Solving is an approach in which the
system solves the current occurrence of a problem instance by taking
the available knowledge into consideration, under the restriction of
available resources. It is dif
Ben,
I'm only saying that CPS seems to be loosely equivalent to wicked,
ill-structured problem-solving, (the reference to convergent/divergent (or
crystallised vs fluid) etc is merely to point out a common distinction in
psychology between two kinds of intelligence that Pei wasn't aware of in t
algorithmic ways...
ben
Ben,
It's hard to resist my interpretation here - that Pei does sound as if he is
being truly non-algorithmic. Just look at the opening abstract sentences.
(However, I have no wish to be pedantic - I'll accept whatever you guys say you
mean).
"Case-by-case Problem Solving is an approach in w
Ben:
Your language is unclear
Could you define precisely what you mean by an "algorithm"
Also, could you give an example of a computer program, that can be run on a
digital computer, that does not embody an "algorithm" according to your
definition?
thx
ben
Matt,
Thanks for reference. But it's still somewhat ambiguous. I could somewhat
similarly outline a "non-procedure procedure" which might include "steps" like
"Think about the problem" then "Do something, anything - whatever first comes
to mind" and "If that doesn't work, try something else."
Steve:question: Why bother writing a book, when a program is a comparable
effort that is worth MUCH more?
Well, because when you do just state basic principles - as you constructively
started to do - I think you'll find that people can't even agree about those -
any more than they can agree abou
[You'll note that arguably the single greatest influence on people's thoughts
about AGI here is Google - basically Google search - and that still means to
most text search. However, video search & other kinds of image search [along
with online video broadcasting] are already starting to transfo
Mike, Google has had basically no impact on the AGI thinking of myself or 95%
of the other serious AGI researchers I know..
When did you start thinking about creating an online virtual AGI?
---
agi
Archives: https://www.listbox.com/member/archive/
Ben,
Come again. Your thinking about a superAGI, and AGI takeoff, is not TOTALLY
dependent on Google? You would still argue that a superAGI is possible WITHOUT
a
Steve:
Thanks for wringing my thoughts out. Can you twist a little tighter?!
Steve,
A v. loose practical analogy is mindmaps - it was obviously better for Buzan to
develop a sub-discipline/technique 1st, and a program later.
What you don't understand, I think, in all your reasoning about "repai
Ben:I would not even know about AI had I never encountered paper, yet the
properties of paper have really not been inspirational in my AGI design
efforts...
Your unconscious keeps talking to you. It is precisely paper that mainly shapes
your thinking about AI. Paper has been the defining medium
Ben:Just for clarity: while I think that in principle one could make a
maths-only AGI, my present focus is on building an AGI that is embodied in
virtual robots and potentially real robots as well ... in addition to
communicating via language and internally utilizing logic on various levels..
Steve:
If I were selling a technique like Buzan then I would agree. However, someone
selling a tool to merge ALL techniques is in a different situation, with a
knowledge engine to sell.
The difference AFAICT is that Buzan had an *idea* - "don't organize your
thoughts about a subject in random
Matt: A more appropriate metaphor is that text compression is the altimeter
by which we measure progress. (1)
Matt,
Now that sentence is a good example of general intelligence - forming a new
connection between domains - altimeters and progress.
Can you explain how you could have arrived at i
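For what it's worth, Matt's altimeter is directly computable: the compression ratio of a text gives a crude numeric proxy for how much regularity the compressor-as-model has captured. A sketch using zlib as the stand-in compressor (my choice of tool for illustration, not Matt's actual benchmark):

```python
import zlib

# A crude instance of "compression as altimeter": the better a model
# captures a text's regularities, the smaller its compressed output.
# zlib is the stand-in model here, purely for illustration.
def compression_ratio(text: str) -> float:
    raw = text.encode("utf-8")
    return len(zlib.compress(raw, 9)) / len(raw)

repetitive = "the cat sat on the mat. " * 40
randomish = "xq7!kv92@zr#m8w&yd4$tj6^ub1*ne3(po5)"

# Highly patterned text compresses far better than near-random text.
print(compression_ratio(repetitive) < compression_ratio(randomish))  # True
```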
Pei:In a broad sense, "formal logic" is nothing but
"domain-independent and justifiable data manipulation schemes". I
haven't seen any argument for why AI cannot be achieved by
implementing that
Have you provided a single argument as to how logic *can* achieve AI - or
to be more precise, Artifi
Ben: Mike:
(And can you provide an example of a single surprising metaphor or analogy
that has ever been derived logically? Jiri said he could - but didn't.)
It's a bad question -- one could derive surprising metaphors or analogies by
random search, and that wouldn't prove anything usef
ELY SUCCESSFUL
JESUS IS THE BEST RADIO PRODUCER IN THE BEANS.
MegaHAL is kinda creative and poetic, and he does generate some funky and
surprising metaphors ... but alas he is not an AGI...
-- Ben
On Sat, Sep 20, 2008 at 11:30 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:
Ben, Just to be clear, when I said "no argument re how logic will produce
AGI.." I meant, of course, as per the previous posts, "..how logic will
[surprisingly] cross domains etc". That, for me, is the defining characteristic
of AGI. All the rest is narrow AI.
---
So can *you* understand credit default swaps?
"Here's the scary part of today's testimony everyone seems to have missed:
SEC chairman Chris Cox's statement that the Credit Default Swap (CDS) market
is "completely unregulated." Its size? Somewhere in the $50 TRILLION
range."
-
Ben,
Are CDS significantly complicated then - as an awful lot of professional,
highly intelligent people are claiming?
So can *you* understand credit default swaps?
Yes I can, having a PhD in math and having studied a moderate amount of
mathematical finance ...
-
Thanks, Ben, Dmitri for replies.
Piecing through the notice below with my renowned ignorance, it occurs to me
to ask: does the brain/ cerebellum demonstrate as much general intelligence
and flexibility in its movements as in its consciously directed thinking?
... In its ability to vary muscle coordination patterns (& structur
[Comment: Aren't logic and common sense *opposed*?]
Discursive [logical, propositional] Knowledge vs Practical [tacit] Knowledge
http://www.polis.leeds.ac.uk/assets/files/research/working-papers/wp24mcanulla.pdf
a) Knowledge: practical and discursive
Most, if not all understandings of tradition
Ben and Stephen,
AFAIK your focus - and the universal focus - in this debate on how and whether
language can be symbolically/logically interpreted - is on *individual words
and sentences.* A natural place to start. But you can't stop there - because
the problems, I suggest, (hard as they alrea
David,
Thanks for reply. Like so many other things, though, working out how we
understand texts is central to understanding GI - and something to be done
*now*. I've just started looking at it, but immediately I can see that what the
mind does - how it jumps around in time and space and POV and
No current approach has the slightest idea how to do that, I
suggest. You can't do it by a surface approach, simply analysing how words
are used in however many million verbally related sentences in texts on the
net.
http://video.google.ca/videoplay?docid=-7933698775159827395&e
as staring"
I could say more about the temporal ordering of the story sentences, but you
all should get the idea about how Texai would read, and perhaps someday
compose, fictional descriptive passages.
-Steve
Stephen L. Reed
Artificial Intelligence Researcher
http://texai.org/blog
ht
Ben,
Er, you seem to be confirming my point. Tomasello from Wiki is an early child
development psychologist. I want a model that keeps going to show the stages of
language acquisition from say 7-13, on through teens, and into the twenties -
that shows at what stages we understand progressiv
red to be practical for reading, say, a random newspaper
story. But that is just what Cyc is for.
Anyway, the point is, understanding passages is not a new field, just
a neglected one.
--Abram
Ben: the reason AGI is so hard has to do with Santa Fe Institute style
complexity ...
Intelligence is not fundamentally grounded in any particular mechanism but
rather in emergent structures
and dynamics that arise in certain complex systems coupled with their
environments
Characterizing what
Ben: analogy is mathematically a matter of finding mappings that match certain
constraints. The traditional AI approach to this would be to search the
constrained space of mappings using some search heuristic. A complex systems
approach is to embed the constraints into a dynamical system and
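The "traditional AI approach" Ben describes can be made concrete: treat an analogy as a search through object mappings between two domains, keeping only mappings that preserve every relation. The solar-system/atom domains and the brute-force search below are my illustration of that framing, not Ben's actual method.

```python
from itertools import permutations

# A toy version of the "traditional AI" framing: analogy as search
# through the constrained space of relation-preserving mappings.
# The solar-system/atom example is illustrative, not Ben's code.
SOURCE = {("attracts", "sun", "planet"), ("orbits", "planet", "sun")}
TARGET = {("attracts", "nucleus", "electron"), ("orbits", "electron", "nucleus")}

def find_analogy(source, target):
    """Search all object mappings; return one preserving every relation."""
    src_objs = sorted({o for (_, a, b) in source for o in (a, b)})
    tgt_objs = sorted({o for (_, a, b) in target for o in (a, b)})
    for perm in permutations(tgt_objs, len(src_objs)):
        mapping = dict(zip(src_objs, perm))
        if all((r, mapping[a], mapping[b]) in target for (r, a, b) in source):
            return mapping
    return None

print(find_analogy(SOURCE, TARGET))  # {'planet': 'electron', 'sun': 'nucleus'}
```

The search heuristic Ben mentions would replace the blind `permutations` loop; the complex-systems alternative he goes on to describe would dispense with explicit enumeration altogether.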
Can't resist, Ben..
"it is provable that complex systems methods can solve **any** analogy problem,
given appropriate data"
Please indicate how your proof applies to the problem of developing an AGI
machine. (I'll allow you to specify as much "appropriate data" as you like -
any data, of cou
... but at least, it's funny, for those of us who get the joke ;-)
ben
Ben,
I must assume you are being genuine here - and don't perceive that you have not
at any point illustrated how complexity might lead to the solution of any
given general (domain-crossing) problem of AGI.
Your OpenCog design also does not illustrate how it is to solve problems - how
it is,
The foundation of the human mind and system is that we can only be in one
place at once, and can only be directly, fully conscious of that place. Our
world picture, which we and, I think, AI/AGI tend to take for granted, is
an extraordinary triumph over that limitation - our ability to concei
Colin:
1) Empirical refutation of computationalism...
.. interesting because the implication is that if anyone
doing AGI lifts their finger over a keyboard thinking they can be
directly involved in programming anything to do with the eventual
knowledge of the creature...they have already failed.
ed inference, rather than so often positioning all their
inferences/ideas within one "default context" ... for starters...
ben
'arguments from
under-informed-authority' ... I defer to the empirical reality of the
situation and would prefer that it be left to justify itself. I did not
make any of it up. I merely observed. . ...and so if you don't mind I'd
rather leave the issue there. ..
regards
Matthias: I think it is extremely important that we give an AGI no bias
about space and time, as we seem to have.
Well, I (& possibly Ben) have been talking about an entity that is in many
places at once - not in NO place. I have no idea how you would swing that -
other than what we already ha
whether the AGI is distributed or not.
I mentioned this point because your question has relations to the more
fundamental question whether and which bias we should give AGI for the
representation of space and time.
Original Message-
From: Mike Tintner [mailto:[EMAIL PROTECTED]
Sent: Sat
Matt:The problem you describe is to reconstruct this image given the highly
filtered and compressed signals that make it through your visual perceptual
system, like when an artist paints a scene from memory. Are you saying that
this process requires a consciousness because it is otherwise not
c
maths of creativity - and creative possibilities - in a given
medium. A somewhat formalised maths, since creators usually find ways to
transcend and change their medium - but useful nevertheless. Is such a maths
being pursued?
Brad:Unfortunately,
as long as the mainstream AGI community continue to hang on to what
should, by now, be a thoroughly-discredited strategy, we will never (or
too late) achieve human-beneficial AGI.
Brad,
Perhaps you could give a single example of what you mean by non-human
intelligence. Wh
John,
Sorry if I missed something, but I can't see any attempt by you to
schematise/ classify emotions as such, e.g.
melancholy, sorrow, bleakness...
joy, exhilaration, euphoria..
(I'd be esp. interested in any attempt to establish a gradation of emotional
terms).
Do you have anything like
Ben,
V. interesting and helpful to get this pretty clearly stated general position.
However:
"To put it simply, once an AGI can understand human language we can teach it
stuff."
you don't give any prognostic view about the acquisition of language. Mine is -
"in your dreams." Arguably, most AG
This is fine and interesting, but hasn't anybody yet read Kauffman's
Reinventing the Sacred (publ this year)? The entire book is devoted to this
theme and treats it globally, ranging from this kind of emergence in
physics, to emergence/evolution of natural species, to emergence/deliberate
crea
Ben:I didn't read that book but I've read dozens of his papers ... it's cool
stuff but does not convince me that engineering AGI is impossible ... however
when I debated this with Stu F2F I'd say neither of us convinced each other ;-)
...
Ben,
His argument (like mine), is that AGI is *algorith
AGI do the right things.
This could be an important problem for the development of AGI because in my
opinion the difference between a human and a monkey is only fine tuning. And
nature needed millions of years for this fine tuning.
I think there is no way to avoid this problem but this problem
Matthias (cont),
Alternatively, if you'd like *the* creative (& somewhat mathematical)
problem de nos jours - how about designing a "bail-out" fund/ mechanism for
either the US or the world, that will actually work? No show-stopper for
your AGI? [How would you apply logic here, Abram?]
Ben,
I am frankly flabbergasted by your response. I have given concrete example
after example of creative, domain-crossing problems, where obviously there is
no domain or frame that can be applied to solving the problem (as does
Kauffman) - and at no point do you engage with any of them - or h
Russell : Whoever said you
need to protect ideas is just shilly-shallying you. Ideas have no
market value; anyone capable of taking them up, already has more ideas
of his own than time to implement them.
In AGI, that certainly seems to be true - ideas are crucial, but require
such a massive a
Terren:autopoieisis. I wonder what your thoughts are about it?
Does anyone have any idea how to translate that biological principle into
building a machine, or software? Do you or anyone else have any idea what it
might entail? The only thing I can think of that comes anywhere close is the
Car
Matt: a proposed design for a globally distributed artificial general
intelligence (AGI) for the purpose of automating the world economy. The
estimated value is on the order of US $1 quadrillion
That'll only cover Paulson's next recovery plan. You'll need a bit more.
-
re challenging, forcing the entities to
solve harder and harder problems to stay alive, corresponding with ever
increasing intelligence. At some distant point we may perhaps arrive at
something with human-level intelligence or beyond.
Terren
As I understand the way you guys and AI generally work, you create
well-organized spaces which your programs can systematically search for
options. Let's call them "nets" - which have systematic, well-defined and
orderly-laid-out connections between nodes.
But it seems clear that natural syste
be drawn & explored here.
And my first attempt here may be rather like my first attempt at defining
programs a long time ago, which failed to distinguish between sequences and
structures of instructions - and was then pounced on by AI-ers.
I guess the obvious follow up question is when your systems search among
options for a response to a situation, they don't search in a systematic way
through spaces of options? They can just start anywhere and end up anywhere in
the system's web of knowledge - as you can in searching the Web its
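The contrast in that question can be sketched directly: the same small "net" of knowledge traversed systematically (breadth-first, fixed order, guaranteed coverage) versus by an undirected walk that can "start anywhere and end up anywhere". The toy graph is purely illustrative.

```python
import random
from collections import deque

# The same "net" searched two ways: systematically (breadth-first)
# versus by an undirected random walk. Toy graph, purely illustrative.
NET = {
    "joke": ["script", "surprise"],
    "script": ["restaurant", "doctor"],
    "surprise": ["reversal"],
    "restaurant": [], "doctor": [], "reversal": [],
}

def systematic_search(start):
    """Breadth-first: visits every reachable node in a fixed order."""
    seen, queue, order = {start}, deque([start]), []
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in NET[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return order

def random_walk(start, steps, seed=0):
    """Start anywhere, wander anywhere: no guarantee of coverage."""
    rng = random.Random(seed)
    node, path = start, [start]
    for _ in range(steps):
        if not NET[node]:
            break
        node = rng.choice(NET[node])
        path.append(node)
    return path

print(systematic_search("joke"))  # every node, in breadth-first order
print(random_walk("joke", 5))     # some path; coverage not guaranteed
```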
Pei:The NARS solution fits people's intuition
You guys keep talking - perfectly reasonably - about how your logics do or
don't fit your intuition. The logical question is - how - on what
principles - does your intuition work? What ideas do you have about this?
What I should have added is that presumably your intuition must work on
radically different principles to your logics - otherwise you could
incorporate it/them
XML files containing appropriate
nodes/links and importing them) or one can start from a blank slate and let the
whole structure emerge as it will...
Ben G
On Sat, Oct 11, 2008 at 9:38 AM, Mike Tintner <[EMAIL PROTECTED]> wrote:
Ben,
Some questions then.
You don'
Ben,
I think that's all been extremely clear -and I think you've been very good in
all your different roles :). Your efforts have produced a v. good group -and a
great many thanks for them.
And, just to clarify: the fact that I set up this list and pay $12/month for
its hosting, and deal wit
ed way.
... some loose ends in reply to a message from a few days back ...
Mike Tintner wrote:
***
Be honest - when and where have you ever addressed creative problems?
[Just count how many problems I have raised)..
***
In my 1997 book FROM COMPLEXITY TO CREATIVITY
***
Colin,
Yes you and Rescher are going in a good direction, but you can make it all
simpler still, by being more specific..
We can take it for granted that we're talking here mainly about whether
*incomplete* creative works should be criticised.
If we're talking about scientific theories, then b
Colin:
others such as Hynna and Boahen at Stanford, who have an unusual hardware
neural architecture...(Hynna, K. M. and Boahen, K. 'Thermodynamically
equivalent silicon models of voltage-dependent ion channels', Neural
Computation vol. 19, no. 2, 2007. 327-350.) ...and others ... then things w
Will: There is a reason why lots of the planet's biomass has stayed as
bacteria. It does perfectly well like that. It survives.
Too much processing power is a bad thing, it means less for
self-preservation and affecting the world. Balancing them is a tricky
proposition indeed
Interesting thought. B
Ben: I defy you to give me any neuroscience or cog sci result that cannot be
clearly explained using computable physics.
Ben,
As discussed before, no current computational approach can replicate the
brain's ability to produce a memory in what we can be v. confident are only a
few neuro
Ben: I don't have time to summarize all that stuff I already wrote in emails
either ;-p
Ben,
I asked you to "at least *label* what your "explanation" of scientific
creativity is.. Just a label, Ben. Books that are properly organized and
constructed (and sell), usually do have clearly labelled
Trent> : If you disagree with my paraphrasing of your opinion Colin, please
feel free to rebut it *in plain english* so we can better figure out
what the hell you're on about.
Well, I agree that Colin hasn't made clear what he stands for
[neo-]computationally. But perhaps he is doing us a se
why don't you start AGI-tech on the forum? enough people have expressed an
interest - simply reconfirm - and start posting there
- Original Message -
From: Derek Zahn
To: agi@v2.listbox.com
Sent: Wednesday, October 15, 2008 9:09 PM
Subject: RE: [agi] META: A possible re-fo
Trent: Oh you just hit my other annoyance.
"How does that work?"
"Mirror neurons"
IT TELLS US NOTHING.
Trent,
How do they work? By observing the shape of humans and animals, ("what
shape they're in"), our brain and body automatically *shape our bodies to
mirror their shape*, (put the
Trent,
I should have added that our brain and body, by observing the mere
shape/outline of others bodies as in Matisse's Dancers, can tell not only
how to *shape* our own outline, but how to "dispose" of our *whole body* -
we transpose/translate (or "flesh out") a static two-dimensional body
David:Mike, these statements are an *enormous* leap from the actual study of
mirror neurons. It's my hunch that the hypothesis paraphrased above is
generally true, but it is *far* from being fully supported by, or understood
via, the empirical evidence.
[snip] these are all original