Re: [agi] Artificial humor... P.S

2008-09-13 Thread Matt Mahoney
Mike, I understand what "understand" means. It is easy to describe what it 
means to another human. But to a computer you have to define it at the level of 
moving bits between registers. If you have never written software, you won't 
understand the problem.

So does the following program understand?

  #include <stdio.h>
  int main() { printf("Ah, now I understand!\n"); return 0; }

You need a precise test. That is what Turing did.


-- Matt Mahoney, [EMAIL PROTECTED]


--- On Sat, 9/13/08, Mike Tintner [EMAIL PROTECTED] wrote:

 From: Mike Tintner [EMAIL PROTECTED]
 Subject: Re: [agi] Artificial humor... P.S
 To: agi@v2.listbox.com
 Date: Saturday, September 13, 2008, 12:18 AM
  Matt:  How are you going to understand the issues
 behind programming a 
  computer for human intelligence if you have never
 programmed a computer?
 
 Matt,
 
 We simply have a big difference of opinion. I'm saying
 there is no way a 
 computer [or agent, period] can understand language if it
 can't basically 
 identify/*see* (and sense) the real objects - (and
 therefore doesn't know 
 what) - it's talking about. Hence people say when they
 understand at last - 
 ah now I see.. now I see what you're
 talking about.. now I get the 
 picture.
 
 The issue of what faculties are needed to understand
 language (and be 
 intelligent)  is not, *in the first instance,* a matter of
 programming.  I 
 suggest you may have been v. uncharacteristically short in
 this exchange, 
 because you may not like the starkness of the message. It
 is stark, but I 
 believe it's the truth. 
 
 
 
 




Re: [agi] Artificial [Humor ] vs Real Approaches to Information

2008-09-12 Thread Mike Tintner

Jiri and Matt et al,

I'm getting v. confident about the approach I've just barely begun to 
outline.  Let's call it realistics - the title for a new, foundational 
branch of metacognition, that will oversee all forms of information, incl. 
esp. language, logic, and maths, and also all image forms, and the whole 
sphere of semiotics.


The basic premise:

to understand a piece of information and its information objects (e.g. 
words) is to realise (or know) how they refer to real objects in the 
real world (and, ideally, and often necessarily, to be able to point to 
and engage with those real objects).


- this includes understanding/realising when they are unreal - when they 
do NOT refer directly to real objects, but for example to sur-real or 
metaphorical or abstract or non-existent objects


Realistics recognizes that understanding involves, you could say, 
object-ivity.


Complementarily,

to 'disunderstand' is to fail to see how information objects refer to real 
objects.


to be confused is not only to fail to see, but to be unsure *which* of the 
information objects in a piece of information do not refer to real objects 
(it's all a bit of a blur)


Bear in mind  that human information-processing involves an ENORMOUS amount 
of disunderstanding and confusion.


And a *major point* of this approach (to be explained on another occasion) 
is precisely that a great deal of the time people do not understand/realise 
*why* they do not understand/ are confused  - *why* they have such 
difficulty understanding genetics, atomic physics, philosophy, logic, maths, 
ethics, neuroscience etc. etc - just about every subject in the curriculum, 
academic or social - because, like virtual AGI-ers they fall into the trap 
of FAILING to refer the information to real objects. They do not try to 
realise what on earth is being talked about. And they even end up concluding 
(completely wrongly) that there is something wrong with their brain and its 
information-processing capacity, ending up with a totally unnecessary 
inferiority complex. (There will probably be v. few here, even at this 
exalted level of intelligence, who are not so affected).


(Realistics should enormously improve human understanding, and holds out the 
promise that no one will ever fail to understand any information/subject 
ever again for want of anything other than time and effort).


Now there is a LOT more to expand here [later]. But for now it immediately 
raises the obvious and inevitable object-ion to any contradictory, 
"unreal"/artificial approach to information and esp. language 
processing/NLP such as you and many other AGIers are outlining.


How will you understand, and recognize, when information objects (e.g. 
language/words) are "unreal"?


e.g.
Turn yourself inside out.
Turn that block of wood inside out.
Turn around in a straight line.
What's inside is not more beautiful than what's on the outside
Drill down into Steve's logic.
Cars can hover just above the ground
The car flew into the wall.
The wall flew away.
Bush wants to liberalise sexual mores.
Truth and beauty are incompatible.

[all such statements obviously real/unreal/untrue/metaphorical in different 
and sometimes multiple simultaneous ways]


You might also ask yourself how you will, if your approach extends beyond 
language, know that any image or photo is unreal.


IOW how is any unreal approach to information processing (contradictory to 
mine) different from a putative logic that does *not* recognize truth or a 
maths that does *not* recognize equality/equations?




Mike,


The plane flew over the hill
The play is over


Using a formal language can help to avoid many of these issues.

But then the program must be able to tell what is in what or outside, 
what is behind/over etc.


The communication module in my experimental AGI design includes
several specialized editors, one of which is a Space Editor which
allows the use of simple objects in a small nD sample-space to define
the meaning of terms like "in", "outside", "above", "under", etc. The
goal is to define the meaning as simply as possible; the knowledge
can then be used in more complex scenes generated for problem-solving
purposes.
Other editors:
Script Editor - for writing stories the system learns from.
Action Concept Editor - for learning about actions/verbs & related
roles/phases/changes.
Category Editor - for general categorization/grouping concepts.
Formula Editor - math stuff.
Interface Mapper - for teaching how to use tools (e.g. external software)
...
Some of those editors (probably including the Space Editor) will be
available only to privileged users. It's all RBAC-based. Only
lightweight 3D imagination - for performance reasons (our brains
cheat too), and no embodiment. BTW I still have a lot to code
before making the system publicly accessible.
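
As a rough illustration of what a Space Editor definition might boil down
to (purely illustrative - this is not Jiri's code or GSL, and all names
here are invented): spatial terms defined over simple boxes in a tiny
sample space, which the system could later query in larger generated
scenes.

  /* Illustrative sketch only: "in", "outside", "above" over axis-aligned
     boxes in a small sample space. */
  #include <stdio.h>

  typedef struct { double min[3], max[3]; } Box;

  /* "A is in B": every point of A lies inside B. */
  int is_in(Box a, Box b) {
      for (int i = 0; i < 3; i++)
          if (a.min[i] < b.min[i] || a.max[i] > b.max[i]) return 0;
      return 1;
  }

  /* "A is outside B": the boxes do not overlap on some axis. */
  int is_outside(Box a, Box b) {
      for (int i = 0; i < 3; i++)
          if (a.max[i] < b.min[i] || a.min[i] > b.max[i]) return 1;
      return 0;
  }

  /* "A is above B": A's lowest point is at or above B's highest (z = axis 2). */
  int is_above(Box a, Box b) { return a.min[2] >= b.max[2]; }

  int main(void) {
      Box cup   = {{0.2, 0.2, 0.0},  {0.4, 0.4, 0.3}};
      Box table = {{0.0, 0.0, -0.1}, {1.0, 1.0, 0.0}};
      printf("cup in table? %d\n", is_in(cup, table));
      printf("cup outside table? %d\n", is_outside(cup, table));
      printf("cup above table? %d\n", is_above(cup, table));
      return 0;
  }

The only point is that once "in"/"above" are pinned down on toy objects,
the same predicates can be reused when querying bigger scenes.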

To understand is .. in principle, ..to be able to go into the real world 
and point to the real objects/actions being referred to..


Not from my perspective.


Re: [agi] Artificial humor... P.S

2008-09-12 Thread Matt Mahoney
--- On Thu, 9/11/08, Mike Tintner [EMAIL PROTECTED] wrote:

 To understand/realise is to be distinguished
 from (I would argue) to comprehend statements.

How long are we going to go round and round with this? How do you know if a 
machine comprehends something?

Turing explained why he ducked the question in 1950. Because you really can't 
tell. http://www.loebner.net/Prizef/TuringArticle.html


-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Artificial humor... P.S

2008-09-12 Thread Mike Tintner

Matt,

What are you being so tetchy about? The issue is what it takes for any 
agent, human or machine, to understand information.


You give me an extremely complicated and ultimately weird test/paper, which 
presupposes that machines, humans and everyone else can only exhibit, and be 
tested on, their thinking and understanding in an essentially Chinese room, 
insulated from the world.


I am questioning, and refuting, the entire assumption behind those 
extraordinarily woolly ideas of Turing (witness the endlessly convoluted 
discussions of his test on this group - which clearly people had great 
difficulty understanding, precisely because it is so woolly when you try 
to understand exactly what it's testing).


An agent understands information and information objects, IMO, if he can 
point to the real objects referred to in the real world, OUTSIDE any 
insulated room. (I am taking Searle one step further). It is on his ability 
to use language to engage with the real world - fulfil commands/requests 
like "where's the key?", "what food is in the fridge?", "is the room tidy?" 
(and progressively more general information objects) - that an agent's 
understanding must be tested.


That is consistent with every principle that you seem to like to invoke, of 
evolutionary fitness. Language and other forms of information exist 
primarily to enable humans to deal with real objects - and to survive - in 
the real world, and not in any virtual world that academics and AGI-ers 
prefer to inhabit.


My special distinction, I think, is v. useful - the Chinese translator and 
AGIs "comprehend" information/language - merely substituting symbols for 
other symbols. The agent who can use that language to deal with real 
objects truly *understands* it.


This explanation is consistent with how humans actually fail to understand 
on innumerable occasions, and also how computers and would-be AGIs fail to 
understand - not just outside in the real world, but *inside* their 
rooms/virtual worlds. All language understanding collapses without real 
object/world engagement.


In case you are unaware how academics will go to quite extraordinary mental 
lengths to stay inside their rooms, see this famous passage, which helped 
give birth to science, re natural philosophers who (with small 
modifications, like AGI-ers)


"having sharp and strong wits, and abundance of leisure ... as their persons 
were shut up in the cells of monasteries and colleges, and knowing little 
history, either of nature or time, did out of no great quantity of matter, 
and infinite agitation of wit spin out unto those laborious webs of learning 
which are extant in their books. For the wit and mind of man, if it work 
upon matter, worketh according to the stuff; but if it work upon itself, as 
the spider worketh his web, then it is endless, and brings forth indeed 
cobwebs of learning, admirable for the fineness of thread and work, but of 
no substance or profit." - Francis Bacon, The Advancement of Learning.


.
Matt:



To understand/realise is to be distinguished
from (I would argue) to comprehend statements.


How long are we going to go round and round with this? How do you know if 
a machine comprehends something?


Turing explained why he ducked the question in 1950. Because you really 
can't tell. http://www.loebner.net/Prizef/TuringArticle.html



-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Artificial [Humor ] vs Real Approaches to Information

2008-09-12 Thread Bryan Bishop
On Friday 12 September 2008, Mike Tintner wrote:
 to understand a piece of information and its information objects,
 (eg words) , is to realise (or know) how they refer to real
 objects in the real world, (and, ideally, and often necessarily,  to
 be able to point to and engage with those real objects).

This is usually called sourcing and citations, and so on. It's not 
enough to have a citation, though; it's not enough to just have a 
symbolic representation of some part of the world beyond you within 
your system. You always have to be able to functionally and competently 
use those references, citations, or links in some useful manner; 
otherwise you're not grounded and you're off in la-la land.

Computers have offered us the chance to encapsulate and manage all of 
these citations (and so on) but in many cases they are citations that 
are limited and crude. Look at the difference between these two 
citations:

Tseng, A. A., Notargiacomo, A. & Chen, T. P. Nanofabrication by scanning 
probe microscope lithography: A review. J. Vac. Sci. Tech. B 23, 877–894 
(2005).

Compared to:

http://heybryan.org/graphene.html

Both would seem cryptic to any outsider to scientific literature or to 
the web. The first one is generally variablized across the literature, 
making OCR very difficult, and making it generally a challenge to 
always fetch the citations and refs in papers for researchers. Take a 
look at my attempts at OCR of bibliographies:

http://heybryan.org/projects/autoscholar/

"Not good" is an accurate summarization. With the HTTP string, it's not 
any better at all, *except* for the fact that DNS servers are widely 
implemented: here's how to implement one, here's how the DNS root 
servers for the internet work, here's why you can (usually) type in any 
URL on the planet and get to the same site (unless you're on some other 
NIC of course - but this is very rare). There's a surprising amount of 
social context involved in DNS ... which I guess is what you consider to 
be the "realistics" that everyone overlooks when they just assign 
symbols to many different things; for instance, I bet you don't know 
what DNS is, but you know what a dictionary is, even though they refer 
to more or less the same functional things (uh, sort of).
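
As an aside, here is a concrete sketch of the resolution step that makes
the URL symbol usable at all (illustrative only, standard POSIX calls;
not from the original post): the host name only "works" because DNS
grounds it in a reachable address.

  /* Illustration only: resolving a host name grounds the symbol in an
     address, which is what makes the URL more than ink on a page. */
  #include <stdio.h>
  #include <string.h>
  #include <sys/types.h>
  #include <sys/socket.h>
  #include <netdb.h>
  #include <netinet/in.h>
  #include <arpa/inet.h>

  int main(void) {
      struct addrinfo hints, *res, *p;
      char text[INET6_ADDRSTRLEN];

      memset(&hints, 0, sizeof hints);
      hints.ai_family = AF_UNSPEC;        /* IPv4 or IPv6 */
      hints.ai_socktype = SOCK_STREAM;

      if (getaddrinfo("heybryan.org", "http", &hints, &res) != 0) {
          fprintf(stderr, "the symbol did not resolve\n");
          return 1;
      }
      for (p = res; p != NULL; p = p->ai_next) {
          void *a = (p->ai_family == AF_INET)
              ? (void *)&((struct sockaddr_in *)p->ai_addr)->sin_addr
              : (void *)&((struct sockaddr_in6 *)p->ai_addr)->sin6_addr;
          inet_ntop(p->ai_family, a, text, sizeof text);
          printf("heybryan.org -> %s\n", text);
      }
      freeaddrinfo(res);
      return 0;
  }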

Anyway, it's context that matters when it comes to groundtruthing 
citations and traces in information ecologies, and not so much the 
symbolic manipulation thereof. It's the overall groundtruthed process, 
the instantiated exploding von Neumann probe phylum that will 
ultimately (not) grey goo you.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] Artificial humor... P.S

2008-09-12 Thread Matt Mahoney
--- On Fri, 9/12/08, Mike Tintner [EMAIL PROTECTED] wrote:

 Matt,
 
 What are you being so tetchy about?  The issue is what it
 takes  for any 
 agent, human or machine.to understand information .

How are you going to understand the issues behind programming a computer for 
human intelligence if you have never programmed a computer?

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Artificial humor... P.S

2008-09-12 Thread Mike Tintner


Matt:  How are you going to understand the issues behind programming a 
computer for human intelligence if you have never programmed a computer?


Matt,

We simply have a big difference of opinion. I'm saying there is no way a 
computer [or agent, period] can understand language if it can't basically 
identify/*see* (and sense) the real objects - (and therefore doesn't know 
what) - it's talking about. Hence people say, when they understand at last: 
"ah, now I see... now I see what you're talking about... now I get the 
picture."


The issue of what faculties are needed to understand language (and be 
intelligent)  is not, *in the first instance,* a matter of programming.  I 
suggest you may have been v. uncharacteristically short in this exchange, 
because you may not like the starkness of the message. It is stark, but I 
believe it's the truth. 







Re: [agi] Artificial [Humor ] vs Real Approaches to Information

2008-09-12 Thread Jiri Jelinek
Mike,

 How will you understand, and recognize when information objects/ e.g
 language/words are unreal ? e.g. Turn yourself inside out.
... unreal/untrue/metaphorical in different and sometimes multiple 
simultaneous ways

It's like teaching a baby. You don't want to use confusing
language/metaphors. I expect my users to understand the GIGO effect.
But GINA (= my AGI experiment) does have some features for dealing with
unreal / confusing concepts. As I mentioned before, it learns from
stories (written in a formal language). Each story can be marked
"Real", "Unreal", or "Abstract". "Real" means the real world, "Unreal"
means fairy-tale kind of stuff (animals talking etc.), and
"Abstract" covers things like math and other very formal worlds
(e.g. chess rules etc.). When a user submits a problem-to-solve, he/she
can also specify whether the scope of the solution search should include
the "Unreal" domain. Another relevant feature is support for phrase
concepts. It allows teaching the system about the impact of saying
something particular in particular scenarios (e.g. "Good night",
"WTF", "I love you", "H" or possibly your "Turn yourself inside
out"). The description of what it literally means is optional (unlike
the impact descriptions). There are also some automated evaluation
procedures applied to new knowledge before it's approved as
knowledge useful for problem solving. Another thing is that
confusing input (assuming it makes it into the knowledge used for
problem solving) will tend to be eliminated, because users
will be rejecting solutions that were based on it. There is a lot more,
but I cannot explain it well in short.
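
As a rough illustration of the domain-marking and scoped-search ideas
above (purely illustrative - these names and facts are invented and are
not GINA's actual data model):

  /* Illustrative sketch only: knowledge marked Real / Unreal / Abstract,
     and a search scope that can include or exclude the Unreal domain. */
  #include <stdio.h>

  typedef enum { REAL, UNREAL, ABSTRACT } Domain;

  typedef struct { const char *fact; Domain domain; } Knowledge;

  static const Knowledge kb[] = {
      { "Keys open locks",                     REAL     },
      { "Mice can talk",                       UNREAL   },
      { "A bishop moves diagonally",           ABSTRACT },
      { "Cars cannot hover above the ground",  REAL     },
  };

  /* Print the facts eligible for a solution search. */
  void search_scope(int include_unreal) {
      for (size_t i = 0; i < sizeof kb / sizeof kb[0]; i++) {
          if (kb[i].domain == UNREAL && !include_unreal) continue;
          printf("  usable: %s\n", kb[i].fact);
      }
  }

  int main(void) {
      printf("Scope without the Unreal domain:\n");
      search_scope(0);
      printf("Scope including the Unreal domain:\n");
      search_scope(1);
      return 0;
  }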

 You might also ask yourself how you will, if your approach extends beyond
 language, know that any image or photo is unreal.

GINA just stores URLs for images and users describe them using the system's
formal language (which I named GSL, by the way - General Scripting
Language). GINA deals with images in a similar way as with the
above-mentioned phrases.

Regards,
Jiri Jelinek




Re: [agi] Artificial humor

2008-09-11 Thread Samantha Atkins


On Sep 10, 2008, at 12:29 PM, Jiri Jelinek wrote:

On Wed, Sep 10, 2008 at 2:39 PM, Mike Tintner [EMAIL PROTECTED] 
 wrote:

Without a body, you couldn't understand the joke.


False. Would you also say that without a body, you couldn't understand
3D space ?


It depends on what is meant by, and the value of, "understand 3D 
space". If the intelligence needs to navigate or work with 3D space, 
or even understand intelligence whose very concepts are filled with 3D 
metaphors, then I would think yes, that intelligence is going to need 
at least simulated detailed experience of 3D space.


- samantha





Re: [agi] Artificial humor

2008-09-11 Thread Valentina Poletti
I think it's the surprise that makes you laugh actually, not physical
pain in other people. I find myself laughing at my own mistakes often
- not because they hurt (in fact if they did hurt they wouldn't be
funny) but because I get surprised by them.

Valentina

On 9/10/08, Jiri Jelinek [EMAIL PROTECTED] wrote:
 On Wed, Sep 10, 2008 at 2:39 PM, Mike Tintner [EMAIL PROTECTED]
 wrote:
Without a body, you couldn't understand the joke.

 False. Would you also say that without a body, you couldn't understand
 3D space ?

 BTW it's kind of sad that people find it funny when others get hurt. I
 wonder what are the mirror neurons doing at the time. Why so many kids
 like to watch the Tom & Jerry-like crap?

 Jiri






Re: [agi] Artificial humor

2008-09-11 Thread Jiri Jelinek
Samantha,  Mike,

 Would you also say that without a body, you couldn't understand
 3D space ?

 It depends on what is meant by, and the value of, understand 3D space.
 If the intelligence needs to navigate or work with 3D space or even
 understand intelligence whose very concepts are filled with 3D metaphors,
 then I would think yes, that intelligence is going to need at least
 simulated detailed  experience of 3D space.

If you talk to a program about a changing 3D scene and the program then
correctly answers questions about [basic] spatial relationships
between the objects, then I would say it understands 3D. Of course the
program needs to work with a queryable 3D representation, but it
doesn't need a body. I mean it doesn't need to be a real-world
robot, it doesn't need to associate "self" with any particular 3D
object (real-world or simulated) and it doesn't need to be self-aware.
It just needs to be 3D-scene-aware, and the scene may contain just
a few basic 3D objects (e.g. the Shrdlu stuff).

Jiri




Re: [agi] Artificial humor

2008-09-11 Thread BillK
On Thu, Sep 11, 2008 at 2:28 PM, Jiri Jelinek wrote:
 If you talk to a program about changing 3D scene and the program then
 correctly answers questions about [basic] spatial relationships
 between the objects then I would say it understands 3D. Of course the
 program needs to work with a queriable 3D representation but it
 doesn't need a body. I mean it doesn't need to be a real-world
 robot, it doesn't need to associate self with any particular 3D
 object (real-world or simulated) and it doesn't need to be self-aware.
 It just needs to be the 3D-scene-aware and the scene may contain just
 a few basic 3D objects (e.g. the Shrdlu stuff).



Surely the DARPA autonomous vehicles driving themselves around the
desert and in traffic show that computers can cope quite well with a
3D environment, including other objects moving around them as well?

BillK




Re: [agi] Artificial humor

2008-09-11 Thread Mike Tintner

Jiri,

Quick answer because in rush. Notice your "if"... Which programs actually 
do understand any *general* concepts of orientation? SHRDLU, I will gladly 
bet, didn't... and neither do any others.


The v. word orientation indicates the reality that every picture has a 
point of view, and refers to an observer. And there is no physical way 
around that.


You have been seduced by an illusion - the illusion of the flat, printed 
page, existing in a timeless space. And you have accepted implicitly that 
there really is such a world - flatland - where geometry and geometrical 
operations take place, utterly independent of you the viewer and puppeteer, 
and the solid world of real objects to which they refer. It demonstrably 
isn't true.


Remove your eyes from the page and walk around in the world - your room, 
say. Hey, it's not flat...and neither are any of the objects in it. 
Triangular objects in the world are different from triangles on the page, 
fundamentally different.


But it  is so difficult to shed yourself of this illusion. You  need to look 
at the history of culture and realise that the imposition on the world/ 
environment of first geometrical figures, and then, more than a thousand 
years later,  the fixed point of view and projective geometry,  were - and 
remain - a SUPREME TRIUMPH OF THE HUMAN IMAGINATION.  They don't exist, 
Jiri. They're just one of many possible frameworks (albeit v useful)  to 
impose on the physical world. Nomadic tribes couldn't conceive of squares 
and enclosed spaces. Future generations will invent new frameworks.


Simple example of how persuasive the illusion is. I didn't understand until 
yesterday what the introduction of a fixed point of view really meant - it 
was that word fixed. What was the big deal? I couldn't understand. Isn't 
it a fact of life, almost?


Then it clicked. Your natural POV is mobile - your head/eyes keep moving - 
even when reading. It is an artificial invention to posit a fixed POV. And 
the geometric POV is doubly artificial, because it is one-eyed, no?, not 
stereoscopic. But once you get used to reading pages/screens you come to 
assume that an artificial fixed POV is *natural*.


[Stan Franklin was interested in a speculative paper suggesting that the 
evolutionary brain's stabilisation of vision (a software triumph because 
organisms are so mobile) may have led to the development of consciousness.]


You have to understand the difference between 1) the page, or medium,  and 
2) the real world it depicts,  and 3) you, the observer, reading/looking at 
the page. Your idea of AGI is just one big page [or screen] that apparently 
exists in splendid self-contained isolation.


It's an illusion, and it just doesn't *work* vis-a-vis programs. Do you 
want to cling to excessive optimism and a simple POV, or do you want to try 
and grasp the admittedly complicated & more sophisticated reality?

.

Jiri: If you talk to a program about changing 3D scene and the program then

correctly answers questions about [basic] spatial relationships
between the objects then I would say it understands 3D. Of course the
program needs to work with a queriable 3D representation but it
doesn't need a body. I mean it doesn't need to be a real-world
robot, it doesn't need to associate self with any particular 3D
object (real-world or simulated) and it doesn't need to be self-aware.
It just needs to be the 3D-scene-aware and the scene may contain just
a few basic 3D objects (e.g. the Shrdlu stuff).








RE: [agi] Artificial humor

2008-09-11 Thread John G. Rose
 From: John LaMuth [mailto:[EMAIL PROTECTED]
 
 As I have previously written, this issue boils down to whether one is serious
 or one is not to be taken this way (a meta-order perspective)... the key
 feature in humor and comedy -- the meta-message being "don't take me
 seriously"
 
 That is why I segregated analogical humor separately (from routine
 seriousness) in my 2nd US patent 7236963
 www.emotionchip.net
 
 This specialized meta-order-type of disqualification is built directly
 into
 the AGI schematics ...
 
 I realize that proprietary patents have acquired a bad cachet, but
 should
 not necessarily be ignored 
 

Nice patent. I can just imagine the look on the patent clerk's face when
that one came across the desk.

John






Re: [agi] Artificial humor

2008-09-11 Thread Mark Waser
Quick answer because in rush. Notice your if ... Which programs actually 
do understand any *general* concepts of orientation? SHRDLU I will gladly 
bet, didn't...and neither do any others.


What about the programs that control Stanley and the other DARPA Grand 
Challenge vehicles?



- Original Message - 
From: Mike Tintner [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Thursday, September 11, 2008 11:24 AM
Subject: Re: [agi] Artificial humor



Jiri,

Quick answer because in rush. Notice your if ... Which programs actually 
do understand any *general* concepts of orientation? SHRDLU I will gladly 
bet, didn't...and neither do any others.


The v. word orientation indicates the reality that every picture has a 
point of view, and refers to an observer. And there is no physical way 
around that.


You have been seduced by an illusion - the illusion of the flat, printed 
page, existing in a timeless space. And you have accepted implicitly that 
there really is such a world - flatland - where geometry and geometrical 
operations take place, utterly independent of you the viewer and 
puppeteer, and the solid world of real objects to which they refer. It 
demonstrably isn't true.


Remove your eyes from the page and walk around in the world - your room, 
say. Hey, it's not flat...and neither are any of the objects in it. 
Triangular objects in the world are different from triangles on the page, 
fundamentally different.


But it  is so difficult to shed yourself of this illusion. You  need to 
look at the history of culture and realise that the imposition on the 
world/ environment of first geometrical figures, and then, more than a 
thousand years later,  the fixed point of view and projective geometry, 
were - and remain - a SUPREME TRIUMPH OF THE HUMAN IMAGINATION.  They 
don't exist, Jiri. They're just one of many possible frameworks (albeit v 
useful)  to impose on the physical world. Nomadic tribes couldn't conceive 
of squares and enclosed spaces. Future generations will invent new 
frameworks.


Simple example of how persuasive the illusion is. I didn't understand 
until yesterday what the introduction of a fixed point of view really 
meant - it was that word fixed. What was the big deal? I couldn't 
understand. Isn't it a fact of life, almost?


Then it clicked. Your natural POV is mobile - your head/eyes keep 
moving - even when reading. It is an artificial invention to posit a fixed 
POV. And the geometric POV is doubly artificial, because it is one-eyed, 
no?, not stereoscopic. But once you get used to reading pages/screens you 
come to assume that an artificial fixed POV is *natural*.


[Stan Franklin was interested in a speculative paper suggesting that the 
evolutionary brain's stabilisation of vision, (a  software triumph 
because organisms are so mobile) may have led to the development of 
consciousness).


You have to understand the difference between 1) the page, or medium,  and 
2) the real world it depicts,  and 3) you, the observer, reading/looking 
at the page. Your idea of AGI is just one big page [or screen] that 
apparently exists in splendid self-contained isolation.


It's an illusion, and it just doesn't *work* vis-a-vis programs.  Do you 
want to cling to excessive optimism and a simple POV or do you want to 
try and grasp the admittedly complicated  more sophisticated reality?

.

Jiri: If you talk to a program about changing 3D scene and the program 
then

correctly answers questions about [basic] spatial relationships
between the objects then I would say it understands 3D. Of course the
program needs to work with a queriable 3D representation but it
doesn't need a body. I mean it doesn't need to be a real-world
robot, it doesn't need to associate self with any particular 3D
object (real-world or simulated) and it doesn't need to be self-aware.
It just needs to be the 3D-scene-aware and the scene may contain just
a few basic 3D objects (e.g. the Shrdlu stuff).








Re: [agi] Artificial humor

2008-09-11 Thread Jiri Jelinek
Mike,

Imagine a simple 3D scene with 2 different-size spheres. A simple
program allows you to change the positions of the spheres and it can
answer the question "Is the smaller sphere inside the bigger sphere?"
[Yes|Partly|No]. I can write such a program in no time. Sure, it's
extremely simple, but it deals with 3D; it demonstrates a certain level
of 3D understanding without embodiment, and there is no need to pass
an orientation parameter to the query function. Note that the
orientation is just a parameter. It doesn't represent a body and it
can be added. Of course, understanding all the real-world 3D concepts
would take a lot more code and data than when playing with 3D
toy-worlds, but in principle, it's possible to understand 3D without
having a body.
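
A minimal sketch of the kind of toy program described here (illustrative
only, not the author's actual code):

  /* Two spheres, and the query "Is the smaller sphere inside the bigger
     sphere?" answered Yes / Partly / No. */
  #include <stdio.h>
  #include <math.h>

  typedef struct { double x, y, z, r; } Sphere;

  const char *smaller_inside_bigger(Sphere a, Sphere b) {
      Sphere small = (a.r <= b.r) ? a : b;
      Sphere big   = (a.r <= b.r) ? b : a;
      double dx = small.x - big.x, dy = small.y - big.y, dz = small.z - big.z;
      double d = sqrt(dx*dx + dy*dy + dz*dz);   /* distance between centres */
      if (d + small.r <= big.r) return "Yes";   /* wholly contained */
      if (d - small.r >= big.r) return "No";    /* wholly outside */
      return "Partly";                          /* surfaces intersect */
  }

  int main(void) {
      Sphere big   = {0.0, 0.0, 0.0, 2.0};
      Sphere small = {1.5, 0.0, 0.0, 1.0};      /* poking out of the big one */
      printf("Is the smaller sphere inside the bigger sphere? %s\n",
             smaller_inside_bigger(big, small));
      return 0;
  }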

Jiri

On Thu, Sep 11, 2008 at 11:24 AM, Mike Tintner [EMAIL PROTECTED] wrote:
 Jiri,

 Quick answer because in rush. Notice your if ... Which programs actually
 do understand any *general* concepts of orientation? SHRDLU I will gladly
 bet, didn't...and neither do any others.

 The v. word orientation indicates the reality that every picture has a
 point of view, and refers to an observer. And there is no physical way
 around that.

 You have been seduced by an illusion - the illusion of the flat, printed
 page, existing in a timeless space. And you have accepted implicitly that
 there really is such a world - flatland - where geometry and geometrical
 operations take place, utterly independent of you the viewer and puppeteer,
 and the solid world of real objects to which they refer. It demonstrably
 isn't true.

 Remove your eyes from the page and walk around in the world - your room,
 say. Hey, it's not flat...and neither are any of the objects in it.
 Triangular objects in the world are different from triangles on the page,
 fundamentally different.

 But it  is so difficult to shed yourself of this illusion. You  need to look
 at the history of culture and realise that the imposition on the world/
 environment of first geometrical figures, and then, more than a thousand
 years later,  the fixed point of view and projective geometry,  were - and
 remain - a SUPREME TRIUMPH OF THE HUMAN IMAGINATION.  They don't exist,
 Jiri. They're just one of many possible frameworks (albeit v useful)  to
 impose on the physical world. Nomadic tribes couldn't conceive of squares
 and enclosed spaces. Future generations will invent new frameworks.

 Simple example of how persuasive the illusion is. I didn't understand until
 yesterday what the introduction of a fixed point of view really meant - it
 was that word fixed. What was the big deal? I couldn't understand. Isn't
 it a fact of life, almost?

 Then it clicked. Your natural POV is mobile - your head/eyes keep moving -
 even when reading. It is an artificial invention to posit a fixed POV. And
 the geometric POV is doubly artificial, because it is one-eyed, no?, not
 stereoscopic. But once you get used to reading pages/screens you come to
 assume that an artificial fixed POV is *natural*.

 [Stan Franklin was interested in a speculative paper suggesting that the
 evolutionary brain's stabilisation of vision, (a  software triumph because
 organisms are so mobile) may have led to the development of consciousness).

 You have to understand the difference between 1) the page, or medium,  and
 2) the real world it depicts,  and 3) you, the observer, reading/looking at
 the page. Your idea of AGI is just one big page [or screen] that apparently
 exists in splendid self-contained isolation.

 It's an illusion, and it just doesn't *work* vis-a-vis programs.  Do you
 want to cling to excessive optimism and a simple POV or do you want to try
 and grasp the admittedly complicated  more sophisticated reality?
 .

 Jiri: If you talk to a program about changing 3D scene and the program then

 correctly answers questions about [basic] spatial relationships
 between the objects then I would say it understands 3D. Of course the
 program needs to work with a queriable 3D representation but it
 doesn't need a body. I mean it doesn't need to be a real-world
 robot, it doesn't need to associate self with any particular 3D
 object (real-world or simulated) and it doesn't need to be self-aware.
 It just needs to be the 3D-scene-aware and the scene may contain just
 a few basic 3D objects (e.g. the Shrdlu stuff).







Re: [agi] Artificial humor

2008-09-11 Thread Mike Tintner


Jiri,

Clearly a limited 3D functionality is possible for a program such as you 
describe - as for SHRDLU. But what we're surely concerned with here is 
generality. So fine, start with a restricted world of, say, different kinds of 
kids' blocks and similar. But then the program must be able to tell what is 
"in" what or "outside", what is "behind"/"over" etc. - and also what is moving 
towards or away from an object (it surely should be a mobile program) - 
and be able to move objects. My assumption is that even a relatively simple 
such general program wouldn't work - (I obviously haven't thought about this 
in any detail). It would be interesting to have the details about how SHRDLU 
broke down.
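
The "moving towards or away from an object" part, at least, is mechanically
simple; a minimal sketch (illustrative only, unrelated to SHRDLU or to any
system discussed here): the sign of the dot product of the separation
vector with the relative velocity says whether the distance is shrinking
or growing.

  /* Illustrative sketch: is A moving towards or away from B? */
  #include <stdio.h>

  typedef struct { double x, y, z; } Vec3;

  const char *towards_or_away(Vec3 posA, Vec3 velA, Vec3 posB, Vec3 velB) {
      Vec3 sep = { posA.x - posB.x, posA.y - posB.y, posA.z - posB.z };
      Vec3 rel = { velA.x - velB.x, velA.y - velB.y, velA.z - velB.z };
      double range_rate = sep.x*rel.x + sep.y*rel.y + sep.z*rel.z;
      if (range_rate < 0) return "moving towards";
      if (range_rate > 0) return "moving away from";
      return "keeping a constant distance from";
  }

  int main(void) {
      Vec3 car  = {10, 0, 0}, car_v  = {-1, 0, 0};   /* driving toward origin */
      Vec3 wall = { 0, 0, 0}, wall_v = { 0, 0, 0};
      printf("The car is %s the wall.\n",
             towards_or_away(car, car_v, wall, wall_v));
      return 0;
  }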


Also - re BillK's useful intro. of DARPA - do those vehicles work by GPS?


Mike,

Imagine a simple 3D scene with 2 different-size spheres. A simple
program allows you to change positions of the spheres and it can
answer question Is the smaller sphere inside the bigger sphere?
[Yes|Partly|No]. I can write such program in no time. Sure, it's
extremely simple, but it deals with 3D, it demonstrates certain level
of 3D understanding without embodyment and there is no need to pass
the orientation parameter to the query function. Note that the
orientation is just a parameter. It Doesn't represent a body and it
can be added. Of course understanding all the real-world 3D concepts
would take a lot more code and data than when playing with 3D
toy-worlds, but in principle, it's possible to understand 3D without
having a body.

Jiri

On Thu, Sep 11, 2008 at 11:24 AM, Mike Tintner [EMAIL PROTECTED] 
wrote:

Jiri,

Quick answer because in rush. Notice your if ... Which programs 
actually

do understand any *general* concepts of orientation? SHRDLU I will gladly
bet, didn't...and neither do any others.

The v. word orientation indicates the reality that every picture has a
point of view, and refers to an observer. And there is no physical way
around that.

You have been seduced by an illusion - the illusion of the flat, printed
page, existing in a timeless space. And you have accepted implicitly that
there really is such a world - flatland - where geometry and 
geometrical
operations take place, utterly independent of you the viewer and 
puppeteer,

and the solid world of real objects to which they refer. It demonstrably
isn't true.

Remove your eyes from the page and walk around in the world - your room,
say. Hey, it's not flat...and neither are any of the objects in it.
Triangular objects in the world are different from triangles on the page,
fundamentally different.

But it  is so difficult to shed yourself of this illusion. You  need to 
look

at the history of culture and realise that the imposition on the world/
environment of first geometrical figures, and then, more than a thousand
years later,  the fixed point of view and projective geometry,  were - 
and

remain - a SUPREME TRIUMPH OF THE HUMAN IMAGINATION.  They don't exist,
Jiri. They're just one of many possible frameworks (albeit v useful)  to
impose on the physical world. Nomadic tribes couldn't conceive of squares
and enclosed spaces. Future generations will invent new frameworks.

Simple example of how persuasive the illusion is. I didn't understand 
until
yesterday what the introduction of a fixed point of view really meant - 
it
was that word fixed. What was the big deal? I couldn't understand. 
Isn't

it a fact of life, almost?

Then it clicked. Your natural POV is mobile - your head/eyes keep 
moving -
even when reading. It is an artificial invention to posit a fixed POV. 
And
the geometric POV is doubly artificial, because it is one-eyed, no?, 
not

stereoscopic. But once you get used to reading pages/screens you come to
assume that an artificial fixed POV is *natural*.

[Stan Franklin was interested in a speculative paper suggesting that the
evolutionary brain's stabilisation of vision, (a  software triumph 
because
organisms are so mobile) may have led to the development of 
consciousness).


You have to understand the difference between 1) the page, or medium, 
and
2) the real world it depicts,  and 3) you, the observer, reading/looking 
at
the page. Your idea of AGI is just one big page [or screen] that 
apparently

exists in splendid self-contained isolation.

It's an illusion, and it just doesn't *work* vis-a-vis programs.  Do you
want to cling to excessive optimism and a simple POV or do you want to 
try

and grasp the admittedly complicated  more sophisticated reality?
.

Jiri: If you talk to a program about changing 3D scene and the program 
then


correctly answers questions about [basic] spatial relationships
between the objects then I would say it understands 3D. Of course the
program needs to work with a queriable 3D representation but it
doesn't need a body. I mean it doesn't need to be a real-world
robot, it doesn't need to associate self with any particular 3D
object (real-world or simulated) and it doesn't need to be self-aware.
It just needs 

Re: [agi] Artificial humor

2008-09-11 Thread Mark Waser

Also - re BillK's useful intro. of DARPA - do those vehicles work by GPS?


They are allowed to work by GPS but there are parts of the course where they 
are required to work without it.


Shouldn't you already have basic knowledge like this before proclaiming 
things like "neither do any others" when talking about being able to 
understand any *general* concepts of orientation?



- Original Message - 
From: Mike Tintner [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Thursday, September 11, 2008 1:31 PM
Subject: Re: [agi] Artificial humor




Jiri,

Clearly a limited 3d functionality is possible for a program such as you 
describe - as for SHRDLU. But what we're surely concerned with here is 
generality. So fine start with a restricted world of say different kinds 
of kid's blocks and similar. But then the program must be able to tell 
what is in what or outside, what is behind/over etc. - and also what is 
moving towards or away from an object, ( it surely should be a mobile 
program) - and be able to move objects. My assumption is that even a 
relatively simple such general program wouldn't work - (I obviously 
haven't thought about this in any detail). It would be interesting to have 
the details about how SHRDLU broke down.


Also - re BillK's useful intro. of DARPA - do those vehicles work by GPS?


Mike,

Imagine a simple 3D scene with 2 different-size spheres. A simple
program allows you to change positions of the spheres and it can
answer question Is the smaller sphere inside the bigger sphere?
[Yes|Partly|No]. I can write such program in no time. Sure, it's
extremely simple, but it deals with 3D, it demonstrates certain level
of 3D understanding without embodyment and there is no need to pass
the orientation parameter to the query function. Note that the
orientation is just a parameter. It Doesn't represent a body and it
can be added. Of course understanding all the real-world 3D concepts
would take a lot more code and data than when playing with 3D
toy-worlds, but in principle, it's possible to understand 3D without
having a body.

Jiri

On Thu, Sep 11, 2008 at 11:24 AM, Mike Tintner [EMAIL PROTECTED] 
wrote:

Jiri,

Quick answer because in rush. Notice your if ... Which programs 
actually
do understand any *general* concepts of orientation? SHRDLU I will 
gladly

bet, didn't...and neither do any others.

The v. word orientation indicates the reality that every picture has a
point of view, and refers to an observer. And there is no physical way
around that.

You have been seduced by an illusion - the illusion of the flat, printed
page, existing in a timeless space. And you have accepted implicitly 
that
there really is such a world - flatland - where geometry and 
geometrical
operations take place, utterly independent of you the viewer and 
puppeteer,

and the solid world of real objects to which they refer. It demonstrably
isn't true.

Remove your eyes from the page and walk around in the world - your room,
say. Hey, it's not flat...and neither are any of the objects in it.
Triangular objects in the world are different from triangles on the 
page,

fundamentally different.

But it  is so difficult to shed yourself of this illusion. You  need to 
look

at the history of culture and realise that the imposition on the world/
environment of first geometrical figures, and then, more than a thousand
years later,  the fixed point of view and projective geometry,  were - 
and

remain - a SUPREME TRIUMPH OF THE HUMAN IMAGINATION.  They don't exist,
Jiri. They're just one of many possible frameworks (albeit v useful)  to
impose on the physical world. Nomadic tribes couldn't conceive of 
squares

and enclosed spaces. Future generations will invent new frameworks.

Simple example of how persuasive the illusion is. I didn't understand 
until
yesterday what the introduction of a fixed point of view really 
meant - it
was that word fixed. What was the big deal? I couldn't understand. 
Isn't

it a fact of life, almost?

Then it clicked. Your natural POV is mobile - your head/eyes keep 
moving -
even when reading. It is an artificial invention to posit a fixed POV. 
And
the geometric POV is doubly artificial, because it is one-eyed, no?, 
not

stereoscopic. But once you get used to reading pages/screens you come to
assume that an artificial fixed POV is *natural*.

[Stan Franklin was interested in a speculative paper suggesting that the
evolutionary brain's stabilisation of vision, (a  software triumph 
because
organisms are so mobile) may have led to the development of 
consciousness).


You have to understand the difference between 1) the page, or medium, 
and
2) the real world it depicts,  and 3) you, the observer, reading/looking 
at
the page. Your idea of AGI is just one big page [or screen] that 
apparently

exists in splendid self-contained isolation.

It's an illusion, and it just doesn't *work* vis-a-vis programs.  Do you
want to cling to excessive optimism and a simple POV or do you want

Re: [agi] Artificial humor

2008-09-11 Thread Matt Mahoney
Mike, your argument would be on firmer ground if you could distinguish between 
when a computer understands something and when it just reacts as if it 
understands. What is the test? Otherwise, you could always claim that a machine 
doesn't understand anything because only humans can do that.


-- Matt Mahoney, [EMAIL PROTECTED]


--- On Thu, 9/11/08, Mike Tintner [EMAIL PROTECTED] wrote:

 From: Mike Tintner [EMAIL PROTECTED]
 Subject: Re: [agi] Artificial humor
 To: agi@v2.listbox.com
 Date: Thursday, September 11, 2008, 1:31 PM
 Jiri,
 
 Clearly a limited 3d functionality is possible for a
 program such as you 
 describe - as for SHRDLU. But what we're surely
 concerned with here is 
 generality. So fine start with a restricted world of say
 different kinds of 
 kid's blocks and similar. But then the program must be
 able to tell what is 
 in what or outside, what is behind/over etc. -
 and also what is moving 
 towards or away from an object, ( it surely should be a
 mobile program) - 
 and be able to move objects. My assumption is that even a
 relatively simple 
 such general program wouldn't work - (I obviously
 haven't thought about this 
 in any detail). It would be interesting to have the details
 about how SHRDLU 
 broke down.
 
 Also - re BillK's useful intro. of DARPA - do those
 vehicles work by GPS?
 
  Mike,
 
  Imagine a simple 3D scene with 2 different-size
 spheres. A simple
  program allows you to change positions of the spheres
 and it can
  answer question Is the smaller sphere inside the
 bigger sphere?
  [Yes|Partly|No]. I can write such program in no time.
 Sure, it's
  extremely simple, but it deals with 3D, it
 demonstrates certain level
  of 3D understanding without embodyment and there is no
 need to pass
  the orientation parameter to the query function. Note
 that the
  orientation is just a parameter. It Doesn't
 represent a body and it
  can be added. Of course understanding all the
 real-world 3D concepts
  would take a lot more code and data than when playing
 with 3D
  toy-worlds, but in principle, it's possible to
 understand 3D without
  having a body.
 
  Jiri
 
  On Thu, Sep 11, 2008 at 11:24 AM, Mike Tintner
 [EMAIL PROTECTED] 
  wrote:
  Jiri,
 
  Quick answer because in rush. Notice your
 if ... Which programs 
  actually
  do understand any *general* concepts of
 orientation? SHRDLU I will gladly
  bet, didn't...and neither do any others.
 
  The v. word orientation indicates the
 reality that every picture has a
  point of view, and refers to an observer. And
 there is no physical way
  around that.
 
  You have been seduced by an illusion - the
 illusion of the flat, printed
  page, existing in a timeless space. And you have
 accepted implicitly that
  there really is such a world -
 flatland - where geometry and 
  geometrical
  operations take place, utterly independent of you
 the viewer and 
  puppeteer,
  and the solid world of real objects to which they
 refer. It demonstrably
  isn't true.
 
  Remove your eyes from the page and walk around in
 the world - your room,
  say. Hey, it's not flat...and neither are any
 of the objects in it.
  Triangular objects in the world are different from
 triangles on the page,
  fundamentally different.
 
  But it  is so difficult to shed yourself of this
 illusion. You  need to 
  look
  at the history of culture and realise that the
 imposition on the world/
  environment of first geometrical figures, and
 then, more than a thousand
  years later,  the fixed point of view and
 projective geometry,  were - 
  and
  remain - a SUPREME TRIUMPH OF THE HUMAN
 IMAGINATION.  They don't exist,
  Jiri. They're just one of many possible
 frameworks (albeit v useful)  to
  impose on the physical world. Nomadic tribes
 couldn't conceive of squares
  and enclosed spaces. Future generations will
 invent new frameworks.
 
  Simple example of how persuasive the illusion is.
 I didn't understand 
  until
  yesterday what the introduction of a fixed
 point of view really meant - 
  it
  was that word fixed. What was the big
 deal? I couldn't understand. 
  Isn't
  it a fact of life, almost?
 
  Then it clicked. Your natural POV is
 mobile - your head/eyes keep 
  moving -
  even when reading. It is an artificial invention
 to posit a fixed POV. 
  And
  the geometric POV is doubly artificial, because it
 is one-eyed, no?, 
  not
  stereoscopic. But once you get used to reading
 pages/screens you come to
  assume that an artificial fixed POV is *natural*.
 
  [Stan Franklin was interested in a speculative
 paper suggesting that the
  evolutionary brain's stabilisation of
 vision, (a  software triumph 
  because
  organisms are so mobile) may have led to the
 development of 
  consciousness).
 
  You have to understand the difference between 1)
 the page, or medium, 
  and
  2) the real world it depicts,  and 3) you, the
 observer, reading/looking 
  at
  the page. Your idea of AGI is just one big page
 [or screen] that 
  apparently

Re: [agi] Artificial humor

2008-09-11 Thread Mike Tintner

Matt,

Jeez, massive question :).

Let me 1st partly dodge it, by giving you an example of the difficulty of 
understanding, say, "over", both in NLP terms and ultimately (because it 
will be the same more or less) in practical object recognition/movement 
terms - because I suspect none of you have done what I told you (naughty) 
& looked at Lakoff.


You will note the very different physical movements or positionings involved 
in:


The painting is over the mantle
The plane flew over the hill
Sam walked over the hill
Sam lives over the hill
The wall fell over
Sam turned the page over
She spread the cloth over the table.
The guards stood all over the hill
Look over my page
He went over the horizon
The line stretches over the yard
The board is over the hole

[not to mention]
The play is over
There are over a hundred
Do it over, but don't overdo it.

& there are many more.

See Lakoff for schema illustrations. Nearly all involve very different 
trajectories, physical relationships.
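
As a toy illustration of the point (the schema glosses below are my own
rough paraphrases in the spirit of Lakoff's analysis, not his actual
taxonomy): the same word maps onto quite different spatial and temporal
relations, and a program has to pick one before it can act on the sentence.

  /* Illustrative sketch only: one word, several distinct relations. */
  #include <stdio.h>

  typedef struct { const char *sentence; const char *schema; } OverSense;

  static const OverSense senses[] = {
      { "The painting is over the mantle",      "static ABOVE, no contact"    },
      { "The plane flew over the hill",         "path ABOVE, no contact"      },
      { "Sam walked over the hill",             "path ABOVE, with contact"    },
      { "Sam lives over the hill",              "location BEYOND an obstacle" },
      { "The wall fell over",                   "rotation about an edge"      },
      { "She spread the cloth over the table",  "COVERING a surface"          },
      { "The play is over",                     "temporal: COMPLETED"         },
      { "There are over a hundred",             "scalar: MORE THAN"           },
  };

  int main(void) {
      for (size_t i = 0; i < sizeof senses / sizeof senses[0]; i++)
          printf("%-40s -> %s\n", senses[i].sentence, senses[i].schema);
      return 0;
  }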


That is why I'm confident that no program can handle that, but yes, Mark, I 
was putting forward a new idea (certainly to me) in the orientation 
framework - and doing no more than presenting a reasoned, but pretty 
ill-informed hypothesis. (And that is what I think this forum is for. And I 
will be delighted if you, or anyone else, will correct my 
overgeneralisations and errors).


Now a brief, rushed but, I suspect, massive, and new answer to your 
question - that I think, takes us, philosophically, way beyond the concept 
of grounding, which a lot of people are currently using for 
understanding.


To understand is to REALISE what [on earth, or in the [real] world] is 
being talked about. It is, in principle, and often in practice, to be able 
to go into the real world and point to the real objects/actions being 
referred to, (or realise that they are unreal/fantastic). So in terms of 
understanding a statement containing how something is over something else, 
it is to be able to go and point to the relevant objects in a scene, or, if 
possible, to recreate the physical events or relationship..


I believe that is actually how we *do* understand, how the brain does work, 
how a GI *must* work - &, if correct, it automatically moves us beyond 
virtual AGI. I shall hopefully return to this concept on further 
occasions - I believe it has enormous ramifications. There are many, many 
qualifications to be made, which I won't attempt now, nevertheless the basic 
principle holds - and will hold for the psychology of how humans understand 
or *don't* understand or get confused.


IOW not only must an AGI or any GI be embodied, it must also be directly & 
indirectly embedded in the world.


(Grounding is being currently interpreted in practice almost entirely from 
the embodied or agent's side - as referring to what goes on *inside* the 
agent's mind. Realisation involves complementarily defining intelligence 
from the out-side of its ability to deal with the environment/real world 
being-referred-to. BIG difference. Like between just using nature/heredity, 
OTOH,  and, OTOH, also using nurture/environment to explain behaviour).


I hope you realise what I've been saying :).




Matt:
Mike, your argument would be on firmer ground if you could distinguish 
between when a computer understands something and when it just reacts as 
if it understands. What is the test? Otherwise, you could always claim 
that a machine doesn't understand anything because only humans can do 
that.



-- Matt Mahoney, [EMAIL PROTECTED]


--- On Thu, 9/11/08, Mike Tintner [EMAIL PROTECTED] wrote:


From: Mike Tintner [EMAIL PROTECTED]
Subject: Re: [agi] Artificial humor
To: agi@v2.listbox.com
Date: Thursday, September 11, 2008, 1:31 PM
Jiri,

Clearly a limited 3d functionality is possible for a
program such as you
describe - as for SHRDLU. But what we're surely
concerned with here is
generality. So fine start with a restricted world of say
different kinds of
kid's blocks and similar. But then the program must be
able to tell what is
in what or outside, what is behind/over etc. -
and also what is moving
towards or away from an object, ( it surely should be a
mobile program) -
and be able to move objects. My assumption is that even a
relatively simple
such general program wouldn't work - (I obviously
haven't thought about this
in any detail). It would be interesting to have the details
about how SHRDLU
broke down.

Also - re BillK's useful intro. of DARPA - do those
vehicles work by GPS?

 Mike,

 Imagine a simple 3D scene with 2 different-size
spheres. A simple
 program allows you to change positions of the spheres
and it can
 answer question Is the smaller sphere inside the
bigger sphere?
 [Yes|Partly|No]. I can write such a program in no time.
Sure, it's
 extremely simple, but it deals with 3D, it
 demonstrates a certain level
 of 3D understanding without embodiment and there is no
need to pass
 the orientation parameter to the query function. Note
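
For reference, a toy version of the two-sphere query Jiri describes - treating 
each sphere as a centre plus a radius; a guess at the shape of such a program, 
not his actual code - might be:

import math

def inside_query(small_centre, small_r, big_centre, big_r):
    """Toy version of 'Is the smaller sphere inside the bigger sphere?'
    Returns 'Yes' (fully contained), 'Partly' (overlapping), or 'No'."""
    d = math.dist(small_centre, big_centre)   # distance between the centres
    if d + small_r <= big_r:
        return "Yes"      # every point of the small sphere lies in the big one
    if d >= big_r + small_r:
        return "No"       # the spheres do not intersect at all
    return "Partly"

print(inside_query((0, 0, 0), 1.0, (0, 0, 0), 3.0))    # Yes
print(inside_query((2.5, 0, 0), 1.0, (0, 0, 0), 3.0))  # Partly
print(inside_query((10, 0, 0), 1.0, (0, 0, 0), 3.0))   # No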

Re: [agi] Artificial humor

2008-09-11 Thread Matt Mahoney
Mike Tintner [EMAIL PROTECTED] wrote:

To understand is to REALISE what [on earth, or
in the [real] world] is being talked about.

Nice dodge. How do you distinguish between when a computer realizes something 
and when it just reacts as if it realizes it?

Yeah, I know. Turing dodged the question too.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Artificial humor

2008-09-11 Thread Mike Tintner




Mike Tintner [EMAIL PROTECTED] wrote:


To understand is to REALISE what [on earth, or
in the [real] world] is being talked about.


Matt: Nice dodge. How do you distinguish between when a computer realizes 
something and when it just reacts as if it realizes it?


Yeah, I know. Turing dodged the question too.



Matt,

I don't understand this objection - maybe I wasn't clear. I said to 
"realise" is to be able to go and point to the real objects/actions referred 
to, and to make the real actions happen. You understand what a key is if you 
can go and pick one up. You understand what picking up a key is, if you 
can do it. You understand what sex is, if you can point to it, or, better, 
do it, and the scientific observers, or Turing testers, can observe it.


As I said, there are many qualifications and complications - for example, to 
understand what "mind" is, is also to be able to point to one in action, but 
it is a complex business on both sides [both the mind and the pointing] - 
nevertheless, if fruitful scientific and philosophical discussion and 
discovery about the mind are to take place, that real engagement with 
real objects is exactly what must happen there too. That is the basis of 
science (and technology).


The only obvious places where understanding/ realisation, as defined here, 
*don't* happen  - or *appear* not to happen - are - can you guess? - yes, 
logic and mathematics. And what are the subjects closest to the hearts of 
virtual AGI-ers?


So you are generally intelligent if you can not just have a Turing test 
conversation with me about going and shopping in the supermarket, but 
actually go there and do it, per verbal instructions.


Explain any dodge here.






Re: [agi] Artificial humor... P.S

2008-09-11 Thread Mike Tintner

Matt,

To "understand/realise" is to be distinguished (I would argue) from
"comprehending" statements.

The one is to be able to point to the real objects referred to. The other is
merely to be able to offer or find an alternative or dictionary definition
of the statements. A translation. Like the Chinese room translator. Who is
dealing in words, just words. Mere words.

(I'm open to an alternative title for "comprehend" - if it in any
way grates on you as a term, please say.)






Re: [agi] Artificial humor

2008-09-11 Thread John LaMuth
- Original Message - 
From: John G. Rose [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, September 11, 2008 8:28 AM
Subject: RE: [agi] Artificial humor


 From: John LaMuth [mailto:[EMAIL PROTECTED]
 
  As I have previously written, this issue boils down to whether one is serious
  or one is not to be taken this way (a meta-order perspective)... the key
  feature in humor and comedy -- the meta-message being "don't take me
  seriously"
 
  That is why I segregated analogical humor separately (from routine
 seriousness) in my 2nd US patent 7236963
 www.emotionchip.net
 
 This specialized meta-order-type of disqualification is built directly
 into
 the AGI schematics ...
 
 I realize that proprietary patents have acquired a bad cachet, but
 should
 not necessarily be ignored 

 
 Nice patent. I can just imagine the look on the patent clerk's face when
 that one came across the desk.
 
 John
##

I can safely assume Joe Hirl was smiling about having
his name forever attached to this
PATENT FOR THE AGES ...
(It did take over 3 months to pass)

John L
www.global-solutions.org 


 





Re: [agi] Artificial humor

2008-09-11 Thread Jiri Jelinek
Mike,

The plane flew over the hill
The play is over

Using a formal language can help to avoid many of these issues.

But then the program must be able to tell what is in what or outside, what 
is behind/over etc.

The communication module in my experimental AGI design includes
several specialized editors, one of which is a Space Editor which
allows you to use simple objects in a small nD sample-space to define
the meaning of terms like "in", "outside", "above", "under" etc. The
goal is to define the meaning as simply as possible, and the knowledge
can then be used in more complex scenes generated for problem-solving
purposes.
Other editors:
Script Editor - for writing stories the system learns from.
Action Concept Editor - for learning about actions/verbs and related
roles/phases/changes.
Category Editor - for general categorization/grouping concepts.
Formula Editor - math stuff.
Interface Mapper - for teaching how to use tools (e.g. external software)
...
Some of those editors (probably including the Space Editor) will be
available only to privileged users. It's all RBAC-based. Only
lightweight 3D imagination - for performance reasons (our brains
cheat too), and no embodiment. BTW I still have a lot to code
before making the system publicly accessible.
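
To give a flavour - a simplified sketch, not the actual editor code; the box
representation and names below are just illustrative - such definitions could
bottom out in relational predicates over simple shapes, e.g.:

# Toy illustration of spatial-term definitions over axis-aligned boxes.
from dataclasses import dataclass

@dataclass
class Box:
    x: float   # min corner
    y: float
    z: float
    w: float   # width, depth, height
    d: float
    h: float

    @property
    def top(self):
        return self.z + self.h

def inside(a: Box, b: Box) -> bool:
    """a is inside b if a's extent fits within b's extent on every axis."""
    return (b.x <= a.x and a.x + a.w <= b.x + b.w and
            b.y <= a.y and a.y + a.d <= b.y + b.d and
            b.z <= a.z and a.top <= b.top)

def above(a: Box, b: Box) -> bool:
    """a is above b if a's bottom is at or over b's top."""
    return a.z >= b.top

def under(a: Box, b: Box) -> bool:
    return above(b, a)

cup   = Box(0, 0, 1.0, 0.1, 0.1, 0.1)
table = Box(-1, -1, 0.0, 2.0, 2.0, 1.0)
print(inside(cup, table), above(cup, table), under(table, cup))  # False True True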

To understand is .. in principle, ..to be able to go into the real world and 
point to the real objects/actions being referred to..

Not from my perspective.

I believe that is actually how we *do* understand, how the brain does work, 
how a GI *must* work

It's ok (and often a must) to use different solutions when developing
for different platforms.
Planes don't flap wings.

You understand what a key is if you can go and pick one up

Again, AGI can know very little about particular objects and it can be
enough to successfully solve many problems and demonstrate a useful level
of concept understanding. Let's say the AGI works as an online
adviser. For many key-involving problems it's good enough to know that
a particular key object can be used to unlock/open other particular
objects, plus the location info, plus sometimes the key color or so; but, for
example, the exact shape of the key or the exact moves for opening a
particular lock using the key - that's something this online AGI can
in most cases leave to the user. The AGI should be able to learn
details, but there are so many details in the real world that, for
practical reasons, the AGI would just need to filter most of them out.
AGI doesn't need to interact with the real world directly in order to
learn enough to be a helpful problem solver. And as long as it does a
good job as a problem solver, who cares about the "understanding" vs
"reacting as if it understands" classification.
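
A rough sketch of the kind of sparse relational knowledge I mean (toy facts
and names, not the real store):

# Toy illustration: minimal key-related facts plus a tiny advice query.
knowledge = [
    # (subject, relation, object) -- all names invented for illustration
    ("key_17",     "unlocks",    "front_door"),
    ("key_17",     "located_in", "kitchen_drawer"),
    ("key_17",     "color",      "red"),
    ("front_door", "leads_to",   "garden"),
]

def advise(goal_object):
    """Suggest a key for a lockable object, plus where to find it."""
    for subj, rel, obj in knowledge:
        if rel == "unlocks" and obj == goal_object:
            where = [o for s, r, o in knowledge
                     if s == subj and r == "located_in"]
            return f"Use {subj}" + (f", it is in {where[0]}" if where else "")
    return "No known key for that."

print(advise("front_door"))   # Use key_17, it is in kitchen_drawer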

Regards,
Jiri Jelinek




Re: [agi] Artificial humor

2008-09-10 Thread Mike Tintner
Matt: Humor detection obviously requires a sophisticated language model and 
knowledge of popular culture, current events, and what jokes have been told 
before. Since entertainment is a big sector of the economy, an AGI needs all 
human knowledge, not just knowledge that is work related.


In many ways, it was brave of you to pursue this idea, and the results are 
fascinating. You see, there is one central thing you need in order to write 
a joke. (Have you ever tried it? You must have, presumably, in some respect.) You 
can't just logically, formulaically analyse those jokes - the common 
ingredients of, say, the lightbulb jokes. When you write something - even 
some logical extension, say, re how many plumbers it takes to change a light 
bulb - the joke *has* to strike you as funny. You have to laugh. It's the 
only way to test the joke.


Obviously you have no plans for endowing your computer with a self and a 
body, that has emotions and can shake with laughter. Or tears.


But what makes you laugh? The common ingredient of humour is human error. We 
laugh at humans making mistakes - mistakes that were/are preventable. People 
having their head stuck snootily in the air, and so falling on banana skins. 
Mrs Malaprop mispronouncing, misconstruing big words while trying to look 
clever, and refusing to admit her ignorance. And we laugh because we can 
personally identify, because we've made those kinds of mistakes. They are a 
fundamental and continuous part of our lives. (How will your AGI identify?)


So are AGI-ers *heroic* figures trying to be/produce giants, or are they 
*comic* figures, like Don Quixote, who are in fact tilting at windmills, and 
refusing even to check whether those windmill arms actually belong to 
giants?


There isn't a purely logicomathematical way to decide that. It takes an 
artistic as well as a scientific mentality involving not just whole 
different parts of your brain, but different faculties and sensibilities - 
all v. real, and not reducible to logic and maths. When you deal with AGI 
problems -  like the problem of AGI itself - you need them.


(You may think this all esoteric, but in fact, you need all those same 
faculties to understand everything that is precious to you - the universe/ 
world/ society/ atoms/ genes/ machines - even logic and maths. But more of 
that another time.)







Re: [agi] Artificial humor

2008-09-10 Thread Mark Waser
Obviously you have no plans for endowing your computer with a self and a 
body, that has emotions and can shake with laughter. Or tears.


Actually, many of us do.  And this is why your posts are so problematical. 
You invent what *we* believe and what we intend to do.  And then you 
criticize your total fabrications (a.k.a. mental masturbation).


- Original Message - 
From: Mike Tintner [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, September 10, 2008 7:18 AM
Subject: Re: [agi] Artificial humor


Matt: Humor detection obviously requires a sophisticated language model 
and knowledge of popular culture, current events, and what jokes have been 
told before. Since entertainment is a big sector of the economy, an AGI 
needs all human knowledge, not just knowledge that is work related.


In many ways, it was brave of you to pursue this idea, and the results are 
fascinating. You see, there is one central thing you need in order to 
write a joke. (Have you ever tried it? You must have, presumably, in some 
respect.) You can't just logically, formulaically analyse those jokes - 
the common ingredients of, say, the lightbulb jokes. When you write 
something - even some logical extension, say, re how many plumbers it 
takes to change a light bulb - the joke *has* to strike you as funny. You 
have to laugh. It's the only way to test the joke.


Obviously you have no plans for endowing your computer with a self and a 
body, that has emotions and can shake with laughter. Or tears.


But what makes you laugh? The common ingredient of humour is human error. 
We laugh at humans making mistakes - mistakes that were/are preventable. 
People having their head stuck snootily in the air, and so falling on 
banana skins. Mrs Malaprop mispronouncing, misconstruing big words while 
trying to look clever, and refusing to admit her ignorance. And we laugh 
because we can personally identify, because we've made those kinds of 
mistakes. They are a fundamental and continuous part of our lives. (How 
will your AGI identify?)


So are AGI-ers *heroic* figures trying to be/produce giants, or are they 
*comic* figures, like Don Quixote, who are in fact tilting at windmills, 
and refusing even to check whether those windmill arms actually belong to 
giants?


There isn't a purely logicomathematical way to decide that. It takes an 
artistic as well as a scientific mentality involving not just whole 
different parts of your brain, but different faculties and sensibilities - 
all v. real, and not reducible to logic and maths. When you deal with AGI 
problems -  like the problem of AGI itself - you need them.


(You may think this all esoteric, but in fact, you need all those same 
faculties to understand everything that is precious to you - the universe/ 
world/ society/ atoms/ genes/ machines - even logic and maths. But more 
of that another time).













Re: [agi] Artificial humor

2008-09-10 Thread Mike Tintner
Obviously you have no plans for endowing your computer with a self and a 
body, that has emotions and can shake with laughter. Or tears.


Actually, many of us do.  And this is why your posts are so problematical. 
You invent what *we* believe and what we intend to do.  And then you 
criticize your total fabrications (a.k.a. mental masturbation).


You/others have plans for an *embodied* computer with the equivalent of an 
autonomic nervous system and the relevant, attached internal organs? A 
robot? That's certainly news to me. Please expand.








Re: [agi] Artificial humor

2008-09-10 Thread Mark Waser

That's certainly news to me.


Because you haven't been paying attention (or don't have the necessary 
background or desire to recognize it).  Look at the attention that's been 
paid to the qualia and consciousness arguments (http://consc.net/online). 
Any computer with sensors and effectors is embodied.  And IBM is 
even/already touting their Autonomic Computing initiatives 
(http://www.research.ibm.com/autonomic/).  Computers already divide tasks 
into foreground (conscious) and background (unconscious) processes that are 
*normally* loosely-coupled with internal details encapsulated away from each 
other.  Silicon intelligences aren't going to have human internal organs 
(except, maybe, as part of a project to simulate/study humans) but they're 
certainly going to have a sense of humor -- and while they are not going to 
have the evolved *physical* side-effects, it's going to feel like 
something to them.


Your arguments are very short-sighted and narrow and nitpicking minor 
*current* details while missing the sweeping scope of what is not only being 
proposed but actually moving forward around you.  Stop telling us what we 
think because you're getting it *WRONG*.  Stop telling us what we're missing 
because, in most cases, we're actually paying attention to version 3 of what 
you're talking about and you just don't recognize it.  You're looking at the 
blueprints of an F-14 Tomcat and arguing that the wings don't move right for a 
bird and, besides, it's too unstable for a human to fly (unassisted :-).


Read the papers in the first link and *maybe* we can have a useful 
conversation . . . .


- Original Message - 
From: Mike Tintner [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, September 10, 2008 7:41 AM
Subject: Re: [agi] Artificial humor


Obviously you have no plans for endowing your computer with a self and a 
body, that has emotions and can shake with laughter. Or tears.


Actually, many of us do.  And this is why your posts are so 
problematical. You invent what *we* believe and what we intend to do. 
And then you criticize your total fabrications (a.k.a. mental 
masturbation).


You/others have plans for an *embodied* computer with the equivalent of an 
autonomic nervous system and the relevant, attached internal organs? A 
robot? That's certainly news to me. Please expand.














Re: [agi] Artificial humor

2008-09-10 Thread Mark Waser

Your response makes my point precisely . . . .

Until you truly understand *why* IBM's top engineers believe that 
"autonomic" is the correct term (and it's very clear to someone with enough 
background and knowledge that it is), you shouldn't be attempting this 
discussion.  Yes, *in CURRENT detail*, autonomic computing is different from 
the human body -- especially since the computer is much more equivalent to 
the brain with much of the rest of the body corresponding to the power grid 
and whatever sensors, effectors, and locomotive devices the computer 
controls.  Where the rest of the body differs is in the fact that a lot of 
the smarts, that lie in the computer in the artificial case, are actually 
physically embedded in the organs in the physical case.  Look at the amount 
of nervous tissue in the digestive system.  Guess why the digestive system 
is so tied into your emotions.  But the fact that the computer doesn't 
replicate the inefficient idiosyncrasies of the human body is a good thing, 
not something to emulate.  Further, when you say things like


There is no computer or robot that keeps getting physically excited or 
depressed by its computations. (But it would be a good idea).


you don't even realize that laptops (and many other computers -- not to 
mention appliances) currently do precisely what you claim that no computer 
or robot does.  When they compute that they are not being used, they start 
shutting down power usage.  Do you really want to continue claiming this?


The vast majority of this mailing list is going over your head because you 
don't recognize that while the details are different (like the autonomic 
case), the general idea and direction are dead on and way past where you're 
languishing in your freezing cave bleating because a heat pump isn't fire.


(I also suspect that you've missed most of the humor in this and the 
previous message)
((I feel like a villain in a cheesy drama -- helplessly trapped into 
monologue when I know it will do no good))


- Original Message - 
From: Mike Tintner [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, September 10, 2008 10:18 AM
Subject: Re: [agi] Artificial humor


1. Autonomic [disembodied] computing is obviously radically different from 
having a body with a sympathetically controlled engine area (upper body) 
and a parasympathetically controlled digestive area (lower body), which are 
continually being emotionally revved up or down in preparation for action, 
and also in continuous conflict. There is no computer or robot that keeps 
getting physically excited or depressed by its computations. (But it would 
be a good idea.)


2. Mimicking emotions, as some robots do, is similarly v. different from 
having the physical capacity to embody them, and experience them.


3. Silicon intelligences - useful distinction - don't feel anything - 
they don't have an organic nervous system, and of course it's still a 
fascinating question to what extent feeling (the hard problem) is 
contained in that system. (Again, true feelings for AGIs would be a 
wonderful, perhaps essential, idea.)


4. To have a sense of humour, as I more or less indicated, you have to be 
able to identify with the funny guy making the error - and that is an 
*embodied* identification. The humour that gets the biggest, most physical 
laughs, and even has you falling on the floor, usually involves the 
biggest, most physical errors - e.g. slapstick. There are no plans that I 
know of to have computers falling about.


5. Over and over, AI/AGI are making the same mistake - trying to 
copy/emulate human faculties and refusing to acknowledge that they are 
vastly more complex than AI-ers' constructions.  AI-ers' attempts are 
valuable and productive, but their refusal to acknowledge the complexity 
of - and to respect the billion years of evolution behind - those 
faculties tends towards the comical. Rather like the chauffeur in High 
Anxiety who keeps struggling to carry a suitcase: "I got it... I got it... I 
got it. I ain't got it."


6. I would argue that it is AGI-ers who are focussed on the blueprints of 
their machine, and who repeatedly refuse to contemplate or discuss how it 
will fly (I seem to recall you making a similar criticism).



Because you haven't been paying attention (or don't have the necessary 
background or desire to recognize it).  Look at the attention that's been 
paid to the qualia and consciousness arguments (http://consc.net/online). 
Any computer with sensors and effectors is embodied.  And IBM is 
even/already touting their Autonomic Computing initiatives 
(http://www.research.ibm.com/autonomic/).  Computers already divide tasks 
into foreground (conscious) and background (unconscious) processes that 
are *normally* loosely-coupled with internal details encapsulated away 
from each other.  Silicon intelligences aren't going to have human 
internal organs (except, maybe, as part of a project to simulate/study 
humans

Re: [agi] Artificial humor

2008-09-10 Thread Mike Tintner
There is no computer or robot that keeps getting physically excited or 
depressed by its computations. (But it would be a good idea).


you don't even realize that laptops (and many other computers -- not to 
mention appliances) currently do precisely what you claim that no computer 
or robot does.


Emotional laptops, huh? Sounds like a great story idea for kids learning to 
love their laptops. Pixar needs you. [It hasn't crashed, it's just v. 
depressed].







Re: [agi] Artificial humor

2008-09-10 Thread Matt Mahoney
--- On Wed, 9/10/08, Mike Tintner [EMAIL PROTECTED] wrote:

 4.To have a sense of humour, as I more or less indicated,
 you have to be 
 able to identify with the funny guy making the
 error - and that is an 
 *embodied* identification. The humour that gets the
 biggest, most physical 
 laughs and even has you falling on the floor, usually
 involves the biggest, 
 most physical errors - e.g. slapstick. There are no plans
 that I know of, to 
 have computers falling about.

No, the computer's task is to recognize humor, not to experience it. You only 
have to model the part of the brain that sends the signal to your pleasure 
center.

-- Matt Mahoney, [EMAIL PROTECTED]






Re: [agi] Artificial humor

2008-09-10 Thread Eric Burton
Couldn't one use fine-grained collision detection in something like
OpenSim to feed tactile information into a neural net via a simulated
nervous system? The extent to which a simulated organism 'actually
feels' is certainly a point on a scale or a spectrum, just as it would
appear to be with carbon-based organisms on Earth as they progress up
from monocellular to higher life. If sufficient computing power were
brought to bear then perhaps an electronic organism could have
arbitrarily complex, rich interactions with its environment. A nervous
system with n tens of thousands of inputs seems to be a prerequisite,
but I don't know if it could be said to be doing anything significant
with them unless a proportionately huge computing layer were on the
back end.

All the tactile inputs would mandate a massive neural net, I think.
Pixel input for stereo visual fields alone constitutes a huge number of
inputs for any neural net to have. In a symbolically-based system you
could have layers of computer vision producing abstractions and
constructions for consciousness, but with a neural net, which seems
like the natural backend to a nervous system, this is less
straightforward.

The problem seems to be twofold: producing feedback on
avatar-environment interactions with sufficient resolution in the
front end, and processing it usefully on the back. By sufficient
resolution I'm thinking of collision detection that could activate as
appropriate tens or hundreds of thousands of sensors constituting
sensory streams for an electronic self, thus providing a richly
compelling idea of being embedded in its environment.

Work like that being done on algorithmic implementations of neuron
column function at IBM might prove to be the ideal computing layer for
this kind of VR embodiment, in order to enable the debate about what
goes on in the electronic substrate and whether or not it constitutes
'really feeling' or indeed really doing anything.
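
As a rough back-of-the-envelope sketch - toy sizes, plain numpy, nothing to do
with OpenSim or the IBM work - the front half of that pipeline might look like:

# Toy numbers-only sketch of the pipeline described above: a (much smaller)
# array of simulated contact sensors feeding a small feedforward net.
# Shapes, sizes, and names are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

N_SENSORS = 1_000          # the real proposal is tens or hundreds of thousands
HIDDEN    = 64

# One "time step" of tactile input: 1.0 where a collision touched a sensor.
touch = np.zeros(N_SENSORS)
touch[rng.integers(0, N_SENSORS, size=20)] = 1.0   # 20 simultaneous contacts

# A single random hidden layer stands in for the "computing layer" --
# real work would need learning, recurrence, and far more capacity.
W1 = rng.normal(0, 1 / np.sqrt(N_SENSORS), (HIDDEN, N_SENSORS))
W2 = rng.normal(0, 1 / np.sqrt(HIDDEN), (8, HIDDEN))

hidden = np.tanh(W1 @ touch)
features = np.tanh(W2 @ hidden)   # a crude 8-number summary of "what is touching me"
print(features.round(2))

Scaling the sensor count up and replacing the random weights with something
that actually learns is exactly where the huge back-end computing layer comes in.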

Eric B

On 9/10/08, Mark Waser [EMAIL PROTECTED] wrote:
 Emotional laptops, huh? Sounds like a great story idea for kids learning
 to love their laptops. Pixar needs you. [It hasn't crashed, it's just v.
 depressed].

 Great response.  Ignore my correct point with deflecting derision directed
 at a strawman (the last refuge of the incompetent).

 You seem more intent on winning an argument than learning or even honestly
 addressing the points that you yourself raised.

 I'll let you go back to your fantasies of being smarter than the rest of us
 now.

 - Original Message -
 From: Mike Tintner [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Wednesday, September 10, 2008 12:31 PM
 Subject: Re: [agi] Artificial humor


 There is no computer or robot that keeps getting physically excited or
 depressed by its computations. (But it would be a good idea).

 you don't even realize that laptops (and many other computers -- not to
 mention appliances) currently do precisely what you claim that no
 computer or robot does.

 Emotional laptops, huh? Sounds like a great story idea for kids learning
 to love their laptops. Pixar needs you. [It hasn't crashed, it's just v.
 depressed].














Re: [agi] Artificial humor

2008-09-10 Thread Eric Burton
I've seen humour modelled as a form of mental dissonance, when an
expectation is defied, especially a grave one. It may arise, then, as
a higher-order recognition of bizarreness in the overall state of the
mind at that point. Humour seems to me to be somehow fundamental to
intelligence, rather than originating from a given faculty.


On 9/10/08, Matt Mahoney [EMAIL PROTECTED] wrote:
 --- On Wed, 9/10/08, Mike Tintner [EMAIL PROTECTED] wrote:

 4.To have a sense of humour, as I more or less indicated,
 you have to be
 able to identify with the funny guy making the
 error - and that is an
 *embodied* identification. The humour that gets the
 biggest, most physical
 laughs and even has you falling on the floor, usually
 involves the biggest,
 most physical errors - e.g. slapstick. There are no plans
 that I know of, to
 have computers falling about.

 No, the computer's task is to recognize humor, not to experience it. You
 only have to model the part of the brain that sends the signal to your
 pleasure center.

 -- Matt Mahoney, [EMAIL PROTECTED]









Re: [agi] Artificial humor

2008-09-10 Thread Eric Burton
Here is an example I recall. A vine crosses your path and you think
there is a snake on your foot. Then you realize the nature of the vine
but the systemic effects of snake fear do not immediately subside. The
result is calming laughter. Perhaps, then, it's an evolved
compensation mechanism for biochemical states revealed intellectually
as inappropriate?

A deep subject!

On 9/10/08, Eric Burton [EMAIL PROTECTED] wrote:
 I've seen humour modelled as a form of mental dissonance, when an
 expectation is defied, especially a grave one. It may arise, then, as
 a higher-order recognition of bizarreness in the overall state of the
 mind at that point. Humour seems to me to be somehow fundamental to
 intelligence, rather than originating from a given faculty.


 On 9/10/08, Matt Mahoney [EMAIL PROTECTED] wrote:
 --- On Wed, 9/10/08, Mike Tintner [EMAIL PROTECTED] wrote:

 4.To have a sense of humour, as I more or less indicated,
 you have to be
 able to identify with the funny guy making the
 error - and that is an
 *embodied* identification. The humour that gets the
 biggest, most physical
 laughs and even has you falling on the floor, usually
 involves the biggest,
 most physical errors - e.g. slapstick. There are no plans
 that I know of, to
 have computers falling about.

 No, the computer's task is to recognize humor, not to experience it. You
 only have to model the part of the brain that sends the signal to your
 pleasure center.

 -- Matt Mahoney, [EMAIL PROTECTED]










Re: [agi] Artificial humor

2008-09-10 Thread Jiri Jelinek
On Wed, Sep 10, 2008 at 2:39 PM, Mike Tintner [EMAIL PROTECTED] wrote:
Without a body, you couldn't understand the joke.

False. Would you also say that without a body, you couldn't understand
3D space?

BTW it's kind of sad that people find it funny when others get hurt. I
wonder what the mirror neurons are doing at the time. Why do so many kids
like to watch the Tom & Jerry-like crap?

Jiri




Re: [agi] Artificial humor

2008-09-10 Thread Matt Mahoney
I think artificial humor has gotten little attention because humor (along with 
art and emotion) is mostly a right-brain activity, while science, math, and 
language are mostly left-brained. It should be no surprise that since most AI 
researchers are left-brained, their interest is in studying problems that the 
left brain solves. Studying humor would be like me trying to write a 
Russian-Chinese translator without knowing either language. It could be done, 
but I would have to study how other people think without introspecting on my 
own mind.

It seems little research has been done in spite of the huge economic potential 
for AI. For example, we know that most of what we laugh at is ordinary 
conversation rather than jokes, that some animals laugh, and that infants laugh 
at 3.5 to 4 months (before learning language). It is not clear why laughter 
(the involuntary response) or the desire to laugh evolved. How does it 
increase fitness?

http://men.webmd.com/features/why-do-we-laugh
http://www.livescience.com/animals/050331_laughter_ancient.html

Nevertheless, the brain computes it, so there is no reason in principle why a 
computer could not.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Artificial humor

2008-09-10 Thread John LaMuth

Matt

As I have previously written, this issue boils down to whether one is serious or 
one is not to be taken this way (a meta-order perspective)... the key 
feature in humor and comedy -- the meta-message being "don't take me 
seriously"


That is why I segregated analogical humor separately (from routine 
seriousness) in my 2nd US patent 7236963

www.emotionchip.net

This specialized meta-order-type of disqualification is built directly into 
the AGI schematics ...


I realize that proprietary patents have acquired a bad cachet, but should 
not necessarily be ignored 


John LaMuth

www.ethicalvalues.com

- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, September 10, 2008 1:53 PM
Subject: Re: [agi] Artificial humor


I think artificial humor has gotten little attention because humor (along 
with art and emotion) is mostly a right-brain activity, while science, 
math, and language are mostly left-brained. It should be no surprise that 
since most AI researchers are left-brained, their interest is in studying 
problems that the left brain solves. Studying humor would be like me trying 
to write a Russian-Chinese translator without knowing either language. It 
could be done, but I would have to study how other people think without 
introspecting on my own mind.


It seems little research has been done in spite of the huge economic 
potential for AI. For example, we know that most of what we laugh at is 
ordinary conversation rather than jokes, that some animals laugh, and that 
infants laugh at 3.5 to 4 months (before learning language). It is not 
clear why laughter (the involuntary response) or the desire to laugh 
evolved. How does it increase fitness?


http://men.webmd.com/features/why-do-we-laugh
http://www.livescience.com/animals/050331_laughter_ancient.html

Nevertheless, the brain computes it, so there is no reason in principle 
why a computer could not.


-- Matt Mahoney, [EMAIL PROTECTED]









Re: [agi] Artificial humor

2008-09-10 Thread Russell Wallace
The most plausible explanation I've heard is that humor evolved as a
social weapon for use by a group of low status individuals against a
high status individual. This explains why laughter is involuntarily
contagious, why it mostly occurs in conversation, why children like
watching Tom and Jerry and why it's always Tom rather than Jerry who
takes the fall. The snake & vine scenario is a derived application,
based on the general idea of something that had appeared badass,
turning out to not need to be taken seriously.




Re: [agi] Artificial humor

2008-09-09 Thread Mike Tintner

Matt,

Humor is dependent not on inductive reasoning by association, reversed or 
otherwise, but on the crossing of whole matrices/ spaces/ scripts ... and 
that good old AGI standby, domains. See Koestler esp. for how it's one 
version of all creativity -


http://www.casbs.org/~turner/art/deacon_images/index.html

Solve humor and you solve AGI. 




