Re: [agi] Re: AI isn't cheap

2008-09-11 Thread Samantha Atkins


On Sep 9, 2008, at 7:54 AM, Matt Mahoney wrote:


--- On Mon, 9/8/08, Steve Richfield [EMAIL PROTECTED] wrote:
On 9/7/08, Matt Mahoney [EMAIL PROTECTED] wrote:

The fact is that thousands of very intelligent people have been trying
to solve AI for the last 50 years, and most of them shared your optimism.



Unfortunately, their positions as students and professors at various
universities have forced almost all of them into politically correct
paths, substantially all of which lead nowhere, for otherwise they would
have succeeded long ago. The few mavericks who aren't stuck in a
university (like those on this forum) all lack funding.


Google is actively pursuing AI and has money to spend. If you have  
seen some of their talks, you know they are pursuing some basic and  
novel research.


Google, to the best of my knowledge, is pursuing some areas of narrow
AI. I do not believe they are remotely after AGI.






Perhaps it would be more fruitful to estimate the cost of automating the
global economy. I explained my estimate of 10^25 bits of memory, 10^26
OPS, 10^17 bits of software and 10^15 dollars.


You want to replicate the work currently done by 10^10 human brains.


Hmm.  Actually probably only some 10^6 of them at most are doing  
anything much worth replicating.  :-)


A brain has 10^15 synapses. A neuron axon has an information rate of  
10 bits per second. As I said, you can argue about these numbers but  
it doesn't matter much. An order of magnitude error only changes the  
time to AGI by a few years at the current rate of Moore's Law.
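
A quick sketch of the arithmetic behind "a few years" (the ~2-year
doubling time is an assumption for illustration, not a figure from the
thread):

import math

# How far does a 10x error in the hardware estimate shift the arrival
# date, if capacity doubles roughly every two years?
doubling_time_years = 2.0
delay = math.log2(10) * doubling_time_years
print(f"one order of magnitude ~= {delay:.1f} years")  # ~6.6 years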


Software is not subject to Moore's Law so its cost will eventually  
dominate.


So creating software-creating software may be a high-payoff subtask.

A human brain has about 10^9 bits of knowledge, of which probably  
10^7 to 10^8 bits are unique to each individual.


How much of this uniqueness is little more than variations on a much  
smaller number of themes and/or irrelevant to the task?


That makes 10^17 to 10^18 bits that have to be extracted from human  
brains and communicated to the AGI.


What for?  That seems like a very slow path that would pollute your  
AGI with countless errors and repetition.


This could be done in code or formal language, although most of it  
will probably be done in natural language once this capability is  
developed.


Natural languages are ridiculously slow and ambiguous.  There is no  
way the 10^7 guesstimated unique bits per individual will ever get  
encoded in natural language anyway (or much of anything else other  
than its encoding in those brains).


Since we don't know which parts of our knowledge is shared, the most  
practical approach is to dump all of it and let the AGI remove the  
redundancies.


Actually, of the knowledge the AGI needs we have pretty good ideas of  
how much is shared.


This will require a substantial fraction of each person's life time,  
so it has to be done in non obtrusive ways, such as recording all of  
your email and conversations (which, of course, all the major free  
services already do).


What exactly is your goal? Are you attempting to simulate all of
humankind? What for, when the real thing is up and running? If you
want uploads there are more direct possible paths after the AGI has
perfected some crucial technologies.






The cost estimate of $10^15 comes by estimating the world GDP ($66  
trillion per year in 2006, increasing 5% annually) from now until we  
have the hardware to support AGI. We have the option to have AGI  
sooner by paying more. Simple economics suggests we will pay up to  
what it is worth.


Why believe that the real productive intellectual output of the entire
human world is anywhere close to, or represented by, the world GDP? It
is not likely that we need to download the full contents of all human
brains, including the huge part that is mere variation on human primate
programming, to effectively meet and exceed this productive intellectual
output. I find this method of estimating costs utterly unconvincing.


- samantha





Re: [agi] Artificial humor

2008-09-11 Thread Samantha Atkins


On Sep 10, 2008, at 12:29 PM, Jiri Jelinek wrote:

On Wed, Sep 10, 2008 at 2:39 PM, Mike Tintner [EMAIL PROTECTED] wrote:

Without a body, you couldn't understand the joke.


False. Would you also say that without a body, you couldn't understand
3D space?


It depends on what is meant by, and the value of, "understand 3D
space". If the intelligence needs to navigate or work with 3D space,
or even understand intelligence whose very concepts are filled with 3D
metaphors, then I would think yes, that intelligence is going to need
at least simulated detailed experience of 3D space.


- samantha





Re: [agi] Artificial humor

2008-09-11 Thread Valentina Poletti
I think it's the surprise that makes you laugh actually, not physical
pain in other people. I find myself laughing at my own mistakes often
- not because they hurt (in fact if they did hurt they wouldn't be
funny) but because I get surprised by them.

Valentina

On 9/10/08, Jiri Jelinek [EMAIL PROTECTED] wrote:
 On Wed, Sep 10, 2008 at 2:39 PM, Mike Tintner [EMAIL PROTECTED] wrote:
Without a body, you couldn't understand the joke.

 False. Would you also say that without a body, you couldn't understand
 3D space ?

 BTW it's kind of sad that people find it funny when others get hurt. I
 wonder what the mirror neurons are doing at the time. Why do so many kids
 like to watch the Tom & Jerry-like crap?

 Jiri






Re: [agi] Re: AI isn't cheap

2008-09-11 Thread Steve Richfield
Samantha,

This is a really great posting. Just one comment:

On 9/11/08, Samantha Atkins [EMAIL PROTECTED] wrote:


 On Sep 9, 2008, at 7:54 AM, Matt Mahoney wrote:

  A human brain has about 10^9 bits of knowledge, of which probably 10^7 to
 10^8 bits are unique to each individual.


 How much of this uniqueness is little more than variations on a much
 smaller number of themes and/or irrelevant to the task?


WOW, my very favorite subject, since it so greatly overlaps with so many
religions. My claim is that most people are NOT sufficiently unique to claim
that they have any soul at all, so there is nothing for them to
save, especially through prayer that probably works to further standardize
their brains. Public education also works great for soul-elimination.

Steve Richfield





Re: [agi] Artificial humor

2008-09-11 Thread Jiri Jelinek
Samantha & Mike,

 Would you also say that without a body, you couldn't understand
 3D space ?

 It depends on what is meant by, and the value of, "understand 3D space".
 If the intelligence needs to navigate or work with 3D space or even
 understand intelligence whose very concepts are filled with 3D metaphors,
 then I would think yes, that intelligence is going to need at least
 simulated detailed  experience of 3D space.

If you talk to a program about a changing 3D scene and the program then
correctly answers questions about [basic] spatial relationships
between the objects, then I would say it understands 3D. Of course the
program needs to work with a queryable 3D representation, but it
doesn't need a body. I mean it doesn't need to be a real-world
robot, it doesn't need to associate a "self" with any particular 3D
object (real-world or simulated) and it doesn't need to be self-aware.
It just needs to be 3D-scene-aware, and the scene may contain just
a few basic 3D objects (e.g. the Shrdlu stuff).

Jiri




Re: [agi] Artificial humor

2008-09-11 Thread BillK
On Thu, Sep 11, 2008 at 2:28 PM, Jiri Jelinek wrote:
 If you talk to a program about a changing 3D scene and the program then
 correctly answers questions about [basic] spatial relationships
 between the objects, then I would say it understands 3D. Of course the
 program needs to work with a queryable 3D representation, but it
 doesn't need a body. I mean it doesn't need to be a real-world
 robot, it doesn't need to associate a "self" with any particular 3D
 object (real-world or simulated) and it doesn't need to be self-aware.
 It just needs to be 3D-scene-aware, and the scene may contain just
 a few basic 3D objects (e.g. the Shrdlu stuff).



Surely the DARPA autonomous vehicles driving themselves around the
desert and in traffic show that computers can cope quite well with a
3D environment, including other objects moving around them as well?

BillK




Re: [agi] Artificial humor

2008-09-11 Thread Mike Tintner

Jiri,

Quick answer because in a rush. Notice your "if"... Which programs actually
do understand any *general* concepts of orientation? SHRDLU, I will gladly
bet, didn't... and neither do any others.


The v. word "orientation" indicates the reality that every picture has a
point of view, and refers to an observer. And there is no physical way
around that.


You have been seduced by an illusion - the illusion of the flat, printed
page, existing in a timeless space. And you have accepted implicitly that
there really is such a world - "flatland" - where geometry and geometrical
operations take place, utterly independent of you the viewer and puppeteer,
and the solid world of real objects to which they refer. It demonstrably
isn't true.


Remove your eyes from the page and walk around in the world - your room, 
say. Hey, it's not flat...and neither are any of the objects in it. 
Triangular objects in the world are different from triangles on the page, 
fundamentally different.


But it is so difficult to shed yourself of this illusion. You need to look
at the history of culture and realise that the imposition on the
world/environment of first geometrical figures, and then, more than a
thousand years later, the fixed point of view and projective geometry, were
- and remain - a SUPREME TRIUMPH OF THE HUMAN IMAGINATION. They don't exist,
Jiri. They're just one of many possible frameworks (albeit v useful) to
impose on the physical world. Nomadic tribes couldn't conceive of squares
and enclosed spaces. Future generations will invent new frameworks.


Simple example of how persuasive the illusion is. I didn't understand until
yesterday what the introduction of a "fixed point of view" really meant - it
was that word "fixed". What was the big deal? I couldn't understand. Isn't
it a fact of life, almost?


Then it clicked. Your natural POV is mobile - your head/eyes keep moving - 
even when reading. It is an artificial invention to posit a fixed POV. And 
the geometric POV is doubly artificial, because it is one-eyed, no?, not 
stereoscopic. But once you get used to reading pages/screens you come to 
assume that an artificial fixed POV is *natural*.


[Stan Franklin was interested in a speculative paper suggesting that the
evolutionary brain's stabilisation of vision (a software triumph, because
organisms are so mobile) may have led to the development of consciousness.]


You have to understand the difference between 1) the page, or medium, and
2) the real world it depicts, and 3) you, the observer, reading/looking at
the page. Your idea of AGI is just one big page [or screen] that apparently
exists in splendid self-contained isolation.


It's an illusion, and it just doesn't *work* vis-a-vis programs. Do you
want to cling to excessive optimism and a simple POV or do you want to try
and grasp the admittedly complicated & more sophisticated reality?



RE: [agi] Artificial humor

2008-09-11 Thread John G. Rose
 From: John LaMuth [mailto:[EMAIL PROTECTED]
 
 As I have previously written, this issue boils down as "one is serious"
 or "one is not to be taken this way" (a meta-order perspective)... the key
 feature in humor and comedy -- the meta-message being "don't take me
 seriously"
 
 That is why I segregated analogical humor separately (from routine
 seriousness) in my 2nd US patent 7236963
 www.emotionchip.net
 
 This specialized meta-order-type of disqualification is built directly
 into
 the AGI schematics ...
 
 I realize that proprietary patents have acquired a bad cachet, but
 should
 not necessarily be ignored 
 

Nice patent. I can just imagine the look on the patent clerk's face when
that one came across the desk.

John






Re: [agi] Artificial humor

2008-09-11 Thread Mark Waser
Quick answer because in a rush. Notice your "if"... Which programs actually
do understand any *general* concepts of orientation? SHRDLU, I will gladly
bet, didn't... and neither do any others.


What about the programs that control Stanley and the other DARPA Grand 
Challenge vehicles?





Re: [agi] Artificial humor

2008-09-11 Thread Jiri Jelinek
Mike,

Imagine a simple 3D scene with 2 different-size spheres. A simple
program allows you to change positions of the spheres and it can
answer the question "Is the smaller sphere inside the bigger sphere?"
[Yes|Partly|No]. I can write such a program in no time. Sure, it's
extremely simple, but it deals with 3D, it demonstrates a certain level
of 3D understanding without embodiment, and there is no need to pass
the orientation parameter to the query function. Note that the
orientation is just a parameter. It doesn't represent a body and it
can be added. Of course understanding all the real-world 3D concepts
would take a lot more code and data than when playing with 3D
toy-worlds, but in principle, it's possible to understand 3D without
having a body.

Jiri
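
For concreteness, a minimal sketch in Python of the kind of program Jiri
describes - the Yes/Partly/No answers are his convention; the function name
and the containment test are illustrative assumptions, not his actual code:

import math

def sphere_query(c_small, r_small, c_big, r_big):
    # "Is the smaller sphere inside the bigger sphere?" -> Yes | Partly | No
    d = math.dist(c_small, c_big)      # distance between the two centers
    if d + r_small <= r_big:
        return "Yes"                   # smaller sphere fully contained
    if d >= r_small + r_big:
        return "No"                    # spheres are disjoint
    return "Partly"                    # surfaces intersect

# Example: unit sphere near the center of a sphere of radius 3.
print(sphere_query((0, 0, 0), 1.0, (1, 0, 0), 3.0))  # -> Yes

Note that no body, observer, or orientation parameter appears anywhere in
the query.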



Re: [agi] Re: AI isn't cheap

2008-09-11 Thread Matt Mahoney
I suppose in order to justify my cost estimate I need to define more precisely 
what I mean by AGI. I mean the cost of building an automated economy in which 
people don't have to work. This is not the same as automating what people 
currently do. Fifty years ago we might have imagined a future with robot gas 
station attendants and robot sales clerks. Nobody imagined self serve gas or 
shopping on the internet.

But the exact form of the technology does not matter. People will invest money
if there is an expected payoff higher than market-driven interest rates. These
numbers are known. AGI is worth $10^15 no matter how you build it.
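
A rough sanity check of that figure from the numbers quoted earlier in the
thread ($66 trillion world GDP in 2006, growing 5% annually); the ~30-year
horizon until AGI-capable hardware is an assumption for illustration:

# Sum world GDP from 2008 over an assumed ~30-year horizon.
gdp = 66e12 * 1.05 ** 2   # roll the 2006 figure forward to 2008
total = 0.0
for year in range(30):
    total += gdp
    gdp *= 1.05
print(f"{total:.1e}")     # ~4.8e15 -- order of magnitude 10^15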

An alternative goal of AGI is uploading, which I believe will cost considerably
less. How much would you pay to have a machine that duplicates your memories,
goals, and behavior well enough to convince everyone else that it is you, and
have that machine turned on after you die? Whether such a machine is "you"
(does your consciousness transfer?) is an irrelevant philosophical issue. It is
not important. What is important is the percentage of people who believe it is
true and are therefore willing to pay to upload. However, once we develop the
technology to scan brains and simulate them, there should be no need to develop
custom software or training for each individual as there is for building an
economy. The cost will be determined by Moore's Law.

(This does not solve the economic issues. You still have to pay uploads to 
work, or to write the software to automate the economy).

  Software is not subject to Moore's Law so its cost will eventually
  dominate.

 So creating software-creating software may be a high-payoff subtask.

If it is possible. However, there is currently no model for recursive
self-improvement. The major cost of "write a program to solve X" is the cost
of describing X. When you give humans a programming task, they already know
most of X without you specifying the details. To tell a machine, you either
have to specify X in such detail that it is equivalent to writing the program,
or you have to have a machine that knows everything that humans know, which
is AGI.

  A human brain has about 10^9 bits of knowledge, of which probably
  10^7 to 10^8 bits are unique to each individual.

 How much of this uniqueness is little more than variations on a much
 smaller number of themes and/or irrelevant to the task?

Good question. Everything you have learned through language is already known to 
somebody else. However, the fact that you learned X from Y is known only to you 
and possibly Y. Some fraction of nonverbally acquired knowledge is unique to 
you also.

What fraction is relevant? Perhaps very little if AGI means new ways of solving 
problems rather than duplicating the work we now do. For other tasks such as 
entertainment, advertising, or surveillance, everything you know is relevant.

 Google, to the best of my knowledge, is pursuing some areas of narrow
 AI. I do not believe they are remotely after AGI.

Google has only $10^11 to spend, not $10^15.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Artificial humor

2008-09-11 Thread Mike Tintner


Jiri,

Clearly a limited 3D functionality is possible for a program such as you
describe - as for SHRDLU. But what we're surely concerned with here is
generality. So fine, start with a restricted world of, say, different kinds
of kid's blocks and similar. But then the program must be able to tell what
is "in" what or "outside", what is "behind"/"over" etc. - and also what is
moving towards or away from an object (it surely should be a mobile program)
- and be able to move objects. My assumption is that even a relatively simple
such general program wouldn't work - (I obviously haven't thought about this
in any detail). It would be interesting to have the details about how SHRDLU
broke down.


Also - re BillK's useful intro. of DARPA - do those vehicles work by GPS?



Re: [agi] Artificial humor

2008-09-11 Thread Mark Waser

Also - re BillK's useful intro. of DARPA - do those vehicles work by GPS?


They are allowed to work by GPS but there are parts of the course where they 
are required to work without it.


Shouldn't you already have basic knowledge like this before proclaiming
things like "neither do any others" when talking about being able to
understand any *general* concepts of orientation?



Re: [agi] Artificial humor

2008-09-11 Thread Matt Mahoney
Mike, your argument would be on firmer ground if you could distinguish between 
when a computer understands something and when it just reacts as if it 
understands. What is the test? Otherwise, you could always claim that a machine 
doesn't understand anything because only humans can do that.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Artificial humor

2008-09-11 Thread Mike Tintner

Matt,

Jeez, massive question :).

Let me 1st partly dodge it, by giving you an example of the difficulty of
understanding, say, "over", both in NLP terms and ultimately (because it
will be the same more or less) in practical object recognition/movement
terms - because I suspect none of you have done what I told you (naughty)
& looked at Lakoff.


You will note the very different physical movements or positionings involved 
in:


The painting is over the mantle
The plane flew over the hill
Sam walked over the hill
Sam lives over the hill
The wall fell over
Sam turned the page over
She spread the cloth over the table.
The guards stood all over the hill
Look over my page
He went over the horizon
The line stretches over the yard
The board is over the hole

[not to mention]
The play is over
There are over a hundred
Do it over, but don't overdo it.

& there are many more.

See Lakoff for schema illustrations. Nearly all involve very different 
trajectories, physical relationships.
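
To make the spread concrete, a toy sense inventory in Python (the schema
labels are illustrative only - loosely in the spirit of Lakoff's image
schemas, not taken from his book):

# One surface word, many spatial/abstract schemas. Labels are illustrative.
OVER_SENSES = {
    "The painting is over the mantle":     "ABOVE, static, no contact",
    "The plane flew over the hill":        "ABOVE, path crosses landmark",
    "Sam walked over the hill":            "path with contact",
    "Sam lives over the hill":             "location at end of path",
    "The wall fell over":                  "rotation, vertical to horizontal",
    "She spread the cloth over the table": "covering",
    "The play is over":                    "completion (non-spatial)",
}
for sentence, schema in OVER_SENSES.items():
    print(f"{schema:35} <- {sentence}")

Any program that treats "over" as a single relation collapses all of these.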


That is why I'm confident that no program can handle that, but yes, Mark, I
was putting forward a new idea (certainly to me) in the "orientation"
framework - and doing no more than presenting a reasoned, but pretty
ill-informed hypothesis. (And that is what I think this forum is for. And I
will be delighted if you, or anyone else, will correct my
overgeneralisations and errors.)


Now a brief, rushed but, I suspect, massive and new answer to your
question - one that, I think, takes us, philosophically, way beyond the
concept of "grounding", which a lot of people are currently using for
"understanding".


To understand is to REALISE what [on earth, or in the [real] world] is
being talked about. It is, in principle, and often in practice, to be able
to go into the real world and point to the real objects/actions being
referred to (or realise that they are unreal/fantastic). So in terms of
understanding a statement containing how something is "over" something else,
it is to be able to go and point to the relevant objects in a scene, or, if
possible, to recreate the physical events or relationship.


I believe that is actually how we *do* understand, how the brain does work,
how a GI *must* work - and, if correct, it automatically moves us beyond
virtual AGI. I shall hopefully return to this concept on further
occasions - I believe it has enormous ramifications. There are many, many
qualifications to be made, which I won't attempt now; nevertheless the basic
principle holds - and will hold for the psychology of how humans understand
or *don't* understand or get confused.


IOW not only must an AGI or any GI be embodied, it must also be directly &
indirectly embedded in the world.


(Grounding is currently being interpreted in practice almost entirely from
the embodied or agent's side - as referring to what goes on *inside* the
agent's mind. Realisation involves complementarily defining intelligence
from the out-side - in terms of its ability to deal with the
environment/real world being referred to. BIG difference. Like between just
using nature/heredity, on the one hand, and, on the other, also using
nurture/environment to explain behaviour.)


I hope you realise what I've been saying :).





Re: [agi] Artificial humor

2008-09-11 Thread Matt Mahoney
Mike Tintner [EMAIL PROTECTED] wrote:

To understand is to REALISE what [on earth, or
in the [real] world] is being talked about.

Nice dodge. How do you distinguish between when a computer realizes something 
and when it just reacts as if it realizes it?

Yeah, I know. Turing dodged the question too.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Artificial humor

2008-09-11 Thread Mike Tintner




Mike Tintner [EMAIL PROTECTED] wrote:


To understand is to REALISE what [on earth, or
in the [real] world] is being talked about.


Matt: Nice dodge. How do you distinguish between when a computer realizes 
something and when it just reacts as if it realizes it?


Yeah, I know. Turing dodged the question too.



Matt,

I don't understand this objection - maybe I wasn't clear. I said to
"realise" is to be able to go and point to the real objects/actions referred
to, and to make the real actions happen. You understand what a "key" is if
you can go and pick one up. You understand what "picking up a key" is, if you
can do it. You understand what "sex" is, if you can point to it, or, better,
do it, & the scientific observers, or Turing testers, can observe it.


As I said, there are many qualifications and complications - for example, to
understand what "mind" is, is also to be able to point to one in action, but
it is a complex business on both sides [both mind and the pointing] -
nevertheless, if both fruitful scientific and philosophical discussion and
discovery about the mind are to take place, that real engagement with
real objects is exactly what must happen there too. That is the basis of
science (and technology).


The only obvious places where understanding/realisation, as defined here,
*don't* happen - or *appear* not to happen - are - can you guess? - yes,
logic and mathematics. And what are the subjects closest to the hearts of
virtual AGI-ers?


So you are generally intelligent if you can not just have a Turing test 
conversation with me about going and shopping in the supermarket, but 
actually go there and do it, per verbal instructions.


Explain any dodge here.






Re: [agi] Artificial humor... P.S

2008-09-11 Thread Mike Tintner

Matt,

To understand/realise is to be distinguished from (I would argue) to
"comprehend" statements.

The one is to be able to point to the real objects referred to. The other is
merely to be able to offer or find an alternative or dictionary definition
of the statements. A translation. Like the Chinese room translator. Who is
dealing in words, just words. Mere words.

(I'm open to an alternative title for "comprehend" - if it in any
way grates on you as a term, please say.)






Re: [agi] Artificial humor

2008-09-11 Thread John LaMuth
- Original Message - 
From: John G. Rose [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, September 11, 2008 8:28 AM
Subject: RE: [agi] Artificial humor


 
 Nice patent. I can just imagine the look on the patent clerk's face when
 that one came across the desk.
 
 John
##

I can safely assume Joe Hirl was smiling about having
his name forever attached to this
PATENT FOR THE AGES ...
(It did take over 3 months to pass)

John L
www.global-solutions.org 


 





Re: [agi] Artificial humor

2008-09-11 Thread Jiri Jelinek
Mike,

The plane flew over the hill
The play is over

Using a formal language can help to avoid many of these issues.

But then the program must be able to tell what is "in" what or "outside",
what is "behind"/"over" etc.

The communication module in my experimental AGI design includes
several specialized editors, one of which is a Space Editor, which
allows you to use simple objects in a small nD sample-space to define
the meaning of terms like "in", "outside", "above", "under" etc. The
goal is to define the meaning as simply as possible, and the knowledge
can then be used in more complex scenes generated for problem-solving
purposes.
Other editors:
Script Editor - for writing stories the system learns from.
Action Concept Editor - for learning about actions/verbs & related
roles/phases/changes.
Category Editor - for general categorization/grouping concepts.
Formula Editor - math stuff.
Interface Mapper - for teaching how to use tools (e.g. external software)
...
Some of those editors (probably including the Space Editor) will be
available only to privileged users. It's all RBAC-based. Only
lightweight 3D imagination - for performance reasons (our brains
cheat too) - and no embodiment. BTW I still have a lot to code
before making the system publicly accessible.
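
A minimal sketch of what one Space Editor definition might boil down to
(the axis-aligned-box representation and the predicate names are
assumptions for illustration; the actual design isn't published):

# Hypothetical illustration: "in" and "above" over axis-aligned boxes.
Box = tuple  # ((min_x, min_y, min_z), (max_x, max_y, max_z))

def inside(a: Box, b: Box) -> bool:
    # Box a lies entirely within box b on every axis.
    return all(b[0][i] <= a[0][i] and a[1][i] <= b[1][i] for i in range(3))

def above(a: Box, b: Box) -> bool:
    # Box a sits entirely higher than box b on the z axis.
    return a[0][2] >= b[1][2]

cup  = ((1, 1, 0), (2, 2, 1))
room = ((0, 0, 0), (5, 5, 3))
print(inside(cup, room), above(room, cup))  # True False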

To understand is .. in principle, ..to be able to go into the real world and 
point to the real objects/actions being referred to..

Not from my perspective.

I believe that is actually how we *do* understand, how the brain does work, 
how a GI *must* work

It's ok (and often a must) to use different solutions when developing
for different platforms.
Planes don't flap wings.

You understand what a "key" is if you can go and pick one up

Again, AGI can know very little about particular objects and it can be
enough to successfully solve many problems & demonstrate a useful level
of concept understanding. Let's say the AGI works as an online
adviser. For many key-involving problems it's good enough to know that
a particular key object can be used to unlock/open other particular
objects + the location info + sometimes the key color or so, but for
example the exact shape of the key or the exact moves for opening a
particular lock using the key - that's something this online AGI can
in most cases leave to the user. The AGI should be able to learn
details, but there are so many details in the real world that, for
practical reasons, the AGI would just need to filter most of them out.
AGI doesn't need to interact with the real world directly in order to
learn enough to be a helpful problem solver. And as long as it does a
good job as a problem solver, who cares about the "understanding" vs
"reacting as if it understands" classification.

Regards,
Jiri Jelinek

