[agi] Grand Cooperative Projects

2010-08-13 Thread Mike Tintner
like this (and the Genome Project):

http://www.nytimes.com/2010/08/13/health/research/13alzheimer.html?_r=1themc=th

should become an ever bigger part of sci. & tech. Of course, with Alzheimer's 
there is a great deal of commonly recognized ground. Not so with AGI. It might 
be interesting to speculate on what could be common ground in AGI & associated 
robotics. Common technological approaches, like the common protocols for 
robots suggested here, seem to me vulnerable to the probability that the chosen 
technologies may be simply wrong for AGI.




Re: [agi] Nao Nao

2010-08-12 Thread Mike Tintner
John,

Any more detailed thoughts about its precise handling capabilities? Did it, 
first, not pick up the duck independently (without human assistance)? If it 
did, what do you think would be the range of its object handling? (I had an 
immediate question about all this - have asked the site for further 
clarification - but nothing yet).


From: John G. Rose 
Sent: Thursday, August 12, 2010 5:46 AM
To: agi 
Subject: RE: [agi] Nao Nao


I wasn't meaning to portray pessimism.

 

And that little sucker probably couldn't pick up a knife yet.

 

But this is a paradigm change happening where we will have many networked 
mechanical entities. This opens up a whole new world of security and privacy 
issues...  

 

John

 

From: David Jones [mailto:davidher...@gmail.com] 



Way too pessimistic in my opinion. 

On Mon, Aug 9, 2010 at 7:06 PM, John G. Rose johnr...@polyplexic.com wrote:

Aww, so cute.

 

I wonder if it has a Wi-Fi connection, DHCP's an IP address, and relays sensory 
information back to the main servers with all the other Nao's all collecting 
personal data in a massive multi-agent geo-distributed robo-network.

 

So cuddly!

 

And I wonder if it receives and executes commands, commands that come in over 
the network from whatever interested corporation or government pays the most 
for access.

 

Such a sweet little friendly Nao. Everyone should get one :)

 

John






Re: [agi] Nao Nao

2010-08-12 Thread Mike Tintner
By "not made to perform work", you mean that it is not sturdy enough? Are any 
half-way AGI robots made to perform work, vs. production-line robots? (I think 
the idea of performing useful work should be a goal.)

The protocol is obviously a good idea, but you're not suggesting it per se will 
lead to AGI?


From: John G. Rose 
Sent: Thursday, August 12, 2010 3:17 PM
To: agi 
Subject: RE: [agi] Nao Nao


Typically the demo is some of the best that it can do. It looks like the robot 
is a mass-produced model that has some really basic handling capabilities; it is 
not made to perform work. It could still have a relatively advanced 
microprocessor and networking system; IOW, parts of the brain could run on 
centralized servers. I don't think they did that, BUT it could.

 

But it looks like one Nao can talk to another Nao. What's needed here is a 
standardized robot communication protocol. So a Nao could talk to a vacuum 
cleaner or a video cam or any other device that supports the protocol. 
Companies may resist this at first as they want to grab market share and don't 
understand the benefit.
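
[A minimal sketch, in Python, of what one message in such a standardized robot 
protocol might look like; the port number, field names and device IDs below are 
hypothetical illustrations, not any real Nao or vendor API.]

    import json
    import socket
    import time
    import uuid

    # Assumed wire format: one JSON object per UDP datagram on a shared port.
    PROTOCOL_PORT = 50505  # made-up port, not a registered one

    def make_message(sender, capability, payload):
        # Build a device-agnostic message a Nao, a vacuum cleaner or a camera could emit.
        return {
            "id": str(uuid.uuid4()),
            "time": time.time(),
            "sender": sender,          # e.g. "nao-livingroom" or "vacuum-01"
            "capability": capability,  # e.g. "speech", "navigation", "video"
            "payload": payload,        # capability-specific data
        }

    def broadcast(message):
        # Send the message to every protocol-aware device on the local network.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(json.dumps(message).encode("utf-8"), ("<broadcast>", PROTOCOL_PORT))
        sock.close()

    # Example: a Nao asks any vacuum on the network to clean a spot it has seen.
    broadcast(make_message("nao-livingroom", "navigation",
                           {"request": "clean", "x": 2.5, "y": 1.0}))

[The point of the sketch is only that the message format, not the device, is 
what gets standardized - which is what would let a Nao, a vacuum cleaner and a 
video cam interoperate.]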

 

John

 

From: Mike Tintner [mailto:tint...@blueyonder.co.uk] 
Sent: Thursday, August 12, 2010 4:56 AM
To: agi
Subject: Re: [agi] Nao Nao

 

John,

 

Any more detailed thoughts about its precise handling capabilities? Did it, 
first, not pick up the duck independently (without human assistance)? If it 
did, what do you think would be the range of its object handling? (I had an 
immediate question about all this - have asked the site for further 
clarification - but nothing yet).

 

From: John G. Rose 

Sent: Thursday, August 12, 2010 5:46 AM

To: agi 

Subject: RE: [agi] Nao Nao

 

I wasn't meaning to portray pessimism.

 

And that little sucker probably couldn't pick up a knife yet.

 

But this is a paradigm change happening where we will have many networked 
mechanical entities. This opens up a whole new world of security and privacy 
issues...  

 

John

 

From: David Jones [mailto:davidher...@gmail.com] 

Way too pessimistic in my opinion. 

On Mon, Aug 9, 2010 at 7:06 PM, John G. Rose johnr...@polyplexic.com wrote:

Aww, so cute.

 

I wonder if it has a Wi-Fi connection, DHCP's an IP address, and relays sensory 
information back to the main servers with all the other Nao's all collecting 
personal data in a massive multi-agent geo-distributed robo-network.

 

So cuddly!

 

And I wonder if it receives and executes commands, commands that come in over 
the network from whatever interested corporation or government pays the most 
for access.

 

Such a sweet little friendly Nao. Everyone should get one :)

 

John






[agi] Scalable vs Diversifiable

2010-08-11 Thread Mike Tintner
Isn't it time that people started adopting true AGI criteria?

The universal, endlessly repeated criterion here - that a system must be capable 
of "being scaled up" - is a narrow AI criterion.

The proper criterion is "diversifiable". If your system can, say, navigate a 
DARPA car through a grid of city streets, it's AGI if it's diversifiable - or 
rather, can diversify itself - if it can then navigate its way through a forest, 
or a strange maze, without being programmed anew. A system is AGI if it can 
diversify from one kind of task/activity to another, different kind - as humans 
and animals do - without being additionally programmed. Scale is irrelevant 
and deflects attention from the real problem.
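
[One way to make the "diversifiable" criterion concrete is as a test harness 
that runs the same, unmodified controller in several unrelated environments and 
passes it only if it copes with all of them. A minimal Python sketch; the 
environment classes and the controller/step interface are hypothetical.]

    # Diversifiability check: one fixed controller, no re-programming between runs.

    class Controller:
        # Placeholder interface: a real system would implement act().
        def act(self, observation):
            raise NotImplementedError

    def run_episode(controller, environment, max_steps=1000):
        # environment is assumed to expose reset() and step(action) -> (obs, done)
        observation = environment.reset()
        for _ in range(max_steps):
            action = controller.act(observation)
            observation, done = environment.step(action)
            if done:
                return True
        return False

    def is_diversifiable(controller, environments):
        # The criterion above: success everywhere with the same controller.
        return all(run_episode(controller, env) for env in environments)

    # environments = [CityGrid(), Forest(), StrangeMaze()]   # hypothetical tasks
    # print(is_diversifiable(my_controller, environments))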




Re: [agi] Scalable vs Diversifiable

2010-08-11 Thread Mike Tintner
To respond in kind, you, along with virtually all AGI-ers, show an inability to 
understand or define the problems of AGI - i.e. the end-problems that an AGI 
must face, the problems of creativity vs rationality. You only actually deal 
in standard, narrow AI problems.

If you don't understand what a new machine must do, all your technical 
knowledge of machines to date may be irrelevant. And in your case, I can't 
think of any concerns of yours, like complexity, that have anything to do with 
AGI problems at all - nor have you ever tried to relate them to any actual AGI 
problems.

So we're well-matched in inability - except that in creative matters, knowledge 
of the problems-to-be-solved always takes priority over knowledge of entirely 
irrelevant solutions.



From: Jim Bromer 
Sent: Wednesday, August 11, 2010 7:43 PM
To: agi 
Subject: Re: [agi] Scalable vs Diversifiable


I don't feel that a non-programmer can actually define what true AGI criteria 
would be.  The problem is not just oriented around a consumer definition of a 
goal, because it involves a fundamental comprehension of the tools available to 
achieve that goal.  I appreciate your idea that AGI has to be diversifiable but 
your inability to understand certain things that are said about computer 
programming makes your proclamation look odd.
Jim Bromer


On Wed, Aug 11, 2010 at 2:26 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Isn't it time that people started adopting true AGI criteria?

  The universal endlessly repeated criterion here that a system must be capable 
of being scaled up is a narrow AI criterion.

  The proper criterion is diversifiable. If your system can say navigate a 
DARPA car through a grid of city streets, it's AGI if it's diversifiable - or 
rather can diversify itself - if it can then navigate its way through a forest, 
or a strange maze - without being programmed anew. A system is AGI if it can 
diversify from one kind of task/activity to another different kind - as humans 
and animals do - without being additionally programmed . Scale is irrelevant 
and deflects attention from the real problem.





Re: [agi] Compressed Cross-Indexed Concepts

2010-08-10 Thread Mike Tintner
[from: Concept-Rich Mathematics Instruction]

Teacher: Very good. Now, look at this drawing and explain what you see. [Draws.]
Debora: It's a pie with three pieces.
Teacher: Tell us about the pieces.
Debora: Three thirds.
Teacher: What is the difference among the pieces?
Debora: This is the largest third, and here is the smallest . . .

Sound familiar? Have you ever wondered why students often understand mathematics 
in a very rudimentary and prototypical way, why even rich and exciting hands-on 
types of active learning do not always result in real learning of new concepts? 
From the psycho-educational perspective, these are the critical questions. In 
other words, epistemology is valuable to the extent that it helps us find ways 
to enable students who come with preconceived and misconceived ideas to 
understand a framework of scientific and mathematical concepts.

Constructivism: A New Perspective

At the dawn of behaviorism, constructivism became the most dominant epistemology 
in education. The purest forms of this philosophy profess that knowledge is not 
passively received either through the senses or by way of communication, just as 
meaning is not explicitly out there for grabs. Rather, constructivists generally 
agree that knowledge is actively built up by a cognizing human who needs to 
adapt to what is fit and viable (von Glasersfeld, 1995). Thus, there is no 
dispute among constructivists over the premise that one's knowledge is in a 
constant state of flux because humans are subject to an ever-changing reality 
(Jaworski, 1994, p. 16).

Although constructivists generally regard understanding as the outcome of an 
active process, constructivists still argue over the nature of the process of 
knowing. Is knowing simply a matter of recall? Does learning new concepts 
reflect additive or structural cognitive changes? Is the process of knowing 
concepts built from the bottom up, or can it be a top-down process? How does new 
conceptual knowledge depend on experience? How does conceptual knowledge relate 
to procedural knowledge? And, can teachers mediate conceptual development?

Is Learning New Concepts Simply a Mechanism of Memorization and Recall?

Science and mathematics educators have become increasingly aware that our 
understanding of conceptual change is at least as important as the analysis of 
the concepts themselves. In fact, a plethora of research has established that 
concepts are mental structures of intellectual relationships, not simply a 
subject matter. The research indicates that the mental structures of 
intellectual relationships that make up mental concepts organize human 
experiences and human memory (Bartsch, 1998). Therefore, conceptual changes 
represent structural cognitive changes, not simply additive changes. Based on 
the research in cognitive psychology, the attention of research in education has 
been shifting from the content (e.g., mathematical concepts) to the mental 
predicates, language, and preconcepts. Despite the research, many teachers 
continue to approach new concepts as if they were simply add-ons to their 
students' existing knowledge - a subject of memorization and recall. This 
practice may well be one of the causes of misconceptions in mathematics.

Structural Cognitive Change

The notion of structural cognitive change, or schematic change, was first 
introduced in the field of psychology (by Bartlett, who studied memory in the 
1930s). It became one of the basic tenets of constructivism. Researchers in 
mathematics education picked up on this term and have been leaning heavily on it 
since the 1960s, following Skemp (1962), Minsky (1975), and Davis (1984). The 
generally accepted idea among researchers in the field, as stated by Skemp 
(1986, p. 43), is that in mathematics, "to understand something is to assimilate 
it into an appropriate schema."

A structural cognitive change is not merely an appendage. It involves the whole 
network of interrelated operational and conceptual schemata. Structural changes 
are pervasive, central, and permanent. The first characteristic of structural 
change refers to its pervasive nature. That is, new experiences do not have a 
limited effect, but cause the entire cognitive structure to rearrange itself. 
Vygotsky (1986, p. 167) argued,

It was shown and proved experimentally that mental development does not coincide 
with the development of separate psychological functions, but rather depends on 
changing relations between them. The development of each function, in turn, 
depends upon the progress in the development of the interfunctional system.



From: Jim Bromer 
Sent: Monday, August 09, 2010 11:11 PM
To: agi 
Subject: [agi] Compressed Cross-Indexed Concepts


On Mon, Aug 9, 2010 at 4:57 PM, John G. Rose johnr...@polyplexic.com wrote:

   -Original Message-
   From: Jim Bromer [mailto:jimbro...@gmail.com]
  
how would these diverse 

Re: [agi] How To Create General AI Draft2

2010-08-09 Thread Mike Tintner
John: "It can be defined mathematically in many ways"

Try it - crude drawings/jottings/diagrams totally acceptable. See my set of 
photos to Dave.

(And yes, you're right, this is of extreme importance. And no, Dave, there are 
no such things as non-physical patterns.)

From: John G. Rose 
Sent: Monday, August 09, 2010 7:16 AM
To: agi 
Subject: RE: [agi] How To Create General AI Draft2


Actually this is quite critical.

 

Defining a chair - which would agree with each instance of a chair in the 
supplied image - is the way a chair should be defined and is the way the mind 
processes it.

 

It can be defined mathematically in many ways. There is a particular one I 
would go for though...

 

John

 

From: Mike Tintner [mailto:tint...@blueyonder.co.uk] 
Sent: Sunday, August 08, 2010 7:28 AM
To: agi
Subject: Re: [agi] How To Create General AI Draft2

 

You're waffling.

 

You say there's a pattern for chair - DRAW IT. Attached should help you.

 

Analyse the chairs given in terms of basic visual units. Or show how any basic 
units can be applied to them. Draw one or two.

 

You haven't identified any basic visual units  - you don't have any. Do you? 
Yes/no. 

 

No. That's not funny, that's a waste. And woolly and imprecise through and 
through.

 

 

 

From: David Jones 

Sent: Sunday, August 08, 2010 1:59 PM

To: agi 

Subject: Re: [agi] How To Create General AI Draft2

 

Mike,

We've argued about this over and over and over. I don't want to repeat previous 
arguments to you.

You have no proof that the world cannot be broken down into simpler concepts 
and components. The only proof you attempt to propose are your example problems 
that *you* don't understand how to solve. Just because *you* cannot solve them, 
doesn't mean they cannot be solved at all using a certain methodology. So, who 
is really making wild assumptions?

The mere fact that you can refer to a chair means that it is a recognizable 
pattern. LOL. The fact that you don't realize this is quite funny. 

Dave

On Sun, Aug 8, 2010 at 8:23 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

Dave:No... it is equivalent to saying that the whole world can be modeled as if 
everything was made up of matter

 

And matter is... ?  Huh?

 

You clearly don't realise that your thinking is seriously woolly - and you will 
pay a heavy price in lost time.

 

What are your basic world/visual-world analytic units  wh. you are claiming 
to exist?  

 

You thought - perhaps think still - that *concepts* wh. are pretty fundamental 
intellectual units of analysis at a certain level, could be expressed as, or 
indeed, were patterns. IOW there's a fundamental pattern for chair or 
table. Absolute nonsense. And a radical failure to understand the basic 
nature of concepts which is that they are *freeform* schemas, incapable of 
being expressed either as patterns or programs.

 

You had merely assumed that concepts could be expressed as patterns, but had 
never seriously, visually analysed it. Similarly you are merely assuming that 
the world can be analysed into some kind of visual units - but you haven't 
actually done the analysis, have you? You don't have any of these basic units 
to hand, do you? If you do, I suggest, reply instantly, naming a few. You won't 
be able to do it. They don't exist.

 

Your whole approach to AGI is based on variations of what we can call 
fundamental analysis - and it's wrong. God/Evolution hasn't built the world 
with any kind of geometric, or other consistent, bricks. He/It is a freeform 
designer. You have to start thinking outside the box/brick/fundamental unit.

 

From: David Jones 

Sent: Sunday, August 08, 2010 5:12 AM

To: agi 

Subject: Re: [agi] How To Create General AI Draft2

 

Mike,

I took your comments into consideration and have been updating my paper to make 
sure these problems are addressed. 

See more comments below.

On Fri, Aug 6, 2010 at 8:15 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

1) You don't define the difference between narrow AI and AGI - or make clear 
why your approach is one and not the other


I removed this because my audience is AI researchers... this is AGI 101. I 
think it's clear that my design defines "general" as being able to handle the 
vast majority of things we want the AI to handle without requiring a change in 
design.
 

   

  2) Learning about the world won't cut it -  vast nos. of progs. claim they 
can learn about the world - what's the difference between narrow AI and AGI 
learning?


The difference is in what you can or can't learn about and what tasks you can 
or can't perform. If the AI is able to receive input about anything it needs to 
know about in the same formats that it knows how to understand and analyze, it 
can reason about anything it needs to.
 

   

  3) Breaking things down into generic components allows us to learn about and 
handle the vast majority of things we want to learn about. This is what makes 
it general!

   

  Wild

Re: RE: [agi] How To Create General AI Draft2

2010-08-09 Thread Mike Tintner
Dave,

You offer nothing to even attend to.

The questions completely unanswered by you are:

1. What basic visual units of analysis have you arrived at? (You say there are 
such things - you must have arrived at something, no?) - zero answer.

2. What kind of physical/visual *pattern* informs our concept of "chair"? - zero 
answer. A non-physical pattern, pace you, is a non-existent entity/figment of 
your mind (just as "the pattern of divine grace" is) - and yet another 
non-answer.

You're supposed to be doing visual AGI - put up something visual in answer to 
the questions, or, I suggest, keep quiet.


From: David Jones 
Sent: Monday, August 09, 2010 11:55 AM
To: agi 
Subject: Re: RE: [agi] How To Create General AI Draft2


I agree John that this is a useful exercise. This would be a good discussion if 
mike would ever admit that I might be right and he might be wrong. I'm not sure 
that will ever happen though. :) First he says I can't define a pattern that 
works. Then, when I do, he says the pattern is no good because it isn't 
physical. Lol. If he would ever admit that I might have gotten it right, the 
discussion would be a good one. Instead, he hugs his preconceived notions no 
matter how good my arguments are and finds yet another reason, any reason will 
do, to say I'm still wrong. 


  On Aug 9, 2010 2:18 AM, John G. Rose johnr...@polyplexic.com wrote:


  Actually this is quite critical.



  Defining a chair - which would agree with each instance of a chair in the 
supplied image - is the way a chair should be defined and is the way the mind 
processes it.



  It can be defined mathematically in many ways. There is a particular one I 
would go for though...



  John



  From: Mike Tintner [mailto:tint...@blueyonder.co.uk] 
  Sent: Sunday, August 08, 2010 7:28 AM 


  To: agi
  Subject: Re: [agi] How To Create General AI Draft2




  You're waffling.



  You say there's a pattern for chair - DRAW IT. Attached should help you.



  Analyse the chairs given in terms of basic visual units. Or show how any 
basic units can be applied to them. Draw one or two.



  You haven't identified any basic visual units  - you don't have any. Do you? 
Yes/no. 



  No. That's not funny, that's a waste.. And woolly and imprecise through and 
through.







  From: David Jones 

  Sent: Sunday, August 08, 2010 1:59 PM

  To: agi 

  Subject: Re: [agi] How To Create General AI Draft2



  Mike,

  We've argued about this over and over and over. I don't want to repeat 
previous arguments to you.

  You have no proof that the world cannot be broken down into simpler concepts 
and components. The only proof you attempt to propose are your example problems 
that *you* don't understand how to solve. Just because *you* cannot solve them, 
doesn't mean they cannot be solved at all using a certain methodology. So, who 
is really making wild assumptions?

  The mere fact that you can refer to a chair means that it is a recognizable 
pattern. LOL. The fact that you don't realize this is quite funny. 

  Dave

  On Sun, Aug 8, 2010 at 8:23 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Dave:No... it is equivalent to saying that the whole world can be modeled as 
if everything was made up of matter



  And matter is... ?  Huh?



  You clearly don't realise that your thinking is seriously woolly - and you 
will pay a heavy price in lost time.



  What are your basic world/visual-world analytic units  wh. you are claiming 
to exist?  



  You thought - perhaps think still - that *concepts* wh. are pretty 
fundamental intellectual units of analysis at a certain level, could be 
expressed as, or indeed, were patterns. IOW there's a fundamental pattern for 
chair or table. Absolute nonsense. And a radical failure to understand the 
basic nature of concepts which is that they are *freeform* schemas, incapable 
of being expressed either as patterns or programs.



  You had merely assumed that concepts could be expressed as patterns, but had 
never seriously, visually analysed it. Similarly you are merely assuming that 
the world can be analysed into some kind of visual units - but you haven't 
actually done the analysis, have you? You don't have any of these basic units 
to hand, do you? If you do, I suggest, reply instantly, naming a few. You won't 
be able to do it. They don't exist.



  Your whole approach to AGI is based on variations of what we can call 
fundamental analysis - and it's wrong. God/Evolution hasn't built the world 
with any kind of geometric, or other consistent, bricks. He/It is a freeform 
designer. You have to start thinking outside the box/brick/fundamental unit.



  From: David Jones 

  Sent: Sunday, August 08, 2010 5:12 AM

  To: agi 

  Subject: Re: [agi] How To Create General AI Draft2



  Mike,

  I took your comments into consideration and have been updating my paper to 
make sure these problems are addressed. 

  See more comments below.

  On Fri, Aug 6, 2010 at 8:15 PM, Mike

Re: [agi] How To Create General AI Draft2

2010-08-09 Thread Mike Tintner
PS Examples of non-physical patterns AND how they are applicable to visual AGI?



From: David Jones 
Sent: Monday, August 09, 2010 1:34 PM
To: agi 
Subject: Re: [agi] How To Create General AI Draft2


You see. This is precisely why I don't want to argue with Mike anymore. it 
must be a physical pattern. LOL. Who ever said that patterns must be physical? 
This is exactly why you can't see my point of view. You impose unnecessary 
restrictions on any possible solution when there really are no such 
restrictions.

Dave


On Mon, Aug 9, 2010 at 7:27 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  John:It can be defined mathematically in many ways

  Try it - crude drawings/jottings/diagrams totally acceptable. See my set of 
fotos to Dave.

  (And yes, you're right this is of extreme importance. And no. Dave, there are 
no such things as non-physical patterns).









Re: [agi] How To Create General AI Draft2

2010-08-09 Thread Mike Tintner
Examples of nonphysical patterns?


From: David Jones 
Sent: Monday, August 09, 2010 1:34 PM
To: agi 
Subject: Re: [agi] How To Create General AI Draft2


You see. This is precisely why I don't want to argue with Mike anymore. it 
must be a physical pattern. LOL. Who ever said that patterns must be physical? 
This is exactly why you can't see my point of view. You impose unnecessary 
restrictions on any possible solution when there really are no such 
restrictions.

Dave


On Mon, Aug 9, 2010 at 7:27 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  John:It can be defined mathematically in many ways

  Try it - crude drawings/jottings/diagrams totally acceptable. See my set of 
fotos to Dave.

  (And yes, you're right this is of extreme importance. And no. Dave, there are 
no such things as non-physical patterns).









Re: [agi] How To Create General AI Draft2

2010-08-09 Thread Mike Tintner
No you didn't. You're being evasive through and through.

You haven't answered the questions put to you in any shape or form other than 
nonphysical - and never will. Nor do you have any answer. Finis.


From: David Jones 
Sent: Monday, August 09, 2010 1:51 PM
To: agi 
Subject: Re: [agi] How To Create General AI Draft2


I already stated these. read previous emails. 


On Mon, Aug 9, 2010 at 8:48 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  PS Examples of non-physical patterns AND how they are applicable to visual 
AGI?



  From: David Jones 
  Sent: Monday, August 09, 2010 1:34 PM
  To: agi 
  Subject: Re: [agi] How To Create General AI Draft2


  You see. This is precisely why I don't want to argue with Mike anymore. it 
must be a physical pattern. LOL. Who ever said that patterns must be physical? 
This is exactly why you can't see my point of view. You impose unnecessary 
restrictions on any possible solution when there really are no such 
restrictions.

  Dave


  On Mon, Aug 9, 2010 at 7:27 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

John:It can be defined mathematically in many ways

Try it - crude drawings/jottings/diagrams totally acceptable. See my set of 
fotos to Dave.

(And yes, you're right this is of extreme importance. And no. Dave, there 
are no such things as non-physical patterns).









Re: [agi] How To Create General AI Draft2

2010-08-09 Thread Mike Tintner
Ben: I don't agree that solving vision and the vision-cognition bridge is 
*such* a huge part of AGI, though it's certainly a nontrivial percentage

Presumably because you don't envisage your AGI/computer as an independent 
entity? All its info. is going to have to be entered into it in a specially 
prepared form - and it's still going to be massively and continuously dependent 
on human programmers?

Humans and real AGI's receive virtually all their info. - certainly all their 
internet info - through heavily visual processing (with obvious exceptions like 
sound). You can't do maths and logic if you can't see them, and they have 
visual forms -  equations and logic have visual form and use visual 
ideogrammatic as well as visual numerical signs. 

Just which intelligent problem-solving operations is your AGI going to do that do 
NOT involve visual processing OR - the alternative - massive human assistance 
to substitute for that processing?





Re: [agi] How To Create General AI Draft2

2010-08-09 Thread Mike Tintner
Ben: I think that vision and the vision-cognition bridge are important for AGI, 
but I think they're only a moderate portion of the problem, and not the hardest 
part...

Which is?


From: Ben Goertzel 
Sent: Monday, August 09, 2010 4:57 PM
To: agi 
Subject: Re: [agi] How To Create General AI Draft2





On Mon, Aug 9, 2010 at 11:42 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Ben: I don't agree that solving vision and the vision-cognition bridge is 
*such* a huge part of AGI, though it's certainly a nontrivial percentage

  Presumably because you don't envisage your AGI/computer as an independent 
entity? All its info. is going to have to be entered into it in a specially 
prepared form - and it's still going to be massively and continuously dependent 
on human programmers?

I envisage my AGI as an independent entity, ingesting information from the 
world in a similar manner to how humans do (as well as through additional 
senses not available to humans)

You misunderstood my statement.  I think that vision and the vision-cognition 
bridge are important for AGI, but I think they're only a moderate portion of 
the problem, and not the hardest part...

 

  Humans and real AGI's receive virtually all their info. - certainly all their 
internet info - through heavily visual processing (with obvious exceptions like 
sound). You can't do maths and logic if you can't see them, and they have 
visual forms -  equations and logic have visual form and use visual 
ideogrammatic as well as visual numerical signs. 

  Just wh. intelligent problemsolving operations is your AGI going to do, that 
do NOT involve visual processing OR - the alternative - massive human 
assistance to substitute for that processing?





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
CTO, Genescient Corp
Vice Chairman, Humanity+
Advisor, Singularity University and Singularity Institute
External Research Professor, Xiamen University, China
b...@goertzel.org

I admit that two times two makes four is an excellent thing, but if we are to 
give everything its due, two times two makes five is sometimes a very charming 
thing too. -- Fyodor Dostoevsky







[agi] Nao Nao

2010-08-09 Thread Mike Tintner
An unusually sophisticated (and somewhat expensive) promotional robot vid:

http://www.telegraph.co.uk/technology/news/7934318/Nao-the-robot-that-expresses-and-detects-emotions.html




Re: [agi] How To Create General AI Draft2

2010-08-08 Thread Mike Tintner
Dave: No... it is equivalent to saying that the whole world can be modeled as if 
everything was made up of matter

And matter is... ?  Huh?

You clearly don't realise that your thinking is seriously woolly - and you will 
pay a heavy price in lost time.

What are your basic world/visual-world analytic units  wh. you are claiming 
to exist?  

You thought - perhaps think still - that *concepts* wh. are pretty fundamental 
intellectual units of analysis at a certain level, could be expressed as, or 
indeed, were patterns. IOW there's a fundamental pattern for chair or 
table. Absolute nonsense. And a radical failure to understand the basic 
nature of concepts which is that they are *freeform* schemas, incapable of 
being expressed either as patterns or programs.

You had merely assumed that concepts could be expressed as patterns, but had 
never seriously, visually analysed it. Similarly you are merely assuming that 
the world can be analysed into some kind of visual units - but you haven't 
actually done the analysis, have you? You don't have any of these basic units 
to hand, do you? If you do, I suggest, reply instantly, naming a few. You won't 
be able to do it. They don't exist.

Your whole approach to AGI is based on variations of what we can call 
fundamental analysis - and it's wrong. God/Evolution hasn't built the world 
with any kind of geometric, or other consistent, bricks. He/It is a freeform 
designer. You have to start thinking outside the box/brick/fundamental unit.


From: David Jones 
Sent: Sunday, August 08, 2010 5:12 AM
To: agi 
Subject: Re: [agi] How To Create General AI Draft2


Mike,

I took your comments into consideration and have been updating my paper to make 
sure these problems are addressed. 

See more comments below.


On Fri, Aug 6, 2010 at 8:15 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  1) You don't define the difference between narrow AI and AGI - or make clear 
why your approach is one and not the other

I removed this because my audience is AI researchers... this is AGI 101. I 
think it's clear that my design defines "general" as being able to handle the 
vast majority of things we want the AI to handle without requiring a change in 
design.
 


  2) Learning about the world won't cut it -  vast nos. of progs. claim they 
can learn about the world - what's the difference between narrow AI and AGI 
learning?

The difference is in what you can or can't learn about and what tasks you can 
or can't perform. If the AI is able to receive input about anything it needs to 
know about in the same formats that it knows how to understand and analyze, it 
can reason about anything it needs to.
 

  3) Breaking things down into generic components allows us to learn about and 
handle the vast majority of things we want to learn about. This is what makes 
it general!

  Wild assumption - unproven, not at all demonstrated, and untrue.

You are only right that I haven't demonstrated it. I will address this in the 
next paper and continue adding details over the next few drafts.

As a simple argument against your counter argument... 

If that were true that we could not understand the world using a limited set of 
rules or concepts, how is it that a human baby, with a design that is 
predetermined to interact with the world a certain way by its DNA, is able to 
deal with unforeseen things that were not preprogrammed? That’s right, the baby 
was born with a set of rules that robustly allows it to deal with the 
unforeseen. It has a limited set of rules used to learn. That is equivalent to 
a limited set of “concepts” (i.e. rules) that would allow a computer to deal 
with the unforeseen. 
 
  Interesting philosophically because it implicitly underlies AGI-ers' 
fantasies of take-off. You can compare it to the idea that all science can be 
reduced to physics. If it could, then an AGI could indeed take-off. But it's 
demonstrably not so.

No... it is equivalent to saying that the whole world can be modeled as if 
everything was made up of matter. Oh, I forgot, that is the case :) It is a 
limited set of concepts, yet it can create everything we know.
 

  You don't seem to understand that the problem of AGI is to deal with the NEW 
- the unfamiliar, that wh. cannot be broken down into familiar categories, - 
and then find ways of dealing with it ad hoc.

You don't seem to understand that even the things you think cannot be broken 
down, can be.


Dave






Re: [agi] How To Create General AI Draft2

2010-08-08 Thread Mike Tintner

There is nothing visual or physical or geometric or quasi geometric about what 
you're saying - no shapes or forms whatsoever to your idea of patterns or 
chair or sitting. Given an opportunity to discuss physical concretes - and 
what actually physically constitutes a chair, or any other 
concept/class-of-forms is fascinating and central to AGI - you retreat into 
vague abstractions while claiming to be interested in visual AGI. 

Fine, let's leave it there.


From: David Jones 
Sent: Sunday, August 08, 2010 4:12 PM
To: agi 
Subject: Re: [agi] How To Create General AI Draft2


:) what you don't realize is that patterns don't have to be strictly limited to 
the actual physical structure.

In fact, the chair patterns you refer to are not strictly physical patterns. 
The pattern is based on how the objects can be used, what their intended uses 
probably are, and what most common effective uses are.

So, chairs are objects that are used to sit on. You can identify objects whose 
most likely use is for sitting based on experience.
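
[A minimal sketch of use-based (rather than shape-based) recognition along these 
lines, in Python; the features, numbers and thresholds are invented for 
illustration, not David's actual method.]

    # Guess an object's most likely use from crude functional features,
    # then label it by that use ("things you sit on" ~ chair-like).

    def likely_uses(obj):
        # obj is a dict of rough, observable properties (all hypothetical).
        uses = []
        if obj.get("flat_surface") and 0.3 < obj.get("surface_height_m", 0) < 0.7 \
                and obj.get("supports_weight_kg", 0) >= 80:
            uses.append("sitting")
        if obj.get("flat_surface") and obj.get("surface_height_m", 0) >= 0.7:
            uses.append("placing things on")
        return uses

    stool = {"flat_surface": True, "surface_height_m": 0.45, "supports_weight_kg": 120}
    print(likely_uses(stool))   # ['sitting'] -> chair-like, whatever its exact shape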

If you think this is not a sufficient refutation of your argument, then please 
don't argue with me regarding it anymore. I know your opinion and respectfully 
disagree. If you don't accept my counter argument, there is no point to 
continuing this back and forth ad infinitum. 

Dave


  On Aug 8, 2010 9:29 AM, Mike Tintner tint...@blueyonder.co.uk wrote:


  You're waffling.

  You say there's a pattern for chair - DRAW IT. Attached should help you.

  Analyse the chairs given in terms of basic visual units. Or show how any 
basic units can be applied to them. Draw one or two.

  You haven't identified any basic visual units  - you don't have any. Do you? 
Yes/no. 

  No. That's not funny, that's a waste.. And woolly and imprecise through and 
through.




  From: David Jones 
  Sent: Sunday, August 08, 2010 1:59 PM

  To: agi
  Subject: Re: [agi] How To Create General AI Draft2



  Mike,

  We've argued about this over and over and over. I don't want to repeat 
previous arguments to you.

  You have no proof that the world cannot be broken down into simpler concepts 
and components. The only proof you attempt to propose are your example problems 
that *you* don't understand how to solve. Just because *you* cannot solve them, 
doesn't mean they cannot be solved at all using a certain methodology. So, who 
is really making wild assumptions?

  The mere fact that you can refer to a chair means that it is a recognizable 
pattern. LOL. The fact that you don't realize this is quite funny. 

  Dave


  On Sun, Aug 8, 2010 at 8:23 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

Dave:No... it is equivalent to saying that the whole world can be modeled 
as if everything was made up of matter

And matter is... ?  Huh?

You clearly don't realise that your thinking is seriously woolly - and you 
will pay a heavy price in lost time.

What are your basic world/visual-world analytic units  wh. you are 
claiming to exist?  

You thought - perhaps think still - that *concepts* wh. are pretty 
fundamental intellectual units of analysis at a certain level, could be 
expressed as, or indeed, were patterns. IOW there's a fundamental pattern for 
chair or table. Absolute nonsense. And a radical failure to understand the 
basic nature of concepts which is that they are *freeform* schemas, incapable 
of being expressed either as patterns or programs.

You had merely assumed that concepts could be expressed as patterns, but had 
never seriously, visually analysed it. Similarly you are merely assuming that 
the world can be analysed into some kind of visual units - but you haven't 
actually done the analysis, have you? You don't have any of these basic units 
to hand, do you? If you do, I suggest, reply instantly, naming a few. You won't 
be able to do it. They don't exist.

Your whole approach to AGI is based on variations of what we can call 
fundamental analysis - and it's wrong. God/Evolution hasn't built the world 
with any kind of geometric, or other consistent, bricks. He/It is a freeform 
designer. You have to start thinking outside the box/brick/fundamental unit.


From: David Jones 
Sent: Sunday, August 08, 2010 5:12 AM
To: agi 
Subject: Re: [agi] How To Create General AI Draft2


Mike,

I took your comments into consideration and have been updating my paper to 
make sure these problems are addressed. 

See more comments below.


On Fri, Aug 6, 2010 at 8:15 PM, Mike Tintner tint...@blueyonder.co.uk 
wrote:

  1) You don't define the difference between narrow AI and AGI - or make 
clear why your approach is one and not the other

I removed this because my audience is AI researchers... this is AGI 
101. I think it's clear that my design defines "general" as being able to handle 
the vast majority of things we want the AI to handle without requiring a change 
in design.
 


  2) Learning about the world won't cut

Re: [agi] $35 ( 2GB RAM) it is

2010-08-07 Thread Mike Tintner
sounds like a great achievement - or not?


From: deepakjnath 
Sent: Saturday, August 07, 2010 2:55 PM
To: agi 
Subject: Re: [agi] $35 ( 2GB RAM) it is


This is done in a university in my city.! :) That is our Education Minister :)

cheers,
Deepak


On Sat, Aug 7, 2010 at 6:04 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  http://shockedinvestor.blogspot.com/2010/07/new-35-laptop-unveiled.html 






-- 
cheers,
Deepak






Re: [agi] Help requested: Making a list of (non-robotic) AGI low hanging fruit apps

2010-08-07 Thread Mike Tintner

Why don't you kick it off with a suggestion of your own?

(I think there are only lower/basic *robotic* AGI apps - and suggest no one 
will come up with any answers for you. Why don't you disprove me?)


--
From: Ben Goertzel b...@goertzel.org
Sent: Sunday, August 08, 2010 2:10 AM
To: agi agi@v2.listbox.com
Subject: [agi] Help requested: Making a list of (non-robotic) AGI low 
hanging fruit apps



Hi,

A fellow AGI researcher sent me this request, so I figured I'd throw it
out to you guys


I'm putting together an AGI pitch for investors and thinking of low
hanging fruit applications to argue for. I'm intentionally not
involving any mechanics (robots, moving parts, etc.). I'm focusing on
voice (i.e. conversational agents) and perhaps vision-based systems.
Hellen Keller AGI, if you will :)

Along those lines, I'd like any ideas you may have that would fall
under this description. I need to substantiate the case for such AGI
technology by making an argument for high-value apps. All ideas are
welcome.


All serious responses will be appreciated!!

Also, I would be grateful if we
could keep this thread closely focused on direct answers to this
question, rather than
digressive discussions on Helen Keller, the nature of AGI, the definition 
of AGI

versus narrow AI, the achievability or unachievability of AGI, etc.
etc.  If you think
the question is bad or meaningless or unclear or whatever, that's
fine, but please
start a new thread with a different subject line to make your point.

If the discussion is useful, my intention is to mine the answers into a 
compact

list to convey to him

Thanks!
Ben G









Re: [agi] Epiphany - Statements of Stupidity

2010-08-06 Thread Mike Tintner
Steve: I have posted plenty about statements of ignorance, our probable 
inability to comprehend what an advanced intelligence might be thinking, 

What will be the SIMPLEST thing that will mark the first sign of AGI? - Given 
that there are zero but zero examples of AGI.

Don't you think it would be a good idea to begin at the beginning? With 
initial AGI? Rather than advanced AGI? 




Re: [agi] Epiphany - Statements of Stupidity

2010-08-06 Thread Mike Tintner
Maybe you could give me one example from the history of technology where 
machines ran before they could walk? Where they started complex rather than 
simple?  Or indeed from evolution of any kind? Or indeed from human 
development? Where children started doing complex mental operations like logic, 
say, or maths or the equivalent before they could speak?  Or started running 
before they could control their arms, roll over, crawl, sit up, haul themselves 
up, stand up, totter -  just went straight to running?**

A bottom-up approach, I would have to agree, clearly isn't obvious to AGI-ers. 
But then there are v. few AGI-ers who have much sense of history or evolution. 
It's so much easier to engage in sci-fi fantasies about future, topdown AGI's.

It's HARDER to think about where AGI starts - requires serious application to 
the problem.

And frankly, until you or anyone else has a halfway viable idea of where AGI 
will or can start, and what uses it will serve, speculation about whether it's 
worth building complex, sci-fi AGI's is a waste of your valuable time.

**PS Note BTW - a distinction that eludes most AGI-ers - a present computer 
program doing logic or maths or chess is a fundamentally and massively 
different thing from a human or AGI doing the same, just as a current program 
doing NLP is totally different from a human using language. In all these 
cases, humans (and real AGIs to come) don't merely manipulate meaningless 
patterns of numbers; they relate the symbols first to concepts and then to 
real world referents - massively complex operations totally beyond current 
computers.

The whole history of AI/would-be AGI shows the terrible price of starting 
complex - with logic/maths/chess programs for example - and not having a clue 
about how intelligence has to be developed from v. simple origins, step by 
step, in order to actually understand these activities.



From: Steve Richfield 
Sent: Friday, August 06, 2010 4:52 PM
To: agi 
Subject: Re: [agi] Epiphany - Statements of Stupidity


Mike,

Your reply flies in the face of two obvious facts:
1.  I have little interest in what is called AGI here. My interests lie 
elsewhere, e.g. uploading, Dr. Eliza, etc. I posted this piece for several 
reasons, as it is directly applicable to Dr. Eliza, and because it casts a 
shadow on future dreams of AGI. I was hoping that those people who have thought 
things through regarding AGIs might have some thoughts here. Maybe these people 
don't (yet) exist?!
2.  You seem to think that a walk before you run approach, basically a 
bottom-up approach to AGI, is the obvious one. It sure isn't obvious to me. 
Besides, if my statements of stupidity theory is true, then why even bother 
building AGIs, because we won't even be able to meaningfully discuss things 
with them.

Steve
==

On Fri, Aug 6, 2010 at 2:57 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Steve: I have posted plenty about statements of ignorance, our probable 
inability to comprehend what an advanced intelligence might be thinking, 

  What will be the SIMPLEST thing that will mark the first sign of AGI ? - 
Given that there are zero but zero examples of AGI.

  Don't you think it would be a good idea to begin at the beginning? With 
initial AGI? Rather than advanced AGI? 





Re: [agi] AGI Alife

2010-08-06 Thread Mike Tintner
This is on the surface interesting. But I'm kinda dubious about it. 

I'd like to know exactly what's going on - who or what (what kind of organism) 
is solving what kind of problem about what? The exact nature of the problem and 
the solution, not just a general blurb description.

If you follow the link from Kurzweil, you get a really confusing 
picture/screen. And I wonder whether the real action/problemsolving isn't 
largely taking place in the viewer/programmer's mind.


From: rob levy 
Sent: Friday, August 06, 2010 7:23 PM
To: agi 
Subject: Re: [agi] AGI  Alife


Interesting article: 
http://www.newscientist.com/article/mg20727723.700-artificial-life-forms-evolve-basic-intelligence.html?page=1


On Sun, Aug 1, 2010 at 3:13 PM, Jan Klauck jkla...@uni-osnabrueck.de wrote:

  Ian Parker wrote


   I would like your
   opinion on *proofs* which involve an unproven hypothesis,


  I've no elaborated opinion on that.








Re: [agi] How To Create General AI Draft2

2010-08-06 Thread Mike Tintner
1) You don't define the difference between narrow AI and AGI - or make clear 
why your approach is one and not the other

2) Learning about the world won't cut it -  vast nos. of progs. claim they 
can learn about the world - what's the difference between narrow AI and AGI 
learning?

3) Breaking things down into generic components allows us to learn about and 
handle the vast majority of things we want to learn about. This is what makes 
it general!

Wild assumption - unproven, not at all demonstrated, and untrue. Interesting 
philosophically because it implicitly underlies AGI-ers' fantasies of 
take-off. You can compare it to the idea that all science can be reduced to 
physics. If it could, then an AGI could indeed take-off. But it's demonstrably 
not so.

You don't seem to understand that the problem of AGI is to deal with the NEW - 
the unfamiliar, that which cannot be broken down into familiar categories - and 
then find ways of dealing with it ad hoc.

You have to demonstrate a capacity for dealing with the new. (As opposed to, 
say, narrow AI squares).




From: David Jones 
Sent: Friday, August 06, 2010 9:44 PM
To: agi 
Subject: [agi] How To Create General AI Draft2


Hey Guys,

I've been working on writing out my approach to create general AI to share and 
debate it with others in the field. I've attached my second draft of it in PDF 
format, if you guys are at all interested. It's still a work in progress and 
hasn't been fully edited. Please feel free to comment, positively or 
negatively, if you have a chance to read any of it. I'll be adding to and 
editing it over the next few days.

I'll try to reply more professionally than I have been lately :) Sorry :S

Cheers,

Dave 





Re: [agi] Walker Lake

2010-08-02 Thread Mike Tintner
Steve: How about an international ban on the deployment of all unmanned and 
automated weapons? 

You might as well ask for a ban on war (or, perhaps, aggression). I strongly 
recommend reading the SciAm July 2010 issue on robotic warfare. The US already 
operates, from memory, somewhere between 13,000 and 20,000 unmanned weapons. 
Unmanned war (obviously with some, but ever less, human supervision) IS the 
future of war.

If you used a little lateral thinking, you'd realise that this may well be a 
v.g. thing - let robots kill each other rather than humans - whoever's robots 
win, wins the war. It would be interesting to compare Afghanistan/Vietnam - I 
imagine the kill count is considerably down (but correct me) - *because* of 
superior, more automated technology.




[agi] Robot Warriors - the closest to real AGI?

2010-08-02 Thread Mike Tintner
[Here's the SciAm article - go see the illustrations too. We should really be 
discussing all this technologically because it strikes me as the closest to 
real AGI there is - and probably where we're likely to see the soonest advances]



WAR MACHINES



Robots on and above the battlefield are bringing

about the most profound transformation of

warfare since the advent of the atom bomb

By P. W. Singer

Back in the early 1970s,
a handful of scientists, engineers,
defense contractors and
U.S. Air Force officers got
together to form a professional
group. They were essentially trying to
solve the same problem: how to build
machines that can operate on their own
without human control and to figure
out ways to convince both the public
and a reluctant Pentagon brass that robots
on the battlefield are a good idea.
For decades they met once or twice a
year, in relative obscurity, to talk over
technical issues, exchange gossip and
renew old friendships. This once cozy
group, the Association for Unmanned
Systems International, now encompasses
more than 1,500 member companies
and organizations from 55 countries.
The growth happened so fast, in fact,
that it found itself in something of an
identity crisis. At one of its meetings in
San Diego, it even hired a master storyteller
to help the group pull together
the narrative of the amazing changes in
robotic technology. As one attendee
summed up, "Where have we come
from? Where are we? And where should
we - and where do we want to - go?"
What prompted the group's soul-searching
is one of the most profound
changes in modern warfare since the
advent of gunpowder or the airplane:
an astonishingly rapid rise in the use of
robots on the battlefield. Not a single
robot accompanied
the U.S. advance
from Kuwait
toward Baghdad in 2003.
Since then, 7,000 unmanned aircraft and another
12,000 ground vehicles have entered the
U.S. military inventory, entrusted with missions
that range from seeking out snipers to bombing
the hideouts of al-Qaeda higher-ups in Pakistan.
The world's most powerful fighting forces,
which once eschewed robots as unbecoming to
their warrior culture, have now embraced a war
of the machines as a means of combating an irregular
enemy that triggers remote explosions
with cell phones and then blends back into the
crowd. These robotic systems are not only having
a big effect on how this new type of warfare
is fought, but they also have initiated a set of
contentious arguments about the implications
of using ever more autonomous and intelligent
machines in battle. Moving soldiers out of
harm's way may save lives, but the growing use
of robots also raises deep political, legal and
ethical questions about the fundamental nature
of warfare and whether these technologies
could inadvertently make wars easier to start.
The earliest threads of this story arguably
hark back to the 1921 play R.U.R., in which
Czech writer Karel Čapek coined the word robot
to describe mechanical servants that eventually
rise up against their human masters. The
word was packed with meaning, because it derived
from the Czech word for servitude and
the older Slavic word for slave, historically
linked to the robotniks, peasants who had revolted
against rich landowners in the 1800s.
This theme of robots taking on the work we
don't want to do but then ultimately assuming
control is a staple of science fiction that continues
today in The Terminator and The Matrix.
Today roboticists invoke the descriptors unmanned
or remote-operated to avoid Hollywood-
fueled visions of machines that are plotting
our demise. In the simplest terms, robots are
machines built to operate in a sense-think-act
paradigm. That is, they have sensors that gather
information
about the world. Those data are
then relayed to computer processors, and perhaps
artificial-intelligence software, that use
them to make appropriate decisions. Finally,
based on that information, mechanical systems
known as effectors carry out some physical action
on the world around them. Robots do not
have to be anthropomorphic, as is the other Hollywood
trope of a man in a metal suit. The size
and shape of the systems that are beginning to
carry out these actions vary widely and rarely
evoke the image of C-3PO or the Terminator.
The Global Positioning Satellite system, videogame-
like remote controls and a host of other
technologies have made robots both useful and
usable on the battlefield during the past decade.
The increased ability to observe, pinpoint and
then attack targets in hostile settings without
having to expose the human operator to danger
became a priority after the 9/11 attacks, and
each new use of the systems on the ground created
a success story that had broader repercussions.
As an example, in the first few months of the Afghan
campaign in 2001, a prototype of the PackBot,
now used extensively to defuse bombs, was
sent into the field for testing. The soldiers liked it
so much that they would not return it to its manufacturer,
iRobot, 
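
The sense-think-act paradigm described above can be sketched as a very small control loop. The Python below is an illustrative sketch only, with made-up sensor and effector stubs standing in for any real robot hardware or API:

import random, time

def sense():
    # Stand-in for real sensors: return a fake range reading in metres.
    return {"range_m": random.uniform(0.0, 5.0)}

def think(percept):
    # Stand-in for the decision step: stop if an obstacle is close, else go.
    return "stop" if percept["range_m"] < 1.0 else "forward"

def act(command):
    # Stand-in for effectors: just report what the motors would do.
    print("effectors:", command)

for _ in range(3):            # a real controller would loop until shutdown
    act(think(sense()))
    time.sleep(0.1)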

[agi] Systems AGI -[was: Of Singularities]

2010-08-01 Thread Mike Tintner
Dave: I believe that technological progress has been accelerating for quite some 
time now. In fact, that is hardly debatable

Yes, but that isn't the issue.  

What Lanier points out is that so far we only have machines that are 
*fragments* of living systems - rather like those horror movies, where you have 
a fragmentary limb, like a hand, or even a head, moving around, without any 
system to support it.  We don't have anything that is even remotely close to a 
*whole* living system.  We have what looks like the raw technology - robotics 
and computers can produce something akin to the various limbs and faculties of 
human and animal systems. But we can barely begin to integrate those parts in a 
living-system-like way.

The odd robots, like Honda's efforts, that *appear* to have whole bodies, cannot 
begin to function like whole living systems,  with full possession and 
integration of all their parts and faculties.  They can only use v. 
fragmentary parts of their limited brains and bodies. And they can only achieve 
one small task, compared with the fabulously rich economy of activities that 
any animal or human develops and conducts.

Now we have no idea about at what rate technology will progress towards whole 
living machine systems with whole economies of activities. Zero. Which doesn't 
stop AGI-ers making idiotic predictions on a regular basis. Whereas we have 
reasonable ideas about the future progress of many fragmentary machines.

The many AGI-ers like you, who think it is possible to produce disembodied, 
fragmentary machine brains/intelligence, have no real conception of the 
difference between fragmentary and living system machines. And, generally, both 
science and technology have extremely little understanding of the difference - 
or of what is required to produce a living system.

This isn't a small matter. Only a systems AGI, that does recognize the 
difference, can begin to call itself a serious as distinct from a fantasy 
technology.




[agi] No Shit AI

2010-07-30 Thread Mike Tintner
I write this month to condemn the inventor of the electronic seeing eye 
toilet. Yes, that's right, I'm talking toilets here, doo-doo-stuff, some of 
which I hopefully won't step in myself over the next few paragraphs. I know 
there must be more substantive and less objectionable topics to bring before 
you, but I have a sense that many of you join me in spirit if not common 
experience and so I devote this month's Outlook to another trivial snippet 
emphasizing our joint humanity and sense of loss due to the recent 
disappearance of the hand flusher.

I don't know where it is located exactly, but there's an electronic eye in the 
plumbing of public toilets these days that can sense when you get up and down 
(or is it down and up) and are finally finished with your business, if you 
get my drift. My doctor says a proctology exam is a necessary evil but cameras 
in toilets? Never having seen myself from this particular angle, it is 
particularly embarrassing to turn over the assignment to a camera and in effect 
say, Snap away - see anything that doesn't look right? I figure if there's an 
eye there, then there could also be a little voice that says, Have a seat, 
which of course I do, usually with much haste and a sense that I'd better get 
on with it before I attract a crowd.

It's after the dirty deed is complete, however, that the real intrigue begins. 
Does it flush or doesn't it? Only the computer chip knows for sure. Sometimes, 
though, after the paperwork has been filed, pants pulled up and an attempted 
getaway initiated - nothing happens. No flush. Well, what is one to do in such 
circumstances? You can't just leave it there, you know. Sometimes when the 
toilet's plugged and there's no plunger like in European bathrooms, you can get 
out of there quick with conscience intact, but only, of course, after checking 
to see that there's no one else in the restroom who might be able to testify 
against you in court for being a non-flusher. With electronic eye toilets, 
however, the conscience is never clear and so you wave your hand in front of 
the camera, hoping to convince it by the breaking of light waves that someone 
really has used the toilet and that somehow it just forgot, or maybe the 
deposit was so minuscule that it just didn't merit a flush. Hello in there! 
Having failed to trick it, however, the next step is to look for that little 
button in the back that you supposedly push in an emergency - sort of like a 
break glass in case of fire toilet equivalent. But think of all the billions 
of germs! At least with an old handle you could kick it with your shoe, hold up 
your arms like a doctor scrubbing for surgery and make an exit looking like 
you're auditioning for a part on ER. Finally I suppose you head for the door, 
all the while listening for the flush, the flush, that beautiful sound of the 
flush. I could have done it myself, you know, with a lot less hassle. Which is 
why I support a retreat to the old days, (not the backyard outhouse), but the 
good old-fashioned hand flusher. One push, and presto - you're good to go!



http://www.pimco.com/LeftNav/Featured+Market+Commentary/IO/2010/Gross+Privates+Eye+August.htm





Re: [agi] How do we hear music

2010-07-26 Thread Mike Tintner
David,

There must be a fair amount of cog sci/AI analysis of all this -  of how the 
brain analyses and remembers tunes  - and presumably leading theories (as for 
vision). Do you or anyone know more here?

Also, you have noted something of extreme importance, wh. is a lot more than a 
step further.

On the one hand, you've been analysing how we recognize the same, general tune in 
different, individual renditions. 

On the other hand, you've pointed out, we also recognize the INDIVIDUAL differences 
of/variations on the same genre/class - we appreciate the different ways 
Davis/Gillespie play as well as that they're playing the same tune.

Now correct me but isn't the individual dimension of images of all kinds, 
almost entirely missing from AI? The capacity to recognize what makes 
individuals of a species individual, and not just that they belong to the same 
species.  Isn't visual object recognition for example almost entirely focussed 
on recognizing general objects rather than individual objects - that that's an 
example of a general doll, rather than an individual particularly beaten up, or 
just slightly and disturbingly altered doll?

No doubt AI can recognize individual fingerprints, but it's the capacity to 
recognize individuals as variations on the general - to recognize that he has a 
particularly sarcastic smile, or she has a particularly lyrical, fluid walk,  
or that that tune contrasts harmonious and discordant music (as per rap) in a 
distinctive way - that's missing, no?



From: David Butler 
Sent: Monday, July 26, 2010 3:44 PM
To: agi 
Subject: Re: [agi] How do we hear music


When we listen to music there are many elements that come into play that create 
our memory of how the song goes.  If you take a piece of instrumental music,  
you have the melody, a succession of tones in a certain order,  duration of 
each note in the melody,  timbre, or tonal quality, (guitar vs trombone), time, 
how fast the song is played.  Phrasing, what part of the melody is emphasized 
using volume, change of tone quality etc...  Is the melody played slurred with 
all the notes run together or staccato played with short notes.   

To take it a step further how do we recognize a solo played by Miles Davis 
rather than Dizzy Gillespie  playing the same song both on trumpet but sound 
completely different in style.  How do we recognize when two different 
conductors direct the same music with the same orchestra but yet make it sound 
different?

.


On Thu, Jul 22, 2010 at 3:05 PM, Matt Mahoney matmaho...@yahoo.com wrote:

  deepakjnath wrote:


   Why do we listen to a song sung in different scale and yet identify it as 
the same song.?  Does it have something to do with the fundamental way in which 
we store memory?


  For the same reason that gray looks green on a red background. You have more 
neurons that respond to differences in tones than to absolute frequencies.
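
A minimal sketch of this relative-pitch point, assuming melodies are given as lists of MIDI note numbers (the tunes below are invented for illustration): two renditions in different keys count as the same song because their interval sequences match, even though every absolute pitch differs.

def intervals(melody):
    # Semitone differences between successive notes.
    return [b - a for a, b in zip(melody, melody[1:])]

def same_tune(melody_a, melody_b):
    # Same tune if the interval patterns match, regardless of key.
    return intervals(melody_a) == intervals(melody_b)

original   = [60, 62, 64, 65, 67]        # C D E F G
transposed = [67, 69, 71, 72, 74]        # same tune shifted up a fifth
different  = [60, 64, 62, 65, 67]

print(same_tune(original, transposed))   # True  - pitches differ, intervals match
print(same_tune(original, different))    # False - interval pattern differs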

   
  -- Matt Mahoney, matmaho...@yahoo.com 





--
  From: deepakjnath deepakjn...@gmail.com
  To: agi agi@v2.listbox.com
  Sent: Thu, July 22, 2010 3:59:57 PM
  Subject: [agi] How do we hear music


  Why do we listen to a song sung in different scale and yet identify it as the 
same song.?  Does it have something to do with the fundamental way in which we 
store memory?

  cheers,
  Deepak






Re: [agi] How do we hear music

2010-07-26 Thread Mike Tintner
Deepak,

No it's basically a distraction from the problem.  With time and closer 
inspection, they will all look different.

Correction, it IS useful. It probably tells us something about how the brain 
and an AGI must work 
First you start with a round blob shape for a class of objects - a face blob, 
and then you refine it and refine it, add more and more detail, for different 
individuals.

What makes Chinese difficult to individuate at first, is they have a particular 
characteristic wh. would be highly distinctive for a Western individual - 
relatively slanted eyes.  Imagine if a new race all had square jaws. You can't 
take your eyes off that feature at first. With time you learn to make 
adjustments for it, and notice the individual characteristics within the 
narrower eyes.  Ditto elephants are hard to individuate at first because they 
all have these massively distinctive features of huge ears and trunks.

You start general, and gradually individuate - but you have to individuate - 
your life depends on being able to distinguish individual characteristics as 
well as general forms.


From: deepakjnath 
Sent: Monday, July 26, 2010 7:56 PM
To: agi 
Subject: Re: [agi] How do we hear music


Mike,

All chinese look the same for me. But for a chinese person they don't. Why is 
this? Is there another clue here?

Thanks,
Deepak


On Mon, Jul 26, 2010 at 9:10 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  David,

  There must be a fair amount of cog sci/AI analysis of all this -  of how the 
brain analyses and remembers tunes  - and presumably leading theories (as for 
vision). Do you or anyone know more here?

  Also, you have noted something of extreme importance, wh. is a lot more than 
a step further.

  OTOH you've been analysing how we recognize the same, general tune in 
different, individual renditions. 

  OTOH you've pointed out, we also recognize the INDIVIDUAL differences 
of/variatiions on the same genre/class - we appreciate the different ways 
Davis/Gillespie play as well as that they're playing the same tune.

  Now correct me but isn't the individual dimension of images of all kinds, 
almost entirely missing from AI? The capacity to recognize what makes 
individuals of a species individual, and not just that they belong to the same 
species.  Isn't visual object recognition for example almost entirely focussed 
on recognizing general objects rather than individual objects - that that's an 
example of a general doll, rather than an individual particularly beaten up, or 
just slightly and disturbingly altered doll?

  No doubt AI can recognize individual fingerprints, but it's the capacity to 
recognize individuals as variations on the general - to recognize that he has a 
particularly sarcastic smile, or she has a particularly lyrical, fluid walk,  
or that that tune contrasts harmonious and discordant music (as per rap) in a 
distinctive way - that's missing, no?



  From: David Butler 
  Sent: Monday, July 26, 2010 3:44 PM
  To: agi 
  Subject: Re: [agi] How do we hear music


  When we listen to music there are many elements that come into play that 
create our memory of how the song goes.  If you take a piece of instrumental 
music,  you have the melody, a succession of tones in a certain order,  
duration of each note in the melody,  timbre, or tonal quality, (guitar vs 
trombone), time, how fast the song is played.  Phrasing, what part of the 
melody is emphasized using volume, change of tone quality etc...  Is the melody 
played slurred with all the notes run together or staccato played with short 
notes.   

  Too take it a step further how do we recognize a solo played by Miles Davis 
rather than Dizzy Gillespie  playing the same song both on trumpet but sound 
completely different in style.  How do we recognize when two different 
conductors direct the same music with the same orchestra but yet make it sound 
different?

  .


  On Thu, Jul 22, 2010 at 3:05 PM, Matt Mahoney matmaho...@yahoo.com wrote:

deepakjnath wrote:


 Why do we listen to a song sung in different scale and yet identify it as 
the same song.?  Does it have something to do with the fundamental way in which 
we store memory?


For the same reason that gray looks green on a red background. You have 
more neurons that respond to differences in tones than to absolute frequencies.

 
-- Matt Mahoney, matmaho...@yahoo.com 






From: deepakjnath deepakjn...@gmail.com
To: agi agi@v2.listbox.com
Sent: Thu, July 22, 2010 3:59:57 PM
Subject: [agi] How do we hear music


Why do we listen to a song sung in different scale and yet identify it as 
the same song.?  Does it have something to do with the fundamental way in which 
we store memory?

cheers,
Deepak


Re: [agi] How do we hear music

2010-07-26 Thread Mike Tintner
I'm not sure that's too diff. from what I'm saying.

The interesting question is what does the brain use as its general class model 
against wh. to compare new individuals? It's unlikely to be a or the first 
individual face/object as you seem to be suggesting.

Another factor here is that you interpret all these objects with your body - 
you understand other faces and bodies by projecting your own body into them - a 
remarkable example of that is the ability of a c. 2 month old infant to imitate 
the mouth movements of parents ( remember it hasn't seen its own mouth yet).


From: deepakjnath 
Sent: Monday, July 26, 2010 8:38 PM
To: agi 
Subject: Re: [agi] How do we hear music


Okay Mike, 

Let me write down my theory of this phenomenon. My intuition is that the brain 
learns in steps and deltas. The brain takes in a fixed amount of only new 
information at a time. So when a person who doesn't have many impressions 
(image memories) of Chinese people sees a Chinese person, he takes in the round face 
and the eyes etc which are new info to the seer.

When the seer sees another chinese person the older chinese persons image comes 
back into the working memory. The new person is stored as delta of the other 
person.

As the seer sees more and more people the basic structure is no longer new. The 
new features that get captured become the subtle variations from the basic 
structure. This ability to identify new information becomes a crucial function 
of the brain. Thus as time passes with images of chinese people, the seer will 
be able to capture subtle variation and recognize the person.

People who are not musically trained find it difficult to distinguish between 
notes. But repeated listening to the notes engrave the structure of notes to 
the memory. And complex and subtle variations of the notes become apparent to 
the listener as the base notes are already stored in the memory and so no 
longer new.
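
A loose numerical analogy for this store-only-the-delta idea, not a claim about how the brain actually does it; the feature vectors below are invented for illustration. A prototype is kept as a running average, each new exemplar is stored as its residual (delta) from the prototype, and the size of that residual stands in for how new the exemplar feels.

import numpy as np

class DeltaMemory:
    def __init__(self):
        self.prototype = None    # running average of everything seen so far
        self.deltas = []         # residual stored for each exemplar
        self.count = 0

    def observe(self, features):
        features = np.asarray(features, dtype=float)
        if self.prototype is None:
            # first exemplar: everything about it is new; it becomes the prototype
            self.prototype = features.copy()
            self.count = 1
            self.deltas.append(np.zeros_like(features))
            return float(np.linalg.norm(features))
        delta = features - self.prototype
        novelty = float(np.linalg.norm(delta))   # small delta = familiar structure
        self.deltas.append(delta)
        self.count += 1
        self.prototype += delta / self.count     # fold exemplar into the running average
        return novelty

mem = DeltaMemory()
for face in ([0.9, 0.1, 0.8], [0.85, 0.15, 0.8], [0.9, 0.12, 0.75]):
    print(round(mem.observe(face), 3))   # novelty shrinks as the basic structure becomes familiar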

cheers,
Deepak




On Tue, Jul 27, 2010 at 12:54 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Deepak,

  No it's basically a distraction from the problem.  With time and closer 
inspection, they will all look different.

  Correction, it IS useful. It probably tells us something about how the brain 
and an AGI must work 
  First you start with a round blob shape for a class of objects - a face blob, 
and then you refine it and refine it, add more and more detail, for different 
individuals.

  What makes Chinese difficult to individuate at first, is they have a 
particular characteristic wh. would be highly distinctive for a Western 
individual - relatively slanted eyes.  Imagine if a new race all had square 
jaws. You can't take your eyes off that feature at first. With time you learn 
to make adjustments for it, and notice the individual characteristics within 
the narrower eyes.  Ditto elephants are hard to individuate at first because 
they all have these massively distinctive features of huge ears and trunks.

  You start general, and gradually individuate - but you have to individuate - 
your life depends on being able to distinguish individual characteristics as 
well as general forms.


  From: deepakjnath 
  Sent: Monday, July 26, 2010 7:56 PM
  To: agi 
  Subject: Re: [agi] How do we hear music


  Mike,

  All chinese look the same for me. But for a chinese person they don't. Why is 
this? Is there another clue here?

  Thanks,
  Deepak


  On Mon, Jul 26, 2010 at 9:10 PM, Mike Tintner tint...@blueyonder.co.uk 
wrote:

David,

There must be a fair amount of cog sci/AI analysis of all this -  of how 
the brain analyses and remembers tunes  - and presumably leading theories (as 
for vision). Do you or anyone know more here?

Also, you have noted something of extreme importance, wh. is a lot more 
than a step further.

OTOH you've been analysing how we recognize the same, general tune in 
different, individual renditions. 

OTOH you've pointed out, we also recognize the INDIVIDUAL differences 
of/variatiions on the same genre/class - we appreciate the different ways 
Davis/Gillespie play as well as that they're playing the same tune.

Now correct me but isn't the individual dimension of images of all kinds, 
almost entirely missing from AI? The capacity to recognize what makes 
individuals of a species individual, and not just that they belong to the same 
species.  Isn't visual object recognition for example almost entirely focussed 
on recognizing general objects rather than individual objects - that that's an 
example of a general doll, rather than an individual particularly beaten up, or 
just slightly and disturbingly altered doll?

No doubt AI can recognize individual fingerprints, but it's the capacity to 
recognize individuals as variations on the general - to recognize that he has a 
particularly sarcastic smile, or she has a particularly lyrical, fluid walk,  
or that that tune contrasts harmonious and discordant music (as per rap) in a 
distinctive way

[agi] The Math Behind Creativity

2010-07-25 Thread Mike Tintner
I came across this, thinking it was going to be an example of maths fantasy, 
but actually it has a rather nice idea about the mathematics of creativity.


The Math Behind Creativity
By Chuck Scott on June 15, 2010

The Science of Creativity is based on the following mathematical formula for 
Creativity:



Creativity = ∞ - πR²

In other words, Creativity is equal to infinity minus the area of a defined 
circle of what's working. 

Note: πR² is the geometric formula for calculating the area of a circle; where π is 
3.142 rounded to the nearest thousandth, and R is a circle's radius (the length 
from a circle's center to edge).



**

Simply, it's saying - that for every problem, and ultimately that's not just 
creative but rational problems, there's a definable space of options - the 
spaces you guys work with in your programs - wh. may work, if the problem is 
rational, but normally don't if it's creative. And beyond that space is the 
undefined space of creativity, wh. encompasses the entire world in an infinity 
of combinations. (Or all the fabulous multiverse[s] of Ben's mind).  Creative 
ideas - and that can be small everyday ideas as well as large cultural ones - 
can come from anywhere in, and any combinations of, the entire world (incl 
butterflies in Brazil and caterpillars in Katmandu -  QED I just drew that last 
phrase off the cuff from that vast world). Creative thinking - and that incl. 
the thinking of all humans from children on - is "what in the world?" 
thinking - that can and does draw upon the infinite resources of the world. 
"What in the world is he on about?" "Where in the world will I find s.o. 
who..?" "What in the world could be of help here?"

And that is another way of highlighting the absurdity of current approaches to 
AGI - that would seek to encompass the entire world of creative ideas/options 
in the infinitesimal spaces/options of programs.








Re: [agi] The Math Behind Creativity

2010-07-25 Thread Mike Tintner
I think it's v. useful - although I was really extending his idea.

Correct me - but almost no matter what you guys do, (or anyone in AI does) , 
you think in terms of spaces, or frames. Spaces of options. Whether you're 
doing logic, maths, or programs, spaces in one form or other are fundamental.

But you won't find anyone - or show me to the contrary - applying spaces to 
creative problems (or AGI problems). T

And what's useful IMO is the idea of **trying** to encompass the space of 
creative options - the options for any creative problem [wh can be as simple 
or complex as what shall we have to eat tonight? or how do we reform the 
banks? or  what do you think of the state of AGI? ]. 

It's only when you **try** to formalise creativity , that you realise it can't 
be done in any practical, programmable way - or formal way. You can only do it 
conceptually. Informally. 

The options are infinite, or, at any rate, practically endless. - and 
infinite not just in number, but in *diversity*, in endlessly proliferating 
*domains* and categories extending right across the world.

**And this is the case for every creative problem - every AGI problem**   (one 
reason why you won't find anyone in the field of AGI, actually doing AGI, only 
narrow AI gestures at the goal).  

It's only when you attempt - and fail - to formalise the space of creativity, 
that the meaning of there are infinite creative options really comes home. 
And you should be able to start to see why narrow AI and AGI are fundamentally 
opposite affairs - thinking in closed spaces vs thinking in open worlds.

{It is fundamental BTW to the method of rationality - and rationalisation - 
epitomised in current programming - to create and think in a closed space of 
options, wh. is always artificial in nature].




From: rob levy 
Sent: Sunday, July 25, 2010 9:16 PM
To: agi 
Subject: Re: [agi] The Math Behind Creativity


Not sure how that is useful, or even how it relates to creativity if considered 
as an informal description?


On Sun, Jul 25, 2010 at 10:15 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  I came across this, thinking it was going to be an example of maths fantasy, 
but actually it has a rather nice idea about the mathematics of creativity.

  
  The Math Behind Creativity
  By Chuck Scott on June 15, 2010

  The Science of Creativity is based on the following mathematical formula for 
Creativity:



  Creativity = ∞ - πR²

  In other words, Creativity is equal to infinity minus the area of a defined 
circle of what's working. 

  Note: πR² is the geometric formula for calculating the area of a circle; where π 
is 3.142 rounded to the nearest thousandth, and R is a circle's radius (the 
length from a circle's center to edge).



  **

  Simply, it's saying - that for every problem, and ultimately that's not just 
creative but rational problems, there's a definable space of options - the 
spaces you guys work with in your programs - wh. may work, if the problem is 
rational, but normally don't if it's creative. And beyond that space is the 
undefined space of creativity, wh. encompasses the entire world in an infinity 
of combinations. (Or all the fabulous multiverse[s] of Ben's mind).  Creative 
ideas - and that can be small everyday ideas as well as large cultural ones - 
can come from anywhere in, and any combinations of, the entire world (incl 
butterflies in Brazil and caterpillars in Katmandu -  QED I just drew that last 
phrase off the cuff from that vast world). Creative thinking - and that incl. 
the thinking of all humans from children on - is what in the world ? 
thinking - that can and does draw upon the infinite resources of the world. 
What in the world is he on about? Where in the world will I find s.o. 
who..? What in the world could be of help here?

  And that is another way of highlighting the absurdity of current approaches 
to AGI - that would seek to encompass the entire world of creative 
ideas/options in the infinitesimal spaces/options of programs.










Re: [agi] The Math Behind Creativity

2010-07-25 Thread Mike Tintner
I wasn't trying for a detailed model of creative thinking with explanatory 
power -  merely one dimension (and indeed a foundation) of it.

In contrast to rational, deterministically programmed computers and robots wh. 
can only operate in closed spaces externally, (artificial environments) and 
only think in closed spaces internally,  human (real AGI) agents are designed 
to operate in the open world externally, (real world environments) and to think 
in open worlds internally.

IOW when you think about any creative problem, like what am I going to do 
tonight? or let me write a post in reply to MT - you *don't* have a nice 
neat space/frame of options lined up as per a computer program, which your 
brain systematically checks through. You have an open world of associations - 
associated with varying degrees of power - wh. you have to search, or since AI 
has corrupted that word, perhaps we should say quest through in haphazard, 
nonsystematic fashion. You have to *explore* your brain for ideas - and it is a 
risky business, wh. (with more difficult problems) may draw a blank.

(Nor BTW does your brain set up a space for solving creative problems - as 
was vaguely mooted in a recent discussion with Ben. Closed spaces are strictly 
for rational problems).

IMO though this contrast of narrow AI/rationality as thinking in closed 
spaces vs AGI/creativity as thinking in open worlds is a very powerful one.

Re your examples, I don't think Koestler or Fauconnier are talking of defined 
or closed spaces.  The latter is v. vague about the nature of his spaces. I 
think they're rather like the formulae for creativity that our folk culture 
often talks about. V. loosely. They aren't used in the strict senses the terms 
have in rationality - logic/maths/programming.

Note that Calvin's/Piaget's idea of consciousness as designed for when you 
don't know what to do accords with my idea of creative thinking as effectively 
starting from a blank page rather than than a ready space of options, and 
going on to explore a world of associations for ideas.

P.S. I should have stressed that the open world of the brain is 
**multidomain**, indeed **open-domain** by contrast with the spaces of programs 
wh. are closed, uni-domain. When you search for what am I going to do..?  
your brain can go through an endless world of domains -  movies,call a friend, 
watch TV, browse the net, meal, go for walk, play a sport, ask s.o. for novel 
ideas, spend time with my kid ... and on and on.

The space thinking of rationality is superefficient but rigid and useless for 
AGI. The open world of the human, creative mind is highly inefficient by 
comparison but superflexible and the only way to do AGI.





From: rob levy 
Sent: Monday, July 26, 2010 1:06 AM
To: agi 
Subject: Re: [agi] The Math Behind Creativity


On Sun, Jul 25, 2010 at 5:05 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  I think it's v. useful - although I was really extending his idea.

  Correct me - but almost no matter what you guys do, (or anyone in AI does) , 
you think in terms of spaces, or frames. Spaces of options. Whether you're 
doing logic, maths, or programs, spaces in one form or other are fundamental.

  But you won't find anyone - or show me to the contrary - applying spaces to 
creative problems (or AGI problems). T




I guess we may somehow be familiar with different and non-overlapping 
literature, but it seems to me that most or at least many approaches to 
modeling creativity involve a notion of spaces of some kind.  I won't make a 
case to back that up but I will list a few examples: Koestler's bisociation is 
spatial, D. T. Campbell, the Fogels, Finke et al, and William Calvin's 
evolutionary notion of creativity involve a behavioral or conceptual fitness 
landscape, Gilles Fauconnier & Mark Turner's theory of conceptual blending on 
mental space, etc. etc.


The idea of the website you posted is very lacking in any kind of explanatory 
power in my opinion.  To me any theory of creativity should be able to show how 
a system is able to generate novel and good results.  Creativity is more than 
just outside what is known, created, or working.  That is a description of 
novelty, and with no suggestions for the why or how of generating novelty.  
Creativity also requires the semantic potential to reflect on and direct the 
focusing in on the stream of playful novelty to that which is desired or 
considered good.  


I would disagree that creativity is outside the established/known.  A better 
characterization would be that it resides on the complex boundary of the novel 
and the established, which is what make it interesting instead just a copy, or 
just total gobbledygook randomness.

Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-24 Thread Mike Tintner
Huh, Matt? What examples of this holistic scene analysis are there (or are 
you thinking about)?


From: Matt Mahoney 
Sent: Saturday, July 24, 2010 10:25 PM
To: agi 
Subject: Re: [agi] Re: Huge Progress on the Core of AGI


David Jones wrote:
 I should also mention that I ran into problems mainly because I was having a 
 hard time deciding how to identify objects and determine what is really going 
 on in a scene.


I think that your approach makes the problem harder than it needs to be (not 
that it is easy). Natural language processing is hard, so researchers in an 
attempt to break down the task into simpler parts, focused on steps like 
lexical analysis, parsing, part of speech resolution, and semantic analysis. 
While these problems went unsolved, Google went directly to a solution by 
skipping them.


Likewise, parsing an image into physically separate objects and then building a 
3-D model makes the problem harder, not easier. Again, look at the whole 
picture. You input an image and output a response. Let the system figure out 
which features are important. If your goal is to count basketball passes, then 
it is irrelevant whether the AGI recognizes that somebody is wearing a gorilla 
suit.
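
A minimal sketch of this input-an-image, output-a-response framing, as opposed to a hand-built segment-then-model pipeline; the 1-nearest-neighbour choice and the toy images are assumptions for illustration, not a claim that this solves vision.

import numpy as np

def train(images, responses):
    # Just remember the raw examples, flattened to vectors; no object parsing.
    return np.array([img.ravel() for img in images]), list(responses)

def respond(image, train_x, train_y):
    # Answer with the response attached to the most similar remembered image.
    distances = np.linalg.norm(train_x - image.ravel(), axis=1)
    return train_y[int(np.argmin(distances))]

# toy 4x4 "images": bright region on the left vs on the right
left  = np.zeros((4, 4)); left[:, :2]  = 1.0
right = np.zeros((4, 4)); right[:, 2:] = 1.0
train_x, train_y = train([left, right], ["pass left", "pass right"])

query = left + 0.1 * np.random.rand(4, 4)   # noisy new frame
print(respond(query, train_x, train_y))     # "pass left"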

 




Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-24 Thread Mike Tintner
Matt: 
I mean a neural model with increasingly complex features, as opposed to an 
algorithmic 3-D model (like video game graphics in reverse). Of course David 
rejects such ideas ( http://practicalai.org/Prize/Default.aspx ) even though 
the one proven working vision model uses it.


Which is? and does what?  (I'm starting to consider that vision and visual 
perception  -  or perhaps one should say common sense, since no sense in 
humans works independent of the others -  may well be considerably *more* 
complex than language. The evolutionary time required to develop our common 
sense perception and conception of the world was vastly greater than that 
required to develop language. And we are as a culture merely in our babbling 
infancy in beginning to understand how sensory images work and are processed).




[agi] Pretty worldchanging

2010-07-23 Thread Mike Tintner
this strikes me as socially worldchanging if it works - potentially leading to 
you-ain't-seen-nothing-yet changes in world education (& commerce) levels over 
the next decade:

http://www.physorg.com/news199083092.html

Any comments on its technical  massproduction viability ?




Re: [agi] How do we hear music

2010-07-23 Thread Mike Tintner

Michael: but those things do have patterns.. A mushroom (A) is like a cloud
mushroom (B).

if ( (input_source_A == An_image) AND ( input_source_B == An_image ))

One pattern is that they both came from an image source, and I just used
maths + logic to prove it


Michael,

This is a bit desperate isn't it?

They both come from image sources. So do a zillion other images, from 
Obama to dung - so they're all alike? Everything in the world is alike and 
metaphorical for everything else?


And their images must be alike because they both have an 'o' and a 'u' in 
their words, (not their images)-  unless you're a Chinese speaker.


Pace Lear, that way madness lies.

Why don't you apply your animation side to the problem - and analyse the 
images per images, and how to compare them as images? Some people in AGI 
although not AFAIK on this forum are actually addressing the problem. I'm 
sure *you* can too.




--
From: Michael Swan ms...@voyagergaming.com
Sent: Friday, July 23, 2010 8:28 AM
To: agi agi@v2.listbox.com
Subject: Re: [agi] How do we hear music







On Fri, 2010-07-23 at 03:45 +0100, Mike Tintner wrote:
Let's crystallise the problem   - all the unsolved problems of AGI - 
visual

object recognition, conceptualisation, analogy, metaphor, creativity,
language understanding and generation -  are problems where you're 
dealing

with freeform, irregular patchwork objects - objects which clearly do not
fit any *patterns* -   the raison d'etre of maths .

To focus that , these objects do not have common parts in more or less
precisely repeating structures - i.e. fit patterns.

A cartoon and a photo of the same face may have no parts or structure in
common.
Ditto different versions of the Google logo. Zero common parts or 
structure


Ditto cloud and mushroom - no common parts, or common structure.

Yet the mind amazingly can see likenesses between all these things.

Just about all the natural objects in the world , with some obvious
exceptions, do not fit common patterns - they do not have the same parts 
in

precisely the same places/structures.  They may  have common loose
organizations of parts - e.g. mouths, eyes, noses, lips  - but they are
not precisely patterned.

So you must explain how a mathematical approach, wh. is all about
recognizing patterns, can apply to objects wh. do not fit patterns.

You won't be able to - because if you could bring yourselves to look at 
the

real world or any depictions of it other than geometric, (metacognitively
speaking),you would see for yourself that these objects don't have 
precise

patterns.

It's obvious also that when the mind likens a cloud to a mushroom, it 
cannot

be using any math. techniques.


.. but those things do have patterns.. A mushroom (A) is like a cloud
mushroom (B).

if ( (input_source_A == An_image) AND ( input_source_B == An_image ))

One pattern is that they both came from an image source, and I just used
maths + logic to prove it.


But we have to understand how the mind does do that - because it's fairly
clearly  the same technique the mind also uses to conceptualise even more
vastly different forms such as those of  chair, tree,  dog, cat.

And that technique - like concepts themselves -  is at the heart of AGI.

And you can sit down and analyse the problem visually, physically and see
also pretty obviously that if the mind can liken such physically 
different
objects as cloud and mushroom, then it HAS to do that with something like 
a
fluid schema. There's broadly no other way but to fluidly squash the 
objects

to match each other (there could certainly be different techniques of
achieving that  - but the broad principles are fairly self evident). 
Cloud
and mushroom certainly don't match formulaically, mathematically. Neither 
do

those different versions of a tune. Or the different faces of Madonna.

But what we've got here is people who don't in the final analysis give a
damn about how to solve AGI - if it's a choice between doing maths and
failing, and having some kind of artistic solution to AGI that actually
works, most people here will happily fail forever. Mathematical AI has
indeed consistently failed at AGI. You have to realise, mathematicians 
have
a certain kind of madness. Artists don't go around saying God is an 
artist,

or everything is art. Only mathematicians have that compulsion to reduce
everything to maths, when the overwhelming majority of representations 
are

clearly not mathematical - or claim that the obviously irregular abstract
arts (think Pollock) are mathematical. You're in good company - Wolfram, 
a
brilliant fellow, thinks his patterns constitute a new kind of science, 
when
the vast majority of scientists can see they only constitute a new  kind 
of

pattern, and do not apply to the real world.

Look again - the brain is primarily a patchwork adapted to a patchwork,
very extensively unpatterned world -  incl. the internet itself - adapted
primarily

Re: [agi] How do we hear music

2010-07-23 Thread Mike Tintner
No, the answers are not there. That's complete rubbish; you won't be able to 
produce a point from your collective links that addresses any of the problems 
listed.

You seem blithely unaware that these are all unsolved problems.


From: L Detetive 
Sent: Friday, July 23, 2010 3:54 AM
To: agi 
Subject: Re: [agi] How do we hear music


So you must explain how a mathematical approach, wh. is all about recognizing 
patterns, can apply to objects wh. do not fit patterns.



No, we mustn't. You must read the links we've posted or stop asking the same 
things again and again. The answers are all there.

-- 
L



Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-22 Thread Mike Tintner
Predicting the old and predictable  [incl in shape and form] is narrow AI. 
Squaresville.
Adapting to the new and unpredictable [incl in shape and form] is AGI. Rock on.


From: David Jones 
Sent: Thursday, July 22, 2010 4:49 PM
To: agi 
Subject: [agi] Re: Huge Progress on the Core of AGI


An Update

I think the following gets to the heart of general AI and what it takes to 
achieve it. It also provides us with evidence as to why general AI is so 
difficult. With this new knowledge in mind, I think I will be much more capable 
now of solving the problems and making it work. 

I've come to the conclusion lately that the best hypothesis is better because 
it is more predictive and then simpler than other hypotheses (in that order 
more predictive... then simpler). But, I am amazed at how difficult it is to 
quantitatively define more predictive and simpler for specific problems. This 
is why I have sometimes doubted the truth of the statement.

In addition, the observations that the AI gets are not representative of all 
observations! This means that if your measure of predictiveness depends on 
the number of certain observations, it could make mistakes! So, the specific 
observations you are aware of may be unrepresentative of the predictiveness of 
a hypothesis relative to the truth. If you try to calculate which hypothesis is 
more predictive and you don't have the critical observations that would give 
you the right answer, you may get the wrong answer! This all depends of course 
on your method of calculation, which is quite elusive to define. 

Visual input from screenshots, for example, can be somewhat malicious. Things 
can move, appear, disappear or occlude each other suddenly. So, without 
sufficient knowledge it is hard to decide whether matches you find between such 
large changes are because it is the same object or a different object. This may 
indicate that bias and preprogrammed experience should be introduced to the AI 
before training. Either that or the training inputs should be carefully chosen 
to avoid malicious input and to make them nice for learning. 

This is the correspondence problem that is typical of computer vision and has 
never been properly solved. Such malicious input also makes it difficult to 
learn automatically because the AI doesn't have sufficient experience to know 
which changes or transformations are acceptable and which are not. It is 
immediately bombarded with malicious inputs.

I've also realized that if a hypothesis is more explanatory, it may be 
better. But quantitatively defining explanatory is also elusive and truly 
depends on the specific problems you are applying it to because it is a 
heuristic. It is not a true measure of correctness. It is not loyal to the 
truth. More explanatory is really a heuristic that helps us find hypothesis 
that are more predictive. The true measure of whether a hypothesis is better is 
simply the most accurate and predictive hypothesis. That is the ultimate and 
true measure of correctness.

Also, since we can't measure every possible prediction or every last prediction 
(and we certainly can't predict everything), our measure of predictiveness 
can't possibly be right all the time! We have no choice but to use a heuristic 
of some kind.

So, its clear to me that the right hypothesis is more predictive and then 
simpler. But, it is also clear that there will never be a single measure of 
this that can be applied to all problems. I hope to eventually find a nice 
model for how to apply it to different problems though. This may be the reason 
that so many people have tried and failed to develop general AI. Yes, there is 
a solution. But there is no silver bullet that can be applied to all problems. 
Some methods are better than others. But I think another major reason of the 
failures is that people think they can predict things without sufficient 
information. By approaching the problem this way, we compound the need for 
heuristics and the errors they produce because we simply don't have sufficient 
information to make a good decision with limited evidence. If approached 
correctly, the right solution would solve many more problems with the same 
efforts than a poor solution would. It would also eliminate some of the 
difficulties we currently face if sufficient data is available to learn from.
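
One way to make the more-predictive-then-simpler ordering above concrete is as a sort key over candidate hypotheses. This is only a sketch under invented definitions - prediction accuracy on held-out observations as the predictiveness score, a crude complexity number as the simplicity penalty - not a general solution to the problem described above.

def accuracy(predict, observations):
    # Fraction of held-out observations the hypothesis predicts correctly.
    hits = sum(1 for inp, expected in observations if predict(inp) == expected)
    return hits / len(observations)

def best_hypothesis(hypotheses, observations):
    # Highest accuracy wins; among ties, the lowest complexity (simplest) wins.
    return max(hypotheses, key=lambda h: (accuracy(h["predict"], observations),
                                          -h["complexity"]))

# toy example: predict the next value of a sequence from the current one
observations = [(1, 2), (2, 3), (3, 4), (5, 6)]
hypotheses = [
    {"name": "add one", "complexity": 1, "predict": lambda x: x + 1},
    {"name": "double",  "complexity": 1, "predict": lambda x: 2 * x},
    {"name": "lookup",  "complexity": 4, "predict": lambda x: {1: 2, 2: 3, 3: 4, 5: 6}.get(x)},
]
print(best_hypothesis(hypotheses, observations)["name"])   # "add one": as predictive as the lookup, but simpler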

In addition to all this theory about better hypotheses, you have to add on the 
need to solve problems in reasonable time. This also compounds the difficulty 
of the problem and the complexity of solutions.

I am always fascinated by the extraordinary difficulty and complexity of this 
problem. The more I learn about it, the more I appreciate it.

Dave

Re: [agi] How do we hear music

2010-07-22 Thread Mike Tintner
And maths will handle the examples given :

same tunes - different scales, different instruments
same face -  cartoon, photo
same logo  - different parts [buildings/ fruits/ human figures]

revealing them to be the same  -   how exactly?

Or you could take two arseholes -  same kind of object, but radically different 
configurations - maths will show them to belong to the same category, how?

IOW do you have the slightest evidence for what you're claiming? 

And to which part of  AGI, is maths demonstrably fundamental? Any idea? Or are 
you just praying?




From: L Detetive 
Sent: Thursday, July 22, 2010 11:49 PM
To: agi 
Subject: Re: [agi] How do we hear music


Schemas are what maths can't handle - and are fundamental to AGI.



Maths are what Mike can't handle - and are fundamental to AGI.

-- 
L



Re: [agi] Of definitions and tests of AGI

2010-07-21 Thread Mike Tintner
Matt,

How did you learn to play chess?   Or write programs? How do you teach people 
to write programs?

Compare and contrast - esp. the nature and number/ extent of instructions -  
with how you propose to force a computer to learn below.

Why is it that if you tell a child [real AGI] what to do, it will never learn?

Why can and does a human learner get to ask questions and a computer doesn't?

How come you [a real AGI] can get to choose your instructors and textbooks, 
and/or whether you choose to pay attention to them, and a computer can't?

Why do computers stop learning once they've done what they're told, and humans 
and animals never stop and keep going on to learn ever new activities?

What and how many are the fundamental differences between how real AGI's and 
computers learn?




Mike, I think we all agree that we should not have to tell an AGI the steps to 
solving problems. It should learn and figure it out, like the way that people 
figure it out.


The question is how to do that. We know that it is possible. For example, I 
could write a chess program that I could not win against. I could write the 
program in such a way that it learns to improve its game by playing against 
itself or other opponents. I could write it in such a way that initially does 
not know the rules for chess, but instead learns the rules by being given 
examples of legal and illegal moves.
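
A toy sketch of that second idea, learning legality from labelled examples: a move here is just (piece, file change, rank change), and board context - blocking pieces, captures, castling and so on - is deliberately ignored, so this is an illustrative assumption rather than a real chess learner.

def learn(examples):
    # examples: iterable of ((piece, dx, dy), is_legal) pairs.
    legal, illegal = set(), set()
    for move, is_legal in examples:
        (legal if is_legal else illegal).add(move)
    return legal, illegal

def predict(move, legal, illegal):
    if move in legal:
        return True
    if move in illegal:
        return False
    return None   # never seen anything like it: the learner cannot say yet

examples = [
    (("rook", 0, 3), True),
    (("rook", 2, 1), False),
    (("knight", 1, 2), True),
    (("knight", 2, 2), False),
]
legal, illegal = learn(examples)
print(predict(("knight", 1, 2), legal, illegal))   # True
print(predict(("bishop", 3, 3), legal, illegal))   # None - no examples yet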


What we have not yet been able to do is scale this type of learning and problem 
solving up to general, human level intelligence. I believe it is possible, but 
it will require lots of training data and lots of computing power. It is not 
something you could do on a PC, and it won't be cheap.

 
-- Matt Mahoney, matmaho...@yahoo.com 






From: Mike Tintner tint...@blueyonder.co.uk
To: agi agi@v2.listbox.com
Sent: Mon, July 19, 2010 9:07:53 PM
Subject: Re: [agi] Of definitions and tests of AGI


The issue isn't what a computer can do. The issue is how you structure the 
computer's or any agent's thinking about a problem. Programs/Turing machines 
are only one way of structuring thinking/problemsolving - by, among other 
things, giving the computer a method/process of solution. There is an 
alternative way of structuring a computer's thinking, which incl., among other 
things, not giving it a method/ process of solution, but making it rather than 
a human programmer do the real problemsolving.  More of that another time.


From: Matt Mahoney 
Sent: Tuesday, July 20, 2010 1:38 AM
To: agi 
Subject: Re: [agi] Of definitions and tests of AGI


Creativity is the good feeling you get when you discover a clever solution to a 
hard problem without knowing the process you used to discover it.


I think a computer could do that.

 
-- Matt Mahoney, matmaho...@yahoo.com 






From: Mike Tintner tint...@blueyonder.co.uk
To: agi agi@v2.listbox.com
Sent: Mon, July 19, 2010 2:08:28 PM
Subject: Re: [agi] Of definitions and tests of AGI


Yes that's what people do, but it's not what programmed computers do.

The useful formulation that emerges here is:

narrow AI (and in fact all rational) problems  have *a method of solution*  (to 
be equated with general method)   - and are programmable (a program is a 
method of solution)

AGI  (and in fact all creative) problems do NOT have *a method of solution* (in 
the general sense)  -  rather a one-off *way of solving the problem* has to be 
improvised each time.

AGI/creative problems do not in fact have a method of solution, period. There 
is no (general) method of solving either the toy box or the build-a-rock-wall 
problem - one essential feature which makes them AGI.

You can learn, as you indicate, from *parts* of any given AGI/creative 
solution, and apply the lessons to future problems - and indeed with practice, 
should improve at solving any given kind of AGI/creative problem. But you can 
never apply a *whole* solution/way to further problems.

P.S. One should add that in terms of computers, we are talking here of 
*complete, step-by-step* methods of solution.



From: rob levy 
Sent: Monday, July 19, 2010 5:09 PM
To: agi 
Subject: Re: [agi] Of definitions and tests of AGI


  
  And are you happy with:

  AGI is about devising *one-off* methods of problemsolving (that only apply to 
the individual problem, and cannot be re-used - at 

  least not in their totality)


Yes exactly, isn't that what people do?  Also, I think that being able to 
recognize where past solutions can be generalized and where past solutions can 
be varied and reused is a detail of how intelligence works that is likely to be 
universal.

 
  vs

  narrow AI is about applying pre-existing *general* methods of problemsolving  
(applicable to whole classes of problems)?




  From: rob levy 
  Sent: Monday, July 19, 2010 4:45 PM
  To: agi 
  Subject: Re: [agi

Re: [agi] Of definitions and tests of AGI

2010-07-21 Thread Mike Tintner
Infants *start* with general learning skills - they have to extensively 
discover for themselves how to do most things - control head, reach out, turn 
over, sit up, crawl, walk - and also have to work out perceptually what the 
objects they see are, and what they do... and what sounds are, and how they 
form words, and how those words relate to objects - and how language works

it is this capacity to keep discovering ways of doing things, that is a major 
motivation in their continually learning new activities - continually seeking 
novelty, and getting bored with too repetitive activities

obviously an AGI needs some help.. but at the mo. all projects get *full* help/ 
*complete* instructions - IOW are merely dressed up versions of narrow AI

no one AFAIK is dealing with the issue of how do you produce a true 
goalseeking agent who *can* discover things for itself?  - an agent, that 
like humans and animals, can *find* its way to its goals generally, as well as 
to learning new activities, on its own initiative  - rather than by following 
instructions.  (The full instruction method only works in artificial, 
controlled environments and can't possibly work in the real, uncontrollable 
world - where future conditions are highly unpredictable, even by the sagest 
instructor). [Ben BTW strikes me as merely gesturing at all this].

There really can't be any serious argument about this - humans and animals 
clearly learn all their activities with v. limited and largely general rather 
than step-by-step instructions.

You may want to argue there is an underlying general program that effectively 
specifies every step they must take (good luck) - but with respect to all their 
specialist/particular activities - think having a conversation, sex, writing a 
post, an essay, fantasying, shopping, browsing the net, reading a newspaper - 
etc etc. - you got and get v. little step-by-step instruction about these and 
all your other activities

So AGI's require a fundamentally and massively different paradigm of 
instruction to the programmed, comprehensive, step-by-step paradigm of narrow AI.

[The rock wall/toybox tests BTW are AGI activities, where it is *impossible* to 
give full instructions, or produce a formula, whatever you may want to do].


From: rob levy 
Sent: Wednesday, July 21, 2010 3:56 PM
To: agi 
Subject: Re: [agi] Of definitions and tests of AGI


A child AGI should be expected to need help learning how to solve many 
problems, and even be told what the steps are.  But at some point it needs to 
have developed general problem-solving skills.  But I feel like this is all 
stating the obvious.


On Tue, Jul 20, 2010 at 11:32 PM, Matt Mahoney matmaho...@yahoo.com wrote:

  Mike, I think we all agree that we should not have to tell an AGI the steps 
to solving problems. It should learn and figure it out, like the way that 
people figure it out.




  agi | Archives  | Modify Your Subscription   



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


[agi] The Collective Brain

2010-07-20 Thread Mike Tintner
http://www.ted.com/talks/matt_ridley_when_ideas_have_sex.html?utm_source=newsletter_weekly_2010-07-20utm_campaign=newsletter_weeklyutm_medium=email

Good lecture worth looking at about how trade - exchange of both goods and 
ideas - has fostered civilisation. Near the end introduces a v. important idea 
- the collective brain. In other words, our apparently individual 
intelligence is actually a collective intelligence. Nobody he points out 
actually knows how to make a computer mouse, although that may seem 
counterintuitive  - it's an immensely complex piece of equipment, simple as it 
may appear, that engages the collective, interdependent intelligence and 
productive efforts of vast numbers of people.

When you start thinking like that, you realise that there is v. little we know 
how to do, esp of an intellectual nature, individually, without the implicit 
and explicit collaboration of vast numbers of people and sectors of society. 

The fantasy of a superAGI machine that can grow individually without a vast 
society supporting it, is another one of the wild fantasies of AGI-ers and 
Singularitarians that violate truly basic laws of nature. Individual brains 
cannot flourish individually in the real world, only societies of brains (and 
bodies) can. 

(And of course computers can do absolutely nothing or in any way survive 
without their human masters - even if it may appear that way, if you don't look 
properly at their whole operation)


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] The Collective Brain

2010-07-20 Thread Mike Tintner
No, the collective brain is actually a somewhat distinctive idea.  It's 
saying a lot more than "the individual brain is embedded in society", much 
more like "interdependently functioning with society" - that you can't, say, 
do maths or art or any subject, or produce products or perform most of our 
activities except as part of a whole culture and society. Did you watch the 
talk? My Googlings show that this does seem to be a distinctive formulation 
by Ridley.


The evidence of the idea's newness is precisely the discussions of 
superAGI's and AGI futures by the groups here - show me how much of these 
discussions if anything at all raises the social dimension (i.e society of 
robots dimension)  -  considers what I am suggesting is the truth that you 
will not be able to have an independent AGI  system without a society of 
such systems.  If the collective brain idea were established culturally, 
AGI-ers would not talk as naively as they do.


Your last question is also an example of cocooned-AGI thinking? Which 
brains?  The only real AGI brains are those of living systems - animals and 
humans - living in the real world.  All machines to date are only extensions 
of humans not living systems - though I'm not sure how many AGI-ers truly 
realise this.  And all those systems can and do only function in societies.


Why? Well, when you or y'all ever get around to dealing with AGI/creative 
problems you will realise why.  The risk of failure and injury when dealing 
with the creative problems of the real world is so great that you need a 
social network a) to support you and b) by virtue of a collective, to 
increase the chances of at least some individuals successfully reaching 
difficult goals. Also, social division of labour massively amplifies the 
productive power of the individual.  Plus you get sexual benefits.

--
From: Jan Klauck jkla...@uni-osnabrueck.de
Sent: Tuesday, July 20, 2010 8:25 PM
To: agi agi@v2.listbox.com
Subject: Re: [agi] The Collective Brain


Mike Tintner wrote


Near the end introduces a v. important
idea - the collective brain. In other words, our apparently individual
intelligence is actually a collective intelligence.


That individuals are embedded into social networks of specialization
and exchange, care etc. is already known both in sociology and economics,
probably in philosophy and social psychology, too.


and productive efforts of vast numbers of people.


Already known to economists.


The fantasy of a superAGI machine that can grow individually without a
vast society supporting it, is another one of the wild fantasies of
AGI-ers and Singularitarians that violate truly basic laws of nature.


AGIers and Singularitarians say so?


Individual brains cannot flourish individually in the real world, only
societies of brains (and bodies) can.


What kind of brains? What kind of societies? And why?






---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] The Collective Brain

2010-07-20 Thread Mike Tintner
Ah the collective brain is saying something else as well -  wh. is another 
reason why I was hoping to get a discussion. It's exemplified in the example 
of the mouse.


Actually, Ridley is saying, the complete knowledge to build a mouse does not 
reside in any individual brain, or indeed by extension in any group of 
individual brains.   That complete knowledge only effectively comes into 
being when you get all those brains along with all their relevant 
technologies and libraries, working together.


Hence one talks of a collective brain, which is of course a (useful) 
fiction. There is no such brain and nor is there any complete locatable 
store of knowledge to perform the great majority of our activities. They are 
the result of societies of individuals working together.


And that - although no doubt I'm not expressing it well at all - is a rather 
magical idea and magical reality.


[Note this is something different from but loosely related to the crude, 
rather atavistic idea beloved by AGI-ers that the internet will somehow 
magically come alive and acquire an individual brain of its own] 





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] The Collective Brain

2010-07-20 Thread Mike Tintner
You partly illustrate my point - you talk of artificial brains as if they 
actually exist  - there aren't any; there are only glorified, extremely 
complex calculators/computer programs  - extensions/augmentations of 
individual faculties of human brains.  To obviously exaggerate, it's 
somewhat as if you were to talk of cameras as brains.


By implicitly pretending that artificial brains exist - in the form of 
computer programs -  you (and most AGI-ers), deflect attention away from all 
the unsolved dimensions of what is required for an independent 
brain-cum-living system, natural or artificial. One of those dimensions is a 
society of brains/systems. Another is a body. And there are more., none of 
wh. are incorporated in computer programs - they only represent one 
dimension of what is needed for a brain.


Yes you may know these things sometimes, as you say, but most of the time 
they're forgotten.


--
From: Jan Klauck jkla...@uni-osnabrueck.de
Sent: Wednesday, July 21, 2010 1:56 AM
To: agi agi@v2.listbox.com
Subject: Re: [agi] The Collective Brain


Mike Tintner wrote


No, the collective brain is actually a somewhat distinctive idea.


Just a way of looking at social support networks. Even social
philosophers centuries ago had similar ideas--they were lacking our
technical understanding and used analogies from biology (organicism)
instead.


more like interdependently functioning with society


As I said it's long known to economists and sociologists. There's even
an African proverb pointing at this: It takes a village to raise a
child.
Systems researchers have investigated those interdependencies for decades.


Did you watch the talk?


No flash here. I just answer on what you're writing.


The evidence of the idea's newness is precisely the discussions of
superAGI's and AGI futures by the groups here


We have talked about the social dimensions a few times. It's not the most
important topic around here, but that doesn't mean we're all ignorant.

In case you haven't noticed I'm not building an AGI, I'm interested
in the stuff around, e.g., tests, implementation strategies etc. by
the means of social simulation.


Your last question is also an example of cocooned-AGI thinking? Which
brains?  The only real AGI brains are those of living systems


A for Artificial. Living systems don't qualify for A.

My question was about certain attributes of brains (whether natural or
artificial). Societies are constrained by their members' capacities.
A higher individual capacity can lead to different dependencies and
new ways groups and societies are working.



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?;
Powered by Listbox: http://www.listbox.com 





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Of definitions and tests of AGI

2010-07-19 Thread Mike Tintner
No, Dave  I vaguely agree here that you have to start simple. To think of 
movies is massively confused - rather like saying: when we have created an 
entire new electric supply system for cars, we will have solved the problem of 
replacing gasoline - first you have to focus just on inventing a radically 
cheaper battery, before you consider the possibly hundreds to thousands of 
associated inventions and innovations involved in creating a major new supply 
system.

Here it would be much simpler to focus on understanding a single photographic 
scene - or real, directly-viewed scene - of objects, rather than the many 
thousands involved in a movie.

In terms of language, it would be simpler to focus on understanding just two 
consecutive sentences of a text or section of dialogue  - or even as I've 
already suggested, just the flexible combinations of two words - rather than 
the hundreds of lines and many thousands of words involved in a movie or play 
script.

And even this is probably all too evolved, for humans only came to use formal 
representations of the world v. recently in evolution.

The general point -  a massively important one - is that AGI-ers cannot 
continue to think of AGI in terms of massively complex and evolved intelligent 
systems, as you are doing. You have to start with the simplest possible systems 
and gradually evolve them.  Anything else is a defiance of all the laws of 
technology - and will see AGI continuing to go absolutely nowhere.

From: deepakjnath 
Sent: Monday, July 19, 2010 5:19 AM
To: agi 
Subject: Re: [agi] Of definitions and tests of AGI


Exactly my point. So if I show a demo of an AGI system that can see two movies 
and understand that the plots of the movies are the same even though they are 2 
entirely different movies, you would agree that we have created a true AGI.

Yes, there are always a lot of things we need to do before we reach that level. 
It's just good to know the destination so that we will know it when it arrives.





On Mon, Jul 19, 2010 at 2:18 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Jeez,  no AI program can understand *two* consecutive *sentences* in a text - 
can understand any text period - can understand language, period. And you want 
an AGI that can understand a *story*. You don't seem to understand that 
requires cognitively a fabulous, massively evolved, highly educated, hugely 
complex set of powers . 

  No AI can understand a photograph of a scene, period - a crowd scene, a house 
by the river. Programs are hard put to recognize any objects other than those 
in v. standard positions. And you want an AGI that can understand a *movie*. 

  You don't seem to realise that we can't take the smallest AGI  *step* yet - 
and you're fantasying about a superevolved AGI globetrotter.

  That's why Benjamin & I tried to focus on v. v. simple tests - they're 
still way too complex & they (or comparable tests) will have to be refined down 
considerably for anyone who is interested in practical vs sci-fi fantasy AGI.

  I recommend looking at Packbots and other military robots and hospital robots 
and the like, and asking how we can free them from their human masters and give 
them the very simplest of capacities to rove and handle the world independently 
- like handling and travelling on rocks. 

  Anyone dreaming of computers or robots that can follow Gone with The Wind 
or become a child (real) scientist in the foreseeable future pace Ben, has no 
realistic understanding of what is involved.

  From: deepakjnath 
  Sent: Sunday, July 18, 2010 9:04 PM
  To: agi 
  Subject: Re: [agi] Of definitions and tests of AGI


  Let me clarify. As you all know, there are some things computers are good at 
doing and some things that humans can do but a computer cannot.

  One of the tests that I was thinking about recently is to have two movies shown 
to the AGI. Both movies will have the same story, but one would be a totally 
different remake of the film, probably in a different language and setting. If 
the AGI is able to understand the sub-plot and say that the story line is 
similar in the two movies, then it could be a good test for AGI structure. 

  The ability of a system to understand its environment and underlying sub 
plots is an important requirement of AGI.

  Deepak


  On Mon, Jul 19, 2010 at 1:14 AM, Mike Tintner tint...@blueyonder.co.uk 
wrote:

Please explain/expound freely why you're not convinced - and indicate 
what you expect,  - and I'll reply - but it may not be till tomorrow.

Re your last point, there def. is no consensus on a general problem/test OR 
a def. of AGI.  

One flaw in your expectations seems to be a desire for a single test -  
almost by definition, there is no such thing as 

a) a single test - i.e. there should be at least a dual or serial test - 
having passed any given test, like the rock/toy test, the AGI must be presented 
with a new adjacent test for wh. it has had no preparation,  like say 
building with cushions or sand bags or packing with fruit.

Re: [agi] Is there any Contest or test to ensure that a System is AGI?

2010-07-19 Thread Mike Tintner
Ian: Suppose I want to know about the characteristics of concrete

You seem to think you can know about an object without ever having seen it or 
physically interacted with it?  As long as you have a set of words for the 
world, you need never have actually experienced or been in the world?

You can fight Israel and lay concrete merely by manipulating words?


From: Ian Parker 
Sent: Monday, July 19, 2010 10:39 AM
To: agi 
Subject: Re: [agi] Is there any Contest or test to ensure that a System is AGI?


What is the difference between laying concrete at 50C and fighting Israel? 
That is my question, my 2 pennyworth. Other people can elaborate. 


If that question can be answered you can have an automated advisor in B&Q. 
Suppose I want to know about the characteristics of concrete. Of course one 
thing you could do is go to B&Q and ask them what they would be looking for in 
an avatar.




  - Ian Parker


On 19 July 2010 02:43, Colin Hales c.ha...@pgrad.unimelb.edu.au wrote:

  Try this one ...
  http://www.bentham.org/open/toaij/openaccess2.htm
  If the test subject can be a scientist, it is an AGI.
  cheers
  colin


  Steve Richfield wrote: 
Deepak,

An intermediate step is the reverse Turing test (RTT), wherein people or 
teams of people attempt to emulate an AGI. I suspect that from such a 
competition would come a better idea as to what to expect from an AGI.

I have attempted in the past to drum up interest in a RTT, but so far, no 
one seems interested.

Do you want to play a game?!

Steve
 


On Sun, Jul 18, 2010 at 5:15 AM, deepakjnath deepakjn...@gmail.com wrote:

  I wanted to know if there is any benchmark test that can really convince 
the majority of today's AGIers that a system is a true AGI?

  Is there some real prize like the XPrize for AGI or AI in general?

  thanks,
  Deepak

agi | Archives  | Modify Your Subscription  



  agi | Archives  | Modify Your Subscription  

agi | Archives  | Modify Your Subscription  



  agi | Archives  | Modify Your Subscription   



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Of definitions and tests of AGI

2010-07-19 Thread Mike Tintner
Non-reply.

Name one industry/ section of technology that began with, say, the invention of 
the car,  skipping all the many thousands of stages from the invention of the 
wheel. What you and others are proposing is far, far more outrageous.

It won't require one but a million strokes of genius in one - a stroke of 
divinity. More fantasy AGI.


From: deepakjnath 
Sent: Monday, July 19, 2010 12:00 PM
To: agi 
Subject: Re: [agi] Of definitions and tests of AGI


‘The intuitive mind is a sacred gift and the rational  mind is a faithful 
servant. We have created a society that honours the servant and has forgotten 
the gift.’

‘The intellect has little to do on the road to discovery. There comes a leap in 
consciousness, call it intuition or what you will, and the solution comes to 
you and you don’t know how or why.’

— Albert Einstein

We are here talking like programmers who need to build a new system: just 
divide the problem, solve it one by one, arrange the pieces and voila. We are 
missing something fundamental here. That, I believe, has to come as a stroke of 
genius to someone.

thanks,
Deepak





On Mon, Jul 19, 2010 at 4:10 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  No, Dave  I vaguely agree here that you have to start simple. To think of 
movies is massively confused - rather like saying: when we have created an 
entire new electric supply system for cars, we will have solved the problem of 
replacing gasoline - first you have to focus just on inventing a radically 
cheaper battery, before you consider the possibly hundreds to thousands of 
associated inventions and innovations involved in creating a major new supply 
system.

  Here it would be much simpler to focus on understanding a single photographic 
scene - or real, directly-viewed scene - of objects, rather than the many 
thousands involved in a movie.

  In terms of language, it would be simpler to focus on understanding just two 
consecutive sentences of a text or section of dialogue  - or even as I've 
already suggested, just the flexible combinations of two words - rather than 
the hundreds of lines and many thousands of words involved in a movie or play 
script.

  And even this is probably all too evolved, for humans only came to use formal 
representations of the world v. recently in evolution.

  The general point -  a massively important one - is that AGI-ers cannot 
continue to think of AGI in terms of massively complex and evolved intelligent 
systems, as you are doing. You have to start with the simplest possible systems 
and gradually evolve them.  Anything else is a defiance of all the laws of 
technology - and will see AGI continuing to go absolutely nowhere.

  From: deepakjnath 
  Sent: Monday, July 19, 2010 5:19 AM
  To: agi 
  Subject: Re: [agi] Of definitions and tests of AGI


  Exactly my point. So if I show a demo of an AGI system that can see two 
movies and understand that the plots of the movies are the same even though they are 
2 entirely different movies, you would agree that we have created a true AGI.

  Yes, there are always a lot of things we need to do before we reach that level. 
It's just good to know the destination so that we will know it when it arrives.





  On Mon, Jul 19, 2010 at 2:18 AM, Mike Tintner tint...@blueyonder.co.uk 
wrote:

Jeez,  no AI program can understand *two* consecutive *sentences* in a text 
- can understand any text period - can understand language, period. And you 
want an AGI that can understand a *story*. You don't seem to understand that 
requires cognitively a fabulous, massively evolved, highly educated, hugely 
complex set of powers . 

No AI can understand a photograph of a scene, period - a crowd scene, a 
house by the river. Programs are hard put to recognize any objects other than 
those in v. standard positions. And you want an AGI that can understand a 
*movie*. 

You don't seem to realise that we can't take the smallest AGI  *step* yet - 
and you're fantasying about a superevolved AGI globetrotter.

That's why Benjamin & I tried to focus on v. v. simple tests - they're 
still way too complex & they (or comparable tests) will have to be refined down 
considerably for anyone who is interested in practical vs sci-fi fantasy AGI.

I recommend looking at Packbots and other military robots and hospital 
robots and the like, and asking how we can free them from their human masters 
and give them the very simplest of capacities to rove and handle the world 
independently - like handling and travelling on rocks. 

Anyone dreaming of computers or robots that can follow Gone with The Wind 
or become a child (real) scientist in the foreseeable future pace Ben, has no 
realistic understanding of what is involved.

From: deepakjnath 
Sent: Sunday, July 18, 2010 9:04 PM
To: agi 
Subject: Re: [agi] Of definitions and tests of AGI


Let me clarify. As you all know, there are some things computers are good at 
doing and some things that humans can do but a computer cannot.

Re: [agi] Of definitions and tests of AGI

2010-07-19 Thread Mike Tintner
Whaddya mean by solve the problem of how to solve problems? Develop a 
universal approach to solving any problem? Or find a method of solving a class 
of problems? Or what?


From: rob levy 
Sent: Monday, July 19, 2010 1:26 PM
To: agi 
Subject: Re: [agi] Of definitions and tests of AGI



  However, I see that there are no valid definitions of AGI that explain what 
AGI is generally , and why these tests are indeed AGI. Google - there are v. 
few defs. of AGI or Strong AI, period.




I like Fogel's idea that intelligence is the ability to solve the problem of 
how to solve problems in new and changing environments.  I don't think Fogel's 
method accomplishes this, but the goal he expresses seems to be the goal of AGI 
as I understand it. 


Rob
  agi | Archives  | Modify Your Subscription   



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Of definitions and tests of AGI

2010-07-19 Thread Mike Tintner
OK, so you're saying: AGI is solving problems where you have to *devise* a 
method of solution/of solving the problem (and is that devising in effect or 
actually/formally?)

vs

narrow AI wh. is where you *apply* a pre-existing method of solution/solving 
the problem  ?

And are you happy with:

AGI is about devising *one-off* methods of problemsolving (that only apply to 
the individual problem, and cannot be re-used - at least not in their totality)

vs

narrow AI is about applying pre-existing *general* methods of problemsolving  
(applicable to whole classes of problems)?




From: rob levy 
Sent: Monday, July 19, 2010 4:45 PM
To: agi 
Subject: Re: [agi] Of definitions and tests of AGI


Well, solving ANY problem is a little too strong.  This is AGI, not AGH 
(artificial godhead), though AGH could be an unintended consequence ;).  So I 
would rephrase solving any problem as being able to come up with reasonable 
approaches and strategies to any problem (just as humans are able to do).


On Mon, Jul 19, 2010 at 11:32 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Whaddya mean by solve the problem of how to solve problems? Develop a 
universal approach to solving any problem? Or find a method of solving a class 
of problems? Or what?


  From: rob levy 
  Sent: Monday, July 19, 2010 1:26 PM
  To: agi 
  Subject: Re: [agi] Of definitions and tests of AGI



However, I see that there are no valid definitions of AGI that explain what 
AGI is generally , and why these tests are indeed AGI. Google - there are v. 
few defs. of AGI or Strong AI, period.




  I like Fogel's idea that intelligence is the ability to solve the problem of 
how to solve problems in new and changing environments.  I don't think Fogel's 
method accomplishes this, but the goal he expresses seems to be the goal of AGI 
as I understand it. 


  Rob
agi | Archives  | Modify Your Subscription   

agi | Archives  | Modify Your Subscription  



  agi | Archives  | Modify Your Subscription   



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Of definitions and tests of AGI

2010-07-19 Thread Mike Tintner
Yes that's what people do, but it's not what programmed computers do.

The useful formulation that emerges here is:

narrow AI (and in fact all rational) problems  have *a method of solution*  (to 
be equated with general method)   - and are programmable (a program is a 
method of solution)

AGI  (and in fact all creative) problems do NOT have *a method of solution* (in 
the general sense) - rather a one-off *way of solving the problem* has to be 
improvised each time.

AGI/creative problems do not in fact have a method of solution, period. There 
is no (general) method of solving either the toy box or the build-a-rock-wall 
problem - one essential feature which makes them AGI.

You can learn, as you indicate, from *parts* of any given AGI/creative 
solution, and apply the lessons to future problems - and indeed with practice, 
should improve at solving any given kind of AGI/creative problem. But you can 
never apply a *whole* solution/way to further problems.

P.S. One should add that in terms of computers, we are talking here of 
*complete, step-by-step* methods of solution.



From: rob levy 
Sent: Monday, July 19, 2010 5:09 PM
To: agi 
Subject: Re: [agi] Of definitions and tests of AGI


  
  And are you happy with:

  AGI is about devising *one-off* methods of problemsolving (that only apply to 
the individual problem, and cannot be re-used - at 

  least not in their totality)


Yes exactly, isn't that what people do?  Also, I think that being able to 
recognize where past solutions can be generalized and where past solutions can 
be varied and reused is a detail of how intelligence works that is likely to be 
universal.

 
  vs

  narrow AI is about applying pre-existing *general* methods of problemsolving  
(applicable to whole classes of problems)?




  From: rob levy 
  Sent: Monday, July 19, 2010 4:45 PM
  To: agi 
  Subject: Re: [agi] Of definitions and tests of AGI


  Well, solving ANY problem is a little too strong.  This is AGI, not AGH 
(artificial godhead), though AGH could be an unintended consequence ;).  So I 
would rephrase solving any problem as being able to come up with reasonable 
approaches and strategies to any problem (just as humans are able to do).


  On Mon, Jul 19, 2010 at 11:32 AM, Mike Tintner tint...@blueyonder.co.uk 
wrote:

Whaddya mean by solve the problem of how to solve problems? Develop a 
universal approach to solving any problem? Or find a method of solving a class 
of problems? Or what?


From: rob levy 
Sent: Monday, July 19, 2010 1:26 PM
To: agi 
Subject: Re: [agi] Of definitions and tests of AGI



  However, I see that there are no valid definitions of AGI that explain 
what AGI is generally , and why these tests are indeed AGI. Google - there are 
v. few defs. of AGI or Strong AI, period.




I like Fogel's idea that intelligence is the ability to solve the problem 
of how to solve problems in new and changing environments.  I don't think 
Fogel's method accomplishes this, but the goal he expresses seems to be the 
goal of AGI as I understand it. 


Rob
  agi | Archives  | Modify Your Subscription   



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Of definitions and tests of AGI

2010-07-19 Thread Mike Tintner
The issue isn't what a computer can do. The issue is how you structure the 
computer's or any agent's thinking about a problem. Programs/Turing machines 
are only one way of structuring thinking/problemsolving - by, among other 
things, giving the computer a method/process of solution. There is an 
alternative way of structuring a computer's thinking, which incl., among other 
things, not giving it a method/ process of solution, but making it rather than 
a human programmer do the real problemsolving.  More of that another time.


From: Matt Mahoney 
Sent: Tuesday, July 20, 2010 1:38 AM
To: agi 
Subject: Re: [agi] Of definitions and tests of AGI


Creativity is the good feeling you get when you discover a clever solution to a 
hard problem without knowing the process you used to discover it.


I think a computer could do that.

 
-- Matt Mahoney, matmaho...@yahoo.com 






From: Mike Tintner tint...@blueyonder.co.uk
To: agi agi@v2.listbox.com
Sent: Mon, July 19, 2010 2:08:28 PM
Subject: Re: [agi] Of definitions and tests of AGI


Yes that's what people do, but it's not what programmed computers do.

The useful formulation that emerges here is:

narrow AI (and in fact all rational) problems  have *a method of solution*  (to 
be equated with general method)   - and are programmable (a program is a 
method of solution)

AGI  (and in fact all creative) problems do NOT have *a method of solution* (in 
the general sense) - rather a one-off *way of solving the problem* has to be 
improvised each time.

AGI/creative problems do not in fact have a method of solution, period. There 
is no (general) method of solving either the toy box or the build-a-rock-wall 
problem - one essential feature which makes them AGI.

You can learn, as you indicate, from *parts* of any given AGI/creative 
solution, and apply the lessons to future problems - and indeed with practice, 
should improve at solving any given kind of AGI/creative problem. But you can 
never apply a *whole* solution/way to further problems.

P.S. One should add that in terms of computers, we are talking here of 
*complete, step-by-step* methods of solution.



From: rob levy 
Sent: Monday, July 19, 2010 5:09 PM
To: agi 
Subject: Re: [agi] Of definitions and tests of AGI


  
  And are you happy with:

  AGI is about devising *one-off* methods of problemsolving (that only apply to 
the individual problem, and cannot be re-used - at 

  least not in their totality)


Yes exactly, isn't that what people do?  Also, I think that being able to 
recognize where past solutions can be generalized and where past solutions can 
be varied and reused is a detail of how intelligence works that is likely to be 
universal.

 
  vs

  narrow AI is about applying pre-existing *general* methods of problemsolving  
(applicable to whole classes of problems)?




  From: rob levy 
  Sent: Monday, July 19, 2010 4:45 PM
  To: agi 
  Subject: Re: [agi] Of definitions and tests of AGI


  Well, solving ANY problem is a little too strong.  This is AGI, not AGH 
(artificial godhead), though AGH could be an unintended consequence ;).  So I 
would rephrase solving any problem as being able to come up with reasonable 
approaches and strategies to any problem (just as humans are able to do).


  On Mon, Jul 19, 2010 at 11:32 AM, Mike Tintner tint...@blueyonder.co.uk 
wrote:

Whaddya mean by solve the problem of how to solve problems? Develop a 
universal approach to solving any problem? Or find a method of solving a class 
of problems? Or what?


From: rob levy 
Sent: Monday, July 19, 2010 1:26 PM
To: agi 
Subject: Re: [agi] Of definitions and tests of AGI



  However, I see that there are no valid definitions of AGI that explain 
what AGI is generally , and why these tests are indeed AGI. Google - there are 
v. few defs. of AGI or Strong AI, period.




I like Fogel's idea that intelligence is the ability to solve the problem 
of how to solve problems in new and changing environments.  I don't think 
Fogel's method accomplishes this, but the goal he expresses seems to be the 
goal of AGI as I understand it. 


Rob
  agi | Archives  | Modify Your Subscription   



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


[agi] Of definitions and tests of AGI

2010-07-18 Thread Mike Tintner
I realised that what is needed is a *joint* definition *and*  range of tests of 
AGI.

Benjamin Johnston has submitted one valid test - the toy box problem. (See 
archives).

I have submitted another still simpler valid test - build a rock wall from 
rocks given, (or fill an earth hole with rocks).

However, I see that there are no valid definitions of AGI that explain what AGI 
is generally , and why these tests are indeed AGI. Google - there are v. few 
defs. of AGI or Strong AI, period.

The most common: AGI is human-level intelligence -  is an embarrassing 
non-starter - what distinguishes human intelligence? No explanation offered.

The other two are also inadequate if not as bad: Ben's solves a variety of 
complex problems in a variety of complex environments. Nope, so does  a 
multitasking narrow AI. Complexity does not distinguish AGI. Ditto Pei's - 
something to do with insufficient knowledge and resources...
Insufficient is open to narrow AI interpretations and reducible to 
mathematically calculable probabilities.or uncertainties. That doesn't 
distinguish AGI from narrow AI.

The one thing we should all be able to agree on (but who can be sure?) is that:

** an AGI is a general intelligence system, capable of independent learning**

i.e. capable of independently learning new activities/skills with minimal 
guidance or even, ideally, with zero guidance (as humans and animals are) - and 
thus acquiring a general, all-round range of intelligence..  

This is an essential AGI goal -  the capacity to keep entering and mastering 
new domains of both mental and physical skills WITHOUT being specially 
programmed each time - that crucially distinguishes it from narrow AI's, which 
have to be individually programmed anew for each new task. Ben's AGI dog 
exemplified this in a v simple way -  the dog is supposed to be able to learn 
to fetch a ball, with only minimal instructions, as real dogs do - they can 
learn a whole variety of new skills with minimal instruction.  But I am 
confident Ben's dog can't actually do this.

However, the independent learning def. while focussing on the distinctive AGI 
goal,  still is not detailed enough by itself.

It requires further identification of the **cognitive operations** which 
distinguish AGI,  and wh. are exemplified by the above tests.

[I'll stop there for interruptions/comments  continue another time].

 P.S. Deepakjnath,

It is vital to realise that the overwhelming majority of AGI-ers do not *want* 
an AGI test -  Ben has never gone near one, and is merely typical in this 
respect. I'd put almost all AGI-ers here in the same league as the US banks, 
who only want mark-to-fantasy rather than mark-to-market tests of their assets.


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Of definitions and tests of AGI

2010-07-18 Thread Mike Tintner
Please explain/expound freely why you're not convinced - and indicate what 
you expect,  - and I'll reply - but it may not be till tomorrow.

Re your last point, there def. is no consensus on a general problem/test OR a 
def. of AGI.  

One flaw in your expectations seems to be a desire for a single test -  almost 
by definition, there is no such thing as 

a) a single test - i.e. there should be at least a dual or serial test - having 
passed any given test, like the rock/toy test, the AGI must be presented with a 
new adjacent test for wh. it has had no preparation,  like say building with 
cushions or sand bags or packing with fruit. (and neither rock/toy test state 
that clearly)

b) one kind of test - this is an AGI, so it should be clear that if it can pass 
one kind of test, it has the basic potential to go on to many different kinds, 
and it doesn't really matter which kind of test you start with - that is partly 
the function of having a good.definition of AGI .


From: deepakjnath 
Sent: Sunday, July 18, 2010 8:03 PM
To: agi 
Subject: Re: [agi] Of definitions and tests of AGI


So if I have a system that is close to AGI, I have no way of really knowing it, 
right? 

Even if I believe that my system is a true AGI, there is no way of convincing 
the others irrefutably that this system is indeed an AGI and not just an advanced AI 
system.

I have read the toy box problem and rock wall problem, but not many people will 
still be convinced, I am sure.

I wanted to know if there is any consensus on a general problem which can 
be solved and only solved by a true AGI. Without such a test bench, how will we 
know if we are moving closer to or away from our quest? There is no map.

Deepak




On Sun, Jul 18, 2010 at 11:50 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  I realised that what is needed is a *joint* definition *and*  range of tests 
of AGI.

  Benjamin Johnston has submitted one valid test - the toy box problem. (See 
archives).

  I have submitted another still simpler valid test - build a rock wall from 
rocks given, (or fill an earth hole with rocks).

  However, I see that there are no valid definitions of AGI that explain what 
AGI is generally , and why these tests are indeed AGI. Google - there are v. 
few defs. of AGI or Strong AI, period.

  The most common: AGI is human-level intelligence -  is an embarrassing 
non-starter - what distinguishes human intelligence? No explanation offered.

  The other two are also inadequate if not as bad: Ben's solves a variety of 
complex problems in a variety of complex environments. Nope, so does  a 
multitasking narrow AI. Complexity does not distinguish AGI. Ditto Pei's - 
something to do with insufficient knowledge and resources...
Insufficient is open to narrow AI interpretations and reducible to 
mathematically calculable probabilities.or uncertainties. That doesn't 
distinguish AGI from narrow AI.

  The one thing we should all be able to agree on (but who can be sure?) is 
that:

  ** an AGI is a general intelligence system, capable of independent learning**

  i.e. capable of independently learning new activities/skills with minimal 
guidance or even, ideally, with zero guidance (as humans and animals are) - and 
thus acquiring a general, all-round range of intelligence..  

  This is an essential AGI goal -  the capacity to keep entering and mastering 
new domains of both mental and physical skills WITHOUT being specially 
programmed each time - that crucially distinguishes it from narrow AI's, which 
have to be individually programmed anew for each new task. Ben's AGI dog 
exemplified this in a v simple way -  the dog is supposed to be able to learn 
to fetch a ball, with only minimal instructions, as real dogs do - they can 
learn a whole variety of new skills with minimal instruction.  But I am 
confident Ben's dog can't actually do this.

  However, the independent learning def. while focussing on the distinctive AGI 
goal,  still is not detailed enough by itself.

  It requires further identification of the **cognitive operations** which 
distinguish AGI,  and wh. are exemplified by the above tests.

  [I'll stop there for interruptions/comments  continue another time].

   P.S. Deepakjnath,

  It is vital to realise that the overwhelming majority of AGI-ers do not *want* 
an AGI test - Ben has never gone near one, and is merely typical in this 
respect. I'd put almost all AGI-ers here in the same league as the US banks, 
who only want mark-to-fantasy rather than mark-to-market tests of their assets.
agi | Archives  | Modify Your Subscription  




-- 
cheers,
Deepak

  agi | Archives  | Modify Your Subscription   



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com

Re: [agi] Of definitions and tests of AGI

2010-07-18 Thread Mike Tintner
Jeez,  no AI program can understand *two* consecutive *sentences* in a text - 
can understand any text period - can understand language, period. And you want 
an AGI that can understand a *story*. You don't seem to understand that 
requires cognitively a fabulous, massively evolved, highly educated, hugely 
complex set of powers . 

No AI can understand a photograph of a scene, period - a crowd scene, a house 
by the river. Programs are hard put to recognize any objects other than those 
in v. standard positions. And you want an AGI that can understand a *movie*. 

You don't seem to realise that we can't take the smallest AGI  *step* yet - and 
you're fantasying about a superevolved AGI globetrotter.

That's why Benjamin & I tried to focus on v. v. simple tests - they're still 
way too complex & they (or comparable tests) will have to be refined down 
considerably for anyone who is interested in practical vs sci-fi fantasy AGI.

I recommend looking at Packbots and other military robots and hospital robots 
and the like, and asking how we can free them from their human masters and give 
them the very simplest of capacities to rove and handle the world independently 
- like handling and travelling on rocks. 

Anyone dreaming of computers or robots that can follow Gone with The Wind or 
become a child (real) scientist in the foreseeable future pace Ben, has no 
realistic understanding of what is involved.

From: deepakjnath 
Sent: Sunday, July 18, 2010 9:04 PM
To: agi 
Subject: Re: [agi] Of definitions and tests of AGI


Let me clarify. As you all know, there are some things computers are good at 
doing and some things that humans can do but a computer cannot.

One of the tests that I was thinking about recently is to have two movies shown to 
the AGI. Both movies will have the same story, but one would be a totally 
different remake of the film, probably in a different language and setting. If 
the AGI is able to understand the sub-plot and say that the story line is 
similar in the two movies, then it could be a good test for AGI structure. 

The ability of a system to understand its environment and underlying sub plots 
is an important requirement of AGI.

Deepak


On Mon, Jul 19, 2010 at 1:14 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Please explain/expound freely why you're not convinced - and indicate what 
you expect,  - and I'll reply - but it may not be till tomorrow.

  Re your last point, there def. is no consensus on a general problem/test OR a 
def. of AGI.  

  One flaw in your expectations seems to be a desire for a single test -  
almost by definition, there is no such thing as 

  a) a single test - i.e. there should be at least a dual or serial test - 
having passed any given test, like the rock/toy test, the AGI must be presented 
with a new adjacent test for wh. it has had no preparation,  like say 
building with cushions or sand bags or packing with fruit. (and neither 
rock/toy test state that clearly)

  b) one kind of test - this is an AGI, so it should be clear that if it can 
pass one kind of test, it has the basic potential to go on to many different 
kinds, and it doesn't really matter which kind of test you start with - that is 
partly the function of having a good.definition of AGI .


  From: deepakjnath 
  Sent: Sunday, July 18, 2010 8:03 PM
  To: agi 
  Subject: Re: [agi] Of definitions and tests of AGI


  So if I have a system that is close to AGI, I have no way of really knowing 
it, right? 

  Even if I believe that my system is a true AGI, there is no way of convincing 
the others irrefutably that this system is indeed an AGI and not just an advanced AI 
system.

  I have read the toy box problem and rock wall problem, but not many people 
will still be convinced, I am sure.

  I wanted to know if there is any consensus on a general problem which 
can be solved and only solved by a true AGI. Without such a test bench, how will 
we know if we are moving closer to or away from our quest? There is no map.

  Deepak




  On Sun, Jul 18, 2010 at 11:50 PM, Mike Tintner tint...@blueyonder.co.uk 
wrote:

I realised that what is needed is a *joint* definition *and*  range of 
tests of AGI.

Benjamin Johnston has submitted one valid test - the toy box problem. (See 
archives).

I have submitted another still simpler valid test - build a rock wall from 
rocks given, (or fill an earth hole with rocks).

However, I see that there are no valid definitions of AGI that explain what 
AGI is generally , and why these tests are indeed AGI. Google - there are v. 
few defs. of AGI or Strong AI, period.

The most common: AGI is human-level intelligence -  is an embarrassing 
non-starter - what distinguishes human intelligence? No explanation offered.

The other two are also inadequate if not as bad: Ben's solves a variety of 
complex problems in a variety of complex environments. Nope, so does  a 
multitasking narrow AI. Complexity does not distinguish AGI. Ditto Pei's

Re: [agi] NL parsing

2010-07-16 Thread Mike Tintner
Either that or the speaker is identifying 8 buffaloes (and no bulls) passing 
by.


--
From: Jiri Jelinek jjelinek...@gmail.com
Sent: Friday, July 16, 2010 3:12 PM
To: agi agi@v2.listbox.com
Subject: [agi] NL parsing


Believe it or not, this sentence is grammatically correct and has
meaning: 'Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo
buffalo.'

source: http://www.mentalfloss.com/blogs/archives/13120

:-)


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?;
Powered by Listbox: http://www.listbox.com 





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] NL parsing

2010-07-16 Thread Mike Tintner
Or if you want to be pedantic about caps, the speaker is identifying 3 
buffaloes from Buffalo, and 2 from elsewhere.


Anyone got any other readings?
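
For what it's worth, the standard reading can be checked mechanically with a toy grammar. Below is a minimal sketch in Python, assuming the NLTK library is available; the grammar, the category names and the gloss are illustrative assumptions, not anything from the original post.

    # Toy CFG for the canonical reading: "Buffalo bison [whom] Buffalo bison
    # bully [themselves] bully Buffalo bison."
    # A = the city name used as a modifier, N = the animal, V = the verb
    # "to buffalo", i.e. to bully.
    import nltk

    grammar = nltk.CFG.fromstring("""
    S  -> NP VP
    NP -> A N | A N RC
    RC -> NP V
    VP -> V NP
    A  -> 'Buffalo'
    N  -> 'buffalo'
    V  -> 'buffalo'
    """)

    words = "Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo".split()
    for tree in nltk.ChartParser(grammar).parse(words):
        print(tree)   # prints the bracketed structure this grammar licenses

A different grammar (or the alternative readings suggested above) would of course license different trees; the point is only that the eight-word string does parse.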

--
From: Jiri Jelinek jjelinek...@gmail.com
Sent: Friday, July 16, 2010 3:12 PM
To: agi agi@v2.listbox.com
Subject: [agi] NL parsing


Believe it or not, this sentence is grammatically correct and has
meaning: 'Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo
buffalo.'

source: http://www.mentalfloss.com/blogs/archives/13120

:-)


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?;
Powered by Listbox: http://www.listbox.com 





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] NL parsing

2010-07-16 Thread Mike Tintner
Dave: That's why our additional knowledge from the blog is the only way we can 
reasonably disambiguate the sentence.

Contradicted by my reading. The particular blog reading was esoteric, sure. But 
you do have to be capable of creative readings as humans are - that's the 
fundamental challenge of language.

But of course no machine understands language yet, period  - and isn't likely 
to for a v. v. long time.


From: David Jones 
Sent: Friday, July 16, 2010 4:35 PM
To: agi 
Subject: Re: [agi] NL parsing


This is actually a great example of why we should not try to write AGI as 
something able to solve any possible problem generally. We, strong AI agents, 
are not able to understand this sentence without quite a lot more information. 
Likewise, we shouldn't expect a general AI to try many possibilities until it 
is able to solve such a maliciously constructed sentence. There isn't an 
explanatory reason to believe most of the possible hypotheses. We need more 
information to come up with possible hypotheses, which we can then test out on 
the sentence and confirm. That's why our additional knowledge from the blog is 
the only way we can reasonably disambiguate the sentence. Normal natural 
language disambiguation is similar in that way. 

Dave


On Fri, Jul 16, 2010 at 11:29 AM, Matt Mahoney matmaho...@yahoo.com wrote:

  That that that Buffalo buffalo that Buffalo buffalo buffalo buffalo that 
Buffalo
  buffalo that Buffalo buffalo buffalo.

   -- Matt Mahoney, matmaho...@yahoo.com




  - Original Message 
  From: Mike Tintner tint...@blueyonder.co.uk
  To: agi agi@v2.listbox.com

  Sent: Fri, July 16, 2010 11:05:51 AM
  Subject: Re: [agi] NL parsing

  Or if you want to be pedantic about caps, the speaker is identifying 3
  buffaloes from Buffalo,  2 from elsewhere.

  Anyone got any other readings?

  --
  From: Jiri Jelinek jjelinek...@gmail.com
  Sent: Friday, July 16, 2010 3:12 PM
  To: agi agi@v2.listbox.com
  Subject: [agi] NL parsing

   Believe it or not, this sentence is grammatically correct and has
   meaning: 'Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo
   buffalo.'
  
   source: http://www.mentalfloss.com/blogs/archives/13120
  
   :-)
  
  
   ---
   agi
   Archives: https://www.listbox.com/member/archive/303/=now
   RSS Feed: https://www.listbox.com/member/archive/rss/303/
   Modify Your Subscription:
   https://www.listbox.com/member/?;
   Powered by Listbox: http://www.listbox.com




  ---
  agi
  Archives: https://www.listbox.com/member/archive/303/=now
  RSS Feed: https://www.listbox.com/member/archive/rss/303/
  Modify Your Subscription:
  https://www.listbox.com/member/?;
  Powered by Listbox: http://www.listbox.com



  ---
  agi
  Archives: https://www.listbox.com/member/archive/303/=now
  RSS Feed: https://www.listbox.com/member/archive/rss/303/
  Modify Your Subscription: https://www.listbox.com/member/?;
  Powered by Listbox: http://www.listbox.com



  agi | Archives  | Modify Your Subscription   



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-15 Thread Mike Tintner
And yet you dream dreams wh. are broad-ranging in subject matter, unlike all 
programs wh. are extremely narrow-ranging.


--
From: Michael Swan ms...@voyagergaming.com
Sent: Thursday, July 15, 2010 5:16 AM
To: agi agi@v2.listbox.com
Subject: Re: [agi] What is the smallest set of operations that can 
potentially  define everything and how do you combine them ?




I watched a brain experiment last night that proved that connections
between major parts of the brain stop when you are asleep.

They put electricity at different brain points, and it went everywhere
when the person was awake, and dissipated when they were asleep.


On Thu, 2010-07-15 at 02:13 +0100, Mike Tintner wrote:

A demonstration of global connectedness is - associate with an "O".

I get:
number, sun, dish, disk, ball, letter, mouth, two fingers, oh, circle,
wheel, wire coil, outline, station on metro, hole, Kenneth Noland 
painting,

ring, coin, roundabout

connecting among other things - language, numbers, geometry, food, 
cartoons,

paintings, speech, sports, science, technology, art, transport,
transportation system, money.

Note though the other crucial weakness of the brain wh. impairs global
connections - fatigue. To maintain any piece of information in 
consciousness

for long is a strain,  (unless it's sexual?).

But the above demonstrates IMO why the brain is and has to be an image
processor.




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?;
Powered by Listbox: http://www.listbox.com




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?;
Powered by Listbox: http://www.listbox.com 







Re: [agi] How do we Score Hypotheses?

2010-07-15 Thread Mike Tintner
Sounds like a good explanation of why a body is essential for vision - not just 
for POV and orientation [up/left/right/down/ towards/ away] but for comparison 
and yardstick - you do know when your body or parts thereof are moving -and  
it's not merely touch but the comparison of other objects still and moving with 
your own moving hands and body that is important.

The more you go into it, the crazier the prospect of vision without eyes in a 
body becomes.


From: David Jones 
Sent: Thursday, July 15, 2010 5:54 PM
To: agi 
Subject: Re: [agi] How do we Score Hypotheses?


Jim,

even that isn't an obvious event. You don't know what is background and what is 
not. You don't even know if there is an object or not. You don't know if 
anything moved or not. You can make some observations using predefined methods 
and then see if you find matches... then hypothesize about the matches...

 It all has to be learned and figured out through reasoning. 

That's why I asked what you meant by definitive events. Nothing is really 
definitive. It is all hypothesized in a non-monotonic manner.

Dave


On Thu, Jul 15, 2010 at 12:01 PM, Jim Bromer jimbro...@gmail.com wrote:

  On Wed, Jul 14, 2010 at 10:22 AM, David Jones davidher...@gmail.com wrote:

What do you mean by definitive events? 



  I was just trying to find a way to designate obsverations that would be 
reliably obvious to a computer program.  This has something to do with the 
assumptions that you are using.  For example if some object appeared against a 
stable background and it was a different color than the background, it would be 
a definitive observation event because your algorithm could detect it with some 
certainty and use it in the definition of other more complicated events (like 
occlusion.)  Notice that this example would not necessarily be so obvious (a 
definitive event) using a camera, because there are a number of ways that an 
illusion (of some kind) could end up as a data event.
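
For concreteness, here is a minimal sketch of this kind of definitive detection - a stable background model, a per-pixel difference and a threshold. The frame size, threshold value and function names are illustrative assumptions, not anyone's actual method:

#include <stdio.h>

#define W 8
#define H 8
#define THRESHOLD 30   /* hypothetical difference threshold */

/* Count pixels that differ markedly from a known, stable background.
   A large count is treated as a definitive observation event:
   something appeared that is not the background. */
static int count_changed_pixels(const unsigned char background[H][W],
                                const unsigned char frame[H][W])
{
    int changed = 0;
    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++) {
            int diff = frame[y][x] - background[y][x];
            if (diff < 0) diff = -diff;
            if (diff > THRESHOLD) changed++;
        }
    }
    return changed;
}

int main(void)
{
    unsigned char background[H][W] = {{0}};  /* uniform dark background */
    unsigned char frame[H][W] = {{0}};

    /* a bright 2x2 object appears against the stable background */
    frame[3][3] = frame[3][4] = frame[4][3] = frame[4][4] = 200;

    int changed = count_changed_pixels(background, frame);
    if (changed > 0)
        printf("definitive event: %d pixels differ from background\n", changed);
    else
        printf("no event detected\n");
    return 0;
}

With a real camera, of course, the same pixel differences can come from noise or illusions, so even this event is only a hypothesis.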

  I will try to reply to the rest of your message sometime later.
  Jim Bromer








Re: [agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-14 Thread Mike Tintner
Michael: The brain's slow and unreliable methods I think are the price paid
for generality and innately unreliable hardware

Yes to one - nice to see an AGI-er finally starting to join up the dots, 
instead of simply dismissing the brain's massive difficulties in maintaining 
a train of thought.


No to two -innately unreliable hardware is the price of innately 
*adaptable* hardware - that can radically grow and rewire (wh. is the other 
advantage the brain has over computers).  Any thoughts about that and what 
in more detail are the advantages of an organic computer?


In addition, the unreliable hardware is also a price of global hardware
- that has the basic capacity to connect more or less any bit of 
information in any part of the brain with any bit of information in any 
other part of the brain - as distinct from the local hardware of computers 
wh. have to go through limited local channels to limited local stores of 
information to make v. limited local kinds of connections. Well, that's my 
tech-ignorant take on it - but perhaps you can expand on the idea.  I would 
imagine v. broadly the brain is globally connected vs the computer wh. is 
locally connected. 







Re: [agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-14 Thread Mike Tintner

A demonstration of global connectedness is - associate with an 'O'

I get:
number, sun, dish, disk, ball, letter, mouth, two fingers, oh, circle, 
wheel, wire coil, outline, station on metro, hole, Kenneth Noland painting, 
ring, coin, roundabout


connecting among other things - language, numbers, geometry, food, cartoons, 
paintings, speech, sports, science, technology, art, transport, 
transportation system, money.


Note though the other crucial weakness of the brain wh. impairs global 
connections - fatigue. To maintain any piece of information in consciousness 
for long is a strain,  (unless it's sexual?).


But the above demonstrates IMO why the brain is and has to be an image 
processor. 







Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-13 Thread Mike Tintner
You seem to be reaching for something important here, but it isn't at all clear 
what you mean.

I would say that any creative activity (incl. pure problemsolving) begins from 
a *conceptual paradigm* - a v. rough outline - of the form of that activity and 
the form of its end-product or -procedure.  As distinct from rational 
activities where a formula (and algorithm) define the form of the product (and 
activity) with complete precision.

You have a conceptual paradigm of writing a post or shopping for groceries 
or having a conversation. You couldn't possibly have a formula or algorithm 
completely defining every step - every word and sentence, every food, every 
topic  - you may have or want to take.

And programs as we know them, don't and can't handle *concepts* -  despite the 
misnomers of conceptual graphs/spaces etc wh are not concepts at all.  They 
can't for example handle writing or shopping because these can only be 
expressed as flexible outlines/schemas as per ideograms.

What do you mean?





From: Jim Bromer 
Sent: Tuesday, July 13, 2010 2:50 PM
To: agi 
Subject: Re: [agi] Re: Huge Progress on the Core of AGI


On Tue, Jul 13, 2010 at 2:29 AM, Abram Demski abramdem...@gmail.com wrote:
[The] complaint that probability theory doesn't try to figure out why it was 
wrong in the 30% (or whatever) it misses is a common objection. Probability 
theory glosses over important detail, it encourages lazy thinking, etc. 
However, this all depends on the space of hypotheses being examined. 
Statistical methods will be prone to this objection because they are 
essentially narrow-AI methods: they don't *try* to search in the space of all 
hypotheses a human might consider. An AGI setup can and should have such a 
large hypothesis space.
---
That is the thing.
We cannot search all possible hypotheses because we could not even write all 
possible hypotheses down.  This is why hypotheses have to be formed creatively 
in response to an analysis of a situation.  In my arrogant opinion, this is 
best done through a method that creatively uses discrete representations.  Of 
course it can use statistical or probabilistic data in making those creative 
hypotheses when there is good data to be used.  But the best way to do this is 
through categorization based creativity.  But this is an imaginative method, 
one which creates imaginative explanations (or other co-relations) for observed 
or conjectured events.  Those imaginative hypotheses then have to be compared 
to a situation through some trial and error methods.  Then the tentative 
conjectures that seem to withstand initial tests have to be further integrated 
into other hypotheses, conjectures and explanations that are related to the 
subject of the hypotheses.   This process of conceptual integration, a process 
which has to rely on both creative methods and rational methods, is a 
fundamental part of the process which does not seem to be clearly understood.  
Conceptual Integration cannot be accomplished by reducing a concept to True or 
False or to some number from 0 to 1 and then combined with other concepts that 
were also so reduced.  Ideas take on roles when combined with other ideas.  
Basically, a new idea has to be fit into a complex of other ideas that are 
strongly related to it.

Jim Bromer




 
On Tue, Jul 13, 2010 at 2:29 AM, Abram Demski abramdem...@gmail.com wrote:

  PS-- I am not denying that statistics is applied probability theory. :) When 
I say they are different, what I mean is that saying I'm going to use 
probability theory and I'm going to use statistics tend to indicate very 
different approaches. Probability is a set of axioms, whereas statistics is a 
set of methods. The probability theory camp tends to be bayesian, whereas the 
stats camp tends to be frequentist.

  Your complaint that probability theory doesn't try to figure out why it was 
wrong in the 30% (or whatever) it misses is a common objection. Probability 
theory glosses over important detail, it encourages lazy thinking, etc. 
However, this all depends on the space of hypotheses being examined. 
Statistical methods will be prone to this objection because they are 
essentially narrow-AI methods: they don't *try* to search in the space of all 
hypotheses a human might consider. An AGI setup can and should have such a 
large hypothesis space. Note that AIXI is typically formulated as using a space 
of crisp (non-probabilistic) hypotheses, though probability theory is used to 
reason about them. This means no theory it considers will gloss over detail in 
this way: every theory completely explains the data. (I use AIXI as a 
convenient example, not because I agree with it.)

  --Abram


  On Mon, Jul 12, 2010 at 2:42 PM, Abram Demski abramdem...@gmail.com wrote:

David,

I tend to think of probability theory and statistics as different things. 
I'd agree that statistics is not enough for AGI, but in contrast I think 
probability 

[agi] Concepts/ Conceptual paradigms

2010-07-13 Thread Mike Tintner
Just a quick note on what is actually a massive subject & the heart of AGI. I 
imagine - but do comment - that most of you think when I say that concepts are 
rough flexible outlines/schemas, wtf is this weird guy on about ? what's that 
got to do with serious AI? nonsense

Well, here are some classic examples of concepts as outlines/ideograms:

http://www.google.co.uk/imgres?imgurl=http://www.virtual-egypt.com/newhtml/hieroglyphics/sample/ideogram.gifimgrefurl=http://www.virtual-egypt.com/newhtml/hieroglyphics/sample/ideogram.htmusg=__laFC58e8cfDyOdIWl1Sa1DGLyJI=h=236w=402sz=4hl=enstart=2sig2=eNPq7_9APc8O2qQc5XeC_witbs=1tbnid=KSfetIcmLjRfgM:tbnh=73tbnw=124prev=/images%3Fq%3Dideogram*%26hl%3Den%26safe%3Doff%26sa%3DG%26gbv%3D2%26tbs%3Disch:1ei=RHg8TPyaGMKaOKHV0O8O

Note that it doesn't really matter what outline you use at any given moment for 
a concept (as long as it's relevant) because the brain's outlines are 
flexible/fluid and evolvable.

This may all seem v. weird to you. But where do you see such ideograms all the 
time, day in day out?

Logic.







Re: [agi] Concepts/ Conceptual paradigms

2010-07-13 Thread Mike Tintner
Ah suddenly I realise why flexible/fluid outlines for concepts are an obvious 
necessity.

The reason they seem like a strange rather than obvious idea is that we - and 
especially AI-ers - tend to think of concepts in terms of subjects that we are 
reading about - that we are viewing as spectators from afar. Even concepts as 
simple as 

The cat sat on the mat

refer to subjects we are spectating.  In that case, you may think you can get 
away (as so many AI-ers try) with defining concepts as sets of more 
concepts/attributes.

But of course, concepts are first and foremost there to direct our **physical 
actions** - our physical engagement with the world - and needed by animals and 
humans alike, (incl. Ben's dog being instructed to fetch a ball).

Start thinking in terms of concepts which have to be instantiated in physical 
actions/ movements like:

HOLD the cup/ snake/ cactus/ breast, EAT/CHEW your apple sauce/ bone/ 
cheese straw, CATCH the book/ ball/ case/ falling boy,

and if you think of the v. different actions involved in each case, you can see 
- no? - that you need extremely fluid blueprints for actions - that can 
direct and inform almost infinitely * diverse configurations* of our body and 
its effectors.

Clearly concept graphs and all the other so-called conceptual approaches of 
current AI will be a total bust here.

Fluid outlines are equally necessary for other concepts like Cat and Mat, just 
not so obviously.

Bear in mind BTW that concepts direct a simply vast amount of our actions, 
incl. not so immediately obviously physical actions,  like  

GO to the shops. 
MOVE to another district.
TAKE a tablet.
LOOK at him.

P.S. Here's an interesting blog on concepts (& related neuroscience) somewhat 
in line with the above, although not stressing the fluidity of schemas.

http://artksthoughts.blogspot.com/2009/07/concepts-cognition-and-anthropomorphism.html






Re: [agi] Concepts/ Conceptual paradigms

2010-07-13 Thread Mike Tintner
To summarise: you need a fluid outline for a concept in order to guide a vastly 
diverse spectrum of lines of action - and lines/delineations of objects. (You 
can almost but not quite think of this geometrically).




Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-13 Thread Mike Tintner
The first thing is to acknowledge that programs *don't* handle concepts - if 
you think they do, you must give examples.

The reasons they can't, as presently conceived, is 

a) concepts encase a more or less *infinite diversity of forms* (even if only 
applying at first to a species of object)  -  *chair* for example as I've 
demonstrated embraces a vast open-ended diversity of radically different chair 
forms; higher order concepts like  furniture embrace ... well, it's hard to 
think even of the parameters, let alone the diversity of forms, here.

b) concepts are *polydomain*- not just multi- but open-endedly extensible in 
their domains; chair for example, can also refer to a person, skin in French, 
two humans forming a chair to carry s.o., a prize, etc.

Basically concepts have a freeform realm or sphere of reference, and you can't 
have a setform, preprogrammed approach to defining that realm. 

There's no reason however why you can't mechanically and computationally begin 
to instantiate the kind of freeform approach I'm proposing. The most important 
obstacle is the setform mindset of AGI-ers - epitomised by Dave looking at 
squares, moving along set lines - setform objects in setform motion -  when it 
would be more appropriate to look at something like snakes.- freeform objects 
in freeform motion.

Concepts also - altho this is a huge subject - are *the* language of the 
general programs (as distinct from specialist programs, wh. is all we have 
right now)  that must inform an AGI. Anyone proposing a grandscale AGI project 
like Ben's (wh. I def. wouldn't recommend) must crack the problem of 
conceptualisation more or less from the beginning. I'm not aware of anyone who 
has any remotely viable proposals here, are you?


From: Jim Bromer 
Sent: Tuesday, July 13, 2010 5:46 PM
To: agi 
Subject: Re: [agi] Re: Huge Progress on the Core of AGI


On Tue, Jul 13, 2010 at 10:07 AM, Mike Tintner tint...@blueyonder.co.uk 
wrote: 

  And programs as we know them, don't and can't handle *concepts* -  despite 
the misnomers of conceptual graphs/spaces etc wh are not concepts at all.  
They can't for example handle writing or shopping because these can only be 
expressed as flexible outlines/schemas as per ideograms.

I disagree with this, and so this is proper focus for our disagreement.
Although there are other aspects of the problem that we probably disagree on, 
this is such a fundamental issue, that nothing can get past it.  Either 
programs can deal with flexible outlines/schema or they can't.  If they can't 
then AGI is probably impossible.  If they can, then AGI is probably possible.

I think that we both agree that creativity and imagination is absolutely 
necessary aspects of intelligence.

Jim Bromer









Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-13 Thread Mike Tintner
 start making standard specification 
cherry cakes with standard ingredients, and standard mathematical sums with 
standard numbers and operations, and standard logical variables with standard 
meanings [and cut out any kind of et cetera]  **  

(And for much the same reason programs can't - aren't meant to - handle 
concepts. Every concept , like chair has to refer to a general class of 
objects embracing et ceteras - new, unspecified, yet-to-be-invented kinds of 
objects  and ones that you simply haven't heard of  yet, as well as specified, 
known kinds  of object . Concepts are wonderful cognitive tools for embracing 
unspecified objects. Concepts, for example,  like things, objects, 
actions, do something -  anything all sorts of things - everything you 
can possibly think of  even  write totally new kinds of programs - 
anti-programs - et cetera -  such concepts endow humans with massive 
creative freedom and scope of reference.

You along with the whole of AI/AGI are effectively claiming that there is or 
can be a formula/program for dealing with the unknown - i.e. unknown kinds of 
objects. It's patent absurdity - and counter to the whole spirit of logic and 
rationality -  in fact lunacy. You'll wonder in years to come how so smart 
people could be so dumb.   Could think they're producing programs that can make 
anything - can make cars or cakes - any car or cake  - when the rest of the 
world and his uncle can see that they're only producing very specific brands of 
car and cake (with very specific objects/parts).  VW Beetles not cars let 
alone vehicles let alone forms of transportation let alone means of 
travel let alone universal programs. . 

I'm full of it? AI/AGI is full of the most amazing hype about its generality 
and creativity wh. you have clearly swallowed whole . Programs are simply 
specialist procedures for producing specialist products and procedures with 
specified kinds of actions and objects - they cannot deal with unspecified 
kinds of actions and objects, period. You won't produce any actual examples to 
the contrary.

  


From: David Jones 
Sent: Tuesday, July 13, 2010 8:00 PM
To: agi 
Subject: Re: [agi] Re: Huge Progress on the Core of AGI


Correction:

Mike, you are so full of it. There is a big difference between *can* and 
*don't*. You have no proof that programs can't handle anything you say [they] 
can't.


On Tue, Jul 13, 2010 at 2:59 PM, David Jones davidher...@gmail.com wrote:

  Mike, you are so full of it. There is a big difference between *can* and 
*don't*. You have no proof that programs can't handle anything you say that 
can't. 



  On Tue, Jul 13, 2010 at 2:36 PM, Mike Tintner tint...@blueyonder.co.uk 
wrote:

The first thing is to acknowledge that programs *don't* handle concepts - 
if you think they do, you must give examples.

The reasons they can't, as presently conceived, is 

a) concepts encase a more or less *infinite diversity of forms* (even if 
only applying at first to a species of object)  -  *chair* for example as 
I've demonstrated embraces a vast open-ended diversity of radically different 
chair forms; higher order concepts like  furniture embrace ... well, it's 
hard to think even of the parameters, let alone the diversity of forms, here.

b) concepts are *polydomain*- not just multi- but open-endedly extensible 
in their domains; chair for example, can also refer to a person, skin in 
French, two humans forming a chair to carry s.o., a prize, etc.

Basically concepts have a freeform realm or sphere of reference, and you 
can't have a setform, preprogrammed approach to defining that realm. 

There's no reason however why you can't mechanically and computationally 
begin to instantiate the kind of freeform approach I'm proposing. The most 
important obstacle is the setform mindset of AGI-ers - epitomised by Dave 
looking at squares, moving along set lines - setform objects in setform motion 
-  when it would be more appropriate to look at something like snakes.- 
freeform objects in freeform motion.

Concepts also - altho this is a huge subject - are *the* language of the 
general programs (as distinct from specialist programs, wh. is all we have 
right now)  that must inform an AGI. Anyone proposing a grandscale AGI project 
like Ben's (wh. I def. wouldn't recommend) must crack the problem of 
conceptualisation more or less from the beginning. I'm not aware of anyone who 
has any remotely viable proposals here, are you?


From: Jim Bromer 
Sent: Tuesday, July 13, 2010 5:46 PM
To: agi 
Subject: Re: [agi] Re: Huge Progress on the Core of AGI


On Tue, Jul 13, 2010 at 10:07 AM, Mike Tintner tint...@blueyonder.co.uk 
wrote: 

  And programs as we know them, don't and can't handle *concepts* -  
despite the misnomers of conceptual graphs/spaces etc wh are not concepts at 
all.  They can't for example handle writing or shopping because these can 
only be expressed as flexible outlines/schemas

Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-13 Thread Mike Tintner
Dave: The goal of the formula is to scan any unknown object 

How does the program define and therefore recognize object ? 

(And why then are you dealing with just squares if it can deal with this 
apparently vast and unlimited range of  objects? )

If you go into detail, you'll find no program can deal with or define object. 
 Jeez, none can recognize a chair - but now apparently they can recognize 
objects. 

What exactly does the program do?  Your description is confusing. What forms 
are input and output? Specific examples. If I put in a drawing of overlaid 
circles or a cartoon face, or a Jackson Pollock, or a photo of any scene, this 
program will give me  3-d versions?

Here's a bet - you're giving me yet more hype.




From: David Jones 
Sent: Wednesday, July 14, 2010 1:32 AM
To: agi 
Subject: Re: [agi] Re: Huge Progress on the Core of AGI


I'm not even going to read your whole email. 

I'll give you a great example of a formula handling unknown objects. The goal 
of the formula is to scan any unknown object and produce a 3D model of it using 
laser scanning. The objects are unknown, but that doesn't mean you can't handle 
unknown inputs. They all have things in common. Objects all have surfaces (at 
least the vast majority). So, whatever methods you can apply to analyze object 
surfaces, will work for the vast majority of objects. So, you *CAN* handle 
unknown objects. The same type of solution can be applied to many other 
problems, including AGI. The complete properties of the object or concept may 
be unknown, but the components that can be used to describe it are usually 
known. 
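
A minimal sketch of that point, under invented assumptions (the struct and the sample values stand in for a real scanner's output): the object itself is unknown, but its surface can be described with known components - sampled 3D points - and the same code then processes any object.

#include <stdio.h>

typedef struct { double x, y, z; } Point3;

/* Axis-aligned bounding box over sampled surface points. Nothing here
   depends on what the object "is", only on the fact that its surface
   can be sampled as points. */
static void bounding_box(const Point3 *pts, int n, Point3 *lo, Point3 *hi)
{
    *lo = *hi = pts[0];
    for (int i = 1; i < n; i++) {
        if (pts[i].x < lo->x) lo->x = pts[i].x;
        if (pts[i].y < lo->y) lo->y = pts[i].y;
        if (pts[i].z < lo->z) lo->z = pts[i].z;
        if (pts[i].x > hi->x) hi->x = pts[i].x;
        if (pts[i].y > hi->y) hi->y = pts[i].y;
        if (pts[i].z > hi->z) hi->z = pts[i].z;
    }
}

int main(void)
{
    /* hypothetical scan of some unknown object's surface */
    Point3 scan[] = { {0,0,0}, {1.2,0.1,0.4}, {0.3,2.0,0.9}, {0.8,1.1,1.7} };
    Point3 lo, hi;
    bounding_box(scan, 4, &lo, &hi);
    printf("extent: %.1f x %.1f x %.1f\n",
           hi.x - lo.x, hi.y - lo.y, hi.z - lo.z);
    return 0;
}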

Your claim is baseless.

Dave


On Tue, Jul 13, 2010 at 7:34 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Dave:You have no proof that programs can't handle anything you say that can't

  Sure I do. 

  **There is no such thing as a formula (or program as we currently understand 
it) that can or is meant to handle UNSPECIFIED, (ESP NEW, UNKNOWN)  KINDS OF 
ACTIONS AND OBJECTS**

  Every program is essentially a formula for a set form activity which directs 
how to take a closed set of **specified kinds of actions and objects** - e,g,

  a + b + c + d +  = 

  [take an a and a b and a c and a d ..]

  in order to produce set forms of products and procedures  - (set combinations 
of those a,b,c,and d actions and objects)

  A recipe that specifies a set kind of cherry cake with set ingredients. 
[GA's, if you're wondering, are merely glorified recipes for mixing and 
endlessly remixing the same set of specific ingredients. Even random programs 
work with specified actions and objects.]

  There is no formula or program that says:

  take an a and a b and a c  oh, and something else -  a certain 'je ne 
sais quoi' - I don't know what it is, but you may be able to recognize it when 
you find it.Just keep looking 

  There is no formula of the form

  A + B + C + D + ETC. = 

  [ETC.= et cetera/some other unspecified things ]

  still less

  A + B + C + D + ETC ^ETC =  

  [some other things x some other operations]

  That, I trust you will agree, is a contradiction of a formula and a program - 
more like an anti-formula/program. There are no et cetera formulas, and no 
logical or mathematical symbols for etc  are there?

  But to be creative and produce new kinds of products and procedures, small 
and trivial as well as large, you have to be able to work with and find just 
such **unspecified (and esp. new) kinds of actions and objects.** - et ceteras.

  If you want to develop a new kind of fruit cake or new kind of cherry 
cake  or  even make a slightly different stew or more or less the same cherry 
cake but without the maraschinos wh. have gone missing, then you have to be 
able to work with and find new kinds of ingredients and mix/prepare them in new 
kinds of ways - new exotic kinds of fruit and other foods in new mixes and 
mashes and fermentations  -  et cetera x et cetera.

  If you want to develop a new kind of word or alphabet, (or develop a new kind 
of formula as I just did above,  then you have to be able to work with and 
find new kinds of letters and symbols and abbreviations (as I  just did) - etc.

  If you even want to engage with any part of the real world at the most 
mundane level  - walk down a street say - you have to be able to be creative 
and deal with new unspecified kinds of actions and objects that you may find 
there - because you can't predict what that street will contain.

  And to be creative, you do indeed  have to start not from a perfectly, fully 
specified formula, but something more like an et cetera anti-formula  -a 
v. imperfectly and partially specified  *conceptual paradigm*, such as  -:

  if you want to make a new different kind of cake/ house/ structure, you'll 
probably need an a and a b and a c  but you'll also need some other 
things -  some 'je ne sais quoi's - I don't know what they are, -- but you 
may be able to recognize them when you

Re: [agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-13 Thread Mike Tintner

Michael: We can't do operations that
require 1,000,000 loop iterations.  I wish someone would give me a PHD
for discovering this ;) It far better describes our differences than any
other theory.

Michael,

This isn't a competitive point - but I think I've made that point several 
times (and so of course has Hawkins). Quite obviously, (unless you think the 
brain has fabulous hidden powers), it conducts searches and other operations 
with extremely few limited steps, and nothing remotely like the routine 
millions to billions of current computers.  It must therefore work v. 
fundamentally differently.


Are you saying anything significantly different to that?

--
From: Michael Swan ms...@voyagergaming.com
Sent: Wednesday, July 14, 2010 1:34 AM
To: agi agi@v2.listbox.com
Subject: Re: [agi] What is the smallest set of operations that can 
potentially  define everything and how do you combine them ?




On Tue, 2010-07-13 at 07:00 -0400, Ben Goertzel wrote:

Well, if you want a simple but complete operator set, you can go with

-- Schonfinkel combinator plus two parentheses


I'll check this out soon.

or

-- S and K combinator plus two parentheses

and I suppose you could add

-- input
-- output
-- forget

statements to this, but I'm not sure what this gets you...

Actually, adding other operators doesn't necessarily
increase the search space your AI faces -- rather, it
**decreases** the search space **if** you choose the right operators, that
encapsulate regularities in the environment faced by the AI


Unfortunately, an AGI needs to be absolutely general. You are right that
higher level concepts reduce combinations; however, using them will
increase combinations for simpler operator combinations, and if you
miss a necessary operator, then some concepts will be impossible to
achieve. The smallest set can define higher level concepts; these
concepts can later be integrated as single operations, which means that
using operators that can be understood in terms of smaller operators
in the beginning will definitely increase your combinations later on.

The smallest operator set is like absolute zero. It has a defined end. A
defined way of finding out what they are.





Exemplifying this, writing programs doing humanly simple things
using S and K is a pain and involves piling a lot of S and K and parentheses
on top of each other, whereas if we introduce loops and conditionals and
such, these programs get shorter.  Because loops and conditionals happen
to match the stuff that our human-written programs need to do...

Loops are evil in most situations.

Let me show you why:
Draw a square using put_pixel(x,y)
// loops are more scalable, but, damage this code anywhere and it can
potentially kill every other process, not just itself. This is why
computers die all the time.

for (int x = 0; x < 2; x++)
{
for (int y = 0; y < 2; y++)
{
put_pixel(x,y);
}
}

opposed to
/* The below is faster (even on single step instructions), and can be
run in parallel, damage resistant ( ie destroy  put_pixel(0,1); and the
rest of the code will still run), is less scalable ( more code is
required for larger operations) */

put_pixel(0,0);
put_pixel(0,1);
put_pixel(1,0);
put_pixel(1,1);

The lack of loops in the brain is a fundamental difference between
computers and brains. Think about it. We can't do operations that
require 1,000,000 loop iterations.  I wish someone would give me a PHD
for discovering this ;) It far better describes our differences than any
other theory.



A better question IMO is what set of operators and structures has the
property that the compact expressions tend to be the ones that are useful
for survival and problem-solving in the environments that humans and
human-like AIs need to cope with...


For me that is stage 2.



-- Ben G

On Tue, Jul 13, 2010 at 1:43 AM, Michael Swan ms...@voyagergaming.com 
wrote:

 Hi,

 I'm interested in combining the simplest, most derivable operations
 (eg operations that cannot be defined by other operations) for creating
 seed AGI's. The simplest operations combined in a multitude of ways can
 form extremely complex patterns, but the underlying logic may be
 simple.

 I wonder if varying combinations of the smallest set of operations:

 { <, memory (= for memory assignment), ==, (a logical way to
 combine them), (input, output), () brackets  }

 can potentially learn and define everything.

 Assume all input is from numbers.

 We want the smallest set of elements, because fewer elements mean fewer
 combinations, which mean less chance of hitting combinatorial explosion.

 < helps for generalisation, reducing combinations.

 memory(=) is for hash look ups, what should one remember? What can be
 discarded?

 == This does a comparison between 2 values x == y is 1 if x and y are
 exactly the same. Returns 0 if they are not the same.

 (a logical way to combine them) Any non-narrow algorithm that reduces
 the raw data into a simpler state 
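
As a rough illustration of that claim - a sketch with invented function names, not code from the proposal - higher-level operations can be defined purely from '<', '==' and memory assignment, and then reused as single operations later on:

#include <stdio.h>

/* primitives: '<', '==' and '=' (memory assignment); the 'if' below stands
   in for the unspecified "logical way to combine them" */
static int less(int a, int b)  { return a < b; }
static int equal(int a, int b) { return a == b; }

/* higher-level operations built only from the primitives; once defined,
   each can be reused as a single operation */
static int greater_or_equal(int a, int b) { return equal(less(a, b), 0); }

static int maximum(int a, int b)
{
    int m = a;              /* memory assignment */
    if (less(a, b)) m = b;
    return m;
}

int main(void)
{
    printf("greater_or_equal(3,2) = %d\n", greater_or_equal(3, 2)); /* 1 */
    printf("maximum(3,7) = %d\n", maximum(3, 7));                   /* 7 */
    return 0;
}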

Re: [agi] Mechanical Analogy for Neural Operation!

2010-07-12 Thread Mike Tintner
One tangential comment.

You're still thinking linearly. Machines are linear chains of parts. 
Cause-and-effect thinking made flesh/metal.

With organisms, however you have whole webs of parts acting more or less 
simultaneously.

We will probably need to bring that organic thinking/framework - field vs chain 
thinking? -  into the design of AGI machines, robots.

In relation to your subject, you see, incoming information is actually analysed 
by the human system on multiple levels and in terms often of multiple domain 
associations simultaneously.

And that's why we often get confused - and don't always know what it is we don't understand. 
Sometimes we do know clearly what we don't understand - what does that word 
[actually] mean? But sometimes we attend to a complex argument and we know it 
doesn't really make sense to us, but we don't know which part[s] of it don't 
make sense or why - and we have to patiently and gradually unravel that knot of 
confusion.


From: Steve Richfield 
Sent: Monday, July 12, 2010 7:02 AM
To: agi 
Subject: [agi] Mechanical Analogy for Neural Operation!


Everyone has heard about the water analogy for electrical operation. I have a 
mechanical analogy for neural operation that just might be solid enough to 
compute at least some characteristics optimally.

No, I am NOT proposing building mechanical contraptions, just using the concept 
to compute neuronal characteristics (or AGI formulas for learning).

Suppose neurons were mechanical contraptions, that receive inputs and 
communicate outputs via mechanical movements. If one or more of the neurons 
connected to an output of a neuron, can't make sense of a given input given its 
other inputs, then its mechanism would physically resist the several inputs 
that didn't make mutual sense because its mechanism would jam, with the 
resistance possibly coming from some downstream neuron.

This would utilize position to resolve opposing forces, e.g. one force being 
the observed inputs, and the other force being that they don't make sense, 
suggest some painful outcome, etc. In short, this would enforce the sort of 
equation over the present formulaic view of neurons (and AGI coding) that I 
have suggested in past postings may be present, and show that the math may not 
be all that challenging.

Uncertainty would be expressed in stiffness/flexibility, computed limitations 
would be handled with over-running clutches, etc.

Propagation of forces would come close (perfect?) to being able to identify 
just where in a complex network something should change to learn as efficiently 
as possible.

Once the force concentrates at some point, it then gives, something slips or 
bends, to unjam the mechanism. Thus, learning is effected.

Note that this suggests little difference between forward propagation and 
backwards propagation, though real-world wet design considerations would 
clearly prefer fast mechanisms for forward propagation, and compact mechanisms 
for backwards propagation.

Epiphany or mania?

Any thoughts?

Steve






[agi] Cash in on robots

2010-07-12 Thread Mike Tintner
http://www.moneyweek.com/investment-advice/cash-in-on-the-robot-revolution-49024.aspx?utm_source=newsletterutm_medium=emailutm_campaign=Money%2BMorning

http://www.moneyweek.com/investment-advice/share-tips-five-ways-into-the-robotics-sector-49025.aspx




Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-10 Thread Mike Tintner

You are ironically misunderstanding the very foundations and rationale of 
geometry. Geometry - with its setform forms - was invented precisely because 
mathematicians didn't like the freeform nature of the world - wanted to create 
set forms (in the footsteps of the rational technologists who preceded them) - 
that they could control and reduce to formulae and reproduce with ease. 
Freeform rocks are a lot more complex to draw and make and reproduce than  set 
form rectangular bricks.

Set forms are not free forms. They are the opposite.

(And while you and others will continue to *claim*  in theory absolute 
setform=freeform nonsense, you will in practice always, always stick to setform 
objects. Some part of you knows the v.obvious truth ).



 
From: David Jones 
Sent: Saturday, July 10, 2010 3:51 PM
To: agi 
Subject: Re: [agi] Re: Huge Progress on the Core of AGI


Mike,

Using the image itself as a template to match is possible. In fact it has been 
done before. But it suffers from several problems that would also need solving. 

1) Images are 2D. I assume you are also referring to 2D outlines. Real objects 
are 3D. So, you're going to have to infer the shape of the object... which 
means you are no longer actually transforming the image itself. You are 
transforming a model of the image, which would have points, curves, dimensions, 
etc. Basically, a mathematical shape :) No matter how much you disapprove of 
encoding info, sometimes it makes sense to do it.
2) Creating the first outline and figuring out what to outline is not trivial 
at all. So, this method can only be used after you can do that. There is a lot 
more uncertainty involved here than you seem to realize. First, how do you even 
determine the outline? That is an unsolved problem. So, not only will you have 
to try many transformations with the right outline, you have to try many with 
wrong outlines, increasing the possibilities (exponentially?). It looks like you 
need a way to score possibilities and decide which ones to try. 
3) rock is a word and words are always learned by induction along with other 
types of reasoning before we can even consider it a type of object. So, you are 
starting with a somewhat unrepresentative or artificial problem. 
4) Even the same rock can look very different from different perspectives. In 
fact, how do you even match the same rock? Please describe how your system 
would do this. It is not trivial at all. And you will soon see that there is an 
extremely large amount of uncertainty. Dealing with this type of uncertainty is 
the central problem of AGI. The central problem is not fluid schemas. Even if I 
used this method, I would be stuck with the same exact uncertainty problems. 
So, you're not going to get passed them like this. The same research on 
explanatory and non-monotonic type reasoning must still be done.
5) what is a fluid transform? You can't just throw out words. Please define it. 
You are going to realize that your representation is pretty much geometric 
anyway. Regardless, it will likely be equivalent. Are you going to try every 
possible transformation? Nope. That would be impossible. So, how do you decide 
what transformations to try? When is a transformation too large of a change to 
consider it the same rock? When is it too large to consider it a different 
rock? 
6) Are you seriously going to transform every object you've ever tried to 
outline? This is going to be prohibitively costly in terms of processing. 
Especially because you haven't defined how you're going to decide what to 
transform and what not to. So, before you can even use this algorithm, you're 
going to have to use something else to decide what is a possible candidate and 
what is not.



On Fri, Jul 9, 2010 at 6:42 PM, Mike Tintner tint...@blueyonder.co.uk wrote: 
  Now let's see **you** answer a question. Tell me how any 
algorithmic/mathematical approach of any kind actual or in pure principle can 
be applied to recognize raindrops falling down a pane - and to predict 
their movement?

Like I've said many times before, we can't predict everything, and we certainly 
shouldn't try. But  


  http://www.pond5.com/stock-footage/263778/beautiful-rain-drops.html

  or to recognize a rock?

  http://www.handprint.com/HP/WCL/IMG/LPR/adams.jpg

  or a [filled] shopping bag?

  http://www.abc.net.au/reslib/200801/r215609_837743.jpg
  
http://www.sustainableisgood.com/photos/uncategorized/2007/03/29/shoppingbags.jpg
  http://thegogreenblog.com/wp-content/uploads/2007/12/plastic_shopping_bag.jpg

  or if you want a real killer, google some vid clips of amoebas in oozing 
motion?

  PS In every case, I suggest, the brain observes different principles of 
transformation - for the most part unconsciously. And they can be of various 
kinds not just direct natural transformations, of course. It's possible, it 
occurs to me, that the principle that applies to rocks might just be something 
like whatever can be carved out of stone

Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-10 Thread Mike Tintner
Dave:You can't solve the problems with your approach either

This is based on knowledge of what examples? Zero?

I have given you one instance of s.o. [a technologist not a philosopher like 
me] who is if only in broad principle, trying to proceed in a non-encoding, 
analog-comparison direction. There must be others who are however crudely 
trying and considering what can be broadly classified as analog approaches. How 
much do you know, or have you even thought about such approaches? [Of course, 
computing doesn't have to be either/or analog-digital but can be both]

My point 6) BTW is irrefutable, completely irrefutable, and puts a finger bang 
on why geometry  obviously cannot cope with real objects,  ( although I can and 
must, do a much more extensive job of exposition).




From: David Jones 
Sent: Saturday, July 10, 2010 5:44 PM
To: agi 
Subject: Re: [agi] Re: Huge Progress on the Core of AGI


Mike, 

Your claim that you have to reject encoded and simpler descriptions of the 
world to solve AGI is unfounded. You can't solve the problems with your 
approach either. So, this argument is going nowhere. You won't admit that 
you're faced with the same problems no matter how you approach it. I do admit 
that your ideas on transformations can be useful, but not at all by themselves 
and definitely not in the absense of math or geometry. They also are certainly 
not a solution to any of the problems I'm considering. Regardless, we both face 
the same problems of uncertainty and encoding.

Dave


On Sat, Jul 10, 2010 at 12:09 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  General point: you keep talking as if algorithms *work* for visual AGI - they 
don't - they simply haven't. Unless you take a set of objects carefully chosen 
to be closely aligned and close in overall form- and then it's not AGI. But in 
general the algorithmic patterned approach has been a bust - because natural 
objects as well as clusters of diverse artificial objects are not patterned. 
You can see this. It's actually obvious if you care to look.

  Re 2) It may well be that you've gotta have a body to move around to 
different POV's for objects, and to touch those objects and use another sense 
or two to determine the outlines. I haven't thought all this through at all, 
but you've got to realise that the whole of evolution tells you that sensing 
the world is a *multi*-*common*-sense affair, and not a single one. You're 
trying to isolate a sense - and insisting that that's the only way things can 
be done, even while you along with others are continuously failing.  Respect 
and learn from evolution.

  Re 1) I again haven't thought this through, but it sounds like you're again 
assuming that your AGI vision must automatically meet adult, educated criteria. 
Presumably it takes time to perceive and appreciate the 3-d ness of objects.And 
3-d is a mathematical, highly evolved idea. Yes, objects are solid, but they 
were never 3-d until geometry was invented a mere 2,000 or so years ago. 
Primitive people see very differently from modern people. Read McLuhan on this 
(v. worthwhile generally for s.o. like you).

  And no, rocks are simply *not* mathematical objects. There are no rocks in 
geometry period. *You* can use a mathematically-based program to draw a rock, 
but that's down to your AGI mind, not the mathematics.

  [Look BTW how you approach all these things - you always start mathematically 
- but it is a simple fact that maths. was invented only a few thousand years 
ago, animals and humans happily existed and saw the world without it, and maths 
objects are **abstract fictions** - they do not exist in the real world, as 
maths itself will tell you - and you have to be able to *see* that - to see and 
know that there is a diff. between a postulated math square and any concrete, 
real object version of a square. What visual processing are you going to use to 
tell the difference between a math and a real object? Are you saying you can 
use maths to do that?

  Non-sense.

  3) I am starting with simple natural irregular objects. I can recognize that 
rocks may have too large a range of irregularity for first visual perception. 
(It'd be v.g. to know how soon infants recognize them). Maybe then you need 
something with a narrower range like shopping bags. I'd again study the 
development of infant perception - that will give you the best ideas re what to 
start with.

  But what's vital is that your objects be natural and irregular, not narrow AI 
formulaic squares.

  5) A fluid transform is er a fluid transform. What are all the ways a 
raindrop as per the vid can transform into a different form - all the ways that 
the outline of the drop can continuously reshape. Jeez they're pretty well 
infinite, except that they're constrained. The drop isn't suddenly going to 
become a square or rectilinear. And you can presumably invent new lines/fields 
of transformation wh. could turn out to be true.

  But if you think

Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-09 Thread Mike Tintner
Couple of quick comments (I'm still thinking about all this  - but I'm 
confident everything AGI links up here).

A fluid schema is arguably by its v. nature a method - a trial and error, 
arguably universal method. It links vision to the hand or any effector. 
Handling objects also is based on fluid schemas - you put out a fluid 
adjustably-shaped hand to grasp things. And even if you don't have hands, like 
a worm, and must grasp things with your body, and must grasp the ground under 
which you move, then too you must use fluid body schemas/maps.

All concepts - the basis of language and before language, all intelligence - 
are also almost certainly fluid schemas (and not as you suggested, patterns).

All creative problemsolving begins from concepts of what you want to do  (and 
not formulae or algorithms as in rational problemsolving). Any suggestion to 
the contrary will not, I suggest, bear the slightest serious examination.

**Fluid schemas/concepts/fluid outlines are attempts-to-grasp-things - 
gropings.** 

Point 2 : I'd relook at your assumptions in all your musings  - my impression 
is they all assume, unwittingly, an *adult* POV - the view of s.o. who already 
knows how to see - as distinct from an infant who is just learning to see and 
get to grips with an extremely blurred world, (even more blurred and 
confusing, I wouldn't be surprised, than that Prakash video). You're 
unwittingly employing top down, fully-formed-intelligence assumptions even 
while overtly trying to produce a learning system - you're looking for what an 
adult wants to know, rather than what an infant 
starting-from-almost-no-knowledge-of-the-world wants to know.

If you accept the point in any way, major philosophical rethinking is required.



From: David Jones 
Sent: Friday, July 09, 2010 1:56 PM
To: agi 
Subject: Re: [agi] Re: Huge Progress on the Core of AGI


Mike,


On Thu, Jul 8, 2010 at 6:52 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Isn't the first problem simply to differentiate the objects in a scene? 

Well, that is part of the movement problem. If you say something moved, you are 
also saying that the objects in the two or more video frames are the same 
instance.
 
  (Maybe the most important movement to begin with is not  the movement of the 
object, but of the viewer changing their POV if only slightly  - wh. won't be a 
factor if you're looking at a screen)

Maybe, but this problem becomes kind of trivial in a 2D environment, assuming 
you don't allow rotation of the POV. Moving the POV would simply translate all 
the objects linearly. If you make it a 3D environment, it becomes significantly 
more complicated. I could work on 3D, which I will, but I'm not sure I should 
start there. I probably should consider it though and see what complications it 
adds to the problem and how they might be solved.
 
  And that I presume comes down to being able to put a crude, highly tentative, 
and fluid outline round them (something that won't be neces. if you're dealing 
with squares?) . Without knowing v. little if anything about what kind of 
objects they are. As an infant most likely does. {See infants' drawings and how 
they evolve v. gradually from a v. crude outline blob that at first can 
represent anything - that I'm suggesting is a replay of how visual perception 
developed).

  The fluid outline or image schema is arguably the basis of all intelligence - 
just about everything AGI is based on it.  You need an outline for instance not 
just of objects, but of where you're going, and what you're going to try and do 
- if you want to survive in the real world.  Schemas connect everything AGI.

  And it's not a matter of choice - first you have to have an outline/sense of 
the whole - whatever it is -  before you can start filling in the parts.


Well, this is the question. The solution is underdetermined, which means that a 
right solution is not possible to know with complete certainty. So, you may 
take the approach of using contours to match objects, but that is certainly not 
the only way to approach the problem. Yes, you have to use local features in 
the image to group pixels together in some way. I agree with you there.  

Is using contours the right way? Maybe, but not by itself. You have to define 
the problem a little better than just saying that we need to construct an 
outline. The real problem/question is this: How do you determine the 
uncertainty of a hypothesis, lower it and also determine how good a hypothesis 
is, especially in comparison to other hypotheses? 

So, in this case, we are trying to use an outline comparison to determine the 
best match hypotheses between objects. But, that doesn't define how you score 
alternative hypotheses. That also is certainly not the only way to do it. You 
could use the details within the outline too. In fact, in some situations, this 
would be required to disambiguate between the possible hypotheses.  
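
One hedged way to make that scoring concrete - the features and weights below are invented for illustration only, not a proposed solution: give each candidate match a score that combines outline similarity with interior detail, and prefer the highest-scoring hypothesis.

#include <stdio.h>
#include <math.h>

/* A crude object description: outline length and mean interior intensity.
   Both are stand-ins for richer contour / appearance models. */
typedef struct { double outline_len; double mean_intensity; } ObjDesc;

/* Higher score = better match. The weights are arbitrary illustration values. */
static double match_score(ObjDesc a, ObjDesc b)
{
    double outline_diff  = fabs(a.outline_len    - b.outline_len);
    double interior_diff = fabs(a.mean_intensity - b.mean_intensity);
    return -(0.7 * outline_diff + 0.3 * interior_diff);
}

int main(void)
{
    ObjDesc target       = { 40.0, 120.0 };
    ObjDesc candidates[] = { { 41.0, 118.0 },     /* plausible match        */
                             { 90.0, 121.0 } };   /* outline very different */
    int best = 0;
    for (int i = 1; i < 2; i++)
        if (match_score(target, candidates[i]) > match_score(target, candidates[best]))
            best = i;
    printf("best hypothesis: candidate %d\n", best);
    return 0;
}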



  P.S. It would

Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-09 Thread Mike Tintner
If fluid schemas - speaking broadly - are what is needed, (and I'm pretty sure 
they are), it's n.g. trying for something else. You can't substitute a square 
approach for a fluid amoeba outline approach. (And you will certainly need 
exactly such an approach to recognize amoeba's).

If it requires a new kind of machine, or a radically new kind of instruction 
set for computers, then that's what it requires - Stan Franklin, BTW, is one 
person who does recognize, and is trying to deal with this problem - might be 
worth checking up on him.

This is partly BTW why my instinct is that it may be better to start with tasks 
for robot hands*, because it should be possible to get them to apply a 
relatively flexible and fluid grip/handshape and grope for and experiment with 
differently shaped objects. And if you accept the broad philosophy I've been 
outlining, then it does make sense that evolution should have started with 
touch as a more primary sense, well before it got to vision. 

*Or perhaps it may prove better to start with robot snakes/bodies or somesuch.


From: David Jones 
Sent: Friday, July 09, 2010 3:22 PM
To: agi 
Subject: Re: [agi] Re: Huge Progress on the Core of AGI





On Fri, Jul 9, 2010 at 10:04 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Couple of quick comments (I'm still thinking about all this  - but I'm 
confident everything AGI links up here).

  A fluid schema is arguably by its v. nature a method - a trial and error, 
arguably universal method. It links vision to the hand or any effector. 
Handling objects also is based on fluid schemas - you put out a fluid 
adjustably-shaped hand to grasp things. And even if you don't have hands, like 
a worm, and must grasp things with your body, and must grasp the ground under 
which you move, then too you must use fluid body schemas/maps.

  All concepts - the basis of language and before language, all intelligence - 
are also almost certainly fluid schemas (and not as you suggested, patterns).

fluid schemas is not an actual algorithm. It is not clear how to go about 
implementing such a design. Even so, when you get into the details of actually 
implementing it, you will find yourself faced with the exact same problems I'm 
trying to solve. So, lets say you take the first frame and generate an initial 
fluid schema. What if an object disappears? What if the object changes? What 
if the object moves a little or a lot? What if a large number of changes occur 
at once, like one new thing suddenly blocking a bunch of similar stuff that is 
behind it? How far does your fluid schema have to be distorted for the 
algorithm to realize that it needs a new schema and can't use the same old one? 
You can't just say that all objects are always present and just distort the 
schema. What if two similar objects appear or both move and one disappears? How 
does your schema handle this? Regardless of whether you talk about hypotheses 
or schemas, it is the SAME problem. You can't avoid the fact that the whole 
thing is underdetermined and you need a way to score and compare hypotheses. 
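
A toy sketch of that underdetermination, under invented assumptions (the movement cost and the disappearance penalty are placeholders for learned priors, not anyone's actual algorithm): for one object seen in frame 1, compare the hypothesis "it moved" against "it disappeared and something new appeared".

#include <stdio.h>
#include <math.h>

typedef struct { double x, y; } Pos;

/* cost of believing the frame-1 object simply moved to the frame-2 position */
static double cost_moved(Pos p1, Pos p2)
{
    return hypot(p2.x - p1.x, p2.y - p1.y);   /* proportional to distance moved */
}

/* cost of believing the old object vanished and an unrelated one appeared;
   the constant stands in for a learned prior on disappearance */
static double cost_disappeared_and_new(void)
{
    return 25.0;
}

int main(void)
{
    Pos p1 = { 10.0, 10.0 };   /* object in frame 1 */
    Pos p2 = { 13.0, 11.0 };   /* nearby observation in frame 2 */

    double moved    = cost_moved(p1, p2);
    double vanished = cost_disappeared_and_new();

    if (moved < vanished)
        printf("prefer: same object moved (cost %.1f vs %.1f)\n", moved, vanished);
    else
        printf("prefer: object disappeared, a new one appeared\n");
    return 0;
}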

If you disagree, please define your schema algorithm a bit more specifically. 
Then we would be able to analyze its pros and cons better.
 

  All creative problemsolving begins from concepts of what you want to do  (and 
not formulae or algorithms as in rational problemsolving). Any suggestion to 
the contrary will not, I suggest, bear the slightest serious examination.

Sure.  I would point out though that children do stuff just to learn in the 
beginning. A good example is our desire to play. Playing is a strategy by which 
children learn new things even though they don't have a need for those things 
yet. It motivates us to learn for the future and not for any pressing present 
needs. 

No matter how you look at it, you will need algorithms for general 
intelligence. To say otherwise makes zero sense. No algorithms, no design. No 
matter what design you come up with, I call that an algorithm. Algorithms don't 
have to be formulaic or narrow. Keep an open mind about the word 
"algorithm", unless you can suggest a better term to describe general AI 
algorithms.



  **Fluid schemas/concepts/fluid outlines are attempts-to-grasp-things - 
gropings.** 

  Point 2 : I'd relook at your assumptions in all your musings  - my impression 
is they all assume, unwittingly, an *adult* POV - the view of s.o. who already 
knows how to see - as distinct from an infant who is just learning to see and 
get to grips with an extremely blurred world, (even more blurred and 
confusing, I wouldn't be surprised, than that Prakash video). You're 
unwittingly employing top down, fully-formed-intelligence assumptions even 
while overtly trying to produce a learning system - you're looking for what an 
adult wants to know, rather than what an infant 
starting-from-almost-no-knowledge-of-the-world wants to know.

  If you accept the point in any way, major philosophical rethinking

Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-09 Thread Mike Tintner
There isn't an algorithm. It's basically a matter of overlaying shapes to see 
if they fit -  much as you put one hand against another to see if they fit - 
much as you can overlay a hand to see if it fits and is capable of grasping an 
object - except considerably more fluid/ rougher. There has to be some 
instruction generating the process, but it's not an algorithm. How can you have 
an algorithm for recognizing amoebas - or rocks or a drop of water? They are 
not patterned entities - or by extension reducible to algorithms. You don't 
need to think too much about internal visual processes - you can just look at 
the external objects-to-be-classified, the objects that make up this world, 
and see this. Just as you can look at a set of diverse patterns and see that 
they too are not reducible to any single formula/pattern/algorithm. We're 
talking about the fundamental structure of the universe and its contents.  If 
this is right and God is an artist before he is a mathematician, then it 
won't do any good screaming about it, you're going to have to invent a way  to 
do art, so to speak, on computers . Or you can pretend that dealing with 
mathematical squares will somehow help here - but it hasn't and won't.

Do you think that a creative process like creating 

http://www.apocalyptic-theories.com/gallery/lastjudge/bosch.jpg

started with an algorithm?  There are other ways of solving problems than 
algorithms - the person who created each algorithm in the first place certainly 
didn't have one. 

From: David Jones 
Sent: Friday, July 09, 2010 4:20 PM
To: agi 
Subject: Re: [agi] Re: Huge Progress on the Core of AGI


Mike, 

Please outline your algorithm for fluid schemas though. It will be clear when 
you do that you are faced with the exact same uncertainty problems I am dealing 
with and trying to solve. The problems are completely equivalent. Yours is just 
a specific approach that is not sufficiently defined.

You have to define how you deal with uncertainty when using fluid schemas or 
even how to approach the task of figuring it out. Until then, it's not a 
solution to anything. 

Dave


On Fri, Jul 9, 2010 at 10:59 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  If fluid schemas - speaking broadly - are what is needed, (and I'm pretty 
sure they are), it's n.g. trying for something else. You can't substitute a 
square approach for a fluid amoeba outline approach. (And you will 
certainly need exactly such an approach to recognize amoebas.)

  If it requires a new kind of machine, or a radically new kind of instruction 
set for computers, then that's what it requires - Stan Franklin, BTW, is one 
person who does recognize, and is trying to deal with this problem - might be 
worth checking up on him.

  This is partly BTW why my instinct is that it may be better to start with 
tasks for robot hands*, because it should be possible to get them to apply a 
relatively flexible and fluid grip/handshape and grope for and experiment with 
differently shaped objects. And if you accept the broad philosophy I've been 
outlining, then it does make sense that evolution should have started with 
touch as a more primary sense, well before it got to vision. 

  *Or perhaps it may prove better to start with robot snakes/bodies or somesuch.


  From: David Jones 
  Sent: Friday, July 09, 2010 3:22 PM
  To: agi 
  Subject: Re: [agi] Re: Huge Progress on the Core of AGI





  On Fri, Jul 9, 2010 at 10:04 AM, Mike Tintner tint...@blueyonder.co.uk 
wrote:

Couple of quick comments (I'm still thinking about all this  - but I'm 
confident everything AGI links up here).

A fluid schema is arguably by its v. nature a method - a trial and error, 
arguably universal method. It links vision to the hand or any effector. 
Handling objects also is based on fluid schemas - you put out a fluid 
adjustably-shaped hand to grasp things. And even if you don't have hands, like 
a worm, and must grasp things with your body, and must grasp the ground under 
which you move, then too you must use fluid body schemas/maps.

All concepts - the basis of language and before language, all intelligence 
- are also almost certainly fluid schemas (and not as you suggested, patterns).

  "Fluid schemas" is not an actual algorithm. It is not clear how to go about 
implementing such a design. Even so, when you get into the details of actually 
implementing it, you will find yourself faced with the exact same problems I'm 
trying to solve. So, let's say you take the first frame and generate an initial 
fluid schema. What if an object disappears? What if the object changes? What 
if the object moves a little or a lot? What if a large number of changes occur 
at once, like one new thing suddenly blocking a bunch of similar stuff that is 
behind it? How far does your fluid schema have to be distorted for the 
algorithm to realize that it needs a new schema and can't use the same old one? 
You can't just say that all objects are always

Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-08 Thread Mike Tintner
Isn't the first problem simply to differentiate the objects in a scene?  (Maybe 
the most important movement to begin with is not  the movement of the object, 
but of the viewer changing their POV if only slightly  - wh. won't be a factor 
if you're looking at a screen)

And that I presume comes down to being able to put a crude, highly tentative, 
and fluid outline round them (something that won't be neces. if you're dealing 
with squares?). While knowing v. little, if anything, about what kind of 
objects they are. As an infant most likely does. (See infants' drawings and how 
they evolve v. gradually from a v. crude outline blob that at first can 
represent anything - that, I'm suggesting, is a replay of how visual perception 
developed.)

The fluid outline or image schema is arguably the basis of all intelligence - 
just about everything in AGI is based on it.  You need an outline for instance not 
just of objects, but of where you're going, and what you're going to try and do 
- if you want to survive in the real world.  Schemas connect everything AGI.

And it's not a matter of choice - first you have to have an outline/sense of 
the whole - whatever it is -  before you can start filling in the parts.

P.S. It would be mindblowingly foolish BTW to think you can do better than the 
way an infant learns to see - that's an awfully big visual section of the brain 
there, and it works.


David,

How I'd present the problem would be "predict the next frame", or more 
generally "predict a specified portion of video given a different portion". Do 
you object to this approach?

--Abram
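
A minimal sketch of that framing (the toy track and the constant-velocity model below 
are invented for illustration): a model is judged purely by how well it predicts a 
held-out portion of the video from the portion it has already seen.

def constant_velocity_model(past_positions):
    # Predict the next position by extrapolating the last observed step.
    (x0, y0), (x1, y1) = past_positions[-2], past_positions[-1]
    return (2 * x1 - x0, 2 * y1 - y0)

def prediction_error(model, track):
    # Total error when each frame is predicted from the frames before it.
    err = 0.0
    for t in range(2, len(track)):
        px, py = model(track[:t])
        x, y = track[t]
        err += ((px - x) ** 2 + (py - y) ** 2) ** 0.5
    return err

track = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)]          # a square drifting right
print(prediction_error(constant_velocity_model, track))   # 0.0 - perfectly predicted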


On Thu, Jul 8, 2010 at 5:30 PM, David Jones davidher...@gmail.com wrote:

  It may not be possible to create a learning algorithm that can learn how to 
generally process images and other general AGI problems. This is for the same 
reason that completely general vision algorithms are likely impossible. I think 
that figuring out how to process sensory information intelligently requires 
either 1) impossible amounts of processing or 2) intelligent design and 
understanding by us. 

  Maybe you could be more specific about how general learning algorithms would 
solve problems such as the one I'm tackling. But, I am extremely doubtful it 
can be done because the problems cannot be effectively described to such an 
algorithm. If you can't describe the problem, it can't search for solutions. If 
it can't search for solutions, you're basically stuck with evolution type 
algorithms, which require prohibitive amounts of processing.

  The reason that vision is so important for learning is that sensory 
perception is the foundation required to learn everything else. If you don't 
start with a foundational problem like this, you won't be representing the real 
nature of general intelligence problems that require extensive knowledge of the 
world to solve properly. Sensory perception is required to learn the 
information needed to understand everything else. Text and language for 
example, require extensive knowledge about the world to understand and 
especially to learn about. If you start with general learning algorithms on 
these unrepresentative problems, you will get stuck as we already have.

  So, it still makes a lot of sense to start with a concrete problem that does 
not require extensive amounts of previous knowledge to start learning. In fact, 
AGI requires that you not pre-program the AI with such extensive knowledge. So, 
lots of people are working on general learning algorithms that are 
unrepresentative of what is required for AGI because the algorithms don't have 
the knowledge needed to learn what they are trying to learn about. Regardless 
of how you look at it, my approach is definitely the right approach to AGI in 
my opinion.




  On Thu, Jul 8, 2010 at 5:02 PM, Abram Demski abramdem...@gmail.com wrote:

David,

That's why, imho, the rules need to be *learned* (and, when need be, 
unlearned). IE, what we need to work on is general learning algorithms, not 
general visual processing algorithms.

As you say, there's not even such a thing as a general visual processing 
algorithm. Learning algorithms suffer similar environment-dependence, but (by 
their nature) not as severe...

--Abram


On Thu, Jul 8, 2010 at 3:17 PM, David Jones davidher...@gmail.com wrote:

  I've learned something really interesting today. I realized that general 
rules of inference probably don't really exist. There is no such thing as 
complete generality for these problems. The rules of inference that work for 
one environment would fail in alien environments. 

  So, I have to modify my approach to solving these problems. As I studied 
over simplified problems, I realized that there are probably an infinite number 
of environments with their own behaviors that are not representative of the 
environments we want to put a general AI in. 

  So, it is not ok to just come up with any case study and solve it. The 
case 

[agi] masterpiece on an iPad

2010-07-02 Thread Mike Tintner
http://www.telegraph.co.uk/culture/culturevideo/artvideo/7865736/Artist-creates-masterpiece-on-an-iPad.html

McLuhan argues that touch is the central sense - the one that binds the others. 
He may be right. The i-devices integrate touch into intelligence.




Re: [agi] Open Sets vs Closed Sets

2010-07-02 Thread Mike Tintner
Well, first, you're not dealing with open sets in my broad sense - containing a 
potentially unlimited number of different SPECIES of things. 

[N.B.  Extension to my definitions here - I should have added that all members 
of a set have fundamental SIMILARITIES or RELATIONSHIPS - and the set is 
constrained. An open set does not incl. everything under the sun (unless that 
is the title of the set). So a set may be everything in that room or that 
street or text but will not incl. everything under the sun]

With respect to your example, a relevant broadly open-species set then might be 
"regular shapes" or "geometric shapes", incl. most shapes in geometry (or, if 
you prefer, more limited sections of geometry) - where species = different 
kinds of shapes - squares, triangles, fractals etc. I can't see how your work 
with squares will prepare you to deal with a broad range of geometric shapes - 
please explain.  AFAICT you have taken a very closed geometric space/set.

More narrowly, you raise a v. interesting question. Let us take a set of just 
one or a v. few objects, as you seem to be doing - say one or two black 
squares. The relevant set then is something like all the positionings [or 
movements] of two black squares within a given area [like a screen].   The set 
is principally one of square positions.

You make the bold claim: "I can define an infinite number of ways in which a 0 
to infinite number of black squares can move." - Are you then saying your 
program can deal with every positioning/configuration of two squares on a 
screen? [I'm making this as simple as pos.] I would say: no way. That is an open 
set of positions. And one can talk of different species of positions [tho I 
must say I haven't thought much about this] 

And this is a subject  IMO of central AGI importance - the predictability of 
object positions and movements.

If you could solve this, your program would in fairly short order become a 
great inventor - for finding new ways to position and apply objects is central 
to a vast amount of invention. But it is absolutely impossible to do what 
you're claiming - there are an infinity of non-formulaic, non-predictable - 
and therefore always new - ways to position objects - and that's why invention 
(and coming up with the idea of Chicken Kiev - putting the gravy inside instead 
of outside the food) is so hard. We're talking here about the fundamental 
nature of objects and space.




From: David Jones 
Sent: Friday, July 02, 2010 1:53 PM
To: agi 
Subject: Re: [agi] Open Sets vs Closed Sets


"Narrow AI" is a term that describes the solution to a problem, not the problem. 
It is a solution with a narrow scope. General AI on the other hand should have 
a much larger scope than narrow AI and be able to handle unforeseen 
circumstances. 

What I don't think you realize is that open sets can be described by closed 
sets. Here is an example from my own research. The set of objects I'm allowing 
in the simplest case studies so far are black squares. This is a closed set. 
But, the number, movement and relative positions of these squares is an open 
set. I can define an infinite number of ways in which a 0 to infinite number of 
black squares can move. If I define a general AI algorithm, it should be able 
to handle the infinite subset of the open set that is representative of some 
aspect of the real world. We could also study case studies that are not 
representative of the environment though.
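
Read that way, the distinction is easy to sketch: the object vocabulary is a closed set 
(black squares), while the configurations built from it form an open set. The little 
generator below is only an illustration of that reading - the names and ranges are 
invented, not taken from David's case studies.

import random

def random_scene(max_squares=10, width=100, height=100):
    # One member of the open set: any number of black squares, anywhere,
    # each with its own velocity. The object type itself is the closed part.
    n = random.randint(0, max_squares)
    return [{"kind": "black_square",
             "pos": (random.uniform(0, width), random.uniform(0, height)),
             "vel": (random.uniform(-5, 5), random.uniform(-5, 5))}
            for _ in range(n)]

def step(scene):
    # Advance every square by its velocity to produce the next frame.
    return [{**sq, "pos": (sq["pos"][0] + sq["vel"][0],
                           sq["pos"][1] + sq["vel"][1])} for sq in scene]

scene = random_scene()
print(len(scene), "squares in this particular configuration")
print(step(scene))   # the next frame of the same, arbitrarily chosen, configuration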

The example I just gave is a completely open set, yet an algorithm could handle 
such an open set, and I am designing for it. So, your claim that no one is 
studying or handling such things is not right.

Dave

On Wed, Jun 30, 2010 at 8:58 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  I'd like opinions on terminology here.

  IMO the opposition of closed sets vs open sets is fundamental to the 
difference between narrow AI and AGI.

  However I notice that these terms have different meanings to mine in maths.

  What I mean is:

  closed set: contains a definable number and *kinds/species* of objects

  open set: contains an undefinable number and *kinds/species* of objects  
(what we in casual, careless conversation describe as containing all kinds of 
things);  the rules of an open set allow adding new kinds of things ad 
infinitum

  Narrow AI's operate in artificial environments containing closed sets of 
objects - all of wh. are definable. AGI's operate in real world environments 
containing open sets of objects - some of wh. will be definable, and some  
definitely not

  To engage in any real world activity, like walking down a street or 
searching/tidying a room or reading a science book/text is to  operate with 
open sets of objects,  because the next field of operations - the next street 
or room or text -  may and almost certainly will have unpredictably different 
kinds of objects from the last.

  Any objections to my use of these terms, or suggestions that I should use 
others

Re: [agi] masterpiece on an iPad

2010-07-02 Thread Mike Tintner
that's like saying cartography or cartoons could be done a lot faster if they 
just used cameras -  ask Michael to explain what the hand can draw that the 
camera can't


From: Matt Mahoney 
Sent: Friday, July 02, 2010 2:21 PM
To: agi 
Subject: Re: [agi] masterpiece on an iPad


It could be done a lot faster if the iPad had a camera.

 
-- Matt Mahoney, matmaho...@yahoo.com 






From: Mike Tintner tint...@blueyonder.co.uk
To: agi agi@v2.listbox.com
Sent: Fri, July 2, 2010 6:28:58 AM
Subject: [agi] masterpiece on an iPad


http://www.telegraph.co.uk/culture/culturevideo/artvideo/7865736/Artist-creates-masterpiece-on-an-iPad.html

McLuhan argues that touch is the central sense - the one that binds the others. 
He may be right. The i-devices integrate touch into intelligence.





Re: [agi] masterpiece on an iPad

2010-07-02 Thread Mike Tintner
Matt: 
AGI is all about building machines that think, so you don't have to.

Matt,

I'm afraid that's equally silly and also shows a similar lack of understanding 
of sensors and semiotics.

An AGI robot won't know what it's like to live inside a human skin, and will 
have limited understanding of our life problems - different body, different 
sensors, different body metaphors, and ergo different connotations for signs it 
may use.

So, sorry, you're just going to have to keep thinking.

Funny this, because I just posted the following elsewhere:

What's The Difference between Dawkins & The Pope?

We are survival machines-robot vehicles blindly programmed to preserve the 
selfish molecules known as genes 
The Pope

Why did God make you? God made me to know him, love him and serve him in this 
world, and be with him forever in the next
Richard Dawkins

God, genes, what's the diff, ? Same basic urge to subordinate the human to a 
higher purpose, to be worshipped and adored. Is there any real difference 
between so many scientists and religious here? 

(And one might add, AGI-ers with their omnipotent SuperAGI - in nomine 
Turing, et Neumann, et Minsky.)




From: Matt Mahoney 
Sent: Friday, July 02, 2010 3:20 PM
To: agi 
Subject: Re: [agi] masterpiece on an iPad


AGI is all about building machines that think, so you don't have to.

 
-- Matt Mahoney, matmaho...@yahoo.com 






From: Mike Tintner tint...@blueyonder.co.uk
To: agi agi@v2.listbox.com
Sent: Fri, July 2, 2010 9:37:51 AM
Subject: Re: [agi] masterpiece on an iPad


that's like saying cartography or cartoons could be done a lot faster if they 
just used cameras -  ask Michael to explain what the hand can draw that the 
camera can't


From: Matt Mahoney 
Sent: Friday, July 02, 2010 2:21 PM
To: agi 
Subject: Re: [agi] masterpiece on an iPad


It could be done a lot faster if the iPad had a camera.

 
-- Matt Mahoney, matmaho...@yahoo.com 






From: Mike Tintner tint...@blueyonder.co.uk
To: agi agi@v2.listbox.com
Sent: Fri, July 2, 2010 6:28:58 AM
Subject: [agi] masterpiece on an iPad


http://www.telegraph.co.uk/culture/culturevideo/artvideo/7865736/Artist-creates-masterpiece-on-an-iPad.html

McLuhan argues that touch is the central sense - the one that binds the others. 
He may be right. The i-devices integrate touch into intelligence.





[agi] Open Sets vs Closed Sets

2010-06-30 Thread Mike Tintner
I'd like opinions on terminology here.

IMO the opposition of closed sets vs open sets is fundamental to the difference 
between narrow AI and AGI.

However I notice that these terms have different meanings to mine in maths.

What I mean is:

closed set: contains a definable number and *kinds/species* of objects

open set: contains an undefinable number and *kinds/species* of objects  (what 
we in casual, careless conversation describe as containing all kinds of 
things);  the rules of an open set allow adding new kinds of things ad 
infinitum

Narrow AI's operate in artificial environments containing closed sets of 
objects - all of wh. are definable. AGI's operate in real world environments 
containing open sets of objects - some of wh. will be definable, and some  
definitely not

To engage in any real world activity, like walking down a street or 
searching/tidying a room or reading a science book/text is to  operate with 
open sets of objects,  because the next field of operations - the next street 
or room or text -  may and almost certainly will have unpredictably different 
kinds of objects from the last.

Any objections to my use of these terms, or suggestions that I should use 
others?





Re: [agi] Open Sets vs Closed Sets

2010-06-30 Thread Mike Tintner
Thanks for the comment. I intend no comment on any other use of the terms, merely 
to ensure that my use is reasonable and not confusing. And I hope that you & 
others will agree that the conceptual distinction I am making is a fundamental 
and essential one in itself.

Whether you agree that it is fundamental to narrow AI vs AGI is another but 
also v. fundamental matter. I would maintain that there is no method of any kind 
in the whole of rationality - i.e. esp. logic, maths, and computer programming 
- that is designed for, or can deal with open sets (per my term) - and that is 
of extreme importance.


From: Jim Bromer 
Sent: Wednesday, June 30, 2010 3:13 PM
To: agi 
Subject: Re: [agi] Open Sets vs Closed Sets


The use of the terminology of mathematics is counterintuitive if what you 
want to say is that mathematical methods are inadequate to describe AGI systems 
(or something like that).
That is what I meant when I said that people don't always mean exactly what 
they seem to be saying.  You are not really defining a mathematical system, and 
you are not trying to conclude that a specific presumption is illogical, are 
you?  Or are you?
There is another problem.  We can define sets so we can define things like a 
closed set of sets each containing infinities of objects.

However by qualifying your use of concepts like this and then appealing to a 
reasonable right to be understood as you intended, you can certainly use this 
kind of metaphor.
That's my opinion.
Jim Bromer


 
On Wed, Jun 30, 2010 at 8:58 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  I'd like opinions on terminology here.

  IMO the opposition of closed sets vs open sets is fundamental to the 
difference between narrow AI and AGI.

  However I notice that these terms have different meanings to mine in maths.

  What I mean is:

  closed set: contains a definable number and *kinds/species* of objects

  open set: contains an undefinable number and *kinds/species* of objects  
(what we in casual, careless conversation describe as containing all kinds of 
things);  the rules of an open set allow adding new kinds of things ad 
infinitum

  Narrow AI's operate in artificial environments containing closed sets of 
objects - all of wh. are definable. AGI's operate in real world environments 
containing open sets of objects - some of wh. will be definable, and some  
definitely not

  To engage in any real world activity, like walking down a street or 
searching/tidying a room or reading a science book/text is to  operate with 
open sets of objects,  because the next field of operations - the next street 
or room or text -  may and almost certainly will have unpredictably different 
kinds of objects from the last.

  Any objections to my use of these terms, or suggestions that I should use 
others?






Re: [agi] Open Sets vs Closed Sets

2010-06-30 Thread Mike Tintner
PS Come to think of it, one can also talk of

open spaces vs closed spaces
open fields vs closed fields (of operation)

a space contains a set, wh. is its contents -  so the conceptual space of 
chairs contains a/the set of chairs 

I would go on to talk of every program, machine or agent working and solving 
problems in a field of operations, wh. always has a physical character , 
whereas spaces are cognitive, abstract entities.

Even if an agent is just thinking about an abstract cognitive space, (like 
chairs or politics), it is located in a physical field, and its cognitive 
operations take place in a physical medium/field like the brain/computer.

Again, I maintain, nothing in rationality, incl robotics to date AFAIK is 
designed for open sets, spaces or fields. (Of course many will *suggest* they 
are in one way or another, but won't begin to be able to demonstrate it).


From: Jim Bromer 
Sent: Wednesday, June 30, 2010 3:13 PM
To: agi 
Subject: Re: [agi] Open Sets vs Closed Sets


The use of the terminology of mathematics is counter intuitive, if, what you 
want to say is that mathematical methods are inadequate to describe AGI systems 
(or something like that.)
That is what I meant when I said that people don't always mean exactly what 
they seem to be saying.  You are not really defining a mathematical system, and 
you are not trying to conclude that a specific presumption is illogical are 
you?  Or are you?
There is another problem.  We can define sets so we can define things like a 
closed set of sets each containing infinities of objects.

However by qualifying your use of concepts like this and then appealing to a 
reasonable right to be understood as you intended, you can certainly use this 
kind of metaphor.
That's my opinion.
Jim Bromer


 
On Wed, Jun 30, 2010 at 8:58 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  I'd like opinions on terminology here.

  IMO the opposition of closed sets vs open sets is fundamental to the 
difference between narrow AI and AGI.

  However I notice that these terms have different meanings to mine in maths.

  What I mean is:

  closed set: contains a definable number and *kinds/species* of objects

  open set: contains an undefinable number and *kinds/species* of objects  
(what we in casual, careless conversation describe as containing all kinds of 
things);  the rules of an open set allow adding new kinds of things ad 
infinitum

  Narrow AI's operate in artificial environments containing closed sets of 
objects - all of wh. are definable. AGI's operate in real world environments 
containing open sets of objects - some of wh. will be definable, and some  
definitely not

  To engage in any real world activity, like walking down a street or 
searching/tidying a room or reading a science book/text is to  operate with 
open sets of objects,  because the next field of operations - the next street 
or room or text -  may and almost certainly will have unpredictably different 
kinds of objects from the last.

  Any objections to my use of these terms, or suggestions that I should use 
others?






Re: [agi] A Primary Distinction for an AGI

2010-06-29 Thread Mike Tintner
Jim,

The importance of the point here is NOT primarily about AGI systems having to 
make this distinction. Yes, a real AGI robot will probably have to make this 
distinction as an infant does - but in terms of practicality, that's an awful 
long way away.

The importance is this:  real AGI is about dealing with a world of living 
creatures in a myriad ways - those living creatures, are all fundamentally 
unpredictable. Ergo most AGI activities and problems involve dealing with a 
fundamentally unpredictable world. 

Narrow AI - and all rational technology - and all attempts-at-AGI to date are 
predicated on dealing with a predictable world. (All the additions of 
probabilities and uncertainties to date do not change this basic assumption). 
All your personal logical and mathematical exercises are based on a predictable 
world. An AGI TSP equivalent for you would be what I already said - how would 
you deal with deciding a travel route to a set of *mobile*, *unpredictable* 
destinations?

This recognition of fundamental unpredictability totally transforms the way you 
look at the world - and the kind of problems you have to deal with - makes you 
aware of  the v. different, non-rational problems that real humans do deal with.

And BTW it doesn't really matter if you are a determinist - for the plain 
reality of life is that the only evidence we have is of living creatures and 
humans behaving unpredictably. There might for argument's sake be some divine 
determinist plan revealing the underlying laws of living behaviour - but it 
sure as heck ain't available to anyone (not to mention that it doesn't exist) 
and we have to proceed accordingly.




From: Jim Bromer 
Sent: Monday, June 28, 2010 5:20 PM
To: agi 
Subject: Re: [agi] A Primary Distinction for an AGI


On Mon, Jun 28, 2010 at 11:15 AM, Mike Tintner tint...@blueyonder.co.uk wrote:


  Inanimate objects normally move  *regularly,* in *patterned*/*pattern* ways, 
and *predictably.*

  Animate objects normally move *irregularly*, * in *patchy*/*patchwork* ways, 
and *unbleedingpredictably* .


This presumption looks similar (in some profound way) to many of the 
presumptions that were tried in the early days of AI, partly because computers 
lacked memory and they were very slow.  It's unreliable just because we need 
the AGI program to be able to consider situations when, for example, inanimate 
objects move in patchy patchwork ways or in unpredictable patterns.

Jim Bromer





Re: [agi] A Primary Distinction for an AGI

2010-06-29 Thread Mike Tintner
Just off the cuff here - isn't the same true for vision? You can't learn vision 
from vision. Just as all NLP has no connection with the real world, and totally 
relies on the human programmer's knowledge of that world. 

Your visual program actually relies totally on your visual vocabulary - not 
its own. That is the inevitable penalty of processing unreal signals on a 
computer screen which are not in fact connected to the real world any more than 
the verbal/letter signals involved in NLP are.

What you need to do - what anyone in your situation with anything like your 
aspirations needs to do - is to hook up with a roboticist. Everyone here should 
be doing that.



From: David Jones 
Sent: Tuesday, June 29, 2010 5:27 PM
To: agi 
Subject: Re: [agi] A Primary Distinction for an AGI


You can't learn language from language without embedding way more knowledge 
than is reasonable. Language does not contain the information required for its 
interpretation. There is no *reason* to interpret the language into any of the 
infinite possible interpretations. There is nothing to explain, but it requires 
explanatory reasoning to determine the correct real world interpretation.


  On Jun 29, 2010 10:58 AM, Matt Mahoney matmaho...@yahoo.com wrote:


  David Jones wrote:
   Natural language requires more than the words on the page in the real 
world. Of...

  Any knowledge that can be demonstrated over a text-only channel (as in the 
Turing test) can also be learned over a text-only channel.


   Cyc also is trying to store knowledge about a super complicated world in 
simplistic forms and al...

  Cyc failed because it lacks natural language. The vast knowledge store of the 
internet is unintelligible to Cyc. The average person can't use it because they 
don't speak CycL and because they have neither the ability nor the patience to 
translate their implicit thoughts into augmented first order logic. Cyc's 
approach was understandable when they started in 1984 when they had neither the 
internet nor the vast computing power that is required to learn natural 
language from unlabeled examples like children do.


   Vision and other sensory interpretaion, on the other hand, do not require 
more info because that...

  Without natural language, your system will fail too. You don't have enough 
computing power to learn language, much less the million times more computing 
power you need to learn to see.


   
  -- Matt Mahoney, matmaho...@yahoo.com




  
  From: David Jones davidher...@gmail.com
  To: agi a...@v2.listbox.c...

  Sent: Mon, June 28, 2010 9:28:57 PM 

  Subject: Re: [agi] A Primary Distinction for an AGI


  Natural language requires more than the words on the page in the real world. 
Of course that didn't ...






Re: [agi] A Primary Distinction for an AGI

2010-06-29 Thread Mike Tintner
You're not getting where I'm coming from at all. I totally agree vision is far 
prior to language. (We've covered your points many times.) That's not the 
point - wh. is that vision is nevertheless still vastly more complex than you 
have any idea.

For one thing, vision depends on perceptualising/ conceptualising the world - a 
schematic ontology of the world - image-schematic. It almost certainly has to 
be done in a certain order, gradually built up.

No one in our culture has much idea of either what that ontology - a visual 
ontology - consists of, or how it's built up.

And for the most basic thing, you still haven't registered that your computer 
program has ZERO VISION. It's not actually looking at the world at all. It's 
BLIND - if you take the time to analyse it. A pretty fundamental error/ 
misconception.

Consequently, it also lacks a fundamental dimension of vision, wh. is 
POINT-OF-VIEW - distance of the visual medium (eg the retina) and viewing 
subject from the visual object. 

Get thee to a roboticist, & make contact with the real world.


From: David Jones 
Sent: Tuesday, June 29, 2010 6:42 PM
To: agi 
Subject: Re: [agi] A Primary Distinction for an AGI


Mike, 

THIS is the flawed reasoning that causes people to ignore vision as the right 
way to create AGI. And I've finally come up with a great way to show you how 
wrong this reasoning is. 

I'll give you an extremely obvious argument that proves that vision requires 
much less knowledge to interpret than language does. Let's say that you have 
never been to Egypt, you have never seen some particular movie before.  But if 
you see the movie, an alien landscape, an alien world, a new place or any such 
new visual experience, you can immediately interpret it in terms of spatial, 
temporal, compositional and other relationships. 

Now, go to Egypt and listen to them speak. Can you interpret it? Nope. Why?! 
Because you don't have enough information. The language itself does not contain 
any information to help you interpret it. We do not learn language simply by 
listening. We learn based on evidence from how the language is used and how it 
occurs in our daily lives. Without that experience, you cannot interpret it.

But with vision, you do not need extra knowledge to interpret a new situation. 
You can recognize completely new objects without any training except for simply 
observing them in their natural state. 

I wish people understood this better.

Dave


On Tue, Jun 29, 2010 at 12:51 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Just off the cuff here - isn't the same true for vision? You can't learn 
vision from vision. Just as all NLP has no connection with the real world, and 
totally relies on the human programmer's knowledge of that world. 

  Your visual program actually relies totally on your visual vocabulary - not 
its own. That is the inevitable penalty of processing unreal signals on a 
computer screen which are not in fact connected to the real world any more than 
the verbal/letter signals involved in NLP are.

  What you need to do - what anyone in your situation with anything like your 
aspirations needs to do - is to hook up with a roboticist. Everyone here should 
be doing that.



  From: David Jones 
  Sent: Tuesday, June 29, 2010 5:27 PM
  To: agi 
  Subject: Re: [agi] A Primary Distinction for an AGI


  You can't learn language from language without embedding way more knowledge 
than is reasonable. Language does not contain the information required for its 
interpretation. There is no *reason* to interpret the language into any of the 
infinite possible interpretations. There is nothing to explain, but it requires 
explanatory reasoning to determine the correct real world interpretation


On Jun 29, 2010 10:58 AM, Matt Mahoney matmaho...@yahoo.com wrote:


David Jones wrote:
 Natural language requires more than the words on the page in the real 
world. Of...

Any knowledge that can be demonstrated over a text-only channel (as in the 
Turing test) can also be learned over a text-only channel.


 Cyc also is trying to store knowledge about a super complicated world in 
simplistic forms and al...

Cyc failed because it lacks natural language. The vast knowledge store of 
the internet is unintelligible to Cyc. The average person can't use it because 
they don't speak Cycl and because they have neither the ability nor the 
patience to translate their implicit thoughts into augmented first order logic. 
Cyc's approach was understandable when they started in 1984 when they had 
neither the internet nor the vast computing power that is required to learn 
natural language from unlabeled examples like children do.


 Vision and other sensory interpretaion, on the other hand, do not require 
more info because that...

Without natural language, your system will fail too. You don't have enough 
computing power to learn language, much less the million times more computing 
power you

Re: [agi] Huge Progress on the Core of AGI

2010-06-28 Thread Mike Tintner

MS: I'm solving this by using an algorithm + exceptions routines.

You're saying there are predictable patterns to human and animal behaviour 
in their activities, (like sports and investing) - and in this instance how 
humans change tactics?


What empirical evidence do you have for this, apart from zero, and over 300 
years of scientific failure to produce any such laws or patterns of 
behaviour?


What evidence in the slightest do you have for your algorithm working?

The evidence to the contrary - that human and animal behaviour are not 
predictable - is pretty overwhelming.


Taking into account the above, how would you mathematically assess the cases 
for proceeding on the basis that a) living organisms  ARE predictable vs b) 
living organisms are NOT predictable?  Roughly about the same as a) you WILL 
win the lottery vs b) you WON'T win? Actually that is almost certainly being 
extremely kind - you do have a chance of winning the lottery.


--
From: Michael Swan ms...@voyagergaming.com
Sent: Monday, June 28, 2010 4:17 AM
To: agi agi@v2.listbox.com
Subject: Re: [agi] Huge Progress on the Core of AGI



On Sun, 2010-06-27 at 19:38 -0400, Ben Goertzel wrote:


Humans may use sophisticated tactics to play Pong, but that doesn't
mean it's the only way to win

Humans use subtle and sophisticated methods to play chess also, right?
But Deep Blue still kicks their ass...


If the rules of chess changed slightly, without being reprogrammed Deep 
Blue sux.
And also there is anti-Deep-Blue chess. Play chess where you avoid 
losing and taking pieces for as long as possible, to maintain a high 
number of possible outcomes, and avoid moving pieces in known 
arrangements.

Playing against another human player like this you would more than
likely lose.



The stock market is another situation where narrow-AI algorithms may
already outperform humans ... certainly they outperform all except the
very best humans...

... ben g

On Sun, Jun 27, 2010 at 7:33 PM, Mike Tintner
tint...@blueyonder.co.uk wrote:
Oh well that settles it...

How do you know then when the opponent has changed his
tactics?

How do you know when he's switched from a predominantly
baseline game say to a net-rushing game?

And how do you know when the market has changed from bull to
bear or vice versa, and I can start going short or long? Why
is there any difference between the tennis  market
situations?



I'm solving this by using an algorithm + exceptions routines.

eg Input 100 numbers - write an algorithm that generalises/compresses
the input.

ans may be
(input_is_always > 0)  // highly general

(if fail try exceptions)
// exceptions
// highly accurate exceptions
(input35 == -4)
(input75 == -50)
..
more generalised exceptions, etc

I believe such a system is similar to the way we remember things. eg -
We tend to have highly detailed memory for exceptions - we tend to
remember things about white whales more than ordinary whales. In
fact, there was a news story the other night on a returning white whale
in Brisbane, and there are additional laws to stay away from this whale 
in particular, rather than all whales in general.
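
A minimal sketch of that "general rule + exceptions" idea, as I read it (the rule and 
the data below are invented for illustration): fit the most general rule that covers 
most of the input, then store the cases that break it verbatim, keyed by where they occur.

def compress(values):
    # General rule: "every value is positive"; anything that breaks the rule
    # is stored verbatim as an exception, keyed by its index.
    rule = lambda v: v > 0
    return {i: v for i, v in enumerate(values) if not rule(v)}

def covered(values, exceptions):
    # Every value is accounted for either by the general rule or by an exception.
    return all(v > 0 or exceptions.get(i) == v for i, v in enumerate(values))

data = [3, 7, 2, -4, 9, 1, -50, 8]        # mostly positive, two "white whales"
exceptions = compress(data)
print(exceptions)                          # {3: -4, 6: -50}
print(covered(data, exceptions))           # True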











From: Ben Goertzel
Sent: Monday, June 28, 2010 12:03 AM

To: agi
Subject: Re: [agi] Huge Progress on the Core of AGI



Even with the variations you mention, I remain highly
confident this is not a difficult problem for narrow-AI
machine learning methods

-- Ben G

On Sun, Jun 27, 2010 at 6:24 PM, Mike Tintner
tint...@blueyonder.co.uk wrote:
I think you're thinking of a plodding limited-movement
classic Pong line.

I'm thinking of a line that can like a human
player move with varying speed and pauses to more or
less any part of its court to hit the ball, and then
hit it with varying speed to more or less any part of
the opposite court. I think you'll find that bumps up
the variables if not unknowns massively.

Plus just about every shot exchange presents you with
dilemmas of how to place your shot and then move in
anticipation of your opponent's return .

Remember the object here is to present a would-be AGI
with a simple but *unpredictable* object to deal with,
reflecting the realities of there being a great many
such objects in the real world - as distinct from
Dave's all too predictable objects.

The possible weakness of this pong example is that
there might at some point cease to be unknowns, as
there always are in real world situations, incl
tennis. One could always introduce them if necessary

[agi] A Primary Distinction for an AGI

2010-06-28 Thread Mike Tintner
The recent Core of AGI exchange has led me IMO to a beautiful conclusion - 
to one of the most basic distinctions a real AGI system must make, and also 
a  simple way of distinguishing between narrow AI and real AGI projects of 
any kind.


Consider - you have

a) Dave's square moving across a screen

b) my square moving across a screen

(it was a sort-of-Pong-player line, but let's make it a square box).

How do you distinguish which is animate or inanimate, alive or dead? A 
very early distinction an infant must make.


Remember inanimate objects move (or are moved) too, and in this case you can 
only see them in motion,  - so the self-starting distinction is out.


Well, obviously, if Dave's moves *regularly* (like a train or falling 
stone), it's probably inanimate. If mine moves *irregularly* - if it stops 
and starts, or slows and accelerates in irregular, even if only subtly jerky 
fashion (like one operated by a human Pong player) - it's probably 
animate. That's what distinguishes the movement of life.


Inanimate objects normally move  *regularly,* in *patterned*/*pattern* 
ways, and *predictably.*


Animate objects normally move *irregularly*, in *patchy*/*patchwork* ways, 
and *unbleedingpredictably* .


(IOW Newton is wrong - the laws of physics do not apply to living objects as 
whole objects  - that's the fundamental way we know they are living, because 
they visibly don't obey those laws - they don't normally move regularly like 
a stone falling to earth, or thrown through the sky. And we're v. impressed 
when humans like dancers or soldiers do manage by dint of great effort and 
practice to move with a high though not perfect degree of regularity and 
smoothness).


And now we have such a simple way of distinguishing between narrow AI and 
real AGI projects. Look at their objects. The really narrow AI-er  will 
always do what Dave did - pick objects that are shaped regularly, move and 
behave regularly, are patterned, and predictable. Even  at as simple a level 
as plain old squares.


And he'll pick closed, definable sets of objects.

He'll do this instinctively, because he doesn't know any different - that's 
his intellectual, logicomathematical world - one of objects that no matter 
how complex (like fractals) are always regular in shape, movement, 
patterned, come in definable sets and are predictable.


That's why Ben wants to see the world only as structured and patterned even 
though there's so much obvious mess and craziness everywhere - he's never 
known any different intellectually.


That's why Michael can't bear to even contemplate a world in which things 
and people behave unpredictably. (And Ben can't bear to contemplate a 
stockmarket that is obviously unpredictable).


If he were an artist his instincts would be the opposite - he'd go for the 
irregular and patchy and unpredictable twists. If he were drawing a box 
going across a screen, he would have to put some irregularity in 
omewhere  - put in some fits and starts and stops - there's always an 
irregular twist in the picture or the tale. An artist has to put some 
surprise and life into what he does -


If he were drawing or photographing a picture of any real world scene, it 
would be full of irregularity - irregular objects moving in irregular ways 
in irregular groupings. (One reason why so many AGI-ers can't bear to deal 
with visual images of any detail. ).


Even at one extreme if he were an abstract artist using regular objects like 
Albers, he'd still put them together in somewhat irregular ways, or 
irregular combinations of colours.


AGI is about dealing first and foremost with the real world, navigating real 
world scenes - streets, fields, rooms - manipulating real world objects, 
visually classifying real world objects, talking to real world people, 
dealing with real world texts, pictures, photographs and movies)


Not the artificial worlds of factories, and labs, and processing plants, and 
the artificial abstract objects and spaces of logic and maths.


The real world always contains a great deal of irregularly shaped objects, 
(like rocks and faces), moving and talking and signifying irregularly, in 
open, undefinable groups/sets,   in patchworks - and overall behaving 
unpredictably (and surprisingly).


That's what real AGI projects will have to deal with.

(objects here can be taken universally like things -  to denote not just 
physical objects but sign objects too like numbers, and words and pictures, 
and ideas).


P.S. Summary: The litmus, OBJECT TRUTH TEST of your AGI project - are the 
objects regular/irregular in


1) Form - Shape  ( brick vs rock)
2) Form - Structure ( pattern vs patchwork)
3) Movement/ Behaviour (incl. Signification)
4) Groups/Sets -  Closed Defined Sets vs Open Undefinable (or only partly 
definable) Sets

5) Predictable/Unpredictable

Or to put it another way, could this be part of the real, imperfect world 
rather than an artificial, perfect world?


I've only 

Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread Mike Tintner
There would be an  insidious problem with programming computers to play poker  
that in Sid's opinion  would raise the Turing test to a higher level.

  The problem would not be whether people could figure out if they were up 
against a computer. It would be whether the computer could figure out people, 
particularly the ever-changing social dynamics in a randomly selected group of 
people. Nobody at a poker table would care whether or not the computer would 
play poker like a person.

  In fact, people would welcome a computer, since computers would tend to play 
predictably. Computers would be, by definition, predictable, which would be the 
meaning of the word 'programmed.

  ' If you would play a computer simulation for a short amount of time, you 
would learn the  machine's betting patterns, adjust would mean the computer 
would be distinguishable from a person.

  Many people would play poker as predictably as a computer. They would be 
welcomed at the table, too. If you would find a predictable poker opponent and 
would learn his or her patterns, you could exploit that knowledge for profit. 
Most people,however, have been unpredictable and human unpredictability would 
be an  advantage at poker.

  To play poker successfully, computers would not only have to develop human 
unpredictability, they would have to learn to adjust to human unpredictability 
as well. Computers would fail miserably at the problem of adjusting to ever 
changing social conditions that would result from human interactions.

  That would be why beating a computer at poker has been so easy. Of course, 
the same requirement, the ability to adjust unpredictability, would apply to 
poker playing humans who would want to be successful.  You should go back and 
study how Sid had adjusted each hour in his poker session. However, as humans, 
we have been more accustomed to human unpredictability, so we have been far 
better at learning how to adjust.
http://www.holdempokergame.poker.tj/adjust-your-play-to-conditions-1.html

Of course, he's talking about dumb narrow AI purely-predicting-and-predictable 
computers, & we're all interested in building AGI computers that 
expect-unpredictability-and-can-react-unpredictably, right? (Wh. means  being 
predicting-and-predictable some of the time too. The real world is 
complicated.).


From: Jim Bromer 
Sent: Monday, June 28, 2010 6:35 PM
To: agi 
Subject: Re: [agi] A Primary Distinction for an AGI


  On Mon, Jun 28, 2010 at 11:15 AM, Mike Tintner tint...@blueyonder.co.uk 
wrote:


Inanimate objects normally move  *regularly,* in *patterned*/*pattern* 
ways, and *predictably.*

Animate objects normally move *irregularly*, * in *patchy*/*patchwork* 
ways, and *unbleedingpredictably* .



I think you made a major tactical error and just got caught acting the way you 
are constantly criticizing everyone else for acting.  --(Busted)--

You might say my interest is: how do we get a contemporary computer program to 
deal with situations in which a prevailing (or presumptuous) point of view 
should be reconsidered from different points of view, when the range of 
reasonable ways to look at a problem is not clear and the possibilities are too 
numerous for a contemporary computer to examine carefully in a reasonable 
amount of time.

For example, we might try opposites, and in this case I wondered about the case 
where we might want to consider a 'supposedly inanimate object' that moves in 
an irregular and unpredictable way.  Another example: Can unpredictable 
itself be considered predictable?  To some extent the answer is, of course it 
can.  The problem with using opposites is that it is an idealization of real 
world situations and where using alternative ways of looking at a problem may 
be useful.  Can an object be both inanimate and animate (in the sense Mike used 
the term)?  Could there be another class of things that was neither animate nor 
inanimate?  Is animate versus inanimate really the best way to describe living 
versus non living?  No?

Given that the possibilities could quickly add up and given that they are not 
clearly defined, it presents a major problem of complexity to the would be 
designer of a true AGI program.  The problem is that it is just not feasible to 
evaluate millions of variations of possibilities and then find the best 
candidates within a reasonable amount of time. And this problem does not just 
concern the problem of novel situations but those specific situations that are 
familiar but where there are quite a few details that are not initially 
understood.  While this is -clearly- a human problem, it is a much more severe 
problem for contemporary AGI.

Jim Bromer




Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Mike Tintner
Word of advice. You're creating your own artificial world here with its own 
artificial rules.

AGI is about real vision of real objects in the real world. The two do not 
relate - or compute. 

It's a pity - it's good that you keep testing yourself, & it's bad that they 
aren't realistic tests. Subject yourself to reality - it'll feel better every 
which way.


From: David Jones 
Sent: Sunday, June 27, 2010 6:31 AM
To: agi 
Subject: [agi] Huge Progress on the Core of AGI


A method for comparing hypotheses in explanatory-based reasoning: 

We prefer the hypothesis or explanation that *expects* more observations. If 
both explanations expect the same observations, then the simpler of the two is 
preferred (because the unnecessary terms of the more complicated explanation do 
not add to the predictive power). 

Why are expected events so important? They are a measure of 1) explanatory 
power and 2) predictive power. The more predictive and the more explanatory a 
hypothesis is, the more likely the hypothesis is when compared to a competing 
hypothesis.

Here are two case studies I've been analyzing from sensory perception of 
simplified visual input:
The goal of the case studies is to answer the following: How do you generate 
the most likely motion hypothesis in a way that is general and applicable to 
AGI?
Case Study 1) Here is a link to an example: animated gif of two black squares 
move from left to right. Description: Two black squares are moving in unison 
from left to right across a white screen. In each frame the black squares shift 
to the right so that square 1 steals square 2's original position and square 
two moves an equal distance to the right.
Case Study 2) Here is a link to an example: the interrupted square. 
Description: A single square is moving from left to right. Suddenly in the 
third frame, a single black square is added in the middle of the expected path 
of the original black square. This second square just stays there. So, what 
happened? Did the square moving from left to right keep moving? Or did it stop 
and then another square suddenly appeared and moved from left to right?

Here is a simplified version of how we solve case study 1:
The important hypotheses to consider are: 
1) the square from frame 1 of the video that has a very close position to the 
square from frame 2 should be matched (we hypothesize that they are the same 
square and that any difference in position is motion).  So, what happens is 
that in each two frames of the video, we only match one square. The other 
square goes unmatched.   
2) We do the same thing as in hypothesis #1, but this time we also match the 
remaining squares and hypothesize motion as follows: the first square jumps 
over the second square from left to right. We hypothesize that this happens 
over and over in each frame of the video. Square 2 stops and square 1 jumps 
over it over and over again. 
3) We hypothesize that both squares move to the right in unison. This is the 
correct hypothesis.

So, why should we prefer the correct hypothesis, #3 over the other two?

Well, first of all, #3 is correct because it has the most explanatory power of 
the three and is the simplest of the three. Simpler is better because, with the 
given evidence and information, there is no reason to desire a more complicated 
hypothesis such as #2. 

So, the answer to the question is that explanation #3 expects the most 
observations, such as: 
1) the consistent relative positions of the squares in each frame are expected. 
2) It also expects their new positions in each frame, based on velocity 
calculations. 
3) It expects both squares to occur in each frame. 

Explanation 1 ignores one square in each frame of the video because it can't 
match it. Hypothesis #1 has no reason why a new square appears in each frame 
and why one disappears; it doesn't expect these observations. In fact, 
explanation 1 doesn't expect anything that happens, because something new 
happens in each frame, which never gives it a chance to confirm its hypotheses 
in subsequent frames.

The power of this method is immediately clear. It is general and it solves the 
problem very cleanly.
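
A minimal sketch of how this comparison could be coded (the hypothesis names, 
the observation counts and the simplicity measure are illustrative 
assumptions, not the actual implementation):

from dataclasses import dataclass

@dataclass
class Hypothesis:
    name: str
    expected_obs: int    # observations the hypothesis predicts and that occur
    unexpected_obs: int  # observations it cannot account for
    complexity: int      # e.g. number of distinct motions/terms it postulates

def preference_key(h):
    # More expected observations is better; among ties, fewer unexpected
    # observations; among further ties, the simpler hypothesis wins.
    return (-h.expected_obs, h.unexpected_obs, h.complexity)

hypotheses = [
    Hypothesis("match nearest square only",          1, 2, 1),
    Hypothesis("square 1 repeatedly jumps square 2", 3, 0, 3),
    Hypothesis("both squares move right in unison",  3, 0, 2),
]

best = min(hypotheses, key=preference_key)
print("Preferred:", best.name)  # -> both squares move right in unison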

Here is a simplified version of how we solve case study 2:
We expect the original square to move at a similar velocity from left to right 
because we hypothesized that it did move from left to right and we calculated 
its velocity. If this expectation is confirmed, then it is more likely than 
saying that the square suddenly stopped and another started moving. Such a 
change would be unexpected and such a conclusion would be unjustifiable. 
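
A small sketch of that expectation check (the positions, frame data and 
matching tolerance are made-up illustrative values):

def predict_next(pos, velocity):
    # Expect the square to keep moving at its estimated velocity.
    return (pos[0] + velocity[0], pos[1] + velocity[1])

def confirms(expected, observed, tol=1.0):
    return (abs(expected[0] - observed[0]) <= tol and
            abs(expected[1] - observed[1]) <= tol)

pos_frame2, velocity = (20, 0), (10, 0)               # estimated from frames 1 and 2
expected_frame3 = predict_next(pos_frame2, velocity)  # (30, 0)

# Frame 3: the original square where we expected it, plus the new square that
# suddenly appeared in the middle of its old path.
observed_frame3 = [(30, 0), (25, 0)]

# "Kept moving" is confirmed because one observation matches the expectation,
# so it is preferred over "it stopped and a new square started moving".
kept_moving = any(confirms(expected_frame3, o) for o in observed_frame3)
print("kept-moving hypothesis confirmed:", kept_moving)  # True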

I also believe that explanations which generate fewer incorrect expectations 
should be preferred over those that generate more incorrect expectations.

The idea I came up with earlier this month regarding high frame rates to reduce 
uncertainty is still applicable. It is important that all generated hypotheses 
have as low uncertainty as possible given 

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Mike Tintner
Jim: This illustrates one of the things wrong with the dreary instantiations of 
the prevailing mind set of a group.  It is only a matter of time until you 
discover (through experiment) how absurd it is to celebrate the triumph of an 
overly simplistic solution to a problem that is, by its very potential, full of 
possibilities]

To put it more succinctly, Dave & Ben & Hutter are doing the wrong subject - 
narrow AI.  Looking for the one right prediction/explanation is narrow AI. 
Being able to generate more and more possible explanations, wh. could all be 
valid, is AGI.  The former is rational, uniform thinking. The latter is 
creative, polyform thinking. Or, if you prefer, it's convergent vs divergent 
thinking, the difference between wh. still seems to escape Dave & Ben & most 
AGI-ers.

Consider a real world application - a footballer, Maradona, is dribbling with 
the ball - you don't/can't predict where he's going next, you have to be ready 
for various directions, including the possibility that he is going to do 
something surprising and new   - even if you have to commit yourself to 
anticipating a particular direction. Ditto if you're trying to predict the path 
of an animal prey.

Dealing only with the predictable, as most do, is perhaps what Jim is getting 
at - and it is wrong for AGI. It's your capacity to deal with the open, 
unpredictable, real world that signifies you are an AGI - not the same old, 
closed, predictable, artificial world. When will you have the courage to face 
this?

Sent: Sunday, June 27, 2010 4:21 PM
To: agi 
Subject: Re: [agi] Huge Progress on the Core of AGI


On Sun, Jun 27, 2010 at 1:31 AM, David Jones davidher...@gmail.com wrote:

  A method for comparing hypotheses in explanatory-based reasoning:Here is a 
simplified version of how we solve case study 1:
  The important hypotheses to consider are: 
  1) the square from frame 1 of the video that has a very close position to the 
square from frame 2 should be matched (we hypothesize that they are the same 
square and that any difference in position is motion).  So, what happens is 
that in each two frames of the video, we only match one square. The other 
square goes unmatched.   
  2) We do the same thing as in hypothesis #1, but this time we also match the 
remaining squares and hypothesize motion as follows: the first square jumps 
over the second square from left to right. We hypothesize that this happens 
over and over in each frame of the video. Square 2 stops and square 1 jumps 
over it over and over again. 
  3) We hypothesize that both squares move to the right in unison. This is the 
correct hypothesis.

  So, why should we prefer the correct hypothesis, #3 over the other two?

  Well, first of all, #3 is correct because it has the most explanatory power 
of the three and is the simplest of the three. Simpler is better because, with 
the given evidence and information, there is no reason to desire a more 
complicated hypothesis such as #2. 

  So, the answer to the question is because explanation #3 expects the most 
observations, such as: 
  1) the consistent relative positions of the squares in each frame are 
expected. 
  2) It also expects their new positions in each frame based on velocity 
calculations. 
  3) It expects both squares to occur in each frame. 

  Explanation 1 ignores 1 square from each frame of the video, because it can't 
match it. Hypothesis #1 doesn't have a reason for why a new square appears 
in each frame and why one disappears. It doesn't expect these observations. In 
fact, explanation 1 doesn't expect anything that happens because something new 
happens in each frame, which doesn't give it a chance to confirm its hypotheses 
in subsequent frames.

  The power of this method is immediately clear. It is general and it solves 
the problem very cleanly.
  Dave 


Nonsense.  This illustrates one of the things wrong with the dreary 
instantiations of the prevailing mind set of a group.  It is only a matter of 
time until you discover (through experiment) how absurd it is to celebrate the 
triumph of an overly simplistic solution to a problem that is, by its very 
potential, full of possibilities.

For one example, even if your program portrayed the 'objects' as moving in 
'unison', I doubt that the program calculated or represented those objects in 
unison.  I also doubt that their positioning was literally based on moving 
'right', since their movements were presumably calculated with incremental 
mathematics associated with screen positions.  And, looking for a technicality 
that represents the failure of over-reliance on the efficacy of a simplistic 
over-generalization, I only have to point out that they did not only move to 
the right, so your description was either wrong or only partially 
representative of the apparent movement.

As long as the hypotheses are kept simple enough to eliminate the less 

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Mike Tintner
 by the instance is the same. But, this cannot be the case because if we click 
the icon when no notepad window previously existed, it will be blank. Based on 
these two experiences we can construct an explanatory hypothesis: clicking the 
icon simply opens a blank window. We also get evidence for this conclusion 
when we see the two windows side by side. If we see the old window with its 
content still intact, we will realize that clicking the icon did not seem to 
have cleared it.
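
A toy sketch of that hypothesis test (the observation records are made up for 
illustration):

# Each experience records the open notepad windows before and after clicking
# the icon.
experiences = [
    {"before": [],                     "after": [{"content": ""}]},
    {"before": [{"content": "notes"}], "after": [{"content": "notes"},
                                                 {"content": ""}]},
]

def consistent(exp):
    # Hypothesis: clicking the icon adds exactly one new blank window and
    # leaves the content of any existing windows untouched.
    before, after = exp["before"], exp["after"]
    return after[:len(before)] == before and after[len(before):] == [{"content": ""}]

print(all(consistent(e) for e in experiences))  # True -> the hypothesis survives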

Dave



On Sun, Jun 27, 2010 at 12:39 PM, Jim Bromer jimbro...@gmail.com wrote:

  On Sun, Jun 27, 2010 at 11:56 AM, Mike Tintner tint...@blueyonder.co.uk 
wrote:

Jim: This illustrates one of the things wrong with the dreary 
instantiations of the prevailing mind set of a group.  It is only a matter of 
time until you discover (through experiment) how absurd it is to celebrate the 
triumph of an overly simplistic solution to a problem that is, by its very 
potential, full of possibilities]

To put it more succinctly, Dave & Ben & Hutter are doing the wrong subject 
- narrow AI.  Looking for the one right prediction/explanation is narrow AI. 
Being able to generate more and more possible explanations, wh. could all be 
valid, is AGI.  The former is rational, uniform thinking. The latter is 
creative, polyform thinking. Or, if you prefer, it's convergent vs divergent 
thinking, the difference between wh. still seems to escape Dave & Ben & most 
AGI-ers.

  Well, I agree with what (I think) Mike was trying to get at, except that I 
understood that Ben, Hutter and especially David were not talking about 
prediction only as the specification of a single prediction when many possible 
predictions (i.e. expectations) were appropriate for consideration.  

  For some reason none of you ever seem to talk about methods that could be 
used to react to a situation with the flexibility to integrate the recognition 
of different combinations of familiar events, and to classify unusual events 
so they could be interpreted as more familiar *kinds* of events or as novel 
forms of events which might then be integrated.  For me, that seems to be one 
of the unsolved problems.  Being able to say that 'the squares move to the 
right in unison' is a better description than 'the squares are dancing the 
Irish jig' is not really cutting edge.

  As far as David's comment that he was only dealing with the core issues, I 
am sorry but you were not dealing with the core issues of contemporary AGI 
programming.  You were dealing with a primitive problem that has been 
considered for many years, but it is not a core research issue.  Yes we have to 
work with simple examples to explain what we are talking about, but there is a 
difference between an abstract problem that may be central to your recent work 
and a core research issue that hasn't really been solved.

  The entire problem of dealing with complicated situations is that these 
narrow AI methods haven't really worked.  That is the core issue.

  Jim Bromer










Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Mike Tintner
Well, Ben, I'm glad you're quite sure because you haven't given a single 
reason why. Clearly you should be Number One advisor on every Olympic team, 
because you've cracked the AGI problem of how to deal with opponents that can 
move (whether themselves or balls) in multiple, unpredictable directions, that 
is at the centre of just about every field and court sport.

I think if you actually analyse it, you'll find that you can't predict and 
prepare for  the presumably at least 50 to 100 spots on a table tennis board/ 
tennis court that your opponent can hit the ball to, let alone for how he will 
play subsequent 10 to 20 shot rallies   - and you can't devise a deterministic 
program to play here. These are true, multiple-/poly-solution problems rather 
than the single solution ones you are familiar with.

That's why all of these sports have normally hundreds of different competing 
philosophies and strategies, - and people continually can and do come up with 
new approaches and styles of play to the sports overall - there are endless 
possibilities.

I suspect you may not play these sports, because one factor you've obviously 
ignored (although I stressed it) is not just the complexity but that in sports 
players can and do change their strategies - and that would have to be a given 
in our computer game. In real world activities, you're normally *supposed* to 
act unpredictably at least some of the time. It's a fundamental subgoal. 

In sport, as in investment, past performance is not a [sure] guide to future 
performance - companies and markets may not continue to behave as they did in 
the past -  so that alone buggers any narrow AI predictive approach.

P.S. But the most basic reality of these sports is that you can't cover every 
shot or move your opponent may make, and that gives rise to a continuing stream 
of genuine dilemmas . For example, you have just returned a ball from the 
extreme, far left of your court - do you now start moving rapidly towards the 
centre of the court so that you will be prepared to cover a ball to the 
extreme, near right side - or do you move more slowly?  If you don't move 
rapidly, you won't be able to cover that ball if it comes. But if you do move 
rapidly, your opponent can play the ball back to the extreme left and catch you 
out. 

It's a genuine dilemma and gamble - just like deciding whether to invest in 
shares. And competitive sports are built on such dilemmas. 

Welcome to the real world of AGI problems. You should get to know it.

And as this example (and my rock wall problem) indicate, these problems can be 
as simple and accessible as fairly easy narrow AI problems. 

From: Ben Goertzel 
Sent: Sunday, June 27, 2010 7:33 PM
To: agi 
Subject: Re: [agi] Huge Progress on the Core of AGI



That's a rather bizarre suggestion Mike ... I'm quite sure a simple narrow AI 
system could be constructed to beat humans at Pong ;p ... without teaching us 
much of anything about intelligence...

Very likely a narrow-AI machine learning system could *learn* by experience to 
beat humans at Pong ... also without teaching us much 
of anything about intelligence...

Pong is almost surely a toy domain ...

ben g


On Sun, Jun 27, 2010 at 2:12 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Try ping-pong -  as per the computer game. Just a line (/bat) and a 
square(/ball) representing your opponent - and you have a line(/bat) to play 
against them

  Now you've got a relatively simple true AGI visual problem - because if the 
opponent returns the ball somewhat as a real human AGI does,  (without the 
complexities of spin etc just presumably repeatedly changing the direction (and 
perhaps the speed)  of the returned ball) - then you have a fundamentally 
*unpredictable* object.

  How will your program learn to play that opponent - bearing in mind that the 
opponent is likely to keep changing and even evolving strategy? Your approach 
will have to be fundamentally different from how a program learns to play a 
board game, where all the possibilities are predictable. In the real world, 
past performance is not a [sure] guide to future performance. Bayes doesn't 
apply.

  That's the real issue here - it's not one of simplicity/complexity - it's 
that your chosen worlds all consist of objects that are predictable, because 
they behave consistently, are shaped consistently, and come in consistent, 
closed sets - and can only basically behave in one way at any given point. AGI 
is about dealing with the real world of objects that are unpredictable because 
they behave inconsistently, even contradictorily, are shaped inconsistently and 
come in inconsistent, open sets - and can behave in multi-/poly-ways at any 
given point. These differences apply at all levels from the most complex to the 
simplest.

  Dealing with consistent (and regular) objects is no preparation for dealing 
with inconsistent, irregular objects. It's a fundamental error

  Real AGI animals and humans

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Mike Tintner
I think you're thinking of a plodding limited-movement classic Pong line.

I'm thinking of a line that can, like a human player, move with varying speed 
and pauses to more or less any part of its court to hit the ball, and then hit 
it with varying speed to more or less any part of the opposite court. I think 
you'll find that bumps up the variables, if not the unknowns, massively.

Plus just about every shot exchange presents you with dilemmas of how to place 
your shot and then move in anticipation of your opponent's return.

Remember the object here is to present a would-be AGI with a simple but 
*unpredictable* object to deal with, reflecting the realities of there being a 
great many such objects in the real world - as distinct from Dave's all too 
predictable objects.
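
A toy sketch of the kind of opponent I mean (the strategies, the switch rate 
and the court coordinates are purely illustrative):

import random

# Lateral target of each return shot, on a court normalised to [0, 1].
STRATEGIES = {
    "cross_court":   lambda: random.uniform(0.7, 1.0),
    "down_the_line": lambda: random.uniform(0.0, 0.3),
    "mix_it_up":     lambda: random.uniform(0.0, 1.0),
}

class SwitchingOpponent:
    def __init__(self):
        self.strategy = random.choice(list(STRATEGIES))

    def return_shot(self):
        # Every so often the opponent changes strategy, as a human player
        # would, so statistics gathered on its past shots keep going stale.
        if random.random() < 0.1:
            self.strategy = random.choice(list(STRATEGIES))
        return STRATEGIES[self.strategy]()

opponent = SwitchingOpponent()
print([round(opponent.return_shot(), 2) for _ in range(10)])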

The possible weakness of this pong example is that there might at some point 
cease to be unknowns, as there always are in real world situations, incl 
tennis. One could always introduce them if necessary - allowing say creative 
spins on the ball.

But I doubt that it will be necessary here for the purposes of anyone like Dave 
-  and v. offhand and with no doubt extreme license this strikes me as not a 
million miles from a hyper version of the TSP problem, where the towns can move 
around, and you can't be sure whether they'll be there when you arrive.  Or is 
there an obviously true solution for that problem too? [Very convenient these 
obviously true solutions].



From: Jim Bromer 
Sent: Sunday, June 27, 2010 8:53 PM
To: agi 
Subject: Re: [agi] Huge Progress on the Core of AGI


Ben:  I'm quite sure a simple narrow AI system could be constructed to beat 
humans at Pong ;p
Mike: Well, Ben, I'm glad you're quite sure because you haven't given a 
single reason why.

Although Ben would have to give us an actual example (of a pong program that 
could beat humans at Pong) just to make sure that it is not that difficult a 
task, it seems like such an obviously true statement that there is almost no 
incentive for anyone to try it.  However, there are chess programs that can 
beat the majority of people who play chess without outside assistance.
Jim Bromer


On Sun, Jun 27, 2010 at 3:43 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Well, Ben, I'm glad you're quite sure because you haven't given a single 
reason why. Clearly you should be Number One advisor on every Olympic team, 
because you've cracked the AGI problem of how to deal with opponents that can 
move (whether themselves or balls) in multiple, unpredictable directions, that 
is at the centre of just about every field and court sport.

  I think if you actually analyse it, you'll find that you can't predict and 
prepare for  the presumably at least 50 to 100 spots on a table tennis board/ 
tennis court that your opponent can hit the ball to, let alone for how he will 
play subsequent 10 to 20 shot rallies   - and you can't devise a deterministic 
program to play here. These are true, multiple-/poly-solution problems rather 
than the single solution ones you are familiar with.

  That's why all of these sports have normally hundreds of different competing 
philosophies and strategies, - and people continually can and do come up with 
new approaches and styles of play to the sports overall - there are endless 
possibilities.

  I suspect you may not play these sports, because one factor you've obviously 
ignored (although I stressed it) is not just the complexity but that in sports 
players can and do change their strategies - and that would have to be a given 
in our computer game. In real world activities, you're normally *supposed* to 
act unpredictably at least some of the time. It's a fundamental subgoal. 

  In sport, as in investment, past performance is not a [sure] guide to future 
performance - companies and markets may not continue to behave as they did in 
the past -  so that alone buggers any narrow AI predictive approach.

  P.S. But the most basic reality of these sports is that you can't cover every 
shot or move your opponent may make, and that gives rise to a continuing stream 
of genuine dilemmas . For example, you have just returned a ball from the 
extreme, far left of your court - do you now start moving rapidly towards the 
centre of the court so that you will be prepared to cover a ball to the 
extreme, near right side - or do you move more slowly?  If you don't move 
rapidly, you won't be able to cover that ball if it comes. But if you do move 
rapidly, your opponent can play the ball back to the extreme left and catch you 
out. 

  It's a genuine dilemma and gamble - just like deciding whether to invest in 
shares. And competitive sports are built on such dilemmas. 

  Welcome to the real world of AGI problems. You should get to know it.

  And as this example (and my rock wall problem) indicate, these problems can 
be as simple and accessible as fairly easy narrow AI problems. 

  From: Ben Goertzel 
  Sent: Sunday, June 27, 2010 7:33 PM
  To: agi

Re: [agi] The problem with AGI per Sloman

2010-06-25 Thread Mike Tintner
Colin,

Thanks. Do you have access to any of the full articles? I can't make too 
informed comments about the quality of work of all the guys writing for this 
journal, but they're certainly raising v. important questions - and this 
journal appears to have been unjustly ignored by this group.

Sloman, for example, seems to be exploring again the idea of a metaprogram (or 
I'd say, general program vs specialist program), wh. is the core of AGI, as 
Ben appears to be only v. recently starting to acknowledge:

A methodology for making progress is summarised and a novel requirement 
proposed for a theory of how human minds work: the theory should support a 
single generic design for a learning, developing system


From: Colin Hales 
Sent: Friday, June 25, 2010 4:30 AM
To: agi 
Subject: Re: [agi] The problem with AGI per Sloman


Not sure if this might be fodder for the discussion. The International Journal 
of Machine Consciousness (IJMC) has just issued Vol 2 #1 here: 
http://www.worldscinet.com/ijmc/02/0201/S17938430100201.html

It has a Sloman article and invited commentary on it.

cheers
colin hales









[agi] The problem with AGI per Sloman

2010-06-24 Thread Mike Tintner
One of the problems of AI researchers is that too often they start off with an 
inadequate
understanding of the problems and believe that solutions are only a few years 
away. We need an educational system that not only teaches techniques and 
solutions, but also an understanding of problems and their difficulty - which 
can come from a broader multi-disciplinary education. That could speed up 
progress.
A. Sloman

( who else keeps saying that?)




Re: [agi] The problem with AGI per Sloman

2010-06-24 Thread Mike Tintner
[BTW Sloman's quote is a month old]

I think he means what I do - the end-problems that an AGI must face. Please 
name me one true AGI end-problem being dealt with by any AGI-er - apart from 
the toybox problem. 

As I've repeatedly said - AGI-ers simply don't address or discuss AGI 
end-problems.  And they do indeed start with solutions - just as you are 
doing - re the TSP problem and the problem of combinatorial complexity, both of 
wh. have in fact nothing to do with AGI, and for neither of wh. can you 
provide a single example of a relevant AGI problem.

One could not make up this total avoidance of the creative problem.

And AGI-ers are not just shockingly but obscenely narrow in their 
disciplinarity / the range of their problem interests - maths, logic, standard 
narrow AI computational problems, NLP, a little robotics and that's about it - 
with, by my rough estimate, some 90% of human and animal real world 
problemsolving of no interest to them. That esp. includes their chosen key 
fields of language, conversation and vision - all of wh. are much more the 
province of the *arts* than the sciences, when it comes to AGI.

The fact that creative, artistic problemsolving presents a totally different 
paradigm to that of programmed, preplanned problemsolving, is of no interest to 
them - because they lack what educationalists would call any kind of 
metacognitive ( interdisciplinary) scaffolding to deal with it.

It doesn't matter that programming itself, and developing new formulae and 
theorems - all the forms, IOW, of creative maths, logic, programming, science 
and technology, the very problemsolving upon wh. they absolutely depend - also 
come under artistic problemsolving.

So there is a major need for broadening AI & AGI education, both in terms of 
culturally creative problemsolving and true culture-wide multidisciplinarity.





From: Jim Bromer 
Sent: Thursday, June 24, 2010 5:05 PM
To: agi 
Subject: Re: [agi] The problem with AGI per Sloman


Both of you are wrong.  (Where did that quote come from, by the way?  What year 
did he write or say that?)  

An inadequate understanding of the problems is exactly what has to be expected 
of researchers (both professional and amateur) when they are facing a 
completely novel pursuit.  That is why we have endless discussions like these.  
What happened over and over again in AI research is that the amazing advances 
in computer technology always seemed to suggest that similar advances in AI 
must be just over the horizon.  And the reality is that there have been major 
advances in AI.  In the 1970s a critic stated that he wouldn't believe that AI 
was possible until a computer was able to beat him at chess.  Well, guess what 
happened - and guess what conclusion he did not derive from the experience.  
One of the problems with critics is that they can be as far off as those whose 
optimism is absurdly unwarranted.

If the lack of a broader multi-disciplinary effort were the obstacle to 
creating AGI, we would have AGI by now.  It should be clear to anyone who 
examines the history of AI or the present-day reach of computer programming 
that a multi-disciplinary effort is not the key to creating effective AGI.  
Computers have become pervasive in modern life, and if it was just a matter of 
getting people with different kinds of interests involved, it would have been 
done by now.  It is a little like saying that the key to safe deep sea 
drilling is to rely on the expertise of companies that make billions and 
billions of dollars and which stand to lose billions through mistakes.  While 
that should make sense, if you look a little more closely, you can see that it 
doesn't quite work out that way in the real world. 

Jim Bromer


On Thu, Jun 24, 2010 at 7:33 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  One of the problems of AI researchers is that too often they start off with 
an inadequate
  understanding of the problems and believe that solutions are only a few years 
away. We need an educational system that not only teaches techniques and 
solutions, but also an understanding of problems and their difficulty — which 
can come from a broader multi-disciplinary education. That could speed up 
progress.
  A. Sloman

  ( who else keeps saying that?)








Re: [agi] The problem with AGI per Sloman

2010-06-24 Thread Mike Tintner
John,

You're making a massively important point, wh. I have been thinking about 
recently.

I think it's more useful to say that AGI-ers are thinking in terms of building 
a *complete AGI system* (rather than a person), wh. could range from a simple 
animal robot to fantasies of an all-intelligent brain-in-a-box.

No AGI-er has (and no team of supercreative AGI-ers could have) even a remotely 
realistic understanding of how massively complex a feat this would be.

I've changed recently to thinking that realistic AGI in the near future will 
have to concentrate instead (or certainly have one major focus) on what might 
be called local AGI as opposed to global AGI - getting a robot able to do 
just *one* or two things in a truly general way - with a very well-defined goal 
- rather than a true all-round AGI robot system. (more of this another time).

Look at Venter - he is not trying to build a complete artificial cell in one 
go - that would be insane, and yet it would still be only a tiny fraction of 
the insanity of present AGI system-builders' goals. He is taking it one narrow 
step at a time - one relatively narrow part at a time. That is a law of both 
natural and machine evolution to wh. I don't think there are any exceptions - 
from simple to complex in gradual, progressive stages.




From: John G. Rose 
Sent: Thursday, June 24, 2010 6:20 PM
To: agi 
Subject: RE: [agi] The problem with AGI per Sloman


I think some confusion occurs where AGI researchers want to build an artificial 
person versus artificial general intelligence. An AGI might be just a 
computational model running in software that can solve problems across domains. 
 An artificial person would involve much else in addition to AGI.

 

With intelligence engineering and other engineering, that artificial person 
could be built, or at least some interface where it appears to be a person. And 
a huge benefit is in having artificial people to do things that real people do. 
But pursuing AGI need not be the pursuit of building artificial people.

 

Also, an AGI need not be able to solve ALL problems initially. Coming out and 
asking why some AGI theory wouldn't be able to figure out how to solve some 
problem like, say, world hunger - I mean, WTF is that?

 

John

 

From: Mike Tintner [mailto:tint...@blueyonder.co.uk] 
Sent: Thursday, June 24, 2010 5:33 AM
To: agi
Subject: [agi] The problem with AGI per Sloman

 

One of the problems of AI researchers is that too often they start off with an 
inadequate
understanding of the problems and believe that solutions are only a few years 
away. We need an educational system that not only teaches techniques and 
solutions, but also an understanding of problems and their difficulty - which 
can come from a broader multi-disciplinary education. That could speed up 
progress.

A. Sloman

 

( who else keeps saying that?)

 
 

 





