Re: [agi] The problem with AGI per Sloman

2010-06-25 Thread Mike Tintner
Colin,

Thanks. Do you have access to any of the full articles? I can't comment very 
knowledgeably on the quality of work of all the guys writing for this 
journal, but they're certainly raising v. important questions - and this 
journal appears to have been unjustly ignored by this group.

Sloman, for example, seems to be exploring again the idea of a metaprogram (or 
as I'd say, a general program vs. specialist programs), which is the core of AGI, 
something Ben appears only very recently to be starting to acknowledge:

"A methodology for making progress is summarised and a novel requirement 
proposed for a theory of how human minds work: the theory should support a 
single generic design for a learning, developing system."


From: Colin Hales 
Sent: Friday, June 25, 2010 4:30 AM
To: agi 
Subject: Re: [agi] The problem with AGI per Sloman


Not sure if this might be fodder for the discussion. The International Journal 
of Machine Consciousness (IJMC) has just issued Vol 2 #1 here: 
http://www.worldscinet.com/ijmc/02/0201/S17938430100201.html

It has a Sloman article and invited commentary on it.

cheers
colin hales







---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


[agi] AGI Alert: DARPA wants quintillion-speed computers

2010-06-25 Thread The Wizard
http://www.networkworld.com/community/node/62808


"Self aware system software, including operating system, runtime system, I/O
system, system management/administration, resource management and means of
exposing resources, and external environments"
-- 
Carlos A Mejia

Taking life one singularity at a time.
www.Transalchemy.com





[agi] Re: AGI Alert: DARPA wants quintillion-speed computers

2010-06-25 Thread The Wizard
Omnipresent High Performance Computing (OHPC) initiative

Seriously DARPA we already get it. ;-)

On Fri, Jun 25, 2010 at 1:22 PM, The Wizard key.unive...@gmail.com wrote:


 http://www.networkworld.com/community/node/62808


 "Self aware system software, including operating system, runtime system, I/O
 system, system management/administration, resource management and means of
 exposing resources, and external environments"
 --
 Carlos A Mejia

 Taking life one singularity at a time.
 www.Transalchemy.com




-- 
Carlos A Mejia

Taking life one singularity at a time.
www.Transalchemy.com





Re: [agi] Questions for an AGI

2010-06-25 Thread Travis Lenting
I hope I don't misrepresent him, but I agree with Ben (at least my
interpretation) when he said, "We can ask it questions like, 'How can we make
a better A(G)I that can serve us in more different ways without becoming
dangerous?' ... It can help guide us along the path to a positive
singularity." I'm pretty sure he was also saying that at first it should just
be a question-answering machine with a reliable goal system, and that
development should stop if the goal system proves unstable before the machine
gets too smart. I like the idea that we should create an automated
cross-disciplinary scientist and engineer (if you even separate the two), and
that NLP not modeled on the human brain is the best proposal for a benevolent
and resourceful superintelligence that enables a positive singularity and all
its unforeseen perks.
On Wed, Jun 23, 2010 at 11:04 PM, The Wizard key.unive...@gmail.com wrote:


 If you could ask an AGI anything, what would you ask it?
 --
 Carlos A Mejia

 Taking life one singularity at a time.
 www.Transalchemy.com






Re: [agi] Questions for an AGI

2010-06-25 Thread Ian Parker
One of the first things AGI must provide is software that monitors itself and
corrects itself when it is not working. For over a day now I have been unable
to access Google Groups: the Internet access simply loops and never gets
anywhere. If Google had any true AGI it would:

a) spot that it was looping;
b) failing that, provide the user with an interface that would enable the
fault to be corrected online.

This may seem an absolutely trivial point, but I feel it is absolutely
fundamental. First of all, you do not pass the Turing test by being
absolutely dumb. I suppose you might say that conversing with Google is
rather like Tony Hayward answering questions in Congress: "Sorry, we cannot
process your request at this time (or any other time, for that matter)." You
don't pass it either (this is Google Translate I am referring to) by saying
that US forces have committed atrocities in Burma when they have been out of
SE Asia since the end of the Vietnam war.

Another instance: Google denied access to my site, saying that I had breached
the terms and conditions. I hadn't, and they said they did not know why. You
do not pass the TT either by walking up to someone and accusing them of
running a paedophile website when they weren't.

I would say that the first task of AGI (this is actually a definition) would
be to provide software that is fault tolerant and self-correcting. After all,
if we have two copies of an AGI we will have (by definition) a fault-tolerant
system. If a request cannot be processed, an AGI system should know why not
and, hopefully, be able to do something about it.

The lack of any real fault tolerance in our systems to me underlines just
how far off we really are.


  - Ian Parker

On 24 June 2010 07:10, Dana Ream dmr...@sonic.net wrote:

  How do you work?

  --
 *From:* The Wizard [mailto:key.unive...@gmail.com]
 *Sent:* Wednesday, June 23, 2010 11:05 PM
 *To:* agi
 *Subject:* [agi] Questions for an AGI


 If you could ask an AGI anything, what would you ask it?
 --
 Carlos A Mejia

 Taking life one singularity at a time.
 www.Transalchemy.com






Re: [agi] The problem with AGI per Sloman

2010-06-25 Thread rob levy
 But there is some other kind of problem.  We should have figured it out by
 now.  I believe that there must be some fundamental computational problem
 that is standing as the major obstacle to contemporary AGI.  Without solving
 that problem we are going to have to wade through years of incremental
 advances.  I believe that the most likely basis of the problem is efficient
 logical satisfiability.  It makes the most sense given the nature of the
 computer and the nature of the best theories of mind.
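To make the satisfiability claim above concrete: deciding whether a
propositional formula in CNF has a satisfying assignment is NP-complete, and
the obvious backtracking search is exponential in the worst case. A minimal
sketch (the integer-literal encoding and the function name are mine, not from
the post):

```python
def satisfiable(clauses, assignment=None):
    """Backtracking SAT over CNF clauses given as lists of signed ints:
    3 means variable 3 is true, -3 means variable 3 is false.
    Returns a satisfying assignment dict, or None if unsatisfiable."""
    if assignment is None:
        assignment = {}
    # Simplify the clause set under the current partial assignment.
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue  # clause already satisfied
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:
            return None  # clause falsified: dead end, backtrack
        simplified.append(rest)
    if not simplified:
        return assignment  # every clause satisfied
    # Branch on the first unassigned variable: try true, then false.
    var = abs(simplified[0][0])
    for value in (True, False):
        result = satisfiable(simplified, {**assignment, var: value})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(satisfiable([[1, 2], [-1, 3], [-2, -3]]))
```

Whether efficient satisfiability really is the missing piece for AGI is the
contested claim here; the sketch just shows what the problem is.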



I think there must be a computational or physical/computational problem we
have yet to clearly identify, one that goes along with an objection certain
philosophers like Chalmers have made about neural correlates; roughly: why
should one level of analysis or type of structure (e.g. neurons, brain
regions, dynamically synchronized ensembles of neurons, or even the
organism-environment system) have this magic property of consciousness?

Since to me at least it seems obvious that the ecological level is the
relevant level of analysis at which to find the meaning relevant to
biological organisms, my sense is that we can reduce the above problem to a
question about meaning/significance, that is: what is it about a system that
makes it unified/integrated such that its relationship to other things
constitutes a landscape of relevant meaning to the system as a whole?

I think that an explanation of meaning-to-a-system is either the same as an
explanation of first-hand subjectivity or is closely tied to it. If
subjectivity turns out to be part of a physical problem and not a purely
computational one, then we probably won't solve the above-posed problem
without such a physical explanation at least being clarified (not necessarily
fully explained, just as we don't know what electricity really is, for
example).

All computer software and situated robots that have ever been made are
composed of actions or expressions that are meaningful to people, but
software or robots have never been created that can refer to their own
actions in a way that demonstrates skillful knowledge indicating that they
are organized in a truly semantic way, as opposed to a merely programmatic
way.





Re: [agi] The problem with AGI per Sloman

2010-06-25 Thread Jim Bromer
On Fri, Jun 25, 2010 at 7:35 PM, rob levy r.p.l...@gmail.com wrote:
I think there must be a computational or physical/computational problem we
have yet to clearly identify that goes along with an objection certain
philosophers like Chalmers have made about neural correlates, roughly: why
should one level of analysis or type of structure (eg neurons, brain
regions, dynamically synchronized ensembles of neurons,  or even the
organism-environment system), have this magic property of consciousness?

I don't think that it will be understood fully during our lifetimes, and I
don't think that the unknown aspects of this are relevant to computer
programming. However, the question of subjective meaning is very relevant.

rob levy r.p.l...@gmail.com wrote:
what is it about a system that makes it unified/integrated such that its
relationship to other things constitutes a landscape of relevant meaning to
the system as a whole?
I think that an explanation of meaning-to-a-system is either the same as an
explanation of first-hand subjectivity or is closely tied to it. If
subjectivity turns out to be part of a physical problem and not a purely
computational one, then we probably won't solve the above-posed problem
without such a physical explanation at least being clarified (not necessarily
fully explained, just as we don't know what electricity really is, for
example).

That is interesting.  I wonder if there is a way to make that sense of
subjectivity and subjective meaning a basic quality of a simple AGI program,
and whether it could be a valuable elemental method of analyzing the IO data
environment.  I think objectives are an important method of testing ideas
(and idea-like impressions and reactions).  And this combination of setting
objectives to test ideas and then further developing new ideas does seem to
lend itself to developing a sense of subjective experience in relation to the
'objects' of the IO data environment.


