Re: [agi] Can We Start Somewhere was Approximations of Knowledge

2008-06-26 Thread Bob Mottram
2008/6/26 Steve Richfield [EMAIL PROTECTED]:
 Perhaps we can completely sidestep the countless contentious issues
 regarding what intelligence is, what an AGI is, what consciousness is, what
 is needed, etc., with an entirely different approach:


It's the usual pattern for participants on AI forums to end up
endlessly trying to define what intelligence or consciousness is.
After many years of watching this happen, the take-home message for me
was that it is not possible to define such things by philosophical
enquiry or introspection alone.



 Note in passing that intelligence alone does NOT assure success


Yes.  Many creatures get along quite happily with only modest
computing resources.  Carrying around a large brain is expensive, and
eventually you come up against diminishing returns where the cost of
running or maintaining the computing system outweighs its advantages
in terms of adaptive behavior.



 Please post concrete examples of useful activities that you hope that AGIs
 (rather than humans) will be performing.


A skill which any agent operating in the real world needs is the
ability to sense its surroundings well enough to use that information
as a basis for making decisions.  The reason the robots we have today
are not very smart is not that they lack sufficiently sophisticated
mechanical designs, but that in most cases they can only sense their
environment in very limited ways.  It turns out that merely gathering
data from sensors is not enough: the system needs to filter and
integrate this raw data into some kind of meaningful theatre.  The
process of maintaining a mental theatre involves synchronisation
between high- and low-level systems.  This is essentially what I'm
trying to do, and being an engineer I'm also trying to do it in a way
which can be implemented in a practical and economical manner.
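
To make this concrete, here is a minimal sketch (an illustrative sketch,
not Mottram's code) of the high/low-level split: a low-level loop smooths
noisy raw readings, and the high-level layer reads only the integrated
estimate, with a lock keeping the two layers in sync.

import random
import threading
import time

class Theatre:
    """High-level world model fed by a filtered low-level sensor stream."""
    def __init__(self, alpha=0.2):
        self.alpha = alpha                # smoothing factor for the filter
        self.estimate = 0.0
        self.lock = threading.Lock()

    def low_level_update(self, raw):
        # Exponential smoothing: cheap noise rejection on raw sensor data.
        with self.lock:
            self.estimate += self.alpha * (raw - self.estimate)

    def high_level_read(self):
        # The decision-making layer never sees raw data, only the theatre.
        with self.lock:
            return self.estimate

def sensor_loop(theatre, n=100):
    for _ in range(n):
        raw = 1.0 + random.gauss(0.0, 0.3)   # noisy reading of true value 1.0
        theatre.low_level_update(raw)
        time.sleep(0.001)

theatre = Theatre()
worker = threading.Thread(target=sensor_loop, args=(theatre,))
worker.start()
worker.join()
print("integrated estimate: %.2f" % theatre.high_level_read())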




Re: [agi] Approximations of Knowledge

2008-06-26 Thread Richard Loosemore

Jim Bromer wrote:



----- Original Message -----
From: Richard Loosemore

Jim,

I'm sorry:  I cannot make any sense of what you say here.

I don't think you are understanding the technicalities of the argument I 
am presenting, because your very first sentence ("But we can invent a 
'mathematics' or a program that can ...") is just completely false.  In a 
complex system it is not possible to use analytic mathematics to 
predict the global behavior of the system given only the rules that 
determine the local mechanisms.  That is the very definition of a 
complex system (note:  this is a complex system in the technical sense 
of that term, which does not mean a complicated system in ordinary 
language).
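
A standard concrete example of this (an illustrative sketch, not part of
the original message): elementary cellular automaton Rule 110 has a purely
local three-cell update rule, yet its global behaviour is Turing-complete,
so there can be no general analytic shortcut from the local rule to the
long-run global pattern; you have to run it.

RULE = 110   # the update rule, encoded as 8 output bits

def step(cells):
    # Each new cell depends only on itself and its two neighbours (wraparound).
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 59 + [1]   # start from a single live cell
for _ in range(15):
    print("".join(".#"[c] for c in row))
    row = step(row)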

Richard Loosemore
--
I don't feel that you are seriously interested in discussing the subject with 
me.  Let me know if you ever change your mind.


No, I am seriously interested in discussing the subject with you:  I 
just explained a problem with the statement you made.  If I were not 
interested in discussing it, I would not have gone to that trouble.


I suspect you are offended by my comment that I cannot make sense of 
what you say.  This is just my honest reaction to what you wrote.




Richard Loosemore




Re: [agi] Can We Start Somewhere was Approximations of Knowledge

2008-06-26 Thread Richard Loosemore

Steve Richfield wrote:

To all,

Perhaps we can completely sidestep the countless contentious issues
regarding what intelligence is, what an AGI is, what consciousness is, what
is needed, etc., with an entirely different approach:

Perhaps we could create a short database (maybe only a dozen or so entries)
of sample queries, activities, tasks, etc., that YOU would like to see YOUR
future AGIs performing to earn their electricity. Perhaps this could pull us
out of the esoteric morass that recent postings have fallen into, and put us
back onto a track of real-world goals to design to.

Everyone here has heard my discussions as to how really simple programs can
solve certain classes of difficult problems stated in NL, which was my
own initial goal. Many have argued that this goal was WAY too simple to be
interesting in an AGI forum. OK, if that is indeed the case (and I have no
reason to believe that this is not the case), then how about some really
concrete examples of what others here expect from AGIs, beyond merely a
willingness to work for electricity instead of food.

Note in passing that intelligence alone does NOT assure success, as
otherwise we would all be rich and way too busy to spend time here on this
forum. Why does anyone here expect that intelligent machines will succeed
where intelligent men routinely fail? There seems to be a nearly religious
fervor here, perhaps to hide our own personal failures to apply our
intelligence to propel us to personal success. Is there anything at all
substantial here, or is this simply a society of mutual delusion?

Hopefully we will either get a good list from this thread, or it will become
obvious to everyone that AGIs are a wasted effort.

Please post *concrete* examples of useful activities that you hope that AGIs
(rather than humans) will be performing. Perhaps you can even think of
something that you would be doing if only you were smarter, which of course
would be the very best example. No, designing AGIs does NOT count, unless of
course you first exhibit some other non-circular value.

Jiri previously noted that perhaps AGIs would best be used to manage the
affairs of humans so that we can do as we please without bothering with the
complex details of life. Of course, people and some (communist) governments
now already perform this function, so while this might be a
potential application, it doesn't count for this posting, as I am looking
for things that people either can not do at all, or can not do adequately
well.

BTW, note that computers were first justified on the basis of their use in
weapons computations (e.g. trajectory tables) and simulations (e.g. atomic
weapons). Perhaps there is some similar niche for AGIs that is big enough to
fund their development?

 Thanks in advance for your *concrete* examples.


Steve,

Some of the discussion on this list is about important questions, such as 
what is needed to achieve AGI.  If you call that an "esoteric morass" 
then frankly this might be a forum that is not for you.


However, in the spirit of positive engagement, I will answer some of 
your questions.


An AGI is a generally intelligent system, so it would have the same 
capabilities as a human intellect.  These do not need to be listed.


The tasks that an AGI needs to be able to perform are, therefore, simply 
all the tasks that any human intelligence is able to perform.


But beyond that, it would be capable of understanding itself, because 
(unlike a human brain) its internal mechanisms will be (a) the result of 
a planned design process, and (b) completely open to external probes. 
This fact of its workings being open to inspection would mean that rapid 
improvements could almost certainly be made as a result of watching the 
system in regular operation.  So, unlike a human brain, an AGI would 
lead directly to the creation of faster, more efficient types of 
intelligence, and as a result we could reasonably expect an AGI to lead 
to much-greater-than-human levels of intelligence.  An AGI working at 
1000 times the speed of a human being would do in one year what that 
human would have done in a thousand years (this calculation partially 
depends on other factors that I will not go into here).


Also, unlike a human, the control system (motivations, emotions, drives) 
of the AGI would be designable, and so could be made to lack the flaws 
that obviously exist in the human mind.  Mainly, this means that the 
violently competitive motivations that were put into the brain [sic] by 
evolution would not need to be present.  This would result in Difference 
Number 2:  an AGI could be built in such a way as to be more trustworthy 
than a human, possibly to the extent that it would be completely 
empathic to the goals and aspirations of the human race.  This would be 
a big difference indeed.


Thirdly, an AGI would be able to engage in high-bandwidth communications 
with other AGIs, and as a result it would be possible for a team of AGIs 
to cooperate on ...

Re: [agi] Can We Start Somewhere was Approximations of Knowledge

2008-06-26 Thread Russell Wallace
On Thu, Jun 26, 2008 at 6:12 AM, Steve Richfield
[EMAIL PROTECTED] wrote:
 Perhaps we could create a short database (maybe only a dozen or so entries)
 of sample queries, activities, tasks, etc., that YOU would like to see YOUR
 future AGIs performing to earn their electricity.

The approach I have in mind is to start with reasoning about
algorithms, so possible tasks for a medium-term AI might include:

Prove simple theorems.
Given a formal specification, write a program that meets it.
Given the rules of a game, write a program that can play it with modest skill.
Design cellular automata to carry out a given task, or estimate
whether a given CA has certain properties or can be made to do certain
things.
Estimate a lower bound for values of the busy beaver function for small N
(see the sketch after this list).
Analyze the correctness of a program relative to a formal spec.
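
For the busy beaver item, a brute-force sketch (an illustrative sketch
under the standard 2-symbol formulation, not Russell's proposal): enumerate
every n-state Turing machine, run each for a bounded number of steps, and
keep the best score among those that halt. Any score found this way is a
valid lower bound for Sigma(n).

from itertools import product

def bb_lower_bound(n_states=2, max_steps=200):
    """Best score among halting machines: a lower bound for Sigma(n_states)."""
    keys = list(product(range(n_states), (0, 1)))            # (state, symbol)
    actions = list(product((0, 1), (-1, 1), list(range(n_states)) + ["H"]))
    best = 0
    for choice in product(actions, repeat=len(keys)):        # every machine
        table = dict(zip(keys, choice))
        tape, pos, state = {}, 0, 0
        for _ in range(max_steps):
            write, move, nxt = table[(state, tape.get(pos, 0))]
            tape[pos] = write
            pos += move
            if nxt == "H":                                   # halting transition
                best = max(best, sum(tape.values()))         # count the 1s
                break
            state = nxt
    return best

print(bb_lower_bound(2))   # Sigma(2) = 4, which the search recovers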

In the longer term, once it gets to the point of being able to
usefully handle visual/spatial information, its capabilities might
include:

Searching photographs without being limited to human labeling.
Design of physical artifacts.
Checking of human-created or machine-assisted designs.
Watching a security camera feed, ignoring benign activity but alerting
a human operator in the event of suspicious activity (see the sketch
after this list).
Programming robots to carry out tasks in e.g. transport and construction.
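
For the security camera item, a toy frame-differencing sketch (illustrative
only; deployed systems are far more sophisticated): a frame is flagged only
when the changed area is large, so small benign flicker is ignored.

import numpy as np

ALERT_FRACTION = 0.05   # assumed threshold: alert if >5% of pixels changed

def suspicious(prev, frame, pixel_delta=30):
    # Per-pixel difference, then ask how much of the image actually changed.
    changed = np.abs(frame.astype(int) - prev.astype(int)) > pixel_delta
    return changed.mean() > ALERT_FRACTION

rng = np.random.default_rng(0)
prev = rng.integers(0, 255, (120, 160), dtype=np.uint8)
frame = prev.copy()
frame[40:90, 60:120] = 255          # simulate a large object entering view
print(suspicious(prev, frame))      # True -> alert the human operator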




Re: [agi] Can We Start Somewhere was Approximations of Knowledge

2008-06-26 Thread William Pearson
2008/6/26 Steve Richfield [EMAIL PROTECTED]:

 Jiri previously noted that perhaps AGIs would best be used to manage the
 affairs of humans so that we can do as we please without bothering with the
 complex details of life. Of course, people and some (communist) governments
 now already perform this function, so while this might be a
 potential application, it doesn't count for this posting, as I am looking
 for things that people either can not do at all, or can not do adequately
 well.
snip

 Thanks in advance for your concrete examples.


Personally I concentrate on things humans could do, but that they
don't have the time to do. Mostly I want to do Intelligence
Augmentation through augmented reality.

Highlight on a heads-up display:
 - food that meets certain health guidelines or ethical standards, by
object recognition and searching on-line information
 - books that might be interesting (again by searching information), or
that other people the user knows have read.

None of these should have to be explicitly programmed or configured by the
user; the system should pick them up by interacting with the user and
other machines. They should also only be done in contexts where the
user is looking at the items involved (in a book store or library), and
not just all the time.
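
A hypothetical sketch of that context gating (all names and data below are
invented for illustration, not Will's design): recognised objects are
matched against preferences the system has picked up by watching the user,
and highlights fire only in a matching context.

from dataclasses import dataclass

@dataclass
class Observation:
    obj: str       # output of an object recogniser, e.g. "book:Dune"
    context: str   # where the user is right now, e.g. "bookshop"

# Preferences learned by watching the user, not hand-configured.
learned_interests = {
    "bookshop": {"Dune", "Accelerando"},
    "supermarket": {"fair-trade", "low-salt"},
}

def highlight(obs):
    # Fire only when the current context has a matching learned interest.
    wanted = learned_interests.get(obs.context, set())
    return any(tag in obs.obj for tag in wanted)

print(highlight(Observation("book:Dune", "bookshop")))   # True
print(highlight(Observation("book:Dune", "street")))     # False: wrong context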

  Will Pearson




Re: [agi] Approximations of Knowledge

2008-06-26 Thread Abram Demski
Ah, so you do not accept AIXI either.

Put this way, your complex system dilemma applies only to pure AGI,
and not to any narrow AI attempts, no matter how ambitious. But I
suppose other, totally different reasons (such as P != NP, if so) can
block those.

Is this the best way to understand your argument? Meaning, is the key
idea "intelligence is a complex global property, so we can't define
it"? If so, my original blog post is way off. My interpretation was
more like "intelligence is a complex global property, so we can't
predict its occurrence based on local properties". These are two very
different arguments. Perhaps you are arguing both points?

On Wed, Jun 25, 2008 at 6:20 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
[..]
 The confusion in our discussion has to do with the assumption you listed
 above:  ...I am implicitly assuming that we have some exact definition of
 intelligence, so that we know what we are looking for...

 This is precisely what we do not have, and which we will quite possibly
 never have.
[..]
 Richard Loosemore




Re: [agi] Can We Start Somewhere was Approximations of Knowledge

2008-06-26 Thread Steve Richfield
Russell and William,

OK, I think that I am finally beginning to get it. No one here is really
planning to do wonderful things that people can't reasonably do, though
Russell has pointed out some improvements which I will comment on
separately.

I am interested in things that people can NOT reasonably do. Note that many
computer programs have been written to far outperform people in specific
tasks, and my own Dr. Eliza would seem to far exceed human capability in
handling large amounts of qualitative knowledge that works within its
paradigm limits. Hence, it would seem that I may have stumbled into the
wrong group (opinions invited).

Continuing with comments on part of Russell's posting...

On 6/26/08, Russell Wallace [EMAIL PROTECTED] wrote:

 Searching photographs without being limited to human labeling.


Unsupervised learning? This could be really good for looking for strange
things in blood samples. Now, I routinely order a manual differential white
count that requires someone to manually look over the blood cells with a
microscope. These typically cost ~US$25. Note that the routine counting of
cell types in blood samples is already done by camera-driven AI programs in
most labs.
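
A toy sketch of that unsupervised idea (the two per-cell features below
are invented for illustration): flag cells whose measured features are
statistical outliers, with no human labelling involved.

import numpy as np

rng = np.random.default_rng(1)
# Hypothetical per-cell features: (diameter in um, circularity).
cells = rng.normal(loc=[7.5, 0.9], scale=[0.5, 0.03], size=(200, 2))
cells[42] = [14.0, 0.55]            # one planted oddity among normal cells

# Flag any cell more than 4 standard deviations out on either feature.
z = np.abs((cells - cells.mean(axis=0)) / cells.std(axis=0))
print(np.where(z.max(axis=1) > 4)[0])   # -> [42]: worth a human look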

Design of physical artifacts.
 Checking of human-created or machine-assisted designs.


Something like AutoCAD's mechanical simulations?

Watching a security camera feed, ignoring benign activity but alerting
 a human operator in the event of suspicious activity.


Present systems already highlight any changes.

Programming robots to carry out tasks in e.g. transport and construction.


Similar to the program-by-example programming that is used with present
automobile welding robots?

This stuff all sounds pretty puny compared to the awe-inspiring hype of the
Singularity people, and there is already high-tech competition to AGIs for
much of it. None of these things would seem to be worth any great social
risk. None of these things would seem to be worth devoting anyone's life
toward. Am I missing something here?

I believe that a complete revolution in man's dealing with his problems is
right here to be had. Dr. Eliza certainly illustrates that there is probably
enough low-hanging fruit to be worth immediately redesigning the Internet to
collect it, and thereby promptly extend the lives of most of the people on
Earth. However, my present interest is NOT to restrict the next-generation
Internet to one particular capability more advanced than what we now have,
but to either:
1.  Figure out enough about problems and their solutions to do the job for
once and for all time, or
2.  Figure out how to do the job in an open-ended sort of way so that
capability can grow as we figure out more about solving problems.

Unfortunately, no one here appears to be interested in understanding this
landscape of solving future hyper-complex problems; instead, apparently
everyone wishes to leave this work to some future AGI that cannot possibly
be constructed in the short time frame that I have in mind. Of course,
future AGIs are doomed to fail at such efforts, just as people have failed
for the last million years or so.

Steve Richfield


