Re: [agi] Can We Start Somewhere was Approximations of Knowledge

2008-06-27 Thread William Pearson
2008/6/27 Steve Richfield [EMAIL PROTECTED]:
 Russell and William,

 OK, I think that I am finally beginning to get it. No one here is really
 planning to do wonderful things that people can't reasonably do, though
 Russell has pointed out some improvements which I will comment on
 separately.

I still don't think you do. The general, as far as I am concerned, means
it can reconfigure itself to do things it couldn't do previously,
just as a human learns differentiation. So when you ask for a shopping
list of things for it to do, you will get our first steps and things
we know can be done (because they have been done by humans) for
testing etc...

Consider a human-computer team. Now the human can code/configure the
machine to help her/him do pretty much anything. I just want to shift
that coding/configuring work to the machine. That is hard to convey in
concrete examples, although I have tried.

 I am interested in things that people can NOT reasonably do. Note that many
 computer programs have been written to way outperform people in specific
 tasks, and my own Dr. Eliza would seem to far exceed human capability in
 handling large amounts of qualitative knowledge that work within its
 paradigm limits. Hence, it would seem that I may have stumbled into the
 wrong group (opinions invited).

Probably so ;) The solution to specific tasks is not within the
remit of the study of generality.  You would be like someone going up
to Turing and asking him what specific tasks the ACE was going to solve.
If he said cryptography, you would go on about the Bombe cracking
Enigma.

 Unfortunately, no one here appears to be interested in understanding this
 landscape of solving future hyper-complex problems, but instead apparently
 everyone wishes to leave this work to some future AGI, that cannot possibly
 be constructed in the short time frame that I have in mind. Of course,
 future AGIs are doomed to fail at such efforts, just as people have failed
 for the last million years or so.

If humans and AGIs are doomed to fail at the task, perhaps it is impossible?

  Will Pearson


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=106510220-47b225
Powered by Listbox: http://www.listbox.com


Re: [agi] Can We Start Somewhere was Approximations of Knowledge

2008-06-27 Thread Mike Tintner
Steve: No one here is really planning to do wonderful things that people can't 
reasonably do,

Why don't you specify examples of the problems you see as appropriate for 
exploration?

Your statement *sounds* a little confused - it may not be. The big challenge 
for an AGI is to solve the problems that people *can* solve reasonably all the 
time - although often with mixed results - but that narrow AIs can't.

Those are the problematic, wicked, ill-structured problems where the solver, by 
definition, doesn't know what to do. From learning to walk, talk, have 
conversations about food, the weather or football, play with your toys, get 
your mommy to change her mind, write a story on what you did yesterday, plan 
your morning or a visit to the shops, write an essay, or a post to your AGI 
group,  or play football, or hide-and-seek, or even decide whether to take a 
left or a right round the person blocking your way etc etc.

These are the problems where you have to work out what to do as you go along, 
involving various forms of ad hoc thought, investigation, research, and 
experiment. You only have a rough idea of where you want to get to, and you 
have to go out and *find* a way to get there, *without* a predefined set of 
instructions for looking.

You're not likely to find anyone in the whole of AGI, present and past, to 
discuss these sorts of problems with you, let alone any still more complicated 
problems, like invention (of, say, a cheaper gasoline alternative).

Logic problems, maths problems, programming problems, Turing machine problems, 
where you do know what to do, yes. But not problems where you don't.

The insurmountable difficulty that everyone has is that condition of problems 
where you don't know what to do. Everyone AFAIK accepts this definition/goal, 
and then proceeds to cheat on it completely. They say, sure, I'll deal with that 
problem, but I'll just get someone to tell my AGI what to do first. And they 
do all this without blushing, or any sense of irony. They don't even know 
they're cheating.

It's not just this group, it's everyone AFAIK without exception.

You see, if you don't know what to do, then you don't have a definitive 
method, or set of rules for solving the problem, (and that includes rules for 
how to investigate or research the problem). You certainly have *some* methods 
and rules, but not - and never - a complete set of rules. It shouldn't be that 
hard to accept what I've just said - it's all common sense. Anyone disagree 
with me? Anyone think AGI involves solving problems where you *do* know exactly 
what to do?

But here's why people have such difficulties and are so congenitally incapable 
of facing the problem of AGI directly. Let me redefine what I've just said - 
i.e. AGI involves solving problems where you don't know what to do. Any 
proper redefinition must involve: AGI involves solving problems WITHOUT an 
algorithm.

That's the part people find so tough. But there's no way around it. It's 
obvious - if you have an algorithm, you know what to do, you're cheating.

But no one in AGI knows how to design or instruct a machine to work without 
algorithms - or, to be more precise, *complete* algorithms. It's unthinkable - 
it seems like asking someone not to breathe... until, like every problem, you 
start thinking about it.





Re: [agi] Can We Start Somewhere was Approximations of Knowledge

2008-06-27 Thread William Pearson
I'm going to ignore the oversimplifications of a variety of people's positions.


 But no one in AGI knows how to design or instruct a machine to work without
 algorithms - or, to be more precise, *complete* algorithms. It's unthinkable
 - it seems like asking someone not to breathe...  until, like every
 problem, you start thinking about it.

Have you managed to create/design a toy system that can do this, even very
basically? If so, do share.

 Will Pearson




Re: [agi] Can We Start Somewhere was Approximations of Knowledge

2008-06-27 Thread Russell Wallace
On Fri, Jun 27, 2008 at 6:32 AM, Steve Richfield
[EMAIL PROTECTED] wrote:
 Unsupervised learning? This could be really good for looking for strange
 things in blood samples. Now, I routinely order a manual differential white
 count that requires someone to manually look over the blood cells with a
 microscope. These typically cost ~US$25. Note that the routine counting of
 cell types in blood samples is already done by camera-driven AI programs in
 most labs.

I hadn't thought of blood samples, but that's an excellent example, thanks.

 Something like AutoCAD's mechanical simulations?

Yes, except better. An engineer wrote an excellent post on what would
be useful here:
http://groups.google.com/group/sci.nanotech/browse_thread/thread/ada3a83d1a284969/b713922d343e5371?lnk=stq=#b713922d343e5371

 Present systems already highlight any changes.

Yep. Now let's extend that to highlighting suspicious changes while
ignoring a cat chasing a mouse.

 Similar to the program-by-example programming that is used with present
 automobile welding robots?

Yes, except able to work in more complex environments than an assembly line.

 This stuff all sounds pretty puny compared to the awe-inspiring hype of the
 Singularity people

Well, I'm a _former_ Singularitarian :) But...

 None of these things would seem to be worth devoting anyone's life
 toward. Am I missing something here?

Oh, when you asked for specific examples, I thought you meant something a
little nearer term than ultimate visions.

My ultimate vision? I would break the bounds that currently trammel
our species, stem the global loss of fifty million lives a year and
open the path to space colonization. I would make Earth-descended
sentient life immortal. Imagine smart CAD programs helping design cell
repair nanomachines. Imagine an O'Neill habitat being built by a swarm
of robots with their human supervisors in pressurized environments.
None of this is beyond the conceptual limits of the human mind, but it
is beyond what humans can _reasonably_ do with present-day technology,
because it takes too much time. Unlike many posters here, I don't
believe human-equivalent AGI is feasible in any meaningful timescale.
Nor do I believe it's necessary. Humans can continue to make the
high-level decisions. What we need, to accomplish great things, is
machines that can handle the details.

That, I hope you'll agree, is worth devoting one's life toward?

 I believe that a complete revolution in man's dealing with his problems is
 right here to be had. Dr. Eliza certainly illustrates that there is probably
 enough low hanging fruit to be worth immediately redesigning the Internet to
 collect it and promptly extend the lives of most of the people on Earth.

That sounds interesting, can you be more specific on what you would do and how?




Re: [agi] Approximations of Knowledge

2008-06-27 Thread Richard Loosemore

Abram Demski wrote:

Ah, so you do not accept AIXI either.


Goodness me, no ;-).  As far as I am concerned, AIXI is a mathematical 
formalism with loaded words like 'intelligence' attached to it, and then 
the formalism is taken as being about the real things in the world (i.e. 
intelligent systems) that those words normally signify.





Put this way, your complex system dilemma applies only to pure AGI,
and not to any narrow AI attempts, no matter how ambitious. But I
suppose other, totally different reasons (such as P != NP, if so) can
block those.

Is this the best way to understand your argument? Meaning, is the key
idea that intelligence is a complex global property, so we can't define
it? If so, my original blog post is way off. My interpretation was
more like intelligence is a complex global property, so we can't
predict its occurring based on local properties. These are two very
different arguments. Perhaps you are arguing both points?


My feeling is that it is a mixture of the two.  My main concern is not 
to *assert* that intelligence is a complex global property, but to ask 
"Is there a risk that intelligence is a complex global property?" and 
then to follow that with a second question, namely "If it is complex, 
then what impact would this have on the methodology of AGI?".


The answers that I tried to bring out in that paper were that (1) there 
is a substantial risk that all intelligent systems must be at least 
partially complex (reason:  nobody seems to know how to build a complete 
intelligence without including a substantial dose of the kind of tangled 
mechanisms that almost always give rise to complexity), and (2) the 
impact on AGI methodology is potentially devastating, and (disturbingly) 
so subtle that it would be possible for a skeptic to deny it forever.


The impact would be devastating because the current approach to AI, if 
applied to a situation in which the target was a complex system, would 
just run around in circles forever, always building systems that were 
kind of smart, but which did not scale up to the real thing, or which 
could only work if we hand-craft every piece of knowledge that the 
system uses, and so on.  In fact, the predicted progress rate in AI 
research would show exactly the type of pattern that has existed for the 
last fifty years.  As I said in another response to someone recently, 
all of the progress that has been made is essentially a result of AI 
researchers implicitly using their own intuitions about how their minds 
work, while at the same time (mostly) denying that they are doing this.


So, going back to your question.  I do think that if intelligence is a 
(partially) complex global property, then it cannot be defined in a way 
that allows us to go from a definition to a prescription for a mechanism 
(i.e., we cannot simply set it up as an optimization problem).  That is 
not the direct purpose of my argument, but it is a corollary.  Your second 
point is closer to the goal of my argument, but I would rephrase it to 
say that getting a real intelligence (an AGI) to work will probably 
require at least part of the system to have a disconnected relationship 
between global and local, so in that sense we would not be able to 
'predict' the occurrence of intelligence based on local properties.


Remember the bottom line.  My only goal is to ask how different 
methodologies would fare if intelligence is complex.





Richard Loosemore




Re: [agi] Can We Start Somewhere was Approximations of Knowledge

2008-06-27 Thread Richard Loosemore

Steve Richfield wrote:

Russell and William,

OK, I think that I am finally beginning to get it. No one here is really
planning to do wonderful things that people can't reasonably do


Huh?

Not true.

I gave you a list of features that go a mind-boggling way beyond what 
people can do.  I do not quite understand how you came to this conclusion.




Richard Loosemore





Re: [agi] Re: Can We Start P.S.

2008-06-27 Thread Steve Richfield
Mike,

Isn't this sort of behavior completely logical? If you try something new and
it is bad, then you have had one bad experience. However, if it is good, you
can repeat it and have many good experiences. Hence, the *average* value of
trying something new can be many times the value of the best thing that you
now have access to, because of this multiplicative effect.
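The multiplicative argument can be put in numbers; a toy expected-value sketch, where every payoff figure is an assumption chosen purely for illustration:

```python
# Toy model of the argument above: a bad novel experience costs you once,
# but a good one can be repeated, multiplying its value many times over.
def expected_value_of_trying(p_good, payoff_good, payoff_bad, repeats):
    # One bad experience if it fails, `repeats` good experiences if it works.
    return p_good * payoff_good * repeats + (1 - p_good) * payoff_bad

# Assumed numbers: a 30% chance the new option is good, repeatable 10 times.
ev_new = expected_value_of_trying(p_good=0.3, payoff_good=1.0,
                                  payoff_bad=-1.0, repeats=10)
ev_stay = 1.0  # value of sticking with the best known option once
print(ev_new, ev_stay)  # trying the new option wins despite mostly failing
```

With these (invented) numbers the expected value of exploring exceeds the sure payoff even though failure is more likely than success, which is the multiplicative effect in miniature.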

IMHO, illogical researchers were looking for an illogical (to them)
phenomenon that was in fact completely logical.

Jim's God
Steve Richfield
===
On 6/27/08, Mike Tintner [EMAIL PROTECTED] wrote:

 Jim's God was obviously listening to my last post, because I immediately
 came across this. I wouldn't make too much of it directly, but let me
 redefine its significance - there are parts of the brain and body that LIKE
 not knowing what to do, that LIKE creative, non-algorithmic problems. All
 you've got to do now is work out how to design a computer like that:

 Neuroscientists discover a sense of adventure

 Wellcome Trust scientists have identified a key region of the brain which
 encourages us to be adventurous. The region, located in a primitive area of
 the brain, is activated when we choose unfamiliar options, suggesting an
 evolutionary advantage for sampling the unknown. It may also explain why
 re-branding of familiar products encourages us to pick them off the supermarket
 shelves.

 In an experiment carried out at the Wellcome Trust Centre for Neuroimaging
 at UCL (University College London), volunteers were shown a selection of
 images, which they had already been familiarised with. Each card had a
 unique probability of reward attached to it and over the course of the
 experiment, the volunteers would be able to work out which selection would
 provide the highest rewards. However, when unfamiliar images were
 introduced, the researchers found that volunteers were more likely to take a
 chance and select one of these options than continue with their familiar -
 and arguably safer - option.

 Using fMRI scanners, which measure blood flow in the brain to highlight
 which areas are most active, Dr Bianca Wittmann and colleagues showed that
 when the subjects selected an unfamiliar option, an area of the brain known
 as the ventral striatum lit up, indicating that it was more active. The
 ventral striatum is in one of the evolutionarily primitive regions of the
 brain, suggesting that the process can be advantageous and will be shared by
 many animals.

 Seeking new and unfamiliar experiences is a fundamental behavioural
 tendency in humans and animals, says Dr Wittmann. It makes sense to try
 new options as they may prove advantageous in the long run. For example, a
 monkey who chooses to deviate from its diet of bananas, even if this
 involves moving to an unfamiliar part of the forest and eating a new type of
 food, may find its diet enriched and more nutritious.

 When we make a particular choice or carry out a particular action which
 turns out to be beneficial, it is rewarded by a release of neurotransmitters
 such as dopamine. These rewards help us learn which behaviours are
 preferable and advantageous and worth repeating. The ventral striatum is one
 of the key areas involved in processing rewards in the brain. Although the
 researchers cannot say definitively from the fMRI scans how novelty seeking
 is being rewarded, Dr Wittmann believes it is likely to be through dopamine
 release.

 However, whilst rewarding the brain for making novel choices may prove
 advantageous in encouraging us to make potentially beneficial choices, it
 may also make us more susceptible to exploitation.

 I might have my own favourite choice of chocolate bar, but if I see a
 different bar repackaged, advertising its 'new, improved flavour', my search
 for novel experiences may encourage me to move away from my usual choice,
 says Dr Wittmann. This introduces the danger of being sold 'old wine in a
 new skin' and is something that marketing departments take advantage of.

 Rewarding the brain for novel choices could have a more serious side
 effect, argues Professor Nathaniel Daw, now at New York University, who also
 worked on the study.

 The novelty bonus may be useful in helping us make complex, uncertain
 decisions, but it clearly has a downside, says Professor Daw. In humans,
 increased novelty-seeking may play a role in gambling and drug addiction,
 both of which are mediated by malfunctions in dopamine release.

 Source: Wellcome Trust
 http://www.physorg.com/news133617811.html
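The task described is, in effect, a multi-armed bandit problem; a toy simulation of reward learning with a decaying novelty bonus shows the qualitative effect. Every number and the bonus scheme here are assumptions for illustration, not the study's actual model:

```python
import random

def run_bandit(novelty_bonus, trials=2000, seed=0):
    """Greedy value learning over three options; a bonus that decays with
    familiarity makes the novel option attractive despite its unknown value."""
    rng = random.Random(seed)
    probs = [0.6, 0.4, 0.5]   # assumed reward probability per image
    values = [0.5, 0.5, 0.0]  # options 0-1 familiarised, option 2 novel
    counts = [50, 50, 0]      # familiarisation trials already experienced
    picks_novel = 0
    for _ in range(trials):
        scores = [v + novelty_bonus / (1 + c) for v, c in zip(values, counts)]
        arm = scores.index(max(scores))
        reward = 1.0 if rng.random() < probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # running mean
        if arm == 2:
            picks_novel += 1
    return picks_novel

# Without a bonus the novel option is never tried; with one, it gets sampled.
print(run_bandit(novelty_bonus=0.0), run_bandit(novelty_bonus=1.0))
```

The point of the sketch is only that adding any bonus for unfamiliarity shifts choices toward the unfamiliar option, matching the behavioural finding.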









Re: [agi] Can We Start Somewhere was Approximations of Knowledge

2008-06-27 Thread Steve Richfield
William,

On 6/27/08, William Pearson [EMAIL PROTECTED] wrote:

  Unfortunately, no one here appears to be interested in understanding this
  landscape of solving future hyper-complex problems, but instead
 apparently
  everyone wishes to leave this work to some future AGI, that cannot
 possibly
  be constructed in the short time frame that I have in mind. Of course,
  future AGIs are doomed to fail at such efforts, just as people have
 failed
  for the last million years or so.
 
 If Humans and AGIs are doomed to fail at the task perhaps it is impossible?


The trick is to construct aids, e.g. bulldozers to move dirt, computers to
perform large numbers of high-precision calculations, etc. Difficult
problems from disparate domains have common STRUCTURES (e.g. figure 6)
composed of different elements. Computers can easily unravel these
structures, much as they can (finally, now at ~2GHz) easily find good
approximate solutions to the traveling salesman problem for reasonable
real-world-sized instances.
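For concreteness, a nearest-neighbour heuristic is one simple way a computer gets a reasonable tour quickly; a minimal sketch, with the city coordinates invented for illustration:

```python
import math

def nearest_neighbour_tour(cities):
    """Greedy TSP heuristic: repeatedly visit the closest unvisited city.
    Not optimal in general, but fast on real-world-sized inputs."""
    unvisited = list(range(1, len(cities)))
    tour = [0]
    while unvisited:
        last = cities[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, cities[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Four cities on a unit square: the greedy tour walks them in ring order.
square = [(0, 0), (0, 1), (1, 1), (1, 0)]
print(nearest_neighbour_tour(square))
```

Heuristics like this (and far better ones, e.g. 2-opt) are what make "easy" true in practice; an exact solution is NP-hard.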

This is very similar to the different way that Chess masters look at a Chess
situation - as a compendium of structural elements, each with its own
strengths and weaknesses that they know how to exploit - whereas beginning
Chess players only see pieces arrayed on a board.

Programs that work these structures aren't all that complex, though the
relationships expressed in their knowledge base may be very complex indeed.

My continuing challenge is that apparently no one prior to the Dr. Eliza
project ever looked closely at the structures of difficult problems, so this
is not taught in any school and is completely unknown to all audiences, which
makes it hard to carry on a conversation about it, even when that
conversation would seem to be a guiding force in technological development.
Hence, people on this forum presume that more intelligence is needed to
solve more difficult problems, when relatively simple programs can exceed
any potential unguided intelligence, at least in some/many interesting
domains (e.g. health/medicine).

Steve Richfield





Re: [agi] Can We Start Somewhere was Approximations of Knowledge

2008-06-27 Thread Steve Richfield
Russell,

On 6/27/08, Russell Wallace [EMAIL PROTECTED] wrote


 My ultimate vision?


YES! One of my tricks in pulling out-of-control projects out of the soup is
to ask for a vision of how the task will be performed ~100 years in the
future. Often that vision is much simpler than what they are now trying to
build, so the project can be retargeted to do more with less, salvaging much
of the core capability and wrapping it up differently.



 I would break the bounds that currently trammel
 our species, stem the global loss of fifty million lives a year and
 open the path to space colonization. I would make Earth-descended
 sentient life immortal. Imagine smart CAD programs helping design cell
 repair nanomachines. Imagine an O'Neill habitat being built by a swarm
 of robots with their human supervisors in pressurized environments.
 None of this is beyond the conceptual limits of the human mind, but it
 is beyond what humans can _reasonably_ do with present-day technology,
 because it takes too much time. Unlike many posters here, I don't
 believe human-equivalent AGI is feasible in any meaningful timescale.
 Nor do I believe it's necessary. Humans can continue to make the
 high-level decisions. What we need, to accomplish great things, is
 machines that can handle the details.


Just one gotcha - this will send even more people (like many of us) to the
unemployment lines, until their benefits expire. Some social restructuring
is necessary both now and in the future.



 That, I hope you'll agree, is worth devoting one's life toward?


Provided, of course, that the riches derived therefrom don't go to only ~1% of
the population. Your vision would become the means of ultimate enslavement
for countless future generations if left in the hands of our present
government and society.

I am NOT saying that AGIs would necessarily be a bad thing, but rather that
our present government and society are highly defective, and the presence of
AGIs will only amplify those problems.



  I believe that a complete revolution in man's dealing with his problems
 is
  right here to be had. Dr. Eliza certainly illustrates that there is
 probably
  enough low hanging fruit to be worth immediately redesigning the Internet
 to
  collect it and promptly extend the lives of most of the people on Earth.

 That sounds interesting, can you be more specific on what you would do and
 how?


All that would be needed to make Dr. Eliza work are:
1.  Some new HTML tags to indicate the metadata needed to make Dr. Eliza work,
e.g. the syntax of what people typically say when they have a problem whose
solution is explained in the associated article. Many of these are optional
and some are rare, e.g. the syntax of what people who are completely
unfamiliar with a subject area typically say. For example, someone saying
that their car doesn't go is probably expressing a lack of mechanical
knowledge, and so should be presented with a brief article explaining that
cars have engines, transmissions, etc., so if it makes running noises, the
engine is probably OK, etc.
2.  HTML and Wikipedia composition tools that present forms to fill in, and
then create the above tags from those forms.
3.  Global agents like Google and/or Wiki that gather up the above tags from
the entire Internet and present them all in a single database, and provide
periodic updates to prior base databases.
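A purely hypothetical sketch of step 3's gather-up agent, with the tag name, attribute, and page content all invented here for illustration (no such tags were ever specified in this thread):

```python
import re

# A page carrying a hypothetical metadata tag of the kind described above:
# a tag holding "the syntax of what people typically say" about a problem,
# pointing at the article that explains its solution.
PAGE = """
<dr-eliza-symptom article="http://example.org/fix-alternator">
  my car's battery light comes on while driving
</dr-eliza-symptom>
"""

def harvest(html, url):
    """Step 3: a global agent pulls the tags from crawled pages into a
    single database (here, just a list of dicts)."""
    db = []
    pattern = r'<dr-eliza-symptom article="([^"]+)">\s*(.*?)\s*</dr-eliza-symptom>'
    for article, phrase in re.findall(pattern, html, flags=re.S):
        db.append({"source": url, "article": article, "symptom": phrase})
    return db

print(harvest(PAGE, "http://example.org"))
```

A real agent would crawl and deduplicate at Google/Wikipedia scale, but the database row - source page, target article, symptom phrasing - is the essence of the proposal.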

There is still some hot debate over what tags should be utilized. My view is
that all of the ones that I have seen are important for some classes of
problems. However, the list is still fairly short. Some you may not be
familiar with, e.g. the type of cause-and-effect chain link that the
described phenomenon represents.

This would all be SO simple to do...

Steve Richfield





Re: [agi] Can We Start Somewhere was Approximations of Knowledge

2008-06-27 Thread Russell Wallace
On Fri, Jun 27, 2008 at 7:38 PM, Steve Richfield
[EMAIL PROTECTED] wrote:
 Just one gotcha

[two claimed gotchas snipped]

I disagree with your assessment - while I agree present government and
society have problems, as I see it history shows that the development
of technology in general, and computer technology accessible to
individuals in particular, tends to alleviate rather than exacerbate
such problems - but that's off topic for this list, and is the kind of
subject that tends to generate vast and heated digressions, so I'll
refrain from further comment on it here...

 All that would be needed to make Dr. Eliza work are:

Interesting. I should look at the code and data of Dr. Eliza to
understand exactly how the current version works; where would you
recommend starting, and do you have links/files handy?




Re: [agi] Re: Can We Start P.S.

2008-06-27 Thread Mike Tintner
Steve,

I just casually quoted this research because it reinforced a v. general point 
of mine. However, it is useful here. I think you're making a classical mistake, 
which may be v. much linked to the AGI mindset I'm criticising.

That mindset, I think, says: yes, AGI is about solving problems you don't know 
how to. So I'll just set up an algorithm that instructs my AGI to engage, when 
stuck, in a process of systematic trial and error... That way, my AGI will be 
both algorithmic AND exploratory, and generative.

You seem to be saying something complementary here: you just try various new 
alternatives, and whichever, on average, is better, you go with. It's logical.

Sounds ok in theory.

It makes sense to try new options as they may prove advantageous in the long 
run. For example, a monkey who chooses to deviate from its diet of bananas, 
even if this involves moving to an unfamiliar part of the forest and eating a 
new type of food, may find its diet enriched and more nutritious.

In practice, it doesn't work. You see, if you're that monkey, when do you go in 
search of new food? You don't know how long it's going to take, you don't know 
what dangers lie there, or what the weather will be like. Today? Now? In an 
hour? Tomorrow? So you go... and there's nothing there... do you keep looking? 
And in the same part of the forest, because maybe you missed something; or in 
another part? And how long do you spend? And which parts of trees and 
undergrowth etc do you search? And how can you be sure that you've searched 
thoroughly? And which senses do you use? And what do you do if there's a 
strange plant you've never seen, and you're not even sure if it is a plant, 
etc. etc. (I just watched a movie, Finding Amanda, in which a guy can't 
remember where in his *room*, let alone a forest, he hid his casino winnings,  
can't find them even after taking the room apart - though the maid does 
afterwards).

Trying something new is vastly more complicated than it sounds - there are in 
fact virtually infinite possibilities, most of which you won't have thought of, 
at all. How do you even know you've made a mistake in the first place, that 
warrants trying something new? How do you know you just didn't persist long 
enough?

We're continually dealing with problematic problems, and the thing about them - 
is - LOGIC DOESN'T APPLY. There is no such thing as a systematic trial and 
error approach to them - not one that can work. That's why creativity is so 
*demonstrably* hard and such a eureka business when you get an idea.

How do I invest in the stockmarket now?  Buy up shares at their v. low current 
prices, and wait a few years? That HAS to work, right - it's logical? If you'd 
tried it with Japan in 1989, you'd still be in the red. There are no 
satisfactory algorithms for dealing with the stockmarket. There are some that 
may work at the moment - but only for a while, until the market changes 
radically..

And all problematic problems can be treated as stockmarket problems -  in which 
you have to decide how to invest limited amounts of time and effort and 
resources, with highly limited, imperfect knowledge of the options, and sources 
of information, and un-precisely-quantifiable risks and deadlines.

Problematic problems have infinite possibilities - and that's why humans are 
designed the way they are - not to be sure of anything. You're all dealing with 
the problematic problem of AGI - is there literally a single thing that anyone 
of you is sure of in relation to AGI? You ought to be, if you were 
algorithmically designed. But nature is still a lot smarter than AGI. You 
haven't been given an instinctive trial-and-error system.

Any approach to trial and error, has itself to be a matter of trial and error.

You personally, Steve, seem to be making a further, related mistake here. And 
you can correct me. As I understand it, you want to construct a general 
problem-solver, adapted from Eliza, that can solve problems in many fields, 
not just health. Sounds in principle good. Something more limited than a true 
AGI, but still v. useful.

You're aware, though, as no one else in AGI seems to be, that in every field of 
culture you face major conflicts. There isn't a single field where experts 
aren't deeply split into conflicting schools. That obviously poses major 
difficulties for any general problem-solver, let alone a superAGI. Your 
mistake - as I understand it - is that you think you can *logically* resolve 
these conflicts. The reason everyone is so divided everywhere is that they're 
dealing with problematic problems to which there is no logical or right answer. 
What's the best treatment for cancer? What's the best way to do AGI now? What's 
the best way to deal with the economy, the petrol problem, Iraq, etc.? No 
matter how you - or even a superAGI - drill down into these problems, people 
will still be fighting tooth and nail about their solutions. Understandably.

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-27 Thread Richard Loosemore


At a quick glance I would say you could do it cheaper by building it 
yourself rather than buying Dell servers (cf MicroWulf project that was 
discussed before: http://www.clustermonkey.net//content/view/211/33/).


Secondly:  if what you need to get done is spreading activation (which 
implies massive parallelism) you would probably be better off with a 
Celoxica system than COTS servers:  celoxica.com.  Hugo de Garis has a 
good deal of experience with using this hardware:  it is FPGA based, so 
the potential parallelism is huge.


Third:  the problem, in any case, is not the hardware.  AI researchers 
have been saying "if only we had better hardware, we could really get these 
algorithms to sing, and THEN we will have a real AI!" since the f***ing 
1970s, at least.  There is nothing on this earth more stupid than 
watching people repeat the same mistakes over and over again, for 
decades in a row.


Pardon my fury, but the problem is understanding HOW TO DO IT, and HOW 
TO BUILD THE TOOLS TO DO IT, not having expensive hardware.  So long as 
some people on this list repeat this mistake, this list will degenerate 
even further into obsolescence.


Frankly, looking at recent posts, I think this list is already dead.




Richard Loosemore






Ed Porter wrote:

WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

On Wednesday, June 25, US East Coast time, I had an interesting phone
conversation with Dave Hart, where we discussed just how much hardware you
could get for the current buck, for the amounts of money AGI research teams
using OpenCog (THE LUCKY ONES) might have available to them.

After our talk I checked out the cost of current servers at Dell (the
easiest place I knew of to check prices).  I found hardware, and
particularly memory, was somewhat cheaper than Dave and I had thought.  But
it is still sufficiently expensive that moderately funded projects are
going to be greatly limited by processor-memory and inter-processor
bandwidth in how much spreading activation and inferencing they will be
capable of doing.

A RACK MOUNTABLE SERVER WITH 4 QUAD-CORE XEONS, WITH EACH PROCESSOR HAVING
8MB OF CACHE, AND THE WHOLE SERVER HAVING 128GBYTES OF RAM AND FOUR 300GBYTE
HARD DRIVES WAS UNDER $30K.  The memory stayed roughly constant in price per
GByte going from 32 to 64 to 128 GBytes.  Of course you would probably have
to pay a several extra grand for software and warranties.  SO LET US SAY THE
PRICE IS $33K PER SERVER.

A 24-port 20Gbit/sec InfiniBand switch with cables and one 20Gbit/sec
adapter card for each of 24 servers would be about $52K.

SO A TOTAL SYSTEM WITH 24 SERVERS, 96 PROCESSORS, 384 CORES, 768MBYTE OF L2
CACHE, 3 TBYTES OF RAM, AND 28.8TBYTES OF DISK, AND THE 24 PORT 20GBIT/SEC
SWITCH WOULD BE ROUGHLY $850 GRAND.  


That doesn't include air conditioning.  I am guessing each server probably
draws about 400 watts, so 24 of them would be about 9600 watts --- about the
amount of heat of ten hair dryers running in one room, which obviously would
require some cooling, but I would not think it would be that expensive to
handle.

With regard to performance, such systems are not even close to human-brain
level, but they should allow some interesting proofs of concept.
Performance
---
AI spreading activation often involves a fair amount of non-locality of
memory.  Unfortunately there is a real penalty for accessing RAM randomly.
Without interleaving, one article I read recently implied that about 50ns is a
short latency for a memory access.  So we will assume 20M random RAM accesses
per second per channel, and that an average activation will take two accesses,
a read and a write, so roughly 10M activations/sec per memory channel.
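As a back-of-envelope sketch, the latency-bound estimate above works out like
this (the 50ns figure is the assumption from the article mentioned; the
variable names are mine):

```python
# Latency-bound spreading activation, per memory channel.
LATENCY_NS = 50                              # assumed random-access latency
accesses_per_sec = 1e9 / LATENCY_NS          # 20M random accesses/sec/channel
ACCESSES_PER_ACTIVATION = 2                  # one read + one write
activations_per_sec = accesses_per_sec / ACCESSES_PER_ACTIVATION

print(f"{accesses_per_sec:.0e} accesses/sec, "
      f"{activations_per_sec:.0e} activations/sec per channel")
```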


Matt Mahoney has pointed out that spreading activation can be modeled by
matrix methods that let you access RAM at much higher sequential rates.  He
claimed he could process about a gigabyte of matrix data a second.  If one
assumes each element in the matrix is 8 bytes, that would be the equivalent of
doing 125M activations a second, which is roughly 12.5 times faster (if just 2
bytes, it would be 50 times faster, or 500M activations/sec).
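For concreteness, here is a minimal illustrative sketch of spreading activation
over a weighted directed graph - not Matt Mahoney's actual matrix code, just
the operation being counted above; the `spread` function, `edges`
representation and `decay` parameter are all invented for illustration:

```python
def spread(edges, act, decay=0.5):
    """One step of spreading activation.

    edges: {src: {dst: weight}} -- weighted directed adjacency
    act:   {node: activation}   -- current activation levels
    Returns the activation map after one propagation step.
    Each (src, dst) update is one of the "activations" counted above.
    """
    out = {}
    for src, a in act.items():
        for dst, w in edges.get(src, {}).items():
            out[dst] = out.get(dst, 0.0) + decay * w * a
    return out

edges = {"a": {"b": 1.0}, "b": {"c": 2.0}}
print(spread(edges, {"a": 1.0}))  # activation flows from "a" to "b"
```

In a dense-matrix formulation the same step is a matrix-vector product, which
is what makes the sequential-bandwidth numbers above applicable.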

If one assumes each of the 4 cores of each of the 4 processors could handle a
matrix at 1GByte/sec, and each element in the matrix was just 2 bytes, that
would be 8G 2-byte matrix activations/sec/server, and 192G matrix
activations/sec/system.  It is not clear how well this could be made to work
with the type of interconnectivity of an AGI.  It is clear there would be
some penalty for sparseness, perhaps a large one.  If one used run-length
encoding in the matrix, which is read by rows, then a set of columns whose
values could fit in cache could be loaded into cache, and the portions of all
the rows relating to them could be read sequentially.  Once all the portions
of all the rows relating to the sub-set of columns had been processed, then
the process could be repeated for another 

Re: [agi] Approximations of Knowledge

2008-06-27 Thread Jim Bromer


 From: Richard Loosemore Jim,
 
 I'm sorry:  I cannot make any sense of what you say here.
 
 I don't think you are understanding the technicalities of the argument I 
 am presenting, because your very first sentence... "But we can invent a 
 'mathematics' or a program that can" is just completely false.  In a 
 complex system it is not possible to use analytic mathematics to 
 predict the global behavior of the system given only the rules that 
 determine the local mechanisms.  That is the very definition of a 
 complex system (note:  this is a complex system in the technical sense 
 of that term, which does not mean a complicated system in ordinary 
 language).
 Richard Loosemore

Well, let's forget about your theory for a second.  I think that an advanced AI 
program is going to have to be able to deal with complexity, and your analysis 
is certainly interesting and illuminating.

But I want to make sure that I understand what you mean here.  First of all, 
consider your statement: "it is not possible to use analytic mathematics to 
predict the global behavior of the system given only the rules that determine 
the local mechanisms."
By "analytic mathematics" are you referring to numerical analysis, which the 
article in Wikipedia, 
http://en.wikipedia.org/wiki/Numerical_analysis
describes as "the study of algorithms for the problems of continuous 
mathematics (as distinguished from discrete mathematics)"?  Because if you are 
saying that the study of continuous mathematics - as distinguished from 
discrete mathematics - cannot be used to represent discrete system complexity, 
then that is kind of a non-starter.  It's a cop-out by initial definition.  I 
am primarily interested in discrete programming (though I am, of course, also 
interested in continuous systems), but in this discussion I was expressing my 
interest in measures that can be taken to simplify computational complexity.

Again, Wikipedia gives a slightly more complex definition of complexity than 
you do.  http://en.wikipedia.org/wiki/Complexity
I am not saying that your particular definition of complexity is wrong, I only 
want to make sure that I understand what it is that you are getting at.

The part of your sentence that reads "...given only the rules that determine 
the local mechanisms" sounds like it might well apply to the kind of system 
that I think would be necessary for a better AI program, but it is not 
necessarily true of all kinds of demonstrations of complexity (as I understand 
them).  For example, consider a program that demonstrates the emergence of 
complex behaviors from collections of objects that obey simple rules governing 
their interactions.  One can use a variety of arbitrary settings for the 
initial state of the program to examine how different complex behaviors may 
emerge in different environments.  (I am hoping to try something like this when 
I buy my next computer with a great graphics chip in it.)  This means that 
complexity does not have to be represented only in states that had been 
previously generated by the system, as is obvious from the fact that initial 
states are a necessity of such systems.
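A minimal sketch of the kind of demonstration described here - an elementary
cellular automaton (Wolfram's Rule 110), where a purely local update rule
produces complex global patterns, and the initial state is freely chosen by
the experimenter (the code and names are mine, purely illustrative):

```python
def step(cells, rule=110):
    """One update of an elementary cellular automaton on a ring.

    Each cell's next state is looked up from `rule`'s bits, indexed
    by the 3-cell neighborhood (left, self, right) read as a number.
    """
    n = len(cells)
    return [(rule >> ((cells[i - 1] << 2) | (cells[i] << 1)
                      | cells[(i + 1) % n])) & 1
            for i in range(n)]

# Arbitrary initial state; vary it to explore different emergent behavior.
cells = [0, 0, 1, 0, 0]
for _ in range(3):
    cells = step(cells)
print(cells)
```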

I think I get what you are saying about complexity in AI and the problems of 
research into AI that could be caused if complexity is the reality of advanced 
AI programming.

But if you are throwing technical arguments at me, some of which are trivial 
from my perspective - like the definition of "continuous mathematics (as 
distinguished from discrete mathematics)" - then all I can do is wonder why.

Jim Bromer


  


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=106510220-47b225
Powered by Listbox: http://www.listbox.com


Re: [agi] Approximations of Knowledge

2008-06-27 Thread Richard Loosemore

Jim Bromer wrote:



[Jim Bromer's message of 2008-06-27, quoted in full above, trimmed]


Jim,

With the greatest of respect, this is a topic that will require some 
extensive background reading on your part, because the misunderstandings 
in your above text are too deep for me to remedy in the scope of one or 
two list postings.  For example, my reference to "analytic" mathematics 
has nothing at all to do with the wikipedia entry you found, alas.  The 
word has many uses, and the one I am employing is meant to point up a 
distinction between classical mathematics, which allows equations to be 
solved algebraically, and experimental mathematics, which solves systems 
by simulation.  "Analytic" means "by analysis" in this context... but this 
is a very abstract sense of the word that I am talking about here, and 
it is very hard to convey.
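A toy illustration of that distinction, using Conway's Game of Life as a
stand-in complex system (my example, not Richard's): the local rule is a few
lines of arithmetic, yet the glider's coherent global motion is obtained by
running the simulation, not by solving the rule algebraically.

```python
def life_step(grid):
    """One generation of Conway's Life on a toroidal grid of 0/1 cells."""
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            n = sum(grid[(r + dr) % rows][(c + dc) % cols]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            new[r][c] = 1 if n == 3 or (grid[r][c] and n == 2) else 0
    return new

# A glider: purely local rules, globally coherent diagonal motion.
grid = [[0] * 8 for _ in range(8)]
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r][c] = 1
for _ in range(4):
    grid = life_step(grid)
live = {(r, c) for r in range(8) for c in range(8) if grid[r][c]}
print(sorted(live))  # the same glider, shifted one cell down-right
```

Nothing in the three-line update rule mentions "motion"; that global behavior 
is only discovered experimentally.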


This topic is all about 'complex systems', which is a technical term that 
does not mean systems that are complicated (in the everyday sense of 
'complicated').  To get up to speed on this, I recommend a popular 
science book called "Complexity" by Waldrop, although there was also a 
more recent book whose name I forget, which may be better.  You could 
also read Wolfram's "A New Kind of Science", but that is huge and does 
not come to the simple point very easily.


I am happy to make an attempt to bridge the gap by answering questions, 
but you must begin with the understanding that this would be a dialog 
between someone who has been doing research in a field for over 25 

[agi] Not dead

2008-06-27 Thread Stephen Reed
Hi Richard,


 Frankly, looking at recent posts, I think this list is already dead.

Dear Richard, be patient, or post more about your own results.  I have, right 
or wrong, somewhat modest expectations for the posts on this list (aside from 
my favorite authors :-) ).  I, like perhaps some other developers, am hard at 
work solving the numerous mundane, tedious, obscure problems that bedevil our 
designs, and do not have the time to respond to every provocative post.  When 
I am fortunate enough to have progress to report, I do so.

I've spent a couple of weeks revisiting an issue I thought solved (i.e. 
incremental fluid construction grammar syntax), in the hope of dramatically 
collapsing the number of required grammar rules by allowing optional 
sub-constituents.  This feature simplifies the task of the grammar author but 
makes my otherwise simple Java parsing/generation code harder to test and to 
understand.
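To make the trade-off concrete, here is a hypothetical sketch (the rule
representation and names are invented for illustration, not Texai's actual
code) of why optional sub-constituents collapse the rule count: one rule with
k optional slots stands in for 2**k fully specified rules, which the
parser/generator must now handle implicitly.

```python
from itertools import combinations

def expand(rule):
    """Expand a grammar rule whose constituents may be optional.

    rule: list of (constituent, is_optional) pairs.
    Returns every fully specified rule the single rule stands for,
    preserving constituent order: 2**k variants for k optional slots.
    """
    optional = [c for c, opt in rule if opt]
    variants = []
    for k in range(len(optional) + 1):
        for kept in combinations(optional, k):
            keep = set(kept)
            variants.append([c for c, opt in rule if not opt or c in keep])
    return variants

# One NP rule with two optional slots replaces four concrete rules.
np_rule = [("Det", True), ("Adj", True), ("N", False)]
for v in expand(np_rule):
    print(v)
```

The grammar author writes one rule; the code that tests and traces the parser 
has to reason about all the expansions, which is the added complexity mentioned 
above.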

I'll have more to say in an upcoming blog post - both in regard to the issue at 
hand and some observations on my own cognitive activities during this process.

To recapitulate: this list is not dead - some of its historical posters are 
just very busy.

Cheers and warm regards,
-Steve


Stephen L. Reed


Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860



- Original Message 
From: Richard Loosemore [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, June 27, 2008 3:29:59 PM
Subject: Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN 
AGI


[Richard Loosemore's message of 2008-06-27, quoted in full above, trimmed]