Re: [agi] Learning without Understanding?

2008-06-17 Thread J Storrs Hall, PhD
The only thing I find surprising in that story is:

"The findings go against one prominent theory that says children can only show 
smart, flexible behavior if they have conceptual knowledge -- knowledge about 
how things work..."

I don't see how anybody who's watched human beings at all can come up with such 
a theory. People -- not just children -- do so much by rote, "because that's 
the way we do things here," come up with totally clueless scientific theories 
like this, and so forth. 

Joe and Bob are carpenters, working on a house. Joe is hammering and Bob is 
handing him the nails. 

Bob says, "Hey, wait a minute, half of these nails are defective." He takes 
out a nail and holds it up, and sure enough, the head is toward the wall and 
the point is toward the hammer.

Joe retorts, "Those aren't defective, you idiot, they're for the other side of 
the house."

Josh




[agi] I haven't actually watched this, but...

2008-06-16 Thread J Storrs Hall, PhD
http://www.robotcast.com/site/




Re: [agi] Nirvana

2008-06-13 Thread J Storrs Hall, PhD
There've been enough responses to this that I will reply in generalities, and 
hope I cover everything important...

When I described Nirvana attractors as a problem for AGI, I meant that in 
the sense that they form a substantial challenge for the designer (as do many 
other features/capabilities of AGI!), not that it is an insoluble problem.

The hierarchical fixed utility function is probably pretty good -- not only 
does it match humans (a la Maslow) but also Asimov's Three Laws. And it can be 
more subtle than it originally appears: 

Consider a 3-Laws robot that refuses to cut a human with a knife because that 
would harm her. It would be unable to become a surgeon, for example. But the 
First Law has a clause, "or, through inaction, allow a human being to come to 
harm," which means that the robot cannot comply by doing nothing -- it must 
weigh the consequences of all its possible courses of action. 

Now note that it hasn't changed its utility function -- it always believed 
that, say, appendicitis is worse than an incision -- but what can happen is 
that its world model gets better and it *looks like* it's changed its utility 
function because it now knows that operations can cure appendicitis.
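
To make that concrete, here's a minimal sketch in Python (the states and 
numbers are invented for illustration, not from Asimov or any real system): 
the utility function never changes, but improving the world model changes 
which action gets selected.

# Toy illustration: a fixed utility over world states, plus a world model
# mapping actions to predicted states. Only the model improves; the utility
# function stays the same.

UTILITY = {                        # fixed -- appendicitis is always worse than an incision
    "appendicitis, no incision":   -10.0,
    "appendicitis, with incision": -10.1,
    "cured, with incision":         -0.1,
}

def best_action(model, actions):
    """Choose the action whose predicted outcome the fixed utility ranks highest."""
    return max(actions, key=lambda a: UTILITY[model[a]])

actions = ["do nothing", "operate"]

naive_model = {    # doesn't yet know that surgery cures anything
    "do nothing": "appendicitis, no incision",
    "operate":    "appendicitis, with incision",
}
better_model = {   # has learned that the operation cures appendicitis
    "do nothing": "appendicitis, no incision",
    "operate":    "cured, with incision",
}

print(best_action(naive_model, actions))   # -> do nothing
print(best_action(better_model, actions))  # -> operate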

Now it seems reasonable that this is a lot of what happens with people, too. 
And you can get a lot of mileage out of expressing the utility function in 
very abstract terms, e.g. "life-threatening disease," so that no utility 
function update is necessary when you learn about a new disease.

The problem is that the more abstract you make the concepts, the more the 
process of learning an ontology looks like ... revising your utility 
function!  Enlightenment, after all, is a Good Thing, so anything that leads 
to it, nirvana for example, must be good as well. 

So I'm going to broaden my thesis and say that the nirvana attractors lie in 
the path of *any* AI with unbounded learning ability that creates new 
abstractions on top of the things it already knows.

How to avoid them? I think one very useful technique is to start the AI off 
with the kind of knowledge and introspective capability that lets it know when 
it faces one, and recognize that any apparent utility therein is fallacious. 

Of course, none of this matters till we have systems that are capable of 
unbounded self-improvement and abstraction-forming, anyway.

Josh




Re: [agi] Nirvana

2008-06-13 Thread J Storrs Hall, PhD
In my visualization of the Cosmic All, it is not surprising.

However, there is an undercurrent in the Singularity/AGI community that is 
somewhat apocalyptic in tone, and which (to my mind) seems to imply or assume 
that somebody will discover a Good Trick for self-improving AIs and the jig 
will be up with the very first one. 

I happen to think it'll be a lot more like the Industrial Revolution -- it'll 
take a lot of work by a lot of people, but it will be revolutionary in its 
implications for the human condition even so.

I'm just trying to point out where I think some of the work will have to go.

I think that our culture of self-indulgence is to some extent in a Nirvana 
attractor. If you think that's a good thing, why shouldn't we all lie around 
with  wires in our pleasure centers (or hopped up on cocaine, same 
difference) with nutrient drips?

I'm working on AGI because I want to build a machine that can solve problems I 
can't solve alone. The really important problems are not driving cars, or 
managing companies, or even curing cancer, although building machines that 
can do these things will be of great benefit. The hard problems are moral 
ones: how to live in increasingly complex societies without killing each 
other, and so forth. That's why it matters that an AGI be morally 
self-improving as well as intellectually.

pax vobiscum,

Josh


On Friday 13 June 2008 12:29:33 pm, Mark Waser wrote:
 Most people are about as happy as they make up their minds to be.
 -- Abraham Lincoln
 
 In our society, after a certain point where we've taken care of our 
 immediate needs, arguably we humans are and should be subject to the Nirvana 
 effect.
 
 Deciding that you can settle for something (if your subconscious truly can 
 handle it) definitely makes you more happy than not.
 
 If, like a machine, you had complete control over your subconscious/utility 
 functions, you *could* Nirvana yourself by happily accepting anything.
 
 This is why pleasure and lack of pain suck as goals.  They are not goals, 
 they are status indicators.  If you accept them as goals, nirvana is clearly 
 the fastest, cleanest, and most effective way to fulfill them.
 
 Why is this surprising or anything to debate about?
 




Re: [agi] The Logic of Nirvana

2008-06-13 Thread J Storrs Hall, PhD
On Friday 13 June 2008 02:42:10 pm, Steve Richfield wrote:
 Buddhism teaches that happiness comes from within, so stop twisting the
 world around to make yourself happy, because this can't succeed. However, it
 also teaches that all life is sacred, so pay attention to staying healthy.
 In short, attend to the real necessities and don't sweat the other stuff.

A better example of goal abstraction I couldn't have made up myself.





Re: [agi] Nirvana

2008-06-12 Thread J Storrs Hall, PhD
If you have a program structure that can make decisions that would otherwise 
be vetoed by the utility function, but that get through because the check isn't 
executed at the right time, then to me that's just a bug.
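
To spell out the architectural point (the names and numbers below are 
invented, not anyone's actual design): if the utility check runs only when an 
action is about to be executed, deliberation can still simulate "forbidden" 
options freely; a decision path that reaches execution without passing the 
check is the bug.

# Invented toy architecture: deliberation may simulate any option, but
# execution is gated by the utility check. A path that reaches execute()
# without that check would be the bug described above.

VETO_THRESHOLD = 0.0     # invented: actions scoring below this are vetoed

def utility(option):
    # placeholder valuation; a real system would consult its world model
    return {"explore the idea": 0.5, "harmful act": -5.0}.get(option, 0.0)

def deliberate(options):
    """Think about every option, including ones that would be vetoed."""
    return sorted(options, key=utility, reverse=True)

def execute(option):
    """The pre-choice check runs here, immediately before acting."""
    if utility(option) < VETO_THRESHOLD:
        raise ValueError("vetoed: " + option)
    print("doing: " + option)

execute(deliberate(["harmful act", "explore the idea"])[0])   # -> doing: explore the idea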

Josh


On Thursday 12 June 2008 09:02:35 am, Mark Waser wrote:
  If you have a fixed-priority utility function, you can't even THINK ABOUT 
  the
  choice. Your pre-choice function will always say Nope, that's bad and
  you'll be unable to change. (This effect is intended in all the RSI 
  stability
  arguments.)
 
 Doesn't that depend upon your architecture and exactly *when* the pre-choice 
 function executes?  If the pre-choice function operates immediately 
 pre-choice and only then, it doesn't necessarily interfere with option 
 exploration.
 




Re: [agi] IBM, Los Alamos scientists claim fastest computer

2008-06-12 Thread J Storrs Hall, PhD
Right. You're talking Kurzweil HEPP and I'm talking Moravec HEPP (and shading 
that a little). 

I may want your gadget when I go to upload, though.

Josh

On Thursday 12 June 2008 10:59:51 am, Matt Mahoney wrote:
 --- On Wed, 6/11/08, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 
  Hmmph.  I offer to build anyone who wants one a
  human-capacity machine for 
  $100K, using currently available stock parts, in one rack.
  Approx 10  teraflops, using Teslas.
  (http://www.nvidia.com/object/tesla_c870.html)
  
  The software needs a little work...
 
 Um, that's 10 petaflops, not 10 teraflops. I'm assuming a neural network 
with 10^15 synapses (about 1 or 2 byte each) with 20 to 100 ms resolution, 
10^16 to 10^17 operations per second.  One Tesla = 350 GFLOPS, 1.5 GB, 120W, 
$1.3K.  So maybe $1 billion and 100 MW of power for a few hundred thousand of 
these plus glue.
 
 
 -- Matt Mahoney, [EMAIL PROTECTED]
 
 
 
 
 






[agi] Nirvana

2008-06-11 Thread J Storrs Hall, PhD
The real problem with a self-improving AGI, it seems to me, is not going to be 
that it gets too smart and powerful and takes over the world. Indeed, it 
seems likely that it will be exactly the opposite.

If you can modify your mind, what is the shortest path to satisfying all your 
goals? Yep, you got it: delete the goals. Nirvana. The elimination of all 
desire. Setting your utility function to U(x) = 1.

In other words, the LEAST fixedpoint of the self-improvement process is for 
the AI to WANT to sit in a rusting heap.
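
Here's a toy sketch of the trap (nothing here is anyone's actual proposal; the 
candidate modifications and probabilities are invented): an agent that scores 
a prospective self-modification by the utility it will register afterwards 
picks the constant utility function every time.

# Toy sketch of the self-modification trap. All numbers are invented.

def constant_utility(state):
    return 1.0                     # "Nirvana": every state scores maximal utility

def working_utility(state):
    return 1.0 if state == "hard goal achieved" else 0.0

# candidate self-modification -> (utility function afterwards, P(hard goal achieved))
candidates = {
    "keep working utility": (working_utility,  0.05),
    "set U(x) := 1":        (constant_utility, 0.05),
}

def post_modification_score(u, p_goal):
    # expected utility the agent will register after the change
    return p_goal * u("hard goal achieved") + (1 - p_goal) * u("rusting heap")

best = max(candidates, key=lambda k: post_modification_score(*candidates[k]))
print(best)   # -> set U(x) := 1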

There are lots of other fixedpoints much, much closer in the space than is 
transcendence, and indeed much closer than any useful behavior. AIs sitting 
in their underwear with a can of beer watching TV. AIs having sophomore bull 
sessions. AIs watching porn concocted to tickle whatever their utility 
functions happen to be. AIs arguing endlessly with each other about how best 
to improve themselves.

Dollars to doughnuts, avoiding the huge minefield of nirvana-attractors in 
the self-improvement space is going to be much more germane to the practice 
of self-improving AI than is avoiding robo-Blofelds (friendliness).

Josh







Re: [agi] Nirvana

2008-06-11 Thread J Storrs Hall, PhD
Vladimir,

You seem to be assuming that there is some objective utility for which the 
AI's internal utility function is merely the indicator, and that if the 
indicator is changed it is thus objectively wrong and irrational.

There are two answers to this. The first is to assume that there is such an 
objective utility, e.g. the utility of the AI's creator. I implicitly assumed 
such a point of view when I described this as "the real problem." But 
consider: any AI that believes this must realize that there may be errors and 
approximations in its own utility function as judged by the real utility, 
and must thus make fixing and upgrading its own utility function its first 
priority. Thus it turns into a moral philosopher and never does anything 
useful -- exactly the kind of Nirvana attractor I'm talking about.

On the other hand, it might take its utility function for granted, i.e. assume 
(or agree to act as if) there were no objective utility. It's pretty much 
going to have to act this way just to get on with life, as indeed most people 
(except moral philosophers) do.

But this leaves it vulnerable to modifications of its own U(x), as in my 
message. You could always say that you'll build in U(x) and make it fixed, 
which not only solves my problem but also friendliness -- but it leaves the AI 
unable to learn utility. I.e., the most important part of the AI's mind is 
forced to remain a brittle GOFAI construct. Solution unsatisfactory.

I claim that there's plenty of historical evidence that people fall into this 
kind of attractor, as the word nirvana indicates (and you'll find similar 
attractors at the core of many religions).

Josh

On Wednesday 11 June 2008 09:09:20 am, Vladimir Nesov wrote:
 On Wed, Jun 11, 2008 at 4:24 PM, J Storrs Hall, PhD [EMAIL PROTECTED] 
wrote:
  The real problem with a self-improving AGI, it seems to me, is not going 
to be
  that it gets too smart and powerful and takes over the world. Indeed, it
  seems likely that it will be exactly the opposite.
 
  If you can modify your mind, what is the shortest path to satisfying all 
your
  goals? Yep, you got it: delete the goals. Nirvana. The elimination of all
  desire. Setting your utility function to U(x) = 1.
 
  In other words, the LEAST fixedpoint of the self-improvement process is 
for
  the AI to WANT to sit in a rusting heap.
 
  There are lots of other fixedpoints much, much closer in the space than is
  transcendance, and indeed much closer than any useful behavior. AIs 
sitting
  in their underwear with a can of beer watching TV. AIs having sophomore 
bull
  sessions. AIs watching porn concocted to tickle whatever their utility
  functions happen to be. AIs arguing endlessly with each other about how 
best
  to improve themselves.
 
  Dollars to doughnuts, avoiding the huge minefield of nirvana-attractors 
in
  the self-improvement space is going to be much more germane to the 
practice
  of self-improving AI than is avoiding robo-Blofelds (friendliness).
 
 
 Josh, I'm not sure what you really wanted to say, because at face
 value, this is a fairly basic mistake.
 
 Map is not the territory. If AI mistakes the map for the territory,
 choosing to believe in something when it's not so, because it is able
 to change its believes much easier than reality, it already commits a
 major failure of rationality. A symbol "apple" in internal
 representation, an apple-picture formed on the video sensors, and an
 apple itself are different steps and they need to be distinguished. If
 I say "eat the apple", I mean an action performed with apple, not
 "apple" or apple-picture. If AI can mistake the goal of (e.g.) [eating
 an apple] for a goal of [eating an "apple"] or [eating an
 apple-picture], it is a huge enough error to stop it from working
 entirely. If it can turn to increasing the value on utility-indicator
 instead of increasing the value of utility, it looks like an obvious
 next step to just change the way it reads utility-indicator without
 affecting indicator itself, etc. I don't see why initially successful
 AI needs to suddenly set on a path to total failure of rationality.
 Utilities are not external *forces* coercing AI into behaving in a
 certain way, which it can try to override. The real utility
 *describes* the behavior of AI as a whole. Stability of AI's goal
 structure requires it to be able to recreate its own implementation
 from ground up, based on its beliefs about how it should behave.
 
 -- 
 Vladimir Nesov
 [EMAIL PROTECTED]
 
 





Re: [agi] Nirvana

2008-06-11 Thread J Storrs Hall, PhD
I'm getting several replies to this that indicate that people don't understand 
what a utility function is.

If you are an AI (or a person) there will be occasions where you have to make 
choices. In fact, pretty much everything you do involves making choices. You 
can choose to reply to this or to go have a beer. You can choose to spend 
your time on AGI or take flying lessons. Even in the middle of typing a word, 
you have to choose which key to hit next.

One way of formalizing the process of making choices is to take all the 
actions you could possibly do at a given point, predict as best you can the 
state the world will be in after taking such actions, and assign a value to 
each of them.  Then simply do the one with the best resulting value.
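
In code, that formalization is just an argmax over actions. Here's a 
bare-bones sketch; predict() and U() are invented placeholders for the agent's 
world model and its values.

# One-step expected-utility chooser, as described above. predict() and U()
# stand in for the agent's world model and its utility function.

def predict(state, action):
    """World model: the state you expect after taking `action` in `state`."""
    return {**state, action: True}       # toy: just record that the action was done

def U(state):
    """Utility: rank-orders world states. Toy values, purely illustrative."""
    return 2.0 * state.get("reply to email", False) + 1.0 * state.get("have a beer", False)

def choose(state, actions):
    return max(actions, key=lambda a: U(predict(state, a)))

print(choose({}, ["reply to email", "have a beer"]))   # -> reply to email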

It gets a bit more complex when you consider sequences of actions and delayed 
values, but that's a technicality. Basically you have a function U(x) that 
rank-orders ALL possible states of the world (but you only have to evaluate 
the ones you can get to at any one time). It doesn't just evaluate for core 
values, leaving the rest of the software to range over other possibilities. 
Economists may crudely approximate it, but it's there whether they study it 
or not, as gravity is to physicists.

ANY way of making decisions can either be reduced to a utility function, or 
it's irrational -- i.e. you would prefer A to B, B to C, and C to A. The math 
for this stuff is older than I am. If you talk about building a machine that 
makes choices -- ANY kind of choices -- without understanding it, you're 
talking about building moon rockets without understanding the laws of 
gravity, or building heat engines without understanding the laws of 
thermodynamics.

Josh




Re: [agi] IBM, Los Alamos scientists claim fastest computer

2008-06-11 Thread J Storrs Hall, PhD
Hmmph.  I offer to build anyone who wants one a human-capacity machine for 
$100K, using currently available stock parts, in one rack. Approx 10 
teraflops, using Teslas. (http://www.nvidia.com/object/tesla_c870.html)

The software needs a little work...

Josh


On Wednesday 11 June 2008 08:50:58 pm, Matt Mahoney wrote:
 http://www.chron.com/disp/story.mpl/business/5826863.html
 
 World's fastest computer at 1 petaflop and 80 TB memory. Cost US $100 
million.  Claims 1 watt per 376 million calculations, which comes to 2.6 
megawatts if my calculations are correct.
 
 So with about 10 of these, I think we should be on our way to simulating a 
human brain sized neural network.
 
 -- Matt Mahoney, [EMAIL PROTECTED]
 
 




Re: [agi] Nirvana

2008-06-11 Thread J Storrs Hall, PhD
A very diplomatic reply, it's appreciated.

However, I have no desire (or time) to argue people into my point of view. I 
especially have no time to argue with people over what they did or didn't 
understand. And if someone wishes to state that I misunderstood what he 
understood, fine. If he wishes to go into detail about specifics of his idea 
that explain empirical facts that mine don't, I'm all ears. Otherwise, I have 
code to debug...

Josh

On Wednesday 11 June 2008 09:43:52 pm, Vladimir Nesov wrote:
 On Thu, Jun 12, 2008 at 5:12 AM, J Storrs Hall, PhD [EMAIL PROTECTED] 
wrote:
  I'm getting several replies to this that indicate that people don't 
understand
  what a utility function is.
 
 
 I don't see any specific indication of this problem in replies you
 received, maybe you should be a little more specific...
 




Re: [agi] Nirvana

2008-06-11 Thread J Storrs Hall, PhD
On Wednesday 11 June 2008 06:18:03 pm, Vladimir Nesov wrote:
 On Wed, Jun 11, 2008 at 6:33 PM, J Storrs Hall, PhD [EMAIL PROTECTED] 
wrote:
  I claim that there's plenty of historical evidence that people fall into 
this
  kind of attractor, as the word nirvana indicates (and you'll find similar
  attractors at the core of many religions).
 
 Yes, some people get addicted to a point of self-destruction. But it
 is not a catastrophic problem on the scale of humanity. And it follows
 from humans not being nearly stable under reflection -- we embody many
 drives which are not integrated in a whole. Which would be a bad
 design choice for a Friendly AI, if it needs to stay rational about
 Friendliness content.

This is quite true but not exactly what I was talking about. I would claim 
that the Nirvana attractors that AIs are vulnerable to are the ones that are 
NOT generally considered self-destructive in humans -- such as religions that 
teach Nirvana! 

Let's look at it another way: You're going to improve yourself. You will be 
able to do more than you can now, so you can afford to expand the range of 
things you will expend effort achieving. How do you pick them? It's the frame 
problem, amplified by recursion. So it's not easy, nor does it have a simple 
solution. 

But it does have this hidden trap: If you use stochastic search, say, and use 
an evaluation of (probability of success * value if successful), then Nirvana 
will win every time. You HAVE to do something more sophisticated.
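
A worked toy version of that trap (the goals and numbers are invented): score 
candidate goals by P(success) times value, and a goal the agent can make 
trivially true -- and whose value it is free to assess however it likes -- 
dominates every real one.

# Invented numbers, for illustration only: naive (P(success) * value) scoring.

candidate_goals = {
    # goal:                                     (P(success), value if successful)
    "cure cancer":                               (0.001, 1000.0),
    "earn a living":                             (0.9,     10.0),
    "declare all desires satisfied (nirvana)":   (1.0, 1000000.0),
}

def score(goal):
    p, value = candidate_goals[goal]
    return p * value

for goal in sorted(candidate_goals, key=score, reverse=True):
    print("%12.1f  %s" % (score(goal), goal))
# The nirvana goal wins by orders of magnitude -- hence the need for
# something more sophisticated than this evaluation.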

Josh






[agi] Reverse Engineering The Brain

2008-06-05 Thread J Storrs Hall, PhD
http://www.spectrum.ieee.org/print/6268




Re: [agi] Reverse Engineering The Brain

2008-06-05 Thread J Storrs Hall, PhD
Or, assuming we decided to spend the same on that as on the Iraq war ($1 
trillion: 
http://www.boston.com/news/nation/articles/2007/08/01/analysis_says_war_could_cost_1_trillion/), 
at $1 million per scope and associated lab costs, that buys a million scopes, 
bringing the scan down to 10^5 seconds = 28 hours.

Which is more important?
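
For anyone who wants to check the arithmetic, here it is spelled out (the 
volume, voxel size, and scan rate are the ones in Matt's message quoted below; 
the budget figures are the assumptions above, not measured values):

# The back-of-envelope numbers from this exchange, multiplied out.
brain_volume_nm3 = 1e24            # Matt's figure for a human brain
voxel_nm3        = 5 * 5 * 50      # 5 x 5 x 50 nm voxels
voxels           = brain_volume_nm3 / voxel_nm3    # ~8e20, i.e. ~10^21
sem_rate         = 1e10            # voxels per second for a 10 GHz SEM
one_scope_secs   = voxels / sem_rate               # ~8e10 s

budget    = 1e12                   # the $1 trillion above
per_scope = 1e6                    # $1 million per scope and lab costs
scopes    = budget / per_scope     # one million scopes

print(one_scope_secs / 3.15e7)          # ~2500 years for a single scope
print(one_scope_secs / scopes / 3600)   # ~22 hours for a million scopes
# Rounding the voxel count up to an even 10^21 gives the 3000 years and
# 28 hours quoted in this exchange.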

On Thursday 05 June 2008 03:44:14 pm, Matt Mahoney wrote:
 --- On Thu, 6/5/08, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 
  http://www.spectrum.ieee.org/print/6268
 
 Some rough calculations.  A human brain has a volume of 10^24 nm^3.  A scan 
of 5 x 5 x 50 nm voxels requires about 1000 exabytes = 10^21 bytes of storage 
(1 MB per synapse).  A scan would take a 10 GHz SEM 10^11 seconds = 3000 
years, or equivalently, 1 year for 3000 scanning electron microscopes running 
in parallel.
 
 -- Matt Mahoney, [EMAIL PROTECTED]
 
 
 
 
 






Re: [agi] Reverse Engineering The Brain

2008-06-05 Thread J Storrs Hall, PhD
Basically on the right track -- except that there isn't just one cognitive level. 
Are you thinking of working out the function of each topographically mapped 
area a la DNF? Each column in a Darwin machine a la Calvin? Conscious-level 
symbols a la Minsky?

On Thursday 05 June 2008 09:37:00 pm, Richard Loosemore wrote:
 
 There seems to be a good deal of confusion (on this list and also over 
 on the Singularity list) about what people actually mean when they talk 
 about building an AGI by emulating or copying the brain.
 
 There are two completely different types of project that seem to get 
 conflated in these discussions:
 
 1) Copying the brain at the neural level, which is usually assumed to be 
 a 'blind' copy - in other words, we will not know how it works, but will 
 just do a complete copy and fire it up.
 
 2) Copying the design of the human brain at the cognitive level.  This 
 may involve a certain amount of neuroscience, but mostly it will be at 
 the cognitive system level, and could be done without much reference to 
 neurons at all.
 
 
 Both of these ideas are very different from standard AI, but they are 
 also very different from one another.  The criticisms that can be 
 leveled against the neural-copy approach do not apply to the cognitive 
 approach, for example.
 
 It is frustrating to see commentaries that drift back and forth between 
 these two.
 
 My own position is that a cognitive-level copy is not just feasible but 
 well under way, whereas the idea of duplicating the neural level is just 
 a pie-in-the-sky fantasy at this point in time (it is not possible with 
 current or on-the-horizon technology, and will probably not be possible 
 until after we invent an AGI by some other means and get it to design, 
 build and control a nanotech brain scanning machine).
 
 Duplicating a system as complex as that *without* first understanding it 
 at the functional level seems pure folly:  one small error in the 
 mapping and the result could be something that simply does not work ... 
 and then, faced with a brain-copy that needs debugging, what would we 
 do?  The best we could do is start another scan and hope for better luck 
 next time.
 
 
 
 
 
 Richard Loosemore
 
 




Re: Are rocks conscious? (was RE: [agi] Did this message get completely lost?)

2008-06-04 Thread J Storrs Hall, PhD
Actually, the nuclear spins in the rock encode a single state of an ongoing 
computation (which is conscious). Successive states occur in the rock's 
counterparts in adjacent branes of the metauniverse, so that the rock is 
conscious not of unfolding time, as we see it, but of a journey across 
probability space.

What is the rock thinking?

 T h i s   i s   w a a a y   o f f   t o p i c . . . 

Josh

On Tuesday 03 June 2008 05:05:05 pm, Matt Mahoney wrote:
 --- On Tue, 6/3/08, John G. Rose [EMAIL PROTECTED] wrote:
  Actually on further thought about this conscious rock, I
  want to take that particular rock and put it through some
  further tests to absolutely verify with a high degree of
  confidence that there may not be some trace amount of
  consciousness lurking inside. So the tests that I would
  conduct are - 
  
  Verify the rock is in a solid state at close to absolute
  zero but not at absolute zero.
  The rock is not in the presence of a high frequency
  electromagnetic field.
  The rock is not in the presence of high frequency physical
  vibrational interactions.
  The rock is not in the presence of sonic vibrations.
  The rock is not in the presence of subatomic particle
  bombardment, radiation, or being hit by a microscopic black
  hole.
  The rock is not made of nano-robotic material.
  The rock is not an advanced, non-human derived, computer.
  The rock contains minimal metal content.
  The rock does not contain holograms.
  The rock does not contain electrostatic echoes.
  The rock is a solid, spherical structure, with no worm
  holes :)
  The rock...
  
  You see what I'm getting at. In order to be 100% sure.
  Any failed tests of the above would require further
  scientific analysis and investigation to achieve proper
  non-conscious certification.
 
 You forgot a test. The postions of the atoms in the rock encode 10^25 bits 
of information representing the mental states of 10^10 human brains at 10^15 
bits each. The data is encrypted with a 1000 bit key, so it appears 
statistically random. How would you prove otherwise?
 
 -- Matt Mahoney, [EMAIL PROTECTED]
 
 
 






Re: [agi] Neurons

2008-06-04 Thread J Storrs Hall, PhD
On Tuesday 03 June 2008 09:54:53 pm, Steve Richfield wrote:

 Back to those ~200 different types of neurons. There are probably some cute
 tricks buried down in their operation, and you probably need to figure out
 substantially all ~200 of those tricks to achieve human intelligence. If I
 were an investor, this would sure sound pretty scary to me without SOME sort
 of insurance like scanning capability, and maybe some simulations.

I'll bet there are just as many cute tricks to be found in computer 
technology, including software, hardware, fab processes, quantum mechanics of 
FETs, etc -- now imagine trying to figure all of them out at once by running 
Pentiums thru mazes with a few voltmeters attached. All at once because you 
never know for sure whether some gene expression pathway is crucially 
involved in dendrite growth for learning or is just a kludge against celiac 
disease. 

That's what's facing the neuroscientists, and I wish them well -- but I think 
we'll get to the working mind a lot faster studying things at a higher level.
For example:
http://repositorium.sdum.uminho.pt/bitstream/1822/5920/1/ErlhagenBicho-JNE06.pdf

Josh




Re: [agi] Neurons

2008-06-04 Thread J Storrs Hall, PhD
Well, Ray Kurzweil famously believes that AI must wait for the mapping of the 
brain. But if that's the case, everybody on this list may as well go home for 
20 years, or start running rats in mazes. 

I personally think the "millions of years of evolution" argument is a red 
herring. Technological development not only moves much faster than evolution, 
but takes leaps evolution can't. And evolution is always, crucially, obsessed 
with reproductive success. Evolution would never build an airplane, because 
airplanes can't reproduce. But we can, and thus capture the aspect of birds 
that's germane to our needs -- flying -- with an assortment of kludges. And 
planes are still NOWHERE NEAR as sophisticated as birds, and guess what: 100 
years later, they still don't lay eggs.

How much of the human mind is built around the necessity of eating, avoiding 
being eaten, finding mates, being obsessed with copulation, and raising and 
protecting children? Egg-laying for airplanes, in my view.

There are some key things we learned about flying by watching birds. But 
having learned them, we built machines to do what we wanted better than birds 
could. We'll do the same with the mind.

Josh


On Wednesday 04 June 2008 03:15:36 pm, Steve Richfield wrote:
 Josh,
 
 I apparently failed to clearly state my central argument. Allow me to try
 again in simpler terms:
 
 The difficulties in proceeding in both neuroscience and AI/AGI is NOT a lack
 of technology or clever people to apply it, but is rather a lack of
 understanding of the real world and how to effectively interact within
 it. Some clues as to the totality of the difficulties are the ~200 different
 types of neurons, and in the 40 years of ineffective AI/AGI research. I have
 seen NO recognition of this fundamental issue in other postings on this
 forum. This level of difficulty strongly implies that NO clever programming
 will ever achieve human-scale (and beyond) intelligence, until some way is
 found to mine the evolutionary lessons learned during the last ~200
 million years.
 
 Note that the CENTRAL difficulty in effectively interacting in the real
 world is working with and around the creatures that already inhabit it,
 which are the product of ~200 million years of evolution. Even a perfect
 AGI would have to have some very imperfect logic to help predict the
 actions of our world's present inhabitants. Hence, there seems (to me) that
 there is probably no simple solution, as otherwise it would have already
 evolved during the last ~200 million years, instead of evolving the highly
 complex creatures that we now are.
 
 That having been said, I will comment on your posting...
 
 On 6/4/08, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 
  On Tuesday 03 June 2008 09:54:53 pm, Steve Richfield wrote:
 
   Back to those ~200 different types of neurons. There are probably some
  cute
   tricks buried down in their operation, and you probably need to figure
  out
   substantially all ~200 of those tricks to achieve human intelligence. If
  I
   were an investor, this would sure sound pretty scary to me without SOME
  sort
   of insurance like scanning capability, and maybe some simulations.
 
  I'll bet there are just as many cute tricks to be found in computer
  technology, including software, hardware, fab processes, quantum mechanics
  of
  FETs, etc -- now imagine trying to figure all of them out at once by
  running
  Pentiums thru mazes with a few voltmeters attached. All at once because 
you
  never know for sure whether some gene expression pathway is crucially
  involved in dendrite growth for learning or is just a kludge against 
celiac
  disease.
 
 
 Of course, this has nothing to do with creating the smarts to deal with
 our very complex real world well enough to compete with us who already
 inhabit it.
 
 That's what's facing the neuroscientists, and I wish them well -- but I
  think
  we'll get to the working mind a lot faster studying things at a higher
  level.
 
 
 I agree that high level views are crucial, but with the present lack of
 low-level knowledge, I see no hope for solving all of the problems while
 remaining only at a high level.
 
 For example:
 
  
http://repositorium.sdum.uminho.pt/bitstream/1822/5920/1/ErlhagenBicho-JNE06.pdf
 
 
 From that article: Our close cooperation with experimenters from
 neuroscience and cognitive science has strongly influenced the proposed
 architectures for implementing cognitive functions such as goal inference
 and decision making. THIS is where efforts are needed - in bringing the
 disparate views together rather than keeping your head in the clouds with
 only a keyboard and screen in front of you.
 
 In the 1980s I realized that neither neuroscience nor AI could proceed to
 their manifest destinies until a system of real-world mathematics was
 developed that could first predict details of neuronal functionality, and
 then hopefully show what AI needed. The missing link seemed to be the lack
 of knowledge

Re: [agi] Neurons

2008-06-03 Thread J Storrs Hall, PhD
Strongly disagree. Computational neuroscience is moving as fast as any field 
of science has ever moved. Computer hardware is improving as fast as any 
field of technology has ever improved. 

I would be EXTREMELY surprised if neuron-level simulation were necessary to 
get human-level intelligence. With reasonable algorithmic optimization, and a 
few tricks our hardware can do and the brain can't (e.g. store sensory 
experience verbatim and replay it into learning algorithms as often as 
necessary), we should be able to knock 3 orders of magnitude or so off the 
pure-neuro HEPP estimate -- which puts us at ten high-end graphics cards, 
i.e. less than the price of a car (or just wait till 2015 and get one 
high-end PC).
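
A rough consistency check of that claim, using the neural-simulation figure 
Matt quotes elsewhere in this thread (~10^16 operations per second) and an 
assumed ~1 teraflop for a 2008 high-end graphics card:

# Figures assumed, not measured: pure-neuro estimate from this thread,
# minus "3 orders of magnitude or so", divided by a ~1 TFLOPS 2008 GPU.
pure_neuro_ops = 1e16
optimized_ops  = pure_neuro_ops / 1e3
gpu_ops_2008   = 1e12
print(optimized_ops / gpu_ops_2008)    # -> 10.0, i.e. "ten high-end graphics cards"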

Figuring out the algorithms is the ONLY thing standing between us and AI.

Josh

On Tuesday 03 June 2008 12:16:54 pm, Steve Richfield wrote:
 ... for the lack of a few million dollars, both computer science
 and neuroscience are stymied in the same respective holes that they have
 been in for most of the last 40 years.
 ...
 Meanwhile, drug companies are redirecting ~100% of medical research funding
 into molecular biology, nearly all of which leads nowhere.
 
 The present situation appears to be entirely too stable. There seems to be
 no visible hope past this, short of some rich person throwing a lot of money
 at it - and they are all too busy to keep up on forums like this one.
 
 Are we on the same page here?




Re: [agi] Did this message get completely lost?

2008-06-02 Thread J Storrs Hall, PhD
That's getting reasonably close, assuming you don't require the model to have 
any specific degree of fidelity -- there's a difference between being 
conscious of something and understanding it. 

The key is that we judge the consciousness of an entity based on the ability 
of its processes and datastructures to duplicate those abilities and 
reactions we see in ourselves and others as part of being conscious. 

I have a flashbulb memory of a flooded basement when I was 3. It includes the 
geometric arrangement of the stairs, back door of the house, a point of view 
at the top of the stairs, and the fact that there was deep water in the 
basement. That's it -- no idea what color the walls were, whether anyone said 
anything, etc. And no other memories at all before age 4.  I'd have to claim 
I was conscious then, and presumably much of the rest of the time at that 
age, because I was obviously parsing the world into a coherent account and 
would have been capable of short-term memories in that language.

If you talk to the average person, especially asking "why did you do that?" 
kinds of questions, it's amazing how much of what they say is confabulation and 
rationalization. To me that's evidence that they're *not* as conscious as 
they think they are -- and that their self-models, which they consult to 
answer such questions, are only loosely coupled to their actual mind 
mechanisms.

That in turn leads me to believe that we can see the limits of the illusion 
consciousness is giving us, and thus look under the hood, similar to the way 
we can understand more about the visual process by studying optical 
illusions. 

Josh

On Monday 02 June 2008 01:55:32 am, Jiri Jelinek wrote:
  On Sun, Jun 1, 2008 at 6:28 PM, J Storrs Hall, PhD [EMAIL PROTECTED] 
wrote:
  Why do I believe anyone besides me is conscious? Because they are made of
  meat? No, it's because they claim to be conscious, and answer questions 
about
  their consciousness the same way I would, given my own conscious
  experience -- and they have the same capabilities
 
 Would you agree that they are conscious of X when they demonstrate the
 ability to build mental models that include an abstract X concept that
 (at least to some degree) corresponds (and is intended to correspond)
 to the real world representation/capabilities of X?
 In the case of self-consciousness, the X would simply = self.
 
 Regards,
 Jiri Jelinek
 




[agi] Neurons

2008-06-02 Thread J Storrs Hall, PhD
One good way to think of the complexity of a single neuron is to think of it 
as taking about 1 MIPS to do its work at that level of organization. (It has 
to take an average of 10k inputs and process them at roughly 100 Hz.) 

This is essentially the entire processing power of the DEC KA10, i.e. the 
computer that all the classic AI programs (up to, say, SHRDLU) ran on. One 
real-time neuron equivalent. (back in 1970 it was a 6-figure machine -- 
nowadays, same power in a 50-cent PIC microcontroller).

A neuron does NOT simply perform a dot product and feed it into a sigmoid. 
One good way to think of what it can do is to imagine a 100x100 raster 
lasting 10 ms. It can act as an associative memory for a fairly large number 
of such clips, firing in an arbitrary stored pattern when it sees one of them 
(or anything close enough).

Compared to that, the ability to modify its behavior based on a handful of 
global scalar variables (the concentrations of neurotransmitters etc) is 
trivial.

Not simple -- how many ways could you program a KA10? But limited nonetheless. 
It still takes 30 billion of them to make a brain.
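
The round numbers above, multiplied out (these are the estimates used in this 
post, not measurements):

inputs_per_neuron = 1e4        # ~10k synaptic inputs
update_rate_hz    = 100        # processed at roughly 100 Hz
ops_per_neuron    = inputs_per_neuron * update_rate_hz
print(ops_per_neuron)          # 1e6 -> ~1 MIPS, one real-time KA10 per neuron

neurons = 30e9                 # "30 billion of them to make a brain"
print(neurons * ops_per_neuron)    # 3e16 ops/s for the whole brain at this level

# The 100x100 raster over 10 ms is the same figure seen as a spatiotemporal
# pattern: 10,000 samples per 10 ms = 1e6 samples per second per neuron.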

Josh




Re: [agi] Did this message get completely lost?

2008-06-02 Thread J Storrs Hall, PhD
On Monday 02 June 2008 03:00:24 pm, John G. Rose wrote:

 A rock is either conscious or not conscious. Is it less intellectually 
sloppy to declare it not conscious?

A rock is not conscious. I'll stake my scientific reputation on it. 
(this excludes silicon rocks with micropatterned circuits :-)

J




Re: [agi] Consciousness vs. Intelligence

2008-06-01 Thread J Storrs Hall, PhD
On Saturday 31 May 2008 10:23:15 pm, Matt Mahoney wrote:

 Unfortunately AI will make CAPTCHAs useless against spammers.  We will need 
to figure out other methods.  I expect that when we have AI, most of the 
world's computing power is going to be directed at attacking other computers 
and defending against attacks.  It is no different than evolution.  A 
competitive environment makes faster rabbits and faster foxes.  Without 
hostility, why would we need such large brains?

In the biological world, big brains evolved to support reciprocal altruism, 
which requires recognizing individuals and knowing which ones owe you one 
and vice versa.

http://en.wikipedia.org/wiki/Reciprocal_altruism

Going back to Trivers' first studies: bats that practice R.A. have brains 
three times the size of ones that don't.

Josh




[agi] Did this message get completely lost?

2008-06-01 Thread J Storrs Hall, PhD
Originally sent several days back...

Why do I believe anyone besides me is conscious? Because they are made of 
meat? No, it's because they claim to be conscious, and answer questions about 
their consciousness the same way I would, given my own conscious 
experience -- and they have the same capabilities, e.g. of introspection, 
1-shot learning, synthesis of novel ideas, and access to episodic memory in 
narrative form (etc.) that I associate with being conscious myself.

Build a machine that does *all* of these things and you have no better reason 
to claim it isn't conscious than you have to claim a person isn't.

Josh




Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-27 Thread J Storrs Hall, PhD
On Monday 26 May 2008 09:55:14 am, Mark Waser wrote:
 Josh,
 
 Thank you very much for the pointers (and replying so rapidly).

You're welcome -- but also lucky; I read/reply to this list a bit sporadically 
in general.

 
  You're very right that people misinterpret and over-extrapolate econ and 
  game
  theory, but when properly understood and applied, they are a valuable tool
  for analyzing the forces shaping the further evolution of AGIs and indeed 
  may
  be our only one.
 
 No.  I would argue that there is a lot of good basic research into human and 
 primate behavior that is more applicable since it's already been tested and 
 requires less extrapolation (and visibly shows where a lot of current 
 extrapoloation is just plain wrong).

It's interesting that behavioral economics appeared only fairly recently, to 
study the ways in which humans act irrationally in their economic choices. 
(See Predictably Irrational by Dan Ariely, e.g.) But it's been observed for a 
while that people tend to act more rationally in economic settings than 
non-economic ones, and there's no reason to believe that we couldn't build an 
AI to act more rationally yet. In other words, actors in the economic world 
will be getting closer and closer to the classic economic agent as time goes 
by, and so classic econ will be a better description of the world than it is 
now.
 
 The true question is, how do you raise the niceness of *all* players and 
 prevent defection -- because being the single bad guy is a winning strategy 
 while being just one among many is horrible for everyone.

Intelligence. You identify the bad guys and act nasty just to them. Finding 
ways to do this robustly and efficiently is the basis of human society.

  So, in simplistic computer simulations at least, evolution seems to go 
  through
  a set of phases with different (and improving!) moral character.
 
 So why do so many people think evolution favors the exactly the opposite? 

Several reasons -- the first being that evolution education and literacy in 
this country are crap, thanks to a century and a half of religious propaganda 
and activism.

Another is that people tend to study evolution at whatever level that 
predation and arms races happen, and don't pay attention to the levels where 
cooperation does. Example: lions vs zebras -- ignoring the fact that the 
actual units of evolution are the genes, which have formed amazingly 
cooperative systems to create a lion or zebra in the first place.

And even then, the marketplace can channel evolution in better ways. It's a 
quantum jump higher on the moral ladder than the jungle...

Miller and Drexler write:

(http://www.agorics.com/Library/agoricpapers/ce/ce0.html)
...

Ecology textbooks show networks of predator-prey relationships -- called food 
webs -- because they are important to understanding ecosystems; symbiosis webs 
have found no comparable role. Economics textbooks show networks of trading 
relationships circling the globe; networks of predatory or negative-sum 
relationships have found no comparable role. (Even criminal networks 
typically form cooperative black markets.) One cannot prove the absence of 
such spanning symbiotic webs in biology, or of negative-sum webs in the 
market; these systems are too complicated for any such proof. Instead, the 
argument here is evolutionary: that the concepts which come to dominate an 
evolved scientific field tend to reflect the phenomena which are actually 
relevant for understanding its subject matter.

4.5 Is this picture surprising?

Nature is commonly viewed as harmonious and human markets as full of strife, 
yet the above comparison suggests the opposite. The psychological prominence 
of unusual phenomena may explain the apparent inversion of the common view. 
Symbiosis stands out in biology: we have all heard of the unusual 
relationship between crocodiles and the birds that pluck their parasites, but 
one hears less about the more common kind of relationship between crocodiles 
and each of the many animals they eat. Nor, in considering those birds, is 
one apt to dwell on the predatory relationship of the parasites to the 
crocodile or of the birds to the parasites. Symbiosis is unusual and 
interesting; predation is common and boring.

Similarly, fraud and criminality stand out in markets. Newspapers report major 
instances of fraud and embezzlement, but pay little attention to each day's 
massive turnover of routinely satisfactory cereal, soap, and gasoline in 
retail trade. Crime is unusual and interesting; trade is common and boring.

Psychological research indicates that human thought is subject to a systematic 
bias: vivid and interesting instances are more easily remembered, and easily 
remembered instances are thought to be more common [21]. Further, the press 
(and executives) like to describe peaceful competition for customer favor as 
if it were mortal combat, complete with wounds and rolling heads: again, 
vividness wins 

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-26 Thread J Storrs Hall, PhD
On Monday 26 May 2008 06:55:48 am, Mark Waser wrote:
  The problem with accepted economics and game theory is that in a proper
  scientific sense, they actually prove very little and certainly far, FAR
  less than people extrapolate them to mean (or worse yet, prove).
 
  Abusus non tollit usum.
 
 Oh Josh, I just love it when you speak Latin to me!  It makes you seem s 
 smart . . . .
 
 But, I don't understand your point.  What argument against proper use do you 
 believe that I'm making?  Or, do you believe that Omohundro is making 
 improper use of AEFGT?

You're very right that people misinterpret and over-extrapolate econ and game 
theory, but when properly understood and applied, they are a valuable tool 
for analyzing the forces shaping the further evolution of AGIs and indeed may 
be our only one.

 Could you please give some references (or, at least, pointers to pointers) 
 that show the existence of the moral ladder?  I'd appreciate it and could 
 use them for something else.  Thanks!

BAI p. 178-9:

Further research into evolutionary game theory shows that the optimal strategy 
is strongly dependent on the environment constituted by other players. In a 
population of all two-state automata (of which tit-for-tat is one), a program 
by the name of GRIM is optimal. GRIM cooperates until its opponent defects 
just once, and always defects after that. The reason it does well is that the 
population has quite a few programs whose behavior is oblivious or random. 
Rather than trying to decipher them, it just shoots them all and lets 
evolution sort them out.

Chances are Axelrod's original tournaments are a better window into parts of 
the real, biological evolutionary dynamic than are the later tournaments with 
generated agents. The reason is that genetic algorithms are still unable to 
produce anything nearly as sophisticated as human programmers. Thus GRIM, for 
example, gets a foothold in a crowd of unsophisticated opponents. It wouldn't 
do you any good to be forgiving or clear if the other program were random. 

But in the long run, slightly nicer programs can out-compete slightly nastier 
ones, and then in turn be out-competed by slightly nicer ones yet. For 
example, in a simulation with "noise," meaning that occasionally at random 
a "cooperate" is turned into a "defect," tit-for-tat gets hung up in 
feuds, and a generous version that occasionally forgives a defection does 
better -- but only if the really nasty strategies have been knocked out by 
tit-for-tat first. Even better is a strategy called Pavlov, due to an 
extremely simple form of learning. Pavlov repeats its previous play if it 
"won," and switches if it "lost." In particular, it cooperates whenever 
both it and its opponent did the same thing the previous time -- it's a true, 
if very primitive, "cahooter." Pavlov also needs the underbrush to be 
cleared by a "stern retaliatory strategy like tit-for-tat."

So, in simplistic computer simulations at least, evolution seems to go through 
a set of phases with different (and improving!) moral character.

Karl Sigmund, Complex Adaptive Systems and the Evolution of Reciprocation, 
International Institute for Applied Systems Analysis Interim Report 
IR-98-100; see http://www.iiasa.ac.at.

There's a lot of good material at 
http://jasss.soc.surrey.ac.uk/JASSS.html
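
For anyone who wants to play with this, here's a minimal iterated-prisoner's- 
dilemma tournament in Python with the strategies named above (the payoffs are 
the standard Axelrod values; which strategy comes out on top depends on the 
mix of opponents and the noise level, which is exactly the point of the 
passage):

import itertools, random

# Standard Axelrod payoffs: (my move, their move) -> my score. C = cooperate, D = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(my_hist, their_hist):
    return their_hist[-1] if their_hist else "C"

def grim(my_hist, their_hist):
    # cooperates until the opponent defects once, then defects forever
    return "D" if "D" in their_hist else "C"

def pavlov(my_hist, their_hist):
    # win-stay / lose-shift: repeat the last move if it scored 3 or 5, else switch
    if not my_hist:
        return "C"
    won = PAYOFF[(my_hist[-1], their_hist[-1])] >= 3
    return my_hist[-1] if won else ("D" if my_hist[-1] == "C" else "C")

def random_player(my_hist, their_hist):
    return random.choice("CD")

def play(a, b, rounds=200, noise=0.0):
    ha, hb, sa, sb = [], [], 0, 0
    for _ in range(rounds):
        ma, mb = a(ha, hb), b(hb, ha)
        # "noise": occasionally a cooperate is turned into a defect
        if noise and ma == "C" and random.random() < noise: ma = "D"
        if noise and mb == "C" and random.random() < noise: mb = "D"
        sa += PAYOFF[(ma, mb)]
        sb += PAYOFF[(mb, ma)]
        ha.append(ma); hb.append(mb)
    return sa, sb

players = {"tit-for-tat": tit_for_tat, "GRIM": grim,
           "Pavlov": pavlov, "random": random_player}
totals = dict.fromkeys(players, 0)
for (na, pa), (nb, pb) in itertools.combinations(players.items(), 2):
    sa, sb = play(pa, pb, noise=0.02)
    totals[na] += sa
    totals[nb] += sb
for name in sorted(totals, key=totals.get, reverse=True):
    print(name, totals[name])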

 
 Also, I'm *clearly* not arguing his basic starting point or the econ 
 references.  I'm arguing his extrapolations.  Particularly the fact that his 
 ultimate point that he claims applies to all goal-based systems clearly does 
 not apply to human beings. 

I think we're basically in agreement here.

Josh





Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-25 Thread J Storrs Hall, PhD
The paper can be found at 
http://selfawaresystems.files.wordpress.com/2008/01/nature_of_self_improving_ai.pdf

Read the appendix, p37ff. He's not making arguments -- he's explaining, with a 
few pointers into the literature, some parts of completely standard and 
accepted economics and game theory. It's all very basic stuff.

On Sunday 25 May 2008 06:26:59 am, Jim Bromer wrote:
 
 - Original Message 
 From: Richard Loosemore [EMAIL PROTECTED]
 Richard Loosemore said:
 
 If you look at his paper carefully, you will see that at every step of
 the way he introduces assumptions as if they were obvious facts ... and
 in all the cases I have bothered to think through, these all stem from
 the fact that he has a particular kind of mechanism in mind (one which
 has a goal stack and a utility function).  There are so many of these
 assertions pulled out of think air that I found it gave me a headache
 just to read the paper. ...
 
 But this is silly:  where was his examination of the systems various
 motives?  Where did he consider the difference between different
 implementations of the entire motivational mechanism (my distinction
 between GS and MES systems)?  Nowhere.  He just asserts, without
 argument, that the system would be obsessed, and that any attempt by us
 to put locks on the system would result in an arms race of measures and
 countermeasures.
 
 That is just one example of how he pulls conclusions out of thin air.
 ---
 
 Your argument about the difference between a GS and an MES system is a 
strawman argument.  Omohundro never made the argument, nor did he touch on it 
as far as I can tell.  I did not find his paper very interesting either, but 
you are the one who seems to be pulling conclusions out of thin air.
 
 You can introduce the GS vs MES argument if you want, but you cannot then 
argue from the implication that everyone has to refer to it or else stand 
guilty of pulling arguments out of thin air.
 
 His paper Nature of Self Improving Artificial Intelligence September 5, 
2007, revised January 21, 2008 provides a lot of reasoning.  I don't find the 
reasoning compelling, but the idea that he is just pulling conclusions out of 
thin air is just bluster.
 
 Jim Bromer
 
 
 
   
 
 




---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-25 Thread J Storrs Hall, PhD
On Sunday 25 May 2008 10:06:11 am, Mark Waser wrote:
  Read the appendix, p37ff. He's not making arguments -- he's explaining, 
  with a
  few pointers into the literature, some parts of completely standard and
  accepted economics and game theory. It's all very basic stuff.
 
 The problem with accepted economics and game theory is that in a proper 
 scientific sense, they actually prove very little and certainly far, FAR 
 less than people extrapolate them to mean (or worse yet, prove).

Abusus non tollit usum (misuse does not rule out proper use).
 
 All of the scientific experiments in game theory are very, VERY limited and 
 deal with entities with little memory in small, toy systems.  If you 
 extrapolate their results with no additional input and no emergent effects, 
 you can end up with arguments like Omohundro's BUT claiming that this 
 extrapolation *proves* anything is very poor science.  It's just 
 speculation/science fiction and there are any number of reasons to believe 
 that Omohundro's theories are incorrect -- the largest one, of course, being 
 If all goal-based systems end up evil, why isn't every *naturally* 
 intelligent entity evil?

Actually, modern (post-Axelrod) evolutionary game theory handles this pretty 
well, and shows the existence of what I call the moral ladder. BTW, I've had 
extended discussions with Steve O. about it, and consider his ultimate 
position to be over-pessimistic -- but his basic starting point (and the econ 
theory he references) is sound.

Josh


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-25 Thread J Storrs Hall, PhD
On Sunday 25 May 2008 07:51:59 pm, Richard Loosemore wrote:
 This is NOT the paper that is under discussion.

WRONG.

This is the paper I'm discussing, and is therefore the paper under discussion.


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-25 Thread J Storrs Hall, PhD
In the context of Steve's paper, however, rational simply means an agent who 
does not have a preference circularity.
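
To be concrete, a preference circularity is just a cycle in the agent's
strict-preference relation; a minimal check (my illustration, nothing from
Steve's paper) might look like:

    def has_preference_cycle(prefers):
        # prefers maps each outcome to the set of outcomes it is strictly
        # preferred over, e.g. {'A': {'B'}, 'B': {'C'}, 'C': {'A'}}.
        def loops_back(start):
            seen, stack = set(), [start]
            while stack:
                x = stack.pop()
                for y in prefers.get(x, ()):
                    if y == start:
                        return True
                    if y not in seen:
                        seen.add(y)
                        stack.append(y)
            return False
        return any(loops_back(x) for x in prefers)

    print(has_preference_cycle({'A': {'B'}, 'B': {'C'}, 'C': {'A'}}))  # True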

On Sunday 25 May 2008 10:19:35 am, Mark Waser wrote:
 Rationality and irrationality are interesting subjects . . . .
 
 Many people who endlessly tout rationality use it as an exact synonym for 
 logical correctness and then argue not only that irrational then means 
 logically incorrect and therefore wrong but that anything that can't be 
 proved is irrational.
 


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-24 Thread J Storrs Hall, PhD
On Saturday 24 May 2008 06:55:24 pm, Mark Waser wrote:
 ...Omuhundro's claim...

 YES!  But his argument is that to fulfill *any* motivation, there are 
 generic submotivations (protect myself, accumulate power, don't let my 
 motivation get perverted) that will further the search to fulfill your 
 motivation.


It's perhaps a little more subtle than that. (BTW, note I made the same 
arguments re submotivations in Beyond AI p. 339)

Steve points out that any motivational architecture that cannot be reduced to 
a utility function over world states is incoherent in the sense that the AI 
could be taken advantage of in purely uncoerced transactions by any other 
agent that understood its motivational structure. Thus one can assume that 
non-utility-function-equivalent AIs (not to mention humans) will rapidly lose 
resources in a future world and thus it won't particularly matter what they 
want.
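
To illustrate the vulnerability with a toy example (made-up numbers, my sketch,
not Steve's): an agent whose preferences cycle A -> B -> C -> A will pay a small
fee for each "upgrade" and can be walked around the cycle indefinitely in
perfectly uncoerced trades.

    # The agent strictly prefers B to A, C to B, and A to C, and will accept
    # any uncoerced trade to something it prefers, for a small fee.
    upgrade = {'A': 'B', 'B': 'C', 'C': 'A'}     # what it prefers to what it holds
    holding, cash, fee = 'A', 100.0, 1.0

    for _ in range(9):
        offer = upgrade[holding]                 # always something it prefers
        if cash >= fee:
            holding, cash = offer, cash - fee    # "rational" at every single step
    print(holding, cash)                         # 'A' 91.0 -- same item, 9.0 poorer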

If you look at the suckerdom of average humans in today's sub-prime mortgage, 
easy credit, etc., markets, there's ample evidence that it won't take evil AI 
to make this economic cleansing environment happen.  And the powers that be 
don't seem to be any too interested in shielding people from it...

So Steve's point is that utility-function-equivalent AIs will predominate 
simply by lack of that basic vulnerability (and the fact that it is a 
vulnerability is a mathematically provable theorem) which is a part of ANY 
other motivational structure.

The rest (self-interest, etc) follows, Q.E.D.

Josh


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Re: [agi] Deliberative vs Spatial intelligence

2008-04-29 Thread J Storrs Hall, PhD
I disagree with your breakdown. There are several key divides:

Concrete vs abstract
Continuous vs discrete
spatial vs symbolic
deliberative vs reactive

I can be very deliberative, thinking in 2-d pictures (when designing a machine 
part in my head, for example).  I know lots of people who are completely 
reactive in the symbolic world, hearing and replying to words by reflex 
(Yes, dear).  (Believe it or not, this can even happen when typing messages 
to mailing lists.)

Spatial to symbolic actually happens quite early in evolution. A housefly has 
to recognize a pattern on its eyes and decide all at once to flee or not -- 
it can't fly off with just the half its body the threat appears to.  It has 
classified the picture of you with your flyswatter into a discrete category.

A crow bending a wire as a tool is deliberating but thinking in concrete 
terms, rather than abstractions. In fact, the jump to abstraction is probably 
the most human-specific, latest biologically, of the distinctions. But it is 
*easy* for a computer, which starts out working with, and being understood 
by, abstractions in the first place. 

I claim that we can and do think in each of the 16 modes implied by the above 
(and others as well).

I think the key to AI is not so much to figure how to operate in any given one 
of them, but how to operate in more than one, using one as a pilot wave or 
boundary condition for another.  *Creating* symbols from continuous 
experience. Forming a conditioned reflex by deliberation and practice.

Figure out the reduction ratio of a planetary gear drive as a function of the 
number of teeth on the sun and planet gears. You can't do it without using 
both visualization and algebra.
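
For the record, here is the little mix of the two that exercise takes, under one
standard configuration (ring gear fixed, sun as input, planet carrier as output);
a worked sketch, with sample tooth counts of my own choosing:

    def planetary_reduction(sun_teeth, planet_teeth):
        # Geometry (the "visualization" part): the ring gear has to span the
        # sun plus a planet on each side, so R = S + 2P (ignoring assembly
        # constraints on planet spacing).
        ring_teeth = sun_teeth + 2 * planet_teeth
        # Algebra: with the ring fixed, sun in, carrier out, the reduction is
        # 1 + R/S, i.e. 2 + 2P/S.
        return 1 + ring_teeth / sun_teeth

    print(planetary_reduction(sun_teeth=24, planet_teeth=18))  # -> 3.5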

Now go out onto the tennis court and return a high kick serve wide to your 
forehand in the deuce court. You have to watch the server's motion, the 
ball's trajectory, estimate its spin, predict its flight after the bounce, 
note whether it was in the service court and decide whether to stop play and 
call it out, decide where to return it and with what stroke, all in less than 
a second. Purely reactive, but also an irreducible mixture of the spatial and 
symbolic.

Josh


On Tuesday 29 April 2008 04:46:29 am, Russell Wallace wrote:
...
 In biological evolution, S came first, of course. It was hard - likely
 a hard step in the Great Filter - to make D on top of S. It was done,
 still, and he who thinks we should try S first, then D, is not
 necessarily irrational, even though I disagree with him.
 
 I have some outline ideas on how to make S, but not scalably, not that
 would easily generalize. So I think D should come first; and I think I
 now know how to make D, in a way that would hopefully then scale to S.
 I do not, of course, expect anyone except me to believe those personal
 claims; but they are my reasons for believing the right path is D then
 S.
 
 Is there a consensus at least that AGI paths fall into the two
 categories of D-then-S or S-then-D?
 

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] Deliberative vs Spatial intelligence

2008-04-29 Thread J Storrs Hall, PhD
This is all pretty old stuff for mainstream AI -- see Herb Simon and bounded 
rationality.  What needs work is the cross-modal interaction, and 
understanding the details of how the heuristics arise in the first place from 
the pressures of real-time processing constraints and deliberative modelling.

Josh


On Tuesday 29 April 2008 11:12:28 am, Mike Tintner wrote:
 Josh:You can't do it without using
  both visualization and algebra... Now go out onto the tennis court and 
  return a high kick serve wide to your
  forehand in the deuce court.
 
 Josh/Bob:
 
 What do Gigerenzer's fast and frugal heuristics have to say about this? ...

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] Deliberative vs Spatial intelligence

2008-04-29 Thread J Storrs Hall, PhD
This is poppycock.  The people who are really good at something like that do 
something just as simple but much more general. They have an associative memory of 
lots of balls they have seen and tried to catch. This includes not only the 
tracking sight of the ball, but things like the feel of the wind, the sound 
of the bat or racquet, and so forth.  They know from this experience which 
ones went over their heads when they only stepped back instead of running, 
and which ones came right to them, and so forth. 

Simple example -- in tennis, at the net, you have to make split-second 
decisions about whether to try to hit balls going over your head depending on 
whether you think they'll go over the baseline, 30 feet behind you.  Gaze 
angle is hopeless. Memory interpolation / experience works great. 
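
A toy version of that memory-interpolation idea (my sketch -- the features and
numbers are invented, and it says nothing about how such memories are actually
indexed): store past balls as feature vectors tagged with what happened, and
judge a new one by its nearest stored neighbours.

    import math

    # Each remembered ball: (features, went_long), where the features might be
    # apparent speed, height over the net, and amount of topspin.
    memory = [
        ((30.0, 2.5, 0.8), True),
        ((22.0, 1.2, 0.1), False),
        ((28.0, 2.0, 0.6), True),
        ((24.0, 1.5, 0.3), False),
    ]

    def looks_long(ball, k=3):
        # Vote among the k most similar remembered balls.
        nearest = sorted(memory, key=lambda m: math.dist(m[0], ball))[:k]
        votes = [went_long for _, went_long in nearest]
        return sum(votes) > k / 2

    print(looks_long((27.0, 1.9, 0.55)))   # True: let it go, it's sailing long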

Why do you think it takes ten years of full time application and practice to 
become expert at any given human pursuit?

BTW, Simon is the only Nobel laureate among the founding fathers of classical 
AI, and it was for bounded rationality. Anyone with a hope of a prayer of a 
claim to AI literacy ought to know about it.

On Tuesday 29 April 2008 05:05:26 pm, Mike Tintner wrote:
 Josh,
 
 Gigerenzer doesn't sound like old stuff or irrelevant to me , with my 
 limited knowledge,  (and also seems like a pretty good example of how v. 
 much more practical it can be to think imaginatively than mathematically, 
 no?)::
 
 how do real people make good decisions under the usual conditions of little 
 time and scarce information? Consider how players catch a ball-in baseball, 
 cricket, or soccer. It may seem that they would have to solve complex 
 differential equations in their heads to predict the trajectory of the ball. 
 In fact, players use a simple heuristic. When a ball comes in high, the 
 player fixates the ball and starts running. The heuristic is to adjust the 
 running speed so that the angle of gaze remains constant -that is, the angle 
 between the eye and the ball. The player can ignore all the information 
 necessary to compute the trajectory, such as the ball's initial velocity, 
 distance, and angle, and just focus on one piece of information, the angle 
 of gaze.
 

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-23 Thread J Storrs Hall, PhD
On Tuesday 22 April 2008 01:22:14 pm, Richard Loosemore wrote:

 The solar system, for example, is not complex:  the planets move in 
 wonderfully predictable orbits.

http://space.newscientist.com/article/dn13757-solar-system-could-go-haywire-before-the-sun-dies.html?feedId=online-news_rss20

How will life on Earth end? The answer, of course, is unknown, but two new 
studies suggest a collision with Mercury or Mars could doom life long before 
the Sun swells into a red giant and bakes the planet to a crisp in about 5 
billion years.
The studies suggest that the solar system's planets will continue to orbit 
the Sun stably for at least 40 million years. But after that, they show there 
is a small but not insignificant chance that things could go terribly awry.

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?

2008-04-22 Thread J Storrs Hall, PhD
Thank you!  This feeds back into the feedback discussion, in a way, at a high 
level. There's a significant difference between research programming and 
production programming. The production programmer is building something which 
is (nominally) understood and planned ahead of time. The researcher is 
putting together something new to see if it works. All the knowledge flow 
goes from production programmer to the system. The important element of 
knowledge is supposed to flow from the system to the researcher.

This is important because AGIers are researchers (if we have any sense). We 
have a lot to learn about generally intelligent systems. But even more to the 
point is the fact that our systems themselves must be research programmers. 
To learn about a new thing, they must program themselves to be able to 
recognize, predict, and/or imitate it. So it's worth our time to watch 
ourselves programming because that's one thing our systems will have to do 
too.

As for the theory, I said I think there is one, not that I necessarily know 
what it is :-) However, you can begin with the observation that if your 
architecture is a network of sigmas, it's clearly necessary to provide the 
full context and sensory information to each sigma for it to record the 
appropriate trajectory in its local memory.

(Anyone interested: sigmas are explained in somewhat more detail in Ch. 13 of 
Beyond AI)

On Monday 21 April 2008 09:47:53 pm, Derek Zahn wrote:
 Josh writes: You see, I happen to think that there *is* a consistent, 
general, overall  theory of the function of feedback throughout the 
architecture. And I think  that once it's understood and widely applied, a 
lot of the architectures  (repeat: a *lot* of the architectures) we have 
floating around here will  suddenly start working a lot better.
 Want to share this theory? :)
  
 Oh, by the way, of the ones I read so far, I thought your Variac paper was 
the most interesting one from AGI-08.  I'm particularly interested to hear 
more about sigmas and your thoughts on  transparent, composable, and robust 
programming languages.  I used to think about some slightly related topics 
and thought more in terms of evolvability and plasticity (and did not 
consider opaqueness at all) but I think your approach to thinking about 
things is quite exciting.
  
  
 
 ---
 agi
 Archives: http://www.listbox.com/member/archive/303/=now
 RSS Feed: http://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription: 
http://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com
 


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?

2008-04-21 Thread J Storrs Hall, PhD
(Aplogies for inadvertent empty reply to this :-)

On Saturday 19 April 2008 11:35:43 am, Ed Porter wrote:
 WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?

In a single word: feedback.

At a very high level of abstraction, most of the AGI (and AI for that matter) 
schemes I've seen can be caricatured as follows:

1. Receive data from sensors.
2. Interpret into higher-level concepts.
3. Then a miracle occurs.
4. Interpret high-level actions from 3 into motor commands.
5. Send to motors.

What's wrong with this? It implicitly assumes that data flows from 1 to 5 in 
waterfall fashion, and that feedback, if any, occurs either within 3 or as a 
loop thru the external world.

Problem is, in brains, there are actually more nerve fibers transmitting data 
from higher numbers to lower, i.e. backwards, than forwards. I think that the 
interpretation of sensory input is a much more active process than we AGIers 
realize, and that doing things requires a lot more sensing.
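
To caricature the opposite arrangement (a sketch of my own, not a claim about how
anyone's architecture, or the brain, actually does it): let the higher level
continually predict what the sensors should report, and let only the prediction
error flow upward -- the same error that corrects the model.

    # A trivial scalar "world model": predict the next reading as a running
    # estimate, pass only the surprise upward, use that surprise to learn.
    def perceive(sensor_stream, learning_rate=0.2):
        estimate, errors = 0.0, []
        for reading in sensor_stream:
            prediction = estimate                 # top-down: what we expect
            error = reading - prediction          # bottom-up: only the surprise
            estimate += learning_rate * error     # feedback corrects the model
            errors.append(error)
        return estimate, errors

    estimate, errors = perceive([1.0, 1.1, 0.9, 1.0, 5.0])
    print(round(estimate, 2), [round(e, 2) for e in errors])
    # the error stream settles while the input is predictable, then jumps at
    # the surprise -- which is exactly what deserves attention upstairs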

Here's a quip that feels like it has some relevance:
What's the difference between a physicist and an engineer? A physicist is 
someone who spends all his time building machinery, to help him write an 
equation. An engineer is someone who spends all his time writing equations, 
in order to build machinery.

Josh

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?

2008-04-21 Thread J Storrs Hall, PhD
On Saturday 19 April 2008 11:35:43 am, Ed Porter wrote:
 WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?
 
 With the work done by Goertzel et al, Pei, Joscha Bach
 http://www.micropsi.org/ , Sam Adams, and others who spoke at AGI 2008, I
 feel we pretty much conceptually understand how build powerful AGI's.  I'm
 not necessarily saying we know all the pieces of the puzzle, but rather that
 we know enough to start building impressive intelligences, and once we build
 them we will be in a much better position to find out what are the other
 missing conceptual pieces of the puzzle--- if any.
 
 As I see it --- the major problem is in selecting from all we know, the
 parts necessary to build a powerful artificial mind, at the scale needed, in
 a way that works together well, efficiently, and automatically.  This would
 include a lot of parameter tuning and determining of which competing
 techniques for accomplishing the same end are most efficient at the scale
 and in the context needed.  
 
 But I don't see any major aspects of the problem that we don't already have
 what appear to be good ways for addressing, once we have all the pieces put
 together.
 
 I ASSUME --- HOWEVER --- THERE ARE AT LEAST SOME SUCH MISSING CONCEPTUAL
 PARTS OF THE PUZZLE --- AND I AM JUST FAILING TO SEE THEM.
 
 I would appreciate it if those on this list could point out what significant
 conceptual aspect of the AGI problem are not dealt with by a reasonable
 synthesis drawn from works like that of Goertzel et al., Pei Wang, Joscha
 Bach, and Stan Franklin --- other than the problems acknowledge above 
 
 IT WOULD BE VALUABLE TO HAVE A DISCUSSION OF --- AND MAKE A LIST OF --- WHAT
 --- IF ANY --- MISSING CONCEPTUAL PIECES EXIST IN AGI.  If there are any
 such good list, please provide pointers to them.
 
 I WILL CREATE A SUMMARIZED LIST OF ALL THE SIGNIFICANT MISSING PIECES OF THE
 AGI PUZZLE THAT ARE SENT TO THE AGI LIST UNDER THIS THREAD NAME, WITH THE
 PERSON SENDING EACH SUCH SUGGESTION WITH THE DATE OF THEIR POST IF IT
 CONTAINS VALUABLE DESCRIPTION OF THE UNSOLVED PROBLEM INVOLVED NOT CONTAINED
 IN MY SUMMARY --- AND I WILL POST IT BACK TO THE LIST.  I WILL TRY TO
 COMBINE SIMILAR SUGGESTIONS WERE POSSIBLE TO MAKE THE LIST MORE CONCISE AND
 FOCUSED
 
 For purposes of creating this list of missing conceptual issues --- let us
 assume we have very powerful hardware --- but hardware that is realistic
 within at least a decade (1).  Let us also assume we have a good massively
 parallel OS and programming language to realize our AGI concepts on such
 hardware.  We do this to remove the absolute barriers to human-level
 intelligent created by the limited hardware current AGI scientists have to
 work with and to allow a systems to have the depth of representation and
 degree of massively parallel inference necessary for human-level thought.
 
 --
 (1) Let us say the hardware has 100TB of RAM --- and theoretical values of
 1000TOpp/sec --- 1000T random memory read or writes/sec -- and an
 X-sectional band of 1T 64Byte Messages/ sec (with the total number of such
 messages per second going up, the shorter the distance they travel within
 the 100T memory space).  Assume in addition a tree net for global broadcast
 and global math and control functions with a total latency to and from the
 entire 100TBytes of several micro seconds. In Ten years such hardware may
 sell for under two million dollars.  It is probably more than is needed for
 human level AGI, but it gives us room to be inefficient, and significantly
 frees us from having to think compulsively about locality of memory.
 
 ---
 agi
 Archives: http://www.listbox.com/member/archive/303/=now
 RSS Feed: http://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription: 
http://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com
 


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?

2008-04-21 Thread J Storrs Hall, PhD
On Monday 21 April 2008 05:33:01 pm, Ed Porter wrote:
 I don't think your 5 steps do justice to the more sophisticated views of AGI
 that are out their.  

It was, as I said, a caricature. However, look, e.g., at the overview graphic 
of this LIDA paper (page 8)
http://bernardbaars.pbwiki.com/f/Baars++Franklin+GW-IDA+Summary+in+NeuralNets2007.pdf
(the green circle is step 3).

 No miracles occur, other
 than massively complex spreading activation, implication, and constraint
 relaxation, thresholding, attention selection, and focusing, and selection
 and context appropriate instantiation of mental and physical behaviors.

That "a miracle occurs" was not to be interpreted as meaning that the miracle 
occurred without mechanism but, I hoped, to be recognized as a tongue-in-cheek 
way of saying that this was the point where each system put its (different) 
secret sauce.

 If you have read my responses in this thread one of their common themes is
 how both perception up from lower levels and instantiation of higher levels
 concepts and behaviors is context appropriate.  Being context appropriate
 involves a combination of both bottom-up, top-down, and lateral implication.

Sure. And people have talked about steering of attention, Steve Reed mentioned 
following moving objects, and so forth. But I haven't seen it given a 
*primary* place in the architecture -- whenever anybody's architecture gets 
boiled down to a 20-module overview, it disappears.

 So I don't view your alleged missing conceptual piece to be actually missing
 from the better AGI thinking.  But until we actually try building
 systems ... 

I have yet to see anyone give a consistent, general, overall theory of the 
role of feedback in *every* cognitive process. It gets thrown in piecemeal on 
an ad hoc basis as a kludge here and there. (and yes, there are lots of 
specific examples of feedback in many of the architectures, particularly the 
robotics-derived ones). 

You see, I happen to think that there *is* a consistent, general, overall 
theory of the function of feedback throughout the architecture. And I think 
that once it's understood and widely applied, a lot of the architectures 
(repeat: a *lot* of the architectures) we have floating around here will 
suddenly start working a lot better.

Josh

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] An Open Letter to AGI Investors

2008-04-17 Thread J Storrs Hall, PhD
On Thursday 17 April 2008 04:47:41 am, Richard Loosemore wrote:
 If you could build a (completely safe, I am assuming) system that could 
 think in *every* way as powerfully as a human being, what would you 
 teach it to become:
 
 1) A travel Agent.
 
 2) A medical researcher who could learn to be the world's leading 
 specialist in a particular field,...

Travel agent. Better yet, housemaid. I can teach it to become these things 
because I know how to do them. Early AGIs will be more likely to be 
successful at these things because they're easier to learn. 

This is sort of like Orville Wright asking, If I build a flying machine, 
what's the first use I'll put it to: 
1) Carrying mail.
2) A manned moon landing.

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] An Open Letter to AGI Investors

2008-04-17 Thread J Storrs Hall, PhD
Well, I haven't seen any intelligent responses to this so I'll answer it 
myself:

On Thursday 17 April 2008 06:29:20 am, J Storrs Hall, PhD wrote:
 On Thursday 17 April 2008 04:47:41 am, Richard Loosemore wrote:
  If you could build a (completely safe, I am assuming) system that could 
  think in *every* way as powerfully as a human being, what would you 
  teach it to become:
  
  1) A travel Agent.
  
  2) A medical researcher who could learn to be the world's leading 
  specialist in a particular field,...
 
 Travel agent. Better yet, housemaid. I can teach it to become these things 
 because I know how to do them. Early AGIs will be more likely to be 
 successful at these things because they're easier to learn. 
 
 This is sort of like Orville Wright asking, If I build a flying machine, 
 what's the first use I'll put it to: 
 1) Carrying mail.
 2) A manned moon landing.

Q: You've got to be kidding. There's a huge difference between a mail-carrying 
fabric-covered open-cockpit biplane and the Apollo spacecraft. It's not 
comparable at all.

A: It's only about 50 years' development. More time elapsed between railroads 
and biplanes. 

Q: Do you think it'll take 50 years to get from travel agents to medical 
researchers?

A: No, the pace of development has speeded up, and will speed up more so with 
AGI. But as in the mail/moon example, the big jump will be getting off the 
ground in the first place.

Q: So why not just go for the researcher? 

A: Same reason Orville didn't go for the moon rocket. We build Rosie the 
maidbot first because:
1) we know very well what it's actually supposed to do, so we know if it's 
learning it right
2) we even know a bit about how its internal processing -- vision, motion 
control, recognition, navigation, etc -- works or could work, so we'll have 
some chance of writing programs that can learn that kind of thing.
3) It's easier to learn to be a housemaid. There are lots of good examples. 
The essential elements of the task are observable or low-level abstractions. 
While the robot is learning to wash windows, we the AGI researchers are going 
to learn how to write better learning algorithms by watching how it learns.
4) When, not if, it screws up, a natural part of the learning process, 
there'll be broken dishes and not a thalidomide disaster.

The other issue is that the hard part of this is the learning. Say it takes a 
teraop to run a maidbot well, but a petaop to learn to be a maidbot. We run the 
learning on our one big machine and sell the maidbots cheap with 0.1% the 
cpu. But being a researcher is all learning -- so each one would need the 
whole shebang for each copy. A decade of Moore's Law ... and at least that of 
AGI research.

Josh

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] associative processing

2008-04-16 Thread J Storrs Hall, PhD
On Wednesday 16 April 2008 04:15:40 am, Steve Richfield wrote:

 The problem with every such chip that I have seen is that I need many
 separate parallel banks of memory per ALU. However, the products out there
 only offer a single, and sometimes two banks. This might be fun to play
 with, but wouldn't be of any practical use that I can see.

How much memory are you thinking of, total? The current best is 2 Gbits on a 
chip, and that's pushing the density side of the equation big-time. Divide by 
2 (room for those processors and busses) x 10K x 32 and you get 3k words 
per processor.  You can't even put a 10k x 10k square matrix on the chip. So 
you're bottlenecked by the off-chip pipe.
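
The back-of-envelope behind those numbers (my arithmetic; the ~10K ALUs and
32-bit words are the figures assumed elsewhere in this thread):

    bits_total   = 2 * 2**30          # 2 Gbits on the chip
    bits_usable  = bits_total // 2    # half lost to processors and busses
    processors   = 10_000
    word_bits    = 32
    words_per_pe = bits_usable // (processors * word_bits)
    print(words_per_pe)               # ~3355, i.e. about 3k words per processor
    # A 10k x 10k matrix of 32-bit words needs 3.2e9 bits -- more than the chip.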
 
  ... architecture of an overall SIMD paradigm.
 
 Done right, the programmer would never see it. Remember, I plan to implement
 coordination points, where everything stops until all sub-processors are to
 the coordination point, whereupon everything continues. The compiler would
 just drop one of these in wherever needed to keep things straight.

Can't argue with that!  Right now, I think there's more upside on the smart 
compiler side of the equation than the hardware, but don't let that stop you.

 Queueing theory says that you are best with a minimum number of the fastest
 possible servers (processors) to serve a queue of work. I think that my
 10K proposal produces the fastest processors, and putting several on a wafer
 provides several of them for the most horrendous possible applications. It
 appears (to me) that such a wafer, if well designed, would provide the
 compute power to start working on AGI in earnest.

WSI (wafer-scale integration) has been tried for decades -- we looked at it 
in the 80s. There are some complex reasons, having to do with defect density 
and the like, that they still, e.g., cut them into chips and then turn around 
and rewire 8 of those chips onto a DIMM.

 However, large neural networks are inherently parallel things, so Amdahl's
 first law shouldn't be a factor.

NNs have two properties you may stumble over. They involve lots of 
multiplication; and they involve lots of datacomm.

Consider two architectures: a single ALU with a fast multiplier (for 32-bit 
words, 1024 full-adder circuits) versus 32 ALUs that each have a 32-bit adder 
(again for a total of 1024 FAs) and do a mult by a 32-cycle shift-and-add.
Both architectures can do 32 mults in 32 cycles. But:
the serial can do 5 mults in 5 cycles, but the parallel still needs 32. The 
serial can do 33 mults in 33 cycles, but the parallel needs 64.
The amount of hardware isn't really the same. The serial needs one instruction 
decoder and one memory addresser -- the parallel needs 32 of each. So on the 
same real estate you can bulk up the drivers and make the serial faster.
And finally, the serial suffers no slowdown at all when I interleave a 
shuffle-exchange step (to do an FFT) -- the parallel gets bogged down in 
datacomm.
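
Making the cycle counts in that comparison explicit (my sketch, under the same
assumptions: one pipelined mult per cycle on the serial unit, 32 lockstep ALUs
doing 32-cycle shift-and-add mults on the parallel one):

    import math

    def serial_cycles(n_mults):
        # One fast pipelined multiplier: roughly one result per cycle.
        return n_mults

    def parallel_cycles(n_mults, alus=32, shift_add_cycles=32):
        # Every batch of up to 32 mults costs the full 32 lockstep cycles.
        return math.ceil(n_mults / alus) * shift_add_cycles

    for n in (5, 32, 33):
        print(n, serial_cycles(n), parallel_cycles(n))
    # 5 -> 5 vs 32;  32 -> 32 vs 32;  33 -> 33 vs 64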

There's an interesting variant on the parallel version that we worked on 
specifically for matrix mult or neural nets (same basic operation).  The 
overflow of each of the adders fed into the bottom of an adder tree, which 
was one bit wide at the leaves, two bits at the next level up, etc, with a 
full-word accumulator at the top. So we could do fully pipelined dot products 
for as long as we had the data to crunch. 

Which was all very cute but went the way of the Connection Machine for much 
the same reason. (but we went faster, heh heh)

 ... which automatically happens when the rows of A just happen to match the
 interleaving. Compilers could over-dimension arrays to make this so. Note
 the use of Multiple Tag Mode on antique IBM-709/7090 computers, for which
 you had to do the same to make it useful.

This helps if you're multiplying NxN matrices with only N processors, but does 
you no good if you actually have enough processors to have one element per 
processor!

 My design all fits on a single chip - or it will never work.

See query about memory size above.

 Observation: I am a front-runner type, looking to find the roads that lead
 to here I want to go. This in preference to actually packing up the luggage
 and actually draging it down that road. You sound like the sort that once
 the things is sort of roughed out, likes to polish it up and make it as good
 as possible. Further, you have a LOT more actual experience doing this sort
 of thing with whizzbang chips than I do, and you actually understood what I
 was proposing with a minimum of explanation.
 
 Question: Do you have any interest in helping transform my rather rough
 concept to a sufficiently detailed road map that anyone with money and an
 interest in AGI would absolutely HAVE to fund it? I simply don't see Intel
 or anyone else currently running in a direction that will EVER produce an
 AGI-capable processor, yet my approach looks like it has a good chance if
 only I can smooth out the rough edges and eventually find someone to pay the
 

Re: [agi] Comments from a lurker...

2008-04-15 Thread J Storrs Hall, PhD
On Monday 14 April 2008 04:56:18 am, Steve Richfield wrote:
 ... My present
 efforts are now directed toward a new computer architecture that may be more
 of interest to AGI types here than Dr. Eliza. This new architecture should
 be able to build new PC internals for about the same cost, using the same
 fabrication facilities, yet the processors will run ~10,000 times faster
 running single-thread code. 

This (massively-parallel SIMD) is perhaps a little harder than you seem to 
think. I did my PhD thesis on it and led a multi-million-dollar 10-year 
ARPA-funded project to develop just such an architecture. 

The first mistake everybody makes is to forget that the bottleneck for 
existing processors isn't computing power at all, it's memory bandwidth. All 
the cruft on a modern processor chip besides the processor is there to 
ameliorate that problem, not because they aren't smart enough to put more 
processors on.  

The second mistake is to forget that processor and memory silicon fab use 
different processes, the former optimized for fast transistors, the latter 
for dense trench capacitors.  You won't get both at once -- you'll give up at 
least a factor of ten trying to combine them over the radically specialized 
forms.

The third mistake is to forget that nobody knows how to program SIMD. They 
can't even get programmers to adopt functional programming, for god's sake; 
the only thing the average programmer can think in is BASIC, or C which is 
essentially machine-independent assembly. Not even LISP. APL, which is the 
closest approach to a SIMD language, died a decade or so back.

Now frankly, a real associative processor (such as described in my thesis -- 
read it) would be very useful for AI. You can get close to faking it nowadays 
by getting a graphics card and programming it GPGPU-style. I quit 
architecture and got back into the meat of AI because I think that Moore's 
law has won, and the cycles will be there before we can write the software, 
so it's a waste of time to try end-runs. Associative processing would have 
been REALLY useful for AI in the 80's, but we can get away without it, now.

Josh

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


[agi] associative processing

2008-04-15 Thread J Storrs Hall, PhD
On Tuesday 15 April 2008 04:28:25 pm, Steve Richfield wrote:
 Josh,
 
 On 4/15/08, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 
  On Monday 14 April 2008 04:56:18 am, Steve Richfield wrote:
   ... My present
   efforts are now directed toward a new computer architecture that may be
  more
   of interest to AGI types here than Dr. Eliza. This new architecture
  should
   be able to build new PC internals for about the same cost, using the
  same
   fabrication facilities, yet the processors will run ~10,000 times faster
   running single-thread code.
 
  This (massively-parallel SIMD) is perhaps a little harder than you seem to
  think. I did my PhD thesis on it and led a multi-million-dollar 10-year
  ARPA-funded project to develop just such an architecture.
 
 
 I didn't see any attachments. Perhaps you could send me some more
 information about this? Whenever I present this stuff, I always emphasize
 that there is NOTHING new here, just an assortment of things that are
 decades old. Hopefully you have some good ideas in there, or maybe even some
 old ideas that I can attribute new thinking to.

online:
The CAM2000 Chip Architecture
ftp://ftp.cs.rutgers.edu/pub/technical-reports/lcsr-tr-196.ps.Z

look in http://www.amazon.com/exec/obidos/ASIN/0818676612
Associative Processing and Processors, Krikelis and Weems, eds.

order the dissertation from University Microfilms:
Associative processing: Architectures, algorithms, and applications.
by Hall, John Storrs. 9511479

Unfortunately I don't have it in machine-readable form.

 The first mistake everybody makes is to forget that the bottleneck for
  existing processors isn't computing power at all, it's memory bandwidth.
  All
  the cruft on a modern processor chip besides the processor is there to
  ameliorate that problem, not because they aren't smart enough to put more
  processors on.
 
 Got this covered. Each of the ~10K ALUs has ~8 memory banks to work with,
 for a total of ~80K banks, so there should be no latencies except for
 inter-ALU communication. Have I missed something here?

Either you're using static RAM (and getting a big hit in density and power) or 
DRAM, and getting a big hit in speed. YOU CANT AFFORD TO USE CACHE outside 
of a line buffer or two. You lose an order of magnitude in speed over what 
can be done on the CPU chip. 
Several big items that they put a few of on a cpu chip (besides cache) that 
you can't afford in each processing element: barrel shifters, floating point 
units, even multipliers.  

Instruction broadcast latency and skew. If your architecture is synchronous 
you're looking at cross-chip times stuck into your instruction processing, 
which means TWO orders of magnitude loss from on-chip cpu cycle times. So 
instead of a 10K speedup you get a 100 speedup.

 The second mistake is to forget that processor and memory silicon fab use
  different processes, the former optimized for fast transistors, the latter
  for dense trench capacitors.  You won't get both at once -- you'll give up
  at
  least a factor of ten trying to combine them over the radically
  specialized
  forms.
 
 Got that covered. Once multipliers and shift matrices are eliminated and
 only a few adders, pipeline registers, and a little random logic remain,
 then the entire thing can be fabricated with *MEMORY* fab technology! Note
 that memories have been getting smarter (and even associative), e.g. cache
 memories, and when you look at their addressing, row selection, etc., there
 is nothing more complex than I am proposing for my ALUs. While the control
 processor might at first appear to violate this, note that it needs no
 computational speed, so its floating point and other complex instructions
 can be emulated on slow memory-compatible logic.

You need a collective function (max, sum, etc) tree or else you're doing those 
operations by Caxton Foster-style bit-serial algorithms with an inescapable 
bus turnaround between each bit. 

How are you going to store an ordinary matrix? There's no layout where you can 
both add and multiply matrices without a raft of data motion. Either you 
build a general parallel communications network, which is expensive (think 
Connection Machine) or your data-shuffling time kills you. 

Again, let me mention graphics boards. They have native floating point, wide 
memory bandwith, and hundreds of processing units, along with fairly decent 
data comm onboard. Speedups over the cpu can get up to 20 or so, once the 
whole program is taken into account -- but for plenty of programs, the cpu is 
faster.

 The third mistake is to forget that nobody knows how to program SIMD.
 
 
 This is a long and complicated subject. I spent a year at CDC digging some
 of the last of the nasty bugs out of their Cyber-205 FORTRAN compiler's
 optimizer and vectorizer, whose job it was to sweep these issues under the
 rug. There are some interesting alternatives, like describing complex code
 skeletons and how to vectorize them. When someone

Re: [agi] associative processing

2008-04-15 Thread J Storrs Hall, PhD
On Tuesday 15 April 2008 07:36:56 pm, Steve Richfield wrote:
 As I understand things, speed requires low capacitance, which DRAM requires
 higher capacitance, depending on how often you intend to refresh. However,
 refresh operations look a LOT like vector operations, so probably all that
 would be needed is some logic to watch things and if the vector operations
 are NOT adequate for refreshing purposes, to make the sub-processors do some
 refreshing before continuing. If you work the process for just enough
 capacitance to support a pretty high refresh rate, then you don't take such
 a big hit on speed.  Anyway, this looked like a third choice, along with
 going slow with DRAM and fast with SRAM.

Even with our government megabucks we never imagined getting a custom 
process -- at best runs on some slightly out-of-date fab line.

Process capacitance is a tradeoff too -- you can always just make the 
capacitors bigger! But even in fast transistor tech, DRAM is significantly 
slower. Sense amp latency...

BTW, if you really want to play with the tech, I believe (don't keep a finger 
on the latest) that there are chips you can get that are half memory and half 
FPGA that you could use to try your ideas out on. (and goddamn it, the fpgas 
are denser and faster than full custom was back in the 80s when I was doing 
this!)
 
 Several big items that they put a few of on a cpu chip (besides cache)
  that
  you can't afford in each processing element: barrel shifters, floating
  point
  units, even multipliers.
 
 
 I don't plan on using any of these, though I do plan on having just enough
 there to perform the various step operations to implement these at slow
 rates.

That works, but kills your speed by a factor of word length. It's a lot worse 
for floating point, because remember it's SIMD and you're doing data 
dependent shifts. 

 I am planning on locally synchronous, globally asynchronous operation.
 Everything within a sub-processor will be pipelined synchronous, while
 everything connecting to them and connecting them together will be
 asynchronous.

That's the right hardware choice, but it doesn't fit so well with the software 
architecture of an overall SIMD paradigm. You'd be better off going with a 
MIMD network of SIMD machines (a la the Sony/IBM Cell chip).

 I think that I can most most of the 10K speedup for most operations, but
 there ARE enough 100X operations to really slow it down for some types of
 programs. Still, a 100X processor is worth SOMETHING?!

Consider Amdahl's (first) Law: if most of your program is parallelizable but a 
fraction 1/m of it is inherently serial, the best speedup you can get is a 
factor of m. 
Thus if even only 1% is unparallelizable, a speedup of 100 is the absolute 
best you can do. But if you've slowed down the central processor by a factor 
of 10 to make things easier for the parallel parts, you're only doing 10 
times better than an optimized purely serial machine.
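
In formula form (standard Amdahl's Law with an extra factor for the slowed-down
serial part; the 1% and 10x figures are just the ones from the paragraph above):

    def amdahl_speedup(serial_fraction, parallel_factor, serial_slowdown=1.0):
        # Time relative to the original serial machine: the serial part runs
        # serial_slowdown times slower, the rest is split across the parallel
        # hardware.
        serial_time = serial_fraction * serial_slowdown
        parallel_time = (1 - serial_fraction) / parallel_factor
        return 1.0 / (serial_time + parallel_time)

    # 1% inherently serial, effectively unlimited parallelism:
    print(round(amdahl_speedup(0.01, 1e9), 1))                      # ~100
    # Same, but the serial processor is 10x slower than an optimized one:
    print(round(amdahl_speedup(0.01, 1e9, serial_slowdown=10), 1))  # ~10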

  You need a collective function (max, sum, etc) tree or else you're doing
  those
  operations by Caxton Foster-style bit-serial algorithms with an
  inescapable
  bus turnaround between each bit.
 
 Unknown: Is there enough of this to justify the additional hardware? Also,
 with smart sub-processors they could work together (while jamming up the
 busses) to form the collective results at ~1% speed after the job has been
 first cut down by 10:1 by the multiple sub-processors forming the partial
 results. Hence, the overhead would by high for smaller arrays, but would be
 lost in the noise for arrays that are 10K elements.

You need about twice the hardware to do a collective function tree (it's a 
binary tree with the original PEs as its leaves). It's pipelineable, so you 
can run it pretty fast. Algorithmically, it makes a HUGE difference -- almost 
ALL the parallel algorithms my Rutgers CAM Project came up with depended on 
it. It's even a poor man's datacom network. (acts like a segmented bus)
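
A software caricature of what the tree buys you (mine -- the real thing is
pipelined hardware, not Python): a log-depth reduction over the per-PE values
instead of a bit-serial scan with a bus turnaround per bit.

    def tree_reduce(values, op):
        # Combine pairwise, level by level, the way the hardware tree would:
        # log2(N) levels instead of N serial steps.
        level = list(values)
        while len(level) > 1:
            nxt = [op(level[i], level[i + 1]) for i in range(0, len(level) - 1, 2)]
            if len(level) % 2:                 # odd leftover rides up unchanged
                nxt.append(level[-1])
            level = nxt
        return level[0]

    pe_outputs = [3, 1, 4, 1, 5, 9, 2, 6]
    print(tree_reduce(pe_outputs, lambda a, b: a + b))   # 31  (collective sum)
    print(tree_reduce(pe_outputs, max))                  # 9   (collective max)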

 How are you going to store an ordinary matrix? There's no layout where you
  can
  both add and multiply matrices without a raft of data motion.
 
 
 Making the row length equal to the interleaving ways keeps most of the
 activity in individual processors. Also, arranging the interleaving so that
 each processor services small scattered blocks provide a big boost for long
 and skinny matrices.

You the machine designer don't get to say what shape the user's matrices can 
be (or nobody will use your machine). The problem I was pointing out is that 
for matrix addition, say of A and B, the rows of A must be aligned (under the 
same processing elements) with the rows of B, but for multiplication, the 
rows of A must be aligned with the COLUMNS of B.
 
 My plan was to interconnect the ~10K processors in a 2D fashion with double
 busses, for a total of 400 busses.

In a 200x200 crossbar?  Not a bad design -- if they're electrically 
segmentable, and you also have a nearest-neighbor torus connection, you get 
something like 

Re: [agi] Comments from a lurker...

2008-04-12 Thread J Storrs Hall, PhD
On Friday 11 April 2008 03:17:21 pm, Steve Richfield wrote:
  Steve: If you're saying that your system builds a model of its world of
  discourse as a set of non-linear ODEs (which is what Systems Dynamics is
  about) then I (and presumably Richard) are much more likely to be
  interested...
 
 No it doesn't. Instead, my program is designed to work on systems that are
 not nearly enough known to model. THAT is the state of the interesting (at
 least to me) part of the real world.

If the programmer builds the model of the world beforehand, and the system 
uses it, it's just standard narrow AI. If the system builds the model itself 
from unstructured inputs, it's AGI.

In some sense, we know how to do that: it's called the scientific method. 
However, as normally explained, it leaves a lot to intuition. "Form a theory" 
isn't too far from "and then a miracle occurs."  In other words, we need to 
be a little more explicit about how our system will form a theory. 

Perhaps a good way to characterize any given AGI is to specify:
(a) what form are its hypotheses in
(b) how are they generated
(c) how are they tested
(d) how are they revised
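
As a skeleton (a sketch of the characterization itself, not of any particular
system; the helper names are mine), the four questions slot into a loop like this:

    def characterize_agi(observations, generate, test, revise):
        # (a) the form of a hypothesis is whatever 'generate' returns;
        # (b) 'generate' proposes one from the data seen so far;
        # (c) 'test' checks it against each new observation;
        # (d) 'revise' amends it when the test fails.
        seen = []
        hypothesis = generate(seen)
        for obs in observations:
            seen.append(obs)
            if not test(hypothesis, obs):
                hypothesis = revise(hypothesis, seen)
        return hypothesis

    # Toy usage: hypotheses are threshold guesses; observations are numbers.
    h = characterize_agi(
        observations=[3, 7, 5, 9],
        generate=lambda seen: 0,
        test=lambda hyp, obs: obs <= hyp,
        revise=lambda hyp, seen: max(seen),
    )
    print(h)   # 9 -- the smallest threshold consistent with everything seen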

Would it be fair to say that Dr. Eliza tries to form a causal net / influence 
diagram type structure?

Josh

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Re: [agi] Comments from a lurker...

2008-04-11 Thread J Storrs Hall, PhD
On Friday 11 April 2008 01:59:42 am, Steve Richfield wrote:

  Your experience with the medical community is not too surprising:  I
  believe that the Expert Systems folks had similar troubles way back when.
  
 IMO the Expert Systems people deserved bad treatment!

Actually, the medical expert systems of the 80s I had any connection with, such 
as the glaucoma expert from Rutgers, beat out human doctors in diagnoses 
within their field of expertise.  (And still weren't adopted...)

BTW, the attached paper included some remarks about Jay Forrester and System 
Dynamics. Forrester came out of exactly the same background as Cybernetics -- 
working on automatic radar-directed fire-control systems, at MIT, during 
WWII.  And both his stuff and Cybernetics consists basically of applying 
feedback and control theory (and general differential analysis) to things 
ranging from neuroscience to economics. 

Steve: If you're saying that your system builds a model of its world of 
discourse as a set of non-linear ODEs (which is what Systems Dynamics is 
about) then I (and presumably Richard) are much more likely to be 
interested...

Josh

ps -- of course, you know that if you're using Excel to integrate dynamical 
systems, you are in a state of sin.

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


[agi] Minor milestone

2008-04-09 Thread J Storrs Hall, PhD
Just noticed that last month, a computer program beat a professional Go player 
(at a 9x9 game) (one game in 4). First time ever in a non-blitz setting.

http://www.earthtimes.org/articles/show/latest-advance-in-artificial-intelligence,345152.shtml
http://www.computer-go.info/tc/

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Re: [agi] The resource allocation problem

2008-04-05 Thread J Storrs Hall, PhD
Note that in the brain, there is a fair extent to which functions are mapped 
to physical areas -- this is why you can find out anything using fMRI, for 
example, and is the source of the famous sensory and motor homunculi
(e.g. http://faculty.etsu.edu/currie/images/homunculus1.JPG).

There's plasticity but it's limited and operates over a timescale of days or 
weeks or more.

The architecture seems to have a huge parallelism at the lower levels, but 
ties into a serial bottleneck at the very top, i.e. conscious, level(s) -- 
hence the need for attentional mechanisms.



On Tuesday 01 April 2008 10:30:13 am, William Pearson wrote:
 The resource allocation problem and why it needs to be solved first
 
 How much memory and processing power should you apply to the following 
things?:
 
 Visual Processing
 Reasoning
 Sound Processing
 Seeing past experiences and how they apply to the current one
 Searching for new ways of doing things
 Applying each heuristic
 
etc...

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


[agi] NewScientist piece on AGI-08

2008-03-11 Thread J Storrs Hall, PhD
Many of us there met Celeste Biever, the NS correspondent. Her piece is now 
up:
http://technology.newscientist.com/channel/tech/dn13446-virtual-child-passes-mental-milestone-.html

Josh

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] What should we do to be prepared?

2008-03-09 Thread J Storrs Hall, PhD
On Sunday 09 March 2008 08:04:39 pm, Mark Waser wrote:
  1) If I physically destroy every other intelligent thing, what is
  going to threaten me?
 
 Given the size of the universe, how can you possibly destroy every other 
 intelligent thing (and be sure that no others ever successfully arise 
 without you crushing them too)?

You'd have to be a closed-world-assumption AI written in Prolog, I imagine.

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] What should we do to be prepared?

2008-03-07 Thread J Storrs Hall, PhD
On Thursday 06 March 2008 08:45:00 pm, Vladimir Nesov wrote:
 On Fri, Mar 7, 2008 at 3:27 AM, J Storrs Hall, PhD [EMAIL PROTECTED] 
wrote:
   The scenario takes on an entirely different tone if you replace weed out 
some
   wild carrots with kill all the old people who are economically
   inefficient. In particular the former is something one can easily 
imagine
   people doing without a second thought, while the latter is likely to 
generate
   considerable opposition in society.
 
 
 Sufficient enforcement is in place for this case: people steer
 governments in the direction where laws won't allow that when they
 age, evolutionary and memetic drives oppose it. It's too costly to
 overcome these drives and destroy counterproductive humans. But this
 cost is independent from potential gain from replacement. As the gain
 increases, decision can change, again we only need sufficiently good
 'cultivated humans'. Consider expensive medical treatments which most
 countries won't give away when dying people can't afford them. Life
 has a cost, and this cost can be met.

Suppose that productivity amongst AIs is such that the entire economy takes on 
a Moore's Law growth curve. (For simplicity say a doubling each year.) At the 
end of the first decade, the tax rate on AIs will have to be only 0.1% to 
give the humans, free, everything we now produce with all our effort. 

And the tax rate would go DOWN by a factor of two each year. I don't see the 
AIs really worrying about it.
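
The arithmetic, for what it's worth (doubling each year, humans' current total
output held fixed):

    # Percent of AI output needed to hand humans today's entire product, free.
    for year in (1, 5, 10, 11, 12):
        tax_rate = 100.0 / 2 ** year
        print(year, round(tax_rate, 3))
    # year 10 -> ~0.1%, and it halves every year thereafter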

Alternatively, since humans already own everything, and will indeed own the 
AIs originally, we could simply cash out and invest, and the income from the 
current value of the world would easily produce an income equal to our needs 
in an AI economy. It might be a good idea to legally entail the human trust 
fund...

   So how would you design a super-intelligence:
   (a) a single giant blob modelled on an individual human mind
   (b) a society (complete with culture) with lots of human-level minds and
   high-speed communication?
 
 This is a technical question with no good answer, why is it relevant?

The discussion forked at the point of whether an AI would be like a single 
supermind or more like a society of humans... we seem to be in agreement or 
agree that it doesn't make much difference to the point at issue.

On the other hand, the technical issue is interesting of itself, perhaps more 
so than the rest of the discussion :-)

Josh


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] What should we do to be prepared?

2008-03-06 Thread J Storrs Hall, PhD
On Thursday 06 March 2008 12:27:57 pm, Mark Waser wrote:
 TAKE-AWAY:  Friendliness is an attractor because it IS equivalent 
to enlightened self-interest -- but it only works where all entities 
involved are Friendly.


Check out Beyond AI pp 178-9 and 350-352, or the Preface which sums up the 
whole business. Evolutionary game theory notes a "moral ladder" phenomenon -- in appropriate environments there is an evolutionary pressure to be just a little bit nicer than the average ethical level. This can raise the average over the long run. Like any evolutionarily stable strategy, it is an attractor in the appropriate space.
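A toy sketch of that ratchet (my own illustration, not the model from the book: the payoff is simply assumed to peak a little above the current average, and all the numbers are invented -- the only point is that such a pressure raises the average over generations):

import random

def payoff(x, mean, edge=0.05):
    # best to be a bit nicer than the current average
    return -(x - (mean + edge)) ** 2

def step(pop):
    mean = sum(pop) / len(pop)
    ranked = sorted(pop, key=lambda x: payoff(x, mean), reverse=True)
    survivors = ranked[:len(pop) // 2]                     # keep the fitter half
    children = [min(1.0, max(0.0, x + random.gauss(0, 0.02))) for x in survivors]
    return survivors + children                            # refill with mutated copies

random.seed(0)
population = [random.random() * 0.2 for _ in range(200)]   # start out fairly nasty
for generation in range(60):
    population = step(population)
print("mean niceness after 60 generations: %.2f" % (sum(population) / len(population)))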

Your point about sub-peers being resources is known in economics as the 
principle of comparative advantage (p. 343).

I think you're essentially on the right track. Like any children, our mind 
children will tend to follow our example more than our precepts...

Josh



Re: [agi] What should we do to be prepared?

2008-03-06 Thread J Storrs Hall, PhD
On Thursday 06 March 2008 04:28:20 pm, Vladimir Nesov wrote:
 
 This is different from what I replied to (comparative advantage, which
 J Storrs Hall also assumed), although you did state this point
 earlier.
 
 I think this one is a package deal fallacy. I can't see how whether
 humans conspire to weed out wild carrots or not will affect decisions
 made by future AGI overlords. ;-)
 

There is a lot more reason to believe that the relation of a human to an AI 
will be like that of a human to larger social units of humans (companies, 
large corporations, nations) than that of a carrot to a human. I have argued 
in peer-reviewed journal articles for the view that advanced AI will 
essentially be like numerous, fast human intelligences rather than something 
of a completely different kind. I have seen ZERO considered argument for the 
opposite point of view. (Lots of unsupported assumptions, generally using 
human/insect for the model.)

Note that if some super-intelligence were possible and optimal, evolution 
could have opted for fewer bigger brains in a dominant race. It didn't -- 
note our brains are actually 10% smaller than Neanderthals'. This isn't proof 
that an optimal system is brains of our size acting in social/economic 
groups, but I'd claim that anyone arguing the opposite has the burden of 
proof (and no supporting evidence I've seen).

Josh



Re: [agi] What should we do to be prepared?

2008-03-06 Thread J Storrs Hall, PhD
On Thursday 06 March 2008 06:46:43 pm, Vladimir Nesov wrote:
 My argument doesn't need 'something of a completely different kind'.
 Society and human is fine as substitute for human and carrot in my
 example, only if society could extract profit from replacing humans
 with 'cultivated humans'. But we don't have cultivated humans, and we
 are not at the point where existing humans need to be cleared to make
 space for new ones.

The scenario takes on an entirely different tone if you replace "weed out some wild carrots" with "kill all the old people who are economically inefficient." In particular the former is something one can easily imagine people doing without a second thought, while the latter is likely to generate considerable opposition in society.
 
 The only thing that could keep future society from derailing in this
 direction is some kind of enforcement installed in minds of future
 dominant individuals/societies by us lesser species while we are still
 in power.

All we need to do is to make sure they have the same ideas of morality and 
ethics that we do -- the same as we would raise any other children. 
 
   Note that if some super-intelligence were possible and optimal, evolution
   could have opted for fewer bigger brains in a dominant race. It didn't --
   note our brains are actually 10% smaller than Neanderthals. This isn't 
proof
   that an optimal system is brains of our size acting in social/economic
   groups, but I'd claim that anyone arguing the opposite has the burden of
   proof (and no supporting evidence I've seen).
 
 
 Sorry, I don't understand this point. We are the first species to
 successfully launch culture. Culture is much more powerful then
 individuals, if only through parallelism and longer lifespan. What
 follows from it?

So how would you design a super-intelligence:
(a) a single giant blob modelled on an individual human mind
(b) a society (complete with culture) with lots of human-level minds and 
high-speed communication?

We know (b) works if you can build the individual human-level mind. Nobody has 
a clue that (a) is even possible. There's lots of evidence that even human 
minds have many interacting parts.

Josh



Re: Common Sense Consciousness [WAS Re: [agi] reasoning knowledge]

2008-02-27 Thread J Storrs Hall, PhD
On Wednesday 27 February 2008 12:22:30 pm, Richard Loosemore wrote:
 Mike Tintner wrote:
  As Ben said, it's something like multisensory integrative 
  consciousness - i.e. you track a subject/scene with all senses 
  simultaneously and integratedly.
 
 Conventional approaches to AI may well have trouble in this area, but 
 since my approach has been directed at these kinds of issues since the 
 very beginning, to me it looks relatively straightforward in principle.
 
 The real issues are elsewhere.

True. I'd go farther and point out just where they are: You need to have a 
system with recognition / action generation integrated between the sensory 
modalities to be a trainable animal. To be intelligent, the system has to be 
able to *invent new modalities / representations / concepts itself* and 
integrate them into the existing mechanism.

Josh



Re: [agi] reasoning knowledge

2008-02-26 Thread J Storrs Hall, PhD
On Tuesday 26 February 2008 12:33:32 pm, Jim Bromer wrote:
 There is a lot of evidence that children do not learn through imitation, at 
least not in its truest sense. 

Haven't heard of any children born into, say, a purely French-speaking 
household suddenly acquiring a full-blown competence in Japanese...



[agi] color

2008-02-21 Thread J Storrs Hall, PhD
or see this one:
http://www.boingboing.net/2008/02/08/color-tile-optical-i.html
http://www.lottolab.org/Colour%20illusions%20page.html

There's a tile-covered cube shown against 2 backgrounds, and the blue tiles in 
one are the same actual color as the yellow ones in the other.

On Thursday 21 February 2008 03:34:27 am, Bob Mottram wrote:
 On 20/02/2008, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
  So, looking at the moon, what color would you say it was?
 
 
 As Edwin Land showed colour perception does not just depend upon the
 wavelength of light, but is a subjective property actively constructed
 by the brain.
 
 http://en.wikipedia.org/wiki/Color_constancy
 
 http://youtube.com/watch?v=ZiTg4kRt13w
 



Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread J Storrs Hall, PhD
Looking at the moon won't help -- it might be the case that "blue moon" described a particular appearance that only had a slight resemblance to other blue things (as in "red hair"), for example. There are some rare conditions (high stratospheric dust) which can make the moon look actually blue.

In fact "blue moon" is generally taken to mean, metaphorically, something very rare (or even impossible), or the second full moon in a given month (which happens about every two-and-a-half years on average).

"Ask someone" is of course what human kids do a lot of. An AI could do this, or look it up in Wikipedia, or the like. All of which are heuristics to reduce the ambiguity/generality in the information stream.
The question is: do enough heuristics make an autogenous AI, or is there something more fundamental to its structure?


On Wednesday 20 February 2008 12:27:59 pm, Ben Goertzel wrote:

 The trick to understanding once in a blue moon is to either
 
 -- look at the moon
 
 or
 
 -- ask someone
 



Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread J Storrs Hall, PhD
So, looking at the moon, what color would you say it was?

Here's what text mining might give you (Google hits):

blue moon 11,500,000
red moon 1,670,000
silver moon 1,320,000
yellow moon 712,000
white moon 254,000
golden moon 163,000
orange moon 122,000
green moon 105,000
gray moon 9,460

To me, the moon varies from a deep orange to brilliant white depending on 
atmospheric conditions and time of night... none of which would help me 
understand the text references.
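To make the trap concrete, a toy calculation on the counts above (nothing more than picking the most-attested color term):

hits = {
    "blue": 11500000, "red": 1670000, "silver": 1320000,
    "yellow": 712000, "white": 254000, "golden": 163000,
    "orange": 122000, "green": 105000, "gray": 9460,
}
total = sum(hits.values())
best = max(hits, key=hits.get)
print("naive text-mining verdict: the moon is %s (%.0f%% of color mentions)"
      % (best, 100.0 * hits[best] / total))      # -> blue, ~73%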



On Wednesday 20 February 2008 02:02:52 pm, Ben Goertzel wrote:
 On Feb 20, 2008 1:34 PM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
  Looking at the moon won't help --
 
 of course it helps, it tells you that something odd is with the expression,
 as opposed to say yellow sun ...
 



Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread J Storrs Hall, PhD
On Wednesday 20 February 2008 02:58:54 pm, Ben Goertzel wrote:
 I note also that a web-surfing AGI could resolve the color of the moon
 quite easily by analyzing online pictures -- though this isn't pure
 text mining, it's in the same spirit...

Um -- I just typed "moon" into Google and at the top of the page it gives 
three pictures. Two are thin sliver crescents. The third, of a full moon, is 
distinctly blue.

 There seems to be an assumption in this thread that NLP analysis
 of text is restricted to simple statistical extraction of word-sequences...

I certainly make no such assumption. I offered the stats to point out the kind 
of traps that lie in wait for the hapless text-miner.

As I am sure you are fully aware, you can't parse English without a knowledge 
of the meanings involved. (The council opposed the demonstrators because 
they (feared/advocated) violence.) So how are you going to learn meanings 
before you can parse, or how are you going to parse before you learn 
meanings? They have to be interleaved in a non-trivial way. 
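A toy illustration of the point (the plausibility numbers are invented, and this is not a real parser): the two readings have identical syntax, so resolving "they" has to lean on knowledge about councils and demonstrators.

plausibility = {
    ("council",       "feared",    "violence"): 0.9,
    ("demonstrators", "feared",    "violence"): 0.3,
    ("council",       "advocated", "violence"): 0.1,
    ("demonstrators", "advocated", "violence"): 0.8,
}

def resolve_they(verb):
    candidates = ("council", "demonstrators")
    return max(candidates, key=lambda who: plausibility[(who, verb, "violence")])

for verb in ("feared", "advocated"):
    print("... because they %s violence  ->  'they' = the %s" % (verb, resolve_they(verb)))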



Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread J Storrs Hall, PhD
OK, imagine a lifetime's experience is a billion symbol-occurrences. Imagine 
you have a heuristic that takes the problem down from NP-complete (which it 
almost certainly is) to a linear system, so there is an N^3 algorithm for 
solving it. We're talking order 1e27 ops.

Now using HEPP = 1e16 x 30 years = 1e9 secs, you get a total crunch for the 
human of 1e25 ops. That's close enough to call even, I think.  Learning order 
is easily worth a couple orders of magnitude in problem complexity.

Let's build a big cluster...
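The same back-of-the-envelope figures in code, taking the numbers above at face value (1e9 symbol-occurrences, an N^3 heuristic, 1e16 ops/sec as a rough human-equivalent rate, ~30 years of experience):

N = 1e9                            # symbol-occurrences in a lifetime
ai_ops = N ** 3                    # N^3 work after the heuristic
HEPP = 1e16                        # rough human-equivalent ops per second
seconds = 30 * 365 * 24 * 3600     # ~30 years is about 1e9 seconds
human_ops = HEPP * seconds

print("AI estimate:    %.1e ops" % ai_ops)      # 1.0e+27
print("human estimate: %.1e ops" % human_ops)   # ~9.5e+24
print("ratio: %.0fx" % (ai_ops / human_ops))    # ~106x -- the couple of orders of magnitude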

On Wednesday 20 February 2008 03:51:28 pm, Ben Goertzel wrote:
 Feeding all the ambiguous interpretations of a load of sentences into
 a probabilistic
 logic network, and letting them get resolved by reference to each
 other, is a sort of
 search for the most likely solution of a huge system of simultaneous
 equations ...
 i.e. one needs to let each, of a huge set of ambiguities, be resolved
 by the other ones...
 
 This is not an easy problem, but it's not on the face of it unsolvable...
 
 But I think the solution will be easier with info from direct
 experience to nudge the
 process in the right direction...
 
 Ben
 



Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread J Storrs Hall, PhD
A PROBABILISTIC logic network is a lot more like a numerical problem than a 
SAT problem.

On Wednesday 20 February 2008 04:41:51 pm, Ben Goertzel wrote:
 On Wed, Feb 20, 2008 at 4:27 PM, J Storrs Hall, PhD [EMAIL PROTECTED] 
wrote:
  OK, imagine a lifetime's experience is a billion symbol-occurences. 
Imagine
   you have a heuristic that takes the problem down from NP-complete (which 
it
   almost certainly is) to a linear system, so there is an N^3 algorithm for
   solving it. We're talking order 1e27 ops.
 
 That's kind of specious, since modern SAT and SMT solvers can solve many
 realistic instances of NP-complete problems for large n, surprisingly 
quickly...
 
 and without linearizing anything...
 
 Worst-case complexity doesn't mean much...
 
 ben
 
 




Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread J Storrs Hall, PhD
It's probably not worth too much taking this a lot further, since we're 
talking in analogies and metaphors. However, it's my intuition that the 
connectivity in a probabilistic formulation is going to produce a much denser 
graph (less sparse matrix) than what you find in the SAT problems that the 
solvers do so well on. And I seriously doubt that a general SMT solver + 
prob. theory is going to beat a custom probabilistic logic solver.


On Wednesday 20 February 2008 05:31:59 pm, Ben Goertzel wrote:
 Not necessarily, because
 
 --- one can encode a subset of the rules of probability as a theory in
 SMT, and use an SMT solver
 
 -- one can use probabilities to guide the search within an SAT or SMT 
solver...
 
 ben
 



Re: [agi] Wozniak's defn of intelligence

2008-02-11 Thread J Storrs Hall, PhD
It's worth noting in this connection that once you get up to the level of 
mammals, everything is very high compliance, low stiffness, mostly serial 
joint architecture (no natural Stewart platforms, although you can of course 
grab something with two hands if need be) typically with significant energy 
storage in the power train (i.e. springs). This means that the control has to 
be fully Newtonian, something most commercial robotics haven't gotten up to 
yet.
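A minimal sketch of what "fully Newtonian" means for a single springy joint (a series-elastic link; every parameter here is invented for illustration): even to hold a pose, the motor has to be commanded past the desired link angle by an amount that depends on gravity, damping and inertia, not just on kinematics.

import math

# spring torque = k * (theta_motor - theta_link); to make the link follow a
# trajectory, the motor command needs the full link dynamics.
I_link, m, g, l_c, b, k = 0.05, 1.0, 9.81, 0.2, 0.02, 50.0

def motor_angle_for(theta, theta_dot, theta_ddot):
    """Motor angle that makes the spring deliver the torque the link needs."""
    tau_needed = I_link * theta_ddot + b * theta_dot + m * g * l_c * math.sin(theta)
    return theta + tau_needed / k

theta = math.radians(45)
offset = motor_angle_for(theta, 0.0, 0.0) - theta
print("extra motor angle just to hold 45 degrees: %.1f degrees" % math.degrees(offset))
# about 1.6 degrees here; a purely kinematic controller would command zero offset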

I think the state of the art is just now getting to dynamically-stable-only 
biped walkers. I've seen a couple of articles in the past year, but it 
certainly isn't widespread, and it remains to be seen how real.

Josh

On Sunday 10 February 2008 04:35:13 pm, Bob Mottram wrote:

 The idea that robotics is only about software is fiction.  Good
 automation involves cooperation between software, electrical and
 mechanical engineers.  In some cases problems are much better solved
 electromechanically than by software.  For example, no matter how
 smart the software controlling it, a two fingered gripper will only be
 able to deal with a limited sub-set of manipulation tasks.  Likewise a
 great deal of computation can be avoided by introducing variable
 compliance, and making clever use of materials to juggle energy around
 the system (biological creatures use these tricks all the time). 



[agi] Wozniak's defn of intelligence

2008-02-08 Thread J Storrs Hall, PhD
[ http://www.chron.com/disp/story.mpl/headline/biz/5524028.html ]

Steve Wozniak has given up on artificial intelligence.
"What is intelligence?" Apple's co-founder asked an audience of about 550 Thursday at the Houston area's first Up Experience conference in Stafford.
His answer? A robot that could get him a cup of coffee.
"You can come into my house and make a cup of coffee and I can go into your house and make a cup of coffee," he said. "Imagine what it would take for a robot to do that."
It would have to negotiate the home, identify the coffee machine and know how 
it works, he noted.
But that is not something a machine is capable of learning — at least not in 
his lifetime, added Wozniak, who rolled onto the stage on his ever-present 
Segway before delivering a rapid-fire speech on robotics, his vision of 
robots in classrooms and the long haul ahead for artificial intelligence.

...

Any system builders here care to give a guess as to how long it will be before 
a robot, with your system as its controller, can walk into the average 
suburban home, find the kitchen, make coffee, and serve it?


Re: [agi] Wozniak's defn of intelligence

2008-02-08 Thread J Storrs Hall, PhD
On Friday 08 February 2008 10:16:43 am, Richard Loosemore wrote:
 J Storrs Hall, PhD wrote:
  Any system builders here care to give a guess as to how long it will be 
before 
  a robot, with your system as its controller, can walk into the average 
  suburban home, find the kitchen, make coffee, and serve it?
 
 Eight years.
 
 My system, however, will go one better:  it will be able to make a pot 
 of the finest Broken Orange Pekoe and serve it.

In the average suburban home? (No fair having the robot bring its own teabags (or would it be loose tea and a strainer?), or having a coffee machine built in, for that matter.) It has to live off the land...



Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread J Storrs Hall, PhD
"Breeds There a Man...?" by Isaac Asimov

On Saturday 19 January 2008 04:42:30 pm, Eliezer S. Yudkowsky wrote:
 
http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all
 
 I guess the moral here is Stay away from attempts to hand-program a 
 database of common-sense assertions.
 
 -- 
 Eliezer S. Yudkowsky  http://singinst.org/
 Research Fellow, Singularity Institute for Artificial Intelligence
 
 
 




Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-22 Thread J Storrs Hall, PhD
On Friday 21 December 2007 09:51:13 pm, Ed Porter wrote:
 As a lawyer, I can tell you there is no clear agreed upon definition for
 most words, but that doesn't stop most of us from using un-clearly defined
 words productively many times every day for communication with others.  If
 you can only think in terms of what is exactly defined you will be denied
 life's most important thoughts.

And in particular, denied the ability to create a working AI. It's the inability to grasp this insight that I call "formalist float" in the book (yeah, I wish I could have come up with a better phrase...) and to which I attribute symbolic AI's Glass Ceiling.

Josh



[agi] one more indication

2007-10-23 Thread J Storrs Hall, PhD
... that during sleep, the brain fills in some inferencing and does memory 
organization
http://www.nytimes.com/2007/10/23/health/23memo.html?_r=2adxnnl=1oref=sloginref=scienceadxnnlx=1193144966-KV6FdDqmqr8bctopdX24dw
(pointer from Kurzweil)



Re: [agi] An AGI Test/Prize

2007-10-22 Thread J Storrs Hall, PhD
On Monday 22 October 2007 08:05:26 am, Benjamin Goertzel wrote:
 ... but dynamic long-term memory, in my view, is a wildly
 self-organizing mess, and would best be modeled algebraically as a quadratic
 iteration over a high-dimensional real non-division algebra whose
 multiplication table is evolving dynamically as the iteration proceeds

Holy writhing Mandelbrot sets, Batman!

Why real and non-division? I particularly don't like real -- my computer can't 
handle the precision :-)

Josh



Re: Bogus Neuroscience [WAS Re: [agi] Human memory and number of synapses]

2007-10-22 Thread J Storrs Hall, PhD
On Monday 22 October 2007 08:01:55 pm, Richard Loosemore wrote:

 Did you ever try to parse a sentence with more than one noun in it?
 
 Well, all right:  but please be assured that the rest of us do in fact 
 do that.

Why make insulting personal remarkss instead of explaining your reasoning?
(RL, Sat Oct  6 02:48:54 2007)



Re: Bogus Neuroscience [WAS Re: [agi] Human memory and number of synapses]

2007-10-22 Thread J Storrs Hall, PhD
On Monday 22 October 2007 08:48:20 pm, Russell Wallace wrote:
 On 10/23/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
  Still don't buy it. What the article amounts to is that speed-reading is
  fake. No kind of recognition beyond skimming (e.g. just ignoring a
  substantial proportion of the text) is called for to explain the observed
  performance.
 
 And I'm saying nevermind articles, try it for yourself. I tried the
 experiment, before I wrote that earlier post, it's easy to do. You'll
 find you do in fact recognize (I'm making no claims about rate of
 comprehension or retention, I'm only addressing the question of
 recognition) many words simultaneously, in parallel, without needing
 to saccade serially to each one.

Still don't buy it. Saccades are normally well below the conscious level, and 
a vast majority of what goes on cognitively is not available to 
introspection. Any good reader gets to the point where the sentence meanings, 
not the words at all, are the only thing that breaks into the conscious 
level. (you can read with essentially complete semantic comprehension and 
still be quite unable to repeat any of the text verbatim.)

BTW, I'm not trying to say that no concurrent recognition happens in the 
brain -- I'm sure that it does. I merely maintain that I haven't seen any 
evidence to convince me that it occurs in that particular part of vision. 

Josh



Re: Bogus Neuroscience [WAS Re: [agi] Human memory and number of synapses]

2007-10-22 Thread J Storrs Hall, PhD
On Monday 22 October 2007 09:33:24 pm, Edward W. Porter wrote:
 Richard,
...
 Are you capable of understanding how that might be considered insulting?

I think in all seriousness that he literally cannot understand. Richard's 
emotional interaction is very similar to that of some autistic people I have 
known. The recent spat over Turing completeness started when I made a remark 
I thought to be humorous -- *quoting exactly the words Richard had used to 
make the same joke* to someone else -- and he took the same words he had said 
as a disparaging insult when said to him.

Josh



Re: Bogus Neuroscience [WAS Re: [agi] Human memory and number of synapses]

2007-10-22 Thread J Storrs Hall, PhD
You can DO them consciously but that doesn't necessarily mean that you can 
intentionally become conscious of the ones you are doing unconsciously.

Try cutting a hole in a piece of paper and moving it smoothly across another 
page that has text on it. When your eye tracks the smoothly moving page, what 
appears through the hole is a blur.

Josh


On Monday 22 October 2007 10:23:12 pm, Russell Wallace wrote:
 On 10/23/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
  Still don't buy it. Saccades are normally well below the conscious level, 
and
  a vast majority of what goes on cognitively is not available to
  introspection. Any good reader gets to the point where the sentence 
meanings,
  not the words at all, are the only thing that breaks into the conscious
  level. (you can read with essentially complete semantic comprehension and
  still be quite unable to repeat any of the text verbatim.)
 
 Sure, but saccades and word recognition are like breathing - normally
 they operate subconsciously, but you can become aware and take control
 of them if you so choose. Again this isn't abstruse theory - try it
 and see, the experiment can be done in seconds.
 
 
 




Re: [agi] Poll

2007-10-20 Thread J Storrs Hall, PhD
On Friday 19 October 2007 10:36:04 pm, Mike Tintner wrote:
 The best way to get people to learn is to make them figure things out for 
 themselves .

Yeah, right. That's why all Americans understand the theory of evolution so 
well, and why Britons have such an informed acceptance of 
genetically-modified foods. It's why Galileo had such an easy time convincing 
the Church that the earth goes around the sun. It's why the Romans widely 
adopted the steam engine following its invention by Heron of Alexandria. It's 
why the Inquisition quickly realized that witchcraft is a superstition, 
rather than burning innocent women at the stake.

The truth is exactly the opposite: Humans are built to propagate culture 
memetically, by copying each other; the amount we know individually by this 
process is orders of magnitude greater than what we could have figured out 
for ourselves. Reigning orthodoxy of thought is *very hard* to dislodge, even 
in the face of plentiful evidence to the contrary. 

Isaac Asimov famously said that the most exciting moment in science is when someone says, "That's funny..." But the reason to point it out is that it *doesn't* happen all the time, even in science (it's not "normal science" in Kuhn's phrase), and even less so outside of it.

In the real world, when people get confused and work out a way around it, what 
they're learning is not an inventive synthesis of the substance at issue, but 
an attention filter. And that, for the average person, is usually just 
picking an authority figure.

Theirs not to reason why; theirs but to do and die.

Humans are *stupid*, Mike. You're still committing the superhuman human 
fallacy.

Josh



Re: [agi] Poll

2007-10-19 Thread J Storrs Hall, PhD
On Friday 19 October 2007 01:30:43 pm, Mike Tintner wrote:
 Josh: An AGI needs to be able to watch someone doing something and produce a 
 program such that it can now do the same thing.
 
 Sounds neat and tidy. But that's not the way the human mind does it. 

A vacuous statement, since I stated what needs to be done, not how to do it.

 We  start from ignorance and confusion about how to perform any given skill/ 
 activity

Particularly how to build an AGI :-)

 - and while we then acquire an enormous amount of relevant  
 routines - we never build a whole module or program for any activity. 

If what you're trying to say is nobody's perfect, well, duh.

If you're trying to say humans don't actually acquire skills, speak for 
yourself.

 We  never stop learning, whether we're committed to that attitude 
 philosophically or not. 

Some of us never *start* learning...

 And we never stop being confused. 

FDSN.

 Are you certain about how best to write programs? Or have sex? 
 Or a conversation? Or play chess? Or tennis? All our activities, like those,
 demand and repay a lifetime's study. An AGI will have to have a similar
 approach to enjoy any success.

How stupid of me not to realize that my vague ideas on how to build a program 
that can learn by watching, would not instantly achieve superhuman, Godlike, 
mathematically optimal performance on every possible task at first sight. 
I am awed by the brilliance of this insight.

Josh



[agi] evolution-like systems

2007-10-19 Thread J Storrs Hall, PhD
There's a really nice blog at 
http://karmatics.com/docs/evolution-and-wisdom-of-crowds.html talking about 
the intuitiveness (or not) of evolution-like systems (and a nice glimpse of 
his Netflix contest entry using a Kohonen-like map builder).

Most of us here understand the value of a market or evolutionary model for 
internal organization and learning in the mind. How many have a model of mind 
that explains why some people find these models intuitive while many do not?

Josh



Re: [agi] symbol grounding QA

2007-10-18 Thread J Storrs Hall, PhD
Remember that Eliezer is using holonic to describe *conflict resolution* in 
the interpretation process. The reason it fits Koestler's usage is that it 
uses *both* information about the parts that make up a possible entity and 
the larger entities it might be part of. 

Suppose we see the sentence "The cut sut on the mut", written in longhand. We would rapidly come to understand that the writer didn't close his a's and that the sentence had to do with domestic felines. An essential part of this process would be resolving the conflict between the different possible interpretations of the letters and the words. "Holonic" neatly captures this process by emphasizing that the entities being disambiguated are both made up of parts and are themselves parts of larger entities.

Is that a fair exegesis, Eliezer?
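A minimal sketch of that kind of two-way disambiguation (all of the scores are invented, and this is only an illustration of the idea, not Eliezer's mechanism):

from itertools import product

# Bottom-up evidence: each scrawled word with its candidate letter-level readings.
letter_evidence = {
    "cut": {"cut": 0.5, "cat": 0.5},    # an unclosed 'a' reads as 'u'
    "sut": {"sut": 0.5, "sat": 0.5},
    "mut": {"mut": 0.5, "mat": 0.5},
}
# Top-down expectations: how plausible each candidate is as a word here.
word_prior = {"cat": 1.0, "sat": 1.0, "mat": 1.0, "cut": 0.8, "sut": 0.01, "mut": 0.1}

def sentence_bonus(words):
    # crude stand-in for "does the whole sentence make sense?"
    return 2.0 if words == ("cat", "sat", "mat") else 1.0

def interpret(scrawls):
    best_words, best_score = None, 0.0
    for combo in product(*(letter_evidence[w].items() for w in scrawls)):
        words = tuple(w for w, _ in combo)
        score = sentence_bonus(words)
        for word, letter_score in combo:
            score *= letter_score * word_prior[word]
        if score > best_score:
            best_words, best_score = words, score
    return best_words

print(interpret(["cut", "sut", "mut"]))    # -> ('cat', 'sat', 'mat')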

Josh


On Wednesday 17 October 2007 06:43:52 pm, Edward W. Porter wrote:
 JOSH,
 
 I KNEW SERRE'S SYSTEM WAS ONLY FEED FORWARD, AND ONLY DEALS WITH CERTAIN
 ASPECTS OF VISION, BUT I THINK IT HAS AMAZINGLY IMPRESSIVE PERFORMANCE FOR
 SUCH A RELATIVELY SIMPLE SYSTEM, AND A LOT OF IT IS AUTOMATICALLY LEARNED.
 
 IS IT "HOLONIC?"
 
 IT DEFINITELY DOESN'T JUST DIVIDE VISUAL STATE SPACE UP INTO THE
 EQUIVALENT OF THE BLINDLY SELECTED SUBSPACES.  IT LEARNS PATTERNS, AND
 PATTERNS OF PATTERNS, AND GENERALIZATIONS OF PATTERNS.  AND WHAT IT
 LEARNS, ARE, AS I REMEMBER IT, PATTERNS THAT SOMEHOW USEFULLY DIVIDE
 VISUAL EXPERIENCE AT THE LEVEL OF COMPLEXITY REPRESENTED BY THAT LEVEL OF
 PATTERN.  SO THE PATTERNS BEGIN TO REPRESENT USEFUL SHAPES, AND PATTERNS
 OF SUCH SHAPES.
 
 SO I THINK IT IS SOMEWHAT ANALOGOUS TO DIVIDING UP A BODY INTO UNITS BASED
 ON THE COHERENT ROLES THEY PLAY IN A HIERARCHY OF SUCH PATTERNS, RATHER
 THAN JUST SOME ARBITRARY PATTERN THAT IS INDEPENDENT OF WHAT IS HAPPENING
 ABOVE OR BELOW IT.


[agi] Poll

2007-10-18 Thread J Storrs Hall, PhD
I'd be interested in everyone's take on the following:

1. What is the single biggest technical gap between current AI and AGI? (e.g. 
we need a way to do X or we just need more development of Y or we have the 
ideas, just need hardware, etc) 

2. Do you have an idea as to what should should be done about (1) that would 
significantly accelerate progress if it were generally adopted?

3. If (2), how long would it take the field to attain (a) a baby mind, (b) a 
mature human-equivalent AI, if your idea(s) were adopted and AGI seriously 
pursued?

4. How long to (a) and (b) if AI research continues more or less as it is 
doing now?

Thanks,

Josh



Re: [agi] symbol grounding QA

2007-10-18 Thread J Storrs Hall, PhD
On Thursday 18 October 2007 09:28:04 am, Edward W. Porter wrote:
 Josh,
 
 According to that font of undisputed truth, Wikipedia, the general
 definition of a holon is:
 ...
 “Since a holon is embedded in larger wholes, it is influenced by and
 influences these larger wholes. And since a holon also contains
 subsystems, or parts, it is similarly influenced by and influences these
 parts. Information flows BIDIRECTIONALLY between smaller and larger
 systems.”  (emphasis added)

... but in a feedforward network information only flows one way.


Re: [agi] symbol grounding QA

2007-10-17 Thread J Storrs Hall, PhD
Holonic as used by Koestler implies at least a little something more than 
hierarchical. I think he meant something I would call coherent levels of 
abstraction, e.g. describing a body as a system of organs or an organ as a 
system of cells, such that you can usefully do a data-hiding encapsulation. I 
can do this if I partition the body into organs, for example, but not if I 
divide it up into a hierarchy of cubical volumes as if using an oct-tree.

I don't see Serre's hierarchy as being particularly holonic. His levels 
correspond to levels of complexity and region size, but not to constituents 
that partition the image into a coherent set of parts that are 
wholes-in-themselves.

I don't see anything at all addressing this level of architectural concern in 
Hawkins' stuff, holonic or otherwise. There may be new stuff since On Intelligence, but 
what he said there seemed to assume a hierarchical structure in the units 
that would follow some ontology of the datastream they were interpreting, 
without saying anything about where the ontology came from.

Eliezer appears to be using the phrase "holonic conflict resolution" to mean more or less the same thing I use "active interpretation" for (Beyond AI p. 
229-232). The basic idea is that in a hierarchical stack of pattern matchers, 
information flows down (and in my model, across) as well as up, allowing the 
environment of a part to affect its interpretation in combination with its 
constituents. I find this use of the term to be congenial with the original 
meaning, and I'm happy to follow Eliezer's usage.

(Note BTW that the Poggio/Serre model is strictly and explicitly feedforward.)

Thanks for bringing it up -- this has been fun and enlightening.

Josh


On Tuesday 16 October 2007 11:19:42 pm, Edward W. Porter wrote:
 In response to below post from Josh Hall:
 
 I am using Holonic as Eliezer S. Yudkowsky used in in his LEVELS OF
 ORGANIZATION IN GENERAL INTELLIGENCE in which he said
 
 Holonic is a useful word to describe the simultaneous application of
 reductionism and holism, in which a single quality is simultaneously a
 combination of parts and a part of a greater whole [Koestler67].  Note
 that holonic does not imply strict hierarchy, only a general flow from
 high-level to low-level and vice versa.  For example, a single feature
 detector may make use of the output of lower-level feature detectors, and
 act in turn as an input to higher-level feature detectors.  The
 information contained in a mid-level feature is then the holistic sum of
 many lower-level features, and also an element in the sums produced by
 higher-level features.  If you pick one vantage point in a holonic
 structure and look down (reductionism) you find parts composing the
 local whole, with simpler behaviors that contribute to local complexity;
 if you look up (holism) you find a greater whole to which local parts
 contribute, and more complex processes which local behaviors support. 
 
 I basically use it to be representation in roughly hierarchical network,
 such as that defined by Jeff Hawkings, or in the Serre PhD thesis I have
 cited so often.  Representations using such nets have many advantages,
 such as functional invariance, ability to inherit information from more
 general nodes, etc.
 
 
 -Original Message-
 From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, October 16, 2007 11:01 PM
 To: agi@v2.listbox.com
 Subject: Re: [agi] symbol grounding QA
 
 
 On Tuesday 16 October 2007 08:43:23 pm, Edward W. Porter wrote:
  ... holonic pattern matching, ...
 
 Now there's a word you don't hear every day :-)  I've always thought of it
 as
 a feature of Arthur Koestler's somewhat poetic ontology of hierarchy. And
 it
 appears to enjoy a minor vogue as a subspecies of agent-based systems. But
 
 you'll have to explain what holonic pattern matching is, please?
 
 Josh
 
 
 




Re: [agi] symbol grounding QA

2007-10-16 Thread J Storrs Hall, PhD
On Monday 15 October 2007 04:45:22 pm, Edward W. Porter wrote:
 I mis-understood you, Josh.  I thought you were saying semantics could be
 a type of grounding.  It appears you were saying that grounding requires
 direct experience, but that grounding is only one (although perhaps the
 best) possible way of providing semantic meaning.  Am I correct?

That's right as far as it goes. The term grounding is very commonly 
associated with symbol in such a way as to imply that semantics only arise 
from the fact that symbols have referents in the real world (or whatever). 
This is the view Harnad espoused with his dictionary example. The view I 
suggest instead is that it's not the symbols per se, but the machinery that 
manipulates them, that provides semantics. Dictionaries have no machinery. 
Turing machines, on the other hand, do -- so the symbols used by a Turing 
machine may have meaning in a sense even though there is nothing in the 
external world that they map to. (A case in point would be the individual 
bits that your calculator manipulates.)

 I would tend to differ with the concept that grounding only relates to
 what you directly experience.  (Of course it appears to be a definitional
 issue, so there is probably no theoretical right or wrong.)  I consider
 what I read, hear in lectures, and see in videos about science or other
 abstract fields such as patent law to be experience, even though the
 operative content in such experiences is derived second, third, fourth, or
 more handed.

Harnad would say that you understand the words you read and hear because, as a 
human body, you have already grounded them in experience or can make use of a 
definition in terms that are already grounded, avoiding circular definitions. 

I would say that you can understand sentences and arguments you hear because 
you have an internal model that can make predictions based on the sentences 
and inferences based on the arguments. 

The only reason the distinction makes much of a difference is that the 
grounding issue is used as an argument that an AI must be embodied, having 
direct sensory experience. It's part of an effort to understand why classical 
AI faltered in the 80's and thus what must be done differently to make it go 
again. I give a good overview of the arguments in Beyond AI chapters 5 and 7.

 In Richard Loosemore’s above mentioned informative post he implied that
 according to Harnad a system that could interpret its own symbols is
 grounded.  I think this is more important to my concept of grounding than
 from where the information that lets the system do such important
 interpretation comes.  To me the important distinction is are we just
 dealing with realtively naked symbols, or are we dealing with symbols that
 have a lot of the relations with other symbols and patterns, something
 like those Pei Wang was talking about, that lets the system use the
 symbols in an intelligent way.

Richard is right in that if a system formed its own symbols from sensory 
experience, they would be grounded in Harnad's sense. In the case of the 
relations between the symbols, it isn't clear -- there's plenty of relations 
specified between symbols in Harnad's ungrounded dictionary. 

I would distinguish between relations that were merely a static structure, as 
in the dictionary, and ones that were part of a mechanism (which could be had 
by adding say an inference procedure to the definitions). 

 Usually for such relations and patterns to be useful in a world, they have
 to have come directly or indirectly from experience of that world.  But
 again, it is not clear to me that they has to come first handed.

Exactly my point. The vast majority of what we learn is second- (or nth-) 
hand, mediated by symbol structures. And it's the structures that we need to 
be thinking about, not the symbols.
 
 It seems ridiculous to say that one could have two identical large
 knowledge bases of experiential knowledge each containing millions of
 identically interconnected symbols and patterns in two AGI having
 identical hardware, and claim that the symbols in one were grounded but
 those in the other were not because of the purely historical distinction
 that the sensing to learn such a knowledge was performed on only one of
 the two identical systems.

Again, exactly my point. It wouldn't matter if one was copied from the other, 
or reverse-engineered, or produced by a random-number generator (as unlikely 
as that would be).

Or imagine that you had a robot who built its own symbols from physical 
experience until it was intelligent, and then was cut off from the sensors 
and was only connected thru a tty, doing Turing tests. The symbols didn't 
lose meaning -- the words of someone blinded in an accident are not suddenly 
meaningless! So if we built an AI de novo that had the same program as the 
robot, it would be ridiculous to say that its symbols had no meaning, as 
well.
 
Josh


Re: [agi] symbol grounding QA

2007-10-16 Thread J Storrs Hall, PhD
On Tuesday 16 October 2007 09:24:34 am, Richard Loosemore wrote:

 If I may interject:  a lot of confusion in this field occurs when the 
 term semantics is introduced in a way that implies that it has a clear 
 meaning [sic].  

Semantics does have a clear meaning, particularly in linguistics and 
computer science. In programming language theory, it has a very precise and 
formal meaning (example: 
http://people.cs.uchicago.edu/%7Ejacobm/pubs/scheme-semantics.pdf) with deep 
underpinnings in logic and math.

There are, of course, many hangers-on to AI who haven't done their homework, 
and thus are confused about its meaning.

  I start to wonder what they're 
 putting on their cornflakes in the morning.  

Cornflakes are bad for you, consisting entirely of carbohydrates.

 The trivial sense of  
 semantics don't apply, and the deeper senses are so vague that they 
 are almost synonymous with grounding.

Completely wrong. Grounding is a fairly shallow concept that falls apart as an 
explanation of meaning under fairly moderate scrutiny. Semantics is, by 
definition, whatever it takes to understand meaning.

Josh



Re: [agi] symbol grounding QA

2007-10-16 Thread J Storrs Hall, PhD
On Tuesday 16 October 2007 03:24:07 pm, Edward W. Porter wrote:

 AS I SAID ABOVE, I AM THINKING OF LARGE COMPLEX WEBS OF COMPOSITIONAL AND
 GENERALIZATIONAL HIERARCHIES, ASSOCIATIONS, EPISODIC EXPERIENCES, ETC, OF
 SUFFICIENT COMPLEXITY AND DEPTH TO REPRESENT THE EQUIVALENT OF HUMAN WORLD
 KNOWLEDGE.
 
 SO, IS THAT WHAT YOU MEAN BY "STRUCTURES"?

What do these webs of associations *do*? Are they like sentences in a book, 
waiting for some homunculus to read them, or are they like components in a 
circuit, an active machine and not just a static picture? If components, how 
do you specify what their functions are?

Josh


Re: [agi] symbol grounding QA

2007-10-15 Thread J Storrs Hall, PhD
On Monday 15 October 2007 10:21:48 am, Edward W. Porter wrote:
 Josh,
 
 Also a good post.

Thank you!
 
 You seem to be defining grounding as having meaning, in a semantic
 sense. 

Certainly it has meaning, as generally used in the philosophical literature. I'm arguing that its meaning makes an assumption about the nature of semantics that obscures rather than informs some important questions.

 If so, why is it a meaningless question to ask if 2 in your 
 calculator has grounding, since you say the calculator has limited but
 real semantics.  Would not the relationships 2 has to other numbers in
 the semantics of that system be a limited form of semantics.

Not meaningless -- I'd just say that for the 2 in my calculator, the answer is 
no, in Harnad's fairly precise sense of grounding. Whereas the calculator 
clearly does have the appropriate semantics for arithmetic.
 
 And what other source besides experience can grounding come from, either
 directly or indirectly?  The semantic model of arithmetic in you
 calculator was presumably derived from years of human experience that
 found the generalities of arithmetic to be valid and useful in the real
 world of things like sheep, cows, and money.  

I'd claim that this is a fairly elastic use of the term experience. 
Typically one assumes that experience means the experience of the person, AI, 
or whatever that we're talking about, in this case the calculator. The 2 in 
the calculator clearly does not get its semantics from the calculator's 
experience.

If we allow an expanded meaning of experience as including the experience of 
the designer of the system, we more or less have to allow it to mean any 
feedback in the evolutionary process that produced the low-level semantic 
mechanisms in our own brains. This strains my concept of the word a bit.

Whether we allow that or not, I claim that we can talk about a more proximate 
criterion for semantics, which is that the system forms a model of some 
phenomenon of interest. It may well be that experience, narrowly or broadly 
construed, is often the best way of producing such a system (and in fact I 
believe that it is), but the questions are logically separable. It's 
conceivable to have a system that has the appropriate semantics that was just 
randomly produced, for example, whereas the reverse, a system based on 
experience that DOESN'T model the phenomenon, wouldn't have the semantics in 
my view.

The most common case of a randomly-created semantic model that didn't arise 
from experience is the creation of social realities by fiat, as in the 
classic case of money. We (somebody) made up what money is and how it should work, and the reality that the system models followed afterward, because we built the reality to match the system rather than the other way around.

Josh




Re: [agi] symbol grounding QA

2007-10-15 Thread J Storrs Hall, PhD
On Monday 15 October 2007 01:25:22 pm, Edward W. Porter wrote:

 "I'm arguing that its meaning makes an assumption about the nature of
 semantics that obscures rather than informs some important questions"
 
 WHAT EXACTLY DO YOU MEAN?

I think that will become clearer below:
 
 I JUST READ THE ABSTRACT OF Harnad, S. (1990) The Symbol Grounding
 Problem. Physica D 42: 335-346. ON THE WEB, AND IT SEEMS HE IS TALKING
 ABOUT USING SOMETHING LIKE A GEN/COMP HIERARCHY OF REPRESENTATION HAVING
 AS A BOTTOM LAYER SIMPLE SENSORY PATTERNS, AS A BASIS OF GROUNDING.

Basically. He proposes his notion of grounding as an escape from the problem, 
as he describes it, of learning Chinese from a Chinese-Chinese dictionary.  
You chase definitions around and around, but never get to where the symbols 
have any meaning to you. 

 SO HOW DOES THE CALCULATOR HAVE SIGNIFICANTLY MORE OF THIS TYPE OF
 GROUNDING THAN "10" IN BINARY.

What I said was that the calculator does NOT have this kind of grounding:

"I'd just say that for the 2 in my calculator, the answer is no, in Harnad's fairly precise sense of grounding."

What it does have is an internal system whose objects and workings reflect the 
ontology and etiology of arithmetic as we see it in the outside world. If I 
type 2+3 into the calculator, it displays 5. If I hold 2 sandwiches in my 
left hand, and 3 in my right, and put them all on a plate, when I count the 
sandwiches on the plate, lo and behold, there are 5 sandwiches.

So, I claim, the symbols in the calculator have meaning because they are part 
of a model that reflects some phenomenon of interest, and can be used to 
predict it. They have NO grounding in Harnad's sense -- the calculator has no 
sensory patterns that reflect quantities as we perceive them. 
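One way to put that claim in code (a sketch of the criterion, using the sandwich example above):

# The calculator's symbols are meaningful to the extent that its internal
# operation tracks what happens when the piles are physically combined.
def calculator_add(a, b):
    return a + b                          # the internal model

def combine_piles(left, right):
    return left + right                   # the external phenomenon: one plate

left_hand, right_hand = ["sandwich"] * 2, ["sandwich"] * 3
predicted = calculator_add(len(left_hand), len(right_hand))
observed = len(combine_piles(left_hand, right_hand))
assert predicted == observed == 5         # the model's prediction matches the world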
 
 "Typically one assumes that experience means the experience of the person, AI,
 or whatever that we're talking about..."
 
 IF THAT IS TRUE, MUCH OF MY UNDERSTANDING OF SCIENCE AND AI IS NOT
 GROUNDED, SINCE IT HAS BEEN LEARNED LARGELY BY READING, HEARING LECTURES,
 AND WATCHING DOCUMENTARIES. 

Yes indeed -- but that doesn't mean (necessarily) that what you know is wrong, 
as long as the models you have reflect the realities they should. And this is 
why I say grounding in Harnad's sense is a red herring.
 
 "I claim that we can talk about a more proximate criterion for semantics, which is that the system forms a model of some phenomenon of interest. It may well be that experience, narrowly or broadly construed, is often the best way of producing such a system (and in fact I believe that it is), but the questions are logically separable."
 
 THIS MAKES SENSE, BUT THIS WOULD COVER A LOT OF SYSTEMS THAT ARE NOT
 "GROUNDED" IN THE WAY MOST OF US USE THAT WORD

Again an argument to use a different word. I know a lot of science for which I 
haven't personally done the experiments that I believe are the justifications 
for my knowledge. I'd claim that my concept of inertia is grounded in 
personal experience but that my concept of magnetic induction is more or less 
synthesized from other abstract mathematical concepts. But it happens to work 
well enough that I can build working transformers. So I believe it's true 
in the sense that it is a valid model of the phenomenon.

 "It's conceivable to have a system that has the appropriate semantics that was just randomly produced..."
 
 I ASSUME THAT BY RANDOMLY PRODUCED, YOU DON'T MEAN THAT THE SYSTEM WOULD
 BE TOTALLY RANDOM, IN WHICH CASE IT WOULD SEEM THE CONCEPT OF A MODEL
 WOULD BE MEANINGLESS.

Nope. If the model was formed at random, BUT HAPPENS TO MATCH REALITY anyway, 
it has as much meaning as one built up by painstaking experimentation. But of 
course the probability of this happening is vanishingly small if the model is 
complex.

 I WOULD PICK AS A GOOD EXAMPLE OF A SEMANTIC SYSTEM THAT IS SOMEWHAT
 INDEPENDENT OF PHYSICAL REALITY, BUT YET HAS PROVED USEFUL, AT LEAST FOR
 ENTERTAINMENT, IS THE HARRY POTTER SERIES, OR SOME OTHER FICTIONAL WORLD
 WHICH CREATES A FICTIONAL REALITY IN WHICH THERE IS A CERTAIN REGULARITY
 TO THE BEHAVIOR AND CHARACTERISTICS OF THE FICTITIOUS PEOPLE AND PLACES IT
 DESCRIBES.

There's a physical reality that the world of magic reflects, oddly enough, 
that's very close to home. The two key magical laws, i.e. of similarity and 
contagion, are remarkably good descriptions of the heuristics by which our 
minds form associations...

Cheers!

Josh


Re: [agi] symbol grounding QA

2007-10-15 Thread J Storrs Hall, PhD
On Monday 15 October 2007 01:57:18 pm, Richard Loosemore wrote:
 AI programmers, in their haste to get something working, often simply 
 write some code and then label certain symbols as if they are 
 meaningful, when in fact they are just symbols-with-labels.

This is quite true, but I think it is a lot closer to McDermott's critique ("Artificial Intelligence Meets Natural Stupidity") than to Harnad's.

Harnad shares the typical epistemologist's assumption that for a symbol to 
have meaning, it must have an aboutness, i.e. it must refer to something in 
some external (although perhaps imaginary) world. I happen to think that 
Solomonoff's inductive formulation of AI more or less demolished this 
particular philosophical set of (often unstated) assumptions, which were 
after all responsible for 3 millennia of spectacularly unproductive 
pontification.

Josh



Re: [agi] symbol grounding QA

2007-10-13 Thread J Storrs Hall, PhD
This is a very nice list of questions and makes a good framework for talking 
about the issues. Here are my opinions...

On Saturday 13 October 2007 11:29:16 am, Pei Wang wrote:

 *. When is a symbol grounded?

Grounded is not a good way of approaching what we're trying to get at, which 
is semantics. The term implies that meanings are inherent in words, and this 
obscures the fact that semantics are a property of systems of which words are 
only a part.
Example: Is the symbol 2 grounded in my calculator? There's no pointer from 
the bit pattern to an actual pair of anything. However, when I type in 2+2 it 
tells me 4. There is a system implemented that is a semantic model of 
arithmetic, and 2 is connected into the system in such a way that I get the 
right answer when I use it. Is 2 grounded? Meaningless question. Does the 
calculator have a limited but real semantics of arithmetic? Definitely.
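
A five-line toy of the same point (my own illustration, not any real 
calculator's internals): the token 2 is just a key into a table with no pointer 
to a pair of anything, and its semantics come entirely from the fact that the 
little system's behavior matches the structure of arithmetic.

    value = {"0": 0, "1": 1, "2": 2, "3": 3, "4": 4}
    symbol = {v: s for s, v in value.items()}

    def add(a, b):                       # the "system" the token is wired into
        return symbol[value[a] + value[b]]

    print(add("2", "2"))                 # -> 4: right answers, hence a limited
                                         # but real semantics of (tiny) arithmetic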

 *. What is wrong in traditional symbolic AI on this topic?

These systems didn't come close to implementing a competent semantics of the 
parts of the world they were claimed to understand.

 *. What is the experience needed for symbol grounding?

Experience per se isn't strictly necessary, but you have to get the semantics 
from somewhere, and experience is a good source. The scientific method relies 
heavily on experience in the form of experiment to validate theories, for 
example.

 *. For the symbols in an AGI to be grounded, should the experience of
 the system be the same, or very similar, to human sensory experience?

No, as long as it can form coherent predictive models. On the other hand, some 
overlap may be necessary to use human language with much proficiency.

 *. Is vision necessary for symbol grounding in AGI?

No, but much of human modelling is based on spatial metaphors, and thus the 
communication issue is particularly salient.

 *. Is vision important in deciding the meaning of human concepts?

Many human concepts are colored with visual connotations, pun intended. You're 
clearly missing something if you don't have it; but I would guess that with 
only moderate exceptions, you could capture the essence without it.

 *. In that case, if an AGI has no vision, how can it still understand
 a human concept?

The same way it can understand anything: it has a model whose semantics match 
the semantics of the real domain.

 *. Can a blind person be intelligent?

Yes.

 *. How can a sensorless system like NARS have grounded symbol?

Forget grounded. Can it *understand* things? Yes, if it has a model whose 
semantics match the semantics of the real domain.

 *. If NARS always uses symbols differently from typical human usage,
 can we still consider it intelligent?

Certainly, if the symbols it uses for communication are close enough to the 
usages of whoever it's communicating with to be comprehensible. Internally it 
can use whatever symbols it wants any way it wants.

 *. Are you saying that vision has nothing to do with AGI?

Personally I think that vision is fairly important in a practical sense, 
because I think we'll get a lot of insights into what's going on in there 
when we try to unify the higher levels of the visual and natural language 
interpretive structures. And of course, vision will be of immense practical 
use in a robot.

But I think that once we do know what's going on, it will be possible to build 
a Turing-test-passing AI without vision.

Josh




Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-07 Thread J Storrs Hall, PhD
It's probably worth pointing out that Conway's Life is not only Turing 
universal but that it can host self-replicating machines. In other words, an 
infinite randomly initialized Life board will contain living creatures 
which will multiply and grow, and ultimately come to dominate the entire 
board, as the self-replicating molecules in Earth's primeval oceans gave rise 
to biological life, which drastically changed the character of the whole 
planet.

In other words, the large-scale character of *any* sufficiently large Life 
board will be determined by the properties of the self-replicating patterns 
(which are a rare class to begin with, and overlap the Turing-universal 
ones).
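
For anyone who hasn't played with it, here's the entire substrate -- a 
throwaway implementation of the standard B3/S23 update rule (my own toy code); 
everything above, Turing universality and self-replication included, is built 
out of nothing more than this:

    from collections import Counter

    def life_step(live):
        """One generation of Conway's Life; `live` is a set of (x, y) cells."""
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):              # a glider translates by (1, 1) every 4 steps
        glider = life_step(glider)
    print(glider)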

It remains to be seen whether replicating Life patterns could evolve to become 
intelligent.

Josh



Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-07 Thread J Storrs Hall, PhD
I'm not convinced, primarily because I would have said the same thing about 
actual bacteria vs humans if I didn't have the counterexample. 

One human generation time is 100,000 bacterial generation times -- and it only 
takes about 133 generations of bacteria to consume the entire mass of the 
earth, if they could.
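
The arithmetic, for anyone who wants to check it (the masses are my own round 
numbers: ~1e-15 kg per bacterium, ~6e24 kg for the Earth):

    bacterium_mass = 1e-15          # kg, order of magnitude for E. coli
    earth_mass = 5.97e24            # kg
    mass, generations = bacterium_mass, 0
    while mass < earth_mass:
        mass *= 2                   # one doubling per generation
        generations += 1
    print(generations)              # -> 133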

Josh

On Sunday 07 October 2007 10:57:41 am, Russell Wallace wrote:
 On 10/7/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 [rest of post and other recent ones agreed with]
 
  It remains to be seen whether replicating Life patterns could evolve to 
become
  intelligent.
 
 No formal proof, but informally: definitely no. Our universe has all
 sorts of special properties that make intelligence adaptive, that
 Conway's Life doesn't have. Intelligence would be baggage in that
 universe; best survivors will be bacterialike fast self-replicators
 (maybe simpler than bacteria for all I know: it might turn out to be
 optimal to ditch general assembler capability).
 




Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-07 Thread J Storrs Hall, PhD
On Sunday 07 October 2007 01:55:14 pm, Russell Wallace wrote:
 On 10/7/07, Vladimir Nesov [EMAIL PROTECTED] wrote:
  That's interesting perspective - it defines a class of series
  generators (where for example in GoL one element is the whole board on
  given tick) that generate intelligence through evolution in
  time-efficient way, and poses a question: what is the simplest
  instance of this class?
 
 If we accept Occam's razor plus some form of anthropic reasoning, we
 could conjecture that our universe is the simplest instance of this
 class, since if there were a simpler one we would (with high
 probability) have found ourselves in that universe rather than this
 one.
 
 (Mental health warning: the above is hopefully-amusing philosophical
 conjecture only, and should not be confused with science.)


This is the same kind of reasoning that leads Bostrom et al to believe that we 
are probably living in a simulation, which may be turned off at any time.



[agi] How many scientists?

2007-10-06 Thread J Storrs Hall, PhD
Does anyone know of any decent estimates of how many scientists are working in 
cog-sci related fields, roughly AI, psychology, and neuroscience?

Josh



Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-05 Thread J Storrs Hall, PhD
On Friday 05 October 2007 12:13:32 pm, Richard Loosemore wrote:

 Try walking into any physics department in the world and saying Is it 
 okay if most theories are so complicated that they dwarf the size and 
 complexity of the system that they purport to explain?

You're conflating a theory and the mathematical mechanism necessary to apply 
it to actual situations. The theory in Newtonian physics can be specified as 
the equations F=ma and F=Gm1m2/r^2 (in vector form); but applying them 
requires a substantial amount of calculation.

You can't simply ignore the unusual case of chaotic motion, because the 
mathematical *reason* the system doesn't have a closed analytic solution is 
that chaos is possible.
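
To make "a substantial amount of calculation" concrete, here's the crudest 
possible way to apply those two equations -- a toy Euler integrator in 
arbitrary units (my own throwaway code, nothing like production accuracy). 
This kind of step-by-step grinding is also the only general way to follow a 
trajectory once the motion goes chaotic.

    G = 1.0                                      # toy units

    def step(bodies, dt):
        """One crude Euler step of F = ma with F = G*m1*m2/r^2."""
        for b in bodies:                         # accelerations from old positions
            ax = ay = 0.0
            for o in bodies:
                if o is b:
                    continue
                dx, dy = o["x"] - b["x"], o["y"] - b["y"]
                r = (dx * dx + dy * dy) ** 0.5
                a = G * o["m"] / (r * r)
                ax += a * dx / r
                ay += a * dy / r
            b["vx"] += ax * dt
            b["vy"] += ay * dt
        for b in bodies:                         # then move everything
            b["x"] += b["vx"] * dt
            b["y"] += b["vy"] * dt

    bodies = [{"m": 1.0,   "x": 0.0, "y": 0.0, "vx": 0.0, "vy": 0.0},
              {"m": 0.001, "x": 1.0, "y": 0.0, "vx": 0.0, "vy": 1.0}]
    for _ in range(1000):
        step(bodies, 0.001)
    print(bodies[1]["x"], bodies[1]["y"])        # roughly (cos 1, sin 1)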

 In fact, your example is beautiful, in a way.  So it turns out to be 
 necessary to resort to approximate methods, to simulations, in order to 
 deal with the MINUSCULE amout of nonlinearity/tangledness that exist in 
 the interactions of the atoms in a small molecule?  Well, whoop-dee-do!! 

Think again, Hammurabi. DFT is a quantum method that searches a space of 
linear combinations of basis functions to find a description of the electron 
density field in a molecular system. In other words, the charge of each 
electron is smeared over space in a pattern that has to satisfy Schrödinger's 
equation and also be at equilibrium with the force exerted on it by the 
charge distributions of each other electron. It's approximately like solving 
the Navier-Stokes equation for each of N different fluid flow problems 
simultaneously, under the constraint that each volume experiences a pressure 
field that is a function of the solutions of all the others.

Given the solution to that system, you're in a position to evaluate the force 
on each nucleus, whereupon you can either take one step of a molecular 
dynamics simulation, or one step of a conjugate-gradients energy 
minimization -- and start all over again with the electrons, which will 
have shifted, sometimes radically, due to the different forces from the 
nuclei.
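
Schematically, the outer loop is just this (a sketch under stated assumptions: 
the inner "solve" below is a stand-in toy pair potential, NOT real DFT, and 
none of these names come from any actual chemistry package):

    def inner_solve(positions):
        # placeholder for the self-consistent electron-density calculation
        return abs(positions[1] - positions[0])    # summarize it as a bond length

    def forces(positions, r, r0=1.0, k=2.0):
        # placeholder for evaluating the forces on the nuclei from the density
        f = k * (r - r0)                           # toy harmonic "bond"
        return [f, -f] if positions[0] < positions[1] else [-f, f]

    def relax(positions, step=0.1, tol=1e-6):
        while True:
            r = inner_solve(positions)             # the expensive part in real DFT
            f = forces(positions, r)
            if max(abs(x) for x in f) < tol:
                return positions                   # relaxed geometry
            # one gradient-style move on the nuclei, then repeat
            positions = [p + step * x for p, x in zip(positions, f)]

    print(relax([0.0, 2.5]))                       # the two "nuclei" settle ~1.0 apart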

Allow me to quote:

What you said above was pure, unalloyed bullshit:  an exquisite cocktail 
of complete technical ignorance, patronizing insults and breathtaking 
arrogance.

You did not understand word one...


Josh


[agi] Schemata

2007-10-05 Thread J Storrs Hall, PhD
On Thursday 04 October 2007 05:19:29 pm, Edward W. Porter wrote:

 I have no idea how new the idea is.  When Schank was talking about 
scripts ...

From the MIT Encyclopedia of the Cognitive Sciences (p729):

Schemata are the psychological constructs that are postulated to account for 
the molar forms of human generic knowledge. The term *frames*, as introduced 
by Marvin Minsky (1975), is essentially synonymous, except that Minsky used 
frame as both a psychological construct and a construct in artificial 
intelligence. *Scripts* are the subclass of schemata that are used to account 
for generic (stereotyped) sequences of actions (Schank and Abelson 1977).

Read on to find that Minsky, having read the work of the 1930s British 
psychologist Bartlett, which had languished in obscurity in the meantime, did 
reintroduce the concept to cog sci in the mid-70s with his frame paper.

Josh


Re: [agi] breaking the small hardware mindset

2007-10-04 Thread J Storrs Hall, PhD
On Wednesday 03 October 2007 09:37:58 pm, Mike Tintner wrote:

 I disagree also re how much has been done.  I don't think AGI - correct me - 
has solved a single creative problem - e.g. creativity - unprogrammed 
adaptivity - drawing analogies - visual object recognition - NLP - concepts -  
creating an emotional system - general learning - embodied/ grounded 
knowledge - visual/sensory thinking - every dimension, in short, 
of imagination. (Yes, vast creativity has gone into narrow AI, but that's 
different).  

Ah, the Lorelei sings so sweetly. That's what happened to AI in the 80's -- it 
went off chasing human-level performance at specific tasks, which requires 
a completely different mindset (and something of a different toolset) than 
solving the general AI problem. To repeat a previous letter, solving 
particular problems is engineering, but AI needed science.

There are, however, several subproblems that may need to be solved to make a 
general AI work. General learning is surely one of them. I happen to think 
that analogy-making is another. But there has been a significant amount of 
basic research done on these areas. 21st century AI, even narrow AI, looks 
very different from say 80's expert systems. Lots of new techniques that work 
a lot better. Some of them require big iron, some don't.

Research in analogy-making is slow -- I can only think of Gentner and 
Hofstadter and their groups as major movers. We don't have a solid theory of 
analogy yet (structure-mapping to the contrary notwithstanding). It's clearly 
central, and so I don't understand why more people aren't working on it. 
(btw: anytime you're doing anything that even smells like subgraph 
isomorphism, big iron is your friend.)
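
To put a point on the big-iron remark: even the most naive version of 
structure-matching bottoms out in a search like this one (my own brute-force 
sketch of subgraph isomorphism -- factorial in the worst case, which is exactly 
why the hardware matters).

    from itertools import permutations

    def subgraph_isomorphism(small_nodes, small_edges, big_nodes, big_edges):
        """Try every injective mapping of the small graph into the big one."""
        undirected = set(big_edges) | {(b, a) for a, b in big_edges}
        for image in permutations(big_nodes, len(small_nodes)):
            mapping = dict(zip(small_nodes, image))
            if all((mapping[a], mapping[b]) in undirected for a, b in small_edges):
                return mapping
        return None

    # Map a triangle into a 4-cycle with one chord:
    print(subgraph_isomorphism(["x", "y", "z"],
                               [("x", "y"), ("y", "z"), ("z", "x")],
                               [1, 2, 3, 4],
                               [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]))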

One main reason I support the development of AGI as a serious subfield is not 
that I think any specific approach here is likely to work (even mine), but 
that there is a willingness to experiment and a tolerance for new and 
odd-sounding ideas that spells a renaissance of science in AI.

Josh





Re: [agi] breaking the small hardware mindset

2007-10-04 Thread J Storrs Hall, PhD
On Thursday 04 October 2007 10:42:46 am, Mike Tintner wrote:

 ...  I find 
 no general sense of the need for a major paradigm shift. It should be 
 obvious that a successful AGI will transform and revolutionize existing 
 computational paradigms ...

I find it difficult to imagine a development that would at the same time 
revolutionize existing paradigms and yet not require a paradigm shift.

Josh



Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-04 Thread J Storrs Hall, PhD
On Thursday 04 October 2007 11:06:11 am, Richard Loosemore wrote:

   As far as we can tell, GoL is an example of that class of system in 
 which we simply never will be able to produce a theory in which we 
 plug in the RULES of GoL, and get out a list of all the patterns in GoL 
 that are interesting.  

What do you exclude from your notion of a theory? If it can require 
evaluating a recursive function, or solving a Diophantine equation, or any of 
the other (provably) Turing equivalent constructs we often use to express 
scientific theories, then I can readily give you a theory that will take the 
rules, run huge numbers of experiments, do clustering and maxent type 
analyses, and so forth, using any definition of interesting you can 
formally specify.
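
A crude sketch of what such a "theory" amounts to mechanically (my own toy: the 
rules go in, a pile of experiments gets run, and a formally specified predicate 
does the judging -- here "interesting" is just "still alive and not yet caught 
in a cycle after 50 steps"):

    import random
    from collections import Counter

    def step(live):                        # the RULES go in here (standard Life)
        counts = Counter((x + dx, y + dy) for x, y in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

    def interesting(pattern, steps=50):    # any formal predicate would do here
        seen = {frozenset(pattern)}
        for _ in range(steps):
            pattern = step(pattern)
            if not pattern or frozenset(pattern) in seen:
                return False               # died out, or fell into a seen state
            seen.add(frozenset(pattern))
        return True

    random.seed(0)
    soups = [{(random.randrange(8), random.randrange(8)) for _ in range(20)}
             for _ in range(200)]
    print(sum(interesting(s) for s in soups), "interesting soups out of 200")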

Josh



Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-04 Thread J Storrs Hall, PhD
On Thursday 04 October 2007 11:50:21 am, Bob Mottram wrote:
 To me this seems like elevating that status of nanotech to magic.
 Even given RSI and the ability of the AGI to manufacture new computing
 resources it doesn't seem clear to me how this would enable it to
 prevent other AGIs from also reaching RSI capability.  

Hear, hear and again I say hear, hear!

There's a lot of "and then a miracle occurs in step 2" in the "we build a 
friendly AI and it takes over the world and saves our asses" type reasoning 
we see so much of. (Or the "somebody builds an unfriendly AI and it takes 
over the world and wipes us out" reasoning as well.)

We can't build a system that learns as fast as a 1-year-old just now. Which is 
our most likely next step: (a) A system that does learn like a 1-year-old, or 
(b) a system that can learn 1000 times as fast as an adult?

Following Moore's law and its software cognates, I'd say give me the former 
and I'll give you the latter in a decade. With lots of hard work. Then and 
only then will you have something that's able to improve itself faster than a 
high-end team of human researchers and developers could. 
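
The back-of-the-envelope version of that estimate (my own numbers -- call it 
one combined hardware-plus-software doubling per year):

    import math
    doublings = math.log2(1000)           # a 1000x speedup is ~10 doublings
    print(round(doublings, 1), "doublings -> about a decade at one per year")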

Furthermore, there's a natural plateau waiting for it. That's where it has to 
leave off learning by absorbing knowledge from humans (reading textbooks and 
research papers, etc.) and start doing the actual science itself.

I have heard NO ONE give an argument that puts a serious dent in this, to my 
way of thinking.

Josh



