Re: [agi] AI Buzz in Mainstream

2005-02-18 Thread Brad Wyble
On Thu, 17 Feb 2005, JW Johnston wrote:
Bob Colwell's At Random column in this month's IEEE Computer magazine was
about his renewed excitement in AI given Jeff Hawkins' book and work:
http://www.computer.org/computer/homepage/0105/random/index.htm
Then today, found a similar article in Computerworld:
http://www.computerworld.com/softwaretopics/software/story/0,10801,99691,00.html?source=NLT_PM&nid=99691
It talks about Hawkins' work as well as that of Robert Hecht-Nielsen and Stan Franklin.
The AI winter ending?

I'm always glad to see public enthusiasm for AI, neuroscience, et al.
But unfortunately, the age-old problems of AI continue to haunt us.
The homunculus is alive and well, even after all these years and all the
warnings.  Every time some amateur AI philosopher, such as this Jeff
Hawkins, throws his hat into the ring, I'm reminded of this pernicious
problem.

I haven't read the book, but from this fairly substantial review article, 
I get the sense that his model of the brain consists of a pattern matcher 
under the control of intention.

There's the homunculus all over again.  And Bob, the column author,
loves it.

I guess I can't blame Bob; the homunculus has been so pernicious
precisely because we tend to anthropomorphize complex processes.  But
the author of the book, if accurately portrayed by this review, deserves a
ruler across the fingers for falling into the swamp of the homunculus,
despite plenty of posted warnings.

-Brad





RE: [agi] AI Buzz in Mainstream

2005-02-18 Thread Brad Wyble
Brad,
I read Hawkins' book, and while I don't agree with his ideas about AI, I
don't think he falls prey to any simple homunculus fallacy.
Some of my thoughts on his book are at:
http://www.goertzel.org/dynapsyc/2004/ProbabilisticVisionProcessing.htm
(BTW, my site seems to be down today but it will be back up shortly)

Well consider my grumblings retracted then.  It's certainly gratifying to 
see the enthusiasm generated on his http://onintelligence.org site.

-Brad


Re: [agi] Cell

2005-02-11 Thread Brad Wyble
On Fri, 11 Feb 2005, Eugen Leitl wrote:
Just want to be clear Eugen, when you talk about evolutionary simulations, 
you are talking about simulating the physical world, down to a 
cellular and perhaps even molecular level?

-B


Re: [agi] Cell

2005-02-10 Thread Brad Wyble
On Wed, 9 Feb 2005, Martin Striz wrote:
--- Brad Wyble [EMAIL PROTECTED] wrote:
Hardware advancements are necessary, but I think you guys spend a lot of
time chasing white elephants.  AGIs are not going to magically appear
just because hardware gets fast enough to run them, a myth that is
strongly implied by some of the singularity sites I've read.
Really?  Someone may just artificially evolve them (it happened once 
already on
wetware), and evolution in silico could move 10, nay 20, orders of magnitude
faster.

No, never.  Evolution in silico will never move faster than real matter
interacting.

But yes, it's true: there are stupidly insane amounts of CPU power that
would give us AI instantly (although it would be so alien to us that we'd
have no idea how to communicate with it).  However, nothing we'll get
in the next century will be so vast.  You'd need a computer many
times the size of the earth to generate AI through evolution in a
reasonable time frame.



Martin Striz


Re: [agi] Cell

2005-02-10 Thread Brad Wyble



There are several major stepping stones with hardware speed. One is when you
have enough for a nontrivial AI (price tag can be quite astronomic). Second,
enough in an *affordable* installation. Third, enough crunch to map the
parameter space/design by evolutionary algorithms. Fourth, the previous item
in an affordable (arbitrarily put, 50-100 k$) package.
Arguably, we're approaching the region where a very large, very expensive
installation could, in theory, support a nontrivial AI.

Yes, *in theory*, but you still have to engineer it.  That's the hard 
part.

Maybe I'm overstating my case to make a point, but it's a point that 
dearly needs to be made: the control architecture is everything.

Let's do a very crude thought experiment, and for the moment not consider 
evolving AI, because the hardware requirements for that are a bit silly.

So imagine it like this: you've got your 10^6 CPUs and you want to make
an AI.  You have to devote some percentage of those CPUs to thinking
(i.e., analyzing and representing information) and the remainder to
restricting that thinking to some useful task.  No one would argue, I
hope, that it's useful to blindly analyze all available information.

The part that's directing your resources is the control architecture, and
it requires meticulous engineering and difficult design decisions.
What percentage do you allocate?

5%? 20%?   The more you spend, the more efficiently the remaining CPU 
power is spent.  There's got to be a point at which you achieve a maximum 
efficiency for your blob of silicon.

The brain is thoroughly riddled with such control architecture.  Starting
at the retina and moving back, it's a constant process of throwing out
information and compressing what's left into a more compact form.  That's
really all your brain is doing from the moment a photon hits your eye:
determining whether or not you should ignore that photon.  And it is a
Very Hard problem.

-Brad


Re: [agi] Cell

2005-02-10 Thread Brad Wyble
The brain is thoroughly riddled with such control architecture.  Starting
at the retina and moving back, it's a constant process of throwing out
information and compressing what's left into a more compact form.  That's
really all your brain is doing from the moment a photon hits your eye:
determining whether or not you should ignore that photon.  And it is a
Very Hard problem.
Yes, but it's a solved problem. Biology is rife with useful blueprints to
seed your system with. The substrate is different, though, so some things are
harder and others are easier, so you have to coevolve both.
This is where you need to sink moles of crunch.

I don't think you and I will ever see eye to eye here, because we have 
different conceptions in our heads of how big this parameter space is.

Instead, I'll just say in parting that, like you, I used to think AGI was 
practically a done deal.  I figured we were 20 years out.

Seven years in neuroscience boot camp changed that for good.  I think anyone
who's truly serious about AI should spend some time studying at least one
system of the brain.  And I mean really drill down into the primary
literature; don't just settle for the stuff on the surface, which paints
nice rosy pictures.

Delve down to network anatomy, let your mind be blown by the precision and 
complexity of the connectivity patterns.

Then delve down to cellular anatomy and come to understand how tightly
compact and well engineered our 300 billion CPUs are.  Layers and layers
of feedback regulation interwoven with exquisite perfection, both
within cells and between cells.  What we don't know yet is truly
staggering.

I guarantee this research will permanently expand your mind.
Your idea of what a Hard problem is will ratchet up a few notches, and
you will never again look upon any significant slice of the AGI pie as
something simple enough that it can be done by a GA running on a few kg
of molecular switches.


-Brad


Re: [agi] Cell

2005-02-10 Thread Brad Wyble
I'd like to start off by saying that I have officially made the transition 
into an old crank.  It's a shame it's happened so early in my life, but it
had to happen sometime.  So take my comments in that context.  If I've 
ever had a defined role on this list, it's in trying to keep the pies from 
flying into the sky.


Evolution is limited by mutation rates and generation times.  Mammals 
need from 1 to 15 years before they reach reproductive age.  Generation
That time is not useless or wasted.  Their brains are acquiring 
information, molding themselves.  I don't think you can just skip it.

times are long and evolution is slow.  A computer could eventually 
simulate 10^9 (or 10^20, or
whatever) generations per second, and multiple mutation rates (to find optimal
evolutionary methodologies).  It can already do as many operations per second,
it just needs to be able to do them for billions of agents.

10^9 generations per second?  This rate depends (inversely) on the
complexity of your organism.

And while fitness functions for simple ant AIs are (relatively) simple to
write and evaluate, when you start talking about human-level AI, you need
a very thorough competition, involving much social interaction.  This takes
*time*; whether simulated time or real time, it will add up.

A simple model of interaction between AIs will give you simple AIs.  We
didn't start getting really smart until we could exchange meaningful
ideas.
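To put rough numbers on why I'm skeptical, here's a back-of-envelope sketch
(every parameter is a guess I'm pulling out of the air, so treat it purely as
an illustration of how the costs multiply):

# Back-of-envelope sketch; every parameter below is an assumption.
POP_SIZE      = 1_000_000   # agents per generation
BRAIN_OPS     = 1e12        # ops to run one agent for one simulated second
LIFETIME_SEC  = 1e6         # simulated seconds of social interaction per evaluation
GENERATIONS   = 1e6         # generations of selection
MACHINE_FLOPS = 1e20        # the optimistic machine from this thread

ops_per_generation = POP_SIZE * BRAIN_OPS * LIFETIME_SEC
total_ops = ops_per_generation * GENERATIONS
seconds = total_ops / MACHINE_FLOPS
print(f"total ops: {total_ops:.1e}")
print(f"wall-clock years at 1e20 flops: {seconds / 3.15e7:.1e}")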


But yes, it's true: there are stupidly insane amounts of CPU power that
would give us AI instantly (although it would be so alien to us that we'd
have no idea how to communicate with it).  However, nothing we'll get
in the next century will be so vast.  You'd need a computer many
times the size of the earth to generate AI through evolution in a
reasonable time frame.
That's not a question that I'm equipped to answer, but my educated opinion is
that when we can do 10^20 flops, it'll happen.  Of course, rationally designed
AI could happen under far, far less computing power, if we know how to do it.
I'd be careful throwing around guesses like that.  You're dealing with so 
many layers of unknown.

Before the accusation comes: I'm not saying these problems are unsolvable.
I'm just saying that (barring planetoid computers) sufficient hardware is
a tiny fraction of the problem.  But I'm hearing a disconcerting level of
optimism here that if we just wait long enough, it'll happen on all of our
desktops with off-the-shelf AI building kits.

Let me defuse another criticism of my perspective: I'm not saying we need
to copy the brain.  However, the brain is an excellent lesson in how Hard
this problem is and should certainly be embraced as such.

-Brad


Re: [agi] Cell

2005-02-10 Thread Brad Wyble

I'm confused, all you want are Ants?
Or did you mean AGI in  ant-bodies?
Social insects are a good model, actually. Yes, all I want is a framework
flexible and efficient enough to produce social insect level of intelligence
on hardware of the next decades.
If you can come that far, the rest is relatively trivial, especially if you
have continuous accretion of data from wet and computational neuroscience.

I'm going to have to stop on this note.  You and I live in different 
worlds.

-Brad


Re: [agi] Cell

2005-02-09 Thread Brad Wyble
Hardware advancements are necessary, but I think you guys spend a lot of
time chasing white elephants.  AGIs are not going to magically appear
just because hardware gets fast enough to run them, a myth that is
strongly implied by some of the singularity sites I've read.

The hardware is a moot point.  If a time traveler from the year 2022 were 
to arrive tomorrow and give us self-powered uber CPU fabrication plants, 
we'd be barely a mouse fart closer to AGI.

Spend your time learning how to use what we have now; that's what
evolution did, starting from the primitive processing capabilities of
single-celled organisms.



Re: [agi] What are qualia...

2005-01-26 Thread Brad Wyble
On Sat, 22 Jan 2005, Philip Sutton wrote:
Once complex-brained / complexly motivated creatures start using qualia, they
could play into life patterns so profoundly that even obscure trends in the use
of qualia for aesthetic purposes could actually affect reproductive prospects.
For example, male peacocks have large tails that look nice - clearly qualia are
playing a role in the differentiation process that decides which peacocks will
be more or less successful in breeding.
Cheers, Philip
This is not at all true.
I could design a neural network, or perhaps even a symbolic computer program,
that can evaluate the attractiveness of a peacock tail and tune it to
behave in a similar fashion to that tiny portion of a real peacock's
brain.

Does this crude simulation contain qualia?
-Brad


RE: [agi] What are qualia...

2005-01-26 Thread Brad Wyble
Yes, that's consistent with my line of thinking.
Qualia are intensity of patterns ... in human brains these are mostly neural
patterns ...
and what we *call* qualia are qualia that are patterns closely associated
with the part of the brain that deals with calling ...
-- Ben

I'd like to make a motion that this discussion topic be slated for
disposal in the Yucca Mountain storage facility, preferably in drums that
read "WARNING: UNRESOLVABLE PHILOSOPHY -- Do Not Discuss for at least 30,000
years."

-Brad



Re: [agi] A theorem of change and persistence????

2004-12-20 Thread Brad Wyble
On Sun, 19 Dec 2004, Ben Goertzel wrote:
Hmmm...
Philip, I like your line of thinking, but I'm pretty reluctant to extend 
human logic into the wildly transhuman future...

Ben, this isn't so much about logic as it is about thermodynamics and it's 
going to be a very long time indeed before we can get around that one.

Phil's idea comes down to stating that the entity will need to exert 
energy to counteract local entropy in order to remain a coherent being.

I'd agree and state a trivial extension: the larger the sphere (in physical or
some other space) of entropy that the entity is counteracting by expending
energy, the greater its chance of survival.

Consider humanity: let's assume we'll survive as a species as long as the
earth remains intact.  We're still vulnerable to asteroids (admittedly a
minuscule chance).  If we extend our sphere of control of entropy into
space (by building gizmos and whatsits to protect the earth), we further
increase our chance of deep-time survival.  We've made a bubble of entropy
control around the earth.

I'd also put forth this one: it's more energy-efficient to ensure deep-time
longevity by reproduction than by protection.

There's a book called The Robot's Rebellion which espouses the view that
humans are a deep-time survival mechanism for our DNA.  I haven't read it
yet, but it sounds right on target for this topic.

-Brad


Re: [agi] A theorem of change and persistence????

2004-12-20 Thread Brad Wyble
The Robot's Rebellion : Finding Meaning in the Age of Darwin
by Keith E. Stanovich
University of Chicago Press (May 15, 2004)
ISBN: 0226770893
Cheers, Philip

I'm glad you looked this up and posted it, as there are two books titled 
The Robot's Rebellion, the other being a very controversial attack on 
organized religion.



Re: [agi] 25000 rat brain cells in a dish can beat you at Flight Simulator

2004-10-31 Thread Brad Wyble
This research represents a major series of technical triumphs, but the lay
press versions of the story are somewhat misleading.  There is no real
learning going on, at least in the sense of synaptic modification.  This
is not really a brain system which exhibits information processing, reward
& punishment, self-organization, or any of the other classical ideas of
brain learning.

According to the author's comments at:
http://slashdot.org/comments.pl?sid=126880&threshold=-1&commentsort=0&tid=191&mode=thread&cid=10614281
it's basically a system of adaptation to input, and if the network were
disconnected from the interface for some number of minutes, it would
return to a baseline state (i.e., have forgotten).

Think of the neurons as springs: they adapt in a very characteristic way
to input (as a spring's force increases with stretching), but on a time
scale of minutes.  By engineering the interface carefully, they end up
with a system that depends on this pattern of adaptation, and therefore
the system appears to learn, as the airplane is initially unstable but
becomes stable.  It is not, however, performing any kind of long-term
synaptic modification or cellular reorganization.

Very cool techniques, though.  This is the same group that built a
wheeled robot that learned to navigate in a similar way.  Their
expertise in keeping a brain culture alive for a long time and then using
it is stunning, to say the least.

http://www.neuro.gatech.edu/groups/potter/index.html
-Brad


Re: [agi] Model simplification and the kitchen sink

2004-10-25 Thread Brad Wyble
Intelligence is not necessary to create intelligence. Case in point: us.
The evolutionary process is a simple algorithm.

In the very text that you quoted, I didn't say intelligence was necessary, 
I said a resource pool far larger than that of the entity being 
designed/deconstructed is necessary.

And evolution certainly had that.


RE: [agi] Model simplification and the kitchen sink

2004-10-25 Thread Brad Wyble
On Sun, 24 Oct 2004, Ben Goertzel wrote:
Well, I don't think that building an AI is in principle too hard for a
single mind to handle  Understanding the brain may well be, because the
brain has so damn many parts with their individual complex dynamics -- an AI
doesn't need to be as complicated as the brain, though.  (As a rough
analogy, look how much harder it is to understand a bird wing than an
airplane wing...).
Airplane wings were easy for one person to understand back when they were 
simple things.  But now that airplane wings have been adapted to generate 
more lift under different conditions, and also possess other vital 
functions (carrying fuel, air intakes, hydraulics), they are increasing in 
complexity dramatically.

So it shall be with AGI: as these systems scale up in complexity to handle
real-world challenges, they will grow beyond the size that can be
comprehended by a single person.

I predict that the point at which AGI design exceeds a human's
understanding will come long before AGIs are capable of generative
self-modification.  Hence, we will need teams of people before we get to that
point.

-Brad

We're trying to build an AI, not via one person's efforts only, but via the
combined efforts of a small team.  I'm betting this is enough.  I don't
understand all of the Novamente codebase in detail -- no one person does --
but our small team, collectively, does.
-- Ben G
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Behalf Of J.Andrew Rogers
Sent: Sunday, October 24, 2004 11:19 PM
To: [EMAIL PROTECTED]
Subject: Re: [agi] Model simplification and the kitchen sink

On Oct 24, 2004, at 2:14 PM, Brad Wyble wrote:
Another point to this discussion is that the problems of AI and
cognitive science are unsolvable by a single person.  1 brain can't
understand itself, but perhaps 10,000 brains can understand or design
1 brain.

This does not follow.  You can build arbitrarily complex machines with
a very tiny finite control function and plenty of tape.  The complexity
of AI as an algorithm and design space is not in the same class as the
complexity of an instance of human-level AI, even though the latter is
just the former given some state space to play with.
It is highly improbable that the core control function of intelligence
cannot be understood by one person, or at least I see no evidence in
theory to support this conjecture.  Intelligence appears to be a pretty
simple thing, even in theory; most of the nominal complexity can be
attributed to people who don't really understand it (IMNSHO) or who
require the addition of some complexity to solve a practical design
problem.  What you are saying is kind of like saying that no one can
comprehend pi because no one can recite all the digits.
j. andrew rogers


Re: [agi] Model simplification and the kitchen sink

2004-10-25 Thread Brad Wyble
Yes, but if any kid can buy a system with Avogadro number of switches, and
large corporations 10^6 of that, no reason why we can't breed AI starting
from an educated guess (a spiking network of automata controlling virtual
co-evolving critters). That future is some 30-50 years remote.

I think you're a few orders of magnitude off, but I made basically the 
same point here:

http://www.mail-archive.com/[EMAIL PROTECTED]/msg01509.html

But we're getting off track.  My point at the start of this was that simple
theories allow efficient communication, and therefore are essential for
rapid progress in an effort like this, in which people are trying to
design or understand something that is holistic -- i.e., something that
cannot be decomposed into mathematically discrete chunks the way
that the physical sciences often can.

AI and cognitive science both fall squarely into that category.
-Brad


Re: [agi] Model simplification and the kitchen sink

2004-10-25 Thread Brad Wyble
Engineering massively emergent systems is not something we're familiar with.
But it doesn't mean it can't be done. You know the fitness function, let the
system design itself.
I'm not saying it can't be done.  I'm saying it can't be done by one 
person.  I'm saying the discipline requires the interaction of many 
scientists.

Molecular neuroscience allows you to map molecular events in their impact on
structure and function. We're at the beginning here, but there are a lot of
simple parameters there (as well as terribly complex ones) for tweaking.
While this hasn't been done, inflating the neocortex should result in a
smarter critter. And that's a trivial parameter.
I know you are all probably getting sick of me talking about how all of 
this is complicated, but it really is.  Hearing that inflating the cortex 
is a trivial parameter grates on me terribly.

Inflating the cortex would require, apart from a huge increase in 
metabolism and female pelvic diameter, lots of control structures so that 
this new cortex does something useful.  Any new functional modules in the 
brain have to be carefully balanced against the structural and functional 
characteristics of existing ones or there are a great variety of mental 
illnesses that can result.  We are already skirting the envelope of 
cognitive dysfunction, as evidenced by the number of people with 
psychological problems.

You might, for example, have a system that is now incapable of 
resonating at frequencies that the brain expects to work at, and now 
you've broken consciousness.  Or perhaps this new super genius is 
constantly wracked by crippling grand-mal seizures, because you've failed 
to apply the correct amount of neuromodulatory control to your new 
cortical real estate.

Also, you've just increased the required amount of blood flow by perhaps
double, resulting in a massive increase in strokes due to larger
and more numerous arteries.

These are very real problems that evolution is constantly grappling with. 
You might hear about the rare cases of encephalitic babies with very
oddly shaped brains who are fully functional members of society, and think
that human brains are highly robust.  What you don't hear about is the
much larger number of stillborn children with brains that didn't work 
properly.

Trivial parameter indeed.
-Brad


RE: [agi] Model simplification and the kitchen sink

2004-10-25 Thread Brad Wyble
On Mon, 25 Oct 2004, Ben Goertzel wrote:
Brad wrote:
I know you are all probably getting sick of me talking about how all of
this is complicated, but it really is.  Hearing that inflating the cortex
is a trivial parameter grates on me terribly.
Brad, I agree with you re human brains, but of course there is a big
difference between engineered AI systems and human brains in this regard.

Agreed, that whole email was concerning his comments about augmenting 
human brains, and was a bit off track for this list.  My apologies.

We can engineer our AI systems specifically so that adding more processor
and memory WILL simply increase their intelligence, without a lot of
complications and hassles.  Novamente is designed this way.
You are correct about physical/architectural hassles, but I'm not so sure
about mental hassles.  I can imagine functional complications that result
from having too much unstructured room to play in.  As a simple analogy, I
recall that Principal Component Analysis can fail to give you useful
results if you allow it to use too many dimensions.
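As a quick illustration of the PCA point (a toy sketch with synthetic data,
nothing more): give the data only a few real underlying dimensions, and the
components beyond those capture essentially nothing but noise.

# Toy PCA illustration: 3 real latent dimensions buried in 50 observed ones;
# the trailing components carry almost nothing but noise.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_dims, n_latent = 500, 50, 3

latent = rng.normal(size=(n_samples, n_latent))        # the "real" structure
mixing = rng.normal(size=(n_latent, n_dims))
data = latent @ mixing + 0.1 * rng.normal(size=(n_samples, n_dims))

data -= data.mean(axis=0)
_, singular_values, _ = np.linalg.svd(data, full_matrices=False)
variance = singular_values ** 2 / (n_samples - 1)
explained = variance / variance.sum()

print("variance in the first 3 components:", round(float(explained[:3].sum()), 3))
print("variance in the remaining 47:      ", round(float(explained[3:].sum()), 3))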



Re: [agi] Model simplification and the kitchen sink

2004-10-25 Thread Brad Wyble
The Godel statement represents itself, completely, via diagonalization.

Unfortunately I'm not equipped to discuss Godel in depth.
All I can do is argue by simple analogy; that is, it takes N > 1 neurons in
the brain to mentally represent the idea of a neuron.  Therefore the brain
cannot represent itself.

Now it's true that the representation of the system can be reduced in
complexity by good theories, but given our severe limitations in working
memory and abstract spatial conceptualization, both of which are necessary
to understand a complex system, we are orders of magnitude from having the
capacity to understand even a reduced version of ourselves.

In other words, any representation of the brain that is so reduced as to 
be singularly understandable by a single human will be so abstract as to 
have surrendered much of its explanatory power.

-Brad


Re: [agi] Unlimited intelligence.

2004-10-24 Thread Brad Wyble
On Thu, 21 Oct 2004, deering wrote:
True intelligence must be aware of the widest possible context and derive super-goals based on direct observation of that context, and then generate subgoals for subcontexts.  Anything with preprogrammed goals is limited intelligence.

You have pre-programmed goals.
And you are certainly not aware of the widest possible context; you'd need
a brain several orders of magnitude larger than the universe you're trying
to be aware of in order to possess that remarkable ability.

-Brad


RE: [agi] Ben vs. the AI academics...

2004-10-24 Thread Brad Wyble
On Sun, 24 Oct 2004, Ben Goertzel wrote:
One idea proposed by Minsky at that conference is something I disagree with
pretty radically.  He says that until we understand human-level
intelligence, we should make our theories of mind as complex as possible,
rather than simplifying them -- for fear of leaving something out!  This
reminds me of some of the mistakes we made at Webmind Inc.  I believe our
approach to AI there was fundamentally sound, yet the theory underlying it
(not the philosophy of mind, but the intermediate level
computational-cog-sci theory) was too complex which led to a software system
that was too large and complex and hard to maintain and tune.  Contra Minsky
and Webmind, in Novamente I've sought to create the simplest possible design
that accounts for all the diverse phenomena of mind on an emergent level.
Minsky is really trying to jam every aspect of the mind into his design on
the explicit level.

Can you provide a quote from Minsky about this?  That's certainly an 
interesting position to take.  The entire field of cognitive psychology is 
intent on reducing the complexity of its own function so that it can be 
understood by itself.

On the other hand, Minsky's point is probably more one of evolutionary
progress across the entire field: we should try many avenues and select
those that work best, rather than getting locked into narrow
visions of how the brain works, as has happened repeatedly throughout the
history of psychology.


Re: Deb, his stuff is clearly an amazing accomplishment, although I think 
that his success is more of a technical than a deeply theoretical flavor.


On a more general note, I wouldn't expect to impress the AI community with 
just your theories and ideas.  There are many AI frameworks out there, and 
it takes too much effort to understand new ones that come along until they 
do something amazing.

So you'll need a truly impressive demo to make a splash.   Until you 
do that, every AI conference you go to will be like this one.  Deb's 
learned this lesson and learned it well :)

-Brad


RE: [agi] Ben vs. the AI academics...

2004-10-24 Thread Brad Wyble
Hi Brad,
really excited about Novamente as an AGI system, we'll need splashy demos.
They will come in time, don't worry ;-)   We have specifically chosen to
Looking forward to it as ever :)  I can understand your frustration with 
this state of affairs.  Getting people to buy into your theoretical 
framework requires a major time investment on their part.

This is why my own work stays within the bounds of conventional
experimental and psychological research.  I speak the same language as
everyone else, and so it's easy to cross-pollinate ideas.  Of course, this
is also why SOAR and similar architectures have such appeal despite their
limitations.  Because the SOAR community is speaking the same language to
one another, it's possible (in theory) for the whole of them to make
faster progress than if they each had their own pet architecture.

This synergy is very real, but may be outweighed by SOAR's 
limitations.


And, I hope my comments didn't seem to be dissing Deb Roy's work.  It's
really good stuff, and was among the more interesting stuff at this
conference, for sure.
Not at all, I think we're in general agreement about the value of his 
work.


Now, I understand well that the human brain is a mess with a lot of
complexity, a lot of different parts doing diverse things.  However, what I
think Minsky's architecture does is to explicitly embed, in his AI design, a
diversity of phenomena that are better thought of as being emergent.  My
argument with him then comes down to a series of detailed arguments as to
whether this or that particular cognitive phenomenon
a) is explicitly encoded or emergent in human cognitive neuroscience
b) is better explicitly encoded, or coaxed to emerge, from an AI system
In each case, it's a judgment call, and some cases are better understood
based on current AI or neuroscience knowledge than others.  But I think
Minsky has a consistent, very strong bias toward explicit encoding.  This is
the same kind of bias underlying Cyc and a lot of GOFAI.

Whether something is explicit or emergent depends only on your 
perspective of what counts as explicit.  I'll assume you mean anatomically 
explicit in some way (where anatomical refers to features of both
neurophysiology and box/arrow design).

With this assumption, I think (b) follows from (a).  Evolution has
always looked for the efficient solution, so if evolution has explicitly 
encoded these behaviors, it's likely the best way to do it, at least as 
far as we'll be able to determine with our stupid human brains :)

There's certainly a huge preponderance of evidence that our brains have 
leaned towards specific anatomically explicit solutions to 
problems in the domains that we can examine easily (near the motor and 
sensory areas).

Of course, in many cases these anatomically explicit solutions are
emergent from developmental processes, but I still think they should be
considered explicit.

-Brad


RE: [agi] Ben vs. the AI academics...

2004-10-24 Thread Brad Wyble
So much for getting work done today :)
I noticed at this conference that different researchers were using basic
words like knowledge and representation and learning and evolution
in very different ways -- which makes communication tricky!
Don't get me started on Working Memory.
In an AI context, it means whether something exists explicitly in the source
code, rather than coming about dynamically as an indirect result of the
sourcecode, in the bit-patterns in RAM created by the executable running...
A fair definition.
Agreed.  And I think that sensorimotor stuff is more likely to be explicit
rather than emergent in the brain...  And that, in coding an AI system,
it's hopeless to try to make too much of cognition explicit rather than
emergent -- but the same statement probably doesn't hold for perception &
action...

If that were the case, would you not expect to see more variance in high 
level behaviors?  Instead we tend to see the same types of behavior 
expressed, the only difference between people being the relative amount of 
expression of these tendencies.

But I guess that's an arguable point, whether these observed tendencies 
among a population of people are actually there, or are only a product of 
the theories used to classify them.


-Brad


Re: [agi] Model simplification and the kitchen sink

2004-10-24 Thread Brad Wyble
Another point to this discussion is that the problems of AI and cognitive 
science are unsolvable by a single person.  1 brain can't understand 
itself, but perhaps 10,000 brains can understand or design 1 brain.

Therefore, these sciences depend on the interaction of communities of 
scientists in a way that the physical sciences do not.

And for this interaction to succeed, you need simplicity of theory above 
all else, because the individual agents in the discipline need to be able 
to communicate efficiently with one another, and that's the biggest 
bottleneck in the scientific process.

So unless one lone researcher can solve the problem in isolation -- and it
is a mathematical fact that this is impossible -- their long years of toil
will be in vain unless the ideas can be communicated.

-Brad


Re: [agi] AGI research consortium

2004-06-28 Thread Brad Wyble
On Mon, 28 Jun 2004, J. Andrew Rogers wrote:
There is most certainly not an infinite range of solutions, and there is
an extremely narrow range of economically viable solutions.
There is certainly an infinite range of solutions in AI,
even for a specific problem, let alone for a space of many problems. 
What's an optimal economic solution for application X is a waste of 
resources for application Y.

I think the burden of proof is on you to demonstrate how this range is
extremely narrow.  Even within this mailing list there are vastly
divergent opinions on the basic fundamentals of the architecture that might
do the job.  We can't even settle the simple question of whether it's better
to pursue biological analogues or to start from scratch.


I would state that your entire characterization of technological history
is incorrect.  There is very little diversity in engineering in
practice, and I don't expect AGI to be any different.  Most of the
variation between  implementations of a given core technology is window
dressing or attributable to variations in specification.
We're perched on the edge of a semantic debate concerning what's 
fundamentally different.

But I'll content myself with the concrete example of PC CPUs.
They do the same job, but they work differently inside; different people
have designed them; and, most importantly for the purposes of this
discussion, different companies get money when I purchase them.

And the only reason CPUs function similarly is that they need to run
the same software at an instruction level.  AGI will be under no such 
constraint.


You can parameterize an implementation for variation, but you can't
parameterize the underlying mathematics and physics that allows a
technology to exist in the first place.  AGI is a math problem with huge
Except that this isn't about physics; it's about information processing.
And given a certain amount of information-processing resources (as
limited by physics, yes), there are a great many ways to use them to perform a task.


economic consequences.  Economics will dictate the implementation
parameters of the mathematics, and will strongly bound the architecture
of successful implementations.  There won't be diversity, there will be
slightly different flavors of the same solution.
Only in the very first iteration.  The solution space we explore will 
bifurcate rapidly.

Any design competition will be decisive and short-lived.  There will be
no second place.  You are mis-modeling how AGI technology will interact
with the marketplace by treating it as a conventional production-bound
widget when this will be a case of something very different.  There is
only one barrier to entry into the AGI market, but that barrier is
precisely such that the first-mover will have a decisive advantage.
AI will be bound by the same economic models as everything else, but you
have to use valid data and parameters or you'll end up with completely
broken expectations.  AI technology is almost ideally pathological from
a market competition perspective.
It's true, I'm fitting AGI into the conventional mold of product 
development and economics.

I do this because I think AGI development is going to be an iterative 
process with solutions that are incremental improvements, and with several 
companies close on each other's heels.  There won't be a clear winner, 
because the fitness landscape of AGI is poorly defined.


I'd like to hear how you define AI as pathological from a market
competition perspective.  Is it because AIs will improve themselves, and
therefore the early winners will accelerate their own development process?



Re: [agi] AGI research consortium

2004-06-28 Thread Brad Wyble
Great stuff Andrew.

I should have specified extremely narrow for implementations in our
universe as we generally understand it.
This is an old discussion, so I'm not going to rehash it.  The enemy of
implementation is *tractability*, not "will this work in theory if I
throw astronomical quantities of computing resources at the problem."
This is a crucial point that still seems to escape many people.  It is
fairly evident from what we know of the mathematics that this is the
case.  You have a wide diversity of choices in theoretical designs, but
only because while dabbling in theory you can ignore engineering
tractability in practice.
My perspective comes from reverse-engineering the brain.  In doing this, I am
exposed to the many compromises and innovations evolution was forced to make.

But these are very specific solutions to very specific problems, such as
how to recognize the minute differences between thousands of faces, learn
to understand speech, or remember events that occur over 80 years.
And how we cognitively deal with complex
decision-making tasks is a problem we're just beginning to scratch at.

The more one studies the specifics of the solutions the brain uses, the
more one realizes the incredible variety of strategies that could have
been used, yes economically, and yes in our universe.

Evolution was forced into certain compromises and choices not because they 
were the only solutions to the problem but because evolution is heavily bound by 
its own historical precedent.


If it helps you, I'll restate it this way:  Almost every major AI effort
is a theoretically correct design.  Extremely few, if any, are
reasonably tractable AI designs in our universe.  The problem has never
been about theoretical correctness, at least not if one has a reasonable
understanding of the underlying problem.
Yes, I understood your point.  I do have a reasonable understanding of the 
underlying problem.  The difference is simply one of perspective.  I see 
limitless solutions, you see a narrow track.


Finding someone with a contrary opinion on any topic is easy.  That they
have an opinion at all does not make their opinion meaningful or valid
ipso facto.
I think there is far more agreement on the basics than you seem to
think.  Not being able to discern between the idle chatter of people who
don't grok the underlying subject matter and people with a serious clue
will give you that impression though.  There are a lot of airplane
enthusiasts who do not understand the basic physical design principles
of powered flight.  For any topic that attracts fanboys and cranks, it
often takes a great deal of genuine expertise to separate the cranks and
fanboys from people with genuine expertise.
I've been involved with the hard AI, the weak AI and the 
neurophys approaches to these problems at an academic level.  There is 
very little agreement there.

All CPUs are technologically identical for all intents and purposes, and
commodities.  The differences revolve almost entirely around the
manufacturing costs and efficiencies.  This is a terrible example
actually, as it is an industry where competitors differentiate
themselves by manufacturing economics.
So, if you had a company that made its money sorting sets of data in a
competitive market, you are saying that the selection of sorting
algorithm is irrelevant to your ability to survive in the marketplace?
I'm not a big believer in self-improving AI, at least not in the sense
that it often seems many others use that term.  The core design will
have to be pretty close from the beginning for an implementation to even
be plausible in practice.
The primary difference is that AI will be a business where the economics
is bound almost entirely by knowledge rather than more traditional
manufacturing cost structures.  Even nominally pure information
products, such as software, are bound by the manufacturing cost of the
code, though the overall process starts to show symptoms of
knowledge-bound economics.

So by knowledge you mean essentially empirical facts, like 1 + 2 = 3.
The hard part in AGI is not finding that knowledge, but developing an
agent that can distill that knowledge.  That's what we do, as people.


The problem with knowledge, is that it has to be learned.  One can't
simply buy knowledge, and the whole point of intelligence is to
That's certainly not true.  I can buy a chemistry textbook full of 
essentially mathematical knowledge of how atoms interact.  That knowledge 
doesn't have to be learned either, I can use it straight from the book.

I think you were thinking of something else when you discuss knowledge, 
and if so, you'll need to clarify it.


manufacture knowledge from data.  Economics and economic advantage is
all about knowledge -- that is the real value of intelligence after all.
Any machine intelligence will quickly be in a position of clear economic
advantage in the marketplace which will allow it to maximize its 

RE: [agi] AGI's and emotions

2004-02-25 Thread Brad Wyble
On Wed, 25 Feb 2004, Ben Goertzel wrote:

 
 Emotions ARE thoughts but they differ from most thoughts in the extent to
 which they involve the primordial brain AND the non-neural physiology of
 the body as well.  This non-brain-centricity means that emotions are more
 out of 'our' control than most thoughts, where 'our' refers to the
 modeling center of the brain that we associate with the feeling of 'free
 will.'
 
 -- Ben G
 

I would agree with this.   Emotions seem to arise from parts of the brain 
that your central executive has minimal control over.  They can be 
suppressed and manipulated with effort but they are distinct 
from the character of thoughts originating in other parts of the brain.  

It's probably a mistake to characterize emotions as a unitary phenomenon 
though.  Different emotions have different functions, and likely originate 
from different structures. 



RE: [agi] AGI's and emotions

2004-02-25 Thread Brad Wyble

 I guess we call emotions 'feelings' because we feel them - ie. we can 
 feel the effect they trigger in our whole body, detected via our internal 
 monitoring of physical body condition.
 
 Given this, unless AGIs are also programmed for thoughts or goal 
 satisfactions to trigger 'physical' and/or other forms of systemic 
 reaction, I suppose their emotions will have a lot less 'feeling' depth to 
 them than humans and other biological species experience.
 

That's not the entirety of the difference between emotions and other types
of thoughts.  A reasoning entity can detect that its thoughts are under
the influence of an emotion.  For example, consider being in a road-rage
situation, which I'm sure we can all relate to.

You know full well that 
your reaction of anger towards someone who's unwittingly committed a 
minor offense to you is wildly irrational and yet you can't help but feel 
a flash of extreme animosity towards someone else (or maybe your steering 
wheel :)).  The fact that you know it's an emotional 
reaction doesn't prevent you from feeling its effects on your thoughts, it 
just lets you handle it without acting on it.

So any entity capable of remembering its thought processes would be able
to detect the influence of an emotion (at least the human variety) on
the current flow of its thoughts, even without body-state markers.
-Brad



Re: [agi] Within-cell computation in biological neural systems??

2004-02-23 Thread Brad Wyble
 
 Nonlinear dendritic integration can be accurately captured by the
 compartmental model which divides dendrites into small sections
 with ion channels and other internal reaction mechanisms. This
 is the most accurate level of modeling. It may be possible to
 simplify this model with machine learning techniques and without
 significant loss in accuracy.


I am well aware of compartmental modelling and have done it myself.  But
this type of model only accounts for the physical size/character of a
dendrite, ignoring, in principle, a whole raft of complex molecular
dynamics that might be occurring inside it.  Such molecular dynamics will
surely contribute to the nonlinear aspects of a dendrite.
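For anyone curious what I mean by a compartmental model, here is the
bare-bones version (Python; two passive compartments with made-up constants,
no ion channels and no intracellular chemistry -- which is exactly the kind
of thing such simplified models leave out):

# Bare-bones two-compartment passive model (soma plus one dendritic
# compartment).  All constants are illustrative; real compartmental models
# add many ion channel types per compartment and still ignore most of the
# intracellular chemistry.
C_M = 1.0          # membrane capacitance per compartment (arbitrary units)
G_LEAK = 0.1       # leak conductance
G_COUPLE = 0.5     # axial coupling between the two compartments
E_REST = -65.0     # resting potential, mV
DT = 0.01          # time step, ms

v_soma, v_dend = E_REST, E_REST
trace = []
for step in range(int(50 / DT)):                  # 50 ms of simulated time
    t = step * DT
    i_inject = 1.0 if 10.0 <= t <= 30.0 else 0.0  # current into the dendrite
    dv_soma = (-G_LEAK * (v_soma - E_REST) + G_COUPLE * (v_dend - v_soma)) / C_M
    dv_dend = (-G_LEAK * (v_dend - E_REST) + G_COUPLE * (v_soma - v_dend) + i_inject) / C_M
    v_soma += DT * dv_soma
    v_dend += DT * dv_dend
    trace.append(v_soma)

print(f"peak somatic depolarization: {max(trace) - E_REST:.2f} mV above rest")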



 
 Just as an example, a new type of neuron has recently been discovered that
 can hold a steady state of firing in isolation: apply current, and the rate
 increases and remains stable at a new threshold.  It's dynamically
 settable, which blows away all standard Integrate & Fire models.
 
 I don't know the exact mechanisms that give rise to that type
 of neuron, but the compartmental model should be able to cover
 this. What is needed is a large-scale database of neuronal
 characteristics (automation).

Yes, one can create a model of a neuron that does this; it's already been
done.  It's far from a standard model, though.

My point, however, was that there is an entire world of complexity within
the cell that will be relevant to its role in a neural network (as opposed
to simply metabolic) that we are just beginning to understand.




Re: [agi] Within-cell computation in biological neural systems??

2004-02-10 Thread Brad Wyble

The jury is very much out, Phillip.  Eliezer goes too far in saying it's a
myth perpetuated by computer scientists.  They use the simplest 
representations they know to exist in their models for purposes of 
parsimony.  It's hard to fault them for being rigorous in this respect.  


But neurons are surely far more complex than this.  The majority of 
computation may well occur within the nonlinear bursting dynamics of 
dendrites.

Just as an example, a new type of neuron has recently been discovered that
can hold a steady state of firing in isolation: apply current, and the rate
increases and remains stable at a new threshold.  It's dynamically
settable, which blows away all standard Integrate & Fire models.
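To illustrate why that breaks the usual abstraction, here's a toy rate-model
sketch (Python, invented constants; it's only meant to show the qualitative
difference, not to model the actual cell type): a standard leaky unit relaxes
back to baseline when the current is removed, while a "settable" integrator
unit simply stays wherever the last input pushed it.

# Toy contrast between a leaky rate unit and a "settable" integrator unit.
# Constants are invented for illustration only.
import numpy as np

DT, TAU, GAIN = 1.0, 20.0, 1.0
pulse = np.concatenate([np.zeros(50), np.ones(50), np.zeros(200)])  # brief current step

def leaky(inputs):
    r, out = 0.0, []
    for i in inputs:
        r += DT * (-r / TAU + GAIN * i)   # decays back to zero without input
        out.append(r)
    return np.array(out)

def settable(inputs):
    r, out = 0.0, []
    for i in inputs:
        r += DT * GAIN * i                # pure integrator: holds its new rate
        out.append(r)
    return np.array(out)

print(f"leaky unit, long after the pulse:    {leaky(pulse)[-1]:.3f}")
print(f"settable unit, long after the pulse: {settable(pulse)[-1]:.3f}")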

We're just scratching the surface of an enormous iceberg.  If you're 
trying to build some useful index of brain power based on number of 
neurons (or even synapses), give up and wait 30 years at least.  

-Brad 





RE: [agi] Real world effects on society after development of AGI????

2004-01-15 Thread Brad Wyble
Ben Wrote:

  1) AI is a tool and we're the user, or
  2) AI is our successor and we retire, or
  3) The Friendliness scenario, if it's really feasible.
 
 This collapse of a huge spectrum of possibilities into three
 human-society-based categories isn't all that convincing to me...
 

Yes, a list like this should always include  

4) Something else



Re: [agi] Real world effects on society after development of AGI????

2004-01-14 Thread Brad Wyble
On Tue, 13 Jan 2004, deering wrote:

 Brad, I completely agree with you that the computer/human crossover
 point is meaningless and all the marbles are in the software engineering
 not the hardware capability.  I didn't emphasize this point in my
 argument because I considered it a side issue and I was trying to keep
 the email from being any longer than necessary.  But even when someone
 figures out how to write the software of the mind, you still need the
 machine to run it on.  I believe in the creative ability of the whole
 AGI research ecosystem to be able to deliver the software when the
 hardware is available.  I believe that the human mind is capable of
 solving this design/engineering problem, and will solve it at the
 earliest opportunity presented by hardware availability.


You seem to contradict yourself, saying first that the hardware crossover 
point is meaningless, then implying that we'll solve the design problem at 
the first opportunity.  

I won't reiterate my stance again, you know what it is :)


 
 Regarding nanotechnology development, I think we are approaching
 nano-assembly capability much faster than you seem to be aware.  Check
 out the nanotech news
 
 http://nanotech-now.com/

Being able to make these bits 'n' bobs in the lab is a different problem
from having autonomous little nanorobots doing it.  You then have problems
of power distribution, intelligent coordination, and heat dissipation.  It's
quite a ways off, in my opinion.

Again, I'm not sure whether this or AGI will come first; they are both Hard
with a capital H.

 
 Regarding science, Yes, turtles all the way down.  Probably.  But atoms
 are so handy.  Everything of any usefulness is made of atoms.  To go
 below atoms to quarks and start manipulating them and making stuff other
 than the 92 currently stable atoms has such severe theoretical obstacles
 that I can't imagine solving them all.  Granted, I may be lacking
 imagination, or maybe I just know too much about quarks to ignore all
 the practical problems.  Quarks are not particles.  You can't just pull
 them apart and start sticking them together any way you want.  Quarks
 are quantified characteristics of the particles they make up.  We have
 an existence proof that you can make neat stuff out of atoms.  Atoms are
 stable.  Quarks are more than unstable, they don't even have a separate
 existence.  I realize that my whole argument has one great big gaping
 hole, We don't know what we don't know.  Okay, but what I do know
 about quarks leads me to believe that we are not going to have quark
 technology.  On a more general vein, we have known for some time that
 areas of scientific research are shutting down.  Mechanics is finished.  
 Optics is finished.  Chemistry is finished.  Geology is basically
 finished.  We can't predict earthquakes but that's not because we don't
 know what is going on.  Meteorology we understand but can't calculate, not
 science's fault.  Oceanography, ecology, biology, all that is left to
 figure out is the molecular biology and they are done.  Physics goes on,
 and on, and on, but to no practical effect beyond QED and that is all
 about electrons and photons and how they interact with atoms, well
 roughly.

Perhaps some of them have evolved into different kinds of science that you 
no longer recognize as such.  That's not the same thing as shutting down.


 
 
 I don't expect this clarification to change your mind.  I think we are
 going to have to agree to disagree and wait and see.
 

Yes indeedy. :)


 
 See you after the Singularity.
 

Ah ah, but the Singularity says you can't make that prediction :)

-Brad



Re: [agi] Real world effects on society after development of AGI????

2004-01-13 Thread Brad Wyble
On Mon, 12 Jan 2004, deering wrote:
 
 Brad, you are correct.  The definition of the Singularity cited by
 Verner Vinge is the creation of greater-than-human intelligence.  And
 his quite logical contention is that if this entity is more intelligent
 than us, we can't possibly predict what it will think or do, hence the
 incomprehensibility.  Many people subscribe to this statement as if it
 were scripture, not me.  A few years later, Ray Kurzweil noticed that
 the advancement of knowledge in molecular biotechnology and
 miniaturization of electrical and mechanical systems were graphing
 tracks that closely matched the graphs for the advancement of
 computational capacity.  It appears from the graph data that
 computational capacity of desktop computers will surpass human brains at

I keep making this point as often as it has to be made: surpassing the 
computational capacity of the brain is not even close to sufficient to 
develop AGI.  The hard part, the real limitation, is the engineering of 
the type that Ben's doing.  

Software engineering will be our biggest hurdle for decades after we cross 
the brain CPU barrier.


 about the same time as miniaturization reaches positional molecular
 assembly and knowledge of molecular biotechnology reach completeness.  
 If you think about it, the fact that these three areas of technological
 advancement are tracking together toward specific goals is not
 surprising.  They are all very closely tied to each other.  The advance
 of miniaturization of electrical and mechanical systems are producing
 the tools for the investigation of living organisms at the molecular
 level.  It is also producing the hardware for the advancement of
 computational capacity.  The more powerful computers are providing the
 control systems for the automation of molecular biotechnology speeding
 up the assimilation of knowledge.  Computers and nanotechnology are
 progressing lockstep; scientists need more powerful computers to advance
 nanotechnology, computers need more miniaturized circuits to become more
 powerful.  And molecular biotechnology is dependent on both the
 advancement of computational capacity and the advancement of nanotech
 tools.  So is born the concept of the three technology Singularity.


A good point; however, nanomanufacturing has some special challenges, as 
does mind design.  One is likely much harder than the other, but I just don't 
know which.

 
 1.  Intelligence will top out at levels of great efficiency, accuracy,
 and speed; and the best types of thought processes will be similar to
 ways of thinking used by our best geniuses, a mode of thought that is
 not beyond our comprehension, just merely beyond our perfect execution.


It will be beyond your comprehension.  I don't know about you, but I 
cannot comprehend the way hardcore theoretical mathematicians think about 
equations.  I feel like a cat staring at its master.

In the same way, you spend most of your day thinking about topics that are 
utterly incomprehensible to people of 100 years ago.

So it will be with your children and theirs.

 
 2.  Physical technology will reach a limit at the complete control of
 the positioning of atoms in fabrication, maintenance, and functioning,
 including molecular scaled robots and machinery.

You sound like physicists 100 years ago who thought that the proton, 
neutron and electron were the end of the road.  Why is it so hard to 
imagine that we can put quantum particles to use?  Or that there are 
layer upon layer of sub-quantum particles?

I think it's turtles all the way down, and so far history has proved me 
right.

 
 3.  Science will reach a limit with the completion of cataloging and
 understanding of all molecular processes in living organisms.
 

This is largely irrelevant.   Science is so much more than cataloging 
living organisms.  It is completely open-ended.  


 When I say limits, I don't mean that we will stop innovating, merely
 that we will have all of the basic knowledge and capability we are ever
 going to have, and what is left is art.  Sure we will still be inventing
 better mouse traps, but not whole new areas of science.
 

Sorry.  I can't see how you've demonstrated this.

 Given these limits, the Singularity becomes very comprehensible.  We
 know what basic capabilities we will have.  We can plan how we want to
 use them.  We get to decide what principles our society will be based
 on, and how we will implement them.
 

I couldn't disagree more.

 Okay, you can start laughing now.


Just shaking my head in wonder is all :)




RE: [agi] Dr. Turing, I presume?

2004-01-11 Thread Brad Wyble
 
 I see your point, but I'm not so sure you're correct.

If you're devoting resources specifically to getting some attention, you 
may indeed speed up the process.  I wish you luck.  

However even if you do get such attention, it will still take quite a 
while for the repercussions to percolate through society.  Mike seemed to 
be implying a technological rapture with very rapid changes at 
all levels of society.  

I think that people at all levels will be slow to react, while a small 
percentage of early adopters grab hold and start creating a market.  
This belief is based on historical precedent.


-Brad


 
 I think about it this way:
 
 * Sometimes bullshit get huge amounts of media attention and money.
 
 * Sometimes really *demonstrably* valuable things get pathetically little
 media attention and money
 
 * Sometimes really demonstrably valuable things get huge amounts of media
 attention and money
 
 Assuming Novababy really eventuates like I hope/believe it will, I intend to
 ensure that Novamente AGI falls into the latter category.  I don't think its
 so impossible to achieve this, it just requires approaching the task of
 fundraising and publicity-seeking with some energy and inventiveness.
 
 I think I have a good idea of what achieving this requires.  For instance, I
 have a good friend who lives here in DC who is a very successful PR agent
 and would be quite helpful on the media aspect of this (one of his jobs was
 doing PR for the Republic of Sealand, which was totally obscure before he
 started working with them, and wound up on the cover of Wired and in every
 major paper... and is a heck of a lot less generally interesting than
 Novamente...).  And I know a few people in the US gov't research funding
 establishment, who personally like AGI, but who can't authorize AGI funding
 due to internal-politics constraints.  It wouldn't take such a big nudge for
 the research-funding establishment to give them the go-ahead to follow their
 intuitions and fund AGI.
 
 I think that raising funds and serious positive publicity for a
 scientifically successful baby AGI project is a *hard* problem, but
 nowhere near as hard as making the baby AI in the first place.
 
 Confident as I am in Novamente, it's the making the baby AI work problem
 that worries me more, not the how to publicize and monetize AGI once the
 baby AI works problem!!
 
 -- Ben G
 
 
 
 



Re: [agi] Dr. Turing, I presume?

2004-01-10 Thread Brad Wyble
On Sat, 10 Jan 2004, deering wrote:

 Ben, you are absolutely correct.  It was my intention to exaggerate the
 situation a bit without actually crossing the line.  But I don't think
 it is much of an exaggeration to say that a 'baby' Novamente even with
 limited hardware and speed is a tremendous event in the history of life
 on Earth.  A phase change starts with one molecule.  As computers are

Yes, but what effect will it immediately have?  How long after the 
development of the transistor was it before the average person's life was 
significantly changed?

This baby Novamente will be one of many blips on the radar.  The public is 
constantly inundated with reports of revolutions in AI and has 
become jaded by such sensationalistic reporting.

So if/when Ben succeeds, how is anyone to know that they're looking
at a real baby AI, and not some slight enhancement of the AIBO?  They
won't.  Only you, I and maybe 998 other people would understand the
significance, and those 1000 only because we're well versed in Ben's
activities.

Any AGI will take a decade to make itself known and to rise above the 
 signal/noise ratio of scientific media.

 becoming more powerful and nanotech capabilities reach closer to the
 ultimate goal of molecular positional assembly the world will cross a
 threshold similar to supercooled water where one triggering event will
 set off a chain reaction causing a phase change to ice throughout the
 entire mass.  Okay, I'm exaggerating again, but not much.  The money men
 know it is coming.  But they have been burned so many times before in
 the A.I. category that they are not willing to touch the stove again,
 unless someone can show them something that works.  It doesn't have to
 be a finished product, just something that demonstrates a new
 capability.  Your 'baby' Novamente or Peter's proof-of-concept example
 or James Rogers' who-knows-super-secret-whatits will trigger a phase
 change in funding for AGI.  The practical applications are unlimited.  
 The profit potential is unlimited.  That's why the money men threw away
 so much twenty years ago on projects that didn't have a ghost of a
 chance and got burned.  I'm not saying that your 'baby' Novemente will
 change the whole world overnight all by itself.  But any working example
 of AGI, no matter how limited, will trigger a complicated chain reaction
 in the economy and mindset of the world.  The initial example, whatever
 it is, may turn out to be a flawed design of limited usefulness (I
 wouldn't want to see scaled-up jumbo 'Wright Flyers' populating airport
 terminals) but it will not matter.  Just look at the funding that GOOGLE
 has attracted with some cleverly written but dumb (non-AGI) rules.
 

You, me and all of us are a collection of cleverly written but dumb rules 
:)  





Re: [agi] The emergence of probabilistic inference from hebbian learning in neural nets

2003-12-25 Thread Brad Wyble
 
 This is exactly backward, and which makes using it as an unqualified
 presumption a little odd.  Fetching an object from true RAM is substantially
 more expensive than executing an instruction in the CPU, and the gap has
 only gotten worse with time.


That wasn't my point, which you may have missed.  The point is that with
our current technology track it's far cheaper to double your memory
than to double your CPU speed.  I'm not referring to the amount of memory
bits processed by the CPU, but the total number of pigeonholes available.  
These are not one and the same.

Therefore you can make gains in representational power by boosting the
amount of RAM and having each bit of memory be a more precise
representation.  You can afford to have, for example, a neuron encoding
blue sofas and a neuron encoding red sofas, whereas a more restricted-RAM
approach would need to rely on a distributed representation, one with only
sofa neurons and color neurons (apologies for the poor example, but I'm
in a hurry).
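
To make the trade-off concrete, here is a toy Python sketch (entirely made up
on my end, and not tied to any particular AGI design): the localist store
spends RAM on one slot per conjunction and recalls with a single lookup, while
the factored store saves RAM but pays computation to rebind features at recall
time.

from itertools import product

colors = ["red", "blue", "green"]
objects = ["sofa", "chair", "lamp"]

# Localist store: memory cost is len(colors) * len(objects) entries,
# but recall is a single dictionary lookup.
localist = {(c, o): f"a {c} {o}" for c, o in product(colors, objects)}
assert localist[("blue", "sofa")] == "a blue sofa"

# Factored store: memory cost is only len(colors) + len(objects) entries,
# but every recall has to rebuild the conjunction from its parts.
color_units = set(colors)
object_units = set(objects)

def recall(color, obj):
    # binding step: extra computation spent on every query
    if color in color_units and obj in object_units:
        return f"a {color} {obj}"
    return None

assert recall("blue", "sofa") == "a blue sofa"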

Your points are correct, but refer to the bottleneck of getting 
information from RAM to the CPU, not to the total amount of RAM available.


 Back to the problem of the human brain, a big part of the problem in the
 silicon case is that the memory is too far from the processing which adds
 hard latency to the system.  The human brain has the opposite problem, the
 processing is done in the same place as the memory it operates on (great
 for latency), but the operational speed of the processing architecture is
 fundamentally very slow.  The reason the brain seems so fast compared to
 silicon for many tasks is that the brain can support a spectacular number of
 effective memory accesses per second that silicon can't touch.

Both technologies have their advantages and disadvantages.  The brain's
memory capacity (in terms of number of addressable bits) cannot be
increased easily while a computer's can be.  I merely suggest that this
fundamental difference is something to consider if one is intent on
implementing AGI on a von Neumann architecture.






RE: [agi] The emergence of probabilistic inference from hebbian learning in neural nets

2003-12-24 Thread Brad Wyble


Guess I'm too used to more biophysical models in which that approach won't
work.  In the models I've used (which I understand aren't relevant to your
approach) you can't afford to ignore a neuron or its synapses just because they
are under threshold.  Interesting dynamics are occurring even when the
neuron isn't firing.  You could ignore some neurons that are at rest and
haven't received any direct or modulatory input for some time, but
then you'd need some fancy optimizations to ensure you're not missing
anything.
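
A minimal leaky integrate-and-fire sketch in Python (my own toy, not any model
I actually run) shows why the quiet neurons can't simply be skipped: the
membrane potential keeps drifting even when no spike is emitted, so a neuron
frozen while it was silent will respond differently later than one that kept
integrating.

def simulate(inputs, dt=1.0, tau=20.0, v_rest=-70.0, v_thresh=-54.0, v_reset=-70.0):
    # Clock-driven update of one leaky integrate-and-fire neuron.
    # Returns the voltage trace and the spike times.
    v, voltages, spikes = v_rest, [], []
    for step, drive in enumerate(inputs):
        # leaky integration happens on every step, spike or no spike
        v += dt / tau * (v_rest - v) + drive
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset
        voltages.append(v)
    return voltages, spikes

# A weak input depolarizes the cell without ever crossing threshold:
# no spikes, but the internal state has changed.
weak = [0.5] * 50
trace, spike_times = simulate(weak)
print(max(trace), spike_times)        # roughly -60 mV, and an empty spike list

# A stronger input arriving afterwards finds the cell already depolarized,
# so it fires sooner than it would have if the quiet period had been thrown away.
trace2, spike_times2 = simulate(weak + [2.0] * 20)
print(spike_times2)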

But in the situation you're referring to with a more abstract (and 
therefore more useful to AGI) implementation, these details are 
irrelevant.  

I just wanted to chime in and ramble a bit :)

Very glad to hear things are going well with Novamente.  

Hope the holidays treat all of you well.

-Brad





Re: [agi] Evolution and complexity (Reply to Brad Wyble)

2003-10-09 Thread Brad Wyble
On Wed, 8 Oct 2003, Majboroda O.M. 16.03.2001 wrote:
 
 CPU cycles are not an analogue of energy.
 Transformation of energy is not necessarily accompanied by transformation of
 information.
 But transformation of information always occurs due to energy.


It's a fine analogy actually.  It's not perfect, analogies never are.


 
 
 I understand and I appreciate your idea.
 I might tell it last year too.
[snip]

Lots of big words in there, but unless you believe that there was a 
creator, or that for some reason computers can't simulate physical laws 
complex enough to evoke a nice fitness landscape (i.e. that quantum randomness 
is necessary for evolution), nothing that you've said 
countermands my point that we can, in principle, generate AI through sheer 
brute force when our computers get fast enough (i.e. planet-sized nano 
computers).

And, as I said, we can use our wits to shortcut the evolutionary process 
by a few (hundred) orders of magnitude, which is essentially the goal of 
AI.

Seems pretty cut and dried.  I think you're thinking too hard.  Evolution 
is conceptually really simple.  Take closed system, add energy, bake for 4 
billion years, get complexity.  


-Brad Wyble



RE: [agi] Discovering the Capacity of Human Memory

2003-09-16 Thread Brad Wyble
Good point Shane, I didn't even pay attention to the ludicrous size of the 
number, so keen was I to get my rant out.  





RE: [agi] Discovering the Capacity of Human Memory

2003-09-16 Thread Brad Wyble


It's also disconcerting that something like this can make it through the 
review process.

Transdisciplinary is oftentimes a euphemism for combining half-baked and 
ill-formed ideas from multiple domains into an incoherent mess. 

This paper is an excellent example.  (bad math + bad neuroscience != good 
paper)







RE: [agi] Early AGI training - multiple communications channels /multi-tasking

2003-09-03 Thread Brad Wyble


You're right, it's possible, but don't underestimate the problems of 
having multiple interaction channels.  It's not almost a freebie (to 
heavily paraphrase your and Phil's comments).

Multiple streams of interaction across broadly varying contexts would 
require some form of independent memory for each of them, or you'll get cause and 
effect all jumbled up as an event from stream A comes directly after an event from 
stream B.  If the memory system doesn't understand this distinction, the 
search space for figuring out correlations and causations will be 
enormous.
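
A toy Python illustration (stream names and events invented) of the bookkeeping
problem: merge two interaction streams into one untagged memory and a naive
"what came just before this?" learner proposes cause/effect candidates that
cross streams; tag each event with its stream and those spurious candidates
disappear.

events = [
    (0.0, "A", "ask question"),
    (0.5, "B", "door opens"),
    (1.0, "A", "hear answer"),
    (1.5, "B", "robot enters"),
]

def candidate_causes(events, use_stream_tags):
    # Pair each event with the most recent earlier event,
    # optionally restricted to the same stream.
    pairs = []
    for i in range(1, len(events)):
        _, stream, effect = events[i]
        for _, prev_stream, cause in reversed(events[:i]):
            if not use_stream_tags or prev_stream == stream:
                pairs.append((cause, effect))
                break
    return pairs

print(candidate_causes(events, use_stream_tags=False))
# every proposed pairing crosses streams: temporal adjacency alone is misleading
print(candidate_causes(events, use_stream_tags=True))
# [('ask question', 'hear answer'), ('door opens', 'robot enters')]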

Our brains can support multiple simultaneous streams of interaction 
without a major performance hit as long as the streams are more or less 
orthogonal (e.g. walk while talking).  But there are quite good reasons 
why we can't follow two conversations at the same time.

Now, yes, an AI can handle multiple streams, but you are going to 
pay for it somehow: either with multiple independent memory systems for 
each stream, which must later be integrated, or with a hugely increased 
processing cost for analyzing and consolidating a single memory system.

My advice is to learn to crawl before you try running.

-Brad








 
 Yeah -- the design supports multiple interaction channels  This idea
 comes up naturally when one thinks about having an AI
 
 * chat with different folks all around the Web, at the same time
 * look through sensors located at different places in the world
 * control robots undersea as well as in outer space, etc
 
 Why shouldn't one mind do all these things, after all?  Why should cognition
 and memory need to be restricted to a single locale in space, a single
 integrated set of sensors/actuators?
 
 I guess that the subjective experiences AI's with multiple interaction
 channels may be quite diverse.
 
 Maybe there will be a threshold of interactivity between the interaction
 channels and their cognitive-support units, at which there is a phase
 transition from multiple subjective consciousnesses to single subjective
 consciousness.
 
 This is yet another way that digital intelligence will likely be very, very
 different from human intelligence.
 
 -- Ben
 
 
 
 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
 Behalf Of Philip Sutton
 Sent: Tuesday, September 02, 2003 9:07 PM
 To: [EMAIL PROTECTED]
 Subject: [agi] Early AGI training - multiple communications channels /
 multi-tasking
 
 
 Hi Ben,
 
 It just occurred to me that very early in a Novamente's training you
 might want to give it more than one set of coordinated communication
 channels so that the Novamente can learn to communicate with more
 than one external intelligence at a time.
 
 My guess is that this would lead to a a multilobed consciousness -
 where each communication channel (2 way, possibly multiple senses)
 would have it's own mini-consciousness and the Novamente would
 have a metaconsciousness that knits all its mental parts together as a
 whole self.
 
 I don't think we should assume a single communications channel mode
 for Novamentes just because that's how we think of biological minds
 communicating.
 
 Maybe it's a bit like teaching a person to play the piano with two
 hands???  Or how people learn to use whole-body motor skills for
 sport.  But with a sharper and higher level of independent consciouness
 attached to each communication channel/conversation.
 
 We learn to play with each hand and with both hands together.
 
 Cheers, Philip
 
 
 
 



Re: [agi] Web Consciousness and self consciousness

2003-08-24 Thread Brad Wyble

Just a word of advice: you'd get more and better feedback if your .htm 
didn't crash IE.

If you've got some weird HTML in there, tone it down a bit.





[agi] sick of AI flim-flam

2003-08-14 Thread Brad Wyble
A tiny rant about bogus AI.



I was depressed to find this site:

http://www.intelagent.org/

and another by the same snakeoil salesman (Sol Endelman)

http://hardwear.org/personal/PC/

See that pencil drawing of the wearable computer?  Lifted straight from 
the MIT Mithril project website.  I'm going to fire off a letter to 
Mithril and point it out, but the point is this whole webpage, if you 
bother to read it, is practically gibberish.  

And then you go back to hardwear.org to see the outrageous prices cited 
for stuff like trousers and socks, let alone the $60,000 actuator array.

On going back to intelagent.org, it seems clear that this guy's purpose is 
to create authentic-looking companies and swindle the bejesus out of 
gullible VCs.  He throws around a lot of buzzwords and technical-sounding 
garbage, which might sound good to a naive investor.

This guy is worse than the spammers.  People like this cripple the ability 
of legitimate AI companies to get funding by scaring off investors.  

Disgusting.

I only hope Sol is on this list and can read this, but I doubt it.

-Brad







Re: [agi] Perl AI Weblog

2003-08-14 Thread Brad Wyble


The open-source approach to AI, which is essentially what you are doing here, 
is a very interesting one.

However, the open-source success stories have always involved lots of tiny 
achievable goals surrounding one mammoth success (the functional kernel); 
i.e. there were many stepping stones which served to organize efforts.

This approach doesn't seem to have a series of achievable goals that will 
direct efforts.


And if I may offer some constructive criticism regarding clarity: the text of 
this email is very clear, but that of the 
webpage is much harder to follow.  If you wish people to take this 
seriously, make an effort to make it very clear exactly what you are 
hoping they will do.


Some questions I was unable to answer in 5 minutes of browsing your site:


How do these minds compete?

On what/whose servers will they run?

What input is the AI system given?

By what means will they be evaluated?

Why Perl? 

What (whose) code does the main Alife loop connect with for the submodules?

You use the word port as if programmers are merely translating an engine 
from one codebase to another, but that doesn't seem to be the case.  
What did you mean by port exactly?



And finally, the claim in your webpage title that AI has been solved is 
a bit off-putting.  I envy you your enthusiasm for this project, though.

-Brad




Re: [agi] Fw: Do This! Its hysterical!! It works!!

2003-07-20 Thread Brad Wyble

I use elm, so I couldn't tell: was there a virus riding on that?

Just curious.



Re: [agi] Robert Hecht-Nielsen's stuff

2003-07-19 Thread Brad Wyble


Well the short gist of this guy's spiel is that Lenat is on the right track.  The key 
is to accumulate terabytes of stupid, temporally forward associations between elements.

A little background check reveals that this guy isn't a complete nutcase.  He's got 
some publications (but not many), and a real lab position.

However, his claims are a bit too grandiose and he smacks a bit of a snake oil 
salesman at the end when he's fielding the questions, especially the one about the 
inability of his theory to handle the tightly regimented sequence of commands 
necessary to execute motor programs.  He sidesteps that one in a particularly 
obfuscatory fashion.

Nor is his model very interesting in its application.  

Neuroscience data stands counter to his basic claim that the cortex is just a big 
sheet of associators; there are many genetically specified connection patterns.

His claim that we set up relatively immutable patterns early in life has only been 
shown to be true for the visual cortex, as far as I know.

AI isn't a failure because everyone involved is an idiot and keeps missing the obvious 
point that this genius has stumbled upon.  

AI is a failure because AI is hard.  

I give it a C-.

It's long on words and full of idealistic grandeur, but short on substance when you 
really boil it down. 


-Brad












Re: [agi] Educating an AI in a simulated world

2003-07-14 Thread Brad Wyble
 
 
 
 It's an interesting idea, to raise Novababy knowing that it can adopt
 different bodies at will.  Clearly this will lead to a rather different
 psychology than we see among humans --- making the in-advance design of
 educational environments particularly tricky!!

First of all, please read Diaspora by Greg Egan.  As an SF author, he excels in his 
informed approach to AI design, philosophy, and neuroscience.  This book touches on 
this topic (AIs designed for multiple bodies) very directly.


This VR training room initially seemed like a great idea to me, but on consideration, 
I'm not so certain it's worth the trouble.

First of all, you are reducing the complexity of the environment by orders of 
magnitude.  One could argue that a baby's physical interaction with the world is 
the cornerstone on which all future intelligence rests.

Now, you've taken pains to point out that you're not trying to recreate people, but 
intelligence.  However, a Novamente grounded in a different reality will be difficult 
for people to interact with.

So here are two possible issues: the VR world might actually slow down the 
intellectual growth of the Nova Baby.  And even when intelligent, it will be more 
alien to us than it needs to be.

A second point about this plan is that you are creating extra work for yourself both 
in designing a VR training paradigm, and then in bridging the gap from VR to the real 
world, which would be no picnic.  


So there are some possible negatives, the positives you've already listed.  


If this course is decided upon, consider giving the Novamente an ability to sense 
objects in their native format (sprites in 3d coordinates).  If your intent is to 
simplify the world, don't add in the fuzz of the artificial visual input, which is 
often flawed (e.g. clipping errors).  Give the Novababy access to the underlying 
framework of the world or it will be eternally confused as it tries to figure out why 
it can walk through trees, or why Mr. Smith's left toes are inside its own foot.  
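
As a sketch of what I mean (a hypothetical toy API I just made up, not anything
resembling Novamente's actual interfaces), the simulated world could simply hand
the Novababy its own ground-truth object list rather than a rendered image it has
to reverse-engineer:

from dataclasses import dataclass

@dataclass
class SimObject:
    name: str
    position: tuple      # (x, y, z) in world coordinates
    extent: tuple        # axis-aligned bounding-box half-sizes
    solid: bool          # whether the physics engine treats it as impassable

class SimWorldSense:
    # Percepts come straight from the world's internal object list.
    def __init__(self, objects):
        self.objects = objects

    def percept(self, agent_pos, radius):
        # Return every object within `radius` of the agent; no vision required.
        def dist(p, q):
            return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
        return [o for o in self.objects if dist(o.position, agent_pos) <= radius]

world = SimWorldSense([
    SimObject("tree", (2.0, 0.0, 1.0), (0.5, 2.0, 0.5), solid=True),
    SimObject("mr_smith", (1.0, 0.0, 1.0), (0.3, 0.9, 0.3), solid=True),
])
for obj in world.percept(agent_pos=(0.0, 0.0, 0.0), radius=3.0):
    print(obj.name, obj.position, obj.solid)   # no clipping errors to explain away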


-Brad



Re: [agi] Intelligence enhancement

2003-06-23 Thread Brad Wyble

What wasn't made very clear in that article is that the sole function of TMS is 
shutting down specific areas of the brain for a short while.  

So it's not that he's improving a given piece of brain tissue; he's shutting off 
certain areas, which changes the balance of power in the mind and allows creativity to 
become temporarily dominant in people who are usually very left-brained.

I'm prepared to believe these results, and I think it's fascinating work.

However, I'm not going to be first in line to let someone do TMS on me.  Nothing that 
temporarily shuts down areas of the brain can be good for them.  I'm imagining huge 
swaths of ions blasting back and forth through neural membranes as the metabolic 
processes try to maintain homeostasis, accumulating cell damage or death in the 
process.
   
Maybe if this same thing were achieved through cooling (although I can't imagine how 
it could be done non-invasively), or with pharmacology, I would feel better about it.  

-Brad



Re: [agi] Dog-Level Intelligence

2003-03-21 Thread Brad Wyble


It might be easier to build a human intelligence than a dog intelligence simply 
because we don't have a dog's perspective and we can't ask them to reflect on it.  
Don't be quick to assume it would be easier just because they are less intelligent.  



-Brad



Re: [agi] Hard Wired Switch

2003-03-05 Thread Brad Wyble
 
 An AGI system will turn against us probably if humans turn against it first. 
 It's like raising a child, if you beat the child every day, they are not going 
 to grow up very friendly. If you raise a child to co-operate and co-exist with 
 its environment, what possible motivation is there for it to grow up to be 
 hostile?
 
 cheers,
 Simon
 

It's not a question of hostility so much as of deciding that we are an obstruction in its 
goal path.  There are plenty of lines of reasoning that could lead an AGI to decide 
that its existence would be easier without us getting in the way.

-Brad




Re: [agi] Playing with fire

2003-03-03 Thread Brad Wyble

 Extra credit:
 I've just read the Crichton novel PREY. Totally transparent movie-scipt but
 a perfect text book on how to screw up really badly. Basically the formula
 is 'let the military finance it'. The general public will see this
 inevitable movie and we we will be drawn towards the moral battle we are
 creating.
 
 In early times it was the 'tribe over the hill' we feared. Communication has
 killed that. Now we have the 'tribe from another planet' and the 'tribe from
 the future' to fear and our fears play out just as powerfully as any time in
 out history.

Note: I'm not arguing for or against AI here, just bringing to light some personal 
observations


This particular situation is different from the others you describe (the tribe over the 
hill).  To accept the dangers of AI, one must first swallow our pride as a species and admit 
that we are not the top dogs in the universe.  Few people are willing to do this, even 
among well-educated, science-minded engineers.  I just tested this topic on my group 
of internet friends in a private forum with 20-some people.  I was unable to convince 
a single person that this danger is real with a day's worth of intensive 
back-and-forth discussion.  They assumed the typical "we can just control it" mentality that 
has always been prevalent.  Notice that even in gloomy bad-AI stories such as 
Terminator and the Matrix, the humans always win in the end.  This is what the 
mainstream will believe because they want to believe it.

In other words, I don't think the public is going to care one iota about the dangers 
of AI.  They'd prefer to focus their energy on banning truly harmless technologies, 
such as cloning.  People fear clones because, as far as they are concerned, clones are 
people too, so we're dealing with an equal, and can lose.  But AIs are just 
machines; they can be out-smarted or out-evolved, as far as the average person 
is concerned.

The upside is that AI researchers won't have to fight to keep their research legal.

The downside of this is that we're more likely to destroy ourselves. 


-Brad



Re: [agi] Playing with fire

2003-03-03 Thread Brad Wyble
One thing I should add:

It's the same hubris I mentioned in my previous message that prompted us to send out 
space probes effectively bearing our home address and basic physiology on a plaque in 
the hope that aliens would find it and come to us.  Even NASA scientists seem to have 
no fear of anything non-human.

From a species-survival perspective, we'd be better off contacting alien races on our 
terms, rather than inviting them to come by for a visit.  

-Brad



Re: [agi] seed AI vs Cyc, where does Novamente fall?

2003-02-27 Thread Brad Wyble
 
 Yep.  Novamente contains particular mechanisms for converting between
 declarative and procedural knowledge... something that is learned
 procedurally can become declarative and vice versa.  In fact, if all goes
 according to plan (a big if of course ;) Novamente should *eventually* be
 much better at this than the human brain.

I'm glad that you choose to incorporate elements of human cognitive theory into 
Novamente, even if you are not intent on building a brain.  Such commonalities will 
make the design of NM far more intuitive and accessible to designer and lay-person 
alike.

 
 For instance, humans are not very good at making procedural knowledge
 declarative -- it takes a rare human to be able to explain and understand
 how they do something they know how to do well.  There is a real algorithmic
 difficulty here, but even so, I think a lot of the difficulty that humans
 have in doing this is unnecessary, i.e. a consequence of the particular
 way the brain is structured rather than a consequence of the (admittedly
 large) difficulty of the problem involved.
 


I disagree that we have a problem converting procedural to declarative for all 
domains.  As an example, I can retrieve a phone number from procedural memory with one 
retrieval operation (watch my fingers dial it).  Admittedly this system isn't as 
slick as one that would work purely internally, since it requires performance of the task, 
but it works.
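
A toy Python sketch of that trick (my own illustration, not anything of Ben's): the 
number exists only as a dialing procedure, and it becomes declarative by running the 
procedure against an instrumented keypad and reading the answer off the trace.

class Keypad:
    # Instrumented effector: remembers every key the procedure presses.
    def __init__(self):
        self.pressed = []
    def press(self, key):
        self.pressed.append(key)

def dial_home(keypad):
    # "Procedural memory": the number exists only as this fixed motor sequence.
    for key in "5551234":
        keypad.press(key)

def make_declarative(procedure):
    # Run the procedure once and read the fact off the effector trace.
    pad = Keypad()
    procedure(pad)
    return "".join(pad.pressed)

home_number = make_declarative(dial_home)   # one "retrieval operation"
print(home_number)                          # 5551234, now a declarative fact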

Grammar is tougher: I can test any given rule by using it in a sentence and seeing 
how it sounds.  But extrapolating all of the rules I use is a tricky problem; in fact 
it's one we haven't completely finished solving (the rules of English grammar are 
similar, but not identical, to the rules our brains want to use).

And then communicating how to swing a golf club is another matter, but I think the 
limitation there lies in a lack of communication bandwidth.  Our brains have no good way of 
transmitting or interpreting such fine-grained information.

And to be fair to our brains, transcribing a motor memory of how to move 10,000 
muscles in a very precise sequence into declarative knowledge is an extremely 
challenging problem.  Particularly because that sequence isn't static, but requires 
feedback from joint sensors.  The information isn't just the sequence of neural 
impulses, it's the substance of the entire network.


That said, Novamente would be far better at it than we are.  With the ability to 
understand its own code, NM could just rattle off the relevant parameters into 
declarative memory.  Making this declarative knowledge useful would require 
understanding how it functions, though.  That would be the tricky part.


-Brad



Re: [agi] seed AI vs Cyc, where does Novamente fall?

2003-02-27 Thread Brad Wyble

 
 Indeed, making the declarative knowledge derived from rattling off
 parameters describing procedures useful is a HARD problem... but at least
 Novamente can get the data, which as you have greed, would seem to give AI
 systems an in-principle advantage over humans in this area...
 
 It's hard to overestimate the intelligence-enhancement potential of a more
 fluid process of interconversion btw declarative and procedural knowledge


Yes, getting this data is what the entire field of neurophys is about.  Being able to 
extract it without using surgery, electrodes, amplifiers, and gajillions of man-hours 
would be outstanding.  A lack of data is the primary thing holding neuroscience back, 
and to a large degree the depth of cognitive theory over time mirrors the quality of 
the acquisition and analysis tools.

-Brad



Re: [agi] seed AI vs Cyc, where does Novamente fall?

2003-02-27 Thread Brad Wyble
 
 That was exactly my impression when I last looked seriously into
 neuroscience (1995-96).  I wanted to understand cognitive dynamics, and I
 hoped that tech like PET and fMRI would do the trick.  But nothing existing
 giving the combination of temporal and spatial acuity that you'd need to
 even make a start on the problem  I had a PhD student (Graham Zemunik)

Just FYI, MEG (magnetoencephalography) is a good step toward providing temporal 
precision, but is still a long way from discerning individual neurons.  It can 
basically give us EEG-like measurements from deep inside the brain without using 
electrodes (which obviously opens a lot of doors for human experimentation).

 who tried to make a detailed model of the cognitive dynamics in a
 cockroach's brain -- and even that was pretty dicey because the data found
 by different researchers was often inconsistent.  From what you're

I'm sure you know this, but for the benefit of others:
Insect brains are much easier to study because the neurons are explicitly laid down by 
the genetic code.  They are identifiable neuron by neuron and are roughly identical 
from insect to insect (within the same species).   The fact that even these networks 
aren't quite yet understood is a shining example of how far we have to go in 
understanding the human brain.

 describing, some headway is finally being made on modeling cognitive
 dynamics in parts of the rat's brain, and that's a great thing.  I've
 enjoyed following Walter Freeman's work on olfaction in rabbits, but, I've
 also noticed the pattern of bold hypotheses and partial retractions in his
 work over time, which is due to the fact that the data is not quite rich
 enough to support the kind of theorizing he wants to do.
 

I support fringe theorists like Freeman as long as they stay in touch with the 
community and don't sail off to parts unknown (Edelman tends to do this).  Progress 
takes all types: the careful, methodical data collectors, and the people on the front 
lines pushing theories to extents that the data barely supports.


 Fortunately, neuro-analysis technologies are advancing really fast just like
 computer chips.  In another 10-30 years we will have the data to understand
 our brains, and the computers and algorithms to crunch this data.  (And we
 may have AI's to do the work for us, who knows ;)
 

Here's hoping, although I fear they probably said similar things 10-30 years ago.  
Only nanotech can get us the type of noninvasive, detailed data that we need.  The 
type of electrodes we currently use is never going to suffice.

Lucky for us that the brain uses electrically recordable signals from a structure that 
is so easily accessible.  We'd be in dire straits if the brain used entirely chemical 
mechanisms and was located in an abdominal sac.  Thank you, evolution, for making our 
jobs as easy as they are :)



Re: [agi] seed AI vs Cyc, where does Novamente fall?

2003-02-27 Thread Brad Wyble

 
 I actually have a big MEG datafile on my hard drive, which I haven't gotten
 around to playing with.
 
 It consists of about 120 time series, each with about 100,000 points in it.
 
 It represents the magnetic field of someone's brain, measured through 120
 sensors on their skull, while they sit in a chair and perform an experiment
 of clicking a button when they see a line appear on a screen.  (Pretty
 exciting, huh?)  I got the data from my friend Barak Pearlmutter at UNM, who
 has spent a few years working on signal processing tools (using blind
 source signal separation methods) designed to clean up the raw data
 (basically subtracting off for noise caused by repeated reflection of
 magnetic fields off the inside of the skull).

It's actually a very complicated data analysis.  You basically have a spherical 
surface of data (the sensors), and you are trying to reconstruct the sources and sinks 
in 3D that created the 2D data you are observing.  The problem is underconstrained, 
because many 3D source configurations could produce the same 2D data set, but you try to build in 
some anatomical assumptions (e.g. we know the hippocampus is probably a powerful 
source/sink, so pin that thumbtack there) to constrain the possible results.
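
For the curious, here is a small numerical toy (numpy, synthetic numbers, nothing like
the actual signal-processing pipeline mentioned above) of the underconstrained inverse
problem and of how an anatomical prior changes the answer:

import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 12, 40          # far fewer measurements than unknowns

L = rng.standard_normal((n_sensors, n_sources))   # lead field: sources -> sensors
true_s = np.zeros(n_sources)
true_s[5] = 1.0                                    # one active source
b = L @ true_s                                     # what the sensor array records

# Minimum-norm estimate: among the source patterns consistent with b, take the
# one with the least overall power (Tikhonov-regularized least squares).
lam = 1e-2
s_min_norm = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), b)

# Anatomically weighted estimate: up-weight sources believed to be plausible
# generators (the "pin a thumbtack on the hippocampus" assumption).
w = np.ones(n_sources)
w[5] = 10.0
W = np.diag(w ** 2)
s_weighted = W @ L.T @ np.linalg.solve(L @ W @ L.T + lam * np.eye(n_sensors), b)

# Both estimates reproduce the sensor data almost exactly...
print(np.linalg.norm(L @ s_min_norm - b), np.linalg.norm(L @ s_weighted - b))
# ...yet they are different source patterns: the data alone cannot decide between them.
print(np.linalg.norm(s_min_norm - s_weighted))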

As you can imagine it's very weak spatially, but far more precise temporally than PET 
or fMRI, which can only measure blood-flow changes occurring 1 second or more after the 
source activity.

I think combined MEG/fMRI (or was it PET/fMRI?) is going to be able to get the best of 
both worlds.  Either way, there are plenty of technological obstacles.

  
 
 I guess that MEG can be used, over time, for stuff subtler than clicking
 buttons when lines appear.  But using it to track the dynamics of thoughts
 seems a long way off  Basically, one needs a lot more than 120 sensors!!
 ... and then one needs to hope the signal processing code scales well (it
 probably can be made to do so)
 

Well, you can use far more complex behavioral tasks than that even with existing MEG 
technology (have people navigate a maze, do math, solve word problems, etc.).  But in order to 
get a footing with the new MEG technology, they need to start at the basics so that 
they can map MEG responses onto known EEG signatures available from work that's 
already been done.  The first decade of any new neurophys technique is characterized 
by a whirlwind of very basic, boring results (usually ones that create pretty pictures 
and generate funding).  Only after the tech has matured do you even begin to hit the 
cool stuff.


I'll bet AIs will be required to analyze the data sets we'll be getting in the next 20 
years. 

-brad







Re: [agi] swarm intellience

2003-02-26 Thread Brad Wyble

The limitation in multi-agent systems is usually the degree of interaction they can 
have.  The bandwidth between ants, for example, is fairly low even when they are in 
direct contact, let alone 1 inch apart.  

This limitation keeps their behavior relatively simple, at least relative to what you 
might expect given the large neural mass involved.

Also, swarms only scale to a limited degree.  An anthill 1 mile high is not going to 
possess much more smarts than a 3-inch anthill.

-Brad



Re: [agi] swarm intellience

2003-02-26 Thread Brad Wyble

 
 But hopefully the bandwidth of communication is compensated by the power of 
 parallel processing. So long as communication between ants or processing nodes 
 is not completely blocked, some sort of intelligence should self-organize, then 
 its just a matter of time. As programmers or engineers we can manipulate those 
 communication channels ...

Interesting stuff today, I wish I wasn't so insanely busy with this thesis.

I don't think swarms are a good way to approach AGI design because of the design 
principles.  An infinite number of AGI designs are possible, but some are going to be 
drastically easier to create and cheaper to run than others.

The advantage of the swarm is robustness in the face of severe damage.  Wipe out 3/4 
of it, and the remainder will function perfectly well and eventually regrow.  Even a 
single element is a functional entity (although not necessarily self-sufficient).

The brain certainly does not share this ability beyond preserved functionality in the 
face of a hemispheric lesion.  Orient your lesion perpendicular to the axis of 
symmetry and you have a vegetative organism.  

The advantage the brain gets for being so tightly integrated is greatly enhanced 
functionality.  

Because we (in general) are not planning to design AGIs to survive in the face of 75% 
damage, swarms seem to be an inefficient approach, although obviously there are 
domains in which this sort of resilience is a huge advantage, such as the 
deployment of a network of robotic sensory drones on a hostile battlefield.  I don't 
think they will ever have AGI-like behavior, however.


-Brad



Re: [agi] seed AI vs Cyc, where does Novamente fall?

2003-02-26 Thread Brad Wyble


 Just to pick a point, Eliezer defines Seed AI as Artificial
 Intelligence designed for self-understanding, self-modification, and
 recursive self-enhancement.  I do not agree with you that pure Seed AI
 is a know-nothing baby.

I was perhaps a bit extreme in my word choice, but I do not believe that the axis I 
mentioned is orthogonal to the question of hand-wiring of a knowledge base.  Certainly 
some concepts and tinkering must be included in a seed-AI, but I think that the 
representations a seed-AI will develop are one and the same as the knowledge that Cyc 
will require to be hand-made.  

This isn't really the semantic argument it might seem at first glance; rather, it's a 
question of the degree to which knowledge is separable from the internal machinery 
that a seed AI will construct as it grows.

But that's just my opinion, it's been known to change before :)

-Brad





Re: [agi] more interesting stuff

2003-02-25 Thread Brad Wyble
Cliff wrote:
 
 It's not a firm conclusion, but I'm basing it on information /
 complexity theory.  This relates, in certain ways, to ideas about
 entropy -- and energy is negentropy.  I.e. without the sun's input we
 would be nothing.
 
 I'm not convinced of this idea on an intuitive basis, but rather on a
 mathematical basis -- that is, the mathematical idea that complexity
 cannot be freely produced.  You cannot get truly random numbers out
 of a fixed process would be another way to state this.

No, but you can get pseudorandomness from energy.  I would say that we ourselves are, 
at best, pseudorandom as well.  True randomness is a tricky thing to get. 

 
 I also wonder if AGI's could really be trained in a simulated micro-
 environment...perhaps the real universe's randomness is *necessary*
 for development of intelligence.

How?  Your ideas are interesting, but I don't see the need for them.  We understand 
the core principles of evolution; we see it in action and it makes intuitive sense.  
We also know that raw energy input can fuel evolutionary increases in complexity.  We have all 
the necessary and sufficient conditions for evolution to have worked; there's not 
really a missing piece to fill in.

Consider the following thought experiment: a computer able to simulate the earth down 
to an atomic level (let's put aside the possibility that quantum phenomena influence 
events on the scale of earth-life).  This system has one source of input, a constant 
stream of energy.  The machine is simple; the whole thing runs on a Turing machine.

Do you doubt that this machine could recreate evolution in its simulation?  If you do 
not, then we're all done here, as this machine is completely isolated and receives no 
complexity input.   

If you do doubt it, what is the missing piece that you are trying to fill?
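
To make the same point on a laughably smaller scale, here is a tiny deterministic toy 
in Python (mine, and absurdly simpler than an atomic-level simulation): a closed 
program whose only resource is a fixed per-generation compute/"energy" budget, and 
whose genomes nonetheless accumulate structure with no outside complexity being fed in.

import random

random.seed(42)                  # deterministic: a fixed Turing-machine run
GENOME_LEN = 32                  # crude stand-in for complexity: number of bits set

def fitness(genome):
    return sum(genome)

population = [[0] * GENOME_LEN for _ in range(20)]

for generation in range(200):    # each generation costs a fixed energy budget
    # mutate: flip one random bit in each genome
    offspring = []
    for genome in population:
        child = genome[:]
        i = random.randrange(GENOME_LEN)
        child[i] ^= 1
        offspring.append(child)
    # select: keep the fittest half of parents plus children
    population = sorted(population + offspring, key=fitness, reverse=True)[:20]

print(fitness(population[0]))    # climbs from 0 toward 32, with no input but the loop itself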

 
 Well, that's what I'm wondering about.  Does co-evolution increase the
 total complexity (in a mathematical, Kolmogorov sense) or does it
 *mirror* or *absorb* complexity.  Lately, I suspect the latter -- you
 cannot really increase it, you can only reflect it.

Energy-complexity in a closed system.



Re: [agi] more interesting stuff

2003-02-25 Thread Brad Wyble

Kevin said:

 I would say that complex information about anything can be conveyed in ways
 outside of your current thinking, but if you ask me to prove it, I cannot.
 There is evidence of it in things like the ERP experiment which show the
 existence of a possible substrate that we have not yet measured or
 verified...

Which experiment?  I'd like to hear about it.  As far as I know, there is yet to be 
found any conclusive evidence of a cognitively relevant substrate in the brain that 
we have not measured or verified.  No neuroscience data that I am aware of requires 
anything beyond cellular interactions at the atomic scale and above to explain.

Or were you not referring to event-related potentials?  The ERP acronym has multiple 
connotations.


 
 Question: the big bang occured in a closed system, yet the information for
 every phenomena we witness was the result of that occurance.  How was that
 information stored?  How did it get promulgated?

Information did not have to be stored for interesting things to develop.  I don't 
think, for example, that you would find the text for the Constitution of the United 
States hiding somewhere in the big bang proto-particle.  Simple systems can give rise 
to complex series of information.  

One can posit that quantum randomness (Schroedinger's cat) implies the universe is 
non-predictable from its start state, but that doesn't mean quantum phenomena are 
transmitting complex and significant information across some unverified substrate.  
It just means that quantum randomness can occasionally push things one way or another, as 
in the butterfly-wing analogy, in which a small change at one time can drastically 
alter the future.

But I fear we are now getting pretty far from the AGI list's manifesto.

_brad



Re: [agi] really cool

2003-02-25 Thread Brad Wyble
They are not mapping to IP addresses, but probably to geography, as Ben suggests.  I went to 
the search window and intercepted searches done by other people.

-Brad




[agi] seed AI vs Cyc, where does Novamente fall?

2003-02-25 Thread Brad Wyble
Ok, let's get rolling then.

Ben, here's a question.  To what extent are parts of Novamente hand-built a la Cyc?

I can easily imagine a dimension here.  At one end is Cyc, which is carefully and 
meticulously constructed by people.  The design work is twofold: creating the 
structure within which to store information, and then inputting the 
information.  At the other end of the dimension is pure seed AI: a know-nothing 
baby.  You put it together, hit the power button and let it grow (admittedly it would 
require tuning as it matured).

Where would Novamente fall along this dimension? 

What domains of knowledge do you expect to have to explicitly create by hand?
  
-Brad




[agi] the Place system of the rodent hippocampus

2003-02-24 Thread Brad Wyble

I whipped this up this afternoon in case any of you are interested.  I tried to gear 
it towards functionally relevant features.  Enjoy

Reference document: The Hippocampal navigational system
by Brad Wyble

A primer of neurophysiological correlates of spatial navigation in the rodent 
hippocampus.


Why AI enthusiasts should care:

The place system is a unique way to study what the rat is thinking and how it uses 
information to compute.  Place cells represent a particular way that the rodent brain 
analyzes spatial location, one that is cognitively accessible to us.  The behavior 
of place cells is relatively homogeneous across the population.  Contrast this with 
recordings from frontal cortex, in which cellular activity is extremely varied.  
Frontal cortical cells are doing very interesting things with respect to behavior, but 
they are very different from one another, which makes it practically impossible to draw 
conclusions.  The structural and functional simplicity of the hippocampus makes it a 
gateway to understanding the brain, a strong foothold for our first significant steps.
 

It is largely a happy accident that the place system is so easy to study.  The 
hippocampus is arranged in a horizontal sheet of very densely packed cells near the surface of the 
skull (in rodents at least), which means that it is possible to get yields of 200+ 
cells *simultaneously* within one rat using current technology, a feat no other brain 
area comes within an order of magnitude of.  This high cell yield allows us 
to study the behavior of the entire system in the same way the Nielsen system studies 
the television viewing habits of the entire country using data from a tiny fraction of 
homes.


Place Cell
The place cell was described in the 70's by a group led by John O'Keefe (O'Keefe, 
1976) and is the foundation for our understanding of the rodent hippocampal 
navigational system.  A place cell is a hippocampal neuron (a pyramidal, complex-spike cell of 
CA1/CA3/dentate) that will fire reliably and selectively within a small region of 
space.  The firing pattern for that region of space (usually about 10 cm in diameter, 
but it varies with the size/shape of the environment) is roughly a 2-dimensional 
bell curve if the cell's spikes are compiled into a histogram with respect to 2-d 
location.  This region of space is called a place field and is defined with 
respect to a specific neuron in a specific environment.  The particular configuration 
of place fields across all place cells for a given environment is called a place 
map.  One neuron can have multiple place fields within a single environment, but this 
is rare.
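
For anyone who wants to see how a place field actually gets computed from a session's 
data, here is a synthetic-data sketch in Python (my own toy, not code from any of the 
labs involved): bin the rat's position, count spikes per spatial bin, and divide by 
occupancy to get a firing-rate map whose peak is the place field.

import numpy as np

rng = np.random.default_rng(1)

# Fake session: 10,000 position samples (20 ms apart) in a 1 m x 1 m box, and a
# cell that fires preferentially when the rat is near (0.3 m, 0.7 m).
positions = rng.uniform(0.0, 1.0, size=(10_000, 2))
field_center = np.array([0.3, 0.7])
rate = 15.0 * np.exp(-np.sum((positions - field_center) ** 2, axis=1) / (2 * 0.05 ** 2))
spikes = rng.poisson(rate * 0.02)          # spike count per 20 ms position sample

bins = np.linspace(0.0, 1.0, 11)           # 10 cm spatial bins
occupancy, _, _ = np.histogram2d(positions[:, 0], positions[:, 1], bins=[bins, bins])
spike_map, _, _ = np.histogram2d(positions[:, 0], positions[:, 1],
                                 bins=[bins, bins], weights=spikes)
rate_map = spike_map / (occupancy * 0.02 + 1e-9)   # spikes per second of occupancy

peak = np.unravel_index(np.argmax(rate_map), rate_map.shape)
print(rate_map.shape, peak)                # the peak bin sits near (0.3 m, 0.7 m)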

Environment:
The concept of what constitutes an environment is vital to this discussion.  For 
experimental purposes, a rat is introduced into a chamber it has never seen before.  A 
seemingly arbitrary place map develops over 10-15 minutes.  This same rat placed in a 
different environment will immediately develop an entirely different and 
*uncorrelated* place map for the new environment.  There is a lot of complexity in 
figuring out what constitutes a different environment.  Alterations in the geometric 
shape (square to circle) almost always generate a new map.  Variations in visual cues 
will sometimes cause a remapping, sometimes not.  Rats remap in all-or-none fashion.  
That is to say, as visual cues are altered, the map will stay constant until some 
arbitrary threshold is passed, at which point the entire population will remap.  Such 
alterations do not cause gradual shifts in the fields.

Extreme alterations in behavior can cause a remapping.  If a rat is trained to do two 
different tasks (targeted vs. random foraging) in the same environment, it will usually 
develop two different place maps and switch between them based on the task.

Sensory cue control:
In environments for which multiple orientations are available (square, cylinder), the 
place map will align itself with the most obvious visual cues.  If a cylinder has a 
cue card on the wall and the card is shifted, the place map will follow the card.   
The place map is largely immune to the removal of cues.  If the lights are turned off 
and no visual cues are available, the rat will continue to use the same place map.   
It uses a combination of vestibular and kinesthetic cues to integrate its motion and 
keep mental track of its position (as evidenced by the preserved functionality of the 
place map and behavior).  It can use olfactory (excrement) and tactile cues to correct 
for drift error in the path integration process.  This is demonstrated by using 
environments that allow for no olfactory cues, by wiping the environment with alcohol 
and turning off the lights.  The place map is stable with respect to the cylinder 
walls, but drifts in orientation over time because there are no olfactory cues to 
control for rotational drift, while contact with the walls 
controls for radial drift.  Generally, visual cues dominate when they are available.
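
The path-integration part is easy to caricature in code; a toy sketch (the noise level 
and time step are made up) showing why heading drift accumulates when nothing external 
corrects it:

    import numpy as np

    def dead_reckon(speeds, turn_rates, dt=0.02, heading_noise_sd=0.01):
        """Integrate self-motion into a position estimate; heading noise accumulates as drift."""
        x, y, heading = 0.0, 0.0, 0.0
        path = []
        for v, w in zip(speeds, turn_rates):
            heading += w * dt + np.random.normal(0.0, heading_noise_sd)  # uncorrected rotational drift
            x += v * dt * np.cos(heading)
            y += v * dt * np.sin(heading)
            path.append((x, y))
        return np.array(path)

Contact with the walls bounds the radial error, but nothing in a loop like this bounds 
the rotational error, which is exactly the orientation drift described above.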

Re: [agi] the Place system of the rodent hippocampus

2003-02-24 Thread Brad Wyble

 On the face of it, these place maps are very reminiscent of attractors as
 found in formal attractor neural networks.  When multiple noncorrelated
 maps are stored in the same collection of neurons, this sounds like multiple
 attractors being stored in the same formal neural net.

Yep, there are well-developed theories about how an autoassociative network like CA3 
could support multiple, uncorrelated attractor maps and sustain activity once one of 
them is activated.  The big debate is about how they are formed.  
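
The standard formal version of this is a Hopfield-style autoassociator; a minimal 
Python sketch (sizes, seed, and noise level are purely illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    n_cells, n_maps = 200, 3
    maps = rng.choice([-1, 1], size=(n_maps, n_cells))   # three uncorrelated stored "maps"

    W = np.zeros((n_cells, n_cells))
    for m in maps:                                        # Hebbian outer-product storage
        W += np.outer(m, m) / n_cells
    np.fill_diagonal(W, 0.0)

    state = maps[0].copy()                                # cue with a degraded version of map 0
    state[rng.random(n_cells) < 0.2] *= -1

    for _ in range(10):                                   # updates settle into the nearest attractor
        state = np.where(W @ state >= 0, 1, -1)

    print((state == maps[0]).mean())                      # typically ~1.0: map 0 recalled and sustained

This only shows storage and recall, of course; how the real maps get formed in the 
first place is the open question.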

 
 About the ability to study 200 neurons at once: With what time granularity
 can this be done?  Do there exist time series of the activity of these 200

Raw data is usually acquired at 50+ kHz, and then the spikes are identified as to 
which neuron they belong to and are stored in a reduced form (i.e., spike X of neuron Y 
occurs at time T).

 neuron, both during map learning and during map use?  Analyzing this
 200-dimensional time series would be interesting.  (Not that I have time to
 do it .. but it would be interesting.)  We are currently using Novamente to

They're working on it.  At present such labs are acquiring data faster than they can 
analyze it.  Figuring out how the maps form is a tricky business because you can only 
sample the formation of a place field when the rat is in it.  Consequently the data is 
very sparse during the formation.  They are making progress though.

 analyze coupled time series in another biological domain (gene expression
 data).  If there is decent time series data, it could be interesting to
 codevelop a grant application with someone to see what Novamente can find in
 this data

Very interesting idea.  The lab with most of this data is the McNaughton lab in 
Arizona.  They are somewhat reluctant to give it out though, because of the money and 
time investment in collecting it.  It would be very cool if Novamente could be applied 
to it though.


 
 On a more philosophical note, I like the idea that the machinery used for
 place mapping in rats is similar to the machinery used for more abstract
 sorts of mapping in humans.  Indeed, this reflects the point someone made
 last week on this list, regarding the fact that humans have much better
 reasoning ability in familiar domains than unfamiliar ones.  Maybe one of
 the reasons is that when we know a domain well we figure out how to map the
 domain into a physical-environment metaphor so we can use some physical
 mapping machinery to reason about it.  But some familiarity is needed to
 create the map into the physical-environment metaphor.  I think this is what
 someone suggested last week -- and your essay makes me like the hypothesis
 even more.

That was me.  

It will be a while before we have such human data of course, but they are starting to 
record from human hippocampi (in epileptic patients).  I'm a big fan of using 
landscape analogies to reason about problems; it tends to work well for me.  But I 
wonder if such abilities are more reliant on visuospatial areas of the cortex.  

One of the limitations that strikes me is that of dimensionality.  I used to spend 
time while driving on road trips trying to think in 4 dimensions in a way similar to 
how I can visualize 3-D.  I just couldn't get it to work well.  The best I could come 
up with was layers of 3-D representations with one feature varying.  This is an 
excellent example of how powerful our minds are at certain kinds of computation, but 
limited outside of our innate domains.  

 
 I am reminded a bit of some management-consulting ideas developed by my
 friend Johan Roos, see e.g.
 
 http://www.seriousplay.com/images/landscapes.pdf
 
 His work explores the notion of knowledge landscapes, and the use of the
 physical-landscape metaphor in human thinking about business.

I'll check it out, thanks.

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] the Place system of the rodent hippocampus

2003-02-24 Thread Brad Wyble
 
 
 
  Yeap, there's well developed theories about how an autoassociate
  network like CA3 could support multiple, uncorrelated attractor
  maps and sustain activity once one of them was activated.  The
  big debate is about how they are formed.
 
 The standard way attractors are formed in formal ANN theory is via variants
 of Hebbian learning.  But pure unsupervised Hebbian learning has never
 worked very well in simulations.  In the CS theory of reinforcement
 learning, a lot of tricks have been used to make Hebbian learning work
 better (temporal difference learning, for example), but none of these work
 that awesomely.
 

Using artificial rules, such as hardball winner-take-all and synaptic weight 
normalization, it's doable to get ANNs to do this.
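
For instance, a one-line flavor of the kind of normalization I mean (Oja's rule, which 
keeps Hebbian weight growth bounded; the learning rate is arbitrary):

    import numpy as np

    def oja_update(w, x, lr=0.01):
        """Hebbian learning with built-in weight normalization (Oja's rule)."""
        y = np.dot(w, x)                   # postsynaptic activity
        return w + lr * y * (x - y * w)    # Hebbian growth minus a decay term that bounds |w|

Nothing biophysical about it, but it keeps the weights from running away.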

But in an autoassociative network with realistic biophysical properties, controlling 
activity to prevent runaway synaptic modification is a very large problem.  My own 
grad advisor, Mike Hasselmo has worked on this very problem using pharmacological 
modulation to suppress synaptic transmission during learning.  The fact that epilepsy 
usually starts within the hippocampus (with its sheets of 100k neurons, all 
interconnected with excitatory connections) indicates that this is a real problem for 
the brain as well as models of it.   

I imagine the hippocampus is really pushing the evolutionary envelope in terms of 
being prone to epilepsy.  Demand for more memory is probably fighting directly against 
epileptic tendencies in terms of evolutionary fitness.  Another problem is feeding 
that dense sheet of nerves (which is why the hippocampus is one of the first things to 
suffer damage during anoxia).  It's a very specialized area that's pushing the limits 
of the body's ability to feed it and keep it from seizing up.   

 
 Do you think the spike-time data contains enough information that it's not
 necessary to look for patterns in the raw data?

They keep the real data too, but it's *huge* (100+ channels of 70 kHz data, real time).  
The raw data is basically an average of the neural activity of the nearby cells.  
Spikes from neurons within a small radius of the electrode tip stand out and have a 
certain characteristic shape/amplitude, which is used to identify said cell.  Apart 
from identifying spikes, I'm not sure you'd get much out of the raw data (assuming you 
are also collecting EEG data in real time at 10 kHz or so from one electrode in the 
nearby region).

However, nowadays people are starting to worry about complex spikes too (bursts of 
spikes).  Assigning these spikes to their source neuron is much harder because spikes 
after the first one in a burst are reduced in amplitude.  So you need specialized 
clustering algorithms that are aware of bursts and of what they do to spike amplitude.  
You need to go back to the raw data to identify such bursts every time you change your 
detection algorithm.   
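
The first step of such an algorithm, finding candidate bursts on a channel, is simple 
enough to sketch (assuming a sorted list of spike times in ms; the 10 ms 
inter-spike-interval threshold is a typical but arbitrary choice):

    def find_bursts(spike_times_ms, max_isi_ms=10.0, min_spikes=2):
        """Group spikes into bursts: runs of spikes separated by less than max_isi_ms."""
        if not spike_times_ms:
            return []
        bursts, current = [], [spike_times_ms[0]]
        for prev, t in zip(spike_times_ms[:-1], spike_times_ms[1:]):
            if t - prev <= max_isi_ms:
                current.append(t)
            else:
                if len(current) >= min_spikes:
                    bursts.append(current)
                current = [t]
        if len(current) >= min_spikes:
            bursts.append(current)
        return bursts

The hard part, as I said, is then assigning the later, attenuated spikes in each burst 
to the right cluster, which a toy like this doesn't even attempt.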

-Brad

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] Goal systems designed to explore novelty

2003-02-22 Thread Brad Wyble

 Novelty is recognized when a new PredicateNode (representing an observed
 pattern) is created, and it's assessed that prior to the analysis of the
 particular data the PredicateNode was just recognized in, the system would
 have assigned that PredicateNode a much lower truth value.  (That is:  the
 system has seen a pattern that it did not expect to see.)

So you're saying a newly formed PredicateNode normally has a low truth value, but PNs 
about novelty tend to have abnormally high truth values?

Or is it: novel PredicateNodes tend to have lower than normal truth values?

 
 Novelty is recognized when a map (a set of Atoms that share a coherent
 activity pattern) is formed, which is dissimilar to any previously existing
 maps.

Are you familiar with the place cell system of the hippocampus as found in rats?  I'll 
give you a brief synopsis in a new subject in case there are any ideas that you find 
useful.

 
 It should be noted that the rules for recognizing novelty are similar to the
 rules for mentioning learning.  However, novelty focuses on the suddenness
 of changes in truth value, whereas learning focuses on the total amount of
 changes in truth value.  The two are similar conceptually but different
 quantitatively.

Interesting idea.  I'm still unclear about the specifics of how truth relates to 
novelty, but I get the general idea.  I'll wait for the nicer review article and leave 
you to your work. 

Thanks
-Brad

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] A probabilistic/algorithmic puzzle...

2003-02-21 Thread Brad Wyble


 
 1) Humans use special-case algorithms to solve these problem, a different
 algorithm for each domain
 
 2) Humans have a generalized mental tool for solving these problems, but
 this tool can only be invoked when complemented by some domain-specific
 knowledge
 
 My intuitive inclination is that the correct explanation is 2) not 1).  But
 of course, which explanation is correct for humans isn't all that relevant
 to AI work in the Novamente v


Lakoff and Nunez (http://perso.unifr.ch/rafael.nunez/reviews.html) have a theory that 
we compare lengths in our heads to do arithmetic when we're not using school-learned 
rules.  In their view, our innate mathematical ability is based on visuo-spatial 
comparisons.

This would basically be #2, and to use this capability we need to get familiar enough 
with the problem that our mind translates the numbers involved into lengths.



-Brad



---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



Re: Re: [agi] A probabilistic/algorithmic puzzle...

2003-02-21 Thread Brad Wyble
 
 Hi Ben,
 
 Thanks for the brain teaser!  As a sometimes believer in Occam's Razor, I
 think it makes sense to assume that Xi and Xj are independent, unless we know
 otherwise.  This simplifies things, and is the rational thing to do (for
 some definition of rational ;-).  So why not construct a bayes net modeling
 the distributions, with causal links only where you _know_ two variables are
 dependent?  For reasoning about orphan variables (e.g., you know nothing
 at all about Xi), just assume the average of all other distributions.  If
 you have P(Xi|Xj), and want P(Xj|Xi), fudge something together with Bayes'
 rule.  This isn't a complete solution, but its how I would start... Is this
 one of the things you've tried?
 
 Cheers,
 Moshe


As Pei Wang said:  Intelligence is the ability to work and adapt to the environment 
with insufficient knowledge and resources.

I think this is a core principle of AGI design and that a system that only makes 
inferences it *knows* are correct would be fairly uninteresting and incapable of 
performing in the real world.  The fact that the information in the P(xi|xj) list is 
very incomplete is what makes the problem interesting.

Or maybe I'm misinterpreting your intent.


-Brad

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



Re: [agi] A probabilistic/algorithmic puzzle...

2003-02-21 Thread Brad Wyble
 This is also an example of how weird the brain can be from an algorithmic
 perspective.  In designing an AI system, one tends to abstract cognitive
 processes and create specific processes based on these abstractions.  (And
 this is true in NN type AI architectures, not just logicist ones.)  But
 evolution is a hacker sometimes: often, rather than abstracting, it reuses
 stuff that was created for another purpose, providing hacky mappings to
 enable the reuse.  This is terrible software engineering practice, but
 evolution has a lot of computational resources to work with, and it does
 create a lot of buggy things ;)
 

The study of historical constraints on evolution's design principles is fascinating.  
I took a class with this guy: http://www.mcz.harvard.edu/Departments/Fish/kfl.htm, and 
he focuses on very interesting problems within systems that would seem to be very 
boring (the evolution of jaw structures in cichlids).

For example, consider hemoglobin, the current means of transmitting oxygen in the 
body.  There might be a better way to do it; in fact, it's almost certain that there 
is.  But evolution would have a very hard time finding it, because we're already 
heavily invested in the hemoglobin track.   

The same thing applies to the brain of course; evolution has invested a lot of effort 
in developing sensory and motor facilities.  Logic and reason are crude hacks, 
tacked on top of a system designed to do nothing of the sort.  It's like figuring out 
how to attach a swimming pool to the space shuttle (and, miracle of miracles, it 
somehow works, albeit crudely).  

Small wonder that we are so terribly bad at logic.  

http://plus.maths.org/issue20/reviews/book1/


Interestingly, there are some primitive parts of our brain that are better at logic 
and are more rational than our executive function.  Animals (and humans) in a 
classical conditioning paradigm are *excellent* at performing simple behaviors in a 
way that maximizes reward.  We can determine the proper ratio of performance on a two 
lever task without even being consciously aware of the contingencies.  Rats can do 
this too.  In fact, sometimes our advanced forebrain gets in the way of our more 
primitive structures trying to do what they do best.  This is probably why people 
gamble and play the lottery.  I would guess that the payoff matrices for all forms of 
casino gambling are too subtle and complicated for our primitive rationality agents to 
comprehend, and so the stupid forebrain gets to have its way.

-brad

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/a href=/faq.html#spam[EMAIL PROTECTED]/a


Re: Re: Re: [agi] A probabilistic/algorithmic puzzle...

2003-02-21 Thread Brad Wyble

 
 Brad wrote:
  I think this is a core principle of AGI design and that a system that
  only makes inferences it *knows* are correct would be fairly
  uninteresting and incapable of performing in the real world.  The fact
  that the information in the P(xi|xj) list is very incomplete is what
  makes the problem interesting.
  
  Or maybe I'm misinterpreting your intent.
 
 I agree perfectly with your core principle, and my proposal was not to
 only make inferences that you know are correct. I think you may be
 misinterpreting: lets say that we know P(Xi), and want to guess at P(Xi|Xj).
 We have insufficient knowledge, so we need to make some assumptions to
 approximate P(Xi|Xj).  I argue that under these circumstances, the best
 assumption to make is that Xi and Xj are independent, (ie, P(Xi|Xj)=P(Xi)). 
 Does this clarify things?


You are basically saying, for each unknown P(Xi|Xj), assume it equals P(Xi).  
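
In code terms, I read the proposal as something like this trivial fallback (just my 
paraphrase of the idea, not anything implemented):

    def cond_prob(known_cond, marginals, i, j):
        """Return P(Xi|Xj) if it was observed; otherwise fall back to the marginal P(Xi)."""
        return known_cond.get((i, j), marginals[i])   # independence assumed by default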

I think this conservative approach, while well grounded in rationality, doesn't really 
allow for the existence of useful and interesting inference.   An AGI has to tolerate, 
and work with, large degrees of uncertainty.  This includes assuming dependencies 
without sufficient evidence.  I can say that in the biological sciences, one has to do 
this constantly.  What separates the good scientists from the not-so-good is an 
ability to keep track of many low-confidence assumptions simultaneously, shake them up 
and see what theories fall out that violate the fewest of them.   

-Brad




 
 Moshe
 
  
  
  -Brad
  
  ---
  To unsubscribe, change your address, or temporarily deactivate your
  subscription,  please go to
  http://v2.listbox.com/member/[EMAIL PROTECTED]
 
 
 ---
 To unsubscribe, change your address, or temporarily deactivate your subscription, 
 please go to http://v2.listbox.com/member/[EMAIL PROTECTED]
 

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] A probabilistic/algorithmic puzzle...

2003-02-20 Thread Brad Wyble
 
 But anyway, using the weighted-averaging rule dynamically and iteratively
 can lead to problems in some cases.  Maybe the mechanism you suggest -- a
 nonlinear average of some sort -- would have better behavior, I'll think
 about it.

The part of the idea that guaranteed an eventual equilibrium was to add decay to the 
variables that can trigger internal probability adjustments (in my case, what I called 
truth).  Eventually the system will stop self-modifying when the energy (truth) 
runs out.  The only way to add more truth to the system would be to acquire new 
information via adding goal nodes for that purpose.  You could say that the internal 
consistency checker feeds on the truth energy introduced into the system by the 
completion of data-acquisition goals (which are capable of incrementing truth values).

This should guarantee the prevention of  infinite self-modification loops.
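
In rough pseudo-code terms, the bookkeeping I have in mind looks like this (names such 
as truth_energy are mine, not Novamente's):

    class ConsistencyChecker:
        """Internal adjustments spend a decaying 'truth' budget; only new data refills it."""
        def __init__(self, decay=0.95):
            self.truth_energy = 0.0
            self.decay = decay

        def on_data_goal_completed(self, amount):
            self.truth_energy += amount          # completed data-acquisition goals add truth

        def try_adjust(self, cost=1.0):
            self.truth_energy *= self.decay      # decay on every cycle
            if self.truth_energy >= cost:
                self.truth_energy -= cost        # each self-modification spends truth
                return True
            return False                         # budget exhausted: self-modification stops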

-Brad

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



[agi] Goal systems designed to explore novelty

2003-02-19 Thread Brad Wyble

  The AIXI would just contruct some nano-bots to modify the reward-button so
  that it's stuck in the down position, plus some defenses to
  prevent the reward mechanism from being further modified. It might need to
  trick humans initially into allowing it the ability to construct such
  nano-bots, but it's certainly a lot easier in the long run to do
  this than
  to benefit humans for all eternity. And not only is it easier, but this
  way he gets the maximum rewards per time unit, which he would not be able
  to get any other way. No real evaluator will ever give maximum rewards
  since it will always want to leave room for improvement.
 
 Fine, but if it does this, it is not anything harmful to humans.
 
 And, in the period BEFORE the AIXI figured out how to construct nanobots (or
 coerce  teach humans how to do so), it might do some useful stuff for
 humans.
 
 So then we'd have an AIXI that was friendly for a while, and then basically
 disappeared into a shell.
 
 Then we could build a new AIXI and start over ;-)


This is an interesting aspect of the problem.  Evolution has designed a fairly robust 
reward system, one that encourages us to achieve interesting things through our lives 
and acquire knowledge in an interesting way.  Yet even it is vulnerable to 
short-cutting of the reward system, as seen in addictive behaviors.  

Ben, I'm guessing you've thought a lot about how to structure the reward/goal system of 
Novamente.  I'd love to hear more about it.  It seems that designing a system that 
forces itself to expand its knowledge base is a fairly non-trivial task.  We as 
entities (as demonstrated also in rats) have a certain predilection for exploring 
novel situations, environments, objects, ideas, etc.  Have you implemented a similar 
drive for seeking novelty in Novamente?

-Brad









---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



Re: [agi] Breaking AIXI-tl

2003-02-19 Thread Brad Wyble
 
 Now, there is no easy way to predict what strategy it will settle on, but
 build a modest bunker and ask to be left alone surely isn't it. At the
 very least it needs to become the strongest military power in the world, and
 stay that way. It might very well decide that exterminating the human race
 is a safer way of preventing future threats, by ensuring that nothing that
 could interfere with its operation is ever built. Then it has to make sure
 no alien civilization ever interferes with the reward button, which is the
 same problem on a much larger scale. There are lots of approaches it might
 take to this problem, but most of the obvious ones either wipe out the human
 race as a side effect or reduce us to the position of ants trying to survive
 in the AI's defense system.
 

I think this is an appropriate time to paraphrase Kent Brockman:

Earth has been taken over -- 'conquered', if you will -- by a master race of unfriendly 
AIs.  It's difficult to tell from this vantage point whether they will destroy the 
captive earth men or merely enslave them.  One thing is for certain: there is no 
stopping them; their nanobots will soon be here.  And I, for one, welcome our new 
computerized overlords.  I'd like to remind them that as a trusted agi-list 
personality, I can be helpful in rounding up Eliezer to... toil in their underground 
uranium caves.


http://www.the-ocean.com/simpsons/others/ants2.wav


Apologies if this was inappropriate.  

-Brad

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



Re: [agi] doubling time watcher.

2003-02-18 Thread Brad Wyble
 
 I would like to contribute new SPEC CINT 2000 results as they are posted
 to the SPEC benchmark list by semiconductor manufacturers.  I expect
 to post perhaps 10 times per year with this news.  This is the source data
 for my Human Equivalent Computing spreadsheet and regression line.

I'm uncomfortable with the phrase Human Equivalent because I think we are very far 
from understanding what that phrase even means.  We don't yet know the relevant 
computational units of brain function.  It's not just spikes, it's not just EEG 
rhythms.  I understand we'll never know for certain, but at the moment, the 
possibility of guesstimating within even an order of magnitude seems premature.

This isn't to say that the regression isn't a bad idea, or irrelevant to AGI design.  
I just don't like the title.  

-Brad

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



Re: [agi] doubling time watcher.

2003-02-18 Thread Brad Wyble
 
 Brad writes, Might it not be a more accurate measure to chart mobo+CPU combo prices?
 
 
 Maybe.   If you wanted to research and post this data I'm sure it would be helpful to have.


Check out www.pricewatch.com.   They have a search engine which ranks products by 
vendor.  Using this, you could get lots and lots of data from one source.  By averaging 
mean prices from the top 10 cheapest vendors, you'd wash out weird one-time price-break 
deals that would pollute your data if you only considered the cheapest.

They also have data for complete systems.  


It's also probable that pricewatch keeps archived price data.  You might consider 
emailing them.  Find a smart techie in their NOC who thinks AI is cool and you might 
get your hands on 5+ years of perfect data on every index of computing power: CPU, 
hard drives, tape storage, RAM, everything.

-Brad

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



Re: [agi] doubling time watcher.

2003-02-18 Thread Brad Wyble
 
 I used the assumptions of Hans Moravec to arrive at Human Equivalent
 Computer processing power:
 
 http://www.frc.ri.cmu.edu/~hpm/
 
 Of course as we get closer to AGI then the error delta becomes smaller.  I
 am comfortable with the name for now and will adjust the metric as more
 info becomes available.
 

The error delta depends more on neuroscience research than on AGI progress.  I'm not 
comfortable with Moravec's calculations, but his approach of estimating based on 
retinal processing power is better than anything else I've read on it.  Retinal 
neurons aren't quite the same beasts as the enormous pyramidals that make up much of 
the brain, though. 

 
  This isn't to say that the regression isn't a bad idea, or irrelevant to AGI 
design.  I just don't like the title.
 
  -Brad


Oops, I meant to say: This isn't to say that the regression *is* a bad idea.

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



Re: AGI Complexity (WAS: RE: [agi] doubling time watcher.)

2003-02-18 Thread Brad Wyble
 The brain is actually fantastically simple... 
 
 It is nothing compared with the core of a linux operating system
 (kernel+glibc+gcc). 
 
 Heck, even the underlying PC hardware is more complex in a number of
 ways than the brain, it seems... 
 
 The brain is very RISCy... using a relatively simple processing pattern
 and then repeating it millions of times. 

Alan, I strongly suggest you increase your familiarity with neuroscience before making 
such claims in the future.  I'm not sure what simplified model of the neuron you are 
using, but be assured that there are many layers of complexity of function within even 
a simple neuron, let alone in networks.  The coupled resistor/capacitor model is only 
given as a simplified version in textbooks to make the topic of neural networks 
digestible to the entry-level student.  Dendrites are not simple summators; they have 
a variety of nonlinear processes, including recursive, catalytic chemical reactions and 
complex second-messenger systems.  And that's just the tip of the iceberg; once you get 
into pharmacological subsystems, the complexity becomes a bit staggering. 

If it were fantastically simple, more so than a Linux box, do you think that thousands 
of scientists working over more than one hundred years would still understand it so 
poorly, when it takes a group of 5 people 2 years to crank out a new Linux OS?
   
   
 
  We know from the biology folks that the human mind contains at least 
  dozens, and probably hundreds of specialized subsystems.
 
 In the cortex, I would propose the number is 28 for the left hemisphere,
 and maybe another 10 or so in the right hemisphere which don't directly
 overlap with the ones on the left.

You realize that the blobs drawn on images of the brain in college level textbooks are 
simply areas of cell responsivity, and not diagrams of the systems themselves?  The 
cortex is highly differentiated containing probably dozens if not hundreds of systems, 
not to mention the enormous variety of specialized systems at the subcortical level.   
The complex soup of the reticular formation is sufficient to turn a sane anatomist 
into a sobbing wreck with its dozens of specific nerve clusters.


 
 Consider the chess problem. 
 The present computer Chess solutions are widely acknowleged to be much
 less efficient than the ones in the brain. So the complexity that you
 are trying to argue is necessary for AGI is merely reflective of our
 currently poor programming methodologies.

Chess is a game designed by the mind, so it is no surprise that it is something the 
mind is good at.  It is trivial to design games that computers are vastly superior at, 
but that does not mean the mind has poor programming methodologies.



_Brad





---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



Re: AGI Complexity (WAS: RE: [agi] doubling time watcher.)

2003-02-18 Thread Brad Wyble

 
 
 Well, we invented our own specialized database system (in effect) but not
 our own network protocol.
 
 In each case, it's a tough decision whether to reuse or reimplement.  The
 right choice always comes down to the nasty little details...
 
 The biggest Ai waste of time has probably been implementing new programming
 languages, thinking that if you just had the right language, coding the AI
 would be SO much easier.  Ummm...
 

The thing that gives me the most confidence in you Ben is that you made it to round 2 
and you're still swinging.  You've personally learned the hard lessons of AGI design 
and its pitfalls that most of the rest of us can only imagine by analogy.  

-Brad

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



Re: AGI Complexity (WAS: RE: [agi] doubling time watcher.)

2003-02-18 Thread Brad Wyble
 
 [META: please turn line-wrap on, for each of these responses my own
 standards for outgoing mail necessitate that I go through each line and
 ensure all quotations are properly formatted...]

I think we're suffering from emacs issues, I'm using elm.

 
 Iff the brain is not unique in its capability to support intelligence
 then all of this can be replaced by some abstract model with the same
 basic computational charactaristics but in a very different way. 

I totally agree.  But the genesis of this debate was whether the brain is 
complicated in a non-trivial way.  The fact that it is complicated does not mean it 
cannot be replicated in a different substrate (and like Ben, I think it would be a 
misapplication of effort to try).  

 
  The implementation details are what tells you how the brain 
  functions.
 
 I don't care _HOW_ it functions, I care about _WHAT_ a given section
 accomplishes through its functioning. 

The nature of neuroscience research doesn't really differentiate between the two at 
present.  In order to understand WHAT a brain part does, we have to understand HOW it, 
and all structures connected to it, function.  We need to understand the inputs and 
the outputs, and that's all HOW.  

There are people who approach the problem from a purely black-box perspective of 
course, by giving people memory tests and looking at the pattern of failures.  This is 
extremely interesting work, particularly as regards the types of errors people make 
while speaking.  (http://www.wjh.harvard.edu/~caram/pubs.htm)
I don't think it's sufficient, on its own, to figure out the brain without 
simultaneously looking at the neural data.  

 
 Given that, it should be relatively streight forward to find a
 work-alike 

well, it just isn't.  Brains are hard to reverse engineer, and that's basically what 
you're talking about.

 
 Failing that, it is still possible to set up a system akin to Creatures
 but with a much more powerful engine and wait untill a good'nuff
 algorithm evolves on its own... 

It took evolution billions of years with an enormous search space.  Obviously we can 
speed up the process.  But in the end, you'd end up with an equally inscrutable mass of 
neural tissue.  You'd be better off getting yourself a real kid :)

 
 rant mode engaged
 I HATE IVORYTOWERISM!!!
 IF A BOOK DOESN'T TELL IT LIKE IT IS, IT SHOULD NEVER BE PUBLISHED, EVEN
 TO LITTLE CHILDREN!! (Especially not to little children.)

My comment was in the context of you saying that the brain is fantastically simple 
and then citing Calvin as a source for your conclusion.  I'm saying that books by pop 
authors are insufficient to draw conclusions from, not that they are useless.  His 
ideas are great, I love his work.  

 
 The brain does have an innate structure in the form of the topology I
 mentioned earlier. This topology naturally leads to the development of
 functional systems. HOWEVER, there is no law in the *cortex* which
 governs what behaviors it will produce (likes, dislikes etc...) these
 must be inputed either from the environment or from the subcortical
 structures.

I disagree with this, but I see where you are coming from.  We don't know enough about 
the cortex to say things like this.  The reason that subcortical structures seem more 
concrete to us, is that they are simpler in design and therefore easier to understand 
than cortical structures.  


 Yes, and I don't think those variations in layers or even connectivity
 are at all significant. Of course you want to know which layer is for
 input and which layer is for feedback but you don't really worry
 yourself about the measurements which are probably a byproduct of having
 more neurons in those regions that are heavily connected and not, in
 themselves, interesting... The extra layers in the occipital lobe are
 probably nothing more than the equivalent of a math coprocessor in a
 computer...

The addition or deletion of layers is going to drastically change the nature of 
computations a given bit of cortex performs.  

 
  I've spent 8 years studying hippocampal anatomy.  It is fascinating and 
  highly structured in a way the cortex isn't (or its simplicity allows 
  us to perceive the structure).  Vast volumes of data about its anatomy 
  are available and I have read most of it.
 
 GIMME GIMME GIMME!!! =P

I said I read it, I didn't say I could remember all of it :)

 
   I( and the rest of the hippocampal community) am at a loss to tell you 
  how it functions. 
 
 Do we know what it does? (how its outputs relate to its inputs)

Nope.  We think it might have to do with spatial navigation in rodents (rats tend to 
think in terms of 2-D space) and more complex types of memory in higher-order 
critters.  Anatomy and neurophysiology seem to suggest it should relate memory to 
motor actions and behavioral states, but lesion it and animals seem relatively 
unimpaired in that respect (lesions are a troublesome way to reverse engineer the 
brain).  *throws up hands* 

Re: AGI Complexity (WAS: RE: [agi] doubling time watcher.)

2003-02-18 Thread Brad Wyble
 
 Not exactly. It isn't that I think we should give up on AGI, but rather that
 we should be consciously planning for it to take several decades to get
 there. We should still tackle the problems in front of us, instead of giving
 up on real AI work altogether. But we need to get past the idea that every
 AI project should start from scratch and end up delivering a
 human-equivalent AGI, because that isn't going to happen. We just aren't
 that close yet.
 
 The way the software industry has solved big challenges in the past is to
 break them up into sub-problems, figure out which sub-problems can be solved
 right now, solve them as thoroughly as possible, and offer the resulting
 solutions as black boxes that can then become inputs into the next round of
 problem solving. That's what happened with operating systems, and
 development environments, and database systems. If we want to see real
 progress in AI, the same thing needs to happen to problems like NLP,
 computer vision, memory, attention, etc.
 

Inasmuch as I'm a neurophile, I disagree that this is the best approach.  AI 
research has been having a hard time making progress by working on little black boxes 
and then hooking them together.  I think that without the context of the whole entity 
(the top-level AGI), it's harder to think about and implement solutions to the 
black-box problems.  

Evolution certainly didn't work with black boxes.  It made functionally complete 
organisms at each step of the way, and I think AI design can work in the same manner.  
The progress of bottom-up, whole-organism robotics, a la Rod Brooks, is an impressive 
example of what can happen when you attack the whole organism simultaneously.  The 
top-level thinking is grounded in the structure of the representations used by the 
lower-level stuff that actually interacts with the world.  

Now this agrees with most of what you are saying, namely that we can't implement a 
cloud in the sky AGI that thinks in a vacuum.  But it disagrees with you in saying 
that we can't afford to work on these sub-problems without the context of the entire 
organism.

-Brad



---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



Re: AGI Complexity (WAS: RE: [agi] doubling time watcher.)

2003-02-18 Thread Brad Wyble
 
  The nature of neuroscience research doesn't really differentiate 
 between the two at present.  In order to understand WHAT a brain part
 does, we have to understand HOW it, and all structures connected to it
 function.   We need to understand the inputs and the outputs, and that's
 all HOW.
  
 
 I wouldn't say even that much... The exact format of the IO is not
 necessary either but only the general Information X Y and Z is carried
 to here from here. 

We don't even know what the information is, honestly.  Cells fire spikes.  Sometimes 
there are clear behavioral correlates which makes it easy to figure out (place cells), 
usually not.  The spike firing code depends on the function of the underlying 
structures.   We have to know how they represent information to know what information 
is being transmitted.  

Understand, by the way, that there are plenty of computational and mathematical 
specialists working on this, applying plenty of information theoretic approaches.  

 
 I've seen a very interesting report on the reverse engineering of the
 hearing system though I am still months away from finishing my first
 reading of Principles of neuroscience. 

The primary modalities are the easiest systems to decode, because you can control 
precisely what the inputs are.  Those are the first systems to be decoded.

 Yes, that is because they don't constitute a computer.
 I suppose you need a really deep understanding of what computation is to
 see how the cortex is a computer (and hence has all the same properties
 of nonpredictability and such...) 

Well it computes.  So... it's a computer, sure.  Feel free to tell me more.

 
 Does it really? ;)
 I would suggest that the individual cortical columns represent a fairly
 consistent set of adaptive logic gates (of considerable complexity). I
 would further suggest that, as the ferret example showed, the computation
 the cortical region performs depends mostly on where in the logic
 network the inputs are sent and the outputs taken. In this way you can
 take just about any cortical region and get it to do just about anything
 any other region does (except for the extra layers of the occipital
 lobe) just by hooking it up differently...

I don't really have any strong data for or against that hypothesis.  We're not sure 
how brittle columns are, functionally.  Simple neural net models tell us though that 
it's very easy to drastically alter the functional character of a network by changing 
one parameter.  I'll read the ferret example, but I'm guessing that all they found was 
evidence of striation, which doesn't mean the system is working correctly.  However, 
given the resilience of the brain to changes performed at a young age, it is likely 
there was some visual perception.  

 
 Where is the evidence for celular differentiation beyond the 20 or so
 classes of neurons?

I'm not talking just about neuron types, but also about connection patterns of neurons 
between and within areas.  Subregions CA3 and CA1 of the hippocampus are identical 
from a cellular-composition perspective, but their connectivity patterns are so 
different that no one who studies the system would expect them to do the same thing.  
Neurophysiological evidence demonstrates that they do in fact differ in their 
functional characteristics.

 
 Absent this evidence, how can you say that a certain structure of cells
 X, Y, and Z which are arranged in layers 1-6 in cortical region A do
 something significantly different from those in region B?
 

For starters, an autoassociative network performs differently than a heteroassociative 
one.  Or add noradrenergic modulation(or one of 10+ other neuromodulators), or delete 
a subclass of GABA cells, or triple the percentage of stellate cells.  It is easy to 
make a neural network behave differently.  This is easily demonstrated with models.

   

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



Re: [agi] doubling time revisted.

2003-02-17 Thread Brad Wyble
Processing speed is a necessary but far from sufficient condition for AGI design.  The 
software engineering aspect is going to be the bigger limitation by far.  
It is common to speak of the brain as X neurons and Y synapses, but the truth of it 
is that there are layers of complexity beneath the synapses.  Even more important is 
the vast heterogeneity between brain regions.  Even within cortex regions of similar 
architecture (and there are many different types of cortex!), the interconnections 
between regions alone effectively equate to specialized subsystems.

If raw horsepower were the limiting factor, evolution could have easily given us 
massively homogeneous blobs of neural tissue at a cheap engineering/DNA cost.  The 
fact that evolution uses a diverse and heterogeneous neural architecture tells us 
above all that it is *necessary*.  In the long term, evolution doesn't do things it 
doesn't have to.

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



Re: [agi] doubling time revisted.

2003-02-17 Thread Brad Wyble
 Hmmm.  I think the critical problem is neither processing speed, NOR
 software engineering per se -- it's having a mind design that's correct in
 all the details.
 
 Or is that what you meant by software engineering?  To me, software
 engineering is about HOW you build it, not about WHAT you build in a
 mathematical/conceptual sense.

That is what I meant, yes.  The WHAT that you build.  The HOW isn't so much important 
except in that it's efficient enough to get the job done and doesn't leave the authors 
lost.  

I guess my point is that if, as a thought experiment, we woke up tomorrow and found 
all of our CPUs magically running at 10x the speed (memory 10x, etc.), we wouldn't be 
that much closer to an AGI, because we're still working out what to do with the power.  

However, with your experience at Webmind you would know better than I how CPU limits 
constrain AGI design.

 And cheaper computers let individuals in less wealthy nations get online and
 start computing, which adds more brainpower to the mix...
 
 -- Ben G

An excellent, and rarely stated point.

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



Re: [agi] doubling time revisted.

2003-02-17 Thread Brad Wyble

 It is obvious that no one on this list agrees with me.  This does not mean =
 that I am obviously wrong.  The division is very simple.
 
 My position:  the doubling time has been reducing and will continue to do s=
 o.
 
 Their position:  the doubling time is constant.

It is incredibly unlikely that the doubling time is constant.  

But whatever the data show, as Ben says, it is impossible for the decrease in 
doubling time to continue ad infinitum.  It will approach various asymptotic limits 
defined by technology and market pressures (no one is going to spend billions in R&D 
to make computers that cost $.05), and eventually by the laws of physics themselves 
as we approach the atomic and quantum scales.  

We will not have $10, 20 GHz computers in 3 years.



 
 This is not a question of philosophy but only of the data.  What does the data show?  
 If we had a stack of COMPUTER SHOPPER magazines for the past twenty years the question 
 could be decided in short order.  The drop in doubling time starts out very slowly.  
 That is why it is not obvious yet.  By the time it becomes obvious it will be too late.
 
 
 Mike Deering.
 www.SingularityActionGroup.com---new website.
 
 ---
 To unsubscribe, change your address, or temporarily deactivate your subscription, 
 please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]
 

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



Re: [agi] doubling time revisted.

2003-02-17 Thread Brad Wyble

I know this topic is already beaten to death in previous discussions, but I'll throw 
out one more point after reading that we may already have the equivalent power of some 
3000 minds in raw CPU available worldwide.

The aggregate neural mass of the world's population of insects and animals is 
probably at least an order of magnitude greater than that of humanity (and this using 
processing units literally identical to our own; no uncomfortable assumptions of 
computational equivalence are involved).  


And yet they aren't the ones building spaceships.

Putting processing power to good, effective use is a *hard* problem.

Also, integrating the power of multiple units is another hard problem.  I don't recall 
the exact figure, but the vast majority of the brain is interconnective tissue.  
Networking hardware scales nonlinearly with the number of processing units.  Even if 
you had sole dominion over those millions of desktop units and the perfect AGI software 
to run on them, the bandwidth bottleneck would make the thing unusable.  
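
The scaling problem is easy to see with a back-of-the-envelope count (all-to-all 
wiring, purely illustrative):

    def pairwise_links(n):
        """Distinct connections if every unit talks to every other unit."""
        return n * (n - 1) // 2

    for n in (10, 1000, 1000000):
        print(n, pairwise_links(n))   # 45, 499500, 499999500000: quadratic growth

Real networks don't wire all-to-all, of course, but the point stands: connectivity 
cost grows much faster than the number of units.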

-Brad



---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



Re: [agi] Breaking AIXI-tl

2003-02-15 Thread Brad Wyble

 I guess that for AIXI to learn this sort of thing, it would have to be
 rewarded for understanding AIXI in general, for proving theorems about AIXI,
 etc.  Once it had learned this, it might be able to apply this knowledge in
 the one-shot PD context  But I am not sure.
 

For those of us who have missed a critical message or two in this weekend's lengthy 
exchange, can you explain briefly the one-shot complex PD?  I'm unsure how a program 
could evaluate and learn to predict the behavior of its opponent if it only gets one 
shot.  Obviously I'm missing something.

-Brad



---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



Re: [agi] Reply to Bill Hubbard's post: Mon, 10 Feb 2003

2003-02-14 Thread Brad Wyble
 
 There are simple external conditions that provoke protective tendencies in 
 humans following chains of logic that seem entirely natural to us.  Our 
 intuition that reproducing these simple external conditions serve to 
 provoke protective tendencies in AIs is knowably wrong, failing an 
 unsupported specific complex miracle.

Well said.
 
 Or to put it another way, you see Friendliness in AIs as pretty likely 
 regardless, and you think I'm going to all these lengths to provide a 
 guarantee.  I'm not.  I'm going to all these lengths to create a 
 *significant probability* of Friendliness.
 

You're mischaracterizing my position.  I'm certainly not saying we'll get friendliness 
for free, but I was trying to reason by analogy (perhaps in a flawed way) that our best 
chance of success may be to model AGIs on our innate tendencies wherever possible.  
Human behavior is a knowable quality.

I perceived, based on the character of your discussion, that you would be unsatisfied 
with anything short of a formal, mathematical proof that any given AGI would not 
destroy us before giving the assent to turning it on.  If that characterization was 
incorrect, the fault is mine.


-Brad

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Brad Wyble

I don't think any human alive has the moral and ethical underpinnings to allow them to 
resist the corruption of absolute power in the long run.  We are all kept in check by 
our lack of power, the competition of our fellow humans, the laws of society, and the 
instructions of our peers.  Remove a human from that support framework and you will 
have a human that will warp and shift over time.  We are designed to exist in a social 
framework, and our fragile ethical code cannot function properly in a vacuum.

This says two things to me.  First, we should try to create friendly AI's.  Second, we 
have no hope of doing it.  

We will forge ahead anyway because progress is always inevitable.  We'll do as good a 
job as we can.  At some point humans will be obsolete, but that's no reason to turn 
back.

I'm also a strong proponent of the idea that humans can be made much better with the 
addition of enhancements, first through external add-ons  (gargoyle type apparati 
which enhance our minds through UI's that are as intuitively useful as a hammer), and 
later through direct enhancement of our brains.  

In summary, I think we are getting ahead of ourselves in thinking we even have the 
capacity to predict what a friendly AI will be, especially if said AI is 
hyperintelligent and self-modifying.  

-Brad
  


---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Brad Wyble
 
 I am exceedingly glad that I do not share your opinion on this.  Human
 altruism *is* possible, and indeed I observe myself possessing a
 significant measure of it.  Anyone doubting their ability to 'resist
 corruption' should not IMO be working in AGI, but should be doing some
 serious introspection/study of their goals and motivations. (No offence
 intended, Brad)
 
 Michael Roy Ames
 

None taken.  I'm altruistic myself, to a fault oftentimes. 

I have no doubt of my ability to help my fellow man.  I bend over backwards to help 
complete strangers without a care because it makes me feel good.  I am a friendly 
person.

But that word "fellow" is the key.  It implies peers, relative equals.  

I don't think I, or you, or anyone, can expect our personal ethical frameworks to 
function properly in a situation like that a hyperintelligent AI will face.


Tell me this, have you ever killed an insect because it bothered you?


-Brad

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Brad Wyble

I can't imagine the military would be interested in AGI, by its very definition.  The 
military would want specialized AIs, constructed around a specific purpose and under 
their strict control.  An AGI goes against everything the military wants from its 
weapons and agents.  They train soldiers for a long time specifically to beat the GI 
out of them (har har, no pun intended) so that they behave in a predictable manner in 
a dangerous situation.


And while I'm not entirely optimistic about the practicality of building ethics into 
AI's, I think we should certainly try, and that rules military funding right out. 

-Brad

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



Re: [agi] Consciousness

2003-02-11 Thread Brad Wyble

 
 A good, if somewhat lightweight, article on the nature of mind and whether =
 silicon can eventually manifest conscioussness..
 
 http://www.theage.com.au/articles/2003/02/09/1044725672185.html
 
 Kevin

I don't know if consciousness debates are verboten here or not, but I will say that I 
grow weary of Penrose worming his way into every debate/article with his hand-waving 
about quantum phenomena.  Their only application to the debate is that they are 
unknown and therefore a subject of mystery, like consciousness.  The implied inference 
used by many, including this author, is that they are therefore related. 

He makes a good point about the failure of the neuron-replacement thought experiment, 
but slipping "and there is much in quantum physics to suggest it might be" into the 
last paragraph left a bad taste in my mouth.  Ascribing the unknown to quantum 
physics, merely because it is mysterious, is no different than ascribing it to the 
Almighty.


-Brad

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



Re: [agi] AGI morality

2003-02-10 Thread Brad Wyble


 
 There might even be a benefit to trying to develop an ethical system for 
 the earliest possible AGIs - and that is that it forces everyone to strip 
 the concept of an ethical system down to its absolute basics so that it 
 can be made part of a not very intelligent system.  That will probably be 
 helpful in getting the clarity we need for any robust ethical system 
 (provided we also think about the upgrade path issues and any 
 evolutionary deadends we might need to avoid).
 
 Cheers, Philip

I'm sure this idea is nothing new to this group, but I'll mention it anyway out of 
curiosity.

A simple and implementable means of evaluating and training the ethics of an early AGI 
(one existing in a limited FileWorld-type environment) would be to engage the AGI in 
variants of the prisoner's dilemma with either humans or a copy of itself.  The payoff 
matrix (CC, CD, DD) could be varied to provide a number of different ethical 
situations.  

Another idea is that the prisoner's dilemma could then be internalized, and the AGI 
could play the game between internal actors, with the Self evaluating their actions 
and outcomes.
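
To make the payoff-matrix idea concrete, a tiny sketch (the numbers are arbitrary; 
'C' = cooperate, 'D' = defect; the agents here are just stand-in callables):

    # (my_move, their_move) -> (my_payoff, their_payoff)
    standard_pd = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
                   ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

    # A harsher variant where mutual cooperation barely beats mutual defection,
    # probing whether the agent still weighs the other player's outcome.
    harsh_pd = {('C', 'C'): (2, 2), ('C', 'D'): (0, 5),
                ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

    def play(agent_a, agent_b, payoffs):
        a, b = agent_a(), agent_b()        # each agent returns 'C' or 'D'
        return payoffs[(a, b)]

Sweeping through families of matrices like these, against humans or against a copy of 
itself, would give a crude but measurable ethics curriculum.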


-Brad





---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



Re: [agi] A thought.

2003-02-07 Thread Brad Wyble
Philip,
 
 I can understand the brain structure we see in intelligent animals would 
 emerge from a process of biological evolution where no conscious 
 design is involved (ie. specialised non conscious functions emerge first, 
 generalised processes emerge later), but why should AGI design 
 emulate this given that we can now apply conscious design processes, 
 in addition to the traditional evolutionary incremental trial and error 
 methods? 
 
 Cheers, Philip

An excellent question.  I don't think there's any long-term need for AGI to follow 
evolution's path, and there are certainly some benefits to eschewing that approach.  
However, I don't think we're yet at a point at which we can afford to ignore the 
structure of the brain as a rubric.  It seems to make the most sense that, if we are 
going to develop an AGI that we can communicate with and understand, there's no reason 
to start from scratch. 

-Brad



---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



Re: [agi] A thought.

2003-02-06 Thread Brad Wyble

 3)
 Any successful AGI system is also going to have components in two other
 categories:
 
 a) specialized-intelligence components that solve particular problems in
 ways having little or nothing to do with truly general intelligence
 capability
 
 b) specialized-intelligence components that are explicitly built on top of
 components having truly general intelligence capability
 

Are you willing to explain why you put them in this order, or is this available 
elsewhere, perhaps on agiri.org?  I ask because it's my perspective that the brain is 
built the other way around, with specialized-intelligence modules on the bottom and 
AGI built on top of them.

I know you're not trying to build a brain per se, but I'm curious why you chose this 
manner of stacking ASI and AGI.  It's my belief that in the case of our brains, what 
we call AGI is the seamless combination of many ASIs.  Our problem solving looks 
general, but it really isn't.  There's AGI wiring on top to glue it all together, but 
most of the work is being done subconsciously in specialized regions.  


-Brad

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



Re: [agi] A thought.

2003-02-05 Thread Brad Wyble
I've just joined this list, it's my first post.  Greetings all.

1.5-line summary of me: AI enthusiast since 10 yrs old, CS undergrad degree, 3 months 
from finishing a psych/neuroscience PhD. 


Mike, you are correct that an AI must be matched to its environment.  It's likely that 
a sentience optimized to function in an alien environment would behave in a way that 
appeared initially as random noise to a human observer.  

However when you say this:

 nd understandable.  There is ONE general organizational structure that optimizes this 
 AGI for our environment.  All deviations from the one design only serve to make the 
 AGI function less effectively.  Any significant departures

I could not disagree more.  There are an infinite number of ways an AI could be 
designed within a given social/cultural context.  The nature of the designs would 
allow them to provide different solutions, some of which are more or less  effective 
in any particular situation.

Evolution figured this out(forgive my anthropomorphizing), and this is why our minds 
contain many different forms of intelligence.  They all attack any problem in a 
parallel fashion, share their results, and come to a sort of consensus.  

 cease to function in any way we would consider intelligent.  The SAI of the future 
 will be vastly more intelligent, powerful, and amazing.  It will not be 
 incomprehensible.  It will be a lot like us.

It might be incomprehensible if it's too much like us.  One of the dangers of creating 
an AI to study brain function is that the result might be even more inscrutable than 
our brain.  

-Brad Wyble

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]