[agi] Connectionists: ANNOUNCE: PASCAL Visual Object Classes Recognition Challenge 2007

2007-05-04 Thread Eugen Leitl
- Forwarded message from Chris Williams [EMAIL PROTECTED] -

From: Chris Williams [EMAIL PROTECTED]
Date: Mon, 30 Apr 2007 18:10:41 +0100 (BST)
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED], John Winn [EMAIL PROTECTED],
[EMAIL PROTECTED], Mark Everingham [EMAIL PROTECTED]
Subject: Connectionists: ANNOUNCE: PASCAL Visual Object Classes Recognition
Challenge 2007


  PASCAL Visual Object Classes Recognition Challenge 2007

We are running a third PASCAL Visual Object Classes Recognition
Challenge. This time there are more classes (twenty), more challenging
images, and the possibility of more confusion between classes with
similar visual appearance (cars/bus/train, bicycle/motorbike).

As before, participants can recognize any or all of the classes, and there
are classification and detection tracks. There are also two taster
competitions, on pixel-wise segmentation and on person layout (detecting
head, hands, feet).

The development kit (Matlab code for evaluation, and baseline algorithms)
and training data are now available at:

http://www.pascal-network.org/challenges/VOC/voc2007/index.html

where further details are given. The timetable of the challenge is:

* April 2007: Development kit and training data available.

* 11 June 2007: Test data made available.

* 17 Sept 2007, 11pm GMT: DEADLINE for submission of results.

* 15 October 2007: Visual Recognition Challenge workshop (Caltech 256 and
PASCAL VOC2007) to be held as part of ICCV 2007 in Rio de Janeiro, Brazil,
see http://www.robots.ox.ac.uk/~vgg/misc/iccv07/

Mark Everingham
Luc Van Gool
Chris Williams
John Winn
Andrew Zisserman


- End forwarded message -
-- 
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=fabd7936


Re: [agi] mouse uploading

2007-04-28 Thread Eugen Leitl
On Sat, Apr 28, 2007 at 01:15:13PM -0400, J. Storrs Hall, PhD. wrote:
 In case anyone is interested, some folks at IBM Almaden have run a 
 one-hemisphere mouse-brain simulation at the neuron level on a Blue Gene (in 

What they did was run a simplified, unrealistic model. It's still
a great spiking-code AI benchmark, given that the #1 machine on the Top 500
has 16x the performance, putting it into one-realtime-mouse territory,
assuming (a rather large if) near-linear scaling, or at least about eight
slow mice. Or one equally slow Algernon.
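The back-of-envelope scaling above can be made explicit (my own arithmetic on the post's figures; linear scaling is the stated big assumption):

```python
# Back-of-envelope check of the scaling claim (my arithmetic, not IBM's).
half_brain_speed = 0.1    # the Almaden run: half a mouse brain at 0.1x realtime
top500_speedup = 16       # claimed performance ratio of the Top 500 #1 machine

# Option A: one full mouse brain -- 16x the hardware, 2x the neurons --
# assuming (the "rather large if") near-linear scaling:
one_mouse_speed = half_brain_speed * top500_speedup / 2
print(one_mouse_speed)    # ~0.8x realtime: roughly "one realtime mouse"

# Option B: pair up 16 copies of the half-brain run into full brains:
slow_mice = top500_speedup // 2
print(slow_mice)          # 8 mice, each still at 0.1x realtime
```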

 0.1 real time):

Despite 125 us latency (thanks, Sony, for virtualizing even the bloody
GBit Ethernet, and even more so for locking us out from the nVidia
chip, and the entire video memory) the PS3 looks like a great system for 
garage AI:

http://www.netlib.org/utk/people/JackDongarra/PAPERS/scop3.pdf

The nodes are power-hungry, but the 65 nm structure shrink has already occurred,
so second-generation PS3s could be quite interesting.

Of course, one could always wait for Barcelona, not many months away now.



Re: [agi] Re: Why do you think your AGI design will work?

2007-04-25 Thread Eugen Leitl
On Wed, Apr 25, 2007 at 02:02:44PM -0400, Richard Loosemore wrote:

 I am the one who is actually getting on with the job and doing it, and I 
 say that not only is it doable, but as far as I can see it is converging 
 on an extremely powerful, consistent and usable model of an AGI system.

Sounds interesting. Do you have a few papers, or even some results
we could look at?



Re: [agi] How should an AGI ponder about mathematics

2007-04-24 Thread Eugen Leitl
On Tue, Apr 24, 2007 at 07:09:22AM -0700, Eric B. Ramsay wrote:

> The more problematic issue is what happens if you non-destructively
> up-load your mind? What do you do with the original which still

It's a theoretical problem for any of us on this list. Nondestructive
scans require medical nanotechnology.

> considers itself you? The up-load also considers itself you and may
> suggest a bullet.

How is that different from identical twins? I hope you're not suggesting
suicide to your twin brother.



Re: [agi] Low I.Q. AGI

2007-04-17 Thread Eugen Leitl
On Tue, Apr 17, 2007 at 02:37:01PM -0400, Eric Baum wrote:

> Could you be more specific, please? What specific applications do you
> think are high value?

Anything they want to do, and don't have to do just to pay the bills.
I presume many here have loftier aspirations than their dayjob (except
for a lucky few, for whom the two are identical).

The interesting capability threshold in AI is autopoietic automation
a la http://www.molecularassembler.com/KSRM.htm
which is about insect-level for macroscale self-replicators in
an unsupportive environment.



Re: [agi] Low I.Q. AGI

2007-04-16 Thread Eugen Leitl
On Sun, Apr 15, 2007 at 06:41:39PM -0400, Benjamin Goertzel wrote:

> A key point is that, unlike a human, a well-architected AGI should be
> able to easily increase its intelligence via adding memory, adding
> faster processors, adding more processors, and so forth.  As well as

It's not that a human couldn't profit from enhancements; it's just that most
of them would require germline manipulation, or technology quite beyond
what is available today.

> by analyzing its own processes and their flaws with far more accuracy
> than any near-term brain scan...

Of course a point could be made that reconstructing function from
structure (which in principle can be obtained from vitrified
brain sections at arbitrary resolution) is less far off than AI bootstrap.
 


Re: [agi] Low I.Q. AGI

2007-04-15 Thread Eugen Leitl
On Sun, Apr 15, 2007 at 07:40:03AM -0700, Eric B. Ramsay wrote:

> There is an easy assumption of most writers on this board that once
> the AGI exists, its route to becoming a singularity is a sure thing.

The singularity is just a rather arbitrary cutoff on the advancing
horizon of predictability. We're soaking in a process with multiple
positive feedback loops right now. You'll never notice you've
passed the Schwarzschild radius when falling into Sagittarius A* either.

> Why is that? In humans there is a wide range of smartness in
> the population. People face intellectual thresholds that they cannot

But you can't pick out the smart ones and make a few million copies
of them for a nice personal project.

> cross because they just do not have enough of this smartness thing.
> Although as a physicist I understand General Relativity, I really
> doubt that if it had been left up to me that it would ever have been
> discovered - no matter how much time I was given. Do neuroscientists

The dog running for a million years and never discovering GR is the more
canonical example. But it takes a minimal intelligence to start manipulating
intelligence.

> know where this talent difference comes from in terms of brain

You could scan the vitrified brain of a freshly dead and cryonically
suspended expert at arbitrary resolution. The information is certainly in
there.

> structure? Where in the designs for other AGI (Ben's for example) is
> the smartness of the AGI designed in? I can see how an awareness may

Let's say I give you a knob which would slowly mushroom your neocortex,
just inserting new neurons between the existing ones. Do you think you would
notice anything, after a few years?

> bubble up from a design but this doesn't mean a system smart enough to
> move itself towards being a singularity. Even if you feed the system

Evolution is dumb as a rock, yet it produced you, who are capable of producing
this string of symbols, distributed across a planet and understood by similarly
constructed systems. We can certainly do what evolution did, and maybe a bit
more.

> all the information in the world, it would know a lot but not be any
> smarter or even know how to make itself smarter. How many years of
> training will we give a brand new AGI before we decide it's retarded?

How about a self-selecting population of a few trillion? The cybervillage
idiots will never even be a single screen blip.



[agi] dopamine and reward prediction error

2007-04-13 Thread Eugen Leitl

http://scienceblogs.com/developingintelligence/2007/04/the_death_of_a_beautiful_theor.php

The Death of a Beautiful Theory? Dopamine And Reward Prediction Error

Category: Artificial Intelligence • Cognitive Neuroscience • Computational Modeling
Posted on: April 11, 2007 12:07 PM, by Chris Chatham

Very early in the history of artificial intelligence research, it was
apparent that cognitive agents needed to be able to maximize reward by
changing their behavior. But this leads to a credit-assignment problem: how
does the agent know which of its actions led to the reward? An early solution
was to select the behavior with the maximal predicted rewards, and to later
adjust the likelihood of that behavior according to whether it ultimately led
to the anticipated reward. These temporal-difference errors in reward
prediction were first implemented in a 1950s checker-playing program, before
exploding in popularity some 30 years later.
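The temporal-difference error described above can be sketched in a few lines (a minimal toy illustration of TD(0) value learning on an invented chain of states, not the 1950s checkers code):

```python
import random

# Toy TD(0) value learning on a 5-state chain (hypothetical example).
# States 0..4; reaching state 4 yields reward 1, everything else 0.
# The TD error, delta = r + gamma * V(s') - V(s), is the "reward
# prediction error" the dopamine data were claimed to mirror.
N_STATES = 5
GAMMA = 0.9   # discount factor
ALPHA = 0.1   # learning rate

def run(episodes=2000, seed=0):
    rng = random.Random(seed)
    V = [0.0] * N_STATES          # value estimates; V[4] stays 0 (terminal)
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # move right with probability 3/4, otherwise stay put
            s_next = min(s + rng.choice([1, 1, 1, 0]), N_STATES - 1)
            r = 1.0 if s_next == N_STATES - 1 else 0.0
            delta = r + GAMMA * V[s_next] - V[s]   # prediction error
            V[s] += ALPHA * delta                  # credit flows backward
            s = s_next
    return V

if __name__ == "__main__":
    print(run())
```

After training, values rise monotonically toward 1 as states get closer to the reward, which is exactly the credit-assignment behaviour the paragraph describes.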

This repopularization seemed to originate from a tantalizing discovery: the
brain's most ancient structures were releasing dopamine in exactly the way
predicted by temporal-difference learning algorithms. Specifically, dopamine
release in the ventral tegmental area (VTA) decreases in response to stimuli
that are repeatedly presented without a reward - as though dopamine levels
dipped to signal the overprediction (and under-delivery) of a reward.
Secondly, dopamine release abruptly spikes in response to stimuli that are
suddenly paired with a reward - as though dopamine is signaling the
underprediction (and over-delivery) of a reward. Finally, when a
previously-rewarded stimulus is no longer rewarded, dopamine levels dip,
again suggesting overprediction and underdelivery of reward.

Thus, a beautiful computational theory was garnering support from some
unusually beautiful data in neuroscience. Dopamine appeared to rise for items
that predict a reward, to drop for items that predict an absence of
reward, and to show no response to neutral stimuli. But as noted by Thomas
Huxley, in science many a beautiful theory has been destroyed by an ugly
fact.

These ugly facts are presented in Redgrave and Gurney's new NRN article that
is circulating in the field of computational neuroscience. Among the ugliest:

1) Dopamine spikes in response to novel items which have never been paired
with reward, and thus have no predictive value.

2) The latency and duration of dopamine spikes are constant across species,
experiments, stimulus modality and stimulus complexity. In contrast, reward
prediction should take longer to establish in some situations than others -
for example, reward prediction may be slower for more complex stimuli.

3) The dopamine signal actually occurs before animals have even been able to
fixate on a stimulus - this calls into question the extent to which the signal
is mechanistically capable of serving the reward prediction error function.

4) VTA dopamine neurons fire simultaneously with (and possibly even before)
the completion of object recognition in the infero-temporal cortex, and
simultaneously with visual responses in striatum and subthalamic nucleus. It
seems unlikely that VTA can perform both object recognition and reward
prediction error.

5) The most likely visual signal to these VTA neurons may originate from
superior colliculus, a region that is sensitive to spatial changes but not
those that would be involved in object processing per se.

6) Many of the experiments showing the apparent dopaminergic-coding of reward
prediction error had stimuli that differed not only in reward value but also
in spatial location. Therefore, the data in support of reward prediction error
are confounded with hypotheses involving spatial selectivity.

Redgrave & Gurney suggest that VTA dopamine neurons fire too quickly and with
too little detailed visual input to actually accomplish the calculation of
errors in reward prediction. They advocate an alternative theory in which
temporal prediction is still key, but instead of encoding reward prediction,
dopamine neurons are actually signalling the reinforcement of
actions/movements that immediately precede a biologically salient event.

To understand this claim, consider Redgrave & Gurney's point that most
temporally unexpected transient events in nature are also spatially
unpredictable. The theory is basically that a system notes its own
uncertainty, via the spatial reorientation in the superior colliculus, and
attempts to reduce that uncertainty by pairing a running record of previous
movements with the unexpected event.

Although this alternative theory is intriguing, there is not an abundance of
evidence supporting it: it seems to me more like a pastiche of fragments from
the apparently broken reward prediction error hypothesis.

We should also be cautious in discarding any theory as powerful as the reward
prediction error hypothesis on the basis of null evidence: in this case, we
simply don't know how reward prediction error could be calculated so quickly.
This kind 

Re: [agi] My proposal for an AGI agenda

2007-04-09 Thread Eugen Leitl
On Mon, Apr 09, 2007 at 02:02:33PM -0400, Philip Goetz wrote:

 Samantha, you need to provide me with references if you want me to
 believe this.  No LISP compiler has ever been optimized to any serious

I've heard different. Google seems to agree somewhat:
http://www.google.com/search?hl=en&sa=X&oi=spell&resnum=0&ct=result&cd=1&q=LISP+numerical+performance&spell=1

 degree AFAIK.  The nature of the language makes it difficult to write
 efficient code in the first place.  And I suspect that these many
 problem domains don't include any that involve numeric calculations.

Of course you won't get the numerical libraries of Fortran...



Re: [agi] Growing a Brain in Switzerland

2007-04-05 Thread Eugen Leitl
On Thu, Apr 05, 2007 at 11:34:09AM +0200, Shane Legg wrote:

> > The wiring is not determined by the genome, it's only a facility
> > envelope.
>
> Some wiring is genetic, and some is not.  On a large scale genes

No, the wiring is not genetically determined. There is no specific gene telling
cell 0x093785 to connect to cell 0x3464653, and a similar gene for every other
cell. In fact, there is no individual cell addressing, only cell-type addressing
and relatively diffuse addressing by diffusion gradients.

> regulate how one part of the neocortex is wired to another part (there are
> even little

I've heard of neuromorphogenesis in embryos, yes.

> tiny crawler things that do the wiring up during the prenatal development
> of the brain that sound totally science fiction and very cool, though the
> system isn't exactly perfect as they kill quite a few neurons when they
> try to crawl around the brain hooking all the wiring up).

You don't know this is a defect, and not by design. Apoptosis is frequently
a deliberate mechanism.

> At a micro scale each of the different types of neurons has a different
> dendritic tree structure (which is genetic), and lies in particular

Cell type, yes. Individual cell, no.

> layers of cortex (which is also genetic), and various other things.  In
> short, it's not really genetic or due to adaption, it's a complex mixture
> of genetics and adaption that produces the wiring in the adult brain.

I've never claimed otherwise. But the genome is not a noticeable
source of complexity in the adult individual. The fertilized egg in
the womb context maps to an embryo, which maps to a fetus, which
maps to a neonate, which is capable of directly extracting information from
an appropriately structured environment. There's no gene which tells
you how to integrate, or how to fix a flat tire.

> > The models are not complex. The emulation part is a standard
> > numerics package.
>
> Heh.  Come to Switzerland and talk to the Blue Brain guys at EPFL...
> Their model is very complex and definitely not simply some standard

I agree it's not off the shelf. It's still nothing you won't
find in Biophysics of Computation and the primary literature. It's completely
pedestrian physics, and rather unexciting programming (by the standards of AI
folks, numerics is really complicated and somewhat of a black art).

> numerics package.  They are working in collaboration with something like
> 400 researchers all around the world and the project will be going for at
> least several decades.  Simple it is not.

It is not only simple, it is completely trivial in comparison to
a hypothetical AI designed by people. The complexity in the behaviour
comes from the neuroanatomy, not the code. The code is vanilla numerics.



Re: [agi] Growing a Brain in Switzerland

2007-04-05 Thread Eugen Leitl
On Thu, Apr 05, 2007 at 12:20:54PM +0200, Shane Legg wrote:

> That two specific neurons are not wired together due to genetics does not
> mean that there is no wiring that is genetically determined.  The brain
> contains a huge amount of wiring information that comes from the genes.

If you look at how many bits are required to describe the brain structure
(some 10^19, taken from those 10^17 sites, a la 100 bits each), and at
those few gigabases of the genome (a mere 10^9 bits), there's still a gap of
10^10 which is generated endogenously, including interaction with the
environment. These 10^10 bits (don't get hung up on the exact number) did
not come from the genome.
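Spelled out, the gap arithmetic looks like this (the figures are the post's order-of-magnitude estimates, nothing more):

```python
import math

# The post's order-of-magnitude figures (estimates, not measurements):
sites = 1e17           # relevant sites in the brain
bits_per_site = 100    # ~100 bits to characterize each site
brain_bits = sites * bits_per_site    # ~1e19 bits of structure
genome_bits = 1e9                     # a few gigabases, order 1e9 bits

# Orders of magnitude that must be generated endogenously
# (development plus interaction with the environment):
gap = math.log10(brain_bits / genome_bits)
print(gap)
```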
 
> I forget the exact number, but I think something like 20% of the human
> genome describes the brain.  If somebody is interested in building a

No, it codes for the brain tissue. That's something very different from
describing the brain. See
http://www.amazon.com/Birth-Mind-Creates-Complexities-Thought/dp/0465044069/ref=pd_bbs_sr_1/002-8825487-1287227?ie=UTF8&s=books&qid=1175769947&sr=8-1
for the difference.

> > going for at least several decades.  Simple it is not.
> >
> > It is not only simple, it is completely trivial in comparison to
> > a hypothetical AI designed by people. The complexity in the behaviour
> > comes from the neuroanatomy, not the code.
>
> What you said was, the models are not complex.

No, I didn't say that. What I said was:

'The models are not complex. The emulation part is a standard numerics
package. The complexity comes directly from scans of neurons. The resulting
behaviour is complex, but IMHO not hopelessly so. I'm interested in
automatic optimization, which is based on feature and function abstraction,
and co-evolution of machine/representation. This is a much harder task
than mere brute-force simulation -- however, much easier than classical
AI.'

You remember the thread: complexity in the code versus complexity in the
data? The Blue Brain complexity is all in the data. This is very different
from classical AI, which tends to obsess over lots of clever
algorithms, but typically sweeps the data (state) under the carpet.

You can extract more complexity from fewer bits using more complex
transformations, up to a point. The complex transformations take resources,
and add up atomic delays, so the entire evolution is slower. You can
also generate complexity from completely braindead transformations
on a lot of sites aligned on a regular lattice, a la crystalline
computation: http://people.csail.mit.edu/nhm/cc.pdf
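The "braindead transformations on a regular lattice" point can be illustrated with an elementary cellular automaton; the sketch below is generic Rule 110, not code from the cited paper:

```python
# Rule 110: a one-dimensional, two-state, nearest-neighbour update rule.
# The transformation is as "braindead" as they come (an 8-entry lookup
# table), yet on a regular lattice it generates complex -- in fact
# Turing-complete -- behaviour.
RULE = 110

def step(cells):
    """One synchronous update of the whole lattice (periodic boundary)."""
    n = len(cells)
    return [
        (RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1)
                  | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def evolve(width=64, steps=32):
    """Run from a single seed cell; return the full space-time history."""
    cells = [0] * width
    cells[width // 2] = 1
    history = [cells]
    for _ in range(steps):
        cells = step(cells)
        history.append(cells)
    return history

if __name__ == "__main__":
    for row in evolve(width=64, steps=16):
        print("".join(".#"[c] for c in row))
```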

> What they are modeling IS the neuroanatomy and its behaviour.

We're in violent agreement here. I wish we could get Markram or someone
from his group to the Oxford seminar.



Re: [agi] Growing a Brain in Switzerland

2007-04-05 Thread Eugen Leitl
On Thu, Apr 05, 2007 at 02:03:32PM +0200, Shane Legg wrote:

> I didn't mean to imply that all this was for wiring, just that there is a
> sizable amount of information used to construct the brain that comes from
> the genes.

No disagreement, apart from "sizable". A few gigabases don't give you 10^10
bits. And that's about the last time I'm going to mention it.

We're totally on the same page, but for some reason you're extremely
literal-mindedly focusing on some isolated phrases instead of what
I mean. It is rather obvious what I mean, if you stop looking
at isolated phrases and taking word meanings in an absolute way.
(Yes, Blue Brain is about as complicated and large-scale as simulations
come. No, it is completely trivial in code size as far as the
classic AI school is concerned. The classic AI school doesn't
consider cable theory, calcium dynamics or the Nernst-Planck
equations nontrivial.)

> If you want to model the brain then this is the kind of information that
> you are going to have to put into your model.

Not if you're looking at short-range processes. The genome has zero
activity on the second scale, and only very little activity on the minute
scale. Here you can look at the anatomy, and completely ignore how
it came into being, and what it does on the hour-to-day scale (which
is where the genome comes in).

You have to make the model a lot more complex if you just start with
a fertilized egg in machina.

> Why does the optic tract project to the lateral geniculate nucleus, the
> pretectum and the superior colliculus and not other places in the brain?
> Why does the lateral geniculate body project to striate and not other
> parts of cortex?  Why does the magnocellular pathway project to layer
> 4Calpha, while the parvocellular pathway projects to 4A and 4Cbeta?  Why
> does the cerebral cortex project to the putamen and caudate nucleus, but
> not the subthalamic nucleus?  I could list pages and pages of examples of
> brain wiring that you were born with and that came from your genetics,
> it's basic neuroscience.

We're still in vigorous agreement. In fact, in C. elegans each
neuron *does* have an address, and the neural network *is* completely
deterministic and genetically wired. But that's a 300-cell network
in a 1 kCell animal.

> I don't claim that all wiring in the brain is genetic, or even a sizable
> proportion of it.

You sound about as frustrated by this exchange as I am.

> What I am claiming is that the brain wiring that is genetic is non-trivial
> and cannot be ignored if somebody wants to build a working brain
> simulation.

On a time scale of seconds to minutes you can absolutely ignore the
genetic component. You absolutely need the genetic contribution (including
full cell dynamics and migration, and molecular target recognition) if
you start from nowhere.

> > You remember the thread: complexity in the code versus complexity in
> > the data? The Blue Brain complexity is all in the data. This is very
> > different from classical AI, which tends to obsess over lots of clever
> > algorithms, but typically sweeps the data (state) under the carpet.
>
> Yes, I agree, it's in the data rather than the code.  But I don't accept
> that you can say that their model is simple.

As neural emulations go, in terms of lines of code, it's probably
the largest and most complex there is. In terms of software project
complexity, as measured in MLoCs, especially if you exclude the
numerics libraries, no. Compared with a classical AI approach
to human cognition, it's effectively zero complexity. Yet Blue Brain
(with quite a few extensions) is within touching distance of creating
the full cognition of an adult, if loaded with the full data set
on appropriate hardware.



Re: [agi] Growing a Brain in Switzerland

2007-04-04 Thread Eugen Leitl
On Tue, Apr 03, 2007 at 03:24:16PM -0700, Matt Mahoney wrote:

 Kind of like modeling a microprocessor using finite element analysis.  It's
 good for studying transistor design but not for studying database design.

The brain, unlike the computer, is a machine built on
emergence and self-organisation. In order to understand the higher-level
phenomena sufficiently to tell salient from nonsalient things
(and to build a reasonable representation/hardware pair) you need
to plug the digitized neuroanatomy into the simulation and do
a machine experiment. You can do things in machina that you can't do in
vivo.
 
 Does Blue Brain really need this level of detail to study intelligence?

Absolutely. And I have a feeling this project will not turn out a dud,
unlike so many others.



Re: [agi] Growing a Brain in Switzerland

2007-04-04 Thread Eugen Leitl
On Wed, Apr 04, 2007 at 08:23:37AM -0700, David Clark wrote:

 Although some emergent and self-organization surely occurs in our brains,

What I meant is that the trouble with biology is that it's difficult to
analyse without having access to the whole picture, unlike human designs,
which can be understood analytically from the sum of their parts.

The learning part is not really relevant, because typically you would
plug in more or less mature brain tissue. (Of course, by looking at
structure/function diffs in less mature systems you can see how
the knowledge extraction from the environment is done; synapse pruning
is a hint.)

 how do you reconcile the fact that babies are very stupid compared to
 adults?  Babies have no less genetic hardware than adults but the difference

The wiring is not determined by the genome; it's only a facility envelope.
Getting the genome into the picture will be necessary at some point, but
the current simulations are sorely pressed just by looking at the function
of static hardware (seconds vs. minutes/hours/days).

 in intelligence is gigantic.  I contend the difference is that adults have
 20+ years of learning from other intelligent people and babies do not.  I
 don't see any evidence that would support a claim that adult level
 intelligence emerges by itself in adults.  This evidence seems to be
 lacking for both humans and computer AIs.

Babies come with enough functionality onboard to be able to
extract knowledge from a suitably structured/supportive environment.
As such they're a good model for an artificial infant, but that is not
the scope of the Blue Brain project, as far as I understand it.
 
 If a baby never got any more intelligent than just after it was born, would
 you call it intelligent?  I am not saying babies, just born, exhibit NO
 intelligence but would you say that baby level intelligence is good enough
 for an AGI to be called intelligent?

I would say that if you could mimic human infant development
for a few years, you'd get one damn useful AI. If you can make this
scale to an adult, the AI problem is solved.
 
 I know your project (or AGI idea) is based on some form of brain simulation
 but making blanket statements about currently unknown (unproven) issues
 doesn't seem warranted by the facts.

I don't have an AI project, I'm interested in individually accurate numerical
models of animals, including people. It's about removing some of the limits
to the human condition. The AI part is only a side effect of that.
 
 Why can't database design level intelligence be modeled and studied at a
 level that our computers can efficiently do, without the need to model the

Our computers are pretty pathetic, and our programmers are even more so.

 incredible complexity of a human brain?  If I was creating an accounting

The models are not complex. The emulation part is a standard numerics
package. The complexity comes directly from scans of neurons. The resulting
behaviour is complex, but IMHO not hopelessly so. I'm interested in
automatic optimization, which is based on feature and function abstraction,
and co-evolution of machine/representation. This is a much harder task
than mere brute-force simulation -- however, much easier than classical
AI.

I'm not religious about this; it's just that this appears to me to be barely
doable, whereas classical AI is quite beyond what mere human designers and
programmers can do. Just because you're intelligent doesn't mean you're
intelligent enough to understand how you're intelligent.

 program, I wouldn't need a model of the human brain to make sure that the
 debits equal the credits.  What makes intelligence ineligible for a solution
 using existing computer techniques?

It's the people. Humans can't handle complexity very well. For some
reason (no idea why) there was a school that thought that human experts
knew just what they were solving, and could externalize that knowledge
into a rule-based design that computers could execute. That approach was
pretty much a complete debacle (the experts neither knew how they
were doing it, nor could they externalize that knowledge in a representation
that was useful for encoding in a classical machine).

-- 
Eugen* Leitl <http://leitl.org>
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] small code small hardware

2007-03-29 Thread Eugen Leitl
On Thu, Mar 29, 2007 at 08:42:53PM +0800, YKY (Yan King Yin) wrote:

I believe that a minimal AGI core, sans KB content, may be around 100K
lines of code.

I don't know what 'KB' content is. But the kLoCs are irrelevant, because
the data is where it's at, and it's huge.
 
What are other people's estimates?

10^17 sites, 10^23 OPs/s total. The transformation
complexity might very well be 100 kLoC, or even 10 kLoC.

But that code is worthless without the magic data.



Re: [agi] small code small hardware

2007-03-29 Thread Eugen Leitl
On Thu, Mar 29, 2007 at 09:16:09AM -0400, Mark Waser wrote:

I'll go you one better . . . . I truly believe that the minimal AGI
core, sans KB content, is 0 lines of code . . . .

In theory, a TOE can be quite small. In theory, you could have
a low-level physical simulation that happens to be an intelligent
system. In practice, however... As they say: in theory, there is no
difference between practice and theory. In practice, there is.
 
Just like C compilers are written in C, the AGI should be entirely
written in its knowledge base (eventually) to the point that it can

What is the knowledge base between your ears written in?

understand itself, rewrite itself, and recompile itself in its

What makes you think the system can ever understand itself, whatever
that term means exactly? Evolution doesn't understand anything, but
as an optimization process it produced us from the prebiotic primordial
soup, which is nothing to sneeze at.

entirety.  The problem is bootstrapping to that point.

Since nobody here knows, how about evolution? Empirically validated
is not good enough for you?
 
Personally, I find all of these wild-ass guess-timates and opinion
polls quite humorous.  Given that we can't all even agree on what an

It's okay as long as everybody agrees they're wild-ass guesstimates.

AGI is, much less how to do it, how can we possibly think that we can

I dunno about you, but I see a general intelligence (admittedly, not much
of an intelligence) every morning in the shaving mirror. As I said, you'll
know AGI when it hits the job market and the news.

accurately estimate its features?



Re: [agi] small code small hardware

2007-03-29 Thread Eugen Leitl
On Thu, Mar 29, 2007 at 09:35:57AM -0400, Pei Wang wrote:

 I have to disagree. The following is adapted from my chapter in the
 AGI collection 
 (http://www.springer.com/west/home/generic/order?SGWID=4-40110-22-43950079-0):

I have to disagree with your disagreement. Provably optimal computational
substrates and representations can be optimized by co-evolution. This
process is open-ended.
 
 *. Complete self-modifying is an illusion. As Hofstadter put it, below
 every tangled hierarchy lies an inviolate level [in GEB]. If we
 allow a system to modify its meta-level knowledge, i.e., its inference rules
 and control strategy, we need to give it (fixed) meta-meta-level knowledge
 to specify how the modification happens. As flexible as the human mind

Stochastic optimization doesn't have any blinkers. Of course, it
takes a population, because most modifications are fatal.

 is, it cannot modify its own laws of thought.
 
 *. Though high-level self-modifying will give the system more flexibility, 
 it
 does not necessarily make the system more intelligent. Self-modifying at

If intelligence is information-processing capability, then any process that
maximizes the ops/g and ops/J will also optimize for intelligence.

 the meta-level is often dangerous, and it should be used only when the
 same effect cannot be produced in the object-level. To assume the more
 radical the changes can be, the more intelligent the system will be is
 unfounded. It is easy to allow a system to modify its own source code,
 but hard to do it right.

Yes, it took evolution a while before it learned to evolve. ALife hasn't
reached that first milestone yet.
 
 Even if you write a C compiler in C, or a Prolog interpreter in Prolog
 (which is much easier), it cannot be used without something else that
 understands at least a subset of the language.

The whole language metaphor in AI is a crock. It makes so many smart
people go chasing wild geese up blind alleys.



Re: [agi] small code small hardware

2007-03-29 Thread Eugen Leitl
On Thu, Mar 29, 2007 at 04:46:59PM +0200, Jean-Paul Van Belle wrote:

Some random thoughts.
 
Any RAM location can link to any other RAM location so there are more
interconnects.

Not so fast. Memory bandwidth is very limited (~20 GByte/s current,
GDDR3/GPUs are much better, agreed), and the access
pattern is not flat. Predictable and local accesses are preferred,
whereas worst case can be as low as 5% of advertised peak.

The gap between CPU speed and memory bandwidth is also growing
exponentially (a straight line on a semi-log plot).
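The gap between predictable and worst-case access patterns can be illustrated with a toy benchmark; absolute numbers depend entirely on the machine (the 5% worst-case figure is not reproduced here), only the qualitative streaming-vs-gathering gap matters:

```python
# Sequential (prefetch-friendly) vs. random (cache-hostile) memory access.
# Same data, same reduction; only the access pattern differs.
import time
import numpy as np

n = 10_000_000
data = np.arange(n, dtype=np.int64)

t0 = time.perf_counter()
s_seq = data.sum()                  # linear scan: hardware prefetch wins
t_seq = time.perf_counter() - t0

idx = np.random.permutation(n)      # shuffled access order
t0 = time.perf_counter()
s_rnd = data[idx].sum()             # gather: mostly cache misses
t_rnd = time.perf_counter() - t0

print(f"sequential: {t_seq:.3f}s, random-order: {t_rnd:.3f}s")
```

On typical commodity hardware the gathered version runs several times slower, which is the "access pattern is not flat" point above in miniature.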

However, the limited fan-out factors are not a problem with
active media and even a simple packet-switched signalling mesh.
Embedded DRAM and a wide-bus ALU (with in-register parallelism),
meshed up with a packet-switched signalling fabric, is the bee's
knees -- but you can't buy these yet.
 
The structure of RAM can be described very succinctly.

RAM alone doesn't compute. Try hardware CAs. These are pretty
regular, too, and actually pack a lot of punch, especially in 3d.
(In fact, the best possible classical computational substrate is
a molecular-cell CA).
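To make "regular, yet packing a lot of punch" concrete, here is a minimal 1-D cellular automaton; Rule 110 is used as a stand-in (my choice, not from the post) because it is known to be Turing-complete despite its trivially regular structure:

```python
# Minimal 1-D cellular automaton. The update rule is a single byte;
# all the "punch" comes from iterating it over a regular lattice.
def step(cells, rule=110):
    n = len(cells)
    out = []
    for i in range(n):
        # 3-cell neighbourhood (left, self, right), wrap-around ends
        pattern = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> pattern) & 1)   # look up the new state bit
    return out

cells = [0] * 31 + [1] + [0] * 31   # single live cell in the middle
for _ in range(16):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)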
 
A CPU has 800 million transistors - a much more generous instruction
set than our brain.

I have absolutely no idea what you mean by this. I'm hazarding
that you yourself don't, either.
 


Re: [agi] small code small hardware

2007-03-29 Thread Eugen Leitl
On Thu, Mar 29, 2007 at 08:40:02AM -0700, David Clark wrote:

I would like to know what computer executes data without code.  None
that I have used since 1976 so please educate me!

The distinction is a bit arbitrary. Machine instructions are nothing
but data to the CPU.

But the lack of distinction between code and data in biological
tissue processing is significant. Such systems are best seen as
state, and their evolution (in the state-space sense) as an iterative
transformation of that state.

Considering the memory bottleneck, you don't get a lot of refreshes/s
on a typical 10^9-word node. With current technology you get about
10 MBytes/node if you want to match the refresh rate of neuronal circuitry,
which is not a lot of state per node, so you need an awful lot of nodes.
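The node-sizing arithmetic can be made explicit; the bandwidth, refresh-rate, and total-state figures below are assumptions consistent with the numbers quoted earlier in the thread, not measurements:

```python
# How much state one node can keep refreshed at a neural timescale,
# assuming the ~20 GB/s memory bandwidth mentioned earlier.
bandwidth = 20e9      # bytes/s a node can stream from DRAM (assumed)
refresh_hz = 1000     # full-state updates per second, ~neural timescale (assumed)

state_per_node = bandwidth / refresh_hz   # bytes refreshable per node

total_state = 1e18    # assumed total simulation state, ~10^18 bytes
nodes = total_state / state_per_node

print(f"{state_per_node / 1e6:.0f} MB refreshable per node")
print(f"{nodes:.0e} nodes needed")
```

The order of magnitude, tens of MB of usable state per node, is why the post concludes you need "an awful lot of nodes".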
 
Even though some state designs can put logic into data instead of
program code, and even though program code is stored as data, they
aren't the same.

The distinction between storage and processing, between code and
data, is arbitrary. It's a hallmark of a particular technology, and a
rather pitiful technology, which goes back directly to the Jacquard loom.
We're stuck in a bad optimum for the time being, but luckily people have
started running into enough limitations (the recent multicore mania is a
symptom) that they're willing to abandon the conventional approach,
because it no longer offers enough ROI, especially long-term.

To estimate given insufficient knowledge is problematic; to estimate
given NO knowledge produces useless conjectures.

Yes. This is why I stick to what biology can do in a given volume, because
it's the only working instance we can analyze.



[agi] Why evolution? Why not neuroscience?

2007-03-23 Thread Eugen Leitl

http://www.greythumb.org/blog/index.php?/archives/193-Why-evolution-Why-not-neuroscience.html#extended

Tuesday, March 20. 2007

Why evolution? Why not neuroscience?

Opinion

I was reading about Numenta's NuPIC platform today, and it occurred to me
that there are really two big promising directions in machine learning/AI
today: evolutionary computation and brain reverse-engineering. Some readers
might be curious as to why I'm working on evolutionary computation and not
neuroscience-based approaches. I thought of a good metaphor to explain, as
well as a few practical reasons.

First of all, my intent is not to say "everyone else's work sucks and my
approach rules!" The prevalence of that kind of counterproductive
ego-parading is one of the things that I don't like about the AI field. The
AI field eats its young. It's one of the reasons I seldom use the term AI
to refer to my own work, with the other being the history of over-the-top
hype associated with it. (If you're wondering... no, I don't think that
AutoCore is going to make people immortal or bring about the singularity. It
might help us diagnose diseases though, or find oil, or control robots, or
have really challenging immersive game characters.)

But I do figure that people might be curious, especially since reverse
engineering the brain is definitely the approach that garners the most press
attention. The field seems like it has a funny bias against evolution, but I
digress.

So here's a metaphor. It's not a perfect metaphor, and as I'll explain later
it sounds a little more critical of the other approaches than I really intend
to be. But it is a clear metaphor, which is why I use it.

Imagine that you're trying to figure out what fire is. To me, the brain
reverse engineering approach is like sitting around and meticulously
recording what the fire is doing. "Ok... a blue flicker is followed by two
brighter orange flickers whose cones have the following shape..." By
contrast, I see the evolutionary approach as being more like "Fire happens
when oxygen combines with reducing agents. Heat a reducing agent to a high
enough temperature in the presence of oxygen, and you get a fire." I would
then add "who cares how the fire flickers?"

Like I said, this is a little more uncharitable than I want to be. Staring at
flames is not likely to get you anywhere at all when it comes to
understanding fire, but studying the brain may indeed get you somewhere when
it comes to AI. The metaphor is this: the algorithms, structures, and
biological processes of the brain are the flames, while evolution is the
process of oxidation-reduction that produced them. Studying individual
cross-sections of the totality of what the brain is doing might teach you a
cool algorithm. But I really don't think that the ultimate source of
intelligence is a single algorithm.

I think that the ultimate source of intelligence is the process that
generates algorithms. That process is evolution, in all its self-adaptive
recursive evolvability-evolving glory. Biological evolution is a
process-generating process; an algorithm that generates algorithms (that
generates algorithms that...).

So there you have it. That's why I think that evolutionary approaches are
important. Evolutionary approaches have the potential to generate algorithms,
including but not limited to algorithms like hierarchical temporal memory, on
demand.

Last but not least, there are practical reasons. Approaches like the Numenta
NuPIC platform still require a human engineer to do a fair amount of work
defining an HTM to solve a particular problem. I think that evolutionary
approaches should be able to do even more of the engineering for you, leading
to systems with really really simple interfaces that require the human
programmer to do very little work. AutoCore (to be released in alpha soon!)
is almost there I think; you can do adaptive image classification with it in
188 lines of C++ code including comments and the code required to load the
training images. But I think we can go farther still (and some problems
are not going to be as straightforward as simple image classification!). I
envision a system where you can design a problem representation in some kind
of high-level meta-language or even a graphical interface and then just hit
start and let it run.



Re: [agi] Why evolution? Why not neuroscience?

2007-03-23 Thread Eugen Leitl
On Fri, Mar 23, 2007 at 12:27:55PM -0400, Richard Loosemore wrote:

 Andi's and Pei's comments bring me to a question I was just about to ask.
 
 What exactly do people mean by evolutionary computation anyway?

I don't know what other people mean by it, but I mean by it anything
which involves imperfect replication and selection.

This is a pretty wide envelope, since it can include human cognition
(neural Darwinism among neocortex modules) and mature evolutionary
systems, which are quite beyond the simple framework above.
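Under that definition, the whole mechanism fits in a few lines; here is a toy sketch of "imperfect replication and selection" with an arbitrary bit-string fitness (everything here is illustrative, not a claim about any real system):

```python
# Imperfect replication + selection, in its most minimal form:
# evolve a population of bit-strings toward an arbitrary target.
import random

random.seed(1)
TARGET = [1] * 32

def fitness(genome):
    # how many bits match the (arbitrary) target
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # imperfect replication: each bit may flip during copying
    return [g ^ (random.random() < rate) for g in genome]

pop = [[random.randint(0, 1) for _ in range(32)] for _ in range(50)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == 32:
        break
    survivors = pop[:10]                              # selection
    pop = [mutate(random.choice(survivors)) for _ in range(50)]

print("generation:", gen, "best fitness:", fitness(pop[0]))
```

Note there is no crossover and no linear-string requirement baked into the definition; mutate() and the selection step are the entire envelope being described.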
 
 I always thought I knew what they meant, and it is the same meaning that 
 Andi and Pei are alluding to above (Koza, genetic algorithms, etc):  an 
 approach that stays pretty close to the kind of evolution that DNA likes 
 to do:  with crossover, mutation, etc, operating on things that are 
 represented as strings of symbols.

You've got a genotype, which maps to phenotype, which then gets selected
for, but that's also not very constraining.
 
 As such, I agree with Pei's critique exactly:  evolution is just not a 
 very good metaphor.
 
 But here is my worry:  are people starting to use evolutionary in a 
 more general sense, meaning something like a generalized form of 
 evolution that is really just adaptation?  Would some kinds of NN system 
 be evolutionary?  Would evolution be the right word for something that 

It can be.

 tries new possibilities all the time and uses *some* kind of mechanism 
 for strengthening the things that work?  Even if it did not use DNA-like 
 strings of symbols?

What do you understand to be DNA-like? Linear strings? No, it doesn't have
to be a linear string. 
 
 If they were doing this, I'd have to pay more attention.
 
 I don't think this is happening, but I can't be sure.
 
 If any one has any more perspective on this, I'd be interested.



Re: [agi] My proposal for an AGI agenda

2007-03-23 Thread Eugen Leitl
On Fri, Mar 23, 2007 at 08:36:14AM -0700, David Clark wrote:

 I have created a system that makes Self modifying code and I have a design
 that will make use of self modifying code.  This is exactly why I created
 this language in the first place.

Interesting. Why did you feel the need to improve on SEXPRs? How does
it improve on the Common Lisp model?
 


Re: [agi] Why evolution? Why not neuroscience?

2007-03-23 Thread Eugen Leitl
On Fri, Mar 23, 2007 at 11:29:22AM -0400, Pei Wang wrote:

 In general, we should see intelligence and evolution as two different
 forms of adaptation. Roughly speaking, intelligence is achieved

What about intelligence that works evolutionarily? I agree that Edelman's
thesis is not well validated, but to my knowledge at least it has
not been ruled out.

 through experience-driven changes (learning or conditioning)
 within a single system, while evolution is achieved through
 experience-independent changes (crossover or mutation) across

Of course evolutionary systems can and do carry memory across 
generations. They're not ahistorical.

 generations of systems. The intelligent changes are more
 justifiable, gradual, and reliable, while the evolutionary changes

What if human intelligence is Darwin-driven?

 are more incidental, radical, and risky. Though the two processes do
 have some common properties, their basic principles and procedures are
 quite different.



Re: [agi] My proposal for an AGI agenda

2007-03-23 Thread Eugen Leitl
On Fri, Mar 23, 2007 at 08:23:51AM -0700, David Clark wrote:

 I have a Math minor from University but in 32 years of computer work, I
 haven't used more than grade 12 Math in any computer project yet.  I have
 produced thousands of programs for at least 100 clients including creating a
 language/database that sold over 30,000 copies.  I have done system
 programming and lots of assembly language as well as major applications in
 PowerBuilder and Visual FoxPro.  Everyone is entitled to their opinion but
 if Math wasn't required at all in all my career, I fail to see how it is
 necessary for the creation on an AGI or any other major programming effort.

I agree that formal mathematics is probably not a useful tool for
AI, but neurons do process information by physical processes which
sometimes have a close match to human concepts (delay, correlation,
multiply). Of course the constraints of biological systems are very
much like engineering (power footprint, structure minimax) and not
at all like mathematics. Mathematics typically doesn't like to 
deal with relativistic latency, for instance. 
 
 A CPU executes instructions including assignment, conditionals and simple
 looping.  How can a language not have these things and still be useful?

Does the human brain tissue have assignments, conditionals, and simple looping?
I don't think it does, and yet it is good enough that I can understand your
message (at least I think so) by the feat of Natural Intelligence.

If you look at provably optimal computing substrates, they're very
far removed from what you would consider computation. A classical
approach looks a lot like a silicon compiler, only on a 3d lattice.
Signal timing and gates are discrete, though, which removes any parasitic/
dirt effects from the design.

Less conventional computation would be based on an Edge of Chaos dynamic
pattern, which self-organises, homeostates and adapts. 

 I made a point about the efficiency of creating high level languages in the
 language of the AGI.  I argue that this causes a performance hit of up to
 100x or more depending on the complexity of the code. (less complex means a
 bigger performance hit)  With the tools I have put into my language, higher
 level functional or other languages can easily be made and then compiled
 into the native language for huge cycle savings.  This can't be said for all
 the languages I have looked at so far.

Does your language allow you to do asynchronous message passing (no shared
memory) across a signalling mesh, involving millions and billions of
asynchronous, concurrent units?
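The programming model being asked about, shared-nothing units communicating only by message passing, can be sketched at toy scale (three units, not millions; the names and pipeline structure here are purely illustrative):

```python
# Shared-nothing message passing: units hold no common memory and
# interact only through queues. A trivial three-unit pipeline.
import asyncio

async def unit(inbox, outbox):
    while True:
        msg = await inbox.get()
        if msg is None:            # shutdown signal: forward it and stop
            await outbox.put(None)
            return
        await outbox.put(msg + 1)  # "local computation", then pass it on

async def main():
    q = [asyncio.Queue() for _ in range(4)]
    units = [asyncio.create_task(unit(q[i], q[i + 1])) for i in range(3)]
    await q[0].put(0)
    result = await q[3].get()      # 0, incremented once per unit
    await q[0].put(None)           # propagate shutdown through the pipeline
    await asyncio.gather(*units)
    return result

print(asyncio.run(main()))         # -> 3
```

Scaling this model to millions of units is exactly what conventional shared-memory languages make hard, which is the point of the question.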



Re: [agi] Fwd: Numenta Newsletter: March 20, 2007

2007-03-22 Thread Eugen Leitl
On Wed, Mar 21, 2007 at 08:57:01PM -0700, David Clark wrote:

 I put up with 1 person out of all the thousands of emails I get who insisted
 on sending standard text messages as a attachment.  Because of virus

Um, no. Mine are standard http://rfc.net/rfc2015.html digitally signed
messages. If your MUA displays them as an attachment, then it is buggy.
A (somewhat inflammatory) FAQ is here:
http://kmself.home.netcom.com/Rants/gpg-signed-mail.html

 infections, I had normally set all emails with attachments to automatically
 get put in the garbage can.  I had to stop that so I could read your emails
 for the past 2 years.

Thanks for doing that.
 
 You have a lot of nerve, indeed.  I made a number of arguments in my email
 about your conclusions (supported I might add by no arguments) and you

Most of my conclusions are rather speculative, but I do have arguments for
some of them. I'm quite ready to offer them. However, nontrivial (and
anything less wouldn't do) mails take a lot of time, which I currently do
not have. Because of this I tend to postpone such difficult mails, and deal
with easier mails (such as basic quoting netiquette) out of sequence.
All too frequently, however, such things get postponed until they fall off
the stack. Sorry for that, but my time is not infinite.

 respond by pointing me to how to post email URL's.  Your arrogance surely
 exceeds your intelligence.

I'm quite sure of that.
 
 -- David Clark
 
 - Original Message - 
 From: Eugen Leitl [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Wednesday, March 21, 2007 2:04 PM
 Subject: Re: [agi] Fwd: Numenta Newsletter: March 20, 2007
 
 
  On Wed, Mar 21, 2007 at 10:47:35AM -0700, David Clark wrote:
 
   In my previous email, I mistakenly edited out the part from Yan King Yin
 and
   it looks like the We know that logic is easy was attributed to him
 when it
   was actually a quote of Eugen Leitl.
  
   Sorry for my mistake.
 
  It's not your mistake. It's the mistake of those who choose to ignore
 
  http://www.netmeister.org/news/learn2quote.html
 
  It is really a great idea to use plaintext posting and set standard
  quoting in your MUA. For those with braindamaged MUAs there are
  workarounds like
 
  http://home.in.tum.de/~jain/software/outlook-quotefix/
 


Re: [agi] Fwd: Numenta Newsletter: March 20, 2007

2007-03-21 Thread Eugen Leitl
On Wed, Mar 21, 2007 at 06:12:45PM +0800, YKY (Yan King Yin) wrote:

Hi Eugen,  This opinion is *biased* by placing too much emphasis on
sensory / vision.  I tried to build such a vision-centric AGI a couple

We know that logic is easy. People only had to learn to deal with
it evolutionarily recently, and computers can do serial symbol
string transformations quite rapidly. Already computer-assisted
proofs have transformed a branch of mathematics into an empirical
science.

Building world/self models in realtime from noisy, incomplete and inconsistent
data takes a lot of processing, and parallel processing at that. For some
reason traditional AI considered the logic/mathematics/formal domain hard,
and vision easy. It has turned out to be exactly the other way round.
Minsky thought porting SHRDLU to the real world was a minor task.
Navigation and realtime control, especially done cooperatively, is hard.

We've disintegrated into discussing minutiae (which programming language, etc.),
but the implicit plan is to build a minimal seed that can bootstrap by
extracting knowledge from its environment. The seed must be open-ended,
as in adapting itself to the problem domain. I think vision is a reasonable
first problem domain, because insects can do it quite well. You can presume
that a machine which has bootstrapped to master vision will find logic a
piece of cake; not necessarily the other way round. I understand some
consider self-modification a specific problem domain, so that a system capable
of targeted self-inspection and self-modification can adapt itself
to a given task, any given task. I think there is absolutely
no evidence this is doable, and in fact there is some evidence this is
a Damn Hard problem.

Do you think this is arbitrary and unreasonable?

of years back, and found that it has severe deficiencies when it comes
to *symbolic* and logical aspects of cognition.  If you spend some
time thinking about the latter domains, you'd likely change your
mind.  But the current status of neuroscience is such that vision is
the most understood aspect of the brain, so the vision-centric view of
AGI is prevalent among people with a strong neuroscience background.

I think there's merit in recapitulating the capabilities as they arose
evolutionarily. We're arguably below insect level now, both in capabilities
and in the computational potential of the current hardware.

It's best to learn to walk before trying to win the sprinter Olympics, no?



Re: [agi] Fwd: Numenta Newsletter: March 20, 2007

2007-03-21 Thread Eugen Leitl
On Wed, Mar 21, 2007 at 08:21:57AM -0400, Ben Goertzel wrote:

 Eventually, yeah, a useful AGI should be able to process visual info,
 just like it should be able to understand human language.

Being able to learn to see and to learn to hear, yes? How much
of it do you expect to be hardwired?

E.g. part of what the cochlea does directly in hardware is a Fourier
transform. Do you expect to start with that as a prepositioned
building block, or let the system figure out the appropriate
transformation on its own?
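For concreteness, here is what such a prepositioned, cochlea-like building block might look like in miniature: a fixed, unlearned Fourier front-end mapping a raw waveform to a spectrum (the sample rate and test signal are arbitrary choices for illustration):

```python
# A fixed spectral front-end: no learning involved, the transform is
# "prepositioned", loosely analogous to the cochlea's frequency analysis.
import numpy as np

fs = 16000                             # sample rate, Hz (arbitrary)
t = np.arange(fs) / fs                 # one second of samples
wave = np.sin(2 * np.pi * 440 * t)     # a pure 440 Hz tone

spectrum = np.abs(np.fft.rfft(wave))   # fixed, unlearned transform
peak_hz = np.argmax(spectrum) * fs / len(wave)
print(peak_hz)                         # -> 440.0
```

The design question in the post is whether a learning system should receive `spectrum` as input, or raw `wave` samples and be left to discover an equivalent transformation itself.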

 But I feel that the strong focus on vision that characterizes much
 AI work today (especially AI work with a neuroscience foundation)
 generally tends to lead in the wrong direction, because vision
 processing in humans is carried out largely by fairly specialized
 structures and processes (albeit in combination with more general-

There's a reason for that; it's in the laws of optics and the
kinds of structures a camera sees in the world. Vision (or an equivalent
high-bandwidth channel: direct depth perception by TOF, or LIDAR,
whatever) is a basic instrument for knowledge extraction.

You can skip on that by making the agent directly sense the simulation
grid (externalising the representation), but you'd have to abandon that
if your system does its first steps in the real world.

 purpose structures and processes).  So, one can easily progress 
 incrementally
 toward better and better vision processing systems, via better and
 better emulating the specialized component of human vision processing,
 without touching the general-understanding-based component...

Only parts of the visual processing pathway are hardwired (not really,
and it's not a linear thing at all), and of course the upper stages use
every trick the neocortex can muster. So, no, I don't think you can
trivialize vision, or postpone it.
 
 Of course, the same dynamic happens across all areas of AI
 (creating specialized rather than general methods being a better
 way to get impressive, demonstrable incremental progress), but it happens
 particularly acutely with vision
 
 Gary Lynch, in the late 80's, made some strong arguments as to why
 olfaction might in some ways be a better avenue to cognition than vision.
 Walter Freeman's work on the neuroscience of olfaction is inspired by
 this same idea.

The bit rate you get from olfaction is really low. Yes, you can sense
gradients, and if you code everything with volatile carriers you can
recognize about everything. What I don't like about olfaction is that
it's evolutionarily even older than vision, and directly wired to attention
allocation (emotion) processes. It's like vision, only without the
advantages, and even more hardwired.
 
 One point is that vision processing has an atypically hierarchical 
 structure in the
 human brain.  Olfaction OTOH seems to work more based on attractors
 and nonlinear dynamics (cf Freeman's work), sorta like a fancier Hopfield
 net (w/asymmetric weights thus leading to non fixed point attractors).  The
 focus on vision has led many researchers to overly focus on the hierarchical
 aspect rather than the attractor aspect, whereas both aspects obviously play
 a big role in cognition.

Makes sense.
 
 Direct sensory connections to biomedical lab equipment would
 be more useful ;-)

It would be interesting to see which sensory modalities are optimal
e.g. for medical voxelsets. One could wire a 3d retina on the voxelset,
or let the system look at the space/time domains directly, and let
it build its own processing and representation.
 


Re: [agi] Fwd: Numenta Newsletter: March 20, 2007

2007-03-21 Thread Eugen Leitl
On Wed, Mar 21, 2007 at 12:00:09PM -0700, rooftop8000 wrote:

 Trying to make a seed AI is the same as hoping to win the lottery. 

Winning the lottery is an unbiased stochastic process. Darwinian
co-evolution is a highly biased stochastic process. Seeds are one-way
hashes: morphogenetic code expands the small seed into a structure
which is appropriately positioned (environment-shaped) to extract
knowledge from the environment (aka doting parents). Such seeds
can be relatively tiny, see fertilized human eggs (the womb
does not seem to contribute noticeable amounts of complexity). Hence they
contain far less complexity than an adult; producing adult-level complexity
by a stochastic process, unbiased or otherwise, takes terrible odds.
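The one-way-hash point can be illustrated with a toy morphogenetic program: an L-system whose few-byte "genome" (the axiom and rewrite rules below are chosen purely for illustration) deterministically unfolds into a structure thousands of times its own size.

```python
# A toy morphogenetic expansion: a Fibonacci L-system. A seed of a few
# bytes (axiom + rules) unfolds into a phenotype far larger than itself.
rules = {"A": "AB", "B": "A"}            # the entire "genome"
s = "A"                                  # the seed
for _ in range(20):
    s = "".join(rules.get(c, c) for c in s)
print(len(s))  # 17711 symbols grown from a 1-symbol seed
```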

 You're just hoping you only have to do one thing so you can forget about
 all the other stuff that is required. 

No. I don't think the other required stuff can be done. This is
the same reason I don't subscribe to SENS. I thought this was unlikely
when I was a 15-year-old, and I still think it's unlikely as a 40-year-old.
 
 And if i could pick things that wouldn't be needed in a seed AI, it would be 
 real-world vision and motor skills. I agree that understanding movement and 

Learning from the environment takes navigation in and active manipulation 
of the environment. The G in AGI doesn't stand for domain-specific.

 diagrams and figures is essential to thought, but why would a computer
 program need to recognize a picture of a chair or a picture of a horse,
 or be able to track a flying bird in the sky? I don't think that's
 required for most problems. I also don't see how you get to all other
 thoughts from there.
 (Not that it can't be useful to have in your system..)
 
 Not necessarily the other way round. I understand some
  consider self-modification a specific problem domain, so a system capable
  of targeted self-inspection and self-modification can self-modify itself
  adaptively to a given task, any given task. I think there is absolutely
  no evidence this is doable, and in fact there is some evidence this is
  a Damn Hard problem.
 
 I agree. you can only do some minor self-modification if you don't fully
 understand your inner workings/code. 

I have reasons to suspect that a system can't understand its inner workings
well enough to do radical tweaks. Well, we can (in theory) mushroom our
cortex by a minimal genetic tweak. That's a trivial modification, which
doesn't reengineer the microarchitecture. Live brain surgery on self or a
single copy doesn't strike me as a particularly robust approach. Add a
population of copies, and a voting selection or an external unbiased
evaluator, and you're already in Darwin/Lamarck country.
 
 I still think most of this AGI will have to coded by hand, and it will

I don't think this is doable by mere humans. The required complexity is a
few orders of magnitude above the maximum ceiling a human team can manage
(tools only take you so far). If AI is numerics, Fortran+MPI would be
enough; C would arguably be less painful. If AI is not numerics, you're
equally screwed, whether it's Lisp, Erlang, Ruby or Fortran.

 be a lot of software engineering and not the romantic seed AI or minimal

To clarify, I'm only interested in ~human-equivalent general AI, and only in
co-evolution from a reasonable seed pool in a superrealtime virtual
environment heavily skewed towards problem-solving as the fitness function,
as a design principle. The only reason for this is that it looks as if
all other approaches are sterile. You're of course quite welcome to prove
me wrong by delivering a working product.

 subset of 10 perfect algorithms... Seems like people don't seem to 
 want to put in all the energy and keep looking for a quick solution

My estimate is several % of yearly GNP for several decades for a likely
success by the above design mechanism. If you call that a quick solution,
many will disagree.



Re: [agi] Fwd: Numenta Newsletter: March 20, 2007

2007-03-21 Thread Eugen Leitl
On Wed, Mar 21, 2007 at 10:47:35AM -0700, David Clark wrote:

 In my previous email, I mistakenly edited out the part from Yan King Yin
 and it looks like the "We know that logic is easy" comment was attributed
 to him when it was actually a quote of Eugen Leitl.
 
 Sorry for my mistake.

It's not your mistake. It's the mistake of those who choose to ignore

http://www.netmeister.org/news/learn2quote.html

It is really a great idea to use plaintext posting and set standard
quoting in your MUA. For those with brain-damaged MUAs there are
workarounds like

http://home.in.tum.de/~jain/software/outlook-quotefix/



Re: [agi] structure of the mind

2007-03-20 Thread Eugen Leitl
On Tue, Mar 20, 2007 at 06:34:25PM +, Russell Wallace wrote:

  wouldn't exist unless it generalized to new experiences. So while
  it's hard to engineer this, which might be called emergence,

It's not emergence, but rather failing gracefully and doing the
right thing.

  you will IMO be forced to if you want to succeed. That is the
  reason why AGI is hard.

There are many reasons why AGI is hard. This is only one of them.

Folks, please use the right quoting style. Not posting HTML-only
is a good start. Levels of whitespace indentation don't cut
the mustard. You have to use ">".
 
 It's one reason why AGI is hard, and there is truth in what you say.
 However, ab initio search for compact explanation is so hard that we
 humans mostly don't do it because we can't. When we do have to bite

Exhaustive searches are intractable, but if the fitness space has high
diversity in a small ball at each given point of genotype space, plus a
neutral fitness network through which individuals can percolate without
suffering dire consequences, you can reach pretty good solutions
without doing the impossible.

And, of course, reshaping their fitness landscape in the above way is the
hardest trick such systems have to do, because they have to effectively
(statistically) brute-force that initial threshold. It's pretty easy sailing
afterwards.
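A minimal sketch of the neutral-percolation point (the fitness function and parameters below are illustrative assumptions, not a model of anything real): a strict hill-climber that rejects neutral moves stalls on a plateau forever, while one that accepts them drifts along the neutral network until improvements become reachable.

```python
import random
random.seed(1)

N = 20  # genotype length; fitness counts completed (1,1) bit pairs

def fitness(g):
    return sum(g[2 * i] & g[2 * i + 1] for i in range(N // 2))

def climb(accept_neutral, steps=20000):
    g = [0] * N
    for _ in range(steps):
        cand = g[:]
        cand[random.randrange(N)] ^= 1     # one-bit mutation
        df = fitness(cand) - fitness(g)
        if df > 0 or (accept_neutral and df == 0):
            g = cand
    return fitness(g)

# From all-zeros, every single flip is fitness-neutral, so the strict
# climber never moves; the neutral walker percolates to the optimum.
print(climb(accept_neutral=False))  # 0
print(climb(accept_neutral=True))   # 10
```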

 the bullet and explicitly attempt it, it often takes entire
 communities of geniuses working for decades to produce a result that
 can be boiled down to a few lines. Newton, Darwin, Einstein et al were
 by no means the only ones working on their various problems. Koza has
 an example of the invention of a simple circuit, I think it was the
 negative feedback amplifier or somesuch, you could draw it on the back
 of a cigarette pack, it took a very bright engineer months or years of
 thinking before he cracked it, and there were lots of others trying at
 the same time.

Evolutionary designs typically produce networks with both positive and
negative feedback loops. Miraculously, these are not only stable, but rather
robust. Notice that a mix of positive and negative feedback loops is an
earmark of nonlinear dynamics systems. That evolutionary algorithms produce
just these is not a coincidence. It indicates nonlinear systems are damn
good solutions. 

Notice that human designers routinely miss these, and don't even have the
analytical tools to understand them when plunked down in front of their
very noses. What you described is not an isolated occurrence. It is a
typical case.
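The stability claim can be sketched numerically. Below is a toy two-variable system (a FitzHugh-Nagumo-style model with standard textbook parameters, not any specific evolved circuit) in which a fast positive feedback loop and a slow negative one combine into a bounded, robust limit cycle rather than a blow-up or a dead fixed point.

```python
# Fast variable x excites itself (positive feedback); slow variable y
# integrates x and inhibits it (negative feedback). Plain Euler integration.
def simulate(steps=40000, dt=0.01):
    x, y = -1.0, -0.5
    trace = []
    for _ in range(steps):
        dx = x - x**3 / 3 - y + 0.5          # cubic positive feedback + drive
        dy = 0.08 * (x + 0.7 - 0.8 * y)      # slow negative feedback
        x, y = x + dt * dx, y + dt * dy
        trace.append(x)
    return trace

tail = simulate()[-10000:]
# Bounded, sustained oscillation: a mix of feedback signs, and yet robust.
print(round(min(tail), 2), round(max(tail), 2))
```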

 What we mostly do is use existing solutions and blends thereof, that
 were developed by our predecessors over millions of lifetimes. Even
 when I'm programming, apparently writing new code, I'm really mostly
 using concepts I learned from other people, tweaking and blending them
 to fit the current context.

I don't view programming as programming, but as state and state 
transformations. Everything else is just semantics and syntactic sugar.
And once you realize that you're dealing with a lot of state, and
quite nonlinear transformations, then immediately the source of the state
(somebody typing it in? I don't think so) and the kind of transformations
(written down explicitly? I don't think so) come in.

 And an AGI will have to do the same. Yes, it will have to be able to
 bite the bullet and run a full-blown search for a compact solution

Why bite the bullet? Optimisation is where it's all at.

 when necessary. But that's just plain too hard to be doing all the
 time, so an AGI will have to, like humans, mostly rely on existing
 concepts developed by other people.

People, as not bipedal primate people. And of course this assumes that
everything is zero diversity, so you can just drop in modules, and 
expect them to make sense.

Just for the record of any future readers: not all of us are quite that
silly.



[agi] design and control of self-organizing systems

2007-03-16 Thread Eugen Leitl

A nice Ph.D. thesis:

http://cogprints.org/5442/01/thesis.pdf



Re: [agi] horsepower

2007-03-15 Thread Eugen Leitl
On Thu, Mar 15, 2007 at 12:16:19AM -0700, Kevin Peterson wrote:

 Some numbers that we know without a doubt have bearing on an upper bound.
 
 Genome: 3 billion base pairs. 2 bits/pair, 750MB (somehow the human
 genome project quotes 1byte / basepair, which is clearly wrong)
 
 Protein coding sequences are approximately 1.5% of that, or 11.25MB.

And there are approximately 50.1 matches in each matchbox.
An average person uses about 5.6 of those/year. So what do these
numbers tell us?
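For what it's worth, the arithmetic in the quoted estimate does check out:

```python
base_pairs = 3.0e9                   # human genome, approximate
total_bits = base_pairs * 2          # 2 bits per base pair
print(total_bits / 8 / 1e6)          # 750.0 -- MB, matching the quoted figure
print(total_bits * 0.015 / 8 / 1e6)  # ~11.25 -- MB of protein-coding sequence
```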
 
 The question is, how much of that goes into the structure of the brain?

Do you really think the genome codes the neuronal wiring explicitly?

I will point you to
http://www.mail-archive.com/agi@v2.listbox.com/msg04853.html
 
 Hmm...was the 1MB just a blue sky guess, or did you follow a similar
 chain of reasoning?

That's a remarkably rusty chain.



Re: [agi] horsepower

2007-03-15 Thread Eugen Leitl
On Thu, Mar 15, 2007 at 09:03:46AM -0400, Eric Baum wrote:
 
 I did a computation along these lines (in What is Thought?, ch 2)
 and, came up with a vaguely similar figure. But, a few comments:

Um, you do realize that the genome is not a noticeable source
of complexity in the human primate, right?

http://www.genomesize.com/statistics.php
It's not the size that matters.

 (1) You need to account for control information. I simply doubled
 my protein coding estimate, but of course this could be off.



Re: [agi] My proposal for an AGI agenda

2007-03-14 Thread Eugen Leitl
On Tue, Mar 13, 2007 at 11:12:16PM -0500, J. Storrs Hall, PhD. wrote:

 Woops, not what I meant. You wondered if I were thinking about the brain 
 because I acted as if I had a processor per concept. I'm just taking as a 
 point of departure that (a) we know intelligence can be done in 1e16 ops, and 

We don't. Intelligence looks more like 10^23 ops/s on 10^17 sites.
Pulling numbers out of /dev/ass is easy; anyone can do it.

 (b) lets assume that it needs 1e16 ops for a brute-force implementation -- 
 what architecture would that imply? That turned out to suggest a whole bunch 
 of ideas.
 
 I expect experiments of parts of the theory can be done handily on a high-end 
 workstation today, that will lead to the better understanding and 

Why a workstation? Why not a 10^9 CPU cluster?

 optimization. The real brain may hold 10 million concepts but I should be 

Where did *that* number come from?

 able to demonstrate adaptiveness, robustness, learning, and reflective 
 control with 10 to 100 -- which I have the horsepower for.



Re: [agi] My proposal for an AGI agenda

2007-03-14 Thread Eugen Leitl
On Wed, Mar 14, 2007 at 07:30:20AM -0500, J. Storrs Hall, PhD. wrote:

 In my previous msg that one referred to, I quoted my figures as being from
 Kurzweil & Moravec respectively. After I read your 652-page book justifying

Would you take 562 pages as well?

http://www.amazon.com/Biophysics-Computation-Information-Computational-Neuroscience/dp/0195181999/ref=pd_bbs_sr_1/104-7341035-9436725?ie=UTF8s=booksqid=1173876708sr=8-1

This one is quite good, too:
http://www.amazon.com/o/ASIN/0262681080/ref=s9_asin_image_2/104-7341035-9436725

You might find the authors have a bit more credibility than
Moravec, and especially than such a notorious luminary as Kurzweil:
http://www.kurzweiltech.com/aboutray.html

I'm not actually just being flippant, the AI crowd has a rather
bad case of number creep as far as estimates are concerned.
You can assume 10^23 ops/s on 10^17 sites didn't come out of the
brown, er, blue.

 your estimates, I may quote them as well.
 
  Why a workstation? Why not a 10^9 CPU cluster?
 
 'Fraid I left my gigacluster in my other pants today.

What if you're going to need it? Seriously, with ~20 GByte/s
memory bandwidth you won't get a lot of refreshes/s on your
few GBytes.
 
   optimization. The real brain may hold 10 million concepts but I should...
 
  Where did *that* number come from?
 
 ls /proc/brain/concepts | wc

My neocortex firmware is way out of date, mine unfortunately doesn't
come with procvfs. Time for a reflash...



Re: [agi] My proposal for an AGI agenda

2007-03-14 Thread Eugen Leitl
On Wed, Mar 14, 2007 at 09:12:55AM -0400, Ben Goertzel wrote:
 
 It is a trite point, but I can't help repeating that, given how very 
 little we know about the
 brain's deeper workings, these estimates of the brain's computational 

Not to belabor the point, but the objections about how little we
know about neuroscience typically come from AI folks, who, frankly,
are not particularly versed in the matter. So I'd take my Koch or my
Markram over Kurzweil any time.

 and memory capability
 are all kinda semi-useless...

My point precisely. People love throwing around such numbers,
so I threw in some which are a bit at the high range, to see
whether they would be dismissed off-hand. People love to quote
Moore, so why this sudden lack of "we only need X doublings
to get there"? Mightily curious.
 
 I think that brain-inspired AGI may become very interesting in 5-20 
 years once neuroscience
 has advanced substantially.  At that point quantitative estimates of 

If you have the structure, you can crunch things from first principles.
At the low level of theory the problem is well-understood. Do you have
a well-understood low-level theory of generic AI? Or *any* theory at
all?

 neural computing power
 may also be meaningful!

 http://www.amazon.com/Biophysics-Computation-Information-Computational-Neuroscience/dp/0195181999/ref=pd_bbs_sr_1/104-7341035-9436725?ie=UTF8s=booksqid=1173876708sr=8-1
 http://www.amazon.com/o/ASIN/0262681080/ref=s9_asin_image_2/104-7341035-9436725

I would really recommend that as many list subscribers as can
afford the time (the books are cheap) read these two books.



Re: [agi] horsepower

2007-03-14 Thread Eugen Leitl
 that
meek parts of neurons can do a very nice analog multiply) to do it.
 
 Back to the present: Amdahl's Rule of Thumb puts memory size and bandwidth 
 equal to ops per second for conventional computer systems. I conjecture that 
 an AI-optimized system may need to be processor-heavy by a factor of 10, i.e. 
 be able to look at every word in memory in 100 ms, while still being able to 
 overlay memory from disk in 1 sec. We're looking at needing memory the size 
 of a very average database, but in RAM. 

I don't quite follow you here, but at 20 GBytes/s (best case) you'd get
about a ~10 Hz refresh rate on your memory/node. Interestingly enough,
with ~GBytes/node and a direct-neighbour cube interface, GBit Ethernet is
quite enough. You only need faster interconnects when you have less
memory/node, so that your processing rate is >10 Hz. Machines a la Tera
Scale can do much better, provided you can package enough embedded memory.
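The refresh-rate figure is just bandwidth over state size; assuming (as above) 20 GBytes/s peak streaming bandwidth and a couple of GBytes of state per node:

```python
bandwidth = 20e9          # bytes/s, assumed best-case streaming bandwidth
state = 2e9               # bytes of state held per node (assumed)
print(bandwidth / state)  # 10.0 -- full passes over memory per second
```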
 
 Bottom line: HEPP for $1M today, $1K in a decade, but only if we have 
 understood and optimized the software.

Do you think what your brain does (what is not required for housekeeping)
is grossly inefficient, in terms of operations, not comparisons to
some semi-optimal computronium, and that it can be optimized (by skipping
all those NOPs, probably)? I'm not quite that confident, I must admit.
I'm however quite confident that there is no simple theory lurking in
there which can be written down as a neat set of equations on a sheet
of paper, or even a small pile of such. So there's not much to understand,
and very little to optimize.
 
 Let's get to work.

Excellent idea.



Re: [agi] Logical representation

2007-03-12 Thread Eugen Leitl
On Mon, Mar 12, 2007 at 06:58:37PM +, Russell Wallace wrote:

 I spent a lot of time on every known variant of that idea and some
 AFAIK hitherto unknown ones, before coming to the conclusion that I
 had been simply fooling myself with wishful thinking; it's the
 perpetual motion machine of our field. Admittedly biology did it, but

Ah, it took you a while to see it ;)

 even with a whole planet for workspace it took four billion years and
 I don't know about you gentlemen, but that's more time than I'm
 prepared to devote to this enterprise. When we try to program that

You don't need the entire four billion years since you don't have
to start from scratch (animals, ahem), and you can put things on 
fast-forward, and select the fitness function for a heavy bias towards
intelligence.



Re: [agi] Logical representation

2007-03-12 Thread Eugen Leitl
On Mon, Mar 12, 2007 at 07:47:26PM +, Russell Wallace wrote:

 You're also a couple dozen orders of magnitude short on computing

You don't have to recrunch the total ops of the biosphere for
the same reason you don't have to redo the whole four gigayears.
You're already surrounded by the products of the process. Here's
a major shortcut.

 power, and you don't know how to set up the graded sequence of fitness

Computer power will be cheap. You will live to see a single system
with a mole of switches. Even now, there's a lot of crunch hiding
in a sea of gates of a 300 mm wafer. The challenge is to get the
mainstream to unlock it. Simulations for gaming and virtual reality
are a good driver. Cell does a quarter of a teraflop. Intel Tera Scale
does over a teraflop, soon. Blue Gene next-gen will do a petaflop.
When your home box does a petaflop, and there are billions of
such on the network, that's not negligible. Not because there are
a lot of them, but because a lot of people will own a petaflop
box.

 functions. That said, if you or anyone else wants to actually take a

The first and biggest step is to get your system to learn how to evolve.
I understand many do not yet see this as a problem at all.

 shot at that route, let me know if you want a summary of conclusions
 and ideas I got to before I moved away from it.

I don't understand why you moved away from it (it's the only game
in town), but if you have a document of your conclusions to share,
fire away.



Re: [agi] The Missing Piece

2007-03-10 Thread Eugen Leitl
On Sat, Mar 10, 2007 at 10:11:19AM -0500, Ben Goertzel wrote:

 In a sense we do, but it's not implemented in the brain as an actual sim 
 world with a physics engine and so forth ... our internal sim world is a 

I'm not sure we know how it's implemented. A lot of things are done
by topographic maps, which are equivalent to coordinate transformations.
I don't think this is a bad representation, if you're interested
in minimizing gate delays to a few tens deep when processing reasonably
complex stimuli in realtime. If you want to do within ~ns what
biology does within ~ms you don't have a lot of choices.
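A topographic map in this sense is a fixed coordinate transformation baked into the wiring. A toy sketch (a log-polar remap, loosely retinotopy-flavoured; the sizes and the particular mapping are illustrative assumptions): once the index tables are built, applying the map is a single constant-depth gather.

```python
import numpy as np

def log_polar_indices(h, w, out_r=32, out_a=32):
    """Precompute a log-polar index table mapping output cells to input pixels."""
    cy, cx = h / 2, w / 2
    r_max = np.log(min(cy, cx))
    rs = np.exp(np.linspace(0, r_max, out_r))          # radii, log-spaced
    angs = np.linspace(0, 2 * np.pi, out_a, endpoint=False)
    ys = np.clip((cy + rs[:, None] * np.sin(angs)).astype(int), 0, h - 1)
    xs = np.clip((cx + rs[:, None] * np.cos(angs)).astype(int), 0, w - 1)
    return ys, xs

img = np.arange(64 * 64, dtype=float).reshape(64, 64)
ys, xs = log_polar_indices(64, 64)
mapped = img[ys, xs]          # the "map" itself is just one fixed gather
print(mapped.shape)           # (32, 32)
```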

 lot less physically accurate (more naive physics than correct 
 equational physics), and probably gains some kinds of creativity from 

It's certainly good enough for monkey behaviour planning. It's rather
useless for Mach 25 atmospheric reentry, or magnetar physics, agreed.

 this as well as losing a lot of potential for other kinds of creativity...



Re: [agi] general weak ai

2007-03-07 Thread Eugen Leitl
On Tue, Mar 06, 2007 at 08:33:13PM +, Russell Wallace wrote:

 What simulation algorithms did you have in mind with that data

Anything vaguely physical, and doing long-range interactions
by iteration of overlapping local neighbourhoods. It's not much of a
constraint. Of course, you have to add more data to the volume
element, depending on what you want to do.

 structure? There are good reasons for the typical emphasis on floating
 point, polygons and more sophisticated structures; the human eye and

You still have to work with the ~20 GBytes/s peak, and it is only that
when you fill cache one line at a time, and when you stream. You can
do that with (1d, 2d, 3d, 4d, that's about it) arrays. You also want
to shrink the size of the element. You also want to use SIMD parallelism
in-register.

 brain track things to better than 1/256, and so do embedded computer

In the particular structure, it is indeed 1/256, but it's the resolution
only within the voxel. A GByte node buys you 1000^3 of those, at 10-20 Hz.
And the nice thing about it is that you can scale up the volume by
adding more nodes (on mere GBit Ethernet), while still remaining at a
constant data rate. With more parallelism, and a better interconnect
you can increase the processing rate. If you do your voxels in hardware,
you can achieve sub-us processing rate. Constant time over entire volume,
regardless how large, as long as you have hardware wired in the proper
topology.

Not a single serial section. Amdahl can go stuff himself.
Sounds quite good, doesn't it? Do you know any other code which
does that? 

 systems; integer arithmetic is not necessarily faster than floating
 point on modern hardware (and can even be slower); and frankly, we're

Modern hardware of the x86 variety, and the Cell, has multimedia
instructions which treat a word as an array of 8-, 16-, or 32-bit
integers.

 nowhere near the stage at which worrying about what kind of machine
 word to use is useful rather than harmful (premature optimization is

Not just what kind of a machine word, which kind of a machine.

 the root of all evil and all that).

If you want to do AI which tracks reality efficiently, you have to do
what the reality does. And if you ignore what the reality does, you will
have to throw out everything you did, and do it the way the reality does
it. (Jim van Ehr thought I was being premature when I told him he's going
to use voxels for his NEMS and MEMS stuff. Last time I looked Zyvex was
selling voxel rendering for their MEMS visualisation packages).



Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-18 Thread Eugen Leitl
On Sun, Feb 18, 2007 at 12:40:03AM -0800, Samantha Atkins wrote:

 Really?  I question whether you can get anywhere near the same level of
 reflection and true data - code equivalence in any other standard
 language.  I would think this capability might be very important
 especially to a Seed AI.

Lisp is really great as a language for large-scale software systems, which
really push the envelope of software development in terms of sheer size
and complexity of the result, while still being functional and useful.
With parallel extensions (asynchronous message-passing primitives
equivalent to at least a subset of MPI), run on suitable (10^6..10^9 node)
hardware, there's no reason why Lisp couldn't do AI, in principle. It
might not be the best tool for the job, but certainly not the worst,
either.

However, the AI school represented here seems to assume a seed AI (an
open-ended agent capable of directly extracting information from its
environment) is sufficiently simple to be specified by a team of human
programmers, and implemented explicitly by a team of human programmers.
This type of approach is most clearly represented by Cyc, which is
sterile. The reason is the assumption that the internal architecture of
human cognition is fully inspectable by human analyst introspection alone,
and that furthermore the resulting extracted architecture is below the
complexity ceiling accessible to a human team of programmers. I believe
both assumptions are incorrect.

There are approaches which involve stochastic methods, information theory
and evolutionary computation which appear potentially fertile, though the
details of the projects are hard to evaluate, since they lack sufficient
numbers of peer-reviewed publications, source code, or even interactive
demonstrations. Lisp does not particularly excel at these numerics-heavy
applications, though e.g. Koza used a subset of Lisp s-expressions with
reasonably good results. MIT Scheme folks demonstrated automated chip
design long ago, so in principle Lisp could play well with today's large
FPGAs.



Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-18 Thread Eugen Leitl
On Sun, Feb 18, 2007 at 09:51:45AM -0800, Eliezer S. Yudkowsky wrote:

 As Michael Wilson pointed out, only one thing is certain when it comes 
 to a language choice for FAI development:  If you build an FAI in 
 anything other than Lisp, numerous Lisp fanatics will spend the next 
 subjective century arguing that it would've been better to use Lisp.

All languages are shallow as far as AI is concerned, and only useful
to figure out the shape of the dedicated hardware for the target.
C-like things are more or less useful with meshed FPGA cores with
embedded RAM, but for a really minimalistic cellular architecture
C is also quite useless. However, C/MPI is very useful for running
a prototype on a large scale machine, with some 10^4..10^6 nodes.

It doesn't matter (much) which language you use in the initial prototype
phase, you will have to throw it away anyway.

As for Python being slow: IronPython targets .Net, and extending Python
with C for the prototype is the standard approach.

A possible solution for those who're loath to touch hardware design: Erlang.



Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-17 Thread Eugen Leitl
On Sat, Feb 17, 2007 at 08:46:17AM -0800, Peter Voss wrote:

 We use .net/ c#, and are very happy with our choice. Very productive.

I don't know much about those. Bytecode, JIT at runtime? It might not be
too slow. If you use code generation, do you do it at the source or at the
bytecode level?
 
 Eugen> (Of course AI is a massively parallel number-crunching application...
 
 Disagree.

That it is massively parallel, or that it is number-crunching? Or neither
massively parallel nor number-crunching?



[agi] the birth of the mind

2007-02-14 Thread Eugen Leitl

http://www.amazon.com/Birth-Mind-Creates-Complexities-Thought/dp/0465044069/sr=8-1/qid=1171483943/ref=pd_bbs_sr_1/105-4534151-3528451?ie=UTF8s=books

A good, accessible account of the developing brain, describing where
the (many) bits missing from the genome come from.

Might be of interest to some AGI folks.



[agi] [EMAIL PROTECTED]: [Beowulf] [PCGrid 2007] call for participation: workshop on desktop grids]

2007-02-09 Thread Eugen Leitl
Petascale Distributed Storage
Adam L. Beberg, Stanford University, U.S.A.
Vijay Pande, Stanford University, U.S.A.

-
SESSION IV: THEORY

Applying IC-Scheduling Theory to Familiar Classes of Computations
Gennaro Cordasco, University of Salerno, Italy
Grzegorz Malewicz, Google, Inc., U.S.A.
Arnold Rosenberg, University of Massachusetts at Amherst, U.S.A.

Invited Paper: A Combinatorial Model for Self-Organizing Networks
Yuri Dimitrov, Ohio State University, U.S.A.
Gennaro Mango, Ohio State University, U.S.A.
Carlo Giovine, Ohio State University, U.S.A.
Mario Lauria, Ohio State University, U.S.A.

Invited Paper: Towards Contracts & SLA in Large Scale Clusters & Desktop
Grids
Denis Caromel, INRIA, France
Francoise Baude, INRIA, France
Alexandre di Costanzo, INRIA, France
Christian Delbe, INRIA, France
Mario Leyton, INRIA, France

#
ORGANIZATION

General Chairs
Derrick Kondo, INRIA Futurs, France
Franck Cappello, INRIA Futurs, France

Program Chair
Gilles Fedak, INRIA Futurs, France
___
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf

- End forwarded message -


Re: [agi] Quantum Computing Demo Announcement

2007-02-09 Thread Eugen Leitl
On Thu, Feb 08, 2007 at 11:03:38PM -0500, Ben Goertzel wrote:

 But, Novamente is certainly architected to take advantage of their  
 1000-qubit version for various tasks, when it comes out... ;-)

Which part of it is massively parallel, and suitable for QC?
 


Re: [agi] Quantum Computing Demo Announcement

2007-02-09 Thread Eugen Leitl
On Fri, Feb 09, 2007 at 08:45:28AM -0500, George Dvorsky wrote:

 It'll be interesting to see if quantum computation starts to follow Moore's 
 Law.

We don't have solid-state QC at room temperature yet. Most problems don't
map well to QC, either.

The scaling is 2^N (N = number of qubits; each new qubit doubles
performance), which completely outruns Moore's law (Moore's law isn't about
performance, just integration density -- the two are quite different things).
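The 2^N point can be seen from the classical side: simulating N qubits needs a state vector of 2^N complex amplitudes, so every added qubit doubles the classical cost. A minimal sketch in plain Python (the 16-bytes-per-amplitude figure assumes double-precision complex numbers):

```python
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory needed for a full state vector of n_qubits qubits.

    There are 2^n amplitudes, each a complex number
    (16 bytes as two 64-bit floats).
    """
    return (2 ** n_qubits) * bytes_per_amplitude

# Each extra qubit doubles the classical cost: 30 qubits need a
# 16 GiB state vector, 31 qubits need 32 GiB.
for n in (10, 20, 30, 40):
    print(f"{n} qubits -> {statevector_bytes(n) / 2**30:g} GiB")
```

Moore's law, by contrast, doubles transistor count only every couple of years, which is why the two curves are not comparable.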



[agi] [EMAIL PROTECTED]: [Comp-neuro] ISIPTA '07 Second Call for Papers]

2007-02-06 Thread Eugen Leitl
--
Gert de Cooman (Ghent University, Belgium)
Fabio Cozman (University of Sao Paulo, Brazil)
Serafín Moral (Universidad de Granada, Spain)
Teddy Seidenfeld (Carnegie Mellon University, USA)
Jirina Vejnarova (Academy of Sciences, Czech Republic)
Marco Zaffalon (IDSIA, Switzerland).

Special Issues
--
We are currently negotiating with a number of journals about the
feasibility of editing a special issue with contributions based on a
selection of the papers accepted for the conference. We can already
confirm at this point that there will be such a special issue for the
International Journal of Approximate Reasoning.

Further details
---
For further details about (pre)registration, paper submission,
scientific and cultural programme, and programme committee, please
consult the ISIPTA '07 web site at http://www.sipta.org/isipta07/.

Details about previous ISIPTA meetings can be found at
http://www.sipta.org/isipta/.

For more information about SIPTA, the international organisation responsible
for organising both the ISIPTA meetings and the SIPTA Schools on
Imprecise Probabilities, please consult the SIPTA web site at
http://www.sipta.org.

Questions
-
If you have any questions about the symposium, please contact the
Steering Committee preferably by email ([EMAIL PROTECTED]), or at the
following address:

Jirina Vejnarova
Institute of Information Theory and Automation
Pod vodarenskou vezi 4
182 08 Prague
Czech republic.
___
Comp-neuro mailing list
[EMAIL PROTECTED]
http://www.neuroinf.org/mailman/listinfo/comp-neuro

- End forwarded message -


[agi] [EMAIL PROTECTED]: [IP] Stanford EE CS Colloq] Computer Architecture is Back * 4:15PM, Wed Jan 31, 2007 in Gates B01]

2007-01-26 Thread Eugen Leitl

- End forwarded message -


Re: [agi] About the brain-emulation route to AGI

2007-01-23 Thread Eugen Leitl
On Mon, Jan 22, 2007 at 06:43:08PM -0800, Matt Mahoney wrote:

 I think AGI will be solved when computer scientists, psychologists, and 
 neurologists work together to solve the problem with a combination of 
 computer, human, and animal experiments.

I agree. (Though I would just put computational neuroscientists and
neuroscientists in your list. Psychology is too high-level to be
a useful source of constraints).



Re: [agi] About the brain-emulation route to AGI

2007-01-22 Thread Eugen Leitl
 population, and maybe even
their degree of phosphorylation to obtain parameters you won't see
from your garden-variety EM micrograph.

  but that detail turns out to be one order of magnitude beyond what 
 any imaginable science can deliver.  Who knows if this is an issue, 

Do you realize that cryo AFM has had atomic resolution for a while now?

 without a detailed functional understanding of the brain.
 
 But if B is not the intended route, then it must be some variety of A. 
 Which then begs the question:  how far toward A are they supposed to be 
 going?  Everything in these arguments about AGI vs Brain Emulation 
 depends on exactly how far the B.E. people are going to go toward 
 understanding functionality.

Ain't breaking down self-erected strawmen fun?
 
 If they go the whole way - basically using B.E. as a set of clues about 
 how to do AGI - all they are doing is AGI *plus* a bunch of brain 
 sleuthing.  Sure, the neuron maps might help.  But they will have to be

Do you have an idea what a rich source of design constraints in
such a difficult field is worth?
 
 just as smart about their AGI models as they are about their neuron 
 maps.  You cannot understand the functional architecture of the brain 

You seem to think there's something nice and modular and shiny sitting in
there, just waiting to be picked up by the right person. Sure, that would be
nice, and some modularity will be there. But this stuff wasn't designed
with ease of human analysis as a fitness function component.

 without having a general understanding of the same kinds of things that 
 AGI/Cognitive Science people have to know.  Which makes the B.E. 
 approach anything but an alternative to AGI.  They will have to know all 
 about the information processing systems in the human mind, and probably 
 also about the general subject of [different kinds of intelligent 
 information processing systems], which is another way of refering to 
 AGI/Cognitive Science.

I don't see how that follows.
 
 Now, let's finish by asking what the neuroscience people are actually 
 doing in practice, right now.  Are they trying build sophisticated 

So you're engaging in a critique of a field you know very little about.

 models of neural functionality, understanding not just the low-level 
 signal transmission but the many, many layers of structure on top of 
 that bottom level?

Is that a rhetorical question?
 
 I would say:  no!  First, they have a habit of making diabolically 

It seems it was.

 simplistic statements about the relationship between circuits and 
 function (Brain Scientists Discover the Brain Region That Determines 
 Altruism / Musical Tastes / Potty Training Ability / Whether You Like 
 Blondes!).  Second, when you look at the theoretical structures they 
 are using to build their higher level functional understanding of the 
 brain systems, what do we find?... a resurgence of interest in 
 reinforcement learning, which is an idea that was thrown out by the 
 cognitive science community decades ago because it was stupidly naive.

I hope I didn't come across in my critique of strong AI the way you do
in suggesting that neuroscience is a crock of strong fertilizer.
 
 In general, I am amazed at the naivete and arrogance of neuroscience 
 folks when it comes to cognitive science.  Not all, but an alarming 
 number of them.  (The same criticism can be applied to narrow AI people, 
 but that is a different story).

I'm not sure we're getting anywhere with all these mutual tar-brush strokes.
I would like to read up on the differences among the novel approaches discussed
here. I'm looking at several online papers right now, but would really
like pointers to succinct comparisons of what's new, and how the new is
better than the old.
 
 
 
 Brain Emulation is just the latest hype-driven bandwagon.  It will come 

You've just built a brand new strawman, and after demolishing it,
are complaining that the straw smells fresh.

 and go like Expert Systems, The Fifth Generation Project and (Naive) 
 Neural Networks.

Or not: http://faculty.washington.edu/chudler/hist.html
http://www.stottlerhenke.com/ai_general/history.htm



Re: [agi] (video)The Future of Cognitive Computing

2007-01-21 Thread Eugen Leitl
On Sun, Jan 21, 2007 at 10:03:52AM -0500, Benjamin Goertzel wrote:

 One thing I find interesting is that IBM is focusing their AGI-ish
 efforts so tightly on human-brain-emulation-related approaches.

IBM is smart. They know what they're doing.
 
 Kurzweil, as is well known, has forecast that human brain emulation is
 the most viable path to follow to get to AGI.  I agree that it is a
 viable path, but I don't think it is anywhere near the shortest path.

There are shorter paths, but nobody knows where they are. That's
the key point: the world is complicated, and dealing with the
world takes lots of machinery. There's a strange cognitive bias in
people, AIers specifically, toward thinking that AI is based on some
simple generic method, and that they just know what it is. No validation
or further evidence required; it's all obvious. Whoever
you ask, they all know it, but all their answers differ. Historically,
this approach has failed abysmally. Trying to reverse-engineer
a known working system might do less for one's ego, but it's the only
game in town, as far as I can see.

 However, I think it's possible (though not extremely likely) that if
 all the pundits and funding sources (like IBM) continue to harp on the
 brain-emulation approach to the exclusion of other approaches, the
 prophecy that human brain emulation will be the initial path to AGI
 could become a self-fulfilling one ;-p ...

In this race, there are no second places.



Re: [agi] (video)The Future of Cognitive Computing

2007-01-21 Thread Eugen Leitl
On Sun, Jan 21, 2007 at 08:25:54AM -0800, Peter Voss wrote:
 Eugen: IBM is smart. They know what they're doing.
 
 Yeah! What an impressive argument.

If you're not impressed by IBM's raw resources and
PI power compared to everybody else (all present company
included, of course), then I don't know what you would find impressive.

 Eugen: There are shorter paths, but nobody knows where they are.
 
There is more known about the shorter paths than about the actual
functioning of the human mind/brain.

Reality check: what is working so far? Do you have anything that
approaches the across-the-board skills of a three-year-old human?
 
 * All current useful robots are engineered, not reverse-engineered.

I don't find these robots useful enough to compete for the same job
slots I'm applying for, never mind the slots for which extreme talents
are applying.

 * All AI successes so far are engineered solutions, not copies of wetware
 (Deep Blue, Darpa Challenge, Google, etc.)

Deep Blue was a chess system. If you're defining AI in terms of how a
specialized system plays chess, this is ridiculous; I frankly have nothing
more to add. The DARPA Challenge is actually a good example, but is still a
specialized system, with pathetic performance. Google, AI? You *are* kidding,
right?

 * Planes have been flying for 100 years, yet we haven't even
 reverse-engineered a sparrow's fart...

And we've had AI since 1950, right? Except we don't, and we won't for
another 50 years if you continue down the same downtrodden, sterile path.
 
 Ben, your comment seems to reflect your frustration at lack of funding
 rather than a realistic assessment of the situation. Even if no *dedicated*
 AGI engineering project is first to achieve AGI, people in the software/AI
 community will stumble on a solution long before reverse engineering

Stumble upon it just like that: tripping over the soldering iron's power
cord, and finding a couple of simple, neat equations on a piece of paper.
*Right*. Thanks for giving such a nice illustration of hubris and problem
agnosia in one fell swoop.

 becomes feasible. Don't you agree?

I'm not Ben, but I disagree emphatically. Feel free to prove me wrong.



[agi] [EMAIL PROTECTED]: [Beowulf] [PCGRID07] Call for Papers for Workshop on Desktop Grids]

2006-08-21 Thread Eugen Leitl
- Forwarded message from Derrick Kondo [EMAIL PROTECTED] -

From: Derrick Kondo [EMAIL PROTECTED]
Date: Sun, 20 Aug 2006 16:11:12 +0200
To: Derrick Kondo [EMAIL PROTECTED]
Cc: 
Subject: [Beowulf] [PCGRID07] Call for Papers for Workshop on Desktop Grids

CALL FOR PAPERS

Workshop on Large-Scale, Volatile Desktop Grids (PCGrid 2007)
held in conjunction with the
IEEE International Parallel & Distributed Processing Symposium (IPDPS)
March 30, 2007
Long Beach, California U.S.A.
http://pcgrid07.lri.fr

Desktop grids utilize the free resources available in Intranet or
Internet environments for supporting large-scale computation and
storage. For over a decade, desktop grids have been one of the largest
and most powerful distributed computing systems in the world, offering
a high return on investment for applications from a wide range of
scientific domains (including computational biology, climate
prediction, and high-energy physics).  While desktop grids sustain up
to Teraflops/second of computing power from hundreds of thousands to
millions of resources, fully leveraging the platform's computational
power is still a major challenge because of the immense scale, high
volatility, and extreme heterogeneity of such systems.

The workshop seeks to bring desktop grid researchers together from
theoretical, system, and application areas to identify plausible
approaches for supporting applications with a range of complexity and
requirements on desktop environments.  Moreover, the purpose of the
workshop is to provide a forum for discussing recent advances and
identifying open issues for the development of scalable,
fault-tolerant, and secure desktop grid systems.

As such, we invite submissions on desktop grid topics including the
following:

- desktop grid middleware and software infrastructure (including
management)
- incorporation of desktop grid systems with Grid infrastructures
- desktop grid programming environments and models
- modeling, simulation, and emulation of large-scale, volatile
environments
- resource management and scheduling
- resource measurement and characterization
- novel desktop grid applications
- data management (strategies, protocols, storage)
- security on desktop grids (reputation systems, result verification)
- fault-tolerance on shared, volatile resources
- peer-to-peer (P2P) algorithms or systems applied to desktop grids

With regard to the last topic, we strongly encourage authors of
P2P-related paper submissions to emphasize the applicability to desktop
grids in order to be within the scope of the workshop.

The workshop proceedings will be published through the IEEE Computer
Society Press as part of the IPDPS CD-ROM.

##
IMPORTANT DATES

Manuscript submission deadline: October 23, 2006
Acceptance Notification:  December 11, 2006
Camera-ready paper deadline: January 22, 2007
Workshop: March 30, 2007

##
REVIEW OF MANUSCRIPTS

Manuscripts will be evaluated based on their originality, technical
strength, quality of presentation, and relevance to the conference
scope.  Only submissions that have neither appeared nor been submitted
to another
conference or journal are allowed.

#
ORGANIZATION

General Chairs
Derrick Kondo, INRIA Futurs, France
Franck Cappello, INRIA Futurs, France

Program Chair
Gilles Fedak, INRIA Futurs, France

Program Committee
David Anderson, University of California at Berkeley, USA
Artur Andrzejak, Zuse Institute of Berlin, Germany
MaengSoon Baik, Samsung Research, Korea
Henri Bal, Vrije Universiteit, The Netherlands
Zoltan Balaton, SZTAKI, Hungary
James C. Browne, University of Texas at Austin, USA
Denis Caromel, INRIA, France
Abhishek Chandra, University of Minnesota, USA
Rudolf Eigenmann, Purdue University, USA
JoonMin Gil, Catholic University of Daegu, Korea
Renato Figueiredo, University of Florida, USA
Fabrice Huet, University of Nice Sophia Antipolis, France
Adriana Iamnitchi, University of South Florida, USA
Mario Lauria, Ohio State University, USA
Virginia Lo, University of Oregon, USA
Grzegorz Malewicz, Google Inc., USA
Fernando Pedone, University of Lugano, Switzerland
Arnold L. Rosenberg, University of Massachusetts Amherst, USA
Mitsuhisa Sato, University of Tsukuba, Japan
Luis Silva, University of Coimbra, Portugal
Alan Sussman, University of Maryland, USA
Michela Taufer, University of Texas at El Paso, USA
Douglas Thain, University of Notre Dame, USA
Bernard Traversat, SUN, USA
Jon Weissman, University of Minnesota, USA
Rich Wolski, University of California at Santa Barbara, USA
___
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf

- End forwarded message -

Re: [agi] Marcus Hutter's lossless compression of human knowledge prize

2006-08-20 Thread Eugen Leitl
On Sun, Aug 13, 2006 at 04:15:30AM +0100, Russell Wallace wrote:

An unusual claim... do you mean all knowledge can be learned verbally,
or do you think there are some kinds of knowledge that cannot be
demonstrated verbally?

Language can be used to serialize and transfer the state of cloned objects.
This doesn't mean human experts know their inner state, or can freeze
and serialize it, or that other instances could instantiate such serialized
state.
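The distinction can be illustrated with ordinary object serialization; a sketch in Python (the Expert class is a made-up stand-in, and pickle stands in for "language" as the wire format):

```python
import pickle

class Expert:
    """Hypothetical stand-in for an agent whose state is explicit."""
    def __init__(self, knowledge):
        self.knowledge = dict(knowledge)

# Serialize the explicit state of one instance into a byte stream
# (the 'language'), then instantiate a clone from it elsewhere.
original = Expert({"chess": "advanced", "go": "basic"})
wire = pickle.dumps(original)
clone = pickle.loads(wire)
assert clone.knowledge == original.knowledge

# The catch: this works only for state the object can expose.
# Tacit state -- a live socket, a trained reflex -- has no dumps().
```

Transfer works precisely because the state is explicit and introspectable, which is the property human expertise lacks.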


---
To unsubscribe, change your address, or temporarily deactivate your 
subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]




[agi] [EMAIL PROTECTED]: [alife] New book published - Artificial Cognition Systems]

2006-08-14 Thread Eugen Leitl
- Forwarded message from Angelo Loula [EMAIL PROTECTED] -

From: Angelo Loula [EMAIL PROTECTED]
Date: Wed, 9 Aug 2006 11:13:10 -0300
To: [EMAIL PROTECTED], [EMAIL PROTECTED]
Subject: [alife] New book published - Artificial Cognition Systems

Artificial Cognition Systems - 2006
edited by Angelo Loula, Ricardo Gudwin and João Queiroz
published by IDEA Group Inc.
ISBN: Hard cover: 1-59904-111-1, Soft cover: 1-59904-112-X

Artificial Cognition Systems presents recent research efforts in
artificial intelligence about building artificial systems capable of
performing cognitive tasks. Such study relies on modeling and
simulating cognitive processes, and therefore constructs experimental
labs to evaluate hypotheses and theories about cognition.

Artificial Cognition Systems offers contributions from researchers
with different backgrounds applying diverse perspectives in cognitive
processes modeling and simulation, and brings forth an important and
open discussion in artificial intelligence: how cognitive processes
can be meaningful to artificial systems.

For Table of Contents, Preface, Buying Information, see:
http://www.dca.fee.unicamp.br/projects/artcog/book/
http://www.idea-group.com/books/details.asp?id=6047

___
alife-announce mailing list
[EMAIL PROTECTED]
http://lists.idyll.org/listinfo/alife-announce

- End forwarded message -


[agi] [EMAIL PROTECTED]: [Beowulf] new release of GAMMA and MPI/GAMMA]

2006-08-04 Thread Eugen Leitl
- Forwarded message from Giuseppe Ciaccio [EMAIL PROTECTED] -

From: Giuseppe Ciaccio [EMAIL PROTECTED]
Date: Thu, 3 Aug 2006 15:26:15 +0200 (CEST)
To: beowulf@beowulf.org
Subject: [Beowulf] new release of GAMMA and MPI/GAMMA

Hello,

this is to inform you that a new release of the Genoa Active Message
MAchine (GAMMA) is available for download at the site
www.disi.unige.it/project/gamma/

In addition to numerous bug fixes, this release also provides support
for the Broadcom ``Tigon 3'' Gigabit Ethernet chipset.  This is not yet
fully tested -- more tests at end of vacations -- but I was able to run
a ping-pong benchmark.  Compared to the Intel PRO/1000, the back-to-back
latency of ``Tigon 3'' seems quite disappointingly high (~21 usec, to
be compared with the 6 usec achieved by the Intel PRO/1000).  I think
the ``Tigon 3'' may have inherited a slow design from its ancestor (the
``Tigon 2'' found on the old Alteon AceNIC).
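For context, a ping-pong benchmark of the kind described is just a timed round trip: one side echoes, the other measures. A toy localhost-UDP version in Python (nothing GAMMA-specific; loopback latency will be far above the NIC-level numbers quoted):

```python
import socket
import threading
import time

def echo_server(sock, n):
    """Echo n datagrams straight back to their sender."""
    for _ in range(n):
        data, addr = sock.recvfrom(64)
        sock.sendto(data, addr)

N = 1000
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))            # pick any free port
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(5)                     # avoid hanging on a lost packet

t = threading.Thread(target=echo_server, args=(server, N))
t.start()

start = time.perf_counter()
for _ in range(N):
    client.sendto(b"ping", server.getsockname())
    client.recvfrom(64)                  # block until the echo returns
elapsed = time.perf_counter() - start
t.join()
server.close()
client.close()

# One-way latency is half the measured round-trip time.
print(f"~{elapsed / N / 2 * 1e6:.1f} usec one-way over loopback")
```

GAMMA reaches its ~6 usec figures by bypassing the kernel network stack; the loopback number here mostly measures syscall overhead.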

I've also put out (two weeks ago) a new release of MPI/GAMMA, available at
www.disi.unige.it/project/gamma/mpigamma/
based on MPICH 1.2.7 .

Work in progress: support for Flat Neighbourhood Networks (FNN).
This is almost done; we will test it ASAP (the code is already inside this
release, but dormant).

Any feedback will be appreciated.  Thank you, and regards,

Giuseppe Ciaccio   http://www.disi.unige.it/person/CiaccioG/
DISI - Universita' di Genova   via Dodecaneso 35   16146 Genova,   Italy
phone +39 10 353 6637  fax +39 010 3536699 [EMAIL PROTECTED]

___
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf

- End forwarded message -


[agi] [EMAIL PROTECTED]: [silk] moderating online conversations]

2006-07-28 Thread Eugen Leitl
- Forwarded message from Udhay Shankar N [EMAIL PROTECTED] -

From: Udhay Shankar N [EMAIL PROTECTED]
Date: Fri, 28 Jul 2006 09:50:40 +0530
To: silklist@lists.hserus.net
Subject: [silk] moderating online conversations
X-Mailer: QUALCOMM Windows Eudora Version 7.0.1.0
Reply-To: silklist@lists.hserus.net

From Teresa Nielsen Hayden, "Some things I know about moderating
conversations in virtual space":

http://nielsenhayden.com/makinglight/archives/006036.html#006036

1. There can be no ongoing discourse without some degree of 
moderation, if only to kill off the hardcore trolls. It takes rather 
more moderation than that to create a complex, nuanced, civil 
discourse. If you want that to happen, you have to give of yourself. 
Providing the space but not tending the conversation is like 
expecting that your front yard will automatically turn itself into a garden.

2. Once you have a well-established online conversation space, with 
enough regulars to explain the local mores to newcomers, they'll do a 
lot of the policing themselves.

3. You own the space. You host the conversation. You don't own the 
community. Respect their needs. For instance, if you're going away 
for a while, don't shut down your comment area. Give them an open 
thread to play with, so they'll still be there when you get back.

4. Message persistence rewards people who write good comments.

5. Over-specific rules are an invitation to people who get off on 
gaming the system.

6. Civil speech and impassioned speech are not opposed and mutually 
exclusive sets. Being interesting trumps any amount of conventional 
politeness.

7. Things to cherish: Your regulars. A sense of community. Real 
expertise. Genuine engagement with the subject under discussion. 
Outstanding performances. Helping others. Cooperation in maintenance 
of a good conversation. Taking the time to teach newbies the ropes.

All these things should be rewarded with your attention and praise. 
And if you get a particularly good comment, consider adding it to the 
original post.

8. Grant more lenience to participants who are only part-time jerks, 
as long as they're valuable the rest of the time.

9. If you judge that a post is offensive, upsetting, or just plain 
unpleasant, it's important to get rid of it, or at least make it hard 
to read. Do it as quickly as possible. There's no more useless advice 
than to tell people to just ignore such things. We can't. We 
automatically read what falls under our eyes.

10. Another important rule: You can let one jeering, unpleasant jerk 
hang around for a while, but the minute you get two or more of them 
egging each other on, they both have to go, and all their recent 
messages with them. There are others like them prowling the net, 
looking for just that kind of situation. More of them will turn up, 
and they'll encourage each other to behave more and more 
outrageously. Kill them quickly and have no regrets.

11. You can't automate intelligence. In theory, systems like 
Slashdot's ought to work better than they do. Maintaining a 
conversation is a task for human beings.

12. Disemvowelling works. Consider it.

13. If someone you've disemvowelled comes back and behaves, forgive 
and forget their earlier gaffes. You're acting in the service of 
civility, not abstract justice.
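Disemvowelling (point 12) is easy to implement: strip the vowels so a comment stays barely legible but loses its force. A sketch in Python:

```python
VOWELS = set("aeiouAEIOU")

def disemvowel(text: str) -> str:
    """Strip vowels, leaving consonants, spacing and punctuation intact."""
    return "".join(ch for ch in text if ch not in VOWELS)

print(disemvowel("You are all idiots and I claim my five pounds"))
# -> Y r ll dts nd  clm my fv pnds
```

Because spacing and punctuation survive, the disemvowelled text is still attributable and roughly readable, which is the point: public correction without outright deletion.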

-- 
((Udhay Shankar N)) ((udhay @ pobox.com)) ((www.digeratus.com))


- End forwarded message -


Re: [agi] [META] Is there anything we can do to keep junk out of the AGI Forum?

2006-07-27 Thread Eugen Leitl
On Wed, Jul 26, 2006 at 10:20:17PM -0400, Mike Dougherty wrote:

not only mailing lists; I'd say they're a bane everywhere.

In case you need a moderator, I'm here.



[agi] Penn researchers calculate how much the eye tells the brain

2006-07-27 Thread Eugen Leitl

PENN Medicine is a $2.9 billion enterprise dedicated to the related missions of 
medical education, biomedical research, and high-quality patient care. PENN 
Medicine consists of the University of Pennsylvania School of Medicine (founded 
in 1765 as the nation's first medical school) and the University of 
Pennsylvania Health System.

Penn's School of Medicine is ranked #2 in the nation for receipt of NIH
research funds, and ranked #3 in the nation in U.S. News & World Report's most
recent ranking of top research-oriented medical schools. Supporting 1,400
full-time faculty and 700 students, the School of Medicine is recognized
worldwide for its superior education and training of the next generation of 
physician-scientists and leaders of academic medicine.

The University of Pennsylvania Health System includes three hospitals, all of 
which have received numerous national patient-care honors (Hospital of the 
University of Pennsylvania; Pennsylvania Hospital, the nation's first hospital; 
and Penn Presbyterian Medical Center); a faculty practice plan; a primary-care 
provider network; two multispecialty satellite facilities; and home care and 
hospice.




Re: [agi] Flow charts? Source Code? .. Computing Intelligence? How too? ................. ping

2006-07-25 Thread Eugen Leitl
On Tue, Jul 25, 2006 at 11:23:54AM +0200, Shane Legg wrote:

> When measuring the intelligence of a human or other animal you
> have to use an appropriate test -- clearly cats can't solve linguistic

Cats and people share common capabilities, which can be tested
for by the same test. A human or a dog fetching a stick is
very much the same thing.

> problems and even if they could they can't use a pen to write down
> their answer.  Thus intelligence tests need to take into account the

Clearly, behaviour evaluation to assess task completion applies to
any system in any environment. In most environments, a human observer
would evaluate very well, especially if it's an interactive
learning and/or reward/punishment scenario requiring communication.

> environment that the agent needs to deal with, the ways in which it
> can interact with its environment, and also what types of cognitive
> abilities might reasonably be expected.  However it seems unlikely
> that AIs will be restricted to having senses, cognitive abilities or
> environments that are like those of humans or other animals.  As

AIs are built to solve tasks. Calling human sensory capabilities
"restricted" in comparison to an AI's is cause for some serious
amusement. There are a very few domains where AIs excel in perception
(sniffing packets, operating in multidimensional spaces, and similar),
but they're not AGIs. They're very brittle, domain-specific problem
solvers.

> such the ways in which we measure intelligence, and indeed our
> whole notion of what intelligence is, needs to be expanded to
> accommodate this.

Once AGIs perform as well as animal or human subjects in task
completion, you don't have to worry about defining intelligence
metrics. You'd be too busy trying to stay alive.



Re: [agi] Flow charts? Source Code? .. Computing Intelligence? How too? ................. ping

2006-07-24 Thread Eugen Leitl
On Sat, Jul 22, 2006 at 07:48:10PM +0200, Shane Legg wrote:

> After some months looking around for tests of intelligence for
> machines what I found

Why would machines need a different test of intelligence than
people or animals? Stick them into the Skinner box, make
them solve mazes, make them find food and collaborate with
others in task-solving, etc.

The nice thing is that people build environments where machines
and people can interact virtually; they only call
them games for some strange reason.

> was... not very much.  Few people have proposed tests of intelligence
> for machines, and other than the Turing test, none of these tests
> have been developed or used much.  Naturally I'd like universal
> intelligence, which Hutter and I have formulated, to lead to a
> practical test that is widely used.  However, making the test
> practical poses a number of problems, the most significant of which,
> I think, is the sensitivity that universal intelligence has to the
> choice of reference universal Turing machine.  Maybe, with more
> insights, this problem can be, if not solved, at least dealt with in
> a reasonably acceptable way?
>
> Shane



Re: [agi] Processing speed for core intelligence in human brain

2006-07-14 Thread Eugen Leitl
On Fri, Jul 14, 2006 at 02:50:23PM -0400, Eric Baum wrote:
 
 Eugen> Groan. The whole network computes. The synapse is just an
 Eugen> element.  Also: you're missing on connectivity,
 Eugen> reconfigurability, synapse type and strength issues.
 
 I'll definitely grant you reconfigurability. Might be fairer
 to compare to a programmable array.

Not really. You can have divergent factors of 10^4, and
convergent factors of some 10^5. You can't do this with
a 2.5-dimensional substrate with severe fanout issues.
Actually, an FPGA with on-die memory and a signalling
mesh fabric is probably entry-level hardware for AI --
several thousands of them.
 
 Well, on this we differ. I can appreciate how you might think memory
 bandwidth was important for some tasks, although I don't, but
 I'm curious why you think it's important for planning problems like

An AGI is a general intelligence. Your hardware has to have
enough performance to execute a general AI core. There are some
10^11 cells in the CNS, each with connectivity ranging
into the high 10^3, and each site operates in the 10^3 Hz range.
Assuming 64-bit words per site, that's 10^17 words/s. The best
case (the worst case is 10^1..10^2 worse) for today's memory
is some 10^9 words/s. I think you would agree that a missing
factor of 10^8 is not negligible. And since the delta between
sequential memory access and CPU speed (which is not Moore, btw)
is itself an exponential function, you'll see we're
running into problems. Never mind that the strictly sequential
buck stops well before THz (10^12 Hz) rates, and you have to
go parallel (about 10^6 cores parallel, if my math is
accurate, which it probably isn't).
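The arithmetic in that paragraph can be checked in a few lines. This is a back-of-envelope sketch using the post's own rough figures (10^11 cells, ~10^3 connectivity, ~10^3 Hz, 10^9 words/s memory), not measurements:

```python
# Back-of-envelope check of the memory-bandwidth gap argued above.
# All figures are the post's own rough estimates, not measurements.
neurons = 10**11      # cells in the CNS
synapses = 10**3      # connectivity per cell (high 10^3 range)
rate_hz = 10**3       # update rate per site, Hz

required = neurons * synapses * rate_hz   # 64-bit words/s needed
available = 10**9                         # best-case sequential memory, words/s

print(f"required:  {required:.0e} words/s")        # 1e+17
print(f"available: {available:.0e} words/s")       # 1e+09
print(f"shortfall: {required // available:.0e}x")  # 1e+08
```

That 10^8 shortfall is the missing factor the post refers to, before even accounting for the growing gap between memory latency and CPU speed.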

 Sokoban or Go, or a new planning game I present your AI on the fly,
 or whether you think whatever your big memory intensive
 approach is will solve those.

The world is complex. You need a lot of bits to represent that state,
even more bits for making forecasts, and a little more
for system housekeeping. It only appears memory-intensive if you're
unfamiliar with the problem set of an AGI. That problem domain is
many orders of magnitude removed from such trivial toys as a chess
program that beats human grandmasters.

 As you know, I argued that the problem of designing the relevant software
 is NP-hard at least, so it is not clear that it can be cracked without
 employing massive computation in its design, anymore than a team of
 experts could solve a large TSP problem by hand.

I agree that human experts can't produce an AGI. They might not even be
able to produce a seed for an AGI. I think they are (barely)
capable of building the boundary conditions for the emergence of an
AGI, if given enough hardware resources (a mole of bits).
 
 However, I have an open mind on this, which I regard as the critical
 issue for AGI.
 
 
 Mark> VERY few Xeon transistors are used per clock tick.  Many, many,
 Mark> MANY more brain synapses are firing at a time.
 
 How many Xeon transistors per clock tick? Any idea?

The Xeon alone is useless. Of the some 10 billion transistors in
a current desktop PC, most of them are idle DRAM cells.

 I recall estimating .001 of neurons were firing at any given time
 (although I no longer recall how I reached that rough guesstimate.)
 And remember, the Xeon has a big speed factor.

What is a speed factor, kemo sabe?



[agi] [EMAIL PROTECTED]: Connectionists: CFP: Dynamics and Psychology]

2006-07-12 Thread Eugen Leitl
- Forwarded message from Aarre Laakso [EMAIL PROTECTED] -

From: Aarre Laakso [EMAIL PROTECTED]
Date: Wed, 12 Jul 2006 09:43:29 -0400
To: undisclosed-recipients: ;
Subject: Connectionists: CFP: Dynamics and Psychology
Organization: Indiana University, Bloomington
User-Agent: Thunderbird 1.5.0.4 (Macintosh/20060530)

*** APOLOGIES FOR MULTIPLE POSTINGS ***

SECOND CALL FOR PAPERS

*New Ideas in Psychology*

Published by Elsevier Science B.V. ISSN 0732-118X, URL:
http://authors.elsevier.com/JournalDetail.html?PubID=678

A Special Issue on 'Dynamics and Psychology'

GUEST EDITORS

Paco Calvo (U. Murcia, Spain)
Aarre Laakso (Indiana University, USA)
Toni Gomila (U. Illes Balears, Spain)

Paper Submission Deadline: September 30th, 2006


New Ideas in Psychology is calling papers for a special issue entitled
'Dynamics and Psychology'.

The purpose of this special issue is to bring together some of the
leading views on dynamicism as it relates to psychological phenomena.
Although the primary focus is on conceptual ideas regarding the status
of dynamicism from the standpoint of Developmental Psychology, Cognitive
Science, Artificial Intelligence, Philosophy, and related fields,
empirical work is also welcome insofar as it bears explicitly upon
theoretical debate.

New Ideas in Psychology invites original contributions for the
forthcoming special issue on Dynamics and Psychology from a broad scope
of areas. Some key research issues and topics relevant to this special
issue include:

*Brain and cognitive function
*Categorical perception
*Dynamic computer simulations
*Dynamic field approach
*Dynamic systems theory and developmental theory
*Dynamics of control of processing
*Dynamics of social interaction
*Emergence
*Intermodality
*Language development
*Mental representation
*Motor development
*Neurobiological constraints
*Perceptual learning
*Self-organization of behavior
*Sensory-motor and perception-action loops
*Temporality

SUBMISSION INSTRUCTIONS AND DEADLINE

Manuscripts, following the New Ideas in Psychology guidelines
(http://authors.elsevier.com/GuideForAuthors.html?PubID=678&dc=GFA)
should be emailed to Paco Calvo ([EMAIL PROTECTED]) by September 30th, 2006.

INVITED CONTRIBUTORS

The special issue will include invited papers by:

Dante Chialvo (Northwestern University Medical School, Chicago)
Eliana Colunga (Colorado, Boulder) and Linda Smith (Indiana University)
Rick Grush (UCSD)
Aarre Laakso (Indiana University)
John Spencer (University of Iowa)

RELATED AND SAMPLE ARTICLES

*Bechtel, W. (1998) Representations and cognitive explanations:
assessing the dynamicist's challenge in cognitive science, Cognitive
Science, 22, 295-318.

*Beer, R. D. (1995) A dynamical systems perspective on
agent-environment interaction, Artificial Intelligence, 72, 173-215.

*Clark, A. (1997) The dynamical challenge, Cognitive Science, 21, 461-481.

*Erlhagen, W. & Schöner, G. (2002) Dynamic field theory of movement
preparation, Psychological Review, 109, 545-572.

*Nuñez, R. & Freeman, W.J. (1999) Reclaiming cognition: the primacy of
action, intention and emotion. Imprint Academic.

*Prinz, J. J., & Barsalou, L. W. (2000) Steering a course for embodied
representation, In E. Dietrich & A. B. Markman (Eds.), Cognitive
dynamics: Conceptual and representational change in humans and machines
(pp. 51-77). Mahwah, NJ: Lawrence Erlbaum Associates.

*Spencer, J.P. & Schöner, G. (2003) Bridging the representational gap
in the dynamic systems approach to development, Developmental Science,
6, 392-412.

*Sporns, O., Chialvo, D., Kaiser, M. & Hilgetag, C. (2004)
Organization, development and function of complex brain networks,
Trends in Cognitive Sciences, 9, 418-425.

*Thelen, E., Schöner, G., Scheier, C. & Smith, L. (2001) The dynamics
of embodiment: A field theory of infant perseverative reaching,
Behavioral and Brain Sciences, 24, 1-86.

*Townsend, J. T., & Busemeyer, J. (1995) Dynamic representation of
decision making, In R. F. Port & T. Van Gelder (Eds.), Mind as motion.
Cambridge, MA: MIT Press.

*Turvey, M. T., & Carello, C. (1995) Some dynamical themes in
perception and action, In R. F. Port & T. Van Gelder (Eds.), Mind as
motion. Cambridge, MA: MIT Press.

*van Gelder, T. (1998) The dynamical hypothesis in Cognitive Science,
Behavioral and Brain Sciences, 21, 615-665.

GUEST EDITORS

Paco Calvo
Departamento de Filosofía
Universidad de Murcia
E-30100 Murcia - SPAIN
e-mail: [EMAIL PROTECTED]

Aarre Laakso
Department of Psychology
Indiana University
1101 East 10th Street
Bloomington, IN 47405
e-mail: [EMAIL PROTECTED]

Toni Gomila
Department of Psychology
University of the Balearic Islands
E-07122 Palma de Mallorca - SPAIN
e-mail: [EMAIL PROTECTED]

- End forwarded message -

Re: [agi] Measuerabel Fitness Functions?.... Flow charts? Source Code? .. Computing Intelligence? How too? ................. ping

2006-07-06 Thread Eugen Leitl
On Thu, Jul 06, 2006 at 10:28:57AM -0400, Danny G. Goe wrote:

 What are the measurable fitness functions that can be built into AI?

You don't seem to be really communicating, only going through the motions.



Re: [agi] Computing Intelligence? How too? ................. ping

2006-07-05 Thread Eugen Leitl
On Wed, Jul 05, 2006 at 03:19:15PM +, [EMAIL PROTECTED] wrote:

 What are the best methods of computing algorithm fitness or intelligence? 

If intelligence is ability to solve hard tasks, then task completion
is your fitness function. If you want to evaluate it automatically
(people as observers don't really scale, but are very effective,
especially for complex tasks), a good first step is to build an
artificial reality simulator, and set up your problem there. Game
engines would be natural, and neatly allow combining human and
nonhuman agent interactions.
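A minimal sketch of "task completion as fitness function" as described above. The agent/task interface here is hypothetical, not from any particular simulator or game engine:

```python
def task_completion_fitness(agent, tasks):
    """Fraction of tasks the agent completes: the simplest scalar
    fitness an artificial-reality simulator could report."""
    completed = sum(1 for task in tasks if agent(task))
    return completed / len(tasks)

# Toy deterministic example: an agent that can fetch but not navigate.
agent = lambda task: task == "fetch"
score = task_completion_fitness(agent, ["fetch", "navigate", "fetch"])
print(score)  # 2/3
```

In practice, `agent(task)` would stand for running a full episode in the simulated environment and reporting success or failure, with human observers substituting for the success check on tasks too complex to score automatically.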



[agi] [EMAIL PROTECTED]: [Htech] what is it like to be a spider?]

2006-06-30 Thread Eugen Leitl
- Forwarded message from Eugen Leitl [EMAIL PROTECTED] -

From: Eugen Leitl [EMAIL PROTECTED]
Date: Fri, 30 Jun 2006 15:30:48 +0200
To: [EMAIL PROTECTED]
Subject: [Htech] what is it like to be a spider?
User-Agent: Mutt/1.5.9i
Reply-To: [EMAIL PROTECTED]


http://lemonodor.com/archives/001409.html

What Is It Like to be a Spider?

jumping spider watching tv

Last night the Institute for Figuring sponsored a lecture on spider vision and 
cognition by Simon Pollard: “What Is It Like to be a spider?”

Dr. Pollard started by saying that spiders are not the little automatons we 
might think they are (hey, I'm pretty sure humans are automatons, but I don't 
hold it against them). It seems that jumping spiders (one particular group of 
jumping spiders, Portia, was the focus of the talk) exhibit some behaviors that 
most people would think of as too complicated for a spider, which just reminded 
me of Valentino Braitenberg's “law of uphill analysis and downhill invention.”

Jumping spiders will spot a prey spider sitting in a web, then take a route 
that eventually gets them closer to the prey while staying hidden from it. This 
hunting “detour” may take hours, during which the jumping spider often does not 
have the prey in visual contact. That is, it seems to have “memory” of a “goal”.

Jumping spiders have amazing vision for such small creatures—something like 1/5 
the visual acuity of human beings while being smaller than our eyeballs. They 
can recognize (and will attack) TV images.

They achieve this visual acuity through a pair of telescopic eye tubes.

jumping spider eye tubes

The fovea is small with a very narrow field of view, but the spiders seem to 
build a detailed image by scanning an object of interest—which they do without 
moving their heads or bodies or eye lenses, but by pivoting the eye tubes 
inside their heads. The video of this was a big hit, with everyone making 
surprised and amazed noises when they saw a spider, sitting motionless, in 
front of a powerful backlight that allowed us to see the shadows of the eye 
tubes swiveling like mad inside its translucent head, taking in the scene.

“...when we look into Portia's dark, bulging eyes, the lights are on, 
somebody's at home, and a lot more than an eight-legged automaton is staring 
back.” 

http://theiff.org/lectures/17.html

The Institute for Figuring
Announces the third lecture in our Spring 2006 series
The Insect Trilogy

WHAT IS IT LIKE TO BE A SPIDER
By Dr. Simon Pollard [IFF-16]
Wednesday, June 28 @ 7:30pm
Hosted at Telic Arts Exchange in Chinatown/ Los Angeles
975 Chung King Road
Los Angeles, CA 90012
A jumping spider watching a cartoon spider on TV attacks the virtual competitor 
as fiercely as if it were the real thing. Photo courtesy Dr. Duane Harland.


In the skies over Lake Victoria on the border of Kenya and Uganda swarms of 
lake flies mass in clouds so thick they block out the sun. From this dense 
throng a tiny jumping spider on the ground below can pick out a single mosquito 
– a hapless victim whose blood engorged stomach will serve as its next meal. 
Possessing almost feline hunting skills, jumping spiders can see better than 
any other invertebrate and several orders of magnitude better than any insect. 
Though their heads are far too tiny to contain a spherical eyeball, jumping 
spiders have developed eyes with an acuity on a par with mammals. 
Astonishingly, their miniscule brains can comprehend images on a television 
screen – a feat of mental processing previously thought impossible for an 
invertebrate mind.

A Singaporean jumping spider. Two front facing eyes are complemented by two 
other sets of eyes that allow the animal to see in 360 degree panoramic scope. 
Here, 4 of the 6 eyes are visible. Photo courtesy Dr Simon Pollard.
Everything about a jumping spider’s vision system demands our admiration – 
beginning with the number of eyes. In addition to two forward facing lenses 
that jut out on stalks from the front of its head, a jumping spider has four 
peripheral eyes that enable it to see at the back of its head. The two front 
eyes operate on the same principle as a Galilean telescope, the result of the 
same evolutionary strategy to that taken by eagles and falcons. Information 
from all six eyes is processed by a brain that contains just a few hundred 
thousand neurons yet is capable of recognizing television pictures. In this 
lecture, Dr Simon Pollard will talk about the physics, neurology and perceptual 
psychology of how a spider sees the world.
Jumping spider of the species evarcha, sizing up a mosquito lure. All spiders 
are liquid feeders - they must liquify a meal before they can eat it. Most 
spiders do this by pumping their own stomach juices into the prey turning it 
into an extension of their own guts. Evarcha gets around this step by siphoning 
blood directly from the stomach of a freshly engorged mosquito. Photo courtesy 
Dr Simon Pollard.

Dr Simon Pollard has been studying vision

Re: [agi] [EMAIL PROTECTED]: [Htech] what is it like to be a spider?]

2006-06-30 Thread Eugen Leitl
On Fri, Jun 30, 2006 at 03:17:50PM +0100, Bob Mottram wrote:

 Last year I read a book called A Spider's World: Senses and Behavior, which
 gives a good overview of their neuroanatomy, which as you might expect is
 quite different from a mammal's.

What impresses me is the high acuity, and a detailed world model
built by serial scanning of the world with a high-resolution spot.
Very much like us (fovea), but implemented in just 10 neurons. It
would definitely be interesting to build an industrial/military
or exploratory robot on a spider or a crab model.

Similarly, some birds manage to package a lot of general intelligence in
a smaller footprint than our average higher primate.



[agi] [EMAIL PROTECTED]: [rael-science] First Molecular Proof That Some Aspects of Aging Are Out of Our Control]

2006-06-25 Thread Eugen Leitl
, 
Gary Chisholm, and Brad Pollock, from the University of Texas Health 
Science Center; Claudia Hartmann and Christoph Klein from the 
Institute for Immunology, Ludwig-Maximilians University in Munchen, 
Germany; and Martijn E. T. Dolle, from the National Institute of 
Public Health and the Environment, Bilthoven, the Netherlands. The 
work was supported by a grant from the National Institutes of Health 
and a BioFuture Grant from the German Federal Ministry for Education 
and Science. 

Source: Buck Institute for Age Research 

- End forwarded message -


[agi] [EMAIL PROTECTED]: [nsg] Meeting Announcement]

2006-06-19 Thread Eugen Leitl
- Forwarded message from Fred Hapgood [EMAIL PROTECTED] -

From: Fred Hapgood [EMAIL PROTECTED]
Date: Mon, 19 Jun 2006 12:47:12 -0400
To: Nanotech Study Group [EMAIL PROTECTED]
Subject: [nsg] Meeting Announcement
X-Mailer: MIME::Lite 5022  (F2.72; T1.15; A1.62; B3.04; Q3.03)


Meeting notice: The 060620 meeting will be held at 7:30 P.M. at the
Royal East (782 Main St., Cambridge), a block down from the corner of
Main St. and Mass Ave.  If you're new and can't recognize us, ask the
manager. He'll probably know where we are. More details below.

Suggested topic: Minsky's perspectives on AI

Many of us believe that there is a function or application that is to
cognition what the cell is to the body, a logical action out of which all
the kinds and competencies of intelligence can be built, either by
increasing the size or population count of this unit, or by introducing
families of trivial modifications to the basic design, or both. Alan
Turing thought it was search; Jeff Hawkins thinks it is prediction; some
among us think it is object recognition.

There are some reasons to expect cognition to have such a unit: the
basic anatomy of the cortex looks much the same everywhere and we all
know that evolution strongly prefers to reuse and repurpose already
established features as opposed to cooking up brand new solutions.
Plus, if cognition has a basic unit, finding and then simulating that
unit might gain us lots of leverage over the development of AI.  Not
that that is a reason to believe in it, of course.

However, this is not a universal position.  Marvin Minsky in particular
has for decades maintained that the human mind is composed of many forms
of intelligence, forms that might have and probably do have very
different underlying structures.  The most recent elaboration of these
ideas appears in An Architecture for Cognitive Diversity by Push Singh
and Marvin Minsky, a 2004 publication of the Media Lab's.  See
http://web.media.mit.edu/~push/CognitiveDiversity.html.

While the authors do not take on the lowest common unit theory
directly, in the sense that they do not attempt to show why it is wrong,
they go after it indirectly in just about every other sentence.  In the
process a lot of really interesting speculations get thrown off about
the kinds of intelligence we use in everyday life -- as, in the authors'
inspired example, when we put a pillow in a pillow case.

I was especially taken with the authors' hierarchy of reflectiveness. I
quote: "An important feature of our architecture [by which the authors
mean the architecture presented in the paper] is that it is designed to
be highly self-reflective and self-aware, so that it can recognize and
understand its own capabilities and limitations, and debug and improve
its abilities over time. In contrast, most architectural designs in
recent years have focused mainly on ways to react or deliberate—with no
special ability to reflect upon their own behavior or to improve the way
they think about things. In our architecture, agents are organized into
a tower of reflection consisting of six layers ..."

These layers are:

Innate or instinctive reactions. 
Learned reactions. 
Deliberative thinking. (Model-building.) 
Reflective thinking.  (Am I going about this all wrong?) 
Self-reflective thinking.  (Is this what I want to be doing with my
life?) 
Self-conscious thinking.  (What would Minsky say if he were here now?)

Also, take a look at this paper, referenced in the footnotes.

http://researchweb.watson.ibm.com/journal/sj/413/forum.html#part2



++

In twenty years half the population of Europe will have visited the
moon.

-- Jules Verne, 1865

+

Announcement Archive: http://www.pobox.com/~fhapgood/nsgpage.html.

+

Legend:

NSG expands to Nanotechnology Study Group.  The Group meets on the
first and third Tuesdays of each month at the above address, which
refers to a restaurant located in Cambridge, Massachusetts.

The NSG mailing list carries announcements of these meetings and little
else. If you wish to subscribe to this list (perhaps having received a
sample via a forward) send the string 'subscribe nsg'  to
[EMAIL PROTECTED]  Unsubs follow the same model.

Comments, petitions, and suggestions re list management to:
[EMAIL PROTECTED]   www.pobox.com/~fhapgood


___
Nsg mailing list
[EMAIL PROTECTED]
http://polymathy.org/mailman/listinfo/nsg_polymathy.org

- End forwarded message -
-- 
Eugen* Leitl a href=http://leitl.org;leitl/a http://leitl.org
__
ICBM: 48.07100, 11.36820http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE

---
To unsubscribe, change your address, or temporarily deactivate your 
subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED

[agi] [EMAIL PROTECTED]: Re: [Beowulf] Slection from processor choices; Requesting Giudence]

2006-06-17 Thread Eugen Leitl
- Forwarded message from Douglas Eadline [EMAIL PROTECTED] -

From: Douglas Eadline [EMAIL PROTECTED]
Date: Sat, 17 Jun 2006 08:25:05 -0400 (EDT)
To: beowulf@beowulf.org
Subject: Re: [Beowulf] Slection from processor choices; Requesting Giudence
User-Agent: SquirrelMail/1.4.6-5.el4.centos4


 Several persons replied and not a SINGLE ONE of them talks about
 one way pingpong latency, which is one of the most important features
 of a highend network for *many* applications.

Sigh. As I mentioned in my past post, go to the link below.
Understand that it requires reading skills.

  http://www.clustermonkey.net//content/view/121/33/1/1/

Look at Table One and the two graphs; there are numbers there.
Read the numbers. They are MPI single-byte latencies for NetPipe.

GigE is not perfect. My point is that for many applications
it can work well. There are many other applications
that need better networking. It is a price to performance
argument.

BTW, the NICs I used in the link above were $35 Intel MT/1000
desktop (32-bit PCI) cards. I managed to get 14.6 HPL GFLOPS
and 4.35 GROMACS GFLOPS out of 8 nodes consisting of hardware
with a total cost of $2500 (much less at today's prices).
Background is at the following link:

 http://www.clustermonkey.net//content/view/41/33/

As a point of reference, a quad opteron 270 (2GHz) reported
4.31 GROMACS GFLOPS.


-- 
Doug
___
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf

- End forwarded message -


Re: [agi] How the Brain Represents Abstract Knowledge

2006-06-14 Thread Eugen Leitl
On Wed, Jun 14, 2006 at 04:28:36AM +0800, Yan King Yin wrote:

 I have to agree that NN can represent all forms of knowledge, since our
 brains are NNs.  But figuring out how to do that in artificial systems must
 be pretty difficult.  I should also mention Ron Sun's work, he has

Actually, all you need is a rich morphogenetic code which prewires your
virtual cortex, hardware general and powerful enough to run
it in realtime, plus a virtual environment -- and multiply all this
by a factor of 10^4..10^6 to contain a population. Let
evolution discover the rest: which types of small-integer automata,
which synapse types, which connectivity pattern, and how it changes over time.

A fabric of FPGA/memory cores would be enough today -- unfortunately,
it would require custom hardware, and thus be prohibitively expensive.
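A toy sketch of the "let evolution discover the rest" step: a genome over a few wiring parameters, truncation selection, and mutation. The parameter names and the stand-in objective are illustrative only; in the scheme above, fitness would be behaviour evaluation of the wired-up network in the virtual environment:

```python
import random

random.seed(42)

# Hypothetical genome: a few wiring parameters evolution is free to tune.
def random_genome():
    return {"synapse_type": random.randint(0, 3),
            "fanout": random.randint(1, 100),
            "decay": random.random()}

def mutate(genome):
    child = dict(genome)
    key = random.choice(sorted(child))
    child[key] = random_genome()[key]  # resample one parameter
    return child

def fitness(genome):
    # Stand-in objective; in reality: task completion of the resulting
    # network in a virtual environment, evaluated over its lifetime.
    return genome["fanout"] - 50 * abs(genome["decay"] - 0.5)

population = [random_genome() for _ in range(20)]
for _ in range(30):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                       # truncation selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(15)]

best = max(population, key=fitness)
```

The 10^4..10^6 multiplier in the post corresponds to the population size here; the hard part is not this loop, but making each fitness evaluation (a full developmental run plus lifetime in the environment) affordable.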

 long tried to reconcile neural and symbolic processing.  I studied NNs/ANNs
 for some time, but I recently switched camp to the more symbolic side.
 
 One question is whether there is some definite advantage to using NNs
 instead of say, predicate logic.  Can you give an example of a thought, or a
 line of inference, etc, for which the NN-type representation is particularly
 suited?  And that has an advantage over the predicate logic representation?

Getting a response to a complex stimulus within 50 ms.

 John McCarthy proposed that predicate logic can represent 'almost'
 everything.

Of course, but who's going to produce the VHDL for you?
 
 If NN-type representation is not necessarily required, then we should
 naturally use symbolic/logic representations since they are so much more
 convenient to program and to run on von Neumann hardware.

You can't make AI on von Neumann (in the general sense) hardware.
If you have 10^6 meshed cores it's no longer von Neumann.



Re: Worthwhile time sinks was Re: [agi] list vs. forum

2006-06-13 Thread Eugen Leitl
On Tue, Jun 13, 2006 at 05:05:56AM +0530, sanjay padmane wrote:
 Even though only a few have reacted to my (somewhat threatening ;-) )
 proposal to discontinue this list, it seems that people are comfortable with

I would call it a troll.

 it, anyhow...

You seem to be new to the Internet. I suggest you take it slow, and do
your research instead of posting reflexively on merits of technology
you're not familiar with.

 Someone can experiment with automated posting of all forum messages to the

Hey, it was your suggestion, you do it. Just download the list manager,
and hack it. It's easy, right? And don't forget automatic categorization, 
plaintext and multipart support, and a search engine, and anti-spam measures, 
and authentication, and to make my browser spawn my favourite editor,
instead of pasting into a form, and server-side filtering, and distributed 
archives, 
and push, while you're at it. And don't forget to build a community about 
your project, in order to support it, and to issue security fixes for
the hundreds of bugs you'll find in a new project of such complexity. 
Gosh, email is sure retarded, having all these features a forum doesn't 
have, which you'll find are absolutely trivial to implement. Get back to us 
when you're done, will you?

 list, as and when they are created.
 
 Speaking of high quality, you are the best person to do that :-). As I'm
 only starting in AGI etc, I've only questions and speculations to post. I've
 not done that because I'm afraid of sinking agi-forums to the level of
 agi-n00b-forums. But I'll take that risk someday, I can delete the post
 (unlike in a list), if it sounds too low quality.

That's not a bug, that's a feature. And you can't edit my local inbox, and
it won't go away when the machine with the list archives dies (trust me,
eventually they all do).
 
 On the suggestion of creating a wiki, we already have it here
 http://en.wikipedia.org/wiki/Artificial_general_intelligence , as you know,
 and its exposure is much wider. I feel, wiki cannot be a good format for
 discussions. No one would like their views edited out by a random user. It
 serves the purpose best, when the knowledge is already established.



Re: [agi] How the Brain Represents Abstract Knowledge

2006-06-13 Thread Eugen Leitl
On Tue, Jun 13, 2006 at 04:15:35PM +0100, Russell Wallace wrote:

 Okay, to put it in a less facetious-sounding way: It is worth bearing in
 mind that biological neural nets are _very bad_ at syntactic symbol
 manipulation; consider the mindboggling sophistication and computing power
 in a dolphin's brain, for example, and note that it is completely incapable

Representing and manipulating formal systems is a very recent component
in the fitness function, and hence not well-optimized.

 of doing any such thing. Even humans aren't particularly good at it: our
 present slow, simple, crude computers can do things like symbolic
 differentiation millions of times faster and more accurately than we can.

And how little that helps them to navigate reality.
 
 The point being, we tend to try to answer how questions by looking for
 simple, efficient methods - but biology suggests (albeit doesn't prove) that
 the reason we can't see a simple, efficient way for NNs to handle syntactic
 knowledge is that there isn't one; that researchers trying to use NNs or the
 like for AGI may have to bite the bullet and look for complex, expensive
 solutions to this problem.

The world is complicated. There are no simple solutions that work over
all domains in the real world.
 
 (My own reaction to this is the same as yours, incidentally: to go straight
 for symbolic mechanisms as fundamental components in the belief that this
 plays better to the strengths of digital hardware. That doesn't mean NNs

What are the strengths of digital hardware, in your opinion?

 can't succeed, but it does suggest that they'll have to hit this problem
 head-on and resign themselves to throwing a lot of resources at it, in
 somewhat the same way that we on the symbolic side of the fence will have to
 resign ourselves to throwing a lot of resources at problems like visual
 perception.)

Human resources, or computational resources? If computational resources,
which architecture?



Re: [agi] How the Brain Represents Abstract Knowledge

2006-06-13 Thread Eugen Leitl
On Tue, Jun 13, 2006 at 04:38:49PM +0100, Russell Wallace wrote:

 Representing and manipulating formal system is a very recent component
 in the fitness function, and hence not well-optimized.
 
 True; but I will claim that no matter how much you optimize a biological
 neural net, it will always have characteristics such as being slow at serial
 computation, and relatively imprecise.

No disagreement. But you don't have to use live cells to build
a computational network. As to imprecision: as you scale down the geometry
and ramp up the switching speed, digital is no longer well-defined.
With many small switches you're also getting reliability problems,
so noise begins to creep in at the hardware layer.
 
 Fast serial calculation. 

In comparison to biological neurons, yes.

 Very high precision. Extreme flexibility in choice

I don't see why an automaton network can't use many bits
to represent things. There's also some question of what
you actually need very high precision for. Cryptography is a candidate;
another is physical modelling done the way a
mathematician would. I think there is a very distinct
bias, almost an agnosia, against doing it any way other
than a mathematician's.

 of operations and instant rewiring of data structures.

You can't actually rewire the circuit, so you have
to switch state which represents the circuit. It's easier
if you embrace the model of dynamically traced-out
circuitry in a computational substrate. Very few
things are instant in current memory-bottlenecked
digital computers. If you want to widen that bottleneck,
you first get a massively parallel box, and eventually
a cellular/mosaic architecture of simple computational
elements.
 
 If computational resources,
 which architecture?
 
 I don't understand the question, please clarify?

If you want to build a robot capable of playing tennis
in a heavy hail, how would you do it?



[agi] high-performance reality simulators

2006-06-12 Thread Eugen Leitl

If you're using a virtual environment for AGI testing,
are you rolling your own (if yes, is it open-sourced?), or
using an off-the-shelf one?

Are you using massive parallelism, and clusters, or 
hardware acceleration (either game physics, or GPU), or
are you running one instance/machine?

What is your ratio of wall-clock to simulation time? (Are
you at all over realtime, or do you prefer 1:1 for
better observation/interactivity?)



Re: [agi] Neural representations of negation and time?

2006-06-11 Thread Eugen Leitl
On Fri, Jun 09, 2006 at 10:42:37PM -0400, Philip Goetz wrote:

 I'm also interested in ideas about neural representations of time.

Here's an interesting recent paper about representing space,
not time: http://arxiv.org/PS_cache/q-bio/pdf/0606/0606005.pdf

 How, when memories are stored, are they tagged with a time sequence,
 so that we remember when and/or in what order they happened, and how
 do we judge how far apart in time events occurred?  Is there some
 brain code for time, with a 1D metric on it to judge distance?



Re: [agi] list vs. forum

2006-06-11 Thread Eugen Leitl
On Sat, Jun 10, 2006 at 12:41:16AM -0400, Philip Goetz wrote:

 Why do we have both an email list and a forum?
 Seems they both serve the same purpose.

Both planes and ships are means of transportation.
So why do we have both planes and ships?

Email and the web are very different media, and
in many ways complementary. A synthesis of the two
(a blog/forum/email hybrid) would be best, but this hasn't
been done properly yet.

So for the time being, the forum is a very different
medium from the list, and we should keep both
for those who're more comfortable with one or the other.



Re: [agi] list vs. forum

2006-06-11 Thread Eugen Leitl
On Sat, Jun 10, 2006 at 07:39:52PM +0530, sanjay padmane wrote:

 I feel you should discontinue the list. That will force people to post 
 there.

Or it will cause email-only users to drop out of the conversation.
Sorry, I don't do forums. I only do the web because of tabs and RSS,
and there's nothing like that available for forums. With email, message
authentication is possible; with a forum, push is not possible and
self-archiving is not possible. Maybe in another ten years we'll have a web
forum that is usable.

 I'm not using the forum only because no one else is using it (or very
 few), and everyone is perhaps doing the same.

I'm not posting to the forum because I never post to forums.
 
 Another advantage is that it will expose the discussions to google and
 it will draw more people with increasing content.

If list archives are not public, something is very wrong with this list's
architecture. 



[agi] cheap 8-core cluster

2006-06-09 Thread Eugen Leitl

http://www.tyan.com/PRODUCTS/html/typhoon_b2881.html

Notice the Direct Connect Architecture part. Online
pricing looks very reasonable.



Re: [agi] information in the brain?

2006-06-09 Thread Eugen Leitl
On Fri, Jun 09, 2006 at 01:12:49AM -0400, Philip Goetz wrote:

 Does anyone know how to compute how much information, in bits, arrives
 at the frontal lobes from the environment per second in a human?

Most of the information is visual, and the retina purportedly compresses
1:126 (obviously, some of it lossily).
http://www.4colorvision.com/dynamics/mechanism.htm
claims 23000 receptor cells on the foveola, so I would just
do a rough calculation with some 50 fps (you don't see this, but
the cells do) and 16 bit/cell (it's probably 12 bit, but it's
a rough estimate, anyway). That gives some 20 MBit/s, which
I think is way too low.

It's also somewhat artificial to start counting at the optic nerve,
since the retina is technically a part of the brain. So I would
just use the total photoreceptor count instead of just the 23 k cells
of the fovea.
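The arithmetic behind that estimate, spelled out. The foveola figures are the ones cited in the post; the whole-retina receptor count (~126 million rods plus cones) is a common textbook figure and an assumption here, not something from the cited page:

```python
# Back-of-envelope reproduction of the estimate in the post.
foveola_receptors = 23_000   # receptor cells on the foveola (cited figure)
frames_per_second = 50       # what the cells resolve, not what you perceive
bits_per_cell = 16           # generous; the post notes 12 bit is more likely

foveola_rate_bps = foveola_receptors * frames_per_second * bits_per_cell
foveola_rate_mbps = foveola_rate_bps / 1e6   # ~18.4, i.e. "some 20 MBit/s"

# The whole-retina version suggested at the end of the post: the same
# arithmetic over ~126 million photoreceptors (textbook figure, assumed).
retina_receptors = 126_000_000
retina_rate_gbps = retina_receptors * frames_per_second * bits_per_cell / 1e9
```

Counting the whole retina scales the figure up by more than three orders of magnitude, which is the point of the post's objection.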
 
 For a specific  brain region, you can compute its channel capacity if
 you know the number of neurons, and the refractory period of the
 neurons in that region, since you can compute approximate bandwidth
 per neuron as the max firing frequency.  However, that doesn't tell
 you how much information from its inputs is actually coming through
 that channel.  The channel capacity is sometimes much smaller than the
 input bandwidth, but that doesn't mean the channel is fully utilized.
 If the channel capacity going out of a region is larger than the
 channel capacity coming in, it is especially important to have some
 equation that accounts for the mutual information between inputs to
 different neurons in that area.

I'm not sure you can separate the processing so cleanly into modules
connected by interfaces. What's wrong with looking at metabolic rate,
and at how much spiking flows across an arbitrary boundary?



Re: [agi] Two draft papers: AI and existential risk; heuristics and biases

2006-06-08 Thread Eugen Leitl
On Thu, Jun 08, 2006 at 12:44:08PM -0400, Mark Waser wrote:
  How relatively simple? Evolution doesn't do simple. I doubt that any
  human goal system has a simple mathematical formalization.
 
 I guess the question is how do you define simple?  

How many bits it takes to specify the system in a poor
(unsupportive) context. E.g. a 16-bit multiplier,
versus an E. coli, or a bunny.

 What I have in mind has three really simple axioms, a 
 fourth that I suspect is provable from the first three 
 (but I don't want to fight over) and everything else 
 follows from there (although, as always, the devil is 
 in the details -- but always resolvable by referring 
 back to the original four axioms).

Let us hear these axioms, please. I don't think a
rehash of Asimov's 4 laws is going to cut the mustard,
though.
 
 Mark
 
 P.S.  And, as a side comment, evolution often does do 
 simple when there is a down-gradient path to it and 
 particularly when complex exacts a cost.  It's just 
 that human biases are struck more by the complicated 
 examples and you notice them more.

Human primates are not exactly simple, and that's about
all we really care about here.
 


Re: [agi] Two draft papers: AI and existential risk; heuristics and biases

2006-06-07 Thread Eugen Leitl
On Wed, Jun 07, 2006 at 11:32:36AM -0400, Mark Waser wrote:

 I think that we as a community need to get off our butts and start 
 building consensus as to what even the barest framework of friendliness is.  
 I think that we've seen more than enough proof that no one here can go on for 
 more than twenty lines without numerous people objecting vociferously to 
 their idea of friendliness (and just wait til you start trying to include 
 Leon Kass, your average fundamentalist Christian or your average 
 fundamentalist Muslim).  But you've gone off and invented a magical system 
 which will solve all of these problems by determining what we would define as 
 friendly if we were better (and are now looking for a way to mathematically 
 guarantee that such a system will work correctly).

A reasonably low common denominator feature would be
a human self-image, with empathy. Starting with
mirror neurons and a human baby environment would
be a good first step.



Re: [agi] Two draft papers: AI and existential risk; heuristics and biases

2006-06-07 Thread Eugen Leitl
On Wed, Jun 07, 2006 at 03:31:10PM -0400, Mark Waser wrote:

But what about all of those lovely fundamentalist Christians or Muslims 
 who see no problem with killing infidels (see Crusades, Jihad, etc.)?  They 
 won't murder the human race as a whole but they will take out a major piece 
 of it.

What was your operational definition of friendliness, again?



[agi] [EMAIL PROTECTED]: Re: [Beowulf] coprocessor to do physics calculations]

2006-05-14 Thread Eugen Leitl
 Landman, Ph.D
Founder and CEO
Scalable Informatics LLC,
email: [EMAIL PROTECTED]
web  : http://www.scalableinformatics.com
phone: +1 734 786 8423
fax  : +1 734 786 8452
cell : +1 734 612 4615
___
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf

- End forwarded message -


[agi] [EMAIL PROTECTED]: Re: [Beowulf] coprocessor to do physics calculations]

2006-05-14 Thread Eugen Leitl
- Forwarded message from Mark Hahn [EMAIL PROTECTED] -

From: Mark Hahn [EMAIL PROTECTED]
Date: Sun, 14 May 2006 14:37:22 -0400 (EDT)
To: beowulf@beowulf.org
Subject: Re: [Beowulf] coprocessor to do physics calculations

 Didn't see anyone post this link regarding the Ageia PhysX processor. It is the 
 most comprehensive write up I have seen.
 
 http://www.blachford.info/computer/articles/PhysX1.html

yes, and even so it's not very helpful.  "fabric connecting compute and
memory elements" pretty well covers it!  the block diagram they give
could almost apply directly to Cell, for instance.

fundamentally, about these cell/aegia/gpu/fpga approaches,
you have to ask:

- how cheap will it be in final, off-the-shelf systems?  GPUs
are most attractive this way, since absurd gaming cards have 
become a check-off even on corporate PCs (and thus high volume.)
it's unclear to me whether Cell will go into any million-unit 
products other than dedicated game consoles.

- does it run efficiently-enough?  most sci/eng I see is pretty
firmly based on 64b FP, often with large data.  but afaict, 
Cell (eg) doesn't do well on anything but in-cache 32b FP.
GPUs have tantalizingly high local-mem bandwidth, but also 
don't really do anything higher than 32b.

- how much time will it take to adapt to the peculiar programming
model necessary for the device?  during the time spent on that,
what will happen to the general-purpose CPU market?

I think price, performance and time-to-market are all stacked against this 
approach, at least for academic/research HPC.  it would be different if the
general-purpose CPU market stood still, or if there were no way to scale up
existing clusters...


- End forwarded message -


[agi] who's looking for AI where?

2006-05-11 Thread Eugen Leitl

http://www.google.com/trends?q=artificial+intelligencectab=0date=allgeo=all



Re: [agi] Timing of Human-Level AGI [was: Joint Stewardship of Earth]

2006-05-10 Thread Eugen Leitl
On Wed, May 10, 2006 at 03:18:02PM +0100, Russell Wallace wrote:

 It won't. Uncertainty is a necessary part of life in the real world; formal
 logic just isn't a powerful enough tool by itself to deal with it.

Yeah, current systems which rely on proofs can't prove their way
out of a paper bag. Natural intelligence is about making realtime
decisions from incomplete, partially incorrect, noisy data. A lot
of the neural crunch in the processing pipeline (which is not really a
strictly linear pipeline) is dedicated to extracting higher-order
representations.

I don't think you can build a domain-specific yet all-purpose
intelligence. If you've been raised on reading text streams, it won't
help you much to navigate physical reality. Don't diss DARPA's Grand
Challenge; this is where real AI work is.
 
 Our intuitions have evolved over millions of years of testing; while
 necessarily imperfect, they're proven powerful enough for survival, at
 least. I expect any successful AGI will have to adopt those intuitions or

Biology evolved all-purpose processing hardware, especially recently
(the neocortex). It can remap itself to compensate for damage in older,
specialized areas if they are lesioned. So it may be enough if you can
reverse-engineer the neocortex -- of course, including its development,
where lots of the action occurs.

 something similar to them, at least for awhile. Ultimately it may start
 developing its own, better intuitions; but I see that as something for the
 far future, beyond our prediction horizon.

By definition, if AIs walk the Earth and roam space, it's no longer our
planet as we know it.
 
 (An aside on the original topic: if you look at Marc's posts for the last
 few years, the main thing wrong with them has been too much drift into
 philosophy and intuition away from hard data and formal reasoning; so in
 fairness, this switch to overemphasis on rigor, while not absolutely true,
 may be something he needs to do at the moment.)



Re: [agi] Logic and Knowledge Representation

2006-05-07 Thread Eugen Leitl
On Sun, May 07, 2006 at 09:29:51AM -0400, Ben Goertzel wrote:

 However, this does not imply that in an AI, these things cannot be
 done using explicit logic operations.

It's possible to build anything from NAND gates. But in practice,
there are usually other constraints on implementation. Transistors
and wires don't reconfigure themselves on a 3D lattice, and don't form 
1:10^4 fanout on a microwatt/mm^3 energy budget.
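For illustration, the universality half of that point is the classic four-NAND construction of XOR; note it says nothing about the fanout, reconfiguration, and energy constraints that follow:

```python
def nand(a: bool, b: bool) -> bool:
    """A single NAND gate."""
    return not (a and b)

# The standard four-NAND construction of XOR.
def xor(a: bool, b: bool) -> bool:
    n1 = nand(a, b)
    return nand(nand(a, n1), nand(b, n1))

# Enumerate the truth table: (a, b, a XOR b).
truth_table = [(a, b, xor(a, b)) for a in (False, True) for b in (False, True)]
```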
 
 Emulation of the human brain is only one possible route to AGI.  I

There are many possible routes. But in practice, whoever achieves scalable
AGI first renders everything else irrelevant. There will be no close
second runners, similarly to how life emerged only once
on the prebiotic Earth.

 agree it is a route that is ultimately sure to be feasible, but that
 doesn't mean it will necessarily be the *fastest* route given our very
 limited knowledge of the human brain.

Is it necessary to know every single detail of mammalian brain operation in
order to extract constraints for a fertile seed? I don't think so. We know
that we can think. Our knowledge about our CNS increases monotonically
(and is, in fact, arguably accelerating, if not outright exponential).
In contrast, we don't have any successful AI instances to study.
If I had to bet money on whether the biologically inspired or the de novo
route hits the region of fertility first, I would certainly bet on the former.
 
 I agree with all this, but you have not explained why (for example)
 probabilistic logic is a bad knowledge representation for
 sensory-motor patterns.  I think it is just as good a representation
 as, for instance, a neural network from a pure sensorimotor point of
 view, and better in terms of its interoperability with abstract
 cognition.
 
 Do you think that, for instance,
 
 high conductance along the synapses between neuron in cell assembly A
 and cell assembly B
 
 is somehow a superior representation to
 
 P(B|A) is large
 
 If so, why?  In what manner?

Because it's not what the wetware actually does. You picked up just 
one facet of the operation space. 
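A toy sketch of the correspondence being debated here: the quantity a Hebbian-strengthened A-to-B connection loosely tracks is an empirical P(B|A), estimable from co-activation counts. All probabilities below are arbitrary illustration values:

```python
import random

random.seed(0)

p_a = 0.3              # how often cell assembly A fires
p_b_given_a = 0.8      # the "P(B|A) is large" case
p_b_given_not_a = 0.1  # B's baseline rate without A

count_a = 0
count_a_and_b = 0
for _ in range(10_000):
    a = random.random() < p_a
    b = random.random() < (p_b_given_a if a else p_b_given_not_a)
    count_a += a
    count_a_and_b += a and b

# Empirical conditional probability: what co-activation statistics
# (and, loosely, a Hebbian A->B weight) converge to.
p_b_given_a_hat = count_a_and_b / count_a
```

The estimate converges on 0.8, which illustrates Ben's equivalence claim at exactly the single-facet level the reply objects to.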
 
 If you explain to me what non-logic based reprsentation you prefer to
 use to represent a particular piece of knowledge, I can almost surely

What if you don't yet know which piece of knowledge you have to
represent? And what if there are no clear specs, and the requirements
change constantly?

 explain to you how to use probabilistic logic to express this
 knowledge with equal or greater convenience, and easier
 interoperability with cognition.

Let's just look at yet another isolated aspect of bio infoprocessing:
using system noise to enhance computation 
http://diwww.epfl.ch/~gerstner/SPNM/node32.html
Digital components are just analog components operated in a specific
regime, and if you approach the operational limits (shrink the feature size,
reduce the switching energy) you're bound to wind up with unreliable
logic. Formally, it is very difficult to treat classes of circuits
and classes of computations that involve noise/unreliability, or to
characterize the shape of the mapping from input to output. Evolution
has no problem treating such elements constructively.
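The noise-helps-computation point can be seen in a minimal stochastic-resonance toy: a subthreshold sinusoid never crosses a hard threshold on its own, but adding moderate Gaussian noise lets its peaks through. The parameters here are arbitrary:

```python
import math
import random

random.seed(1)

threshold = 1.0
# A subthreshold signal: amplitude 0.8 never reaches the 1.0 threshold.
signal = [0.8 * math.sin(2 * math.pi * t / 100) for t in range(1000)]

def crossings(noise_sd):
    """Count samples where signal + Gaussian noise exceeds the threshold."""
    return sum(1 for s in signal if s + random.gauss(0, noise_sd) > threshold)

silent = crossings(0.0)   # the clean subthreshold signal: no detections
noisy = crossings(0.3)    # moderate noise pushes the peaks through
```

With no noise the detector is silent; with moderate noise, crossings appear and cluster around the signal's peaks, so the noise carries signal information across the threshold rather than merely corrupting it.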
 
 Implicit knowledge representation such as is achieved in an attractor
 neural network, can also be achieved in a network of probabilistic
 logic relationships: the knowledge implicit in a set of logic
 relationships consists of those relationships that can be generated
 from the set via a short or otherwise simple-to-obtain inference
 trajectory.
 
 It is quite possible to use logic expressions to interrelate raw
 percepts (pixelAt(200,150) has color blue for instance) rather than
 abstract symbolic tokens (bird, animal)  this just happens not
 to be how logic has typically been used in AI systems, due to
 inadequate AI designs.

Do you know of any AI rewriting its own code base (whether at the
S-expression or FPGA gate level)? How does one implement an efficient
Kolmogorov predictor in FPGA gates, given just the signal sequence to
predict? How would such a predictor deal with noise in the input?

There must have been some recent progress in practical AI at the
architecture level. Since this is not my field, I would welcome pointers.
E.g. are there any trends in what DARPA Grand Challenge winners are using?



[agi] FPGA coprocessor for the Opteron slot

2006-04-24 Thread Eugen Leitl
 to service the high-performance 
computing market.

Eventually, standard server makers could turn to the FPGAs to help with 
security or networking workloads.

"There does seem to be this kind of general feeling in places like IBM and Sun 
that the time may be here to use some special purpose processors or parts of 
processors for various things," Haff said. "The FPGA approach is certainly one 
way of doing that. It does have the advantage that you're not locked into a 
particular function at any time because you can dynamically reprogram it."

The DRC products also come with potential energy cost savings that could be a 
plus for end users and server vendors that have started hawking green 
computing. Power has become the most expensive item for many large data 
centers.

The first set of DRC modules will consume about 10 - 20 watts versus close to 
80 watts for an Opteron chip. An upcoming larger DRC module will consume twice 
the power and be able to handle larger algorithms.

"We believe we will get 10 to 20x application acceleration at 40 per cent of 
the power," Laurich said. "At the same time, we're looking at a 2 to 3x price 
performance advantage."
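Taken at face value, the quoted claims compose into a performance-per-watt figure worth making explicit. This is straight arithmetic on the press numbers, nothing more:

```python
# Quoted claims from the article.
speedup_low, speedup_high = 10, 20   # "10 to 20x application acceleration"
power_fraction = 0.40                # "at 40 per cent of the power"

# Performance per watt relative to the host CPU.
perf_per_watt_low = speedup_low / power_fraction    # 25x
perf_per_watt_high = speedup_high / power_fraction  # 50x

# Absolute power figures quoted elsewhere in the article.
module_watts = (10, 20)   # first DRC modules
opteron_watts = 80        # quoted Opteron figure
```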

[Image: a motherboard with DRC and Opteron chip]

It will, of course, take some time to 
build out the software for the DRC modules. The company has started shipping 
its first machines to channel partners that specialize in developing 
applications for FPGAs. An oil and gas company wanting to move its code to the 
product could expect the process to take about 6 months.

If DRC takes off, the company plans to bulk up from its current 13-person 
operation and to tap partners in different verticals to help out with the 
software work.

DRC also thinks it can maintain a competitive advantage over potential rivals 
via its patent portfolio. The modules result from work done by FPGA pioneer 
Steve Casselman, who is a co-founder and CTO of the company. Casselman told us 
that he had been waiting for something like Hypertransport to come along for 
years and that AMD's opening up of the specification almost brought tears to 
his eyes.

It's always difficult to judge how well a start-up will pan out, especially one 
that needs to build out systems and software to make it a success. DRC, 
however, does have - at the moment - that rare feeling of something special.

It's playing off standard server components and riding the Opteron wave. In 
addition, it is reducing the cost of acceleration modules in a dramatic 
fashion. That combination of serious horsepower with much lower costs is 
typically the right recipe for a decent start-up, and we'll be curious to see 
how things progress in the coming months.

You can have a look at the DRC kit here 
(http://www.drccomputer.com/pages/products.html). ®


-- 
Eugen* Leitl a href=http://leitl.org;leitl/a http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE

---
To unsubscribe, change your address, or temporarily deactivate your 
subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


signature.asc
Description: Digital signature


[agi] [EMAIL PROTECTED]: Re: [Beowulf] Gigabit switch recommendations]

2006-03-31 Thread Eugen Leitl
- Forwarded message from Tony Ladd [EMAIL PROTECTED] -

From: Tony Ladd [EMAIL PROTECTED]
Date: Tue, 28 Mar 2006 23:44:39 -0500
To: beowulf@beowulf.org
Subject: Re: [Beowulf] Gigabit switch recommendations
X-Mailer: Microsoft Outlook, Build 10.0.6626


We recently tested 48-port gigabit switches from Extreme Networks
(Summit-48t) and Force10 (S50). We found the Extreme Networks switch
performed better than the Force10 when all 48 ports were active. The S50
appeared to choke at certain message sizes, leading to erratic rates and
overall reduced performance. The Summit was much smoother, with very little
variation in throughput. For example, a bidirectional edge exchange had a
max throughput of 1540Mbps under LAM (using the Broadcom NIC), while 16
pairs (32 nodes) had a max throughput of 1520Mbps per pair; the optimum
message size was about 250KBytes. We also tested 2 switches connected by a
10G stacking cable. We could connect 12 pairs of ports (12 on each switch)
and run at essentially the same speed (around 1500Mbps per pair) through
the stacking cable.

There are a lot of hidden gotchas in switch technology, so "wire speed"
means next to nothing. For example, the Force10 switch (which is a good
edge switch) has four 12-port ASICs. Ports on the same ASIC really do
communicate at wire speed, but between ASICs the max bandwidth is 10Gbps,
so the max throughput is only 83% of what you would expect. By contrast,
the Extreme switch is supposedly flat, with full bandwidth under all port
configurations. The Broadcom NICs could not push data fast enough to
really stress the Extreme switch (only about 1500Mbps max per pair), but
with MPI/GAMMA I can get over 1800Mbps between pairs, which will up the
load on the switch.
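
The 83% figure falls out of simple arithmetic; a sketch (my own toy model of the description above, not a vendor spec):

```python
# Inter-ASIC oversubscription on the edge switch described above:
# 12 ports per ASIC, each capable of ~1 Gbps, sharing a 10 Gbps
# inter-ASIC path. Offered load exceeds the path capacity, so each
# flow gets only a fraction of wire speed.
ports_per_asic = 12
port_rate_gbps = 1.0
inter_asic_gbps = 10.0

demand_gbps = ports_per_asic * port_rate_gbps       # 12 Gbps offered load
fraction = min(1.0, inter_asic_gbps / demand_gbps)  # ~0.83 of wire speed
```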

These switches are not cheap; they list for $6000-8000, but they outperform
the cheaper switches by a considerable margin. We have not been able to get
close to the theoretical bandwidth from our cheap GigE switches (HP 2724,
3Com SS3).

I have recently run NetPIPE with MPI/GAMMA
(http://www.disi.unige.it/project/gamma/mpigamma/) using two Intel PRO/1000
NICs (82545GM) wired back-to-back. The nodes are Dell PE850s with 3.0GHz
P4D (dual-core) CPUs. MPI latency was 8.6 microsecs one way and 8.8
microsecs for bidirectional messages. The max throughput was 983Mbps one
way and 1856Mbps bidirectional. The half-throughput message size is about
4KBytes. These are consistent with ping-pong tests reported on the
MPI/GAMMA website. The higher throughput will enable a better test of
switch performance.
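
Those latency and bandwidth figures can be plugged into the standard latency-bandwidth (Hockney) model of throughput versus message size. This is a first-order sketch of my own; that the measured half-throughput size (~4 KBytes) exceeds the model's prediction suggests additional per-packet overheads in the real stack:

```python
# T(n) = n / (L + n/B): throughput for an n-bit message with startup
# latency L and asymptotic bandwidth B. Half of peak throughput is
# reached at n_half = L * B. Figures are the one-way numbers above.
latency_s = 8.6e-6        # one-way MPI latency
bandwidth_bps = 983e6     # max one-way throughput

def throughput_bps(n_bytes):
    bits = 8.0 * n_bytes
    return bits / (latency_s + bits / bandwidth_bps)

n_half_bytes = latency_s * bandwidth_bps / 8.0  # ~1.06 KBytes in this model
```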

We have just installed a stacked array of four Summit-48t switches. I will
post benchmarks soon.

Tony

---
Tony Ladd
Professor, Chemical Engineering
University of Florida
PO Box 116005
Gainesville, FL 32611-6005

Tel: 352-392-6509
FAX: 352-392-9513
Email: [EMAIL PROTECTED]
Web: http://ladd.che.ufl.edu 


___
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf

- End forwarded message -


[agi] [EMAIL PROTECTED]: Connectionists: Physiologically Realistic Cognitive Modelling: New Book]

2006-03-16 Thread Eugen Leitl
- Forwarded message from Andrew Coward [EMAIL PROTECTED] -

From: Andrew Coward [EMAIL PROTECTED]
Date: Wed, 6 Dec 2006 10:48:14 +1000
To: connectionists@cs.cmu.edu
Subject: Connectionists: Physiologically Realistic Cognitive Modelling: New
Book
X-Mailer: Apple Mail (2.623)

(Apologies if you receive this announcement more than once)


A recently published book, “A System Architecture Approach to the  
Brain: from Neurons to Consciousness” (ISBN 1-59454-433-6), applies  
some developments in systems theory to demonstrate that detailed  
modelling of higher cognitive processes in terms of neurophysiology  
requires some very specific architectural approaches.

The book presents theoretical arguments that any learning system subject  
to a range of practical considerations will be constrained within a set  
of architectural bounds called the recommendation architecture. These  
arguments were developed by analogy with the ways in which practical  
considerations constrain the architectures of extremely complex  
electronic control systems, although there is minimal direct resemblance  
between such architectures and those of learning systems.

The practical considerations are (1) the need to perform a large number  
of behavioural features with relatively limited physical resources for  
information recording, information processing and internal information  
communication; (2) the need to add and modify features without side  
effects on other features; (3) the need to protect the many different  
meanings of information generated by one part of the system and  
utilized for different purposes by each of a number of other parts of  
the system; (4) the need to maintain the association between results  
obtained by different parts of the system from a set of system inputs  
arriving at the same time; (5) the need to limit the volume of  
information required to specify the system construction process; (6)  
the need to limit the complexity of the construction process; and (7)  
the need to recover from construction errors and subsequent physical  
failures or damage.

The system theory demonstrates that if such needs are strong, there are  
some remarkably specific constraints on the system architecture. There  
are constraints on how functionality is separated into modules and  
components, on device information models, on the ways in which devices  
are organized and connected within and between modules and components,  
and on the ways in which information can be recorded and processed.

One key constraint is a requirement for a separation between a  
clustering subsystem which defines and detects conditions within the  
information available to the system, and several competition subsystems  
which receive some of the conditions and interpret each condition as a  
recommendation in favour of a range of different behaviours, each with  
a different weight. These competition subsystems determine the current  
total recommendation weights of all behaviours across all current  
conditions and implement the most strongly recommended behaviour.  
Consequence feedback following a behaviour can set or change  
recommendation weights but cannot change condition definitions.  
Furthermore, once a condition has been defined in clustering, there are  
tight restrictions on subsequent changes. The limited ability to change  
condition definitions is one primary difference from traditional neural  
networks.
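
A minimal toy illustration of that clustering/competition split (my own sketch, not code from the book; the conditions, behaviours and weights are invented):

```python
# Toy sketch of the clustering/competition split described above: a
# clustering stage detects which conditions hold in the input; each
# condition carries recommendation weights in favour of behaviours; the
# competition stage sums weights across active conditions and implements
# the most strongly recommended behaviour. Consequence feedback adjusts
# weights only; condition definitions stay fixed. All names and numbers
# here are invented for illustration.

conditions = {
    "bright": lambda x: x["light"] > 0.7,
    "moving": lambda x: x["motion"] > 0.5,
}
behaviours = ["approach", "retreat"]

# weights[condition][behaviour]: recommendation strengths
weights = {
    "bright": {"approach": 0.6, "retreat": 0.1},
    "moving": {"approach": 0.2, "retreat": 0.8},
}

def act(inputs):
    """Detect active conditions, then pick the most recommended behaviour."""
    active = [c for c, test in conditions.items() if test(inputs)]
    totals = {b: sum(weights[c][b] for c in active) for b in behaviours}
    return max(totals, key=totals.get), active

def feedback(active, behaviour, reward, lr=0.1):
    """Consequence feedback: change weights, never condition definitions."""
    for c in active:
        weights[c][behaviour] += lr * reward

choice, active = act({"light": 0.9, "motion": 0.8})  # both conditions fire
```

Negative consequence feedback after a behaviour shifts future competitions toward the alternatives, without touching how conditions are defined.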

The book describes the strong resemblances between the structures and  
processes predicted for a system within the recommendation architecture  
bounds and the physiological structures and processes of the mammalian  
brain. It also describes how the recommendation architecture approach  
makes it possible to understand experimental results for a wide range of  
cognitive processes in terms of physiology.

Electronic implementations of systems within the recommendation  
architecture bounds are described that confirm the resemblances with  
biological brains.

L. Andrew Coward
Research Fellow
Department of Computer Science
Australian National University
Canberra ACT 0200
Australia
[EMAIL PROTECTED]

tel +61 02 6125 5694
mob +62 0431 529 197
http://cs.anu.edu.au/~Andrew.Coward/


Book Website:
http://www.novapublishers.com/catalog/product_info.php?cPath=23_128&products_id=2652



- End forwarded message -


Re: [agi] 25 CPUs on Field Programmable Gate Arrays (FPGA)

2006-03-02 Thread Eugen Leitl
On Tue, Feb 28, 2006 at 05:40:07PM -0500, Danny G. Goe wrote:
 Just thought I would pass this tidbit of information along... 
 
 Field Programmable Gate Arrays (FPGA) with 25 CPUs on a single chip will 
 mean a big jump in processing power and a big jump for AI Seed working its 
 way through the gazillion possible programs. Also a much smaller power bill. 

That's not what the article says. It's a simulator. It won't give you
memory bandwidth, unless your FPGA blocks contain embedded memory.
 
 http://www.builderau.com.au/program/work/soa/A_1_000_processor_computer_for_US_100K_/0,39024650,39236141,00.htm



[agi] [EMAIL PROTECTED]: [Comp-neuro] 2nd CFP: [EMAIL PROTECTED] 2006]

2006-01-31 Thread Eugen Leitl

 
**

IMPORTANT DATES:

Submissions: March 8, 2006
Notification   : April 21, 2006
Full paper deadline: May 1st, 2006
Final programme: June 21, 2006
Workshop Dates : July 31 - August 4, 2006

 
**

WORKSHOP PROGRAM COMMITTEE: (tentative)

Thomas ADDIS (University of Portsmouth) http://www.tech.port.ac.uk/staffweb/addist/tom.html
Nicholas ASHER (University of Texas, Austin, USA and IRIT-CNRS, Toulouse, France) http://www.utexas.edu/cola/depts/philosophy/faculty/asher/main.html
John BATEMAN (University of Bremen, Germany) http://www-user.uni-bremen.de/~bateman/
Guido BOELLA (University of Torino, Italy) http://www.di.unito.it/~guido/
Paolo BOUQUET (University of Trento, Italy) http://dit.unitn.it/~bouquet/
Scott FARRAR (University of Bremen, Germany) http://www.u.arizona.edu/~farrar/
Roberta FERRARIO (LOA-ISTC, CNR, Trento, Italy) http://www.loa-cnr.it/ferrario.html
Aldo GANGEMI (LOA-ISTC, CNR, Roma, Italy) http://www.loa-cnr.it/gangemi.html
Nicola GUARINO (LOA-ISTC, CNR, Trento, Italy) http://www.loa-cnr.it/guarino.html
Andreas HERZIG (IRIT-CNRS, Toulouse, France) http://www.irit.fr/ACTIVITES/LILaC/Pers/Herzig/
Joris HULSTIJN (Utrecht University, the Netherlands) http://www.cs.vu.nl/~joris/
Kepa KORTA (Universidad del Pais Vasco, Spain) http://www.sc.ehu.es/ylwkocak/kepa.html
Nicolas MAUDET (University of Paris Dauphine, France) http://l1.lamsade.dauphine.fr/~maudet/
Massimo POESIO (University of Essex, UK and University of Trento, Italy) http://cswww.essex.ac.uk/staff/poesio/
Laurent PREVOT (Academia Sinica, Taipei, Taiwan) http://www.loa-cnr.it/prevot.html
Matt PURVER (CSLI, Stanford, USA) http://www.stanford.edu/~mpurver/
Johan VAN BENTHEM (University of Amsterdam, the Netherlands) http://staff.science.uva.nl/~johan/
Rogier VAN EIJK (Utrecht University, the Netherlands) http://www.cs.uu.nl/people/rogier/
Laure VIEU (IRIT-CNRS, Toulouse, France) http://www.loa-cnr.it/vieu.html


 
**

LOCAL ARRANGEMENTS:

All workshop participants including the presenters will be
required to register for ESSLLI.  The registration fee for
authors presenting a paper will correspond to the early
student/workshop speaker registration fee.  Moreover, a number
of additional fee waiver grants might be made available by
the local organizing committee on a competitive basis and
workshop participants are eligible to apply for those.

There will be no reimbursement for travel costs or accommodation.
Workshop speakers who have difficulty finding funding should contact
the local organizing committee to ask about the possibility of a grant.



 
**

FURTHER INFORMATION:

About the workshop: http://www.loa-cnr.it/esslli06/
  About ESSLLI: http://esslli2006.lcc.uma.es/
 
___
Comp-neuro mailing list
[EMAIL PROTECTED]
http://www.neuroinf.org/mailman/listinfo/comp-neuro

- End forwarded message -


Re: [agi] Performance is what counts...Financial Economics

2006-01-15 Thread Eugen Leitl
On Sat, Jan 14, 2006 at 10:41:16PM +, [EMAIL PROTECTED] wrote:

 Searching is a part of AI... But is not deep logic like Chess...

Both are isolated skills. Instead of a couple of isolated peaks 
across the landscape of capabilities, a general intelligence is 
balanced.

 Is IBM Deep Blue just a look up machine or really perceiving and logical 
 reasoning with an output of action.. the next move. 

I don't remember if it was a box with 64 dedicated chess ASICs, but it
definitely contained Power CPUs, which are all-purpose. 

 Of course, we do not know if people play chess well, we only know that 
 some play better than others. 
 Any AI worth its upkeep should be able to perceive, reason and act. 
 of course these abilities will develop over a considerable amount of 
 program design, development and implementation. 

If you're talking about explicitly implementing capabilities
(here a module for chess, here's a module for car navigation,
here's one for spring cleaning) you'll never progress beyond
a brittle set of isolated skills.
 
 Supposedly any intelligence can be reapplied to the AI System and 
 therefore benefit from its own success. Compounding over time will be the 
 major part as the development continues. The higher the rate of learning 
 will be most desirable. How those higher rates are obtained and kept 
 moving to the next level is a most important technology. 
 
 I do believe the Financial and Economics expert AI system will be of 
 enormous value and one of the best performance test of any AI System. 

An AI which only sees the raw numbers can't develop a better market
representation than a relatively simple predictor. 
 
 This should help fund any AI with a very abundant cash flow. 
 
 Let the games begin...
 
 Dan Goe
 



[agi] Google aims for AGI (purportedly)

2006-01-12 Thread Eugen Leitl

http://www.economist.com/business/displaystory.cfm?story_id=5382048

But some people think they detect an even more grandiose design. Google is 
already working on a massive and global computing grid. Eventually, says Mr 
Saffo, "they're trying to build the machine that will pass the Turing test" 
- in other words, an artificial intelligence that can pass as a human in 
written conversations. Wisely or not, Google wants to be a new sort of deus 
ex machina.




[agi] [EMAIL PROTECTED]: [nsg] Meeting Announcement]

2006-01-01 Thread Eugen Leitl
- Forwarded message from Fred Hapgood [EMAIL PROTECTED] -

From: Fred Hapgood [EMAIL PROTECTED]
Date: Sun, 01 Jan 2006 13:04:33 -0500
To: Nanotech Study Group [EMAIL PROTECTED]
Subject: [nsg] Meeting Announcement
X-Mailer: MIME::Lite 1.5  (F2.73; T1.15; A1.64; B3.05; Q3.03)


Meeting notice: The January 2 meeting will be held at 7:30 P.M. at the
Royal East (782 Main St., Cambridge), a block down from the corner of
Main St. and Mass Ave.  If you're new and can't recognize us, ask the
manager. He'll probably know where we are. More details below.

Suggested topic:  AL and AI

I recall as if it were yesterday Tom Ray's presentation of his
artificial life program Tierra at MIT.  The biologist had written two
programs, an environment and a replicator, both pretty crude.  When he
dropped the latter into the former and pressed 'start,' a real if
minimal ecology popped into existence in only a hundred thousand cycles,
while over the same time the replicators bummed themselves down from
80-odd statements to 20-odd.

I think most who heard that talk walked out expecting that AL would soon
be demonstrating the equivalent of multicellularity, tissue types,
sexuality, life stages, sensing, metabolism, social behavior
(hierarchies, altruism) and information processing. At least. There may
have been some uncertainty as to whether we would recognize these
complexities when they evolved, since obviously they would not look a
whole lot like their biological analogues, but I think everyone expected
AL to give us phenomena that would grow steadily more complex and
interesting.

Never happened, of course.  I don't know what the mainstream reasons are
for this failure.  Perhaps all the programming effort in biological
computing got hijacked by its function-oriented subdisciplines, like
neural nets and genetic algorithms (AL is more of a science than a
technology.)

However, just on the surface this disappointment looks a lot like the
parallel failure of computer science to come up with systems that can
interact fluidly and accurately with the natural world, very much
including the natural world of ideas.  Both AI and AL are pointed at the
same target: dealing intelligently with the chaos of the unfiltered,
unprocessed world.

Is it remotely possible that the same conceptual failure, perhaps an
inability to find the right representational language, is holding back
progress in both?


++

In twenty years half the population of Europe will have visited the
moon.

-- Jules Verne, 1865

+

Announcement Archive: http://www.pobox.com/~fhapgood/nsgpage.html.

+

Legend:

NSG expands to Nanotechnology Study Group.  The Group meets on the
first and third Tuesdays of each month at the above address, which
refers to a restaurant located in Cambridge, Massachusetts.

The NSG mailing list carries announcements of these meetings and little
else. If you wish to subscribe to this list (perhaps having received a
sample via a forward) send the string 'subscribe nsg'  to
[EMAIL PROTECTED]  Unsubs follow the same model.

Comments, petitions, and suggestions re list management to:
[EMAIL PROTECTED]   www.pobox.com/~fhapgood


___
Nsg mailing list
[EMAIL PROTECTED]
http://polymathy.org/mailman/listinfo/nsg_polymathy.org

- End forwarded message -


[agi] [EMAIL PROTECTED]: Connectionists: CFP Neural Networks Special Issue on ESNs and LSMs]

2005-12-21 Thread Eugen Leitl
- Forwarded message from Herbert Jaeger [EMAIL PROTECTED] -

From: Herbert Jaeger [EMAIL PROTECTED]
Date: Tue, 20 Dec 2005 17:44:01 +0100
To: connectionists@cs.cmu.edu
Cc: Herbert Jaeger [EMAIL PROTECTED]
Subject: Connectionists: CFP Neural Networks Special Issue on ESNs and LSMs
User-Agent: Mozilla/5.0 (Windows; U; WinNT4.0; en-US;
rv:1.0.1) Gecko/20020823 Netscape/7.0


CALL FOR PAPERS: Neural Networks 2007 Special Issue

Echo State Networks and Liquid State Machines

Guest Co-Editors :

Dr. Herbert Jaeger, International University Bremen,
h.jaeger at iu-bremen.de
Dr. Wolfgang Maass, Technische Universitaet Graz,
maass at igi.tugraz.at
Dr. Jose C. Principe, University of Florida,
principe at cnel.ufl.edu


A new approach to analyzing and training recurrent neural networks
(RNNs) has emerged over the last few years. The central idea is to
regard an RNN as a nonlinear, excitable medium, which is driven by
input signals or fed-back output signals. From the excited response
signals inside the medium, simple (typically linear), trainable
readout mechanisms distil the desired output signals. The medium
consists of a large, randomly connected network, which is not adapted
during learning. It is variously referred to as a dynamical reservoir
or liquid. There are currently two main flavours of such
networks. Echo state networks were developed from a mathematical and
engineering background and are composed of simple sigmoid units,
updated in discrete time. Liquid state machines were conceived from a
mathematical and computational neuroscience perspective and are
usually made of biologically more plausible, spiking neurons with
continuous-time dynamics. These approaches have quickly gained
popularity because of their simplicity, expressiveness, ease of
training and biological appeal.
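
A minimal echo state network along these lines can be sketched in a few dozen lines of numpy (my own illustration of the approach, not code from the special issue; the task, sizes and spectral-radius heuristic are my choices):

```python
import numpy as np

# Echo state network sketch: a fixed random reservoir is driven by the
# input; only a linear readout is trained, here by ridge regression, to
# recover the input delayed by 5 steps.
rng = np.random.default_rng(0)
N, T, washout, delay = 200, 1000, 100, 5

s = np.sin(0.2 * np.arange(T + delay))
u = s[delay:]          # input signal, length T
y_target = s[:T]       # target: the input delayed by `delay` steps

# Fixed reservoir, rescaled to spectral radius 0.9 (a common heuristic
# for obtaining the echo state property).
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=N)

# Drive the reservoir; the recurrent weights are never trained.
X = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    X[t] = x

# Linear readout on post-washout states via ridge regression.
A, b = X[washout:], y_target[washout:]
W_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N), A.T @ b)
mse = np.mean((A @ W_out - b) ** 2)
```

The delayed-sine task is deliberately easy; the point is the division of labour: a large, untrained excitable medium plus a cheap trained readout.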

This Special Issue aims at establishing a first comprehensive overview
of this newly emerging area, demonstrating the versatility of the
approach, its mathematical foundations and also its limitations.
Submissions are solicited that contribute to this area of research
with respect to

--  mathematical and algorithmic analysis,
--  biological and cognitive modelling,
--  engineering applications,
--  toolboxes and hardware implementations.

One of the main questions in current research in this field concerns
the structure of the dynamical reservoir / liquid. Submissions are
especially welcome which investigate the relationship between the
excitable medium's topology and algebraic properties and the resulting
modeling capacity, or methods for pre-adapting the medium by
unsupervised or evolutionary mechanisms, or for including special-purpose
subnetworks (for instance, feature detectors) in the medium.

Submission of Manuscript

Manuscripts should be prepared according to the format of Neural
Networks and submitted electronically to one of the Guest Editors.
The review will take place within 3 months and only very minor
revisions will be accepted. For any further questions, please contact
the Guest Editors.

DEADLINE FOR SUBMISSION : June 1, 2006.


--
Dr. Herbert Jaeger

Professor for Computational Science
International University Bremen
Campus Ring 12
28759 Bremen, Germany

Phone (+49) 421 200 3215
Fax (+49) 421 200 49 3215
email  [EMAIL PROTECTED]

http://www.faculty.iu-bremen.de/hjaeger/
--

- End forwarded message -


Re: [agi] AutoResponse - Email Returned SAXK (KMM18505542V85776L0KM) :kd1

2005-12-21 Thread Eugen Leitl
On Wed, Dec 21, 2005 at 09:09:32PM -0800, Mike Williams wrote:
 Why are we getting this PayPal message over and over?  Can't this be 
 stopped?

Extremely annoying it is, yes.

Some joker has apparently subscribed a PayPal account ([EMAIL PROTECTED]
or somesuch) to the list. Perhaps the list owner should review the
subscription process, and limit postings to subscribers only.
 
 PayPal Customer Service 1 wrote:
 
 Thank you for contacting PayPal Customer Service.
 
 In an effort to assist you as quickly and efficiently as possible, please
 direct all customer service inquiries through our website. Click on the 
 hyperlink below to go to the PayPal website. After entering your email 
 address and password into the Member Log In box, you can submit your
 inquiry via our Customer Service Contact form. If you indicate the type of
 question you have with as much detail as you can, we will be able to
 provide you with the best customer service possible.
 
 If your email program is unable to open hyperlinks, please copy and paste
 this URL into the address bar of your browser.
 
 https://www.paypal.com/wf/f=default
 
 If you are contacting PayPal because you are unable to log into your
 account, please use the contact form below.
 
 https://www.paypal.com/ewf/f=default
 
 Thank you for choosing PayPal!
 
 This email is sent to you by the contracting entity to your User Agreement,
 either PayPal Inc or PayPal (Europe) Limited. PayPal(Europe) Limited is
 authorised and regulated by the Financial Services Authority in the UK as
 an electronic money institution.
 
 
 
 Note: When you click on links in this email, you will be asked to log into
 your PayPal Account. As always, make sure that you are logging into a
 secure PayPal page by looking for 'https://www.paypal.com/' at the
 beginning of the URL.
 
 Please do not reply to this e-mail.  Mail sent to this address will not be
 answered.
 
 
 Original Email:
 Mark:
 
 MY LITTLE AGI PROJECT
 
 Since I started my studies I was interested in AI and creating AGI, thus I
 tried to learn as much as possible about various AI disciplines, to unify
 them later. In my spare time, I am working on a Production Rule System with
 reasoning abilities, which I plan to program/teach for the usage of various
 AI techniques (such as EA, RL, classification, unsupervised learning...), to
 evolve and develop the rules and facts in the system. I am interested about
 your opinion about a system like this.
 
 Your system sounds interesting, although it's not an entire AGI framework.
 You're on the right track trying to unify various AI approaches (such as
 planning, reasoning, perception, etc).  I have some basic ideas of how to
 build an AGI, but my project is still in its infancy.  I'd welcome other
 researchers to join my open source project.
 My AGI theory is based on the compression of sensory experience, and the
 basic operation is pattern recognition.  Traditional production rule
 systems may be a bit too limited because they cannot perform probabilistic
 inference, or statistical pattern recognition.
 Right now we're focusing on vision, which turns out to be extremely hard.
 Re your analysis of AGI social issues:  I think there should be some sort
 of built-in AGI mechanisms that prevent it from doing harmful things,
 although the exact form is still unclear to me.  The folks at SIAI have
 thought about this issue much more intensely, but I think their vision is
 a bit over the top.
 Secondly, I agree that AGI may create more social inequality between those
 who know how to exploit AGI and those who are left behind.  I'm afraid
 this is also inevitable.  The best we can do is to try to ameliorate such
 effects.  The good side is that AGI will be very easy to use because it
 can understand human language.
 Cheers,
 yky

