om

In this article I will quote and address some of the issues raised
against my previous posting. I will then continue with the planned
discussion of the current state of AI, and I will also survey some of
the choices available to the singularitarian community with regard to
these developments. Specifically, I will outline a Manhattan
Project-style AI development effort to reflect the urgency I feel the
situation warrants.

om

Tom says:
"""""
This is a *very* strong claim, considering how complex
intelligence is and how much had yet to be fully
understood.
"""""

The Human Genome Project has found that human DNA contains only around
30,000 genes. While it is true that there are probably an equal number
of important regulatory sequences, including codes for RNA strands which
act directly as enzymes, there remains a fairly strict upper bound on
the complexity of the entire human organism. In AI, there are two
sources of program complexity: first, the general AI algorithm, and
second, the system which the AI must control. I agree that I omitted a
treatment of the input and output modalities. I did so because they are
negligible in this context. Yes, the brain does a great deal of sensory
pre-processing, and indeed this is very important to its performance.
However, each modality operates in strictly constant time: each "frame"
of information is processed with a predictably low latency. The total
complexity involved is strictly related to the bandwidth of the
information channel. Therefore it cancels out, and the only interesting
variable which remains is the AI kernel. =)
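To make that concrete, here is a rough back-of-envelope sketch in Python. Aside from the gene count quoted above, every figure here is a loose assumption chosen for illustration, not a measurement:

```python
# Rough upper bound on genome-carried design information vs. per-second
# sensory bandwidth. All figures besides the gene count are assumptions.

GENES = 30_000                 # approximate human gene count (see above)
AVG_GENE_BP = 27_000           # assumed average gene length in base pairs
BITS_PER_BP = 2                # 4 nucleotides -> 2 bits each

genome_bits = GENES * AVG_GENE_BP * BITS_PER_BP
print(f"genome upper bound: ~{genome_bits / 8e6:.0f} MB")

# A raw visual "frame" has a fixed size, so per-frame processing is O(1)
# and total cost scales only with channel bandwidth.
WIDTH, HEIGHT, BITS_PER_PIXEL, FPS = 640, 480, 24, 30
visual_bits_per_sec = WIDTH * HEIGHT * BITS_PER_PIXEL * FPS
print(f"visual channel: ~{visual_bits_per_sec / 8e6:.0f} MB/s")
```

The point is simply that both quantities are bounded and small by modern storage standards; the pre-processing cost per modality is fixed per frame.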

Quoth tom:

""""""""
The "tabula rasa" hypothesis has been debunked every
which way from Sunday. See
http://207.210.67.162/~striz/docs/tooby-1992-pfc.pdf.
""""""""

Well, it seems that you responded before you finished reading the
sentence. The next part was "for the most part". While I agree that
there are several aspects of the human psyche that are clearly based
upon genetically transmitted evolutionary adaptations, these adaptations
have been localized through anatomical studies to unique, highly
specialized regions of neural tissue such as the hypothalamus and the
amygdala. However, when we look at the primary neural pathways,
including the thalamus, hippocampus, and the basal ganglia (not to
mention the cortex), we find a highly regular anatomical structure. We
can safely assume these structures do not carry specific information
regarding genetic adaptations and are, for all practical purposes, a
very close approximation to the tabula rasa.

The closest thing we can find to an exception to this claim is the
visual cortex, which is fairly specialized for processing visual
information -- it has additional layers not found in other cortical
regions. Even these specializations do not appear to contain any genetic
programming.

Tom:
"""""""""
Our brains are *much* more complex than a simple
concept-abstraction-and-pattern-recognition device.
Such a device, for instance, would be unable to add
two and two, or create a whole new category of objects
it has never seen.
"""""""""

;)

Wherever did you get the impression that imagination and human-style
mathematics are distinct from concept abstraction and pattern
recognition? I could go on for days explaining the evidence supporting
this idea, as well as how perception and imagination are actually the
same thing, but I don't have nearly enough time to do that now, and it
would make this post much longer than anyone is willing to read. -- I
will if I have to, though! =\

tom, once more:
""""""""
This assumes that the pattern-matcher saturates with
repetition of stuff it has seen before. But the
universe contains *much more* information than any
human could possibly pattern-match- it's simply so
darn big.
""""""""

two responses:

1. Not if you break it down first, it doesn't. ;)
2. How about just the information contained in your neighborhood and
accessible to human style senses? Are you going to claim that is too
much for your computer even using a relatively poor encoding scheme?

Tom:
""""""""""
Every possible strong AI architecture may be *capable*
of absorbing new information and matching it to old
patterns, but that does not mean that that is *all* it
does.
"""""""""""

Yes, that is all it does. All AIs will be, at their core, pattern
matching machines. Every one of them. You can then proceed to tack on
any other function which you believe will improve the AI's performance,
but in every case you will be able to strip it down to pretty much a
bare pattern matching system and still obtain a truly formidable
intelligence!
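To illustrate what I mean by a "bare pattern matching system", here is a toy sketch -- not any particular architecture, and the class and its methods are invented purely for this example:

```python
# Toy illustration: a bare pattern matcher that stores observed
# patterns and recalls the label of the nearest stored match.
import math

class PatternMatcher:
    def __init__(self):
        self.memory = []          # stored (pattern, label) pairs

    def learn(self, pattern, label):
        self.memory.append((list(pattern), label))

    def recall(self, pattern):
        # Return the label of the closest stored pattern (Euclidean).
        def dist(entry):
            return math.dist(entry[0], pattern)
        return min(self.memory, key=dist)[1]

pm = PatternMatcher()
pm.learn([1.0, 0.0], "edge")
pm.learn([0.0, 1.0], "blob")
print(pm.recall([0.9, 0.1]))   # nearest stored pattern is "edge"
```

Everything else you might bolt on -- planning, speech, whatever -- sits on top of a core shaped like this.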

Matt inscribed:
"""""""""""
When can we expect a demo?
"""""""""""

Six months after I somehow obtain 10 megabucks. =\

Matt:
""""""""""
People with autism lack the ability to recognize faces, which leads to
delayed language and social development during childhood.  However they
do not lack symbolic thought.  From http://en.wikipedia.org/wiki/Autism
""""""""""

Okay, I'll respond to that after you re-read the part you quoted about
the autistic people in question being "high functioning". Those people
merely have an impaired ability to conceptualize. I was referring to
cases of profound autism. In the cases you refer to, the impairment of
symbolic thought is evidenced in the reduced ability to form symbolic
concepts for complex things such as "person" or "face".


Matt regarding the space complexity of the AI:
""""""""""
The relationship is a little more complex.  I believe it has to do with
human brain size, which stops increasing around adolescence.  Vocabulary
development during childhood is fairly constant at about 5000 words per
year.  I had looked at the relationship between training set size and
information content as part of my original dissertation proposal, which
suggests a space complexity more like O(N/log N).
http://cs.fit.edu/~mmahoney/dissertation/
""""""""""

Okay, there are several noteworthy differences between the conditions
you used for your estimation and mine. First, you were looking at the
domain of strings of words written by something which we assume to be
intelligent. My estimation was based on raw sensory perceptions, which
cover a domain of colors, lines, shapes, motion, sine waves, tastes,
smells, tactile impressions, etc. All of these things are inherently
compressible; furthermore, they exist in every perception the universe
will ever offer you. The brain is optimized to remove the redundant
information as quickly as possible and immediately focus on what is
unique about each new perception.
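You can see this compressibility with an ordinary general-purpose compressor. A quick Python sketch, where a perfectly periodic ramp stands in for a smooth, redundant sensory signal:

```python
# Illustration of the claim that raw sensory-style signals are highly
# redundant: a periodic "signal" compresses far better than pure noise.
import os
import zlib

signal = bytes(range(256)) * 16      # smooth, periodic stand-in signal
noise = os.urandom(4096)             # incompressible random bytes

sig_ratio = len(zlib.compress(signal)) / len(signal)
noise_ratio = len(zlib.compress(noise)) / len(noise)
print(f"signal ratio: {sig_ratio:.2f}, noise ratio: {noise_ratio:.2f}")
```

The structured signal shrinks to a tiny fraction of its size while the noise barely budges; real sensory streams sit much closer to the first case than the second.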

Secondly, symbols != words. The brain processes millions of symbols
which are typically too trivial to invent a new word for, or which are
processed symbolically at a "subconscious" level that has no direct
connection to your faculty of speech.
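As an aside, your O(N/log N) space estimate is easy to eyeball numerically. A quick sketch (assuming that complexity with natural log, purely for illustration) of how the model-to-data ratio shrinks as the training set grows:

```python
# If model size grows as O(N / log N), the ratio of model size to raw
# training data size falls off as 1 / log N.
import math

for n in (10**6, 10**9, 10**12):
    model = n / math.log(n)
    print(f"N = 10^{round(math.log10(n))}: model/data = {model / n:.3f}")
```

So under that estimate, the bigger the training stream gets, the cheaper each additional perception is to store -- consistent with the redundancy argument above.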

"""""""""""
If Turing and Landauer are right, then a PC has enough computational
power to pass the Turing test.  What we lack is training data, which can
only come from the experience of growing up in a human body.
"""""""""""

Indeed! =)

"""""""""""
The article does not say if I click on a picture of a dog running across
a lawn, whether the system will retrieve pictures of dogs or pictures of
brown objects on a green background.
"""""""""""

True, but does that really matter? The performance of this system may be
impaired by its present inability to process high-level functions. The
question becomes whether the underlying approach is of the type which
could give rise to general intelligence, or whether it is just an
interesting algorithm. The description of the system's performance
exactly matches what I would expect from a general intelligence
algorithm; hence I consider the probability that this is indeed a
potential general intelligence algorithm to be extremely high.

This brings me to the subject of this posting.... (at long last...)

om

The news items I have been seeing lately strongly suggest that we are
rapidly approaching the singularity. The mid-point of the singularity
window could be as close as 2009. A ridiculously pessimistic prediction
would put it around 2012, and that would have to assume that some large
external event has caused massive social disruption and that the people
who actually work on these algorithms are utter blockheads who can't see
what they can truly do. On the short side, we could literally be hours
away from a hard takeoff (yes, I realize it's almost August of 2007).

Singularitarians are a strange bunch: they long for a radically altered
existence, but they, like most, don't do anything visible to really
advance that goal, or, like Kurzweil, make predictions and then push
them further into the future as the progress of time brings them too
close for comfort. Others are far too preoccupied by the task of
creating a cult of personality around themselves to even bother doing
anything relevant to the singularity whatsoever. =P

Unfortunately, it appears that time has run out. The time for picking
our pattern-identical noses is past. The endgame has arrived.

To be fair, the risks are not all that great. Despite what you may have
been led to believe, the outcome where we do nothing will, in all
probability, be fairly mild. We will start seeing high-functioning
androids. We will see massive layoffs at companies which have converted
to AI technologies. We will see a gradual trickle-down of AI-based
medical advances as the relevant regulatory agencies permit, along with
other more-or-less mundane technological advances. -- In effect, a slow,
slow takeoff singularity.

However, if we go that route (which is the route we are on now!!!), we
will lose a number of important things. First, we will lose access to
the AI's source code. At first it will be a trade secret, and later a
state secret. As things ramp up, we can expect AI surveillance of any
computer powerful enough to host an AI, and we will effectively be
prohibited from pursuing an independent research program. We will lose
the financial rewards the inventor of AI is sure to receive. We will
lose the ability to press ahead with more radical technologies and
transhumanism in general, because corporations will not permit any
social change which jeopardizes the bottom line, and governments will
not permit any social change which will cause a 60 year old man to feel
uncomfortable. Finally, we will lose time. The older people among us may
not have the luxury of another 30 years for radically life extending
technologies to clear the red tape. Truly, none of us can be so
confident that we will not meet an untimely demise in such a span.

Because I'm a greedy bastard, I want AI. I want to profit from AI, and I
want to use AI for my own devious (-ant?) ends...

So what do we do? Clearly my own efforts have been crippled by a whole
list of personal failings save for lack of foresight.

Right now it looks like we have a workable cortical algorithm and,
scattered all over the place, just about every other noteworthy part
we'll need for an AI. What we need to do is bring these all together in
a system that can manifest intelligence in a clear and human-usable manner.

Unfortunately, since I have failed to deliver the operating system I
designed to support bringing all of these pieces together, it looks like
we will need to do it on conventional systems. Unfortunately,
conventional systems have extremely poor IPC mechanisms, and programs
can't be edited while they are running. To get around that we'll have to
use an interpreted system which can establish an internal computing
environment favorable to rapid prototyping. Squeak ( www.squeak.org )
closely approximates such a system but, sadly, has a number of
deficiencies, such as poor support for 64-bit platforms (at present) and
no support for SMP or multicore systems -- it can't take advantage of
additional CPUs in the machine.
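For what it's worth, the "edit while running" property I'm after is easy to demonstrate in any sufficiently dynamic language. Squeak does this natively in Smalltalk; here is the same idea sketched in Python, with names invented for the example:

```python
# Sketch of live code replacement: patch a method on a running system
# without restarting it. Squeak/Smalltalk supports this natively.

class Agent:
    def step(self):
        return "old behaviour"

agent = Agent()
print(agent.step())            # the "running" system, old code

# Hot-swap the method on the live class -- no restart needed:
def improved_step(self):
    return "new behaviour"

Agent.step = improved_step
print(agent.step())            # same live object, new code
```

This is exactly the rapid-prototyping loop a conventional compiled toolchain makes painful.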

As I mentioned earlier, the pre-processors for the modalities are fairly
straightforward pieces of engineering. Acceptable hardware solutions can
be had commercially for no more than $8,000 per modality.

This brings us to the question of how we go about training and
interacting with the AI. The confused Dr. Goertzel presents a perfectly
reasonable case that virtual embodiment makes the most sense because,
after the first agent, each additional agent is practically free. All
things being equal, this would be a compelling argument. However,
Goertzel's own experience with the development of a virtual environment
has proven that it is a mammoth undertaking which adds so much
complexity to the problem as to double or even triple the total
development costs.

However, if we take a closer look at what we actually need for a robotic
platform, and discipline ourselves enough not to immediately jump to the
anthropomorphic platforms that are now coming on the market, we can
assemble a useful set of cameras, manipulators, and inexpensive mobile
platforms for not much more than $50,000. I believe that even such a
modest lab will put us in the race to solving AI.

I'm willing to attempt that on my own but at my current rate of savings,
it would take decades to save up even that much -- even if I eliminated
my anime budget. =P

If I had more to say, I forgot what it was... where's the send button?

om


-- 
Opera: Sing it loud! :o(  )>-<

-----
This list is sponsored by AGIRI: http://www.agiri.org/email