Re: [agi] Neurons

2008-06-04 Thread J Storrs Hall, PhD
On Tuesday 03 June 2008 09:54:53 pm, Steve Richfield wrote:

 Back to those ~200 different types of neurons. There are probably some cute
 tricks buried down in their operation, and you probably need to figure out
 substantially all ~200 of those tricks to achieve human intelligence. If I
 were an investor, this would sure sound pretty scary to me without SOME sort
 of insurance like scanning capability, and maybe some simulations.

I'll bet there are just as many cute tricks to be found in computer 
technology, including software, hardware, fab processes, quantum mechanics of 
FETs, etc -- now imagine trying to figure all of them out at once by running 
Pentiums thru mazes with a few voltmeters attached. All at once because you 
never know for sure whether some gene expression pathway is crucially 
involved in dendrite growth for learning or is just a kludge against celiac 
disease. 
That's what's facing the neuroscientists, and I wish them well -- but I think 
we'll get to the working mind a lot faster studying things at a higher level.
For example:
http://repositorium.sdum.uminho.pt/bitstream/1822/5920/1/ErlhagenBicho-JNE06.pdf

Josh




Re: [agi] Neurons

2008-06-04 Thread Steve Richfield
Josh,

I apparently failed to clearly state my central argument. Allow me to try
again in simpler terms:

The difficulty in proceeding in both neuroscience and AI/AGI is NOT a lack
of technology or of clever people to apply it, but rather a lack of
understanding of the real world and of how to interact effectively within
it. Some clues to the scale of the difficulty are the ~200 different types of
neurons, and the 40 years of largely ineffective AI/AGI research. I have
seen NO recognition of this fundamental issue in other postings on this
forum. This level of difficulty strongly implies that NO clever programming
will ever achieve human-scale (and beyond) intelligence until some way is
found to mine the evolutionary lessons learned over the last ~200 million
years.

Note that the CENTRAL difficulty in interacting effectively in the real
world is working with and around the creatures that already inhabit it,
which are the product of ~200 million years of evolution. Even a perfect
AGI would have to incorporate some very imperfect logic to help predict the
actions of our world's present inhabitants. Hence, it seems (to me) that
there is probably no simple solution, as otherwise one would already have
evolved during the last ~200 million years, instead of evolution producing
the highly complex creatures that we now are.

That having been said, I will comment on your posting...

On 6/4/08, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:

 On Tuesday 03 June 2008 09:54:53 pm, Steve Richfield wrote:

  Back to those ~200 different types of neurons. There are probably some cute
  tricks buried down in their operation, and you probably need to figure out
  substantially all ~200 of those tricks to achieve human intelligence. If I
  were an investor, this would sure sound pretty scary to me without SOME sort
  of insurance like scanning capability, and maybe some simulations.

 I'll bet there are just as many cute tricks to be found in computer
 technology, including software, hardware, fab processes, quantum mechanics of
 FETs, etc -- now imagine trying to figure all of them out at once by running
 Pentiums thru mazes with a few voltmeters attached. All at once because you
 never know for sure whether some gene expression pathway is crucially
 involved in dendrite growth for learning or is just a kludge against celiac
 disease.


Of course, this has nothing to do with creating the smarts to deal with
our very complex real world well enough to compete with those of us who
already inhabit it.

 That's what's facing the neuroscientists, and I wish them well -- but I think
 we'll get to the working mind a lot faster studying things at a higher level.


I agree that high-level views are crucial, but with the present lack of
low-level knowledge, I see no hope of solving all of the problems while
remaining only at a high level.

For example:

 http://repositorium.sdum.uminho.pt/bitstream/1822/5920/1/ErlhagenBicho-JNE06.pdf


From that article: "Our close cooperation with experimenters from
neuroscience and cognitive science has strongly influenced the proposed
architectures for implementing cognitive functions such as goal inference
and decision making." THIS is where efforts are needed - in bringing the
disparate views together, rather than keeping your head in the clouds with
only a keyboard and screen in front of you.

In the 1980s I realized that neither neuroscience nor AI could proceed to
their manifest destinies until a system of real-world mathematics was
developed that could first predict the details of neuronal functionality, and
then hopefully show what AI needed. The missing link seemed to be the lack
of knowledge as to just what the units were in the communications between
neurons. Pulling published and unpublished experimental results together,
mostly from Kathryn Graubard's research, I showed (and presented at the
first Int'l NN Conference) that there was more than one such unit, and that
one was clearly the logarithm of the probability of an assertion being
true. Presuming this leads directly to a mathematics of synapses that
accurately predicts the strange non-linear and discontinuous transfer
functions observed in inhibitory synapses, etc. It also leads to the optimal
manipulation of synaptic efficacies. However, apparently NO ONE ELSE
saw the value in this. Without the units there can be no substantial
mathematics, and without the mathematics there is nothing to guide
neuroscience, NN, or AI research. Hence, I remain highly skeptical of
claimed high-level views.
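
To make the claim concrete, here is a minimal, purely illustrative sketch of
the kind of synaptic mathematics that falls out if the quantity a neuron
communicates is taken to be the logarithm of a probability. The use of
log-odds, the independence assumption, and all of the names below are my own
assumptions for illustration, not anything presented at that conference:

    import math

    def log_odds(p):
        # Convert a probability into log-odds, the assumed 'unit' of communication.
        return math.log(p / (1.0 - p))

    def neuron_output(prior, contributions):
        # If each input carries the log-odds contribution of one (assumed
        # independent) piece of evidence, the soma only has to SUM its inputs,
        # and the familiar sigmoid reappears as the conversion back from
        # log-odds to the probability of the assertion being true.
        # Inhibitory inputs are simply negative contributions.
        total = log_odds(prior) + sum(contributions)
        return 1.0 / (1.0 + math.exp(-total))   # logistic (sigmoid) function

    # Toy example: a 0.5 prior, two excitatory inputs, one inhibitory input.
    inputs = [log_odds(0.9), log_odds(0.8), -log_odds(0.7)]
    print(round(neuron_output(0.5, inputs), 2))  # -> 0.94

Under those assumptions, a synaptic efficacy becomes the scaling from a
presynaptic log-probability to its postsynaptic contribution, which is the
kind of quantity an "optimal manipulation" rule could be written against.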

Steve Richfield





Re: [agi] Neurons

2008-06-04 Thread J Storrs Hall, PhD
Well, Ray Kurzweil famously believes that AI must wait for the mapping of the 
brain. But if that's the case, everybody on this list may as well go home for 
20 years, or start running rats in mazes. 

I personally think the millions of years of evolution argument is a red 
herring. Technological development not only moves much faster than evolution, 
but takes leaps evolution can't. And evolution is always, crucially, obsessed 
with reproductive success. Evolution would never build an airplane, because 
airplanes can't reproduce. But we can, and thus capture the aspect of birds 
that's germane to our needs -- flying -- with an assortment of kludges. And 
planes are still NOWHERE NEAR as sophisticated as birds, and guess what: 100 years 
later, they still don't lay eggs.

How much of the human mind is built around the necessity of eating, avoiding 
being eaten, finding mates, being obsessed with copulation, and raising and 
protecting children? Egg-laying for airplanes, in my view.

There are some key things we learned about flying by watching birds. But 
having learned them, we built machines to do what we wanted better than birds 
could. We'll do the same with the mind.

Josh



Re: [agi] Neurons

2008-06-04 Thread Jim Bromer
Steve Richfield said:

Some clues to the scale of the difficulty are the ~200 different types of
neurons, and the 40 years of largely ineffective AI/AGI research. I have seen NO
recognition of this fundamental issue in other postings on this forum. This
level of difficulty strongly implies that NO clever programming will ever
achieve human-scale (and beyond) intelligence until some way is found to
mine the evolutionary lessons learned over the last ~200 million years.
---

I totally agree that the complexities of the neuron, and of how neurons
interact, are still far beyond the capabilities of contemporary science. The
fact that you have seen NO recognition of this fundamental issue in this
discussion group is of little significance to the subject. I know that I read
a few comments that were in agreement with the basic argument that much
remains to be discovered about the neuron, so that statement seems to be a
personal one. Your opinion that NO clever programming will ever achieve
human-scale intelligence until some way is found to mine the evolutionary
lessons learned is not based on substantial technical evidence. (I do feel
that advanced AI would be quite different from human intelligence, and I also
believe that there are some mysteries of conscious experience that are not
explained by the computational theory of mind.)

However, notice that the reasons one might use to support your argument would
almost all be passive (or incidental), and not actively instructive relative
to the fundamental problem of finding further technical details of what would
be needed to create higher forms of artificial intelligence. There are
certainly many cases in human history when this kind of argument was the most
utilitarian, because it is the primitive argument of fundamental
infeasibility. Until a technology is developed for the first time, the
argument that it cannot be done until some other event occurs is likely to be
beyond direct disproof until the technology is actually developed. But it is
also beyond direct proof, or even substantial discussion.

Your comment about the ~200 neuron types can be investigated and thereby
proven or disproved (within a range of acceptability), but your statement
that human-level intelligence will not occur until the evolutionary lessons
of the development of intelligence are mined can be neither proved nor
disproved until the technology has been developed. The offering of some of
those lessons might be interesting, but the statement of your opinion IS ONLY
THAT (to use your capitalization strategy of expression). It cannot be proved
or disproved for some time, it does not prove or disprove some other
interesting technical question, nor does it provide new insight into the more
interesting questions of what is feasible and what is not feasible in
contemporary AI.
Jim Bromer


  




Re: [agi] Neurons

2008-06-04 Thread Steve Richfield
Josh,

On 6/4/08, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:

 Well, Ray Kurzweil famously believes that AI must wait for the mapping of the
 brain. But if that's the case, everybody on this list may as well go home for
 20 years, or start running rats in mazes.


It just isn't all that hard. Sure, a complete mapping, with all of the
nonlinearities and other parameters, is a long way off, but just having a
single brain as a wire list in a database would answer countless questions.
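
For concreteness, here is one minimal sketch of what such a wire list in a
database might look like. The table layout, the column names, and the use of
SQLite are my own illustrative assumptions, not anything specified in this
thread:

    import sqlite3

    # A toy connectome 'wire list': one row per neuron, one row per synapse.
    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE neuron (
            id        INTEGER PRIMARY KEY,
            cell_type TEXT,            -- one of the ~200 types under discussion
            x REAL, y REAL, z REAL     -- soma position, arbitrary units
        );
        CREATE TABLE synapse (
            pre_id  INTEGER REFERENCES neuron(id),
            post_id INTEGER REFERENCES neuron(id),
            sign    INTEGER,           -- +1 excitatory, -1 inhibitory
            weight  REAL               -- efficacy estimate, if known
        );
    """)

    # One of the 'countless questions': fan-in per cell type, as a single query.
    rows = con.execute("""
        SELECT n.cell_type, COUNT(*) AS fan_in
        FROM synapse s JOIN neuron n ON n.id = s.post_id
        GROUP BY n.cell_type
    """).fetchall()

Even an empty schema like this makes the point: once the wiring is in
queryable form, questions about connectivity become one-line queries rather
than new experiments.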

 I personally think the millions of years of evolution argument is a red
 herring. Technological development not only moves much faster than
 evolution, but takes leaps evolution can't.


Note also that the reverse is true, because evolution needn't explain away
its past failures, convince investors, limit itself to the small experiments
it can afford, etc., etc.

 And evolution is always, crucially, obsessed with reproductive success.


Is that any worse a measure than is economic success?

 Evolution would never build an airplane, because airplanes can't reproduce.


... and industry would never build a bird, because it couldn't make money
on one. So what?


 But we can, and thus capture the aspect of birds that's germane to our needs
 -- flying -- with an assortment of kludges. And planes are still NOWHERE NEAR
 as sophisticated as birds, and guess what: 100 years later, they still don't
 lay eggs.


Neither do they just appear without the expenditure of large amounts of
money.

How much of the human mind is built around the necessity of eating, avoiding
 being eaten, finding mates, being obsessed with copulation, and raising and
 protecting children? Egg-laying for airplanes, in my view.


Now, can you find some way of saying the above that would be convincing to
prospective investors? If not, then it is like a tree falling in the forest
with no one to hear.

 There are some key things we learned about flying by watching birds. But
 having learned them, we built machines to do what we wanted better than birds
 could. We'll do the same with the mind.


I like your bird analogy, but you got off track as you were going through
it. The Wright Brothers put in over 100 hours of wind-tunnel testing before
they built their first flying machine. Learn from the mind as we learned
from the birds - by dissecting, diagramming, simulating, etc. You want to
fast-forward past all of this, to go from outward (and some anecdotal
inward) observations to a finished product. This is great if it works
(though it has failed for the last 40 years), but once you hit a stumbling
block, there is no way to debug your approach to correct its shortcomings.
You think that you can do a perfect job without such debugging, but having
been in the computer business just as long as you have, I have seen WAY too
many problems to ever believe in such miracles.

Steve Richfield

Re: [agi] Neurons

2008-06-03 Thread Steve Richfield
Vladimir,

On 6/3/08, Vladimir Nesov [EMAIL PROTECTED] wrote:

 On Tue, Jun 3, 2008 at 6:59 AM, Steve Richfield
 [EMAIL PROTECTED] wrote:

  Note that modern processors are ~3 orders of magnitude faster than a KA10,
  and my 10K architecture would provide another 4 orders of magnitude, for a
  net improvement over the KA10 of ~7 orders of magnitude. Perhaps another
  order of magnitude would flow from optimizing the architecture to the
  application rather than emulating Pentiums or KA10s. That leaves us just one
  order of magnitude short, and we can easily make that up by using just 10 of
  the 10K architecture processors. In short, we could emulate human-scale
  systems in a year or two with adequate funding. By that time, process
  improvements would probably allow us to make such systems on single wafers,
  at a manufacturing cost of just a few thousand dollars.
 

 Except that you still wouldn't know what to do with all that. ;-)


... which gets to my REAL source of frustration.

Intel isn't making 10K processors because no one is ordering them, and no
one is ordering them because of the lack of understanding of how our brains
work. A scanning UV fluorescence microscope could answer many of the
outstanding questions, but it would be VERY limited without a 10K processor
to reconstruct the diagrams. So, for the lack of a few million dollars, both
computer science and neuroscience are stymied in the same respective holes
that they have been in for most of the last 40 years.

From my viewpoint, AI is an oxymoron, because of this proof by exhibition
that there is no intelligence to make artificially! It appears that the
world is just too stupid to help, when such small bumps can
stop entire generations of research in multiple disciplines.

Meanwhile, drug companies are redirecting ~100% of medical research funding
into molecular biology, nearly all of which leads nowhere.

The present situation appears to be entirely too stable. There seems to be
no visible hope past this, short of some rich person throwing a lot of money
at it - and they are all too busy to keep up on forums like this one.

Are we on the same page here?

Steve Richfield





Re: [agi] Neurons

2008-06-03 Thread J Storrs Hall, PhD
Strongly disagree. Computational neuroscience is moving as fast as any field 
of science has ever moved. Computer hardware is improving as fast as any 
field of technology has ever improved. 

I would be EXTREMELY surprised if neuron-level simulation were necessary to
get human-level intelligence. With reasonable algorithmic optimization, and a
few tricks our hardware can do that the brain can't (e.g. store sensory
experience verbatim and replay it into learning algorithms as often as
necessary), we should be able to knock 3 orders of magnitude or so off the
pure-neuro HEPP estimate -- which puts us at ten high-end graphics cards,
i.e. less than the price of a car (or just wait till 2015 and get one
high-end PC).
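
As a rough check on that arithmetic, here is a back-of-envelope sketch. The
pure-neuro figure and the 2008 GPU throughput below are round numbers I am
assuming purely for illustration (the post quotes neither):

    # Back-of-envelope check of the 'ten graphics cards' claim, with assumed numbers.
    pure_neuro_hepp = 1e16     # ASSUMED pure-neuro estimate, ops/sec (illustrative only)
    algorithmic_savings = 1e3  # the ~3 orders of magnitude claimed above
    gpu_2008 = 1e12            # ASSUMED sustained ops/sec for a 2008 high-end GPU

    needed = pure_neuro_hepp / algorithmic_savings
    print(f"required: {needed:.0e} ops/s -> about {needed / gpu_2008:.0f} GPUs")
    # With these assumptions: required 1e+13 ops/s -> about 10 GPUs

The conclusion is insensitive to the exact figures: each order of magnitude
either way simply shifts the GPU count by a factor of ten.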

Figuring out the algorithms is the ONLY thing standing between us and AI.

Josh

On Tuesday 03 June 2008 12:16:54 pm, Steve Richfield wrote:
 ... for the lack of a few million dollars, both computer science
 and neuroscience are stymied in the same respective holes that they have
 been in for most of the last 40 years.
 ...
 Meanwhile, drug companies are redirecting ~100% of medical research funding
 into molecular biology, nearly all of which leads nowhere.
 
 The present situation appears to be entirely too stable. There seems to be
 no visible hope past this, short of some rich person throwing a lot of money
 at it - and they are all too busy to keep up on forums like this one.
 
 Are we on the same page here?




Re: [agi] Neurons

2008-06-03 Thread Steve Richfield
Josh,

On 6/3/08, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:

 Strongly disagree. Computational neuroscience is moving as fast as any field
 of science has ever moved.


Perhaps you are seeing something that I am not. There are ~200 different
types of neurons, but no one seems to understand what the ~200 different
things are that they have to do. Sure, some simple nets are working, but I
just don't see the expected leap coming from this.

 Computer hardware is improving as fast as any field of technology has ever
 improved.


We have already discussed here how architecture (of commercially available
processors) has been in a state of arrested development for ~35 years, with
~1:1 in performance just waiting to be collected.

I would be EXTREMELY surprised if neuron-level simulation were necessary to
 get human-level intelligence.


So would I. My point was that some additional understanding, a wiring
diagram, etc., would go a LONG way toward getting over some of the humps that
doubtless lie ahead. The history of AI is littered with those who have
underestimated the problems.

 With reasonable algorithmic optimization, and a few tricks our hardware can
 do that the brain can't (e.g. store sensory experience verbatim and replay it
 into learning algorithms as often as necessary), we should be able to knock 3
 orders of magnitude or so off the pure-neuro HEPP estimate -- which puts us
 at ten high-end graphics cards, i.e. less than the price of a car (or just
 wait till 2015 and get one high-end PC).


The point of agreement between BOTH of our estimates is that computer
horsepower is NOT a barrier.

Figuring out the algorithms is the ONLY thing standing between us and AI.


Back to those ~200 different types of neurons. There are probably some cute
tricks buried down in their operation, and you probably need to figure out
substantially all ~200 of those tricks to achieve human intelligence. If I
were an investor, this would sure sound pretty scary to me without SOME sort
of insurance like scanning capability, and maybe some simulations.

Steve Richfield





Re: [agi] Neurons

2008-06-02 Thread Steve Richfield
Josh,

On 6/2/08, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:

 One good way to think of the complexity of a single neuron is to think of it
 as taking about 1 MIPS to do its work at that level of organization. (It has
 to take an average 10k inputs and process them at roughly 100 Hz.)


While a CNS spiking neuron may indeed have this sort of bandwidth (except
maybe only ~200 of the inputs are active at any one time), the glial cells
that comprise 90% of the brain are MUCH slower.

There appear to be various approaches for trimming the computation needed
to emulate a neuron, though there remains so much uncertainty as to what
neurons are actually doing that at best you can only estimate the exponent. I
suspect that this could be trimmed by an easy order of magnitude with
clever programming.
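
A minimal sketch of the kind of trimming being suggested, comparing a dense
update of all 10k inputs at 100 Hz against an event-driven update of only the
~200 active inputs; the function and the exact numbers are my own
illustrative assumptions:

    def updates_per_second(inputs_touched_per_tick, tick_rate_hz=100):
        # Rough per-neuron cost, in input-updates per second.
        return inputs_touched_per_tick * tick_rate_hz

    dense  = updates_per_second(10_000)  # 10k inputs at ~100 Hz -> ~1 MIPS, Josh's figure
    sparse = updates_per_second(200)     # only the ~200 inputs active at any one time
    print(dense, sparse, dense // sparse)  # 1000000 20000 50 -> a factor of ~50

Skipping inactive inputs alone buys more than the "easy order of magnitude"
mentioned above, before any cleverer approximations are applied.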

 This is essentially the entire processing power of the DEC KA10, i.e. the
 computer that all the classic AI programs (up to, say, SHRDLU) ran on. One
 real-time neuron equivalent. (back in 1970 it was a 6-figure machine --
 nowadays, same power in a 50-cent PIC microcontroller).


For an underwater sound classification system, I once showed how the task
could be performed by a single real-world-capability neuron. The good news
is that if you really get it right, they each do a LOT.

 A neuron does NOT simply perform a dot product and feed it in to a sigmoid.
 One good way to think of what it can do is to imagine a 100x100 raster
 lasting 10 ms. It can act as an associative memory for a fairly large number
 of such clips, firing in an arbitrary stored pattern when it sees one of them
 (or anything close enough).


Further, its inputs often incorporate differentiation or integration, and
the inhibitory synapses usually incorporate complex non-linear and often
discontinuous functions.
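
Here is a minimal sketch of the "clip" associative memory Josh describes,
done as a nearest-match lookup over binary 100x100 patterns. The
Hamming-style matching rule, the 90% threshold, and all names are my own
assumptions about how such a memory might be modeled, not a claim about what
real neurons do:

    import numpy as np

    class ClipMemory:
        # Toy associative memory over binary 'clips' (e.g. a 100x100 raster x 10 ms).
        def __init__(self, match_fraction=0.9):
            self.match_fraction = match_fraction
            self.stored = []                  # list of (key_clip, output_pattern)

        def store(self, key_clip, output_pattern):
            self.stored.append((key_clip.astype(bool), output_pattern))

        def recall(self, clip):
            # Fire the stored output pattern of the closest key, if close enough.
            clip = clip.astype(bool)
            best = max(self.stored, key=lambda kv: np.mean(kv[0] == clip), default=None)
            if best is not None and np.mean(best[0] == clip) >= self.match_fraction:
                return best[1]                # 'fires in an arbitrary stored pattern'
            return None

    # Usage: store a random clip, then recall it from a copy with ~2% of bits flipped.
    rng = np.random.default_rng(0)
    mem = ClipMemory()
    key = rng.random((100, 100)) < 0.5
    mem.store(key, output_pattern="burst-A")
    noisy = key ^ (rng.random((100, 100)) < 0.02)
    print(mem.recall(noisy))                  # -> burst-A

Extending the sketch with the differentiation, integration, and non-linear
inhibitory transfer functions mentioned above would be straightforward, but
it is omitted here to keep the example short.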

Compared to that, the ability to modify its behavior based on a handful of
 global scalar variables (the concentrations of neurotransmitters etc) is
 trivial.


The REAL problem with functionality is that neuroscientists are loath to
talk about what they have seen but cannot prove exists. This creates a ~40
year gap between early observations and what becomes available to AGIers
through the popular press.

 Not simple -- how many ways could you program a KA10? But limited
 nonetheless. It still takes 30 billion of them to make a brain.


I suspect that the job could be done with only a billion or so of them,
though I have no idea how to interconnect or power them.

Note that modern processors are ~3 orders of magnitude faster than a KA10,
and my 10K architecture would provide another 4 orders of magnitude, for a
net improvement over the KA10 of ~7 orders of magnitude. Perhaps another
order of magnitude would flow from optimizing the architecture to the
application rather than emulating Pentiums or KA10s. That leaves us just one
order of magnitude short, and we can easily make that up by using just 10 of
the 10K architecture processors. In short, we could emulate human-scale
systems in a year or two with adequate funding. By that time, process
improvements would probably allow us to make such systems on single wafers,
at a manufacturing cost of just a few thousand dollars.
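
To make that arithmetic explicit, here is a short sketch multiplying out the
claimed factors. The one-KA10-per-real-time-neuron baseline and the individual
factors come from this thread; treating each as a simple power of ten is my
own simplification:

    # Multiply out the claimed speed-up factors over a KA10 (~1 real-time neuron).
    factors = {
        "modern processor vs KA10":       1e3,  # ~3 orders of magnitude
        "10K architecture":               1e4,  # ~4 orders of magnitude
        "architecture tuned to the task": 1e1,  # ~1 order of magnitude
        "use 10 such processors":         1e1,
    }
    neuron_equivalents = 1.0  # one KA10 ~= one real-time neuron equivalent
    for f in factors.values():
        neuron_equivalents *= f
    print(f"{neuron_equivalents:.0e} neuron equivalents")  # 1e+09

With these factors the total lands at ~10^9 real-time neuron equivalents,
consistent with the "billion or so" neurons suggested earlier in this
message.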

Steve Richfield


