Josh,

On 4/15/08, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote:
>
> On Monday 14 April 2008 04:56:18 am, Steve Richfield wrote:
> > ... My present
> > efforts are now directed toward a new computer architecture that may be more
> > of interest to AGI types here than Dr. Eliza. This new architecture should
> > be able to build new PC internals for about the same cost, using the same
> > fabrication facilities, yet the processors will run ~10,000 times faster
> > running single-thread code.
>
> This (massively-parallel SIMD) is perhaps a little harder than you seem to
> think. I did my PhD thesis on it and led a multi-million-dollar 10-year
> ARPA-funded project to develop just such an architecture.


I didn't see any attachments. Perhaps you could send me some more
information about this? Whenever I present this stuff, I always emphasize
that there is NOTHING new here, just an assortment of things that are
decades old. Hopefully you have some good ideas in there, or maybe even some
old ideas to which I can attribute the "new" thinking.

> The first mistake everybody makes is to forget that the bottleneck for
> existing processors isn't computing power at all, it's memory bandwidth. All
> the cruft on a modern processor chip besides the processor is there to
> ameliorate that problem, not because they aren't smart enough to put more
> processors on.


Got this covered. Each of the ~10K ALUs has ~8 memory banks to work with,
for a total of ~80K banks, so there should be no latencies except for
inter-ALU communication. Have I missed something here?
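To make the bank-count claim concrete, here is a toy model of per-ALU bank interleaving. The specific numbers (8 banks, 8-cycle bank latency) are assumptions for illustration, not figures from the actual design; the point is only that round-robin access across enough banks hides bank busy time.

```python
# Toy model of per-ALU memory-bank interleaving. The bank count and
# latency below are illustrative assumptions, not design parameters.

def cycles_for_accesses(n_accesses, n_banks, bank_latency):
    """Simulate round-robin access to interleaved banks.

    Each bank is busy for `bank_latency` cycles after accepting a request.
    Returns total cycles to issue `n_accesses` sequential requests.
    """
    busy_until = [0] * n_banks  # cycle at which each bank is free again
    cycle = 0
    for i in range(n_accesses):
        bank = i % n_banks  # sequential addresses stride across banks
        cycle = max(cycle, busy_until[bank])  # stall only if bank is busy
        busy_until[bank] = cycle + bank_latency
        cycle += 1  # one request issued per cycle
    return cycle

# With 8 banks covering an 8-cycle bank latency, accesses never stall:
print(cycles_for_accesses(1000, 8, 8))   # -> 1000, one access per cycle
# With only 4 banks, the ALU stalls waiting for banks to recycle:
print(cycles_for_accesses(1000, 4, 8))
```

As long as the number of banks is at least the bank latency in cycles, sequential streaming sustains one access per cycle; fewer banks, and throughput drops proportionally.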

> The second mistake is to forget that processor and memory silicon fab use
> different processes, the former optimized for fast transistors, the latter
> for dense trench capacitors. You won't get both at once -- you'll give up at
> least a factor of ten trying to combine them over the radically specialized
> forms.


Got that covered. Once multipliers and shift matrices are eliminated and
only a few adders, pipeline registers, and a little random logic remain,
the entire thing can be fabricated with *MEMORY* fab technology! Note
that memories have been getting smarter (and even associative), e.g. cache
memories, and when you look at their addressing, row selection, etc., there
is nothing more complex than what I am proposing for my ALUs. While the
control processor might at first appear to violate this, note that it needs
no computational speed, so its floating point and other complex instructions
can be emulated on slow, memory-compatible logic.
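As a sketch of the kind of emulation I mean: multiplication is the classic example of a "complex instruction" that reduces to nothing but adds and single-bit shifts, which is all that slow, memory-compatible logic needs to provide. This is textbook shift-and-add, not code from the proposed architecture.

```python
# Shift-and-add multiplication: a "complex instruction" reduced to the
# adders and shifts available on slow, memory-compatible logic.
# Illustrative sketch only.

def multiply_shift_add(a, b, width=32):
    """Multiply two unsigned integers using only adds and shifts."""
    product = 0
    for _ in range(width):
        if b & 1:        # low multiplier bit set: add shifted multiplicand
            product += a
        a <<= 1          # shift multiplicand left one place
        b >>= 1          # consume one multiplier bit
    return product

print(multiply_shift_add(1234, 5678))  # -> 7006652, same as 1234 * 5678
```

Floating point decomposes the same way: exponent adds, mantissa shift-and-adds, and normalization shifts, all slow but correct on the control processor.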

> The third mistake is to forget that nobody knows how to program SIMD.


This is a long and complicated subject. I spent a year at CDC digging some
of the last of the nasty bugs out of their Cyber-205 FORTRAN compiler's
optimizer and vectorizer, whose job it was to sweep these issues under the
rug. There are some interesting alternatives, like describing complex code
skeletons and how to vectorize them. When someone writes a loop whose
structure is new to the compiler, someone else has to explain to the
compiler how to vectorize it. Sounds kludgy, but considering the
man-lifetimes that it takes to write a good vectorizing compiler, this
actually works out to much less total effort.
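To illustrate what a "code skeleton" pairing might look like: below is one of the simplest skeletons, a scalar loop as the programmer writes it, alongside the whole-array form a vectorizer would rewrite it into. The function names are invented for illustration.

```python
# One vectorizable "code skeleton" and its rewrite, sketched in Python.
# Names are hypothetical; a real vectorizer does this rewrite internally.

def saxpy_scalar(alpha, x, y):
    """Scalar skeleton: what the programmer writes, one element per trip."""
    out = [0.0] * len(x)
    for i in range(len(x)):
        out[i] = alpha * x[i] + y[i]
    return out

def saxpy_vector(alpha, x, y):
    """Vector form of the same skeleton: whole-array operations that a
    SIMD machine executes one element per ALU, all in lockstep."""
    return [alpha * xi + yi for xi, yi in zip(x, y)]

print(saxpy_scalar(2.0, [1, 2, 3], [4, 5, 6]))  # -> [6.0, 9.0, 12.0]
print(saxpy_vector(2.0, [1, 2, 3], [4, 5, 6]))  # -> [6.0, 9.0, 12.0]
```

The hard part, of course, is the loops that do NOT match a known skeleton -- recurrences, data-dependent control flow -- which is exactly where a human would teach the compiler a new skeleton rather than rewriting the optimizer.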

I absolutely agree that programmers will quickly fall into two groups -
those who "get it" and make the transition to writing vectorizable code
fairly easily, and those who go into some other line of work.

> They
> can't even get programmers to adopt functional programming, for god's sake;
> the only thing the average programmer can think in is BASIC,


I can make a pretty good argument for BASIC, as its simplicity makes it
almost ideal to write efficient compilers for. Add to that the now-missing
MAT statements for simple array manipulations, and you have a pretty serious
competitor for all other approaches.
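For readers who never saw them: Dartmouth BASIC's MAT statements expressed whole-array operations in a single line, e.g. `MAT C = A + B` or `MAT C = (2) * A`. Here is a rough Python rendering of those two statements (a sketch of the semantics, not BASIC syntax):

```python
# Rough Python equivalents of BASIC's MAT statements.
# Sketch of the semantics only.

def mat_add(a, b):
    """MAT C = A + B : elementwise sum of two matrices."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def mat_scale(k, a):
    """MAT C = (k) * A : scale every element by k."""
    return [[k * x for x in row] for row in a]

print(mat_add([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # -> [[6, 8], [10, 12]]
print(mat_scale(2, [[1, 2], [3, 4]]))               # -> [[2, 4], [6, 8]]
```

The point is that a whole-matrix statement hands the compiler exactly the data-parallel structure a SIMD machine wants, with no loop analysis required.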

> or C which is
> essentially machine-independent assembly.


C is only SISD machine independent. When you move to more complex
architectures, its paradigm breaks down.

> Not even LISP. APL, which is the
> closest approach to a SIMD language, died a decade or so back.


Yes. This is a political/psychological issue: there were its
"practitioners," who learned its hieroglyphics, and the rest of the mere
mortals, who simply ignored it. No one (that I know of) ever took the
obvious simple step of producing a humanized front-end to the language.

BTW, APL is still alive in some financial modeling applications.

> Now frankly, a real associative processor (such as described in my thesis --
> read it) would be very useful for AI. You can get close to faking it nowadays
> by getting a graphics card and programming it GPGPU-style. I quit
> architecture and got back into the meat of AI because I think that Moore's
> law has won, and the cycles will be there before we can write the software,
> so it's a waste of time to try end-runs.


Not according to Intel, which sees the ~4 GHz clock limit as permanent.
I sat on my ideas for ~20 years, just waiting for this to happen and blow
Moore out of the water.

> Associative processing would have
> been REALLY useful for AI in the 80's, but we can get away without it, now.


With enough ALUs, associative processing is just another programming style.
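To spell out that programming style: broadcast a search key, let every ALU compare its local word in parallel, and collect the responders. The sequential Python below stands in for what ~10K ALUs would do in a single lockstep cycle; the masked-match formulation is the standard content-addressable-memory idiom, used here as an illustrative sketch.

```python
# Associative (content-addressable) lookup as a SIMD programming style.
# Each loop iteration models one ALU comparing its own word; on the real
# machine all comparisons happen simultaneously. Sketch only.

def associative_match(memory_words, key, mask):
    """Return indices of words matching `key` on the bits selected by `mask`."""
    return [i for i, word in enumerate(memory_words)
            if (word & mask) == (key & mask)]

words = [0b1010, 0b1110, 0b0010, 0b1011]
# Find all words whose top two bits are "10":
print(associative_match(words, 0b1000, 0b1100))  # -> [0, 3]
```

On a machine with one word per ALU, this entire search costs one compare cycle plus responder readout, regardless of how many words are stored.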

Thanks.

Steve Richfield

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com