Ben,

On 12/12/08, Ben Goertzel <b...@goertzel.org> wrote:
>
> >> > There isn't much that an MIMD machine can do better than a
> >> > similar-sized SIMD machine.
> >>
> >> Hey, that's just not true.
> >>
> >> There are loads of math theorems disproving this assertion...
> >
> >
> > Oops, I left out the presumed adjective "real-world". Of course there are
> > countless diophantine equations and other math trivia that aren't
> > vectorizable.
> >
> > However, anything resembling a brain, in that the process can be done by
> > billions of slow components, must by its very nature be vectorizable.
> > Hence, in the domain of our discussions, I think my statement still holds.
>
> I'm not so sure, but for me to explore this area would require a lot of
> time, and I don't feel like allocating it right now...


No need, so long as
1.  you see some possible future path to vectorizability, and
2.  my (or similar) vector processor chips aren't a reality yet.
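
To make "vectorizable" concrete, here is a quick sketch of what I have in
mind (just my illustration in NumPy; the unit count, weight shapes, and
sigmoid nonlinearity are made-up assumptions, not anyone's actual design).
Updating a whole layer of slow neuron-like units collapses into one
matrix-vector product plus an elementwise nonlinearity, which is exactly the
kind of work a SIMD machine does best:

    import numpy as np

    # One update step for N slow neuron-like units: each sums weighted
    # inputs and applies a nonlinearity. The entire step is a single
    # matrix-vector product -- pure SIMD work, no MIMD required.
    N = 100_000                              # number of units (illustrative)
    rng = np.random.default_rng(0)
    W = rng.standard_normal((N, 64)) * 0.01  # input weights (dense here for simplicity)
    x = rng.standard_normal(64)              # shared input vector

    activations = 1.0 / (1.0 + np.exp(-(W @ x)))  # sigmoid of weighted sums

The diophantine equations and other math trivia mentioned above have no such
regular structure, which is exactly why they don't vectorize.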

> I'm also not so sure our current models of brain mechanisms or
> dynamics are anywhere near accurate, but that's another issue...


I finally cracked the "theory of everything in cognition" puzzle discussed
here ~4 months ago, which comes with an understanding of the super-fast
learning observed in biological systems, e.g. visual systems that tune
themselves up in the first few seconds after an animal's eyes open for the
first time. I am now translating it from "Steveze" into readable English,
which should hopefully be done in a week or so. Also, insofar as possible, I
am translating all formulas into grammatically correct English statements
for the mathematically challenged readers. Unless I missed something really
BIG, it will change everything from AGI to NN to ???. Most especially, AGI
is largely predicated on the INability to perform such fast learning, which
is where experts enter the picture. With this theory, modifying present AGI
approaches to learn fast shouldn't be all that difficult.

After any off-line volunteers have first had their crack, I'll post it here
for everyone to beat it up.

Do I hear any volunteers out there in Cyberspace who want to help "hold my
feet to the fire" off-line regarding those pesky little details that so
often derail grand theories?

> >> Indeed, AGI and physics simulation may be two of the app areas that have
> >> the easiest times making use of these 80-core chips...
> >
> >
> > I don't think Intel is even looking at these. They are targeting embedded
> > applications.
>
> Well, my bet is that a main app of multicore chips is ultimately gonna
> be gaming ...
> and gaming will certainly make use of fancy physics simulation ...


Present gaming video chips have special processors that are designed to
perform the 3D-to-2D transformations needed for gaming, and to maintain 3D
models. It is hard (though not impossible) to compete with custom hardware
that has been refined for a particular application.
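
For concreteness, the heart of that 3D-to-2D transformation is just a
perspective divide, repeated for millions of vertices per frame. A minimal
sketch (my own illustration; the focal length and sample point are made-up
values, and real pipelines use 4x4 homogeneous matrices throughout):

    # Minimal perspective projection: the 3D-to-2D step that gaming
    # hardware performs in bulk for millions of vertices.
    def project(x, y, z, f=1.0):
        # f is the focal length; dividing by depth z maps a 3D point
        # onto the 2D screen plane
        return (f * x / z, f * y / z)

    print(project(2.0, 1.0, 4.0))  # -> (0.5, 0.25)

Hard-wiring exactly this sort of fixed, repetitive arithmetic is what makes
the custom hardware so hard to beat.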

Also, it would seem to be a terrible waste of tens of teraflops just to
operate a video game.

> and I'm betting it will also make use of early-stage AGI...


There is already some of that creeping into games, including actors who
perform complex jobs in changing virtual environments.

Steve Richfield


