>> Also, Moore's law is bound to hit a physical limit. It cannot be that far
>> now. It's already fishy, since it's being driven mostly by multicore
>> architectures. Moving from the sequential to the parallel world is far from
>> trivial in terms of software engineering. The brain is massively parallel
>> and asynchronous, and we are still very bad with that sort of stuff. Maybe
>> that's precisely where the missing good stuff lies.
There is a massive effort in the computer industry to solve the
parallelization problem. For certain classes of problems it is trivial --
say DSP (digital signal processing) or image rendering. Such tasks can
easily be subdivided into smaller and smaller chunks that can be farmed out
to as many concurrently running cores as one has at one's disposal. But many
tasks are much harder to parallelize, because one step in a sequence depends
on the outcome of an earlier one. A lot of work is going into compiler
algorithms that can discover opportunities for parallelizing sequential,
linearized tasks, in order to compile code into optimally chunked tasks that
can run in parallel. But as you said, this is a hard class of problem, and
it is often not apparent whether opportunities for parallelization exist in
a given workflow or body of code, or whether the code can be re-factored to
create them.
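To make the contrast concrete, here is a minimal Python sketch (the
`brighten` and `running_balance` functions are made-up illustrations, not
from any real codebase): per-element image work chunks trivially across
cores, while a running total carries a dependency chain that cannot be
farmed out the same way.

```python
from multiprocessing import Pool

def brighten(pixel):
    # Each pixel is independent of every other -- the "embarrassingly
    # parallel" case (DSP, rendering): just split the data and farm it out.
    return min(pixel + 40, 255)

def running_balance(transactions, start=0):
    # Each step depends on the previous result -- a sequential dependency
    # chain. No amount of chunking helps without re-thinking the algorithm.
    balances = []
    total = start
    for t in transactions:
        total += t
        balances.append(total)
    return balances

if __name__ == "__main__":
    pixels = [10, 200, 250, 90]
    with Pool(2) as pool:
        # pool.map distributes independent work items across worker processes
        print(pool.map(brighten, pixels))   # [50, 240, 255, 130]
    print(running_balance([100, -30, 20]))  # [100, 70, 90]
```

(Some sequential-looking chains, like the running sum above, do admit
parallel formulations -- e.g. prefix-sum/scan algorithms -- which is exactly
the kind of opportunity those compiler analyses try to discover.)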
Multi-core architectures are going to continue to grow exponentially, and
soon we will be seeing 16-, 32-, 64-, 256-, 512-, and 1k-core machines, and
off to the races we go.
As you said, going multi-core lets hardware manufacturers continue to drive
their headline metrics in a relatively easy manner (so far at least), though
at some point inter-core communication will grow harder and harder to
manage, and keeping up bus throughput at the core level will become a
serious challenge.
But Moore's Law is still holding for the traditional metrics, apart from the
multi-core dimension of growth. The industry has also already ramped up
considerable research into radical new possibilities and materials (carbon
nanotubes, for example), and into the challenges of using electron spin as
the carrier of information and moving towards architectures that can shuttle
individual electrons.
I don't get the sense that the industry is going to hit any fundamental
physical limits on the further miniaturization and speeding up of hardware
any time soon. With traditional chip architectures the limits may not be
that much further off -- perhaps ten years -- and AMD, for example, is
having problems scaling down to 20nm (though Intel is churning out chips at
the 22nm scale), but that applies to traditional chip architectures on
silicon.
What about graphene? DNA/other molecular computers? 
There remains a huge amount of room at the bottom to continue to scale down,
and I don't see any fundamental reason why clever technologists, with
increasingly sophisticated micro- and nano-scale manufacturing chops, cannot
keep devising ways to exploit phenomena that can be controlled, switched,
and stored in one state or another at those smaller and smaller scales, and
to grow in the orthogonal dimension of 3-D as well. In fact, as fewer and
fewer electrons get squeezed through gates at smaller and smaller scales,
less power is needed and less waste heat is generated.
In fact, the human brain is a clear example of just how much room there is
yet to go at the bottom: we each have a 20-watt multi-core machine with 86
billion processors running a hundred-trillion-connection network, all
crackling away in a tightly folded case about the size of a grapefruit. How
many generations of Moore's Law will it take to reach that kind of density?
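As a back-of-envelope answer (treating each neuron as a "core", and assuming
an 8-core desktop as today's baseline -- both crude assumptions for
illustration only):

```python
import math

neurons = 86e9      # neuron count cited above
cores_today = 8     # assumed 2013-era desktop core count

# Number of Moore's-law doublings needed to close the gap in core count
generations = math.log2(neurons / cores_today)
print(round(generations, 1))  # roughly 33 doublings

# At the traditional ~2 years per doubling, that's on the order of 65-70
# years -- ignoring power, interconnect, and everything else that makes
# the comparison apples-to-oranges.
```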
-Chris



-----Original Message-----
From: everything-list@googlegroups.com
[mailto:everything-list@googlegroups.com] On Behalf Of Telmo Menezes
Sent: Saturday, August 24, 2013 2:00 PM
To: everything-list@googlegroups.com
Subject: Re: Deep Blue vs The Tianhe-2 Supercomputer

On Sat, Aug 24, 2013 at 9:05 PM, Platonist Guitar Cowboy
<multiplecit...@gmail.com> wrote:
> As I tried to comment in the other thread concerning chess: it's not 
> just about power, it's also about quality of coding. Just one fresh 
> opening, a novel variation or line in the mid game, a bug in the code, 
> one position falsely assessed, and all computing power in the universe 
> will still lose that game. To generalize this to all problems seems a 
> bit quick. PGC

I agree with the sentiment. Chess is a very narrow case though: the min-max
algorithm plus a brutal amount of computing power is surely going to beat a
human. The min-max algorithm is so simple that it is not that hard to
implement with zero defects. The issue, though, is the following: we
currently only know how to beat top human players with brutal computational
power. The part of the human brain devoted to playing chess (even in a Grand
Master) cannot possibly match what we already do artificially in terms of
computing power. It must use smarter algorithms. Our brain cannot possibly
hold the gigantic search trees involved in min-max, it must be doing
something much more clever. We don't know what that is.
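[The min-max idea Telmo refers to fits in a few lines; this is an editorial
sketch on a toy game tree, not chess -- `children` and `evaluate` here are
stand-in functions for illustration:

```python
def minimax(state, depth, maximizing, children, evaluate):
    # Plain minimax: exhaustively search the game tree to 'depth' and
    # back up the best score -- the brute-force core described above.
    kids = children(state)
    if depth == 0 or not kids:
        return evaluate(state)
    scores = (minimax(c, depth - 1, not maximizing, children, evaluate)
              for c in kids)
    return max(scores) if maximizing else min(scores)

# Toy game tree: internal nodes are lists of children, leaves are scores.
tree = [[3, 5], [2, 9]]
children = lambda s: s if isinstance(s, list) else []
evaluate = lambda s: s
print(minimax(tree, 2, True, children, evaluate))  # 3

# The tree for real chess is astronomically larger, which is why the
# artificial version needs "brutal" computing power and the brain must
# be doing something else entirely.
```
]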

We are now approaching a point where we can have supercomputers with the
same estimated computational power of a human brain, but we are very far
from replicating its capabilities. There's even a lot of stuff insects do
that we are not close to matching. I dare even say bacteria. There are many
fundamental algorithms yet to be discovered, that's for sure.

Also, Moore's law is bound to hit a physical limit. It cannot be that far
now. It's already fishy, since it's being driven mostly by multicore
architectures. Moving from the sequential to the parallel world is far from
trivial in terms of software engineering. The brain is massively parallel
and asynchronous, and we are still very bad with that sort of stuff. Maybe
that's precisely where the missing good stuff lies.

Incidentally, Richard Feynman was involved with a startup that tried to
create a new type of highly parallel computer. Here's an interesting read
about it:

http://longnow.org/essays/richard-feynman-connection-machine/

I love this part:

"We were arguing about what the name of the company should be when Richard
walked in, saluted, and said, "Richard Feynman reporting for duty. OK, boss,
what's my assignment?" The assembled group of not-quite-graduated MIT
students was astounded.

After a hurried private discussion ("I don't know, you hired him..."), we
informed Richard that his assignment would be to advise on the application
of parallel processing to scientific problems.

"That sounds like a bunch of baloney," he said. "Give me something real to
do."

So we sent him out to buy some office supplies."


Telmo.

>
> On Sat, Aug 24, 2013 at 6:07 PM, John Clark <johnkcl...@gmail.com> wrote:
>>
>> Suppose that in 1997 you had a very difficult problem to solve, so 
>> difficult that it would take Deep Blue, the supercomputer that beat 
>> the best human chess player in the world, 18 years to solve, what should
you do?
>> You'd do better to let Moore's law do all the heavy lifting and leave 
>> Deep Blue alone and sit on your hands from 1997 until just 2 minutes 
>> ago, because that's how long it would take the 2013 supercomputer 
>> Tianhe-2 to solve the problem. And in 20 years your wristwatch will 
>> be more powerful than Tianhe-2.
>>
>>   John K Clark
>>
>> --
>> You received this message because you are subscribed to the Google 
>> Groups "Everything List" group.
>> To unsubscribe from this group and stop receiving emails from it, 
>> send an email to everything-list+unsubscr...@googlegroups.com.
>> To post to this group, send email to everything-list@googlegroups.com.
>> Visit this group at http://groups.google.com/group/everything-list.
>> For more options, visit https://groups.google.com/groups/opt_out.
>
>


