What Will the End of Moore’s Law Mean for Consumers? Not Much
_Wade Roush_ (http://www.xconomy.com/author/wroush/)
July 26th, 2013 

“The party isn’t exactly over, but the police have arrived, and the music has been turned way down.”
That’s how Peter Kogge, an ex-IBM computer scientist who teaches at Notre Dame, described the state of supercomputing in _a 2011 article in IEEE Spectrum_ (http://spectrum.ieee.org/computing/hardware/nextgeneration-supercomputers/0). The giant machines that researchers use to simulate things like climate change, protein folding, and nuclear tests aren’t going to keep getting faster at the same rate they have in the past, wrote Kogge, who led a study group on the question for the U.S. Defense Advanced Research Projects Agency, or DARPA.
The basic reason: pushing microprocessors to work a lot faster than they already do will require inordinate amounts of power and generate an unmanageable amount of waste heat. You could build a computer that runs 100 times faster than Cray’s 1-petaflop _Blue Waters_ (http://www.ncsa.illinois.edu/BlueWaters/) machine at the National Center for Supercomputing Applications, but “you’d need a good-sized nuclear power plant next door,” not to mention a huge, dedicated cooling system, Kogge observed.
How do roadblocks in supercomputing relate to the kind of computing that we average Joes do every day—sifting through e-mail, posting photos on Facebook, maybe playing a few video games?
Well, it used to be that advances at the biggest scales of computing heralded near-term benefits for consumers. Consider that the ASCI Red supercomputer at Sandia National Laboratories, built in 1996, had a peak speed of 1.8 teraflops (trillions of floating-point operations per second). ASCI Red required 800,000 watts of power and took up 150 square meters of floor space. Just 10 years later, thanks to steady advances in transistor miniaturization, Sony’s PlayStation 3 could hit the same speed (1.8 teraflops) using less than 200 watts of power, in a box small enough to fit under your TV.
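To put that leap in perspective, here is a quick back-of-the-envelope calculation in Python, a sketch using only the figures quoted above (1.8 teraflops, 800,000 watts, under 200 watts), showing how far performance per watt moved in a decade even though peak speed stayed flat:

```python
# Back-of-the-envelope comparison of ASCI Red (1996) and the PlayStation 3 (2006),
# using only the figures cited in the paragraph above.
asci_red_flops = 1.8e12   # 1.8 teraflops peak
asci_red_watts = 800_000  # roughly 800,000 watts

ps3_flops = 1.8e12        # same 1.8 teraflops peak
ps3_watts = 200           # "less than 200 watts", treated here as an upper bound

asci_red_eff = asci_red_flops / asci_red_watts  # ~2.25e6 flops per watt
ps3_eff = ps3_flops / ps3_watts                 # ~9.0e9 flops per watt

print(f"ASCI Red:      {asci_red_eff:.2e} flops/watt")
print(f"PlayStation 3: {ps3_eff:.2e} flops/watt")
print(f"Improvement: roughly {ps3_eff / asci_red_eff:,.0f}x in ten years")
```

Same speed, about 4,000 times less power per operation: that is the kind of dividend consumers were used to collecting.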
But that, unfortunately, is where the express train stopped. The clock rates of commercial microprocessors peaked at about 3 gigahertz back in 2006, and haven’t advanced at all since then.
Obviously, there have been other kinds of advances since 2006. Engineers have figured out how to put more processing cores on each chip, while tweaking them to run at lower power. The dual-core A5X system-on-a-chip, designed by Apple and manufactured by Samsung in Austin, TX, is the epitome of this kind of clever engineering, giving the iPad, the iPhone, and the iPod touch the power to run mind-blowing games and graphics while still providing all-day battery life, all at roughly 1 gigahertz.
But the uncomfortable truth weighing on the minds of innovators is that Moore’s Law has expired, or will very soon.
Moore’s Law was never a real physical law, of course, but merely a prediction, first ventured by Intel co-founder Gordon Moore back in 1965. It says that the number of transistors that chipmakers can squeeze into a microprocessor will double every 18 to 24 months, without adding to the device’s size or cost.
The prediction held true for about 40 years, but now manufacturers are falling behind. Between 2009 and 2012, Intel improved the performance of its CPUs by only 10 or 20 percent per year—way behind the 60-percent-per-year gains needed to keep pace with Moore’s original forecast.
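The gap compounds fast. A minimal sketch, assuming constant annual improvement at the rates quoted above (illustrative rates, not measured benchmark data), shows how quickly 10 or 20 percent per year falls behind a 60 percent Moore’s-Law pace:

```python
# Compound growth at the annual rates quoted above, applied to a normalized
# starting performance of 1.0. Illustrative only; not benchmark data.
YEARS = 4  # roughly the 2009-2012 window discussed above

def performance_multiple(annual_rate: float, years: int) -> float:
    """Performance multiple after `years` of constant annual improvement."""
    return (1 + annual_rate) ** years

for label, rate in [("10% per year", 0.10), ("20% per year", 0.20), ("60% per year", 0.60)]:
    print(f"{label}: {performance_multiple(rate, YEARS):.2f}x after {YEARS} years")

# Output: roughly 1.46x, 2.07x, and 6.55x respectively -- a gap that widens every year.
```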
Though no one in Silicon Valley likes to talk about it, the “easy” years for the semiconductor industry are over. Transistor gates are now only a few atoms wide, meaning they can’t be shrunk any further without losing track of the electrons flowing through them (it’s a quantum mechanics thing). In other words, Intel, AMD, and their competitors won’t be able to make tomorrow’s chips faster, smaller, and denser without fancy tricks, such as 3D circuit designs, that will likely make future generations of computing devices sharply more expensive to manufacture. So even if they can find ways to stick to the letter of Moore’s Law, they’ll be violating its spirit, which was always really about economics. (In Moore’s own words in _a 2006 retrospective_ (http://www.ece.ucsb.edu/~strukov/ece15bSpring2011/others/MooresLawat40.pdf), “integrated circuits were going to be the path to significantly cheaper products.”)
Let’s say I’m right, and the single most powerful technology trend of the last half-century—the one driving all sorts of other exponential advances, in fields from telecommunications to robotics to genomics—has reached its endpoint. What would that really mean, from a consumer’s point of view?
Not very much; not enough to cause panic in the streets, at any rate. It’s not as if we’ll suffer a sudden dropoff in GDP or productivity or life expectancy. While the effects of the “Moorepocalypse,” as _some have called it_ (http://www.pcworld.com/article/2032913/the-end-of-moores-law-is-on-the-horizon-says-amd.html), will be noticeable, they won’t be catastrophic. That’s because there are important frontiers where computer scientists can make progress without having to wait for transistors to get even smaller—and where a few breakthroughs could be extremely meaningful to consumers.
I’ll detail a few of them in a moment. But first, let’s acknowledge that some pain is on the way. A slowdown in chip advances will have real repercussions in the market for desktops, laptops, tablets, and smartphones, where we’ll probably have to wait a lot longer between big upgrades.
In the struggling desktop market, this pattern has actually been evident for some time. For many people, PCs reached the “good enough” stage in the mid-2000s, as _PCWorld columnist Brad Chacos_ (http://www.pcworld.com/article/2032913/the-end-of-moores-law-is-on-the-horizon-says-amd.html) has noted. There are lots of consumers who own personal computers mainly so that they can surf the Web, play Solitaire, e-mail photos to their friends, or open an occasional spreadsheet—and for them, hardware makers haven’t offered a lot of compelling reasons to chuck the old tower PC from Dell, HP, or Gateway. (My parents got along fine with a 2000-vintage Windows XP machine from Gateway until this summer, when my brother and I finally talked them into getting a MacBook Pro.)
On the mobile side, “there has not been a ‘must have’ new device for quite some time,” as consultant and columnist Mark Lowenstein _argued just this week_ (http://www.fiercewireless.com/story/lowensteins-view-smartphone-ennui-and-what-do-about-it/2013-07-25). That probably helps to explain the fact that smartphones aren’t selling as well as they used to in developed countries. “Fact is, any mid-tier or better smartphone in the market today is pretty fabulous. It does just about anything you would want it or need it to do,” Lowenstein correctly observes. Yet another fact: phones can’t be made much thinner, lighter, faster, or brighter without sacrificing battery life. So it’s hard to see what types of hardware innovation will send average cellular subscribers running back to the Verizon, AT&T, or Apple stores.
But while a slower hardware replacement cycle may cut into profits for PC and handset makers, it’s hardly the end of the world. Let’s say microprocessor speeds do level off exactly where they are today; there is still plenty of room left for other types of improvements in the computing experience for consumers. One might even argue that a pause on the hardware side would allow software engineers to flesh out and optimize promising ideas that are still nascent today, rather than having to spend so much time rewriting their applications for new platforms and devices.
I’ve got three big areas in mind: artificial intelligence, cloud computing, and interface design.
As companies like IBM and Google are demonstrating, making computers smarter isn’t necessarily about making them faster. IBM’s Watson supercomputer was far from the world’s fastest (it ranked 94th at the time it won on Jeopardy!), and Google is famous for filling its data centers with custom Linux servers that have been optimized for low cost and low power consumption rather than speed. What makes these systems smart—allowing Google to fill in your search query even before you finish typing it, for example—is that they have access to huge amounts of data.
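The query-completion example illustrates the data-over-speed point nicely: a useful autocomplete needs a large log of past queries far more than it needs a faster processor. Here is a minimal sketch, with a hypothetical query log and a deliberately naive prefix match (real systems use vastly larger logs and much smarter ranking):

```python
# Minimal autocomplete sketch: rank past queries sharing a prefix by frequency.
# The query log below is hypothetical; real services draw on billions of queries.
from collections import Counter

query_log = Counter({
    "moore's law": 120,
    "moore's law end": 45,
    "moorepocalypse": 7,
    "moon landing": 300,
})

def suggest(prefix, k=3):
    """Return the k most frequent logged queries that start with `prefix`."""
    matches = [(q, n) for q, n in query_log.items() if q.startswith(prefix.lower())]
    matches.sort(key=lambda item: item[1], reverse=True)
    return [q for q, _ in matches[:k]]

print(suggest("moo"))  # ['moon landing', "moore's law", "moore's law end"]
```

The loop itself is trivial; the quality of the suggestions scales almost entirely with how much query data feeds it, which is exactly the kind of progress that doesn’t wait on smaller transistors.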
Google Now, a feature of Google’s mobile search app for Android phones and iPhones, plumbs both your personal data (e-mail, calendar, location, etc.) and Google’s _Knowledge Graph_ (http://www.xconomy.com/san-francisco/2012/12/12/google-gets-a-second-brain-changing-everything-about-search/) database to proactively offer you information about local weather, traffic, public transit, airline flights, sports scores, and the like. Apple’s Siri assistant is just as versatile, but usually waits to be asked. Google and Apple will vastly improve these systems over the coming years as they collect more data about users’ travels, affiliations, habits, and speech patterns. (As Xconomy’s Curt Woodward reports today, _Apple has opened a new office near MIT_ (http://www.xconomy.com/boston/2013/07/26/apples-boston-area-team-working-on-speech-in-nuances-backyard/) to work on exactly that.)
At the same time, virtual personal assistants will turn up in many other walks of life, from banking to customer service to games and education. These systems don’t need HAL-like intelligence to be useful—it turns out that having some contextual data about our internal lives, plus an encyclopedic knowledge of the external world, plus a bit of smarts about how we interact with that world, can take them pretty far.
Another big area of consumer technology that will still be ripe for innovation even in the post-Moore’s Law era is cloud computing. There’s obviously some overlap in my categories, as both Google Now and Siri are cloud-based services requiring a wireless connection to the companies’ data centers. But my point is that everything is gradually moving to the cloud.
Let’s suppose that today’s smartphones and laptops have as much computing horsepower as they’re ever going to have. Things will still be okay, since so many applications—word processing, spreadsheets, even video editing—are now available as cloud services, and it’s easy to just keep adding servers to data centers. In a cloud-centric world, the limiting factor will be network bandwidth (especially wireless bandwidth), not chip speeds.
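A rough back-of-the-envelope calculation makes the bandwidth point concrete. The file size and link speeds below are illustrative assumptions, not measurements of any particular network:

```python
# Why bandwidth, not local compute, becomes the bottleneck for cloud services.
# File size and link speeds are illustrative assumptions, not measurements.
FILE_SIZE_BYTES = 1 * 1024**3  # a 1 GB video destined for a cloud editing service

def upload_minutes(link_mbps):
    """Minutes to move the file over a link of the given speed (megabits/second)."""
    seconds = (FILE_SIZE_BYTES * 8) / (link_mbps * 1_000_000)
    return seconds / 60

for label, mbps in [("3G cellular (~2 Mbps)", 2), ("LTE (~10 Mbps)", 10), ("Home broadband (~50 Mbps)", 50)]:
    print(f"{label}: {upload_minutes(mbps):.1f} minutes just to upload")

# However many servers the data center adds, the work can't begin until the
# bits arrive: the network link, not the chip, sets the pace.
```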
Third, there’s room for major strides in interface design. We’ve already graduated from the mouse-and-keyboard era into the touchscreen era. With their motion-sensitive devices, companies like Leap Motion and PrimeSense are bringing gesture control into the mix. Now it’s time to go back to basics and rethink how information is presented, and how we move our focus within and between computing tasks.
It looks like Apple plans to advance the ball here again with iOS 7, the new mobile operating system coming this fall. But I’m also waiting for big improvements in speech recognition—which, fortunately, is another data-driven problem—as well as 3D displays and wearable displays. (I’ll be a Google Glass skeptic until Google figures out how to make the display much more crisp and much less obtrusive.)
Is Moore’s Law really over, or is it just taking a breather? In the end, it doesn’t really matter. We should know by now that innovation proceeds in fits and starts, and that it’s always intertwined with political and economic developments.
Look at rocket technology: from the Nazis’ V-2 attacks on London to the U.S. landings on the Moon, a mere 25 years went by, with the key advances in propulsion and guidance driven by a world war, then a cold war. If progress in rocketry and space exploration slowed after Apollo, it’s largely because we stopped needing better ICBMs. (Now, of course, companies like SpaceX are trying to put orbital flight on a more rational economic footing.)
Or look at commercial aviation. Planes haven’t changed much since Boeing brought out the 747 in 1970 (another Cold War spinoff, by the way: the plane’s double-decker design was patterned after the C-5 Galaxy cargo transport). But access to jet travel has increased enormously, thanks to deregulation and greater competition. (In inflation-adjusted terms, domestic airfares have _fallen by half_ (http://www.theatlantic.com/business/archive/2013/02/how-airline-ticket-prices-fell-50-in-30-years-and-why-nobody-noticed/273506/) since 1978.)
Given the right economic or political incentives, computer researchers will eventually perfect a new medium that will get the party going again, supporting another several decades of exponential advances in processor speed. Maybe it will be proteins or other molecules; maybe it will be qubits (quantum-entangled particles). But whatever it is, it won’t be ready before Moore’s Law peters out on silicon-based devices. Engineers should use the interregnum to work on better consumer-facing software, which isn’t nearly as cool as it could be, or should be.