This was a pretty pointless article.

Aaronson says nothing about what QCs might actually be useful for (if they 
ever work as the "theory" suggests).

Here is how parallel computing can be achieved with qubits:

       https://quantum-algorithms.herokuapp.com/433/shor-par/node11.html



On Friday, November 1, 2019 at 4:25:21 PM UTC-5, John Clark wrote:
>
> Quantum Computer expert Scott Aaronson wrote an editorial in the October 
> 30, 2019, New York Times:
>
> Why Google’s Quantum Supremacy Milestone Matters 
> <https://www.nytimes.com/2019/10/30/opinion/google-quantum-computer-sycamore.html>
>
>
> *Why Google’s Quantum Supremacy Milestone Matters*
> *By Scott Aaronson*
>
> Google officially announced last week in the journal Nature that it 
> achieved the milestone of “quantum supremacy.” This phrase, coined by the 
> physicist John Preskill in 2012, refers to the first use of a quantum 
> computer to make a calculation much faster than we know how to do it with 
> even the fastest supercomputers available. The calculation doesn’t need to 
> be useful: much like the Wright Flyer in 1903, or Enrico Fermi’s nuclear 
> chain reaction in 1942, it only needs to prove a point.
>
> Over the last decade, together with students and colleagues, I helped 
> develop much of the theoretical underpinning for quantum supremacy 
> experiments like Google’s. I reviewed Google’s paper before it was 
> published. So the least I can do is to try to explain what it means.
>
> Until recently, every computer on the planet — from a 1960s mainframe to 
> your iPhone, and even inventions as superficially exotic as “neuromorphic 
> computers” and DNA computers — has operated on the same rules. These were 
> rules that Charles Babbage understood in the 1830s and that Alan Turing 
> codified in the 1930s. Through the course of the computer revolution, all 
> that has changed at the lowest level are the numbers: speed, amount of RAM 
> and hard disk, number of parallel processors.
>
> But quantum computing is different. It’s the first computing paradigm 
> since Turing that’s expected to change the fundamental scaling behavior of 
> algorithms, making certain tasks feasible that had previously been 
> exponentially hard. Of these, the most famous examples are simulating 
> quantum physics and chemistry, and breaking much of the encryption that 
> currently secures the internet.
>
> In my view, the Google demonstration was a critical milestone on the way 
> to this vision. At a lab in Santa Barbara, Calif., a Google team led by 
> John Martinis built a microchip called “Sycamore,” which uses 53 loops of 
> wire around which current can flow at two different energies, representing 
> a 0 or a 1. The chip is placed into a dilution refrigerator the size of a 
> closet, which cools the wires to a hundredth of a degree above absolute 
> zero, causing them to superconduct. For a moment — a few tens of millionths 
> of a second — this makes the energy levels behave as quantum bits or 
> “qubits,” entities that can be in so-called superpositions of the 0 and 1 
> states.
>
> This is the part that’s famously hard to explain. Many writers fall back 
> on boilerplate that makes physicists howl in agony: “imagine a qubit as 
> just a bit that can be both 0 and 1 at the same time, exploring both 
> possibilities simultaneously.” If I had room for the honest version, I’d 
> tell you all about amplitudes, the central concept of quantum mechanics 
> since Werner Heisenberg, Erwin Schrödinger and others discovered it in the 
> 1920s.
>
> Here’s a short version: In everyday life, the probability of an event can 
> range only from 0 percent to 100 percent (there’s a reason you never hear 
> about a negative 30 percent chance of rain). But the building blocks of the 
> world, like electrons and photons, obey different, alien rules of 
> probability, involving numbers — the amplitudes — that can be positive, 
> negative, or even complex (involving the square root of -1). Furthermore, 
> if an event — say, a photon hitting a certain spot on a screen — could 
> happen one way with positive amplitude and another way with negative 
> amplitude, the two possibilities can cancel, so that the total amplitude is 
> zero and the event never happens at all. This is “quantum interference,” 
> and is behind everything else you’ve ever heard about the weirdness of the 
> quantum world.
>
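> Here is a toy version of that cancellation in a few lines of Python 
> (an illustration of the arithmetic only, nothing from the experiment 
> itself):
>
>     # Two ways for the photon to reach the same spot on the screen.
>     amp_path_1 = 1 / 2 ** 0.5    # positive amplitude
>     amp_path_2 = -1 / 2 ** 0.5   # negative amplitude
>
>     total = amp_path_1 + amp_path_2   # amplitudes add before you square
>     probability = abs(total) ** 2     # Born rule: probability = |amplitude|^2
>     print(probability)                # 0.0 -- the two paths cancel exactly
>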
> Now, a qubit is just a bit that has some amplitude for being 0 and some 
> other amplitude for being 1. If you look at the qubit, you force it to 
> decide, randomly, whether to “collapse” to 0 or 1. But if you don’t look, 
> the two amplitudes can undergo interference — producing effects that depend 
> on both amplitudes, and that you can’t explain by the qubit’s having been 0 
> or by its having been 1.
>
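> One way to see interference doing something that neither “it was 0” 
> nor “it was 1” can explain is to apply the same operation twice. A 
> minimal NumPy sketch (illustrative only):
>
>     import numpy as np
>
>     H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
>     qubit = np.array([1.0, 0.0])                   # start definitely in 0
>
>     once = H @ qubit    # amplitudes (0.71, 0.71): 50/50 if measured here
>     twice = H @ once    # amplitudes (1, 0): the two paths to 1 cancelled
>
>     print(np.abs(once) ** 2)    # [0.5 0.5]
>     print(np.abs(twice) ** 2)   # [1. 0.] -- certainty again
>
> If the qubit had simply been 0 or 1 between the two operations, the 
> second result would still be 50/50; interference is what restores 
> certainty.
>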
> Crucially, if you have, say, a thousand qubits, and they can interact (to 
> form so-called “entangled” states), the rules of quantum mechanics are 
> unequivocal that you need an amplitude for every possible configuration of 
> all thousand bits. That’s 2 to the 1,000 amplitudes, much more than the 
> number of atoms in the observable universe. If you have a mere 53 qubits, 
> as in Google’s Sycamore chip, that’s still 2 to the 53 amplitudes, or about 
> 9 quadrillion.
>
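> The arithmetic is easy to check, and it shows why amplitudes strain 
> classical memory (the 16 bytes per amplitude below is the size of a 
> standard complex double, an assumption rather than a figure from the 
> paper):
>
>     print(2 ** 53)               # 9007199254740992, about 9 quadrillion
>     print(2 ** 53 * 16 / 1e15)   # ~144 petabytes to store them all
>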
> The goal, with Google’s quantum supremacy experiment, was to perform a 
> contrived calculation involving 53 qubits that computer scientists could be 
> as confident as possible really would take something like 9 quadrillion 
> steps to simulate with a conventional computer. The qubits in Sycamore are 
> laid out in a roughly rectangular grid, with each qubit able to interact 
> with its neighbors. Control signals, sent by wire from classical computers 
> outside the dilution refrigerator, tell each qubit how to behave, including 
> which of its neighbors to interact with and when.
>
> In other words, the device is fully programmable — that’s why it’s called 
> a “computer.” At the end, the qubits are all measured, yielding a random 
> string of 53 bits. Whatever sequence of interactions was used to produce 
> that string — in the case of Google’s experiment, the interactions were 
> simply picked at random — you can then rerun the exact same sequence again, 
> to sample another random 53-bit string in exactly the same way, and so on 
> as often as desired.
>
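> On a toy scale, the whole task looks like the following sketch (the 
> random state vector is a stand-in for whatever state the randomly 
> chosen circuit actually prepares):
>
>     import numpy as np
>
>     n = 3                                  # toy size; Sycamore used 53
>     rng = np.random.default_rng(0)
>
>     # Stand-in for the output state of a random circuit: a random
>     # normalized vector of 2^n complex amplitudes.
>     amps = rng.normal(size=2 ** n) + 1j * rng.normal(size=2 ** n)
>     amps /= np.linalg.norm(amps)
>
>     probs = np.abs(amps) ** 2              # Born rule
>     sample = rng.choice(2 ** n, p=probs)   # one measurement: an n-bit string
>     print(format(sample, f"0{n}b"))        # e.g. 110
>
> Rerunning the last two lines samples another string from exactly the 
> same distribution, just as rerunning the circuit does.
>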
> In its Nature paper, Google estimated that its sampling calculation — the 
> one that takes 3 minutes and 20 seconds on Sycamore — would take 10,000 
> years for 100,000 conventional computers, running the fastest algorithms 
> currently known. Indeed the task was so hard, Google said, that even 
> directly verifying the full range of the results on classical computers was 
> out of reach for its team. Thus, to check the quantum computer’s work in 
> the hardest cases, Google relied on plausible extrapolations from easier 
> cases.
>
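> Taking those figures at face value, the claimed gap is easy to 
> compute (using the article’s own numbers, nothing more):
>
>     sycamore = 3 * 60 + 20                    # 200 seconds
>     classical = 10_000 * 365.25 * 24 * 3600   # 10,000 years in seconds
>     print(classical / sycamore)               # ~1.6e9: a billionfold-plus gap
>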
> IBM, which has built its own 53-qubit processor, posted a rebuttal. The 
> company estimated that it could simulate Google’s device in a mere 2.5 
> days, a millionfold improvement over Google’s 10,000 years. To do so, it 
> said, it would only need to commandeer the Oak Ridge Summit, the largest 
> supercomputer that currently exists on earth — which IBM installed last 
> year at Oak Ridge National Laboratory, filling an area the size of two 
> basketball courts. (And which Google used for some of its simulations in 
> verifying the Sycamore results.) Using this supercomputer’s eye-popping 250 
> petabytes of hard disk space, IBM says it could explicitly write down all 9 
> quadrillion of the amplitudes. Tellingly, not even IBM thinks the 
> simulation would be especially easy — nor, as of this writing, has IBM 
> actually carried it out. (The Oak Ridge supercomputer isn’t just sitting 
> around waiting for jobs.)
>
> We’re now in an era where, with heroic effort, the biggest supercomputers 
> on earth can still maybe, almost simulate quantum computers doing their 
> thing. But the very fact that the race is close today suggests that it 
> won’t remain close for long. If Google’s chip had used 60 qubits rather 
> than 53, then simulating its results with IBM’s approach would require 30 
> Oak Ridge supercomputers. With 70 qubits, it would require enough 
> supercomputers to fill a city. And so on.
>
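> The doubling is the whole story: each extra qubit doubles the number 
> of amplitudes, so whatever the exact bytes-per-amplitude accounting, 
> the classical hardware requirement doubles with it:
>
>     for n in (53, 60, 70):
>         print(n, 2 ** n // 2 ** 53)   # 1x, 128x, 131072x Sycamore's count
>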
> Is there real science behind the spectacle of these two tech titans 
> locking antlers? Is “quantum supremacy,” divorced from practical 
> applications, an important milestone at all? When should we expect those 
> practical applications, anyway? Assuming Google has achieved quantum 
> supremacy, what exactly has it proved — and is it something anyone doubted 
> in the first place?
>
> Let’s start with applications. A protocol that I came up with a couple of 
> years ago uses a sampling process, just like in Google’s quantum supremacy 
> experiment, to generate random bits. While by itself that’s unimpressive, 
> the key is that these bits can be demonstrated to be random even to a 
> faraway skeptic, by using the telltale biases that come from quantum 
> interference. Trusted random bits are needed for various cryptographic 
> applications, such as proof-of-stake cryptocurrencies (environmentally 
> friendlier alternatives to Bitcoin). Google is now working toward 
> demonstrating my protocol; it bought the non-exclusive intellectual 
> property rights last year.
>
> Other applications will almost certainly require more qubits, and of a 
> higher quality — things that Google, IBM and the other players are racing 
> to build. One major milestone to watch for next will be the first use of 
> small quantum computers to simulate the quantum physics of chemicals and 
> materials in a way that’s actually useful to chemists and materials 
> scientists. Simulating quantum mechanics — that is, overcoming the 
> exponential explosion of amplitudes in nature via a computer equipped with 
> the same power — was the original application that the physicist Richard 
> Feynman envisioned when he proposed the idea of a quantum computer in the 
> early 1980s. It’s still the most important application we know — one that 
> could aid in the design of everything from batteries and solar cells to 
> fertilizers and lifesaving drugs.
>
> An even bigger milestone will be the first practical demonstration of 
> quantum error correction — a technology that, in theory, should be able to 
> keep qubits alive for vastly longer amounts of time by cleverly encoding 
> them across many physical qubits. Quantum computing researchers think that 
> quantum error correction is what will ultimately let quantum computers 
> scale beyond a couple hundred qubits, to the million- or billion-qubit 
> machines that would fully realize Feynman’s dream. But this hasn’t been 
> demonstrated yet, and no one knows when it will be.
>
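> Quantum error correction is subtler than its classical ancestor (a 
> qubit cannot simply be copied), but the classical three-bit repetition 
> code conveys the basic trade of redundancy for reliability:
>
>     import random
>
>     def encode(bit):              # one logical bit across three physical bits
>         return [bit, bit, bit]
>
>     def add_noise(word, p=0.1):   # flip each physical bit with probability p
>         return [b ^ (random.random() < p) for b in word]
>
>     def decode(word):             # majority vote corrects any single flip
>         return int(sum(word) >= 2)
>
>     trials = 100_000
>     fails = sum(decode(add_noise(encode(0))) != 0 for _ in range(trials))
>     print(fails / trials)         # ~0.028, well below the raw rate p = 0.1
>
> The encoded bit is lost only when two or more of its three carriers 
> flip, which is the sense in which encoding keeps a bit alive longer.
>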
> In the meantime, Google’s demonstration is a crucial proof of concept. 
> Building a quantum computer is so hard that, ever since serious efforts 
> began in the mid-1990s, some distinguished scientists have argued that the 
> task would be impossible. Qubits, they said, will always prove too fragile 
> to control. If quantum mechanics seems to predict that you can harness an 
> exponential number of amplitudes for computation, then so much the worse 
> for our present understanding of quantum mechanics.
>
> Google’s demonstration should give these skeptics pause. To all 
> appearances, a 53-qubit device really was able to harness 9 quadrillion 
> amplitudes for computation, surpassing (albeit for a special, useless task) 
> all the supercomputers on earth. Quantum mechanics worked: an outcome 
> that’s at once expected and mind-boggling, conservative and radical.
>
> The computer revolution was enabled, in large part, by a single invention: 
> the transistor. Before transistors, we were stuck with failure-prone vacuum 
> tubes. Yet vacuum tubes kind of, sort of worked: they translated abstract 
> Boolean logic into electrical signals reliably enough to be useful. We 
> don’t yet have the quantum computing version of the transistor — that would 
> be quantum error correction. Getting there will surely require immense 
> engineering, and probably further insights as well. In the meantime, 
> though, the significance of Google’s quantum supremacy demonstration is 
> this: after a quarter century of effort, we are now, finally, in the early 
> vacuum tube era of quantum computing.
>
> *Scott Aaronson is David J. Bruton Centennial Professor of Computer 
> Science at the University of Texas at Austin, and the founding director of 
> UT’s Quantum Information Center. He’s the author of “Quantum Computing 
> Since Democritus,” and blogs about quantum computing and other topics at 
> Shtetl-Optimized.*