The Nanor is an example of a quantum-based, solid-state LENR photonic device
that operates in a state of quantum entanglement. A quantum computer could
well be based on the Nanor.

On Mon, Jan 26, 2015 at 11:35 PM, Axil Axil <[email protected]> wrote:

> It is my contention that the Ni/H reactor is a proof of principle for the
> quantum computer. In the Ni/H reactor, energy is shared instantaneously
> among all the plasmonic components of the reactor because a global BEC
> (Bose-Einstein condensate) is maintained throughout the reactor.
>
> On Mon, Jan 26, 2015 at 11:27 PM, Axil Axil <[email protected]> wrote:
>
>> The mechanism that underpins the quantum computer is entanglement, and the
>> effect of entanglement is instantaneous. Computing components will be
>> connected through long-range entanglement, so data will be shared
>> instantaneously.
>>
>> On Mon, Jan 26, 2015 at 9:15 PM, James Bowery <[email protected]> wrote:
>>
>>> All Boolean functions (meaning all programs) can be parallelized to only
>>> 2 gate delays. The problem is that your computer ends up with more gates
>>> than there are elementary particles in the universe.
>>>
>>> A good deal of real computation consists, in essence, of decompressing a
>>> compressed form of the answer. The difficulty of writing MPP software is
>>> essentially that of decompressing the compressed form of the answer
>>> (i.e., the program and its inputs) prior to run time so it maps onto your
>>> parallel architecture.
>>>
>>> To make software maintainable, you start out with the minimal
>>> description -- the Ockham's Razor version -- so that you don't introduce
>>> extraneous complexity into the program specification. The rest, as they
>>> say, is expansion of the Kolmogorov complexity, and there is just no
>>> getting around the fact that you have a _lot_ of serial work in that
>>> process.
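The two-gate-delay claim above is the classic sum-of-products expansion: any Boolean function can be written as an OR (one gate level) of AND terms (a second level), with one AND per input combination where the function is 1, so depth is constant but gate count can grow as 2**n. A minimal Python sketch, where `majority3` is an arbitrary illustrative function, not one from this thread:

```python
from itertools import product

def minterms(f, n):
    # Each input combination where f is 1 becomes one AND gate.
    return [bits for bits in product([0, 1], repeat=n) if f(*bits)]

def two_level_eval(terms, bits):
    # Level 1: one AND per minterm (all literals must match).
    # Level 2: a single OR across all the ANDs.
    return int(any(bits == term for term in terms))

def majority3(a, b, c):
    # Example function: majority vote of three inputs.
    return int(a + b + c >= 2)

terms = minterms(majority3, 3)
# The depth-2 circuit agrees with the original function on every input.
assert all(two_level_eval(terms, bits) == majority3(*bits)
           for bits in product([0, 1], repeat=3))
print(len(terms))  # AND-gate count for this function -> 4
```

For n inputs the worst case is 2**n AND gates, which is the "more gates than elementary particles" problem once n is large.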
>>>
>>> On Mon, Jan 26, 2015 at 8:00 PM, Jed Rothwell <[email protected]>
>>> wrote:
>>>
>>>> James Bowery <[email protected]> wrote:
>>>>
>>>>
>>>>> Architectures that attempt to hide this problem with lots of
>>>>> processors accessing local stores in parallel are drunks looking for their
>>>>> keys under the lamp post.
>>>>>
>>>>
>>>> I disagree. The purpose of a computer is to solve problems. To process
>>>> data. Not to crunch numbers as quickly as possible. The human brain is many
>>>> orders of magnitude slower than any computer, and yet we can recognize
>>>> faces faster than just about any computer, because the brain is a massively
>>>> parallel processor (MPP). Many neurons compare the image to stored images
>>>> simultaneously, and the neurons that find the closest match "come to mind."
>>>> Many data processing functions can be performed in parallel. Sorting and
>>>> searching arrays has been done in parallel since the 1950s. Polyphase sort
>>>> methods with multiple processors and mag tape decks were wonderfully fast.
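The sort-and-merge parallelism described above can be sketched in a few lines of modern Python: runs are sorted concurrently (the role the multiple processors played), then the sorted runs are merged (the role of the tape decks). This is a minimal illustrative sketch, not the polyphase algorithm itself:

```python
import heapq
import random
from concurrent.futures import ProcessPoolExecutor

def parallel_sort(data, workers=4):
    # Split the input into runs, sort the runs concurrently,
    # then merge the sorted runs into one sequence.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        runs = list(pool.map(sorted, chunks))
    return list(heapq.merge(*runs))

if __name__ == "__main__":
    data = [random.randint(0, 999) for _ in range(10_000)]
    assert parallel_sort(data) == sorted(data)
```

The sort phase scales with the number of workers; the final merge is the serial remainder that no amount of parallelism removes.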
>>>>
>>>> It is difficult to write MPP software, but once we master the
>>>> techniques the job will be done, and it will be much easier to update.
>>>> Already, Microsoft Windows works better on multi-processor computers than
>>>> single-processor models. Multiprocessor machines also run voice input
>>>> programs much faster than single processors.
>>>>
>>>> A generation from now we may have personal computers with millions of
>>>> processors. Even if every processor were much slower than today's
>>>> processors, the overall speed for many classes of problems will be similar
>>>> to today's supercomputers -- which can solve problems hundreds of thousands
>>>> to millions of times faster than a PC or Mac. They will have the power of
>>>> today's Watson computer, which is to say, they will be able to play
>>>> Jeopardy or diagnose disease far better than any person. I expect they will
>>>> also recognize faces and do voice input better than any person.
>>>>
>>>> There may be a few esoteric problems that are inherently serial in
>>>> nature and can only be solved by a single processor, but I expect most
>>>> real-world problems can be broken down into procedures run in parallel.
>>>> Of course the breaking down will be done automatically. It already is.
>>>>
>>>> Before computers were invented, all large real world problems were
>>>> broken down and solved in parallel by large groups of people, usually
>>>> organized in a hierarchy. I mean, for example, the design of large
>>>> buildings or the management of corporations, nations or armies.
>>>>
>>>> The fastest data processing in the known universe, by a wide margin, is
>>>> biological cell reproduction. The entire genome is copied by every cell
>>>> that splits. This is a parallel process. The moment a strand of DNA is
>>>> exposed to solution, all of the new bases begin to match up
>>>> simultaneously. DNA is
>>>> also by far the most compact form of data storage in the known universe,
>>>> and I predict it is the most compact that will ever be found. I do not think
>>>> subatomic data storage will ever be possible. All the human data now
>>>> existing can be stored in about 7 ml of DNA.
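A rough order-of-magnitude check of the 7 ml figure, assuming about 2 bits per base pair, roughly 650 daltons per pair, and a water-like density near 1 g/ml (these numbers are assumptions for the sketch, not figures from the thread):

```python
AVOGADRO = 6.022e23        # base pairs per mole of base pairs
BITS_PER_PAIR = 2          # four bases, so each pair encodes 2 bits
DALTONS_PER_PAIR = 650     # approximate mass of one base pair
GRAMS_PER_ML = 1.0         # assumed roughly water-like density

pairs_per_gram = AVOGADRO / DALTONS_PER_PAIR
bytes_per_ml = pairs_per_gram * BITS_PER_PAIR / 8 * GRAMS_PER_ML
zettabytes_in_7_ml = 7 * bytes_per_ml / 1e21
print(round(zettabytes_in_7_ml, 1))  # -> 1.6
```

That works out to roughly a zettabyte and a half, which is at least the right order of magnitude for mid-2010s estimates of the world's stored data.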
>>>>
>>>> - Jed
>>>>
>>>>
>>>
>>
>
