Personally, I'd rather speculate about optical computing
infrastructures. We had the potential to start building that kind of
thing back in the '90s.

But, OK: a press release I found referencing the D-Wave indicates 128
qubits in the device. But I see nothing about how long it takes them to
converge and stabilize, nor how long it takes to set up the conditions
beforehand, nor whether there are any asymmetries, nor anything about
failure rates. So, basically, it's suitable for implementing a search
mechanism that takes some unknown amount of time (seconds? days?) to
converge and has some unknown level of reliability.

To use it for J, we'd want to be shuffling large amounts of data
through the chip. So, for example, a 1 GB array holds 8 * 2^30 bits;
pushed through 128 qubits at a time, that's 8 * 2^30 / 128, or roughly
67 million, times as long to process as the unknown amount of time we
would need for a single operation. It's not yet clear whether the chip
is suitable for addition, though, so perhaps it would take considerably
longer to perform addition than some other operation.
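
To put a number on that ratio (purely back-of-the-envelope, and
assuming one 128-qubit pass per 128 bits, which is itself a guess):

   NB. bits in a 1 GB array, pushed through 128 qubits at a time
   (8 * 2x^30) % 128
67108864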

Now, let's suppose it has a 0.00001% defect rate. It might be
interesting to speculate about how to manage that defective-computation
rate to ensure a reasonable degree of accuracy in our hypothetical
operations, if only we knew how those operations worked.
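
Still, the scale alone says something. A back-of-the-envelope in J,
assuming each of the ~67 million passes above fails independently with
probability 1e-7 (0.00001% expressed as a fraction):

   p =. 1e_7          NB. 0.00001% as a probability
   n =. 67108864      NB. passes for the 1 GB array above
   1 - (1 - p) ^ n    NB. chance of at least one defective pass
0.998782

So at those (made-up) numbers, nearly every full-array pass would
contain at least one error, and we'd need redundancy or retries baked
into whatever the operations turn out to be.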

Does that help?

Or maybe it would be better not to use J at all, and instead use some
custom language specialized for dealing with whatever this chip
eventually winds up being able to do, if it ever winds up working in a
meaningful fashion. We haven't even gotten J to work on GPU
infrastructure yet. For that matter, I was looking at the lapack lab
today, and I see carefully written documentation on a routine named
dgbrfs, but I can't find any hint of how I could make that routine run
(geev_jlapack_ works just fine, however, as near as I can tell).
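
For reference, the working case looks something like this for me (the
load lines are the pattern I remember from the math/lapack addon docs,
so treat the exact paths as an assumption):

   load '~addons/math/lapack/lapack.ijs'
   load '~addons/math/lapack/geev.ijs'   NB. brings geev_jlapack_ into scope
   geev_jlapack_ 2 2 $ 2 0 0 3           NB. boxed eigenvectors ; eigenvalues

There's no corresponding geev-style script for dgbrfs as far as I can
see, which is presumably why the documented routine has no obvious
entry point.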

Thanks,

-- 
Raul


On Sat, Oct 12, 2013 at 11:11 AM, Donna Y <[email protected]> wrote:
> Yes, I know there is such gibberish being said about it in this clip, but
> speculating on what a high-level language might need from the interface to
> such hardware might be an interesting approach.
>
> Donna Y
> [email protected]
>
>
> On 2013-10-12, at 11:04 AM, Raul Miller <[email protected]> wrote:
>
>> Without spec sheets documenting how to use the chip, and physical
>> access to see how close those specs conform to actual behavior, it's
>> difficult to say much of anything reasonable about even the
>> characteristics of the hardware that would be hooked up to the chip,
>> let alone whether J would be useful for programming it.
>>
>> Thanks,
>>
>> --
>> Raul
>>
>> On Sat, Oct 12, 2013 at 10:59 AM, Donna Y <[email protected]> wrote:
>>> Computers capable of exploiting the power of J? Any thoughts about what J 
>>> could do for the D-Wave chip technology?
>>>
>>> Donna Y
>>> [email protected]
>>>
>>>
>>> http://gigaom.com/2013/10/11/google-nasa-explain-quantum-computing-and-making-mincemeat-of-big-data/
>>>