That sounds great, switchable contexts are exactly what I was looking for. Once 
I rewrite my code in Python (it's currently in Mathematica and gets painfully 
slow just as the results are starting to get interesting), I'll compare the 
performance for the various contexts and post what I find to the mailing list. 
Thanks Toby!

Eliot  

________________________________________
From: Toby St Clere Smithe <[email protected]>
Sent: Wednesday, November 11, 2015 12:01 PM
To: Kapit, Eliot; Karl Rupp; [email protected]
Subject: Re: [ViennaCL-support] Can I choose what computational resource runs 
my code in PyViennaCL?

Dear Eliot,

If you build and install the version of PyViennaCL from git, then it
should be stable and do what you need. I think the corresponding version
of the ViennaCL core itself is a bit out of date, and I'm not entirely
sure of its compatibility with the most recent version; but unless you
need work from the most recent version, it should be fine.

That caveat aside: you can change the compute device on a per-object or
a default basis by creating a pyviennacl.backend.Context object and
passing in a PyOpenCL context corresponding to the device you want to
use. If you then set this (PyViennaCL) Context object as
pyviennacl.backend.default_context, it will be used by default. To use a
different device selectively, pass the corresponding Context object to
the constructor of the PyViennaCL matrix/vector/whatever via the
"context" keyword argument:

c = pyviennacl.backend.Context(...)
v = pyviennacl.Vector(..., context=c)
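
Spelled out a bit more, it would look something like the following
(untested off the top of my head, and it needs an OpenCL runtime plus
the git version of PyViennaCL; the device-selection part is standard
PyOpenCL, and the Context wrapping is as described above):

```python
import numpy as np
import pyopencl as cl
import pyviennacl as p

# Pick a GPU device explicitly via PyOpenCL (e.g. your AMD W8100);
# get_devices() with device_type=CPU would select the Xeons instead.
platform = cl.get_platforms()[0]
gpu = platform.get_devices(device_type=cl.device_type.GPU)[0]
cl_ctx = cl.Context(devices=[gpu])

# Wrap the PyOpenCL context in a PyViennaCL Context
gpu_ctx = p.backend.Context(cl_ctx)

# Option 1: make it the default for all subsequently created objects
p.backend.default_context = gpu_ctx

# Option 2: select it per object via the "context" keyword argument
v = p.Vector(np.random.rand(1000), context=gpu_ctx)
```

That way you can benchmark the same operation on the CPUs and the GPU
just by constructing your objects with different Context objects.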

I hope that helps! Alas, the main reason why this version of PyViennaCL
is not officially released is that I have not found the time to write
the corresponding documentation.

Toby


On 11/11/15 16:01, Kapit, Eliot wrote:
 > Hi Karl,
 >
 > Thanks for your response! Fortunately, the only complex arithmetic I
 > need involves much smaller matrices built from matrix elements between
 > the eigenstates of the larger problem (so only up to N = 200-300), and
 > I can just use Numpy for that. I'll check out the dev version and try
 > to compare the two resources; from my simple tests on dense matrices
 > ViennaCL can substantially outperform Numpy built with OpenBLAS,
 > though I haven't checked sparse matrix multiplication yet and I don't
 > know if it was using the CPUs or the GPU to do that calculation.
 >
 > Thanks,
 > Eliot
 >
 > ________________________________________
 > From: Karl Rupp <[email protected]>
 > Sent: Wednesday, November 11, 2015 3:06 AM
 > To: Kapit, Eliot; [email protected]; Toby St
 > Clere Smithe
 > Subject: Re: [ViennaCL-support] Can I choose what computational
 > resource runs my code in PyViennaCL?
 >
 > Hi Eliot,
 >
 >> I need to run a large series of sparse real symmetric matrix
 >> diagonalizations (matrix size up to 1M by 1M, but very sparse and I only
 >> need the first 200 eigenvectors/values at most) for a quantum computing
 >> project, and I'd like to write the code in Python and call ViennaCL for
 >> the heavy lifting. My workstation is a dual 8 core Haswell Xeon with 64
 >> GB of RAM and an AMD W8100 graphics card, all running Ubuntu 14.04. My
 >> question is, within PyViennaCL, is it possible to choose which resource
 >> (the two CPUs or the GPU) does the calculation? My guess is that the
 >> CPUs will probably be much faster, but the GPU has much higher
 >> theoretical max DP output.
 >
 > First question: Do you need complex arithmetic? If so, I'm afraid that I
 > have to disappoint you at this point because ViennaCL only supports real
 > arithmetic. I expect that ViennaCL will provide complex arithmetic at
 > some point, but this will still take some time.
 >
 > As for the resource management: If I remember correctly, Toby (in CC:)
 > implemented device switching capabilities in the developer-repository of
 > PyViennaCL [1]. As far as I know, there is no such functionality in the
 > latest 1.0.3 release on PyPI. I'm sure Toby can provide a precise answer
 > to this.
 >
 > Best regards,
 > Karli
 >
 > [1] https://github.com/viennacl/pyviennacl-dev/
 >

------------------------------------------------------------------------------
_______________________________________________
ViennaCL-support mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/viennacl-support
