Dear Spiga, 

Thank you for your kind reply. 

This GPU (Tesla C1060 / M1060) may be old and its compute performance poor compared 
to today's GPUs, but the combined speed of the four cards we have could still exceed 
that of multi-core CPUs. Even though this GPU is not supported by the current QE-GPU, 
it compiled successfully; however, invoking pw-gpu.x produces the following error: 


"  ***WARNING: unbalanced configuration (1 MPI per node, 5 GPUs per node)
Error in (first zero) memory allocation , program will be terminated!!! Bye... "


Could you please guide me on how to properly allocate memory between GPU and CPU, or 
per node (2 nodes, 8 cores/node)?
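For reference, the warning suggests my MPI rank count does not match the GPU count per node. This is how I would try to launch it so that each rank gets exactly one GPU (a sketch only: I am assuming Open MPI flag syntax and 2 GPUs on each of the 2 nodes; the input and output file names are illustrative):

```shell
# Hypothetical launch sketch, not taken from the thread:
# 4 MPI ranks total, 2 per node, so each rank can bind to one of the
# 2 GPUs assumed to sit on each node. Flag names vary by MPI
# implementation (this uses Open MPI's --npernode).
mpirun -np 4 --npernode 2 ./pw-gpu.x -input scf.in > scf.out
```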

With thanks and respectful regards, 

Janardhan H L
Poornaprajna Institute of Scientific Research
Bangalore, India

On Thursday, 25 September 2014 6:28 AM, Filippo Spiga <spiga.filippo at 
gmail.com> wrote:
 


Dear Janardhan,


On Sep 24, 2014, at 10:55 AM, janardhan H.L. <janardhanhl at yahoo.com> wrote:
> Tesla C1060 / M1060

Those cards are very old (four or maybe five years) and have compute capability 
1.2. Their double-precision support is very poor, and QE-GPU no longer supports 
those cards.

Cheers,
F

--
Mr. Filippo SPIGA, M.Sc.
http://filippospiga.info ~ skype: filippo.spiga

"Nobody will drive us out of Cantor's paradise." ~ David Hilbert
