Dear Ana,

To benefit from a GPU over a CPU, an algorithm needs to do a significant amount 
of number crunching, i.e. perform at least a certain number of floating-point 
operations (FLOP) per byte of data. It also needs to be highly parallel, 
preferably without conditional (if/else) statements. Finally, there is a 
variety of GPU architectures on the market, and it is not at all obvious that 
code written for one GPU will be optimal on another. So if the code is based on 
a general-purpose library, it is easier to make sure that it runs efficiently 
on all GPU hardware.
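As a back-of-the-envelope illustration of the FLOP-per-byte criterion, here is a toy "roofline" estimate. The peak-throughput and bandwidth numbers are made up for illustration, not taken from any specific card:

```python
# Roofline-style estimate: how many FLOP per byte an algorithm must
# perform before a GPU's compute units, rather than its memory bus,
# become the bottleneck. The hardware numbers below are hypothetical.

peak_flops = 10e12       # assumed peak throughput: 10 TFLOP/s
mem_bandwidth = 500e9    # assumed memory bandwidth: 500 GB/s

# Ridge point: minimum arithmetic intensity (FLOP/byte) needed for the
# algorithm to be compute-bound instead of memory-bound.
ridge = peak_flops / mem_bandwidth
print(f"need at least {ridge:.0f} FLOP per byte to saturate the GPU")

def attainable_gflops(flop_per_byte):
    """Attainable performance (GFLOP/s) at a given arithmetic intensity."""
    return min(peak_flops, flop_per_byte * mem_bandwidth) / 1e9

print(attainable_gflops(2))    # memory-bound: 1000.0 GFLOP/s
print(attainable_gflops(50))   # compute-bound: 10000.0 GFLOP/s
```

With these assumed numbers, anything doing fewer than ~20 FLOP per byte leaves the arithmetic units idle while waiting for memory.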

I believe the combination of these factors makes a big difference between 
imaging and MX.

Image processing is limited by FFT performance, which requires floating-point 
throughput. FFT libraries for GPUs are standard and provided by the hardware 
vendors, so this is easy to implement.
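A rough way to see why FFTs sit on the favourable side of the FLOP-per-byte divide: the common estimate for a complex FFT is ~5·N·log2(N) FLOP, so the work per sample grows with log2(N) and large transforms do tens of FLOP per sample (this is a rule-of-thumb operation count, not a measurement of any particular library):

```python
import math

# Rule-of-thumb FLOP count for an N-point complex FFT: ~5*N*log2(N).
# Per-sample work grows with log2(N), so large transforms are far more
# arithmetic-heavy per pixel than simple per-pixel image operations.

def fft_flop_per_sample(n):
    """Approximate FLOP performed per input sample of an N-point FFT."""
    return 5 * math.log2(n)

print(fft_flop_per_sample(1024))    # 50.0 FLOP per sample
print(fft_flop_per_sample(2**20))   # 100.0 FLOP per sample
```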

On the other hand, MX algorithms for image processing, at least the ones I know 
of, do only a handful of FLOP per pixel, and they will probably not benefit 
significantly from GPU processing even if ported to such an architecture – 
which would itself be a non-negligible effort. So while it is not impossible to 
imagine GPU-accelerated MX software, and hopefully people are working on this, 
it is not low-hanging fruit the way GPU acceleration for imaging or cryo-EM is.
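To make "a handful of FLOP per pixel" concrete, here is a toy per-pixel pass of my own invention (not taken from any real MX package): subtract a background estimate and flag pixels above a threshold. That is roughly two arithmetic operations per pixel against the 2–4 bytes read per pixel, i.e. well under 1 FLOP per byte, so it is firmly memory-bound on any GPU:

```python
# Toy spot-search pass of the kind MX data reduction does a lot of:
# background subtraction plus thresholding. Roughly 1 subtraction and
# 1 comparison per pixel, against several bytes read per pixel.

def flag_strong_pixels(pixels, background, threshold):
    """Return True for pixels whose background-subtracted value exceeds
    the threshold; ~2 arithmetic ops per pixel."""
    return [p - background > threshold for p in pixels]

flags = flag_strong_pixels([10, 11, 250, 9, 300], background=10, threshold=50)
print(flags)   # [False, False, True, False, True]
```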

On a side note, if one could find a way to use machine learning for data 
processing and implement the data processing pipeline in TensorFlow, then GPUs 
would pay off quickly.

Regarding Tim’s Raspberry Pi argument: it should be compared with the price of 
an Nvidia Jetson, which is more or less an RPi with a GPU, and the difference 
will not actually be that significant.

Best,
Filip


From: CCP4 bulletin board <[email protected]> on behalf of Ana Carolina de 
Mattos Zeri <[email protected]>
Reply to: Ana Carolina de Mattos Zeri <[email protected]>
Date: Tuesday, 18 February 2020 at 20:58
To: "[email protected]" <[email protected]>
Subject: [ccp4bb] MX data processing with GPUs??

Dear all,
we have asked this of a few people, but the question remains:
have any of you tried using GPU-based software to process MX data, either for 
data reduction or for subsequent image analysis?
is it a lost battle?
how do you deal with the growing amount of data we are facing at synchrotrons 
and XFELs?
Here at the Manaca beamline at Sirius we will continue to support CPU-based 
software, but given the developments on the imaging beamlines, GPU machines 
are looking very attractive.
many thanks in advance for your thoughts,
all the best
Ana


Ana Carolina Zeri, PhD
Manaca Beamline Coordinator (Macromolecular Micro and Nano Crystallography)
Brazilian Synchrotron Light Laboratory (LNLS)
Brazilian Center for Research in Energy and Materials (CNPEM)
Zip Code 13083-970, Campinas, Sao Paulo, Brazil.
(19) 3518-2498
www.lnls.br
[email protected]







Disclaimer: This email and its attachments may contain confidential and/or 
privileged information. Observe its content carefully and consider possible 
querying to the sender before copying, disclosing or distributing it. If you 
have received this email by mistake, please notify the sender and delete it 
immediately.

________________________________

To unsubscribe from the CCP4BB list, click the following link:
https://www.jiscmail.ac.uk/cgi-bin/webadmin?SUBED1=CCP4BB&A=1
