On 02.01.2018 16:36, Matthieu Brucher wrote:
> Hi,
>
> Let's say that NumPy provides a GPU version. How would that
> work with all the packages that expect the memory to be allocated on the CPU?
> It's not that NumPy refuses a GPU implementation; it's that one
> wouldn't solve the problem of the GPU and CPU having separate memory. When/if
> nVidia decides (finally) that memory should also be accessible from
> the CPU (as on AMD APUs), then this argument becomes void.

I actually doubt that. Sure, unified memory is convenient for the
programmer. But as long as copying data between host and GPU is
orders of magnitude slower than accessing data locally, performance
will suffer. Addressing this performance issue requires a NUMA-like
approach: moving the operation to where the data resides, rather than
treating all data locations as equal.
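To illustrate, the NUMA-like idea can be sketched in plain Python. Everything here is hypothetical — `Located`, `add`, and the device tags are illustration only, not anything NumPy provides: each array carries a tag saying where its buffer lives, and an operation runs on the device that already holds its operands instead of copying everything to one side first.

```python
# Hypothetical sketch of location-aware dispatch (not a NumPy API).
from dataclasses import dataclass

@dataclass
class Located:
    data: list    # stand-in for an array buffer
    device: str   # e.g. "cpu" or "gpu:0"

def add(a: Located, b: Located) -> Located:
    # If both operands live on the same device, compute there and
    # transfer nothing; only a device mismatch forces a copy.
    target = a.device  # when devices differ, b would pay the copy cost
    result = [x + y for x, y in zip(a.data, b.data)]
    return Located(result, target)

x = Located([1, 2, 3], "gpu:0")
y = Located([4, 5, 6], "gpu:0")
z = add(x, y)
# z stays on "gpu:0": the operation moved to the data,
# and no host<->GPU transfer was needed.
```

A real implementation would of course launch a kernel rather than a list comprehension, but the dispatch decision — compute where the data already resides — is the whole point.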

Stefan

-- 

      ...ich hab' noch einen Koffer in Berlin...
    

_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@python.org
https://mail.python.org/mailman/listinfo/numpy-discussion
