[julia-users] Re: ANN: CUDAdrv.jl, and CUDA.jl deprecation

2016-09-30 Thread Chris Rackauckas
Thanks for the update.

On Thursday, September 29, 2016 at 6:31:29 PM UTC-7, Tim Besard wrote:
>
> Hi all,
>
> CUDAdrv.jl is a Julia wrapper for the CUDA driver API -- not to be confused 
> with its counterpart CUDArt.jl, which wraps the slightly higher-level CUDA 
> runtime API.
>
> The package doesn't feature many high-level or easy-to-use wrappers, but 
> focuses on providing the necessary functionality for other packages to 
> build upon. For example, CUDArt uses CUDAdrv for launching kernels, while 
> CUDAnative (the in-development native programming interface) completely 
> relies on CUDAdrv for all GPU interactions.
>
> It features a ccall-like cudacall interface for launching kernels and 
> passing values:
> using CUDAdrv
> using Base.Test
>
> # set up a context on the first CUDA device
> dev = CuDevice(0)
> ctx = CuContext(dev)
>
> # load the compiled PTX module and look up the kernel
> md = CuModuleFile(joinpath(dirname(@__FILE__), "vadd.ptx"))
> vadd = CuFunction(md, "kernel_vadd")
>
> dims = (3,4)
> a = round(rand(Float32, dims) * 100)
> b = round(rand(Float32, dims) * 100)
>
> # upload the inputs and allocate the output on the device
> d_a = CuArray(a)
> d_b = CuArray(b)
> d_c = CuArray(Float32, dims)
>
> # launch the kernel (one thread per element), ccall-style: argument types
> # first, then the argument values
> len = prod(dims)
> cudacall(vadd, len, 1,
>          (DevicePtr{Cfloat}, DevicePtr{Cfloat}, DevicePtr{Cfloat}),
>          d_a, d_b, d_c)
>
> # fetch the result and verify
> c = Array(d_c)
> @test a+b ≈ c
>
> destroy(ctx)
>
> For documentation, refer to the NVIDIA docs. They don't fully match what 
> CUDAdrv.jl implements, but the package is well tested, and redocumenting 
> the entire API would be too much work.
>
> Current master of this package only supports Julia 0.5, but there's a tagged 
> version supporting 0.4 (as does CUDArt.jl). It has been tested on CUDA 5.0 
> through 8.0, but there may always be issues with certain versions, as the 
> wrappers aren't auto-generated (and probably never will be, due to how 
> NVIDIA has implemented cuda.h).
>
> Anybody thinking there's a lot of overlap between CUDArt and CUDAdrv is 
> completely right, but it mirrors the overlap between CUDA's runtime and 
> driver APIs: in some cases we specifically need one or the other (e.g., 
> CUDAnative wouldn't work with only the runtime API). There's also some 
> legacy on the Julia side: CUDAdrv.jl is based on CUDA.jl, while CUDArt.jl 
> has been an independent effort.
>
>
> In other news, we have recently *deprecated the old CUDA.jl package*. All 
> users should now use either CUDArt.jl or CUDAdrv.jl, depending on what 
> suits them best. Neither is a drop-in replacement, but the changes should 
> be minor. At the very least, users will have to change the kernel launch 
> syntax to cudacall as shown above (a rough migration sketch also follows 
> below this message). In the future, we might re-use the CUDA.jl package 
> name for the native programming interface currently in CUDAnative.jl.
>
>
> Best,
> Tim
>
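
For users coming from the old CUDA.jl package, a minimal migration sketch is 
shown below. It assumes the same setup as the example quoted above (a context, 
the compiled vadd kernel, and the device arrays d_a, d_b and d_c); the 
old-style call in the comment is approximate, from memory, and may not match 
the deprecated API exactly.

# Old CUDA.jl style (approximate): the deprecated launch call took the
# function, launch dimensions, and a tuple of arguments, roughly:
#   launch(vadd, len, 1, (d_a, d_b, d_c))
#
# CUDAdrv.jl style: grid and block dimensions, then an explicit ccall-like
# tuple of argument types, then the argument values themselves.
cudacall(vadd, len, 1,
         (DevicePtr{Cfloat}, DevicePtr{Cfloat}, DevicePtr{Cfloat}),
         d_a, d_b, d_c)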


[julia-users] Re: ANN: CUDAdrv.jl, and CUDA.jl deprecation

2016-09-30 Thread Simon Danisch
Great work! :)

Well, I think GPUArrays should be the right place! Whether it actually becomes 
the right place depends on how much time and cooperation I get ;)
The plan is to integrate all these third-party libraries. If you could help 
me with that, it would already be a great first step towards establishing 
that library :)

On Friday, September 30, 2016 at 3:31:29 AM UTC+2, Tim Besard wrote:


[julia-users] Re: ANN: CUDAdrv.jl, and CUDA.jl deprecation

2016-09-30 Thread Kyunghun Kim
Good news!
I had wished there would be some integration among the several CUDA packages. 

By the way, is there any plan for a 'standard' GPU array type, such as 
https://github.com/JuliaGPU/GPUArrays.jl? CUDArt and CUDAdrv each have their 
own CUDA array type, and there are also packages such as ArrayFire.jl.

For example, if I add a package wrapping a new NVIDIA library such as cuRAND, 
which GPU array type should I support in that package? 
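
For illustration, a hypothetical sketch of one array-type-agnostic approach 
(the names curand_uniform! and generate_uniform! are made up, not the real 
cuRAND or GPUArrays.jl API): the wrapper is written once against a small 
generic surface -- just pointer() and length() -- so any GPU array type 
exposing those could work with it, and a host fallback keeps the sketch 
runnable without a GPU.

# Hypothetical wrapper entry point: works for any array type that exposes
# a raw pointer and a length.
function curand_uniform!(A)
    generate_uniform!(pointer(A), length(A))
    return A
end

# Host fallback standing in for the real device-side generator call, so the
# sketch runs on the CPU: fill the memory behind the pointer with uniforms.
function generate_uniform!(p::Ptr{Float32}, n::Integer)
    for i in 1:n
        unsafe_store!(p, rand(Float32), i)
    end
end

# Usage with a plain host Array standing in for a device array:
x = zeros(Float32, 8)
curand_uniform!(x)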

Best, 
Kyunghun