Or, if you want to handle the devices manually (device 0 in this case): CUDArt.init loads the utility kernels (including fill_contiguous) that fill! looks up, which is why you get the KeyError when the device hasn't been initialized.

CUDArt.init([0])                        # load CUDArt's utility kernels on device 0
C = CUDArt.CudaArray(Float64, (10,10))
fill!(C, 2.0)
println(to_host(C))
CUDArt.close([0])                       # release the device when done
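One caveat with the manual approach: if anything throws between init and close, the device is never released. A try/finally wrapper guards against that (a minimal sketch assuming the same CUDArt API as above; this is essentially what the devices() do-block pattern does for you automatically):

```julia
using CUDArt

CUDArt.init([0])
try
    C = CUDArt.CudaArray(Float64, (10,10))
    fill!(C, 2.0)              # works now: the fill_contiguous kernel is loaded
    println(to_host(C))
finally
    CUDArt.close([0])          # runs even if fill! or to_host throws
end
```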



On Monday, October 26, 2015 at 2:44:20 PM UTC+1, Kristoffer Carlsson wrote:
>
> I believe you need to use an initialized device. See 
> https://github.com/JuliaGPU/CUDArt.jl#gpu-initialization
>
> For example:
>
> devices(dev->true) do devlist
>      C = CUDArt.CudaArray(Float64, (10,10))
>      fill!(C, 2.0)
>      println(to_host(C))
> end
>
>
>
> On Monday, October 26, 2015 at 2:30:40 PM UTC+1, Matthew Pearce wrote:
>>
>> I'm not having much luck filling a CUDArt.CudaArray matrix with a value.
>>
>> julia> C = CUDArt.CudaArray(Float64, (10,10))
>> CUDArt.CudaArray{Float64,2}(CUDArt.CudaPtr{Float64}(Ptr{Float64} @0x0000000b034a0e00),(10,10),0)
>>
>> julia> fill!(C, 2.0)
>> ERROR: KeyError: (0,"fill_contiguous",Float64) not found
>>  [inlined code] from essentials.jl:58
>>  in getindex at dict.jl:719
>>  in fill! at /home/mcp50/.julia/v0.5/CUDArt/src/arrays.jl:158
>>
>> The fill! code works when matrix C is created by copying data to the GPU. 
>> This suggested to me the problem was one of memory allocation. However, 
>> I've tried variations on this which haven't worked, such as taking some of 
>> the source code:
>>
>> julia> function NewCudaArray(T::Type, dims::Dims)
>>            n = prod(dims)
>>            p = CUDArt.malloc(T, n)
>>            CudaArray{T,length(dims)}(p, dims, device())
>>        end
>> NewCudaArray (generic function with 1 method)
>>
>> julia> C = NewCudaArray(Float64, (10,10))
>> CUDArt.CudaArray{Float64,2}(CUDArt.CudaPtr{Float64}(Ptr{Float64} @0x0000000b034a1200),(10,10),0)
>>
>> julia> fill!(C, 2.0)
>> ERROR: KeyError: (0,"fill_contiguous",Float64) not found
>>  [inlined code] from essentials.jl:58
>>  in getindex at dict.jl:719
>>  in fill! at /home/mcp50/.julia/v0.5/CUDArt/src/arrays.jl:158
>>
>> Copying things across unnecessarily sounds slow, so thoughts appreciated.
>>
>
