Thanks Andreas,

The error persists if I switch out the line "cuda.memcpy_dtoh(y,Y_gpu)" and replace it with something else - either another memory copy or a synchronization call like pycuda.autoinit.context.synchronize(). The same also happens if I get rid of the explicit memory copies and use the In(), Out() and InOut() wrappers instead. If I don't execute the kernel, the memory copies work without errors. So something in the kernel is causing an error - but this is one of the example kernels from NVIDIA! Can you see anything wrong with it? Does it execute on anyone else's machine?

    # CUDA grid
    block_size=(4,1,1)
    grid = (n/block_size[0],1)

    # CUDA source
    cusrc = SourceModule("""
    __global__ void saxpy(int n, double a, double *x, double *y)
    {
    for (int i = blockIdx.x * blockDim.x + threadIdx.x;
        i < n;
        i += blockDim.x * gridDim.x)
        {
            y[i] = a * x[i] + y[i];
        }
    }
    """)
    SAXPY = cusrc.get_function('saxpy')

    # data arrays
    w = 500 #arbitrary
    x = random.uniform(0,w,n) #.astype(float32) << same error with either float or double
    y = random.uniform(0,w,n) #.astype(float32)

    #init gpu (input) arrays
    a = float64(24.5)
    n = int32(n)

    SAXPY(cuda.In(n), cuda.In(a), cuda.In(x), cuda.InOut(y), grid=grid, block=block_size)

    ...
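(For reference, here is a CPU-side sketch of what the kernel should compute, with placeholder values for n, w and a, and a pure-Python simulation of the grid-stride loop from the NVIDIA article; the grid size uses ceiling division so n need not be a multiple of the block size. This is an illustrative sketch, not part of the original script.)

```python
import numpy as np

# Placeholder problem size and constants (assumed values for illustration)
n = 1000
w = 500
a = 24.5

rng = np.random.default_rng(0)
x = rng.uniform(0, w, n)
y = rng.uniform(0, w, n)

# Ceiling division so the grid covers all n elements even when n is
# not a multiple of the block size (plain integer division truncates)
block_size = (4, 1, 1)
grid = ((n + block_size[0] - 1) // block_size[0], 1)

# CPU reference result for y[i] = a * x[i] + y[i]
y_ref = a * x + y

def saxpy_gridstride(n, a, x, y_in):
    """Pure-Python simulation of the grid-stride loop: each (block, thread)
    pair starts at its global index and strides by blockDim.x * gridDim.x."""
    y_out = y_in.copy()
    stride = block_size[0] * grid[0]
    for block in range(grid[0]):
        for thread in range(block_size[0]):
            i = block * block_size[0] + thread
            while i < n:
                y_out[i] = a * x[i] + y_out[i]
                i += stride
    return y_out
```

Comparing saxpy_gridstride(n, a, x, y) against y_ref confirms that the loop structure itself touches every element exactly once, so the kernel logic looks sound.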

___ error message ____
Traceback (most recent call last):
  File ".\as_cuda_loop.py", line 61, in <module>
    main()
  File ".\as_cuda_loop.py", line 46, in main
    SAXPY(cuda.In(n), cuda.In(a), cuda.In(x), cuda.InOut(y), grid=grid, block=block_size)
  File "c:\users\james\appdata\local\enthought\canopy\user\lib\site-packages\pycuda\driver.py", line 377, in function_call
    Context.synchronize()
pycuda._driver.LogicError: cuCtxSynchronize failed: invalid/unknown error code
PyCUDA WARNING: a clean-up operation failed (dead context maybe?)
cuMemFree failed: invalid/unknown error code
PyCUDA WARNING: a clean-up operation failed (dead context maybe?)
cuMemFree failed: invalid/unknown error code
PyCUDA WARNING: a clean-up operation failed (dead context maybe?)
cuMemFree failed: invalid/unknown error code
PyCUDA WARNING: a clean-up operation failed (dead context maybe?)
cuMemFree failed: invalid/unknown error code
PyCUDA WARNING: a clean-up operation failed (dead context maybe?)
cuModuleUnload failed: invalid/unknown error code

Best regards

James




On 31/07/2014 01:16, Andreas Kloeckner wrote:
Hi James,

James Keaveney <[email protected]> writes:
I'm having an issue with PyCUDA that at first glance seems similar to
the one reported by Thomas Unterthiner (messages from Jun 20
2014, "Weird bug when slicing arrays on Kepler cards"). I'm also using a
Kepler card (GTX 670) and getting the same clean-up/dead context errors.
However, unlike Thomas, I'm not using cublas. The simplest example I can
show is below, which is a cuda kernel taken directly from here:
http://devblogs.nvidia.com/parallelforall/cuda-pro-tip-write-flexible-kernels-grid-stride-loops/

[snip]
pycuda._driver.LogicError: cuMemcpyDtoH failed: invalid/unknown error code
PyCUDA WARNING: a clean-up operation failed (dead context maybe?)
cuModuleUnload failed: invalid/unknown error code
PyCUDA WARNING: a clean-up operation failed (dead context maybe?)
cuMemFree failed: invalid/unknown error code
PyCUDA WARNING: a clean-up operation failed (dead context maybe?)
cuMemFree failed: invalid/unknown error code
PyCUDA WARNING: a clean-up operation failed (dead context maybe?)
cuMemFree failed: invalid/unknown error code
PyCUDA WARNING: a clean-up operation failed (dead context maybe?)
cuMemFree failed: invalid/unknown error code
This means your context went away while PyCUDA was still talking to
it. This will happen most often if you perform some invalid operation
(such as accessing out-of-bounds memory in a kernel). In this case, the
cuMemcpyDtoH operation could be at fault.

HTH,
Andreas



_______________________________________________
PyCUDA mailing list
[email protected]
http://lists.tiker.net/listinfo/pycuda
