## Description
Unable to allocate any GPU memory when using mxnet 1.2.1 with CUDA versions 
9.0–9.2.

## Environment info (Required)
OS: Windows 10 Enterprise
CPU: Intel Core i7-6800K 
GPU: Nvidia GTX 1060 and Nvidia GTX 1070
MXNet Version: 1.2.1, installed via pip install mxnet-cu90/mxnet-cu91/mxnet-cu92
CUDA Version: 9.0, 9.1, and 9.2


```
----------Python Info----------
Version      : 3.6.6
Compiler     : MSC v.1900 64 bit (AMD64)
Build        : ('default', 'Jun 28 2018 11:27:44')
Arch         : ('64bit', 'WindowsPE')
------------Pip Info-----------
Version      : 10.0.1
Directory    : C:\tools\Anaconda3\envs\mxnet_dev_env\lib\site-packages\pip
----------MXNet Info-----------
Version      : 1.2.1
Directory    : C:\tools\Anaconda3\envs\mxnet_dev_env\lib\site-packages\mxnet
Hashtag not found. Not installed from pre-built package.
----------System Info----------
Platform     : Windows-10-10.0.15063-SP0
system       : Windows
node         : [redacted]
release      : 10
version      : 10.0.15063
----------Hardware Info----------
machine      : AMD64
processor    : Intel64 Family 6 Model 79 Stepping 1, GenuineIntel
Name
Intel(R) Core(TM) i7-6800K CPU @ 3.40GHz
```

Package used (Python/R/Scala/Julia): Python




## Error Message:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\tools\Anaconda3\envs\mxnet_dev_env\lib\site-packages\mxnet\ndarray\utils.py", line 146, in array
    return _array(source_array, ctx=ctx, dtype=dtype)
  File "C:\tools\Anaconda3\envs\mxnet_dev_env\lib\site-packages\mxnet\ndarray\ndarray.py", line 2338, in array
    arr = empty(source_array.shape, ctx, dtype)
  File "C:\tools\Anaconda3\envs\mxnet_dev_env\lib\site-packages\mxnet\ndarray\ndarray.py", line 3548, in empty
    return NDArray(handle=_new_alloc_handle(shape, ctx, False, dtype))
  File "C:\tools\Anaconda3\envs\mxnet_dev_env\lib\site-packages\mxnet\ndarray\ndarray.py", line 139, in _new_alloc_handle
    ctypes.byref(hdl)))
  File "C:\tools\Anaconda3\envs\mxnet_dev_env\lib\site-packages\mxnet\base.py", line 149, in check_call
    raise MXNetError(py_str(_LIB.MXGetLastError()))
mxnet.base.MXNetError: [16:54:01] c:\jenkins\workspace\mxnet-tag\mxnet\src\storage\pooled_storage_manager.h:108: cudaMalloc failed: device kernel image is invalid
```

## Minimum reproducible example
```
import mxnet as mx
arr = mx.nd.array([0], ctx=mx.gpu())
```
OR
```
import mxnet as mx
arr = mx.nd.array([0])
arr.as_in_context(mx.gpu())
```


## What have you tried to solve it?

1. Installed CUDA 9.0, 9.1, and 9.2 (with the corresponding MXNet binaries)
2. Installed a second graphics card (allocating on either the 1060 or the 1070 fails)
3. Allocated PyTorch tensors on the GPU, which succeeded
4. Ultimately downgraded to mxnet 1.2.0, which resolved the issue.
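
The downgrade in step 4 can be done with pip; mxnet-cu90 is shown here as an example, since the exact package name depends on which CUDA toolkit is installed:

```shell
# Remove the broken 1.2.1 build (substitute mxnet-cu91 or mxnet-cu92 as appropriate)
pip uninstall -y mxnet-cu90
# Pin the last release that works on this setup
pip install mxnet-cu90==1.2.0
```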


[ Full content available at: 
https://github.com/apache/incubator-mxnet/issues/12228 ]