That's really good news. It means it should be 100% safe for programs that don't require more GPU memory than your GPU actually offers. For those that do, I would advise using regions or writing RAII-like templates yourself, as I don't think there is any way for Nim's GC to handle GPU memory (although maybe it's possible to write a separate GC implementation that does?).
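
To illustrate the RAII-like idea, here is a minimal sketch of a device-memory wrapper built on Nim's `=destroy`/`=copy` hooks (available in recent Nim versions). The `GpuBuffer` type and `newGpuBuffer` proc are names I made up for the example, and the `cudaMalloc`/`cudaFree` bindings are assumed to be declared as shown; you would still need to link against the CUDA runtime.

```nim
# Rough sketch only: GpuBuffer/newGpuBuffer are hypothetical names for
# illustration. Build against the CUDA runtime (e.g. --passL:"-lcudart").

proc cudaMalloc(devPtr: ptr pointer, size: csize_t): cint
  {.importc: "cudaMalloc", header: "cuda_runtime.h".}
proc cudaFree(devPtr: pointer): cint
  {.importc: "cudaFree", header: "cuda_runtime.h".}

type
  GpuBuffer*[T] = object
    data*: ptr T   # device pointer
    len*: int      # number of elements

proc `=destroy`*[T](b: var GpuBuffer[T]) =
  # Destructor hook: free the device allocation when the buffer leaves scope.
  if b.data != nil:
    discard cudaFree(b.data)
    b.data = nil

# Forbid implicit copies so the same device pointer is never freed twice;
# moves are still allowed, so returning a GpuBuffer from a proc works.
proc `=copy`*[T](dst: var GpuBuffer[T], src: GpuBuffer[T]) {.error.}

proc newGpuBuffer*[T](len: int): GpuBuffer[T] =
  # Allocate `len` elements of T on the device.
  result.len = len
  if cudaMalloc(cast[ptr pointer](addr result.data),
                csize_t(len * sizeof(T))) != 0:
    raise newException(CatchableError, "cudaMalloc failed")

when isMainModule:
  let buf = newGpuBuffer[float32](1024)
  echo "allocated ", buf.len, " float32 elements on the GPU"
  # =destroy releases the device memory automatically at end of scope.
```

Disallowing `=copy` keeps ownership of the device pointer unique; if you need shared ownership instead, a ref object with a finalizer that calls cudaFree is the usual alternative.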
