On Wednesday, 17 December 2014 at 12:58:23 UTC, ponce wrote:
Hi, I'm kind of embarrassed by my bitter post, must have been a bad day :).

On Tuesday, 16 December 2014 at 19:49:37 UTC, Shehzan wrote:
We also support CPU and OpenCL backends along with CUDA. This way, you can use the same ArrayFire code to run across any of those technologies without changes. All you need to do is link the correct library.

Cool, this was reason enough to avoid using NPP until now.

I've certainly found it desirable to be able to target OpenCL, CPU, or CUDA interchangeably from the same codebase. What I'd like even more than a library of functions, though, is an abstracted compute API: a compiler from your own compute language to OpenCL C or CUDA C++, plus an API wrapper. That would probably mean leaving some features behind and targeting the intersection of what the backends support. Similar to bgfx, which has a shader compiler for many targets, but for compute APIs.
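To make the "compile one compute language to several backends" idea concrete, here is a minimal, purely illustrative sketch in Python: a single abstract kernel source is lowered to OpenCL C and CUDA C++ dialects by substituting backend-specific qualifiers. All names (ABSTRACT_KERNEL, the placeholder macros) are hypothetical, not part of any real toolchain; a production compiler would of course parse and type-check rather than do string substitution.

```python
# Hypothetical sketch: one abstract kernel source lowered to both
# OpenCL C and CUDA C++ by substituting backend-specific qualifiers.
# All names here are illustrative inventions for this example.

ABSTRACT_KERNEL = """
KERNEL void saxpy(GLOBAL float* y, GLOBAL const float* x, float a, int n) {
    int i = GLOBAL_ID(0);
    if (i < n) y[i] = a * x[i] + y[i];
}
"""

# Substitution tables; GLOBAL_ID(0) must be rewritten before GLOBAL,
# since the latter is a substring of the former.
BACKENDS = {
    "opencl": {
        "KERNEL": "__kernel",
        "GLOBAL_ID(0)": "get_global_id(0)",
        "GLOBAL": "__global",
    },
    "cuda": {
        "KERNEL": 'extern "C" __global__',
        "GLOBAL_ID(0)": "blockIdx.x * blockDim.x + threadIdx.x",
        "GLOBAL": "",  # CUDA pointers to global memory need no qualifier
    },
}

def lower(source: str, backend: str) -> str:
    """Translate the abstract kernel to one concrete backend dialect."""
    out = source
    for placeholder, concrete in BACKENDS[backend].items():
        out = out.replace(placeholder, concrete)
    return out

print(lower(ABSTRACT_KERNEL, "opencl"))
print(lower(ABSTRACT_KERNEL, "cuda"))
```

The point of the sketch is the design constraint mentioned above: the abstract language can only express the intersection of what all backends provide, which is exactly the feature loss the approach trades for portability.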

We used a BSD 3-Clause license to make it easy for everyone to use in their own projects.

Here is a blog post I wrote about implementing Conway's Game of Life using ArrayFire: http://arrayfire.com/conways-game-of-life-using-arrayfire/. It demonstrates how easy ArrayFire is to use.
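For readers without a GPU handy, the same data-parallel formulation the blog post maps onto ArrayFire can be sketched in plain NumPy: one generation is just whole-array shifts, a neighbor sum, and elementwise rule application (the ArrayFire version expresses the neighbor sum as a convolution instead). This is an independent illustration of the algorithm, not the blog's actual code.

```python
import numpy as np

def life_step(state: np.ndarray) -> np.ndarray:
    """One Game of Life generation on a toroidal grid, written as
    whole-array operations rather than per-cell loops."""
    # Sum the 8 neighbors by shifting the entire grid with wraparound.
    neighbors = sum(
        np.roll(np.roll(state, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Rules: a dead cell with exactly 3 neighbors is born;
    # a live cell with 2 or 3 neighbors survives.
    born = neighbors == 3
    survives = (state == 1) & (neighbors == 2)
    return (born | survives).astype(state.dtype)

# A horizontal "blinker" oscillates with period 2.
grid = np.zeros((5, 5), dtype=np.int8)
grid[2, 1:4] = 1
print(life_step(life_step(grid)))  # returns to the original blinker
```

Every operation here is an elementwise or shift operation over the whole grid, which is exactly the shape of computation that libraries like ArrayFire execute as GPU kernels.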

Our goal is to make it easy for people to get started with GPU programming and break down the barrier for non-programmers to use the hardware efficiently. I agree that complex algorithms require more custom solutions, but once you get started, things become much easier.

Your example is indeed very simple, so I guess it has its uses.

I know this is a really old post, but just to add to what Shehzan already mentioned: we have had double-precision support (both real and complex) since day one (and for quite a long time before that as well). Our documentation does not make this immediately obvious because we have just a single array class. The array class holds the data type as metadata, and we launch the appropriate kernels based on it.
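The "single array class carrying its type as metadata" design can be sketched as follows. This is an illustrative Python mock, not ArrayFire's implementation; the class and kernel names are invented, but the dispatch idea (one untemplated handle, type recorded at run time, type-specific kernel chosen per operation) is the one described above.

```python
import numpy as np

# Hypothetical per-type "kernels"; a real library would launch
# compiled GPU kernels specialized for each element type.
def add_f32(a, b): return (a + b).astype(np.float32)
def add_f64(a, b): return (a + b).astype(np.float64)
def add_c64(a, b): return (a + b).astype(np.complex128)

KERNELS = {"f32": add_f32, "f64": add_f64, "c64": add_c64}

class Array:
    """One class for every element type; the dtype lives in metadata."""
    def __init__(self, data, dtype):
        self.data = np.asarray(data)
        self.dtype = dtype  # metadata consulted at dispatch time

    def __add__(self, other):
        kernel = KERNELS[self.dtype]  # pick the matching kernel
        return Array(kernel(self.data, other.data), self.dtype)

# Double precision works through the same class as any other type.
x = Array([1.0, 2.0], "f64")
y = Array([3.0, 4.0], "f64")
print((x + y).data)  # double-precision result, [4. 6.]
```

This is why the documentation shows only one array type even though single, double, and complex variants are all supported underneath.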

ArrayFire can also integrate with existing CUDA or OpenCL code. The goal of libraries (be it Thrust or Bolt or ArrayFire) is not to take away control, but to make sure users are not reinventing the wheel over and over again. Having access to highly optimized, pre-existing GPU kernels for commonly used algorithms can only increase productivity.
