Hi Morten,
On 04/09/2013, at 9:53 PM, Morten Olsen Lysgaard mor...@lysgaard.no wrote:
I've been trying to get some speed out of the accelerate library today.
What I want to implement is something as simple as a matrix multiply.
I'd like it to be fast and memory efficient.
Well, the trouble
This paper explains how to implement them and gives example code:
http://community.haskell.org/~simonmar/papers/weak.pdf
I'm not aware of a package that does it for you, but I implemented one as part
of my own work, so that might serve as another example.
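That paper is "Stretching the storage manager: weak pointers and stable names
in Haskell" (Peyton Jones, Marlow and Elliott), and its running example is a
memo table whose entries do not keep their keys alive. Assuming that is the
construction under discussion, here is a minimal sketch along the paper's
lines (hash collisions are ignored for brevity, and memoise is an
illustrative name, not from any released package):

import Data.IORef
import qualified Data.IntMap as IntMap
import System.Mem.StableName
import System.Mem.Weak

-- Memoise a function without keeping its arguments alive: entries are
-- keyed by the stable name of the argument, and each cached result is
-- held behind a weak pointer keyed on the argument itself, so the entry
-- can be garbage collected once the argument is dead.
memoise :: (a -> b) -> IO (a -> IO b)
memoise f = do
  table <- newIORef IntMap.empty
  let insert x key = do
        let y = f x
        weak <- mkWeak x y Nothing        -- y stays reachable only while x is
        modifyIORef table (IntMap.insert key weak)
        return y
      apply x = do
        name    <- makeStableName x
        let key = hashStableName name     -- NB: collisions ignored in this sketch
        entries <- readIORef table
        case IntMap.lookup key entries of
          Nothing   -> insert x key
          Just weak -> do
            cached <- deRefWeak weak
            case cached of
              Just y  -> return y         -- hit: the key is still alive
              Nothing -> insert x key     -- the entry died; recompute
  return apply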
Cheers,
-Trev
Hi all,
To be fair, this is a shameless plug, but if you want to do GPGPU programming
in Haskell, your best bet at the moment is probably Accelerate:
http://hackage.haskell.org/package/accelerate
There is a CUDA backend for NVIDIA cards with demonstrated good performance and
many example
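The excerpt breaks off there, but for a flavour of the programming model,
here is the package's canonical dot-product example run through the CUDA
backend; a minimal sketch assuming the accelerate and accelerate-cuda
packages from Hackage:

import Data.Array.Accelerate      as A
import Data.Array.Accelerate.CUDA as CUDA   -- the NVIDIA backend

-- Pointwise multiply, then a parallel reduction; the whole pipeline is
-- compiled for and executed on the GPU by CUDA.run.
dotp :: Vector Float -> Vector Float -> Acc (Scalar Float)
dotp xs ys = A.fold (+) 0 (A.zipWith (*) (A.use xs) (A.use ys))

main :: IO ()
main = do
  let n  = 1000
      xs = A.fromList (Z :. n) [0 ..]       :: Vector Float
      ys = A.fromList (Z :. n) (repeat 2)   :: Vector Float
  print (CUDA.run (dotp xs ys))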
Hi Geoff,
Yes, please do!
-T
On 02/04/2013, at 3:01 AM, Geoffrey Mainland mainl...@apeiron.net wrote:
Fantastic, glad you got it working! Maybe it's time for me to send
Trevor a pull request...
Geoff
On 04/01/2013 04:27 PM, Peter Caspers wrote:
indeed, not very helpful ...
When I
Hi,
CUDA package maintainer here. I don't have access to a Win7 box with compatible
GPU, nor a lot of experience writing packages for Windows. If you do get it
working, please send me a pull request on GitHub. It'd be great to have this
working for Windows as well.
Cheers,
-Trevor
Hi Clark,
The question of sequential loops in Accelerate has come up a few times in the
past. The main sticking point is knowing how to implement them in a way that
encourages efficient programs: avoiding irregular arrays (data-dependent
iteration depths), discouraging scalar versions of collective combinators,
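The archived message breaks off there, but to make the regularity concern
concrete: a scalar loop whose depth is the same for every element keeps the
whole computation regular, which is what the flat model handles well. A
sketch, assuming the scalar iterate combinator that later versions of the
library expose (powers is an illustrative name):

import Data.Array.Accelerate as A

-- Every element iterates exactly n times, so the iteration space stays
-- regular; a data-dependent depth per element would not.
powers :: Exp Int -> Acc (Vector Float) -> Acc (Vector Float)
powers n = A.map (\x -> A.iterate n (*x) 1)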
happening here than Trevor's?
- Clark
On Tue, Dec 4, 2012 at 7:08 PM, Alexander Solla alex.so...@gmail.com wrote:
I don't mean to be blunt, but have you guys taken a course in linear algebra?
On Mon, Dec 3, 2012 at 9:21 PM, Trevor L. McDonell
tmcdon...@cse.unsw.edu.au wrote:
As far as I
at 2:06 AM, Trevor L. McDonell
tmcdon...@cse.unsw.edu.au wrote:
Hi Clark,
The trick is that most accelerate operations work over multidimensional arrays,
so you can still get around the fact that we are limited to flat
data-parallelism only.
Here is matrix multiplication in Accelerate, lifted from the first Repa paper
[1].
import Data.Array.Accelerate
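The archived message truncates just after the import. The example it refers
to, reconstructed along the lines of the Repa paper's mmult, looks roughly
like this (a sketch against the accelerate API of that vintage; the IsNum
constraint and A.transpose are my assumptions and may differ from what was
originally posted):

import Data.Array.Accelerate as A

type Matrix a = Array DIM2 a

-- Flat data-parallel matrix multiply: replicate the rows of arr and the
-- (transposed) columns of brr across a third dimension so every
-- row/column pair lines up, multiply pointwise, then reduce along the
-- innermost dimension.
matMul :: (IsNum e, Elt e) => Acc (Matrix e) -> Acc (Matrix e) -> Acc (Matrix e)
matMul arr brr
  = A.fold (+) 0
  $ A.zipWith (*) arrRepl brrRepl
  where
    Z :. rowsA :. _     = unlift (shape arr) :: Z :. Exp Int :. Exp Int
    Z :. _     :. colsB = unlift (shape brr) :: Z :. Exp Int :. Exp Int

    arrRepl = A.replicate (lift $ Z :. All   :. colsB :. All) arr
    brrRepl = A.replicate (lift $ Z :. rowsA :. All   :. All) (A.transpose brr)

The replicate trick is the point of the remark above: instead of nesting a
dot product inside a map, both matrices are expanded to a common
three-dimensional shape so that a single collective zipWith and fold do all
the work.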
On 07/07/2011, at 1:36 AM, Johannes Waldmann wrote:
actually libcuda is in /usr/lib/nvidia-current ...
It still feels strange that I can build the examples from
NVIDIA_GPU_Computing_SDK/C/src/ without modifying LDFLAGS.
Okay. I've modified the configure script to fix the extra space
I should mention that the version of 'accelerate' on hackage is a little old and
unloved at the moment, but the source repo should work:
https://github.com/mchakravarty/accelerate
Also, the CUDA bindings package hasn't yet been tested/updated for the recent
4.0 toolkit release.
-T
On
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2010 NVIDIA Corporation
Built on Thu_Nov_11_15:26:50_PST_2010
Cuda compilation tools, release 3.2, V0.2.1221
On 05/07/2011, at 10:16 PM, Johannes Waldmann wrote:
Trevor L. McDonell tmcdonell at cse.unsw.edu.au writes:
... source repo should work:
https://github.com/mchakravarty/accelerate