Re: [Haskell-cafe] Help optimize fannkuch program

2012-12-03 Thread Bryan O'Sullivan
On Sun, Dec 2, 2012 at 3:12 PM, Branimir Maksimovic bm...@hotmail.com wrote:

 Well, playing with Haskell I have literally translated my c++ program

 http://shootout.alioth.debian.org/u64q/program.php?test=fannkuchredux&lang=gpp&id=3
 and got decent performance but not that good in comparison
 with c++
 On my machine Haskell runs 52 secs while c++ 30 secs.


Did you compile with -O2 -fllvm?

On my machine:

C++ 28 sec
Mine -O2 -fllvm 37 sec
Yours -O2 -fllvm 41 sec
Mine -O2 48 sec
Yours -O2 54 sec

My version of your Haskell code is here: http://hpaste.org/78705
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] RFC: Changes to Travis CI's Haskell support

2012-12-03 Thread Simon Hengel
Hi,
currently the default to test Haskell projects on Travis CI [1] is:

install:
  - cabal install --enable-tests
script:
  - cabal test

The issue with this is that it runs the test suite twice (once during
cabal install --enable-tests, and again during cabal test), which is a
waste of resources and delays build reports.  This was an oversight on
my part when I adapted Travis's Haskell support for
cabal-install-0.14.0.

I think the right thing to do is:

install:
  - cabal install --only-dependencies --enable-tests

script:
  - cabal configure --enable-tests && cabal build && cabal test

Please let me know if you think there are better ways to do it, or if
you see any issues.

Cheers,
Simon

[1] https://travis-ci.org/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Can not use ST monad with polymorphic function

2012-12-03 Thread Dmitry Kulagin

 Basically, quantified types can't be given as arguments to type
 constructors (other than ->, which is its own thing). I'm not entirely
 sure why, but it apparently makes the type system very complicated from a
 theoretical standpoint. By wrapping the quantified type in a newtype, the
 argument to IO becomes simple enough not to cause problems.


Thank you, I have read about predicative types and it seems I understand
the origin of the problem now.
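
For reference, a minimal sketch of the newtype trick described above (the
names STAction and runAction are made up for illustration, not from this
thread):

  {-# LANGUAGE RankNTypes #-}

  import Control.Monad.ST (ST, runST)

  -- Without ImpredicativeTypes, a quantified type cannot be the argument
  -- of a type constructor such as IO:
  --
  --   broken :: IO (forall s. ST s Int)   -- rejected
  --
  -- Wrapping the quantified type in a newtype keeps the argument to IO
  -- simple, so plain RankNTypes is enough:
  newtype STAction a = STAction (forall s. ST s a)

  runAction :: STAction a -> a
  runAction (STAction st) = runST st

  makeAction :: IO (STAction Int)
  makeAction = return (STAction (return 42))

  main :: IO ()
  main = do
    act <- makeAction
    print (runAction act)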


  GHC has an extension -XImpredicativeTypes that lifts this restriction,
 but in my experience, it doesn't work very well.


Yes, it didn't help in my case.

Thank you,
Dmitry
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Design of a DSL in Haskell

2012-12-03 Thread Brent Yorgey
(Sorry, forgot to reply to the list initially; see conversation below.)

On Mon, Dec 03, 2012 at 03:49:00PM +0100, Joerg Fritsch wrote:
 Brent,

 I believe that inside the do-block (that basically calls my
 interpreter) I cannot call any other Haskell functions that are not
 recognized by my parser and interpreter.

This seems to just require some sort of escape mechanism for
embedding arbitrary Haskell code into your language.  For example a
primitive

  embed :: a -> CWMWL a

(assuming CWMWL is the name of your monad).  Whether this makes sense,
how to implement embed, etc. depends entirely on your language and
interpreter.  

However, as you imply below, this may or may not be possible depending
on the type a.  In that case I suggest making embed a type class method.
Something like

  class Embeddable a where
    embed :: a -> CWMWL a
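
A minimal, self-contained sketch of this type-class approach could look as
follows; CWMWL below is only a stand-in (a trivial wrapper over IO), since
the real interpreter monad is not shown in this thread:

  newtype CWMWL a = CWMWL { runCWMWL :: IO a }

  instance Functor CWMWL where
    fmap f (CWMWL m) = CWMWL (fmap f m)

  instance Applicative CWMWL where
    pure                = CWMWL . pure
    CWMWL f <*> CWMWL x = CWMWL (f <*> x)

  instance Monad CWMWL where
    return        = pure
    CWMWL m >>= k = CWMWL (m >>= runCWMWL . k)

  class Embeddable a where
    embed :: a -> CWMWL a

  -- Only types the interpreter knows how to handle get an instance.
  instance Embeddable Int where
    embed = return

  instance Embeddable Bool where
    embed = return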

I still get the feeling, though, that I have not really understood
your question.

 I am also trying to learn how I could preserve state from one line
 of code of my DSL to the next. I understand that inside the
 interpreter one would use a combination of the state monad and the
 reader monad, but could not find any non trivial example.

Yes, you can use the state monad to preserve state from one line to
the next.  I am not sure what you mean by using a combination of state
and reader monads.  There is nothing magical about the combination.
You would use state + reader simply if you had some mutable state as
well as some read-only configuration to thread through your
interpreter.
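
For concreteness, a minimal sketch of such an interpreter monad built with
mtl; every name below is illustrative rather than taken from this thread:

  import Control.Monad.Reader (ReaderT, runReaderT, asks)
  import Control.Monad.State  (StateT, evalStateT, gets, modify)
  import qualified Data.Map as Map

  type Config = Map.Map String String   -- read-only configuration
  type Env    = Map.Map String Int      -- mutable interpreter state

  type Interp a = ReaderT Config (StateT Env IO) a

  setVar :: String -> Int -> Interp ()
  setVar x v = modify (Map.insert x v)

  getVar :: String -> Interp Int
  getVar x = gets (Map.findWithDefault 0 x)

  lookupSetting :: String -> Interp (Maybe String)
  lookupSetting k = asks (Map.lookup k)

  runInterp :: Config -> Interp a -> IO a
  runInterp cfg prog = evalStateT (runReaderT prog cfg) Map.empty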

xmonad is certainly a nontrivial example but perhaps it is a bit *too*
nontrivial.  If I think of any other good examples I'll let you know.

-Brent

 
 
 On Dec 3, 2012, at 1:23 PM, Brent Yorgey wrote:
 
  On Sun, Dec 02, 2012 at 03:01:46PM +0100, Joerg Fritsch wrote:
  This is probably a very basic question.
  
  I am working on a DSL that eventually would allow me to say:
  
  import language.cwmwl
  main = runCWMWL $ do
 eval (isFib::, 1000, ?BOOL)
  
  I have just started to work on the interpreter-function runCWMWL and I 
  wonder whether it is possible to escape to real Haskell somehow (and how?) 
  either inside or outside the do-block.
  
  I don't think I understand the question.  The above already *is* real
  Haskell.  What is there to escape?
  
  I thought of providing a default wrapper for some required prelude
  functions (such as print) inside my interpreter but I wonder if
  there are more elegant ways to co-locate a DSL and Haskell without
  falling back to being a normal library only.
  
  I don't understand this sentence either.  Can you explain what you are
  trying to do in more detail?
  
  -Brent
 
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Moscow Haskell Users Group (MskHUG) December meeting.

2012-12-03 Thread Serguey Zefirov
I would like to announce MskHUG December meeting and invite everyone interested.

The meeting will take place December 13th, 20:00 to 23:30, in a nice
conference centre in the centre of Moscow: http://www.nf-conference.ru/

The meeting's agenda is to start more intense discussions. Most
probably there will be a couple of short presentations - I can talk
about creating fast Haskell programs that handle large data, to start
the discussion.

I will bring white paper and pencils for everyone to write and draw;
there will be a projector and screen, and also tea, coffee and snacks.

If you want to participate, please, email me.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Design of a DSL in Haskell

2012-12-03 Thread Joerg Fritsch
Thanks Brent,

my question is basically how the function embed would in practice be 
implemented.

I want to be able to take everything that my own language does not have from 
the host language, ideally so that I can say:

evalt <- eval (isFib::, 1000, ?BOOL)
case evalt of
   Left Str  -> ...
   Right Str -> ...


or so.

--Joerg

On Dec 3, 2012, at 4:04 PM, Brent Yorgey wrote:

 (Sorry, forgot to reply to the list initially; see conversation below.)
 
 On Mon, Dec 03, 2012 at 03:49:00PM +0100, Joerg Fritsch wrote:
 Brent,
 
 I believe that inside the do-block (that basically calls my
 interpreter) I cannot call any other Haskell functions that are not
 recognized by my parser and interpreter.
 
 This seems to just require some sort of escape mechanism for
 embedding arbitrary Haskell code into your language.  For example a
 primitive
 
  embed :: a -> CWMWL a
 
 (assuming CWMWL is the name of your monad).  Whether this makes sense,
 how to implement embed, etc. depends entirely on your language and
 interpreter.  
 
 However, as you imply below, this may or may not be possible depending
 on the type a.  In that case I suggest making embed a type class method.
 Something like
 
  class Embeddable a where
    embed :: a -> CWMWL a
 
 I still get the feeling, though, that I have not really understood
 your question.
 
 I am also trying to learn how I could preserve state from one line
 of code of my DSL to the next. I understand that inside the
 interpreter one would use a combination of the state monad and the
 reader monad, but could not find any non trivial example.
 
 Yes, you can use the state monad to preserve state from one line to
 the next.  I am not sure what you mean by using a combination of state
 and reader monads.  There is nothing magical about the combination.
 You would use state + reader simply if you had some mutable state as
 well as some read-only configuration to thread through your
 interpreter.
 
 xmonad is certainly a nontrivial example but perhaps it is a bit *too*
 nontrivial.  If I think of any other good examples I'll let you know.
 
 -Brent
 
 
 
 On Dec 3, 2012, at 1:23 PM, Brent Yorgey wrote:
 
 On Sun, Dec 02, 2012 at 03:01:46PM +0100, Joerg Fritsch wrote:
 This is probably a very basic question.
 
 I am working on a DSL that eventually would allow me to say:
 
 import language.cwmwl
 main = runCWMWL $ do
   eval (isFib::, 1000, ?BOOL)
 
 I have just started to work on the interpreter-function runCWMWL and I 
 wonder whether it is possible to escape to real Haskell somehow (and how?) 
 either inside or outside the do-block.
 
 I don't think I understand the question.  The above already *is* real
 Haskell.  What is there to escape?
 
 I thought of providing a default wrapper for some required prelude
 functions (such as print) inside my interpreter but I wonder if
 there are more elegant ways to co-locate a DSL and Haskell without
 falling back to being a normal library only.
 
 I don't understand this sentence either.  Can you explain what you are
 trying to do in more detail?
 
 -Brent
 
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Design of a DSL in Haskell

2012-12-03 Thread Joerg Fritsch
The below is probably not a good example since it does not require a DSL, but
the principle is clear: I want to take things from the host language that I
have not implemented (yet) in my DSL.

--Joerg

On Dec 3, 2012, at 4:25 PM, Joerg Fritsch wrote:

 Thanks Brent,
 
 my question is basically how the function embed would in practice be 
 implemented.
 
 I want to be able to take everything that my own language does not have from 
 the host language, ideally so that I can say:
 
 evalt <- eval (isFib::, 1000, ?BOOL)
 case evalt of
    Left Str  -> ...
    Right Str -> ...
 
 
 or so.
 
 --Joerg
 
 On Dec 3, 2012, at 4:04 PM, Brent Yorgey wrote:
 
 (Sorry, forgot to reply to the list initially; see conversation below.)
 
 On Mon, Dec 03, 2012 at 03:49:00PM +0100, Joerg Fritsch wrote:
 Brent,
 
 I believe that inside the do-block (that basically calls my
 interpreter) I cannot call any other Haskell functions that are not
 recognized by my parser and interpreter.
 
 This seems to just require some sort of escape mechanism for
 embedding arbitrary Haskell code into your language.  For example a
 primitive
 
  embed :: a -> CWMWL a
 
 (assuming CWMWL is the name of your monad).  Whether this makes sense,
 how to implement embed, etc. depends entirely on your language and
 interpreter.  
 
 However, as you imply below, this may or may not be possible depending
 on the type a.  In that case I suggest making embed a type class method.
 Something like
 
  class Embeddable a where
    embed :: a -> CWMWL a
 
 I still get the feeling, though, that I have not really understood
 your question.
 
 I am also trying to learn how I could preserve state from one line
 of code of my DSL to the next. I understand that inside the
 interpreter one would use a combination of the state monad and the
 reader monad, but could not find any non trivial example.
 
 Yes, you can use the state monad to preserve state from one line to
 the next.  I am not sure what you mean by using a combination of state
 and reader monads.  There is nothing magical about the combination.
 You would use state + reader simply if you had some mutable state as
 well as some read-only configuration to thread through your
 interpreter.
 
 xmonad is certainly a nontrivial example but perhaps it is a bit *too*
 nontrivial.  If I think of any other good examples I'll let you know.
 
 -Brent
 
 
 
 On Dec 3, 2012, at 1:23 PM, Brent Yorgey wrote:
 
 On Sun, Dec 02, 2012 at 03:01:46PM +0100, Joerg Fritsch wrote:
 This is probably a very basic question.
 
 I am working on a DSL that eventually would allow me to say:
 
 import language.cwmwl
 main = runCWMWL $ do
   eval (isFib::, 1000, ?BOOL)
 
 I have just started to work on the interpreter-function runCWMWL and I 
 wonder whether it is possible to escape to real Haskell somehow (and 
 how?) either inside or outside the do-block.
 
 I don't think I understand the question.  The above already *is* real
 Haskell.  What is there to escape?
 
 I thought of providing a default wrapper for some required prelude
 functions (such as print) inside my interpreter but I wonder if
 there are more elegant ways to co-locate a DSL and Haskell without
 falling back to being a normal library only.
 
 I don't understand this sentence either.  Can you explain what you are
 trying to do in more detail?
 
 -Brent
 
 
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] RFC: Changes to Travis CI's Haskell support

2012-12-03 Thread Johan Tibell
On Mon, Dec 3, 2012 at 1:04 AM, Simon Hengel s...@typeful.net wrote:
 I think the right thing to do is:

 install:
   - cabal install --only-dependencies --enable-tests

 script:
   - cabal configure --enable-tests && cabal build && cabal test

 Please let me know if you think there are better ways to do it, or if
 you see any issues.

This is the right thing to do.

-- Johan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Naive matrix multiplication with Accelerate

2012-12-03 Thread Clark Gaebel
Ah. I see now. Silly Haskell making inefficient algorithms hard to write
and efficient ones easy. It's actually kind of annoying when learning, but
probably for the best.

Is there a good write-up of the algorithm you're using somewhere? The Repa
paper was very brief in its explanation, and I'm having trouble
visualizing the mapping of the 2D matrices into 3 dimensions.

  - Clark


On Mon, Dec 3, 2012 at 2:06 AM, Trevor L. McDonell 
tmcdon...@cse.unsw.edu.au wrote:

 Hi Clark,

 The trick is that most accelerate operations work over multidimensional
 arrays, so you can still get around the fact that we are limited to flat
 data-parallelism only.

 Here is matrix multiplication in Accelerate, lifted from the first Repa
 paper [1].


 import Data.Array.Accelerate as A

 type Matrix a = Array DIM2 a

 matMul :: (IsNum e, Elt e) => Acc (Matrix e) -> Acc (Matrix e) -> Acc (Matrix e)
 matMul arr brr
   = A.fold (+) 0
   $ A.zipWith (*) arrRepl brrRepl
   where
     Z :. rowsA :. _ = unlift (shape arr)  :: Z :. Exp Int :. Exp Int
     Z :. _ :. colsB = unlift (shape brr)  :: Z :. Exp Int :. Exp Int

     arrRepl = A.replicate (lift $ Z :. All   :. colsB :. All) arr
     brrRepl = A.replicate (lift $ Z :. rowsA :. All   :. All) (A.transpose brr)


 If you use github sources rather than the hackage package, those
 intermediate replicates will get fused away.


 Cheers,
 -Trev

  [1] http://www.cse.unsw.edu.au/~chak/papers/KCLPL10.html




 On 03/12/2012, at 5:07 PM, Clark Gaebel cgae...@uwaterloo.ca wrote:

 Hello cafe,

 I've recently started learning about cuda and heterogeneous programming, and
 have been using accelerate [1] to help me out. Right now, I'm running into
 trouble in that I can't call parallel code from sequential code. Turns out
 GPUs aren't exactly like Repa =P.

 Here's what I have so far:

 import qualified Data.Array.Accelerate as A
 import Data.Array.Accelerate ( (:.)(..)
  , Acc
  , Vector
  , Scalar
  , Elt
  , fold
  , slice
  , constant
  , Array
  , Z(..), DIM1, DIM2
  , fromList
  , All(..)
  , generate
  , lift, unlift
  , shape
  )
 import Data.Array.Accelerate.Interpreter ( run )

 dotP :: (Num a, Elt a) => Acc (Vector a) -> Acc (Vector a) -> Acc (Scalar a)
 dotP xs ys = fold (+) 0 $ A.zipWith (*) xs ys

 type Matrix a = Array DIM2 a

 getRow :: Elt a => Int -> Acc (Matrix a) -> Acc (Vector a)
 getRow n mat = slice mat . constant $ Z :. n :. All

 -- Naive matrix multiplication:
 --
 -- index (i, j) is equal to the ith row of 'a' `dot` the jth row of 'b'
 matMul :: A.Acc (Matrix Double) -> A.Acc (Matrix Double) -> A.Acc (Matrix Double)
 matMul a b' = A.generate (constant $ Z :. nrows :. ncols) $
   \ix ->
     let (Z :. i :. j) = unlift ix
      in getRow i a `dotP` getRow j b
   where
     b = A.transpose b' -- I assume row indexing is faster than column indexing...
     (Z :. nrows :.   _  ) = unlift $ shape a
     (Z :.   _   :. ncols) = unlift $ shape b


 This, of course, gives me errors right now because I'm calling getRow and
 dotP from within the generation function, which expects Exp[ression]s, not
 Acc[elerated computation]s.

 So maybe I need to replace that line with an inner for loop? Is there an
 easy way to do that with Accelerate?

 Thanks for your help,
   - Clark

 [1] http://hackage.haskell.org/package/accelerate
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Help optimize fannkuch program

2012-12-03 Thread Branimir Maksimovic

Thanks. Your version is much faster. Yes, I have compiled with
ghc --make -O2 -fllvm -optlo-O3 -optlo-constprop fannkuchredux4.hs
(there is a bug in ghc 7.4.2 regarding llvm 3.1 which is circumvented
with constprop).

Results:

yours:
bmaxa@maxa:~/shootout/fannkuchredux$ time ./fannkuchredux4 12
3968050
Pfannkuchen(12) = 65

real    0m39.200s
user    0m39.132s
sys     0m0.044s

mine:
bmaxa@maxa:~/shootout/fannkuchredux$ time ./fannkuchredux 12
3968050
Pfannkuchen(12) = 65

real    0m50.784s
user    0m50.660s
sys     0m0.092s

It seems that your machine is faster than mine and somewhat better at
executing my version. Thanks! Should I contribute your version to the
shootout site?

Date: Mon, 3 Dec 2012 00:01:32 -0800
Subject: Re: [Haskell-cafe] Help optimize fannkuch program
From: b...@serpentine.com
To: bm...@hotmail.com
CC: haskell-cafe@haskell.org

On Sun, Dec 2, 2012 at 3:12 PM, Branimir Maksimovic bm...@hotmail.com wrote:

Well, playing with Haskell I have literally translated my c++ program
http://shootout.alioth.debian.org/u64q/program.php?test=fannkuchredux&lang=gpp&id=3
and got decent performance but not that good in comparison with c++.
On my machine Haskell runs 52 secs while c++ 30 secs.
Did you compile with -O2 -fllvm?

On my machine:
C++ 28 sec
Mine -O2 -fllvm 37 sec
Yours -O2 -fllvm 41 sec
Mine -O2 48 sec
Yours -O2 54 sec
My version of your Haskell code is here: http://hpaste.org/78705
  ___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Moscow Haskell Users Group (MskHUG) December meeting.

2012-12-03 Thread Тимур Амиров
I'm in!
I'd like to talk with people writing web-based production apps!

2012/12/3 Serguey Zefirov sergu...@gmail.com

 I would like to announce MskHUG December meeting and invite everyone
 interested.

 The meeting will take place December 13th, 20:00 to 23:30, in a nice
 conference centre in the centre of Moscow: http://www.nf-conference.ru/

 The meeting's agenda is to start more intense discussions. Most
 probably there will be a couple of short presentations - I can talk
 about creating fast Haskell programs that handle large data, to start
 the discussion.

 I will bring white paper and pencils for everyone to write and draw;
 there will be a projector and screen, and also tea, coffee and snacks.

 If you want to participate, please, email me.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Best
Timur DeTeam Amirov
Moscow, Russia
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Moscow Haskell Users Group (MskHUG) December meeting.

2012-12-03 Thread Dmitry Vyal

On 12/03/2012 07:13 PM, Serguey Zefirov wrote:

I would like to announce MskHUG December meeting and invite everyone interested.

Wow, great idea :) I'd like to participate. May I ask why you don't
schedule the event on a weekend, or at least on Friday?


Best wishes,
Dmitry


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Design of a DSL in Haskell

2012-12-03 Thread Tillmann Rendel

Hi,

Joerg Fritsch wrote:

I am working on a DSL that eventually would allow me to say:

import language.cwmwl

main = runCWMWL $ do

 eval (isFib::, 1000, ?BOOL)


I have just started to work on the interpreter-function runCWMWL and I
wonder whether it is possible to escape to real Haskell somehow (and
how?) either inside or outside the do-block.


You can already use Haskell in your DSL. A simple example:

  main = runCWMWL $ do
    eval (isFib::, 500 + 500, ?BOOL)

The (+) operator is taken from Haskell, and it is available in your DSL 
program. This use of Haskell is completely for free: You don't have to 
do anything special with your DSL implementation to support it. I 
consider this the main benefit of internal vs. external DSLs.



A more complex example:

  main = runCWMWL $ do
    foo <- eval (isFib::, 1000, ?BOOL)
if foo
  then return 27
  else return 42

Here, you are using the Haskell if-then-else expression to decide which 
DSL program to run. Note that this example also uses (>>=) and return,
so it only works because your DSL is monadic. Beyond writing the Monad 
instance, you don't have to do anything special to support this. In 
particular, you might not need an additional embed function if you've 
already implemented return from the Monad type class. I consider this 
the main benefit of the Monad type class.
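
A tiny sketch of that last point, with IO standing in for CWMWL (the real
monad is not shown in this thread) and fib as an arbitrary host-language
function:

  type CWMWL = IO   -- stand-in for the thread's interpreter monad

  fib :: Int -> Integer
  fib n = fibs !! n
    where fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

  embedFib :: Int -> CWMWL Integer
  embedFib n = return (fib n)   -- 'return' already plays the role of 'embed'

  main :: IO ()
  main = embedFib 30 >>= print  -- prints 832040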


  Tillmann

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Help optimize fannkuch program

2012-12-03 Thread Bryan O'Sullivan
On Mon, Dec 3, 2012 at 11:18 AM, Branimir Maksimovic bm...@hotmail.com wrote:

 Thanks! Should I contribute your version to the shootout site?


Do whatever you like with it.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Naive matrix multiplication with Accelerate

2012-12-03 Thread Trevor L. McDonell
As far as I am aware, the only description is in the Repa paper. You are
right, it really should be explained properly somewhere…

As a simpler example, here is the outer product of two vectors [1].

vvProd :: (IsNum e, Elt e) => Acc (Vector e) -> Acc (Vector e) -> Acc (Matrix e)
vvProd xs ys = A.zipWith (*) xsRepl ysRepl
  where
n   = A.size xs
m   = A.size ys

xsRepl  = A.replicate (lift (Z :. All :. m  )) xs
ysRepl  = A.replicate (lift (Z :. n   :. All)) ys

If we then `A.fold (+) 0` the matrix, it would reduce along each row producing 
a vector. So the first element of that vector is going to be calculated as 
(xs[0] * ys[0] + xs[0] * ys[1] +  … xs[0] * ys[m-1]). That's the idea we want 
for our matrix multiplication … but I agree, it is difficult for me to 
visualise as well.

I do the same sort of trick with the n-body demo to get all n^2 particle 
interactions.
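
As a quick usage sketch (an addition, not part of the original mail),
assuming the vvProd definition above is in scope and running with the
reference interpreter on made-up data:

import Data.Array.Accelerate             as A
import Data.Array.Accelerate.Interpreter ( run )

main :: IO ()
main = do
  let xs = fromList (Z :. 3) [1,2,3] :: Vector Double
      ys = fromList (Z :. 2) [10,20] :: Vector Double
  -- outer product: the 3x2 matrix with rows [10,20], [20,40], [30,60]
  print $ run (vvProd (use xs) (use ys))
  -- folding it along each row gives, for row i, xs[i] * (sum of ys)
  print $ run (A.fold (+) 0 (vvProd (use xs) (use ys)))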

-Trev


 [1]: http://en.wikipedia.org/wiki/Outer_product#Vector_multiplication



On 04/12/2012, at 3:41 AM, Clark Gaebel cgae...@uwaterloo.ca wrote:

 Ah. I see now. Silly Haskell making inefficient algorithms hard to write and 
 efficient ones easy. It's actually kind of annoying when learning, but 
 probably for the best.
 
 Is there a good write-up of the algorithm you're using somewhere? The Repa 
 paper was very brief in its explanation, and I'm having trouble visualizing 
 the mapping of the 2D matrices into 3 dimensions.
 
   - Clark
 
 
 On Mon, Dec 3, 2012 at 2:06 AM, Trevor L. McDonell 
 tmcdon...@cse.unsw.edu.au wrote:
 Hi Clark,
 
 The trick is that most accelerate operations work over multidimensional 
 arrays, so you can still get around the fact that we are limited to flat 
 data-parallelism only.
 
 Here is matrix multiplication in Accelerate, lifted from the first Repa paper 
 [1].
 
 
 import Data.Array.Accelerate as A
 
 type Matrix a = Array DIM2 a
 
 matMul :: (IsNum e, Elt e) => Acc (Matrix e) -> Acc (Matrix e) -> Acc (Matrix e)
 matMul arr brr
   = A.fold (+) 0
   $ A.zipWith (*) arrRepl brrRepl
   where
     Z :. rowsA :. _ = unlift (shape arr)  :: Z :. Exp Int :. Exp Int
     Z :. _ :. colsB = unlift (shape brr)  :: Z :. Exp Int :. Exp Int

     arrRepl = A.replicate (lift $ Z :. All   :. colsB :. All) arr
     brrRepl = A.replicate (lift $ Z :. rowsA :. All   :. All) (A.transpose brr)
 
 
 If you use github sources rather than the hackage package, those intermediate 
 replicates will get fused away.
 
 
 Cheers,
 -Trev
 
  [1] http://www.cse.unsw.edu.au/~chak/papers/KCLPL10.html
 
 
 
 
 On 03/12/2012, at 5:07 PM, Clark Gaebel cgae...@uwaterloo.ca wrote:
 
 Hello cafe,
 
 I've recently started learning about cuda and heterogeneous programming, and 
 have been using accelerate [1] to help me out. Right now, I'm running into 
 trouble in that I can't call parallel code from sequential code. Turns out 
 GPUs aren't exactly like Repa =P.
 
 Here's what I have so far:
 
 import qualified Data.Array.Accelerate as A
 import Data.Array.Accelerate ( (:.)(..)
  , Acc
  , Vector
  , Scalar
  , Elt
  , fold
  , slice
  , constant
  , Array
  , Z(..), DIM1, DIM2
  , fromList
  , All(..)
  , generate
  , lift, unlift
  , shape
  )
 import Data.Array.Accelerate.Interpreter ( run )
 
 dotP :: (Num a, Elt a) => Acc (Vector a) -> Acc (Vector a) -> Acc (Scalar a)
 dotP xs ys = fold (+) 0 $ A.zipWith (*) xs ys
 
 type Matrix a = Array DIM2 a
 
 getRow :: Elt a => Int -> Acc (Matrix a) -> Acc (Vector a)
 getRow n mat = slice mat . constant $ Z :. n :. All
 
 -- Naive matrix multiplication:
 --
 -- index (i, j) is equal to the ith row of 'a' `dot` the jth row of 'b'
 matMul :: A.Acc (Matrix Double) -> A.Acc (Matrix Double) -> A.Acc (Matrix Double)
 matMul a b' = A.generate (constant $ Z :. nrows :. ncols) $
   \ix ->
     let (Z :. i :. j) = unlift ix
      in getRow i a `dotP` getRow j b
   where
     b = A.transpose b' -- I assume row indexing is faster than column indexing...
     (Z :. nrows :.   _  ) = unlift $ shape a
     (Z :.   _   :. ncols) = unlift $ shape b
 
 
 This, of course, gives me errors right now because I'm calling getRow and 
 dotP from within the generation function, which expects Exp[ression]s, not 
 Acc[elerated computation]s.
 
 So maybe I need to replace that line with an inner for loop? Is there an 
 easy way to do that with Accelerate?
 
 Thanks for your help,
   - Clark
 
 [1] http://hackage.haskell.org/package/accelerate
 

Re: [Haskell-cafe] Moscow Haskell Users Group (MskHUG) December meeting.

2012-12-03 Thread Sergey Mironov
Hi. Great idea, I'm in.

Sergey


2012/12/3 Serguey Zefirov sergu...@gmail.com:
 I would like to announce MskHUG December meeting and invite everyone 
 interested.

 The meeting will take place December 13th, 20:00 to 23:30, in a nice
 conference centre in the centre of Moscow: http://www.nf-conference.ru/

 The meeting's agenda is to start more intense discussions. Most
 probably there will be a couple of short presentations - I can talk
 about creating fast Haskell programs that handle large data, to start
 the discussion.

 I will bring white paper and pencils for everyone to write and draw;
 there will be a projector and screen, and also tea, coffee and snacks.

 If you want to participate, please, email me.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe