A simple and straightforward way to define the DSP operations of correlation and convolution is as follows:

 -- Dot product of the kernel against the head of the signal.
 correlate1 :: [Double] -> [Double] -> Double
 correlate1 ks = sum . zipWith (*) ks

 -- Slide the kernel along the signal, one sample at a time.
 correlate :: [Double] -> [Double] -> [Double]
 correlate ks [] = []
 correlate ks xs = correlate1 ks xs : correlate ks (tail xs)

 -- Convolution is just correlation with the kernel reversed.
 convolute :: [Double] -> [Double] -> [Double]
 convolute ks = correlate (reverse ks)

This very simple code works - and actually works quite well. It has the nice property of generating output from input lazily, as it is demanded. If the correlate function could be altered to use Prelude list functions only, I would think the above code would work quite well with stream fusion too. (Presumably you could do this using "inits" or something?)
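(On reflection, "tails" rather than "inits" seems to be the right combinator: each output sample is correlate1 against one suffix of the input, and "init . tails" enumerates exactly the non-empty suffixes. A quick sketch - correlateT is just my name for it:)

```haskell
import Data.List (tails)

correlate1 :: [Double] -> [Double] -> Double
correlate1 ks = sum . zipWith (*) ks

-- tails xs yields every suffix of xs, ending with []; init drops
-- that empty suffix, so we get one output sample per input sample,
-- exactly like the recursive version above.
correlateT :: [Double] -> [Double] -> [Double]
correlateT ks = map (correlate1 ks) . init . tails
```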

Assuming DPH ever becomes a reality, how would you implement the above with it?
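(For what it's worth, my guess at a DPH version, written against the Data.Array.Parallel vocabulary as I remember it from the papers - mapP, zipWithP, sumP, lengthP, sliceP, enumFromToP - so treat this as pseudocode; the exact names and signatures may well be off:)

 correlateP :: [:Double:] -> [:Double:] -> [:Double:]
 correlateP ks xs = mapP dot (enumFromToP 0 (n - 1))
   where
     n     = lengthP xs
     k     = lengthP ks
     dot i = sumP (zipWithP (*) ks (sliceP i (min k (n - i)) xs))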

(My intuition tells me that unless you have a really *big* filter kernel, breaking correlate1 into parallel chunks is too granular. However, each call to correlate1 could be run independently, presumably conferring a more or less linear speedup. Unless having two cores writing to the same memory page is going to upset the cache system...)
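(You can try the "each correlate1 runs independently" idea today with ordinary sparks, no DPH needed - par and pseq are exported from GHC.Conc in base. A sketch, with parMap' and correlatePar being my own names; one spark per output sample, which per my intuition above is probably too fine-grained in practice:)

```haskell
import Data.List (tails)
import GHC.Conc (par, pseq)  -- par/pseq live in base via GHC.Conc

correlate1 :: [Double] -> [Double] -> Double
correlate1 ks = sum . zipWith (*) ks

-- A minimal parMap, so this stays base-only: spark the head element
-- while evaluating the rest of the list.
parMap' :: (a -> b) -> [a] -> [b]
parMap' _ []     = []
parMap' f (x:xs) = y `par` (ys `pseq` (y : ys))
  where
    y  = f x
    ys = parMap' f xs

-- One spark per output sample: each correlate1 call is independent.
correlatePar :: [Double] -> [Double] -> [Double]
correlatePar ks = parMap' (correlate1 ks) . init . tails
```

(To coarsen the granularity you would spark chunks of samples instead - parListChunk from the "parallel" package's Control.Parallel.Strategies does roughly that.)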

Presumably if you want to use parallel arrays, then the data must all exist in memory at once, and any chunking must be implemented manually and tediously?
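(Manual chunking isn't too bad in the list world, at least. A sketch of overlap-save style chunking - chunkOverlap and correlateChunked are names I've made up - where each chunk carries length ks - 1 extra samples so that windows straddling a chunk boundary are still complete, and the result agrees with the unchunked version:)

```haskell
-- Definitions from the post, repeated so this compiles standalone.
correlate1 :: [Double] -> [Double] -> Double
correlate1 ks = sum . zipWith (*) ks

correlate :: [Double] -> [Double] -> [Double]
correlate _  [] = []
correlate ks xs = correlate1 ks xs : correlate ks (tail xs)

-- Split into chunks of n samples, where each chunk also carries the
-- next ov samples of lookahead (the "overlap-save" idea from DSP).
chunkOverlap :: Int -> Int -> [a] -> [[a]]
chunkOverlap _ _  [] = []
chunkOverlap n ov ys = take (n + ov) ys : case drop n ys of
  []   -> []
  rest -> chunkOverlap n ov rest

-- Correlate chunk by chunk, keeping n output samples per chunk;
-- with ov = length ks - 1 this agrees with plain correlate.
correlateChunked :: Int -> [Double] -> [Double] -> [Double]
correlateChunked n ks =
  concatMap (take n . correlate ks) . chunkOverlap n (length ks - 1)
```

(Each chunk could then be handed to a separate core; the tedious part in a parallel-array setting would presumably be doing this same bookkeeping over array slices.)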

_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe