Re: [Haskell-cafe] Performance of delete-and-return-last-element

2013-08-30 Thread Clark Gaebel
I don't think a really smart compiler can make that transformation. It
looks like an exponential-time algorithm would be required, but I can't
prove that.

GHC definitely won't...

For this specific example, though, I'd probably do:

darle :: [a] -> (a, [a])
darle xs =
  case reverse xs of
    []     -> error "darle: empty list"
    (x:xs) -> (x, reverse xs)
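If a single traversal is really wanted, one way (just a sketch; the Maybe
result type is my own choice, not a requirement) is to write it as a fold:

darleOnce :: [a] -> Maybe (a, [a])
darleOnce = foldr step Nothing
  where
    step x Nothing          = Just (x, [])        -- x was the last element
    step x (Just (lst, ys)) = Just (lst, x : ys)  -- keep x, pass the last along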

  - Clark


On Fri, Aug 30, 2013 at 2:18 PM, Lucas Paul reilith...@gmail.com wrote:

 Suppose I need to get an element from a data structure, and also
 modify the data structure. For example, I might need to get and delete
 the last element of a list:

 darle xs = ((last xs), (rmlast xs)) where
   rmlast [_] = []
   rmlast (y:ys) = y:(rmlast ys)

 There are probably other and better ways to write rmlast, but I want
 to focus on the fact that darle here, for lack of a better name off
 the top of my head, appears to traverse the list twice. Once to get
 the element, and once to remove it to produce a new list. This seems
 bad. Especially for large data structures, I don't want to be
 traversing twice to do what ought to be one operation. To fix it, I
 might be tempted to write something like:

 darle' [a] = (a, [])
 darle' (x:xs) = let (a, ys) = darle' xs in (a, (x:ys))

 But this version has lost its elegance. It was also kind of harder to
 come up with, and for more complex data structures (like the binary
 search tree) the simpler expression is really desirable. Can a really
 smart compiler transform/optimize the first definition into something
 that traverses the data structure only once? Can GHC?

 - Lucas

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monad Transformer Space Leak

2013-07-18 Thread Clark Gaebel
No I haven't.

  - Clark

On Thu, Jul 18, 2013 at 10:07 PM, Niklas Hambüchen m...@nh2.me wrote:
 Did you file this as a bug?

 On Tue 23 Apr 2013 23:16:03 JST, Clark Gaebel wrote:
 I'm on 7.6.2, and it does. Oh no.

   - Clark

 On Tuesday, April 23, 2013, Tom Ellis wrote:

 On Tue, Apr 23, 2013 at 09:36:04AM +0200, Petr Pudlák wrote:
  I tested it on GHC 6.12.1, which wasn't affected by the recent
 ackermann
  bug, but still it leaks memory.

 I tested it on GHC 7.4.1 and I don't see any space leak.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monad Transformer Space Leak

2013-07-18 Thread Clark Gaebel
Then I will. Going to double check on 7.6.3, first.

Thanks for bringing this back to my attention. I forgot about it. :P

Regards,
  - Clark

On Thu, Jul 18, 2013 at 10:12 PM, Niklas Hambüchen m...@nh2.me wrote:
 Sounds like a Real Good Thing to do :)

 On Fri 19 Jul 2013 11:10:25 JST, Clark Gaebel wrote:
 No I haven't.

   - Clark

 On Thu, Jul 18, 2013 at 10:07 PM, Niklas Hambüchen m...@nh2.me wrote:
 Did you file this as a bug?

 On Tue 23 Apr 2013 23:16:03 JST, Clark Gaebel wrote:
 I'm on 7.6.2, and it does. Oh no.

   - Clark

 On Tuesday, April 23, 2013, Tom Ellis wrote:

 On Tue, Apr 23, 2013 at 09:36:04AM +0200, Petr Pudlák wrote:
  I tested it on GHC 6.12.1, which wasn't affected by the recent
 ackermann
  bug, but still it leaks memory.

 I tested it on GHC 7.4.1 and I don't see any space leak.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monad Transformer Space Leak

2013-07-18 Thread Clark Gaebel
https://github.com/patperry/hs-monte-carlo/issues/9

On Thu, Jul 18, 2013 at 10:20 PM, Clark Gaebel cgae...@uwaterloo.ca wrote:
 Then I will. Going to double check on 7.6.3, first.

 Thanks for bringing this back to my attention. I forgot about it. :P

 Regards,
   - Clark

 On Thu, Jul 18, 2013 at 10:12 PM, Niklas Hambüchen m...@nh2.me wrote:
 Sounds like a Real Good Thing to do :)

 On Fri 19 Jul 2013 11:10:25 JST, Clark Gaebel wrote:
 No I haven't.

   - Clark

 On Thu, Jul 18, 2013 at 10:07 PM, Niklas Hambüchen m...@nh2.me wrote:
 Did you file this as a bug?

 On Tue 23 Apr 2013 23:16:03 JST, Clark Gaebel wrote:
 I'm on 7.6.2, and it does. Oh no.

   - Clark

 On Tuesday, April 23, 2013, Tom Ellis wrote:

 On Tue, Apr 23, 2013 at 09:36:04AM +0200, Petr Pudlák wrote:
  I tested it on GHC 6.12.1, which wasn't affected by the recent
 ackermann
  bug, but still it leaks memory.

 I tested it on GHC 7.4.1 and I don't see any space leak.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ordNub

2013-07-15 Thread Clark Gaebel
Apologies. I was being lazy. Here's a stable version:

  import Data.Hashable (Hashable)
  import qualified Data.HashSet as S

  hashNub :: (Hashable a, Eq a) => [a] -> [a]
  hashNub l = go S.empty l
    where
      go _ []     = []
      go s (x:xs) = if x `S.member` s then go s xs
                    else x : go (S.insert x s) xs

Which, again, will probably be faster than the one using Ord, and I
can't think of any cases where I'd want the one using Ord instead. I
may just not be creative enough, though.
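(As a quick sanity check of the stable behaviour, with made-up input:
hashNub [3,1,3,2,1] == [3,1,2]; the first occurrence of each element is
kept, in its original position.)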


  - Clark

On Mon, Jul 15, 2013 at 12:46 AM, Brandon Allbery allber...@gmail.com wrote:
 On Sun, Jul 14, 2013 at 7:54 AM, Clark Gaebel cgae...@uwaterloo.ca wrote:

 Oops sorry I guess my point wasn't clear.

 Why ord based when hashable is faster? Then there's no reason this has to
 be in base, it can just be a

 Did the point about "stable" fly overhead?

 --
 brandon s allbery kf8nh   sine nomine associates
 allber...@gmail.com  ballb...@sinenomine.net
 unix, openafs, kerberos, infrastructure, xmonadhttp://sinenomine.net

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ordNub

2013-07-15 Thread Clark Gaebel
I'm procrastinating something else, so I wrote the patch to
unordered-containers. Feel free to comment on the github link:

https://github.com/tibbe/unordered-containers/pull/67

I'm still against having an Ord version, since my intuition tells me
that hash-based data structures are faster than ordered ones. Someone
else can write the patch, though!

As a tangent, can anyone think of a data structure for which you can
write an Ord instance but Hashable/Eq is impossible (or prove
otherwise)? How about the converse?

Regards,
  - Clark

On Mon, Jul 15, 2013 at 10:40 PM, John Lato jwl...@gmail.com wrote:
 On Tue, Jul 16, 2013 at 10:31 AM, Ivan Lazar Miljenovic
 ivan.miljeno...@gmail.com wrote:

 On 16 July 2013 11:46, John Lato jwl...@gmail.com wrote:
  In my tests, using unordered-containers was slightly slower than using
  Ord,
  although as the number of repeated elements grows unordered-containers
  appears to have an advantage.  I'm sure the relative costs of comparison
  vs
  hashing would affect this also.  But both are dramatically better than
  the
  current nub.
 
  Has anyone looked at Bart's patches to see how difficult it would be to
  apply them (or re-write them)?

 If I understand correctly, this function is proposed to be added to
 Data.List which lives in base... but the proposals here are about
 using either Sets from containers or HashSet from
 unordered-containers; I thought base wasn't supposed to depend on any
 other package :/


 That was one of the points up for discussion: is it worth including a subset
 of Set functionality to enable a much better nub in base?  Is it even worth
 having Data.List.nub if it has quadratic complexity?

 As an alternative, Bart's proposal was for both including ordNub in
 containers and an improved nub (with no dependencies outside base) in
 Data.List.  Unfortunately the patches are quite old (darcs format), and I
 don't know how they'd apply to the current situation.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ordNub

2013-07-15 Thread Clark Gaebel
nubBy is a very good suggestion. Added!

Regarding good hash functions: if your data structure is algebraic,
you can derive generic and Hashable will give you a pretty good hash
function:

 {-# LANGUAGE DeriveGeneric #-}
 import GHC.Generics (Generic)
 import Data.Hashable

 data ADT a = C0 Int String | C1 [a]
   deriving Generic

 instance Hashable a => Hashable (ADT a)

It's magic!
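(For instance, with the made-up value above, hash (C1 [True] :: ADT Bool)
works in GHCi with no hand-written instance code at all.)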

  - Clark

On Mon, Jul 15, 2013 at 11:35 PM, Richard A. O'Keefe o...@cs.otago.ac.nz 
wrote:

 On 16/07/2013, at 3:21 PM, Clark Gaebel wrote:

 I'm still against having an Ord version, since my intuition tells me
 that hash-based data structures are faster than ordered ones.

 There are at least four different things that an Ord version might
 mean:

  - first sort a list, then eliminate duplicates
  - sort a list eliminating duplicates stably as you go
(think 'merge sort', using 'union' instead of 'merge')
  - build a balanced tree set as you go
  - having a list that is already sorted, use that to
    eliminate duplicates cheaply.

 These things have different costs.  For example, if there are N
 elements of which U are unique, the first has O(N.log N) cost,
 the third has O(N.log U) cost, and the fourth has O(N) cost.

 What I want is more often ordNubBy than ordNub, though.

 Someone
 else can write the patch, though!

 As a tangent, can anyone think of a data structure for which you can
 write an Ord instance but Hashable/Eq is impossible (or prove
 otherwise)? How about the converse?

 Since Ord has Eq as a superclass, and since 0 is a functionally
 correct hash value for anything, if you can implement Ord you
 can obviously implement Hashable/Eq.  Whether it is *useful* to
 do so is another question.

 It turns out that it _is_ possible to define good quality hash
 functions on sets, but most code in the field to do so is pretty bad.
 (Just a modular sum or exclusive or.)

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ordNub

2013-07-14 Thread Clark Gaebel
Similarly, I've always used:

import qualified Data.HashSet as S

nub :: (Eq a, Hashable a) => [a] -> [a]
nub = S.toList . S.fromList

And I can't think of any type for which I can't write a Hashable instance,
so this is extremely practical.
On Jul 14, 2013 7:24 AM, Niklas Hambüchen m...@nh2.me wrote:

 tldr: nub is abnormally slow, we shouldn't use it, but we do.


 As you might know, Data.List.nub is O(n²). (*)

 As you might not know, almost *all* practical Haskell projects use it,
 and that in places where an Ord instance is given, e.g. happy, Xmonad,
 ghc-mod, Agda, darcs, QuickCheck, yesod, shake, Cabal, haddock, and 600
 more (see https://github.com/nh2/haskell-ordnub).

 I've taken the Ord-based O(n * log n) implementation from yi using a Set:

   import Data.Set (empty, insert, member)

   ordNub :: (Ord a) => [a] -> [a]
   ordNub l = go empty l
     where
       go _ []     = []
       go s (x:xs) = if x `member` s then go s xs
                     else x : go (insert x s) xs


 and put benchmarks on

 http://htmlpreview.github.io/?https://github.com/nh2/haskell-ordnub/blob/1f0a2c94a/report.html
 (compare `nub` vs `ordNub`).

 `ordNub` is not only in a different complexity class, but even seems to
 perform better than nub for very small numbers of actually different
 list elements (that's the numbers before the benchmark names).

 (The benchmark also shows some other potential problem: Using a state
 monad to keep the set instead of a function argument can be up to 20
 times slower. Should that happen?)

 What do you think about ordNub?

 I've seen a proposal from 5 years ago about adding a *sort*Nub function
 started by Neil, but it just died.


 (*) The mentioned complexity is for the (very common) worst case, in
 which the number of different elements in the list grows with the list
 (i.e. you don't have an N-element list with always only 5 different
 things inside).

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ordNub

2013-07-14 Thread Clark Gaebel
Oops sorry I guess my point wasn't clear.

Why ord based when hashable is faster? Then there's no reason this has to
be in base, it can just be a free function in Data.HashSet. If stability is
a concern then there's a way to easily account for that using HashMap.

  - Clark
On Jul 14, 2013 7:48 AM, Niklas Hambüchen m...@nh2.me wrote:

 One of my main points is:

 Should we not add such a function (ord-based, same output as nub,
 stable, no sorting) to base?

 As the package counting shows, if we don't offer an alternative, people
 obviously use it, and not to our benefit.

 (Or to put it this way:
 We could make the Haskell world fast with smarter fusion, strictness
 analysis and LLVM backends.
 Or we could stop using quadratic algorithms.)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Efficiency/Evaluation Question

2013-06-15 Thread Clark Gaebel
Yes. In general, GHC won't CSE for you.
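
A common workaround is to share the repeated work explicitly instead of
relying on CSE; a sketch using the four values from the example (the name
'vals' is made up):

  -- The four trig results are computed once, then the list is cycled.
  vals :: [Double]
  vals = cycle [ sin x + 2 * cos x | x <- [0, 1, 2, 3] ]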

  - Clark

On Saturday, June 15, 2013, Christopher Howard wrote:

 On 06/15/2013 04:39 PM, Tommy Thorn wrote:
 
 
  There's not enough context to answer the specific question,
  but lazy evaluation isn't magic and the answer is probably no.
 
  Tommy
 

 Perhaps to simplify the question somewhat with a simpler example.
 Suppose you have

 code:
 
 let f x = if (x > 4) then f 0 else (sin x + 2 * cos x) : f (x + 1)
 

 After calculating at x={0,1,2,3}, and the cycle repeats, are sin, cos,
 etc. calculated anymore?

 --
 frigidcode.com


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Array, Vector, Bytestring

2013-06-03 Thread Clark Gaebel
How is this a problem?

If you're representing text, use 'text'.
If you're representing a string of bytes, use 'bytestring'.
If you want an array of values, think c++ and use 'vector'.
If you want to mutate arrays, first, make sure you do. You probably don't.
If you're sure, use MVector.

Don't use String, except to interface with legacy code. You probably want
'text'.
Don't use Array. Anything it can be used for, can be done with 'vector'.

  - Clark

This covers all the use-cases that I can think of.

On Monday, June 3, 2013, wrote:

 On Mon, 03 Jun 2013 19:16:08 +
 silvio silvio.fris...@gmail.com wrote:

  Hi everyone,
 
  Every time I want to use an array in Haskell, I find myself having to
  look up in the doc how they are used, which exactly are the modules I
  have to import ... and I am a bit tired of staring at type signatures
  for 10 minutes to figure out how these arrays work every time I use them
  (It's even worse when you have to write the signatures). I wonder how
  other people perceive this issue and what possible solutions could be.

 My opinion, it's every bit as bad you say it is...
 Not a clue as to what can be done about it.

 Probably yet another vector module.





 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Array, Vector, Bytestring

2013-06-03 Thread Clark Gaebel
That's absolutely true. Wrappers around vector for your multidimensional
access are probably best, but Vectors of Vectors are usually easier.

But again, you're right. Multidimensional access is a pain. If it's a
matrix of numerical values, you could take a look at 'hmatrix'.
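
For what it's worth, a minimal sketch of such a wrapper (the names and the
row-major layout are illustrative only, not from any library):

  import qualified Data.Vector as V

  -- A flat Vector with row-major 2-D indexing; assumes a non-empty,
  -- rectangular list of lists.
  data Matrix a = Matrix { rows :: Int, cols :: Int, cells :: V.Vector a }

  fromLists :: [[a]] -> Matrix a
  fromLists xss = Matrix (length xss) (length (head xss)) (V.fromList (concat xss))

  at :: Matrix a -> Int -> Int -> a
  at (Matrix _ c v) i j = v V.! (i * c + j)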

  - Clark

On Monday, June 3, 2013, Jason Dagit wrote:

 On Mon, Jun 3, 2013 at 7:45 PM, Clark Gaebel cgae...@uwaterloo.ca wrote:
  How is this a problem?
 
  If you're representing text, use 'text'.
  If you're representing a string of bytes, use 'bytestring'.
  If you want an array of values, think c++ and use 'vector'.
  If you want to mutate arrays, first, make sure you do. You probably
 don't.
  If you're sure, use MVector.
 
  Don't use String, except to interface with legacy code. You probably want
  'text'.
  Don't use Array. Anything it can be used for, can be done with 'vector'.

 You have to build multidimensional accessors for vector yourself.
 Array supports them out of the box. I still prefer vector, but it's
 only fair to note that multidimensional data is a weak spot of vector.

 Jason

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Call for Papers IFL 2013

2013-05-31 Thread Clark Gaebel
Well that's exciting! I really hope uu finds a student. This would be yet
another one of Haskell's killer features.

  - Clark

On Friday, May 31, 2013, wrote:


 ===
 VACANCY : 1x Phd Student in domain specific type error diagnosis for
 Haskell

 ===

 The activities of the Software Systems division at Utrecht University
 include
 research on programming methodologies, compiler construction, and program
 analysis, validation, and verification. For information about the research
 group of Software Technology, see:

  http://www.cs.uu.nl/wiki/Center

 Financed by the Netherlands Organisation for Scientific Research (NWO), we
 currently have a job opening for:

  * 1x PhD researcher (Ph D student) Software Technology

 Domain-specific languages (DSLs) have the potential both to reduce the
 effort of
 programming, and to result in programs that are easier to understand and
 maintain. For various good reasons, researchers have proposed to embed DSLs
 (then called EDSLs) into a general purpose host language. An important
 disadvantage of such an embedding is that it is very hard to make type
 error
 diagnosis domain-aware, because inconsistencies are by default explained in
 terms of the host language. We are currently looking for a highly motivated
 Ph D student to investigate this problem in the context of the functional
 language Haskell.

 The basic approach is to scale the concept of specialized type rules as
 developed by (Heeren, Hage and Swierstra, ICFP '03, see link below) for
 Haskell '98 to modern day Haskell with all of its type system extensions.
 The work is both technically challenging, i.e., how do you ensure that
 modifications to the type diagnostic process do not inadvertently change
 the
 type system, and practically immediately useful:  making domain-specific
 type
 error diagnosis a reality for a full sized language such as Haskell is
 likely
 to have a pervasive influence on the field of domain-specific languages,
 and
 the language Haskell.

 The ICFP '03 paper can be found at

 http://www.cs.uu.nl/people/jur/scriptingthetypeinferencer.pdf

 A project paper that describes the context and aims of the current project
 can
 be found here:

 http://www.cs.uu.nl/people/jur/tfp2013_submission_2.pdf

 At first, the work will be prototyped in our own Utrecht Haskell Compiler.
 If
 successful, the work will also make its way into GHC.

 We expect the candidate to communicate the results academically, to
 present the
 work at scientific conferences, to supervise Master students, and to
 assist in
 teaching courses at Bachelor or Master level.

 -
 What we are looking for
 -

 The candidate should have an MSc in Computer Science, be highly motivated,
 speak and write English very well, and be proficient in producing
 scientific
 reports. Knowledge of and experience with at least one of the following two
 areas is essential:

   * functional programming, and Haskell in particular
   * type system concepts

 Furthermore, we expect the candidate to be able to reason formally.
 Experience in compiler construction is expected to be useful in this
 project.

 -
 What we offer
 -

 You are offered a full-time position for 4 years. The gross salary is in
 the
 range between € 2083,- and maximum € 2664,- per month. The salary is
 supplemented
 with a holiday bonus of 8% and an end-of-year bonus of 8,3% per year.

 In addition we offer: a pension scheme, a partially paid parental leave,
 flexible employment conditions. Conditions are based on the Collective
 Labour Agreement Dutch Universities.

 We aim to start November 1, 2013 at the latest, but preferably sooner.

 -
 In order to apply
 -

 To apply please attach a letter of motivation, a curriculum vitae, and
 (email)
 addresses of two referees. Make sure to also include a transcript of the
 courses
 you have followed (at bachelor and master level), with the grades you
 obtained, and to include a sample of your scientific writing, e.g., the
 pdf of
 your master thesis.

 It is possible to apply for this position if you are close to obtaining
 your Master's. In that case include a letter of your supervisor with an
 estimate
 of your progress, and do not forget to include at least a sample of your
 technical writing skills.

 Application closes on the 20th of June 2013.

 For application, visit http://www.cs.uu.nl/vacatures/en/583630.html and
 follow the link to the official job application page at the bottom.

 ---
 Contact person
 ---

 For further information you can direct your inquiries to:

  Dr. Jurriaan Hage
  Phone: (+31) 30 253 3283
  e-mail: j.h...@uu.nl.
  

Re: [Haskell-cafe] hackage update brigade (was Re: ANNOUNCE: new bridge! (prelude-prime))

2013-05-27 Thread Clark Gaebel
I'd be down for helping update packages when the time comes.
On May 27, 2013 12:08 PM, Evan Laforge qdun...@gmail.com wrote:

  Yes, it would break code.  Probably a lot of code.

 So of course I volunteer to fix my code, but that's not much help,
 since it's a small minority of the code on hackage.  So that made me
 think, maybe we should organize a kind of hackage community service
 brigade, which, when the time is right, would spring into action.
 They would download sections of hackage, update the code, and send
 patches to the maintainers.  Then they'd keep track of which packages
 actually applied the patch, and that could go on an update status
 page.

 I'd sign up for that.

 Of course, package maintainers might prefer to do the change
 themselves, or more likely may be incommunicado.  But the presence of
 a small army of organized volunteers waiting to update code might
 reduce friction to make necessary changes, and would take the weight
 off the shoulders of the few who wind up doing it anyway when a new
 ghc comes out.

 If people think this is a good idea, I volunteer to do the setup.  I
 guess this means a short doc describing the process, and then a place
 where volunteers can sign up, and then keep the volunteer list up to
 date.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Non-deterministic behaviour of aeson's parser

2013-05-18 Thread Clark Gaebel
Can't reproduce with aeson 0.6.1.0 and GHC 7.6.3.

pkg-list output can be found at http://pastebin.com/Zuuujcaz

On Saturday, May 18, 2013, Niklas Hambüchen wrote:

 Can't reproduce:

 % ./aeson | sort | uniq -c
2000 Right ()
 % ./aeson | sort | uniq -c
2000 Right ()
 % ./aeson | sort | uniq -c
2000 Right ()
 % ./aeson | sort | uniq -c
2000 Right ()
 % ./aeson | sort | uniq -c
2000 Right ()

 Time 100:

 % ./aeson | sort | uniq -c
  20 Right ()


 My packages:

 % ghc-pkg list
 /var/lib/ghc/package.conf.d
Cabal-1.16.0
array-0.4.0.1
base-4.6.0.1
bin-package-db-0.0.0.0
binary-0.5.1.1
bytestring-0.10.0.2
containers-0.5.0.0
deepseq-1.3.0.1
directory-1.2.0.1
filepath-1.3.0.1
ghc-7.6.2
ghc-prim-0.3.0.0
haskell2010-1.1.1.0
haskell98-2.0.0.2
hoopl-3.9.0.0
hpc-0.6.0.0
integer-gmp-0.5.0.0
old-locale-1.0.0.5
old-time-1.1.0.1
pretty-1.1.1.0
process-1.1.0.2
rts-1.0
template-haskell-2.8.0.0
time-1.4.0.1
unix-2.6.0.1
 /home/niklas/.ghc/x86_64-linux-7.6.2/package.conf.d
HTTP-4000.2.8
HUnit-1.2.5.2
QuickCheck-2.6
Xauth-0.1
aeson-0.6.1.0
ansi-terminal-0.6
attoparsec-0.10.4.0
attoparsec-binary-0.2
base-unicode-symbols-0.2.2.4
blaze-builder-0.3.1.1
byteorder-1.0.4
cipher-aes-0.1.8
convertible-1.0.11.1
cpphs-1.16
dlist-0.5
ghc-paths-0.1.0.9
ghc-syb-utils-0.2.1.1
hashable-1.2.0.6
haskell-lexer-1.0
haskell-src-exts-1.13.5
hidapi-1.0
hlint-1.8.44
hscolour-1.20.3
hspec-1.5.4
hspec-expectations-0.3.2
io-choice-0.0.3
lifted-base-0.2.0.4
monad-control-0.3.2.1
mtl-2.1.2
network-2.4.1.2
parsec-3.1.3
pretty-show-1.5
primitive-0.5.0.1
quickcheck-io-0.1.0
random-1.0.1.1
robot-1.0.1.1
robot-1.1
setenv-0.1.0
stm-2.4.2
storable-record-0.0.2.5
syb-0.4.0
text-0.11.3.0
transformers-0.3.0.0
transformers-base-0.4.1
uniplate-1.6.10
unordered-containers-0.2.3.1
utility-ht-0.0.9
vector-0.10.0.1
vector-th-unbox-0.2.0.1
xhb-0.5.2012.11.23
zlib-0.5.4.1


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Non-deterministic behaviour of aeson's parser

2013-05-18 Thread Clark Gaebel
$ uname -a
Linux clark-laptop 3.9.0-2-ARCH #1 SMP PREEMPT Tue Apr 30 09:48:29 CEST
2013 x86_64 GNU/Linux

It'd take too long for my helpfulness to build with cabal install -fsse2
hashable and rebuild an environment.

If someone writes a bash script to do it (using cabal-dev please!), I'd be
more than happy to run it and post the results.

  - Clark

On Saturday, May 18, 2013, Gregory Collins wrote:

 First off, everyone reporting results to this thread: your bug report
 would be much more helpful if you included your OS/architecture/GHC version
 combo, as well as the results of re-running the tests if you build
 hashable with cabal install -f-sse2.

 I have a funny feeling that this is a bug in hashable or
 unordered-containers. I'm guessing hashable, especially because of this:

 https://github.com/tibbe/hashable/issues/66

 and because hashable has had subtle bugs in its C code before (see
 https://github.com/tibbe/hashable/issues/60).

 G

 On Sat, May 18, 2013 at 6:25 PM, Roman Cheplyaka r...@ro-che.info wrote:

 I am observing a non-deterministic behaviour of aeson's parser.

 I'm writing here in addition to filing a bug report [1] to draw
 attention to this (pretty scary) problem.

 To try to reproduce this problem, do this:

   git clone https://gist.github.com/5604887.git aeson
   cd aeson
   ghc aeson.hs
   ./aeson | sort | uniq -c

 This is my result:

 32 Left key \module\ not present
 55 Left When parsing the record SymValue of type Main.SymValueInfo
 the key fixity was not present.
   1913 Right ()

 Can others reproduce this in their environments?

 Does anyone have ideas about where the bug may lie?
 Many aeson's dependencies do unsafe IO stuff that could lead to
 such consequences.

 Roman

 [1]: https://github.com/bos/aeson/issues/125

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




 --
 Gregory Collins g...@gregorycollins.net

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Parallel ghc --make

2013-05-15 Thread Clark Gaebel
It's also useful to note that the disk cache might do a surprisingly good
job at caching those .hi files for you. That, and a lot of people (like
me!) use SSDs, where the parallel compilation takes the vast majority of
time.

I'd be really excited to see parallel ghc --make.

By the way, totally unrelated, but why does cabal support -j when cabal-dev
doesn't?

  - Clark

On Wednesday, May 15, 2013, Niklas Hambüchen wrote:

 Hello Thomas,

 thanks for your detailed answer.

  Could be worthwhile re-evaluating the patch.

 Does your patch still apply somewhat cleanly?
 And does it address all the caches in your list already or only some
 subset of them?

  To have a multi-process ghc --make you don't need thread-safety.
  However, without sharing the caches -- in particular the interface
  file caches -- the time to read data from the disk may outweigh any
  advantages from parallel execution.

 That might be a big step already - I've never seen a project where I'd
 care about parallel compilation that is not totally CPU-bound.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Parallel ghc --make

2013-05-15 Thread Clark Gaebel
Oh! That works quite nicely.

It's not supported for install-deps. I assumed that it just wasn't
implemented.

I should open a ticket.

Thanks!
  - Clark

On Wednesday, May 15, 2013, Clark Gaebel wrote:

 It's also useful to note that the disk cache might do a surprisingly good
 job at caching those .hi files for you. That, and a lot of people (like
 me!) use SSDs, where the parallel compilation takes the vast majority of
 time.

 I'd be really excited to see parallel ghc --make.

 By the way, totally unrelated, but why does cabal support -j when
 cabal-dev doesn't?

   - Clark

 On Wednesday, May 15, 2013, Niklas Hambüchen wrote:

 Hello Thomas,

 thanks for your detailed answer.

  Could be worthwhile re-evaluating the patch.

 Does your patch still apply somewhat cleanly?
 And does it address all the caches in your list already or only some
 subset of them?

  To have a multi-process ghc --make you don't need thread-safety.
  However, without sharing the caches -- in particular the interface
  file caches -- the time to read data from the disk may outweigh any
  advantages from parallel execution.

 That might be a big step already - I've never seen a project where I'd
 care about parallel compilation that is not totally CPU-bound.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Where is haskell-platform?

2013-05-10 Thread Clark Gaebel
I'm looking for the version of haskell platform that was supposed to be
released May 6. It seems like it isn't out yet. What's preventing this from
happening, and is there anything I can do to help?

Regards,
  - Clark
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hackage checking maintainership of packages

2013-05-06 Thread Clark Gaebel
Deepseq comes to mind regarding a perfect package that doesn't require
active maintenance.

  - Clark


On Mon, May 6, 2013 at 2:21 PM, Petr Pudlák petr@gmail.com wrote:

 2013/5/6 Tillmann Rendel ren...@informatik.uni-marburg.de

 Petr Pudlák wrote:

  -- Forwarded message --
 From: *Niklas Hambüchen* m...@nh2.me mailto:m...@nh2.me
 Date: 2013/5/4
 ...
 I would even be happy with newhackage sending every package
 maintainer a
 quarterly question Would you still call your project X
 'maintained'?
 for each package they maintain; Hackage could really give us better
 indications concerning this.


 This sounds to me like a very good idea. It could be as simple as "If
 you consider yourself to be the maintainer of package X, please just hit
 reply and send." If Hackage doesn't get an answer, it'd just display
 some red text like "This package seems to be unmaintained since
 D.M.Y."


 I like the idea of displaying additional info about the status of package
 development, but I don't like the idea of annoying hard-working package
 maintainers with emails about their perfect packages that actually didn't
 need any updates since ages ago.


 I understand, but replying to an email with an empty body or clicking on a
 link once in a few months doesn't seem to be an issue for me. And if
 somebody is very busy and doesn't update the package, it's more fair to
 signal from the start that (s)he doesn't want to maintain the package.

 Personally it happened to me perhaps several times that I used a promising
 package and discovered later that it's not being maintained. I'd say that
 the amount of time required to confirm if authors maintain their packages
 is negligible compared to the amount of time people lose this way.

 Just out of curiosity, do you have some examples of such packages, that
 are being maintained, but not updated since they're near perfect? I'd like
 to know if this is a real issue. It seems to me



 So what about this: Hackage could try to automatically collect and
 display information about the development status of packages that allow
 potential users to *guess* whether the package is maintained or not.
 Currently, potential users have to collect this information themselves.

 Here are some examples I have in mind:

  * Fetch the timestamp of the latest commit from the HEAD repo
  * Fetch the number of open issues from the issue tracker
  * Display reverse dependencies on the main hackage page
  * Show the timestamp of the last Hackage upload of the uploader

 Tillmann


 Those are good ideas. Some suggestions:

 I think we already have the timestamp of each upload, this already gives
 some information. Perhaps we could add a very simple feature saying how
 long ago that was and adding a warning color (like yellow if more than a
 year and red if more than two years).

 Reverse dependencies would certainly help a lot, but it works only for
 libraries, not for programs. (Although it's less likely that someone would
 search hackage for programs.)

 The problem with issue trackers is that (a) many packages don't have one,
 (b) there are many different issue trackers.


 Best regards,
 Petr

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hackage checking maintainership of packages

2013-05-05 Thread Clark Gaebel
If there's a github link in the package url, it could check the last update
to the default branch. If it's more than 6 months ago, an email to the
maintainer of is this package maintained? can be sent. If there's no
reply in 3 months, the package is marked as unmaintained. If the email is
ever responded to or a new version is uploaded, the package can be
un-marked.
  - Clark
On Sunday, May 5, 2013, Lyndon Maydwell wrote:

 I've got it!

 The answer was staring us in the face all along... We can just introduce
 backwards-compatibility breaking changes into GHC-head and see if the
 project fails to compile for x-time! That way we're SURE it's unmaintained.

 I'll stop sending emails now.


 On Mon, May 6, 2013 at 10:44 AM, Clark Gaebel cgae...@uwaterloo.ca wrote:

 If there's a github link in the package url, it could check the last
 update to the default branch. If it's more than 6 months ago, an email to
 the maintainer of is this package maintained? can be sent. If there's no
 reply in 3 months, the package is marked as unmaintained. If the email is
 ever responded to or a new version is uploaded, the package can be
 un-marked.

   - Clark


 On Sunday, May 5, 2013, Lyndon Maydwell wrote:

 But what if the package is already perfect?

 Jokes aside, I think that activity alone wouldn't be a good indicator.


 On Mon, May 6, 2013 at 9:59 AM, Conrad Parker con...@metadecks.org wrote:

 On 6 May 2013 09:42, Felipe Almeida Lessa felipe.le...@gmail.com wrote:
  Just checking the repo wouldn't work.  It may still have some activity
  but not be maintained and vice-versa.

 ok, how about this: if the maintainer feels that their repo and
 maintenance activities are non-injective they can additionally provide
 an http-accessible URL for the maintenance activity. Hackage can then
 do an HTTP HEAD request on that URL and use the Last-Modified response
 header as an indication of the last time of maintenance activity. I'm
 being a bit tongue-in-cheek, but actually this would allow you to
 point hackage to a blog as evidence of maintenance activity.

 I like the idea of just pinging the code repo.

 Conrad.

  On Sun, May 5, 2013 at 2:19 PM, Doug Burke dburke...@gmail.com wrote:
 
  On May 5, 2013 7:25 AM, Petr Pudlák petr@gmail.com wrote:
 
  Hi,
 
  on another thread there was a suggestion which perhaps went unnoticed
 by
  most:
 
  -- Forwarded message --
  From: Niklas Hambüchen m...@nh2.me
  Date: 2013/5/4
  ...
  I would even be happy with newhackage sending every package
 maintainer a
  quarterly question Would you still call your project X 'maintained'?
  for each package they maintain; Hackage could really give us better
  indications concerning this.
 
 
   This sounds to me like a very good idea. It could be as simple as "If
   you consider yourself to be the maintainer of package X, please just
   hit reply and send." If Hackage doesn't get an answer, it'd just
   display some red text like "This package seems to be unmaintained
   since D.M.Y."
 
  Best regards,
  Petr
 
 
  For those packages that give a repository, a query could be done
  automatically to see when it was last updated. It's not the same thing
 as
  'being maintained', but is less annoying for those people with many
 packages
  on hackage.
 
  Doug
 
 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 
 
 
 
  --
  Felipe.
 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monad Transformer Space Leak

2013-04-23 Thread Clark Gaebel
I'm on 7.6.2, and it does. Oh no.

  - Clark

On Tuesday, April 23, 2013, Tom Ellis wrote:

 On Tue, Apr 23, 2013 at 09:36:04AM +0200, Petr Pudlák wrote:
  I tested it on GHC 6.12.1, which wasn't affected by the recent
 ackermann
  bug, but still it leaks memory.

 I tested it on GHC 7.4.1 and I don't see any space leak.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Monad Transformer Space Leak

2013-04-22 Thread Clark Gaebel
Hi everyone!

For some reason, this leaks thunks:

module Main where

import Control.Monad
import Control.Monad.MC -- from monte-carlo
import Control.Monad.ST.Strict

go :: Int -> MCT (ST s) ()
go k = replicateM_ k (return ())

main = print $ runST $ evalMCT (go 1) rng
  where
    rng = mt19937 0

while this does not:

module Main where

import Control.Monad
import Control.Monad.MC

go :: Int -> MC ()
go k = replicateM_ k (return ())

main = print $ evalMC (go 1) rng
  where
    rng = mt19937 0

Can anyone help me figure out what's going on here?

Thanks,
  - Clark
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monad Transformer Space Leak

2013-04-22 Thread Clark Gaebel
More interestingly, the problem goes away if I enable profiling. That's
kind of worrisome.

  - Clark

On Monday, April 22, 2013, Clark Gaebel wrote:

 Hi everyone!

 For some reason, this leaks thunks:

 module Main where

 import Control.Monad
 import Control.Monad.MC -- from monte-carlo
 import Control.Monad.ST.Strict

 go :: Int -> MCT (ST s) ()
 go k = replicateM_ k (return ())

 main = print $ runST $ evalMCT (go 1) rng
   where
     rng = mt19937 0

 while this does not:

 module Main where

 import Control.Monad
 import Control.Monad.MC

 go :: Int -> MC ()
 go k = replicateM_ k (return ())

 main = print $ evalMC (go 1) rng
   where
     rng = mt19937 0

 Can anyone help me figure out what's going on here?

 Thanks,
   - Clark

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monad Transformer Space Leak

2013-04-22 Thread Clark Gaebel
I don't have a copy of GHC HEAD handy, and don't have the time to set up
the ecosystem myself to test this one bug.

Would someone else with a copy lying around mind testing it out for me?

Thanks,
  - Clark

On Monday, April 22, 2013, Joachim Breitner wrote:

 Hi,

 On Monday, 22.04.2013, at 16:44 -0400, Clark Gaebel wrote:
  More interestingly, the problem goes away if I enable profiling.
  That's kind of worrisome.

 this part sounds similar to the recently discussed problem with the
 Ackermann function (http://hackage.haskell.org/trac/ghc/ticket/7850) –
 maybe your code is only allocating stacks and nothing else? In that case
 you can try with GHC HEAD and see if the problem is fixed.

 Greetings,
 Joachim


 --
 Joachim nomeata Breitner
 Debian Developer
   nome...@debian.org | ICQ# 74513189 | GPG-Keyid: 4743206C
   JID: nome...@joachim-breitner.de |
 http://people.debian.org/~nomeata


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] primitive operations in StateT

2013-04-16 Thread Clark Gaebel
*sigh* nevermind.

I found it. Turns out there was:

liftMCT :: (Monad m) => MC a -> MCT m a

in an unexported module in the monte-carlo package all along. I just need
to export it and I'll be good to go.

Thanks for your help!
  - Clark

On Tuesday, April 16, 2013, Clark Gaebel wrote:

 The monad my code is currently written in is:

 type MC = MCT Identity -- where MCT is the monad transformer version of it.

 I have two options for threading state through this:

 MCT (ST s) a
 StateT s MC a

 The first option would work if I had some function with the signature

 MCT Identity a -> MCT (ST s) a

 but I know of no such function, and the second one would work if I had
 some way of making StateT a member of PrimMonad.

 Can I see an example with 'lift'?

   - Clark

 On Tuesday, April 16, 2013, Ivan Lazar Miljenovic wrote:

 On 16 April 2013 15:04, Clark Gaebel cgae...@uwaterloo.ca wrote:
  Hi list!
 
  I want to use MVectors in a StateT monad transformer.
 
  How do I do that? StateT isn't a member of 'PrimMonad', and I have no
 idea
  how to make it one.

 You can use Control.Monad.Trans.lift to lift the PrimMonad operations
 to PrimMonad m => StateT s m

 
  Regards,
- Clark
 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 



 --
 Ivan Lazar Miljenovic
 ivan.miljeno...@gmail.com
 http://IvanMiljenovic.wordpress.com


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] trying to understand out of memory exceptions

2013-04-16 Thread Clark Gaebel
See the comment for hGetContents:

This function reads chunks at a time, doubling the chunk size on each read.
The final buffer is then realloced to the appropriate size. For files >
half of available memory, this may lead to memory exhaustion. Consider
using readFile
(http://hackage.haskell.org/packages/archive/bytestring/0.9.2.1/doc/html/Data-ByteString.html#v:readFile)
in this case.

http://hackage.haskell.org/packages/archive/bytestring/0.9.2.1/doc/html/Data-ByteString-Char8.html#g:31

Maybe try lazy bytestrings?
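
For example, a rough sketch of the lazy-ByteString variant (untested against
the original program; the index position is arbitrary):

  import qualified Data.ByteString.Lazy.Char8 as BL
  import System.IO (openBinaryFile, IOMode(ReadMode))

  main :: IO ()
  main = do
    file  <- openBinaryFile "/dev/zero" ReadMode
    chars <- BL.hGetContents file
    print (BL.index chars 1000000)  -- reads lazily, in fixed-size chunks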

  - Clark

On Tuesday, April 16, 2013, Anatoly Yakovenko wrote:

 -- So why does this code run out of memory?

 import Control.DeepSeq
 import System.IO
 import qualified Data.ByteString.Char8 as BS

 scanl' :: NFData a => (a -> b -> a) -> a -> [b] -> [a]
 scanl' f q ls = q : (case ls of
     []   -> []
     x:xs -> let q' = f q x
             in q' `deepseq` scanl' f q' xs)


 main = do
   file  <- openBinaryFile "/dev/zero" ReadMode
   chars <- BS.hGetContents file
   let rv = drop 1000 $ scanl' (+) 0 $ map fromEnum $ BS.unpack chars
   print (head rv)

 -- my scanl' implementation seems to do the right thing, because

 main = print $ last $ scanl' (+) (0::Int) [0..]

 -- runs without blowing up.  so am I creating some thunk here?  or is
 hGetContents storing values?  any way to get the exception handler to print
 a trace of what caused the allocation?


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] trying to understand out of memory exceptions

2013-04-16 Thread Clark Gaebel
Have you tried the lazy bytestring version?

http://hackage.haskell.org/packages/archive/bytestring/0.10.2.0/doc/html/Data-ByteString-Lazy-Char8.html#g:29

  - Clark

On Tuesday, April 16, 2013, Anatoly Yakovenko wrote:

 unfortunately readFile tries to get the file size

 readFile :: FilePath -> IO ByteString
 readFile f = bracket (openFile f ReadMode) hClose
                      (\h -> hFileSize h >>= hGet h . fromIntegral)


 which won't work on a special file, like a socket, which is what I am
 trying to simulate here.



 On Tue, Apr 16, 2013 at 11:28 AM, Clark Gaebel cg.wowus...@gmail.com wrote:

 See the comment for hGetContents:

  This function reads chunks at a time, doubling the chunk size on each
  read. The final buffer is then realloced to the appropriate size. For
  files > half of available memory, this may lead to memory exhaustion.
  Consider using readFile
  (http://hackage.haskell.org/packages/archive/bytestring/0.9.2.1/doc/html/Data-ByteString.html#v:readFile)
  in this case.


 http://hackage.haskell.org/packages/archive/bytestring/0.9.2.1/doc/html/Data-ByteString-Char8.html#g:31

 Maybe try lazy bytestrings?

   - Clark

 On Tuesday, April 16, 2013, Anatoly Yakovenko wrote:

 -- So why does this code run out of memory?

 import Control.DeepSeq
 import System.IO
 import qualified Data.ByteString.Char8 as BS

  scanl' :: NFData a => (a -> b -> a) -> a -> [b] -> [a]
  scanl' f q ls = q : (case ls of
      []   -> []
      x:xs -> let q' = f q x
              in q' `deepseq` scanl' f q' xs)


  main = do
    file  <- openBinaryFile "/dev/zero" ReadMode
    chars <- BS.hGetContents file
    let rv = drop 1000 $ scanl' (+) 0 $ map fromEnum $ BS.unpack chars
    print (head rv)

 -- my scanl' implementation seems to do the right thing, because

 main = print $ last $ scanl' (+) (0::Int) [0..]

  -- runs without blowing up.  so am I creating some thunk here?  or is
 hGetContents storing values?  any way to get the exception handler to print
 a trace of what caused the allocation?



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] primitive operations in StateT

2013-04-15 Thread Clark Gaebel
Hi list!

I want to use MVectors in a StateT monad transformer.

How do I do that? StateT isn't a member of 'PrimMonad', and I have no idea
how to make it one.

Regards,
  - Clark
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] primitive operations in StateT

2013-04-15 Thread Clark Gaebel
The monad my code is currently written in is:

type MC = MCT Identity -- where MCT is the monad transformer version of it.

I have two options for threading state through this:

MCT (ST s) a
StateT s MC a

The first option would work if I had some function with the signature

MCT Identity a -> MCT (ST s) a

but I know of no such function, and the second one would work if I had some
way of making StateT a member of PrimMonad.

Can I see an example with 'lift'?
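
Something along these lines, for instance (a sketch with made-up names; the
MVector operations are the PrimMonad actions being lifted):

import Control.Monad (forM_)
import Control.Monad.ST (runST)
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.State (evalStateT, get, put)
import qualified Data.Vector.Unboxed.Mutable as M

-- Sum the elements of a mutable vector while also threading an Int state.
-- Every MVector operation runs in the underlying PrimMonad (here ST) and
-- is lifted into StateT with 'lift'.
sumOnes :: Int -> Int
sumOnes n = runST $ do
  v <- M.replicate n (1 :: Int)
  flip evalStateT 0 $ do
    forM_ [0 .. n - 1] $ \i -> do
      x <- lift (M.read v i)
      s <- get
      put (s + x)
    get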

  - Clark

On Tuesday, April 16, 2013, Ivan Lazar Miljenovic wrote:

 On 16 April 2013 15:04, Clark Gaebel cgae...@uwaterloo.ca wrote:
  Hi list!
 
  I want to use MVectors in a StateT monad transformer.
 
  How do I do that? StateT isn't a member of 'PrimMonad', and I have no
 idea
  how to make it one.

 You can use Control.Monad.Trans.lift to lift the PrimMonad operations
 to PrimMonad m => StateT s m

 
  Regards,
- Clark
 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 



 --
 Ivan Lazar Miljenovic
 ivan.miljeno...@gmail.com
 http://IvanMiljenovic.wordpress.com

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] multivariate normal distribution in Haskell?

2013-04-14 Thread Clark Gaebel
Is [1] what you're looking for (see the 'multinormal' function)?

monte-carlo's pretty great... :)

  - Clark

[1]
http://hackage.haskell.org/packages/archive/monte-carlo/0.4.2/doc/html/Control-Monad-MC-Class.html#t:RNG


On Sat, Apr 13, 2013 at 9:26 AM, Bas de Haas w.b.deh...@uu.nl wrote:

 Dear List,

 I’m implementing a probabilistic model for recognising musical chords in
 Haskell. This model relies on a multivariate normal distribution. I’ve been
 searching the internet and mainly hackage for a Haskell library to do this
 for me, but so far I’ve been unsuccessful.

 What I’m looking for is a Haskell function that does exactly what the
 mvnpdf function in matlab does:
 http://www.mathworks.nl/help/stats/multivariate-normal-distribution.html

 Does anyone know a library that can help me out?

 Thanks.

 Kind regards,
 Bas de Haas

 --
 dr. W. Bas de Haas
 Department of Information and Computing Sciences
 Utrecht University

 E: w.b.deh...@uu.nl
 T: +31 30 253 5965
 I: http://www.uu.nl/staff/WBdeHaas/

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Type level natural numbers

2013-04-03 Thread Clark Gaebel
Where is [1]?


On Wed, Apr 3, 2013 at 3:42 PM, Mateusz Kowalczyk
fuuze...@fuuzetsu.co.ukwrote:


 About two weeks ago we got an email (at ghc-users) mentioning that
 comparing to 7.6, 7.7.x snapshot would contain (amongst other things),
 type level natural numbers.

 I believe the package used is at [1].

 Can someone explain what use such a package is in Haskell? I understand
 its uses in a language such as Agda, where we can provide proofs about a
 type and then use that to perform computations using the type system
 (such as guaranteeing that concatenating two vectors will give a new one
 whose length is the lengths of the two initial vectors combined);
 however, as far as I can tell, this is not the case in Haskell
 (although I don't want to say "impossible" and have Oleg jump me).
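
 For instance, a small sketch of what the type-level literals allow
 (illustrative names, using the later Proxy-based GHC.TypeLits API rather
 than the 7.7 snapshot's):

 {-# LANGUAGE DataKinds, KindSignatures, TypeOperators, ScopedTypeVariables #-}
 import GHC.TypeLits
 import Data.Proxy

 newtype Vec (n :: Nat) a = Vec [a]

 -- Recover the type-level length as an ordinary Integer.
 vlength :: forall n a. KnownNat n => Vec n a -> Integer
 vlength _ = natVal (Proxy :: Proxy n)

 -- Appending adds the lengths at the type level.
 vappend :: Vec n a -> Vec m a -> Vec (n + m) a
 vappend (Vec xs) (Vec ys) = Vec (xs ++ ys)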

 --
 Mateusz K.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A Thought: Backus, FP, and Brute Force Learning

2013-03-20 Thread Clark Gaebel
Reading papers might not be the best way to get started with Haskell. It'll
be a great way to expand your knowledge later, but they're generally not
written to give the reader an introduction to functional programming.

I highly recommend Learn You A Haskell [1]. It is extremely well written.

Regards,
  - Clark

[1] http://learnyouahaskell.com


On Wed, Mar 20, 2013 at 6:59 PM, OWP owpmail...@gmail.com wrote:

 I made an error.  I meant FP to stand for Functional Programming, the
 concept not the language.

 On Wed, Mar 20, 2013 at 6:54 PM, OWP owpmail...@gmail.com wrote:

 This thought isn't really related to Haskell specifically but it's more
 towards FP ideal in general.

 I'm new to the FP world and to get me started, I began reading a few
 papers.  One paper is by John Backus, called "Can Programming Be Liberated
 from the von Neumann Style? A Functional Style and Its Algebra of
 Programs".

 While I like the premise which notes the limitation of the von Neumann
 Architecture, his solution to this problem makes me feel queasy when I read
 it.

 For me personally, one thing I enjoy about a typical procedural program
 is that it allows me to Brute Force Learn.  This means I stare at a
 particular section of the code for a while until I figure out what it
 does.  I may not know the reasoning behind it but I can have a pretty
 decent idea of what it does.  If I'm lucky, later on someone may tell me
 "oh, that just did a gradient of such and such matrix."  In a way, I feel
 happy I learned something highly complex without knowing I learned
 something highly complex.

 Backus seems to throw that out the window.  He introduces major new terms
 which require me to break out the math book which then requires me to break
 out a few other books to figure out which bases things using archaic
 symbols which then requires me to break out the pen and paper to mentally
 expand what in the world that does.  It makes me feel CISCish except
 without a definition book nearby.  It's nice if I already knew what a
 gradient of such and such matrix is but what happens if I don't?

 For the most part, I like the idea that I have the option of Brute Force
 Learning my way towards something.  I also like the declarative aspect of
 languages such as SQL which lets me ask the computer for things once I
 know the meaning of what I'm asking.  I like the ability to play and learn
 but I also like the ability to declare this or that once I do learn.  From
 Backus's paper, if his world comes to reality, it seems like I should know
 what I'm doing before I even start.  The ability to learn while coding
 seems to have disappeared.  In a way, if the von Neumann bottleneck wasn't
 there, I'm not sure programming would be as popular as it is today.

 Unfortunately, I'm still very new and quite ignorant about Haskell so I
 do not know how much of Backus is incorporated in Haskell but so far, in
 the start of my FP learning adventure, this is how things seem to be seen.

 If I may generously ask, where am I wrong and where am I right with this
 thought?

 Thank you for any explanation

 P.S.  If anyone knows of a better place I can ask this question, please
 feel free to show me the way.



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANN: Nomyx 0.1 beta, the game where you can change the rules

2013-02-27 Thread Clark Gaebel
You could just hash it.
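
Or keep a table from an unguessable random token to the user id, as Erik
describes below; a rough sketch (illustrative names, and a real application
should use a cryptographically secure generator):

import qualified Data.Map as Map
import System.Random (randomRIO)
import Control.Monad (replicateM)

type Token    = String
type UserId   = Int
type Sessions = Map.Map Token UserId

-- Create a fresh random token for a user and record it in the table.
newSession :: UserId -> Sessions -> IO (Token, Sessions)
newSession uid sessions = do
  tok <- replicateM 32 (randomRIO ('a', 'z'))  -- toy token, not a CSPRNG
  return (tok, Map.insert tok uid sessions)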

  - Clark


On Wed, Feb 27, 2013 at 2:08 PM, Corentin Dupont
corentin.dup...@gmail.com wrote:

 So I need to encrypt the user ID in some way? What I need is to
 associate the user ID with a random number and store the association in a
 table?



 On Wed, Feb 27, 2013 at 3:52 PM, Erik Hesselink hessel...@gmail.com wrote:

 Note that cookies are not the solution here. Cookies are just as user
 controlled as the url, just less visible. What you need is a session
 id: a mapping from a non-consecutive, non-guessable, secret token to
 the user id (which is sequential and thus guessable, and often exposed
 in urls etc.). It doesn't matter if you then store it in the url or a
 cookie. Cookies are just more convenient.
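
 A minimal sketch of that kind of token table in Haskell (illustration only:
 newSession and lookupSession are made-up names, and System.Random is not
 cryptographically strong, so a real server would want a better entropy source):

    import qualified Data.Map as M
    import System.Random (randomRIO)
    import Control.Monad (replicateM)

    type Token  = String
    type UserId = Int

    -- Mint a fresh, hard-to-guess token and remember which user it belongs to.
    newSession :: UserId -> M.Map Token UserId -> IO (Token, M.Map Token UserId)
    newSession uid sessions = do
      tok <- replicateM 32 (randomRIO ('a', 'z'))
      return (tok, M.insert tok uid sessions)

    -- Later requests present the token (via a cookie or the url) and we look the user up.
    lookupSession :: Token -> M.Map Token UserId -> Maybe UserId
    lookupSession = M.lookup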

 Erik

 On Wed, Feb 27, 2013 at 3:30 PM, Corentin Dupont
 corentin.dup...@gmail.com wrote:
  Yes, having a cookie to keep track of the session if something I plan
 to do.
 
  On Wed, Feb 27, 2013 at 3:16 PM, Mats Rauhala mats.rauh...@gmail.com
  wrote:
 
  The user id is not necessarily the problem, but rather that you can
  pose as another user. For this, one solution is to keep track of a
  unique (changing) user token in the cookies and use that for verifying
  the user.
 
  --
  Mats Rauhala
  MasseR
 
  -BEGIN PGP SIGNATURE-
  Version: GnuPG v1.4.10 (GNU/Linux)
 
  iEYEARECAAYFAlEuFVQACgkQHRg/fChhmVMu3ACeLLjbluDQRYekIA2XY37Xbrql
  tH0An1eQHrLLxCjHHBQcZKmy1iYxCxTt
  =tf0d
  -END PGP SIGNATURE-
 
 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 
 
 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] why no replace function in our regular expression libs ?

2013-01-25 Thread Clark Gaebel
I've needed this recently, too.

End result: I wrote an Attoparsec.Parser Text, and ran the text through
that.
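
Roughly the shape of that approach, sketched from memory rather than the actual
parser (replaceAll is a made-up name; it assumes a non-empty needle):

    import Control.Applicative (many, (<|>), (<$>), (*>), pure)
    import Data.Attoparsec.Text (parseOnly, string, anyChar)
    import qualified Data.Text as T

    -- Replace every occurrence of `needle` with `sub` in the input.
    replaceAll :: T.Text -> T.Text -> T.Text -> T.Text
    replaceAll needle sub input =
      case parseOnly (T.concat <$> many chunk) input of
        Left _    -> input    -- can't actually fail: chunk always consumes a character
        Right out -> out
      where
        chunk = (string needle *> pure sub)
            <|> (T.singleton <$> anyChar)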

A regex would have been much nicer...

  - Clark


On Fri, Jan 25, 2013 at 2:06 PM, Simon Michael si...@joyful.com wrote:

  People have put a lot of work into regular expression libraries in
  Haskell. Yet it seems very few of them provide a replace/substitute
  function - just regex-compat and regexpr as far as I know. Why is that?
 #haskell says:

 sclv iirc its because that's a really mutatey operation in the
 underlying c libs
 sclv should be simple enough to write a general purpose wrapper layer
 that uses captures to create the effect

 Secondly, as of today what do y'all do when you need that functionality
 ?

 -Simon

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Documentation operator

2012-12-27 Thread Clark Gaebel
I love the idea, but it seems like it's a bit too early in Haskell's life
to implement it. Not everyone's on GHC 7.6.1+.

  - Clark


On Thu, Dec 27, 2012 at 3:20 PM, Iavor Diatchki iavor.diatc...@gmail.comwrote:

 Hi,

 I think that this is a neat idea that should be explored more!   GHC's
 parser has a bunch of awkward duplication to handle attaching documentation
 to types, and it'd be cool if we could replace it with an actual language
 construct.

 Happy holidays!
 -Iavor

 On Wed, Dec 26, 2012 at 3:27 AM, Christopher Done chrisd...@gmail.comwrote:

 Hello chums,

 I've been playing around with an idea, something that has obvious pros
 and cons, but I'll sell it to you because there might be some positive
 ideas out of it. Consider the following operator:

 {-# LANGUAGE TypeOperators, DataKinds, KindSignatures #-}

 module Docs where

 import GHC.TypeLits

 type a ? (sym :: Symbol) = a

 First I'll describe how I'd want to use this and then what I think
 are the advantages and disadvantages.

 I call this (?) operator “the documentation operator”, to be used for:

 * Things that either don't belong or can't be encoded in the type
   system, or for things need to be in English.
 * Things that cannot be encoded in Haddock.

 The simple case of ye olde days:

 -- | Lorem ipsum dolor sit amet. Suspendisse lacinia nibh et
 --   leo. Aenean auctor aliquam dapibus.
 loremIpsum :: Int -> Int -> String

 Which has since been somewhat evolved into:

 loremIpsum :: Int    -- ^ Lorem ipsum dolor sit amet.
            -> Int    -- ^ Suspendisse lacinia nibh et leo.
            -> String -- ^ Aenean auctor aliquam dapibus.

 But could now be written:

 loremIpsum :: Int    ? "Lorem ipsum dolor sit amet."
            -> Int    ? "Suspendisse lacinia nibh et leo."
            -> String ? "Aenean auctor aliquam dapibus."

 Here is a contrived case I'll use later on:

 data Person = Person

 describeAge :: Int ? "an age" -> String ? "description of their elderliness"
 describeAge n = undefined

 personAge :: Person ? "a person" -> Int ? "their age"
 personAge = undefined

 One could also encode previously informal specifications more formally,
 so that

 -- | The action 'hFlush' @hdl@ causes any items buffered for output
 -- in handle @hdl@ to be sent immediately to the operating system.
 --
 -- This operation may fail with:
 --
 --  * 'isFullError' if the device is full;
 --
 --  * 'isPermissionError' if a system resource limit would be
 exceeded.
 --It is unspecified whether the characters in the buffer are
 discarded
 --or retained under these circumstances.
  hFlush :: Handle -> IO ()
  hFlush handle = wantWritableHandle "hFlush" handle flushWriteBuffer

 with

 type Throws ex (docs :: Symbol) = docs

 could now be written

 hFlush :: Handle ? "flush buffered items for output on this handle"
        -> IO ()
             ? Throws IsFullError "if the device is full"
             ? Throws IsPermissionError
                  "if a system resource limit would be exceeded. It is \
                  \unspecified whether the characters in the buffer are \
                  \discarded or retained under these circumstances."
 hFlush handle = wantWritableHandle "hFlush" handle flushWriteBuffer

 With this in place, in GHCi you get documentation lookup for free:

  > :t hFlush
  hFlush
    :: (Handle ? "flush buffered items for output on this handle")
    -> (IO () ? Throws IsFullError "if the device is full"
              ? Throws IsPermissionError
                  "if a system resource limit would be exceeded. It is unspecified
                   whether the characters in the buffer are discarded or retained
                   under these circumstances.")

 And you get function composition, or “documentation composition” for free:

  > :t describeAge . personAge
  describeAge . personAge
    :: (Person ? "a person")
    -> String ? "description of their elderliness"

 We could have a :td command to print it with docs, and otherwise docs
 could be stripped out trivially by removing the ? annotations:

  > :t describeAge . personAge
  describeAge . personAge
    :: Person -> String
  > :td describeAge . personAge
  describeAge . personAge
    :: (Person ? "a person")
    -> String ? "description of their elderliness"

 You could even add clever printing of such “documentation types”:

  > :t hFlush
  hFlush
    :: Handle — flush buffered items for output on this handle
    -> IO ()
         Throws IsFullError if the device is full
         Throws IsPermissionError if a system resource limit would be
           exceeded. It is unspecified whether the characters in the buffer
           are discarded or retained under these circumstances.

 Unfortunately it doesn't work with monadic composition, of course.

 So here are the advantages:

 * You get parsing for free (and anyone using haskell-src-exts).
 * You 

Re: [Haskell-cafe] containers license issue

2012-12-17 Thread Clark Gaebel
So I heard back from softwarefreedom.org, and they're looking for a
representative from haskell.org to talk to them, as they want to avoid
conflict-of-interests with other clients.

Does anyone with any official status want to talk to real lawyers about
this issue, then let the list know of anything interesting that was said?
Let me know.

  - Clark


On Mon, Dec 17, 2012 at 10:45 AM, Mike Meyer m...@mired.org wrote:

 Ketil Malde ke...@malde.org wrote:
 In particular when copyright is concerned, I believe that verbatim
 copying in many cases will require a license to the original work, but
 merly examining the original work to make use of algorithms, tricks,
 and
 structures from it will not.

 If you don't actually copy any of the text in the latter case, that would
 be correct. But there's an incredible amount of grey between those two
 extremes of black and white, and it's possible that you've unintentionally
 recreated significant bits of the original.

 The Oracle/Google lawsuit was all about those shades of grey - some of the
 API's in Dalvik were implemented by people who had read the Java sources.
 Oracle claimed as much as possible was derivative, Google that none of it
 was. The judge ruled that some uses were infringing and some uses were not.
 This was a technically literate judge - he ruled that one of the cases was
 non-infringing because he could trivially implement the function in Java
 himself.

 The lawyer who pointed out the possible infringement here isn't really
 worried about losing such a lawsuit - there are lots of ways to deal with
 that short of actually releasing any sources they consider proprietary.
 They want to avoid the lawsuit *at all*, as that will almost certainly be
 more expensive than losing it. At least, that's what I hear from clients
 who ask me not to include GPL'ed software.

mike
 --
 Sent from my Android tablet with K-9 Mail. Please excuse my swyping.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] edge: compile testing

2012-12-14 Thread Clark Gaebel
The OpenAL bindings aren't building for me on GHC 7.6:

Sound/OpenAL/ALC/QueryUtils.hs:66:1:
Unacceptable argument type in foreign declaration: ALCdevice
When checking declaration:
   foreign import ccall unsafe "static alcGetString" alcGetString
 :: ALCdevice -> ALCenum -> IO (Ptr ALCchar)

Sound/OpenAL/ALC/QueryUtils.hs:102:1:
Unacceptable argument type in foreign declaration: ALCdevice
When checking declaration:
   foreign import ccall unsafe "static alcGetIntegerv" alcGetIntegerv
 :: ALCdevice -> ALCenum -> ALCsizei -> Ptr ALCint -> IO ()

Sound/OpenAL/ALC/QueryUtils.hs:120:1:
Unacceptable argument type in foreign declaration: ALCdevice
When checking declaration:
   foreign import ccall unsafe "static alcIsExtensionPresent"
 alcIsExtensionPresent_
 :: ALCdevice -> Ptr ALCchar -> IO ALCboolean
Failed to install OpenAL-1.4.0.1



On Fri, Dec 14, 2012 at 10:52 PM, Christopher Howard 
christopher.how...@frigidcode.com wrote:

 Hey guys, to teach myself Haskell I wrote a little arcade game called
 The Edge, built on gloss. It is in hackage under the package name
 edge. Are there a few kind souls who would be willing to compile it on
 their machines and let me know if there are any problems at the
 compiling level? In the past, I've had issues with Haskell code
 compiling fine on my development system but not on others (due to
 dependency-related issues). I want to double check this before I try to
 make any distro-specific packages.

 I developed with GHC 7.4 and cabal-install 1.16.0.2 on a Gentoo system.
 Requires OpenGL and OpenAL (for sound).

 cabal update  cabal install edge

 --
 frigidcode.com


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] LGPL and Haskell (Was: Re: ANNOUNCE: tie-knot library)

2012-12-13 Thread Clark Gaebel
Outside of the Valley and FOSS movement, programs are still usually
distributed as binaries.

For example, I have a secret, dirty desire to write a game in Haskell. This
would be closed source, and if I'd have to rewrite most of the supporting
libraries, it would be a nonstarter.

Plus, it's hard enough advocating for Haskell adoption because "it's hard"
or "less experienced developers won't get it". I'd rather not add "the
entire ecosystem is GPL, and there's no dynamic linking" to that list if I
could avoid it.

   - Clark


On Thu, Dec 13, 2012 at 3:14 AM, Colin Adams colinpaulad...@gmail.comwrote:

 On 13 December 2012 08:09, Michael Snoyman mich...@snoyman.com wrote:

 To take this out of the academic realm and into the real-life realm: I've
 actually done projects for companies which have corporate policies
 disallowing the usage of any copyleft licenses in their toolset. My use
 case was a web application, which would not have been affected by a GPL
 library usage since we were not distributing binaries. Nonetheless, those
 clients would not have allowed usage of any such libraries. You can argue
 whether or not this is a good decision on their part, but I don't think the
 companies I interacted with were unique in this regard.

 So anyone who's considering selling Haskell-based services to companies
 could very well be in a situation where any (L)GPL libraries are
 non-starters, regardless of actual legal concerns.


 Presumably you are talking about companies who want to distribute programs
 (a very small minority of companies, I would think)?

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Kernel Loops in Accelerate

2012-12-13 Thread Clark Gaebel
Sometimes we need some divergence in kernels, such as the random number
generation example I just posted. Technically (but not practically), we
could have a thread executing forever.

It's fine to discourage writing these loops, and with the proposed
signatures there still won't be any temptation to do data-parallel work,
but they're necessary for many real-world tasks on which parallel
algorithms don't entirely map.

Once this is all said and done, however, I think I'll be writing an initial
implementation of a parallel Monte Carlo library based on accelerate. It
(accelerate) seems really promising. I especially love how I can test my
programs without an Nvidia GPU, and can switch to the CUDA backend to
performance test and to run real-world code.

Thanks,
  - Clark

P.S. Don't forget about do-while! =)
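
For reference, a plain-value sketch of the semantics I'd expect from those
combinators (ordinary values rather than Exp; iterateN, while and doWhile here
are just illustrative names):

    -- Reference semantics only; the real versions would work on Exp terms
    -- and compile to a scalar loop on the device.
    iterateN :: Int -> (a -> a) -> a -> a
    iterateN n f x
      | n <= 0    = x
      | otherwise = iterateN (n - 1) f (f x)

    while :: (a -> Bool) -> (a -> a) -> a -> a
    while p f x
      | p x       = while p f (f x)
      | otherwise = x

    -- do-while: run the body once, then behave like while.
    doWhile :: (a -> Bool) -> (a -> a) -> a -> a
    doWhile p f x = while p f (f x)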


On Thu, Dec 13, 2012 at 7:47 AM, Trevor L. McDonell 
tmcdon...@cse.unsw.edu.au wrote:

 Hi Clark,

 The question of sequential loops in Accelerate has come up a few times in
 the
 past. The main sticking point is knowing how to implement them in a way
 that
 encourages efficient programs; avoiding irregular arrays (iteration
 depths),
 discouraging scalar versions of collective combinators, etc. Basically we
 need
 to avoid thread divergence where possible, else the GPU SIMD hardware
 won't be
 well utilised.

 I've experimented a little with this. If you check the generated code of
 the
 mandelbrot program [1], you'll see it has recovered the _fixed_ iteration
 depth
 into a scalar loop.

 I'll hack up something like a while loop for value iteration and see what
 happens. Tentative proposal (perhaps just the second):

   iterate :: Exp Int           -- fixed iteration count (maybe just a regular Int)
           -> (Exp a -> Exp a)  -- function to repeatedly apply
           -> Exp a             -- initial (seed) value
           -> Exp a

   while   :: (Exp a -> Exp Bool)
           -> (Exp a -> Exp a)
           -> Exp a
           -> Exp a

 It would be good if we could collect some additional concrete applications
 and
 see if there are better ways to map certain classes of sequential
 iteration to
 efficient GPU code.

 Additional thoughts/comments welcome!

 Cheers,
 -Trev


 [1]:
 https://github.com/AccelerateHS/accelerate-examples/tree/master/examples/mandelbrot

 On 12/12/2012, at 4:34 PM, Clark Gaebel cgae...@uwaterloo.ca wrote:

 Hi Trevor (and cafe),

 I've been playing more and more with accelerate, and I find it quite
 annoying that there are no loops. It makes implementing many algorithms
 much harder than it should be.

 For example, I would love to submit a patch to fix issue #52 [0] on github
 by implementing MWC64X [1], but it's very hard to port the OpenCL code on
 that page when it's impossible to write kernel expressions with loops.
 Also, that means there are no high-level combinators I'm used to for my
 sequential code (such as map and fold) that would work on an accelerate
 CUDA kernel.

 As a nice strawman example, how would one implement the following kernel
 in accelerate, assuming 'rand_next', 'rand_get', and 'rand_skip' can all be
 implemented cheaply? :

 typedef uint64_t rand_state;

 __device__ rand_state rand_next(rand_state s);
 __device__ uint32_t rand_get(rand_state s);
 __device__ rand_state rand_skip(rand_state s, uint64_t distance);
 __device__ uint32_t round_to_next_pow2(uint32_t n);

 // Fills an array with random numbers given a random seed,
 // a maximum random number to generate, and an output
 // array to put the result in. The output will be in the range
 // [0, rand_max).
 __kernel__ void fill_random(rand_state start_state, uint32_t rand_max,
 uint32_t* out) {
 rand_state current_state = start_state;
 int i = blockDim.x*blockIdx.x + threadIdx.x;
 // assumes we skip less than 1 million times per element...
 current_state = rand_skip(current_state, i*1e6);
 uint32_t mask = round_to_next_pow2(rand_max) - 1;
 uint32_t result;
 do {
 result = rand_get(current_state);
 current_state = rand_next(current_state);
 } while((result & mask) >= rand_max);

 out[i] = result;
 } // note: code was neither debugged, run, nor compiled.

 Thanks,
   - Clark

 [0] https://github.com/AccelerateHS/accelerate/issues/52
 [1] http://cas.ee.ic.ac.uk/people/dt10/research/rngs-gpu-mwc64x.html



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] containers license issue

2012-12-13 Thread Clark Gaebel
I didn't even know that site existed. Let's add them to the thread!

softwarefreedom.org, what are your opinions on what was discussed in this
thread:

http://www.haskell.org/pipermail/haskell-cafe/2012-December/105193.html

Is there anything that we, as a community, should know about? Should we
proceed differently?

Thanks,
  - Clark

(you might need to sign up to haskell-cafe to post. maybe use a different
account?)

On Thu, Dec 13, 2012 at 1:45 PM, Christopher Howard 
christopher.how...@frigidcode.com wrote:

 On 12/13/2012 08:34 AM, Clint Adams wrote:
  On Wed, Dec 12, 2012 at 11:11:28PM -0800, Chris Smith wrote:
 
  That's true.  However, haskell.org's fiscal sponsor receives pro bono
  legal services.
 
 
  I may have been conflating threads, though the response to what I assume
  was just a lawyer asking a question seems excessive too.
 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 

 Just thought I'd mention: It is possible for anyone involved in a FOSS
 project to get pro bono legal advice from the SFLC, from actual lawyers
 who are highly familiar with the legal aspects of FOSS licenses:

 https://www.softwarefreedom.org

 quote:
 
 If you are involved in a Free, Libre and Open Source Software (FLOSS)
 project in need of legal advice, please email h...@softwarefreedom.org.
 When seeking legal advice, please use only this address to contact us
 (unless you are already a client).
 

 I'm not sure if they are willing to help those who are trying to /avoid/
 making a free software product, but they would likely be willing to
 answer any generic questions about applicability of the GPLs, derived
 works, etc.

 --
 frigidcode.com


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] LGPL and Haskell (Was: Re: ANNOUNCE: tie-knot library)

2012-12-12 Thread Clark Gaebel
Since we've already heard from the aggressive (L)GPL side of this debate,
I think it's time for someone to provide the opposite opinion.

I write code to help users. However, as a library designer, my users are
programmers just like me. Writing my Haskell libraries with restrictions
like the (L)GPL means my users need to jump through hoops to use my
software, and I personally find that unacceptable. Therefore, I gravitate
more towards BSD3 and beer-ware type licenses. This also means my users
aren't subjected to my religious views just because they want to use my
ones and zeros.

Also, with GHC's aggressive inlining, even if you do have a static linking
exception in your (L)GPL license, it still may not hold up! Although the
entire idea is untested in court, GHC can (and will!) inline potentially
huge parts of statically linked libraries into your code, and this would
force you to break the license terms if you were to distribute the software
without source code. In Haskell-land, the GPL is the ultimate in viral
licensing, and very hard to escape.

That's why I don't use (L)GPL licenses.

Just making sure both sides have a horse in this race :)
  - Clark


On Wed, Dec 12, 2012 at 9:51 AM, kudah kudahkuka...@gmail.com wrote:

 On Wed, 12 Dec 2012 10:06:23 +0100 Petr P petr@gmail.com wrote:

  2012/12/12 David Thomas davidleotho...@gmail.com
 
  Yet another solution would be
  what David Thomas suggest: To provide the source code to your users,
  but don't allow them to use the code for anything but relinking the
  program with a different version of the library (no distribution, no
  modification etc.).

 You can also provide object code for linking, though I'm sure this
 will not work with Haskell object files. Providing alternative
 distribution of your program linked dynamically, or a promise to
 provide one on notice, also satisfies the LGPL as long as
 dynamic-version is as functional as the static and can be dropped-in
 as a replacement.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] containers license issue

2012-12-12 Thread Clark Gaebel
I think this is a potential problem, but, obviously, IANAL. [1]

According to the GPL:

To “propagate” a work means to do anything with it that, without
permission, would make you directly or secondarily liable for infringement
under applicable copyright law, except executing it on a computer or
modifying a private copy. Propagation includes copying, distribution (with
or without modification), making available to the public, and in some
countries other activities as well.

and

You may make, run and propagate covered works that you do not convey,
without conditions so long as your license otherwise remains in force.

and of course

You may not propagate or modify a covered work except as expressly provided
under this License. Any attempt otherwise to propagate or modify it is
void, and will automatically terminate your rights under this License
(including any patent licenses granted under the third paragraph of section
11).


I believe that this counts as propagation of the original work, since it
would be considered infringement under applicable copyright law. Now, the
wording in the GPL is a bit confusing on this point. I'm not sure if
propagation requires that the BSD3 that containers is licensed under must
remain in force, or the GPL of the work it is derived from must remain in
force. Does anyone else have better luck interpreting this?

  - Clark

[1] Aside: Can we stop saying IANAL? Let's just all assume that, until
proven otherwise, no one here is a lawyer.
[2] Required Reading: http://www.gnu.org/licenses/gpl.html


On Wed, Dec 12, 2012 at 11:00 AM, David Thomas davidleotho...@gmail.com
wrote:

 Right. If either of the following hold, you should be able to carry on as
you were (but double check with your lawyer):

 1) The algorithm is borrowed but the code was not copied.  In this case,
copyright doesn't cover it, and the GPL is inapplicable.  (Patents could
conceivably be an issue, but no more so than if it was BSD code).

 2) If you are not going to be distributing the code - either it is used
for internal tools or in the backend of a networked service (which the GPL
does not treat as distribution, as distinct from the AGPL).

 If a sizable chunk of actual code was copied, then the containers package
would have to be GPL, and if you are using the library and distribute
programs built with it then those programs must be GPL as well.



 On Wed, Dec 12, 2012 at 7:47 AM, Vo Minh Thu not...@gmail.com wrote:

 2012/12/12 Dmitry Kulagin dmitry.kula...@gmail.com:
  Hi Cafe,
 
  I am faced with unpleasant problem. The lawyer of my company checked
sources
  of containers package and found out that it refers to some GPL-library.
 
  Here is quote:
  The algorithm is derived from Jorg Arndt's FXT library
  in file Data/IntMap/Base.hs
 
  The problem is that FXT library is GPL and thus containers package can
not
  be considered as BSD3. And it means that it can not be used in my case
  (closed source software).
 
  Is this logic actually correct and containers should be considered as
GPL?
 
  The package is widely used by other packages and the only way I see
right
  now is to fix sources to reimplement this functionality, which is not
good
  option.

 GPL covers code, not algorithms.

 Beside, you can use GPL in closed-source code. GPL forces you to make
 the source available when you distribute the software, but if you
 don't distribute the software, there is nothing wrong to use GPL and
 not make your code available.

 HTH, IANAL,
 Thu

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] containers license issue

2012-12-12 Thread Clark Gaebel
It's not an algorithm. The source code of containers is derived from the
source code of another library.

  - Clark


On Wed, Dec 12, 2012 at 11:27 AM, Vo Minh Thu not...@gmail.com wrote:

 I'm not sure what your point is.

 Re-implementing an algorithm is not a copyright infringement (nor is a
 propagation of the original work). Algorithms are not covered by
 copyright.

 2012/12/12 Clark Gaebel cgae...@uwaterloo.ca:
  I think this is a potential problem, but, obviously, IANAL. [1]
 
  According to the GPL:
 
  To “propagate” a work means to do anything with it that, without
 permission,
  would make you directly or secondarily liable for infringement under
  applicable copyright law, except executing it on a computer or modifying
 a
  private copy. Propagation includes copying, distribution (with or without
  modification), making available to the public, and in some countries
 other
  activities as well.
 
  and
 
  You may make, run and propagate covered works that you do not convey,
  without conditions so long as your license otherwise remains in force.
 
  and of course
 
  You may not propagate or modify a covered work except as expressly
 provided
  under this License. Any attempt otherwise to propagate or modify it is
 void,
  and will automatically terminate your rights under this License
 (including
  any patent licenses granted under the third paragraph of section 11).
 
 
  I believe that this counts as propagation of the original work, since
 it
  would be considered infringement under applicable copyright law. Now,
 the
  wording in the GPL is a bit confusing on this point. I'm not sure if
  propagation requires that the BSD3 that containers is licensed under must
  remain in force, or the GPL of the work it is derived from must remain in
  force. Does anyone else have better luck interpreting this?
 
- Clark
 
  [1] Aside: Can we stop saying IANAL? Let's just all assume that, until
  proven otherwise, no one here is a lawyer.
  [2] Required Reading: http://www.gnu.org/licenses/gpl.html
 
 
  On Wed, Dec 12, 2012 at 11:00 AM, David Thomas davidleotho...@gmail.com
 
  wrote:
 
  Right. If either of the following hold, you should be able to carry on
 as
  you were (but double check with your lawyer):
 
  1) The algorithm is borrowed but the code was not copied.  In this case,
  copyright doesn't cover it, and the GPL is inapplicable.  (Patents could
  conceivably be an issue, but no more so than if it was BSD code).
 
  2) If you are not going to be distributing the code - either it is used
  for internal tools or in the backend of a networked service (which the
 GPL
  does not treat as distribution, as distinct from the AGPL).
 
  If a sizable chunk of actual code was copied, then the containers
 package
  would have to be GPL, and if you are using the library and distribute
  programs built with it then those programs must be GPL as well.
 
 
 
  On Wed, Dec 12, 2012 at 7:47 AM, Vo Minh Thu not...@gmail.com wrote:
 
  2012/12/12 Dmitry Kulagin dmitry.kula...@gmail.com:
   Hi Cafe,
  
   I am faced with unpleasant problem. The lawyer of my company checked
   sources
   of containers package and found out that it refers to some
 GPL-library.
  
   Here is quote:
   The algorithm is derived from Jorg Arndt's FXT library
   in file Data/IntMap/Base.hs
  
   The problem is that FXT library is GPL and thus containers package
 can
   not
   be considered as BSD3. And it means that it can not be used in my
 case
   (closed source software).
  
   Is this logic actually correct and containers should be considered as
   GPL?
  
   The package is widely used by other packages and the only way I see
   right
   now is to fix sources to reimplement this functionality, which is not
   good
   option.
 
  GPL covers code, not algorithms.
 
  Beside, you can use GPL in closed-source code. GPL forces you to make
  the source available when you distribute the software, but if you
  don't distribute the software, there is nothing wrong to use GPL and
  not make your code available.
 
  HTH, IANAL,
  Thu
 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 
 
 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] containers license issue

2012-12-12 Thread Clark Gaebel
Just for reference:

In Data/IntMap/Base.hs

highestBitMask :: Nat -> Nat
highestBitMask x0
  = case (x0 .|. shiftRL x0 1) of
     x1 -> case (x1 .|. shiftRL x1 2) of
      x2 -> case (x2 .|. shiftRL x2 4) of
       x3 -> case (x3 .|. shiftRL x3 8) of
        x4 -> case (x4 .|. shiftRL x4 16) of
#if !(defined(__GLASGOW_HASKELL__) && WORD_SIZE_IN_BITS==32)
         x5 -> case (x5 .|. shiftRL x5 32) of   -- for 64 bit platforms
#endif
          x6 -> (x6 `xor` (shiftRL x6 1))

In FXT bithigh.h:

static inline ulong highest_one(ulong x)
// Return word where only the highest bit in x is set.
// Return 0 if no bit is set.
{
#if defined BITS_USE_ASM
    if ( 0==x )  return 0;
    x = asm_bsr(x);
    return  1UL<<x;
#else
    x = highest_one_01edge(x);
    return  x ^ (x>>1);
#endif  // BITS_USE_ASM
}

And in FXT bits/bithigh-edge.h:

static inline ulong highest_one_01edge(ulong x)
// Return word where all bits from (including) the
//   highest set bit to bit 0 are set.
// Return 0 if no bit is set.
//
// Feed the result into bit_count() to get
//   the index of the highest bit set.
{
#if defined BITS_USE_ASM

    if ( 0==x )  return 0;
    x = asm_bsr(x);
    return  (2UL<<x) - 1;

#else  // BITS_USE_ASM

    x |= x>>1;
    x |= x>>2;
    x |= x>>4;
    x |= x>>8;
    x |= x>>16;
#if  BITS_PER_LONG >= 64
    x |= x>>32;
#endif
    return  x;
#endif  // BITS_USE_ASM
}

=

However... I think the easy solution for this is to just find this in
http://graphics.stanford.edu/~seander/bithacks.html, and cite it instead. I
looked briefly and couldn't find it, but I'm almost sure it's in there.

  - Clark


On Wed, Dec 12, 2012 at 11:37 AM, Niklas Larsson metanik...@gmail.comwrote:

 2012/12/12 David Thomas davidleotho...@gmail.com:
  Ah, that's more than we'd been told.  If that is the case, then
 containers
  is in violation of the GPL (unless they got permission to copy that code,
  separately), and either must obtain such permission, be relicensed,
  remove/replace that code.

 I think it's just a confusion of language; the "derived algorithm"
 clashes uncomfortably with the lawyerly "derived work". They are not
 used in the same sense.

 Niklas

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] containers license issue

2012-12-12 Thread Clark Gaebel
I just did a quick derivation from
http://graphics.stanford.edu/~seander/bithacks.html#RoundUpPowerOf2 to get
the highest bit mask, and did not reference FXT nor the containers
implementation. Here is my code:

highestBitMask :: Word64 -> Word64
highestBitMask x1 = let x2 = x1 .|. x1 `shiftR` 1
x3 = x2 .|. x2 `shiftR` 2
x4 = x3 .|. x3 `shiftR` 4
x5 = x4 .|. x4 `shiftR` 8
x6 = x5 .|. x5 `shiftR` 16
x7 = x6 .|. x6 `shiftR` 32
 in x7 `xor` (x7 `shiftR` 1)

This code is hereby released into the public domain. Problem solved.
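
A quick sanity check against a naive reference (a sketch only; it assumes the
highestBitMask definition above, with its Data.Bits imports, is in scope, and
that a QuickCheck providing Arbitrary Word64 is available):

    import Data.Bits (bit, testBit)
    import Data.Word (Word64)
    import Test.QuickCheck (quickCheck)

    -- Naive reference: scan all 64 bit positions and keep the highest one set.
    naiveHighestBitMask :: Word64 -> Word64
    naiveHighestBitMask 0 = 0
    naiveHighestBitMask x = last [ bit i | i <- [0 .. 63], testBit x i ]

    prop_matchesNaive :: Word64 -> Bool
    prop_matchesNaive x = highestBitMask x == naiveHighestBitMask x

    main :: IO ()
    main = quickCheck prop_matchesNaive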

  - Clark


On Wed, Dec 12, 2012 at 12:23 PM, Mike Meyer m...@mired.org wrote:

 Niklas Larsson metanik...@gmail.com wrote:
 2012/12/12 Niklas Larsson metanik...@gmail.com:
 
  There is no copied code from FXT (which can be said with certainty as
  FXT is a C library), hence the there can be copyright issue.
 Gah, I should proofread! NO copyright issue, of course.

 Um, no. Copyright *includes* translations. A translated copy of a work is
 based on the original and requires copyright permissions. This makes it a
 modified work according to the definitions in the GPL.

 You're all thinking about this as if logic and the law had something in
 common. The relevant question isn't  whether or not the GPL applies, but
 whether or not a case can be made that the GPL should apply. Clearly, that
 case can be made, so if you include the containers code without treating it
 as GPL'ed, you risk winding up in court. I suspect that's what the lawyer
 is really trying to avoid, as it would mean they'd actually have to work.
 --
 Sent from my Android tablet with K-9 Mail. Please excuse my swyping.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] containers license issue

2012-12-12 Thread Clark Gaebel
Possibly. I tend to trust GHC's strictness analyzer until proven otherwise,
though. Feel free to optimize as necessary.

  - Clark


On Wed, Dec 12, 2012 at 3:06 PM, Vo Minh Thu not...@gmail.com wrote:

 2012/12/12 Johan Tibell johan.tib...@gmail.com:
  On Wed, Dec 12, 2012 at 10:40 AM, Clark Gaebel cgae...@uwaterloo.ca
 wrote:
  I just did a quick derivation from
  http://graphics.stanford.edu/~seander/bithacks.html#RoundUpPowerOf2 to
 get
  the highest bit mask, and did not reference FXT nor the containers
  implementation. Here is my code:
 
  highestBitMask :: Word64 -> Word64
  highestBitMask x1 = let x2 = x1 .|. x1 `shiftR` 1
  x3 = x2 .|. x2 `shiftR` 2
  x4 = x3 .|. x3 `shiftR` 4
  x5 = x4 .|. x4 `shiftR` 8
  x6 = x5 .|. x5 `shiftR` 16
  x7 = x6 .|. x6 `shiftR` 32
   in x7 `xor` (x7 `shiftR` 1)
 
  This code is hereby released into the public domain. Problem solved.
 
  I will integrate this into containers later today.

 Note that I think the current implementation uses a series of case
 expressions instead of a let binding, possibly to force the evaluation.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Control.bimap?

2012-12-12 Thread Clark Gaebel
http://hackage.haskell.org/packages/archive/categories/0.59/doc/html/Control-Categorical-Bifunctor.html


On Wed, Dec 12, 2012 at 3:54 PM, Gregory Guthrie guth...@mum.edu wrote:

 I found a nice idiom for a graph algorithm where the pairs of nodes
 representing links could be merged into node lists by something like:

 ns = nub $ map fst g   -- head nodes

 ne = nub $ map snd g   -- tail nodes

 And found a nicer approach:

    (ns,ne) = (nub***nub) $ unzip g

 Or perhaps:

    (ns,ne) = bimap nub nub $ unzip g   -- from Control.Bifunctor

 The SO reference I saw described bimap as a way to map a function over a
 pair, and it seemed like a great match, but I cannot find the bimap
 function, and cabal reports no package Control.Bifunctor.

 ??

 ---

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Control.bimap?

2012-12-12 Thread Clark Gaebel
Also,
http://hackage.haskell.org/packages/archive/bifunctors/3.0/doc/html/Data-Bifunctor.html
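
For the idiom in the question, a minimal sketch using that module (nub only
needs Eq; nodeEnds is a made-up name):

    import Data.Bifunctor (bimap)
    import Data.List (nub)

    -- Split a list of edges into its distinct head nodes and tail nodes.
    nodeEnds :: Eq a => [(a, a)] -> ([a], [a])
    nodeEnds g = bimap nub nub (unzip g)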


On Wed, Dec 12, 2012 at 4:12 PM, Clark Gaebel cgae...@uwaterloo.ca wrote:


 http://hackage.haskell.org/packages/archive/categories/0.59/doc/html/Control-Categorical-Bifunctor.html


 On Wed, Dec 12, 2012 at 3:54 PM, Gregory Guthrie guth...@mum.edu wrote:

 I found a nice idiom for a graph algorithm where the pairs of nodes
 representing links could be merged into node lists by something like:

 ns = nub $ map fst g   -- head nodes

 ne = nub $ map snd g   -- tail nodes

 And found a nicer approach:

    (ns,ne) = (nub***nub) $ unzip g

 Or perhaps:

    (ns,ne) = bimap nub nub $ unzip g   -- from Control.Bifunctor

 The SO reference I saw described bimap as a way to map a function over a
 pair, and it seemed like a great match, but I cannot find the bimap
 function, and cabal reports no package Control.Bifunctor.

 ??

 ---

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] C++

2012-12-11 Thread Clark Gaebel
If you're trying to memoize a recursive algorithm with a global array of
previous states, you could use the marvellous MemoTrie package [1]. It lets
you write your algorithm recursively, while getting all the benefits of
memoization! Here's an example with the fibonacci function:

fib :: Int -> Integer
fib 0 = 1
fib 1 = 1
fib n = memofib (n - 2) + memofib (n - 1)

memofib :: Int -> Integer
memofib = memo fib

> memofib 113 -- runs in O(n), with (permanent!) memory usage also at O(n).

  - Clark

[1] http://hackage.haskell.org/package/MemoTrie
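
For the two-argument table in the original code, Data.MemoTrie also exports
memo2; here is a simplified sketch of the same memoisation pattern (not the
exact SPOJ recurrence):

    import Data.MemoTrie (memo2)

    -- A two-argument recursion memoised over both arguments. The recursive
    -- calls go through the memoised `best`, so each (h, a) pair is computed once.
    best :: Int -> Int -> Int
    best = memo2 go
      where
        go h a
          | h <= 0 || a <= 0 = 0
          | otherwise        = 1 + max (best (h - 5)  (a - 10))
                                       (best (h - 20) (a + 5))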


On Tue, Dec 11, 2012 at 2:59 PM, Serguey Zefirov sergu...@gmail.com wrote:

 This array is for dynamic programming.

 You can diagonalize it into a list and use technique similar to the
 Fibonacci numbers.

 The resulting solution should be purely declarative.

 2012/12/11 mukesh tiwari mukeshtiwari.ii...@gmail.com:
  Hello All
  I am trying to transform this C++ code in Haskell. In case any one
  interested this is solution of SPOJ problem.
 
  #include<cstdio>
  #include<iostream>
  #include<cstring>
  using namespace std;
 
  int memo[1100][1100] ;
 
  int recurse( int h , int a , int cnt , bool flag )
  {
    if ( h <= 0 || a <= 0 ) return cnt ;
    if ( memo[h][a] ) return memo[h][a] ;
    if ( flag ) memo[h][a] = recurse ( h + 3 , a + 2 , cnt + 1 , !flag ) ;
    else
      memo[h][a] = max ( memo[h][a] , max ( recurse ( h - 5 , a - 10 , cnt + 1 , !flag ) , recurse ( h - 20 , a + 5 , cnt + 1 , !flag ) ) ) ;
 
    return memo[h][a];
  }
 
  int main()
  {
    int n , a , b ;
    scanf( "%d", &n );
    for(int i = 0 ; i < n ; i++)
    {
      memset ( memo , 0 , sizeof memo ) ;
      scanf("%d%d", &a , &b );
      printf("%d\n" , recurse( a , b , -1 , 1 ));
      if( i != ( n - 1 ) ) printf("\n");
    }
 
  }
 
  I am stuck with that memo[1100][1100] is global variable so I tried to
 solve
  this problem using state monad ( Don't know if its correct approach or
 not )
  but it certainly does not seem correct to me. Till now I came up with
 code.
  Could some one please tell me how to solve this kind of problem (
 Generally
  we have a global variable either multi dimensional array or map  and we
  store the best values found so far in the table ).
 
  import qualified Data.Map.Strict as SM
  import Control.Monad.State
 
  {--
  funsolve_WF :: Int -> Int -> Int -> Int
  funsolve_WF h a cnt
   | h <= 0 || a <= 0 = cnt
   | otherwise = funsolve_Air h a ( cnt + 1 )
 
  funsolve_Air :: Int -> Int -> Int -> Int
  funsolve_Air h a cnt = max ( funsolve_WF ( h + 3 - 5 ) ( a + 2 - 10 ) cnt' )
                             ( funsolve_WF ( h + 3 - 20 ) ( a + 2 + 5 ) cnt' ) where
    cnt' = cnt + 1
  --}
 
 
 
  funSolve :: Int -> Int -> Int -> Bool -> State ( SM.Map ( Int , Int ) Int ) Int
  funSolve hl am cnt f
   | hl <= 0 && am <= 0 = return cnt
   | otherwise = do
       mp <- get
       case () of
         _ | SM.member ( hl , am ) mp -> return $ mp SM.! ( hl , am )
           | f -> do
               -- here I have to insert the value returned by
               -- funSolve ( hl + 3 ) ( am + 2 ) ( cnt + 1 ) ( not f )
               -- into the map whose key is ( hl , am )
               let k = evalState ( funSolve ( hl + 3 ) ( am + 2 ) ( cnt + 1 ) ( not f ) ) mp
               modify ( SM.insert ( hl , am ) k )
 
 
           | otherwise -> do
               let k_1 = evalState ( funSolve ( hl - 5 )  ( am - 10 ) ( cnt + 1 ) ( not f ) ) mp
                   k_2 = evalState ( funSolve ( hl - 20 ) ( am + 5 )  ( cnt + 1 ) ( not f ) ) mp
                   k_3 = mp SM.! ( hl , am )
               modify ( SM.insert ( hl , am ) ( maximum [ k_1 , k_2 , k_3 ] ) )
 
  Regards
  Mukesh Tiwari
 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Kernel Loops in Accelerate

2012-12-11 Thread Clark Gaebel
Hi Trevor (and cafe),

I've been playing more and more with accelerate, and I find it quite
annoying that there are no loops. It makes implementing many algorithms
much harder than it should be.

For example, I would love to submit a patch to fix issue #52 [0] on github
by implementing MWC64X [1], but it's very hard to port the OpenCL code on
that page when it's impossible to write kernel expressions with loops.
Also, that means there are no high-level combinators I'm used to for my
sequential code (such as map and fold) that would work on an accelerate
CUDA kernel.

As a nice strawman example, how would one implement the following kernel in
accelerate, assuming 'rand_next', 'rand_get', and 'rand_skip' can all be
implemented cheaply? :

typedef uint64_t rand_state;

__device__ rand_state rand_next(rand_state s);
__device__ uint32_t rand_get(rand_state s);
__device__ rand_state rand_skip(rand_state s, uint64_t distance);
__device__ uint32_t round_to_next_pow2(uint32_t n);

// Fills an array with random numbers given a random seed,
// a maximum random number to generate, and an output
// array to put the result in. The output will be in the range
// [0, rand_max).
__kernel__ void fill_random(rand_state start_state, uint32_t rand_max,
uint32_t* out) {
rand_state current_state = start_state;
int i = blockDim.x*blockIdx.x + threadIdx.x;
// assumes we skip less than 1 million times per element...
current_state = rand_skip(current_state, i*1e6);
uint32_t mask = round_to_next_pow2(rand_max) - 1;
uint32_t result;
do {
result = rand_get(current_state);
current_state = rand_next(current_state);
} while((result & mask) >= rand_max);

out[i] = result;
} // note: code was neither debugged, run, nor compiled.

Thanks,
  - Clark

[0] https://github.com/AccelerateHS/accelerate/issues/52
[1] http://cas.ee.ic.ac.uk/people/dt10/research/rngs-gpu-mwc64x.html
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Is it possible to have constant-space JSON decoding?

2012-12-04 Thread Clark Gaebel
Aeson is used for the very common use case of short messages that need to be
parsed as quickly as possible into a static structure. A lot of things are
sacrificed to make this work, such as incremental parsing and good error
messages. It works great for web APIs like Twitter's.

I didn't even know people used JSON to store millions of integers. It
sounds like fun.

  - Clark


On Tue, Dec 4, 2012 at 9:38 AM, Iustin Pop ius...@google.com wrote:

 On Tue, Dec 04, 2012 at 12:23:19PM -0200, Felipe Almeida Lessa wrote:
  Aeson doesn't have an incremental parser so it'll be
  difficult/impossible to do what you want.  I guess you want an
  event-based JSON parser, such as yajl [1].  I've never used this
  library, though.

 Ah, I see. Thanks, I wasn't aware of that library.

 So it seems that using either 'aeson' or 'json', we should be prepared
 to pay the full cost of input message (string/bytestring) plus the cost
 of the converted data structures.

 thanks!
 iustin

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Is hackage.haskell.org down?

2012-12-04 Thread Clark Gaebel
Works for me.


On Tue, Dec 4, 2012 at 11:03 AM, Niklas Hambüchen m...@nh2.me wrote:

 Down for me.

 On Tue 04 Dec 2012 15:44:10 GMT, Ivan Perez wrote:
  Hi haskellers,
 
  I've been having problems to access hackage.haskell.org for the past
  2-4 hours. Is everything ok?
 
  Cheers,
  Ivan
 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Naive matrix multiplication with Accelerate

2012-12-04 Thread Clark Gaebel
No. But that doesn't stop me from being curious with Accelerate. Might you
have a better explaination for what's happening here than Trevor's?

  - Clark


On Tue, Dec 4, 2012 at 7:08 PM, Alexander Solla alex.so...@gmail.comwrote:

 I don't mean to be blunt, but have you guys taken a course in linear
 algebra?


 On Mon, Dec 3, 2012 at 9:21 PM, Trevor L. McDonell 
 tmcdon...@cse.unsw.edu.au wrote:

 As far as I am aware, the only description is in the Repa paper. I you
 are right, it really should be explained properly somewhere…

 At a simpler example, here is the outer product of two vectors [1].

 vvProd :: (IsNum e, Elt e) => Acc (Vector e) -> Acc (Vector e) -> Acc (Matrix e)
 vvProd xs ys = A.zipWith (*) xsRepl ysRepl
   where
 n   = A.size xs
 m   = A.size ys

 xsRepl  = A.replicate (lift (Z :. All :. m  )) xs
 ysRepl  = A.replicate (lift (Z :. n   :. All)) ys

 If we then `A.fold (+) 0` the matrix, it would reduce along each row
 producing a vector. So the first element of that vector is going to be
 calculated as (xs[0] * ys[0] + xs[0] * ys[1] +  … xs[0] * ys[m-1]). That's
 the idea we want for our matrix multiplication … but I agree, it is
 difficult for me to visualise as well.

 I do the same sort of trick with the n-body demo to get all n^2 particle
 interactions.

 -Trev


  [1]: http://en.wikipedia.org/wiki/Outer_product#Vector_multiplication



 On 04/12/2012, at 3:41 AM, Clark Gaebel cgae...@uwaterloo.ca wrote:

 Ah. I see now. Silly Haskell making inefficient algorithms hard to write
 and efficient ones easy. It's actually kind of annoying when learning, but
 probably for the best.

 Is there a good write-up of the algorithm you're using somewhere? The
 Repa paper was very brief in its explanation, and I'm having trouble
 visualizing the mapping of the 2D matrices into 3 dimensions.

   - Clark


 On Mon, Dec 3, 2012 at 2:06 AM, Trevor L. McDonell 
 tmcdon...@cse.unsw.edu.au wrote:

 Hi Clark,

 The trick is that most accelerate operations work over multidimensional
 arrays, so you can still get around the fact that we are limited to flat
 data-parallelism only.

 Here is matrix multiplication in Accelerate, lifted from the first Repa
 paper [1].


 import Data.Array.Accelerate as A

 type Matrix a = Array DIM2 a

 matMul :: (IsNum e, Elt e) => Acc (Matrix e) -> Acc (Matrix e) -> Acc (Matrix e)
 matMul arr brr
   = A.fold (+) 0
   $ A.zipWith (*) arrRepl brrRepl
   where
 Z :. rowsA :. _ = unlift (shape arr):: Z :. Exp Int :. Exp
 Int
 Z :. _ :. colsB = unlift (shape brr):: Z :. Exp Int :. Exp
 Int

 arrRepl = A.replicate (lift $ Z :. All   :. colsB :.
 All) arr
 brrRepl = A.replicate (lift $ Z :. rowsA :. All   :.
 All) (A.transpose brr)


 If you use github sources rather than the hackage package, those
 intermediate replicates will get fused away.


 Cheers,
 -Trev

  [1] http://www.cse.unsw.edu.au/~chak/papers/KCLPL10.html




 On 03/12/2012, at 5:07 PM, Clark Gaebel cgae...@uwaterloo.ca wrote:

 Hello cafe,

 I've recently started learning about CUDA and heterogeneous programming,
 and have been using accelerate [1] to help me out. Right now, I'm running
 into trouble in that I can't call parallel code from sequential code. Turns
 out GPUs aren't exactly like Repa =P.

 Here's what I have so far:

 import qualified Data.Array.Accelerate as A
 import Data.Array.Accelerate ( (:.)(..)
  , Acc
  , Vector
  , Scalar
  , Elt
  , fold
  , slice
  , constant
  , Array
  , Z(..), DIM1, DIM2
  , fromList
  , All(..)
  , generate
  , lift, unlift
  , shape
  )
 import Data.Array.Accelerate.Interpreter ( run )

 dotP :: (Num a, Elt a) => Acc (Vector a) -> Acc (Vector a) -> Acc (Scalar a)
 dotP xs ys = fold (+) 0 $ A.zipWith (*) xs ys

 type Matrix a = Array DIM2 a

 getRow :: Elt a => Int -> Acc (Matrix a) -> Acc (Vector a)
 getRow n mat = slice mat . constant $ Z :. n :. All

 -- Naive matrix multiplication:
 --
 -- index (i, j) is equal to the ith row of 'a' `dot` the jth row of 'b'
 matMul :: A.Acc (Matrix Double) -> A.Acc (Matrix Double) -> A.Acc (Matrix Double)
 matMul a b' = A.generate (constant $ Z :. nrows :. ncols) $
 \ix ->
   let (Z :. i :. j) = unlift ix
in getRow i a `dotP` getRow j b
 where
 b = A.transpose b' -- I assume row indexing is faster than
 column indexing...
 (Z :. nrows :.   _  ) = unlift $ shape a
 (Z :.   _   :. ncols) = unlift $ shape b


 This, of course, gives me errors right now because I'm

Re: [Haskell-cafe] Naive matrix multiplication with Accelerate

2012-12-04 Thread Clark Gaebel
Thanks! I'll read through Matricies and Linear Algebra over the next few
days.

  - Clark


On Tue, Dec 4, 2012 at 7:43 PM, Alexander Solla alex.so...@gmail.comwrote:

 Sorry, I didn't realize that course was offered next year.  I read through
 Matrices and Linear Algebra when I was in high school.  And used
 Friedberg, Insel, Spence's Linear Algebra in college.


 On Tue, Dec 4, 2012 at 4:37 PM, Alexander Solla alex.so...@gmail.comwrote:

 Well, an m x n matrix corresponds to a linear transformation in at most
 min{m,n} dimensions.  In particular, this means that a 2x2 matrix
 corresponds to a plane, line, or the origin of 3-space, as a linear
 subspace.  Which of those the matrix corresponds to depends on the matrix's
 rank, which is the number of linearly independent columns (or rows) in
 the matrix.

 Do you really need to know /which/ plane or line a matrix corresponds to?
  If so, reduce it using Gaussian elimination and, if appropriate, compute
 its eigenvectors or span.  Otherwise, think of it as a generic
 plane/line/0-point.

 Outer products represent more of these simple facts about linear algebra.
  The product of an mx1 matrix and a 1xn matrix is an mxn matrix with rank at most
 1.  Trouble visualizing this means you are missing the essential facts (for
 the general picture as the product as a line or origin), or requires some
 computational details -- reducing the matrix using Gaussian elimination and
 determining its span.
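
 A tiny concrete illustration of the rank-at-most-1 point (my own example, not
 from the thread): every row of an outer product is a scalar multiple of the
 same vector.

    -- outer [1,2,3] [4,5] == [[4,5],[8,10],[12,15]]; each row is a multiple of [4,5].
    outer :: Num a => [a] -> [a] -> [[a]]
    outer xs ys = [ [ x * y | y <- ys ] | x <- xs ]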

 As I said, I don't mean to be harsh, but playing with a vector algebra
 package without understanding vectors is like playing with a calculator
 without understanding multiplication.  You're better off learning what
 multiplication represents first, before using a machine to do it fast.  So,
 I can humbly recommend taking a course on the subject.  For example,
 https://www.coursera.org/course/matrix


 On Tue, Dec 4, 2012 at 4:13 PM, Clark Gaebel cgae...@uwaterloo.cawrote:

 No. But that doesn't stop me from being curious with Accelerate. Might
 you have a better explaination for what's happening here than Trevor's?

   - Clark


 On Tue, Dec 4, 2012 at 7:08 PM, Alexander Solla alex.so...@gmail.comwrote:

 I don't mean to be blunt, but have you guys taken a course in linear
 algebra?


 On Mon, Dec 3, 2012 at 9:21 PM, Trevor L. McDonell 
 tmcdon...@cse.unsw.edu.au wrote:

 As far as I am aware, the only description is in the Repa paper. I you
 are right, it really should be explained properly somewhere…

 At a simpler example, here is the outer product of two vectors [1].

 vvProd :: (IsNum e, Elt e) => Acc (Vector e) -> Acc (Vector e) -> Acc (Matrix e)
 vvProd xs ys = A.zipWith (*) xsRepl ysRepl
   where
 n   = A.size xs
 m   = A.size ys

 xsRepl  = A.replicate (lift (Z :. All :. m  )) xs
 ysRepl  = A.replicate (lift (Z :. n   :. All)) ys

 If we then `A.fold (+) 0` the matrix, it would reduce along each row
 producing a vector. So the first element of that vector is going to be
 calculated as (xs[0] * ys[0] + xs[0] * ys[1] +  … xs[0] * ys[m-1]). That's
 the idea we want for our matrix multiplication … but I agree, it is
 difficult for me to visualise as well.

 I do the same sort of trick with the n-body demo to get all n^2
 particle interactions.

 -Trev


  [1]: http://en.wikipedia.org/wiki/Outer_product#Vector_multiplication



 On 04/12/2012, at 3:41 AM, Clark Gaebel cgae...@uwaterloo.ca wrote:

 Ah. I see now. Silly Haskell making inefficient algorithms hard to
 write and efficient ones easy. It's actually kind of annoying when
 learning, but probably for the best.

 Is there a good write-up of the algorithm you're using somewhere? The
 Repa paper was very brief in its explanation, and I'm having trouble
 visualizing the mapping of the 2D matrices into 3 dimensions.

   - Clark


 On Mon, Dec 3, 2012 at 2:06 AM, Trevor L. McDonell 
 tmcdon...@cse.unsw.edu.au wrote:

 Hi Clark,

 The trick is that most accelerate operations work over
 multidimensional arrays, so you can still get around the fact that we are
 limited to flat data-parallelism only.

 Here is matrix multiplication in Accelerate, lifted from the first
 Repa paper [1].


 import Data.Array.Accelerate as A

 type Matrix a = Array DIM2 a

 matMul :: (IsNum e, Elt e) => Acc (Matrix e) -> Acc (Matrix e) -> Acc (Matrix e)
 matMul arr brr
   = A.fold (+) 0
   $ A.zipWith (*) arrRepl brrRepl
   where
 Z :. rowsA :. _ = unlift (shape arr):: Z :. Exp Int :.
 Exp Int
 Z :. _ :. colsB = unlift (shape brr):: Z :. Exp Int :.
 Exp Int

 arrRepl = A.replicate (lift $ Z :. All   :. colsB :.
 All) arr
 brrRepl = A.replicate (lift $ Z :. rowsA :. All   :.
 All) (A.transpose brr)


 If you use github sources rather than the hackage package, those
 intermediate replicates will get fused away.


 Cheers,
 -Trev

  [1] http://www.cse.unsw.edu.au/~chak/papers/KCLPL10.html




 On 03/12/2012, at 5:07 PM, Clark Gaebel cgae...@uwaterloo.ca

Re: [Haskell-cafe] Naive matrix multiplication with Accelerate

2012-12-03 Thread Clark Gaebel
Ah. I see now. Silly Haskell making inefficient algorithms hard to write
and efficient ones easy. It's actually kind of annoying when learning, but
probably for the best.

Is there a good write-up of the algorithm you're using somewhere? The Repa
paper was very brief in its explanation, and I'm having trouble
visualizing the mapping of the 2D matrices into 3 dimensions.
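
A plain list-based analogue that computes the same thing (a sketch of my own,
nothing Accelerate-specific); the replicated 3D intermediate is just every row
of 'a' paired with every column of 'b', and the fold collapses the innermost
dimension:

    import Data.List (transpose)

    type Matrix' a = [[a]]   -- row-major, as nested lists

    -- Pair every row of 'a' with every column of 'b' (the 3D intermediate),
    -- multiply pointwise, and sum along the innermost dimension (the fold).
    matMulLists :: Num a => Matrix' a -> Matrix' a -> Matrix' a
    matMulLists a b = [ [ sum (zipWith (*) row col) | col <- transpose b ] | row <- a ]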

  - Clark


On Mon, Dec 3, 2012 at 2:06 AM, Trevor L. McDonell 
tmcdon...@cse.unsw.edu.au wrote:

 Hi Clark,

 The trick is that most accelerate operations work over multidimensional
 arrays, so you can still get around the fact that we are limited to flat
 data-parallelism only.

 Here is matrix multiplication in Accelerate, lifted from the first Repa
 paper [1].


 import Data.Array.Accelerate as A

 type Matrix a = Array DIM2 a

 matMul :: (IsNum e, Elt e) => Acc (Matrix e) -> Acc (Matrix e) -> Acc (Matrix e)
 matMul arr brr
   = A.fold (+) 0
   $ A.zipWith (*) arrRepl brrRepl
   where
 Z :. rowsA :. _ = unlift (shape arr):: Z :. Exp Int :. Exp Int
 Z :. _ :. colsB = unlift (shape brr):: Z :. Exp Int :. Exp Int

 arrRepl = A.replicate (lift $ Z :. All   :. colsB :. All)
 arr
 brrRepl = A.replicate (lift $ Z :. rowsA :. All   :. All)
 (A.transpose brr)


 If you use github sources rather than the hackage package, those
 intermediate replicates will get fused away.


 Cheers,
 -Trev

  [1] http://www.cse.unsw.edu.au/~chak/papers/KCLPL10.html




 On 03/12/2012, at 5:07 PM, Clark Gaebel cgae...@uwaterloo.ca wrote:

 Hello cafe,

 I've recently started learning about cuda and hetrogenous programming, and
 have been using accelerate [1] to help me out. Right now, I'm running into
 trouble in that I can't call parallel code from sequential code. Turns out
 GPUs aren't exactly like Repa =P.

 Here's what I have so far:

 import qualified Data.Array.Accelerate as A
 import Data.Array.Accelerate ( (:.)(..)
  , Acc
  , Vector
  , Scalar
  , Elt
  , fold
  , slice
  , constant
  , Array
  , Z(..), DIM1, DIM2
  , fromList
  , All(..)
  , generate
  , lift, unlift
  , shape
  )
 import Data.Array.Accelerate.Interpreter ( run )

 dotP :: (Num a, Elt a) => Acc (Vector a) -> Acc (Vector a) -> Acc (Scalar a)
 dotP xs ys = fold (+) 0 $ A.zipWith (*) xs ys

 type Matrix a = Array DIM2 a

 getRow :: Elt a => Int -> Acc (Matrix a) -> Acc (Vector a)
 getRow n mat = slice mat . constant $ Z :. n :. All

 -- Naive matrix multiplication:
 --
 -- index (i, j) is equal to the ith row of 'a' `dot` the jth row of 'b'
 matMul :: A.Acc (Matrix Double) -> A.Acc (Matrix Double) -> A.Acc (Matrix Double)
 matMul a b' = A.generate (constant $ Z :. nrows :. ncols) $
     \ix ->
       let (Z :. i :. j) = unlift ix
        in getRow i a `dotP` getRow j b
   where
     b = A.transpose b' -- I assume row indexing is faster than column indexing...
     (Z :. nrows :.   _  ) = unlift $ shape a
     (Z :.   _   :. ncols) = unlift $ shape b


 This, of course, gives me errors right now because I'm calling getRow and
 dotP from within the generation function, which expects Exp[ression]s, not
 Acc[elerated computation]s.

 So maybe I need to replace that line with an inner for loop? Is there an
 easy way to do that with Accelerate?

 Thanks for your help,
   - Clark

 [1] http://hackage.haskell.org/package/accelerate
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Naive matrix multiplication with Accelerate

2012-12-02 Thread Clark Gaebel
Hello cafe,

I've recently started learning about cuda and hetrogenous programming, and
have been using accelerate [1] to help me out. Right now, I'm running into
trouble in that I can't call parallel code from sequential code. Turns out
GPUs aren't exactly like Repa =P.

Here's what I have so far:

import qualified Data.Array.Accelerate as A
import Data.Array.Accelerate ( (:.)(..)
 , Acc
 , Vector
 , Scalar
 , Elt
 , fold
 , slice
 , constant
 , Array
 , Z(..), DIM1, DIM2
 , fromList
 , All(..)
 , generate
 , lift, unlift
 , shape
 )
import Data.Array.Accelerate.Interpreter ( run )

dotP :: (Num a, Elt a) => Acc (Vector a) -> Acc (Vector a) -> Acc (Scalar a)
dotP xs ys = fold (+) 0 $ A.zipWith (*) xs ys

type Matrix a = Array DIM2 a

getRow :: Elt a => Int -> Acc (Matrix a) -> Acc (Vector a)
getRow n mat = slice mat . constant $ Z :. n :. All

-- Naive matrix multiplication:
--
-- index (i, j) is equal to the ith row of 'a' `dot` the jth row of 'b'
matMul :: A.Acc (Matrix Double) -> A.Acc (Matrix Double) -> A.Acc (Matrix Double)
matMul a b' = A.generate (constant $ Z :. nrows :. ncols) $
    \ix ->
      let (Z :. i :. j) = unlift ix
       in getRow i a `dotP` getRow j b
  where
    b = A.transpose b' -- I assume row indexing is faster than column indexing...
    (Z :. nrows :.   _  ) = unlift $ shape a
    (Z :.   _   :. ncols) = unlift $ shape b


This, of course, gives me errors right now because I'm calling getRow and
dotP from within the generation function, which expects Exp[ression]s, not
Acc[elerated computation]s.

So maybe I need to replace that line with an inner for loop? Is there an
easy way to do that with Accelerate?

Thanks for your help,
  - Clark

[1] http://hackage.haskell.org/package/accelerate
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] To my boss: The code is cool, but it is about 100 times slower than the old one...

2012-11-29 Thread Clark Gaebel
If you can give an example of some underperforming code, I'm sure someone
(or several people) on this list would be more than happy to help you make
it more performant.

Generally, it doesn't take much. It's all in knowing where to look. Also,
if you know performance is key, you should be using the
performance-oriented data structures (ByteString, Text, Vector) from the
very beginning. Personally, I never find myself using Data.Array, and
rarely use String in real code. It's just not worth the performance
headaches.
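
As a tiny sketch of the kind of swap I mean (the file name is made up),
counting lines with strict Text instead of String:

import qualified Data.Text    as T
import qualified Data.Text.IO as TIO

main :: IO ()
main = do
  -- read the whole file into one packed buffer instead of a cons-cell list
  contents <- TIO.readFile "input.txt"
  print (length (T.lines contents))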

And finally, depending on what you're doing, neither Haskell nor C might
be right for you! Especially with certain numerics-related code, you might
find Fortran, OpenCL, or CUDA easier to make performant.

Examples of this would be lovely.

  - Clark


On Thu, Nov 29, 2012 at 2:00 PM, Alfredo Di Napoli 
alfredo.dinap...@gmail.com wrote:

 Hi there,
  I'm only an amateur so just my 2 cents: Haskell can be really fast, but
  reaching that speed can be all but trivial: you need to use different data
  types (e.g. ByteString vs. the normal String type), rely on
  unconventional IO (e.g. Conduit, Iteratee) and still be ready to go
  outside of base, using packages and functions which are not in base/haskell
  platform (e.g. mwc-random).

 My 2 cents :)
 A.

 On 29 November 2012 18:09, Fixie Fixie fixie.fi...@rocketmail.com wrote:

 Hi all haskellers

  Every now and then I get the feeling that doing my job's code in Haskell
  would be a good idea.

 I have tried a couple of times, but each time I seem to run into
 performance problems - I do lots of heavy computing.

  The problem seems to be connected to lazy evaluation, which makes my
  programs so slow that I really cannot show them to anyone. I have tried
  all the tricks in the book, like !, seq, non-lazy datatypes...

 I was poking around to see if this had changed, then I ran into this
 forum post:
 http://stackoverflow.com/questions/9409634/is-indexing-of-data-vector-unboxed-mutable-mvector-really-this-slow

 The last solution was a haskell program which was in the 3x range to C,
 which I think is ok. This was in the days of ghc 7.0

  I then tried to compile the programs myself (ghc 7.4.1), but found that now
  the C program was more than 100x faster. The ghc code was compiled with
  both O2 and O3, giving only small differences on my 64-bit Linux box.

 So it seems something has changed - and even small examples are still not
 safe when it comes to the lazy-monster. It reminds me of some code I read a
 couple of years ago where one of the Simons actually fired off a new
 thread, to make sure a variable was realized.

  A sad thing, since I am more than willing to go for Haskell if it proves to
  be usable. If anyone can see what is wrong with the code (there are two
  haskell versions on the page, I have tried the last and fastest one) it
  would also be interesting.

 What is your experience, dear haskellers? To me it seems this beautiful
 language is useless without a better lazy/eager-analyzer.

 Cheers,

 Felix

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to incrementally update list

2012-11-28 Thread Clark Gaebel
Here's a version that works:

import Control.DeepSeq          -- added

list = [1,2,3,4,5]

advance l = force $ map (\x -> x+1) l   -- 'force' added

run 0 s = s
run n s = run (n-1) $ advance s

main = do
    let s = run 5000 list
    putStrLn $ show s

The problem is that you build up a huge chain of updates to the list. If we
just commit each update as it happens, we'll use a constant amount of
memory.

Haskell's laziness is tricky to understand coming from imperative
languages, but once you figure out its evaluation rules, you'll begin to
see the elegance.
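
For what it's worth, the same idea can also be spelled as a strict left fold
over the iteration count; this is just a sketch of the pattern, not a claim
that it is faster than the version above:

import Control.DeepSeq (force)
import Data.List (foldl')

run' :: Int -> [Int] -> [Int]
run' n s0 = foldl' (\s _ -> force (map (+1) s)) s0 [1 .. n]
-- foldl' forces the accumulator each step, and force flattens the list,
-- so no chain of unevaluated (+1) thunks builds up.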

Hope this helps,
  - Clark


On Wed, Nov 28, 2012 at 7:07 AM, Benjamin Edwards edwards.b...@gmail.comwrote:

  TCO + strictness annotations should take care of your problem.
 On 28 Nov 2012 11:44, Branimir Maksimovic bm...@hotmail.com wrote:

  Problem is following short program:
 list = [1,2,3,4,5]

 advance l = map (\x -> x+1) l

 run 0 s = s
 run n s = run (n-1) $ advance s

 main = do
     let s = run 5000 list
     putStrLn $ show s

 I want to incrementally update a list a lot of times, but don't know
 how to do this.
 Since Haskell does not have loops I have to use recursion,
 but the problem is that recursive calls keep the previous/state parameter,
 leading to excessive stack and memory usage.
 I don't know how to tell Haskell not to keep the previous
 state but rather to release it, so memory consumption becomes
 manageable.

 Is there some solution to this problem as I think it is rather
 common?


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Cabal failures...

2012-11-20 Thread Clark Gaebel
+1 to this. The friction of finding, setting up, and using Windows isn't
even comparable to just sshing into another unix box and testing something
quickly.

As a university student, I also find it relatively rare that I get to test
on a Windows machine. My personal computer runs linux, my technical friends
run linux or osx, and my non-technical ones run osx. Also, all the school
servers that I have access to run either FreeBSD or Linux.

If I want to run something on a linux system, I have about 40 different
computers that I can ssh into and run code on.

If I want to run something on osx, I just have to call a friend and ask if
they can turn on their computer and allow me to ssh in (to my own account,
of course).

If I want to run something on Windows, I have to track down a friend (in
person!), ask to borrow their computer for a few hours, get administrator
access to install the Haskell Platform, get frustrated that HP hasn't been
upgraded to 7.6, and give up.

It's just not practical, especially for the large number of small (<500
LOC) packages on Hackage.

  - Clark


On Tue, Nov 20, 2012 at 9:05 PM, Erik de Castro Lopo
mle...@mega-nerd.comwrote:

 Albert Y. C. Lai wrote:

  Clearly, since 90% of computers have Windows, it should be trivial to
  find one to test on, if a programmer wants to. Surely every programmer
  is surrounded by Windows-using family and friends? (Perhaps to the
  programmer's dismay, too, because the perpetual I've got a virus again,
  can you help? is so annoying?) We are not talking about BeOS.
 
  Therefore, if programmers do not test on Windows, it is because they do
  not want to.

 I have been an open source contributor for over 15 years. All the general
 purpose machines in my house run Linux. My father's and my mother-in-law's
 computers also run Linux (easier for me to provide support). For testing
 software, I have a PowerPC machine and virtual machines running various
 versions of Linux, FreeBSD and OpenBSD.

 What I don't have is a windows machine. I have, at numerous times, spent
 considerable amounts of time (and even real money for licenses) setting
 up (or rather trying to) windows in a VM and it is *always* considerably
 more work to set up, maintain and fix when something goes wrong. Setting
 up development tools is also a huge pain in the ass. And sooner or later
  they fail in some way I can't fix and I have to start again. Often it's
  not worth the effort.

 At my day job we have on-demand windows VMs, but I am not officially
 allowed (nor do I intend to start) to use those resources for my open
 source work.

 So is it difficult for an open source contributor to test on windows?
 Hell yes! You have no idea how hard windows is in comparison to say
 FreeBSD. Even Apple's OS X is easier than windows, because I have
 friends who can give me SSH access to their machines.

 Erik
 --
 --
 Erik de Castro Lopo
 http://www.mega-nerd.com/

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to determine correct dependency versions for a library?

2012-11-14 Thread Clark Gaebel
To prevent this, I think the PVP should specify that if dependencies get a
major version bump, the package itself should bump its major version
(preferably the B field).

Hopefully, in the future, cabal will make a distinction between packages
*used* within another package (such as a hashmap exclusively used to
de-duplicate elements in lists) and packages *needed for the public API*
(such as Data.Vector needed for aeson). That way, internal packages can
update dependencies with impunity, and we still get the major version
number bump of packages needed for the public API.

  - Clark


On Wed, Nov 14, 2012 at 2:20 PM, Tobias Müller trop...@bluewin.ch wrote:

 Peter Simons sim...@cryp.to wrote:
  Hi Clark.
 
I think we just use dependencies [to specify] different things.
 
  If dependency version constraints are specified as a white-list --
  i.e. we include only those few versions that have been actually
  verified and exclude everything else --, then we take the risk of
  excluding *too much*. There will be versions of the dependencies that
  would work just fine with our package, but the Cabal file prevents
  them from being used in the build.
 
  The opposite approach is to specify constraints as a black-list. This
  means that we don't constrain our build inputs at all, unless we know
  for a fact that some specific versions cannot be used to build our
  package. In that case, we'll exclude exactly those versions, but
  nothing else. In this approach, we risk excluding *too little*. There
  will probably be versions of our dependencies that cannot be used to
  build our package, but the Cabal file doesn't exclude them from being
  used.
 
  Now, the black-list approach has a significant advantage. In current
  versions of cabal-install, it is possible for users to extend an
  incomplete black-list by adding appropriate --constraint flags on
  the command-line of the build. It is impossible, however, to extend an
  incomplete white-list that way.
 
  In other words: build failures can be easily avoided if some package
  specifies constraints that are too loose. Build failures caused by
  version constraints that are too strict, however, can be fixed only by
  editing the Cabal file.
 
  For this reason, dependency constraints in Cabal should rather be
  underspecified than overspecified.

 The blacklisting approach has one major disadvantage that noone has
 mentioned yet:
 Adding more restrictive constraints does not work, the broken package will
 be on hackage forever, while adding a new version with relaxed constraints
 works well.

 Consider the following example:

 A 1.1.4.0 build-depends: B ==2.5.* C ==3.7.* (overspecified)
 B 2.5.3.0 build-depends: C ==3.* (underspecified)
 C 3.7.1.0

 Everything works nice until C-3.8.0.0 appears with incompatible changes
 that break B, but not A.

 Now both A and B have to update their dependencies and we have now:

 A 1.1.5.0 build-depends: B ==2.5.* C >=3.7 && <3.9
 B 2.5.4.0 build-depends: C >=3 && <3.8
 C 3.8.0.0

 And now the following combination is still valid:
 A 1.1.5.0
 B 2.5.3.0 (old version)
 C 3.8.0.0
 Bang!

 Tobi

 PS: This is my first post on this list. I'm not actively using haskell, but
 following this list for quite a while just out of interest.


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Quickcheck

2012-11-13 Thread Clark Gaebel
Your implication is backwards. ==> is read "implies".

So your way has "do blah with positive integers" implies "x > 0 && y > 0".
That's backwards.

Try prop_something x y = x > 0 && y > 0 ==> ... do blah with positive
integers
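
Spelled out as a complete file, with a made-up property standing in for
yours:

import Test.QuickCheck

-- The precondition sits on the left of ==>; QuickCheck discards generated
-- pairs that don't satisfy it instead of failing on them.
prop_sumPositive :: Int -> Int -> Property
prop_sumPositive x y = x > 0 && y > 0 ==> x + y > 0

main :: IO ()
main = quickCheck prop_sumPositive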

  - Clark


On Tue, Nov 13, 2012 at 4:52 PM, gra...@fatlazycat.com wrote:

 Thanks, will try them both. With regards to the implication I assume
 it's just regarded as one property test ?

 To get two values greater than zero I have something like

 prop_something x y = ...do blah with positive integers
   ==> x > 0 && y > 0

 But my test fails as it appears to be injecting a negative number and
 the test fails. But the implication does not cause the failed test to be
 ignored.

 Must be missing something ???

 Thanks

 On Mon, Nov 12, 2012, at 10:00 PM, Iustin Pop wrote:
  On Mon, Nov 12, 2012 at 10:14:30PM +0100, Simon Hengel wrote:
   On Mon, Nov 12, 2012 at 07:21:06PM +, gra...@fatlazycat.com wrote:
Hi,
   
Trying to find some good docs on QuickCheck, if anyone has one ?
   
Been scanning what I can find, but a question.
   
What would be the best way to generate two different/distinct
 integers ?
  
   I would use Quickcheck's implication operator here:
  
    quickCheck $ \x y -> x /= (y :: Int) ==> ...
 
  That's good, but it only eliminates test cases after they have been
  generated. A slightly better (IMHO) version is to generate correct
  values in the first place:
 
  prop_Test :: Property
  prop_Test =
      forAll (arbitrary :: Gen Int) $ \x ->
      forAll (arbitrary `suchThat` (/= x)) $ \y ->
      …
 
  regards,
  iustin

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to determine correct dependency versions for a library?

2012-11-09 Thread Clark Gaebel
What I usually do is start out with dependencies listed like:

aeson ==0.6.*

and then, as your dependencies evolve, you either bump the version number:

aeson ==0.7.*

or, if you're willing to support multiple versions, switch to a range:

aeson >=0.6 && <= 0.7

If someone uses a previous version of a library, and wants your library to
support it too (and, preferably, it works out of the box), they'll send a
pull request.

That's what works for me. Maybe you could use it as a starting point to
find what works for you!

  - Clark


On Fri, Nov 9, 2012 at 11:15 AM, Janek S. fremenz...@poczta.onet.pl wrote:

 Recently I started developing a Haskell library and I have a question
 about package dependencies.
 Right now when I need my project to depend on some other package I only
 specify the package name
 in cabal file and don't bother with providing the package version. This
 works because I am the
 only user of my library but I am aware that if the library were to be
 released on Hackage I would
 have to supply version numbers in the dependencies. The question is how to
 determine proper
 version numbers?

 I can be conservative and assume that version of libraries in my system
 are the minimum required
 ones. This is of course not a good solution, because my library might work
 with earlier versions
 but I don't know a way to check that. What is the best way to determine a
 minimal version of a
 package required by my library?

 I also don't see any sane way of determining maximum allowed versions for
 the dependencies, but
 looking at other packages I see that this is mostly ignored and package
 maintainers only supply
 lower versions. Is this correct approach?

 Janek

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to determine correct dependency versions for a library?

2012-11-09 Thread Clark Gaebel
Just like if your C application depends on either SQLite 2 or SQLite 3,
you're going to need to test it with both before a release.

Hoping that your library works against a previous major revision is just
asking for trouble!

I usually just take the easy way out and switch to ==0.7.


On Fri, Nov 9, 2012 at 11:31 AM, Janek S. fremenz...@poczta.onet.pl wrote:

 Thanks Clark! Your method seems good at first but I think I see a
 problem. So let's say you
 started with aeson 0.6. As new versions of aeson are released you
 introduce version ranges, but
 do you really have a method to determine that your package does indeed
 work with earlier
 versions? If you're upgrading aeson and don't have the older versions
 anymore you can only hope
 that the code changes you introduce don't break the dependency on earlier
 versions. Unless I am
 missing something?

 Janek

 Dnia piątek, 9 listopada 2012, Clark Gaebel napisał:
  What I usually do is start out with dependencies listed like:
 
  aeson ==0.6.*
 
  and then, as your dependencies evolve, you either bump the version
 number:
 
  aeson ==0.7.*
 
  or, if you're willing to support multiple version, switch to a range:
 
   aeson >=0.6 && <= 0.7
 
  If someone uses a previous version of a library, and wants your library
 to
  support it too (and, preferably, it works out of the box), they'll send a
  pull request.
 
  That's what works for me. Maybe you could use it as a starting point to
  find what works for you!
 
- Clark
 
  On Fri, Nov 9, 2012 at 11:15 AM, Janek S. fremenz...@poczta.onet.pl
 wrote:
   Recently I started developing a Haskell library and I have a question
   about package dependencies.
   Right now when I need my project to depend on some other package I only
   specify the package name
   in cabal file and don't bother with providing the package version. This
   works because I am the
   only user of my library but I am aware that if the library were to be
   released on Hackage I would
   have to supply version numbers in the dependencies. The question is how
   to determine proper
   version numbers?
  
   I can be conservative and assume that version of libraries in my system
   are the minimum required
   ones. This is of course not a good solution, because my library might
   work with earlier versions
   but I don't know a way to check that. What is the best way to
 determine a
   minimal version of a
   package required by my library?
  
   I also don't see any sane way of determining maximum allowed versions
 for
   the dependencies, but
   looking at other packages I see that this is mostly ignored and package
   maintainers only supply
   lower versions. Is this correct approach?
  
   Janek
  
   ___
   Haskell-Cafe mailing list
   Haskell-Cafe@haskell.org
   http://www.haskell.org/mailman/listinfo/haskell-cafe



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to determine correct dependency versions for a library?

2012-11-09 Thread Clark Gaebel
It's not restrictive. Anything that I put on Hackage is open source. If
someone finds that it works fine on a previous (or later) version, I accept
their patch with a constraint change, and re-release immediately. I just
don't like to claim that my package works with major versions of packages
that I haven't tested.


On Fri, Nov 9, 2012 at 12:36 PM, Janek S. fremenz...@poczta.onet.pl wrote:

  I usually just take the easy way out and switch to ==0.7.
 I see. I guess I don't yet have enough experience in Haskell to anticipate
 how restrictive is such
 a choice.

 Janek

 
  On Fri, Nov 9, 2012 at 11:31 AM, Janek S. fremenz...@poczta.onet.pl
 wrote:
   Thanks Clark! You're method seems good at first but I think I see a
   problem. So let's say you
   started with aeson 0.6. As new versions of aeson are released you
   introduce version ranges, but
   do you really have a method to determine that your package does indeed
   work with earlier
   versions? If you're upgrading aeson and don't have the older versions
   anymore you can only hope
   that the code changes you introduce don't break the dependency on
 earlier
   versions. Unless I am
   missing something?
  
   Janek
  
   Dnia piątek, 9 listopada 2012, Clark Gaebel napisał:
What I usually do is start out with dependencies listed like:
   
aeson ==0.6.*
   
and then, as your dependencies evolve, you either bump the version
  
   number:
aeson ==0.7.*
   
or, if you're willing to support multiple version, switch to a range:
   
 aeson >=0.6 && <= 0.7
   
If someone uses a previous version of a library, and wants your
 library
  
   to
  
support it too (and, preferably, it works out of the box), they'll
 send
a pull request.
   
That's what works for me. Maybe you could use it as a starting point
 to
find what works for you!
   
  - Clark
   
On Fri, Nov 9, 2012 at 11:15 AM, Janek S. fremenz...@poczta.onet.pl
 
  
   wrote:
 Recently I started developing a Haskell library and I have a
 question
 about package dependencies.
 Right now when I need my project to depend on some other package I
 only specify the package name
 in cabal file and don't bother with providing the package version.
 This works because I am the
 only user of my library but I am aware that if the library were to
 be
 released on Hackage I would
 have to supply version numbers in the dependencies. The question is
 how to determine proper
 version numbers?

 I can be conservative and assume that version of libraries in my
 system are the minimum required
 ones. This is of course not a good solution, because my library
 might
 work with earlier versions
 but I don't know a way to check that. What is the best way to
  
   determine a
  
 minimal version of a
 package required by my library?

 I also don't see any sane way of determining maximum allowed
 versions
  
   for
  
 the dependencies, but
 looking at other packages I see that this is mostly ignored and
 package maintainers only supply
 lower versions. Is this correct approach?

 Janek

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe
  
   ___
   Haskell-Cafe mailing list
   Haskell-Cafe@haskell.org
   http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to determine correct dependency versions for a library?

2012-11-09 Thread Clark Gaebel
I think we just use dependencies for different things. This is a problem
inherent in cabal.

When I (and others) specify a dependency, I'm saying "My package will work
with these packages. I promise."
When you (and others) specify a dependency, you're saying "If you use a
version outside of these bounds, my package will break. I promise."

They're similar, but subtly different. There are merits to both of these
strategies, and it's unfortunate that this isn't specified in the PVP [1].

Janek: I've already given my method, and Peter has told you his method.
Pick either, or make your own! Who knows, maybe someone else (or you!) will
have an even better way to deal with this. :)

  - Clark

[1] http://www.haskell.org/haskellwiki/Package_versioning_policy


On Fri, Nov 9, 2012 at 1:03 PM, Peter Simons sim...@cryp.to wrote:

 Hi Clark,

   It's not restrictive.

 how can you say that by adding a version restriction you don't restrict
 anything?


   I just don't like to claim that my package works with major versions
   of packages that I haven't tested.

 Why does it not bother you to claim that your package can *not* be built
 with all those versions that you excluded without testing whether those
 restrictions actually exist or not?

 Take care,
 Peter


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [ANNOUNCE] hashable-generics

2012-11-06 Thread Clark Gaebel
For the merge into Hashable, the default instance is only included if we're
on a compatible GHC. This means Hashable itself will be portable, but it
strongly encourages other packages not to be.

I think the portability requirement is just used as an easy way to filter
out lower quality code, anyway.

  - Clark


On Tue, Nov 6, 2012 at 6:31 AM, Herbert Valerio Riedel h...@gnu.org wrote:

 Clark Gaebel cgae...@uwaterloo.ca writes:

  How would the ghc-dependance affect hashable's inclusion in the haskell
  platform? Doesn't the haskell platform ship only a recent version of ghc
  (i.e. one with support for generics)?

 I was under the impression that the haskell platform, albeit currently
 bundling GHC, aims for portability, as in [1] it's required that a
 package

 | * Compile on all operating systems and compilers that the platform
 targets. [rationale-8.4]

 and in [2] there's a (somewhat weaker) mention of portability as well:

 | *Portability*. Good code is portable. In particular, try to ensure the
 |code runs in Hugs and GHC, and on Windows and Linux.

 Maybe Hugs is a bit too outdated/unmaintained, but on the other hand
 maybe JHC and UHC compatibility should be aimed for instead these days
 for core packages?


  [1]:
 http://trac.haskell.org/haskell-platform/wiki/AddingPackages#Packagerequirements
  [2]:
 http://www.haskell.org/haskellwiki/Library_submissions#Guidance_for_proposers


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Defining a Strict language pragma

2012-11-06 Thread Clark Gaebel
What if the strict code were to assume nothing is ever _|_, and result in
undefined behavior if it is? Kind of like a NULL pointer in C.


On Tue, Nov 6, 2012 at 8:36 AM, Jan-Willem Maessen jmaes...@alum.mit.eduwrote:

 On Mon, Nov 5, 2012 at 5:52 PM, Johan Tibell johan.tib...@gmail.comwrote:

 The tricky part is to define the semantics of this pragma in terms of
 Haskell, instead of in terms of Core. While we also need the latter, we
 cannot describe the feature to users in terms of Core. The hard part is to
 precisely define the semantics, especially in the presence of separate
 compilation (i.e. we might import lazy functions).

 I'd like to get the Haskell communities input on this. Here's a strawman:

  * Every function application f _|_ = _|_, if f is defined in this module
 [1]. This also applies to data type constructors (i.e. the code acts as if
 all fields are preceded by a bang).

  * lets and where clauses act like (strict) case statements.


 What ordering constraints will exist on let and where clauses?  Is the
 compiler free to re-order them in dependency order?

 Must they be strictly evaluated in the context in which they occur?
  Haskell syntax readily lends itself to a style a bit like this:

 f x y z
   | p x = ... a ... b
   | q y = ... a ... c
   | otherwise = ... d ...
   where a = ...
         b = ...
         c = ...
         d = ...

 This tripped us up a lot in pH and Eager Haskell, where we at least wanted
 to be able to float d inwards and where it was sometimes surprising and
 costly if we missed the opportunity.  But that changes the semantics if d =
 _|_.  It's even worse if d = _|_ exactly when p x || q y.

 Part of the answer, I'm sure, is don't do that, but it might mean some
 code ends up surprisingly less readable than you'd expect.

  * It's still possible to define lazy arguments, using ~. In essence
 the Haskell lazy-by-default with opt-out via ! is replaced with
 strict-by-default with opt-out via ~.

 Thoughts?


 I found myself wondering about free variables of lambdas, but realized
 that would be handled at the point where those variables are bound (the
 binding will either be strict or lazy).

 -Jan


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] [ANNOUNCE] network-bitcoin

2012-11-05 Thread Clark Gaebel
Hello Cafe,

You've heard of the neat crypto-currency bitcoin[1], haven't you?

Well, I've just released network-bitcoin[2] which provides Haskell bindings
to the bitcoin daemon. Hopefully, this will make your bitcoin-related goals
easier to achieve. Who knows, it might even make bitcoin integration for
the various web frameworks a bit easier.

Special thanks to Michael Hendricks who wrote the original version of
network-bitcoin, which I used as a base for this release.

Regards,
  - Clark

[1] http://bitcoin.org
[2] http://hackage.haskell.org/package/network-bitcoin-1.0.1
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [ANNOUNCE] hashable-generics

2012-11-04 Thread Clark Gaebel
Thanks a lot!

I've updated the benchmark accordingly, and have released a new version
without the 1.3x slowdown disclaimer as hashable-generics 1.1.8.

  - Clark


On Sun, Nov 4, 2012 at 10:25 AM, Herbert Valerio Riedel h...@gnu.org wrote:

 Clark Gaebel cgae...@uwaterloo.ca writes:

 [...]

  Oh yeah, and if anyone wants to help me figure out why it's 1.3x slower
  than my hand-rolled instances, that would be really helpful.

 [...]

 I've taken a look at the bench/Bench.hs benchmark[1]:

 The generated Core looks almost[2] the same as your 'HandRolled'; but
 the 1.3x slow-down factor seems to be caused by the way the
 'bigGenericRolledDS' CAF is defined in the test-harness: if I define it
 explicitly (i.e. just as 'bigHandRolledDS' is defined, and not as an
 isomorphic transformation of the 'bigHandRolledDS' value) the benchmark
 results in both versions having more or less equal performance as would
 be expected.


  [1]:
 https://github.com/wowus/hashable-generics/blob/master/bench/Bench.hs

  [2]: with the following change, it would look exactly the same (modulo
   alpha renamings):

 --8<---cut here---start--->8---
 --- a/bench/Bench.hs
 +++ b/bench/Bench.hs
 @@ -18,7 +18,7 @@ data GenericRolled = GR0
  deriving Generic

  instance Hashable HandRolled where
 -hashWithSalt salt HR0   = hashWithSalt salt $ (Left () :: Either () ())
 +hashWithSalt salt HR0   = hashWithSalt salt $ ()
  hashWithSalt salt (HR1 mi)  = hashWithSalt salt $ (Right $ Left mi :: Either () (Either (Maybe Int) ()))
  hashWithSalt salt (HR2 x y) = hashWithSalt salt $ (Right $ Right (x, y) :: Either () (Either () (HandRolled, HandRolled)))
 --8<---cut here---end--->8---

 hth,
 hvr

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [ANNOUNCE] hashable-generics

2012-11-04 Thread Clark Gaebel
@dag:

I would love for this to be merged into Data.Hashable, and I think it would
make a lot of people's lives easier, and prevent them from writing bad hash
functions accidentally.

  - Clark


On Sun, Nov 4, 2012 at 10:30 AM, dag.odenh...@gmail.com 
dag.odenh...@gmail.com wrote:

 Have you talked with upstream about possibly adding this to hashable
 proper, using DefaultSignatures? CPP can be used to make it portable to
 older GHC versions.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [ANNOUNCE] hashable-generics

2012-11-04 Thread Clark Gaebel
Yes. Sorry if I wasn't clear. That's what I intended.

So would a patch adding this to hashable be accepted?

  - Clark


On Sun, Nov 4, 2012 at 11:39 AM, Johan Tibell johan.tib...@gmail.comwrote:

 On Sun, Nov 4, 2012 at 8:35 AM, Clark Gaebel cgae...@uwaterloo.ca wrote:
 
  @dag:
 
  I would love for this to be merged into Data.Hashable, and I think it
 would make a lot of people's lives easier, and prevent them from writing
 bad hash functions accidentally.


 Couldn't we do it using GHC's default implementations based on
 signatures features, so we don't have to expose any new things in the
 API?

 We used that in unordered-containers like so:

 #ifdef GENERICS
     default parseRecord :: (Generic a, GFromRecord (Rep a)) => Record -> Parser a
     parseRecord r = to <$> gparseRecord r
 #endif

 -- Johan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [ANNOUNCE] hashable-generics

2012-11-04 Thread Clark Gaebel
How would the ghc-dependence affect hashable's inclusion in the haskell
platform? Doesn't the haskell platform ship only a recent version of ghc
(i.e. one with support for generics)?

  - Clark
On Nov 4, 2012 6:00 PM, Herbert Valerio Riedel h...@gnu.org wrote:

 Clark Gaebel cgae...@uwaterloo.ca writes:

  @dag:
 
  I would love for this to be merged into Data.Hashable, and I think it
 would
  make a lot of people's lives easier, and prevent them from writing bad
 hash
  functions accidentally.

 Jfyi, a discussion came up when I posted a proposal to add a
 generics-based NFData deriver to the 'deepseq' package, with the result
 that the generics-based code was put in the separate `deepseq-generics`
 package:

  http://comments.gmane.org/gmane.comp.lang.haskell.libraries/17940

 ...and since there's a plan (iirc) to bring the 'hashable' package into
 the haskell-platform, some of the arguments brought up in that thread
 with respect to the 'deepseq' package might apply here as well.

 cheers,
 hvr

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] [ANNOUNCE] hashable-generics

2012-11-02 Thread Clark Gaebel
Hi everybody!

I have just released a handy package on Hackage that will interest you if
you've ever used unordered-containers with a custom type.

In order to do such a thing, you'd need to define an instance of Hashable.
This process could easily be automated.

And so I did.

{-# LANGUAGE DeriveGeneric #-}
module ThisIsPrettyNeat where

import Data.Hashable.Generic
import GHC.Generics

data MyCoolType a = MCT0 | MCT1 (Either Int a) | MCT2 (MyCoolType a) (MyCoolType a)
    deriving Generic

instance Hashable a => Hashable (MyCoolType a) where
    hashWithSalt s x = gHashWithSalt s x
    {-# INLINEABLE hashWithSalt #-}

and voila. You have a very performant instance of Hashable, with minimal
boilerplate, no template haskell, and no subtle bugs.
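
Assuming the module above, a quick usage sketch (hash and hashWithSalt come
from Data.Hashable, which hashable-generics builds on):

import Data.Hashable (hash, hashWithSalt)

demo1, demo2 :: Int
demo1 = hash (MCT1 (Right True) :: MyCoolType Bool)
demo2 = hashWithSalt 17 (MCT2 MCT0 MCT0 :: MyCoolType Int)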

If you want to play with it, here it is:
http://hackage.haskell.org/package/hashable-generics-1.1.6

Have fun!
  - Clark

Oh yeah, and if anyone wants to help me figure out why it's 1.3x slower
than my hand-rolled instances, that would be really helpful.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Building all possible element combinations from N lists.

2012-10-28 Thread Clark Gaebel
Golfed: http://en.wikipedia.org/wiki/Code_golf
<=< : Also known as Kleisli composition. More info:
http://www.haskell.org/hoogle/?hoogle=%3C%3D%3C

On Sun, Oct 28, 2012 at 4:36 PM, dokondr doko...@gmail.com wrote:

 On Fri, Oct 26, 2012 at 2:34 AM, Jake McArthur jake.mcart...@gmail.comwrote:

 I golfed a bit. :)

 sequence <=< filterM (const [False ..])


 What is golfed and <=< ? Please, explain.

 Thanks,
 Dmitri

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC maintenance on Arch

2012-10-28 Thread Clark Gaebel
Personally, I like the latest version of GHC being in the repository, as
that's the version I normally use.

What packages aren't working for you on 7.6? I find that they get updated
pretty quickly, and if you run into any that aren't, feel free to send the
authors a pull request. Almost everything is on github.

- Clark

On Sun, Oct 28, 2012 at 4:49 PM, timothyho...@seznam.cz wrote:

 Hello,
 Who is in charge of the ghc and haskell packages on Arch linux?  The
 current system isn't working.

 Arch linux tends to update packages very quickly.

 For ghc, always having the latest ghc isn't a good thing.  At least if you
 actually want to get some work done.  A majority of the time the latest GHC
 is unusable. This is because the packages in hackage simply don't keep up.
 With the current ghc version(7.6.1) even some basic packages in hackage are
 not upgraded yet.

 Right now, a large number of other haskell related packages are in the
 arch repos. Other than gtk2hs, I think these packages are pointless
 duplications.  In the other cases, it has been my experience that it is
 simpler to maintain these packages through cabal rather than through
 pacman.  Support for these packages in Arch should probably be dropped.

 If you want to get work done in Arch with haskell, you should only install
 ghc and cabal-install (right now, you'll have to search the Internet for the
 old binaries, because the arch repos usually don't keep the old versions
 around).  Then you should add these packages to "IgnorePkg =" in
 pacman.conf; this way things won't break every couple of months.  You can
 then choose to upgrade when you wish.

 I hope that someone who is involved with the haskell Arch stuff reads
 this.  The current model needs to be rethought.  Linux should be sane by
 default, but I've lost many many hours learning that arch's relationship
 with haskell is not so :(  Probably the best solution would be to make Arch
 automatically keep two versions of ghc around at any given time.

 Thank you for your time,
 Timothy Hobbs

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [Security] Put haskell.org on https

2012-10-28 Thread Clark Gaebel
Do it at home.

If you're at an internet cafe, though, it'd be nice if you could trust
cabal packages.

- Clark

On Sun, Oct 28, 2012 at 5:07 PM, Patrick Hurst phu...@amateurtopologist.com
 wrote:


 On Oct 28, 2012, at 4:38 PM, Changaco chang...@changaco.net wrote:

  On Sun, 28 Oct 2012 17:46:10 +0100 Petr P wrote:
  In this particular case, cabal can have the public part of the
  certificate built-in (as it has the web address built in). So once one
  has a verified installation of cabal, it can verify the server
  packages without being susceptible to MitM attack (no matter if
  they're PGP signed or X.509 signed).
 
  This is PGP's security model, so it's probably better to use PGP keys.


 How do you get a copy of cabal while making sure that somebody hasn't
 MITMed you and replaced the PGP key?
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] referential transparency? (for fixity of local operators)

2012-10-05 Thread Clark Gaebel
Compile with -Wall and the flaw becomes obvious:

<interactive>:2:5:
    Warning: This binding for `+' shadows the existing binding
               imported from `Prelude' (and originally defined in `GHC.Num')

<interactive>:2:9:
    Warning: This binding for `*' shadows the existing binding
               imported from `Prelude' (and originally defined in `GHC.Num')

<interactive>:2:16:
    Warning: Defaulting the following constraint(s) to type `Integer'
               (Num a0) arising from the literal `1'
    In the first argument of `(+)', namely `1'
    In the first argument of `(*)', namely `1 + 2'
    In the expression: 1 + 2 * 3

<interactive>:2:16:
    Warning: Defaulting the following constraint(s) to type `Integer'
               (Num a0) arising from the literal `1' at <interactive>:2:16
               (Show a0) arising from a use of `print' at <interactive>:2:1-34
    In the first argument of `(+)', namely `1'
    In the first argument of `(*)', namely `1 + 2'
    In the expression: 1 + 2 * 3

Shadowing is bad, and tends (as in this case) to be confusing.

  - Clark

On Fri, Oct 5, 2012 at 7:22 AM, Roman Cheplyaka r...@ro-che.info wrote:

 * Johannes Waldmann waldm...@imn.htwk-leipzig.de [2012-10-05
 11:11:48+]
  I was really surprised at the following:
 
  *Main> 1 + 2 * 3
  7
 
  *Main> ( \ (+) (*) -> 1 + 2 * 3 ) (+) (*)
  9
 
  because I was somehow assuming that either
 
  a) the Prelude fixities of the operators are kept
  b) or they are undefined, so the parser rejects.
 
  but the Haskell standard says "Any operator lacking a fixity declaration
  is assumed to be infixl 9". This really should be infix 9?

 This behaviour is really handy when you use functions as operators
 (using backticks notation). They typically lack infix annotations, but
 having to put parentheses would be very annoying.
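
 A small made-up illustration of that point (addTo has no fixity
 declaration, so backticked uses default to infixl 9 and chain without
 parentheses):

 addTo :: Int -> Int -> Int
 addTo = (+)

 chained :: Int
 chained = 1 `addTo` 2 `addTo` 3   -- parsed as (1 `addTo` 2) `addTo` 3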

 Roman

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] a parallel mapM?

2012-10-03 Thread Clark Gaebel
I'm not sure that exposing a liftIO for Monad.Par is the best idea. Since
all these parallel computations use runPar :: Par a -> a, it advertises
that the result is deterministic. I'm not really comfortable with a hidden
unsafePerformIO hiding in the background.

That said, I don't see a reason for not including a separate version of
runParIO :: ParIO a -> IO a for non-deterministic computations. It seems
really useful!
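
For contrast, a minimal sketch of the deterministic API as it exists today
(the runParIO above is the hypothetical being discussed):

import Control.Monad.Par (runPar, parMap)

-- Pure result, so it is safe to advertise it as such.
doubleAll :: [Int] -> [Int]
doubleAll xs = runPar (parMap (* 2) xs)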

Regards,
  - Clark

On Wed, Oct 3, 2012 at 10:24 AM, Ryan Newton rrnew...@gmail.com wrote:

 Several of the monad-par schedulers COULD provide a MonadIO instance and
 thus liftIO, which would make them easy to use for this kind of parallel
 IO business:


 http://hackage.haskell.org/packages/archive/monad-par/0.3/doc/html/Control-Monad-Par-Scheds-Direct.html

 And that would be a little more scalable because you wouldn't get a
 separate IO thread for each parallel computation.  But, to be safe-haskell
 compliant, we don't currently expose IO capabilities. I can add another
 module that exposes this capability if you are interested...

   -Ryan

 On Fri, Sep 28, 2012 at 4:48 PM, Alexander Solla alex.so...@gmail.comwrote:



 On Fri, Sep 28, 2012 at 11:01 AM, Greg Fitzgerald gari...@gmail.comwrote:


 I also tried Control.Parallel.Strategies [2].  While that route works,
 I had to use unsafePerformIO.  Considering that IO is for sequencing
 effects and my IO operation doesn't cause any side-effects (besides
 hogging a file handle), is this a proper use of unsafePerformIO?


 That's actually a perfectly fine use for unsafePerformIO, since the IO
 action you are performing is pure and therefore safe (modulo your file
 handle stuff).

 unsafePerformIO is a problem when the IO action being run has side
 effects and their order of evaluation matters (since unsafePerformIO will
 cause them to be run in an unpredictable order)

 One common use for unsafePerformIO is to run a query against an external
 library.  It has to be done in the IO monad, but it is a pure computation
 insofar as it has no side-effects that matter.  Doing this lets us promote
 values defined in external libraries to bona fide pure Haskell values.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Improvement suggestions

2012-08-15 Thread Clark Gaebel
Try:

concat . intersperse "\n" <$> (sequence $ map loop docs)
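
Or, equivalently, leaning on the fused library functions (a sketch that uses
the Document, XmlM and loop definitions from the message below):

import Control.Applicative ((<$>))
import Data.List (intercalate)

someFn :: [Document] -> XmlM String
someFn docs = intercalate "\n" <$> mapM loop docs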

On Wed, Aug 15, 2012 at 11:01 AM, José Lopes jose.lo...@ist.utl.pt wrote:

 Hello everyone,

 I am quite new to monadic code so I would like to ask for improvement
 suggestions on the last line of the code below.
  I know I could do something like do strs <- mapM ...; intercalate ...
  etc, but I would like to avoid the use of <-.

 Thank you,
 José

 data XmlState = XmlState Int
 type XmlM a = State XmlState a

 loop :: Document -> XmlM String

 someFn :: [Document] -> XmlM String
 someFn docs =
   return concat `ap` (sequence $ intersperse (return "\n") (map loop docs))  -- <--- improve this line

 --
 José António Branquinho de Oliveira Lopes
 58612 - MEIC-A
 Instituto Superior Técnico (IST), Universidade Técnica de Lisboa (UTL)
 jose.lo...@ist.utl.pt


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Sorting efficiency

2012-08-04 Thread Clark Gaebel
It's generally not advisable to use Data.List for performance-sensitive
parts of an application.

Try using Data.Vector instead: http://hackage.haskell.org/package/vector
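
A rough sketch of what that can look like, using the vector-algorithms
package for the in-place sort:

import Control.Monad.ST (runST)
import Data.Int (Int32)
import qualified Data.Vector.Unboxed as U
import qualified Data.Vector.Algorithms.Intro as Intro

sortInt32s :: U.Vector Int32 -> U.Vector Int32
sortInt32s v = runST $ do
  mv <- U.thaw v          -- mutable copy, so the interface stays pure
  Intro.sort mv           -- in-place introsort
  U.unsafeFreeze mv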

On Sat, Aug 4, 2012 at 11:23 AM, David Feuer david.fe...@gmail.com wrote:

 I'm writing a toy program (for a SPOJ problem--see
 https://www.spoj.pl/problems/ABCDEF/ ) and the profiler says my
 performance problem is that I'm spending too much time sorting. I'm
 using Data.List.sort on [Int32] (it's a 32-bit architecture). Others,
 using other languages, have managed to solve the problem within the
 time limit using the same approach I've taken (I believe), but mine is
 taking too long. Any suggestions? Do I need to do something insane
 like sorting in an STUArray?

 David Feuer

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Knight Capital debacle and software correctness

2012-08-04 Thread Clark Gaebel
Yes.

On Sat, Aug 4, 2012 at 1:47 PM, Jay Sulzberger j...@panix.com wrote:



 On Sat, 4 Aug 2012, Jake McArthur jake.mcart...@gmail.com wrote:

  I feel like this thread is kind of surreal. Knight Capital's mistake
 was to use imperative programming styles? An entire industry is
 suffering because they haven't universally applied category theory to
 software engineering and live systems? Am I just a victim of a small
 troll/joke?

 - Jake


 ad application of category theory: No joke.

 Atul Gawande's book The Checklist Manifesto deals with some of
 this:

   
    http://us.macmillan.com/thechecklistmanifesto/AtulGawande

 In related news, for every type t of Haskell is it the case that
 something called _|_ is an object of the type?

 oo--JS.




 On Sat, Aug 4, 2012 at 12:46 PM, Jay Sulzberger j...@panix.com wrote:



 On Sat, 4 Aug 2012, Vasili I. Galchin vigalc...@gmail.com wrote:

  Hello Haskell Group,

I work in mainstream software industry.

I am going to make an assumption  except for Jane Street
 Capital all/most Wall Street software is written in an imperative
 language.

Assuming this why is Wall Street not awaken to the dangers. As I
 write, Knight Capital may not survive the weekend.


 Regards,

 Vasili



 I believe this particular mild error was in part due to a failure
 to grasp and apply category theory.  There are several systems here:

 1. The design of the code.

 2. The coding of the code.

 3. The testing of the code.

 4. The live running of the code.

 5. The watcher systems which watch the live running.

 If the newspaper reports are to be believed, the watcher systems,
 all of them, failed.  Or there was not even one watcher system
 observing/correcting/halting at the time of running.

 Category theory suggests that all of these systems are worthy of
 study, and that these systems have inter-relations, which are
 just as worthy of study.

 oo--JS.


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Knight Capital debacle and software correctness

2012-08-04 Thread Clark Gaebel
As far as I know, you can't check equivalence of _|_. Since Haskell uses
_|_ to represent a nonterminating computation, this would be
synonymous with solving the halting problem.

On Sat, Aug 4, 2012 at 2:04 PM, Jay Sulzberger j...@panix.com wrote:



 On Sat, 4 Aug 2012, Clark Gaebel cgae...@uwaterloo.ca wrote:

  Yes.


 Thank you!

 Further, if you want:

   Let us have two types s and t.  Let _|_^s be the _|_ for type s,
   and let _|_^t be the _|_ for type t.

   For which famous equivalences of the Haskell System are these two
   _|_ objects equivalent?

 oo--JS.



 On Sat, Aug 4, 2012 at 1:47 PM, Jay Sulzberger j...@panix.com wrote:



 On Sat, 4 Aug 2012, Jake McArthur jake.mcart...@gmail.com wrote:

  I feel like this thread is kind of surreal. Knight Capital's mistake

 was to use imperative programming styles? An entire industry is
 suffering because they haven't universally applied category theory to
 software engineering and live systems? Am I just a victim of a small
 troll/joke?

 - Jake


 ad application of category theory: No joke.

 Atul Gawande's book The Checklist Manifesto deals with some of
 this:

   
    http://us.macmillan.com/thechecklistmanifesto/AtulGawande
 


 In related news, for every type t of Haskell is it the case that
 something called _|_ is an object of the type?

 oo--JS.




  On Sat, Aug 4, 2012 at 12:46 PM, Jay Sulzberger j...@panix.com wrote:



 On Sat, 4 Aug 2012, Vasili I. Galchin vigalc...@gmail.com wrote:

  Hello Haskell Group,


I work in mainstream software industry.

I am going to make an assumption  except for Jane Street
 Capital all/most Wall Street software is written in an imperative
 language.

Assuming this why is Wall Street not awaken to the dangers. As I
 write, Knight Capital may not survive the weekend.


 Regards,

 Vasili



 I believe this particular mild error was in part due to a failure
 to grasp and apply category theory.  There are several systems here:

 1. The design of the code.

 2. The coding of the code.

 3. The testing of the code.

 4. The live running of the code.

 5. The watcher systems which watch the live running.

 If the newspaper reports are to be believed, the watcher systems,
 all of them, failed.  Or there was not even one watcher system
 observing/correcting/halting at the time of running.

 Category theory suggests that all of these systems are worthy of
 study, and that these systems have inter-relations, which are
 just as worthy of study.

 oo--JS.


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 




  ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 




 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Confused by ghci output

2012-05-31 Thread Clark Gaebel
*X> 3^40 `mod` 3 == modexp2 3 40 3
False
*X> modexp2 3 40 3
0
*X> 3^40 `mod` 3
0

I'm confused. Last I checked, 0 == 0.

Using GHC 7.4.1, and the file x.hs (which has been loaded in ghci) can be
found here: http://hpaste.org/69342

I noticed this after prop_sanemodexp was failing.

Any help would be appreciated,
  - Clark
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Confused by ghci output

2012-05-31 Thread Clark Gaebel
Wow, thanks! That was subtle.

  - Clark

On Thu, May 31, 2012 at 12:49 PM, Claude Heiland-Allen cla...@goto10.org wrote:

 Hi Clark,

 ghci is defaulting to Integer
 modexp2 forces Int
 Int overflows with 3^40


 On 31/05/12 17:35, Clark Gaebel wrote:

 *X> 3^40 `mod` 3 == modexp2 3 40 3
 False


 *X> fromInteger (3^40 `mod` 3) == modexp2 3 40 3
 True


 *X> modexp2 3 40 3
 0
 *X> 3^40 `mod` 3
 0


 *X> 3^40 `mod` 3 :: Int

 2

  I'm confused. Last I checked, 0 == 0.


 Int overflow is ugly!

 *X> 3^40
 12157665459056928801
 *X> maxBound :: Int
 9223372036854775807


 Claude
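
(A self-contained sketch of the same Int-versus-Integer behaviour, assuming a 64-bit Int; the names here are only illustrative:)

big :: Integer
big = 3 ^ 40                 -- 12157665459056928801, exact

wrapped :: Int
wrapped = 3 ^ 40             -- exceeds maxBound :: Int, so it wraps around

main :: IO ()
main = do
  print (big `mod` 3)        -- 0
  print (wrapped `mod` 3)    -- 2 on a 64-bit Int, because of the wrap-around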


 Using GHC 7.4.1, and the file x.hs (which has been loaded in ghci) can be
 found here: http://hpaste.org/69342

 I noticed this after prop_sanemodexp was failing.

 Any help would be appreciated,
   - Clark




 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Confused by ghci output

2012-05-31 Thread Clark Gaebel
The cafe is certainly responsive today!

Thanks everyone - got it. Int overflow ;)

Regards,
  - Clark
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Need inputs for a Haskell awareness presentation

2012-05-31 Thread Clark Gaebel
Regarding 2d, Debug.Trace is perfect for that.
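
A minimal sketch of what that looks like (trace lives in Debug.Trace in base; the traced function is just an example):

import Debug.Trace (trace)

-- trace prints its message on stderr when the surrounding thunk is forced,
-- so you can peek inside pure code without changing its type.
collatzLen :: Int -> Int
collatzLen 1 = 1
collatzLen n = trace ("collatzLen " ++ show n) $
  if even n then 1 + collatzLen (n `div` 2)
            else 1 + collatzLen (3 * n + 1)

main :: IO ()
main = print (collatzLen 6)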

On Thu, May 31, 2012 at 2:23 PM, C K Kashyap ckkash...@gmail.com wrote:

 Hi folks,

 I have the opportunity to make a presentation to folks (developers and
 managers) in my organization about Haskell - and why it's important - and
 why it's the only way forward. I request you to share your
 experiences/suggestions for the following -
 1. Any thoughts around the outline of the presentation - target audience
 being seasoned imperative programmers who love and live at the pinnacle of
  object oriented bliss.
 2. Handling questions/comments like these in witty/interesting ways -
 a) It looks good and mathematical but practically, what can we do with
 it, all our stuff is in C++
 b) Wow, what do you mean you cannot reason about its space complexity?
 c) Where's my inheritance?
 d) Debugging looks like a nightmare - we cannot even put a print in
 the function?
 e) Static types - in this day and age - come on - productivity in X is
 so much more - and that's because they got rid of type mess.
 f)  Is there anything serious/large written in it? [GHC will not
 qualify as a good answer I feel]
 g) Oh FP, as in Lisp, oh, that's AI stuff right ... we don't really do
 AI.
 h) Any other questions/comments that you may have heard.
 3. Ideas about interesting problems that can be used so that it appeals to
 people. I mean, while fibonacci etc look good but showing those examples
 tend to send the signal that it's good for those kind of problems.
 4. Is talking about or referring to Lambda calculus a good idea - I mean,
 showing that using its ultra simple constructs one could build up things
 like if/then etc

 I'm gonna do my bit to wear the limestone!!!

 Regards,
 Kashyap

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Large graphs

2012-05-20 Thread Clark Gaebel
I had issues with FGL in the past, too. Although FGL is really nice to
work with, it just uses a ridiculous amount of memory for large
graphs.

In the end, I used Data.Graph from containers [1]. This was a lot more
reasonable, and let me finish my project relatively easily.

Regards,
  - Clark

[1] 
http://hackage.haskell.org/packages/archive/containers/0.5.0.0/doc/html/Data-Graph.html
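
For reference, a minimal sketch of that approach (buildG, topSort and vertices live in Data.Graph from containers; the edge list here is made up):

import Data.Graph (buildG, topSort, vertices)

main :: IO ()
main = do
  -- Vertices are plain Ints in a fixed bounds range and edges are pairs,
  -- so the structure stays compact even for large graphs.
  let g = buildG (0, 4) [(0,1), (0,2), (1,3), (2,3), (3,4)]
  print (vertices g)   -- [0,1,2,3,4]
  print (topSort g)    -- one topological ordering of the DAG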

On Sun, May 20, 2012 at 10:55 AM, Serguey Zefirov sergu...@gmail.com wrote:
 2012/5/20 Benjamin Ylvisaker benjam...@fastmail.fm:
 I have a problem that I'm trying to use Haskell for, and I think I'm running 
 into scalability issues in FGL.  However, I am quite new to practical 
 programming in Haskell, so it's possible that I have some other bone-headed 
 performance bug in my code.  I tried looking around for concrete information 
 about the scalability of Haskell's graph libraries, but didn't find much.  
 So here are the characteristics of the problem I'm working on:

 - Large directed graphs.  Mostly 10k-100k nodes, but some in the low 100ks.
 - Sparse graphs.  The number of edges is only 2-3x the number of nodes.
 - Immutable structure, mutable labels.  After initially reading in the 
 graphs, their shape doesn't change, but information flows around the 
 graph, changing the labels on nodes and edges.

 I would like to suggest a representation based on 32-bit
 integers as vertex indices, i.e., roll your own.

 Use a strict IntMap IntSet for neighbor information; it is very efficient.
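
(A minimal sketch of that kind of representation; the type alias and helper names below are just illustrative:)

import qualified Data.IntMap.Strict as IM
import qualified Data.IntSet as IS

-- vertex -> set of successor vertices
type Graph = IM.IntMap IS.IntSet

fromEdges :: [(Int, Int)] -> Graph
fromEdges = IM.fromListWith IS.union . map (\(u, v) -> (u, IS.singleton v))

neighbors :: Graph -> Int -> IS.IntSet
neighbors g v = IM.findWithDefault IS.empty v g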

 I wrote some code that reads in graphs and some some basic flow computations 
 on them.  The first few graphs I tried were around 10k nodes, and the 
 performance was okay (on the order of several seconds).  When I tried some 
 larger graphs (~100k), the memory consumption spiked into multiple GB, the 
 CPU utilization went down to single digit percentages and the overall 
 running time was closer to hours than seconds.

 It looks like your code does not force everything; it leaves some thunks
 unevaluated. Check for that situation.

 It is a common pitfall, not only for computations on graphs.


 Because the graph structure is basically immutable for my problem, I'm 
 tempted to write my own graph representation based on mutable arrays.  
 Before I embark on that, I wonder if anyone else can share their experience 
 with large graphs in Haskell?  Is there a library (FGL or otherwise) that 
 should be able to scale up to the size of graph I'm interested in, if I 
 write my code correctly?

 The above structure (IntMap IntSet) allowed for fast computations on
 relatively large graphs, on the order of 1M vertices and 16M
 undirected / 32M directed edges.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Conduits and Unix Pipes

2012-04-07 Thread Clark Gaebel
Has anyone built an adapter between unix pipes and conduits? Something like:

upipe2conduit :: String -> Conduit Char Char

let someLines = "hello\nworld"
nLines <- (read . runResourceT $ sourceList someLines $= upipe2conduit "wc -l"
  $$ consume) :: Integer

If this has been done, is it on hackage?

Thanks,
  - clark
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Conduits and Unix Pipes

2012-04-07 Thread Clark Gaebel
Well look at that. Thanks!

On Sat, Apr 7, 2012 at 1:49 PM, Bin Jin bjin1...@gmail.com wrote:

 I think process-conduit is what you are looking for.
 On Apr 8, 2012 1:22 AM, Clark Gaebel cgae...@uwaterloo.ca wrote:

 Has anyone built an adapter between unix pipes and conduits? Something
 like:

 upipe2conduit :: String -> Conduit Char Char

 let someLines = "hello\nworld"
 nLines <- (read . runResourceT $ sourceList someLines $= upipe2conduit
 "wc -l" $$ consume) :: Integer

 If this has been done, is it on hackage?

 Thanks,
   - clark

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] thread killed

2012-04-04 Thread Clark Gaebel
Whenever I've deadlocked, it terminated the program with "thread
blocked indefinitely in an MVar operation".
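
(For reference, a minimal program that produces that message; the RTS throws BlockedIndefinitelyOnMVar once it can see that nothing will ever fill the MVar:)

import Control.Concurrent.MVar (MVar, newEmptyMVar, takeMVar)

main :: IO ()
main = do
  m <- newEmptyMVar :: IO (MVar ())
  takeMVar m   -- nothing can ever put into m, so the RTS aborts the wait
               -- with "thread blocked indefinitely in an MVar operation"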

On Wed, Apr 4, 2012 at 5:59 PM, tsuraan tsur...@gmail.com wrote:
 My Snap handlers communicate with various resource pools, often
 through MVars.  Is it possible that MVar deadlock would be causing the
 runtime system to kill off a contending thread, giving it a
 ThreadKilled exception?  It looks like ghc does do deadlock detection,
 but I can't find any docs on how exactly it deals with deadlocks.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Stack overflow while programming imperatively

2012-03-18 Thread Clark Gaebel
Hey list.

I was recently fixing a space leak by dropping down to imperative
programming in a section of my code, when it started developing space
leaks of its own.

I found the problem though - it was my for loop: http://hpaste.org/65514

Can anyone provide suggestions on why that stack overflows? It seems
ridiculously tail recursive. I tried to do it more haskell-like with
http://hpaste.org/65517, but it was still spending 75% of its time in
GC.

Is there any way to write such a loop with ~100% productivity? I don't
think there should be _any_ garbage generated, as all values may be
stack allocated. Is this a performance regression in GHC?

Thanks,
  - clark

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Stack overflow while programming imperatively

2012-03-18 Thread Clark Gaebel
Yay, that fixed it. Thanks!

On Sun, Mar 18, 2012 at 2:50 PM, Aleksey Khudyakov
alexey.sklad...@gmail.com wrote:
 On 18.03.2012 22:32, Clark Gaebel wrote:

 Hey list.

 I was recently fixing a space leak by dropping down to imperative
 programming in a section of my code, when it started developing space
 leaks of its own.

 I found the problem though - it was my for loop: http://hpaste.org/65514

 Can anyone provide suggestions on why that stack overflows? It seems
 ridiculously tail recursive. I tried to do it more haskell-like with
 http://hpaste.org/65517, but it was still spending 75% of its time in
 GC.

 Excessive laziness could be cleverly hiding. modifyIORef doesn't modify
 the IORef's value but builds a huge chain of thunks. When you try to evaluate
 it you get a stack overflow. Forcing the IORef's value will fix this space leak.

 You could use a strict version of modifyIORef:

 modifyIORef' x f = do
   a <- readIORef x
   writeIORef x $! f a
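
(For illustration, a small strict counting loop using it; forM_ comes from Control.Monad and the counter example is made up:)

import Control.Monad (forM_)
import Data.IORef (IORef, newIORef, readIORef, writeIORef)

modifyIORef' :: IORef a -> (a -> a) -> IO ()
modifyIORef' x f = do
  a <- readIORef x
  writeIORef x $! f a        -- $! forces the new value before it is stored

main :: IO ()
main = do
  counter <- newIORef (0 :: Int)
  forM_ [1 .. 1000000] $ \i ->
    modifyIORef' counter (+ i)
  print =<< readIORef counter   -- 500000500000, with no thunk build-up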

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Global Arrays

2012-03-12 Thread Clark Gaebel
Is there any proof of this? I'm not familiar enough with core to check.
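
(One way to check, for what it's worth: compile with optimisation and dump the simplified Core, e.g. ghc -O2 -ddump-simpl YourModule.hs, and look at what the top-level binding for globalArray has turned into. Those are standard GHC flags; the module name is just a placeholder.)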

On Mon, Mar 12, 2012 at 3:48 AM, Ketil Malde ke...@malde.org wrote:
 Clark Gaebel cgae...@csclub.uwaterloo.ca writes:

 In Haskell, what's the canonical way of declaring a top-level array
 (Data.Vector of a huge list of doubles, in my case)? Performance is
 key in my case.

 The straightforward way would just be something like:

 globalArray :: V.Vector Double
 globalArray = V.fromList [ huge list of doubles ]
 {-# NOINLINE globalArray #-}

 However, I don't want to have to run the fromList at runtime.

 I think GHC will convert it to an array (and in general evaluate
 constants) at compile time (probably requires -O).

 -k


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Global Arrays

2012-03-10 Thread Clark Gaebel
Wouldn't that still have to loop through the array (or in this case,
evaluate the monad) in order to use it the first time?

On Sat, Mar 10, 2012 at 2:22 AM, Alexandr Alexeev afis...@gmail.com wrote:
 what's the canonical way of declaring a top-level array
 Did you try State/StateT monads?

 On 10 March 2012 at 5:05, John Meacham j...@repetae.net wrote:

 On Fri, Mar 9, 2012 at 5:49 PM, Clark Gaebel
 cgae...@csclub.uwaterloo.ca wrote:
  What's the advantage of using D.A.Storable over D.Vector? And yes,
  good call with creating an array of HSDouble directly. I didn't think
  of that!

 Oh, looks like D.Vector has an unsafeFromForeignPtr too, I didn't see
  that. so D.Vector should work just fine. :)

    John

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




 --
 Regards, Alexandr
 Personal blog: http://eax.me/
 My forum: http://it-talk.org/
 My Twitter: http://twitter.com/afiskon


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Global Arrays

2012-03-09 Thread Clark Gaebel
In Haskell, what's the canonical way of declaring a top-level array
(Data.Vector of a huge list of doubles, in my case)? Performance is
key in my case.

The straightforward way would just be something like:

globalArray :: V.Vector Double
globalArray = V.fromList [ huge list of doubles ]
{-# NOINLINE globalArray #-}

However, I don't want to have to run the fromList at runtime. Not only
would this mean a bigger executable (having to store a linked list,
instead of an array), it would be quite inefficient since we don't
even use the source list!

Therefore, I was thinking of storing the array in a C file:

static const double globalArray[] = { huge list of doubles };
double* getGlobalArray() { return globalArray; }
int     getGlobalArraySize() { return sizeof(globalArray)/sizeof(globalArray[0]); }

And importing it in Haskell with the FFI, followed by an unsafeCast:

foreign import ccall unsafe "getGlobalArray" c_globalArray :: Ptr CDouble
foreign import ccall unsafe "getGlobalArraySize" c_globalArraySize :: CInt

globalArray :: V.Vector Double
globalArray = V.unsafeCast $ unsafeFromForeignPtr0 (unsafePerformIO $
newForeignPtr_ c_globalArray) (fromIntegral c_globalArraySize)
{-# NOINLINE globalArray #-}

But this version (clearly) is full of unsafety. Is there a better
way that I haven't thought of?

Regards,
  - clark

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Global Arrays

2012-03-09 Thread Clark Gaebel
What's the advantage of using D.A.Storable over D.Vector? And yes,
good call with creating an array of HSDouble directly. I didn't think
of that!

On Fri, Mar 9, 2012 at 8:25 PM, John Meacham j...@repetae.net wrote:
 On Fri, Mar 9, 2012 at 12:48 PM, Clark Gaebel
 cgae...@csclub.uwaterloo.ca wrote:
 static const double globalArray[] = { huge list of doubles };
 double* getGlobalArray() { return globalArray; }
 int        getGlobalArraySize() { return
 sizeof(globalArray)/sizeof(globalArray[0]); }

 And importing it in Haskell with the FFI, followed by an unsafeCast:

 foreign import ccall unsafe "getGlobalArray" c_globalArray :: Ptr CDouble
 foreign import ccall unsafe "getGlobalArraySize" c_globalArraySize :: CInt

 You can use Data.Array.Storable to do this.
 http://hackage.haskell.org/packages/archive/array/0.3.0.3/doc/html/Data-Array-Storable.html

 Also, there is no need to create stub C functions; you can foreign import
 the array directly.
 And if you don't want to cast between CDouble and Double, you can declare
 your array to be of HsDouble and #include "HsFFI.h":

 const HsDouble globalArray[] = { huge list of doubles };
 foreign import ccall unsafe "&globalArray" globalArray :: Ptr Double

    John


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Double-dispatch

2012-03-05 Thread Clark Gaebel
Is there any way in Haskell to have the correct function selected
based on the types of two different types? For example, let's say I'm
writing intersection tests:

aABBandAABB :: AABB -> AABB -> Bool
oBBandOBB :: OBB -> OBB -> Bool
oBBandPoint :: OBB -> Point -> Bool

Is there some way (such as with Type Families) that I can write some
sort of generic method for this:

intersects :: Intersectable -> Intersectable -> Bool

which automatically selects the right function to call based on the
types of the two intersectables?

Regards,
  - clark

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Double-dispatch

2012-03-05 Thread Clark Gaebel
Well look at that.

Thanks!

On Mon, Mar 5, 2012 at 4:07 PM, Felipe Almeida Lessa
felipe.le...@gmail.com wrote:
 {-# LANGUAGE MultiParamTypeClasses #-}

 class Intersectable a b where
  intersectsWith :: a -> b -> Bool

 --
 Felipe.
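
(To make that concrete, a sketch of how the instances might look for the original poster's types; the geometry types and the True results are stubs, not real intersection tests:)

{-# LANGUAGE MultiParamTypeClasses #-}

data AABB  = AABB
data OBB   = OBB
data Point = Point

class Intersectable a b where
  intersectsWith :: a -> b -> Bool

instance Intersectable AABB AABB where
  intersectsWith _ _ = True    -- real AABB/AABB test goes here

instance Intersectable OBB Point where
  intersectsWith _ _ = True    -- real OBB/point test goes here

-- intersectsWith AABB AABB  :: Bool
-- intersectsWith OBB  Point :: Bool
-- intersectsWith AABB Point  -- rejected: no instance, as intended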


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Double-dispatch

2012-03-05 Thread Clark Gaebel
Wow, that's a lot of freedom in the type system. Haskell never fails
to amaze me how it can remove features and increase expressiveness in
one fell swoop.

I also like how the user will get type errors if attempting
intersection between two geometries which do not have intersection
defined. It makes the API really intuitive.

In terms of the extra features, in my case (geometric intersection
tests), MultiParamTypeClasses seem to be the perfect fit. However,
thanks for giving me a much more comprehensive arsenal of type system
hacks!

Regards,
  - clark

On Tue, Mar 6, 2012 at 12:28 AM, wren ng thornton w...@freegeek.org wrote:
 On 3/5/12 4:24 PM, Clark Gaebel wrote:

 Well look at that.

 Thanks!

 On Mon, Mar 5, 2012 at 4:07 PM, Felipe Almeida Lessa
 felipe.le...@gmail.com  wrote:

 {-# LANGUAGE MultiParamTypeClasses #-}

 class Intersectable a b where
  intersectsWith :: a -> b -> Bool


 Assuming that intersectsWith is something like unification, equality, or
 similar operations:

 Do note that this can lead to needing quadratically many instances, about
 half of which will be redundant if intersectsWith is supposed to be
 symmetric.

 Often times we know that the vast majority of these quadratically many
 instances should be vacuous (i.e., always return False), and it'd be nice to
 avoid writing them out. This can be achieved via -XOverlappingInstances
 where you give a default instance:

    instance Intersectable a b where intersectsWith _ _ = False

 and then override it for more specific choices of a and b. Beware that if
 you want to have other polymorphic instances you may be forced to use
 -XIncoherentInstances, or else resolve the incoherence by filling out the
 lattice of instances.

 The other notable complication is if you want your collection of types to
 have more than just a set structure (e.g., if you want some kind of
 subtyping hierarchy). It's doable, but things get complicated quickly.


 Other than those caveats, have at it! The ability to do this sort of thing
 is part of what makes Haskell great. Few languages have multiple-dispatch
 this powerful.

 --
 Live well,
 ~wren

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

