Re: [Haskell-cafe] ANNOUNCE: taffybar: an alternative status bar for xmonad

2011-08-14 Thread Tristan Ravitch
On Sat, Aug 13, 2011 at 09:54:13PM -0700, Joel Burget wrote:
 This sounds really intriguing. Since I'm temporarily not using xmonad, and
 I'm sure others would like to see as well, could we get a screenshot?

Oops, how could I forget.

  http://pages.cs.wisc.edu/~travitch/taffybar.jpg

I have the xmonad log on the left, a CPU graph, memory graph,
date/time, weather, and then the system tray visible there.



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Invitation to connect on LinkedIn

2011-08-14 Thread Andrew Smith B.Sc(Hons),MBA
LinkedIn




   
I'd like to add you to my professional network on LinkedIn.

- Andrew

Andrew Smith B.Sc(Hons),MBA
Founder and CEO at VTRL - Value Technology Research Ltd 
Edinburgh, United Kingdom

Confirm that you know Andrew Smith B.Sc(Hons),MBA
https://www.linkedin.com/e/uc6lxc-grc8ge3y-3d/isd/3853824536/atjCCKYr/


 
-- 
© 2011, LinkedIn Corporation
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Invitation to connect on LinkedIn

2011-08-14 Thread Daniel Patterson
lol. I don't know Andrew Smith. How about y'all?

On Aug 14, 2011, at 12:32 PM, Andrew Smith B.Sc(Hons),MBA wrote:

 LinkedIn
 I'd like to add you to my professional network on LinkedIn.
 
 - Andrew
 
 Andrew Smith B.Sc(Hons),MBA
 Founder and CEO at VTRL - Value Technology Research Ltd 
 Edinburgh, United Kingdom
 Confirm that you know Andrew
 
 © 2011, LinkedIn Corporation

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Invitation to connect on LinkedIn

2011-08-14 Thread vagif . verdi
That's a clever way to build your resume. :))
Someone looking at his numerous contacts with the Haskell community may think, 
"Wow, this guy is some sort of Haskell guru. Better bring him in for our 
stock market division."

On Sunday, August 14, 2011 01:02:42 PM Daniel Patterson wrote:
 lol. I don't know Andrew Smith. How about y'all?
 
 On Aug 14, 2011, at 12:32 PM, Andrew Smith B.Sc(Hons),MBA wrote:
  LinkedIn
  I'd like to add you to my professional network on LinkedIn.
  
  - Andrew
  
  Andrew Smith B.Sc(Hons),MBA
  Founder and CEO at VTRL - Value Technology Research Ltd
  Edinburgh, United Kingdom
  Confirm that you know Andrew
  
  © 2011, LinkedIn Corporation
  

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Invitation to connect on LinkedIn

2011-08-14 Thread Brandon Allbery
On Sun, Aug 14, 2011 at 13:09, vagif.ve...@gmail.com wrote:

 That's a clever way to build your resume. :))


No, just LinkedIn being stupid; the sign-up form is designed to spam your
contacts, and you actually have to take some care to avoid it.

-- 
brandon s allbery  allber...@gmail.com
wandering unix systems administrator (available) (412) 475-9364 vm/sms
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Invitation to connect on LinkedIn

2011-08-14 Thread MigMit
Not so sure; his company's website has been under construction for more than a year, 
and after brief googling I still don't understand even what kind of business 
they are supposed to be in. Seems more likely that it's actually Andrew who 
does the spamming.

Sent from my iPad

14.08.2011, at 21:18, Brandon Allbery allber...@gmail.com wrote:

 On Sun, Aug 14, 2011 at 13:09, vagif.ve...@gmail.com wrote:
 That's a clever way to build your resume. :))
 
 No, just LinkedIn being stupid; the sign-up form is designed to spam your 
 contacts, and you actually have to take some care to avoid it.
  
 -- 
 brandon s allbery  allber...@gmail.com
 wandering unix systems administrator (available) (412) 475-9364 vm/sms
 
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Proposal #3339: Add (+) as a synonym for mappend

2011-08-14 Thread Brandon Allbery
On Sun, Aug 14, 2011 at 13:38, Yitzchak Gale g...@sefer.org wrote:

 Brandon Allbery wrote:
  Umm, I think the semigroups package will break everything that creates
  Monoid instances anyway.

 It has never broken anything for me. What do you mean?


Anything useful has to be modified to depend on Semigroup as well to get
mconcat or its replacement; that's why you jumped the proposal to begin
with.  As others have noted, this is a rather intrusive change to the
Haskell ecosystem.

*If* it has in fact been decided to go ahead with it, *then* this proposal
should be merged with yours, since the best time to do it is when mconcat is
being relocated anyway.  I don't think we've made it past the "if",
especially given the comments so far.

-- 
brandon s allbery  allber...@gmail.com
wandering unix systems administrator (available) (412) 475-9364 vm/sms
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Proposal #3339: Add (+) as a synonym for mappend

2011-08-14 Thread Yitzchak Gale
Brandon Allbery wrote:
 Anything useful has to be modified to depend on SemiGroup as well to get
 mconcat or its replacement; that's why you jumped the proposal to begin
 with

Not at all. Types with Monoid instances need an additional
instance, a Semigroup instance, in order to be able to use '<>' instead
of mappend. mconcat is not involved in this discussion.

That is the current situation. I am advocating leaving it that way.

 As others have noted, this is a rather intrusive change to the
 Haskell ecosystem.

Exporting <> from Data.Monoid is the intrusive change. I am strongly
against it. In the long run, it will create ugliness and inconvenience
for our class system, which has enough problems as it is. I advocate leaving
things as they are.

If individual library authors wish to add a Semigroup instance to their
Monoid instances as a convenience, which is harmless, I think that
would be wonderful. But that is a separate issue.
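
To make that concrete, here is a minimal sketch of such an opt-in instance
(the type MyLog is hypothetical, and the import assumes the semigroups
package's Data.Semigroup module; this is an illustration, not code from the
proposal):

  import Data.Monoid (Monoid (..))
  import qualified Data.Semigroup as S

  -- A hypothetical type with an existing Monoid instance.
  newtype MyLog = MyLog [String]

  -- The existing Monoid instance stays exactly as it is.
  instance Monoid MyLog where
    mempty = MyLog []
    MyLog xs `mappend` MyLog ys = MyLog (xs ++ ys)

  -- An additional, optional Semigroup instance lets users of the
  -- semigroups package write (<>) without touching the Monoid instance.
  instance S.Semigroup MyLog where
    (<>) = mappend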

Thanks,
Yitz

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] how to read CPU time vs wall time report from GHC?

2011-08-14 Thread Wishnu Prasetya

Hi guys,

I'm new to parallel programming with Haskell. I made a simple test 
program using the par combinator etc., and was a bit unhappy that it 
turns out to be slower than its sequential version. But first, I don't 
fully understand how to read the runtime report produced by GHC with the -s 
option:


  SPARKS: 5 (5 converted, 0 pruned)

  INIT    time    0.02s  (  0.01s elapsed)
  MUT     time    3.46s  (  0.89s elapsed)
  GC      time    5.49s  (  1.46s elapsed)
  EXIT    time    0.00s  (  0.00s elapsed)
  Total   time    8.97s  (  2.36s elapsed)

As I understand it from the documentation, the left time column is the 
CPU time, whereas the right one is the elapsed wall time. But how come 
the wall time is less than the CPU time? Isn't wall time = the user's 
perspective of time, so that it is CPU time + IO + etc.?


Any help?

--Wish.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Proposal #3339: Add (+) as a synonym for mappend

2011-08-14 Thread Chris Smith
On Sun, 2011-08-14 at 21:05 +0300, Yitzchak Gale wrote:
 Brandon Allbery wrote:
  Anything useful has to be modified to depend on SemiGroup as well to get
  mconcat or its replacement; that's why you jumped the proposal to begin
  with
 
 Not at all. Types with Monoid instances need an additional
 instance, a Semgroup instance

That does require depending on semigroups though, and I think that's
what Brandon was saying.

Of course, the obvious solution to this would be to promote semigroups,
e.g., by adding it to the Haskell Platform or including it in base...
but the current semigroups package is a bit heavyweight for that; it
exports four new modules for what is really a very simple concept!

-- 
Chris Smith


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] how to read CPU time vs wall time report from GHC?

2011-08-14 Thread Edward Z. Yang
Hello Wishnu,

That is slightly odd. What CPU and operating system are you running on?
Include the kernel version if Linux.

Cheers,
Edward

Excerpts from Wishnu Prasetya's message of Sun Aug 14 14:11:36 -0400 2011:
 Hi guys,
 
 I'm new in parallel programming with Haskell. I made a simple test 
 program using that par combinator etc, and was a bit unhappy that it 
 turns out to be  slower than its sequential version. But firstly, I dont 
 fully understand how to read the runtime report produced by GHC with -s 
 option:
 
  SPARKS: 5 (5 converted, 0 pruned)

  INIT    time    0.02s  (  0.01s elapsed)
  MUT     time    3.46s  (  0.89s elapsed)
  GC      time    5.49s  (  1.46s elapsed)
  EXIT    time    0.00s  (  0.00s elapsed)
  Total   time    8.97s  (  2.36s elapsed)
 
 As I understand it from the documentation, the left time-column is the 
 CPU time, whereas the right one is elapses wall time. But how come that 
 the wall time is less than the CPU time? Isn't wall time = user's 
 perspective of time; so that is CPU time + IO + etc?
 
 Any help?
 
 --Wish.
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] how to read CPU time vs wall time report from GHC?

2011-08-14 Thread Iustin Pop
On Sun, Aug 14, 2011 at 08:11:36PM +0200, Wishnu Prasetya wrote:
 Hi guys,
 
 I'm new in parallel programming with Haskell. I made a simple test
 program using that par combinator etc, and was a bit unhappy that it
 turns out to be  slower than its sequential version. But firstly, I
 dont fully understand how to read the runtime report produced by GHC
 with -s option:
 
  SPARKS: 5 (5 converted, 0 pruned)

  INIT    time    0.02s  (  0.01s elapsed)
  MUT     time    3.46s  (  0.89s elapsed)
  GC      time    5.49s  (  1.46s elapsed)
  EXIT    time    0.00s  (  0.00s elapsed)
  Total   time    8.97s  (  2.36s elapsed)
 
 As I understand it from the documentation, the left time-column is
 the CPU time, whereas the right one is elapses wall time. But how
 come that the wall time is less than the CPU time? Isn't wall time =
 user's perspective of time; so that is CPU time + IO + etc?

Yes, but if you have multiple CPUs, then CPU time accumulates faster
than wall-clock time.

Based on the above example, I guess you have, or you ran the program on, 4
cores (2.36 * 4 = 9.44, which means you got a very nice ~95%
efficiency).
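
For concreteness, a minimal sketch of the kind of par-based program that
produces such a report; the workload and the compile/run flags below are
assumptions, not the original poster's code:

  -- Minimal par/pseq sketch.  Compile and run with:
  --   ghc -O2 -threaded -rtsopts ParDemo.hs
  --   ./ParDemo +RTS -N4 -s
  --
  -- CPU time in the -s report is summed over all cores, so on 4 cores it
  -- can be up to 4x the elapsed (wall-clock) time.
  module Main (main) where

  import Control.Parallel (par, pseq)

  fib :: Integer -> Integer
  fib n | n < 2     = n
        | otherwise = fib (n - 1) + fib (n - 2)

  main :: IO ()
  main =
    let a = fib 33
        b = fib 32
    in a `par` (b `pseq` print (a + b))  -- spark a, evaluate b, then combine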

regards,
iustin

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] how to read CPU time vs wall time report from GHC?

2011-08-14 Thread Wishnu Prasetya

Hello Edward,

I'm using Windows 7 on an Intel i7 (4 cores with hyperthreading)...

--Wish.


Hello Wishnu,

That is slightly odd. What CPU and operating system are you running on?
Include Kernel versions if Linux.

Cheers,
Edward

Excerpts from Wishnu Prasetya's message of Sun Aug 14 14:11:36 -0400 2011:

Hi guys,

I'm new in parallel programming with Haskell. I made a simple test
program using that par combinator etc, and was a bit unhappy that it
turns out to be  slower than its sequential version. But firstly, I dont
fully understand how to read the runtime report produced by GHC with -s
option:

  SPARKS: 5 (5 converted, 0 pruned)

  INIT    time    0.02s  (  0.01s elapsed)
  MUT     time    3.46s  (  0.89s elapsed)
  GC      time    5.49s  (  1.46s elapsed)
  EXIT    time    0.00s  (  0.00s elapsed)
  Total   time    8.97s  (  2.36s elapsed)

As I understand it from the documentation, the left time-column is the
CPU time, whereas the right one is elapses wall time. But how come that
the wall time is less than the CPU time? Isn't wall time = user's
perspective of time; so that is CPU time + IO + etc?

Any help?

--Wish.




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] how to read CPU time vs wall time report from GHC?

2011-08-14 Thread Edward Z. Yang
Ah, good catch. :-)

Edward

Excerpts from Iustin Pop's message of Sun Aug 14 14:25:02 -0400 2011:
 On Sun, Aug 14, 2011 at 08:11:36PM +0200, Wishnu Prasetya wrote:
  Hi guys,
  
  I'm new in parallel programming with Haskell. I made a simple test
  program using that par combinator etc, and was a bit unhappy that it
  turns out to be  slower than its sequential version. But firstly, I
  dont fully understand how to read the runtime report produced by GHC
  with -s option:
  
  SPARKS: 5 (5 converted, 0 pruned)

  INIT    time    0.02s  (  0.01s elapsed)
  MUT     time    3.46s  (  0.89s elapsed)
  GC      time    5.49s  (  1.46s elapsed)
  EXIT    time    0.00s  (  0.00s elapsed)
  Total   time    8.97s  (  2.36s elapsed)
  
  As I understand it from the documentation, the left time-column is
  the CPU time, whereas the right one is elapses wall time. But how
  come that the wall time is less than the CPU time? Isn't wall time =
  user's perspective of time; so that is CPU time + IO + etc?
 
 Yes, but if you have multiple CPUs, then CPU time accumulates faster
 than wall-clock time.
 
 Based on the above example, I guess you have or you run the program on 4
 cores (2.36 * 4 = 9.44, which means you got a very nice ~95%
 efficiency).
 
 regards,
 iustin
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] how to read CPU time vs wall time report from GHC?

2011-08-14 Thread Wishnu Prasetya

On 14-8-2011 20:25, Iustin Pop wrote:

On Sun, Aug 14, 2011 at 08:11:36PM +0200, Wishnu Prasetya wrote:

Hi guys,

I'm new in parallel programming with Haskell. I made a simple test
program using that par combinator etc, and was a bit unhappy that it
turns out to be  slower than its sequential version. But firstly, I
dont fully understand how to read the runtime report produced by GHC
with -s option:

  SPARKS: 5 (5 converted, 0 pruned)

  INIT    time    0.02s  (  0.01s elapsed)
  MUT     time    3.46s  (  0.89s elapsed)
  GC      time    5.49s  (  1.46s elapsed)
  EXIT    time    0.00s  (  0.00s elapsed)
  Total   time    8.97s  (  2.36s elapsed)

As I understand it from the documentation, the left time-column is
the CPU time, whereas the right one is elapses wall time. But how
come that the wall time is less than the CPU time? Isn't wall time =
user's perspective of time; so that is CPU time + IO + etc?

Yes, but if you have multiple CPUs, then CPU time accumulates faster
than wall-clock time.

Based on the above example, I guess you have or you run the program on 4
cores (2.36 * 4 = 9.44, which means you got a very nice ~95%
efficiency).

regards,
iustin
That makes sense... But are you sure that's how I should read this? I 
don't want to celebrate too early.


--Wish.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell Actors, Linda, publish / subscribe models?

2011-08-14 Thread dokondr
On Sat, Aug 13, 2011 at 3:54 PM, dokondr doko...@gmail.com wrote:

 Hi,
 I am trying to figure out what Haskell libraries can be used to build
 publish / subscribe communication between threads running both in the same
 and different address spaces on the net.
 For my needs any of these models will work:
 - Actors [ http://en.wikipedia.org/wiki/Actor_model ]
 - Linda tuple space [
 http://en.wikipedia.org/wiki/Linda_%28coordination_language%29 ]
 - Publish / subscribe [
 http://en.wikipedia.org/wiki/Java_Message_Service#Publish.2Fsubscribe_model]

 I need to build a framework to coordinate task producers / consumers
 distributed in the same and different address spaces. I need to scale a data
 processing application in a somewhat Hadoop-like way, yet in a more flexible
 manner, without Hadoop-specific distributed-FS constraints.

 Looking through Applications and libraries/Concurrency and parallelism:

 http://www.haskell.org/haskellwiki/Applications_and_libraries/Concurrency_and_parallelism

 I found the Haskell actor package [
 http://hackage.haskell.org/cgi-bin/hackage-scripts/package/actor ], which
 fails to build with GHC 7.0.

 Please advise on the latest working libraries.

 Thanks!



 Has anybody used the Haskell actor package [
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/actor ]?
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] how to read CPU time vs wall time report from GHC?

2011-08-14 Thread Iustin Pop
On Sun, Aug 14, 2011 at 08:32:36PM +0200, Wishnu Prasetya wrote:
 On 14-8-2011 20:25, Iustin Pop wrote:
 On Sun, Aug 14, 2011 at 08:11:36PM +0200, Wishnu Prasetya wrote:
 Hi guys,
 
 I'm new in parallel programming with Haskell. I made a simple test
 program using that par combinator etc, and was a bit unhappy that it
 turns out to be  slower than its sequential version. But firstly, I
 dont fully understand how to read the runtime report produced by GHC
 with -s option:
 
  SPARKS: 5 (5 converted, 0 pruned)

  INIT    time    0.02s  (  0.01s elapsed)
  MUT     time    3.46s  (  0.89s elapsed)
  GC      time    5.49s  (  1.46s elapsed)
  EXIT    time    0.00s  (  0.00s elapsed)
  Total   time    8.97s  (  2.36s elapsed)
 
 As I understand it from the documentation, the left time-column is
 the CPU time, whereas the right one is elapses wall time. But how
 come that the wall time is less than the CPU time? Isn't wall time =
 user's perspective of time; so that is CPU time + IO + etc?
 Yes, but if you have multiple CPUs, then CPU time accumulates faster
 than wall-clock time.
 
 Based on the above example, I guess you have or you run the program on 4
 cores (2.36 * 4 = 9.44, which means you got a very nice ~95%
 efficiency).
 
 regards,
 iustin

 That makes sense... But are you sure thats how i should read this?

As far as I know, this is correct.

 I dont want to jump happy too early.

Well, your algorithm does work in parallel, but if you look at the GC/MUT
time, ~60% of the total runtime is spent in GC, so you have a space leak
or an otherwise inefficient algorithm. The final speedup is just
3.46s/2.36s, i.e. 1.46x instead of ~4x, so you still have some work to
do to make this better.

At least, this is how I read those numbers.

regards,
iustin

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Invitation to connect on LinkedIn

2011-08-14 Thread Ketil Malde
Andrew Smith B.Sc(Hons),MBA asmith9...@gmail.com writes:

 I'd like to add you to my professional network on LinkedIn.

Since LinkedIn tends to spam even more ambitiously than the other social
network sites, I have a procmail rule sending any mail from LinkedIn to
/dev/null.  But it doesn't work when it's sent via other mailing lists,
when some person of dubious faculty and questionable morals wants to add
an entire list to his professional network... 

-k
-- 
If I haven't seen further, it is by standing in the footprints of giants

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] how to read CPU time vs wall time report from GHC?

2011-08-14 Thread Daniel Fischer
On Sunday 14 August 2011, 21:53:21, Iustin Pop wrote:
 On Sun, Aug 14, 2011 at 08:32:36PM +0200, Wishnu Prasetya wrote:
  On 14-8-2011 20:25, Iustin Pop wrote:
  On Sun, Aug 14, 2011 at 08:11:36PM +0200, Wishnu Prasetya wrote:
  Hi guys,
  
  I'm new in parallel programming with Haskell. I made a simple test
  program using that par combinator etc, and was a bit unhappy that it
  turns out to be  slower than its sequential version. But firstly, I
  dont fully understand how to read the runtime report produced by GHC
  
  with -s option:
  SPARKS: 5 (5 converted, 0 pruned)

  INIT    time    0.02s  (  0.01s elapsed)
  MUT     time    3.46s  (  0.89s elapsed)
  GC      time    5.49s  (  1.46s elapsed)
  EXIT    time    0.00s  (  0.00s elapsed)
  Total   time    8.97s  (  2.36s elapsed)
  
  As I understand it from the documentation, the left time-column is
  the CPU time, whereas the right one is elapses wall time. But how
  come that the wall time is less than the CPU time? Isn't wall time =
  user's perspective of time; so that is CPU time + IO + etc?
  
  Yes, but if you have multiple CPUs, then CPU time accumulates
  faster than wall-clock time.
  
  Based on the above example, I guess you have or you run the program
  on 4 cores (2.36 * 4 = 9.44, which means you got a very nice ~95%
  efficiency).
  
  regards,
  iustin
  
  That makes sense... But are you sure thats how i should read this?
 
 As far as I know, this is correct.

It is indeed. CPU time is the sum of CPU time for all threads, which is 
typically larger than elapsed time when several threads run in parallel.

 
  I dont want to jump happy too early.
 
 Well, you algorithm does work in parallel, but if you look at the GC/MUT
 time, ~60% of the total runtime is spent in GC, so you have a space leak
 or an otherwise inefficient algorithm.

Not enough data to make more than guesses concerning the cause, but 60% GC 
definitely indicates a problem with the algorithm (resp. its 
implementation).

 The final speedup is just
 3.46s/2.36s, i.e. 1.46x instead of ~4x, so you still have some work to
 do to make this better.

We don't know the times for a non-threaded run (or an -N1 run), so it could 
be anything from a slowdown to a > 4× speedup (but it's likely to be a 
speedup by a factor < 4×).

 
 At least, this is how I read those numbers.
 
 regards,
 iustin

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Proposal #3339: Add (+) as a synonym for mappend

2011-08-14 Thread Edward Kmett
I originally didn't have the package exporting those things.

I would be amenable to standardization without them, but I use them in about
20 packages that are built on top of semigroups, and naturals and non-empty
lists come up when talking about semigroups a lot.

Rather than having them live way up in extension land with the rest of my
algebra libraries, I moved them down to where they could do some good and
admit some optimizations.

On Sun, Aug 14, 2011 at 2:21 PM, Chris Smith cdsm...@gmail.com wrote:

 On Sun, 2011-08-14 at 21:05 +0300, Yitzchak Gale wrote:
  Brandon Allbery wrote:
   Anything useful has to be modified to depend on SemiGroup as well to
 get
   mconcat or its replacement; that's why you jumped the proposal to begin
   with
 
  Not at all. Types with Monoid instances need an additional
  instance, a Semgroup instance

 That does require depending on semigroups though, and I think that's
 what Brandon was saying.

 Of course, the obvious solution to this would be to promote semigroups,
 e.g., by adding it to the Haskell Platform or including it in base...
 but the current semigroups package is a bit heavyweight for that; it
 exports four new modules for what is really a very simple concept!

 --
 Chris Smith



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Invitation to connect on LinkedIn

2011-08-14 Thread Brandon Allbery
On Sun, Aug 14, 2011 at 16:15, Ketil Malde ke...@malde.org wrote:

 Andrew Smith B.Sc(Hons),MBA asmith9...@gmail.com writes:
  I'd like to add you to my professional network on LinkedIn.

 Since LinkedIn tends to spam even more ambitiously than the other social
 network sites, I have a procmail rule sending any mail from Linkedin to


I should actually mention that this is a double infelicity:  the OP has a
gmail.com address.  gmail has an annoying tendency to add every address
you've ever sent mail to to your primary address book; so telling something
like LinkedIn to send invites to your address book, with the reasonable
expectation that that means your actual contacts, suddenly turns into
spamming the (your) known universe.

-- 
brandon s allbery  allber...@gmail.com
wandering unix systems administrator (available) (412) 475-9364 vm/sms
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] how to read CPU time vs wall time report from GHC?

2011-08-14 Thread Wishnu Prasetya

On 14-8-2011 22:17, Daniel Fischer wrote:

On Sunday 14 August 2011, 21:53:21, Iustin Pop wrote:

On Sun, Aug 14, 2011 at 08:32:36PM +0200, Wishnu Prasetya wrote:

On 14-8-2011 20:25, Iustin Pop wrote:

On Sun, Aug 14, 2011 at 08:11:36PM +0200, Wishnu Prasetya wrote:

Hi guys,

I'm new in parallel programming with Haskell. I made a simple test
program using that par combinator etc, and was a bit unhappy that it
turns out to be  slower than its sequential version. But firstly, I
dont fully understand how to read the runtime report produced by GHC

with -s option:
  SPARKS: 5 (5 converted, 0 pruned)

  INIT    time    0.02s  (  0.01s elapsed)
  MUT     time    3.46s  (  0.89s elapsed)
  GC      time    5.49s  (  1.46s elapsed)
  EXIT    time    0.00s  (  0.00s elapsed)
  Total   time    8.97s  (  2.36s elapsed)

As I understand it from the documentation, the left time-column is
the CPU time, whereas the right one is elapses wall time. But how
come that the wall time is less than the CPU time? Isn't wall time =
user's perspective of time; so that is CPU time + IO + etc?

Yes, but if you have multiple CPUs, then CPU time accumulates
faster than wall-clock time.

Based on the above example, I guess you have or you run the program
on 4 cores (2.36 * 4 = 9.44, which means you got a very nice ~95%
efficiency).

regards,
iustin

That makes sense... But are you sure thats how i should read this?

As far as I know, this is correct.

It is indeed. CPU time is the sum of CPU time for all threads, which is
typically larger than elapsed time when several threads run in parallel.


I dont want to jump happy too early.

Well, you algorithm does work in parallel, but if you look at the GC/MUT
time, ~60% of the total runtime is spent in GC, so you have a space leak
or an otherwise inefficient algorithm.

Not enough data to make more than guesses concerning the cause, but 60% GC
definitely indicates a problem with the algorithm (resp. its
implementation),


The final speedup is just
3.46s/2.36s, i.e. 1.46x instead of ~4x, so you still have some work to
do to make this better.

We don't know the times for a non-threaded run (or an -N1 run), so it could
be anything from a slowdown to a > 4× speedup (but it's likely to be a
speedup by a factor < 4×).


Well, the -N1 is below. The sequential version of the program has almost 
the same profile:


  SPARKS: 5 (1 converted, 4 pruned)

  INIT    time    0.00s  (  0.00s elapsed)
  MUT     time    2.78s  (  2.99s elapsed)
  GC      time    4.35s  (  4.15s elapsed)
  EXIT    time    0.00s  (  0.00s elapsed)
  Total   time    7.13s  (  7.14s elapsed)

Am I correct then to say that the speedup with respect to sequential is 
equal to: tot-elapse-time-N1 / tot-elapse-N4? So I have 7.14 / 2.36 = 
3.0 speedup, and not 1.46 as Iustin said?


I'll probably have to do something with that GC :)

--Wish.




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] how to read CPU time vs wall time report from GHC?

2011-08-14 Thread Daniel Fischer
On Sunday 14 August 2011, 22:42:13, Wishnu Prasetya wrote:
 On 14-8-2011 22:17, Daniel Fischer wrote:
  
  We don't know the times for a non-threaded run (or an -N1 run), so it
  could be anything from a slowdown to a > 4× speedup (but it's likely
  to be a speedup by a factor < 4×).
 
 Well, the -N1 is below. The sequential version of the program has almost
 the same profile:
 
  SPARKS: 5 (1 converted, 4 pruned)

  INIT    time    0.00s  (  0.00s elapsed)
  MUT     time    2.78s  (  2.99s elapsed)
  GC      time    4.35s  (  4.15s elapsed)
  EXIT    time    0.00s  (  0.00s elapsed)
  Total   time    7.13s  (  7.14s elapsed)
 
 Am I correct then to say that the speed up with respect to sequential is
 equal to: tot-elapse-time-N1 / tot-elapse-N4 ? So I have 7.14 / 2.36 =
 3.0 speed up, and not 1.46 as Iustin said?

Yes (with respect to wall-clock time of course).
That's not too bad, though it should be possible to increase that factor.
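
Written out, restating the arithmetic from this thread with the elapsed
(wall-clock) totals reported above:

  speedup    = T_elapsed(-N1) / T_elapsed(-N4) = 7.14 s / 2.36 s ≈ 3.0
  efficiency = speedup / cores                 = 3.0 / 4         ≈ 76%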

 
 I'll probably have to do something with that GC :)

But that should be the first target; there's probably some low-hanging 
fruit there.
Maybe increasing the size of the allocation area (+RTS -Ax) or the heap 
(+RTS -Hx) would already do some good.
Also do heap profiling to find out what most likely takes so much GC time 
(before compiling for profiling, running with +RTS -hT could produce useful 
information).


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Evolving faster Haskell programs - where are the files?

2011-08-14 Thread Robert Clausecker
I just read the enlightening article by Don Stewart [1] and wanted to
test this method on one of my own programs, but it seems that some of the
attached files are missing, namely [2], [3] and [4]. Does anybody know
where I can get those files? I don't really want to rewrite those files
by hand.

Yours, Robert Clausecker

[1]:
http://donsbot.wordpress.com/2009/03/09/evolving-faster-haskell-programs
[2]: http://galois.com/~dons/acovea/gcc43_core2.acovea
[3]: http://galois.com/~dons/acovea/ghc-basic.acovea
[4]: http://galois.com/~dons/acovea/ghc-6_10.acovea


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Evolving faster Haskell programs - where are the files?

2011-08-14 Thread Warren Henning
I mentioned this to him on Twitter a while ago. Presumably it has to
do with the fact that he's no longer at Galois.

Another unfortunate fact is that ACOVEA is at this point unmaintained,
and that is why the official homepage for it was removed. When I
emailed the author, he said he couldn't afford to spend the time it
was requiring. :(

I've wondered if it would be worth it to make a pure-Haskell
replacement specifically devoted to improving the performance of
Haskell programs, and not only optimizing a single objective but
presenting an efficient frontier between space usage and execution
time or whatever other criteria it makes sense to optimize for.

Warren

On Sun, Aug 14, 2011 at 2:13 PM, Robert Clausecker fuz...@gmail.com wrote:
 I just read the enlightening article by Don Stewart[1], and wanted to
 test this method on one of my own programs, but it seems that there some
 of the attached files are missing, namely [2], [3] and [4]. Is there
 anybody who knows, where I can get those files? I don't really want to
 rewrite those file by hand.

 Yours, Robert Clausecker

 [1]:
 http://donsbot.wordpress.com/2009/03/09/evolving-faster-haskell-programs
 [2]: http://galois.com/~dons/acovea/gcc43_core2.acovea
 [3]: http://galois.com/~dons/acovea/ghc-basic.acovea
 [4]: http://galois.com/~dons/acovea/ghc-6_10.acovea




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: taffybar: an alternative status bar for xmonad

2011-08-14 Thread Alexander Dunlap
On 13 August 2011 08:56, Tristan Ravitch travi...@cs.wisc.edu wrote:
 I've wanted a slightly fancier status bar than xmobar for a while, so
 I finally made one.  It uses gtk2hs and dbus extensively, so if you
 hate either of those things it probably isn't for you.  Being written
 in gtk, though, it can have more graphical widgets.

  http://hackage.haskell.org/package/taffybar

 Current feature highlights:

  * It has a system tray
  * Generic graph widget for things like CPU/memory
  * XMonad log over DBus so it can be restarted independently of xmonad
  * Graphical battery widget

 There is still a lot that I want to add but I figured getting some
 feedback early would be handy.  Documentation is currently at
 http://pages.cs.wisc.edu/~travitch/taffybar until I figure out how to
 appease Hackage (see the System.Taffybar module).





I apologize if I'm missing something obvious here, but when I try to
run taffybar I get

Launching custom binary /home/alex/.cache/taffybar/taffybar-linux-i386

taffybar-linux-i386: ConnectionError connectSession:
DBUS_SESSION_BUS_ADDRESS is missing or invalid.

Is there some D-BUS configuration that needs to happen before the
package is usable?

Thanks,
Alex

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: taffybar: an alternative status bar for xmonad

2011-08-14 Thread Tristan Ravitch
On Sun, Aug 14, 2011 at 02:45:24PM -0700, Alexander Dunlap wrote:

 I apologize if I'm missing something obvious here, but when I try to
 run taffybar I get

 Launching custom binary /home/alex/.cache/taffybar/taffybar-linux-i386

 taffybar-linux-i386: ConnectionError connectSession:
 DBUS_SESSION_BUS_ADDRESS is missing or invalid.

 Is there some D-BUS configuration that needs to happen before the
 package is usable?

Sorry, I assumed you would have DBus running already.  Add a
line like:

  eval `dbus-launch --auto-syntax`

early in your ~/.xsession (if logging in via some graphical login
manager) or ~/.xinitrc (if starting X via startx).  That should work
for any normal sh-style or csh-style shell.

That command starts DBus and sets the DBUS_SESSION* environment
variables, and both xmonad and taffybar need to see the same settings
for those variables, so make sure you execute that command before
starting either of them.

I'll add some notes in the documentation about this.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] how to read CPU time vs wall time report from GHC?

2011-08-14 Thread Wishnu Prasetya

On 14-8-2011 23:05, Daniel Fischer wrote:

On Sunday 14 August 2011, 22:42:13, Wishnu Prasetya wrote:

On 14-8-2011 22:17, Daniel Fischer wrote:

We don't know the times for a non-threaded run (or an -N1 run), so it
could be anything from a slowdown to a > 4× speedup (but it's likely
to be a speedup by a factor < 4×).

Well, the -N1 is below. The sequential version of the program has almost
the same profile:

  SPARKS: 5 (1 converted, 4 pruned)

  INIT    time    0.00s  (  0.00s elapsed)
  MUT     time    2.78s  (  2.99s elapsed)
  GC      time    4.35s  (  4.15s elapsed)
  EXIT    time    0.00s  (  0.00s elapsed)
  Total   time    7.13s  (  7.14s elapsed)

Am I correct then to say that the speed up with respect to sequential is
equal to: tot-elapse-time-N1 / tot-elapse-N4 ? So I have 7.14 / 2.36 =
3.0 speed up, and not 1.46 as Iustin said?

Yes (with respect to wall-clock time of course).
That's not too bad, though it should be possible to increase that factor.


I'll probably have to do something with that GC :)

But that should be the first target, there's probably some low-hanging
fruit there.
Maybe increasing the size of the allocation area (+RTS -Ax) or the heap
(+RTS -Hx) would already do some good.
Also do heap profiling to find out what most likely takes so much GC time
(before compiling for profiling, running with +RTS -hT could produce useful
information).

Ah ok. Thanks a lot for all the answers and tips.

--Wish.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] strictness properties of monoidal folds

2011-08-14 Thread Sebastian Fischer
Hello Alexey,

sorry for my slow response.

On Thu, Aug 4, 2011 at 7:10 AM, Alexey Khudyakov
alexey.sklad...@gmail.com wrote:

 On 02.08.2011 08:16, Sebastian Fischer wrote:

 Data.Foldable also provides the monoidal fold function foldMap. It is
 left unspecified whether the elements are accumulated leftwards,
 rightwards or in some other way, which is possible because the combining
 function is required to be associative. Does this additional freedom for
 implementors go so far as to allow for strict accumulation?

  Left and right folds behave identically for finite structures but they
 are different for infinite ones.


I agree that for types like normal lists that allow infinite structure it
makes sense to have different folds like foldr and foldl that are explicit
about the nesting of the combining function.

I don't think that there are laws for foldMap that require it to work for
infinite lists. One could even argue that the monoid laws imply that the
results for folding leftwards and rightwards should be equal, that is,
undefined.

For types that only allow finite sequences (like Data.Sequence or
Data.Vector) this is not an issue. But you convinced me that a strict and a
lazy foldMap can be distinguished even for list types that have a strict
append function, by using a lazy mappend function for accumulation.
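
As a small illustration of that last point (my own example, not from the
original discussion): with a Monoid like First, whose mappend is lazy in its
second argument, a lazy foldr-style foldMap and a strict foldl'-style one
behave observably differently:

  import Data.List (foldl')
  import Data.Monoid (First (..), Monoid (..))

  -- Lazy, foldr-style accumulation: First short-circuits on the first
  -- Just, so this terminates even on an infinite list.
  foldMapLazy :: Monoid m => (a -> m) -> [a] -> m
  foldMapLazy f = foldr (\x acc -> f x `mappend` acc) mempty

  -- Strict, foldl'-style accumulation: forces the whole list, so this
  -- diverges on an infinite list.
  foldMapStrict :: Monoid m => (a -> m) -> [a] -> m
  foldMapStrict f = foldl' (\acc x -> acc `mappend` f x) mempty

  main :: IO ()
  main = do
    print (getFirst (foldMapLazy (First . Just) [1 :: Int ..]))       -- Just 1
    -- print (getFirst (foldMapStrict (First . Just) [1 :: Int ..]))  -- never finishes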

Best regards,
Sebastian
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe