[Haskell-cafe] Weird problem: Couldn't read cabal file hashable/1.2.0.0/hashable.cabal

2013-01-06 Thread Eugene Kirpichov
Hi,

A friend of mine is trying to install my timeplotters (see email
signature).
He's getting a weird error: one of the tools installs fine but the other
one fails with the error "Couldn't read cabal file
hashable/1.2.0.0/hashable.cabal"; curiously, so does running "cabal
install cabal-install".

Even more curiously, hashable 1.2.0.0 does not appear anywhere in the list
of dependencies, even indirect. Look:

$ cabal -v3 install cabal-install
searching for ghc in path.
found ghc at /usr/bin/ghc
(/usr/bin/ghc,[--numeric-version])
/usr/bin/ghc is version 7.0.3
looking for tool ghc-pkg near compiler in /usr/bin
found ghc-pkg in /usr/bin/ghc-pkg
(/usr/bin/ghc-pkg,[--version])
/usr/bin/ghc-pkg is version 7.0.3
(/usr/bin/ghc,[--supported-languages])
(/usr/bin/ghc,[--info])
Reading installed packages...
(/usr/bin/ghc-pkg,[dump,--global,-v2])
(/usr/bin/ghc-pkg,[dump,--user,-v2])
(/usr/bin/ghc,[--print-libdir])
Reading available packages...
Resolving dependencies...
cabal: Couldn't read cabal file hashable/1.2.0.0/hashable.cabal

He's using haskell platform on Ubuntu 12.04.
$ ghc --version
The Glorious Glasgow Haskell Compilation System, version 7.0.3
$ cabal --version
cabal-install version 0.10.2
using version 1.10.1.0 of the Cabal library

He *did* try erasing ~/.cabal, ~/.ghc and /usr/lib/ghc and then
reinstalling the haskell-platform package - it didn't change anything. He
doesn't remember ever having Haskell installed before, but that can't be
completely ruled out.

I just tried installing a fresh Ubuntu 12.04 onto a VM and could not
reproduce his problem.

What additional information can I ask him for? Has somebody already
encountered this problem?

(Actually, I have a whole strace of "cabal install cabal-install", but it's
33 MB - I can upload it somewhere if it would be helpful.)

-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov
http://jkff.info/software/timeplotters - my performance visualization tools
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Weird problem: Couldn't read cabal file hashable/1.2.0.0/hashable.cabal

2013-01-06 Thread Eugene Kirpichov
The curious thing about that strace is that it doesn't ever attempt to open
hashable.cabal - in fact, it isn't attempting to open *any* .cabal file
whatsoever before failing!
So I'm totally confused as to why cabal might be complaining that it can't
open hashable.cabal.

On Sun, Jan 6, 2013 at 10:05 PM, Eugene Kirpichov ekirpic...@gmail.com wrote:

 Couldn't read cabal file




-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov
http://jkff.info/software/timeplotters - my performance visualization tools
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Decent docs to my visualization tools

2012-12-27 Thread Eugene Kirpichov
Hi cafe,

I just finally published a decent documentation website and binary
distributions for my log visualization tools written in Haskell -
http://jkff.info/software/timeplotters/ .

I think now they might actually stand a chance of starting to be adopted by
a wider circle than my friends :)

Feedback and sharing would be very welcome.

-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Incremental regular expressions - article and library

2012-12-18 Thread Eugene Kirpichov
Hi Haskellers,

I just published an article that can be interesting to lovers of functional
programming, even though it's not directly relevant to Haskell per se.

http://jkff.info/articles/ire/

It's based on Dan Piponi's blog post "Fast incremental regular expression
matching with monoids"
(http://blog.sigfpe.com/2009/01/fast-incremental-regular-expression.html), but
expands Dan's ideas to include *locating* matches, matching multiple
regular expressions at once, using a more compact data structure than
finger trees, etc.

And - sorry - the implementation is in Java, for reasons explained in the
article :)

-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] tplot (out of memory)

2012-11-30 Thread Eugene Kirpichov
Hi,

Wow, that's weird. I wonder what kinds of fonts were missing? I was just
using the default cairo font everywhere.


On Fri, Nov 30, 2012 at 11:27 PM, Malcolm Wallace malcolm.wall...@me.com wrote:

 For the record, it turned out that the key difference between the linux
 machines was the fonts packages installed via RPM.  The strace utility told
 me that the crash happened shortly after cairo/pango attempted (and failed)
 to open some font configuration files.  After installing some of the X11
 font packages (and some others too), the crash went away.

 On 18 Oct 2012, at 09:55, malcolm.wallace wrote:

  Did you ever solve this?  I have a similar message ( user error (out of
 memory) ) arising from a different app (not tplot) that uses the Haskell
 Chart library (and cairo underneath).  On some linux machines, it crashes,
 on others it works fine.  I can find no environment differences between the
 machines.  The app does not use a lot of memory, and the machine is not
 running out of physical or swap.
  Regards,
  Malcolm
 
 
  On 04 Sep, 2012,at 04:01 PM, Eugene Kirpichov ekirpic...@gmail.com
 wrote:
 
  Hi Manish,
 
  Please provide the input file, I'll debug this.
 
  On Mon, Sep 3, 2012 at 1:06 PM, Manish Trivedi trivman...@gmail.com
 wrote:
   Hi,
  
   I am running into a weird out of memory issue. While running timeplot
 over
   an input file having ~800 rows. From below provided info, seems like
 machine
   has enough ram (1849MB).
   Please let me know if anyone has pointers.
  
   # free -m
   total used free shared buffers cached
   Mem: 3825 1975 1849 0 13 71
   -/+ buffers/cache: 1891 1934
   Swap: 4031 111 3920
  
   #time tplot -o out.png -or 1024x768 -k 'CurrentPerHour' 'lines' -k
   'RequiredPerHour' 'lines' -if adgroup_delivery_chart.input -tf 'date
   %Y-%m-%d %H:%M:%OS'
  
   tplot: user error (out of memory)
  
   real 0m0.026s
   user 0m0.018s
   sys 0m0.008s
  
   -Manish
  
   ___
   Haskell-Cafe mailing list
   Haskell-Cafe@haskell.org
   http://www.haskell.org/mailman/listinfo/haskell-cafe
  
 
 
 
  --
  Eugene Kirpichov
  http://www.linkedin.com/in/eugenekirpichov
 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Maximum bipartite matching: 24 lines

2012-10-22 Thread Eugene Kirpichov
Hi,

fwd = foldr (\(x,y) -> M.insertWith (++) x [y]) M.empty $ S.toList g

Use foldl' here; foldr is useless in this case and only consumes
stack space, since your operation is strict.
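
Concretely, the strict version of that line would be something like (a rough
sketch against Data.Map from containers, written as a function of g; I
haven't tested it against the rest of your module):

import Data.List (foldl')
import qualified Data.Map as M
import qualified Data.Set as S

-- Strict left fold: the Map is threaded as the accumulator, so no
-- foldr-style chain of thunks builds up.
fwd :: (Ord a, Ord b) => S.Set (a, b) -> M.Map a [b]
fwd g = foldl' (\m (x, y) -> M.insertWith (++) x [y] m) M.empty (S.toList g)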

As for the actual code: I'd prefer the code itself to be more
readable, rather than have a lot of literate comments around it;
currently, IMO all the uncurry's, flips, eithers, maybes and
point-free style hurt readability heavily. I think it would be better
to devise your own very simple datatypes for this.

Maybe I'm too capricious or heretically un-Haskellish; I'll probably
try to write my own version as an exercise :)

On Mon, Oct 22, 2012 at 1:28 PM, Stefan Klinger
all-li...@stefan-klinger.de wrote:
 Hello.

 I have written a function that calculates maximum cardinality matchings
 on bipartite graphs.  It's only 24 lines of code.

 It seems (tested, not proven) to run faster, use less memory, and scale
 better than using MaxFlow from FGL with constant weight and additional
 source and sink nodes.  But it's not as good as Hopcroft–Karp would be.

 Attached is the module MaxMatching which also contains extensive
 documentation of the rationale behind its design.  I would hope to get
 any feedback on this: What do you think about the approach?  Did I
 oversee anything?  Do you know of any other purely functional solution
 to this problem?

 Just as an example, run `ghci MaxMatching.lhs` and then use

 matching $ S.fromList [(1,'a'),(1,'b'),(2,'a'),(2,'c'),(3,'b')]

 to calculate a maximum cardinality matching of the graph shown below.

 1---a      Note the somewhat awkward type of the matching
  \ /       function, returning a Map instead of a Set, with the
   X        edges being backwards!
  / \
 2   b      matching :: (Ord b, Ord a) => S.Set (a, b) -> M.Map b a
  \ /
   X        On my machine, it takes less than 2s on 10k edges
  / \       or 225 nodes.
 3   c

 Comments are welcome!

 Thank you!
 Stefan


 --
 Stefan Klinger  o/klettern
 /\/  bis zum
 send plaintext only - max size 32kB - no spam \   Abfallen
 http://stefan-klinger.de

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov
We're hiring! http://tinyurl.com/mirantis-openstack-engineer

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] tplot (out of memory)

2012-10-19 Thread Eugene Kirpichov
Hi,

I didn't - because I didn't run into this myself. Manish, maybe you did? :)

On Thu, Oct 18, 2012 at 1:55 AM, malcolm.wallace malcolm.wall...@me.com wrote:
 Did you ever solve this?  I have a similar message ( user error (out of
 memory) ) arising from a different app (not tplot) that uses the Haskell
 Chart library (and cairo underneath).  On some linux machines, it crashes,
 on others it works fine.  I can find no environment differences between the
 machines.  The app does not use a lot of memory, and the machine is not
 running out of physical or swap.

 Regards,
 Malcolm


 On 04 Sep, 2012,at 04:01 PM, Eugene Kirpichov ekirpic...@gmail.com wrote:

 Hi Manish,

 Please provide the input file, I'll debug this.

 On Mon, Sep 3, 2012 at 1:06 PM, Manish Trivedi trivman...@gmail.com wrote:
 Hi,

 I am running into a weird out of memory issue. While running timeplot over
 an input file having ~800 rows. From below provided info, seems like
 machine
 has enough ram (1849MB).
 Please let me know if anyone has pointers.

 # free -m
 total used free shared buffers cached
 Mem: 3825 1975 1849 0 13 71
 -/+ buffers/cache: 1891 1934
 Swap: 4031 111 3920

 #time tplot -o out.png -or 1024x768 -k 'CurrentPerHour' 'lines' -k
 'RequiredPerHour' 'lines' -if adgroup_delivery_chart.input -tf 'date
 %Y-%m-%d %H:%M:%OS'

 tplot: user error (out of memory)

 real 0m0.026s
 user 0m0.018s
 sys 0m0.008s

 -Manish

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




 --
 Eugene Kirpichov
 http://www.linkedin.com/in/eugenekirpichov

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov
We're hiring! http://tinyurl.com/mirantis-openstack-engineer

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] 64-bit vs 32-bit haskell platform on Mac: misleading notice on Platform website?

2012-10-11 Thread Eugene Kirpichov
Hi Mark,

Thank you for the clarification.

I think, then, that we should indeed provide a link to a notice about
native libraries - something like: "if you're using native libraries, make sure
their architecture is 32-bit or universal as well; in case of
MacPorts you can achieve this by doing 'port install libfoo +universal',
in case of brew by 'brew install libfoo --universal'".

On Mon, Oct 8, 2012 at 9:52 PM, Mark Lentczner mark.lentcz...@gmail.com wrote:
 I'm the source of the 32-bit recommendation, and the HP Mac distribution
 builder

 To summarize what I read in this thread:

 - 32-bit GHC/HP didn't work with 64-bit Cairo libs
 - Some libs available via brew were 64-bit, and 32-bit ones would have to be
   compiled
 - There is still some bug with 64-bit ghci and some graphics libs
 - There is a GHC bug with 64-bit on Mac (bug #7040), which isn't fixed until
   7.6

 There seemed to be the implication that a 64-bit ghc would work with 32-bit
 libs, but I don't think that's true. Mac doesn't (generally) support mixed
 modes in the same executable. All system libs are shipped dual-architecture.
 I don't think there are any pre-installed libs that are shipped 64-bit only.
 The problem seen with Cairo would cut both ways: If one had installed the
 32-bit version of Cairo, one would see the same problem with the 64-bit HP:
 wrong architecture.

 Since code compiled with the 32-bit system is both faster, and uses less
 memory, and it has been the case that all libs are either shipped dual-arch,
 or easily available as 32-bit, and there were known problems with the 64-bit
 version for some use cases, it seemed to me to be best to suggest the 32-bit
 version by default.

 The major source of the problems in the OP, seem to be that MacPorts and/or
 brew don't appear to follow the Mac OS X lib standard of installing libs
 dual arch. A brief look at the MacPorts page indicated that there were
 various rules (OS version and processor version) that determined which arch.
 it built by default. Perhaps we should tell people to install the HP
 architecture that matches the architecture that they use for MacPorts or
 brew. However, I bet most people don't know, so we'd need a pointer to where
 they could find out the defaults for those systems, or how to establish what
 their system is using.

 Finally, I note that HP 2012.4.0.0 (out in a month) will ship with GHC
 7.4.2, and so will still have the above bugs.

 - Mark


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov
We're hiring! http://tinyurl.com/mirantis-openstack-engineer

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] 64-bit vs 32-bit haskell platform on Mac: misleading notice on Platform website?

2012-10-08 Thread Eugene Kirpichov
Johan, should I also file the bugreport remove the suggestion to
install 32-bit platform there, or is there a different place for bugs
of the platform website?

On Mon, Oct 8, 2012 at 7:25 AM, Johan Tibell johan.tib...@gmail.com wrote:
 On Mon, Oct 8, 2012 at 3:28 AM, Christiaan Baaij
 christiaan.ba...@gmail.com wrote:
 Hi,

 I finally found another OS X mountain lion install and can confirm the 
 behaviour I described earlier:
 32-bit: compiled code works, interpreted code works
 64-bit: compiled code works, interpreted code fails

 Here's the test case:
 - cabal install gloss --flags-GLUT GLFW
 - cabal unpack gloss-examples
 - cd gloss-examples-1.7.6.2/picture/GameEvent
 - ghci -fno-ghci-sandbox Main.hs
 - main

 I get the following crash report: http://pastebin.com/jZjfFtm7

 The weird thing is the following:
 When I run 'ghci' from inside 'gdb' (to find the origin for the segfault), 
 everything works fine:
 ghci: segfault
 ghci from gdb: everything works

 I have no idea what's going on, so if anyone has any pointers on how to make 
 sure ghci behaves the same in gdb please let me know.

 Could you please file a bug report at:

 http://hackage.haskell.org/trac/ghc/

 Thanks!

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov
We're hiring! http://tinyurl.com/mirantis-openstack-engineer

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] 64-bit vs 32-bit haskell platform on Mac: misleading notice on Platform website?

2012-09-25 Thread Eugene Kirpichov
Hi,

I installed Haskell Platform 32-bit on a fresh 64-bit mac, because the
page http://www.haskell.org/platform/mac.html says: "Pick the 32-bit
version, unless you have a specific reason to use the 64-bit version.
The 32-bit one is slightly faster for most programs."

Then, during the installation of a package, the following happened:

Loading package cairo-0.12.3.1 ... command line: can't load .so/.DLL
for: /opt/local/lib/libz.dylib (dlopen(/opt/local/lib/libz.dylib, 9):
no suitable image found.  Did find:
        /opt/local/lib/libz.dylib: mach-o, but wrong architecture)

That libz.dylib is a 64-bit library and it can't be used by the 32-bit
Haskell Platform.

QUESTION: It seems to me that most people, at least most who care
about "slightly faster" programs, are likely to run into something
like this when using native 64-bit libraries. Compatibility exists only
in the opposite direction. Wouldn't it be appropriate to remove this
notice and ask people to use the 64-bit version unless they have a
specific reason not to?


-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov
We're hiring! http://tinyurl.com/mirantis-openstack-engineer

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Choosing color of the track

2012-09-18 Thread Eugene Kirpichov
Hi Manish,

The meaning of "@" is not what you think it is. It merely draws
colored bars; it does NOT control the color of other kinds of charts.

Here's how what you want can be achieved:
* Remove the @ lines
* Append a common prefix to the input tracks you want to be displayed
on the same output track, e.g. "t.":
2012-09-18 00:10:48,166 =t.CurrentPerHour13057 0.0
2012-09-18 00:10:58,155 =t.CurrentPerHour13057 0.0
..
2012-09-18 03:58:28,159 =t.PacingReqPerHr13057 2375.0534412521242
2012-09-18 03:58:38,161 =t.PacingReqPerHr13057 2375.383005739316

* Use the "within" diagram kind: tplot -dk 'within[.] lines'

Then you'll have a single output track where data from these input
tracks is displayed in different colors. However, you don't control
the precise colors (I just never really needed to control them, I only
needed them to be different for different tracks).

On Mon, Sep 17, 2012 at 9:40 PM, Manish Trivedi trivman...@gmail.com wrote:
 Hi Folks,

 I am trying to plot two tracks with different colors.
 Following is my input file,

 --
 2012-09-18 00:10:48,166 @CurrentPerHour13057 Red
 2012-09-18 00:10:48,166 =CurrentPerHour13057 0.0
 2012-09-18 00:10:58,155 =CurrentPerHour13057 0.0
 2012-09-18 00:11:08,203 =CurrentPerHour13057 0.0
 2012-09-18 00:11:18,166 =CurrentPerHour13057 0.0
 2012-09-18 00:11:28,159 =CurrentPerHour13057 0.0
 2012-09-18 00:11:38,170 =CurrentPerHour13057 0.0
 2012-09-18 00:11:48,175 =CurrentPerHour13057 0.0
 2012-09-18 00:11:58,174 =CurrentPerHour13057 0.0
 2012-09-18 00:12:08,216 =CurrentPerHour13057 0.0
 2012-09-18 00:12:18,218 =CurrentPerHour13057 0.0
 2012-09-18 03:58:28,159 @PacingReqPerHr13057 Blue
 2012-09-18 03:58:28,159 =PacingReqPerHr13057 2375.0534412521242
 2012-09-18 03:58:38,161 =PacingReqPerHr13057 2375.383005739316
 2012-09-18 03:58:48,175 =PacingReqPerHr13057 2375.713057258422
 2012-09-18 03:58:58,160 =PacingReqPerHr13057 2376.0422443063935
 2012-09-18 03:59:08,192 =PacingReqPerHr13057 2376.3730727321126
 2012-09-18 03:59:18,207 =PacingReqPerHr13057 2375.9038854561304
 2012-09-18 03:59:28,168 =PacingReqPerHr13057 2376.232615497
 2012-09-18 03:59:38,156 =PacingReqPerHr13057 2376.5619853019184
 2012-09-18 03:59:48,160 =PacingReqPerHr13057 2376.892145685344
 2012-09-18 03:59:58,160 =PacingReqPerHr13057 2377.65743067
 -

 tplot -o /root/color.png  -or 1024x768 -k 'CurrentPerHour' 'lines' -k
 'PacingReqPerHr' 'lines' -if /root/color.input -tf 'date %Y-%m-%d %H:%M:%OS'

 I cant get to print plots in different color. Could anyone point out what am
 I doing wrong?

 Thanks,
 Manish

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov
We're hiring! http://tinyurl.com/mirantis-openstack-engineer

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Choosing color of the track

2012-09-18 Thread Eugene Kirpichov
Hi,

Suppose you have data like this:

2012-09-18 00:10:48,166 =responseTime 53
...

Then you should just use -dk 'quantile 0.95' and you'll see a graph of
stacked bars like [XXX][YYY] (but vertical) where XXX is min..95%
and YYY is 95%..max.

On Tue, Sep 18, 2012 at 11:44 AM, Manish Trivedi trivman...@gmail.com wrote:
 Thanks a ton Eugene. that worked like a charm :) Appreciate you looking into
 this and suggesting me the correct approach.
 I have another question that if I had single track storing response time of
 each request, how could I get 95th percentile of the response time. I know
 TimePlot wiki has quantile example but I havent been able to successfully
 use it.

 Regards,
 -Manish


 On Tue, Sep 18, 2012 at 10:48 AM, Eugene Kirpichov ekirpic...@gmail.com
 wrote:

 Hi Manish,

 The meaning of @ is not what you think it is. It merely draws
 colored bars, it does NOT control the color of other kinds of charts.

 Here's how what you want can be achieved:
 * Remove the @ lines
 * Append a common prefix to the input tracks you want to be displayed
 on the same output track, e.g. t.:
 2012-09-18 00:10:48,166 =t.CurrentPerHour13057 0.0
 2012-09-18 00:10:58,155 =t.CurrentPerHour13057 0.0
 ..
 2012-09-18 03:58:28,159 =t.PacingReqPerHr13057 2375.0534412521242
 2012-09-18 03:58:38,161 =t.PacingReqPerHr13057 2375.383005739316

 * Use the within diagram kind: tplot  -dk ''within[.] lines

 Then you'll have a single output track where data from these input
 tracks is displayed with different color. However, you don't control
 the precise color (I just never really needed to control it, I only
 needed it to be different for different tracks).

 On Mon, Sep 17, 2012 at 9:40 PM, Manish Trivedi trivman...@gmail.com
 wrote:
  Hi Folks,
 
  I am trying to plot two tracks with different colors.
  Following is my input file,
 
  --
  2012-09-18 00:10:48,166 @CurrentPerHour13057 Red
  2012-09-18 00:10:48,166 =CurrentPerHour13057 0.0
  2012-09-18 00:10:58,155 =CurrentPerHour13057 0.0
  2012-09-18 00:11:08,203 =CurrentPerHour13057 0.0
  2012-09-18 00:11:18,166 =CurrentPerHour13057 0.0
  2012-09-18 00:11:28,159 =CurrentPerHour13057 0.0
  2012-09-18 00:11:38,170 =CurrentPerHour13057 0.0
  2012-09-18 00:11:48,175 =CurrentPerHour13057 0.0
  2012-09-18 00:11:58,174 =CurrentPerHour13057 0.0
  2012-09-18 00:12:08,216 =CurrentPerHour13057 0.0
  2012-09-18 00:12:18,218 =CurrentPerHour13057 0.0
  2012-09-18 03:58:28,159 @PacingReqPerHr13057 Blue
  2012-09-18 03:58:28,159 =PacingReqPerHr13057 2375.0534412521242
  2012-09-18 03:58:38,161 =PacingReqPerHr13057 2375.383005739316
  2012-09-18 03:58:48,175 =PacingReqPerHr13057 2375.713057258422
  2012-09-18 03:58:58,160 =PacingReqPerHr13057 2376.0422443063935
  2012-09-18 03:59:08,192 =PacingReqPerHr13057 2376.3730727321126
  2012-09-18 03:59:18,207 =PacingReqPerHr13057 2375.9038854561304
  2012-09-18 03:59:28,168 =PacingReqPerHr13057 2376.232615497
  2012-09-18 03:59:38,156 =PacingReqPerHr13057 2376.5619853019184
  2012-09-18 03:59:48,160 =PacingReqPerHr13057 2376.892145685344
  2012-09-18 03:59:58,160 =PacingReqPerHr13057 2377.65743067
  -
 
  tplot -o /root/color.png  -or 1024x768 -k 'CurrentPerHour' 'lines' -k
  'PacingReqPerHr' 'lines' -if /root/color.input -tf 'date %Y-%m-%d
  %H:%M:%OS'
 
  I cant get to print plots in different color. Could anyone point out
  what am
  I doing wrong?
 
  Thanks,
  Manish
 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 



 --
 Eugene Kirpichov
 http://www.linkedin.com/in/eugenekirpichov
 We're hiring! http://tinyurl.com/mirantis-openstack-engineer





-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov
We're hiring! http://tinyurl.com/mirantis-openstack-engineer

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Choosing color of the track

2012-09-17 Thread Eugene Kirpichov
Hi,

I'm the author and I'll reply tomorrow, sorry, going to sleep now :) thanks for 
the interest in the tools anyway!



17.09.2012, at 21:40, Manish Trivedi trivman...@gmail.com wrote:

 Hi Folks,
 
 I am trying to plot two tracks with different colors.
 Following is my input file,
 
 --
 2012-09-18 00:10:48,166 @CurrentPerHour13057 Red
 2012-09-18 00:10:48,166 =CurrentPerHour13057 0.0
 2012-09-18 00:10:58,155 =CurrentPerHour13057 0.0
 2012-09-18 00:11:08,203 =CurrentPerHour13057 0.0
 2012-09-18 00:11:18,166 =CurrentPerHour13057 0.0
 2012-09-18 00:11:28,159 =CurrentPerHour13057 0.0
 2012-09-18 00:11:38,170 =CurrentPerHour13057 0.0
 2012-09-18 00:11:48,175 =CurrentPerHour13057 0.0
 2012-09-18 00:11:58,174 =CurrentPerHour13057 0.0
 2012-09-18 00:12:08,216 =CurrentPerHour13057 0.0
 2012-09-18 00:12:18,218 =CurrentPerHour13057 0.0
 2012-09-18 03:58:28,159 @PacingReqPerHr13057 Blue
 2012-09-18 03:58:28,159 =PacingReqPerHr13057 2375.0534412521242
 2012-09-18 03:58:38,161 =PacingReqPerHr13057 2375.383005739316
 2012-09-18 03:58:48,175 =PacingReqPerHr13057 2375.713057258422
 2012-09-18 03:58:58,160 =PacingReqPerHr13057 2376.0422443063935
 2012-09-18 03:59:08,192 =PacingReqPerHr13057 2376.3730727321126
 2012-09-18 03:59:18,207 =PacingReqPerHr13057 2375.9038854561304
 2012-09-18 03:59:28,168 =PacingReqPerHr13057 2376.232615497
 2012-09-18 03:59:38,156 =PacingReqPerHr13057 2376.5619853019184
 2012-09-18 03:59:48,160 =PacingReqPerHr13057 2376.892145685344
 2012-09-18 03:59:58,160 =PacingReqPerHr13057 2377.65743067
 -
 
 tplot -o /root/color.png  -or 1024x768 -k 'CurrentPerHour' 'lines' -k 
 'PacingReqPerHr' 'lines' -if /root/color.input -tf 'date %Y-%m-%d %H:%M:%OS'
 
 I cant get to print plots in different color. Could anyone point out what am 
 I doing wrong?
 
 Thanks,
 Manish
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Help a young graduate haskeller to land its dream job

2012-09-12 Thread Eugene Kirpichov
Hi Alfredo,

You might look at the various big-data companies. I was surprised by
how many of them are using Scala or Clojure - it's definitely over
50%. Looks like FP is really gaining traction in this area.

On Wed, Sep 12, 2012 at 11:48 AM, Alfredo Di Napoli
alfredo.dinap...@gmail.com wrote:
 Hi everyone,

 If this mail sound strange to you, you are free to ignore it.

 My name is Alfredo Di Napoli and I'm a 24-year-old programmer from
 Rome, Italy. I've graduated in May and I'm currently working as an intern
 for a company involved in the defence field.

 In my spare time, though, I study functional programming, especially
 Haskell. FP is my true passion and I'm another dreamer trying to land the
 job he loves.

 In a nutshell I'm looking for every possibility to do Haskell/functional
 programming in Europe/North Europe. I'm throwing this stone into this pond
 because life has endless possibilities, who knows? :)

 A disclaimer, though: I'm not an expert Haskeller, but I'm very passionate
 about technology and I love learning (I've obviously already read LYAH and
 RWH). You can find more information about me (including my CV if interested)
 here:

 www.alfredodinapoli.com

 Oh! One last thing! I would be very grateful to everyone willing to spent
 two minutes of his time giving me any kind of suggestion about the FP job
 world or how to prepare/improve myself for the foreseeable future.

 Thanks again,
 and sorry for the OT/spammish plug.

 Humbly,
 Alfredo Di Napoli

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov
We're hiring! http://tinyurl.com/mirantis-openstack-engineer

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] tplot (out of memory)

2012-09-04 Thread Eugene Kirpichov
Hi Manish,

Please provide the input file, I'll debug this.

On Mon, Sep 3, 2012 at 1:06 PM, Manish Trivedi trivman...@gmail.com wrote:
 Hi,

 I am running into a weird out-of-memory issue while running timeplot over
 an input file having ~800 rows. From the info provided below, it seems the
 machine has enough RAM (1849 MB free).
 Please let me know if anyone has pointers.

 # free -m
              total       used       free     shared    buffers     cached
 Mem:          3825       1975       1849          0         13         71
 -/+ buffers/cache:       1891       1934
 Swap:         4031        111       3920

 #time tplot -o out.png  -or 1024x768 -k 'CurrentPerHour' 'lines' -k
 'RequiredPerHour' 'lines' -if adgroup_delivery_chart.input  -tf 'date
 %Y-%m-%d %H:%M:%OS'

 tplot: user error (out of memory)

 real    0m0.026s
 user    0m0.018s
 sys     0m0.008s

 -Manish

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] wondering about a MonadIO instance for a heap data type

2012-07-11 Thread Eugene Kirpichov
Use Debug.Trace.
It does not make sense to declare that a heap is a monad: a monad is an
abstraction of sequencing computations, and a heap is not an abstraction of
sequencing computations at all. You don't make your String class implement the
rendering-engine interface just because you want to use it in a computer game;
equally, you don't pretend that a heap is a way of sequencing
computations just because you want to sequence computations related to heaps.

The actual computation in your case is the heapsort function, not the heap. If
you absolutely must use IO, add IO to the function's type.
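
For illustration, a minimal sketch (my own, assuming the usual leftist-heap
operations rather than your actual code) of tracing intermediate heaps with
Debug.Trace while keeping heapsort pure:

import Debug.Trace (traceShow)

data Heap a = E | T Int a (Heap a) (Heap a)
  deriving (Eq, Ord, Read, Show)

rank :: Heap a -> Int
rank E = 0
rank (T r _ _ _) = r

-- Standard leftist-heap smart constructor: keep the larger rank on the left.
makeT :: a -> Heap a -> Heap a -> Heap a
makeT x l r
  | rank l >= rank r = T (rank r + 1) x l r
  | otherwise        = T (rank l + 1) x r l

merge :: Ord a => Heap a -> Heap a -> Heap a
merge h E = h
merge E h = h
merge h1@(T _ x l1 r1) h2@(T _ y l2 r2)
  | x <= y    = makeT x l1 (merge r1 h2)
  | otherwise = makeT y l2 (merge h1 r2)

insert :: Ord a => a -> Heap a -> Heap a
insert x = merge (T 1 x E E)

-- heapsort stays pure; traceShow prints each intermediate heap as it is
-- drained, which is all the "liftIO during a heapsort" was needed for.
heapsort :: (Show a, Ord a) => [a] -> [a]
heapsort = drain . foldr insert E
  where
    drain E = []
    drain h@(T _ x l r) = traceShow h (x : drain (merge l r))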

11.07.2012, at 15:19, Qi Qi qiqi...@gmail.com wrote:

 Hi,
 
 I was wondering about creating an instance of MonadIO for a heap data.
 Any hints?
 
 data Heap a = E | T Int a (Heap a) (Heap a)
 deriving (Eq, Ord, Read, Show)
 
 The reason is that I want to use liftIO during a heapsort to print out
 intermediate results.
 
 Thanks.
 
 Qi Qi
 
 
 
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] wondering about a MonadIO instance for a heap data type

2012-07-11 Thread Eugene Kirpichov
List is a monad because it has an associated way of sequencing
computations: namely, collect the results of the second computation
invoked on all results of the first computation. That's not because
List is a data structure (as Heap is); it's because it is
associated with the computational abstraction of computations with
multiple results. The heap data structure does not have an associated
computational abstraction, at least not one that I'm aware of, and
definitely not one that would make any sense in the context of
heapsort. So it doesn't make sense to pretend that it does.
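
For concreteness (my own toy example, not from the thread), that sequencing
for lists looks like:

pairs :: [(Int, Char)]
pairs = do
  x <- [1, 2, 3]      -- first computation: three results
  c <- "ab"           -- second computation, run for each of them
  return (x, c)
-- pairs == [(1,'a'),(1,'b'),(2,'a'),(2,'b'),(3,'a'),(3,'b')]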

On Wed, Jul 11, 2012 at 5:00 PM, Qi Qi qiqi...@gmail.com wrote:
 List [] is a monad, why not for heap data. Heap data could be an instance of
 Monad too.
 I have the heapsort function, and just wanted to rewrite a verbose version
 of it by using liftIO.
 But I would look into Debug.Trace. Thanks for your hint.

 Qi


 On Wednesday, July 11, 2012 5:28:17 PM UTC-5, Eugene Kirpichov wrote:

 Use Debug.Trace.
 It does not make sense to declare that heap is a monad, as a monad is an
 abstraction of sequencing computations, and a heap is not an abstraction of
 sequencing computations at all. You don't make your String class implement
 the rendering engine interface just because you want to use it in a
 computer game program, equally you dont pretend that a heap is a way of
 sequencing computations just because you want to sequence computations
 related to heaps.

 The actual computation in your case is the heapsort function, not the
 heap. If you absolutely must use IO, add IO to the functions type.

 11.07.2012, at 15:19, Qi Qi qiqi...@gmail.com wrote:

  Hi,
 
  I was wondering about creating an instance of MonadIO for a heap data.
  Any hints?
 
  data Heap a = E | T Int a (Heap a) (Heap a)
  deriving (Eq, Ord, Read, Show)
 
  The reason is that I want to use liftIO during a heapsort to print out
  intermediate results.
 
  Thanks.
 
  Qi Qi
 
 
 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to select n random words from a file ...

2012-06-10 Thread Eugene Kirpichov
Hi,

Look up reservoir sampling, it will most probably help.
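
In case it helps, here's a rough sketch of Algorithm R (my own illustration;
the names sample/replaceAt are made up, and it assumes whitespace-separated
words on stdin):

import System.Random (randomRIO)

-- Keep n items chosen uniformly at random from a stream of unknown length,
-- in one pass and O(n) memory.
sample :: Int -> [a] -> IO [a]
sample n xs0 = go (drop n xs0) (n + 1) (take n xs0)
  where
    go []     _ res = return res
    go (x:xs) i res = do
      j <- randomRIO (1, i)            -- 1-based position in the stream
      let res' | j <= n    = replaceAt (j - 1) x res
               | otherwise = res
      go xs (i + 1) res'
    replaceAt k y ys = take k ys ++ [y] ++ drop (k + 1) ys

main :: IO ()
main = do
  ws <- fmap words getContents
  picked <- sample 5 ws
  mapM_ putStrLn picked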

On Sun, Jun 10, 2012 at 6:21 AM, Noon Silk noonsli...@gmail.com wrote:
 Hi,

  I'm clearly new to Haskell, and I suppose this is a basic question,
 but after a bit of searching I've been unable to figure out the best
 way to do this. I was first trying to find out how to, say, get a
 random element from a list, but I'm starting to think that may not be
 the best way (because list access isn't constant time, among other
 reasons). So, I wonder if there is a more direct way to just get a
 random word from a wordfile (i.e. one word per line) ... can anyone
 suggest a method?

  Ideally I'd like to be able to adapt this to get n random words from
 several files (i.e. randomly select n words from all files; i.e. just
 read them into one merged array and choose from there?)

  Thanks for any advice ...

 --
 Noon Silk

 Fancy a quantum lunch? https://sites.google.com/site/quantumlunch/

 Every morning when I wake up, I experience an exquisite joy — the joy
 of being this signature.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Finding the average in constant space

2012-05-27 Thread Eugene Kirpichov
A lot of people have done this :) E.g. from me: google up a fairly recent thread
from me about processing streams, and perhaps the keyword "timeplot" (writing
from a dying phone, can't do it myself).
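
For reference, a minimal sketch (purely illustrative - this is not Chris's
actual gist, which is linked below) of the one-pass composable-folds idea:

{-# LANGUAGE ExistentialQuantification #-}
import Control.Applicative

-- A fold is a step function, an initial state, and a way to read the
-- result out of the final state.
data Fold a b = forall s. Fold (s -> a -> s) s (s -> b)

instance Functor (Fold a) where
  fmap f (Fold step z out) = Fold step z (f . out)

instance Applicative (Fold a) where
  pure x = Fold const () (const x)
  Fold stepF zF outF <*> Fold stepX zX outX =
    Fold (\(sF, sX) a -> (stepF sF a, stepX sX a))
         (zF, zX)
         (\(sF, sX) -> outF sF (outX sX))

-- One traversal of the list, however many folds were combined.
foldLeft :: Fold a b -> [a] -> b
foldLeft (Fold step z out) = out . foldl step z

sumF, lengthF :: Fold Double Double
sumF    = Fold (+) 0 id
lengthF = Fold (\n _ -> n + 1) 0 id

average :: [Double] -> Double
average = foldLeft ((/) <$> sumF <*> lengthF)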



27.05.2012, at 12:04, Chris Wong chrisyco+haskell-c...@gmail.com wrote:

 Hello all
 
 I just came up with a way of executing multiple folds in a single
 pass. In short, we can write code like this:
 
    average = foldLeft $ (/) <$> sumF <*> lengthF
 
 and it will only traverse the input list once.
 
 The code is at: https://gist.github.com/2802644
 
 My question is: has anyone done this already? If not, I might release
 this on Hackage -- it seems quite useful.
 
 Chris
 
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Google Summer of Code - Lock-free data structures

2012-03-29 Thread Eugene Kirpichov
Though of course you can implement CAS in terms of STM, CAS is much
lower-level and will probably be many times (though not
asymptotically) faster.
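
Roughly, the difference in flavour (a toy sketch of my own, not from the
ticket; as far as I know GHC implements atomicModifyIORef with a CAS retry
loop in the RTS):

import Control.Concurrent.STM (atomically, newTVarIO, readTVar, writeTVar)
import Data.IORef (newIORef, atomicModifyIORef)

-- CAS-style bump: a single compare-and-swap retry loop, no lock and no
-- transaction log.
bumpCAS :: IO ()
bumpCAS = do
  r <- newIORef (0 :: Int)
  atomicModifyIORef r (\n -> (n + 1, ()))

-- The same increment as an STM transaction: composable across several
-- TVars, but each commit builds and validates a transaction record.
bumpSTM :: IO ()
bumpSTM = do
  v <- newTVarIO (0 :: Int)
  atomically (readTVar v >>= writeTVar v . (+ 1))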

On Thu, Mar 29, 2012 at 12:01 PM, Heinrich Apfelmus
apfel...@quantentunnel.de wrote:
 Florian Hartwig wrote:

 Hi everyone,
 I would like to do the GSoC project outlined in
 http://hackage.haskell.org/trac/summer-of-code/ticket/1608

 One of Haskell's great strengths is its support for all kinds of
 concurrent
 and parallel programming, so I think that the Haskell ecosystem would
 benefit from having a wider range of efficient concurrent data structures.
 Also, more selfishly, I remember reading the article in CACM last summer
 and
 thinking that the whole idea of updating data structures directly using
 atomic compare-and-swap was really cool, so I would love to have an excuse
 to play around with it.

 Some (initial) ideas for lock-free data structures worth implementing:
 1. A lock-free priority queue, e.g. using the approach based on skip-lists
 explained in [2]
 2. Stacks, see [1] and [3]
 3. Hash tables [4]
 but if anyone has other suggestions, I would obviously happy to hear them.


 Since I don't know much about concurrency, I have a simple question: what is
 the difference between atomic compare-and-swap and software transactional
 memory? Both are lock-free? Is it possible to implement every data structure
 based on CAS in terms of STM? What are the drawbacks? The other way round?


 Best regards,
 Heinrich Apfelmus

 --
 http://apfelmus.nfshost.com


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Inject cabal version or VCS version as a CPP macro

2012-02-22 Thread Eugene Kirpichov
Hi,

I'd like my program to print something like "this is $program 1.0.4 git
45fea6b" when invoked with --version, or at least just the "1.0.4" part.

Can Cabal expose the version as a preprocessor macro by default, or do I
have to use Build-Type: Custom and add a preprocessing step of my own?

-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Inject cabal version or VCS version as a CPP macro

2012-02-22 Thread Eugene Kirpichov
Thanks, this is exactly what I was looking for!

I might look some more into exposing VCS tags too, however.

On Wed, Feb 22, 2012 at 12:25 PM, Roel van Dijk vandijk.r...@gmail.com wrote:

 For each package myPackage Cabal generates a module containing,
 among other things, the package's version as a Haskell value:

  import Paths_myPackage ( version )
  import Data.Version ( showVersion )
  main = putStrLn (showVersion version)

 See also "Accessing data files from package code" in
 http://www.haskell.org/cabal/users-guide/

 I do not know how to expose information from the VCS.

 2012/2/22 Eugene Kirpichov ekirpic...@gmail.com:
  Hi,
 
  I'd like my program to print something like this is $program 1.0.4 git
  45fea6b when invoked with --version, or at least just the 1.0.4 part.
 
  Can Cabal expose the version as a preprocessor macro by default, or do I
  have to use Build-Type: Custom and add a preprocessing step of my own?




-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Inject cabal version or VCS version as a CPP macro

2012-02-22 Thread Eugene Kirpichov
Whoa, I didn't think about using Template Haskell here. Thanks.
Perhaps this should be abstracted into a library.

On Wed, Feb 22, 2012 at 12:40 PM, Michael Snoyman mich...@snoyman.com wrote:

 I have a project at work that embeds the Mercurial version in the
 final executable. I use Template Haskell to call the hg executable and
 parse its output. Here's the relevant code snippet:

    putStrLn $ "Mercurial commit: " ++ $(do
        let getChangeset s =
                let (k, v') = break (== ':') s
                    v = dropWhile isSpace $ drop 1 v'
                 in if k == "changeset" then Just v else Nothing
        res <- qRunIO $ readProcess "hg" ["heads"] ""
        let cs = mapMaybe getChangeset $ lines res
        lift $ case cs of
            x:_ -> x
            []  -> "unknown")

 On Wed, Feb 22, 2012 at 10:37 AM, Eugene Kirpichov ekirpic...@gmail.com
 wrote:
  Thanks, this is exactly what I was looking for!
 
  I might look some more into exposing VCS tags too, however.
 
 
  On Wed, Feb 22, 2012 at 12:25 PM, Roel van Dijk vandijk.r...@gmail.com
  wrote:
 
  For each package myPackage Cabal generates a module containing,
  among other things, the package's version as a Haskell value:
 
   import Paths_myPackage ( version )
   import Data.Version  ( showVersion )
   main = showVersion version
 
  See also Accessing data files from package code in
  http://www.haskell.org/cabal/users-guide/
 
  I do not now how to expose information from the VCS.
 
  2012/2/22 Eugene Kirpichov ekirpic...@gmail.com:
   Hi,
  
   I'd like my program to print something like this is $program 1.0.4
 git
   45fea6b when invoked with --version, or at least just the 1.0.4 part.
  
   Can Cabal expose the version as a preprocessor macro by default, or
 do I
   have to use Build-Type: Custom and add a preprocessing step of my own?
 
 
 
 
  --
  Eugene Kirpichov
  Principal Engineer, Mirantis Inc. http://www.mirantis.com/
  Editor, http://fprog.ru/
 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 




-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Inject cabal version or VCS version as a CPP macro

2012-02-22 Thread Eugene Kirpichov
Hey,

I created a small package:
http://hackage.haskell.org/package/vcs-revision, repo
http://github.com/jkff/vcs-revision

It can be used like this:

{-# LANGUAGE TemplateHaskell #-}
import Distribution.VcsRevision.Git
import Language.Haskell.TH.Syntax

showMyGitVersion :: String
showMyGitVersion = $(do
  v <- qRunIO getRevision
  lift $ case v of
    Nothing           -> "none"
    Just (hash,True)  -> hash ++ " (with local modifications)"
    Just (hash,False) -> hash)


On Wed, Feb 22, 2012 at 2:26 PM, Herbert Valerio Riedel h...@gnu.org wrote:

 Hi,

 Eugene Kirpichov ekirpic...@gmail.com writes:

  I'd like my program to print something like this is $program 1.0.4 git
  45fea6b when invoked with --version, or at least just the 1.0.4 part.

 Here's some proof-of-concept code we use slightly modified in production
 here for over a year now successfully:

  https://gist.github.com/656738

 The primary goal was to have a reliable version number (allowing to find
 the exact corresponding git-commit in the source-code repository), and
 to be able to detect when that version number is unreliable (there's
 nothing more annoying than wasting time debugging the wrong
 source-code...).

 The idea is to dynamically infer and overwrite cabal's version when
 building from the git repository, and have it accessible via the cabal's
 auto-generated Paths_pkg-name module Data.Version...

 ...and when creating a source-dist, via runghc Setup.hs sdist the
 current dynamic git-version string is embedded into the generated
 .tar.gz's .cabal file, so that the source-distribution is just a plain
 simple .cabal project (that could be uploaded to hackage)

 hth,
 hvr
 --




-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Inject cabal version or VCS version as a CPP macro

2012-02-22 Thread Eugene Kirpichov
On Wed, Feb 22, 2012 at 2:57 PM, Herbert Valerio Riedel h...@gnu.org wrote:

 Eugene Kirpichov ekirpic...@gmail.com writes:

  It can be used like this:
 
  {-# LANGUAGE TemplateHaskell #-}
   import Distribution.VcsRevision.Git
   import Language.Haskell.TH.Syntax
 
   showMyGitVersion :: String
   showMyGitVersion = $(do
 v - qRunIO getRevision
 lift $ case v of
   Nothing   - none
   Just (hash,True)  - hash ++  (with local modifications)
   Just (hash,False) - hash)

 Btw, I'm wondering (haven't tried myself), when using TH to generate the
 version string, does GHC's and/or cabal's dependency tracking

  a) reliably refresh the generated hash, so that you can be sure the
 git-commit id compiled into the binary is reliable, and

  b) avoid re-generating the TH file and redundant recompilation if the
git commit-id hasn't changed?

 Do you mean all this in the context where this resides in a library rather
than an application? I haven't thought about that yet.


 hvr
 --




-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hierarchical tracing for debugging laziness

2012-01-25 Thread Eugene Kirpichov
Thanks!

I released it:

http://hackage.haskell.org/package/htrace
http://github.com/jkff/htrace

On Wed, Jan 25, 2012 at 4:18 AM, Felipe Almeida Lessa 
felipe.le...@gmail.com wrote:

 Really nice!  Looks like it could be a useful mini-package on Hackage.

 --
 Felipe.




-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Hierarchical tracing for debugging laziness

2012-01-24 Thread Eugene Kirpichov
Hi cafe,

Look how one can watch the evaluation tree of a computation, to debug
laziness-related problems.

{-# LANGUAGE BangPatterns #-}
module HTrace where

import Data.List (foldl')
import Data.IORef
import System.IO.Unsafe

level = unsafePerformIO $ newIORef 0

htrace str x = unsafePerformIO $ do
  lvl <- readIORef level
  putStrLn (replicate (4*lvl) ' ' ++ str)
  writeIORef level (lvl+1)
  let !vx = x
  writeIORef level lvl
  return vx

xs = map (\x -> htrace (show x) x) [1..10]

s = foldl (\a b -> htrace "+" (a+b)) 0 xs
s2 = foldl' (\a b -> htrace "+" (a+b)) 0 xs

b = htrace "b" 2
c = htrace "c" 3
a = htrace "a" $ b + c
x = htrace "x" $ b + c

*HTrace> a
a
b
c
5
*HTrace> x
x
5

*HTrace> s
+
+
+
+
+
+
+
+
+
+
1
2
3
4
5
6
7
8
9
10
55

(reload)
*HTrace> s2
+
1
+
2
+
3
+
4
+
5
+
6
+
7
+
8
+
9
+
10
55

-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Natural Transformations and fmap

2012-01-23 Thread Eugene Kirpichov
Have you tried generating a free theorem for :-> ? (I haven't, as I'm writing
from my phone.)
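
(For what it's worth, my reading - I haven't actually run the generator: for
f :: forall x. m x -> n x with m and n functors, the free theorem
instantiated at a function g :: a -> b is exactly the naturality square

    fmap_n g . f  =  f . fmap_m g

i.e. the property Ryan is asking about.)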



24.01.2012, at 9:06, Ryan Ingram ryani.s...@gmail.com wrote:

 On Mon, Jan 23, 2012 at 8:05 PM, Daniel Fischer 
 daniel.is.fisc...@googlemail.com wrote:
 On Tuesday 24 January 2012, 04:39:03, Ryan Ingram wrote:
  At the end of that paste, I prove the three Haskell monad laws from the
  functor laws and monoid-ish versions of the monad laws, but my proofs
  all rely on a property of natural transformations that I'm not sure how
  to prove; given
 
  type m :-> n = (forall x. m x -> n x)
  class Functor f where fmap :: forall a b. (a -> b) -> f a -> f b
  -- Functor identity law: fmap id = id
  -- Functor composition law fmap (f . g) = fmap f . fmap g
 
  Given Functors m and n, natural transformation f :: m :-> n, and g :: a
  -> b, how can I prove (f . fmap_m g) = (fmap_n g . f)?
 
 Unless I'm utterly confused, that's (part of) the definition of a natural
 transformation (for non-category-theorists).
 
 Alright, let's pretend I know nothing about natural transformations and just 
 have the type declaration
 
 type m :-> n = (forall x. m x -> n x)
 
 And I have
 f :: M :-> N
 g :: A -> B
 instance Functor M -- with proofs of functor laws
 instance Functor N -- with proofs of functor laws
 
 How can I prove
   fmap g . f :: M A -> N B
   =
   f . fmap g :: M A -> N B
 
 I assume I need to make some sort of appeal to the parametricity of M :-> N.
  
  Is there some
  more fundamental law of natural transformations that I'm not aware of
  that I need to use?  Is it possible to write a natural transformation
  in Haskell that violates this law?
 
-- ryan
 
 
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Is there a way to do *allocation* profiling with a breakdown by type?

2012-01-19 Thread Eugene Kirpichov
Hi,

I've got a program that seems to spend much of its time allocating
short-lived objects, which therefore don't show up in +RTS -hy or the like.

Is there a way to get a breakdown by type of the objects that are being
*allocated* but not necessarily retained (disappear after gen0)?

-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Encouraging performance with ghc 7.4

2012-01-18 Thread Eugene Kirpichov
Hi cafe,

Just wanted to inform you that I've been benchmarking my compute-intensive
stuff on ghc 7.0.4 vs 7.4.0, and 7.4.0 gave a large speedup - one program
that took 2.9s on a particular input is now taking 2.2s. This result is
repeatable.

So I encourage people to try out GHC 7.4.0. Some stuff may stop compiling
(I had to do 3 basically one-line fixes in 3 packages before I got mine to
compile), but I didn't encounter other problems.

Below is +RTS -s of two runs: one compiled with ghc 7.4.0 and another with
7.0.4.

I can make a more detailed comparison if it's useful and if someone tells
me how - I thought about including +RTS -p, but Simon Marlow wrote recently
that it has started giving meaningful results only in 7.4, so comparison
with 7.0.4 would be unfair.

However in this case I'm sure that the program is heavily compute-bound by
the following two functions (because it's always compute-bound by them):
1) https://github.com/jkff/timeplot/blob/master/Tools/TimePlot/Plots.hs#L226
2) https://github.com/jkff/timeplot/blob/master/Tools/TimePlot/Conf.hs -
localToUTC and strptime.

jkff@jkff-laptop ~/projects/logs/... $ tplot -if e.trace -dk 'within[.]
cumsum 10' -o e.png +RTS -s
tplot -if e.trace -dk within[.] cumsum 10 -o e.png +RTS -s
   2,751,756,296 bytes allocated in the heap
 135,129,112 bytes copied during GC
  33,149,720 bytes maximum residency (22 sample(s))
   1,755,868 bytes maximum slop
  56 MB total memory in use (0 MB lost due to fragmentation)

  Generation 0:  4994 collections, 0 parallel,  0.22s,  0.23s elapsed
  Generation 1:22 collections, 0 parallel,  0.08s,  0.09s elapsed

  INIT  time0.01s  (  0.00s elapsed)
  MUT   time2.61s  (  2.91s elapsed)
  GCtime0.30s  (  0.32s elapsed)
  EXIT  time0.00s  (  0.00s elapsed)
  Total time2.91s  (  3.22s elapsed)

  %GC time  10.1%  (9.8% elapsed)

  Alloc rate1,051,262,083 bytes per MUT second

  Productivity  89.6% of total user, 80.9% of total elapsed

jkff@jkff-laptop ~/projects/logs/... $ tplot -if e.trace -dk 'within[.]
cumsum 10' -o e.png +RTS -s
   2,161,811,620 bytes allocated in the heap
 107,589,660 bytes copied during GC
  34,799,400 bytes maximum residency (22 sample(s))
   1,721,152 bytes maximum slop
  58 MB total memory in use (0 MB lost due to fragmentation)

Tot time (elapsed)  Avg pause  Max pause
  Gen  0  3899 colls, 0 par0.13s0.14s 0.s0.0003s
  Gen  122 colls, 0 par0.08s0.09s 0.0043s0.0482s

  INITtime0.00s  (  0.00s elapsed)
  MUT time2.03s  (  2.28s elapsed)
  GC  time0.21s  (  0.23s elapsed)
  EXITtime0.00s  (  0.00s elapsed)
  Total   time2.26s  (  2.51s elapsed)

  %GC time   9.3%  (9.1% elapsed)

  Alloc rate1,056,397,390 bytes per MUT second

  Productivity  90.7% of total user, 81.5% of total elapsed


-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Encouraging performance with ghc 7.4

2012-01-18 Thread Eugene Kirpichov
I got even more encouraging results on another input - 5 vs 7 seconds:

=== GHC 7.4.0 ==
jkff@jkff-laptop ~/projects/logs/... $ tplot -if lat.trace -dk 'within[.]
duration quantile 10 0.25,0.5,0.75,0.9,0.95' -o lat3.png +RTS -s
   2,809,230,872 bytes allocated in the heap
 358,393,440 bytes copied during GC
  42,478,364 bytes maximum residency (68 sample(s))
   1,833,848 bytes maximum slop
 113 MB total memory in use (0 MB lost due to fragmentation)

Tot time (elapsed)  Avg pause  Max pause
  Gen  0  4740 colls, 0 par0.39s0.40s 0.0001s0.0005s
  Gen  168 colls, 0 par0.42s0.48s 0.0071s0.1157s

  INITtime0.00s  (  0.00s elapsed)
  MUT time3.76s  (  4.01s elapsed)
  GC  time0.82s  (  0.88s elapsed)
  EXITtime0.00s  (  0.00s elapsed)
  Total   time4.59s  (  4.89s elapsed)

  %GC time  17.8%  (18.0% elapsed)

  Alloc rate744,774,183 bytes per MUT second

  Productivity  82.2% of total user, 77.2% of total elapsed

 GHC 7.0.4 ===
jkff@jkff-laptop ~/projects/logs/... $ tplot -if lat.trace -dk 'within[.]
duration quantile 10 0.25,0.5,0.75,0.9,0.95' -o lat3.png +RTS -s
tplot -if lat.trace -dk within[.] duration quantile 10
0.25,0.5,0.75,0.9,0.95 -o lat3.png +RTS -s
   4,024,048,244 bytes allocated in the heap
 419,997,300 bytes copied during GC
  44,546,920 bytes maximum residency (70 sample(s))
   1,840,208 bytes maximum slop
 113 MB total memory in use (0 MB lost due to fragmentation)

  Generation 0:  6995 collections, 0 parallel,  0.63s,  0.64s elapsed
  Generation 1:70 collections, 0 parallel,  0.39s,  0.43s elapsed

  INIT  time0.01s  (  0.00s elapsed)
  MUT   time5.96s  (  6.17s elapsed)
  GCtime1.02s  (  1.08s elapsed)
  EXIT  time0.00s  (  0.00s elapsed)
  Total time6.99s  (  7.24s elapsed)

  %GC time  14.6%  (14.9% elapsed)

  Alloc rate674,478,767 bytes per MUT second

  Productivity  85.3% of total user, 82.3% of total elapsed


On Wed, Jan 18, 2012 at 8:22 PM, Eugene Kirpichov ekirpic...@gmail.com wrote:

 Hi cafe,

 Just wanted to inform you that I've been benchmarking my compute-intensive
 stuff on ghc 7.0.4 vs 7.4.0, and 7.4.0 gave a large speedup - one program
 that took 2.9s on a particular input is now taking 2.2s. This result is
 repeatable.

 So I encourage people to try out GHC 7.4.0. Some stuff may stop compiling
 (I had to do 3 basically one-line fixes in 3 packages before I got mine to
 compile), but I didn't encounter other problems.

 Below is +RTS -s of two runs: one compiled with ghc 7.4.0 and another with
 7.0.4.

 I can make a more detailed comparison if it's useful and if someone tells
 me how - I thought about including +RTS -p, but Simon Marlow wrote recently
 that it has started giving meaningful results only in 7.4, so comparison
 with 7.0.4 would be unfair.

 However in this case I'm sure that the program is heavily compute-bound by
 the following two functions (because it's always compute-bound by them):
 1)
 https://github.com/jkff/timeplot/blob/master/Tools/TimePlot/Plots.hs#L226
 2) https://github.com/jkff/timeplot/blob/master/Tools/TimePlot/Conf.hs -
 localToUTC and strptime.

 jkff@jkff-laptop ~/projects/logs/... $ tplot -if e.trace -dk 'within[.]
 cumsum 10' -o e.png +RTS -s
 tplot -if e.trace -dk within[.] cumsum 10 -o e.png +RTS -s
2,751,756,296 bytes allocated in the heap
  135,129,112 bytes copied during GC
   33,149,720 bytes maximum residency (22 sample(s))
1,755,868 bytes maximum slop
   56 MB total memory in use (0 MB lost due to fragmentation)

   Generation 0:  4994 collections, 0 parallel,  0.22s,  0.23s elapsed
   Generation 1:22 collections, 0 parallel,  0.08s,  0.09s elapsed

   INIT  time0.01s  (  0.00s elapsed)
   MUT   time2.61s  (  2.91s elapsed)
   GCtime0.30s  (  0.32s elapsed)
   EXIT  time0.00s  (  0.00s elapsed)
   Total time2.91s  (  3.22s elapsed)

   %GC time  10.1%  (9.8% elapsed)

   Alloc rate1,051,262,083 bytes per MUT second

   Productivity  89.6% of total user, 80.9% of total elapsed

 jkff@jkff-laptop ~/projects/logs/... $ tplot -if e.trace -dk 'within[.]
 cumsum 10' -o e.png +RTS -s
2,161,811,620 bytes allocated in the heap
  107,589,660 bytes copied during GC
   34,799,400 bytes maximum residency (22 sample(s))
1,721,152 bytes maximum slop
   58 MB total memory in use (0 MB lost due to fragmentation)

 Tot time (elapsed)  Avg pause  Max
 pause
   Gen  0  3899 colls, 0 par0.13s0.14s 0.s
  0.0003s
   Gen  122 colls, 0 par0.08s0.09s 0.0043s
  0.0482s

   INITtime0.00s  (  0.00s elapsed)
   MUT time2.03s  (  2.28s elapsed)
   GC  time0.21s  (  0.23s elapsed

[Haskell-cafe] __GLASGOW_HASKELL__ for GHC 7.4

2012-01-17 Thread Eugene Kirpichov
Hi,

I'm fixing a build error in a package that depends on the RTS API, which
changed in 7.4, using #if __GLASGOW_HASKELL__ >= 740.

However, build log shows that __GLASGOW_HASKELL__ is still 704, not 740 as
I'd expect.

Is this a bug?

-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] __GLASGOW_HASKELL__ for GHC 7.4

2012-01-17 Thread Eugene Kirpichov
My bad, it *should* really be 704.
http://www.haskell.org/ghc/docs/latest/html/users_guide/options-phases.html#c-pre-processor
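For the archives, a minimal sketch of the corrected guard (the module and
function names here are made up; only the CPP condition matters):

    {-# LANGUAGE CPP #-}
    -- __GLASGOW_HASKELL__ is MAJOR*100 + MINOR, so GHC 7.4.x reports 704.
    module RtsCompat (rtsApiNote) where

    rtsApiNote :: String
    #if __GLASGOW_HASKELL__ >= 704
    rtsApiNote = "built against the RTS API as changed in GHC 7.4"
    #else
    rtsApiNote = "built against the pre-7.4 RTS API"
    #endif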

On Tue, Jan 17, 2012 at 3:35 PM, Eugene Kirpichov ekirpic...@gmail.comwrote:

 Hi,

 I'm fixing a build error in a package that depends on the RTS API, which
 changed in 7.4, using #if __GLASGOW_HASKELL__ >= 740.

 However, build log shows that __GLASGOW_HASKELL__ is still 704, not 740 as
 I'd expect.

 Is this a bug?

 --
 Eugene Kirpichov
 Principal Engineer, Mirantis Inc. http://www.mirantis.com/
 Editor, http://fprog.ru/




-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC exceeding command line length limit with split-objs - and a possible fix

2012-01-11 Thread Eugene Kirpichov
Thanks, looks like I already succeeded by downloading xcode 3.

Now my original question remains - is such a change a good idea?
(I've already found the place in code where the fix has to be made; should
take an hour of work at most)

On Wed, Jan 11, 2012 at 6:07 PM, Hans Aberg haber...@telia.com wrote:

 On 11 Jan 2012, at 13:38, John Lato wrote:

  I used https://github.com/kennethreitz/osx-gcc-installer/downloads to
  get a real gcc on Lion.  Biggish download, but it worked.  I've also
  seen reports of success by self-compiling gcc, or by installing XCode
  4 on top of an existing XCode 3 installation.

 GCC 4.6.2 builds on OS X 10.7 with Xcode 4.2. Build GMP 5.0.2 with clang,
 and then GCC with /usr/bin/gcc -> llvm-gcc-4.2. I think I saw someone build
 GCC 4.7, but this is the highest stable version.

 Hans

  From: Eugene Kirpichov ekirpic...@gmail.com
 
  Oh well... looks like building ghc won't be easy, as it doesn't build
 with
  llvm-gcc and it's not easy to get a real gcc on Lion. But I don't stop
  trying :)





-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC exceeding command line length limit with split-objs - and a possible fix

2012-01-11 Thread Eugene Kirpichov
Hi Brandon,

Thanks - looks like this would be a modification of the linking stage,
splitting it into two parts: batching objects and then actually linking
them.

Do you think that what I originally proposed is still a good thing to have
before implementing your solution? (It definitely would be for me, as
it's easier to do, and it would let me build my application with split-objs
sooner and shrink its executable size from 11 MB to 2 MB.)

On Wed, Jan 11, 2012 at 9:48 PM, Brandon Allbery allber...@gmail.comwrote:

 On Wed, Jan 11, 2012 at 02:12, Eugene Kirpichov ekirpic...@gmail.comwrote:

 I think a nice fix would be to employ gcc's ability to read options from
 a file - gcc @file - and write overly long option strings into temp files.


 What immediately occurs to me is, what if the problem (or another
 manifestation of same) occurs in the ld step?  OS X's ld(1) doesn't have
 such an option /per se/, and binutils ld(1) does not reliably create valid
 Mach-O objects.

 I would consider batching split-objs files into static archives (see
 ar(1) and ranlib(1)).  This also has the advantages of being portable
 (other platforms also have length limits; I believe it's the main reason
 split-objs is disabled by default on e.g. Solaris) and that with many
 linkers it's faster than individual objects because it can use the archive
 table of contents to speed up searching for files and symbols.

 --
 brandon s allbery  allber...@gmail.com
 wandering unix systems administrator (available) (412) 475-9364 vm/sms




-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] GHC exceeding command line length limit with split-objs - and a possible fix

2012-01-10 Thread Eugene Kirpichov
Hi,

I'm building gtk2hs on a mac with -fsplit-objs, and during compilation of
the largest file, which produces about 7000 split objects, the linking
phase fails.
I'm assuming that's because the command line length has been exceeded,
because other files compile fine, without -fsplit-objs it compiles fine
too, and it compiles fine *with* -fsplit-objs on other OS - so perhaps the
reason is in mac's tempfile name generation (they're longer than on other
platforms) or something.

Anyway.

I think a nice fix would be to employ gcc's ability to read options from a
file - gcc @file - and write overly long option strings into temp files.

It'd be fun for me to implement this as my first contribution to ghc; is
this a good idea in general?
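
A rough sketch of the idea, with hypothetical names (the real change would
live in GHC's driver/SysTools code, which I haven't quoted here, and assumes
a process-spawning helper like callProcess):

    import System.IO (hClose, hPutStr, openTempFile)
    import System.Process (callProcess)

    -- Write the (possibly very long) argument list to a temporary response
    -- file and pass it to gcc via @file instead of on the command line.
    runGccViaResponseFile :: FilePath -> [String] -> IO ()
    runGccViaResponseFile gcc args = do
      (path, h) <- openTempFile "." "ghc-args.rsp"
      hPutStr h (unlines args)  -- gcc splits @file contents on whitespace,
      hClose h                  -- so args containing spaces would need quoting
      callProcess gcc ['@' : path]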

-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC exceeding command line length limit with split-objs - and a possible fix

2012-01-10 Thread Eugene Kirpichov
Oh well... looks like building ghc won't be easy, as it doesn't build with
llvm-gcc and it's not easy to get a real gcc on Lion. But I don't stop
trying :)

On Wed, Jan 11, 2012 at 11:12 AM, Eugene Kirpichov ekirpic...@gmail.comwrote:

 Hi,

 I'm building gtk2hs on a mac with -fsplit-objs, and during compilation of
 the largest file, which produces about 7000 split objects, the linking
 phase fails.
 I'm assuming that's because the command line length has been exceeded,
 because other files compile fine, without -fsplit-objs it compiles fine
 too, and it compiles fine *with* -fsplit-objs on other OS - so perhaps the
 reason is in mac's tempfile name generation (they're longer than on other
 platforms) or something.

 Anyway.

 I think a nice fix would be to employ gcc's ability to read options from a
 file - gcc @file - and write overly long option strings into temp files.

 It'd be fun for me to implement this as my first contribution to ghc; is
 this a good idea in general?

 --
 Eugene Kirpichov
 Principal Engineer, Mirantis Inc. http://www.mirantis.com/
 Editor, http://fprog.ru/




-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] All the random packages

2012-01-07 Thread Eugene Kirpichov
Hi,

I came across the beautiful algorithm for O(1) generation of rational
discrete distributions http://www.keithschwarz.com/darts-dice-coins/ and
though it'd be fun to implement it in Haskell and put on Hackage, but then
it turned out that there's about a dozen random number packages, so I'm not
sure to which one to contribute, or better to roll out my own package and
encourage authors of the other packages to reference it and fit under their
interface.
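
For the curious, here is a rough, unoptimized sketch of the alias method from
that article (Vose's variant). It is not tied to any existing hackage package;
all the names are mine, and it uses plain lists plus System.Random for brevity:

    import Data.Array (Array, array, (!))
    import Data.List (partition)
    import System.Random (randomRIO)

    -- An alias table: after O(n) preprocessing, each sample costs O(1).
    data Alias = Alias
      { aliasSize :: Int
      , aliasProb :: Array Int Double  -- probability of keeping column i
      , aliasOf   :: Array Int Int     -- where to jump otherwise
      }

    -- Build the table from probabilities summing to 1.
    buildAlias :: [Double] -> Alias
    buildAlias ps = Alias n probArr aliasArr
      where
        n = length ps
        scaled = zip [0 ..] (map (* fromIntegral n) ps)
        (small0, large0) = partition ((< 1) . snd) scaled
        entries = go small0 large0
        -- Pair each under-full column with an over-full one.
        go ((i, qi) : small) ((j, qj) : large)
          | qj' < 1   = (i, qi, j) : go ((j, qj') : small) large
          | otherwise = (i, qi, j) : go small ((j, qj') : large)
          where qj' = qj + qi - 1
        go small large = [ (i, 1, i) | (i, _) <- small ++ large ]
        probArr  = array (0, n - 1) [ (i, p) | (i, p, _) <- entries ]
        aliasArr = array (0, n - 1) [ (i, a) | (i, _, a) <- entries ]

    -- Draw one index: a uniform column, then a biased coin.
    sample :: Alias -> IO Int
    sample (Alias n pr al) = do
      i <- randomRIO (0, n - 1)
      u <- randomRIO (0, 1) :: IO Double
      return (if u < pr ! i then i else al ! i)

Sampling is then e.g. sample (buildAlias [0.2, 0.3, 0.5]).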

What do you think?

Which random number packages are considered to be most mature or perhaps
promising - i.e., contributing to which ones will bring the community the
most benefit?
Is there any effort on unifying them going on?

-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Type class for sequences with dependent types

2012-01-05 Thread Eugene Kirpichov
Will this help?

data List a b where
  List :: [a] -> List a a
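
A quick sketch of how that wrapper then admits an instance of the DepSequence
class from your message below (unList is just a hypothetical accessor I am
adding for illustration):

    {-# LANGUAGE GADTs #-}

    unList :: List a b -> [a]
    unList (List xs) = xs

    instance DepSequence List where
      nil = List []
      List xs `app` List ys = List (xs ++ ys)

Since the constructor pins both indices to the same type, nil and app come out
as the ordinary list operations, and unList recovers a plain [Int] again.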

On 05.01.2012, at 17:12, Robbert Krebbers mailingli...@robbertkrebbers.nl wrote:

 Hello,
 
 in a Haskell development I have to represent paths that have elements of 
 alternating types. GADTs allow me to do this easily:
 
 data AltList :: * -> * -> * where
  Nil   :: AltList a a
  ICons :: Int -> AltList Int b -> AltList () b
  UCons :: () -> AltList () b -> AltList Int b
 
 Since I have various kinds of these data structures I defined a type class 
 for common operations on such data structures.
 
 class DepSequence l where
  nil :: l a a
  app :: l a b -> l b c -> l a c
 
 The instance for AltList is as follows:
 
 instance DepSequence AltList where
  nil = Nil
  Nil   `app` k = k
  ICons i l `app` k = ICons i (l `app` k)
  UCons i l `app` k = UCons i (l `app` k)
 
 This all works nicely, however, I also want ordinary lists to be an instance 
 of this class too. I tried the following:
 
 type IntListAlt a b = [Int]
 instance DepSequence IntListAlt where
  nil = []
  app = (++)
 
 But GHC does not like this, it yields:
 
  Type synonym `IntListAlt' should have 2 arguments, but has been
  given none In the instance declaration for `DepList IntListAlt'
 
 The following alternative works, but it is quite ugly
 
 newtype IntList a b = IntList { getList :: [Int] }
 instance DepSequence IntList where
  nil = IntList []
  app l k = IntList (getList l ++ getList k)
 
 and also does not give me a nil of type [Int] and an app of type [Int] ->
 [Int] -> [Int].
 
 Does anyone know whether Haskell allows me to do this in a better way?
 
 Best,
 
 Robbert
 
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Reifying case expressions [was: Re: On stream processing, and a new release of timeplot coming]

2011-12-26 Thread Eugene Kirpichov
2011/12/26 Gábor Lehel illiss...@gmail.com

 On Sun, Dec 25, 2011 at 9:19 PM, Eugene Kirpichov ekirpic...@gmail.com
 wrote:
  Hello Heinrich,
 
  Thanks, that's sure some food for thought!
 
  A few notes:
  * This is indeed analogous to Iteratees. I tried doing the same with
  Iteratees but failed, so I decided to put together something simple of my
  own.
  * The Applicative structure over this stuff is very nice. I was thinking,
  what structure to put on - and Applicative seems the perfect fit. It's
 also
  possible to implement Arrows - but I once tried and failed; however, I
 was
  trying that for a more complex stream transformer datatype (a hybrid of
  Iteratee and Enumerator).
  * StreamSummary is trivially a bifunctor. I actually wanted to make it an
  instance of Bifunctor, but it was in the category-extras package and I
  hesitated to reference this giant just for this purpose :) Probably
  bifunctors should be in prelude.

 Edward Kmett has been splitting that up into a variety of smaller
 packages, for instance:

 http://hackage.haskell.org/package/bifunctors

Actually it's not a bifunctor - it's a functor in one argument and
contrafunctor in the other.
Is there a name for such a structure?
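(Such a thing, covariant in the result and contravariant in the element type,
is usually called a profunctor. The class below is a local sketch mirroring
what the profunctors package provides, written against the ListTo type from
further down this thread:)

    class Profunctor p where
      dimap :: (a' -> a) -> (r -> r') -> p a r -> p a' r'

    data ListTo a r = CaseOf r (a -> ListTo a r)

    instance Profunctor ListTo where
      dimap f g (CaseOf nil cons) = CaseOf (g nil) (dimap f g . cons . f)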




  * Whereas StreamSummary a r abstracts deconstruction of lists, the dual
  datatype (StreamSummary a r -) abstracts construction; however I just
 now
  (after looking at your first definition of length) understood that it is
  trivially isomorphic to the regular list datatype - you just need to be
  non-strict in the state - listify :: ListTo a [a] = CaseOf [] (\x - fmap
  (x:) listify). So you don't need functions of the form (forall r .
 ListTo a
  r - ListTo b r) - you just need (ListTo b [a]). This is a revelation for
  me.
 
  On Sun, Dec 25, 2011 at 2:25 PM, Heinrich Apfelmus
  apfel...@quantentunnel.de wrote:
 
  Eugene Kirpichov wrote:
 
  In the last couple of days I completed my quest of making my graphing
  utility timeplot ( http://jkff.info/software/timeplotters ) not load
 the
  whole input dataset into memory and consequently be able to deal with
  datasets of any size, provided however that the amount of data to
 *draw*
  is
  not so large. On the go it also got a huge speedup - previously
  visualizing
  a cluster activity dataset with a million events took around 15 minutes
  and
  a gig of memory, now it takes 20 seconds and 6 Mb max residence.
  (I haven't yet uploaded to hackage as I have to give it a bit more
  testing)
 
  The refactoring involved a number of interesting programming patterns
  that
  I'd like to share with you and ask for feedback - perhaps something can
  be
  simplified.
 
  The source is at http://github.com/jkff/timeplot
 
  The datatype of incremental computations is at
 
 
 https://github.com/jkff/timeplot/blob/master/Tools/TimePlot/Incremental.hs.
  Strictness is extremely important here - the last memory leak I
  eliminated
  was lack of bang patterns in teeSummary.
 
 
  Your  StreamSummary  type has a really nice interpretation: it's a
  reification of  case  expressions.
 
  For instance, consider the following simple function from lists to
  integers
 
 length :: [a] -> Int
 length xs = case xs of
     [] -> 0
     (y:ys) -> 1 + length ys
 
  We want to reify the case expression as constructor of a data type. What
  type should it have? Well, a case expression maps a list  xs  to a
 result,
  here of type Int, via two cases: the first case gives a result and the
 other
  maps a value of type  a  to a function from lists to results again. This
  explanation was probably confusing, so I'll just go ahead and define a
 data
  type that represents functions from lists  [a] to some result of type  r
 
data ListTo a r = CaseOf r (a -> ListTo a r)

interpret :: ListTo a r -> ([a] -> r)
interpret (CaseOf nil cons) xs =
    case xs of
        [] -> nil
        (y:ys) -> interpret (cons y) ys
 
  As you can see, we are just mapping each  CaseOf  constructor back to a
  built-in case expression.
 
  Likewise, each function from lists can be represented in terms of our
 new
  data type: simply replace all built-in case expressions with the new
  constructor
 
length' :: ListTo a Int
length' = CaseOf
    (0)
    (\x -> fmap (1+) length')
 
 length = interpret length'
 
  The CaseOf may look a bit weird, but it's really just a straightforward
  translation of the case expression you would use to define the function
  go
   instead.
 
  Ok, this length function is really inefficient because it builds a huge
  expression of the form  (1+(1+...)). Let's implement a strict variant
  instead
 
 lengthL :: ListTo a Int
 lengthL = go 0
     where
     go !n = CaseOf (n) (\x -> go (n+1))
 
  While we're at it, let's translate two more list functions
 
 foldL' :: (b -> a -> b) -> b -> ListTo a b
 foldL' f b = CaseOf b (\a -> foldL' f $! f b a)
 
 sumL :: ListTo Int Int
 sumL = foldL' (\b a -> a+b) 0

[Haskell-cafe] Convert Double to Data.Fixed

2011-12-26 Thread Eugene Kirpichov
Hi cafe,

How do I most efficiently convert a Double to a Data.Fixed?

I'm asking because I want to convert fractional seconds to the seconds
field of Data.Time.TimeOfDay, which is Pico = Data.Fixed.Fixed E12.

For now the fastest thing I came up with was fromIntegral (round
(sec*100)) / fromIntegral 100, but this frankly sucks and is rather
slow, there must be a better way.
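
Two alternatives I'd consider, sketched below: realToFrac goes through
Rational, and the MkFixed constructor (exported from Data.Fixed in newer
versions of base) builds the value directly from an integer count of
picoseconds. Whether either actually beats the rounding trick would need
measuring:

    import Data.Fixed (Fixed (MkFixed), Pico)

    secondsToPico1 :: Double -> Pico
    secondsToPico1 = realToFrac                    -- via Rational

    secondsToPico2 :: Double -> Pico
    secondsToPico2 s = MkFixed (round (s * 1e12))  -- Pico counts units of 10^-12 s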

-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Reifying case expressions [was: Re: On stream processing, and a new release of timeplot coming]

2011-12-26 Thread Eugene Kirpichov
Whoa. Sebastian, you're my hero — I've been struggling with defining Arrow for 
ListTransformer for a substantial time without success, and here you got it, 
dramatically simpler than I thought it could be done (I was using explicit 
queues).
I wonder if now this datatype of yours is isomorphic to StreamSummary b r ->
StreamSummary a r.



 On 26.12.2011, at 19:56, Sebastian Fischer fisc...@nii.ac.jp wrote:

 On Sun, Dec 25, 2011 at 11:25 AM, Heinrich Apfelmus 
 apfel...@quantentunnel.de wrote:
 Your  StreamSummary  type has a really nice interpretation: it's a 
 reification of  case  expressions [on lists].
 
 nice observation!
  
 For instance, consider the following simple function from lists to integers
 
length :: [a] - Int
length xs = case xs of
[] - 0
(y:ys) - 1 + length ys
 
 We want to reify the case expression as constructor of a data type. [...]
 
data ListTo a r = CaseOf r (a - ListTo a r)
 
interpret :: ListTo a r - ([a] - r)
interpret (CaseOf nil cons) xs =
case xs of
[] - nil
(y:ys) - interpret (cons y) ys
 
 [...]
 
 Likewise, each function from lists can be represented in terms of our new 
 data type [...]
 
length' :: ListTo a Int
length' = CaseOf
(0)
(\x - fmap (1+) length')
 
length = interpret length'
 
 This version of `length` is tail recursive while the previous version is not. 
 In general, all functions defined in terms of `ListTo` and `interpret` are 
 spine strict - they return a result only after consuming all input list 
 constructors.
 
 This is what Eugene observed when defining the identity function as
 
  idC = CaseOf [] (\x -> (x:) <$> idC)
 
 This version does not work for infinite lists. Similarly, `head` and `take` 
 cannot be defined as lazily as in the standard libraries.
 
 We can support lazier list consumers by adding a case to the ListTo type that 
 allows to stop consuming the list. To avoid confusion, I chose new names for 
 my new types.
 
 data ListConsumer a b
   = Done !b
    | Continue !b (a -> ListConsumer a b)
 
 The interpretation function just ignores the remaining input in the case of 
 `Done`:
 
  consumeList :: ListConsumer a b -> [a] -> b
 consumeList (Done b)   _  = b
 consumeList (Continue b _) [] = b
 consumeList (Continue _ f) (x:xs) = consumeList (f x) xs
 
 We can define lazier versions of `head` and `take` as follows:
 
  headC :: ListConsumer a a
  headC = Continue (error "head of empty list") Done
  
  takeC :: Int -> ListConsumer a [a]
  takeC 0 = Done []
  takeC n = Continue [] (\x -> (x:) <$> takeC (n-1))
 
 However, we still cannot define a lazy version of the identity funtion with 
 list consumers. 
 
 The identity function and `takeC` belong to a special case of a list 
 consumers because they also *produce* lists. We can define a specialized type 
 for list transformers that consume and produce lists. One advantage of this 
 specialization will be that we can define a lazy version of the identity 
 function. The transformer type can have functor and applicative instances 
 similar to the consumer type to compose transformers in parallel. 
 Additionally, it can have category and arrow instances to compose 
 transformers sequentially.
 
 Here is a type for lazy list transformers:
 
 data ListTransformer a b
   = Cut
   | Put b (ListTransformer a b)
    | Get (a -> ListTransformer a b)
 
 A transformer can either cut off the input list and return the empty list, 
 output a new element before transforming the input, or consume one element 
 from the input list and transform the remaining elements. The interpretation 
 function should make this clearer:
 
  transformList :: ListTransformer a b -> [a] -> [b]
 transformList Cut   _  = []
 transformList (Put b t) xs = b : transformList t xs
 transformList (Get _)   [] = []
 transformList (Get f)   (x:xs) = transformList (f x) xs
 
 Note that, if the transformer wants to read another element that is not 
 there, it simply returns the empty list.
 
 Now we can define a lazy identity function and another version of `take`:
 
  idT :: ListTransformer a a
  idT = Get (\x -> Put x idT)
  
  takeT :: Int -> ListTransformer a a
  takeT 0 = Cut
  takeT n = Get (\x -> Put x (takeT (n-1)))
 
 Here is another translation of a common list function:
 
  filterT :: (a -> Bool) -> ListTransformer a a
  filterT p = Get (\x -> if p x then Put x (filterT p) else filterT p)
 
 `filterT` is an example for a function that can consume several input 
 elements before producing an output element. Other examples for functions of 
 this kind are chunking functions:
 
  pairsT :: ListTransformer a (a,a)
  pairsT = Get (\x -> Get (\y -> Put (x,y) pairsT))
  
  chunks :: Int -> ListTransformer a [a]
 chunks n = collect n
  where
   collect 0 = Put [] (chunks n)
   collect m = Get (\x - 

Re: [Haskell-cafe] Reifying case expressions [was: Re: On stream processing, and a new release of timeplot coming]

2011-12-26 Thread Eugene Kirpichov
On Tue, Dec 27, 2011 at 7:23 AM, Sebastian Fischer fisc...@nii.ac.jpwrote:

 2011/12/26 Eugene Kirpichov ekirpic...@gmail.com

 Whoa. Sebastian, you're my hero — I've been struggling with defining
 Arrow for ListTransformer for a substantial time without success, and here
 you got it, dramatically simpler than I thought it could be done (I was
 using explicit queues).


 This stuff is tricky. I noticed that my Applicative instance did not
 satisfy all required laws. I think I could fix this by changing the
 implementation of pure to

 pure x = Put x $ pure x

 in analogy to the ZipList instance. At least, QuickCheck does not complain
 anymore (I did not write proofs).

 The original definition of `pure` was inspired by Chris Smith's post on
 the connection of Category/Applicative and Arrow:


 http://cdsmith.wordpress.com/2011/08/13/arrow-category-applicative-part-iia/

 However, even with the fixed Applicative instance, the Arrow instance does
 not satisfy all laws. ListTransformer seems to be a type that has valid
 Category and Applicative instances which do not give rise to a valid Arrow
 instance as outlined by Chris. One of his additional axioms relating
 Category and Applicative does not hold.

 I have extended the (corrected) code with QuickCheck tests:

 https://gist.github.com/1521467

Thanks, I'll take a look.



  I wonder if now this datatype of yours is isomorphic to StreamSummary b
 r - StreamSummary a r.


 Not sure what you mean here. StreamSummary seems to be the same as
 ListConsumer but I don't see how functions from consumers to consumers are
 list transformers, i.e., functions from lists to lists.

Well. They are isomorphic, if list transformers are represented as
functions from lists. I'm assuming they could be with the other
representation too.

type ListT a b = forall r . ([b] -> r) -> ([a] -> r)

there :: ([a] -> [b]) -> ListT a b
there as2bs bs2r = bs2r . as2bs

back :: ListT a b -> ([a] -> [b])
back f = f id
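
(And the round trip behaves as expected, with RankNTypes enabled for the
alias, e.g.:)

    -- back (there f) gives back f:
    example :: [String]
    example = back (there (map show)) [1 :: Int, 2, 3]   -- ["1","2","3"]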



 Sebastian




-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Reifying case expressions [was: Re: On stream processing, and a new release of timeplot coming]

2011-12-25 Thread Eugene Kirpichov
Hello Heinrich,

Thanks, that's sure some food for thought!

A few notes:
* This is indeed analogous to Iteratees. I tried doing the same with
Iteratees but failed, so I decided to put together something simple of my
own.
* The Applicative structure over this stuff is very nice. I was thinking,
what structure to put on - and Applicative seems the perfect fit. It's also
possible to implement Arrows - but I once tried and failed; however, I was
trying that for a more complex stream transformer datatype (a hybrid of
Iteratee and Enumerator).
* StreamSummary is trivially a bifunctor. I actually wanted to make it an
instance of Bifunctor, but it was in the category-extras package and I
hesitated to reference this giant just for this purpose :) Probably
bifunctors should be in prelude.
* Whereas StreamSummary a r abstracts deconstruction of lists, the dual
datatype (StreamSummary a r ->) abstracts construction; however I just now
(after looking at your first definition of length) understood that it is
trivially isomorphic to the regular list datatype - you just need to be
non-strict in the state - listify :: ListTo a [a] = CaseOf [] (\x -> fmap
(x:) listify). So you don't need functions of the form (forall r . ListTo a
r -> ListTo b r) - you just need (ListTo b [a]). This is a revelation for
me.

On Sun, Dec 25, 2011 at 2:25 PM, Heinrich Apfelmus 
apfel...@quantentunnel.de wrote:

 Eugene Kirpichov wrote:

 In the last couple of days I completed my quest of making my graphing
 utility timeplot ( 
 http://jkff.info/software/**timeplottershttp://jkff.info/software/timeplotters)
  not load the
 whole input dataset into memory and consequently be able to deal with
 datasets of any size, provided however that the amount of data to *draw*
 is
 not so large. On the go it also got a huge speedup - previously
 visualizing
 a cluster activity dataset with a million events took around 15 minutes
 and
 a gig of memory, now it takes 20 seconds and 6 Mb max residence.
 (I haven't yet uploaded to hackage as I have to give it a bit more
 testing)

 The refactoring involved a number of interesting programming patterns that
 I'd like to share with you and ask for feedback - perhaps something can be
 simplified.

 The source is at 
 http://github.com/jkff/**timeplothttp://github.com/jkff/timeplot

 The datatype of incremental computations is at
 https://github.com/jkff/**timeplot/blob/master/Tools/**
 TimePlot/Incremental.hshttps://github.com/jkff/timeplot/blob/master/Tools/TimePlot/Incremental.hs.
 Strictness is extremely important here - the last memory leak I eliminated
 was lack of bang patterns in teeSummary.


 Your  StreamSummary  type has a really nice interpretation: it's a
 reification of  case  expressions.

 For instance, consider the following simple function from lists to integers

length :: [a] - Int
length xs = case xs of
[] - 0
(y:ys) - 1 + length ys

 We want to reify the case expression as constructor of a data type. What
 type should it have? Well, a case expression maps a list  xs  to a result,
 here of type Int, via two cases: the first case gives a result and the
 other maps a value of type  a  to a function from lists to results again.
 This explanation was probably confusing, so I'll just go ahead and define a
 data type that represents functions from lists  [a] to some result of type
  r

data ListTo a r = CaseOf r (a - ListTo a r)

interpret :: ListTo a r - ([a] - r)
interpret (CaseOf nil cons) xs =
case xs of
[] - nil
(y:ys) - interpret (cons y) ys

 As you can see, we are just mapping each  CaseOf  constructor back to a
 built-in case expression.

 Likewise, each function from lists can be represented in terms of our new
 data type: simply replace all built-in case expressions with the new
 constructor

length' :: ListTo a Int
length' = CaseOf
(0)
(\x - fmap (1+) length')

length = interpret length'

 The CaseOf may look a bit weird, but it's really just a straightforward
 translation of the case expression you would use to define the function  go
  instead.

 Ok, this length function is really inefficient because it builds a huge
 expression of the form  (1+(1+...)). Let's implement a strict variant
 instead

lengthL :: ListTo a Int
lengthL = go 0
where
go !n = CaseOf (n) (\x - go (n+1))

 While we're at it, let's translate two more list functions

foldL' :: (b - a - b) - b - ListTo a b
foldL' f b = Case b (\a - foldL' f $! f b a)

sumL:: ListTo Int Int
sumL= foldL' (\b a - a+b) 0


 And now we can go for the point of this message: unlike ordinary functions
 from lists, we can compose these in lock-step! In particular, the following
 applicative instance

instance Applicative (ListTo a) where
    pure b = CaseOf b (const $ pure b)
    (CaseOf f fs) <*> (CaseOf x xs) =
        CaseOf (f x) (\a -> fs a <*> xs a)

 allows us to write a function

Re: [Haskell-cafe] Reifying case expressions [was: Re: On stream processing, and a new release of timeplot coming]

2011-12-25 Thread Eugene Kirpichov
On Mon, Dec 26, 2011 at 12:19 AM, Eugene Kirpichov ekirpic...@gmail.comwrote:

 Hello Heinrich,

 Thanks, that's sure some food for thought!

 A few notes:
 * This is indeed analogous to Iteratees. I tried doing the same with
 Iteratees but failed, so I decided to put together something simple of my
 own.
 * The Applicative structure over this stuff is very nice. I was thinking,
 what structure to put on - and Applicative seems the perfect fit. It's also
 possible to implement Arrows - but I once tried and failed; however, I was
 trying that for a more complex stream transformer datatype (a hybrid of
 Iteratee and Enumerator).
 * StreamSummary is trivially a bifunctor. I actually wanted to make it an
 instance of Bifunctor, but it was in the category-extras package and I
 hesitated to reference this giant just for this purpose :) Probably
 bifunctors should be in prelude.
 * Whereas StreamSummary a r abstracts deconstruction of lists, the dual
 datatype (StreamSummary a r -) abstracts construction; however I just now
 (after looking at your first definition of length) understood that it is
 trivially isomorphic to the regular list datatype - you just need to be
 non-strict in the state - listify :: ListTo a [a] = CaseOf [] (\x - fmap
 (x:) listify). So you don't need functions of the form (forall r . ListTo a
 r - ListTo b r) - you just need (ListTo b [a]). This is a revelation for
 me.

Oops, this is wrong!
You cannot create a working listify that would produce the list [1..] when
applied to the list [1..].
So production of elements also needs to be explicitly abstracted by the
dual type.



 On Sun, Dec 25, 2011 at 2:25 PM, Heinrich Apfelmus 
 apfel...@quantentunnel.de wrote:

 Eugene Kirpichov wrote:

 In the last couple of days I completed my quest of making my graphing
 utility timeplot ( 
 http://jkff.info/software/**timeplottershttp://jkff.info/software/timeplotters)
  not load the
 whole input dataset into memory and consequently be able to deal with
 datasets of any size, provided however that the amount of data to *draw*
 is
 not so large. On the go it also got a huge speedup - previously
 visualizing
 a cluster activity dataset with a million events took around 15 minutes
 and
 a gig of memory, now it takes 20 seconds and 6 Mb max residence.
 (I haven't yet uploaded to hackage as I have to give it a bit more
 testing)

 The refactoring involved a number of interesting programming patterns
 that
 I'd like to share with you and ask for feedback - perhaps something can
 be
 simplified.

 The source is at 
 http://github.com/jkff/**timeplothttp://github.com/jkff/timeplot

 The datatype of incremental computations is at
 https://github.com/jkff/**timeplot/blob/master/Tools/**
 TimePlot/Incremental.hshttps://github.com/jkff/timeplot/blob/master/Tools/TimePlot/Incremental.hs.
 Strictness is extremely important here - the last memory leak I
 eliminated
 was lack of bang patterns in teeSummary.


 Your  StreamSummary  type has a really nice interpretation: it's a
 reification of  case  expressions.

 For instance, consider the following simple function from lists to
 integers

length :: [a] - Int
length xs = case xs of
[] - 0
(y:ys) - 1 + length ys

 We want to reify the case expression as constructor of a data type. What
 type should it have? Well, a case expression maps a list  xs  to a result,
 here of type Int, via two cases: the first case gives a result and the
 other maps a value of type  a  to a function from lists to results again.
 This explanation was probably confusing, so I'll just go ahead and define a
 data type that represents functions from lists  [a] to some result of type
  r

data ListTo a r = CaseOf r (a - ListTo a r)

interpret :: ListTo a r - ([a] - r)
interpret (CaseOf nil cons) xs =
case xs of
[] - nil
(y:ys) - interpret (cons y) ys

 As you can see, we are just mapping each  CaseOf  constructor back to a
 built-in case expression.

 Likewise, each function from lists can be represented in terms of our new
 data type: simply replace all built-in case expressions with the new
 constructor

length' :: ListTo a Int
length' = CaseOf
(0)
(\x - fmap (1+) length')

length = interpret length'

 The CaseOf may look a bit weird, but it's really just a straightforward
 translation of the case expression you would use to define the function  go
  instead.

 Ok, this length function is really inefficient because it builds a huge
 expression of the form  (1+(1+...)). Let's implement a strict variant
 instead

lengthL :: ListTo a Int
lengthL = go 0
where
go !n = CaseOf (n) (\x - go (n+1))

 While we're at it, let's translate two more list functions

foldL' :: (b - a - b) - b - ListTo a b
foldL' f b = Case b (\a - foldL' f $! f b a)

sumL:: ListTo Int Int
sumL= foldL' (\b a - a+b) 0


 And now we can go for the point of this message: unlike

Re: [Haskell-cafe] strict, lazy, non-strict, eager

2011-12-24 Thread Eugene Kirpichov
I applaud the pedantry, but I must admit that the tone of the original
email is unusually harsh for the Haskell community, even though not so
harsh as to really make me (for example) scared.

On Sat, Dec 24, 2011 at 12:47 PM, Murray Campbell mur...@sonology.netwrote:

 On Sat, Dec 24, 2011 at 08:54:43AM +0100, Yves Parès wrote:
 See that's typically the speech that scares people away from
 Haskell...
 --
The ⊥ is a lie.
 
 2011/12/24 Albert Y. C. Lai [1]tre...@vex.net
 
  [ snip. ]

 I find this sort of discussion is precisely what draws me to, and keeps me
 in
 pursuit of Haskell.  There are many approaches to producing code that
 are designed (to use a charitable term) to be minimally frightening at
 the expense of rigour.  The dogged and uncompromising attitude of many
 in the Haskell community is inspiring to me, knowing as I do that, while
 I don't *need* to follow the more rarified flights of mathematical fancy
 to get things done, the ecosystem in which I am choosing to work is
 being built by those who can.

 It's not as though the problem of writing good code has been solved.  It
 appears to be quite hard!  Isn't Haskell the place where the difficult
 and nit-picky questions ought to be raised and researched?

 It's too late to avoid success at all costs but please don't banish our
 precious pedantry!

 Scare on!

 Murray Campbell


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Parallel Karatsuba - A Weird speed up value greater than 4 on an Intel Quadcore CPU!

2011-12-24 Thread Eugene Kirpichov
Superlinear speedup can occur due to the increased cache size.



On 24.12.2011, at 19:49, Burak Ekici ekcbu...@hotmail.com wrote:

 Dear List,
 
 I am trying to parallelize Karatsuba multiplication with Haskell's
 second generation strategies. Although, I am running the code on an
 Intel quad-core  CPU, I abnormally have a speedup much greater 
 than 4, around 10, which means a weird parallelization or something 
 occurs.
 
 I would be appreciated, if anyone make some comments on the issue 
 explaining the possible reasons why this weird incident occurs?
 
 Here is the basic parallel portion of the code:
 
 karatsuba :: Int - [Bool] - [Bool] - [Bool]
 karatsuba _ [] _ = []
 karatsuba _ _ [] = []
 karatsuba currentDepth xs ys 
  | (l  32 || currentDepth = limit) = mul xs ys
  | otherwise = (x `add` (replicate l False ++ (z `add` (replicate l False ++ 
 y `Main.using` strategy  
   where 
l = (min (length xs) (length ys)) `div` 2
(xs0, xs1) = splitAt l xs
(ys0, ys1) = splitAt l ys
x  = (normalize (karatsuba (currentDepth+1) xs0 ys0))
y  = (normalize (karatsuba (currentDepth+1) xs1 ys1)) 
z  = ((karatsuba (currentDepth+1) (add xs0 xs1) (add ys0 ys1)) `sub` 
 (normalize (karatsuba (currentDepth+1) xs0 ys0)) `sub` (normalize (karatsuba 
 (currentDepth+1) xs1 ys1)))
strategy res = do (Main.rpar) (x)
  (Main.rpar) (y)
  (Main.rpar) (z)
  Main.rdeepseq res
 
 Many thanks in advance and kind regards.
 
 Saluti,
 Burak.
 
 
 
 
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Parallel Karatsuba - A Weird speed up value greater than 4 on an Intel Quadcore CPU!

2011-12-24 Thread Eugene Kirpichov
Well, assume that cache is x times faster than main memory and that the hot 
working set size is y, and cache size of one core is z, and that the algorithm 
is really bound by memory access. Then some simple math should give the answer 
:) I can't do it myself now as I don't have a pen and paper at the moment.


On 24.12.2011, at 19:58, Burak Ekici ekcbu...@hotmail.com wrote:

 
 First of all, thanks a lot for your quick answer!
 However, the question is what are the approximate limits 
 of this super-linear speedup? I mean, is it acceptable, if
 parallelization happens even 100 time faster?
 
 How can I calculate the limits of this speedup via the 
 cache size of my processor?
 
 Cheers,
 Burak.
 
 CC: haskell-cafe@haskell.org
 From: ekirpic...@gmail.com
 Subject: Re: [Haskell-cafe] Parallel Karatsuba - A Weird speed up value 
 greater than 4 on an Intel Quadcore CPU!
 Date: Sat, 24 Dec 2011 19:53:26 +0400
 To: ekcbu...@hotmail.com
 
 Superlinear speedup can occur due to the increased cache size.
 
 
 
  On 24.12.2011, at 19:49, Burak Ekici ekcbu...@hotmail.com wrote:
 
 Dear List,
 
 I am trying to parallelize Karatsuba multiplication with Haskell's
 second generation strategies. Although, I am running the code on an
 Intel quad-core  CPU, I abnormally have a speedup much greater 
 than 4, around 10, which means a weird parallelization or something 
 occurs.
 
 I would be appreciated, if anyone make some comments on the issue 
 explaining the possible reasons why this weird incident occurs?
 
 Here is the basic parallel portion of the code:
 
 karatsuba :: Int - [Bool] - [Bool] - [Bool]
 karatsuba _ [] _ = []
 karatsuba _ _ [] = []
 karatsuba currentDepth xs ys 
  | (l  32 || currentDepth = limit) = mul xs ys
  | otherwise = (x `add` (replicate l False ++ (z `add` (replicate l False ++ 
 y `Main.using` strategy  
   where 
l = (min (length xs) (length ys)) `div` 2
(xs0, xs1) = splitAt l xs
(ys0, ys1) = splitAt l ys
x  = (normalize (karatsuba (currentDepth+1) xs0 ys0))
y  = (normalize (karatsuba (currentDepth+1) xs1 ys1)) 
z  = ((karatsuba (currentDepth+1) (add xs0 xs1) (add ys0 ys1)) `sub` 
 (normalize (karatsuba (currentDepth+1) xs0 ys0)) `sub` (normalize (karatsuba 
 (currentDepth+1) xs1 ys1)))
strategy res = do (Main.rpar) (x)
  (Main.rpar) (y)
  (Main.rpar) (z)
  Main.rdeepseq res
 
 Many thanks in advance and kind regards.
 
 Saluti,
 Burak.
 
 
 
 
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Parallel Karatsuba - A Weird speed up value greater than 4 on an Intel Quadcore CPU!

2011-12-24 Thread Eugene Kirpichov
If the cache was infinitely faster, then doubling it would give an infinite 
speedup for an algorithm whose working set was exactly one core's cache size.
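
In slightly more detail, a rough model using the x, y, z from the previous
message, with N cores and assuming the run is purely memory-bound:

    single core: the working set y does not fit into z, so only a fraction h
    of accesses hit cache; average cost per access is

        T_1 = h*t_cache + (1 - h)*x*t_cache

    N cores: each core's share y/N fits into its own cache, so T_N is roughly
    t_cache, and

        speedup ~= N * T_1 / T_N = N * (h + (1 - h)*x)

    which is bounded by N*x. So a 10x speedup on 4 cores only needs an
    effective memory/cache cost ratio of about 2.5 for the hot data, if almost
    nothing was hitting cache in the single-core run.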



On 24.12.2011, at 19:58, Burak Ekici ekcbu...@hotmail.com wrote:

 
 First of all, thanks a lot for your quick answer!
 However, the question is what are the approximate limits 
 of this super-linear speedup? I mean, is it acceptable, if
 parallelization happens even 100 time faster?
 
 How can I calculate the limits of this speedup via the 
 cache size of my processor?
 
 Cheers,
 Burak.
 
 CC: haskell-cafe@haskell.org
 From: ekirpic...@gmail.com
 Subject: Re: [Haskell-cafe] Parallel Karatsuba - A Weird speed up value 
 greater than 4 on an Intel Quadcore CPU!
 Date: Sat, 24 Dec 2011 19:53:26 +0400
 To: ekcbu...@hotmail.com
 
 Superlinear speedup can occur due to the increased cache size.
 
 
 
  On 24.12.2011, at 19:49, Burak Ekici ekcbu...@hotmail.com wrote:
 
 Dear List,
 
 I am trying to parallelize Karatsuba multiplication with Haskell's
 second generation strategies. Although, I am running the code on an
 Intel quad-core  CPU, I abnormally have a speedup much greater 
 than 4, around 10, which means a weird parallelization or something 
 occurs.
 
 I would be appreciated, if anyone make some comments on the issue 
 explaining the possible reasons why this weird incident occurs?
 
 Here is the basic parallel portion of the code:
 
 karatsuba :: Int - [Bool] - [Bool] - [Bool]
 karatsuba _ [] _ = []
 karatsuba _ _ [] = []
 karatsuba currentDepth xs ys 
  | (l  32 || currentDepth = limit) = mul xs ys
  | otherwise = (x `add` (replicate l False ++ (z `add` (replicate l False ++ 
 y `Main.using` strategy  
   where 
l = (min (length xs) (length ys)) `div` 2
(xs0, xs1) = splitAt l xs
(ys0, ys1) = splitAt l ys
x  = (normalize (karatsuba (currentDepth+1) xs0 ys0))
y  = (normalize (karatsuba (currentDepth+1) xs1 ys1)) 
z  = ((karatsuba (currentDepth+1) (add xs0 xs1) (add ys0 ys1)) `sub` 
 (normalize (karatsuba (currentDepth+1) xs0 ys0)) `sub` (normalize (karatsuba 
 (currentDepth+1) xs1 ys1)))
strategy res = do (Main.rpar) (x)
  (Main.rpar) (y)
  (Main.rpar) (z)
  Main.rdeepseq res
 
 Many thanks in advance and kind regards.
 
 Saluti,
 Burak.
 
 
 
 
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Parallel Karatsuba - A Weird speed up value greater than 4 on an Intel Quadcore CPU!

2011-12-24 Thread Eugene Kirpichov
I mean exactly 2x one cores cache size of course.



On 24.12.2011, at 20:06, Eugene Kirpichov ekirpic...@gmail.com wrote:

 If the cache was infinitely faster, then doubling it would give an infinite 
 speedup for an algorithm whose working set was exactly one core's cache size.
 
 
 
  On 24.12.2011, at 19:58, Burak Ekici ekcbu...@hotmail.com wrote:
 
 
 First of all, thanks a lot for your quick answer!
 However, the question is what are the approximate limits 
 of this super-linear speedup? I mean, is it acceptable, if
 parallelization happens even 100 time faster?
 
 How can I calculate the limits of this speedup via the 
 cache size of my processor?
 
 Cheers,
 Burak.
 
 CC: haskell-cafe@haskell.org
 From: ekirpic...@gmail.com
 Subject: Re: [Haskell-cafe] Parallel Karatsuba - A Weird speed up value 
 greater than 4 on an Intel Quadcore CPU!
 Date: Sat, 24 Dec 2011 19:53:26 +0400
 To: ekcbu...@hotmail.com
 
 Superlinear speedup can occur due to the increased cache size.
 
 
 
  On 24.12.2011, at 19:49, Burak Ekici ekcbu...@hotmail.com wrote:
 
 Dear List,
 
 I am trying to parallelize Karatsuba multiplication with Haskell's
 second generation strategies. Although, I am running the code on an
 Intel quad-core  CPU, I abnormally have a speedup much greater 
 than 4, around 10, which means a weird parallelization or something 
 occurs.
 
 I would be appreciated, if anyone make some comments on the issue 
 explaining the possible reasons why this weird incident occurs?
 
 Here is the basic parallel portion of the code:
 
 karatsuba :: Int - [Bool] - [Bool] - [Bool]
 karatsuba _ [] _ = []
 karatsuba _ _ [] = []
 karatsuba currentDepth xs ys 
  | (l  32 || currentDepth = limit) = mul xs ys
  | otherwise = (x `add` (replicate l False ++ (z `add` (replicate l False ++ 
 y `Main.using` strategy  
   where 
l = (min (length xs) (length ys)) `div` 2
(xs0, xs1) = splitAt l xs
(ys0, ys1) = splitAt l ys
x  = (normalize (karatsuba (currentDepth+1) xs0 ys0))
y  = (normalize (karatsuba (currentDepth+1) xs1 ys1)) 
z  = ((karatsuba (currentDepth+1) (add xs0 xs1) (add ys0 ys1)) `sub` 
 (normalize (karatsuba (currentDepth+1) xs0 ys0)) `sub` (normalize (karatsuba 
 (currentDepth+1) xs1 ys1)))
strategy res = do (Main.rpar) (x)
  (Main.rpar) (y)
  (Main.rpar) (z)
  Main.rdeepseq res
 
 Many thanks in advance and kind regards.
 
 Saluti,
 Burak.
 
 
 
 
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] strict, lazy, non-strict, eager

2011-12-24 Thread Eugene Kirpichov
On Sat, Dec 24, 2011 at 10:49 PM, Dan Doel dan.d...@gmail.com wrote:

 On Sat, Dec 24, 2011 at 2:31 AM, Albert Y. C. Lai tre...@vex.net wrote:
  1. a function f is strict if  f ⊥ = ⊥
  2. ⊥ represents any computation which does not terminate, i.e. an
 exception
  or an infinite loop
  3. strict describes the denotational semantics

 All three of these statements are true. The only potentially
 controversial one is 2, but any term that the operational semantics
 would identify as simple non-termination (which is invariably what
 they're talking about when they say 2; not some partially defined
 term) will be given denotation ⊥.

  B. Actually there are more, but apparently two is already enough to cause
  all kinds of incoherent statements. If I draw your attention to algebraic
  semantics, will you start saying it is too lazy, need to make it more
  idempotent?

 Yes, there are more than two. And they don't exist in completely
 separate vacuums from one another. Denotational and operational
 properties are sometimes (often?) correlated. And algebraic semantics
 is often the sweet spot for reasoning about the structure of the
 operational or denotational semantics of your code, without bringing
 in all the irrelevant details from the latter two. I can make a
 denotational semantics for System F where each term is denoted by its
 normal form (an operational concept).

 I think it's good to be clear on all these specifics, and people could
 do with a better recognition of the difference between (non-)strict
 and (lazy)eager (hint: you can have an eager, non-strict language).

Can you elaborate? That's apparently my blind spot.


 But it isn't necessarily a problem that people speak in terms of more
 than one at once. The different kinds of semantics aren't in conflict
 with one another.

 The main problem would be that such casual mixing prevents newcomers
 from learning the distinctions by osmosis.

 -- Dan

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] strict, lazy, non-strict, eager

2011-12-24 Thread Eugene Kirpichov
Thanks, this makes sense.

On Sun, Dec 25, 2011 at 10:03 AM, Dan Doel dan.d...@gmail.com wrote:

 On Sun, Dec 25, 2011 at 12:14 AM, Eugene Kirpichov ekirpic...@gmail.com
 wrote:
  On Sat, Dec 24, 2011 at 10:49 PM, Dan Doel dan.d...@gmail.com wrote:
  I think it's good to be clear on all these specifics, and people could
  do with a better recognition of the difference between (non-)strict
  and (lazy)eager (hint: you can have an eager, non-strict language).
 
  Can you elaborate? That's apparently my blind spot.

 A while back, there was a paper on something called (I believe)
 optimistic evaluation. The strategy goes like this: when you evaluate
 'f x', first you start evaluating 'x'. If that takes too long, or you
 encounter an exception, you (re)thunk it, and continue evaluating the
 body of f lazy style, in case you don't really need x.

 This is arguably eager, since you reduce arguments to functions
 immediately if possible. And it has some advantages over lazy
 evaluation in common cases. For instance, it avoids foldl building up
 a huge nested thunk that would cause a stack overflow. But it isn't
 strict, because

const 5 undefined

 is still 5.

 You can also imagine sparking on every function application, so that
 arguments automatically get reduced in parallel, and as soon as
 possible. I think that's been determined to not be very effective,
 though.

 -- Dan
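
A tiny self-contained illustration of the distinction (not from Dan's message):
strictness is a property of the function's value, independent of evaluation
order.

    -- 'const' is non-strict in its second argument: under lazy, eager or
    -- optimistic evaluation the value is the same.
    nonStrict :: Int
    nonStrict = const 5 undefined        -- 5

    -- 'seq' makes the result semantically depend on its first argument,
    -- so this is strict and diverges under any evaluation strategy.
    strict :: Int
    strict = undefined `seq` 5           -- undefined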




-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] book.realworldhaskell.org is down

2011-12-24 Thread Eugene Kirpichov
Thanks.
Now it's working - it wasn't at the moment of my email.

On Sun, Dec 25, 2011 at 11:21 AM, Tom Murphy amin...@gmail.com wrote:

 realworldhaskell.org/book
 On Dec 25, 2011 1:46 AM, Eugene Kirpichov ekirpic...@gmail.com wrote:

 See subject.
 Is this expected?

 --
 Eugene Kirpichov
 Principal Engineer, Mirantis Inc. http://www.mirantis.com/
 Editor, http://fprog.ru/

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] On stream processing, and a new release of timeplot coming

2011-12-23 Thread Eugene Kirpichov
Hi Cafe,

In the last couple of days I completed my quest of making my graphing
utility timeplot ( http://jkff.info/software/timeplotters ) not load the
whole input dataset into memory and consequently be able to deal with
datasets of any size, provided however that the amount of data to *draw* is
not so large. On the go it also got a huge speedup - previously visualizing
a cluster activity dataset with a million events took around 15 minutes and
a gig of memory, now it takes 20 seconds and 6 Mb max residence.
(I haven't yet uploaded to hackage as I have to give it a bit more testing)

The refactoring involved a number of interesting programming patterns that
I'd like to share with you and ask for feedback - perhaps something can be
simplified.

The source is at http://github.com/jkff/timeplot

The datatype of incremental computations is at
https://github.com/jkff/timeplot/blob/master/Tools/TimePlot/Incremental.hs .
Strictness is extremely important here - the last memory leak I eliminated
was lack of bang patterns in teeSummary.
There's an interesting function statefulSummary that shows how closures
allow you to achieve encapsulation over an unknown piece of state -
curiously enough, you can't define StreamSummary a r as StreamSummary {
init :: s, insert :: a -> s -> s, finalize :: s -> r } (existentially quantified
over s), as then you can't define summaryByKey - you don't know what type
to store in the states map.
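Roughly, the closure trick looks like this; a simplified sketch with made-up
field names, the real definitions in Incremental.hs differ in details:

    data StreamSummary a r = StreamSummary
      { insert   :: a -> StreamSummary a r  -- feed one more element
      , finalize :: r                       -- result of what was seen so far
      }

    -- The state type 's' never appears in StreamSummary: it is captured by
    -- the closure, so no existential quantification is needed.
    statefulSummary :: s -> (a -> s -> s) -> (s -> r) -> StreamSummary a r
    statefulSummary s0 step done = go s0
      where
        go s = s `seq` StreamSummary (\a -> go (step a s)) (done s)

summaryByKey can then keep a Map k (StreamSummary a r) without ever naming s.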

It is used to incrementally build all plots simultaneously, shown by the
main loop in makeChart at
https://github.com/jkff/timeplot/blob/master/Tools/TimePlot.hs

Incremental building of plots of different types is at
https://github.com/jkff/timeplot/blob/master/Tools/TimePlot/Plots.hs
There are also a few interesting functions in that file - e.g.
edges2eventsSummary, which applies a summary over a stream of long events
to a stream of rise/fall edges.
This means that you can define a stream transformer (Stream a -> Stream
b) as a function (StreamSummary b -> StreamSummary a), which can be much
easier. I have to think more about this idea.

-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Drawing charts over a lot of data

2011-11-21 Thread Eugene Kirpichov
Hi cafe,

Are there any (possibly unfinished?) libraries dedicated to drawing charts
over large amounts of data? I mean, such amounts that you don't want to
store the whole set of input data in memory and instead you prefer to do
one or a few passes over the input, calling the library's drawing functions
for each data point.

I'm currently using the excellent Chart package
http://hackage.haskell.org/package/Chart , but it seems to require quite a
bit of work to become usable in this mode (I haven't looked really deep
yet).

So, I'd be happy if someone pointed me to an existing package (I didn't
find one on hackage) or if someone expressed interest in making Chart avoid
having the whole input in memory.

-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Ridiculously slow FFI, or cairo binding?

2011-11-11 Thread Eugene Kirpichov
Hi Axel,

When do you expect to publish an updated version of gtk2hs on hackage?

On Thu, Nov 3, 2011 at 1:02 PM, Eugene Kirpichov ekirpic...@gmail.com wrote:
 Hi,
 The actual thanks for tracking this down go to Vincent Hanquez for finding
 that we're doing a lot of gmp calls (and for making me aware of ltrace), and
 to Felipe Lessa for finding that this is caused by poor code generated for
 cFloatConv  :)
 Your changes look identical to those that I made in my copy.
 On Thu, Nov 3, 2011 at 11:41 AM, Axel Simon axel.si...@in.tum.de wrote:

 Hi Eugene,

 On 02.11.2011, at 17:34, Eugene Kirpichov wrote:

 Heh.

 Guess what!
 A simple {-# INLINE cFloatConv #-} helped to the same extent!

 Axel, I think this change should be pretty easy to incorporate, and it
 probably makes sense to inline all other functions in Types.chs too.


 I've added INLINE pragmas to all these odd c2hs marshalling functions.
 Could you pull and check if I've done it correctly?

 Thanks for tracking this down!

 Axel

 Would you like me to send the trivial darcs patch or the gtk2hs team
 will take care of this?

 On Wed, Nov 2, 2011 at 7:29 PM, Felipe Almeida Lessa
 felipe.le...@gmail.com wrote:

 On Wed, Nov 2, 2011 at 11:24 AM, Jean-Marie Gaillourdet
 j...@gaillourdet.net wrote:

 Hi Eugene,

 did you try using the SPECIALIZE pragma? It is part of the Haskell 98
 and Haskell 2010 specifications.

 I don't think it's going to make any difference, as the core already
 have an specialized poor version.  See my first e-mail.

 --
 Felipe.




 --
 Eugene Kirpichov
 Principal Engineer, Mirantis Inc. http://www.mirantis.com/
 Editor, http://fprog.ru/




 --
 Eugene Kirpichov
 Principal Engineer, Mirantis Inc. http://www.mirantis.com/
 Editor, http://fprog.ru/




-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Ridiculously slow FFI, or cairo binding?

2011-11-03 Thread Eugene Kirpichov
Hi,

The actual thanks for tracking this down go to Vincent Hanquez for finding
that we're doing a lot of gmp calls (and for making me aware of ltrace),
and to Felipe Lessa for finding that this is caused by poor code generated
for cFloatConv  :)

Your changes look identical to those that I made in my copy.
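
For anyone following along, the change is essentially adding pragmas of this
shape to the marshalling helpers in Types.chs (the signature below is from
memory and may not match the actual gtk2hs code exactly):

    {-# INLINE cFloatConv #-}
    cFloatConv :: (RealFloat a, RealFloat b) => a -> b
    cFloatConv = realToFrac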

On Thu, Nov 3, 2011 at 11:41 AM, Axel Simon axel.si...@in.tum.de wrote:

 Hi Eugene,


 On 02.11.2011, at 17:34, Eugene Kirpichov wrote:

  Heh.

 Guess what!
 A simple {-# INLINE cFloatConv #-} helped to the same extent!

 Axel, I think this change should be pretty easy to incorporate, and it
 probably makes sense to inline all other functions in Types.chs too.


 I've added INLINE pragmas to all these odd c2hs marshalling functions.
 Could you pull and check if I've done it correctly?

 Thanks for tracking this down!

 Axel


  Would you like me to send the trivial darcs patch or the gtk2hs team
 will take care of this?

 On Wed, Nov 2, 2011 at 7:29 PM, Felipe Almeida Lessa
 felipe.le...@gmail.com wrote:

 On Wed, Nov 2, 2011 at 11:24 AM, Jean-Marie Gaillourdet
 j...@gaillourdet.net wrote:

 Hi Eugene,

 did you try using the SPECIALIZE pragma? It is part of the Haskell 98
 and Haskell 2010 specifications.


 I don't think it's going to make any difference, as the core already
 have an specialized poor version.  See my first e-mail.

 --
 Felipe.




 --
 Eugene Kirpichov
 Principal Engineer, Mirantis Inc. http://www.mirantis.com/
 Editor, http://fprog.ru/





-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Ridiculously slow FFI, or cairo binding?

2011-11-02 Thread Eugene Kirpichov
Hello,

I've got two very simple programs that draw a very simple picture using
cairo, doing a couple hundred thousand of cairo calls.
One program is in C++. The other is in Haskell and uses the cairo library
bindings.

The C++ program completes in a fraction of a second, the Haskell program
takes about 7-8 seconds to run. They produce exactly the same output.

What could be at fault here? Why are the cairo bindings working so slow? (I
suppose there isn't too much cairo-specific stuff here, perhaps it's a
general FFI question?)

#include <cairo.h>
int main() {
    cairo_surface_t *surface =
        cairo_image_surface_create(CAIRO_FORMAT_ARGB32, 1024, 768);
    cairo_t *cr = cairo_create(surface);
    cairo_set_source_rgb(cr, 0, 255, 0);
    for(int x = 0; x < 1024; x += 2) for(int y = 0; y < 768; y += 2) {
        cairo_rectangle(cr, x, y, 1, 1);
        cairo_fill(cr);
    }
    cairo_surface_write_to_png(surface, "picture.png");
    return 0;
}

module Main where

import qualified Graphics.Rendering.Cairo as C
import Control.Monad

main = C.withImageSurface C.FormatARGB32 1024 768 $ \s -> do
  C.renderWith s $ do
    C.setSourceRGBA 0 255 0 255
    forM_ [0,2..1024] $ \x -> do
      forM_ [0,2..768] $ \y -> do
        C.rectangle x y 1 1
        C.fill
  C.surfaceWriteToPNG s "picture.png"

-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Ridiculously slow FFI, or cairo binding?

2011-11-02 Thread Eugene Kirpichov
I forgot to specify my environment.

Windows Server 2008 R2 x64, ghc 7.0.3.

However, I observed the same speed differences on a 64-bit ubuntu with ghc
6.12 - I profiled my application with cairo-trace, and cairo-perf-trace
drew in a fraction of a second the picture that my Haskell program spend a
dozen seconds drawing.

On Wed, Nov 2, 2011 at 1:17 PM, Eugene Kirpichov ekirpic...@gmail.comwrote:

 Hello,

 I've got two very simple programs that draw a very simple picture using
 cairo, doing a couple hundred thousand of cairo calls.
 One program is in C++. The other is in Haskell and uses the cairo library
 bindings.

 The C++ program completes in a fraction of a second, the Haskell program
 takes about 7-8 seconds to run. They produce exactly the same output.

 What could be at fault here? Why are the cairo bindings working so slow?
 (I suppose there isn't too much cairo-specific stuff here, perhaps it's a
 general FFI question?)

 #include <cairo.h>
 int main() {
     cairo_surface_t *surface =
         cairo_image_surface_create(CAIRO_FORMAT_ARGB32, 1024, 768);
     cairo_t *cr = cairo_create(surface);
     cairo_set_source_rgb(cr, 0, 255, 0);
     for(int x = 0; x < 1024; x += 2) for(int y = 0; y < 768; y += 2) {
         cairo_rectangle(cr, x, y, 1, 1);
         cairo_fill(cr);
     }
     cairo_surface_write_to_png(surface, "picture.png");
     return 0;
 }

 module Main where

 import qualified Graphics.Rendering.Cairo as C
 import Control.Monad

 main = C.withImageSurface C.FormatARGB32 1024 768 $ \s -> do
   C.renderWith s $ do
     C.setSourceRGBA 0 255 0 255
     forM_ [0,2..1024] $ \x -> do
       forM_ [0,2..768] $ \y -> do
         C.rectangle x y 1 1
         C.fill
   C.surfaceWriteToPNG s "picture.png"

 --
 Eugene Kirpichov
 Principal Engineer, Mirantis Inc. http://www.mirantis.com/
 Editor, http://fprog.ru/




-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Ridiculously slow FFI, or cairo binding?

2011-11-02 Thread Eugene Kirpichov
Hi Claude,

I suspected that the issue could be about unsafe foreign imports - all
imports in the cairo bindings are safe.
I compiled myself a version of cairo bindings with the rectangle and
fill functions marked as unsafe.

Unfortunately that didn't help the case at all, even though the core
changed FFI calls from __pkg_ccall_GC to __pkg_ccall. The performance
stayed the same; the overhead is elsewhere.
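(For concreteness, a minimal sketch of that experiment, not the code actually shipped in gtk2hs: the same cairo entry point imported both ways. "safe" calls carry extra bookkeeping so Haskell can be re-entered during the call; "unsafe" calls skip it and are cheaper.)

{-# LANGUAGE ForeignFunctionInterface #-}
import Foreign.Ptr (Ptr)

-- identical C function, two import flavours
foreign import ccall safe   "cairo.h cairo_fill" cairo_fill_safe   :: Ptr () -> IO ()
foreign import ccall unsafe "cairo.h cairo_fill" cairo_fill_unsafe :: Ptr () -> IO ()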

On Wed, Nov 2, 2011 at 1:31 PM, Claude Heiland-Allen cla...@goto10.orgwrote:

 On 02/11/11 09:17, Eugene Kirpichov wrote:

 Hello,

 I've got two very simple programs that draw a very simple picture using
 cairo, doing a couple hundred thousand of cairo calls.
 One program is in C++. The other is in Haskell and uses the cairo library
 bindings.

 The C++ program completes in a fraction of a second, the Haskell program
 takes about 7-8 seconds to run. They produce exactly the same output.

 What could be at fault here? Why are the cairo bindings working so slow?
 (I
 suppose there isn't too much cairo-specific stuff here, perhaps it's a
 general FFI question?)


 I filed a bug report about this some months ago, having noticed similar
 slowness:

 gtk2hs ticket #1228 cairo performance is very bad
 http://hackage.haskell.org/trac/gtk2hs/ticket/1228

 My conclusion was that it isn't FFI being slow, but some other reason,
 possibly too much redirection / high level fanciness in the implementation
 of cairo bindings that the compiler can't see through to optimize
 aggressively, or possibly some Double / CDouble / realToFrac rubbishness.


 Claude

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Ridiculously slow FFI, or cairo binding?

2011-11-02 Thread Eugene Kirpichov
Oh. This is pretty crazy, I wonder what they're doing with GMP so much...

I modified the Haskell program to use cairo directly, even with safe calls,
and it now takes the same time as the C program.

{-# LANGUAGE ForeignFunctionInterface #-}
module Main where

import qualified Graphics.Rendering.Cairo as C
import Control.Monad
import Foreign
import Foreign.C.Types
import Foreign.C.String

foreign import ccall "cairo.h cairo_image_surface_create"
  cairo_image_surface_create :: CInt -> CInt -> CInt -> IO (Ptr ())
foreign import ccall "cairo.h cairo_create"
  cairo_create :: Ptr () -> IO (Ptr ())
foreign import ccall "cairo.h cairo_set_source_rgb"
  cairo_set_source_rgb :: Ptr () -> CDouble -> CDouble -> CDouble -> IO ()
foreign import ccall "cairo.h cairo_rectangle"
  cairo_rectangle :: Ptr () -> CDouble -> CDouble -> CDouble -> CDouble -> IO ()
foreign import ccall "cairo.h cairo_fill"
  cairo_fill :: Ptr () -> IO ()
foreign import ccall "cairo.h cairo_surface_write_to_png"
  cairo_surface_write_to_png :: Ptr () -> CString -> IO ()

main = do
  s <- cairo_image_surface_create 0 1024 768
  cr <- cairo_create s
  cairo_set_source_rgb cr 0 255 0
  forM_ [0,2..1024] $ \x -> do
    forM_ [0,2..768] $ \y -> do
      cairo_rectangle cr x y 1 1
      cairo_fill cr
  pic <- newCString "picture.png"
  cairo_surface_write_to_png s pic

On Wed, Nov 2, 2011 at 1:58 PM, Vincent Hanquez t...@snarc.org wrote:

 On 11/02/2011 09:51 AM, Eugene Kirpichov wrote:

 Hi Claude,

 I suspected that the issue could be about unsafe foreign imports - all
 imports in the cairo bindings are safe.
 I compiled myself a version of cairo bindings with the rectangle and
 fill functions marked as unsafe.

 Unfortunately that didn't help the case at all, even though the core
 changed FFI calls from __pkg_ccall_GC to __pkg_ccall. The performance
 stayed the same; the overhead is elsewhere.

  doing a ltrace, i think the reason is pretty obvious, there's a lot of
 GMP calls:

 __gmpz_init(0x7f5043171730, 1, 0x7f5043171750, 0x7f5043171740,
 0x7f50431d2508) = 0x7f50431d2530
 __gmpz_mul(0x7f5043171730, 0x7f5043171750, 0x7f5043171740, 0x7f50431d2538,
 0x7f50431d2508) = 1
 __gmpz_init(0x7f5043171728, 1, 0x7f5043171748, 0x7f5043171738,
 0x7f50431d2538) = 0x7f50431d2568
 __gmpz_mul(0x7f5043171728, 0x7f5043171748, 0x7f5043171738, 0x7f50431d2570,
 0x7f50431d2538) = 1
 __gmpn_gcd_1(0x7f50431d2580, 1, 1, 1, 1) = 1
 repeated thousands of times

 before each cairo call.

 just to make sure, the C version doesn't exhibit this behavior.

 --
 Vincent


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Ridiculously slow FFI, or cairo binding?

2011-11-02 Thread Eugene Kirpichov
+gtk2hs-users

On Wed, Nov 2, 2011 at 2:10 PM, Eugene Kirpichov ekirpic...@gmail.comwrote:

 Oh. This is pretty crazy, I wonder what they're doing with GMP so much...

 I modified the Haskell program to use cairo directly, even with safe
 calls, and it now takes the same time as the C program.

 {-# LANGUAGE ForeignFunctionInterface #-}
 module Main where

 import qualified Graphics.Rendering.Cairo as C
 import Control.Monad
 import Foreign
 import Foreign.C.Types
 import Foreign.C.String

  foreign import ccall "cairo.h cairo_image_surface_create"
    cairo_image_surface_create :: CInt -> CInt -> CInt -> IO (Ptr ())
  foreign import ccall "cairo.h cairo_create"
    cairo_create :: Ptr () -> IO (Ptr ())
  foreign import ccall "cairo.h cairo_set_source_rgb"
    cairo_set_source_rgb :: Ptr () -> CDouble -> CDouble -> CDouble -> IO ()
  foreign import ccall "cairo.h cairo_rectangle"
    cairo_rectangle :: Ptr () -> CDouble -> CDouble -> CDouble -> CDouble -> IO ()
  foreign import ccall "cairo.h cairo_fill"
    cairo_fill :: Ptr () -> IO ()
  foreign import ccall "cairo.h cairo_surface_write_to_png"
    cairo_surface_write_to_png :: Ptr () -> CString -> IO ()

  main = do
    s <- cairo_image_surface_create 0 1024 768
    cr <- cairo_create s
    cairo_set_source_rgb cr 0 255 0
    forM_ [0,2..1024] $ \x -> do
      forM_ [0,2..768] $ \y -> do
        cairo_rectangle cr x y 1 1
        cairo_fill cr
    pic <- newCString "picture.png"
    cairo_surface_write_to_png s pic

 On Wed, Nov 2, 2011 at 1:58 PM, Vincent Hanquez t...@snarc.org wrote:

 On 11/02/2011 09:51 AM, Eugene Kirpichov wrote:

 Hi Claude,

 I suspected that the issue could be about unsafe foreign imports - all
 imports in the cairo bindings are safe.
 I compiled myself a version of cairo bindings with the rectangle and
 fill functions marked as unsafe.

 Unfortunately that didn't help the case at all, even though the core
 changed FFI calls from __pkg_ccall_GC to __pkg_ccall. The performance
 stayed the same; the overhead is elsewhere.

  doing a ltrace, i think the reason is pretty obvious, there's a lot of
 GMP calls:

 __gmpz_init(0x7f5043171730, 1, 0x7f5043171750, 0x7f5043171740,
 0x7f50431d2508) = 0x7f50431d2530
 __gmpz_mul(0x7f5043171730, 0x7f5043171750, 0x7f5043171740,
 0x7f50431d2538, 0x7f50431d2508) = 1
 __gmpz_init(0x7f5043171728, 1, 0x7f5043171748, 0x7f5043171738,
 0x7f50431d2538) = 0x7f50431d2568
 __gmpz_mul(0x7f5043171728, 0x7f5043171748, 0x7f5043171738,
 0x7f50431d2570, 0x7f50431d2538) = 1
 __gmpn_gcd_1(0x7f50431d2580, 1, 1, 1, 1) = 1
  repeated thousands of times

  before each cairo call.

 just to make sure, the C version doesn't exhibit this behavior.

 --
 Vincent


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




 --
 Eugene Kirpichov
 Principal Engineer, Mirantis Inc. http://www.mirantis.com/
 Editor, http://fprog.ru/




-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Ridiculously slow FFI, or cairo binding?

2011-11-02 Thread Eugene Kirpichov
Any idea how to debug why all the GMP calls?
I'm looking at even the auto-generated source for cairo bindings, but I
don't see anything at all that could lead to *thousands* of them.

On Wed, Nov 2, 2011 at 2:14 PM, Vincent Hanquez t...@snarc.org wrote:

 On 11/02/2011 10:10 AM, Eugene Kirpichov wrote:

 Oh. This is pretty crazy, I wonder what they're doing with GMP so much...

 I modified the Haskell program to use cairo directly, even with safe
 calls, and it now takes the same time as the C program.

  yep, i ended up doing the exact same thing for testing,

  foreign import ccall "cairo_rectangle"
   my_rectangle :: CI.Cairo -> CDouble -> CDouble -> CDouble -> CDouble ->
  IO ()

 and just replacing the rectangle call make almost all the difference for
 me (almost as fast as C)

 --
 Vincent




-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Fwd: Ridiculously slow FFI, or cairo binding?

2011-11-02 Thread Eugene Kirpichov
Yay!!!

I made a small change in Types.chs and got my original cairo-binding-based
program to be just as blazing fast. The only problem I have with this is
that I used multiparameter type classes.

Dear gtk2hs team! Is it possible to incorporate my changes? I'm pretty sure
people will be happy by an order-of-magnitude speedup. Probably the stuff
could be wrapped in #define's for those who aren't using GHC and can't use
multiparameter type classes?

I am pretty sure I could have done the same with rewrite rules, but I tried
for a while and to no avail.

FAILED SOLUTION: rewrite rules
cFloatConv :: (RealFloat a, RealFloat b) => a -> b
cFloatConv  = realToFrac
{-# NOINLINE cFloatConv #-}
{-# RULES "cFloatConv/float2Double" cFloatConv = float2Double #-}
{-# RULES "cFloatConv/double2Float" cFloatConv = double2Float #-}
{-# RULES "cFloatConv/self" cFloatConv = id #-}

For some reason, the rules don't fire. Anyone got an idea why?

SUCCEEDED SOLUTION: multiparameter type classes

I rewrote cFloatConv like this:

import GHC.Float
class (RealFloat a, RealFloat b) => CFloatConv a b where
  cFloatConv :: a -> b
  cFloatConv = realToFrac

instance CFloatConv Double Double where cFloatConv = id
instance CFloatConv Double CDouble
instance CFloatConv CDouble Double
instance CFloatConv Float Float where cFloatConv = id
instance CFloatConv Float Double where cFloatConv = float2Double
instance CFloatConv Double Float where cFloatConv = double2Float

and replaced a couple of constraints in functions below by usage of
CFloatConv.
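(For completeness: a multi-parameter class with fully concrete instance heads like the above would presumably also need something like the following at the top of Types.chs. This is my assumption, not part of the quoted patch.)

{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}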


On Wed, Nov 2, 2011 at 2:25 PM, Felipe Almeida Lessa felipe.le...@gmail.com
 wrote:

 +gtk2hs-devel

 On Wed, Nov 2, 2011 at 8:15 AM, Eugene Kirpichov ekirpic...@gmail.com
 wrote:
  Any idea how to debug why all the GMP calls?
  I'm looking at even the auto-generated source for cairo bindings, but I
  don't see anything at all that could lead to *thousands* of them.

 Found them.  Look at the Types module and you'll see

  cFloatConv :: (RealFloat a, RealFloat b) => a -> b
  cFloatConv  = realToFrac

 This function (or its cousins peekFloatConv, withFloatConv...) are
 used *everywhere*.

 Looking at this module with ghc-core we see that GHC compiled a
 generic version of cFloatConv:

 Graphics.Rendering.Cairo.Types.$wcFloatConv
  :: forall a_a3TN b_a3TO.
 (RealFloat a_a3TN, RealFrac b_a3TO) =>
 a_a3TN -> b_a3TO
 [GblId,
  Arity=3,

  Unf=Unf{Src=vanilla, TopLvl=True, Arity=3, Value=True,
 ConLike=True, Cheap=True, Expandable=True,
 Guidance=IF_ARGS [3 3 0] 12 0}]
 Graphics.Rendering.Cairo.Types.$wcFloatConv =
  \ (@ a_a3TN)
(@ b_a3TO)
(w_s5zg :: RealFloat a_a3TN)
(ww_s5zj :: RealFrac b_a3TO)
    (w1_s5zA :: a_a3TN) ->
fromRational
  @ b_a3TO
  ($p2RealFrac @ b_a3TO ww_s5zj)
  (toRational
 @ a_a3TN
 ($p1RealFrac
@ a_a3TN ($p1RealFloat @ a_a3TN w_s5zg))
 w1_s5zA)

 Note that this is basically cFloatConv = fromRational . toRational.

 *However*, GHC also compiled a Double - Double specialization:

 Graphics.Rendering.Cairo.Types.cFloatConv1
  :: Double -> Double
 [GblId,
  Arity=1,

  Unf=Unf{Src=InlineStable, TopLvl=True, Arity=1, Value=True,
 ConLike=True, Cheap=True, Expandable=True,
 Guidance=ALWAYS_IF(unsat_ok=True,boring_ok=False)
 Tmpl= \ (eta_B1 [Occ=Once!] :: Double) ->
 case eta_B1 of _ { D# ww_a5v3 [Occ=Once] ->
 case $w$ctoRational ww_a5v3
 of _ { (# ww2_a5v8 [Occ=Once], ww3_a5v9 [Occ=Once] #) ->
 $wfromRat ww2_a5v8 ww3_a5v9
 }
 }}]
 Graphics.Rendering.Cairo.Types.cFloatConv1 =
  \ (eta_B1 :: Double) ->
    case eta_B1 of _ { D# ww_a5v3 ->
    case $w$ctoRational ww_a5v3
    of _ { (# ww2_a5v8, ww3_a5v9 #) ->
$wfromRat ww2_a5v8 ww3_a5v9
}
}

 ...which is also equivalent to fromRational . toRational however with
 the type class inlined!  Oh, god...

 Cheers,

 --
 Felipe.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Fwd: Ridiculously slow FFI, or cairo binding?

2011-11-02 Thread Eugene Kirpichov
Sorry for re-sending, my previous attempt got ignored by gtk2hs-devel
mailing list as I wasn't subscribed. Now I am.

On Wed, Nov 2, 2011 at 3:14 PM, Eugene Kirpichov ekirpic...@gmail.comwrote:

 Yay!!!

 I made a small change in Types.chs and got my original cairo-binding-based
 program to be just as blazing fast. The only problem I have with this is
 that I used multiparameter type classes.

 Dear gtk2hs team! Is it possible to incorporate my changes? I'm pretty
 sure people will be happy by an order-of-magnitude speedup. Probably the
 stuff could be wrapped in #define's for those who aren't using GHC and
 can't use multiparameter type classes?

 I am pretty sure I could have done the same with rewrite rules, but I
 tried for a while and to no avail.

 FAILED SOLUTION: rewrite rules
 cFloatConv :: (RealFloat a, RealFloat b) = a - b
 cFloatConv  = realToFrac
 {-# NOINLINE cFloatConv #-}
 {-# RULES cFloatConv/float2Double cFloatConv = float2Double #-}
 {-# RULES cFloatConv/double2Float cFloatConv = double2Float #-}
 {-# RULES cFloatConv/self cFloatConv = id   #-}

 For some reason, the rules don't fire. Anyone got an idea why?

 SUCCEEDED SOLUTION: multiparameter type classes

 I rewrote cFloatConv like this:

 import GHC.Float
 class (RealFloat a, RealFloat b) = CFloatConv a b where
   cFloatConv :: a - b
   cFloatConv = realToFrac

 instance CFloatConv Double Double where cFloatConv = id
 instance CFloatConv Double CDouble
 instance CFloatConv CDouble Double
 instance CFloatConv Float Float where cFloatConv = id
 instance CFloatConv Float Double where cFloatConv = float2Double
 instance CFloatConv Double Float where cFloatConv = double2Float

 and replaced a couple of constraints in functions below by usage of
 CFloatConv.


 On Wed, Nov 2, 2011 at 2:25 PM, Felipe Almeida Lessa 
 felipe.le...@gmail.com wrote:

 +gtk2hs-devel

 On Wed, Nov 2, 2011 at 8:15 AM, Eugene Kirpichov ekirpic...@gmail.com
 wrote:
  Any idea how to debug why all the GMP calls?
  I'm looking at even the auto-generated source for cairo bindings, but I
  don't see anything at all that could lead to *thousands* of them.

 Found them.  Look at the Types module and you'll see

  cFloatConv :: (RealFloat a, RealFloat b) = a - b
  cFloatConv  = realToFrac

 This function (or its cousins peekFloatConv, withFloatConv...) are
 used *everywhere*.

 Looking at this module with ghc-core we see that GHC compiled a
 generic version of cFloatConv:

 Graphics.Rendering.Cairo.Types.$wcFloatConv
  :: forall a_a3TN b_a3TO.
 (RealFloat a_a3TN, RealFrac b_a3TO) =
 a_a3TN - b_a3TO
 [GblId,
  Arity=3,

  Unf=Unf{Src=vanilla, TopLvl=True, Arity=3, Value=True,
 ConLike=True, Cheap=True, Expandable=True,
 Guidance=IF_ARGS [3 3 0] 12 0}]
 Graphics.Rendering.Cairo.Types.$wcFloatConv =
  \ (@ a_a3TN)
(@ b_a3TO)
(w_s5zg :: RealFloat a_a3TN)
(ww_s5zj :: RealFrac b_a3TO)
(w1_s5zA :: a_a3TN) -
fromRational
  @ b_a3TO
  ($p2RealFrac @ b_a3TO ww_s5zj)
  (toRational
 @ a_a3TN
 ($p1RealFrac
@ a_a3TN ($p1RealFloat @ a_a3TN w_s5zg))
 w1_s5zA)

 Note that this is basically cFloatConv = fromRational . toRational.

 *However*, GHC also compiled a Double - Double specialization:

 Graphics.Rendering.Cairo.Types.cFloatConv1
  :: Double - Double
 [GblId,
  Arity=1,

  Unf=Unf{Src=InlineStable, TopLvl=True, Arity=1, Value=True,
 ConLike=True, Cheap=True, Expandable=True,
 Guidance=ALWAYS_IF(unsat_ok=True,boring_ok=False)
 Tmpl= \ (eta_B1 [Occ=Once!] :: Double) -
 case eta_B1 of _ { D# ww_a5v3 [Occ=Once] -
 case $w$ctoRational ww_a5v3
 of _ { (# ww2_a5v8 [Occ=Once], ww3_a5v9 [Occ=Once] #) -
 $wfromRat ww2_a5v8 ww3_a5v9
 }
 }}]
 Graphics.Rendering.Cairo.Types.cFloatConv1 =
  \ (eta_B1 :: Double) -
case eta_B1 of _ { D# ww_a5v3 -
case $w$ctoRational ww_a5v3
of _ { (# ww2_a5v8, ww3_a5v9 #) -
$wfromRat ww2_a5v8 ww3_a5v9
}
}

 ...which is also equivalent to fromRational . toRational however with
 the type class inlined!  Oh, god...

 Cheers,

 --
 Felipe.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




 --
 Eugene Kirpichov
 Principal Engineer, Mirantis Inc. http://www.mirantis.com/
 Editor, http://fprog.ru/




-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Ridiculously slow FFI, or cairo binding?

2011-11-02 Thread Eugene Kirpichov
Hi,

No, I didn't, as I read in the GHC docs that it is deprecated in favor of
the RULES pragma (I wanted to replace specifically with floatToDouble and
doubleToFloat).

On Wed, Nov 2, 2011 at 5:24 PM, Jean-Marie Gaillourdet
j...@gaillourdet.netwrote:

 Hi Eugene,

 did you try using the SPECIALIZE pragma? It is part of the Haskell 98 and
 Haskell 2010 specifications.

 On 02.11.2011, at 12:14, Eugene Kirpichov wrote:

  Yay!!!
 
  I made a small change in Types.chs and got my original
 cairo-binding-based program to be just as blazing fast. The only problem I
 have with this is that I used multiparameter type classes.
 
  Dear gtk2hs team! Is it possible to incorporate my changes? I'm pretty
 sure people will be happy by an order-of-magnitude speedup. Probably the
 stuff could be wrapped in #define's for those who aren't using GHC and
 can't use multiparameter type classes?
 
  I am pretty sure I could have done the same with rewrite rules, but I
 tried for a while and to no avail.
 
  FAILED SOLUTION: rewrite rules
  cFloatConv :: (RealFloat a, RealFloat b) = a - b
  cFloatConv  = realToFrac
  {-# NOINLINE cFloatConv #-}
  {-# RULES cFloatConv/float2Double cFloatConv = float2Double #-}
  {-# RULES cFloatConv/double2Float cFloatConv = double2Float #-}
  {-# RULES cFloatConv/self cFloatConv = id   #-}


 See [1] in GHC User Guide.

 cFloatConv :: (RealFloat a, RealFloat b) => a -> b
 cFloatConv = realToFrac -- or try fromRational . toRational

 {-# SPECIALIZE cFloatConv :: Float -> Double #-}
 {-# SPECIALIZE cFloatConv :: Double -> Float #-}

 I did not try to compile or even benchmark this code. But I think it might
 help in your case.

 Cheers,
  Jean

 [1]:
 http://www.haskell.org/ghc/docs/latest/html/users_guide/pragmas.html#specialize-pragma




-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Ridiculously slow FFI, or cairo binding?

2011-11-02 Thread Eugene Kirpichov
Heh.

Guess what!
A simple {-# INLINE cFloatConv #-} helped to the same extent!

Axel, I think this change should be pretty easy to incorporate, and it
probably makes sense to inline all other functions in Types.chs too.
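(Roughly, the change amounts to this in Types.chs; a sketch only, since the real module has several such helpers, e.g. peekFloatConv and withFloatConv, that would get the same pragma:)

cFloatConv :: (RealFloat a, RealFloat b) => a -> b
cFloatConv = realToFrac
{-# INLINE cFloatConv #-}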

Would you like me to send the trivial darcs patch or the gtk2hs team
will take care of this?

On Wed, Nov 2, 2011 at 7:29 PM, Felipe Almeida Lessa
felipe.le...@gmail.com wrote:
 On Wed, Nov 2, 2011 at 11:24 AM, Jean-Marie Gaillourdet
 j...@gaillourdet.net wrote:
 Hi Eugene,

 did you try using the SPECIALIZE pragma? It is part of the Haskell 98 and 
 Haskell 2010 specifications.

 I don't think it's going to make any difference, as the core already
 has a specialized (but poor) version.  See my first e-mail.

 --
 Felipe.




-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Fwd: Ridiculously slow FFI, or cairo binding?

2011-11-02 Thread Eugene Kirpichov
Thanks! I'll definitely consider your library in the future, but for now,
as we can see, there's no necessity in rewriting cFloatConv at all - {-#
INLINE #-} suffices :)

On Thu, Nov 3, 2011 at 3:30 AM, wren ng thornton w...@freegeek.org wrote:

 On 11/2/11 7:14 AM, Eugene Kirpichov wrote:

 I rewrote cFloatConv like this:

 import GHC.Float
  class (RealFloat a, RealFloat b) => CFloatConv a b where
    cFloatConv :: a -> b
   cFloatConv = realToFrac

 instance CFloatConv Double Double where cFloatConv = id
 instance CFloatConv Double CDouble
 instance CFloatConv CDouble Double
 instance CFloatConv Float Float where cFloatConv = id
 instance CFloatConv Float Double where cFloatConv = float2Double
 instance CFloatConv Double Float where cFloatConv = double2Float


  If you're going the MPTC route, I suggest you use
  logfloat:Data.Number.RealToFrac [1].
 I don't have the CDouble and CFloat instances, but I could add them. The
 instances themselves are only moderately more clever than yours ---namely
 using CPP for portability to non-GHC compilers--- but I think it's good for
 people to rally around one implementation of the solution instead of having
 a bunch of copies of the same thing, each poorly maintained because of the
 distributedness.


  [1] http://hackage.haskell.org/packages/archive/logfloat/0.12.1/doc/html/Data-Number-RealToFrac.html

 --
 Live well,
 ~wren


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] lazy A-star search

2011-10-31 Thread Eugene Kirpichov
Anton, I think the mapM inside searchBy is incorrect. You're threading state 
between exploration of different branches, which you I think shouldn't be doing.



30.10.2011, в 19:44, Anton Kholomiov anton.kholom...@gmail.com написал(а):

 I'm misunderstanding astar. I've thought that 'whole route'-heuristic 
 will prevent algorithm from going in circles. The more you circle around
 the more the whole route distance is. Thank you for showing this. 
 Here is an updated version. searchBy function contains a state.
 State value accumulates visited nodes:
 
 -- | Heuristic search. Nodes are visited from smaller to greater.
  searchBy :: Ord b => (a -> b) -> (a -> a -> Ordering) -> Tree a -> [a]
  searchBy f heur t = evalState (searchBy' f heur t) S.empty
  
  searchBy' :: Ord b 
      => (a -> b) -> (a -> a -> Ordering) -> Tree a -> State (S.Set b) [a]
  searchBy' f heur (Node v ts) = get >>= phi
      where phi visited
              | S.member (f v) visited = return []
              | otherwise  = 
                  put (S.insert (f v) visited) >>
                  (v :) . foldr (mergeBy heur) [] <$>
                  mapM (searchBy' f heur) ts
 
 I need to add function Ord b = (a - b). It
 converts tree nodes into visited nodes. I'm using it 
 for saving distance-values alongside with nodes
 in astar algorithm.
 
 In attachment you can find algorithm with your example. 
 
 2011/10/27 Ryan Ingram ryani.s...@gmail.com
 Also, this wasn't clear in my message, but the edges in the graph only go one 
 way; towards the top/right; otherwise the best path is ABCDEHIJ :)
 
 
 On Thu, Oct 27, 2011 at 10:48 AM, Ryan Ingram ryani.s...@gmail.com wrote:
 You're missing one of the key insights from A-star (and simple djikstra, for 
 that matter): once you visit a node, you don't have to visit it again.
 
 Consider a 5x2 2d graph with these edge costs:
 
 B 1 C 1 D 1 E 9 J
 1   1   1   1   1
 A 2 F 2 G 2 H 2 I
 
 with the start node being A, the target node being J, and the heuristic being 
 manhattan distance.  Your search will always try to take the top route, on 
 every node along the bottom path, even though you visit every node along the 
 top route in your first try at reaching the goal.  You need a way to mark 
 that a node is visited and remove it from future consideration, or else 
 you're wasting work.
 
 A-star will visit the nodes in the order ABCDE FGHIJ; your algorithm visits 
 the nodes in the order ABCDE FCDE GDE HE IJ.
 
   -- ryan
 
 On Sat, Oct 22, 2011 at 5:28 AM, Anton Kholomiov anton.kholom...@gmail.com 
 wrote:
 Recently I was looking for an A-star search algorithm. I've found a package 
 but I couldn't understand the code. Then I saw some blogposts but they
  were difficult to understand too. I thought about some easier solution that
 relies on laziness. And I've come to this:
 
 Heuristic search is like depth-first search but solutions in sub-trees 
 are concatenated with mergeBy function, that concatenates two 
 list by specific order:
 
 module Search where
 
 import Control.Applicative
 import Data.Function(on)
 import Control.Arrow(second)
 import Data.Tree
 
 -- | Heuristic search. Nodes are visited from smaller to greater.
  searchBy :: (a -> a -> Ordering) -> Tree a -> [a]
  searchBy heur (Node v ts) = 
      v : foldr (mergeBy heur) [] (searchBy heur <$> ts)
 
 -- | Merge two lists. Elements concatenated in specified order.
  mergeBy :: (a -> a -> Ordering) -> [a] -> [a] -> [a]
  mergeBy _ a  []     = a
  mergeBy _ [] b      = b
  mergeBy p (a:as) (b:bs)
      | a `p` b == LT = a : mergeBy p as (b:bs)
      | otherwise     = b : mergeBy p bs (a:as)
 
 
 Now we can define specific heuristic search in terms of searchBy:
 
 -- | Heuristic is distance to goal.
  bestFirst :: Ord h => (a -> h) -> (a -> [a]) -> a -> [a]
  bestFirst dist alts = 
      searchBy (compare `on` dist) . unfoldTree (\a -> (a, alts a))
 
 -- | A-star search.
 -- Heuristic is estimated length of whole path. 
  astar :: (Ord h, Num h) => (a -> h) -> (a -> [(a, h)]) -> a -> [a]
  astar dist alts s0 = fmap fst $ 
      searchBy (compare `on` astarDist) $ unfoldTree gen (s0, 0)
    where astarDist (a, d) = dist a + d
          gen (a, d)  = d `seq` ((a, d), second (+d) <$> alts a)
 
 I'm wondering is it effective enough?
 
 
 Anton
 
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe
 
 
 
 
 Search.hs
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] When does the UNPACK pragma work?

2011-10-28 Thread Eugene Kirpichov
Hi,

I'm trying to speed up Data.Time (the time package) 'cause my program is
spending a very substantial fraction of time manipulating dates.

I decided to start with strictifying and unpacking the date/time types, as
there's no point in having them be lazy and I indeed had a lot of allocation
in date/time arithmetic in my performance profile.

data UTCTime = UTCTime {
utctDay :: {-# UNPACK #-} !Day,
utctDayTime :: {-# UNPACK #-} !DiffTime
}

(unpacks and strictness mine)

newtype Day = ModifiedJulianDay {toModifiedJulianDay :: Integer}
newtype DiffTime = MkDiffTime Pico

And Pico is also essentially a newtype for Integer.

So, I'm getting warnings on this definition of UTCTime.
QUESTION: Is it the case that I can only UNPACK primitive fields, and not
even their newtypes?

Data\Time\Clock\UTC.hs:30:16:
Warning: Ignoring unusable UNPACK pragma on the
 first argument of `UTCTime'
In the definition of data constructor `UTCTime'
In the data type declaration for `UTCTime'

Data\Time\Clock\UTC.hs:30:16:
Warning: Ignoring unusable UNPACK pragma on the
 second argument of `UTCTime'
In the definition of data constructor `UTCTime'
In the data type declaration for `UTCTime'



-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] When does the UNPACK pragma work?

2011-10-28 Thread Eugene Kirpichov
Oh, I see, thanks!

So then, I guess, the solution would be to use a fixed-precision integer
type instead.
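(A sketch of what that might look like; my own illustration with made-up field representations, not what the time package actually does. With single-constructor, fixed-size fields the UNPACK pragmas stop being ignored.)

import Data.Int (Int64)

data UTCTime' = UTCTime'
  { utctDay'     :: {-# UNPACK #-} !Int64  -- modified Julian day
  , utctDayTime' :: {-# UNPACK #-} !Int64  -- picoseconds since midnight
  }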

On Fri, Oct 28, 2011 at 1:54 PM, Daniel Fischer 
daniel.is.fisc...@googlemail.com wrote:

 On Friday 28 October 2011, 11:41:15, Eugene Kirpichov wrote:
  newtype Day = ModifiedJulianDay {toModifiedJulianDay :: Integer}
  newtype DiffTime = MkDiffTime Pico
 
  And Pico is also essentially a newtype for Integer.
 
  So, I'm getting warnings on this definition of UTCTime.
  QUESTION: Is it the case that I can only UNPACK primitive fields, and
  not even their newtypes?

 The problem is that you can't {-# UNPACK #-} Integer.
 You can only unpack single-constructor types.




-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] When does the UNPACK pragma work?

2011-10-28 Thread Eugene Kirpichov
Another question: Can I unpack some fields in a record and not unpack
others? Does their order matter then?
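(For what it's worth, mixing is fine; a small made-up example, not from the time package:)

data Sample = Sample
  { fixed :: {-# UNPACK #-} !Int   -- unboxed right into the constructor
  , label :: String                -- stays an ordinary pointer field
  }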

On Fri, Oct 28, 2011 at 1:57 PM, Eugene Kirpichov ekirpic...@gmail.comwrote:

 Oh, I see, thanks!

 So then, I guess, the solution would be to use a fixed-precision integer
 type instead.


 On Fri, Oct 28, 2011 at 1:54 PM, Daniel Fischer 
 daniel.is.fisc...@googlemail.com wrote:

 On Friday 28 October 2011, 11:41:15, Eugene Kirpichov wrote:
  newtype Day = ModifiedJulianDay {toModifiedJulianDay :: Integer}
  newtype DiffTime = MkDiffTime Pico
 
  And Pico is also essentially a newtype for Integer.
 
  So, I'm getting warnings on this definition of UTCTime.
  QUESTION: Is it the case that I can only UNPACK primitive fields, and
  not even their newtypes?

 The problem is that you can't {-# UNPACK #-} Integer.
 You can only unpack single-constructor types.




 --
 Eugene Kirpichov
 Principal Engineer, Mirantis Inc. http://www.mirantis.com/
 Editor, http://fprog.ru/




-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] When does the UNPACK pragma work?

2011-10-28 Thread Eugene Kirpichov
On Fri, Oct 28, 2011 at 2:32 PM, Daniel Fischer 
daniel.is.fisc...@googlemail.com wrote:

 On Friday 28 October 2011, 11:57:54, Eugene Kirpichov wrote:
  Another question: Can I unpack some fields in a record and not unpack
  others?

 Yes, no problem with that.

  Does their order matter then?

 In what way? The order of the fields in the definition of the type will
 determine the order in which the components (pointers/unpacked values)
 appear in the constructor. That may influence performance, but I wouldn't
 expect a significant difference.


Oh, OK. I was just thinking, maybe, you can only unpack a prefix of the
field set...


-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Cross-compilation x64 - x86 easier under Ubuntu 11.10 with multiarch?

2011-10-13 Thread Eugene Kirpichov
Hi cafe,

Ubuntu 11.10 has been released, which includes support for multiarch
https://wiki.ubuntu.com/MultiarchSpec

Am I right that this will facilitate compiling Haskell code into x86
binaries on an x64 machine?

-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Packaged my log visualization tools

2011-09-16 Thread Eugene Kirpichov
Hello cafe,

I've packaged my log visualization tools in binary form for several
platforms. Perhaps those of you who have better things to do than
struggle installing them from source will find this useful :)

Here you go:

http://jkff.info/presentations/two-visualization-tools.pdf Documentation

http://jkff.info/software/timeplotters-0.1-1_win32.zip Windows (any
architecture)
http://jkff.info/software/timeplotters-0.1-1_i386.deb Debian (i386)
http://jkff.info/software/timeplotters-0.1-1_amd64.deb Debian (amd64)
http://jkff.info/software/timeplotters-0.1-1_i386.tar.gz *nix binaries
(i386), require gtk2.0, gmp3, libc6.
http://jkff.info/software/timeplotters-0.1-1_amd64.tar.gz *nix
binaries (amd64), require gtk2.0, gmp3, libc6.

-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] regex-applicative library needs your help! (algorithmic challenge)

2011-09-13 Thread Eugene Kirpichov
Hi,
I don't see how falling back to NFA simulation is really a failure wrt DoS
attacks. It's not exponential in time or memory, just linear memory
(in the size of the regex) instead of constant, and slower than a DFA.

On Wed, Sep 14, 2011 at 1:29 AM, Roman Cheplyaka r...@ro-che.info wrote:
 * Chris Kuklewicz hask...@list.mightyreason.com [2011-09-13 08:21:55+0100]
 I wrote regex-tdfa, the efficient (though not yet lightning fast) Posix-like 
 engine.

 You are not the first to want an efficient Perl-like engine.  The answer you
 seek flows from Russ Cox (though in C++):

  http://google-opensource.blogspot.com/2010/03/re2-principled-approach-to-regular.html

  http://code.google.com/p/re2/

 Quoting relevant bit:

  It also finds the leftmost-first match, the same match that Perl would, and
  can return submatch information. The one significant exception is that RE2
  drops support for backreferences¹ and generalized zero-width assertions,
  because they cannot be implemented efficiently.

 Hi Chris, thanks for the response.

 There's one thing about Cox's work that I don't understand. On the
 page [1] he mentions that the algorithm falls back to direct NFA
 simulation (search for If all else fails on that page).
 This defeats his goal of defending against DoS attacks (the second
 paragraph of that page).

 And I wouldn't be comfortable with an algorithm that is worst-case
 exponential either.

 Then there's another issue: I specifically want a combinator library,
 and not every automaton-based algorithm can serve this purpose in a
 statically typed language (unless there's some clever trick I don't
 know).

 [1]: http://swtch.com/~rsc/regexp/regexp3.html

 --
 Roman I. Cheplyaka :: http://ro-che.info/

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] alex 3.0 broken with Data.ByteString.Lazy - w2c conversion missing (fix attached)

2011-08-09 Thread Eugene Kirpichov
Hi Simon,

I found a bug in alex-3.0 and I'm attaching a fixed source file -
templates/wrappers.hs (modified against alex-3.0 from cabal unpack).


Explanation:
I was installing bytestring-lexing 0.2.1 and it failed to install with
alex 3.0, which was released on Aug 4.
It succeeded, however, with the most recent earlier alex 2.3.5.

Turns out that the wrapper for lazy bytestrings (strict are ok) had
alexGetByte not doing w2c conversion:
alexGetByte (_, cs) | ByteString.null cs = Nothing
| otherwise  = Just (ByteString.head cs,
(ByteString.head cs, ByteString.tail cs))

It should say ByteString.w2c $ ByteString.head cs here.
With this and an additional import of Data.ByteString.Internal (w2c),
the generated code compiles.
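(So the corrected clause reads roughly as follows; a sketch based on the description above, assuming the surrounding definitions in templates/wrappers.hs:)

import Data.ByteString.Internal (w2c)

alexGetByte (_, cs) | ByteString.null cs = Nothing
                    | otherwise  = Just (w2c (ByteString.head cs),
                                         (ByteString.head cs, ByteString.tail cs))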

To reproduce, cabal unpack bytestring-lexing and try installing it.
Installation will fail; then manually run alex on the .x files in
Data/ByteString/Lex{,/Lazy} and try loading them in ghci, then make the
modification I described and load them again.



-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/


wrappers.hs
Description: Binary data
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Analyzing slow performance of a Haskell program

2011-08-07 Thread Eugene Kirpichov
What about using unsafe array indexing operations? (i.e. array `unsafeAt` index)
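(A small illustration of what I mean, my own example rather than code from the thread; unsafeAt skips the bounds check that (!) performs on every lookup. The table values are the lenTens figures quoted below.)

import Data.Array.Base (unsafeAt)
import Data.Array.Unboxed (UArray, listArray)

lenTens :: UArray Int Int
lenTens = listArray (0, 9) [0, 3, 6, 6, 5, 5, 5, 7, 6, 6]

lookupLenTens :: Int -> Int
lookupLenTens t = lenTens `unsafeAt` t  -- caller must guarantee 0 <= t <= 9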

2011/8/7 Chris Yuen kizzx2+hask...@gmail.com:
 Here is an updated version using Data.Array.Unboxed  http://ideone.com/YXuVL
 And the profile http://hpaste.org/49940

 Still taking 5+ minutes...

 Chris

 On Sun, Aug 7, 2011 at 5:20 PM, Daniel Fischer
 daniel.is.fisc...@googlemail.com wrote:

 On Sunday 07 August 2011, 10:52:20, Max Bolingbroke wrote:
  In short I don't see how to get further without changing the algorithm
  or doing some hacks like manual unrolling. Maybe someone else has some
  ideas?

 Well, the C# implementation uses arrays for lookup while the Haskell
 version uses list lookups

                      in (tens !! fromIntegral t) ++ wordify x

 and case'd functions

 lenTens 0 = 0
 lenTens 1 = 3
 lenTens 2 = 6
 lenTens 3 = 6
 lenTens 4 = 5
 lenTens 5 = 5
 lenTens 6 = 5
 lenTens 7 = 7
 lenTens 8 = 6
 lenTens 9 = 6

 wordify is only called once at the end, so that should not have a
 measurable impact, but the lenXXXs might.
 I'm not sure what

 CaseLen.$wlenTens :: GHC.Prim.Int# -> GHC.Prim.Int#
 [GblId,
  Arity=1,
  Str=DmdType L,
  Unf=Unf{Src=vanilla, TopLvl=True, Arity=1, Value=True,
         ConLike=True, Cheap=True, Expandable=True,
         Guidance=IF_ARGS [12] 11 0}]
 CaseLen.$wlenTens =
   \ (ww_shY :: GHC.Prim.Int#) ->
     case ww_shY of _ {
       __DEFAULT ->
         CaseLen.lenTens1
         `cast` (CoUnsafe GHC.Types.Int GHC.Prim.Int#
                 :: GHC.Types.Int ~ GHC.Prim.Int#);
       0 -> 0;
       1 -> 3;
       2 -> 6;
       3 -> 6;
       4 -> 5;
       5 -> 5;
       6 -> 5;
       7 -> 7;
       8 -> 6;
       9 -> 6
    }

 means at a lower level, but it's certainly worth trying out whether an
 unboxed array lookup is faster.


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe





-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell fgl example

2011-07-19 Thread Eugene Kirpichov
I've got a small example here
https://github.com/jkff/minxmod/blob/master/Buchi.hs .

2011/7/19 Rohit Agrawalla rohit.agrawa...@gmail.com:
 I am new to haskell and the haskell fgl (functional graph library). l am
 looking for some examples of the haskell fgl. Would appreciate any pointers
 regarding the same.


 --
 Rohit
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe





-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] pointer equality

2011-07-19 Thread Eugene Kirpichov
reallyUnsafePointerEq#, and it really is as unsafe as it sounds :)



20.07.2011, в 7:51, Nikhil A. Patil patil.nik...@gmail.com написал(а):

 Hi,
 
 Is there any way of getting the following code to immediately return
 True without performing the element-by-element comparison? Essentially
 this boils down to checking whether pointers are equal before
 comparing the contents.
 
 main = print $ f == f
  where f = [1..10^9]
 
 Thanks!!
 
 nikhil
 
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Patterns for processing large but finite streams

2011-07-01 Thread Eugene Kirpichov
Plain old lazy lists do not allow me to combine multiple concurrent
computations, e.g. I cannot define average from sum and length.

2011/7/1 Heinrich Apfelmus apfel...@quantentunnel.de:
 Eugene Kirpichov wrote:

 I'm rewriting timeplot to avoid holding the whole input in memory, and
 naturally a problem arises:

 How to represent large but finite streams and functions that process
 them, returning other streams or some kinds of aggregate values?

 Examples:
 * Adjacent differences of a stream of numbers
 * Given a stream of numbers with times, split it into buckets by time
 of given width and produce a stream of (bucket, 50%,75% and 90%
 quantiles in this bucket)
 * Sum a stream of numbers

 Is this, perhaps, what comonads are for? Or iteratees?

 Plain old lazy lists?


 Best regards,
 Heinrich Apfelmus

 --
 http://apfelmus.nfshost.com


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Patterns for processing large but finite streams

2011-07-01 Thread Eugene Kirpichov
I meant the average of the whole list - given a sumS and lengthS (S
for Stream), write meanS as something like liftS2 (/) sumS lengthS.

Or is that possible with lazy lists too?

(looks like arrows actually - which arrow is appropriate here?)

2011/7/1 Malcolm Wallace malcolm.wall...@me.com:
 Sure you can.

  runningAverage :: Int -> [Double] -> [Double]
  runningAverage n xs = let chunk = take n xs
                        in (sum chunk / fromIntegral (length chunk)) : runningAverage n (tail xs)

 Lazy lists are absolutely ideal for this purpose.
 Regards,
    Malcolm

 On 1 Jul 2011, at 07:33, Eugene Kirpichov wrote:

 Plain old lazy lists do not allow me to combine multiple concurrent
 computations, e.g. I cannot define average from sum and length.

 2011/7/1 Heinrich Apfelmus apfel...@quantentunnel.de:
 Eugene Kirpichov wrote:

 I'm rewriting timeplot to avoid holding the whole input in memory, and
 naturally a problem arises:

 How to represent large but finite streams and functions that process
 them, returning other streams or some kinds of aggregate values?

 Examples:
 * Adjacent differences of a stream of numbers
 * Given a stream of numbers with times, split it into buckets by time
 of given width and produce a stream of (bucket, 50%,75% and 90%
 quantiles in this bucket)
 * Sum a stream of numbers

 Is this, perhaps, what comonads are for? Or iteratees?

 Plain old lazy lists?


 Best regards,
 Heinrich Apfelmus

 --
 http://apfelmus.nfshost.com


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




 --
 Eugene Kirpichov
 Principal Engineer, Mirantis Inc. http://www.mirantis.com/
 Editor, http://fprog.ru/

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe





-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Patterns for processing large but finite streams

2011-07-01 Thread Eugene Kirpichov
Alexey, your definition of mean does not look like liftS2 (/) sum
length - you have to manually fuse these computations.

I'm asking for a formalism that does this fusion automatically (and
guaranteedly).

2011/7/1 Alexey Khudyakov alexey.sklad...@gmail.com:
 On Fri, Jul 1, 2011 at 12:21 PM, Eugene Kirpichov ekirpic...@gmail.com 
 wrote:
 I meant the average of the whole list - given a sumS and lengthS (S
 for Stream), write meanS as something like liftS2 (/) sumS lengthS.

 Or is that possible with lazy lists too?

  Sure you can. Sum, length and mean could be calculated as a left
  fold. If you need to calculate more than one statistic at a time you
  can combine accumulators

  sum = foldl (+) 0
  length = foldl (\n _ -> n+1) 0
  data Mean = Mean Double Int
  mean = foldl (\(Mean m n) x -> Mean (m + (x - m) / fromIntegral (n+1)) 
  (n+1)) (Mean 0 0)

 AFAIU iteratees basically use same technique.




-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Patterns for processing large but finite streams

2011-07-01 Thread Eugene Kirpichov
Thanks but I'm afraid that's still not quite what I'm looking for;
guess I'll have to define my desire by my implementation - so once
it's ready I'll show the result to cafe :)

2011/7/1 Alexey Khudyakov alexey.sklad...@gmail.com:
 On Fri, Jul 1, 2011 at 12:54 PM, Eugene Kirpichov ekirpic...@gmail.com 
 wrote:
 Alexey, your definition of mean does not look like liftS2 (/) sum
 length - you have to manually fuse these computations.

 Well it was fused for numerical stability

 I'm asking for a formalism that does this fusion automatically (and
 guaranteedly).

 Joining accumulators is quite straightforward. So is joining of initial
 state. Just creating a
  joinAcc :: (acc1 -> x -> acc1) -> (acc2 -> x -> acc2) -> (acc1,acc2) -> x ->
  (acc1,acc2)
  joinAcc f1 f2 (s1,s2) x = (f1 s1 x, f2 s2 x)

 Still you have to handle them separately.
 sum' = foldl (+) 0
  len  = foldl (\n _ -> n+1) 0
  sumLen = foldl (joinAcc (+) (\n _ -> n+1)) (0,0)
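 (A usage sketch of the fused fold above, my own addition: both statistics in
 one pass, then the mean recovered at the end.)

 meanViaSumLen :: [Double] -> Double
 meanViaSumLen xs =
   let (s, n) = foldl (joinAcc (+) (\k _ -> k + 1)) (0, 0 :: Int) xs
   in s / fromIntegral n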

 There is more regular approach but it only works with statistics.
 (function which do not depend on order of elements in the sample)
 For every statistics monoid for its evaluation could be constructed.
 For example sum:
 newtype Sum a = Sum a
  instance Num a => Monoid (Sum a) where
   mempty = Sum 0
   mappend (Sum a) (Sum b) = Sum (a+b)

 Composition of these monoids becomes trivial. Just use


 I pursued this approach in monoid-statistics[1] package.
 It's reasonably well documented

  [1] http://hackage.haskell.org/package/monoid-statistics




-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Patterns for processing large but finite streams

2011-07-01 Thread Eugene Kirpichov
Hi,

You're right, reifying stream processing functions seems indeed the
way to go - and that looks even more like arrows :)

I thought of something like this:

data SP i o = Yield [o] (Maybe (Maybe i -> SP i o))

Scalar functions like sum and length are just SP's that return a
single item in the output stream.

sum :: (Num a) => SP a a
sum = sum' 0 where sum' s = Yield [] $ Just $ maybe (Yield [s]
Nothing) (sum' . (s+))

Adjacent differences would be like liftA2 (-) input laggedInput

laggedInput would be like:

laggedInput :: SP i i
laggedInput = li Nothing
  where
    li maybePrev = Yield (maybe2list maybePrev) $ Just $ maybe empty (li . Just)

Looks like this can be made into an instance of Arrow and can be composed etc.
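(One possible composition, as a sketch of that idea; my own code with made-up names, feeding each input element to two processors and tagging their outputs:)

both :: SP i a -> SP i b -> SP i (Either a b)
both (Yield as ka) (Yield bs kb) =
    Yield (map Left as ++ map Right bs) cont
  where
    cont = case (ka, kb) of
             (Nothing, Nothing) -> Nothing
             _ -> Just (\mi -> both (step ka mi) (step kb mi))
    -- a finished processor keeps yielding nothing
    step Nothing  _  = Yield [] Nothing
    step (Just k) mi = k mi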

2011/7/1 Heinrich Apfelmus apfel...@quantentunnel.de:
 Eugene Kirpichov wrote:

 Plain old lazy lists do not allow me to combine multiple concurrent
 computations, e.g. I cannot define average from sum and length.

 I meant the average of the whole list - given a sumS and lengthS (S
 for Stream), write meanS as something like liftS2 (/) sumS lengthS.

 Or is that possible with lazy lists too?

 (looks like arrows actually - which arrow is appropriate here?)

 That's a very good point. Just to clarify for everyone: Eugene wants to
 write the function  average  almost *literally* as

   average xs = sum xs / length xs

 but he wants the functions  sum  and  length  to fuse, so that the input
 stream  xs  is *not* shared as a whole.


 I have thought about this problem for a while actually and have observed the
 following:

 1) You are not looking for a representation of streams, but for a
 representation of *functions* on streams. The essence of a function on
 streams is its case analysis of the input. Hence, the simplest solution is
 to make the case analysis explicit:

   data StringTo a = CaseOf a (Char -> StringTo a)

   -- function on a stream (here: String)
   interpret :: StringTo a -> (String -> a)
   interpret (CaseOf nil cons) []     = nil
   interpret (CaseOf nil cons) (x:xs) = interpret (cons x) xs

   instance Applicative StringTo where
       pure a = CaseOf a (const $ pure a)
       (CaseOf nil1 cons1) <*> (CaseOf nil2 cons2) =
           CaseOf (nil1 $ nil2) (\c -> cons1 c <*> cons2 c)

   length = go 0 where go n = CaseOf n (\_ -> go $! n+1)

   average = liftA2 (/) sum length

 In other words, if you reify the case .. of expressions, you will be able to
 fuse them.

 2) If Haskell were to support some kind of evaluation under the lambda
 (partial evaluation, head normal form instead of weak head normal form), it
 would be unnecessary to make the case expressions explicit. Rather, the
 applicative instance could be written as follows

   instance Applicative ((->) String) where
       pure a  = const a
       f <*> x = \cs -> case cs of
          []     -> f [] $ x []
          (c:cs) ->
               let f' cs = f (c:cs) -- partial evaluation on this
                   x' cs = x (c:cs)
               in f' `partialseq` x' `partialseq` (f' <*> x') cs

 We could simply write

    average = liftA2 (/) sum length

 and everything would magically fuse.

 3) John Hughes has already thought about this problem in his PhD thesis. :)
 (but it is not available for download on the internet, unfortunately. :( ).
 His solution was a SYNCHLIST primitive in conjunction with some sort of
 parallelism PAR. Basically, the SYNCHLIST primitive only allows simultaneous
 access to the input stream and the parallelism is used to make that
 simultaneity happen.


 Best regards,
 Heinrich Apfelmus

 --
 http://apfelmus.nfshost.com


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Patterns for processing large but finite streams

2011-06-30 Thread Eugene Kirpichov
Hi,

I'm rewriting timeplot to avoid holding the whole input in memory, and
naturally a problem arises:

How to represent large but finite streams and functions that process
them, returning other streams or some kinds of aggregate values?

Examples:
* Adjacent differences of a stream of numbers
* Given a stream of numbers with times, split it into buckets by time
of given width and produce a stream of (bucket, 50%,75% and 90%
quantiles in this bucket)
* Sum a stream of numbers

Is this, perhaps, what comonads are for? Or iteratees?

-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Conditional IO ?

2011-06-20 Thread Eugene Kirpichov
| otherwise = return ()

or:

handleParseErrors errors = when (not . null $ errors) $ writeFile .
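(Spelled out as a complete definition, with the file name from the original question; the Show constraint is only there to make the sketch self-contained:)

import Control.Monad (when)

handleParseErrors :: Show e => [e] -> IO ()
handleParseErrors errors =
  when (not . null $ errors) $
    writeFile "parse-errors.txt" (show errors)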

2011/6/20 Dmitri O.Kondratiev doko...@gmail.com:
 Hi,
 What is right way to do conditional IO?
 For example, I need to write to file errors only in case they exist,
 otherwise my function should do nothing:

 handleParseErrors errors
   | (not . null) errors =  writeFile "parse-errors.txt" (show errors)
   | otherwise = ?

 What should be an 'otherwise' case ?

 Thanks!

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe





-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Installing Gtk2Hs on Win32

2011-06-10 Thread Eugene Kirpichov
Hi,

http://www.gtk.org/download/win32.php
http://www.gtk.org/download/win64.php

search for the "all-in-one" bundle.

2011/6/10 Dmitri O.Kondratiev doko...@gmail.com:
 Please help to find pre-compiled binary libraries of Gtk+ for Win32.

 Gtk2Hs insatll instructions at:
 http://code.haskell.org/gtk2hs/INSTALL

 tells us:

 [quote]
 Building on Windows
 

 Installation on Windows is nearly as easy as on Unix platforms. However, you
 need to download the pre-compiled binary libraries of Gtk+ and all it's
 dependent libraries. Point your browser to

 http://www.gtk.org/download-windows.html

 and download one of the All-in-one bundles. Note that you do *not* need to
 install MinGW nor MSys (but it does not hurt if they are installed on your
 system). Install the binaries by unpacking them into a directory without
 spaces.
 [/quote]

 This link (http://www.gtk.org/download-windows.html) is dead.
 Where to get pre-compiled binary libraries of Gtk+ ?

 Thanks!

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe





-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Errors installing 'Haskell Charts'

2011-06-10 Thread Eugene Kirpichov
Hi,

You forgot to install gtk2hs-buildtools; it includes pkg-config.

2011/6/10 Dmitri O.Kondratiev doko...@gmail.com:
 I am trying to install HaskellCharts at:
 http://dockerz.net/twd/HaskellCharts

 Running:


     cabal update
     cabal install gtk2hs-buildtools
     cabal install gtk
     cabal install chart

 goes well untill 'cabal install chart' which results in:

 C:\wks\RCAcabal install chart
 Resolving dependencies...
 C:\Users\DKONDR~1\AppData\Local\Temp\cairo-0.12.03136\cairo-0.12.0\Gtk2HsSetup.h
 s:25:2: warning: #warning Setup.hs is guessing the version of Cabal. If
 compilat
 ion of Setup.hs fails use -DCABAL_VERSION_MINOR=x for Cabal version 1.x.0
 when b
 uilding (prefixed by --ghc-option= when using the 'cabal' command)
 [1 of 2] Compiling Gtk2HsSetup  (
 C:\Users\DKONDR~1\AppData\Local\Temp\cairo
 -0.12.03136\cairo-0.12.0\Gtk2HsSetup.hs,
 C:\Users\DKONDR~1\AppData\Local\Temp\ca
 iro-0.12.03136\cairo-0.12.0\dist\setup\Gtk2HsSetup.o )
 [2 of 2] Compiling Main (
 C:\Users\DKONDR~1\AppData\Local\Temp\cairo
 -0.12.03136\cairo-0.12.0\Setup.hs,
 C:\Users\DKONDR~1\AppData\Local\Temp\cairo-0.
 12.03136\cairo-0.12.0\dist\setup\Main.o )
 Linking
 C:\Users\DKONDR~1\AppData\Local\Temp\cairo-0.12.03136\cairo-0.12.0\dist\
 setup\setup.exe ...
 Configuring cairo-0.12.0...
 setup.exe: The program pkg-config version =0.9.0 is required but it could
 not
 be found.
 C:\Users\DKONDR~1\AppData\Local\Temp\glib-0.12.03136\glib-0.12.0\Gtk2HsSetup.hs:
 25:2: warning: #warning Setup.hs is guessing the version of Cabal. If
 compilatio
 n of Setup.hs fails use -DCABAL_VERSION_MINOR=x for Cabal version 1.x.0 when
 bui
 lding (prefixed by --ghc-option= when using the 'cabal' command)
 [1 of 2] Compiling Gtk2HsSetup  (
 C:\Users\DKONDR~1\AppData\Local\Temp\glib-
 0.12.03136\glib-0.12.0\Gtk2HsSetup.hs,
 C:\Users\DKONDR~1\AppData\Local\Temp\glib
 -0.12.03136\glib-0.12.0\dist\setup\Gtk2HsSetup.o )
 [2 of 2] Compiling Main (
 C:\Users\DKONDR~1\AppData\Local\Temp\glib-
 0.12.03136\glib-0.12.0\Setup.hs,
 C:\Users\DKONDR~1\AppData\Local\Temp\glib-0.12.
 03136\glib-0.12.0\dist\setup\Main.o )
 Linking
 C:\Users\DKONDR~1\AppData\Local\Temp\glib-0.12.03136\glib-0.12.0\dist\se
 tup\setup.exe ...
 Configuring glib-0.12.0...
 setup.exe: The program pkg-config version =0.9.0 is required but it could
 not
 be found.
 cabal: Error: some packages failed to install:
 Chart-0.14 depends on glib-0.12.0 which failed to install.
 cairo-0.12.0 failed during the configure step. The exception was:
 ExitFailure 1
 gio-0.12.0 depends on glib-0.12.0 which failed to install.
 glib-0.12.0 failed during the configure step. The exception was:
 ExitFailure 1
 gtk-0.12.0 depends on glib-0.12.0 which failed to install.
 pango-0.12.0 depends on glib-0.12.0 which failed to install.


 Where to get glib-0.12.0 ?

 Thanks!

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe





-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] *GROUP HUG*

2011-05-25 Thread Eugene Kirpichov
Why so? I think it wouldn't.

data FList a = FNil | FCons Int a (FList a)
empty = FNil
len FNil = 0
len (FCons n _ _) = n
cons x xs = FCons (1 + len xs) x xs
tail (FCons _ _ xs) = xs
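
For what it's worth, the same sketch as a compilable module with a quick
check of the O(1) claims (the type signatures, the extra FNil case for tail
and the example are additions; the data type is unchanged):

import Prelude hiding (tail)

data FList a = FNil | FCons Int a (FList a)

len :: FList a -> Int
len FNil          = 0
len (FCons n _ _) = n

cons :: a -> FList a -> FList a
cons x xs = FCons (1 + len xs) x xs

tail :: FList a -> FList a
tail (FCons _ _ xs) = xs
tail FNil           = FNil

main :: IO ()
main = do
    let xs = cons 'a' (cons 'b' (cons 'c' FNil))
    print (len xs)        -- 3: just reads the cached count, O(1)
    print (len (tail xs)) -- 2: tail reuses the stored suffix, still O(1)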

2011/5/24 Tony Morris tonymor...@gmail.com:
 On 24/05/11 22:41, Johannes Waldmann wrote:
 Then tell me, why does calculating the length of a (Haskell)
 list has O(n) complexity.
 Infiniticity aside, tail would become O(n) if you store a length with
 each cons cell.

 --
 Tony Morris
 http://tmorris.net/



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Markov chain algorithm (from The Practice of Programming, Kernighan/Pike)

2011-05-11 Thread Eugene Kirpichov
Because insertWith has a different type than your code needs.
It's not like a -> b -> b, it's a -> a -> a and (:) is not like this.
Try insertWith (++) (or flip (++)) with the new value wrapped in a list,
e.g. ["will"].
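
In full, the corrected call would look something like this (a reconstruction
of the suggestion, reusing the map from the message below):

import qualified Data.Map as M

phrases :: M.Map (String, String) [String]
phrases = M.fromList [(("Joe", "Blow"), ["is"]), (("Moby", "Dick"), ["may"])]

-- (++) has type [a] -> [a] -> [a], which matches the a -> a -> a that
-- insertWith expects, so the newly inserted value must itself be a list.
main :: IO ()
main = print (M.insertWith (++) ("Moby", "Dick") ["will"] phrases)
-- prints: fromList [(("Joe","Blow"),["is"]),(("Moby","Dick"),["will","may"])]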

2011/5/11 michael rice nowg...@yahoo.com

 It's hard to improve on a 20 line Awk program for generating text but I
 thought it would be fun to investigate a Haskell solution.

 Why can't I cons an element onto an existing list?

 Michael

 Prelude Data.List Data.Map> insertWith (:) ("Moby", "Dick") "will"
 (fromList [(("Joe", "Blow"),["is"]), (("Moby", "Dick"),["may"])])

 <interactive>:1:11:
 Occurs check: cannot construct the infinite type: a = [a]
   Expected type: a
   Inferred type: [a]
 In the first argument of `insertWith', namely `(:)'
 In the expression:
 insertWith
   (:)
   ("Moby", "Dick")
   "will"
   (fromList [(("Joe", "Blow"), ["is"]), (("Moby", "Dick"),
 ["may"])])
 Prelude Data.List Data.Map>


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Cairo package broken on Windows, or am I cooking it wrong?

2011-05-10 Thread Eugene Kirpichov
Hello,

After a moderate struggle with getting gtk2hs to install on Windows, I
tried to install Chart and got a linker message saying
_cairo_image_surface_get_data was missing.
After another struggle I figured that if I launch ghci -lcairo-2 then
everything works fine.

I tried patching cairo.cabal by adding this line:

extra-libraries: cairo-2

and reinstalling cairo. This helped, I could install Chart without problems.

My question is:
Should this line be added to cairo.cabal, or should everything have
worked fine in some other magical way, but did not, because of some
other error?
How can I diagnose what's wrong?

How is cabal install generally supposed to find that a dll is
needed, if I don't write it down in the extra-libraries field?

-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] For Project Euler #1 isn't it more efficient to generate just the numbers you need? Spoiler

2011-05-07 Thread Eugene Kirpichov
One doesn't have to touch them to compute their sum.
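
The trick being alluded to is inclusion-exclusion over arithmetic series; a
sketch (this formulation is mine, not quoted from the thread):

-- Sum of the positive multiples of k strictly below n, in O(1):
-- k * (1 + 2 + ... + m) = k * m * (m + 1) / 2, where m = (n - 1) `div` k.
sumMultiplesBelow :: Integer -> Integer -> Integer
sumMultiplesBelow k n = k * m * (m + 1) `div` 2
  where m = (n - 1) `div` k

-- Add the series for 3 and 5, subtract the doubly counted multiples of 15.
sumMultiples3or5 :: Integer -> Integer
sumMultiples3or5 n =
    sumMultiplesBelow 3 n + sumMultiplesBelow 5 n - sumMultiplesBelow 15 n

main :: IO ()
main = print (sumMultiples3or5 1000)   -- 233168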

2011/5/7 KC kc1...@gmail.com:
 Do you mean O(1) complexity?

 Don't you have to touch at least the multiples of 3 and 5, making it O(k)
 where k is the number of multiples of 3 plus the number of multiples of
 5?


 On Fri, May 6, 2011 at 10:10 PM, Lyndon Maydwell maydw...@gmail.com wrote:
 If you're looking for efficiency, I believe you can actually do #1 in
 constant time:


 On Sat, May 7, 2011 at 7:31 AM,  cas...@istar.ca wrote:
 -- Instead of this
 -- sumMultiples3or5 s = sum [x | x <- [3..s-1], x `mod` 3 == 0 || x `mod` 5
 == 0]


 -- Isn't this faster

 sumMultiples3or5 s = sum ([x | x <- [3,6..s-1]] `merge` [x | x <-
 [5,10..s-1]])

 merge xs [] = xs
 merge [] ys = ys
 merge txs@(x:xs) tys@(y:ys)
    | x < y     = x : xs `merge` tys
    | x > y     = y : txs `merge` ys
    | otherwise = x : xs `merge` ys



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




 --
 --
 Regards,
 KC

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: iterIO-0.1 - iteratee-based IO with pipe operators

2011-05-05 Thread Eugene Kirpichov
Sounds just terrific! Thanks!



06.05.2011, в 8:15, David Mazieres dm-list-haskell-c...@scs.stanford.edu 
написал(а):

 Hi, everyone.  I'm pleased to announce the release of a new iteratee
 implementation, iterIO:
 
http://hackage.haskell.org/package/iterIO
 
 IterIO is an attempt to make iteratees easier to use through an
 interface based on pipeline stages reminiscent of Unix command
 pipelines.  Particularly if you've looked at iteratees before and been
 intimidated, please have a look at iterIO to see if it makes them more
 accessible.
 
 Some aspects of iterIO that should simplify learning and using
 iteratees are:
 
    * Every aspect of the library is thoroughly documented in haddock
 including numerous examples of use.
 
   * Enumerators are easy to build out of iteratees.
 
   * There is no difference between enumerators and enumeratees
 (i.e., inner pipeline stages).  The former is just a
 type-restricted version of the latter.
 
   * Parsing combinators provide detailed error reporting and support
 LL(*) rather than LL(1) parsing, leading to fewer non-intuitive
 parsing failures.  A couple of tricks avoid consuming excessive
 memory for backtracking.
 
   * Super-fast LL(1) parsing is also available through seamless
 integration with attoparsec.
 
   * A universal exception mechanism works across invocations of mtl
 monad transformers, thereby unifying error handling.
 
   * All pipe operators have uniform semantics, eliminating corner
 cases.  In particular, if the writing end of a pipe fails, the
 reading end always gets EOF, allowing it to clean up resources.
 
   * One can catch exceptions thrown by any contiguous subset of
 stages in a pipeline.  Moreover, enumerator exception handlers
 can resume downstream stages that haven't failed.
 
   * The package is full of useful iteratees and enumerators,
 including basic file and socket processing, parsec-like
 combinators, string search, zlib/gzip compression, SSL, HTTP, and
 loopback enumerator/iteratee pairs for testing a protocol
 implementation against itself.
 
 Please enjoy.  I'd love to hear feedback.
 
 David
 
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANN: timeplot-0.3.0 - the analyst's swiss army knife for visualizing ad-hoc log files

2011-05-01 Thread Eugene Kirpichov
Hello,

Sorry for the broken link: the correct link to the presentation is:

http://jkff.info/presentations/two-visualization-tools.pdf

2011/4/30 Eugene Kirpichov ekirpic...@gmail.com:
  Hello fellow haskellers,

 I announce the release of timeplot-0.3.0, the analyst's swiss army
 knife for visualizing ad-hoc log files.

 Links:
  * http://jkff.info/presentation/two-visualization-tools - a
 presentation saying what the tools are all about and giving plenty of
 graphical examples on cluster computing use cases. At the end of the
 presentation there's also a couple of slides about installation. It is
 a little bit outdated, it corresponds to versions just before 0.3.0.
  * http://hackage.haskell.org/package/timeplot
  * http://github.com/jkff/timeplot
  * The sibling tool, splot - for visualizing the activity of many
 concurrent processes - http://hackage.haskell.org/package/splot and
 http://github.com/jkff/splot . It has also gotten a couple of new
 features since my last announcement.

 The major new feature of tplot is the introduction of subplots, the
 'within' plots.
 It allows one to plot data from several sub-tracks on one track of the graph:
  - several line- or dot-plots
  - several plots of sums or cumulative sums, perhaps stacked (to see
 how the sub-tracks contribute to the total sum - e.g. if your log
 speaks about different types of overhead and you wish to see how they
 contribute to the total)
  - stacked activity count plot - a generalization of the previous
 activity count plot, which allows you to, given a log saying like
 Machine started servicing job JOB1 ... Machine finished servicing job
 JOB1 etc, plot how many machines are servicing each job at any
 moment, in a stacked fashion - so, how loads by different jobs
 contribute to the whole cluster's load. The activity frequency plot
 plots the same on a relative scale.

 The syntax is, for example: within[.] dots or within[.] acount or
 even within[.] duration cumsum stacked etc.

 Note that these are of course just example use cases and the tool is
 universal, it is not in any sense specialized to clusters, jobs,
 overheads or actually even to logs.
 I'd like to encourage you to give it a try and look around for a use case :)

 If you do give the tool a try, please tell me if something goes wrong,
 be it an installation problem or a bug (the version is fresh released,
 so this is quite possible).

 --
 Eugene Kirpichov
 Principal Engineer, Mirantis Inc. http://www.mirantis.com/
 Editor, http://fprog.ru/




-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] ANN: timeplot-0.3.0 - the analyst's swiss army knife for visualizing ad-hoc log files

2011-04-30 Thread Eugene Kirpichov
 Hello fellow haskellers,

I announce the release of timeplot-0.3.0, the analyst's swiss army
knife for visualizing ad-hoc log files.

Links:
 * http://jkff.info/presentation/two-visualization-tools - a
presentation saying what the tools are all about and giving plenty of
graphical examples on cluster computing use cases. At the end of the
presentation there's also a couple of slides about installation. It is
a little bit outdated, it corresponds to versions just before 0.3.0.
 * http://hackage.haskell.org/package/timeplot
 * http://github.com/jkff/timeplot
 * The sibling tool, splot - for visualizing the activity of many
concurrent processes - http://hackage.haskell.org/package/splot and
http://github.com/jkff/splot . It has also gotten a couple of new
features since my last announcement.

The major new feature of tplot is the introduction of subplots, the
'within' plots.
It allows one to plot data from several sub-tracks on one track of the graph:
 - several line- or dot-plots
 - several plots of sums or cumulative sums, perhaps stacked (to see
how the sub-tracks contribute to the total sum - e.g. if your log
speaks about different types of overhead and you wish to see how they
contribute to the total)
 - stacked activity count plot - a generalization of the previous
activity count plot, which allows you to, given a log saying like
Machine started servicing job JOB1 ... Machine finished servicing job
JOB1 etc, plot how many machines are servicing each job at any
moment, in a stacked fashion - so, how loads by different jobs
contribute to the whole cluster's load. The activity frequency plot
plots the same on a relative scale.

The syntax is, for example: within[.] dots or within[.] acount or
even within[.] duration cumsum stacked etc.

Note that these are of course just example use cases and the tool is
universal, it is not in any sense specialized to clusters, jobs,
overheads or actually even to logs.
I'd like to encourage you to give it a try and look around for a use case :)

If you do give the tool a try, please tell me if something goes wrong,
be it an installation problem or a bug (the version is fresh released,
so this is quite possible).

-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] tplot and splot - analyst's swiss army knifes for visualizing log files

2011-03-18 Thread Eugene Kirpichov
Hello Ferenc,

Thank you for reporting the bug - it's very curious, I will look into
it in the nearest couple of days.

Could you please send me privately the actual input file on which the
program crashes?

I have added a downloadable link to the presentation to all places
where the slideshare link was present (finally making some use of my
hosting...).
Here it is: http://jkff.info/presentations/two-visualization-tools.pdf

2011/3/17 Ferenc Wagner wf...@niif.hu:
 Eugene Kirpichov ekirpic...@gmail.com writes:

 2010/12/17 Henning Thielemann schlepp...@henning-thielemann.de:

 Eugene Kirpichov schrieb:

 I've published a large presentation about two Haskell-based tools of
 mine - tplot and splot.

 Their motto is visualize system behavior from logs with a shell 
 one-liner.
 Based on my experience, they usually seem to live up to this motto.


 http://www.slideshare.net/jkff/two-visualization-tools


 [attention attractor: the presentation has *really a lot* of pictures]

 ... and complete TeX code attached! :-) However can I also view a simple
 PDF document of the presentation?

 You can download the PDF here -
 http://www.slideshare.net/jkff/two-visualization-tools/download
 (however one has to be logged in to Slideshare, for example with a
 facebook acct., for this link to work)

 Just in case, I'm also attaching a PDF of the current version to this
 email, but visiting the link is preferable, since I'll be updating the
 contents.

 Please, if at all possible, link an up-to-date downloadable PDF from the
 documentation (http://hackage.haskell.org/package/timeplot) or from the
 homepage (http://haskell.org/haskellwiki/Timeplot) to make our life
 easier!

 Anyway, your tools look very interesting, I gave tplot a shot.
 Unfortunately, I hit various strange failures:

 $ head -4 or.log
 Mar  8 18:55:11 =overrun 1
 Mar  8 18:55:13 =overrun 6
 Mar  8 18:55:15 =overrun 13
 Mar  8 18:55:16 =overrun 3

 $ wc -l or.log
 466 or.log

 $ ls -l or.log overruns466.log
 lrwxrwxrwx 1 wferi wferi    15 Mar 17 14:45 or.log -> overruns466.log
 -rw-rw-r-- 1 wferi wferi 12587 Mar 17 14:35 overruns466.log

 $ tplot -if or.log -tf 'date %b %e %T' -o overruns.png -k 'overrun' 'sum 10'

 This worked just fine.  However, when given the same file with a longer
 name, tplot does not terminate:

 $ tplot -if overruns466.log -tf 'date %b %e %T' -o overruns.png -k 'overrun' 
 'sum 10'
 ^C

 while doing the same the other way around still works:

 $ cat overruns466.log | tplot -if - -tf 'date %b %e %T' -o overruns.png -k 
 'overrun' 'sum 10'

 Choosing any other extension (svg, pdf or ps) also results in
 nontermination (or at least unbearable runtime and memory consumption).

 Adding a simple no-op statement, like:

 diff -ur ../timeplot-0.2.19/Tools/TimePlot.hs ./Tools/TimePlot.hs
 --- ../timeplot-0.2.19/Tools/TimePlot.hs        2011-03-09 11:36:24.0 +0100
 +++ ./Tools/TimePlot.hs 2011-03-17 16:42:57.247625607 +0100
 @@ -627,6 +627,7 @@
   when (null args || args == ["--help"]) $ showHelp >> exitSuccess
   case (readConf args) of
     Conf conf -> do
 +      putStr ""
       let render = case (outFormat conf) of {
           PNG    -> \c w h f -> const () `fmap` renderableToPNGFile c w h f;
           PDF    -> renderableToPDFFile ;

 also results in nontermination, even in the previously working case.
 Something is clearly wrong here, seemingly in the runtime IO system.
 --
 Thanks,
 Feri.

 GHC 6.12.1
 Chart-0.14
 bytestring-0.9.1.5
 bytestring-lexing-0.2.1
 cairo-0.11.0
 colour-2.3.1
 containers-0.3.0.0
 data-accessor-0.2.1.3
 data-accessor-template-0.2.1.7
 haskell98-1.0.1.1
 regex-tdfa-1.1.4
 strptime-1.0.1
 time-1.1.4




-- 
Eugene Kirpichov
Principal Engineer,
Mirantis Inc. http://www.mirantis.com/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell IDE

2011-03-02 Thread Eugene Kirpichov
A year and something ago I used Leksah and I was reasonably satisfied
with what it had to offer at that time. If I'm understanding
correctly, it has been much improved since.

However, now I actually use vim - but that's because I'm scared of
trying to install Leksah on Windows (maybe it isn't hard, I haven't
tried) and because I'm only doing rather tiny things with Haskell at
the moment.

2011/3/3 Hauschild, Klaus (EXT) klaus.hauschild@siemens.com:
 Hi Haskellers,

 What's your Haskell IDE of choice? Currently I use leksah. Is the EclipseFP
 Plugin for Eclipse a real alternative?

 Thanks

 Klaus


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe





-- 
Eugene Kirpichov
Senior Software Engineer,
Mirantis Inc. http://www.mirantis.com/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Haskell platform installation failure on OS X 10.6.6 (intel)

2011-02-24 Thread Eugene Kirpichov
Hello,

Below is an experience report from Artem (cc'd) on a failed
installation of Haskell platform on a Mac.

OS: OS X 10.6.6
CPU: Intel Core i5
Platform downloaded from:
http://lambda.haskell.org/hp-tmp/2010.2.0.0/haskell-platform-2010.2.0.0.i386.dmg

Sequence of actions: Downloaded the file, mounted the image, ran
Install new GHC, pressed OK until the following screen appeared
http://i.imgur.com/GGdbg.png - there the Install button was disabled;
pressing change install location did not help since only 1 option
was available there, and it was already selected.
After that, Artem tried to uninstall GHC, but unsuccessfully, since OS
told that it was not installed.
Then he tried to install the platform, and eventually got the
following error around moving files into place:
http://i.imgur.com/YHb22.png

I've also heard somewhat similar problem reports on installing
haskell-platform on OS X from other people (I myself don't have it) -
like being unable to install it without first installing XCode, etc.

Can anyone help resolve this issue? I'm sure this is not expected
behavior. What further information would help find the cause?

-- 
Eugene Kirpichov
Senior Software Engineer,
Mirantis Inc. http://www.mirantis.com/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell platform installation failure on OS X 10.6.6 (intel)

2011-02-24 Thread Eugene Kirpichov
Thanks Gregory,

Indeed xcode was not installed - however I wonder, why is it a dependency?
It seems quite a heavyweight thing to require for installing Haskell,
and this requirement may very well be a good reason to *not* install
it...

Could you please point me to the ticket you opened? I was not able to google it.

2011/2/24 Gregory Collins g...@gregorycollins.net:
 Hello,

 Usually this is because XCode is not installed. I believe the README
 states that this is a dependency for the install process. I opened a
 ticket with GHC HQ a while ago re: making this scenario friendlier for
 users.

 G.

 On Thu, Feb 24, 2011 at 3:41 PM, Eugene Kirpichov ekirpic...@gmail.com 
 wrote:
 Hello,

 Below is an experience report from Artem (cc'd) on a failed
 installation of Haskell platform on a Mac.

 OS: OS X 10.6.6
 CPU: Intel Core i5
 Platform downloaded from:
 http://lambda.haskell.org/hp-tmp/2010.2.0.0/haskell-platform-2010.2.0.0.i386.dmg

 Sequence of actions: Downloaded the file, mounted the image, ran
 Install new GHC, pressed OK until the following screen appeared
 http://i.imgur.com/GGdbg.png - there the Install button was disabled;
 pressing change install location did not help since only 1 option
 was available there, and it was already selected.
 After that, Artem tried to uninstall GHC, but unsuccessfully, since OS
 told that it was not installed.
 Then he tried to install the platform, and eventually got the
 following error around moving files into place:
 http://i.imgur.com/YHb22.png

 I've also heard somewhat similar problem reports on installing
 haskell-platform on OS X from other people (I myself don't have it) -
 like being unable to install it without first installing XCode, etc.

 Can anyone help resolve this issue? I'm sure this is not expected
 behavior. What further information would help find the cause?

 --
 Eugene Kirpichov
 Senior Software Engineer,
 Mirantis Inc. http://www.mirantis.com/

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




 --
 Gregory Collins g...@gregorycollins.net




-- 
Eugene Kirpichov
Senior Software Engineer,
Mirantis Inc. http://www.mirantis.com/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

