Re: [Haskell-cafe] Ticking time bomb

2013-02-12 Thread Bob Ippolito
The Python and Ruby communities are actively working on improving the
security of their packaging infrastructure. I haven't paid close attention
to any of the efforts so far, but anyone working on cabal/hackage security
should probably take a peek. I lurk on Python's catalog-sig list and here's
the interesting bits I've noticed from the past few weeks:

[Catalog-sig] [Draft] Package signing and verification process
http://mail.python.org/pipermail/catalog-sig/2013-February/004832.html

[Catalog-sig] [DRAFT] Proposal for fixing PyPI/pip security
http://mail.python.org/pipermail/catalog-sig/2013-February/004994.html

Python PyPi Security Working Document:
https://docs.google.com/document/d/1e3g1v8INHjHsUJ-Q0odQOO8s91KMAbqLQyqj20CSZYA/edit

Rubygems Threat Model:
http://mail.python.org/pipermail/catalog-sig/2013-February/005099.html
https://docs.google.com/document/d/1fobWhPRqB4_JftFWh6iTWClUo_SPBnxqbBTdAvbb_SA/edit

TUF: The Update Framework
https://www.updateframework.com/



On Fri, Feb 1, 2013 at 4:07 AM, Christopher Done chrisd...@gmail.com wrote:

 Hey dude, it looks like we made the same project yesterday:


 http://www.reddit.com/r/haskell/comments/17njda/proposal_a_trivial_cabal_package_signing_utility/

 Yours is nice as it doesn't depend on GPG. Although that could be a
 nice thing because GPG manages keys. Dunno.

 Another diff is that mine puts the .sig inside the .tar.gz, yours puts
 it separate.

 =)

 On 31 January 2013 09:11, Vincent Hanquez t...@snarc.org wrote:
  On 01/30/2013 07:27 PM, Edward Z. Yang wrote:
 
  https://status.heroku.com/incidents/489
 
  Unsigned Hackage packages are a ticking time bomb.
 
  I agree this is terrible. I've started working on this, but it is quite a
  bit of work and other priorities always pop up.
 
  https://github.com/vincenthz/cabal
  https://github.com/vincenthz/cabal-signature
 
  My current implementation generates a manifest during sdist'ing in cabal, and
  has cabal-signature called by cabal on the manifest to create a manifest.sign.
 
  The main issue I'm facing is how to create a Web of Trust for doing all the
  public verification bits.
 
  --
  Vincent
 
 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] arrow notation

2013-02-12 Thread Petr Pudlák
2013/2/11 Ertugrul Söylemez e...@ertes.de

 Petr Pudlák petr@gmail.com wrote:

  class Arrow a => ArrowDelay a where
      delay :: a b c -> a () (b -> c)

  force :: Arrow a => a () (b -> c) -> a b c
 
  Perhaps it would be convenient to have ArrowDelay and the
  corresponding conversions included in the library so that defining and
  using Applicative instances for arrows would become more
  straightforward.

 I appreciate the idea from a theoretical standpoint, but you don't
 actually have to define an ArrowDelay instance for the notation to work.
 The compiler can't check the laws anyway.


That's true. But I'm afraid that without the ArrowDelay constraint people
would assume that every arrow forms an applicative functor, and would
eventually run into hard-to-trace problems caused by violated applicative laws.

The compiler can't check the laws, so somebody else has to. Should it be
users of an arrow or its authors?

Without the constraint, the burden would be on the users: Before using the
applicative instance, check if the arrow is really an applicative functor.
That's something users of a library aren't supposed to do.

With the constraint, the burden would be on the authors of the arrow -
they'd have to define the instance and be responsible for satisfying the
laws. I feel this is more appropriate.
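
For instance (a sketch of mine, not from this thread), the author of the plain function arrow could take on that burden like so, using the ArrowDelay class quoted above:

    import Control.Arrow (Arrow)

    class Arrow a => ArrowDelay a where
        delay :: a b c -> a () (b -> c)

    -- Pure functions are trivially "static", so the author can declare
    -- the instance and check the laws easily.
    instance ArrowDelay (->) where
        delay f = \() -> f

By contrast, there is no lawful instance for Kleisli arrows of an arbitrary monad, which is exactly the sort of arrow the constraint is meant to exclude.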

  Best regards,
  Petr Pudlak
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Problem installing cabal-dev

2013-02-12 Thread David Turner

Hi,

From a clean install of Haskell Platform 2012.4.0.0 (on Windows) I have 
issued just:


 cabal update
 cabal install cabal-install
 cabal install cabal-dev

The last command fails with:

Resolving dependencies...
In order, the following would be installed:
tar-0.3.2.0 (new package)
transformers-0.2.2.0 (new version)
mtl-2.0.1.0 (new version)
parsec-3.1.3 (reinstall) changes: mtl-2.1.2 - 2.0.1.0
network-2.3.2.0 (new version)
HTTP-4000.2.8 (new version)
cabal-dev-0.9.1 (new package)
cabal: The following packages are likely to be broken by the reinstalls:
network-2.3.1.0
haskell-platform-2012.4.0.0
cgi-3001.1.7.4
HTTP-4000.2.5
Use --force-reinstalls if you want to install anyway.


I *think* the problem is that cabal-dev has these dependencies:

  mtl          >= 1.1 && < 2.1,
  transformers >= 0.2 && < 0.3,

where the latest versions of mtl and transformers are 2.1.2 and 0.3 
respectively. At least, if I download the latest cabal-dev package and 
relax those upper bounds, I can get it to install.
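
For illustration only, "relaxing those upper bounds" means editing the build-depends in cabal-dev.cabal to something like the following (hypothetical bounds, not an official fix):

    build-depends:
      mtl          >= 1.1 && < 2.2,
      transformers >= 0.2 && < 0.4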


The irony is, of course, that I'm trying to install cabal-dev to get me 
out of a totally unrelated dependencies hole!


What should I do now?

Cheers,

--
David Turner
Senior Consultant
Tracsis PLC

Tracsis PLC is a company registered in
England and Wales with number 5019106.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Problem installing cabal-dev

2013-02-12 Thread JP Moresmau
Hello David, what I did was get cabal-dev from source (git clone
git://github.com/creswick/cabal-dev.git). This builds fine; the upper bounds
have been edited. Hopefully the new version will be released soon.

JP


On Tue, Feb 12, 2013 at 11:45 AM, David Turner d.tur...@tracsis.com wrote:

 [original message quoted in full; snipped]




-- 
JP Moresmau
http://jpmoresmau.blogspot.com/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Problem installing cabal-dev

2013-02-12 Thread Bob Ippolito
The version of cabal-dev on Hackage doesn't work with recent versions of
Haskell due to https://github.com/creswick/cabal-dev/issues/74 - you have
to install from a recent git checkout.

These instructions were done on Mac but should be straightforward enough to
do the same on Windows:
http://bob.ippoli.to/archives/2013/01/11/getting-started-with-haskell/#install-cabal-dev



On Tue, Feb 12, 2013 at 2:45 AM, David Turner d.tur...@tracsis.com wrote:

 [original message quoted in full; snipped]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] arrow notation

2013-02-12 Thread Ross Paterson
On Mon, Feb 11, 2013 at 09:32:25AM +0100, Petr Pudlák wrote:
 While the implementation of Applicative can be defined without actually using
 `delay`:
 
  newtype ArrowApp a b c = ArrowApp (a b c)
 
  instance Arrow a => Functor (ArrowApp a b) where
      fmap f (ArrowApp a) = ArrowApp (a >>^ f)
  instance ArrowDelay a => Applicative (ArrowApp a b) where
      pure x =
          ArrowApp $ arr (const x)
      (ArrowApp af) <*> (ArrowApp ax) =
          ArrowApp $ (af &&& ax) >>^ uncurry ($)
 
 I believe it satisfies the laws only if the arrow satisfies the delay/force
 laws.

This is a reader, which always satisfies the applicative laws.
What ArrowDelay does is pick out the arrows that are equivalent to
the static arrow, i.e. F (b -> c), of some applicative functor F.
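
For readers unfamiliar with the term, here is a minimal sketch (mine, not Ross's) of the static arrow construction being referred to: wrapping F (b -> c) for an applicative functor F yields an Arrow.

    import Prelude hiding (id, (.))
    import Control.Category
    import Control.Arrow
    import Control.Applicative

    newtype Static f b c = Static (f (b -> c))

    instance Applicative f => Category (Static f) where
        id                  = Static (pure id)
        Static g . Static h = Static ((.) <$> g <*> h)

    instance Applicative f => Arrow (Static f) where
        arr f            = Static (pure f)
        first (Static f) = Static (first <$> f)

ArrowDelay then characterises the arrows that are, up to isomorphism, of this Static form.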

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [Announcement] hArduino: Control your Arduino board from Haskell

2013-02-12 Thread Ivan Perez
I, too, am very happy to see this implemented. I'll give it a try and
tell you how it goes (not immediately, sadly; I don't have my Arduino
with me).

Thanks a lot!

On 11 February 2013 08:04, Alfredo Di Napoli alfredo.dinap...@gmail.com wrote:
 Sounds cool!
 Thanks for your effort! :)
 A.

 On 10 February 2013 22:54, Levent Erkok erk...@gmail.com wrote:

 I'm happy to announce hArduino: a library that allows Arduino boards to be
 controlled directly from Haskell. hArduino uses the Firmata protocol
 (http://firmata.org), to communicate with and control Arduino boards, making
 it possible to implement many controller projects directly in Haskell.

Home page: http://leventerkok.github.com/hArduino/
Hackage: http://hackage.haskell.org/package/hArduino

 Some example programs:

Blink the led:
 http://hackage.haskell.org/packages/archive/hArduino/latest/doc/html/System-Hardware-Arduino-SamplePrograms-Blink.html
Digital counter:
 http://hackage.haskell.org/packages/archive/hArduino/latest/doc/html/System-Hardware-Arduino-SamplePrograms-Counter.html
Control an LCD:
 http://hackage.haskell.org/packages/archive/hArduino/latest/doc/html/System-Hardware-Arduino-SamplePrograms-LCD.html

 Short (4.5 mins) youtube video of the blink example:
 http://www.youtube.com/watch?v=PPa3im44t2g

 hArduino is a work in progress: patches, bug reports, and feedback are most
 welcome.

 -Levent.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Why isn't Program Derivation a first class citizen?

2013-02-12 Thread Nehal Patel
A few months ago I took the Haskell plunge, and all goes well... -- but I 
really want to understand the paradigms as fully as possible, and as it stands, 
I find myself with three or four questions for which I've yet to find suitable 
answers.  I've picked one to ask the cafe -- like my other questions, it's 
somewhat amorphous and ill posed -- so much thanks in advance for any thoughts 
and comments!


Why isn't Program Derivation a first class citizen?
---

(Hopefully the term "program derivation" is commonly understood? I mean it in 
the sense usually used to describe the main motif of Bird and de Moor's The 
Algebra of Programming. Others seem to use it as well...) 

For me, it has come to mean solving problems in roughly the following way:
1) Define the important functions and data types in a "pedantic" way so that 
the semantics are as clear as possible to a human, but possibly inefficient (I 
use quotes because one of my other questions is about whether it is really 
possible to reason effectively about program performance in ghc/Haskell…)
2) Work out proofs of various properties of your functions and data types.
3) Use the results from step 2 to provide an alternate implementation with 
provably the same semantics but possibly much better performance.

To me it seems that so much of Haskell's design is dedicated to making steps 
1-3 possible, and those steps represent for me (and I imagine many others) the 
thing that makes Haskell (and its cousins) so beautiful.

And so my question is: in 2013, why isn't this process a full-fledged part 
of the language? I imagine I just don't know what I'm talking about, so correct 
me if I'm wrong, but this is how I understand the workflow used in practice 
with program derivation: 1) state the problem pedantically in code, 2) work 
out a bunch of proofs with pen and paper, 3) and then translate that back into 
code, perhaps leaving you with function_v1, function_v2, function_v3, etc. -- 
that seems fine for 1998, but why is it still being done this way?
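
A tiny, textbook instance of the workflow, just to fix ideas (the names are mine):

    -- Step 1: the pedantic specification -- clear, but quadratic.
    reverse_v1 :: [a] -> [a]
    reverse_v1 []     = []
    reverse_v1 (x:xs) = reverse_v1 xs ++ [x]

    -- Step 2 (pen and paper today): prove
    --   forall xs ys.  revApp xs ys == reverse_v1 xs ++ ys
    -- Step 3: the derived, linear version; equal to the spec by the lemma
    -- with ys = [].
    reverse_v2 :: [a] -> [a]
    reverse_v2 xs = revApp xs []
      where
        revApp []     ys = ys
        revApp (x:xs) ys = revApp xs (x:ys)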

What I'm asking about might sound related to theorem provers -- but if so, I 
feel like what I'm thinking about is not so much the very difficult problem of 
automated proofs or even proof assistants, but the somewhat simpler task of 
proof verification. Concretely, here's a possibility of how I imagine the 
workflow could be:

++ in code, pedantically set up the problem. 
++ in code, state a theorem, and prove it -- this would require a revision to 
the language (Haskell 201x) and perhaps look like Isabelle's Isar -- a 
-structured- proof syntax that is easy for humans to construct and understand 
-- in particular it would be possible to easily reuse previous theorems and 
leave out various steps. At compile time, the compiler would check that the 
proof was correct, or perhaps complain that it can't see how I went from step 
3 to step 4, in which case I might have to add in another step (3b) to help 
the compiler verify the proof.  
++ afterwards, I would use my new theorems to create semantically identical 
variants of my original functions (again this process would be integrated into 
the language)

While I feel like theorem provers offer some aspects of this workflow (in 
particular the ability to export Scala or Haskell code when you are done), 
I feel that in practice they are only useful for modeling fairly technical 
things like protocols and crypto stuff -- whereas if this existed within 
Haskell proper it would find much broader use and have greater impact.

I haven't fully fleshed out all the various steps of what exactly it would mean 
to have program derivation be a first-class citizen, but I'll pause here and 
follow up if a conversation ensues. 

To me, it seems that something like this should be possible -- am I being 
naive? Does it already exist? Have people tried and given up? Is it just 
around the corner? Can you help me make sense of all of this?

thanks! nehal
 
  
 
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] performance question

2013-02-12 Thread Nicolas Bock
Thanks so much for your efforts, this really helped!

Thanks again,

nick



On Sat, Feb 9, 2013 at 11:54 PM, Branimir Maksimovic bm...@hotmail.com wrote:

  Here is a Haskell version that is faster than Python, almost as fast as C++.
 You need to install the bytestring-lexing package for readDouble.

 bmaxa@maxa:~/haskell$ time ./printMatrixDecay - < output.txt
 read 16384 matrix elements (128x128 = 16384)
 [0.00e0, 1.00e-8) = 0 (0.00%) 0
 [1.00e-8, 1.00e-7) = 0 (0.00%) 0
 [1.00e-7, 1.00e-6) = 0 (0.00%) 0
 [1.00e-6, 1.00e-5) = 0 (0.00%) 0
 [1.00e-5, 1.00e-4) = 1 (0.01%) 1
 [1.00e-4, 1.00e-3) = 17 (0.10%) 18
 [1.00e-3, 1.00e-2) = 155 (0.95%) 173
 [1.00e-2, 1.00e-1) = 1434 (8.75%) 1607
 [1.00e-1, 1.00e0) = 14777 (90.19%) 16384
 [1.00e0, 2.00e0) = 0 (0.00%) 16384

 real0m0.031s
 user0m0.028s
 sys 0m0.000s
 bmaxa@maxa:~/haskell$ time ./printMatrixDecay.py - < output.txt
 (-) read 16384 matrix elements (128x128 = 16384)
 [0.00e+00, 1.00e-08) = 0 (0.00%) 0
 [1.00e-08, 1.00e-07) = 0 (0.00%) 0
 [1.00e-07, 1.00e-06) = 0 (0.00%) 0
 [1.00e-06, 1.00e-05) = 0 (0.00%) 0
 [1.00e-05, 1.00e-04) = 1 (0.00%) 1
 [1.00e-04, 1.00e-03) = 17 (0.00%) 18
 [1.00e-03, 1.00e-02) = 155 (0.00%) 173
 [1.00e-02, 1.00e-01) = 1434 (0.00%) 1607
 [1.00e-01, 1.00e+00) = 14777 (0.00%) 16384
 [1.00e+00, 2.00e+00) = 0 (0.00%) 16384

 real0m0.081s
 user0m0.080s
 sys 0m0.000s

 Program follows...

 import System.Environment
 import Text.Printf
 import Text.Regex.PCRE
 import Data.Maybe
 import Data.Array.IO
 import Data.Array.Unboxed
 import qualified Data.ByteString.Char8 as B
 import Data.ByteString.Lex.Double (readDouble)

 strataBounds :: UArray Int Double
 strataBounds = listArray (0,10) [ 0.0, 1.0e-8, 1.0e-7, 1.0e-6, 1.0e-5,
                                   1.0e-4, 1.0e-3, 1.0e-2, 1.0e-1, 1.0, 2.0 ]

 newStrataCounts :: IO (IOUArray Int Int)
 newStrataCounts = newArray (bounds strataBounds) 0

 main = do
     l <- B.getContents
     let a = B.lines l
     strataCounts <- newStrataCounts
     n <- calculate strataCounts a 0
     let
         printStrataCounts :: IO ()
         printStrataCounts = do
             let s = round $ sqrt (fromIntegral n :: Double) :: Int
             printf "read %d matrix elements (%dx%d = %d)\n" n s s n
             printStrataCounts' 0 0
         printStrataCounts' :: Int -> Int -> IO ()
         printStrataCounts' i total
             | i < (snd $ bounds strataBounds) = do
                 count <- readArray strataCounts i
                 let
                     p :: Double
                     p = (100.0 * (fromIntegral count) ::
                             Double) / (fromIntegral n :: Double)
                 printf "[%1.2e, %1.2e) = %i (%1.2f%%) %i\n" (strataBounds
                         ! i) (strataBounds ! (i+1))
                     count p
                     (total + count)
                 printStrataCounts' (i+1) (total+count)
             | otherwise = return ()
     printStrataCounts

 calculate :: IOUArray Int Int -> [B.ByteString] -> Int -> IO Int
 calculate _ [] n = return n
 calculate counts (l:ls) n = do
     let
         a = case getAllTextSubmatches $ l =~ B.pack "matrix.*= ([0-9eE.+-]+)$"
                     :: [B.ByteString] of
                 [_,v] -> Just (readDouble v) :: Maybe (Maybe
                                                        (Double, B.ByteString))
                 _ -> Nothing
         b = (fst . fromJust . fromJust) a
         loop :: Int -> IO ()
         loop i
             | i < (snd $ bounds strataBounds) =
                 if (b >= (strataBounds ! i)) && (b < (strataBounds ! (i+1)))
                     then do
                         c <- readArray counts i
                         writeArray counts i (c+1)
                     else
                         loop (i+1)
             | otherwise = return ()
     if isNothing a
         then
             calculate counts ls n
         else do
             loop 0
             calculate counts ls (n+1)


 --
 From: nicolasb...@gmail.com
 Date: Fri, 8 Feb 2013 12:26:09 -0700
 To: haskell-cafe@haskell.org
 Subject: [Haskell-cafe] performance question

 Hi list,

 I wrote a script that reads matrix elements from standard input, parses
 the input using a regular expression, and then bins the matrix elements by
 magnitude. I wrote the same script in python (just to be sure :) ) and find
 that the python version vastly outperforms the Haskell script.

 To be concrete:

 $ time ./createMatrixDump.py -N 128 | ./printMatrixDecay
 real0m2.655s
 user0m2.677s
 sys 0m0.095s

 $ time ./createMatrixDump.py -N 128 | ./printMatrixDecay.py -
 real0m0.445s
 user0m0.615s
 sys 0m0.032s

 The Haskell script was compiled with ghc --make printMatrixDecay.hs.

 Could you have a look at the script and give me some pointers as to where
 I could improve it, both in terms of performance and also generally, as I
 am very new to Haskell.

 Thanks already,

 nick


 ___ Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Structured Graphs

2013-02-12 Thread John Sharley
What are the prospects for Haskell supporting Structured Graphs as defined here?
http://www.cs.utexas.edu/~wcook/Drafts/2012/graphs.pdf

Is there an interest by developers of GHC in doing this?

-John

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] performance question

2013-02-12 Thread briand
On Tue, 12 Feb 2013 15:57:37 -0700
Nicolas Bock nicolasb...@gmail.com wrote:

   Here is haskell version that is faster than python, almost as fast as c++.
  You need to install bytestring-lexing package for readDouble.


I was hoping Branimir could comment on how the improvements were allocated.

how much is due to text.regex.pcre (which looks to be a wrapper to libpcre) ?

how much can be attributed to using data.bytestring ?

you have to admit, it's amazing how well a byte-compiled, _dynamically typed_ 
interpreter can do against an actual native-code compiler.  Can't regex be 
done effectively in Haskell?  Is it something that can't be done, or is it 
just such minimal effort to link to pcre that it's not worth the trouble?


Brian

  [quoted program and earlier messages snipped]



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] performance question

2013-02-12 Thread Bob Ippolito
On Tuesday, February 12, 2013, wrote:

 On Tue, 12 Feb 2013 15:57:37 -0700
 Nicolas Bock nicolasb...@gmail.com wrote:

Here is haskell version that is faster than python, almost as fast as
 c++.
   You need to install bytestring-lexing package for readDouble.


 I was hoping Branimir could comment on how the improvements were allocated.

 how much is due to text.regex.pcre (which looks to be a wrapper to
 libpcre) ?

 how much can be attributed to using data.bytestring ?

 you have to admit, it's amazing how well a byte-compiled, _dynamically
 typed_ interpreter can do against an actualy native code compiler.  Can't
 regex be done effectively in haskell ?  Is it something that can't be done,
 or is it just such minimal effort to link to pcre that it's not worth the
 trouble ?


I think that there are two bottlenecks: the regex engine, and converting a
bytestring to a double. There doesn't appear to be a fast and accurate
strtod implementation for Haskell, and the faster regex implementations
that I could find appear to be unmaintained.
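
One regex-free direction, as a rough sketch (mine, untested against the real input; it assumes each matching line ends in "= <value>"):

    import Control.Applicative ((<$>))
    import qualified Data.ByteString.Char8 as B
    import Data.ByteString.Lex.Double (readDouble)

    -- Take everything after the last '=' on the line and lex it as a Double.
    parseElement :: B.ByteString -> Maybe Double
    parseElement line = do
        i <- B.elemIndexEnd '=' line
        fst <$> readDouble (B.dropWhile (== ' ') (B.drop (i + 1) line))

That sidesteps the regex engine entirely, though it still leans on readDouble from bytestring-lexing for the conversion.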




 [remainder of quoted message snipped]



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

Re: [Haskell-cafe] ghc-mod and cabal targets

2013-02-12 Thread 山本和彦
Francesco,

 I can confirm that 1.11.1 works.

I think I fixed this problem.
Would you try the master branch?

https://github.com/kazu-yamamoto/ghc-mod

--Kazu

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How far compilers are allowed to go with optimizations?

2013-02-12 Thread wren ng thornton

On 2/11/13 11:47 AM, Johan Holmquist wrote:

I was about to leave this topic so as not to swamp the list with something
that appears to go nowhere. But now I feel that I must answer the comments,
so here it goes.

By aggressive optimisation I mean an optimisation that drastically
reduces the run time of (some part of) the program. So I guess
automatic vectorisation could fall under this term.


In that case, I'd say automatic vectorization counts. List fusion also 
counts, but I'm pretty sure you don't want to get rid of that either. 
IMHO, the only thing that distinguishes aggressive optimizations from 
the rest is that programmers find them to be finicky. There are two 
general causes of perceived finickiness:


(1) The optimization really is finicky and is triggered or not in highly 
unpredictable ways. This is often the case when an optimization is new, 
because it hasn't been battle-tested enough to make it reliable across the 
diversity we see in real-world code. The early days of list fusion were 
certainly like that. As were the early days of optimizations that depend 
on alias detection. IME, these things tend to get straightened out in a 
few version cycles. If they don't, then the optimization truly is 
finicky and therefore is something bad that should be avoided. I haven't 
found that situation to be very common, however.


(2) The optimization is reliable (enough) but the programmer doesn't 
understand it well enough and thus inadvertently breaks it when doing 
innocuous code transformations. Again, this is generally the case any 
time a new optimization shows up. The only way to fix this, really, is 
to wait for people to learn new habits and to internalize a new model of 
how the compiler works. Good explanations of the technology often help 
that process along; but don't obviate the need for the process.
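
A small illustration of the second case with list fusion (the example is mine, not part of the original mail):

    -- The two maps fuse into a single pass over xs.
    pipeline :: [Int] -> [Int]
    pipeline xs = map (+ 1) (map (* 2) xs)

    -- An "innocuous" refactor: naming and sharing the intermediate list
    -- gives it two consumers, so it can no longer be fused away.
    pipelineAndLength :: [Int] -> ([Int], Int)
    pipelineAndLength xs = (map (+ 1) ys, length ys)
      where ys = map (* 2) xs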



Both of those situations are triggered by an optimization being new, and 
both of them tend to be resolved when the optimization is no longer new. 
Thus, I don't think it makes much sense to disallow compilers from 
making aggressive optimizations, because it is only through doing so 
that those optimizations can be rendered non-aggressive.


In all of this, I'm reminded of a recent article on code metrics:

http://www.neverworkintheory.org/?p=488

The key idea there is to use automatically generated code refactorings 
in order to see how different versions of the same code were rated. 
Perhaps it'd be worth doing this sort of thing in order to identify 
which optimizations are stable under particular code transformations.


--
Live well,
~wren

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why isn't Program Derivation a first class citizen?

2013-02-12 Thread Jan Stolarek
 To me, it seems that something like this should be possible -- am i being
 naive? does it already exist?  
During the compilation process GHC optimizes the code by performing successive 
transformations of the program. These transformations are known to preserve the 
meaning of the program - they are based on already proved facts and properties. 
I've written a blog post recently on this:

http://lambda.jstolarek.com/2013/01/taking-magic-out-of-ghc-or-tracing-compilation-by-transformation/

Now, you might be asking why GHC transforms the Core representation of a 
program and not the original Haskell code itself. The answer is simplicity. 
Core is equivalent to Haskell, but it's much easier to work with, since there 
are fewer language constructs and it's easier to design transformations and 
prove them correct.
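
A user-visible analogue of such meaning-preserving rewrites (an illustration only; base already ships an equivalent rule for map) is a RULES pragma, where the author asserts an equality proved on paper and GHC applies it left-to-right while simplifying:

    module MapMapRule where

    {-# RULES
    "map/map/example"  forall f g xs.  map f (map g xs) = map (f . g) xs
      #-}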

I hope this at least partially answers your question. Whether there are 
automated ways of transforming Haskell programs into more efficient Haskell 
programs, I don't know.

Janek

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How far compilers are allowed to go with optimizations?

2013-02-12 Thread Tristan Seligmann
On Mon, Feb 11, 2013 at 6:47 PM, Johan Holmquist holmi...@gmail.com wrote:
 By aggressive optimisation I mean an optimisation that drastically
 reduces the run time of (some part of) the program. So I guess
 automatic vectorisation could fall under this term.

Even something like running the program on a different CPU or
different OS version can drastically improve or harm the performance
of a particular program, without any change in the compiler itself. If
you care about performance, the only real recourse is to have
benchmarks / performance tests that verify the things you care about,
and run them regularly in your target environment so that any
performance-critical changes are noticed right away.
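
For concreteness, a minimal sketch of such a recurring benchmark using the criterion package (the benchmarked expressions are placeholders; substitute whatever you actually care about and run it in the target environment):

    import Criterion.Main

    main :: IO ()
    main = defaultMain
        [ bench "reverse 10k" $ nf reverse [1 .. 10000 :: Int]
        , bench "sum 10k"     $ nf sum     [1 .. 10000 :: Int]
        ]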
-- 
mithrandi, i Ainil en-Balandor, a faer Ambar

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe