Re: [Haskell-cafe] HSpec vs Tasty (was: ANN: hspec-test-framework - Run test-framework tests with Hspec)

2013-08-21 Thread Andrey Chudnov
So, is there a high-level comparison of Hspec and tasty? The only
difference I've glimpsed so far is that Hspec has syntactic sugar for
describing tests, which, honestly, I haven't found very useful.
Could someone write up a quick comparison of the two for the benefit
of folks like me who have a lot of test-framework tests and need to
switch to either tasty or Hspec?


On 08/18/2013 06:27 PM, Roman Cheplyaka wrote:

My answer to this and many similar questions regarding tasty is:

- I am probably not going to work on this
- but I would be happy to see someone doing it

Note that hspec-test-framework is a separate package, and it didn't have
to be written or even approved by Simon. Same here — please write more
supplementary packages if you feel the need.

Roman

* Alfredo Di Napoli alfredo.dinap...@gmail.com [2013-08-18 15:18:07+0200]

Hi Simon,

this is exciting news!

May I ask the question that may be lurking in the shadows?

Given the recent announcement of Roman's tasty library, are there plans
to release something similar to hspec-test-framework and
hspec-test-framework-th, but targeting tasty instead?

Bye :)
A.


On 18 August 2013 14:50, Simon Hengel s...@typeful.net wrote:


Hi,
I just released hspec-test-framework[1] and hspec-test-framework-th[2]
to Hackage.

They can be used to run unmodified test-framework tests with Hspec.

This can also be used to work around test-framework's incompatibility
with QuickCheck-2.6 and base-4.7.0 ;)

Have a look at the README for usage instructions:

 https://github.com/sol/hspec-test-framework#readme

Cheers,
Simon

[1] http://hackage.haskell.org/package/hspec-test-framework
[2] http://hackage.haskell.org/package/hspec-test-framework-th



Re: [Haskell-cafe] ANNOUNCE: tasty, a new testing framework

2013-08-05 Thread Andrey Chudnov

On 08/05/2013 02:48 PM, Roman Cheplyaka wrote:

(which is unmaintained).

  Has this been confirmed by the author/maintainer?

Tasty supports HUnit, SmallCheck, QuickCheck, and golden tests out of
the box (through the standard packages), but it is very extensible, so
you can write your own test providers.

Please see the home page for more information:
http://documentup.com/feuerbach/tasty

Is it a drop-in replacement for test-framework, e.g. if I replace
test-framework with tasty in my .cabal files, will it work? If not,
could you provide a quick porting guide? Also, is the current
version (0.1) recommended for general use?




[Haskell-cafe] Design of extremely usable programming language libraries

2013-05-28 Thread Andrey Chudnov
 is
to statically constrain all the values that are passed to, say, the
pretty-printer so that they are guaranteed to be free of
anti-quotes (see an example definition below). However, that, again,
requires GADTs (e.g. having all the AST datatypes carry an extra type
parameter).

 data EType = Complete | HasHoles
 type family Quoted a b :: *
 canHaveHolesT :: a -> b -> Quoted a b
 canHaveHolesT _ _ = undefined
 type instance Quoted HasHoles Complete = HasHoles
 type instance Quoted Complete HasHoles = HasHoles
 type instance Quoted HasHoles HasHoles = HasHoles
 type instance Quoted Complete Complete = Complete
 data Expr t where
   EInt   :: Int -> Expr Complete
   EAdd   :: Expr t1 -> Expr t2 -> Expr (Quoted t1 t2)
   ...
   EQuote :: String -> Expr HasHoles

And then we could have a normal parser return a value of type 'Expr Complete'
and a quasi-quotation parser return a value of type 'Expr HasHoles'.
Similarly, the pretty-printer function could have type
'Expr Complete -> Doc'.
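
For concreteness, here is a small self-contained variant of that sketch
using DataKinds (untested; the promoted EType, the module name, and the
stub pretty-printer are illustrative additions, not part of the design
notes above):

{-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeFamilies #-}
module QuotedSketch where

data EType = Complete | HasHoles

type family Quoted (a :: EType) (b :: EType) :: EType
type instance Quoted Complete Complete = Complete
type instance Quoted Complete HasHoles = HasHoles
type instance Quoted HasHoles Complete = HasHoles
type instance Quoted HasHoles HasHoles = HasHoles

data Expr (t :: EType) where
  EInt   :: Int -> Expr Complete
  EAdd   :: Expr t1 -> Expr t2 -> Expr (Quoted t1 t2)
  EQuote :: String -> Expr HasHoles

-- Only complete (anti-quote-free) terms can be pretty-printed; only the
-- type matters for this sketch, so the body is left undefined.
prettyPrint :: Expr Complete -> String
prettyPrint = undefined

ok :: String
ok = prettyPrint (EAdd (EInt 1) (EInt 2))          -- accepted

-- bad = prettyPrint (EAdd (EInt 1) (EQuote "x"))  -- rejected by the type checker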

4) We should be able to annotate ASTs with arbitrary values, and
change the types of those values as we go. The most user-friendly way,
IMO, is to make the AST datatypes polymorphic in the annotation type and
store a value of that type as an extra field in every constructor. E.g.,
 data Expr t a where
   EInt   :: a -> Int -> Expr Complete a
   EAdd   :: a -> Expr t1 a -> Expr t2 a -> Expr (Quoted t1 t2) a
   ...
   EQuote :: a -> String -> Expr HasHoles a

Then we can use the functions in Traversable to change the types of
annotations, and inspect the values by pattern-matching on constructors.
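
To illustrate the annotation part on its own (untested sketch; the GADT
index is left out and the tiny Expr type and names are just for the
example), the derived Functor/Foldable/Traversable instances are enough
to rewrite or renumber annotations:

{-# LANGUAGE DeriveFunctor, DeriveFoldable, DeriveTraversable #-}
module AnnSketch where

import Data.Foldable (Foldable)
import Data.Traversable (Traversable, mapAccumL)

-- Annotation-polymorphic AST: every constructor carries a value of type a.
data Expr a = EInt a Int
            | EAdd a (Expr a) (Expr a)
            deriving (Show, Functor, Foldable, Traversable)

-- Inspect the annotation at the root of a node by pattern-matching.
ann :: Expr a -> a
ann (EInt a _)   = a
ann (EAdd a _ _) = a

-- Forget annotations (e.g. drop source positions after type checking).
forget :: Expr pos -> Expr ()
forget = fmap (const ())

-- Number the nodes left to right using Traversable's mapAccumL.
number :: Expr a -> Expr Int
number = snd . mapAccumL (\n _ -> (n + 1, n)) 0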

5) Support for generic operations on syntax trees. Uniplate was designed
to work with ASTs and is awesome for that purpose because it saves a lot
of time. I use transform(Bi) and universe(Bi) all the time, and they save
*a lot* of typing. Pretty much all my analysis/transformation code uses
those four small-but-powerful functions -- and, dare I say, it's quite
elegant.
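
As a small illustration on a plain (non-GADT) expression type (the type
and function names are made up for the example):

{-# LANGUAGE DeriveDataTypeable #-}
module UniplateSketch where

import Data.Data (Data, Typeable)
import Data.Generics.Uniplate.Data (transform, universe)

data Expr = EInt Int
          | EAdd Expr Expr
          | ENeg Expr
          deriving (Show, Eq, Data, Typeable)

-- Bottom-up rewriting with transform: constant-fold additions.
constFold :: Expr -> Expr
constFold = transform f
  where f (EAdd (EInt a) (EInt b)) = EInt (a + b)
        f e                        = e

-- universe lists a node and all of its descendants: collect the literals.
literals :: Expr -> [Int]
literals e = [n | EInt n <- universe e]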


Other useful, but not crucial features include:

1) diffs for ASTs (in the spirit of the 'gdiff' library, which, alas,
doesn't work with polymorphic datatypes)

2) QuickCheck Arbitrary instances for ASTs. There is no technical difficulty
there, but writing instances that generate interesting programs and
don't run out of memory is quite hard :) I wish 'Agata' were still
maintained, or that there were a library that helps with writing Arbitrary
instances for ASTs (a size-bounded sketch follows this list).
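
For what it's worth, the trick I end up rewriting by hand every time is a
size-bounded generator along these lines (the Expr type is just an example):

module ArbSketch where

import Control.Applicative ((<$>), (<*>))
import Test.QuickCheck

data Expr = EInt Int
          | EAdd Expr Expr
          deriving (Show)

-- Halve the size budget at every branch so generated trees stay finite
-- and reasonably small.
instance Arbitrary Expr where
  arbitrary = sized genExpr
    where
      genExpr 0 = EInt <$> arbitrary
      genExpr n = frequency
        [ (1, EInt <$> arbitrary)
        , (3, EAdd <$> genExpr (n `div` 2) <*> genExpr (n `div` 2))
        ]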


If you have another feature in mind that is missing from the
list, please let me know.

The (perceived) challenges in implementing the functionality outlined
above are as follows:

1) No multi-mode pretty-printing library. I think that the multi-mode
   functionality could be implemented on top of an existing library by
   defining new combinators (a sketch follows this list), but it would
   be nice to have a library that supports them out of the box. The
   particular features that I'm missing are:
   - non-essential space/(soft-)line-break combinators that are
     interpreted as spaces/line breaks in the pretty mode and as empty
     docs in the minified mode;
   - a comment combinator which inserts the text in a comment only if
     debug mode is on;
   - being able to record the positions of AST nodes in the resulting
     text (for generating source maps). I'm not sure what a convenient
     interface for that would be. Note: I know that mainland-pretty has
     position information, but I don't think it's helpful for generating
     source maps.

2) The biggest problem is that there are two good reasons to use GADTs
   when specifying AST datatypes.  However, uniplate doesn't work with
   GADTs and, as far as I know, no currently supported generic
   programming library does (to be precise, I need support for
   families of mutually recursive polymorphic GADTs). Am I missing
   some library, or is my understanding correct? If it's the latter,
   is there any fundamental limitation that prevents creating such a
   library?  Maybe there are other (but still elegant) ways to satisfy
   my requirements without using GADTs?

3) 'gdiff' doesn't support polymorphic datatypes. Is there any other
library that does?
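
Regarding challenge 1, the kind of combinator layer I have in mind could be
sketched on top of the pretty package roughly like this (all names here are
invented for the sketch, not a real library; it covers the first two bullets,
while position tracking would need a richer type):

module ModalPretty where

import qualified Text.PrettyPrint as P

-- Output modes: human-readable, minified, or with debug comments.
data Mode = Pretty | Minified | Debug
  deriving (Eq, Show)

-- A mode-aware document is just a function from the mode to a Doc.
type MDoc = Mode -> P.Doc

text :: String -> MDoc
text s _ = P.text s

-- Non-essential space: a space when pretty-printing, nothing when minifying.
softSpace :: MDoc
softSpace Minified = P.empty
softSpace _        = P.space

-- A comment that only shows up in debug mode.
comment :: String -> MDoc
comment s Debug = P.text ("/* " ++ s ++ " */")
comment _ _     = P.empty

-- Mode-aware concatenation.
(<.>) :: MDoc -> MDoc -> MDoc
(d1 <.> d2) m = d1 m P.<> d2 m

renderWith :: Mode -> MDoc -> String
renderWith m d = P.render (d m)

-- renderWith Minified (text "a" <.> softSpace <.> text "+" <.> softSpace <.> text "b")
-- yields "a+b", while renderWith Pretty yields "a + b".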


[1] http://www.serpentine.com/blog/2010/03/03/whats-in-a-parsing-library-1/
[2] http://en.wikibooks.org/wiki/Haskell/GADT

PS: My attempts so far are in
https://github.com/achudnov/language-nextgen/blob/master/Language/Nextgen/Syntax.hs

Regards,
Andrey Chudnov




Re: [Haskell-cafe] Design of extremely usable programming language libraries

2013-05-28 Thread Andrey Chudnov
Thanks for the prompt reply, Roman.

On 05/28/2013 04:52 PM, Roman Cheplyaka wrote:
 Any syb-style library works with GADTs, by the virtue of dealing with
 value representations instead of type representations. 
I tried to use syb, but the following code fails to typecheck for me.
What am I doing wrong?
 {-# LANGUAGE GADTs, EmptyDataDecls, MultiParamTypeClasses, TypeFamilies #-}
 {-# LANGUAGE DeriveDataTypeable, StandaloneDeriving #-}

 import Data.Data (Data)
 import Data.Typeable (Typeable, Typeable2)

 data HasHoles
 data Complete
 deriving instance Typeable HasHoles
 deriving instance Data HasHoles
 deriving instance Typeable Complete
 deriving instance Data Complete

 type family Holes a b :: *
 canHaveHolesT :: a -> b -> Holes a b
 canHaveHolesT _ _ = undefined
 type instance Holes HasHoles Complete = HasHoles
 type instance Holes Complete HasHoles = HasHoles
 type instance Holes HasHoles HasHoles = HasHoles
 type instance Holes Complete Complete = Complete

 data Expression k a where
   EQuote  :: a -> String -> Expression HasHoles a
   IntLit  :: a -> Int -> Expression Complete a
   EArith  :: a -> ArithOp -> Expression k1 a -> Expression k2 a ->
              Expression (Holes k1 k2) a
 deriving instance Typeable2 Expression
 deriving instance Data (Expression k a)

 data ArithOp = OpAdd
              | OpSub
              | OpMul
              | OpDiv
   deriving (Data, Typeable)

Fails with:
 Couldn't match type `Complete' with `HasHoles'
 Expected type: a - String - Expression k a
   Actual type: a - String - Expression HasHoles a
 In the first argument of `z', namely `EQuote'
 In the first argument of `k', namely `z EQuote'
 When typechecking the code for  `Data.Data.gunfold'
   in a standalone derived instance for `Data (Expression k a)':
   To see the code I am typechecking, use -ddump-deriv


 Not sure what you mean here — attoparsec does support unlimited
 lookahead, in the sense that a parser may fail arbitrarily late in the
 input stream, and backtrack to any previous state. Although attoparsec
 is a poor choice for programming language parsing, primarily because
 of the error messages. 
I guess I have an outdated notion of attoparsec. But yes, error messages
seem to be the weak point of attoparsec. Also, the fact that it only
accepts ByteStrings makes it harder (but not impossible, since we can
convert Strings to ByteStrings) to reuse the parser as a QuasiQuoter.
So, I'll rephrase my question: what's the best choice of library for
parsing programming languages nowadays?



Re: [Haskell-cafe] simple parsec question

2013-03-03 Thread Andrey Chudnov

Immanuel,
Since a heading always starts on a new line (and ends with a colon
followed by a carriage return, or just a colon?), I think it might be
useful to first separate the input into lines, then classify each line
depending on whether it is a heading or not, and reassemble them into the
value you need. You don't even need parsec for that.
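
Something along these lines should do (untested; the Section type and
function names are just for illustration):

data Section = Section String String
  deriving (Show)

-- Treat any line that ends in a colon as a headline and collect the
-- following lines as that section's content.
isHeadline :: String -> Bool
isHeadline l = not (null l) && last l == ':'

splitSections :: String -> [Section]
splitSections = go . lines
  where
    go [] = []
    go (l : ls)
      | isHeadline l = let (body, rest) = break isHeadline ls
                       in Section (init l) (unlines body) : go rest
      | otherwise    = go ls  -- skip anything before the first headline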


However, if you really want to use parsec, you can write something like 
(warning, not tested):

many $ liftM2 Section headline content
  where headline = anyChar `manyTill` (char ':' >> spaces >> newline)
        content  = anyChar `manyTill` (try $ newline >> headline)

/Andrey

On 3/3/2013 10:44 AM, Immanuel Normann wrote:
I am trying to parse semi-structured text with parsec that basically
should identify sections. Each section starts with a headline and has
unstructured content - that's all. For instance, consider the
following example text (inside the dashed lines):


---

top 1:

some text ... bla

top 2:

more text ... bla bla


---

This should be parsed into a structure like this:

[Section (Top "top 1") (Content "some text ... bla"), Section (Top "top 2")
(Content "more text ... bla bla")]


Say I have a parser "headline", but the content after a headline
could be anything that is different from what "headline" parses.

What could the "section" parser, making use of "headline", look like?
My idea would be to use the manyTill combinator, but I can't find an
easy solution.




Re: [Haskell-cafe] simple parsec question

2013-03-03 Thread Andrey Chudnov
Immanuel,
I tried but I couldn't figure it out. Here's a gist with my attempts and
results so far: https://gist.github.com/achudnov/f3af65f11d5162c73064
There, 'test' uses my attempt at specifying the parser, 'test2' uses
yours. Note that your attempt wouldn't parse multiple sections -- for
that you need to use 'many section' instead of just 'section' in 'parse'
('parseFromFile' in the original).
I think what's going on is that the lookahead is wrong, but I'm not sure
how exactly. I'll give it another go tomorrow if I have time.
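
For the record, one direction that might work (untested): keep the headline
on a single line, and use lookAhead in content so the next headline isn't
consumed. Reusing your types:

{-# LANGUAGE FlexibleContexts #-}
module Main where

import Text.Parsec
import Text.Parsec.String (parseFromFile)

data Top = Top String deriving (Show)
data Content = Content String deriving (Show)
data Section = Section Top Content deriving (Show)

-- A headline must fit on a single line, so it can no longer swallow
-- the body of a section while looking for the next ':'.
headline :: Stream s m Char => ParsecT s u m Top
headline = fmap Top (manyTill (noneOf "\n") (try (char ':' >> newline)))

-- Content runs until eof or until the next headline, which is only
-- looked at, not consumed.
content :: Stream s m Char => ParsecT s u m Content
content = fmap Content
  (manyTill anyChar (eof <|> (lookAhead (try headline) >> return ())))

section :: Stream s m Char => ParsecT s u m Section
section = do { h <- headline; c <- content; return (Section h c) }

main :: IO ()
main = parseFromFile (many section) "/tmp/test.txt" >>= print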

/Andrey

On 03/03/2013 05:16 PM, Immanuel Normann wrote:
 Andrey,

 Thanks for your attempt, but it doesn't seem to work. The easy part is
 the headline, but the content causes trouble.

 Let me write the code a bit more explicitly, so you can copy and paste it:

 --
 {-# LANGUAGE FlexibleContexts #-}

 module Main where

 import Text.Parsec

 data Top = Top String deriving (Show)
 data Content = Content String deriving (Show)
 data Section = Section Top Content deriving (Show)

 headline :: Stream s m Char => ParsecT s u m Top
 headline = manyTill anyChar (char ':' >> newline) >>= return . Top

 content :: Stream s m Char => ParsecT s u m Content
 content = manyTill anyChar (try headline) >>= return . Content

 section :: Stream s m Char => ParsecT s u m Section
 section = do { h <- headline; c <- content; return (Section h c) }
 --


 Assume the following example text is stored in  /tmp/test.txt:
 ---
 top 1:

 some text ... bla

 top 2:

 more text ... bla bla
 ---

 Now I run the section parser in ghci against the above mentioned
 example text stored in /tmp/test.txt:

 *Main> parseFromFile section "/tmp/test.txt"
 Right (Section (Top "top 1") (Content ""))

 I don't understand the behaviour of the content parser here. Why does
 it return ""? Or perhaps more generally, I don't understand the
 manyTill combinator (though I read the docs).

 Side remark: of course, for this little task it is probably too much
 effort to use parsec. However, my content in fact has an internal
 structure which I would like to parse further, but I deliberately
 abstracted from these internals as they don't affect my above-stated
 problem.

 Immanuel


 2013/3/3 Andrey Chudnov achud...@gmail.com

 Immanuel,
 Since a heading always starts with a new line (and ends with a
 colon followed by a carriage return or just a colon?), I think it
 might be useful to first separate the input into lines and then
 classify them depending on whether it's a heading or not and
 reassemble them into the value you need. You don't even need
 parsec for that.

 However, if you really want to use parsec, you can write something
 like (warning, not tested):
 many $ liftM2 Section headline content
    where headline = anyChar `manyTill` (char ':' >> spaces >> newline)
          content  = anyChar `manyTill` (try $ newline >> headline)

 /Andrey


 On 3/3/2013 10:44 AM, Immanuel Normann wrote:

 I am trying to parse semi-structured text with parsec that
 basically should identify sections. Each section starts with a
 headline and has unstructured content - that's all. For
 instance, consider the following example text (inside the
 dashed lines):

 ---

 top 1:

 some text ... bla

 top 2:

 more text ... bla bla


 ---

 This should be parsed into a structure like this:

 [Section (Top "top 1") (Content "some text ... bla"), Section (Top
 "top 2") (Content "more text ... bla bla")]

 Say I have a parser "headline", but the content after a
 headline could be anything that is different from what
 "headline" parses.
 What could the "section" parser, making use of "headline", look like?
 My idea would be to use the manyTill combinator, but I can't
 find an easy solution.





[Haskell-cafe] Deprecating packages on Hackage

2012-08-04 Thread Andrey Chudnov
Hello. What are the best practices for deprecating packages on Hackage?
I've seen packages marked "DEPRECATED" in the synopsis field on Hackage,
and one could add GHC DEPRECATED pragmas to every module, but is that
the best one can do?
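
For reference, the per-module form of the pragma looks like this (module
name and message are made up):

-- GHC emits a deprecation warning whenever the module is imported.
module Data.OldThing {-# DEPRECATED "Use the new-thing package instead" #-} where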


Thank you in advance,
Andrey Chudnov
