[Haskell-cafe] Parsing workflow

2010-10-31 Thread Nils Schweinsberg

Hi!

I'm having a really hard time writing a correct parser for a small 
language I've developed. I have been trying to write a parser using 
parsec, but always get a lot of error messages like "unexpected \n, 
expected ..., new-line or ..." when trying to run the parser. Then I read 
about the happy parser and really liked the separation of lexing the 
text into tokens and parsing the actual logic behind those tokens. Since 
I couldn't get familiar with the lexer alex, I gave up on the 
alex-happy approach again and went back to parsec. But with that 
lexer-parser idea on my mind, my parser currently looks a lot like a 
lexer. So I came up with the idea of using a combination of parsec and 
happy, where I generate a list of tokens for my text via parsec and 
analyse it with happy.



My questions would be:

- Is this a valid approach?

- What is your workflow on parsing complex data structures?

- What about performance? Since my project is going to be an interpreted 
language, parsing performance might be interesting as well. I've read that 
happy is in general faster than parsec, but what if I combine both of 
them as I said above? I guess that parsing a simple list of tokens 
without any nested parser structures would be pretty fast?


- Do you have any other ideas on how to improve my parser?

- What are your general thoughts on happy vs. parsec?


Thanks for any replies,

Nils
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] ANN: network-2.2.3, merger with network-bytestring

2010-10-31 Thread Johan Tibell
Hi all,

I'd like to announce a new version of the network package,
network-2.2.3. You can install the latest version by running:

cabal update && cabal install network

This version marks the end of the network-bytestring package, which
has now been merged into the network package. This means that
efficient and correct networking using ByteStrings is available as
part of the standard network package.

As part of the merger, two new modules have been added:
Network.Socket.ByteString and Network.Socket.ByteString.Lazy

This release is backwards compatible with the previous release.

Johan


Re: [Haskell-cafe] Edit Hackage

2010-10-31 Thread Tillmann Rendel

Ketil Malde wrote:

Most web-based email archives seem to suck - where can we point to a nice
URL to get an overview of a -cafe thread?


http://thread.gmane.org/gmane.comp.lang.haskell.cafe/82667

  Tillmann


Re: [Haskell-cafe] Parsing workflow

2010-10-31 Thread Ozgur Akgun
I don't know if you've already used it, but Parsec includes some kind of a
lexer through the Language
(http://hackage.haskell.org/packages/archive/parsec/3.1.0/doc/html/Text-Parsec-Language.html)
and Token
(http://hackage.haskell.org/packages/archive/parsec/3.1.0/doc/html/Text-Parsec-Token.html)
modules. You can start by having a look at the makeTokenParser
(http://hackage.haskell.org/packages/archive/parsec/3.1.0/doc/html/Text-Parsec-Token.html#v:makeTokenParser)
function.

On 31 October 2010 15:11, Nils Schweinsberg m...@n-sch.de wrote:

 Hi!

 I'm having a really hard time to write a correct parser for a small
 language I've developed. I have been trying to write a parser using parsec,
 but always get a lot of error messages like unexpected \n, expected ...,
 new-line or... when trying to run the parser. Then I read about the happy
 parser and really liked the separation of lexing the text into tokens and
 parsing the actual logic behind those tokens. Since I couldn't get familiar
 with the lexer alex I gave up on the alex-happy-approach again and went
 back to parsec. But with that lexer-parser idea on my mind, my parser
 currently looks a lot like a lexer. So I came up with the idea of using a
 combination of parsec and happy, where I generate a list of tokens for my
 text via parsec and analyse it with happy.


 My questions would be:

 - Is this a valid approach?

 - What is your workflow on parsing complex data structures?

 - What about performance? Since my project is going to be an interpreted
 language parsing performance might be interesting aswell. I've read that
 happy is in general faster than parsec, but what if I combine both of them
 as I said above? I guess that parsing a simple list of tokens without any
 nested parser structures would be pretty fast?

 - Do you have any other ideas on how to improve my parser?

 - What are your general thoughts on happy vs. parsec?


 Thanks for any replies,

 Nils




-- 
Ozgur Akgun


Re: [Haskell-cafe] Parsing workflow

2010-10-31 Thread Vo Minh Thu
2010/10/31 Nils Schweinsberg m...@n-sch.de:
 Hi!

 I'm having a really hard time to write a correct parser for a small language
 I've developed. I have been trying to write a parser using parsec, but
 always get a lot of error messages like unexpected \n, expected ...,
 new-line or... when trying to run the parser.
 [snip]

Hi,

I can't really tell from your description, but maybe this is because
of the way Parsec works when it deals with alternatives. When you
combine several parsers with e.g. '<|>' or 'choice', an alternative
that can consume some input but fails will make the whole combined
parser fail too. So you have to either left-factor your parsers or use
'try'. See the documentation for 'try' at
http://hackage.haskell.org/packages/archive/parsec/3.1.0/doc/html/Text-Parsec-Prim.html
.
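A tiny self-contained model of this behaviour may help (this is not Parsec itself; it's a toy reimplementation of the commit-on-consumption rule, with invented names):

```haskell
import Data.List (stripPrefix)

-- Toy model of Parsec's choice semantics: a failure is "soft" if the
-- parser consumed no input, and "hard" if it failed after consuming some.
data Reply a = Ok a String | SoftFail | HardFail deriving Show

newtype P a = P { runP :: String -> Reply a }

-- Match a literal string; fail hard if we matched at least one char first.
str :: String -> P String
str s = P $ \inp ->
  case stripPrefix s inp of
    Just rest -> Ok s rest
    Nothing
      | not (null inp) && take 1 s == take 1 inp -> HardFail
      | otherwise                                -> SoftFail

-- Like Parsec's (<|>): the alternative is tried only on a soft failure.
orElse :: P a -> P a -> P a
orElse p q = P $ \inp -> case runP p inp of
  SoftFail -> runP q inp
  r        -> r

-- Like Parsec's 'try': turn a hard failure back into a soft one.
try' :: P a -> P a
try' p = P $ \inp -> case runP p inp of
  HardFail -> SoftFail
  r        -> r

isOk :: Reply a -> Bool
isOk (Ok _ _) = True
isOk _        = False

main :: IO ()
main = do
  let without = str "let" `orElse` str "lexer"
      with    = try' (str "let") `orElse` str "lexer"
  print (isOk (runP without "lexer"))  -- False: "let" consumes "le", then fails
  print (isOk (runP with    "lexer"))  -- True: 'try' rewinds the failure
```

Real Parsec behaves the same way on string "let" <|> string "lexer": the first branch consumes 'l' and 'e' before failing, so the whole choice fails unless the first branch is wrapped in 'try' (or the grammar is left-factored).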

HTH,
Thu


Re: [Haskell-cafe] Parsing workflow

2010-10-31 Thread Stephen Tetley
If you use the Language and Token modules, Parsec gives you something
close to a lexer / parser separation _but_ you can drop down to
character level parsers if you want to - this is very handy. There are
some caveats though - for instance, the number parsers from the Token
module follow Haskell's lexical syntax but you can override them if
necessary.

You can also write separate parsers - this is covered in the (PDF)
Parsec manual available from Daan Leijen's old home page; however, I
usually avoid this as it seems rather cumbersome.

Happy / Alex seems generally much faster than Parsec; I would imagine
the combination of Happy and Parsec would inherit the speed of Parsec.

Personally I prefer Parsec; uu-parsing is also nice, but it didn't have
the equivalent of Parsec's Token parsers, so it was always a bit less
convenient. However, if I'm starting from an LR grammar I'll use Happy
/ Alex.


Re: [Haskell-cafe] Parsing workflow

2010-10-31 Thread Stephen Tetley
On 31 October 2010 15:55, Stephen Tetley stephen.tet...@gmail.com wrote:

 You can also write separate parsers this is covered in the (pdf)
 Parsec manual available from Daan Leijen's old home page, however I
 usually avoid this as it seems rather cumbersome.

D'oh. I meant separate _scanners_ of course; see chapter 2.11,
"Advanced: Separate scanners".


Re: [Haskell-cafe] Parsing workflow

2010-10-31 Thread Nils Schweinsberg

Am 31.10.2010 16:50, schrieb Ozgur Akgun:
 I don't know if you've already used it, but Parsec includes some kind of
 a lexer through the Language
 
http://hackage.haskell.org/packages/archive/parsec/3.1.0/doc/html/Text-Parsec-Language.html 


 and Token
 
http://hackage.haskell.org/packages/archive/parsec/3.1.0/doc/html/Text-Parsec-Token.html 


 modules.
 You can start by having a look at the makeTokenParser
 
http://hackage.haskell.org/packages/archive/parsec/3.1.0/doc/html/Text-Parsec-Token.html#v:makeTokenParser 
function.


Yeah, I've read about the TokenParser, but since my language is not a 
typical programming language, its use is very limited for me. My 
language is basically a combination of a scripting language and a markup 
language like, for example, markdown. And parsing that scripting 
language isn't the difficult part so far...



Re: [Haskell-cafe] Parsing workflow

2010-10-31 Thread Nils Schweinsberg

Am 31.10.2010 16:53, schrieb Vo Minh Thu:

I can't really tell from your description, but maybe this is because
of the way Parsec works when it deals with alternatives. When you
combine several parsers with e.g. '<|>' or 'choice', an alternative
that can consume some input but fails will make the whole combined
parser fail too. So you have to either factorize you parsers or use
the 'try'. See the documentation for 'try' at
http://hackage.haskell.org/packages/archive/parsec/3.1.0/doc/html/Text-Parsec-Prim.html
.



This is exactly what gives me headaches. It's hard to tell where you 
need try/lookAhead and where you don't need them. And I don't really 
feel comfortable wrapping everything into try blocks...



Re: [Haskell-cafe] Parsing workflow

2010-10-31 Thread Ozgur Akgun
On 31 October 2010 16:15, Nils Schweinsberg m...@n-sch.de wrote:

 This is exactly what gives me headaches. It's hard to tell where you need
 try/lookAhead and where you don't need them. And I don't really feel
 comfortable wrapping everything into try blocks...


I always thought this was an obvious decision: when combining 2 parsers `a`
and `b` (i.e. a <|> b), if parser `a` fails without consuming any input do
not wrap it in a try, otherwise do.

Am I missing something?

Ozgur


Re: [Haskell-cafe] Parsing workflow

2010-10-31 Thread Stephen Tetley
On 31 October 2010 16:23, Ozgur Akgun ozgurak...@gmail.com wrote:

 Am I missing something?

Left factoring! :-)


Re: [Haskell-cafe] commutativity

2010-10-31 Thread Brent Yorgey
On Sat, Oct 30, 2010 at 11:55:48AM +0100, Patrick Browne wrote:
 Hi,
 Below are two questions on commutative operations in Haskell.
 
 infixl 5 `com`
 com :: Int -> Int -> Int
 x `com` y  = (x + y)

 commutative com a b = (a `com` b) == (b `com` a)

Note that the first parameter to commutative shadows the previous
definition of com, I don't know if that's what you intended.

 
 -- 1 + 3 == 3 + 1
 -- This gives true by virtue of the value of LHS and RHS being equal
 after the plus operation
 
 -- Question 1
 -- commutative com 1 3
 -- This also gives true. Is it because of commutative equation or
 because of the plus operation?

I'm not quite sure I understand your question.  In any case:

commutative com 1 3
  
  = (1 `com` 3) == (3 `com` 1)   {- definition of 'commutative' -}

  = (1 + 3) == (3 + 1)   {- definition of 'com' -}

  = True
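For what it's worth, the derivation can also be checked by running it; here is a minimal self-contained version, with 'commutative' taking the operation as an explicit argument so nothing is shadowed:

```haskell
com :: Int -> Int -> Int
com x y = x + y

-- The operation is an explicit parameter, so no shadowing occurs.
commutative :: (Int -> Int -> Int) -> Int -> Int -> Bool
commutative op a b = (a `op` b) == (b `op` a)

main :: IO ()
main = do
  print (commutative com 1 3)  -- True: inherited from commutativity of (+)
  print (commutative (-) 1 3)  -- False: 1 - 3 /= 3 - 1
```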

 
 -- Question 2
 -- In Haskell can commutativity be specified as a property of infix
 operations?

No, it can't.

-Brent


Re: [Haskell-cafe] Haskell meeting in Berlin

2010-10-31 Thread Sönke Hahn
On Friday, October 29, 2010 02:07:55 am Daniel van den Eijkel wrote:
 Hi,
 
 thanks to all participants for the funny meeting! I had a lot of fun and
 I'm looking forward to see you again.

I had fun, too. There were twice as many people as I anticipated (4 instead 
of 2), and we outnumbered the Lisp meeting by 100% (I don't think they stick 
to the c-base calendar anymore).

I can confirm the next meeting:

Date: Tuesday, November 9th
Time: from 20:00
Location: c-base, Rungestrasse 20, 10179 Berlin

I was asked by some people from c-base if we could do a talk about Haskell 
for non-Haskellers, which is a good idea, IMHO. We could discuss the details 
at the next meeting.

Sönke







Re: [Haskell-cafe] Parsing workflow

2010-10-31 Thread Malcolm Wallace


On 31 Oct 2010, at 16:15, Nils Schweinsberg wrote:


Am 31.10.2010 16:53, schrieb Vo Minh Thu:

 So you have to either factorize you parsers or use
the 'try'.


This is exactly what gives me headaches. It's hard to tell where you  
need try/lookAhead and where you don't need them. And I don't really  
feel comfortable wrapping everything into try blocks...


Have you considered using a different set of parser combinators, a set  
that is actually composable, and does not require the mysterious  
"try"?  I would recommend polyparse (because I wrote it), but uuparse  
would also be a fine choice.


Regards,
Malcolm


Re: [Haskell-cafe] ANN: network-2.2.3, merger with network-bytestring

2010-10-31 Thread Christopher Done
On 31 October 2010 16:14, Johan Tibell johan.tib...@gmail.com wrote:
 This version marks the end of the network-bytestring package, which
 has now been merged into the network package. This means that
 efficient and correct networking using ByteStrings is available as
 part of the standard network package.

 As part of the merger, two new modules have been added:
 Network.Socket.ByteString and Network.Socket.ByteString.Lazy

Huzzah! I updated my little throttler program to use ByteString.[1]

One thing I'm curious about, which you probably know off the top of
your head: when limiting the speed, I was merely recv'ing and
send'ing at 1024 bytes a pop[2]; however, when I did this at, say,
~500KB/s, Firefox was really slow at receiving it, whereas when I set
it to 4096 it was much faster/more accurate to the speed I intended.
Chrome doesn't seem to care.

I think the difference is that Firefox is single threaded
(select/poll-based?) and has to switch between various jobs while
receiving every tiny block that I'm sending. Chrome probably has the
receiver in a separate process.

So it seems like, short of balancing the package size (at powers of 2)
and the delay to get some ideal throughput, it's easiest and probably
realistically equivalent to set it to 4096 and just rely on an
accurate delay? What do you think? Just curious. This is something I
think I'll look into in the future, I really don't have a good feel
for typical network latency and throughput. It's not a big deal right
now as 56k slow is all I need to test my web app.

[1]: 
http://github.com/chrisdone/throttle/commit/97e03bfc64adc074c9d1f19c2605cb496576c593
[2]: 
http://github.com/chrisdone/throttle/commit/30dc1d970a7c0d43c1b6dc33da9deecf30808114


Re: [Haskell-cafe] Re: Current thinking on CompositionAsDot issue in haskell prime?

2010-10-31 Thread Brandon S Allbery KF8NH

On 10/29/10 20:33 , C. McCann wrote:
 I suggest U+2621.

Did you mean U+2620 SKULL AND CROSSBONES there?

-- 
brandon s. allbery [linux,solaris,freebsd,perl]  allb...@kf8nh.com
system administrator  [openafs,heimdal,too many hats]  allb...@ece.cmu.edu
electrical and computer engineering, carnegie mellon university  KF8NH


Re: [Haskell-cafe] Re: Current thinking on CompositionAsDot issue in haskell prime?

2010-10-31 Thread Brandon S Allbery KF8NH

On 10/29/10 22:30 , wren ng thornton wrote:
 On 10/29/10 8:33 PM, C. McCann wrote:
 I suggest U+2621.
 
 I'm not sure I'd've ever recognized a funny 'z' as a caution sign... :)

You'd have to be a TeX / Metafont user to get that one.  (The LaTeX book
doesn't use that symbol for caution, so I don't count LaTeX users so much.)

-- 
brandon s. allbery [linux,solaris,freebsd,perl]  allb...@kf8nh.com
system administrator  [openafs,heimdal,too many hats]  allb...@ece.cmu.edu
electrical and computer engineering, carnegie mellon university  KF8NH


Re: [Haskell-cafe] Parsing workflow

2010-10-31 Thread Malcolm Wallace

- Is this a valid approach?


It is possible that your Parsec lexer will need to see the entire  
input before it delivers any tokens at all to the Happy parser.  This  
might cause a space problem, depending on how large your inputs are  
likely to be.



- What is your workflow on parsing complex data structures?


I usually write a lexer by hand, as a little state machine delivering  
a lazy list of tokens, then pass the tokens as the input to a grammar  
built from parser combinators.  For larger inputs, I use lazy parser  
combinators to avoid space leaks.
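As an illustration of that workflow, here is a sketch of such a hand-written lexer (token names and the toy input are invented); because it is ordinary list construction, the token list is produced lazily, so a parser can demand tokens incrementally:

```haskell
import Data.Char (isAlpha, isDigit, isSpace)

-- Invented token type for a toy language.
data Token = TIdent String | TNum Int | TSym Char
  deriving (Eq, Show)

-- A small state machine: inspect one character, emit a token,
-- and recurse on the rest. The output list is lazy.
lexer :: String -> [Token]
lexer [] = []
lexer (c:cs)
  | isSpace c = lexer cs
  | isAlpha c = let (name, rest) = span isAlpha (c:cs)
                in TIdent name : lexer rest
  | isDigit c = let (ds, rest) = span isDigit (c:cs)
                in TNum (read ds) : lexer rest
  | otherwise = TSym c : lexer cs

main :: IO ()
main = print (lexer "x = 42 + y")
```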


- What about performance? Since my project is going to be an  
interpreted language parsing performance might be interesting  
aswell. I've read that happy is in general faster than parsec, but  
what if I combine both of them as I said above? I guess that parsing  
a simple list of tokens without any nested parser structures would  
be pretty fast?


Parser combinators are rarely a performance bottleneck in my  
experience.  However, relying on parser combinators to do lexing often  
slows things down too much (too much back-tracking).


Regards,
Malcolm


Re: [Haskell-cafe] ANN: network-2.2.3, merger with network-bytestring

2010-10-31 Thread Christopher Done
(Er, that should be (speed/4), not (speed*4). x4 the block size should
be x4 the delay.)


Re: [Haskell-cafe] Parsing workflow

2010-10-31 Thread Stephen Tetley
On 31 October 2010 16:53, Nils Schweinsberg m...@n-sch.de wrote:
 Am 31.10.2010 17:27, schrieb Stephen Tetley:

 Left factoring! :-)

 Stupid question: Whats that? :)


Actually a good question...

It's a standard grammar transformation - if you have two alternative
productions that share a common prefix

A -> a b
   | a c

Left factoring produces:

A -> A_part (b | c)
A_part -> a

Left factoring can quickly lead to the grammar getting quite
convoluted, but using 'try' can lead to very fragile parsers that stop
working in mysterious ways when you extend them.

Because parser combinators are just functions, you can do questionable
hacks to make left factoring a bit less tedious:

parseA :: Parser A
parseA = do
    a <- a_part
    (parseB a <|> parseC a)

parseB :: a -> Parser A
parseB a = do
  b <- b_part
  return $ B a b

parseC :: a -> Parser A
parseC a = do
  c <- c_part
  return $ C a c


[Haskell-cafe] Generic traversals for GADTs

2010-10-31 Thread Ozgur Akgun
Café,

SYB-style libraries (and especially Uniplate) make it very easy to run
generic traversals (queries/transformations) on ADTs.

 data Expr = ...
 x :: Expr
 f :: Expr -> Expr
 transform :: (Expr -> Expr) -> Expr -> Expr
 transform f x :: Expr -- applies f to x (and its children) in a bottom-up
manner
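A hand-rolled, single-type version of that 'transform' (no generics involved; purely to illustrate the bottom-up application order):

```haskell
-- A concrete Expr and a non-generic bottom-up transform.
data Expr = Lit Int | Add Expr Expr
  deriving (Eq, Show)

-- Rewrite the children first, then apply f to the rebuilt node.
transform :: (Expr -> Expr) -> Expr -> Expr
transform f (Add a b) = f (Add (transform f a) (transform f b))
transform f e         = f e

-- Fold constant additions; bottom-up order folds the whole tree.
simplify :: Expr -> Expr
simplify = transform step
  where
    step (Add (Lit a) (Lit b)) = Lit (a + b)
    step e                     = e

main :: IO ()
main = print (simplify (Add (Lit 1) (Add (Lit 2) (Lit 3))))
```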

Uniplate is very powerful and doesn't require any hand-written type-class
instances to get going reasonably efficiently.

I can simply re-implement the functionality of Uniplate myself, modulo
requiring hand-written instances. (I didn't want to go down that road, since
I don't need it for this argument.)

I did this exercise because I wanted to understand why we don't have
similar generic traversal libraries for GADTs. And it didn't take long :)
I can post my trials but they are really baby steps and they wouldn't be
helpful.

My question is:

Considering how useful generic traversal libraries and GADTs are, has
anybody tried implementing a generic traversal library (like Uniplate
or similar) for GADTs? Is it a hard (or impossible, or not efficient,
or problematic in some other way) thing to implement, or is it just
not a requirement for anyone else?

Best,
Ozgur


Re: [Haskell-cafe] type class design

2010-10-31 Thread Ivan Lazar Miljenovic
On 30 October 2010 22:44, Uwe Schmidt u...@fh-wedel.de wrote:
 Another possible argument: large type classes can look daunting for
 both implementors and users, even if only one or two methods need to
 be defined for a minimal instantiation (I'm tring to work out what to
 do here myself, as I have some typeclasses that for efficiency reasons
 it might be nice to have more methods in the class, but it makes it a
 little overwhelming).

 But putting just a small part of the interface into the class
 does not make the life of a user any simpler.
 The user usually has to know the whole interface of the module.
 Or am I missing something?

Well, if you have a small class then it's possible for you to split up
the different types of functions to be in different modules into more
logical sub-types, etc.

Also, I find the documentation for the list functions in the Prelude
and Data.List easier to read than the ones in ListLike [1], partially
because documentation for class methods is typically smaller
and more compact than that for stand-alone functions.

[1]: http://hackage.haskell.org/package/ListLike

-- 
Ivan Lazar Miljenovic
ivan.miljeno...@gmail.com
IvanMiljenovic.wordpress.com


Re: [Haskell-cafe] How can we make Haskell-Cafe scale? (was: Re: Edit Hackage)

2010-10-31 Thread Ivan Lazar Miljenovic
On 31 October 2010 04:08, Sterling Clover gersh...@gmail.com wrote:
 There's been some grumbling about users migrating from -cafe to Reddit and 
 Stack Overflow in particular. First, as Don has pointed out, the fact is that 
 people are moving away, and that's just the trend.

 But why are people moving? One reason is that Stack Overflow, Reddit, and 
 -cafe all provide *different things*, with features tailored to specific 
 purposes. I'll try to explain what those things are, why they don't threaten, 
 but augment -cafe, and why we need at least one new mailinglist as well.

Are they though?  Consider this:

* Someone subscribes to -cafe; this indicates (IMHO) that they find
Haskell interesting and want to keep up, get involved with the
discussions, etc.

* Someone subscribes or reads the Haskell reddit: they may comment on
articles they find interesting, but it could just be that they find
Haskell technically interesting and want to keep abreast of cool
things that are going on rather than actually using it.

* Someone posts a question on SO: a) they want help for homework, a
project, etc. b) they're curious about some design details (most of
the ones I've seen have been in the former group).

I am of course probably over-generalising this.  I wonder, however, if
people are really moving away, or if these new discussion/question
forums are providing new people a chance to discover Haskell, and if
they do so and start following along whether they are really
interested or just finding it a curiosity.

 1. When Stack Overflow is Better Than -cafe

 Stack Overflow is *better* for a QA format. I can scan for questions I'm 
 competent to answer or interested in answers to (and the rss feed helps with 
 that), and be confident that for the most part there won't be thread derails, 
 and that the title will maintain consistency with the discussion inside. If 
 the title isn't informative, I can be confident that someone with sufficient 
 rights (which are granted by accumulation of karma) will edit it to be 
 informative. Likewise the question itself, if it evolves over time, can be 
 edited to reflect what it should have been to begin with.

Except that some of the greatest/most informative discussions on
-cafe have come out of other questions/threads (e.g. this one).

 And most importantly, if an answer to a question is provided on -cafe, it 
 quickly gets drowned in surrounding traffic. And if an answer becomes out of 
 date, there's no good way to annotate the archives to indicate that. I 
 don't *need* to read all s.o. questions, on the off chance than an answer 
 might become relevant to me later. I know that s.o. provides the right tools 
 so I can find the answer when I need it. And I know that s.o. lets users 
 update pages with new posts, or edits, so that answers don't become stale 
 over time.

People will really go back and update old questions and answers when
best practices/styles/API change?

This to me indicates that we need to focus on documenting such things
and maintaining them on the wiki.

 2. When Reddit is Better Than -cafe

 As for reddit, while some people obviously treat -cafe as an anything goes 
 place to crack jokes or pursue arguments of rather narrow interest, I'm 
 acutely aware that every message sent to this list lands in the mailbox of 
 thousands some multiple thousands of programmers, many very talented, and 
 almost all with better things to do than wade through the massive traffic of 
 -cafe for the few things of interest to them.

 So, generally, I tend to shy away from posting to -cafe. Reddit, on the other 
 hand has (although maybe a bit too much) a bit more of the freewheeling 
 atmosphere of, e.g., the #haskell irc channel. People are going to reddit 
 during a long compile, or on a coffee break. There's an understanding that 
 the conversation will tend to be casual, that there will be occasional trolls 
 and occasional gems, and that the primary content will be in high-quality 
 links to papers, blogposts, articles, etc., while the discussion threads will 
 be a way to shoot the breeze about them. This distinction between content and 
 chatter is in many ways a useful thing.

 Again, the medium of reddit facilitates this type of content. Because 
 messages are shown in their threaded context with relatively high information 
 density, they don't have to quote or indicate their surrounding context. The 
 incremental cost of each message, both to reader and writer, is tiny, and 
 this facilitates a more free-flowing conversational style than over email. 
 Again, upvotes and downvotes (the dreaded ratings and karma) make it easy to 
 see immediately which points others found useful or bogus, and to register 
 simple agreement or disagreement without adding more semantic noise to the 
 mix.

I can't really refute any of this, though I'm still leery of having
such content under the control of a (commercial) third party.

 4. Towards haskell-commun...@haskell.org!

 Of 

Re: [Haskell-cafe] Parsing workflow

2010-10-31 Thread Erik de Castro Lopo
Nils Schweinsberg wrote:

 This is exactly what gives me headaches. It's hard to tell where you 
 need try/lookAhead and where you don't need them. And I don't really 
 feel comfortable wrapping everything into try blocks...

In my experience, try blocks should only be used at the innermost
parts of the parser. That is, avoid wrapping large combinator
functions in try blocks.

Erik
-- 
--
Erik de Castro Lopo
http://www.mega-nerd.com/


Re: [Haskell-cafe] vector-space and standard API for vectors

2010-10-31 Thread Alexey Khudyakov
On Sat, Oct 30, 2010 at 2:07 PM, Henning Thielemann
schlepp...@henning-thielemann.de wrote:
 wren ng thornton schrieb:
 On 10/22/10 8:46 AM, Alexey Khudyakov wrote:
 Hello everyone!

 It's well known that Num  Co type classes are not adequate for vectors
 (I don't mean arrays). I have an idea how to address this problem.

 Conal Elliott wrote very nice set of type classes for vectors.
 (Definition below). I used them for some time and quite pleased. Code is
 concise and readable.

   class AdditiveGroup v where
     zeroV :: v
     (^+^) :: v -> v -> v
     negateV :: v -> v
 [...]
 I'd like to know opinion of haskellers on this and specifically opinion
 of Conal Eliott as author and maintainer (I CC'ed him)

 Looks like you are about to re-implement numeric-prelude. :-)

Only a limited subset. Very limited. Everything is too big to implement.

 Vector (Complex a) is a vector with respect to both 'a' and 'Complex a'.

It is, but it's difficult to encode this. The type class below allows one to
have multiple scalars. But then the type checker cannot infer the type of 2
in the expression `2 *^ vector', so a type signature must be added, which is
hardly usable.

class Module v s where
  (*^) :: s -> v -> v

I think one is forced to convert the real number to a complex one, or
to use operations specific to the data type.
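For comparison, here is a sketch of the associated-type encoding (the V2 type and its instance are invented for illustration): tying the scalar to the vector type lets the checker infer the type of the literal in `2 *^ v'.

```haskell
{-# LANGUAGE TypeFamilies #-}

-- One scalar type per vector type, chosen by an associated type family.
class Module v where
  type Scalar v
  (*^) :: Scalar v -> v -> v

newtype V2 = V2 (Double, Double)
  deriving (Eq, Show)

instance Module V2 where
  type Scalar V2 = Double
  s *^ V2 (x, y) = V2 (s * x, s * y)

main :: IO ()
main = print (2 *^ V2 (1, 3))  -- the literal 2 is inferred as Double
```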


Re: [Haskell-cafe] who's in charge?

2010-10-31 Thread Richard O'Keefe

On 29/10/2010, at 7:33 PM, Gregory Crosswhite wrote:

 Also, this is a complete aside but what the heck.  :-)
 
 Has anyone else been driven crazy by the way that Java code and libraries are 
 documented?  It seems like whenever I try to figure out how to use a piece of 
 Java code, the functionality is spread out over a huge collection of classes 
 and methods so that it is impossible to figure out where things actually 
 happen and how the code is supposed to be used.  Am I correct to perceive 
 this as a general trend in Java, or is it just the projects that I looked at 
 and/or my lack of experience in how Java sources and libraries are organized?

Talking about documentation in the 3rd year software engineering paper I teach,
I give JavaDoc as an example of what not to do.  I give R as an example of
something that works quite well.  It's *examples* that make the difference.

One thing I will say for the JavaMail specification is that it has some useful
examples.





Re: [Haskell-cafe] who's in charge?

2010-10-31 Thread Gaius Hammond


On 29 Oct 2010, at 07:33, Gregory Crosswhite wrote:


Also, this is a complete aside but what the heck.  :-)

Has anyone else been driven crazy by the way that Java code and  
libraries are documented?  It seems like whenever I try to figure  
out how to use a piece of Java code, the functionality is spread out  
over a huge collection of classes and methods so that it is  
impossible to figure out where things actually happen and how the  
code is supposed to be used.  Am I correct to perceive this as a  
general trend in Java, or is it just the projects that I looked at  
and/or my lack of experience in how Java sources and libraries are  
organized?




Yes, I had a bit of a rant about that on my blog recently: 
http://gaiustech.wordpress.com/2010/09/27/api-documentation/



It's by no means only Java; automated documentation generators are the  
problem. They add no more value than a smart editor could give you  
anyway, or reading the comments. You need an API reference *and*  
sample code *and* a high level introduction and explanation of intent  
to count as documentation in my book.




Cheers,



G



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] who's in charge?

2010-10-31 Thread Daniel Santa Cruz
 Talking about documentation in the 3rd year software engineering paper I 
 teach,
 I give JavaDoc as an example of what not to do.  I give R as an example of
 something that works quite well.  It's *examples* that make the difference.

 One thing I will say for the JavaMail specification is that it has some useful
 examples.

+1 to documentation with some samples.  Back in my PHP days, I used to
love the fact that the PHP docs had not only examples by the
developer, but also comments with examples from the community.

Daniel
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] vector-space and standard API for vectors

2010-10-31 Thread Alexey Khudyakov
On Wed, Oct 27, 2010 at 2:53 AM, wren ng thornton w...@freegeek.org wrote:
 On 10/26/10 8:51 AM, Alexey Khudyakov wrote:

 On 24.10.2010 03:38, wren ng thornton wrote:

 I don't care much about the name of the class, I'd just like support for
 monoids, semirings,... when they lack a group, ring,... structure.

 Then what about following type class hierarchy? I think it supports
 those structures. Only restriction is that it forces one to have both
 left and right modules. It's possible to split them but I think it will
 be to painful for vector spaces over R and C.

 class Module v where
 type Scalar v :: *
 (*^) :: Scalar v → v → v
 (^*) :: v → Scalar v → v
 (^*) = flip (*^)

 Is there any good reason for forcing them together? Why not, use the
 hierarchy I proposed earlier? If you want to reduce the clutter in type
 signatures for real and complex vector spaces then just add to my previous

    -- Or just call it Module if preferred.
    class (LeftModule v, RightModule v) => AssociativeModule v where
        -- Law: (^*) == flip (*^)

 This way, when (not if) people want nonassociative modules the classes are
 already there. The additional overhead in defining an associative module is
 only three lines when using default implementation; two lines otherwise:

    type instance Scalar Foo = Bar
    instance AssociativeModule Foo where
    instance RightModule Foo where
        (^*) = flip (^*)
    instance LeftModule Foo where
        (*^) = ...
 vs

    instance Module Foo where
        type Scalar Foo = Bar
        (*^) = ...

 And once it's defined, the usage is the same: just require AssociativeModule
 and you'll pull in both (*^) and (^*).
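For concreteness, the quoted hierarchy can be spelled out as a compilable sketch. The open type family and the `[Int]` instance below are illustrative assumptions for demonstration, not part of any released library:

```haskell
{-# LANGUAGE TypeFamilies, FlexibleContexts, FlexibleInstances #-}

-- The scalar type of a module, shared by all the classes.
type family Scalar v :: *

class LeftModule v where
  (*^) :: Scalar v -> v -> v   -- left action

class RightModule v where
  (^*) :: v -> Scalar v -> v   -- right action

-- Law: (^*) == flip (*^)
class (LeftModule v, RightModule v) => AssociativeModule v

-- Illustrative instance: Int-scaled integer lists.
type instance Scalar [Int] = Int

instance LeftModule [Int] where
  s *^ xs = map (s *) xs

instance RightModule [Int] where
  (^*) = flip (*^)             -- the two-line associative case

instance AssociativeModule [Int]
```

As the text says, the associative case costs only the `flip` line plus the empty `AssociativeModule` instance.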

The main reason is that it complicates the creation of instances for types
whose multiplication is associative and commutative. The programmer must
write three instances instead of one, and they must satisfy some law. That
leads to code which is more difficult to understand and contains more bugs
(mostly dumb ones).

This is a tradeoff between usability and generality. Modules are much less
frequent and much less well known than vector spaces; I, for example, didn't
know about their existence before. I think the difficulties should be pushed
onto the people
who define instances for non-associative modules.

One possibility is to add separate type classes for left and right modules,
and to require that if a type has both Module and LeftModule instances, then
(*^^) == (*^)

  class Module v where
    (^*) :: v -> Scalar v -> v
    (*^) :: Scalar v -> v -> v

Of course, code that is written to work with left/right modules won't work
with associative modules.



 We already know that there are noncommutative modules/vectorspaces of
 interest (e.g., modules over quaternions and modules over graph paths), why
 not support them from the beginning? It seems like you're going out of your
 way to exclude things that would be trivial to include. This is exactly why
 this is my standard complaint against the various proposals out there for
 new numeric hierarchies. People who are used to only using R^n think the
 proposals are just fine, but none of the proposals capture the structures I
 work with daily. Which means the new proposals are no better than the
 Prelude for me.

I have to admit that I work with R^n and therefore mostly care about this use
case. I think a good set of type classes should be small (IMHO five or so).
It should not require writing unnecessary code in common cases. Probably this
is a case where the rule "one size doesn't fit all" applies.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How can we make Haskell-Cafe scale? (was: Re: Edit Hackage)

2010-10-31 Thread Christopher Done
On 31 October 2010 04:08, Sterling Clover gersh...@gmail.com wrote:
 How can we make Haskell-Cafe scale?
 There's been some grumbling about users migrating from -cafe to Reddit and 
 Stack Overflow in particular. First, as Don has pointed out, the fact is that 
 people are moving away, and that's just the trend.

I really don't think the amount of Haskell discussion is so large on
Haskell-Cafe. I've seen maybe 5 new threads started all day. If
complaints of being swamped by messages are the cause, then it seems
people tend not to know how to ignore a thread with their client. For
me, today has been like a ghost town in Haskell-Cafe, but I set
threads I don't want to read to ignore. Regardless, I do think SO,
Reddit, Planet Haskell, Meetup, HaskellWiki, hpaste, Github,
Haskellers, the IRC, etc etc etc. serve their specific purpose well,
and using them is good. But the Haskell community really is not that
big that we need to further partition it and spread it thin. How many
different channels can people be bothered to monitor?

On 31 October 2010 22:43, Ivan Lazar Miljenovic
ivan.miljeno...@gmail.com wrote:
 As for the large number of posts: as far as I'm aware most email
 clients allow you to say mark all messages in this thread as read
 under some guise or another so you don't have to actually read each
 email just to stop your client telling you that you have new mail.

GMail, for example, has the mute feature, to stop emails being shown
in the inbox for a given conversation (thread).
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Reference for technique wanted

2010-10-31 Thread Richard O'Keefe
There's a long-known technique in functional languages
where
[x1,...,xn] = \tail -> x1:...:xn:tail
xs ++ ys    = f . g
xs          = f []

A correspondent mentioned to me that he couldn't find a reference
to the idea (which I gather he had independently rediscovered).
I know I've read about it somewhere.  Can anyone provide a reference?


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Reference for technique wanted

2010-10-31 Thread Gregory Collins
Richard O'Keefe o...@cs.otago.ac.nz writes:

 There's a long-known technique in functional languages
 where
   [x1,...,xn] = \tail -> x1:...:xn:tail
   xs ++ ys    = f . g
   xs          = f []

 A correspondent mentioned to me that he couldn't find a reference
 to the idea (which I gather he had independently rediscovered).
 I know I've read about it somewhere.  Can anyone provide a reference?

They're called difference lists:
http://hackage.haskell.org/packages/archive/dlist/latest/doc/html/Data-DList.html

They allow O(1) snoc and append. I use them all the time!
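For readers who haven't seen the trick, the whole thing fits in a few lines. This is a minimal sketch of what Data.DList packages up; the names here are ad hoc:

```haskell
-- Represent the list xs by the function (xs ++), so that append
-- becomes function composition and costs O(1).
newtype DList a = DList ([a] -> [a])

fromList :: [a] -> DList a
fromList xs = DList (xs ++)

toList :: DList a -> [a]
toList (DList f) = f []        -- O(n), done once at the end

append :: DList a -> DList a -> DList a
append (DList f) (DList g) = DList (f . g)   -- O(1)

snoc :: DList a -> a -> DList a
snoc (DList f) x = DList (f . (x:))          -- O(1)
```

Building left-nested appends this way avoids the quadratic behaviour of repeated (++) on ordinary lists.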

G
-- 
Gregory Collins g...@gregorycollins.net
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Reference for technique wanted

2010-10-31 Thread Derek Elkins
Well, you can get "A Novel Representation of Lists and Its Application
to the Function 'Reverse'" by John Hughes online, published in 1986,
which is referenced by Wadler's 1987 "The Concatenate Vanishes" and
references Richard Bird's 1984 paper "Transformational programming and
the paragraph problem", though I'd be quite surprised if that was the
first place the representation appeared in print.

On Sun, Oct 31, 2010 at 6:51 PM, Richard O'Keefe o...@cs.otago.ac.nz wrote:
 There's a long-known technique in functional languages
 where
        [x1,...,xn]     = \tail -> x1:...:xn:tail
        xs ++ ys        = f . g
        xs              = f []

 A correspondent mentioned to me that he couldn't find a reference
 to the idea (which I gather he had independently rediscovered).
 I know I've read about it somewhere.  Can anyone provide a reference?


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Reference for technique wanted

2010-10-31 Thread Richard O'Keefe

On 1/11/2010, at 12:10 PM, Derek Elkins wrote:

 Well, you can get "A Novel Representation of Lists and Its Application
 to the Function 'Reverse'" by John Hughes online, published in 1986,
 which is referenced by Wadler's 1987 "The Concatenate Vanishes" and
 references Richard Bird's 1984 paper "Transformational programming and
 the paragraph problem", though I'd be quite surprised if that was the
 first place the representation appeared in print.

Thank you.  I'd seen Wadler's paper and forgotten it.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Reference for technique wanted

2010-10-31 Thread Richard O'Keefe

On 1/11/2010, at 12:05 PM, Gregory Collins wrote:
 They're called difference lists:

As a matter of fact the original context was precisely
difference lists in logic programming.

 http://hackage.haskell.org/packages/archive/dlist/latest/doc/html/Data-DList.html

Thank you.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Reference for technique wanted

2010-10-31 Thread Derek Elkins
On Sun, Oct 31, 2010 at 7:27 PM, Richard O'Keefe o...@cs.otago.ac.nz wrote:

 On 1/11/2010, at 12:05 PM, Gregory Collins wrote:
 They're called difference lists:

 As a matter of fact the original context was precisely
 difference lists in logic programming.

 http://hackage.haskell.org/packages/archive/dlist/latest/doc/html/Data-DList.html

Difference lists in logic programming are almost the opposite of
functional representations of lists and I find the move to try to
nominally connect them worse than useless.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Data records. Default values initialisation. Reading datatype from string.

2010-10-31 Thread Siim Haugas
Hello,

I need some help with data records. Normally I can use the `read' function to
convert a string to some data type, e.g. `(read "somestring") :: Type'. However,
sometimes I'd like to read a record with some uninitialised fields. I have
also set up default values for my data record, for example: `defrec =
{a=0,b=1, ..., z=0}'. Now I would like to override some values, so I have a
string "defrec{l=100,r=88}", but it seems that it can't be done with read:
`(read "defRec{l=100,r=88}") :: defRec'.
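For contrast, record-update syntax does work when written directly in source code rather than handed to `read`, which only parses the derived Show output. A sketch with a hypothetical two-field record:

```haskell
-- Hypothetical record standing in for the larger one in the question.
data Rec = Rec { a :: Int, b :: Int }
  deriving (Show, Read, Eq)

-- Default values, as in the question.
defrec :: Rec
defrec = Rec { a = 0, b = 1 }

-- Update syntax in source code is fine:
overridden :: Rec
overridden = defrec { b = 100 }

-- But a derived Read instance accepts only the concrete Show-style
-- syntax, e.g.  read "Rec {a = 0, b = 100}" :: Rec
-- and not an update expression such as "defrec{b=100}".
```

So any "defaults plus overrides" syntax has to be parsed by hand (or with a custom parser) and applied via record update.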

Any suggestions?

Siim
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Data records. Default values initialisation. Reading datatype from string.

2010-10-31 Thread Siim Haugas
Oh sorry, I had some mistakes in my previous mail..
`defrec = {a=0,b=1, ..., z=0}' should be `defrec =
SomeDataConstructor{a=0,b=1, ..., z=0}' .

Siim

On Mon, Nov 1, 2010 at 2:31 AM, Siim Haugas hau...@gmail.com wrote:

 Hello,

 I need some help with data records. Normally I can use the `read' function to
 convert a string to some data type, e.g. `(read "somestring") :: Type'. However,
 sometimes I'd like to read a record with some uninitialised fields. I have
 also set up default values for my data record, for example: `defrec =
 {a=0,b=1, ..., z=0}'. Now I would like to override some values, so I have a
 string "defrec{l=100,r=88}", but it seems that it can't be done with read:
 `(read "defRec{l=100,r=88}") :: defRec'.

 Any suggestions?

 Siim

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Am I using type families well?

2010-10-31 Thread Yves Parès
Hello,

I'm trying to make a simple monad (built on operational's ProgramT) for
resource loading.
I have classes featuring type families :

{-# LANGUAGE TypeFamilies, FlexibleContexts, GADTs #-}

-- | A ResourceId is something that identifies a resource.
-- It should be unique for one resource, and should be used to find the
location (the path) of the resource,
-- possibly by using a configuration datatype
class (Ord id) => ResourceId id where
  type LocOf id
  type CfgOf id
  retrieveLoc :: CfgOf id -> id -> LocOf id

-- | Class describing a resource of type @rsc@
class (ResourceId (IdOf rsc)) => Resource rsc where
  type IdOf rsc
  load   :: LocOf (IdOf rsc) -> IO (Maybe rsc)
-- ^ Called when a resource needs to be loaded
  unload :: rsc -> IO ()
-- ^ Idem for unloading

-- | Then, the operations that the loader can perform
data EDSL id a where
  Load :: id -> EDSL id ()
  IsLoaded :: id -> EDSL id Bool
  Unload   :: id -> EDSL id ()

-- | The loader monad itself
type RscLoader rsc m a = ProgramT (EDSL (IdOf rsc)) m a

-- | And finally, how to run a loader
runLoader :: (Monad m, Resource rsc) => CfgOf (IdOf rsc) -> RscLoader rsc m
a -> m a
runLoader cfg loader = viewT loader >>= eval M.empty
  where
    eval :: (Monad m, Resource rsc) =>
            M.Map (IdOf rsc) rsc
         -> ProgramViewT (EDSL rsc) m a
         -> m a
    eval _    (Return x) = return x
    eval rscs (instr :>>= k) = case instr of
      Load id -> do let loc = retrieveLoc cfg id
                    -- open and load from loc will go here
                    viewT (k ()) >>= eval rscs
      -- -- -- Other cases yet to come...



Well, there is no way I can get it type-check. I think I must be misusing
the type families (I tried with multi-param typeclasses and functional
dependencies, but it ends up to be the same kind of nightmare...).
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] vector-space and standard API for vectors

2010-10-31 Thread wren ng thornton

On 10/31/10 6:36 PM, Alexey Khudyakov wrote:

On Wed, Oct 27, 2010 at 2:53 AM, wren ng thornton wrote:

Is there any good reason for forcing them together? Why not, use the
hierarchy I proposed earlier?
[...]


The main reason is that it complicates the creation of instances for types
whose multiplication is associative and commutative. The programmer must
write three instances instead of one, and they must satisfy some law. That
leads to code which is more difficult to understand and contains more bugs
(mostly dumb ones).


Regardless of the class API, instances must obey the law (or else have 
buggy results), so the law is uninteresting. We implicitly use such laws 
all the time, but usually we do so without having a class constraint to 
make our assumptions explicit. Isn't making assumptions explicit part of 
the reason for using a strongly typed language?


And as I mentioned previously, the burden of implementation is 2 or 3 
*lines* of code. Talking about the number of class instances people must 
write is obfuscating the fact that it's trivial to add two lines of 
code. And this is code that gets written once, in a library.



This is a tradeoff between usability and generality.


Your proposal, like so many I've seen before, is unusable because it 
lacks the necessary generality. The complexity of my proposal is 
insignificant: it requires only 2~3 lines of code, and an acknowledgment 
that there are structures other than vectorspaces.



Modules are much less
frequent and much less well known than vector spaces.


Modules are more frequent than vector spaces, by definition, because 
every vector space is a module. Since you're not requiring 
Fractional(Scalar v), there isn't even any difference between them in 
the class API.


Your claim is like saying that we shouldn't have support for Applicative 
because Monads are more common. Again, there are more applicative 
functors than monads, the only cognitive overhead of having the 
Applicative class is acknowledging that there are structures other than 
monads, and adding Applicative enables a helpful style of programming 
which there is no reason to exclude.




One possibility is to add separate type classes for left and right modules,
and to require that if a type has both Module and LeftModule instances, then
(*^^) == (*^)

   class Module v where
     (^*) :: v -> Scalar v -> v
     (*^) :: Scalar v -> v -> v

Of course, code that is written to work with left/right modules won't work
with associative modules.


Which my proposal fixes by making associative modules a subclass of both 
left- and right-modules.


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Reference for technique wanted

2010-10-31 Thread wren ng thornton

On 10/31/10 7:10 PM, Derek Elkins wrote:

Well, you can get "A Novel Representation of Lists and Its Application
to the Function 'Reverse'" by John Hughes online, published in 1986,
which is referenced by Wadler's 1987 "The Concatenate Vanishes" and
references Richard Bird's 1984 paper "Transformational programming and
the paragraph problem", though I'd be quite surprised if that was the
first place the representation appeared in print.


Barring the "worse than useless" appellation, the technique has been 
around in logic programming (and classic Lisp, IIRC) for a few decades 
longer. I've always heard it referred to as part of the folklore of 
logic/functional programming though, so I'm not sure of any earlier 
print references off-hand.


(Though I find it curious that you think the logic version is so 
different...)


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Reference for technique wanted

2010-10-31 Thread Derek Elkins
On Sun, Oct 31, 2010 at 9:02 PM, wren ng thornton w...@freegeek.org wrote:
 On 10/31/10 7:10 PM, Derek Elkins wrote:

 Well, you can get A Novel Representation of Lists and Its Application
 to the Function 'Reverse' by John Hughes online published in 1986
 which is referenced by Wadler's 1987 The Concatenate Vanishes and
 references Richard Bird's 1984 paper Transformational programming and
 the paragraph problem though I'd be quite surprised if that was the
 first place the representation appeared in print.

 Barring the "worse than useless" appellation, the technique has been around
 in logic programming (and classic Lisp, IIRC) for a few decades longer. I've
 always heard it referred to as part of the folklore of logic/functional
 programming though, so I'm not sure of any earlier print references
 off-hand.

I agree that difference lists have been around quite a bit longer, but
they are something different.

 (Though I find it curious that you think the logic version is so
 different...)

I'm curious as to how you find them similar.  Beyond both of them
being ways to get fast appends in a declarative language, they have no
similarities.  To begin, Prolog is a first order language so it
clearly can't represent functional lists.  Meanwhile, difference lists
rely on, at least, single assignment variables which Haskell does not
have as a language feature so Haskell can't represent difference lists
outside of a monad.  The use of logic variables requires a linear
use of a difference list within a branch of non-deterministic
computation, i.e. difference lists are not persistent.  Functional
lists clearly are.  As a simple example, if xs is a functional list, I
can return a pair (xs . ys, xs . zs), if I tried to do that with
difference lists I would be unifying ys and zs.  If I -really- wanted
to do that with difference lists I would have to use a difference list
copy predicate to do it.  Functional lists are an opaque data
structure.  If I want to know what the head of a functional list is, I
have to first turn it into a real list and then take the head.  With
difference lists, I can just look at the head, and this is cheap and
easy.  Both representations have junk, though I'm inclined to say the
functional representation has quite a bit more.  At any rate, the junk
is rather different.  The junk of a the functional representation is
any [a] - [a] function that can't be put into the form (xs ++) for
some list xs.  For example, reverse.  Difference lists are pairs of
lists where the latter is a suffix of the former.  The junk in the
typical representation, i.e. just pairs of lists, are pairs that don't
meet that criterion.  The idea behind difference lists is to represent
the list xs as the pair (xs ++ ys, ys), i.e. xs ++ ys - ys = xs is
where the name difference list comes from.  One way of motivating
the functional representation is that it is nothing more than the
natural embedding of the list monoid into its monoid of endofunctions;
for every set X, X -> X is a monoid, and for every monoid (M,*,1),
curry (*) is a monoid homomorphism from M to (M -> M).  I'm unsure how
to apply either of these views to the other representation.  In fact,
difference lists are nothing more than the normal imperative way of
handling appends quickly for singly-linked lists, with NULL replaced
by an unbound variable.

To simplify, the difference in persistence between the two
representations is enough to consider them very different as it makes
a dramatic difference in interface.
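The persistence point can be made concrete with the bare [a] -> [a] representation. In the sketch below (names are ad hoc), pairUp reuses the same xs in two independent results, which a single-assignment difference list could only do via an explicit copy predicate:

```haskell
-- A list xs represented as the function (xs ++).
type FList a = [a] -> [a]

fromL :: [a] -> FList a
fromL = (++)

toL :: FList a -> [a]
toL f = f []

-- xs participates in two independent concatenations; no unification
-- of "holes" happens, so the structure is fully persistent.
pairUp :: FList a -> FList a -> FList a -> ([a], [a])
pairUp xs ys zs = (toL (xs . ys), toL (xs . zs))
```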
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Reference for technique wanted

2010-10-31 Thread Richard O'Keefe

On 1/11/2010, at 2:02 PM, wren ng thornton wrote:
 
 Barring the worse than useless appellation, the technique has been around 
 in logic programming (and classic Lisp, IIRC) for a few decades longer. I've 
 always heard it referred to as part of the folklore of logic/functional 
 programming though, so I'm not sure of any earlier print references off-hand.
 
 (Though I find it curious that you think the logic version is so different...)


The logic programming version
  uses a SINGLE data structure for lists and differences, so that
  + converting from the difference representation to the plain 
representation
involves no more than binding a single (logic) variable
  + does not involve ANY overheads compared with building a list directly
  + can easily be traversed while still under construction, as in the
queue(s(...s(z)...), [X1,...,Xn|Hole], Hole)
representation for a queue, which allows O(1) worst case enqueue and 
dequeue.
enqueue(X, queue(N,List,[X|Hole]),queue(s(N),List,Hole)).
dequeue(X, queue(s(N),[X|List],Hole), queue(N,List,Hole)).
   (Fernando Pereira taught me this one.)
  - can only be used once, after which the hole has *gone*.

The closure version
  uses TWO data structures, so that
  - converting from the difference representation to the plain list 
representation
involves building a whole new data structure at O(n) time and space cost
  - requires building closures which have no other reason to exist, which may
retain references to data that would otherwise be dead
  - cannot be traversed while under construction
  + can be used as many times as you want

I don't see the closure technique used in logic programming at all.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Equivalent of withForeignPtr for System.Mem.Weak?

2010-10-31 Thread Matthew Steele
I have an object to which I have added one or more finalizers via  
addFinalizer from System.Mem.Weak.  I would like to have a function  
that allows me to make use of the object within a block of IO code,  
and guarantee that the finalizer(s) will not be called during the code  
block -- sort of like withForeignPtr, but for arbitrary objects.  Does  
such a function exist, and if not, how could I write one?


I can imagine writing something like:

\begin{code}
withObject :: a -> IO b -> IO b
withObject obj action = do
  result <- action
  touchObject obj  -- touchObject :: a -> IO ()
  return result
\end{code}

However, I don't know how I should implement touchObject and be  
confident that it won't ever be optimized out.  Any suggestions?
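One way to get such a guarantee is GHC's touch# primop, which is (as far as I know) essentially how the primitive package implements its touch. This is a GHC-specific sketch; touchObject and withObject are just the names from the message above:

```haskell
{-# LANGUAGE MagicHash, UnboxedTuples #-}
import GHC.Exts (touch#)
import GHC.IO (IO (..))

-- Keeps its argument alive, from the GC's point of view, until this
-- point in the IO sequence; the primop itself compiles to no code.
touchObject :: a -> IO ()
touchObject x = IO (\s -> (# touch# x s, () #))

withObject :: a -> IO b -> IO b
withObject obj action = do
  result <- action
  touchObject obj  -- finalizers cannot run before this point
  return result
```

Because touch# is a primop threaded through the IO state token, the simplifier will not float it away the way a plain `seq` on the object could be.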


Cheers,
-Matt

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Forall and type synonyms in GHC 7.0

2010-10-31 Thread Mario Blažević
Before uploading a new version of my project on Hackage, I decided to
future-proof it against GHC 7.0. I ran into several compile errors caused by
the changes in let generalization, but these were easy to fix by adding
extra type annotations. But then I ran into another problem that I can't fix
so easily. Here is its trimmed-down reproduction:

 {-# LANGUAGE RankNTypes #-}

 module Test where

 data Component c = Component {with :: c}

 pair1 :: (Bool -> c1 -> c2 -> c3) -> Component c1 -> Component c2 ->
Component c3
 pair1 combinator (Component c1) (Component c2) = Component (combinator
True c1 c2)

 type PairBinder m = forall x y r. (x -> y -> m r) -> m x -> m y -> m r

 pair2 :: Monad m => (PairBinder m -> c1 -> c2 -> c3) -> Component c1 ->
Component c2 -> Component c3
 pair2 combinator = pair1 (combinator . chooseBinder)

 chooseBinder :: Monad m => Bool -> PairBinder m
 chooseBinder right = if right then rightBinder else leftBinder

 leftBinder :: Monad m => PairBinder m
 leftBinder f mx my = do {x <- mx; y <- my; f x y}

 rightBinder :: Monad m => PairBinder m
 rightBinder f mx my = do {y <- my; x <- mx; f x y}

   The general idea here, if you're intrigued, is that pair1 belongs to a
generic module that packages things it knows nothing about into Components.
The remaining definitions belong to a client of the generic module, and
pair2 is a specialization of pair1 to components that have something to do
with monads.

   Now this little test compiles fine with GHC 6.12.1, but GHC
7.0.0.20101029 reports the following error in the pair2 definition:

TestForall.lhs:13:42:
Couldn't match expected type `forall x y r.
  (x -> y -> m r) -> m x -> m y -> m r'
with actual type `(x -> y -> m1 r) -> m1 x -> m1 y -> m1 r'
Expected type: Bool -> PairBinder m
  Actual type: Bool -> (x -> y -> m1 r) -> m1 x -> m1 y -> m1 r
In the second argument of `(.)', namely `chooseBinder'
In the first argument of `pair1', namely
  `(combinator . chooseBinder)'

I've tried adding extra type annotations without making any progress. At
this point I'm beginning to suspect I ran into a bug in GHC 7.0, but I can't
find it in GHC Trac; the only ticket that looks similar is #4347, but that
one works for me. Is this a bug? If not, how do I make my code compile?
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Equivalent of withForeignPtr for System.Mem.Weak?

2010-10-31 Thread Antoine Latter
On Sun, Oct 31, 2010 at 10:14 PM, Matthew Steele mdste...@alum.mit.edu wrote:
 I have an object to which I have added one or more finalizers via
 addFinalizer from System.Mem.Weak.  I would like to have a function that
 allows me to make use of the object within a block of IO code, and guarantee
 that the finalizer(s) will not be called during the code block -- sort of
 like withForeignPtr, but for arbitrary objects.  Does such a function exist,
 and if not, how could I write one?

 I can imagine writing something like:

 \begin{code}
 withObject :: a -> IO b -> IO b
 withObject obj action = do
  result <- action
  touchObject obj  -- touchObject :: a -> IO ()
  return result
 \end{code}


CC'ing GHC-users list.

The 'primitive' package has a general purpose 'touch' function:

http://hackage.haskell.org/packages/archive/primitive/0.3.1/doc/html/Control-Monad-Primitive.html#v:touch

But it doesn't state what guarantees are provided by the function.

Antoine
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Tsuru Capital Tokyo is looking for Southern Hemisphere summer (Dec 10 - Feb 11) interns

2010-10-31 Thread employment
We are looking for an intern that has experience programming in a functional
language, and is familiar with Haskell and web development.  Some experience
with Linux/Unix system administration is preferred.  Familiarity with Git
would be useful.

The position is in Tokyo in a small English speaking team.  We do
algorithmic options market-making.

The exact role is not fixed yet but we are looking particularly for someone
to help with making a simulation scheduling framework and results
visualisation in a web application.

Compensation would be a small package including flights and accommodation.
If interested please send a resume to

j...@tsurucapital.com

You can see more details about the company at

tsurucapital.com  and at the company page on LinkedIn
http://tsurucapital.com/en/jobs.html#programmer
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Reference for technique wanted

2010-10-31 Thread wren ng thornton

On 10/31/10 10:26 PM, Richard O'Keefe wrote:

On 1/11/2010, at 2:02 PM, wren ng thornton wrote:

(Though I find it curious that you think the logic version is so different...)


The logic programming version
   uses a SINGLE data structure for lists and differences, so that
   + converting from the difference representation to the plain 
representation
 involves no more than binding a single (logic) variable
   + does not involve ANY overheads compared with building a list directly
   + can easily be traversed while still under construction,

- can only be used once, after which the hole has *gone*.

Sure, but all of these come from the differences in how functional 
languages and logic languages define their notions of variable.


In logic programming you're allowed to play with open terms, so the 
ability to move around difference lists before they're complete is the 
same as being able to manipulate any other open term. Whereas in 
functional languages you (typically) are not allowed to have open terms, 
and terms with variables in them must be guarded by lambdas which 
prohibit you looking underneath them. One benefit of lambdas is that 
they lift the names of unknown substructure up to the top of an 
expression; the difflist(xs,hole) structure is just there to give that 
same lifting of names, since otherwise we wouldn't have any way of 
knowing that xs is unground nor how to make it (more) ground[1]. The 
various other differences regarding linear usage just come down to the 
fact that difflist(xs,hole) is a closure, not a function. But there's 
nothing to prevent functionalizing it, if the logic language provides a 
mechanism for generating fresh variables.


When I look at difference lists in functional languages I see them in 
the same light: as a mechanism for manipulating open terms. The details 
of what manipulations are allowed is different (mostly because of 
function vs closure), but that's to be expected given the paradigm 
differences. But then I take the algorithmic idea of using open terms 
for fast concatenation as primary and the implementation details of what 
open terms are as secondary.



[1] Assuming a pure logic language, which Prolog is not.

--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe