Re: [Haskell-cafe] Some thoughts on Type-Directed Name Resolution

2012-02-03 Thread AntC
Kevin Quick quick at sparq.org writes:

 
  Currently under H98:
 f.g    -- (both lower case, no space around the dot)
  Is taken as function composition -- same as (f . g).
 f.  g  -- is taken as func composition (f . g)
 f  .g  -- is taken as func composition (f . g)
 
 And so it is.  Could have sworn these weren't accepted, but clearly I'm  
 wrong.  Thanks for pointing this out.
 

On a bit more digging, I'm scaring myself. These are both valid (H98):

 Data.Char.toUpper.Prelude.head.Prelude.tail $ "hello"   -- Strewth!
 "hello".$Prelude.tail.$Prelude.head.$Data.Char.toUpper
 -- using (.$) = flip ($) as fake dot notation
GHCi or Hugs ==> 'E'

The first example is good in that you can mix qualified names in with dot 
notation, and the lexer can bind the module name tighter than dot-as-function-
composition.

It's bad that not only are we proposing changing the meaning of dot, we're 
also changing the direction it binds. If you put in the parens:

 (Data.Char.toUpper.(Prelude.head.(Prelude.tail)))  "hello"
 (("hello".$Prelude.tail).$Prelude.head).$Data.Char.toUpper

Or perhaps not so bad, left-to-right thinking?

Another syntax change about dot-notation is that it binds tighter **than even 
function application**:

  map toUpper customer.lastName

Desugars to:

  map toUpper (lastName customer)

Compare if that dot were function composition:

  (map toUpper customer) . lastName    -- of course this isn't type-valid


But wait! There's more! We can make it worse! A field selector is just a 
function, so I can select a field and apply a function all in one string of 
dots:

  customer.lastName.tail.head.toUpper   -- Yay!!
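For concreteness, going by the desugaring rule above (dot binds tightest, chaining 
left to right), that line is just nested prefix application:

    toUpper (head (tail (lastName customer)))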

 
 I was trying to ... *but* also  
 indicate that I specifically want the field selector rather than some  
 arbitrary f.  I wanted to extract the field f of every record in recs but  
 clearly indicate that f was a field selector and not a free function.
 
 And this is finally our difference.  I had wanted the no-space preceding  
 dot syntax (.f) to specifically indicate I was selecting a field.  ...

You seem to be not alone in wanting some special syntax for applying field 
selectors (see other posts on this thread). H98 field selectors don't do this, 
they're just functions.

And there's me bending over backwards to make all Type-Directed overloaded-
Name Resolution field selectors just functions, so you can mix field selectors 
and functions **without** special syntax. Example Yay!! above.

I'm puzzled why you want different syntax for field selectors. Can you give 
some intuition?

Of course you can adopt a convention in your own code that dot-notation is for 
field selection only. (But you can't legislate for code you're importing.) 
(And Donn Cave wants to be able to ignore dot notation all together.)

AFAIC, OO languages let you put all sorts of weird stuff together with dot 
notation. SPJ's got an example from Java in his TDNR proposal.

I hope it's not because you name your fields and functions with brief, 
cryptic, one-letter codes!! You do have a coding convention in your production 
code to use long_and_meaningful_names, don't you?!

So you can tell `customer' is a customer (record), and `lastName' is a last 
name (field), etc.


  The issue can  
 be resolved by explicit module namespace notation (a la Prelude.map vs.  
 Data.List.map).

I want module namespace notation **as well as** dot notation. This is my 
import from a distant planet example. And it'll work, going by example 
Strewth! above.

 
 In addition, under SORF, SPJ indicated that Dot notation must work in  
 cascades (left-associatively), and with an expression to the left:
r.x
r.x.y
(foo v).y
 
 I assume DORF would also support this as well and that r.x.y.z would  
 desugar to z (y (x r)).

Yes, as per discussion above.

 
 With regards to module namespace notation, neither SORF nor DORF mentions  
 anything that I found, but I'm assuming that the assertion is that it's  
 not needed because of the type-directed resolution.

It's rather the other way round. We want to avoid qualified names, and type-
directed resolution is the mechanism to achieve that ...

Where this 'Records in Haskell' thread started is that currently if you want 
to have the same field name in different records, you have to declare the 
records in different modules, then import them to the same place, and still 
you can only refer to them with a module prefix. (Unless you use the -
XDisambiguateRecordFields flag, but this only works within the scope of 
pattern matches and explicit record/data constructors; it doesn't work for the 
free-floating selector functions.)
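To make that concrete (module and field names made up):

    module CustomerM where
    data Customer = Customer { lastName :: String }

    module EmployeeM where
    data Employee = Employee { lastName :: String }

Declaring both records in one module is rejected (duplicate selector), and after 
importing both modules the bare name lastName is ambiguous, so you're back to 
writing CustomerM.lastName / EmployeeM.lastName everywhere.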

And on balance, putting module prefixes everywhere is just too cumbersome.

So yes, the plan with SORF and DORF is that you can (mostly) use un-qualified 
names, and the resolution mechanism figures out which record type you're 
talking about.

One difference between DORF and SORF is that I 

Re: [Haskell-cafe] Adding Html formatting to formlets

2012-02-03 Thread Alberto G. Corona
Hi Jeremy

If the signature of a formlet or digestive functor is

View format m a

with `m` a monad, `a` the resulting value, and `format` the
formatting (usually HTML),

then the signatures of the operators for Text.XHtml format are:

(<<<) :: Monad m => (Html -> Html) -> View Html m a -> View Html m a

(++>) :: Monad m => Html -> View Html m a -> View Html m a

(<++) :: Monad m => View Html m a -> Html -> View Html m a

(<+>) :: Monad m => View Html m a -> View Html m b -> View Html m
(Either a b)


My implementation is tightly integrated with other unrelated
functionality in an app server that I'm developing, so a translation to
digestive functors is not trivial, but I too would love to have this
integrated in digestive functors for the sake of modularity.



2012/2/3 Jeremy Shaw jer...@n-heptane.com:
 Hello,

 Formlets is deprecated in favor of digestive functors. If you have not
looked at the digestive-functors package I highly recommend that you do. It
fixes a lot of little issues that formlets had -- but is basically the same
thing.

 The (<<<) operator is already a standard operator in Control.Category /
Control.Arrow. So, it is probably confusing to reuse that symbol for
something else…

 The digestive functors library defines two new operators (<++) and (++>)
which are somewhat related to what you are trying to do.

 In HTML, the label tag is supposed to reference the 'id' of the field
it is labeling. For example, you might have:

  <label for="username">Username: </label><input type="text" id="username"
name="username" value="">

 In formlets, there was no way to do that because the 'ids' are not
accessible to the user. In digestive functors you can use the ++> operator
to 'share' an id between two elements. That allows you to write:

 label "Username: " ++> inputText Nothing

 Anyway, I would  love to see:

  a) your new combinators renamed and reimplemented for digestive-functors
  b) the type signatures of these new operators

 Without the type signatures it is a bit hard to decipher what your new
operators are actually doing...

 But, I love seeing any new improvements to the
formlets/digestive-functors concept!

 - jeremy



 On Feb 2, 2012, at 6:50 PM, Alberto G. Corona wrote:

 I came across the idea that it is easy to define additional operators for
 Text.Formlets for adding custom HTML formatting.
 Has anyone tried that?


 For example, to enclose the Prod form in a table using Text.XHtml
 tags, I defined additional operators <<< and ++> for enclosing and
 prepending
 Html to a formLet, respectively:

 data Prod = Prod{pname :: String, pprice :: Int}

 getProd = table <<< (
      Prod <$> tr <<< (td << "enter the name" ++> td <<< getString
 (pname <$> mp))
  <*> tr <<< (td << "enter the price" ++> td <<< getInt
 ( pprice <$> mp)))


 even:

 p << "paragraph" ++> getProd <++ (more Html stuff)

 is possible,

 and it is even possible to have an operator <+>

 getProd <+> someOtherStuff

 that returns Either Prod OtherStuff


 I did it in my own version of formlets, so it is too heavy to put
 a running example here. It is part of a package that I will upload
 to Hackage soon.



 This may also work for embedding formlets in other Haskell HTML
 formats besides Text.XHtml.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Again, version conflicting problem with cabal-install

2012-02-03 Thread Magicloud Magiclouds
Yes, and probably a runtime crash even if it passed compilation.
But I think if we leave it the way it is, the Hackage ecosystem could
break down any minute. All big packages depend on many other
packages by many other authors, so there is a big chance that a top-level
package cannot be installed (as I suffered recently).
Maybe I should hack my cabal source to add an argument to ignore
version checking.

On Fri, Feb 3, 2012 at 2:44 PM, Ivan Lazar Miljenovic
ivan.miljeno...@gmail.com wrote:
 On 3 February 2012 17:29, Magicloud Magiclouds
 magicloud.magiclo...@gmail.com wrote:
 Thank you. The document does say it more clearly than me.
 But still, currently, ghc only gives me one option: cannot be built.
 How about giving me another one: throw away the version information of
 D when building A. So when A uses types in D with B and C, it might
 work. Just the risk is on me now.
 It is not perfect, but would work sometimes

 But not always.  We'd then have other errors: why isn't this build working?

 Types can be re-exported, class instances are implicitly imported/exported, 
 etc.


 On Fri, Feb 3, 2012 at 2:04 PM, Ivan Lazar Miljenovic
 ivan.miljeno...@gmail.com wrote:
 On 3 February 2012 16:54, Magicloud Magiclouds
 magicloud.magiclo...@gmail.com wrote:
 Hello,
  As I recall, GHC works in static-link mode. So after one library
 is compiled, all its build dependencies are useless. Lost, changed,
 whatever, it does not matter.
  Then why does the problem of version conflicts exist?
  By version conflict I mean something like the following. This way, A is not
 installable by cabal.
 A needs B 0.1
 A needs C 0.1
 B needs C 0.2

 See the Dreaded Diamond Dependency Problem: http://www.well-typed.com/blog/9

 --
 Ivan Lazar Miljenovic
 ivan.miljeno...@gmail.com
 IvanMiljenovic.wordpress.com



 --
 竹密岂妨流水过
 山高哪阻野云飞

 And for G+, please use magiclouds#gmail.com.



 --
 Ivan Lazar Miljenovic
 ivan.miljeno...@gmail.com
 IvanMiljenovic.wordpress.com



-- 
竹密岂妨流水过
山高哪阻野云飞

And for G+, please use magiclouds#gmail.com.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Again, version conflicting problem with cabal-install

2012-02-03 Thread Andres Löh
Hi.

On Fri, Feb 3, 2012 at 7:44 AM, Ivan Lazar Miljenovic
ivan.miljeno...@gmail.com wrote:
 On 3 February 2012 17:29, Magicloud Magiclouds
 magicloud.magiclo...@gmail.com wrote:
 Thank you. The document does say it more clearly than me.
 But still, currently, ghc only gives me one option: cannot be built.
 How about giving me another one: throw away the version information of
 D when building A. So when A uses types in D with B and C, it might
 work. Just the risk is on me now.
 It is not perfect, but would work sometimes

 But not always.  We'd then have other errors: why isn't this build working?

 Types can be re-exported, class instances are implicitly imported/exported, 
 etc.

It's a valid complaint, and there's ongoing work to fix some of these
issues. In the meantime, the development version of cabal-install, in
particular the new modular solver, can deal with a few situations that
can't be resolved by older cabal-install versions. I can't promise it
will help here. But I'm still interested in feedback.

Cheers,
  Andres

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Cabal-1.10.1.0 and bytestring-0.9.2.1 hackage problem.

2012-02-03 Thread Ketil Malde


Alan Pogrebinschi alan...@gmail.com writes:

 That Cabal-1.10.1.0 bug seems to be back, now with bytestring-0.9.2.1
 just uploaded to hackage. 

Thanks for the link!  I was banging my head against the virtual wall,
since all I'm getting is:

  % cabal install -v biopsl 
  Reading available packages...
  Resolving dependencies...
  cabal: Couldn't read cabal file bytestring/0.9.2.1/bytestring.cabal

It's not exactly obvious what this error message means.

 Same workaround as last time works

I.e:

  tar -f ~/.cabal/packages/hackage.haskell.org/00-index.tar --delete 
bytestring/0.9.2.1

This is the cabal-install shipped with Ubuntu 12.04 (i.e. the unreleased
beta, which will become the new LTS in April), so buggy or not, it ought
not to be broken if we can avoid it.  I've filed a bug:

  https://bugs.launchpad.net/ubuntu/+source/haskell-cabal-install/+bug/925967

-k
-- 
If I haven't seen further, it is by standing in the footprints of giants

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Cabal-1.10.1.0 and bytestring-0.9.2.1 hackage problem.

2012-02-03 Thread Erik Hesselink
 Same workaround as last time works

 I.e:

  tar -f ~/.cabal/packages/hackage.haskell.org/00-index.tar --delete 
 bytestring/0.9.2.1

This will only work until the next 'cabal update', right? Does anyone
have a better workaround?

 This is the cabal-install shipped with Ubuntu 12.04 (i.e. the unreleased
 beta, which will become the new LTS in April), so buggy or not, it ought
 not to be broken if we can avoid it.  I've filed a bug:

It's also the cabal-install shipped with Ubuntu 11.10, i.e. the
current version that everyone uses. So it would be nice if the
offending version could also be removed from hackage or something
similar to prevent this problem for everyone.

Erik

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Some thoughts on Type-Directed Name Resolution

2012-02-03 Thread Gábor Lehel
On Fri, Feb 3, 2012 at 10:30 AM, AntC anthony_clay...@clear.net.nz wrote:
 You seem to be not alone in wanting some special syntax for applying field
 selectors (see other posts on this thread). H98 field selectors don't do this,
 they're just functions.

 And there's me bending over backwards to make all Type-Directed overloaded-
 Name Resolution field selectors just functions, so you can mix field selectors
 and functions **without** special syntax. Example Yay!! above.

 I'm puzzled why you want different syntax for field selectors. Can you give
 some intuition?

Here are my problems with allowing postfix application using dot for all
functions.

The first problem is that mixing prefix and postfix function
application within the same line makes it harder to read. When you
read code to try to understand what it does, the direction you like to
go in is here's some object, first do this to it, then do that to it,
then do this other thing to it, then this fourth thing to produce the
final result. In Haskell code with prefix application, this is easy:
you read it from right to left. In OO-style code using dots, it's even
easier: you read it from left to right. But if you mix the two, it's
much harder than either: you first have to figure out where the
sentence even begins, which is going to be somewhere in the middle,
and then every time the expression switches between prefix and
postfix, you have to figure out where to continue reading. The
algorithm your brain needs to follow is a lot branchier, so to speak.

This is the smaller problem. If prefix and postfix notations are
completely interchangeable, then we can at least expect people to not
make their own code hard to read, and to stick to one or the other
within an expression. (If they're *not* interchangeable, and one or
the other is required in some cases, then it's a bigger problem.)

The other problem is that, in order to make partial application
convenient, you want to put your function's parameters in the order of
least specific to most specific. If you want to make postfix
application convenient, you have to do the reverse.

For example, take the filter function from the Prelude:

filter :: (a -> Bool) -> [a] -> [a]

The order of its parameters makes it easy to write specialized filter
functions by partially applying filter, for example:

filterEvens = filter even

This is convenient and useful. (It's even more useful within
expressions, when you want to pass a function as an argument to a
higher-order function, which happens very frequently.) By contrast,
it's not usually useful to be able to specialize filter by the list it
filters, which is what you could conveniently do if the order of
filter's parameters were swapped:

filter :: [a] -> (a -> Bool) -> [a]
filterOneToTen = filter [1..10] -- ??

But for postfix function application, this latter order is the one you want:

[1..10].filter even
is a lot more intuitive than
even.filter [1..10]

So if you have postfix function application in the language, you end
up with a zero-sum situation where a function can be convenient to
partially apply, or it can be convenient to use with postfix notation,
but (unless it's single-argument) it can't be both. You'll end up with
some people preferring postfix notation and writing their functions
one way, other people preferring partial application and writing their
functions the other way, and a lot of frustration when people from one
group want to use functions written by the other. I hope you'll agree
that writing two versions of every function is not a satisfactory
solution. Having postfix application supply the last argument rather
than the first one -would- be satisfactory, but in Haskell's case it's
hard to tell which one that is. (Thanks to the fact that
multi-argument functions are just single-argument functions returning
other single-argument functions.)

Given this incompatibility, my humble opinion is that we should choose
one or the other. All of our existing functions, with only a few
irritating exceptions (writeIORef, I'm looking at you), are optimized
for partial application, so we should stick with it.

To finally get around to the point:

All of this said, record.field is still the most readable, intuitive,
and familiar syntax for selecting a field from a record that I know
of. It would be nice to have it. If we restrict this postfix notation
to only selecting fields from records, then the second problem from
above is completely obviated, and the first one is at least greatly
alleviated, to the point where I think the benefit outweighs the harm.

So my preferred solution is:

- Selecting fields from records can be written (equivalently) using
either prefix or postfix notation;
- Everything else can be written only with prefix notation.

My second-choice solution is to not introduce postfix notation.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Exceeding OS limits for simultaneous socket connections

2012-02-03 Thread Matthew Farkas-Dyck
Rob Stewart wrote:
 transmitting thousands of messages to each other, sometimes within a small 
 period of time.

Either SCTP or UDP seems far more appropriate than TCP (which I
assume, hopefully safely, to be at work here) for this task.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Some thoughts on Type-Directed Name Resolution

2012-02-03 Thread Steve Horne

On 03/02/2012 11:13, Gábor Lehel wrote:

The first problem is that mixing prefix and postfix function
application within the same line makes it harder to read. When you
read code to try to understand what it does, the direction you like to
go in is here's some object, first do this to it, then do that to it,
then do this other thing to it, then this fourth thing to produce the
final result. In Haskell code with prefix application, this is easy:
you read it from right to left.
I've argued before (don't think here - most likely on Programmers.SE) 
that even mathematicians think imperatively, often viewing an expression 
as if it were a right-to-left series of imperative mutations. I get 
called an idiot when I say that.


But...


This is the smaller problem. If prefix and postfix notations are
completely interchangeable, then we can at least expect people to not
make their own code hard to read, and to stick to one or the other
within an expression. (If they're *not* interchangeable, and one or
the other is required in some cases, then it's a bigger problem.)
There are already some right-associative operators and some 
left-associative operators. So the question isn't really about the 
language grammar, but how something reads.


But then, even in Haskell, where order matters, most things read from 
left to right. With the monadic bind, for example, the left argument is 
before the right argument. In let expressions, the first definition is 
the leftmost definition. In a list or a tuple, the leftmost item is 
normally considered the first item - by definition it's the head in a 
list. When currying arguments, the leftmost argument is the first to 
curry. This isn't an absolute, of course, but still - function 
composition with the dot is arguably the odd-one out.


If the point is that TDNR should use some other symbol, I have some 
sympathy with that, but Haskell's freedom with operator identifiers has a 
downside - there are few if any completely safe symbols available to 
use. Unless of course we choose a completely new character that has 
never been available before...


http://www.geek.com/articles/geek-pick/unicode-6-1-released-complete-with-emoji-characters-and-a-pile-of-poo-2012022/


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Conduit experiment: Is this correct?

2012-02-03 Thread Ertugrul Söylemez
Hello there,

I'm trying to build a server for testing the conduit and network-conduit
packages.  As a contrived example the goal is to pick the first three
lines from the client and send them back without the line feeds.  After
that, I'd like to switch to a simple echo server.  This is the code:

module Main where

import Data.Conduit
import Data.Conduit.Binary as Cb
import Data.Conduit.List as Cl
import Data.Conduit.Network

handleClient :: Application
handleClient src snk =
    src $$ do
        (Cb.lines =$= Cl.isolate 3) =$ snk
        snk

main :: IO ()
main = runTCPServer (ServerSettings 4000 Nothing) handleClient

I'm not sure whether it is correct to use the 'snk' sink multiple times,
and intuitively I'd say that this is wrong.  What would be the proper
way to do this?


Greets,
Ertugrul

-- 
nightmare = unsafePerformIO (getWrongWife >>= sex)
http://ertes.de/


signature.asc
Description: PGP signature
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Some thoughts on Type-Directed Name Resolution

2012-02-03 Thread AntC
Gábor Lehel illissius at gmail.com writes:

 
 On Fri, Feb 3, 2012 at 10:30 AM, AntC anthony_clayden at clear.net.nz 
wrote:
  You seem to be not alone in wanting some special syntax for applying field
  selectors (see other posts on this thread). H98 field selectors don't do 
this,
  they're just functions.
 
 
  I'm puzzled why you want different syntax for field selectors. Can you give
  some intuition?
 
 Here's my problems with allowing postfix application using dot for all
 functions.
 

Thank you Gábor for explaining this so clearly.

I can see that mixing prefix and postfix style would be confusing. I suppose 
in other programming paradigms (like database access) record.field is regarded 
as 'atomic', not as function application. And under my proposal (or SORF or 
TDNR) it's atomic-ish, because the dot binds tighter than **even function 
application**.

We already have in H98 field selection as function application. I'm keen not 
to break that, because then I can use dot notation on H98-style records. And 
I'm very keen that field selection (continue to) be equivalent to function 
application, precisely so that people who prefer prefix notation can carry on 
regardless.

Do people really write code with huge pile-ups of functions prefix upon 
prefix? Wouldn't that be confusing even when it's unidirectional? I've seen 
some examples in other threads mixing dot notation with function composition 
with user-defined operators built with a dot (like . ) and a sprinkling of 
parentheses. They were indeed unreadable, but frankly, I don't think that was 
purely down to the dot notation.


 The first problem is that mixing prefix and postfix function
 application within the same line makes it harder to read. 

I can see that. As you say, it's hopeless if readers have to start in the 
middle somewhere and work outwards, swerving to and fro.

If binding-dot is just (reverse) function application, I can't stop people 
exploiting it for more than field selection, and some functions just 'feel' 
like fields. SPJ gave the examples of:

customer.fullName   -- fullName is a function to concat first ++ last
shape.area          -- polymorphic area overloaded for each shape

And then there's:
datetime.month   -- calculate month from number-of-days format
tuple.fst
string.last
name.middleInitial
address.streetNumber
polar.theta.arctan 

We're on the slippery slope! Where will it end?

And now that I've found it, I so love:

    customer.lastName.tail.head.toUpper    -- Yay!


I notice that for prefix functions you do sometimes need a bit of trickery to 
deal with partial application and inconvenient order of parameters. Of course 
there's parentheses to help, but there's also a family of combinators, 
especially:
($) -- loose-binding function application
(.) -- function composition

So I'm going to take your post as a challenge: can we build a family of 
combinators for postfix style? The objective is to 'keep up the momentum' left 
to right.

I've already been using one such:
(.$)  = flip ($)  -- looks combinator-ish to me!
(.$!) = flip ($!) -- strict version

    customer.lastName .$ tail .$ head .$ toUpper    -- Yay.$!
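Aside, for anyone typing that in: with no fixity declaration (.$) defaults to 
infixl 9, so chains group to the left and bind tighter than the arithmetic 
operators. A minimal standalone sketch (the explicit fixity just spells out 
that default):

    infixl 9 .$
    (.$) :: a -> (a -> b) -> b
    x .$ f = f x            -- same as flip ($)

so the line above groups as ((customer.lastName .$ tail) .$ head) .$ toUpper.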

 The other problem is that, in order to make partial application
 convenient, you want to put your function's parameters in the order of
 least specific to most specific. If you want to make postfix
 application convenient, you have to do the reverse.

True-ish. I guess it depends how 'tight' you feel the function binds with its 
least specific parameters. What's atomic?

 
 For example, take the filter function from the Prelude:
 
 filter :: (a -> Bool) -> [a] -> [a]
 
 But for postfix function application, this latter order is the one you want:
 
 [1..10].filter even
 is a lot more intuitive than
 even.filter [1..10]

Agreed. Easy. How do you like these?:

 [1..10] .$ filter even
 [1..10] .$ filter even .$ sum ^ 2
 [1..10] .$ filter even .$ foldr (+) 0 ^ 2

I'm looking at those thinking 'Oh yes! foldr (+) 0 is atomic-ish'.

 
 ... You'll end up with
 some people preferring postfix notation and writing their functions
 one way, other people preferring partial application and writing their
 functions the other way, and a lot of frustration when people from one
 group want to use functions written by the other.

Yeah, like little-endians vs. big-endians.

 I hope you'll agree
 that writing two versions of every function is not a satisfactory
 solution.

Absolutely! And we've a huge body of code defined in prefix form, we don't 
want to re-engineer that. And there's a whole body of 
mathematics/algebra/logic that uses prefix style.

 
 To finally get around to the point:
 
 All of this said, record.field is still the most readable, intuitive,
 and familiar syntax for selecting a field from a record that I know
 of. It would be nice to have it.

Indeed!

 If we restrict this postfix notation
 to only selecting fields from 

Re: [Haskell-cafe] Some thoughts on Type-Directed Name Resolution

2012-02-03 Thread Ertugrul Söylemez
Steve Horne sh006d3...@blueyonder.co.uk wrote:

 There's a proposal at the moment to add support for TDNR to Haskell -
 to leverage the power of the dot (e.g. for intellisense).

 http://hackage.haskell.org/trac/haskell-prime/wiki/TypeDirectedNameResolution

I'm not sure whether this should really be a language feature.  A smart
editor together with compiler support can do this without language
extensions.

The basic problem is that without the dot style you write the function
before you write its argument.  For an intellisense-like feature you
need to write the argument before you write the function.

Now in a smart editor you could write "x.", at which point the editor
could examine the source file to find the actual type of 'x' as well as
the expected type of the spot where you are currently writing.  Once it
has built a list of suitable functions, it could rewrite the "x." to
"x", place the cursor in front of it and let you browse the list of
suggestions:

    x._
 -> [suggestions]_ x

An even smarter editor could provide something like agda-mode's hole
feature.  In Agda you can write f ?, at which point agda-mode replaces
the question mark by a hole.  You can then ask for the type of the term
that goes into the hole as well as try to infer the value.  Agda-mode
doesn't provide you with a list of suggestions, but in Haskell with type
inference this could certainly be possible.  I would prefer holes over
dot-application.


Greets,
Ertugrul

-- 
nightmare = unsafePerformIO (getWrongWife >>= sex)
http://ertes.de/


signature.asc
Description: PGP signature
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Conduit experiment: Is this correct?

2012-02-03 Thread Michael Snoyman
2012/2/3 Ertugrul Söylemez e...@ertes.de:
 Hello there,

 I'm trying to build a server for testing the conduit and network-conduit
 packages.  As a contrived example the goal is to pick the first three
 lines from the client and send them back without the line feeds.  After
 that, I'd like to switch to a simple echo server.  This is the code:

    module Main where

    import Data.Conduit
    import Data.Conduit.Binary as Cb
    import Data.Conduit.List as Cl
    import Data.Conduit.Network

    handleClient :: Application
    handleClient src snk =
        src $$ do
            (Cb.lines =$= Cl.isolate 3) =$ snk
            snk

    main :: IO ()
    main = runTCPServer (ServerSettings 4000 Nothing) handleClient

 I'm not sure whether it is correct to use the 'snk' sink multiple times,
 and intuitively I'd say that this is wrong.  What would be the proper
 way to do this?


 Greets,
 Ertugrul

In this particular case, it will work due to the implementation of
snk. In general, however, you're correct: you should not use the same
sink twice.

I haven't thought about it much yet, but my initial recommendation
would be to create a new Conduit using SequencedSink, which takes the
three lines and then switches over to a passthrough conduit. The
result looks like this:


module Main where

import Data.Conduit
import Data.Conduit.Binary as Cb
import Data.Conduit.List as Cl
import Data.Conduit.Network

handleClient :: Application
handleClient src snk = src $$ myConduit =$ snk

main :: IO ()
main = runTCPServer (ServerSettings 4000 Nothing) handleClient

myConduit =
    sequenceSink 3 go
  where
    go 0 = return $ StartConduit $ Cl.map id
    go count = do
        mx <- Cb.lines =$ Cl.head
        case mx of
            Nothing -> return Stop
            Just x  -> return $ Emit (count - 1) [x]

Michael

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Some thoughts on Type-Directed Name Resolution

2012-02-03 Thread Gábor Lehel
On Fri, Feb 3, 2012 at 2:37 PM, AntC anthony_clay...@clear.net.nz wrote:
 Do people really write code with huge pile-ups of functions prefix upon
 prefix? Wouldn't that be confusing even when it's unidirectional?

Not really. Pipeline-like chains where you apply each function to the
result of the previous one are quite common and readable, whether in
the shell, Haskell, or your 'Yay!!' example. But possibly we aren't
referring to the same thing.

 I've seen
 some examples in other threads mixing dot notation with function composition
 with user-defined operators built with a dot (like . ) and a sprinkling of
 parentheses. They were indeed unreadable, but frankly, I don't think that was
 purely down to the dot notation.

Well, yeah. If you want to write confusing code you can certainly do
that. You can do it already. I don't think adding another way to do it
is a huge problem. I think you can expect people to not shoot
themselves in the feet intentionally. What -is- a problem is if you
are forced or encouraged to write confusing code (because there's no
other way to do it or because it's the path of least resistance),
which is why I dislike proposals which make postfix application
mandatory for some purposes, or which make it have different behaviour
from normal prefix application.

 And now that I've found it, I so love:

    customer.lastName.tail.head.toUpper    -- Yay!

I agree that this is nice, but it only works with single-argument functions.

 I notice that for prefix functions you do sometimes need a bit of trickery to
 deal with partial application and inconvenient order of parameters. Of course
 there's parentheses to help, but there's also a family of combinators,
 especially:
    ($) -- loose-binding function application
    (.) -- function composition

 So I'm going to take your post as a challenge: can we build a family of
 combinators for postfix style? The objective is to 'keep up the momentum' left
 to right.

 I've already been using one such:
    (.$)  = flip ($)          -- looks combinator-ish to me!
    (.$!) = flip ($!)         -- strict version

    customer.lastName .$ tail .$ head .$ toUpper    -- Yay.$!

I don't see a benefit here over plain dot...


 For example, take the filter function from the Prelude:

 filter :: (a -> Bool) -> [a] -> [a]

 But for postfix function application, this latter order is the one you want:

 [1..10].filter even
 is a lot more intuitive than
 even.filter [1..10]

 Agreed. Easy. How do you like these?:

     [1..10] .$ filter even
     [1..10] .$ filter even .$ sum ^ 2
     [1..10] .$ filter even .$ foldr (+) 0 ^ 2

 I'm looking at those thinking 'Oh yes! foldr (+) 0 is atomic-ish'.

Oh, well, this looks alright. Hmm.


 If we restrict this postfix notation
 to only selecting fields from records,

 Would you like to include 'virtual' fields like fullName or area? Or fst or
 last or middleInitial?

I guess these would be OK. Virtual fields are effectively required to
be single-argument, so you don't encounter the argument-order problem,
and if you can write them equally prefix and postfix then you can
avoid the mix-and-match problem. But this opinion might be obsolete,
see below.



 So my preferred solution is:

 - Selecting fields from records can be written (equivalently) using
 either prefix or postfix notation;
 - Everything else can be written only with prefix notation.

 My second-choice solution is to not introduce postfix notation.


 Noted. (And from the above, you won't expect me to agree.) I guess GHC HQ gets
 the final decision. Glad I'm not having to mediate.

If postfix code can be conveniently written using your (.$) combinator
(and presumably its extended family), with no changes required to
existing or future functions, I guess it could all work out. What I'm
afraid of is that introducing postfix notation results in a pressure
to make functions convenient to use with it, and then we eventually
end up in the morass I described. If we can reasonably expect that
having the postfix combinators around will remove that pressure or
that people will resist it, and that we won't end up with a
proliferation of writeIORef-endian functions on Hackage, I guess I
would be okay with it. I'm not sure what we would need to be able to
reasonably expect that.

(Not that me being okay with it is required for anything.)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Again, version conflicting problem with cabal-install

2012-02-03 Thread Magicloud Magiclouds
Glad to hear that. I will checkout the trunk and try.

On Fri, Feb 3, 2012 at 6:20 PM, Andres Löh andres.l...@googlemail.com wrote:
 Hi.

 On Fri, Feb 3, 2012 at 7:44 AM, Ivan Lazar Miljenovic
 ivan.miljeno...@gmail.com wrote:
 On 3 February 2012 17:29, Magicloud Magiclouds
 magicloud.magiclo...@gmail.com wrote:
 Thank you. The document does say it more clearly than me.
 But still, currently, ghc only gives me one option: cannot be built.
 How about giving me another one: throw away the version information of
 D when building A. So when A uses types in D with B and C, it might
 work. Just the risk is on me now.
 It is not perfect, but would work sometimes

 But not always.  We'd then have other errors: why isn't this build working?

 Types can be re-exported, class instances are implicitly imported/exported, 
 etc.

 It's a valid complaint, and there's ongoing work to fix some of these
 issues. In the meantime, the development version of cabal-install, in
 particular the new modular solver, can deal with a few situations that
 can't be resolved by older cabal-install versions. I can't promise it
 will help here. But I'm still interested in feedback.

 Cheers,
  Andres



-- 
竹密岂妨流水过
山高哪阻野云飞

And for G+, please use magiclouds#gmail.com.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Conduit experiment: Is this correct?

2012-02-03 Thread yi huang
2012/2/3 Michael Snoyman mich...@snoyman.com

 2012/2/3 Ertugrul Söylemez e...@ertes.de:
  Hello there,
 
  I'm trying to build a server for testing the conduit and network-conduit
  packages.  As a contrived example the goal is to pick the first three
  lines from the client and send them back without the line feeds.  After
  that, I'd like to switch to a simple echo server.  This is the code:
 
 module Main where
 
 import Data.Conduit
 import Data.Conduit.Binary as Cb
 import Data.Conduit.List as Cl
 import Data.Conduit.Network
 
 handleClient :: Application
 handleClient src snk =
 src $$ do
 (Cb.lines =$= Cl.isolate 3) =$ snk
 snk
 
 main :: IO ()
 main = runTCPServer (ServerSettings 4000 Nothing) handleClient
 
  I'm not sure whether it is correct to use the 'snk' sink multiple times,
  and intuitively I'd say that this is wrong.  What would be the proper
  way to do this?
 
 
  Greets,
  Ertugrul

 In this particular case, it will work due to the implementation of
 snk. In general, however, you're correct: you should not use the same
 sink twice.


Since Sink works in a CPS fashion, by which I mean at every step it returns a
new push/close pair, I think it can be used multiple times.



 I haven't thought about it much yet, but my initial recommendation
 would be to create a new Conduit using SequencedSink, which takes the
 three lines and then switches over to a passthrough conduit. The
 result looks like this:


module Main where

import Data.Conduit
import Data.Conduit.Binary as Cb
import Data.Conduit.List as Cl
import Data.Conduit.Network

handleClient :: Application
 handleClient src snk = src $$ myConduit =$ snk

main :: IO ()
main = runTCPServer (ServerSettings 4000 Nothing) handleClient

 myConduit =
     sequenceSink 3 go
   where
     go 0 = return $ StartConduit $ Cl.map id
     go count = do
         mx <- Cb.lines =$ Cl.head
         case mx of
             Nothing -> return Stop
             Just x  -> return $ Emit (count - 1) [x]

 Michael

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
http://www.yi-programmer.com/blog/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Some thoughts on Type-Directed Name Resolution

2012-02-03 Thread AntC
Gábor Lehel illissius at gmail.com writes:

 
 On Fri, Feb 3, 2012 at 2:37 PM, AntC anthony_clayden at clear.net.nz 
wrote:
  Do people really write code with huge pile-ups of functions prefix upon
  prefix? Wouldn't that be confusing even when it's unidirectional?
 
 Not really. Pipeline-like chains where you apply each function to the
 result of the previous one are quite common and readable, whether in
 the shell, ..

Thank you for reminding me! Unix Pipelining -- that's where I've seen it. And 
in the shell, the pipelining is postfix.

My (.$) is loose-binding postfix application. But let me do:

(.|) = flip ($)    -- same as (.$), but suggestive of the pipe

customer.lastName  -- field select, dot 'allowed' per Gábor
 .| tail           -- function apply, dot not
 .| head
 .| toUpper        -- are you warming to it?

[1..10]
 .| filter even
 .| foldr (+) 0
 .| (^ 2)          -- the parens is a bit of a let-down

 
 What -is- a problem is if you
 are forced or encouraged to write confusing code (because there's no
 other way to do it or because it's the path of least resistance),
 which is why I dislike proposals which make postfix application
 mandatory for some purposes, or which make it have different behaviour
 from normal prefix application.

Totally agree, that's one of the things I didn't like about TDNR or SORF. 
That's why I'm trying to support both prefix and dot-notation field selectors.

The main thing, though, I like about field selectors as functions (and nothing 
more) is that we've then got a mechanism for overloading them to select from 
multiple record types, and the mechanism is rock-solid instance resolution, not 
some semi-syntactic/semi-type-driven dodginess.

[I'll let you into a secret about my plan for world domination:
 If field selection is just an (overloaded) function,
 we can apply it to other things than records.
  tuple.fst
 We can turn our data dictionary into a type dictionary:
  newtype Customer_id = Customer_id Int
 We can 'hunt out' the customer_id from a tuple:
  tuple.customer_id
 (Using instance resolution to find the only Customer_id in that tuple.)

 And now we've got tuples as anonymous records.
 Crucially: we don't care about the field's position within the tuple.
 We could have two tuples with the same fields, but different order.
 And treat them as equivalent at the type level.
 (What relational theory calls 'union compatible'.)

 End of mad moment.]
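A crude sketch of that 'hunt out' step, with a hypothetical single-field class 
(not the actual DORF machinery, just instance resolution doing the work):

    {-# LANGUAGE FlexibleInstances #-}
    newtype Customer_id = Customer_id Int

    class HasCustomerId r where
      customer_id :: r -> Customer_id

    instance HasCustomerId (Customer_id, String) where
      customer_id (cid, _) = cid

    instance HasCustomerId (String, Customer_id) where
      customer_id (_, cid) = cid

Either tuple order gives you the same customer_id selector.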

 If postfix code can be conveniently written using your (.$) combinator
 (and presumably its extended family), with no changes required to
 existing or future functions, I guess it could all work out. What I'm
 afraid of is that introducing postfix notation results in a pressure
 to make functions convenient to use with it, and then we eventually
 end up in the morass I described.

Totally agree, I think order of parameters in declarations should continue to 
expect prefix style, with least specific first (that is, leftmost).

 I'm not sure what we would need to be able to
 reasonably expect that.
 

I think time for others 'listening in' to develop the family of combinators!




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Again, version conflicting problem with cabal-install

2012-02-03 Thread Jason Dagit
In my experience the diamond of death is typically because:
  * You install package A that uses C-0.1
  * Someone uploads C-0.2 to hackage
  * Later you 'cabal update', this does not rebuild A to use C-0.2,
even though it could.
  * You install package B that uses C, and cabal builds it with C-0.2
  * Now you want to build D that uses A and B, but A requires C-0.1
and B requires C-0.2, even though they could be rebuilt to use the
same C.

Typically, the constraints on A, B, and D would all accept the same
version of C.  Except that when they were compiled separately and C
changed between building A and B, then the compiled versions become
fixed on C-0.1 and C-0.2.

It's true that it won't solve it in all cases, but in my experience
using cabal-dev made the problem go away.

I hope that clarifies.

Jason

On Fri, Feb 3, 2012 at 7:33 AM, Magicloud Magiclouds
magicloud.magiclo...@gmail.com wrote:
 Well, cabal-dev could not resolve the conflict of the diamond,
 because the conflict is about depending on different versions at the SAME
 time.

 On Fri, Feb 3, 2012 at 9:20 PM, Jason Dagit dag...@gmail.com wrote:
 On Thu, Feb 2, 2012 at 9:54 PM, Magicloud Magiclouds
 magicloud.magiclo...@gmail.com wrote:
 Hello,
  As I recalled, ghc works in staticly link mode. So after one library
 is compiled, all its build dependencies are useless. Lost, changed,
 wheresoever, it does not matter.
  Then why the problem of version conflicting exists?

 I'm not sure, but for me this problem has gone away by using cabal-dev
 to build everything.

 Jason



 --
 竹密岂妨流水过
 山高哪阻野云飞

 And for G+, please use magiclouds#gmail.com.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Conduit experiment: Is this correct?

2012-02-03 Thread Ertugrul Söylemez
Michael Snoyman mich...@snoyman.com wrote:

 In this particular case, it will work due to the implementation of
 snk. In general, however, you're correct: you should not use the same
 sink twice.

 I haven't thought about it much yet, but my initial recommendation
 would be to create a new Conduit using SequencedSink, which takes the
 three lines and then switches over to a passthrough conduit. The
 result looks like this:

 [...]

Thanks a lot.  This conduit world is really new to me and feels a bit
more complicated than enumerators, but at least I seem to be getting the
right intuition.


Greets,
Ertugrul

-- 
Key-ID: E5DD8D11 Ertugrul Soeylemez e...@ertes.de
FPrint: BD28 3E3F BE63 BADD 4157  9134 D56A 37FA E5DD 8D11
Keysrv: hkp://subkeys.pgp.net/


signature.asc
Description: PGP signature
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Some thoughts on Type-Directed Name Resolution

2012-02-03 Thread Donn Cave
Quoth AntC anthony_clay...@clear.net.nz,
...
 We're on the slippery slope! Where will it end?

 And now that I've found it, I so love:

 customer.lastName.tail.head.toUpper-- Yay!

... compared to present practice, where dot is function
composition only -

(toUpper.head.tail.lastName) customer

So two competing meanings of ., where one is literally the reverse
of the other.  Of course we won't be able to spell composition
without spaces any more, so technically the backwards and forward
sense of . are distinct, but it seems kind of unfortunate anyway.

...

If you'll consider an idea from the peanut gallery ...  for me, the
dot notation for fields may as well be spelled as an operator -
that is, customer.lastName deploys a field named .lastName.

If someone modified Haskell to allow postfix notation from this
perspective, when the compiler sees customer.lastName, it would
look for an identifier .lastName, so it would work only where
the fields are so declared:

data Customer = Customer { .lastName :: String, .firstName :: String }

Without explicit dot nomenclature (as per current practice), only
normal function application syntax would be supported (as per current
practice.)  Unspaced composition (fromInteger.ord) would still be
broken, I suppose, but the error (Not in scope: `.ord') would at
least be pretty obvious.

Donn

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Again, version conflicting problem with cabal-install

2012-02-03 Thread Magicloud Magiclouds
I think this could only be solved with cabal-dev if A had a new version
(or a compiled one) that could use C-0.2, or if cabal-dev could ignore the
version constraint. So it is kind of an old-binary problem. It could
also be resolved by reinstalling A to use C-0.2 with cabal-install.
The problem I met is that A uses B and C-0.2, and B uses C-0.1. Both
dependencies on C were strict. I did not have any B or C installed;
I had just started to install A. It would fail.
In fact, there was a chance that I could just modify B.cabal to use C-0.2
and everything would be done. But there was also a risk of being unable to
compile, or even a runtime crash.
I just want an option (ignore versions) to take that risk in a
development environment.

2012/2/3 Jason Dagit dag...@gmail.com:
 In my experience the diamond of death is typically because:
  * You install package A that uses C-0.1
  * Someone uploads C-0.2 to hackage
  * Later you 'cabal update', this does not rebuild A to use C-0.2,
 even though it could.
  * You install package B that uses C, and cabal builds it with C-0.2
  * Now you want to build D that uses A and B, but A requires C-0.1
 and B requires C-0.2, even though they could be rebuilt to use the
 same C.

 Typically, the constraints on A, B, and D would all accept the same
 version of C.  Except that when they were compiled separately and C
 changed between building A and B, then the compiled versions become
 fixed on C-0.1 and C-0.2.

 It's true that it won't solve it in all cases, but in my experience
 using cabal-dev made the problem go away.

 I hope that clarifies.

 Jason

 On Fri, Feb 3, 2012 at 7:33 AM, Magicloud Magiclouds
 magicloud.magiclo...@gmail.com wrote:
 Well, cabal-dev could not resolve the conflict of the diamond.
 Because the conflict is depending different version at the SAME
 time.

 On Fri, Feb 3, 2012 at 9:20 PM, Jason Dagit dag...@gmail.com wrote:
 On Thu, Feb 2, 2012 at 9:54 PM, Magicloud Magiclouds
 magicloud.magiclo...@gmail.com wrote:
 Hello,
  As I recalled, ghc works in staticly link mode. So after one library
 is compiled, all its build dependencies are useless. Lost, changed,
 wheresoever, it does not matter.
  Then why the problem of version conflicting exists?

 I'm not sure, but for me this problem has gone away by using cabal-dev
 to build everything.

 Jason



 --
 竹密岂妨流水过
 山高哪阻野云飞

 And for G+, please use magiclouds#gmail.com.



-- 
竹密岂妨流水过
山高哪阻野云飞

And for G+, please use magiclouds#gmail.com.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Conduit experiment: Is this correct?

2012-02-03 Thread Felipe Almeida Lessa
On Fri, Feb 3, 2012 at 1:38 PM, yi huang yi.codepla...@gmail.com wrote:
 Since Sink works in a CPS fashion, by which I mean at every step it returns a
 new push/close pair, I think it can be used multiple times.

Actually, this is exactly why it *can't* be used multiple times.

Cheers!

-- 
Felipe.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Again, version conflicting problem with cabal-install

2012-02-03 Thread Andres Löh
 I am just wanting an option (ignore versions) to take that risk in
 develop environment.

A controlled way of ignoring version constraints (mainly upper bounds,
actually) is certainly on my TODO list for the new solver. The main
issue to work out is a good way to control the disabled bounds via
the command line, because you usually don't want to ignore all of
them. Currently, my UI-preference is a flag

  --force-allow=foo-1.3

with the semantics that all dependencies on foo will be changed to
allow foo-1.3 to be chosen. Would that be ok? Other suggestions?

Cheers,
  Andres

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Again, version conflicting problem with cabal-install

2012-02-03 Thread Felipe Almeida Lessa
On Fri, Feb 3, 2012 at 2:22 PM, Andres Löh andres.l...@googlemail.com wrote:
 A controlled way of ignoring version constraints (mainly upper bounds,
 actually) is certainly on my TODO list for the new solver. The main
 issue to work out is a good way how to control the disabled bounds via
 the command line, because you usually don't want to ignore all of
 them. Currently, my UI-preference is a flag

  --force-allow=foo-1.3

 with the semantics that all dependencies on foo will be changed to
 allow foo-1.3 to be chosen. Would that be ok? Other suggestions?

Can't this be integrated with the current --constraint flag?  If the
constraint is able to be satisfied without unrestricting any bounds,
fine.  Otherwise, unrestrict any bounds on that constraint.  What
would be the drawbacks?

An advantage is being able to specify --constraint='foo >= 1.3' to get
foo-1.3.7.2 instead of having to find out exactly which version you
want.  And if you already know what you want, you may always say
--constraint='foo == 1.3.7.2'.

Looking forward to the new solver! =)

Cheers!

-- 
Felipe.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Again, version conflicting problem with cabal-install

2012-02-03 Thread Andres Löh
Hi.

  --force-allow=foo-1.3

 with the semantics that all dependencies on foo will be changed to
 allow foo-1.3 to be chosen. Would that be ok? Other suggestions?

 Can't this be integrated with the current --constraint flag?

It could be, but ...

 If the
 constraint is able to be satisfied without unrestricting any bounds,
 fine.  Otherwise, unrestrict any bounds on that constraint.  What
 would be the drawbacks?

... it shouldn't happen automatically. There are perfectly valid and
safe reasons to use --constraint, whereas this new feature is
inherently unsafe. But allowing general constraint syntax and calling
the flag something with constraint in it is perhaps a good idea.

 An advantage is being able to specify --constraint='foo = 1.3' to get
 foo-1.3.7.2 instead of having to find out exactly which version you
 want.  And if you already know what you want, you may always say
 --constraint='foo == 1.3.7.2'.

Yes.

 Looking forward to the new solver! =)

I need testers and feedback. You can already use it. It's in the
cabal-install development version, and can be enabled by saying
--solver=modular on the command line.

Cheers,
  Andres

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Conduit experiment: Is this correct?

2012-02-03 Thread Ertugrul Söylemez
Michael Snoyman mich...@snoyman.com wrote:

 In this particular case, it will work due to the implementation of
 snk. In general, however, you're correct: you should not use the same
 sink twice.

 I haven't thought about it much yet, but my initial recommendation
 would be to create a new Conduit using SequencedSink, which takes the
 three lines and then switches over to a passthrough conduit. The
 result looks like this:

I think I'm getting the conduit stuff, at least on a high level.  As a
little exercise I have ported a simplified variant of the 'netlines'
enumerator to the conduit library.  This is the code:

import qualified Data.ByteString as B

netLine :: (Resource m) => Int -> Sink B.ByteString m B.ByteString
netLine n0 = sinkState (n0, B.empty) push (return . snd)
    where
    push (n, str') dstr' =
        return $
        case B.elemIndex 10 dstr' of
          Nothing ->
              let dstr = B.take n dstr'
                  str  = B.append str' dstr
              in str `seq` StateProcessing (n - B.length dstr, str)
          Just i ->
              let (pfx, sfx) = B.splitAt i dstr'
                  str        = B.append str' (B.take n pfx)
              in str `seq` StateDone (Just . B.copy $ B.tail sfx) str

netLines :: (Resource m) => Int -> Conduit B.ByteString m B.ByteString
netLines n = sequenceSink () (\s -> fmap (\ln -> Emit s [ln]) (netLine n))

It reads a 256 MiB file with random data in 1.3 seconds and runs in
constant memory for infinite lines.  This is reassuring.
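For reference, a driver of roughly this shape exercises it (the file name and 
line-length limit are made up, and the Data.Conduit / Cb / Cl imports from the 
earlier messages in this thread are assumed):

    main :: IO ()
    main = do
        ls <- runResourceT $ Cb.sourceFile "random.dat" $$ netLines 4096 =$ Cl.consume
        print (length ls)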

But anyway, is this the proper/idiomatic way to do it, or would you go
for a different direction?


Greets,
Ertugrul

-- 
Key-ID: E5DD8D11 Ertugrul Soeylemez e...@ertes.de
FPrint: BD28 3E3F BE63 BADD 4157  9134 D56A 37FA E5DD 8D11
Keysrv: hkp://subkeys.pgp.net/


signature.asc
Description: PGP signature
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] TFPIE: Trends in Functional Programming in Education 2012

2012-02-03 Thread Simon Thompson

[Apologies for multiple postings, Simon.]



CALL FOR PAPERS/PRESENTATIONS TFPIE 2012
International Workshop on Trends in Functional Programming in Education 2012
June 11 2012
University of St Andrews, Scotland
http://www.cs.ru.nl/P.Achten/TFPIE_2012/TFPIE_2012_home.html


The first International Workshop on Trends in Functional Programming in 
Education, TFPIE 2012, will be co-located with TFP 2012 at the University of St 
Andrews in Scotland. The goal of TFPIE is to gather researchers, professors, 
teachers, and all professionals that use or are interested in the use of 
functional programming in education. TFPIE aims to be a venue where novel 
ideas, classroom-tested ideas, and work in progress on the use of functional 
programming in education are discussed. The one-day workshop will foster a 
spirit of open discussion by having a review process for publication after the 
workshop.

The program chairs of TFPIE 2012 will screen submissions to ensure that all 
presentations are within scope and are of interest to participants. Potential 
presenters are invited to submit an extended abstract (4-6 pages) or an article 
(up to 16 pages). The authors of all accepted presentations will have their 
preprints and their slides made available on the workshop's website/wiki. Any 
visitors to the TFPIE 2012 website/wiki will be able to add comments. This 
includes presenters who may respond to comments and questions as well as 
provide pointers to improvements and follow-up work. After the workshop, the 
program committee will review, using prevailing academic standards, the 
articles accepted for presentation to select the best for publication in 
Electronic Proceedings in Theoretical Computer Science (EPTCS). Articles 
rejected for presentation and all extended abstracts will not be formally 
reviewed by the PC.


TOPICS OF INTEREST

TFPIE 2012 welcomes submissions describing practical techniques used in the 
classroom, tools used and/or developed, and any creative use of functional 
programming (FP) to aid education in or outside Computer Science. Topics of 
interest include, but are not limited to:

FP and beginning CS students
FP in Artificial Intelligence
FP in Robotics
FP and Music
Advanced FP for undergraduates
FP in graduate education
Engaging students in research using FP
FP in Programming Languages
FP in the high school curriculum
FP as a stepping stone to other CS topics
FP and Philosophy


If you are not sure if your work is appropriate for TFPIE 2012, please contact 
the PC chairs by e-mail at: tfpie2...@cs.ru.nl .


Program Committee

Peter Achten, Radboud University Nijmegen
Jost Berthold, University of Copenhagen
Marc Feeley, University of Montreal
Ralf Hinze, University of Oxford
Shriram Krishnamurthi, Brown University
Michel Mauny, ENSTA Paris Tech
James McKinna, UK
Marco T. Morazan, Seton Hall University
Rinus Plasmeijer, Radboud University Nijmegen
Simon Thompson, University of Kent


Important Dates

May 20        submission of abstract or article
May 25        notification of acceptance
June 11       TFPIE
July 6        submission of formal paper
September 10  notification of acceptance
October 1     camera-ready paper


Venue

The University of St Andrews is Scotland's first university and the third 
oldest in the English-speaking world, founded in 1413. Over six centuries it 
has established a reputation as one of Europe's leading and most distinctive 
centers for teaching and research. St Andrews is situated on the east coast of 
Fife, Scotland, UK. The town is approximately 50 miles north-east of Edinburgh, 
14 miles south-east of Dundee, 78 miles south of Aberdeen, and 82 miles east of 
Glasgow making it easily accessible by any means of transportation. Help on 
traveling to St Andrews can be found at: 
http://www.st-andrews.ac.uk/visiting/GettingtoStAndrews/ .


Questions?

If you have any questions, do not hesitate to contact us at: tfpie2...@cs.ru.nl 
.


Simon Thompson | Professor of Logic and Computation 
School of Computing | University of Kent | Canterbury, CT2 7NF, UK
s.j.thomp...@kent.ac.uk | M +44 7986 085754 | W www.cs.kent.ac.uk/~sjt




Re: [Haskell-cafe] Some thoughts on Type-Directed Name Resolution

2012-02-03 Thread wren ng thornton
On 2/3/12 6:13 AM, Gábor Lehel wrote:
 The first problem is that mixing prefix and postfix function
 application within the same line makes it harder to read. When you
 read code to try to understand what it does, the direction you like to
 go in is "here's some object, first do this to it, then do that to it,
 then do this other thing to it, then this fourth thing to produce the
 final result". In Haskell code with prefix application, this is easy:
 you read it from right to left. In OO-style code using dots, it's even
 easier: you read it from left to right. But if you mix the two, it's
 much harder than either: you first have to figure out where the
 sentence even begins, which is going to be somewhere in the middle,
 and then every time the expression switches between prefix and
 postfix, you have to figure out where to continue reading. The
 algorithm your brain needs to follow is a lot branchier, so to speak.

It's just as easy as reading function pointers in C :)

-- 
Live well,
~wren







Re: [Haskell-cafe] ANN: combinatorics

2012-02-03 Thread Brent Yorgey
On Fri, Feb 03, 2012 at 01:06:16AM -0500, wren ng thornton wrote:
 On 2/2/12 6:46 PM, Carter Schonwald wrote:
 On Thu, Feb 2, 2012 at 4:06 PM, Ivan Lazar Miljenovic wrote:
 On 3 February 2012 07:11, Brent Yorgey wrote:
 On Wed, Feb 01, 2012 at 01:53:03AM -0500, wren ng thornton wrote:
 
 [2] HaskellForMaths, gamma, statistics, erf, math-functions,
 combinat,...
 
 To this list I'd like to add 'species' and also the specialized
 'multiset-comb' packages.  The former doesn't build under recent GHCs
 but I plan to fix that (and continue extending the library) soon.
 
 Would people be interested in creating a mailing list for those
 interested in combinatorics+Haskell?
 
 I would.
 
 ditto
 
 I don't know how involved I'd be, but I'd join. It's just a side
 interest for me (more a curiosity than a hobby at this point), though
 I would definitely like to see things better organized and integrated
 since it seems like there are a bunch of folks with passing interest
 in the area, and it's someplace that Haskell could really shine.

OK, I've emailed mail...@haskell.org requesting the creation of a
combinator...@haskell.org mailing list (if anyone knows of a better
way to go about this let me know).

-Brent



Re: [Haskell-cafe] Conduit experiment: Is this correct?

2012-02-03 Thread Erik de Castro Lopo
Ertugrul Söylemez wrote:

 Thanks a lot.  This conduit world is really new to me and feels a bit
 more complicated than enumerators, but at least I seem to be getting the
 right intuition.

I can assure you that while this may be true for simple cases, it most
definitely is not true for at least one more complex case.

I have a hackage package http-proxy which initially used Enumerator and
now uses Conduit. The Enumerator version was extremely difficult to figure
out and eventually required a function like this:

enumIteratee :: MonadIO m => Int64
             -> (Int -> Iteratee ByteString m ByteString)
             -> Enumerator ByteString (Iteratee ByteString m) c

with an Iteratee nested inside an Enumerator.

The Conduit version was much easier to put together because conduits seem
to compose much more naturally. IMO, Conduit is a significant improvement
over Enumerator but a better solution may still exist (I'm interested in
seeing how Pipes work out).

Cheers,
Erik
-- 
--
Erik de Castro Lopo
http://www.mega-nerd.com/



Re: [Haskell-cafe] Contributing to http-conduit

2012-02-03 Thread Myles C. Maxfield
Here is the patch to Web.Cookie. I didn't modify the tests at all because
they were already broken - they looked like they hadn't been updated since
SetCookie only had 5 parameters. I did verify by hand that the patch works,
though.

Thanks,
Myles

On Thu, Feb 2, 2012 at 11:26 PM, Myles C. Maxfield myles.maxfi...@gmail.com
 wrote:

 Alright, I'll make a small patch that adds 2 fields to SetCookie:
 setCookieMaxAge :: Maybe DiffTime
 setCookieSecureOnly :: Bool

 I've also gotten started on those cookie functions. I'm currently writing
 tests for them.

 @Chris: The best advice I can give is that Chrome (what I'm using as a
 source on all this) has the data baked into a .cc file. However, they have
 directions in a README and a script which will parse the list and generate
 that source file. I recommend doing this. That way, the Haskell module
 would have 2 source files: one file that reads the list and generates the
 second file, which is a very large source file that contains each element
 in the list. The list should export `elem`-type queries. I'm not quite sure
 how to handle wildcards that appear in the list - that part is up to you.
 Thanks for helping out with this :]

 --Myles


 On Thu, Feb 2, 2012 at 10:53 PM, Michael Snoyman mich...@snoyman.com wrote:

 Looks good to me too. I agree with Aristid: let's make the change to
 cookie itself. Do you want to send a pull request? I'm also
 considering making the SetCookie constructor hidden like we have for
 Request, so that if in the future we realize we need to add some other
 settings, it doesn't break the API.

 Chris: I would recommend compiling it into the module. Best bet would
 likely be converting the source file to Haskell source.

 Michael

 On Fri, Feb 3, 2012 at 6:32 AM, Myles C. Maxfield
 myles.maxfi...@gmail.com wrote:
  Alright. After reading the spec, I have these questions / concerns:

  The spec supports the Max-Age cookie attribute, which Web.Cookies doesn't.

  I see two possible solutions to this. The first is to have parseSetCookie
  take a UTCTime as an argument which will represent the current time so it
  can populate the setCookieExpires field by adding the Max-Age attribute to
  the current time. Alternatively, that function can return an IO SetCookie
  so it can ask for the current time by itself (which I think is inferior to
  taking the current time as an argument). Note that the spec says to prefer
  Max-Age over Expires.
  Add a field to SetCookie of type Maybe DiffTime which represents the
  Max-Age attribute

  Cookie code should be aware of the Public Suffix List as a part of its
  domain verification. The cookie code only needs to be able to tell if a
  specific string is in the list (W.Ascii -> Bool)

  I propose making an entirely unrelated package, public-suffix-list, with a
  module Network.PublicSuffixList, which will expose this function, as well
  as functions about parsing the list itself. Thoughts?

  Web.Cookie doesn't have a secure-only attribute. Adding one in is
  straightforward enough.
  The spec describes cookies as a property of HTTP, not of the World Wide
  Web. Perhaps Web.Cookie should be renamed? Just a thought; it doesn't
  really matter to me.

  As for Network.HTTP.Conduit.Cookie, the spec describes in section 5.3
  Storage Model what fields a Cookie has. Here is my proposal for the
  functions it will expose:

  receiveSetCookie :: SetCookie -> Req.Request m -> UTCTime -> Bool ->
  CookieJar -> CookieJar

  Runs the algorithm described in section 5.3 Storage Model
  The UTCTime is the current-time, the Bool is whether or not the caller is
  an HTTP-based API (as opposed to JavaScript or anything else)

  updateCookieJar :: Res.Response a -> Req.Request m -> UTCTime -> CookieJar
  -> (CookieJar, Res.Response a)

  Applies receiveSetCookie to a Response. The output CookieJar is stripped
  of any Set-Cookie headers.
  Specifies True for the Bool in receiveSetCookie

  computeCookieString :: Req.Request m -> CookieJar -> UTCTime -> Bool ->
  (W.Ascii, CookieJar)

  Runs the algorithm described in section 5.4 The Cookie Header
  The UTCTime and Bool are the same as in receiveSetCookie

  insertCookiesIntoRequest :: Req.Request m -> CookieJar -> UTCTime ->
  (Req.Request m, CookieJar)

  Applies computeCookieString to a Request. The output cookie jar has
  updated last-accessed-times.
  Specifies True for the Bool in computeCookieString

  evictExpiredCookies :: CookieJar -> UTCTime -> CookieJar

  Runs the algorithm described in the last part of section 5.3 Storage Model

  This will make the relevant part of 'http' look like:

  go count req'' cookie_jar'' = do
      now <- liftIO $ getCurrentTime
      let (req', cookie_jar') = insertCookiesIntoRequest req''
              (evictExpiredCookies cookie_jar'' now) now
      res' <- httpRaw req' manager
      let (cookie_jar, res) = updateCookieJar res' req' now cookie_jar'
      case getRedirectedRequest req' 
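
Purely for illustration, here is a rough sketch of what the last of those
proposed functions could look like over a toy cookie-jar type; the types and
field names below are made up and are not the actual http-conduit or cookie
API:

import Data.Time.Clock (UTCTime)

data Cookie = Cookie
  { cookieName   :: String
  , cookieValue  :: String
  , cookieExpiry :: Maybe UTCTime  -- Nothing = session cookie, never evicted here
  }

newtype CookieJar = CookieJar [Cookie]

-- Keep only cookies whose expiry time is still in the future (or absent).
evictExpiredCookies :: CookieJar -> UTCTime -> CookieJar
evictExpiredCookies (CookieJar cs) now =
  CookieJar [ c | c <- cs, maybe True (> now) (cookieExpiry c) ]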

[Haskell-cafe] ANN: bytestring-lexing 0.4.0

2012-02-03 Thread wren ng thornton


-- bytestring-lexing 0.4.0


The bytestring-lexing package offers efficient reading and packing of 
common types like Double and Integral types.




-- Changes (since 0.3.0)


* Data.ByteString.Lex.Integral: added the function

readDecimal_ :: Integral a => ByteString -> a

A variant of readDecimal which does not return the tail of the
string, and returns 0 instead of Nothing. This is twice as fast
for Int64 on 32-bit systems, but has identical performance to
readDecimal for all other types and architectures.
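
For example, a tiny usage sketch (the input string is made up):

import qualified Data.ByteString.Char8 as B8
import Data.ByteString.Lex.Integral (readDecimal_)

n :: Int
n = readDecimal_ (B8.pack "12345")   -- 12345; unparseable input yields 0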



-- Links


Homepage:
http://code.haskell.org/~wren/

Hackage:
http://hackage.haskell.org/package/bytestring-lexing

Darcs:
http://community.haskell.org/~wren/bytestring-lexing

Haddock (Darcs version):

http://community.haskell.org/~wren/bytestring-lexing/dist/doc/html/bytestring-lexing

--
Live well,
~wren



Re: [Haskell-cafe] ANN: bytestring-lexing 0.4.0

2012-02-03 Thread Erik de Castro Lopo
wren ng thornton wrote:

 
 -- Changes (since 0.3.0)
 
 
 * Data.ByteString.Lex.Integral: added the function
 
  readDecimal_ :: Integral a => ByteString -> a
 
  A variant of readDecimal which does not return the tail of the
  string, and returns 0 instead of Nothing. This is twice as fast
  for Int64 on 32-bit systems, but has identical performance to
  readDecimal for all other types and architectures.

Thanks Wren, that is awesome!

Erik
-- 
--
Erik de Castro Lopo
http://www.mega-nerd.com/



[Haskell-cafe] how to print out intermediate results in a recursive function?

2012-02-03 Thread Qi Qi
Hello,

I have a question: how can I print out the intermediate number lists in a
mergesort recursive function like the following one.

merge [] ys = ys
merge xs [] = xs
merge (x:xs) (y:ys) = if x <= y
  then x : merge xs (y:ys)
  else y : merge (x:xs) ys

mergesort [] = []
mergesort [x] = [x]
mergesort xs = let (as, bs) = splitAt (length xs `quot` 2) xs
   in merge (mergesort as) (mergesort bs)

main = do
   print $ mergesort [5,4,3,2,1]
  

In the main function, it only prints out the final number list. But I'd
like to print out the number lists in every recursive level. How can I
do that? Thanks.


-- 
Qi Qi



Re: [Haskell-cafe] how to print out intermediate results in a recursive function?

2012-02-03 Thread Ivan Lazar Miljenovic
On 5 February 2012 05:23, Qi Qi qiqi...@gmail.com wrote:
 Hello,

 I have a question: how can I print out the intermediate number lists in a
 mergesort recursive function like the following one.

You can use the (completely evil and shouldn't be used in production
code) Debug.Trace module.
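
For reference, trace has type String -> a -> a; a minimal illustration with a
made-up function:

import Debug.Trace (trace)

double :: Int -> Int
double x = trace ("doubling " ++ show x) (x * 2)  -- prints the message, then returns x * 2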


 merge [] ys = ys
 merge xs [] = xs
 merge (x:xs) (y:ys) = if x <= y
                      then x : merge xs (y:ys)
                      else y : merge (x:xs) ys

 mergesort [] = []
 mergesort [x] = [x]
 mergesort xs = let (as, bs) = splitAt (length xs `quot` 2) xs
               in merge (mergesort as) (mergesort bs)

 main = do
       print $ mergesort [5,4,3,2,1]


 In the main function, it only prints out the final number list. But I'd
 like to print out the number lists in every recursive level. How can I
 do that? Thanks.


 --
 Qi Qi




-- 
Ivan Lazar Miljenovic
ivan.miljeno...@gmail.com
IvanMiljenovic.wordpress.com



Re: [Haskell-cafe] how to print out intermediate results in a recursive function?

2012-02-03 Thread yi huang
You can use trace from Debug.Trace; change the code like this:

mergesort l = case trace (show l) l of
    []     -> ...
    [x]    -> ...
    (x:xs) -> ...

On Sun, Feb 5, 2012 at 2:23 AM, Qi Qi qiqi...@gmail.com wrote:

 Hello,

 I have a question: how can I print out the intermediate number lists in a
 mergesort recursive function like the following one.

 merge [] ys = ys
 merge xs [] = xs
 merge (x:xs) (y:ys) = if x <= y
  then x : merge xs (y:ys)
  else y : merge (x:xs) ys

 mergesort [] = []
 mergesort [x] = [x]
 mergesort xs = let (as, bs) = splitAt (length xs `quot` 2) xs
   in merge (mergesort as) (mergesort bs)

 main = do
   print $ mergesort [5,4,3,2,1]


 In the main function, it only prints out the final number list. But I'd
 like to print out the number lists in every recursive level. How can I
 do that? Thanks.


 --
 Qi Qi





-- 
http://www.yi-programmer.com/blog/


Re: [Haskell-cafe] how to print out intermediate results in a recursive function?

2012-02-03 Thread Ozgur Akgun
Hi,

There is also this nice trick to use Debug.Trace:

-- add this as the first case to merge:
merge xs ys | trace (show (xs,ys)) False = undefined
-- and this as the first case to mergesort:
mergesort xs | trace (show xs) False = undefined

HTH,
Ozgur
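
Putting the two hints from this thread together, a minimal self-contained
version of the original program (the comparison in merge is <=; the Show
constraint is only needed for the tracing):

import Debug.Trace (trace)

merge :: Ord a => [a] -> [a] -> [a]
merge [] ys = ys
merge xs [] = xs
merge (x:xs) (y:ys)
  | x <= y    = x : merge xs (y:ys)
  | otherwise = y : merge (x:xs) ys

-- The guarded first clause prints the argument at every recursive level,
-- then falls through to the real definition below.
mergesort :: (Show a, Ord a) => [a] -> [a]
mergesort xs | trace ("mergesort " ++ show xs) False = undefined
mergesort [] = []
mergesort [x] = [x]
mergesort xs = let (as, bs) = splitAt (length xs `quot` 2) xs
               in merge (mergesort as) (mergesort bs)

main :: IO ()
main = print (mergesort [5, 4, 3, 2, 1 :: Int])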