Re: [Haskell-cafe] List comprehensions with Word8

2013-05-16 Thread Casey McCann
At risk of belaboring the now-obvious, note that the empty lists begin
at 100000000, which is 10^8, and thus the first power of 10 evenly
divisible by 2^8.

The largest value in the list for each 10^n is likewise 0 modulo 2^n.
(Figuring out why the sequence has those particular multiples of 2^n
is left as an exercise for the reader.)
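
A quick sanity check, runnable in GHCi (a minimal sketch; Word8 wraps
modulo 2^8 via fromInteger):

import Data.Word (Word8)

main :: IO ()
main = do
    -- powers of 10 up to 10^9 that are divisible by 2^8: [8,9]
    print [n | n <- [1 .. 9 :: Integer], 10 ^ n `mod` 2 ^ 8 == 0]
    -- the upper bound wraps to 0, so the range comes out empty
    print (fromInteger (10 ^ 8) :: Word8)            -- 0
    print ([1 .. fromInteger (10 ^ 8)] :: [Word8])   -- []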

- C.

On Thu, May 16, 2013 at 5:15 PM, Jose A. Lopes jose.lo...@ist.utl.pt wrote:
 Hello everyone,

 I was playing with Word8 and list comprehensions and
 the following examples came up. I have to admit the
 behavior looks quite strange because it does not seem
 to be consistent. Can someone shed some light on reason
 behind some of these outputs?

 By the way, I have abbreviated some outputs with ellipsis ...

 [1..10] :: [Word8]
 [1,2,3,4,5,6,7,8,9,10]

 [1..100] :: [Word8]
 [1,2,3,4,5,6,7,8,9,10,...,100]

 [1..1000] :: [Word8]
 [1,2,3,4,5,6,7,8,9,10,...,232]

 [1..10000] :: [Word8]
 [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]

 [1..100000] :: [Word8]
 [1,2,3,4,5,6,7,8,9,10,...,160]

 [1..1000000] :: [Word8]
 [1,2,3,4,5,6,7,8,9,10,...,64]

 [1..10000000] :: [Word8]
 [1,2,3,4,5,6,7,8,9,10,...,128]

 [1..100000000] :: [Word8]
 []

 [1..1000000000] :: [Word8]
 []

 Thank you,
 Jose

 --
 José António Branquinho de Oliveira Lopes
 Instituto Superior Técnico
 Technical University of Lisbon





Re: [Haskell-cafe] Hackage 2 maintainership

2012-02-13 Thread Casey McCann

On 02/13/2012 06:44 PM, Ben Gamari wrote:

I currently have a running installation on my personal machine and
things seem to be working as they should. On the whole, installation was
quite trivial, so it seems likely that the project is indeed at a point
where it can take real use (although a logout option in the web
interface would make testing a bit easier).


I'm not certain that this is the case, honestly. It's very close, to be 
sure, but there are enough rough edges and possible concerns that I 
think it'll need some polishing before it's ready for full use.


At any rate, I had independently decided to start tinkering with the 
Hackage 2 codebase after being frustrated at the lack of progress. I 
have a publicly-accessible instance running at 
http://hackage2.uptoisomorphism.net:8080/ that's currently pretty empty, 
but I'll be doing at least a partial mirror of Hackage onto it in the 
near future as well as some hacking on the code. Feel free to check it 
out but please don't do anything too abusive to the server.


I've also uploaded the hackage2 source to my GitHub at the urging of 
people on IRC, which should make it easier for more people to get the 
code and hack on it if they feel inspired: 
https://github.com/isomorphism/hackage2


- C.



Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-21 Thread Casey McCann
On Tue, Sep 20, 2011 at 11:47 PM, Richard O'Keefe o...@cs.otago.ac.nz wrote:

 On 21/09/2011, at 2:18 PM, Casey McCann wrote:

 I still don't see why it makes sense to add separate IEEE comparisons
 instead of just adding a standard partial order class, though.

 In any mathematical partial order, we expect
        x `le` x
 to be a law.  But in IEEE arithmetic, if x is a NaN, x `le` x is
 false.  I don't see how to reconcile these.

Ah, true. There is an obvious way to reconcile this that almost
suffices, and is what I'd had in mind--simply declare that, just as
positive and negative zero are distinct values but identified with
each other by the ordering, let NaN be disidentified with itself.
Essentially this treats NaN as representing an unbounded collection of
distinct, but indistinguishable and incomparable, values, where you
never end up getting the same one twice. This interpretation is
self-consistent so long as the expressions being compared are distinct
to begin with, but now that you point it out explicitly I realize it
not only can't be justified when comparing syntactically identical
terms, but that given equivalent expressions it would imply that a
pure function gives different results each time, which is not in any
way a satisfactory result of something that's trying to *improve* the
semantics involved!

So that's a bust. Bother. Specialized comparisons providing IEEE
semantics seems the best option after all, then. I'd still like to see
a standard partial order type class, but apparently it wouldn't help
in this case.

- C.



Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-21 Thread Casey McCann
On Tue, Sep 20, 2011 at 11:33 PM,  rocon...@theorem.ca wrote:
 For what it's worth, at some point in time I was sketching a proposal to
 split the Enum class into two classes because I felt that two distinct ideas
 were being conflated.  Unfortunately this was years ago and I have forgotten
 what the details were that I was thinking of.  Perhaps someone can reconstruct a proposal
 along these lines.

Considering the desugaring of list range syntax in general, rather
than the Enum class as such, I would argue for *three* ideas, which
are all supported with varying degrees of success by the current
implementation:

1) Exhaustive enumeration of a finite range, where the desired meaning
of [a..z] is almost exactly that of Data.Ix.range (a, z).

2) Iterative generation of a sequence, where the desired meaning of
[a, b..z] is iterating a function implicitly defined by the offset
between a and b, plus an optional takeWhile using some predicate
determined by z. The nature of the offset, predicate, c. would be
defined on a per-type basis, possibly including a default offset for
when b isn't specified, but personally I'd rather just disallow that
in this case.

3) Evenly-spaced divisions of an infinite range, where the desired
meaning of [a,b..z] assumes that the distance from a to b evenly
divides the distance from a to z, and the result is a list containing
(1 + (z-a)/(b-a)) elements such that all differences between
successive elements are equal, with a and z present as the first and
last elements.

For most types other than fractional numbers and floats, the third
interpretation isn't well-defined and the first coincides both with an
Ix instance (if one exists) and with the second interpretation using
the smallest nonzero offset. Note that the first interpretation does
not require a total ordering, and in fact the Ord constraint on Ix is
somewhat misleading and nonsensical. As such, the first interpretation
naturally extends to more general ranges than what the second can
describe.

For rationals, floats, approximations of the reals, or any other type
with a conceptually infinite number of values in a range, the first
interpretation isn't well-defined, and the second and third
interpretations should coincide when all three parameters are equal,
ignoring rounding errors and inexact representations.

The current Enum class attempts to be something like an ill-defined
mixture of all three, and where the interpretations don't coincide,
the disagreement between them is a likely source of bugs. Worse still,
the instance for floating point values mixes the naively expected
results of both the second and third in a very counterintuitive way:
the enum "to" value at the end behaves neither as an upper bound (the
sequence may exceed it in an effort to avoid rounding errors) nor as a
final element (it may not be in the sequence at all, even if it has an
exact floating point representation). This seems needlessly confusing
to me and is arguably broken no matter which way you slice it.

My thoughts are that the first interpretation is most naturally suited
to list range syntax, that the second would be better served by a
slightly different syntax to make the predicate more explicit, and
that the third bugs the crap out of me because it's really very useful
but I can't think of a concise and unambiguous syntax for it.
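
For concreteness, here's the third interpretation as a plain function
(the name is invented, and this sidesteps the syntax question
entirely):

evenlySpaced :: (Fractional a) => Int -> a -> a -> [a]
evenlySpaced n a z =  -- n >= 2 points, with the endpoints landing exactly
    [ a + (z - a) * fromIntegral i / fromIntegral (n - 1)
    | i <- [0 .. n - 1] ]
-- evenlySpaced 5 0 1 == [0.0,0.25,0.5,0.75,1.0]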

- C.



Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-21 Thread Casey McCann
On Wed, Sep 21, 2011 at 12:09 AM, Daniel Fischer
daniel.is.fisc...@googlemail.com wrote:
 Yes. Which can be inconvenient if you are interested in whether you got a
 -0.0, so if that's the case, you can't simply use (== -0.0).
 Okay, problematic is a too strong word, but it's another case that may
 require special treatment.

Hmm. I was going to suggest that it's not a major concern so long as
the distinction can't be observed without using functions specific to
floating point values, since that preserves consistent behavior for
polymorphic functions, but... that's not true, because the sign is
preserved when dividing by zero! So we currently have the following
behavior:

0   == (-0)     = True
1/0 == 1/(-0)   = False
signum (-0)     = 0.0
signum (1/0)    = 1.0
signum (1/(-0)) = -1.0

All of which is, I believe, completely correct according to IEEE
semantics, but seems to cause very awkward problems for any sensible
semantics of Haskell's type classes.

...sigh.

 which is correct and shouldn't break any expected behavior.
 I don't think it's required that distinguishable values be unequal,

 But desirable, IMO.

I'm ambivalent. I can see it making sense for truly equivalent values,
where there's a reasonable expectation that anything using them should
give the same answer, or when there's a clearly-defined normal form
that values may be reduced to.

But as demonstrated above, this isn't the case with signed zeros if
Num is available as well as Eq.

 I still don't see why it makes sense to add separate IEEE comparisons

 Pure and simple: speed.
 That is what the machine instructions, and hence the primops, deliver.

Oh, I assume the IEEE operations would be available no matter what,
possibly as separate operations monomorphic to Float and Double, that
they'd be used to define the partial ordering instance, and could be
imported directly from some appropriate module.

But as it turns out the partial ordering isn't valid anyway, so I
retract this whole line of argument.

 Ah, yes, wherein someone suggested that comparing to NaN should be a
 runtime error rather than give incorrect results. A strictly more
 correct approach, but not one I find satisfactory...

 Umm, 'more correct' only in some sense. Definitely unsatisfactory.

More correct in the very narrow sense of producing fewer incorrect
answers, according to Haskell semantics. :] That it would produce
fewer answers in general and a great deal more bottoms is another
matter. Certainly not useful, and in fact actively counterproductive
given that the whole purpose of silent NaNs is to allow computations
to proceed without handling exceptions at every step along the way.

I'm becoming increasingly convinced that the only strictly coherent
approach in the overall scheme of things would be to banish floating
point values from most of the standard libraries except where they can
be given correct implementations according to Haskell semantics, and
instead provide a module (not re-exported by the Prelude) that gives
operations using precise IEEE semantics and access to all the expected
primops and such. As you said above, the importance of floating point
values is for speed, and the IEEE semantics are designed to support
that. So I'm happy to consider floats as purely a performance
optimization that should only be used when number crunching is
actually a bottleneck. Let Rational be the default fractional type
instead and save everyone a bunch of headaches.

- C.



Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-21 Thread Casey McCann
On Wed, Sep 21, 2011 at 2:41 PM, Brandon Allbery allber...@gmail.com wrote:
 On Wed, Sep 21, 2011 at 14:31, Casey McCann c...@uptoisomorphism.net wrote:

 My thoughts are that the first interpretation is most naturally suited
 to list range syntax, that the second would be better served by a
 slightly different syntax to make the predicate more explicit, and
 that the third bugs the crap out of me because it's really very useful
 but I can't think of a concise and unambiguous syntax for it.

 Based on what you said, I'm wondering if the first gets basic fromTo syntax,
 the third gets fromThenTo syntax, and the second strikes me as a simplified
 form of list comprehension and might possibly be phrased as a cross between
 range and comprehension.  Although the most correct such cross has an
 ambiguity with the comma... can we still use | as the delimiter, read as
 "such that"?  ([a .. z | filter])

Hmm. I actually wrote "(..) better served by some variation on a list
comprehension" there at first before editing it to be more
non-committal. Interesting to see someone else immediately go for the
same idea. Anyway, I think this can already be expressed using GHC's
generalized list comprehensions, but the result is more verbose than I
would like for this particular very common case. My first thought on
resolving ambiguity is to rely on having something distinct following
a "..", e.g. desugaring [a, b.. | < z] as takeWhile (< z) [a,
b..], where anything ending in ".. ]" is taken to be an infinite
iterated sequence. This is only slightly more verbose than the current
form, arguably more readable, and certainly more explicit. Would need
to be more clearly specified what forms the predicate expression could
have, however.
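
For clarity, the example desugaring written out by hand (using < as
the predicate, as above):

example :: [Double]
example = takeWhile (< 20) [1, 3 ..]
-- [1.0,3.0,5.0,7.0,9.0,11.0,13.0,15.0,17.0,19.0]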

The fromThenTo syntax doesn't seem entirely satisfactory for the third
case, because it creates ambiguity if the step size doesn't evenly
divide the range size. Having the first and last elements appear
exactly as given in the sequence and having the interval sizes be as
consistent as possible are pretty much the entire purpose here, so I'm
not sure how to reconcile that. Perhaps rounding the specified
interval to the nearest divisor? Kind of a hack, but seems to best
approximate the intent (as well as being resilient in the face of
imprecision, which is also important).
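
As a plain function, the rounding idea might look like this
(hypothetical, and restricted to RealFrac so the division and rounding
make sense):

fromThenTo' :: (RealFrac a) => a -> a -> a -> [a]
fromThenTo' a b z =
    [ a + (z - a) * fromIntegral i / fromIntegral n | i <- [0 .. n] ]
  where
    -- round the requested step count, then recompute the actual step
    -- so that a and z appear exactly as given
    n = max 1 (round ((z - a) / (b - a))) :: Integer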

- C.



Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-20 Thread Casey McCann
On Tue, Sep 20, 2011 at 12:47 PM, Paterson, Ross r.pater...@city.ac.uk wrote:
 Daniel Fischer writes:
 A numeric range [a..a+n] might be expected
 to have a+n+1 elements, but that doesn't hold either for Float and
 Double.  I think Enum for floating point values is broken

Yes, it is. Like Eq and Ord.

 .. only more so.  And the brokenness has infected Rational: try

 [1,3..20]::[Rational]

I actually think the brokenness of Ord for floating point values is
worse in many ways, as demonstrated by the ability to insert a value
into a Data.Set.Set and have other values disappear from the set as
a result. Getting an unexpected element in a list doesn't really seem
as bad as silently corrupting entire data structures.

For numeric ranges of fractional values, I expect that what one
typically wants is either "the two end points, and N intermediate
values evenly spaced" or "the starting value, and successive
increments up to some cut-off", with the default increment being 1 or
-1 as appropriate. The current Enum instance splits the difference and
gets you some of both, except that you might get the wrong values or
something past the cut-off. Similarly, the current Ord instance splits
the difference between a coherent total order (what Ord instances
should be) and the semantically correct partial order (what floats
should have, as defined in the IEEE standard).

- C.



Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-20 Thread Casey McCann
On Tue, Sep 20, 2011 at 3:48 PM, Chris Smith cdsm...@gmail.com wrote:
 On Tue, 2011-09-20 at 15:28 -0400, Casey McCann wrote:
 I actually think the brokenness of Ord for floating point values is
 worse in many ways, as demonstrated by the ability to insert a value
 into a Data.Set.Set and have other values disappear from the set as
 a result.

 Definitely Ord is worse.  I'd very much like to see the Ord instance for
 Float and Double abandon the IEEE semantics and just put NaN somewhere
 in there -- doesn't matter where -- and provide new functions for the
 IEEE semantics.

It should be first, to make floating point values consistent with
applying Maybe to a numeric type.

Personally, I contend that the most correct solution is to distinguish
between meaningful ordering relations and ones used for algorithmic
convenience. As another example, the type (Integer, Integer), regarded
as Cartesian coordinates, has no meaningful ordering at all but does
have an obvious arbitrary total order (i.e., the current Ord
instance). For purposes like implementing Data.Set.Set, we don't care
at all whether the ordering used makes any particular kind of sense;
we care only that it is consistent and total. For
semantically-meaningful comparisons, we want the semantically-correct
answer and no other.

For types with no meaningful order at all, or with a meaningful total
order that we can use, there is no ambiguity, but floating point
values have both a semantic partial order and an obvious arbitrary
total order which disagree about NaN. In the true spirit of compromise
the current Ord instance fails to implement both, ensuring that things
work incorrectly all the time rather than half the time.

That said, in lieu of introducing multiple new type classes, note that
the Haskell Report specifically describes Ord as representing a total
order[0], so the current instances for floating point values seem
completely indefensible. Since removing the instances entirely is
probably not a popular idea, the least broken solution would be to
define NaN as equal to itself and less than everything else, thus
accepting the reality of Ord as the "meaningless arbitrary total
order" type class I suggested above and leaving Haskell bereft of any
generic semantic comparisons whatsoever. Ah, pragmatism.

 As for Enum, if someone were to want a type class to represent an
 enumeration of all the values of a type, then such a thing is reasonable
 to want.  Maybe you can even reasonably wish it were called Enum.  But
 it would be the *wrong* thing to use as a desugaring for list range
 notation.  List ranges are very unlikely to be useful or even meaningful
 for most such enumerations (what is [ Red, Green .. LightPurple]?); and
 conversely, as we've seen in this thread, list ranges *are* useful in
 situations where they are not a suitable way of enumerating all values
 of a type.

It's not clear that Enum, as it stands, actually means anything coherent at all.

Consider again my example of integer (x, y) coordinates. Naively, what
would [(0, 0) .. (3, 3)] appear to mean? Why, obviously it's the
sixteen points whose coordinates range from 0 to 3, except it isn't
because Enum isn't defined on pairs and doesn't work that way anyhow.
Could we describe this range with an iteration function and a
comparison? No, because the Ord instance here is intrinsically
nonsensical. And yet, the intent is simple and useful, so why don't we
have a standard type class for it?[1] This would seem to be the
obvious, intuitive interpretation for range syntax with starting and
ending values.
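
(The class the footnote alludes to is Data.Ix, mentioned earlier in
this thread; a quick example:)

import Data.Ix (range)

points :: [(Int, Int)]
points = range ((0, 0), (3, 3))
-- all sixteen points, (0,0) through (3,3), in row-major order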

To the extent that Enum can be given a coherent interpretation (which
requires ignoring many existing instances), it seems to describe types
with unary successor/predecessor operations. As such, instances for
Rational, floating point values, and the like are patently nonsensical
and really should be removed. An obvious generalization would be to
define Enum based on an increment operation of some sort, in which
case those instances could be defined reasonably with a default
increment of 1, which is merely dubious, rather than ridiculous. The
increment interpretation would be very natural for infinite lists
defined with open ranges and an optional step size.
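
Something like this, perhaps (a sketch; the class and its names are
invented here):

{-# LANGUAGE TypeFamilies #-}

class Increment a where
    type Step a
    increment :: Step a -> a -> a

-- the open-range case: [x, x+s ..] as an infinite iterated list
enumFromStep :: (Increment a) => a -> Step a -> [a]
enumFromStep x s = x : enumFromStep (increment s x) s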

Absent from the above is any interpretation of expressions like [0,2
..11], which are ill-defined anyway, as evidenced by that expression
producing lists of different lengths depending on what type is chosen
for the numeric literals. Myself, I'm content to declare that use of
range syntax a mistake in general, and insist that an unbounded range
and something like takeWhile be used instead.
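
That is, instead of writing [0, 2 .. 11] :: [Double] and guessing at
its length, make the cut-off explicit:

takeWhile (< 11) [0, 2 ..] :: [Double]
-- [0.0,2.0,4.0,6.0,8.0,10.0]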

- C.

[0]: See here: 
http://www.haskell.org/onlinereport/haskell2010/haskellch6.html#x13-1290006.3.2
[1]: Spoiler: We do.



Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-20 Thread Casey McCann
On Tue, Sep 20, 2011 at 5:56 PM, Evan Laforge qdun...@gmail.com wrote:
 I actually think the brokenness of Ord for floating point values is
 worse in many ways, as demonstrated by the ability to insert a value
 into a Data.Set.Set and have other values disappear from the set as
 a result. Getting an unexpected element in a list doesn't really seem
 as bad as silently corrupting entire data structures.

 Whoah, that's scary.  What are some examples of this happening?  Does
 this mean it's unsafe to store Doubles in a Map?

Well, you can safely store Doubles in a Map as long as you use a key
type with a bug-free Ord instance. :]

Anyway, the problem with the Ord instance on floating point values is
that it attempts (but fails anyway) to implement the ordering
semantics given by the IEEE spec, which requires that all ordering
comparisons return false if either argument is NaN, and that NaN /=
NaN returns true. This is not the behavior normally expected from an
Ord instance! Adding insult to injury, the compare function returns
a value of type Ordering (which assumes a consistent total order), so
the instance contradicts itself: compare (0/0) (0/0) gives GT, but
0/0 > 0/0 is false.

This plays havoc with the search tree used internally by Set and Map,
the result being that if you have any NaN values in the data
structure, you may not be able to find other values anymore. Because
NaN values never compare equal to themselves, I'm not sure if it's
even possible to remove them from the structure, and because of tree
rebalancing I'm not sure how to predict what the impact of one or more
NaNs would be over multiple operations on the data structure.

In short: Using Doubles in a Set, or as the key to a Map, should be
regarded as a bug until proven otherwise (i.e., proving that NaN will
never be inserted).

If you'd like to see an explicit demonstration (which you can try in
GHCi yourself!) see here:
http://stackoverflow.com/questions/6399648/what-happens-to-you-if-you-break-the-monad-laws/6399798#6399798
where I use it as an example of why it's important for type class
instances to obey the relevant laws.
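
A self-contained sketch along the same lines (the precise damage
depends on the containers version and insertion order, so treat the
results as illustrative):

import qualified Data.Set as Set

nan :: Double
nan = 0 / 0

-- Inserting a NaN violates the search-tree invariants, after which
-- membership tests for values genuinely in the set can return False.
demo :: (Bool, Bool)
demo = (Set.member 2 s, Set.member 5 s)
  where s = Set.fromList [1, 2, 3, nan, 4, 5]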

- C.



Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-20 Thread Casey McCann
On Tue, Sep 20, 2011 at 6:58 PM, Daniel Fischer
daniel.is.fisc...@googlemail.com wrote:
 On Wednesday 21 September 2011, 00:20:09, Casey McCann wrote:
 Because
 NaN values never compare equal to themselves, I'm not sure if it's
 even possible to remove them from the structure,

 filter (not . isNaN)

 resp.

 filterWithKey (\k _ -> not $ isNaN k)

Er, right. Yes, of course. I'm not sure what I was thinking there. :]
Though that still leaves the question of any damage done in the
meantime, unless the filtering would repair the tree in the process.

 and because of tree
 rebalancing I'm not sure how to predict what the impact of one or more
 NaNs would be over multiple operations on the data structure.

 Yuck. Don't even try to predict that (unless you absolutely have to).

Agreed. The consequence of not trying, however, is that it isn't
viable to let things slide at all--every insertion must be checked for
NaNs, because otherwise you lose any guarantee that the tree will be
valid next time you use it.

One can imagine a similar data structure designed to be resilient and
predictable in the face of ill-behaved comparisons, but surely it
would be easier to just fix the problem instances!

 If you'd like to see an explicit demonstration (which you can try in
 GHCi yourself!) see here:
 http://stackoverflow.com/questions/6399648/what-happens-to-you-if-you-br
 eak-the-monad-laws/6399798#6399798 where I use it as an example of why
 it's important for type class instances to obey the relevant laws.

 Nice and short.

Yes, and credit where due for the original example. :] Don't recall
which -cafe thread that came from, though.

- C.



Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-20 Thread Casey McCann
On Tue, Sep 20, 2011 at 6:05 PM, Chris Smith cdsm...@gmail.com wrote:
 There's nothing *wrong* with pragmatism, but in any case, we seem to
 agree on this.  As I said earlier, we ought to impose a (rather
 arbitrary) total order on Float and Double, and then offer comparison
 with IEEE semantics as a separate set of functions when they are needed.
 (I wonder if OCaml-style (<.) and (>.) and such are used anywhere.)

I think the only point of disagreement here is that I'm advocating the
introduction of a partial ordering class (for which floating point
values could be given a proper instance according to IEEE semantics)
rather than treating floats as a special case. I would prefer going a
step further and having two distinct total order classes to
distinguish meaningful total orders from nonsense ones like for Float
and Double, but perhaps that seems excessive to other people.

 It's clear to me that Enum for Float means something coherent.  If
 you're looking for a meaning independent of the instance, I'd argue you
 ought to be surprised if you find one, not the other way around.  Why
 not look for a meaning for Monoid that's independent of the instance?
 There isn't one; instead, there are some rules that the instance is
 expected to satisfy, but there are plenty of types that have many
 possible Monoid instances, and we pick one and leave you to use newtypes
 if you wanted a different one.

I have to disagree here. Monoid has a very clear, narrow,
type-independent meaning: the eponymous algebraic structure. The
minimal definition of the class is a value and a binary operation;
this is a very small interface, and the laws expected of an instance
nearly exhaust the properties of these definitions, either by
specifying behavior (e.g., associativity) or by deliberately not
specifying (is the binary operation commutative? not in general, but
it could be). Simply by satisfying the type signature, any instance is
going to at least vaguely resemble a valid one, and checking the laws
is straightforward.

On the other hand, Enum has conversions to and from Int and a host of
interdefined operations with at best loose guidelines for how they
should behave. Does toEnum . fromEnum = id hold? Not in general.
Does succ . fromEnum = fromEnum . succ hold? Probably not. I think.
What do enumFrom, enumFromThen, &c. mean? What the instance author
thought made sense, I suppose, since they're only defined as what
list range syntax desugars to. In the case of types that also have a
Bounded instance there are further requirements, mostly relating to
where runtime errors should be produced (gosh, that helps).

Consider this: How many Enum instances do you think override the
default definitions, not for efficiency, but in ways that give
different results? How many Monoid instances do you think override
mconcat in a way that gives a different answer than foldr mappend
mempty?

Here's a thought experiment. Imagine that, instead of Monoid, we had a
type class called Summarize used mostly to desugar some sort of
built-in summation syntax. The main function used is summarize ::
(Summarize a) => [a] -> a, the class is described as "a generalized
sum", and the motivating examples are all independent of the order of
elements in the list (because addition is commutative, right). But
nowhere is it specified what the behavior of the class should be,
other than that it desugars the syntax in some way that presumably
makes sense. It's not required that summarize [] produce an identity
value, it's not required that summarizing repeatedly should be
associative, it's not required that reordering the list give the same
summary, and so on. Most instances do have all these properties of
course, but then someone makes a library with an extremely
non-commutative instance for Summarize and we get a -cafe thread
complaining about it and then I write a very long and tedious message
all about how Summarize is underspecified and has no clear meaning and
probably should be explicitly defined as some sort of monoid, either
commutative or more general.

But I digress.

The ambiguity from Monoid is purely that many types have multiple ways
to fulfill the very precise requirements of the class. The ambiguity
of Enum is that it isn't clear what, if anything, the requirements
even are, and nothing rules out a wide variety of equally valid
instances other than a vague notion of which one makes sense, a
point on which reasonable people may disagree!

Possibly a better example would be MonadPlus, for which (if memory
serves me) there's some similar ambiguity about the laws an instance
should follow, with inconsistency even in the standard library as to
which interpretation is chosen, and resulting in actual confusion
about what code should do.

 I'm not saying that Enum must be left exactly as is... but I *am* saying
 that the ability to use floating point types in list ranges is important
 enough to save.  For all its faults, at least the current language can
 do that.  

Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-20 Thread Casey McCann
On Tue, Sep 20, 2011 at 8:20 PM, Daniel Fischer
daniel.is.fisc...@googlemail.com wrote:
 Yes, where NaNs matter, you always have to check (well, unless you *know*
 that your calculations don't produce any NaNs).
 Btw, -0.0 can be problematic too.

How so? As far as I can tell Ord and Eq treat it as equal to 0.0 in
every way, which is correct and shouldn't break any expected behavior.
I don't think it's required that distinguishable values be unequal,
and while I imagine arguments could be made both ways on whether that
would be a good idea, I don't see any way that could cause problems in
code polymorphic on instances of Eq or Ord, which is the main concern
to my mind.

 Except that people might expect IEEE semantics for (==), (<), etc.

Yes, but probably fewer people than expect Map and Set to work correctly. :]

 However, nowadays I tend to think that making the Eq and Ord instances
 well-behaved (wrt the class contract) and having separate IEEE comparisons
 would overall be preferable.
 There is still the question whether all NaNs should be considered equal or
 not [and where Ord should place NaNs].

IEEE semantics are incompatible with Ord regardless. The problem can
be fixed by changing Ord, removing the instance completely, or
changing the instance to ignore the IEEE spec. I think the latter is
the least bad option in the big picture.

I still don't see why it makes sense to add separate IEEE comparisons
instead of just adding a standard partial order class, though. Surely
posets are common enough to justify the abstraction, and it surprises
me that one isn't already included. No doubt there are at least three
or four different partial ordering classes on Hackage already.
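
The shape I have in mind is roughly this (a sketch, not a worked-out
proposal):

class PartialOrd a where
    pcompare :: a -> a -> Maybe Ordering

instance PartialOrd Double where
    pcompare x y
        | isNaN x || isNaN y = Nothing
        | otherwise          = Just (compare x y)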

As for where Ord should place NaN, I still suggest it be the least
element, to be consistent with the Ord instance for Maybe. If
different NaNs are unequal, that may change matters.
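
Concretely, a law-abiding compare treating NaN as the least element
might read like this (assuming all NaNs are identified with each
other):

totalCompare :: Double -> Double -> Ordering
totalCompare x y = case (isNaN x, isNaN y) of
    (True,  True ) -> EQ
    (True,  False) -> LT  -- NaN first, like Nothing in Ord (Maybe a)
    (False, True ) -> GT
    (False, False) -> compare x y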

 Google suggests "Exception for NaN" from May.

Ah, yes, wherein someone suggested that comparing to NaN should be a
runtime error rather than give incorrect results. A strictly more
correct approach, but not one I find satisfactory...

- C.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Tupling functions

2011-09-15 Thread Casey McCann
On Thu, Sep 15, 2011 at 7:51 AM, Markus Läll markus.l...@gmail.com wrote:
 Intuitively it seems to be exactly the same as the type families'
 aproach, and looks quite clear too.

Not exactly, no--as written, it's strictly more powerful. Your fundeps
go in both directions, whereas the type families didn't (though could
easily be extended to do so, if desired, at the cost of some extra
verbosity). The main argument in favor of type families here is the ~
equality constraint which, as you've found, works in combination with
fundeps as well. :] You can actually simulate it with fundeps alone,
but you probably don't want to.

All else equal I personally find type families easier to work with,
but in this case the difference is minimal. For bidirectional
constraints and simple transformations, fundeps are probably a bit
nicer, so in hindsight I think yours is the better idea here.

- C.



Re: [Haskell-cafe] Tupling functions

2011-09-14 Thread Casey McCann
On Wed, Sep 14, 2011 at 9:32 AM, Victor Nazarov
asviraspossi...@gmail.com wrote:
 I've just tried another approach (code below). And GHC even inferred
 type for tupleF. But I think GHC inferred the wrong type and I can't
 formulate the right one, it seems to require infinite number of
 constraints. With GHC inferred type this function is not usable,
 though:

GHC can't actually infer your type with that implementation of tcons.
There's no way for it to get from the arguments THead t and TTail
t to the tuple type t, because (unlike type constructors) type
families aren't necessarily injective, so there could be more than one
type t that THead and TTail map to the types received. Furthermore,
the open world assumption for type families means that even if there's
only one valid t in scope, it can't simply select that because it
must account for the possibility of more instances being introduced in
other scopes.

On the other hand, it can get from t to THead t and TTail t just
fine, so if you give a type annotation that fixes the result type it
should work. But that can be clumsy for actual use.

The above issue is exactly why the implementation that I gave uses a
slightly peculiar approach to calculate the other types based only on
the type of the tuple argument. A slightly more complicated approach
could probably be used to get some inference going in both directions,
but in most cases the direction I gave will be what you want most.

That said, the essential idea of what you're trying to do is a good
one. Why not try separating the steps, though? Use one type family to
give a bijection between standard tuples and some sort of right-nested
pair representation (which is easy to infer both ways), then use
standard type-level recursion to process the latter form however you
like. You can do generic equivalents of map, fold, zip, c. this way
pretty easily.
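
A minimal sketch of that separation (all names invented):

{-# LANGUAGE TypeFamilies #-}

type family Nested t
type instance Nested (a, b)    = (a, (b, ()))
type instance Nested (a, b, c) = (a, (b, (c, ())))

class ToNested t where
    toNested :: t -> Nested t

instance ToNested (a, b) where
    toNested (a, b) = (a, (b, ()))

instance ToNested (a, b, c) where
    toNested (a, b, c) = (a, (b, (c, ())))

With values in the right-nested form, ordinary type-level recursion
over pairs handles the rest.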

- C.



Re: [Haskell-cafe] Tupling functions

2011-09-13 Thread Casey McCann
On Tue, Sep 13, 2011 at 10:03 PM, Chris Smith cdsm...@gmail.com wrote:
 Ah, okay... then sure, you can do this:

 class Tuple a b c | a b -> c where
    tuple :: a -> b -> c

 instance Tuple (a -> b, a -> c) a (b,c) where
    tuple (f,g) x = (f x, g x)

This wouldn't actually work well in practice. There's no dependency
between the various occurrences of a in the types, so unless they're
already known to be the same, GHC will complain about an ambiguous
instance (please excuse the silly GHCi prompt):

Ok, modules loaded: Tupling.
∀x. x ⊢ tuple ((+3), show) 4

<interactive>:0:1:
No instance for (Tuple (a0 -> a0, a1 -> String) b0 c0)
  arising from a use of `tuple'

Given that the class is only intended to be used where those types are
equal, you really want it to unify them based on use of the tuple
function.

 and so on...  You'll need fundeps (or type families if you prefer to
 write it that way), and probably at least flexible and/or overlapping
 instances, too, but of course GHC will tell you about those.

I rather prefer type families in this case, both because the problem
is easily expressed in type function style, and because it gives you
an easy type equality constraint to use, rather than using arcane
trickery with overlaps to force post-hoc unification. We'd probably
want to do something like this:

{-# LANGUAGE TypeFamilies, FlexibleInstances #-}

class Tuple t where
    type Arg t :: *
    type Result t :: *
    tuple :: t -> Arg t -> Result t

instance (x1 ~ x2) => Tuple (x1 -> a, x2 -> b) where
    type Arg (x1 -> a, x2 -> b) = x1
    type Result (x1 -> a, x2 -> b) = (a, b)
    tuple (f, g) x = (f x, g x)

instance (x1 ~ x2, x2 ~ x3) => Tuple (x1 -> a, x2 -> b, x3 -> c) where
    type Arg (x1 -> a, x2 -> b, x3 -> c) = x1
    type Result (x1 -> a, x2 -> b, x3 -> c) = (a, b, c)
    tuple (f, g, h) x = (f x, g x, h x)

Used like so:

Ok, modules loaded: Tupling.
∀x. x ⊢ tuple ((+2), show, (< 2)) 3
(5,"3",False)

Note that not only does this avoid ambiguity, it even unifies
ambiguous types that are then defaulted by the usual means.

That said, I question the utility of a class like this. The
boilerplate instances are tedious to write and it's not flexible in
any way; tuples not being defined inductively makes them a real pain
to work with unless there's a particularly good reason to do so.
Something equivalent to right-nested (,) with () as a terminator is
much more pleasant, and since we're deep in the pits of
non-portability anyway, might as well pull out bang patterns and
UNPACK pragmas if avoiding extra bottoms was the reason for using
plain tuples.

- C.



Re: [Haskell-cafe] Ur tutorial, and a challenge

2011-07-27 Thread Casey McCann
On Wed, Jul 27, 2011 at 8:30 AM, Christopher Done
chrisd...@googlemail.com wrote:
 On 27 July 2011 13:58, Adam Chlipala ad...@impredicative.com wrote:
 Does this static type system support metaprogramming strong enough to
 implement my challenge problem with the level of static guarantee for all
 specialization parameters that I ask for?

 Again I don't really know what you're talking about so I'll drop it.

Here's a question that I suspect may be relevant: Can you generate
code with TH that isn't well-typed?

If your first thought is something like "of course, how would you type
check the code before generating it?", I would ask how that differs
from "of course programs can crash, how can you check all inputs
before reading them at run time?".

Bonus question for any OCaml folks: What's the difference between
using Camlp4 and using MetaOCaml?

- C.



Re: [Haskell-cafe] compare iteratee with python's yield

2011-07-01 Thread Casey McCann
On Fri, Jul 1, 2011 at 6:01 AM, Ertugrul Soeylemez e...@ertes.de wrote:
 I don't know Python very well, but I suspect that its generators are
 really a sort of coroutines.  Iteratees are also coroutines, but their
 architecture is quite different.

Python generators were originally a sort of heavily restricted
coroutine mostly used to implement corecursive sequences, i.e. what we
use lazy lists for in Haskell. As Python allows arbitrary side-effects,
this makes them pretty directly equivalent to using lazy I/O in
Haskell. They were later extended to be more like full coroutines,
allowing them to both generate and consume data. I imagine that
something akin to iteratees could be built on top of the
coroutine-style extended generators, but it would likely be more
reliant on the programmer not using things incorrectly and the benefit
of the whole idea is unclear in this context (for the reasons outlined
in the rest of your message).

- C.



Re: [Haskell-cafe] Why aren't there anonymous sum types in Haskell?

2011-06-22 Thread Casey McCann
On Wed, Jun 22, 2011 at 2:19 PM, wren ng thornton w...@freegeek.org wrote:
 [1] modulo the A:+:A ~ A issue.

 Oops, I should've said A:*:A there.

 That issue is exactly my concern, though, and it seems a bit too
 thorny to handwave aside.

 Indeed. If we have A:*:A ~ A, then A:*:A is not a categorical product.
(...)

Thank you, that clarified my intuitive sense of why it didn't seem to
work. The same argument, with arrows flipped appropriately, would
apply to a hypothetical coproduct where A :+: A ~ A, wouldn't it?

 Disjoint pairs are sufficient; they needn't be ordered. All we need is
 that they are tagged in the same way that disjoint unions are, so that
 we can distinguish the components of A*A.

Oh. Yes, of course, I probably knew that. I think I confused myself by
habitually calling the components fst and snd. Sigh.

- C.



Re: [Haskell-cafe] Why aren't there anonymous sum types in Haskell?

2011-06-22 Thread Casey McCann
On Wed, Jun 22, 2011 at 5:00 PM, Alexander Solla alex.so...@gmail.com wrote:
 You're building up (Either a b) into a monoidal category.  There used to be
 a package called category-extras for this kind of stuff.  I think it has
 been broken up.  Does anybody know the status of its replacement(s)?

You probably want to look at Edward Kmett's github: http://github.com/ekmett

Also, both bifunctors and monoidal categories seem to now be in the
categories package, which is on Hackage:
http://hackage.haskell.org/package/categories

- C.



Re: [Haskell-cafe] Why aren't there anonymous sum types in Haskell?

2011-06-21 Thread Casey McCann
On Tue, Jun 21, 2011 at 5:24 PM, pipoca eliyahu.ben.mi...@gmail.com wrote:
 If you were to have your anonymous sum types be a union instead of the
 disjoint union, then you could say that A :+: A has no meaning.
 That's what I was originally thinking of when I suggested that
 syntax.  However, as was pointed out to me by David Sankel, disjoint
 unions are more powerful than regular unions.  Since that's the case,
 Matthew Steele's suggested syntax makes more sense.  It means that you
 need to remember the order of your arguments, but you need to do that
 with tuples, anyway.

Of course, the same idea could be applied to tuples as well. An
anonymous product A :*: B would be a collection with no defined
order for its elements, indexed by type instead of position. A :*: A
would be meaningless for similar reasons to A :+: A.

That said, I don't think either retains the tidy algebraic properties
that disjoint unions and tuples have, so I'm not sure if calling them
sums and products is actually correct.

- C.



Re: [Haskell-cafe] Why aren't there anonymous sum types in Haskell?

2011-06-21 Thread Casey McCann
On Tue, Jun 21, 2011 at 9:51 PM, wren ng thornton w...@freegeek.org wrote:
 I don't think there are any problems[1].
(...)
 [1] modulo the A:+:A ~ A issue.

That issue is exactly my concern, though, and it seems a bit too
thorny to handwave aside. For instance, doesn't this also cause
problems for anything of the form (A :+: B) :+: (A :+: C)? I don't
doubt there exist formulations that make sense, but it's not
immediately obvious to me what they would be, and how they'd behave
used as types in something like Haskell. You wouldn't be able to have
a straightforward function with the type forall a b c. (a -> b) -> (a
:+: c -> b :+: c), among other things.

In contrast, ordered pairs and disjoint unions are tidy, simple, and obvious.

- C.



Re: [Haskell-cafe] Proposal: remove Stability from haddock documentation on hackage

2011-06-07 Thread Casey McCann
On Tue, Jun 7, 2011 at 9:22 AM, Tillmann Rendel
ren...@informatik.uni-marburg.de wrote:
 On
 http://hackage.haskell.org/packages/archive/base/latest/doc/html/Control-Applicative.html,
 in the upper right corner, the module is marked as experimental. I think
 this is a Haddock feature, not a Hackage feature.

Oddly, I couldn't find any of the fields in the haddock module header
documented anywhere, though I didn't perform a very thorough search.

I also observe that, in base, Data.Bool is apparently experimental,
whereas Control.Monad is merely provisional. Prelude, at least, is
reassuringly stable.

- C.



Re: [Haskell-cafe] Can it be proven there are no intermediate useful type classes between Applicative Functors Monads?

2011-06-06 Thread Casey McCann
On Mon, Jun 6, 2011 at 12:19 PM, Brent Yorgey byor...@seas.upenn.edu wrote:
 The idea is that Applicative computations
 have a fixed structure which is independent of intermediate results;
 Monad computations correspond to (potentially) infinitely branching
 trees, since intermediate results (which could be of an infinite-sized
 type) can be used to compute the next action; but Branching
 computations correspond to *finitely* branching trees, since future
 computation can depend on intermediate results, but only one binary
 choice at a time.

Is this truly an intermediate variety of structure, though? Or just
different operations on existing structures? With Applicative, there
are examples of useful structures that truly can't work as a Monad,
the usual example being arbitrary lists with liftA2 (,) giving zip,
not the cartesian product. Do you know any examples of both:

1) Something with a viable instance for Branching, but either no Monad
instance, or multiple distinct Monad instances compatible with the
Branching instance
2) Same as above, except for a viable Applicative instance without a
single obvious Branching instance

In other words, an implementation of branch for some type that's not
obviously equivalent to one of these definitions:

branchMonad mb t f = do { b <- mb; if b then t else f }
branchApplicative = liftA3 (\b t f -> if b then t else f)

I can certainly believe that such an example exists, but I can't think
of one. In particular, it doesn't seem to be possible for ZipList (the
obvious almost-instance does not quite do what you may think it does).
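
(For reference, the class shape I'm assuming throughout, reconstructed
from Brent's description, so treat the exact signature as my guess:

class Applicative f => Branching f where
    branch :: f Bool -> f a -> f a -> f a

where branch b t f selects between t and f according to b.)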

If memory serves me, sometimes the limited nature of Applicative
allows a more efficient implementation than Monad, and in such cases I
can easily believe that branch could be made more efficient than the
generic form based on Monad. But that's not terribly persuasive for
creating a type class, I don't think.

- C.



Re: [Haskell-cafe] Can it be proven there are no intermediate useful type classes between Applicative Functors Monads?

2011-06-06 Thread Casey McCann
On Mon, Jun 6, 2011 at 5:32 PM, Matthew Steele mdste...@alum.mit.edu wrote:
 I think Branching is to Monad what ArrowChoice is to ArrowApply.
 Branching allows the shape of the computation to depend on run-time
 values (which you can't do with Applicative), but still allows only a
 finite number of computation paths.  By purposely making a functor an
 instance of Branching but _not_ of Monad, you allow it to have some
 amount of run-time flexibility while still retaining the ability to
 statically analyze the effects of a computation in that functor.

Yes, that's what I gathered as well. It's a straightforward concept.

My question is whether there exist instances of Branching that are
distinct in results from an implementation in terms of a Monad
instance, rather than merely allowing a more efficient implementation.
Not that the latter isn't worthwhile, but to make a case for something
like Branching as an intermediate between Applicative and Monad one
would expect it to differ from both in what types have possible
instances.

ArrowChoice and ArrowApply are conceptually distinct and I expect
there are instances of the former that have no possible instance for
the latter. Branching vs. Monad I am much less certain of.

 branchApplicative = liftA3 (\b t f -> if b then t else f)

 This definition doesn't satisfy the laws given for the Branching
 class; it will execute the effects of both branches regardless of
 which is chosen.

How would it violate the laws for Identity or Reader?

- C.



Re: [Haskell-cafe] What's the advantage of writing Haskell this way?

2011-05-30 Thread Casey McCann
On Mon, May 30, 2011 at 9:01 AM, John Ky newho...@gmail.com wrote:
 instance Monoid (Stream a) where
     mempty = Chunks mempty
     mappend (Chunks xs) (Chunks ys) = Chunks (xs ++ ys)
     mappend _ _ = EOF

 I guess, it shows my lack of experience in Haskell, but my question is, why
 is writing the code this way preferred over say writing it like this:

I don't care for the inconsistency in this example, using both mempty
and (++). Your version is at least consistent, but I'd actually prefer
to use mappend instead of (++) here, because it makes it clear that
this isn't actually defining a new Monoid instance, just translating
an existing instance for the constructor parameter to work for the
surrounding data type.

- C.



Re: [Haskell-cafe] Function application layout

2011-05-26 Thread Casey McCann
2011/5/26 Daniel Fischer daniel.is.fisc...@googlemail.com
 As far as I'm concerned, a left-associative version of ($) would sometimes
 be nice (on the other hand, right-associativity of ($) is sometimes also
 nice), but usually, I don't find parentheses too obnoxious.

I have a whole set of function application/composition/lifting
operators that I'm rather fond of, but it involves replacing some
standard operators, and in particular changes the fixity of ($)
drastically, so it's something I only use in small bits of personal
code that I'll never publish anywhere. The main idea is that there are
two groups of operators, each of which are internally pretty
consistent and vaguely generalized from standard operators.

Very low precedence, function application associates toward argument:
f <| x = x |> f = f x, (<<<) and (>>>) for composition, and (=<<),
(>>=), (<=<), and (>=>) behaving as expected. (<|) takes the place of
standard ($), and (|>) allows a pipe forward style similar to using
(>>=).
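
For what it's worth, the core of the first group is tiny (these are my
own operators, so take the exact symbols as illustrative):

infixr 0 <|
infixl 0 |>

(<|) :: (a -> b) -> a -> b
f <| x = f x

(|>) :: a -> (a -> b) -> b
x |> f = f x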

Mid-to-high precedence, function application associates away from
argument: ($) has the same fixity as (<$>) and (<*>), as do the
binding operators (=<$) and (=<*), the latter being a function I
haven't seen before that does about what you'd expect from the name.
Composition is usually just (.) in most cases because of the style in
which I use these.

What it amounts to is that the first group is used mostly as
pseudo-syntax delimiting expressions that would otherwise be
parenthesized, while the second group is used for writing expressions
that would conceptually be simple one-liners if not for involving
lifting into some sort of Functor. The choice of symbols makes it easy
to remember which is which, even if it's not perfectly consistent.

Mostly, though, this is probably just another reason why my personal
coding style would be bafflingly opaque to most people, so oh well.

- C.



Re: [Haskell-cafe] Names for pretty-printing combinators

2011-05-25 Thread Casey McCann
One drastic approach I've used in personal libraries--operator-heavy EDSLs
specifically--is to define everything first with alphanumeric names, then
put operators in their own modules. In some cases I'd have three such
modules: One providing a minimal set of operators that don't clash with
anything significant, one providing a larger set of operators that clash
with one or more common modules (often deliberately, e.g. duplicating Arrow
combinators for some type with similar semantics but no valid Arrow
instance), and one providing a bunch of gratuitous Unicode operators that
look pretty in my code editor but I don't know how to type in GHCi.

I'm not sure if I've seen that approach anywhere else, however, so it might
not be something most people would care for.
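
Concretely, the layout is something like this (module, type, and
operator names all invented for illustration):

-- Pretty/Base.hs: alphanumeric names only
module Pretty.Base where

newtype Doc = Doc String

beside :: Doc -> Doc -> Doc
beside (Doc a) (Doc b) = Doc (a ++ b)

-- Pretty/Operators.hs: the operators live in their own module
module Pretty.Operators ((<->)) where

import Pretty.Base

(<->) :: Doc -> Doc -> Doc
(<->) = beside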

- C.


On Wed, May 25, 2011 at 10:45 AM, Stephen Tetley
stephen.tet...@gmail.comwrote:

 Hi Ivan

 empty is fine as is, obviously with a Monoid instance as well, people
 can choose to use mempty which removes potential name clashes.

 I was thinking of (<$>) and (<+>), though I was forgetting that (<+>)
 is actually ArrowPlus.

 If you are mostly gifting angles as notation to Applicative, maybe a
 pretty print library can live with fewer infix ops? Though you could
 still define fixities for the binary cases:

 infixr 6 `sep1`

 Or maybe steal the notation from Vector-Space (^+^) as cases where you
 would want both imported at the same time may be uncommon.

 Although trivial proposals often get rejected for changes to Base, I
 suspect a proposal for (<>) as a synonym for `mappend` might have
 legs, it would certainly have a lot of support...




Re: [Haskell-cafe] Random thoughts about typeclasses

2011-05-16 Thread Casey McCann
On Mon, May 16, 2011 at 8:10 AM, Robert Clausecker fuz...@gmail.com wrote:
 I found out, that GHC implements typeclasses as an extra argument, a
 record that stores all functions of the typeclass. So I was wondering,
 is there a way (apart from using newtype) to pass a custom record as the
 typeclass record, to modify the behavior of the typeclass? I thought
 about something like this:

Would GHC's implicit parameter extension possibly suit your purposes
here? Your example would translate as:

{-# LANGUAGE ImplicitParams #-}

type ShowClass a = a -> String

f :: (?showC :: ShowClass a) => [a] -> String
f x = x >>= ?showC

g :: [Int] -> String
g = let ?showC = show in f

g2 :: [Int] -> String
g2 = let ?showC = (return . toEnum) in f

...where:

> g [72, 97, 115, 107, 101, 108, 108]
"7297115107101108108"
> g2 [72, 97, 115, 107, 101, 108, 108]
"Haskell"

Clearly this doesn't allow you retrofit such functionality onto
existing code using existing type classes, but I'd be wary of doing
that anyway--type class instances are not something that code will
expect to have changing out from under it. Otherwise, this seems to be
exactly what the implicit parameters extension is designed for,
judging from the way the GHC user's guide describes it.

- C.



Re: [Haskell-cafe] Open CV or alternate image processing library for Haskell on windows?

2011-05-16 Thread Casey McCann
On Mon, May 16, 2011 at 8:37 AM, Gregory Guthrie guth...@mum.edu wrote:
 I wanted to look into using Haskell for an introductory Image Processing 
 class, but the main package used for such things (OpenCV) does not appear to 
 be available for windows systems.

 Is there some other good option for image processing in Haskell, or has 
 anyone ported openCV to a windows Leksah environment?

Which package are you having difficulty with? OpenCV is a library
written in C/C++ and appears to work on Windows, and there looks to be
two different packages on Hackage providing bindings to it, neither of
which seems to have any issues with Windows. One does rely on the unix
package, but my understanding is that Cygwin is sufficient for
that--not certain about the details, though. I haven't used any of
these packages or OpenCV itself personally, so there may be further
issues I'm not seeing, but I would guess that any difficulty you've
encountered was a matter of build tools and system configuration, not
the libraries themselves.

I have found it necessary on multiple occasions to do manual tweaks
and jury-rigging when installing FFI bindings from Hackage on Windows,
as opposed to the typically seamless process of installing an external
library from standard repositories on Ubuntu and then bindings from
Hackage. Admittedly this may be due in large part to the horrendous
condition of build tools on my Windows system. I believe I have two
different GHCs and no less than four copies of GCC in different
locations and I've given up on making sense of it since I'm rarely on
my Windows machine when coding Haskell anyway.

Incidentally, have you looked at what functionality the bindings
packages offer? Both that I saw on Hackage seem to advertise
themselves as emphatically not production-ready code and probably
don't expose all the features of OpenCV. Before you put a lot of time
into fixing build problems, you may want to verify that they even
provide what you need. As a last resort, writing your own Haskell FFI
bindings to a C library is sometimes tedious but not usually
difficult, and there are tools to help automate the task.
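
To give a feel for the manual route, here's a minimal FFI sketch. The
C function and its name are hypothetical stand-ins; the marshalling
pattern is the general one:

{-# LANGUAGE ForeignFunctionInterface #-}

import Foreign.Ptr (Ptr)
import Foreign.C.Types (CDouble, CInt)
import Foreign.Marshal.Array (withArray)

-- Binding for a hypothetical C function:
--   double img_mean(const double *xs, int n);
foreign import ccall unsafe "img_mean"
  c_imgMean :: Ptr CDouble -> CInt -> IO CDouble

-- Marshal a Haskell list into a temporary C array and call through.
imgMean :: [Double] -> IO Double
imgMean xs = withArray (map realToFrac xs) $ \ptr ->
  fmap realToFrac (c_imgMean ptr (fromIntegral (length xs)))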

I'm not aware of any other existing packages in Haskell for image
processing or computer vision. Depending on what you need, you could
write FFI bindings (to OpenCV or something else) or, if you mostly
want to work with raw data instead of using algorithms provided by the
library, there was actually a question on Stack Overflow recently that
may be relevant: http://stackoverflow.com/questions/6006304

- C.



Re: [Haskell-cafe] Exception for NaN

2011-05-16 Thread Casey McCann
On Mon, May 16, 2011 at 3:39 AM, Ketil Malde ke...@malde.org wrote:
 I'm not intimately familiar with IEEE 754, but in any case we'd be in
 good company: R typically lets you select to sort NaNs as greater or
 less than any other values, and sorts non-NaN values correctly also in
 the presence of NaNs, I think.

That's interesting. I wasn't aware of any deliberate breaks with the
standard, and it being R specifically is particularly striking.
Regardless, sticking to the standard behavior is a sensible default.
Floating point values are confusing enough to most people without
being inconsistent as well.

 At any rate, I think we already violate the spec by not having the
 required unordered result for comparisons, and just treating every
 comparison involving a NaN as GT.  I don't think considering NaN as
 e.g. less than -Inf would violate the spec *more*.

Well, it'd be insult to injury. If memory serves me, the standard
behavior is that NaN /= NaN evaluates as true, and every other
comparison evaluates as false. This is fine (other than not being the
expected behavior of Eq and Ord instances) except, yes, the compare
function, which assumes a complete ordering and thus cannot return a
correct result. The slightly more correct thing to do would be having
compare throw an exception on NaN but I wonder if that wouldn't just
cause a different set of headaches.

Why don't we have a partial ordering type class, anyway? Lack of
obvious utility, I suppose.

 This sounds pretty bad, until you consider that you don't even have
 proper equality, so using floating point values as keys in a Map is
 already asking for trouble.  But I would like sorting to work more
 consistently.

 But I guess it is a matter of lipstick on a pig...

How so? Equality on floating point values other than NaN works just
fine and behaves as expected. It's just that they violate all sorts of
algebraic laws when arithmetic is involved so sequences of operations
that should be equivalent aren't, in ways that are highly dependent on
the values being operated on.
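
For instance, a quick GHCi session showing the associativity failure
with ordinary Doubles:

Prelude> (0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3)
False
Prelude> (0.1 + 0.2) + 0.3
0.6000000000000001
Prelude> 0.1 + (0.2 + 0.3)
0.6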

What it mostly boils down to is that Haskell makes you expect things
to be simple and consistent and clearly-defined and floating point
values just simply aren't. They're a messy compromise for the sake of
computational speed. If you want numbers that act like they should,
there's always Integer and Rational.

- C.



Re: [Haskell-cafe] Exception for NaN

2011-05-14 Thread Casey McCann
On Sat, May 14, 2011 at 9:14 AM, Ketil Malde ke...@malde.org wrote:
 Maybe not terribly brilliant, but wouldn't it improve things slightly if
 NaN was considered less or greater than any other value (possibly
 excluding infinities)?

It would improve things in the sense of giving well-behaved instances
for Eq and Ord, yes. It would not improve things in the sense that it
would violate the IEEE floating point spec.

The real issue here, I think, is that we're expecting Ord to serve two
different purposes: Sometimes a logical ordering on the type according
to its semantics, sometimes an essentially arbitrary ordering for the
purpose of structuring data (e.g., for use as a key in Data.Map). For
most types, either there is no meaningful semantic ordering or the
obvious ordering serves both purposes.

In the case of floating point values, the semantics of the type are
that it is not fully ordered and thus arguably shouldn't be an
instance of Ord at all--in particular, there's nothing compare can
correctly return when given NaN. An arbitrary ordering, on the other
hand, could be established easily so that things like Data.Map would
work correctly, but only by breaking the semantics of the type (even
more than is already the case due to things like compare, that is).

The current situation is an awkward compromise that mostly works and
does what you want in most cases except when you get weird silent bugs
due to, say, minimum returning a non-minimal value, or Data.Map.lookup
returning Nothing for a key that actually exists, or whatever else.
Alternative approaches are generally going to be either horribly
inconvenient, cause as many problems as they solve, or require
massively disruptive changes to the standard library.
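
To make the Data.Map example concrete, a GHCi sketch of the silent bug
in question--since compare on NaN yields GT, the lookup walks the
wrong way down the tree:

Prelude> let nan = 0/0 :: Double
Prelude> Data.Map.lookup nan (Data.Map.fromList [(nan, "here")])
Nothing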

In short, file this one on the shelf next to "why is Num designed the
way it is?"

- C.



Re: [Haskell-cafe] Exception for NaN

2011-05-13 Thread Casey McCann
On Fri, May 13, 2011 at 4:48 PM, Luke Palmer lrpal...@gmail.com wrote:
 On Thu, May 12, 2011 at 5:50 PM, Daniel Fischer
 daniel.is.fisc...@googlemail.com wrote:

 Prelude Data.List> maximum [0,-1,0/0,-5,-6,-3,0/0,-2]
 0.0
 Prelude Data.List> minimum [0,-1,0/0,-5,-6,-3,0/0,-2]
 -2.0
 Prelude Data.List> sort [0,-1,0/0,-5,-6,-3,0/0,-2]
 [-6.0,-5.0,-2.0,NaN,-3.0,NaN,-1.0,0.0]

 Wow, that's the best example of NaN poison I've seen.

Somewhat less impressive, but would everyone expect these functions to
be equivalent up to performance characteristics?

import Data.List (nub)

f :: (Eq a) => [a] -> [a]
f = nub . concatMap (replicate 5)

g :: (Eq a) => [a] -> [a]
g = nub

If the answer that springs to mind is yes, for any well-behaved
instance of Eq, well...

Bonus question: Should this function ever return False?

h :: (Ord a) => a -> a -> Bool
h x y = case compare x y of
  GT -> x > y
  EQ -> x == y
  LT -> x < y
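
For the record, how those play out with NaN in a GHCi session:

Prelude Data.List> let nan = 0/0 :: Double
Prelude Data.List> nub (concatMap (replicate 5) [nan])
[NaN,NaN,NaN,NaN,NaN]
Prelude Data.List> nub [nan]
[NaN]
Prelude Data.List> compare nan nan
GT
Prelude Data.List> nan > nan
False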

- C.



Re: [Haskell-cafe] Advertisement: the Haskell Stack Overflow Q A site

2011-05-04 Thread Casey McCann
On Wed, May 4, 2011 at 5:02 PM, James Cook mo...@deepbondi.net wrote:
  Haskell, on the other hand, has a small enough volume that people can at
 least skim the ones from the past day or two in a fairly small amount
 of time.

They can and, in fact, do. Or at least I do, at any rate, even when I
don't really have time to answer any. And I suspect Don Stewart does
as well since by himself he's something like 20% of the answers by
volume. Suffice it to say, questions with the [haskell] tag don't get
overlooked.

Overall, based on my experiences and glancing at the question lists,
I'd estimate that most questions tagged [haskell] get at least 50
views and at least one useful answer within a few hours of being
posted, depending on time of day and how many of the more prolific
answerers are around. There're maybe 25 questions with no answers at
all, which is less than 1% of the questions in the tag, and of the
unanswered questions many are either very poorly thought out, very
difficult to answer, or highly specific to some tool or library that
not everyone may be familiar with.

- C.



Re: [Haskell-cafe] Python is lazier than Haskell

2011-04-29 Thread Casey McCann
On Fri, Apr 29, 2011 at 12:42 AM, Gregg Reynolds d...@mobileink.com wrote:
 On Thu, Apr 28, 2011 at 11:38 PM, Ben Lippmeier b...@ouroborus.net wrote:
 Laziness at the value level causes space leaks, and laziness at the type
 level causes mind leaks. Neither are much fun.

 If the designers could find a way to support laziness at the programmer
 level I for one would be very grateful.

I believe I saw a specification for such a language once.
Unfortunately it was also lazy at the ontological level, and since
nothing logically required the spec to exist it actually didn't.

- C.



Re: [Haskell-cafe] Haskell and the Software design process

2010-05-04 Thread Casey McCann
On Tue, May 4, 2010 at 9:09 AM, Limestraël limestr...@gmail.com wrote:
 Are there other methods than Maybe or exceptions to handle the errors in
 Haskell? Is the monad Error(T) useful?

I believe the usual Error monad is just (Either e), with Left
indicating failure. It's the same idea as Maybe, only returning
information about the error instead of simply Nothing. At any rate,
earlier you said:

 The Maybe method is the simplest but it also comes with a little overhead, 
 since you always have to unpack the Maybe a value returned

...which sounds odd to me. If you have a complex computation, in which
many subcomputations can fail, it makes perfect sense to lift the
whole thing into an error-handling monad. Particularly at points where
nothing sensible can be done with a Left or Nothing, unpacking the
result is unnecessary; instead, leave it as is and continue the
computation inside the monad. Then, unpack the value at whatever point
that you actually need the result, or can handle errors gracefully.
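
A minimal sketch of that style (the names here are mine; note that the
Monad instance for Either comes from Control.Monad.Error in older
library versions, and lives in base in newer ones):

safeDiv :: Int -> Int -> Either String Int
safeDiv _ 0 = Left "division by zero"
safeDiv x y = Right (x `div` y)

-- Each step can fail; the first Left short-circuits the rest, and
-- nothing gets unpacked until a caller actually wants the result.
calc :: Int -> Int -> Int -> Either String Int
calc a b c = do
  x <- safeDiv a b
  y <- safeDiv x c
  return (x + y)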

I'd actually argue that error handling with Maybe/Either is the single
best, most persuasive use for monadic structure in code--it's
certainly the thing I miss most in other languages. For something so
simple (the entire implementation of Maybe with instances is shorter
than this message!) it's amazingly useful, letting you simplify code
while simultaneously having static assurance that you won't get
runtime errors because of not checking for failure somewhere.

Using fromJust or maybe (error "foo") ... seems bizarre, as if
trying to recreate in Haskell the mind-numbing experience of dealing
with unexpectedly null pointers being dereferenced. For that matter,
null references tend to be frustrating to debug for exactly the same
reason that Haskell errors can be: Crashing only when and if the value
is actually needed, not when the null pointer or _|_ is first
introduced.

- C.


Re: [Haskell-cafe] Haskell and the Software design process

2010-05-04 Thread Casey McCann
On Tue, May 4, 2010 at 2:43 PM, Gregory Crosswhite
gcr...@phys.washington.edu wrote:
 I definitely like that idea.  :-)  Is this similar to the notion of
 dependent types?

That's where things tend to wind up eventually, yes. Although, with
Haskell as it stands, a great deal of unused information is already
present outside the type system, sufficient to automatically prove a
range of easy properties of code. For instance, consider Neil
Mitchell's Catch tool[0], which seems to handle things like the
secondElement function discussed. Of course, actually writing such a
checker is not so easy, and encoding the results of something like
Catch in the type system leads to either convoluted type
metaprogramming hacks or to new extensions creeping slowly toward full
dependent types.
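
For a taste of what pushing such a property into the types looks like,
here's a minimal GADT sketch (my own illustration, not how Catch
works):

{-# LANGUAGE GADTs, EmptyDataDecls #-}

data Z
data S n

data Vec n a where
  Nil  :: Vec Z a
  Cons :: a -> Vec n a -> Vec (S n) a

-- Total by construction: only vectors with at least two elements are
-- accepted, so no runtime check is needed.
secondElement :: Vec (S (S n)) a -> a
secondElement (Cons _ (Cons x _)) = x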

- C.

[0] http://community.haskell.org/~ndm/catch/


Re: [Haskell-cafe] Strict type system allows for a maximum number of programming errors to be caught at compile time.

2010-05-03 Thread Casey McCann
On Tue, May 4, 2010 at 12:13 AM, Ivan Miljenovic
ivan.miljeno...@gmail.com wrote:
 On 4 May 2010 13:30, Luke Palmer lrpal...@gmail.com wrote:
 Here is a contrived example of what I am referring to:

 prefac f 0 = 1
 prefac f n = n * f (n-1)

 fac = (\x -> x x) (\x -> prefac (x x))

 I can't work out how this works (or should work rather); is it meant
 to be using church numerals or something (assuming that they have been
 made an instance of Num so that - and * work)?

Looks like a variation on H. Curry's fixed-point combinator to me, e.g.:

y f = (\x -> f (x x)) (\x -> f (x x))

Which of course is perfectly useful, but not valid in any type system
that excludes absurdities. The otherwise-infinite type signature for Y
is circumvented by built-in recursion in Haskell (making halting
undecidable in the process, unfortunately).
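
That is, in Haskell the knot is tied by ordinary recursion, which the
type checker happily accepts--a sketch (Data.Function exports this
fix):

fix :: (a -> a) -> a
fix f = f (fix f)

prefac :: (Integer -> Integer) -> Integer -> Integer
prefac f 0 = 1
prefac f n = n * f (n - 1)

-- The quoted prefac then works directly:
fac :: Integer -> Integer
fac = fix prefac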

- C.


Re: [Haskell-cafe] Re: Sorting Types

2010-04-27 Thread Casey McCann
On Tue, Apr 27, 2010 at 10:20 AM, John Creighton johns2...@gmail.com wrote:
 I was wondering if it is possible to sort types in Haskell and if so what
 language extension I should use.

There are multiple ways that some manner of ordering could be defined
on types. A structural definition is one method; ordering Peano
numerals is a simple example, but the same idea applies more generally
to examining the size/depth/etc. of nested constructors. Another
option is defining an explicit ordering on some specified group of
types; this allows greater flexibility at the cost of needing to
manually add types to the ordering.

At any rate, both involve fairly straightforward type-level
programming, possible (and in fact rather easy) to accomplish using
functional dependencies with overlapping and undecidable instances. A
more limited set of extensions is probably viable for some types of
ordering, possibly at the expense of some verbosity or difficulty.
Unfortunately I'm not very practiced with type families, so I'm not
sure how it translates to those; the lack of overlapping instances
makes some things awkward or impossible using type families, alas.

 I get the following error:

   Illegal type synonym family application in instance: And a (LT a i)
   In the type synonym instance declaration for 'Sort'

Well, as it says, type synonym instances can't be used as parameters
to type synonym instances. What constitutes a legal instance
declaration is described in the GHC user's guide, section 7.7.2.2., if
you want clarification.

To keep this from being too much doom and gloom, here's a
quick-and-dirty example of one way to define an arbitrary explicit
ordering on a group of types, that might give you some ideas.

First, all the usual extensions that announce: Warning! We're about
to abuse the type system!

 {-# LANGUAGE TypeOperators #-}
 {-# LANGUAGE OverlappingInstances #-}
 {-# LANGUAGE UndecidableInstances #-}
 {-# LANGUAGE MultiParamTypeClasses #-}
 {-# LANGUAGE FunctionalDependencies #-}
 {-# LANGUAGE FlexibleInstances #-}
 {-# LANGUAGE FlexibleContexts #-}

My preferred type-level booleans, terse yet friendly.

 data Yes = Yes deriving (Show)
 data No = No deriving (Show)

We'll need some very simple type-level lists. Nothing fancy here.

 data Nil = Nil deriving (Show)
 infixr 5 :*:
 data a :*: b = a :*: b deriving (Show)

Four possible results for a comparison. Note the presence of
Unordered, as the comparisons are defined only on an explicit group
of types. The TypeOrdering class is used only to convert type-level
results to equivalent values, for easier inspection.

 data Less = Less deriving (Show)
 data Equal = Equal deriving (Show)
 data Greater = Greater deriving (Show)
 data Unordered = Unordered deriving (Show)
 class TypeOrdering t where toOrdering :: t -> Maybe Ordering
 instance TypeOrdering Less where toOrdering _ = Just LT
 instance TypeOrdering Equal where toOrdering _ = Just EQ
 instance TypeOrdering Greater where toOrdering _ = Just GT
 instance TypeOrdering Unordered where toOrdering _ = Nothing

Now for the meaty bits. The type compare function takes three
arguments: a type-level list that specifies the ordering to use, and
two types to compare. The list is treated as a comprehensive
least-to-greatest enumeration of the ordered types; if either type
argument isn't in the list, the result will be Unordered.

The general structure is just simple recursion, obfuscated by
implementation details of type-level programming. Roughly speaking,
conditionals in type-level functions are most conveniently written by
calling another type function whose instances are determined by the
conditional expression; this is to avoid having GHC evaluate both
branches of the conditional, which would lead to unnecessary
computation at best and compiler errors at worst.

To start with, if the list is empty, the types are unordered;
otherwise, we compare both types to the head of the list and branch on
the results.

 class Compare ord x y r | ord x y -> r where
   tComp :: ord -> x -> y -> r
   tComp = undefined
 instance (
     TypeEq o x bx,
     TypeEq o y by,
     Comp' bx by ord x y r
   ) => Compare (o :*: ord) x y r
 instance Compare Nil x y Unordered

 class Comp' bx by ord x y r | bx by ord x y -> r

If both types and the list head are all equal, the result is obviously Equal.

 instance Comp' Yes Yes ord x y Equal

If neither is equal to the list head, recurse with the list tail.

 instance (Compare ord x y r) => Comp' No No ord x y r

If one of the types is equal to the list head, that type will be the
lesser if an ordering exists at all. We select the optimistic result
and call another function with the type we didn't find.

 instance (Comp'' ord y Less r) => Comp' Yes No ord x y r
 instance (Comp'' ord x Greater r) => Comp' No Yes ord x y r

If the list is empty, the result is unordered, as before. Otherwise,
we compare the type with the list head and branch with yet another
function.

 class Comp'' ts t o r | 

Re: [Haskell-cafe] Proper Handling of Exceptional IEEE Floating Point Numbers

2010-04-25 Thread Casey McCann
On Sun, Apr 25, 2010 at 9:08 PM, Richard O'Keefe o...@cs.otago.ac.nz wrote:
 It seems to me that there's a choice here between (...)

Nice! That's a very comprehensive summary of the situation regarding
issues of correctness. I do wonder, though, what (if any) are the
performance implications?

Editorializing a bit, I would actually go so far as to say that, in
the general case, using floating point values at all is a mistake.
Programmers failing to use them properly has been a small but
consistent source of bugs, even in low-level languages where one would
expect familiarity with their behavior to be the norm. The situation
is even worse in languages that are interpreted, VM-based, or
otherwise further removed from the hardware level, where I've seen
people who thought that IEEE-specified behavior was a bug in the
language runtime.

To that end, I'd make a simultaneously conservative and radical
suggestion: Regard floating point types as, first and foremost, a
performance optimization, and strongly discourage their use as
general-purpose fractional numbers. Aside from issues of backwards
compatibility and such, I'd even advocate removing floating point
types from the Prelude and instead require an explicit import from a
separate module in the standard libraries. Use of floating point
values would, ideally, be limited to calculation-heavy code which
spends a non-trivial amount of its time doing fractional arithmetic,
with an assumption that anyone writing code like that ought to
understand both IEEE floats and Haskell's handling of them well enough
to do it correctly.

Given that distinction, I'd say that the order of priorities for
floats should be 1) anything that supports writing high-performance
code 2) accuracy to IEEE standards as the expected behavior 3)
minimize the ugliness from a Haskell perspective as much as possible
without harming the first two. What that works out to, I'm not sure,
but I'd tolerate creating _|_s or breaking Ord's semantics if that's
what it takes.

Alas, I expect that's far too disruptive of existing code to be a
viable approach.

- C.


Re: [Haskell-cafe] Proper Handling of Exceptional IEEE Floating Point Numbers

2010-04-23 Thread Casey McCann
On Fri, Apr 23, 2010 at 3:21 AM, Barak A. Pearlmutter ba...@cs.nuim.ie wrote:
 ... An invalid comparison evaluating to _|_ is arguably more
 correct, but I personally find the idea of introducing more bottoms
 rather distasteful.

 Too late!  NaN is pretty much the _|_ of IEEE Floating Point.

Yes, of course. But I don't have to like it...

What annoys me is that, conceptually, the silently-propagating NaNs
more strongly resemble Nothing, with the arithmetic functions lifted
into Maybe, Applicative-style. Likewise, comparisons could sensibly be
interpreted as returning Maybe Bool or Maybe Ordering. But there's no
good way to work that into Haskell without making floats incredibly
awkward to use.
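
Spelled out, the analogy looks something like this (a sketch, not a
proposal):

import Control.Applicative (liftA2)

type F = Maybe Double  -- Nothing plays the role of NaN

addF :: F -> F -> F
addF = liftA2 (+)

cmpF :: F -> F -> Maybe Ordering
cmpF = liftA2 compare

-- Nothing propagates through arithmetic, and a comparison involving
-- it simply has no Ordering to give:
--   addF (Just 1) Nothing  evaluates to  Nothing
--   cmpF Nothing (Just 2)  evaluates to  Nothing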

 In the context of
 Haskell, which does not have the issue of needing to relax strictness
 just for NaN, I think the right thing would be to have compare give
 _|_, and maybe also <, >, ==.  After all, NaN is outside the carefully
 defined total ordering of all other IEEE floating point values
 including +/- Infinity.

The reason this makes me unhappy is that evaluating bottoms is a
terrible way to deal with error conditions in pure code. It also makes
using floating point values in generic code written for Ord dangerous,
because the generic code won't (and can't) do anything to check
whether calling compare will produce _|_ even if both arguments are
already known to be fully evaluated.

 (By the stringent criteria some people are giving for allowing
 something to be Eq and Ord, Char would also be stripped of them, since
 after all Char includes _|_.  Sort of.)

The difference, of course, is that getting _|_ as a result of using
_|_ is tolerable; getting _|_ as a result of using only non-_|_ values
makes me sad. To my mind, the fewer ways there are to accidentally
introduce _|_, the better.

- C.


Re: [Haskell-cafe] Proper Handling of Exceptional IEEE Floating Point Numbers

2010-04-22 Thread Casey McCann
On Thu, Apr 22, 2010 at 11:34 AM, Barak A. Pearlmutter ba...@cs.nuim.ie wrote:
 Comparison of exceptional IEEE floating point numbers, like NaN, seems
 to have some bugs in ghci (version 6.12.1).

Arguably, the bug in question is the mere existence of Eq and Ord
instances for IEEE floats. They don't, can't, and never will work
correctly. A similar topic was discussed here not too long ago; IEEE
floating point so-called numbers lack reflexive equality and
associativity of addition and multiplication, among other properties
one might take for granted in anything calling itself a number. If
memory serves me, someone provided this informative link in the
previous thread: http://docs.sun.com/source/806-3568/ncg_goldberg.html

That said, given that Haskell seems to be following the
well-established tradition of willfully disregarding the inconvenient
aspects of floats as far as the type system is concerned, I would say
that compare returning GT is particularly unintuitive. If something
must stand in for a result of "arguments are non-comparable", EQ is
marginally more appealing, as it is expected to be reflexive, as
"non-comparable" is. An invalid comparison evaluating to _|_ is
arguably more correct, but I personally find the idea of introducing
more bottoms rather distasteful. On the other hand, crashing the
program is usually better than incorrect results, so in this case it's
probably justified.
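
To make the inconsistency concrete, a GHCi sketch of the behavior the
quoted message ran into:

Prelude> let nan = 0/0 :: Double
Prelude> compare nan 1
GT
Prelude> nan > 1
False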

The only correct solution would be to strip floating point types of
their instances for Ord, Eq, and--therefore, by extension--Num. For
some reason, no one else seems to like that idea. I can't imagine
why...

- C.


Re: [Haskell-cafe] Functional Dependencies conflicts

2010-04-18 Thread Casey McCann
On Sun, Apr 18, 2010 at 5:01 AM, Limestraël limestr...@gmail.com wrote:
 There must be some kind of a private joke I don't get...

 BTW, all you've said is pretty scaring...

And somewhat exaggerated, of course. Reasonable uses exist for all
three extensions, but they're firmly in the category of avoid unless
you know what you're doing. Well, at least two of them are, I'm not
sure when IncoherentInstances is a good idea (if ever). It's worth
experimenting with them in some toy code for a while before trying to
use them for real.

In any case, if you do use those extensions, they can usually be
isolated to some extent. A library can use them internally without
requiring client code to enable them, and in an application use can be
restricted to just a few modules, enabling the extensions on a
per-module basis.

My rule of thumb is the "sausage principle"--outside code should be
able to act as if GHC somewhere picked up a more expressive means of
specifying instance heads and/or a smarter termination checker and
carry on blissfully ignorant of by what providence the instances were
obtained. That is, if one eats sausage, it is best to not dwell on how
it is made, so to speak.

 It's strange I can't declare a generic instance for Binary types... I
 thought I was trying to do something quite common in Haskell.
 Apparently I'm still a young padawan with many things to learn.
 Anyway, it's not the first time I get worked up with multi-param typeclasses
 and functional dependencies

What you're trying to do is perfectly reasonable, unfortunately it
doesn't mesh well with how type classes/instances work. A lot of the
reason why the distressing extensions under discussion exist at all is
working around those limitations.

Type families are a start on cleaning up one aspect of the type class
system--namely, the awkwardness of functional dependencies.
Unfortunately, type families don't really help on the how to write
generic but not completely general instances right now, and in fact
are incompatible with overlapping instances, making some things
impossible! I think there have been some discussions of proposals
toward fixing this as well, but I'm not sure what the status of those
are.

- C.


Re: [Haskell-cafe] Functional Dependencies conflicts

2010-04-17 Thread Casey McCann
On Sat, Apr 17, 2010 at 4:01 PM, Limestraël limestr...@gmail.com wrote:
 I would have understood the error if GameObject was also an instance of
 Binary (then the two instances would match), but it's not the case...

As Daniel Fischer has mentioned, presumably a Binary instance could
later be written for GameObject; even if you have no intention of
doing so, GHC considers the possibility. In other words, it's
sufficient that such an instance could exist, not that it actually
does.

In general: Instance selection and context checking are separate and
occur in that order, thus contexts generally can't influence instance
selection (except by using OverlappingInstances and strategically
confusing instance heads, ensuring that GHC can't make any sense of
your code until considering the contexts).
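
A small sketch of that two-step behavior (my own example, not from the
original thread):

{-# LANGUAGE FlexibleInstances, UndecidableInstances #-}

class Describe a where
  describe :: a -> String

-- The head Describe a matches *every* type; the Show context is only
-- checked after GHC has already committed to this instance.
instance Show a => Describe a where
  describe = show

data Opaque = Opaque  -- deliberately no Show instance

-- describe Opaque is rejected with "No instance for (Show Opaque)",
-- not "No instance for (Describe Opaque)"--selection happened first.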

Unfortunately, anything involving extremely generic instances with
some constraint tends to be very difficult to construct, because of how
this works. This tends to include things like "default instances for
types not covered by specific ones", "making all instances of X also
instances of Y", "fundep type predicates based on class membership",
and so on. Type hackery can often get you most of what you want, but
it gets awkward fast.

- C.


Re: [Haskell-cafe] Re: cabal: other-modules

2010-04-15 Thread Casey McCann
On Thu, Apr 15, 2010 at 10:45 AM, Dougal Stanton
dou...@dougalstanton.net wrote:
 'Morally' seems just the perfect word for this occasion --- concerned
 with right or proper conduct. In this case, potential discrepancies
 between the files that cabal 'knows' about when issuing different
 commands; or the sense of deceit when it 'appears' to work only for
 faults to appear further down the line.

Beyond that, there are long established traditions in some parts of
programming and mathematics for using ethical/moral terms to describe
qualities that go beyond just technical requirements or correctness;
"this program should do the Right Thing", "that bit of code is evil",
and so on. Actually defining this moral sense is difficult, though,
and it varies somewhat from one person to another, but there seems to
be at least some common, shared understanding. It's about "should" and
"proper" instead of "is" or "must". Personally, I know I've written
code that I'd feel guilty about even if it worked perfectly and no one
else ever saw it--what else would you call that feeling?

From another angle, here's an example of explicit moral terminology
applied to mathematics: http://www.cheng.staff.shef.ac.uk/morality/
The mathematical sense there is slightly different from the
programming sense, I think, but there seems to be some crossover. For
instance, I've gotten the impression that something like a
mathematical moral sense underlies much of the interest in
programming language semantics, FRP, and dependently-typed languages,
though I don't know if the people involved would necessarily call it
that.

- C.


Re: [Haskell-cafe] Re: instance Eq (a - b)

2010-04-14 Thread Casey McCann
On Wed, Apr 14, 2010 at 2:22 PM, Stefan Monnier
monn...@iro.umontreal.ca wrote:
 While we're here, I'd be more interested in a dirty&fast comparison
 operation which could look like:

    eq :: a -> a -> IO Bool

 where the semantics is if (eq x y) returns True, then x and y are the
 same object, else they may be different.  Placing it in IO is not great
 since its behavior really depends on the compiler rather than on the
 external world, but at least it would be available.

What about something like System.Mem.StableName? Various pointer types
from the FFI have Eq instances as well and could potentially be
(mis)used to that end.
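
Something along these lines, say--a sketch where True means definitely
the same object and False means only "not observably shared":

import System.Mem.StableName

eq :: a -> a -> IO Bool
eq x y = do
  sx <- makeStableName x
  sy <- makeStableName y
  return (sx == sy)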

- C.


Re: [Haskell-cafe] Type constrain in instance?

2010-04-09 Thread Casey McCann
On Fri, Apr 9, 2010 at 11:22 AM, Louis Zhuang louis.zhu...@acm.org wrote:
 However GHC only has kinds for class/instance like (* -> * -> *) so we are forced 
 to
 allow all possible types in instance code. I'm not sure if I'm modelling 
 things
 correctly or is there another way to do the same thing?

As far as I know, it is indeed not generally possible to constrain
unmentioned parameters to a type constructor in an instance
declaration. There are workarounds involving modifications to the
class definition, but as you want to use a class from the standard
libraries, that helps very little. For the most part this is by
design; the standard type classes are intended to be fully generic in
their parameters.

However, it seems that you are creating your own special-purpose data
type, so one possible solution presents itself: Constrain the
ChainableFunction type to permit construction only with numeric types.
Simply placing a Num context on the data declaration fails, however,
as this demands a similar constraint on functions using the
type--which is exactly what we're trying to achieve, and so is
spectacularly useless. One conventional solution is to instead conceal
the actual data constructor from client code, instead exposing only
constructor/deconstructor functions with appropriate constraints; the
downsides to this are that client code cannot use pattern matching on
the type, and that internal code must carefully maintain constraints
anywhere the type is used.

A more aesthetically appealing approach, if you're not averse to
language extensions, is GADTs: Place constraints on the CF data
constructor, not the type. With CF the sole constructor the
constraints will be enforced everywhere, and best of all pattern
matching on CF will provide the necessary context--making the
constraint visible even inside the instance declaration for the
supposedly fully-generic (.)!

Alas, it now seems difficult to describe id; it must create a
ChainableFunction with any type (as per the Category class), and
without pattern matching on a ChainableFunction argument it has no way
of getting the constraints. But consider that, for the same reasons,
id has no way of actually doing anything with the type parameters with
which it must construct a ChainableFunction, and thus shouldn't need
to have them at all; further, the semantics of id are quite simple and
essentially independent of its parameterized content. Thus, we can
add another constructor to ChainableFunction, that takes no arguments
and constructs a value of type (ChainableFunction a a), and extend the
definition of (.) to make the behavior of the identity explicit. The
result will look something like this:

{-# LANGUAGE MultiParamTypeClasses, GADTs #-}
import qualified Control.Category as Cat

data ChainableFunction a b where
  CF :: (Num a, Num b) => (a -> b) -> (a -> b) -> ChainableFunction a b
  CFId :: ChainableFunction a a

instance Cat.Category ChainableFunction where
  id = CFId
  CF g g' . CF f f' = CF (g . f) (\a -> f' a * g' (f a))
  CFId . f = f
  g . CFId = g

You've probably noticed that I've been ignoring the Module class.
Unfortunately, the solution thus far is insufficient; a Module
constraint on the CF constructor does work as expected, providing a
context with (Module a b, Module b c), but the result requires an
instance for Module a c, which neither available, nor easily obtained.
I'm not sure how best to handle that issue; if you find the rest of
this useful, hopefully it will have given you enough of a start to
build a complete solution.

- C.


Re: [Haskell-cafe] haskell gsoc proposal for richer numerical type classes and supporting algorithms

2010-04-08 Thread Casey McCann
On Thu, Apr 8, 2010 at 2:09 PM, Edward Kmett ekm...@gmail.com wrote:
 Template Haskell can help dull the pain, but the result seems hardly 
 idiomatic.

Well, since this is dealing with types and type classes, much of the
required boilerplate could also be straightforwardly derived in full
generality using type-level metaprogramming techniques rather than TH,
but the outcome of that would likely be even less tasteful, in the
sense of so many UndecidableInstances that you won't be able to
scratch your nose without running into the Halting Problem. With a
bit of finesse, though, I suspect the result could allow users of the
library to avoid both boilerplate and unnerving GHC extensions.
Compatibility with Prelude classes could probably also be solved this
way.

Still, probably not terribly appealing to most folks.

 The amount of code to define a new Field-like object can balloon to well over 
 a hundred lines, and in the above I didn't even address how to work with 
 near-field-like concepts like Fields and Doubles, which don't support all of 
 the laws but which have the same feel.

I'm somewhat uncomfortable as it is with equivocating between true
mathematical objects and hand-wavy approximations that have hardware
support. Seriously, floating point so-called numbers don't even have
reflexive equality! If anything, it seems like there might be value in
chopping the numeric types apart into fast number-crunchy types and
types with nice algebraic properties, and then enhancing each with
relevant functionality.

- C.


Re: [Haskell-cafe] haskell gsoc proposal for richer numerical type classes and supporting algorithms

2010-04-08 Thread Casey McCann
On Thu, Apr 8, 2010 at 7:58 PM, wren ng thornton w...@freegeek.org wrote:
 They don't?  I am pretty sure that a floating point number is always equal
 to itself, with possibly a strange corner case for things like +/- 0 and
 NaN.

 Exactly. NaN /= NaN.

 Other than that, I believe that let x = ... in x == x is true (because
 they are the same bitfield by definition), however it is easy to have 'the
 same number' without it having the same bitfield representation due to loss
 of precision and the like. To say nothing of failures of other laws leading
 to overflow, underflow, etc.

Indeed. NaN means that equality is not reflexive for floats in
general, only for a subset of them.

Likewise, addition and multiplication are not associative and the
distributive law doesn't hold.

I think commutativity is retained, though. That's something, right?

- C.


Re: [Haskell-cafe] haskell gsoc proposal for richer numerical type classes and supporting algorithms

2010-04-08 Thread Casey McCann
On Thu, Apr 8, 2010 at 8:51 PM, Gregory Crosswhite
gcr...@phys.washington.edu wrote:
 First of all, it isn't clear to me that NaN /= NaN, since in ghci the 
 expression 1.0/0.0 == 1.0/0.0 evaluates to True.  But even if that were the 
 case, I would call that more of a technicality then meaning that equality is 
 not reflexive for floats, since NaN is roughly the floating-point equivalent 
 of _|_, and using the same argument one could also say that reflexivity does 
 not hold in general for equating values of *any* Haskell type since (_|_ == 
 _|_) does not return true but rather _|_.

The difference there is that _|_ generally means the entire program
has shuffled off this mortal coil, whilst a (non-signalling) NaN is,
by specification, silently and automatically propagated, turning
everything it touches into more NaNs. The very sensible purpose of
this is to allow computations on floats to be written in a
straightforward manner without worrying about intermediate errors,
rather than having NaN checks everywhere, or having to worry about
exceptions all the time. In that regard, it's more analogous to all
floating point operations occurring in an implicit Maybe monad, except
with Nothing treated as False by conditionals. (==) <$> Nothing <*>
Nothing indeed does not evaluate to Just True, so in a sense
equality is also not reflexive for types such as Maybe Integer.

Although I like to make fun of floats as being sorry excuses for
numbers, the way they work is actually quite sensible from the
perspective of doing low-level computations with fractional numbers.
Now, if only certain... other... programming languages were as
well-designed, and could silently and safely propagate values like
"Not a Reference" or "Not a Pointer".

- C.


Re: [Haskell-cafe] Hackage accounts and real names

2010-04-05 Thread Casey McCann
2010/4/5 Jonas Almström Duregård jonas.dureg...@gmail.com:
 This being said, I have no problem with this restriction. In fact,
 trying to determine the origin of code before agreeing to distribute
 it sounds like sound procedure.

How so? What does knowing the real name of some code's author tell you
that merely knowing the author's pseudonym doesn't? Particularly when
the information is still unreliable, since any rigorous verification
of identity is likely far more trouble than anyone would want to deal
with here, and any individuals who want to misuse hackage will be the
ones most motivated to deceive.

Not to mention that pseudonymity is overwhelmingly the norm on the
internet. In general, unless it has some reasonable justification like
handling credit cards, a site demanding real names would make me
highly suspicious about what they wanted to do with the information.
In practice, of course, I trust hackage--given that I download and
execute code from it--but deviating from standard expectations for no
apparent reason is rather peculiar.

For what it's worth, a quick web search indicated no such requirement
for uploading packages to RubyGems or the Python cheese shop, and they
seem to do okay.

 When I registered I was prompted to verify my identity by means of my
 university email (as opposed to my gmail account), which would
 complicate using a pseudonym.

I don't have a hackage account, since I'm fairly new to Haskell and
none of my projects are yet in a sufficiently complete state to
warrant distribution. I'd most likely want to use my real name anyway,
but being specifically required to do so is a bit off-putting, and
having to verify it (A pseudonymous gmail account isn't good enough?
Really?) would quite possibly irritate me enough to decide it isn't
worth it. I do this for fun, after all.

Is the purpose of hackage to be an open community package index that
encourages general contributions, or something more limited?


Re: [Haskell-cafe] Metaprogramming in Haskell vs. Ocaml

2010-04-05 Thread Casey McCann
On Mon, Apr 5, 2010 at 7:05 PM, Jason Dagit da...@codersbase.com wrote:
 On Mon, Apr 5, 2010 at 12:27 PM, Jacques Carette care...@mcmaster.ca
 wrote:
 2. people who care about types use a typed meta-language (like metaocaml)
 instead of an untyped template layer atop a (fantastic!) typed language.

 Are you implying that template haskell is not typed?

Not to speak for Jacques, but my impression is that while TH itself is
typed--it's just more Haskell after all--it doesn't do much to prevent
you from generating code that is not well-typed. Or even well-formed,
for that matter; my initial attempts to learn how to use TH produced
quite a few "that's impossible!" errors from GHC (I do not think that
word means what it thinks it means).

There's also type-level metaprogramming, as in e.g. HList, which is
almost completely untyped. I have some personal library code that
implements a simple meta-type system and it's a huge, horrid, painful
mess for something with an expressive power no better than System F.

In contrast, MetaOCaml seems to be some variety of a multi-stage
system where metaprogramming blends smoothly into regular
programming with a single, consistent type ensuring type safety at all
points.

- C.


Re: [Haskell-cafe] Correct way to record state in OpenGL?

2010-04-04 Thread Casey McCann
On Sun, Apr 4, 2010 at 5:03 AM, Mark Spezzano
mark.spezz...@chariot.net.au wrote:
 What is the correct way to record custom state when using OpenGL?

 By this, I refer to, say, properties of a square--say its x,y coordinates as 
 it moves across the screen. This requires that the program keep track of the 
 object's state as it moves. These coordinates are _not_ part of the OpenGL 
 state machine, so is the correct way to use the State monad?

The State monad effectively just adds an extra argument and result to
each function and connects them together for you; the type of a
function (a -> State s b) translates into (a -> s -> (b, s)). There's
nothing magic in there, and nothing truly stateful in the sense of
OpenGL's "it's turtles all the way down" approach to mutable state,
but any state-keeping you could reasonably do by adding more
arguments/results to pure functions can usually be done more elegantly
with State. Of course you can't really do much with OpenGL outside of
IO, so you'd probably want a StateT transformer on top of the IO
monad.
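
A skeletal version of that arrangement (the names and the stubbed
event loop are mine):

import Control.Monad (replicateM_)
import Control.Monad.State

data Square = Square { posX :: Double, posY :: Double }

type App = StateT Square IO

step :: App ()
step = do
  modify (\s -> s { posX = posX s + 1 })  -- advance the square
  s <- get
  liftIO (putStrLn ("draw at " ++ show (posX s, posY s)))  -- GL calls go here

main :: IO ()
main = evalStateT (replicateM_ 3 step) (Square 0 0)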

The downside is that code outside of the transformed monad can't
change the state, and can't access it except as an explicit argument.
Normally this is a good thing, but it means that any main event loop
in the application must be in the transformed monad, and thus under
your control--which isn't the case if you're using GLUT.

 If so--or if not so--how would I proceed to keep track of the state of my 
 objects as they move about the screen? Maybe the HOpenGL implementation comes 
 with such state-tracking devices?

HOpenGL does use the StateVar package to provide a consistent
interface to mutable references, but that doesn't provide any special
implementation of such. Using Data.IORef is probably the simplest
choice, and optionally using StateVar if you want to use the same
syntax for both IORefs and OpenGL state.

On the other hand, if you're getting a GL context from something
else--such as SDL or a GUI library--that requires you to manage an
explicit event loop, feel free to use StateT instead.

- C.


Re: [Haskell-cafe] ANN: fixed-list -- A fixed length list library

2010-03-21 Thread Casey McCann
Job Vranish job.vran...@gmail.com wrote:
 Its main advantages are:
  Very easy to use.
  Almost entirely Haskell98 (the non Haskell98 pieces are not critical, just
 nice)
 The datatype is a member of Foldable, Traversable, Applicative, Monad,
 etc...
 The length of the list is encoded in the type in a natural way.

Unfortunately, it's very easy to get a context reduction stack
overflow from GHC this way, which makes using such datatypes awkward
for anything but very short lists. Explicit type annotations will
often make things work, but at that point the type class isn't helping
you anyway. For example, assuming the default stack size:

import Data.FixedList

fixed18 :: FixedList18 Int
fixed18 = fromFoldable' [1..] -- this works

fixed20 = 20 :. 20 :. fixed18 -- this works

fixed22 :: FixedList22 Int
fixed22 = 22 :. 22 :. fixed20 -- this only works with a type annotation

show18 = show fixed18 -- this works

-- this doesn't work:
-- show20 = show fixed20

show20' :: FixedList20 Int -> String
show20' list20 = show list20

show20 = show20' fixed20 -- this does work

Using head and tail on longer lists fails likewise. I expect there's
some way to make it work without simply increasing the stack depth,
but I'm not sure how. Any thoughts?

- C.