Re: Untouchable type variables

2017-06-18 Thread Dan Doel
This doesn't sound like the right explanation to me. Untouchable variables
don't have anything (necessarily) to do with existential quantification.
What they have to do with is GHC's (equality) constraint solving.

I don't completely understand the algorithm. However, from what I've read
and seen of the way it works, I can tell you why you might see the error
reported here.

When type checking moves under a 'fancy' context, all (not sure if it's
actually all) variables made outside that context are rendered untouchable,
and are not able to be unified with local variables inside the context. So
the problem that is occurring is related to `c` being bound outside the
'fancy' context `F a b`, but used inside (and maybe not appearing in the
fancy context is a factor). And `F a b` is fancy because GHC just has to
assume the worst about type families (that don't reduce, anyway). Equality
constraints are the fundamental 'fancy' context, I think.
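
To make that concrete, here is a minimal standalone sketch (my own example,
not the code from the original question) of the usual way an untouchable
variable shows up with GADTs:

{-# LANGUAGE GADTs #-}

data G a where
  GBool :: G Bool

-- Without a signature, inference fails: matching GBool brings the local
-- equality (a ~ Bool) into scope, and the result type metavariable that
-- was created outside that 'fancy' context is untouchable, so it can't
-- be unified with Bool.
--
--   foo GBool = True    -- rejected with an "untouchable" error
--
-- A signature fixes it, because nothing from outside needs unifying:
foo :: G a -> Bool
foo GBool = True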

The more precise explanation is, of course, in the paper describing the
current type checking algorithm. I don't recall the motivation, but they do
have one. :) Maybe it's overly aggressive, but I really can't say myself.

-- Dan


On Sun, Jun 18, 2017 at 3:02 PM, wren romano 
wrote:

> On Thu, May 4, 2017 at 10:20 AM, Wolfgang Jeltsch
>  wrote:
> > Today I encountered for the first time the notion of an “untouchable”
> > type variable. I have no clue what this is supposed to mean.
>
> Fwiw, "untouchable" variables come from existential quantification
> (since the variable must be held abstract so that it doesn't escape).
> More often we see these errors when using GADTs and TypeFamilies,
> since both of those often have existentials hiding under the hood in
> how they deal with indices.
>
>
> > A minimal
> > example that exposes my problem is the following:
> >
> >> {-# LANGUAGE Rank2Types, TypeFamilies #-}
> >>
> >> import GHC.Exts (Constraint)
> >>
> >> type family F a b :: Constraint
> >>
> >> data T b c = T
> >>
> >> f :: (forall b . F a b => T b c) -> a
> >> f _ = undefined
>
> FWIW, the error comes specifically from the fact that @F@ is a family.
> If you use a plain old type class, or if you use a type alias (via
> -XConstraintKinds) then it typechecks just fine. So it's something
> about how the arguments to @F@ are indices rather than parameters.
>
> I have a few guesses about why the families don't work here, but I'm
> not finding any of them particularly convincing. Really, imo, @c@
> should be held abstract within the type of the argument, since it's
> universally quantified from outside. Whatever @F a b@ evaluates to
> can't possibly have an impact on @c@[1]. I'd file a bug report. If
> it's just an implementation defect, then the GHC devs will want to
> know. And if there's actually a type theoretic reason I missed, it'd
> be good to have that documented somewhere.
>
> [1] For three reasons combined: (1) @F a b@ can't see @c@, so the only
> effect evaluating @F a b@ could possibly have on @c@ is to communicate
> via some side channel, of which I only see two: (2) the @(a,c)@ from
> outside are quantified parametrically, thus nothing from the outside
> scope could cause information to flow from @a@ to @c@, (3) the @T@ is
> parametric in @(b,c)@ (since it is not a GADT) so it can't cause
> information to flow from @b@ to @c@.
>
> --
> Live well,
> ~wren


Re: TDNR without new operators or syntax changes

2016-05-26 Thread Dan Doel
On Thu, May 26, 2016 at 5:14 AM, Peter  wrote:
> Solving for everything but f, we get f :: T -> Int.

So TDNR happens for things in function position (applied to something).

> Solving for everything but f, we get f :: T -> Int.

So TDNR happens for things in argument position.

> May not be solvable, would fail to disambiguate.

But there's exactly one combination of f and v definitions that will
succeed with the right type. So why doesn't that happen? Here's an
intermediate step:

i1' x = f x :: Int

What happens there? Another way to phrase the question is: why would
TDNR only disambiguate based on argument types of functions and not on
return types? Why would function types even be a factor at all?

And to be honest, I don't really think the description of what the
type checker would be doing is detailed enough. If there are two
ambiguous candidates how does type checking proceed with 'the type in
the declared signature' when there are two possible signatures which
may be completely different? Is it doing backtracking search? How do
you add backtracking search to GHC's inference algorithm? Etc. The
later examples are designed to raise questions about this, too.

I'm rather of the opinion that TDNR just isn't a good feature for most
things. Implicit in the famous "How to Make Ad-hoc Polymorphism Less
Ad-hoc" is that being ad-hoc is bad. And type classes fix that by
turning overloading into something that happens via an agreed upon
interface, with declared conventions, and which can be abstracted over
well. But TDNR has none of that, so to my mind, it's not really
desirable, except possibly in cases where there is no hope of being
abstract (for instance, Agda does some TDNR for constructors in
patterns, and there isn't much basis in the underlying theory for
trying to abstract over doing induction on completely different types
with similarly named pieces; so it's more acceptable).

But also, for something as far reaching as doing TDNR for every
ambiguous name, it's not terribly clear to me what a good algorithm
even is, unless it's only good enough to handle really simple
examples, and just doesn't work most of the time (and even then, I'm
not sure it's been thought through enough to say that it's a simple
addition to GHC's type checking).

-- Dan


Re: TDNR without new operators or syntax changes

2016-05-25 Thread Dan Doel
As a supplement, here's a series of definitions to think about. The
question is: what should happen in each section, and why? The more
detailed the answer, the better. Definitions from previous sections
are in scope in subsequent ones, for convenience. The examples are
arranged in a slippery slope, but there may be more good, confusing
examples that I missed.

--- cut ---

-- 1 --

f :: T -> Int
f t = ...

f :: U -> Int -> Char
f u = ...

t :: T
t = ...

u :: U
u = ...

i1 :: Int
i1 = f t

-- 2 --

g :: (T -> Int) -> Int
g h = h t

i2 :: Int
i2 = g f

-- 3 --

v :: T
v = t

v :: U
v = u

i3 :: Int
i3 = f v

-- 4 --

class C a where
  c :: a -> Int

instance C T where
  c = f

i4 :: Int
i4 = c v

-- 5 --

main = print $ f v

--- cut ---

-- Dan

On Tue, May 24, 2016 at 4:07 AM, Adam Gundry  wrote:
> Thanks for starting this discussion! Having spent more time thinking
> about record field overloading than perhaps I should, here are some
> things to think about...
>
>
> On 22/05/16 16:01, Jeremy wrote:
>> Bertram Felgenhauer-2 wrote
>>>> 1. If the compiler encounters a term f a, and there is more than one
>>>> definition for f in scope (after following all of the usual rules for
>>>> qualified imports);
>>>>
>>>> 2. And exactly one of these definitions matches the type of a (or the
>>>> expected type of f if given);
>>>>
>>>> 3. Then this is the definition to use.
>>>
>>> I now find that Anthony's concerns are justified. The question is, what
>>> exactly does the type matching in step 2 involve? If it is a recursive
>>> call to the type-checker then you'll have a huge performance problem.
>>
>> I was concerned about this, but not knowing anything about how
>> type-checking/inference is implemented, I wouldn't know if this is actually
>> a problem or not.
>
> Unfortunately this is indeed a problem. When the type-checker encounters
> `f a`, it infers the type of `f`, ensures it is a function type `s ->
> t`, then checks that `a` has type `s`. But if there are multiple
> possible `f`s in scope, what should it do? Options include:
>
>  1. run the entire type-checking process for each possible type of `f`
> (this is almost certainly too slow);
>
>  2. give ad-hoc rules to look at type information from the context or
> the inferred type of the argument `a`;
>
>  3. introduce a name resolution constraint and defer it to the
> constraint solver (the same way that solving for typeclasses works);
>
>  4. require a type signature in a fixed location.
>
> Both 2 and 3 would need careful specification, as they run the risk of
> exposing the type-checking algorithm to the user, and changing the
> typing rules depending on what is in scope may be problematic. In
> particular, if `f` has a higher-rank type then bidirectional type
> inference is likely to break down.
>
> The DuplicateRecordFields approach is to make use of bidirectional type
> inference (a bit like 2, but without looking at the type of the
> argument) and otherwise require a type signature. This is carefully
> crafted to avoid needing to change existing typing rules. The wiki page
> [1] sets out some of the cases in which this does and doesn't work.
>
> Point 3 is rather like the magic classes in the OverloadedRecordFields
> proposal [2] (this isn't in GHC HEAD yet, but an early version is on
> Phab [3]).
>
>
>>> If, on the other hand, you only take into account what is known about
>>> the type of a at a given time, then you need special treatment for
>>> unambiguous names or even trivial programs will fail to type-check, just
>>> as Anthony said.
>>
>> Why special treatment for unambiguous names? They shouldn't be affected at
>> all by this.
>
> There are some design choices here as well, separately from the options
> above. Some possibilities are:
>
>  A. leave unambiguous names alone, and only do the special thing for the
> ambiguous ones (this doesn't break existing code, but can lead to
> confusing errors if an import is changed to introduce or remove an
> ambiguity, especially for options 2 or 3 above);
>
>  B. do the same thing for unambiguous names as ambiguous ones (but this
> may well break existing code, and is likely to restrict inference too much);
>
>  C. require an explicit syntactic cue that the name should be treated
> specially (e.g. the original TDNR's use of dot, or the # sign in the
> OverloadedLabels part of ORF [4]).
>
>
> As you can see, there are quite a few complex trade-offs here. I suspect
> this is at least part of the reason for the slow progress on TDNR-like
> extensions. It will be interesting to see how DuplicateRecordFields is
> received!
>
> All the best,
>
> Adam
>
>
> [1]
> https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/DuplicateRecordFields
>
> [2]
> https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/MagicClasses
>
> [3] https://phabricator.haskell.org/D1687
>
> [4]
> https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/OverloadedLabels
>
>
> --

Re: suboptimal ghc code generation in IO vs equivalent pure code case

2016-05-11 Thread Dan Doel
On Tue, May 10, 2016 at 4:45 AM, Harendra Kumar
 wrote:
> Thanks Dan, that helped. I did notice and suspect the update frame and the
> unboxed tuple but given my limited knowledge about ghc/core/stg/cmm I was
> not sure what is going on. In fact I thought that the intermediate tuple
> should get optimized out since it is required only because of the realworld
> token which is not real. But it might be difficult to see that at this
> level?

The token exists as far as the STG level is concerned, I think,
because that is the only thing ensuring that things happen in the
right order. And the closure must be built to have properly formed
STG. So optimizing away the closure building would have to happen at a
level lower than STG, and I guess there is no such optimization. I'm
not sure how easy it would be to do.

> What you are saying may be true for the current implementation but in theory
> can we eliminate the intermediate closure?

I don't know. I'm a bit skeptical that building this one closure is
the only thing causing your code to be a factor of 5 slower. For
instance, another difference in the core is that the recursive call
corresponding to the result s2 happens before allocating the
additional closure. That is the case statement that unpacks the
unboxed tuple. And the whole loop happens this way, so it is actually
building a structure corresponding to the entire output list in memory
rather eagerly.

By contrast, your pure function is able to act in a streaming fashion,
if consumed properly, where only enough of the result is built to keep
driving the rest of the program. It probably runs in constant space,
while your IO-based loop has a footprint linear in the size of the
input list (in addition to having slightly more overhead per character
because of the one extra thunk), because it is a more eager program.

And having an asymptotically larger memory footprint is more likely to
explain a 5x slowdown than allocating one extra thunk for each list
concatenation, I think. (Of course, it could also be some other
factor, as well.)

You should probably run with +RTS -s (compiling with -rtsopts), and
see if the IO version has a much larger maximum residency.

Anyhow, this is fundamental to writing the algorithm using IO. It's an
algorithm that's a sequence of steps that happen in order, the number
of steps is proportional to the input list, and part of those steps is
putting stuff in memory for the results.

> So why is that? Why can't we generate (++) inline similar to (:)? How do we
> make this decision? Is there a theoretical reason for this or this is just
> an implementation artifact?

The difference between these two is that (++) is a function call, and
(:) is a constructor. Constructors are a special case, and don't need
to have all the provisions that functions in general do. The best way
to learn what the differences are is probably to read the paper about
the STG machine.
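
To make the contrast concrete: (++) is an ordinary recursive function
(roughly its definition in base, fusion rules aside), so each use of it
compiles to a call into that code, whereas (:) compiles to a direct
allocation of a cons cell:

(++) :: [a] -> [a] -> [a]
[]     ++ ys = ys
(x:xs) ++ ys = x : (xs ++ ys)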

Hope this is a more fruitful lead, but it may be that there's not a
lot that can be done, without doing some baroque things to your
algorithm, because of the necessity of its using IO.

-- Dan


Re: suboptimal ghc code generation in IO vs equivalent pure code case

2016-05-09 Thread Dan Doel
I'm no expert on reading GHC's generated assembly. However, there may
be a line you've overlooked in explaining the difference, namely:

movq $stg_upd_frame_info,-16(%rbp)

This appears only in the IO code, according to what you've pasted, and
it appears to be pushing an update frame (I think). Update frames are
used as part of lazy evaluation, to ensure that work only happens
once, barring very short race conditions. So, at a guess, I would say
that your IO-based code is creating extra updatable closures, which
isn't free.
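
As a trivial illustration of what an update frame buys you (a standalone
sketch, nothing to do with your code specifically):

x :: Integer
x = sum [1..1000000]        -- a thunk

main :: IO ()
main = print x >> print x   -- the second print reuses the result

The first evaluation of x pushes an update frame; when the sum finishes,
the thunk is overwritten with the result, so the second use doesn't
recompute it. Building and then updating those closures is the cost I'm
referring to.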

It's sometimes difficult to see the difference at the core level. It
will probably be clearer at the STG level, because the expression
language is more disciplined there. But, for instance, your pure code
tail calls (++), whereas your IO code returns an unboxed tuple with
the same sort of expression that is in the pure tail call. However,
complex expressions like that can't really be put in an unboxed tuple
at the STG level, so what will happen is that the complex expression
will be let-bound (a closure allocation), and the closure will be returned in
the unboxed tuple. So that is the source of the difference. A more
perspicuous picture would be something like:

  Pure:
False ->
  let {
l1 = : ww_amuh []
l2 = Data.Unicode.Internal.Normalization.decompose_$sdecompose
ipv_smuv ipv1_smuD
  } in ++ l1 l2


  IO:
False ->
  case $sa1_sn0g ipv_smUT ipv1_smV6 ipv2_imWU
  of _ { (# ipv4_XmXv, ipv5_XmXx #) ->
  let {
l1 = : sc_sn0b []
l3 = ++ l1 ipv5_XmXx
  } in (# ipv4_XmXv, l3 #)

I can't say for certain that that's the only thing making a
difference, but it might be one thing.

-- Dan


On Mon, May 9, 2016 at 10:23 AM, Harendra Kumar
 wrote:
> I have a loop which runs millions of times. For some reason I have to run it
> in the IO monad. I noticed that when I convert the code from pure to IO
> monad the generated assembly code in essence is almost identical except one
> difference where it puts a piece of code in a separate block which is making
> a huge difference in performance (4-6x slower).
>
> I want to understand what makes GHC to generate code in this way and if
> there is anything that can be done at source level (or ghc option)  to
> control that.
>
> The pure code looks like this:
>
> decomposeChars :: [Char] -> [Char]
>
> decomposeChars [] = []
> decomposeChars [x] =
> case NFD.isDecomposable x of
> True -> decomposeChars (NFD.decomposeChar x)
> False -> [x]
> decomposeChars (x : xs) = decomposeChars [x] ++ decomposeChars xs
>
> The equivalent IO code is this:
>
> decomposeStrIO :: [Char] -> IO [Char]
>
> decomposeStrPtr !p = decomposeStrIO
> where
> decomposeStrIO [] = return []
> decomposeStrIO [x] = do
> res <- NFD.isDecomposable p x
> case res of
> True -> decomposeStrIO (NFD.decomposeChar x)
> False -> return [x]
> decomposeStrIO (x : xs) = do
> s1 <- decomposeStrIO [x]
> s2 <- decomposeStrIO xs
> return (s1 ++ s2)
>
> The difference is in how the code corresponding to the call to the (++)
> operation is generated. In the pure case the (++) operation is inline in the
> main loop:
>
> _cn5N:
> movq $sat_sn2P_info,-48(%r12)
> movq %rax,-32(%r12)
> movq %rcx,-24(%r12)
> movq $:_con_info,-16(%r12)
> movq 16(%rbp),%rax
> movq %rax,-8(%r12)
> movq $GHC.Types.[]_closure+1,(%r12)
> leaq -48(%r12),%rsi
> leaq -14(%r12),%r14
> addq $40,%rbp
> jmp GHC.Base.++_info
>
> In the IO monad version this code is placed in a separate block and a call
> is placed in the main loop:
>
> the main loop call site:
>
> _cn6A:
> movq $sat_sn3w_info,-24(%r12)
> movq 8(%rbp),%rax
> movq %rax,-8(%r12)
> movq %rbx,(%r12)
> leaq -24(%r12),%rbx
> addq $40,%rbp
> jmp *(%rbp)
>
> the out-of-line block - the code that was in the main loop in the previous
> case is now moved to this block (see label _cn5s below):
>
> sat_sn3w_info:
> _cn5p:
> leaq -16(%rbp),%rax
> cmpq %r15,%rax
> jb _cn5q
> _cn5r:
> addq $24,%r12
> cmpq 856(%r13),%r12
> ja _cn5t
> _cn5s:
> movq $stg_upd_frame_info,-16(%rbp)
> movq %rbx,-8(%rbp)
> movq 16(%rbx),%rax
> movq 24(%rbx),%rbx
> movq $:_con_info,-16(%r12)
> movq %rax,-8(%r12)
> movq $GHC.Types.[]_closure+1,(%r12)
> movq %rbx,%rsi
> leaq -14(%r12),%r14
> addq $-16,%rbp
> jmp GHC.Base.++_info
> _cn5t:
> movq $24,904(%r13)
> _cn5q:
> jmp *-16(%r13)
>
> Except this difference the rest of the assembly looks pretty similar in both
> the cases. The corresponding dump-simpl output for the pure case:
>
>   False ->
> ++
>   @ Char
>   (GHC.Types.: @ Char ww_amuh (GHC.Types.[] @ Char))
>   (Data.Unicode.Internal.Normalization.decompose_$sdecompose
>  

Re: Breaking Changes and Long Term Support Haskell

2015-10-21 Thread Dan Doel
For proposal 3, I don't see what difference it makes whether a
refreshed Haskell committee or a new libraries committee makes
decisions that affect backwards compatibility. A name doesn't ensure
good decision making. The only difference I can see is that the
Haskell committee might only publish final decisions every couple
years. But the Haskell report also isn't designed to describe
migration plans between feature revisions; unless the plan is to start
incorporating library deprecation and whatnot into the report (which
would be odd to me). But that would just be doing the same thing
slower, so it'd be little different than making library changes over 6
to 9 GHC versions instead of 3.

For proposal 2, I don't know how effective it will be in practice. I
believe it is already the job of a proposal submitter to summarize the
arguments made about it, according to the library proposal guidelines.
We could post those summaries to another list. But unless more people
promise they will be diligent about reading that list, I'm not sure
that one factor in these dust ups (surprise) will actually be any
different.

Also, if amount of discussion is at issue, I'm not sure I agree. For
AMP, I was waiting a decade, more or less. I thought we should do it,
other people thought we shouldn't because it would break things. I
don't know what more there was to discuss, except there was more stuff
to break the longer we waited.

As for FTP, some aspects only became known as the proposal was
implemented, and I don't know that they would have been realized
regardless of how long the proposal was discussed. And then we still
had a month or so of discussion after the implementation was
finalized, on the cusp of GHC 7.10 being released. So how much more
_was_ needed, that people are now discussing it again?

If it's just about documenting more things, there's certainly no harm in that.

For 1, I don't have a very strong opinion. If pressed, I would
probably express some similar sentiments to Henrik. I certainly don't
think Haskell would be nearly as good as it is if it were a simple
majority vote by all users (and I probably wouldn't use it if that's
how things were decided). Would a community vote for libraries
committee be better than appointment by people who previously held the
power (but have more to do than any human can accomplish)? I don't
know.

I should say, though, that things are not now run by simple majority
vote. What we conducted a year ago was a survey, where people
submitted their thoughts. I didn't get to read them, because they were
private, and it wasn't my decision to make. But it was not just +80
-20.

With regard to your last paragraph, unless I've missed something (and
I confess that I haven't read every comment in these threads), the
recent resignations didn't express disagreement with the decision
making process. They expressed disagreement with the (technical)
decisions (and their effects). I don't see how a different process
could have solved that unless it is expected that it would have made
different decisions.

-- Dan

On Wed, Oct 21, 2015 at 6:18 PM, Geoffrey Mainland <mainl...@apeiron.net> wrote:
> Hi Dan,
>
> Thank you for the historical perspective.
>
> I was careful not to criticize the committee. Instead, I made three
> concrete proposals with the hope that they would help orient a conversation.
>
> It sounds like you are not for proposal 3. How about the other two?
>
> My original email stated my underlying concern: we are losing valuable
> members of the community not because of the technical decisions that are
> being made, but because of the process by which they are being made.
> That concern is what drove my proposals. It is perfectly valid to think
> that that loss was the inevitable price of progress, but that is not my
> view.
>
> Cheers,
> Geoff
>
> On 10/21/15 5:22 PM, Dan Doel wrote:
>> Hello,
>>
>> I'm Dan Doel. I'm on the core libraries committee (though I'm speaking
>> only for myself). As I recall, one of the reasons I got tapped for it
>> was due to my having some historical knowledge about Haskell; not
>> because I was there, but because I've gone back and looked at some old
>> reports and whatnot (and sometimes think they're better than what we
>> have now).
>>
>> But, I was around (of course) when the core libraries committee
>> started up, so perhaps I can play the role of historian for this as
>> well.
>>
>> The reason the committee exists is because a couple years ago, people
>> brought up the ideas that were finally realized in the
>> Applicative-Monad proposal and the Foldable-Traversable proposal. A
>> lot of people weighed in saying they thought they were a good idea,
>> and significantly fewer people weighed in saying they thought that it
>> shouldn't happen for various reasons---roughly 

Re: Breaking Changes and Long Term Support Haskell

2015-10-21 Thread Dan Doel
Hello,

I'm Dan Doel. I'm on the core libraries committee (though I'm speaking
only for myself). As I recall, one of the reasons I got tapped for it
was due to my having some historical knowledge about Haskell; not
because I was there, but because I've gone back and looked at some old
reports and whatnot (and sometimes think they're better than what we
have now).

But, I was around (of course) when the core libraries committee
started up, so perhaps I can play the role of historian for this as
well.

The reason the committee exists is because a couple years ago, people
brought up the ideas that were finally realized in the
Applicative-Monad proposal and the Foldable-Traversable proposal. A
lot of people weighed in saying they thought they were a good idea,
and significantly fewer people weighed in saying they thought that it
shouldn't happen for various reasons---roughly the same things that
people are still bringing up about these proposals.

This wasn't the first time that happened, either. I think it was
widely agreed among most users that Functor should be a superclass of
Monad since I started learning Haskell around 10 years ago. And once
Applicative was introduced, it was agreed that that should go in the
middle of the two. But it appeared that it would never happen, despite
a significant majority thinking it should, because no one wanted to do
anything without pretty much unanimous consent.

So, one question that got raised is: why should this majority of
people even use Haskell/GHC anymore? Why shouldn't they start using
some other language that will let them change 15-year-old mistakes, or
adapt to ideas that weren't even available at that time (but are still
fairly old and established, all things considered). And the answer was
that there should be some body empowered to decide to move forward
with these ideas, even if there is some dissent. And frankly, it
wasn't going to be the prime committee, because it hadn't shown any
activity in something like 3 years at the time, and even when it was
active, it didn't make anywhere near the sort of changes that were
being discussed.

And the kicker to me is, many things that people are complaining about
again (e.g. the FTP) were the very things that the committee was
established to execute. I don't think we had a formal vote on that
proposal, because we didn't need to. Our existence was in part to
execute that proposal (and AMP). And then a year ago, when it was
finally time to release the changes, there was another ruckus. And we
still didn't have a CLC vote on the matter. What we did was conduct a
community poll, and then SPJ reviewed the submissions. But I don't
mean to pass the buck to him, because I'm pretty sure he was worried
that we were crazy, and overstepping our bounds. Just, the results of
the survey were sufficient for him to not overrule us.

So my point is this: there seems to be some sentiment that the core
libraries committee is unsound, and making bad decisions. But the
complaints are mostly not even about actual decisions we made (aside
from maybe Lennart Augustsson's, where he is unhappy with details of
the FTP that you can blame on us, but were designed to break the least
code, instead of being the most elegant; if we had pleased him more,
we would have pleased others less). They are about the reasons for
founding the committee in the first place. You can blame us, if you
like, because I think it's certain that we would have approved them if
we had formally voted. We just didn't even need to do so.

Forgive me if I'm wrong, but suggestions that these decisions should
have been deferred to a Haskell Prime committee mean, to me, that the
hope is that they would have been rejected. That the Haskell Prime
committee should have just vetoed these proposals that something like
80% or more of practicing Haskell users (as far as we can tell) wanted
for years before they finally happened. That the Haskell Prime
committee should be responsible for enforcing the very status quo that
led to the CLC in the first place, where proposals with broad support
but minority dissent never pass for various core modules.

If this is the case, then one could simply repose the earlier
question: why should most of these people stick around to abide by the
Haskell Prime committee's pronouncements, instead of getting to work
on a language that incorporates their input?

And if it isn't, then I don't ultimately understand what the
complaints are. We try to accomplish the (large) changes in a manner
that allows transition via refactoring over multiple versions (and as
I mentioned earlier, some complaints are that we compromised _too
much_ for this). And in light of the more recent complaints, it's even
been decided that our time frames should be longer. Rolling up changes
into a report just seems like it makes transitions less smooth. Unless
the idea is to make GHC capable of switching out entire base library
sets; but someone has to implement that, and once you

Re: Closed Type Families: type checking dumbness? [was: separate instance groups]

2015-06-07 Thread Dan Doel
It seems to me the problem is that there's no way to define classes by
consecutive cases to match the family definitions. I don't know what a good
syntax for that would be, since 'where' syntax is taken for those. But it
seems like that is what would fill the hole here.

On Sun, Jun 7, 2015 at 7:27 AM, Richard Eisenberg e...@cis.upenn.edu wrote:

 This is all expected behavior. GHC's lazy overlap checking for class
 instances simply cannot apply to type families -- it would be unsound. I'm
 afraid I don't see what can be improved here.

 Richard

 On Jun 6, 2015, at 2:04 AM, AntC anthony_clay...@clear.net.nz wrote:

  From: AntC
  Date: 2015-06-04 22:39:25 GMT
 
  Take the standard example for partial overlaps.
  Suppose I have a class: ...
 
  I'm also getting (in more complex examples)
  GHC complaining it can't infer the types
  for the result of f.
  So now I'm having to put type equality
  constraints on the class instances,
  to assure it that F comes up with
  the right type.
 
  In a reduced example, I'm still getting
  poor type checking. This is GHC 7.8.3.
  This seems so dumb, I'm suspecting a defect.
  It's similar to
  but much more glaring than:
  https://ghc.haskell.org/trac/ghc/ticket/10227
  https://ghc.haskell.org/trac/ghc/ticket/9918
 
  {-# LANGUAGE TypeFamilies,
  FlexibleInstances
#-}
  module ClosedTypeFamily where
 
 data Foo b c = Foo b c deriving (Eq, Read, Show)
 
 type family F a where
   F (Foo Int c)  = Int    -- Foo Int is first instance
   F (Foo b Char) = Char

 class C a where f :: a -> F a
 
 instance C (Foo Int c) where  -- compiles OK
   f (Foo x _) = x
 
 instance (F (Foo b Char) ~ Char) => C (Foo b Char) where
   f (Foo _ y) = y
 
  needs the eq constraint. Without it, GHC complains:
 Couldn't match expected type ‘F (Foo b Char)’
 with actual type ‘Char’
 Relevant bindings include
    f :: Foo b Char -> F (Foo b Char)
 In the expression: y
 In an equation for ‘f’: f (Foo _ y) = y
 
  Note that if I change the sequence
  of the family instances for F,
  then GHC instead complains
  about the class instance for (Foo Int c).
 
  OK these are overlapping class instances.
  But GHC's usual behaviour
  (without closed type families)
  is to postpone complaining
  until and unless a usage
  (Foo Int Char) actually turns up.
 
  BTW if I put a first family instance
   F (Foo Int Char) = Int
  to explicitly catch the overlap,
  then GHC complains about **both** class instances.
 
  Reminder [to Richard]
  I need not only types but also terms.
 
  AntC


Re: [Haskell-cafe] RFC: Native -XCPP Proposal

2015-05-08 Thread Dan Doel
vector generates a considerable amount of code using CPP macros.

And with regard to other mails, I'm not too eager (personally) to port that
to template Haskell, even though I'm no fan of CPP. The code generation
being done is so dumb that CPP is pretty much perfect for it, and TH would
probably just be more work (and it's certainly more work to write it again
now that it's already written).
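
For a flavor of what I mean by dumb code generation, here's a made-up sketch
(not vector's actual macros) of the kind of thing CPP is doing: stamping the
same instance shape out for a bunch of element types.

{-# LANGUAGE CPP #-}

-- A hypothetical class standing in for the real boilerplate:
class Wrapped a where
  wrap :: a -> [a]
  wrap x = [x]

-- The macro just pastes a type name into the same instance shape.
#define WRAPPED_INSTANCE(ty) \
instance Wrapped ty

WRAPPED_INSTANCE(Int)
WRAPPED_INSTANCE(Char)
WRAPPED_INSTANCE(Double)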

On Wed, May 6, 2015 at 10:21 AM, Bardur Arantsson s...@scientician.net
wrote:

 On 06-05-2015 15:05, Alan  Kim Zimmerman wrote:
  Perhaps it makes sense to scan hackage to find all the different CPP
 idioms
  that are actually used in Haskell code, if it is a small/well-defined set
  it may be worth writing a simple custom preprocessor.
 

 +1, I'll wager that the vast majority of usages are just for version
 range checks.

 If there are packages that require more, they could just keep using the
 system-cpp or, I, guess cpphs if it gets baked into GHC. Like you, I'd
 want to see real evidence that that's actually worth the
 effort/complication.

 Regards,



Re: Increased memory usage with GHC 7.10.1

2015-04-02 Thread Dan Doel
You aren't the only one. The vector test suite also has this kind of
issue. In its case, it's hard for me to tell how big the code is, because
Template Haskell is being used to generate it. However, I don't think the
Template Haskell is what's responsible for the additional cost, because the
test suite is compiled with both -O0 and -O2, and only the latter runs into
performance issues.

In vector's case, 7.6.3 compiles significantly faster than 7.8.4, which is
in turn significantly faster than 7.10.1. And memory usage follows the same
pattern (with -O2 at least). 7.6.3 uses ~400MB of memory, 7.8.4 1-2GB and
7.10.1 3-4GB (if I'm remembering correctly). And while I can build the
tests on 7.10 in around 5 minutes, Travis times out building them after
around half an hour.

On Thu, Apr 2, 2015 at 8:47 AM, Jan Stolarek jan.stola...@p.lodz.pl wrote:

 I will. But I was curious whether this is only me or is anyone else seeing
 similar behaviour. And
 what about performance comparison between 7.8.4 and 7.10.1? Do we have
 any numbers?

 Janek

  On Thursday, 2 April 2015, Richard Eisenberg wrote:
  Post a bug report! :)
 
  On Apr 2, 2015, at 8:19 AM, Jan Stolarek jan.stola...@p.lodz.pl wrote:
    An update from my second machine, this time with 4GB of RAM. Compiling
   Agda ran out of memory (again Agda.TypeChecking.Serialise module) and I
   had to kill the build. But once I restarted the build the module was
    compiled successfully in a matter of minutes and using around 50% of
   memory. This looks like some kind of memory leak in GHC.
  
   Janek
  
    On Wednesday, 1 April 2015, Jan Stolarek wrote:
   Forall hi,
  
   I just uprgaded both of my machines to use GHC 7.10.1. I keep
 sandboxed
   installations of GHC and this means I had to rebuild Agda and Idris
   because the binaries built with GHC 7.8.4 were stored inside
 deactivated
   7.8.4 sandbox. Sadly, I had problems building both Agda and Idris due
 to
   GHC taking up all of available memory.
  
   With Idris the problematic module was Idris.ElabTerm (~2900LOC). The
   interesting part of the story is that when I do a clean build of Idris
   GHC consumes all of memory when compiling that module and I have to
 kill
   the build. But when I restart the build after killing GHC the module
 is
   compiled using a reasonable amount of memory and within reasonable
 time.
  
   With Agda the problematic module is Agda.TypeChecking.Serialise
   (~2000LOC). The trick with killing the build and restarting it didn't
   work in this case. I had to compile Agda with GHC 7.8.4 (which works
   without problems though the mentioned module still requires a lot of
   memory) and alter my setup so that Agda binary is not stored inside
 GHC
   sandbox.
  
   I wonder if any of you came across similar issues with GHC 7.10.1? Do
 we
   have any performance data that allows to compare memory usage and
   performance of GHC 7.10.1 with previous stable releases?
  
   All of the above happened on 64bit Debian Wheezy with 2GB of RAM.
  
   Janek
  

Re: [Haskell] Rank-N types with (.) composition

2015-02-10 Thread Dan Doel
Impredicativity, with regard to type theories, generally refers to types
being able to quantify over the collection of types that they are then a
part of. So, the judgment:

(forall (a :: *). a -> a) :: *

is impredicative, because we have a type in * that quantifies over all
types in *, which includes itself. Impredicativity in general refers to
this sort of (mildly) self-referential definition.

GHC will tell you that the above judgment is true, but things aren't that
simple. The type inference algorithm can either try to make use of such
impredicative instantiations, or act like everything is predicative. And
aspects of GHC's algorithm are either simplified or made possible at all
because of assumptions of predicativity.

Also, I think ($) is the way it is specifically because 'runST $ ...' is
considered useful and common enough to warrant an ad-hoc solution. There
have been other ad-hoc solutions in the past, but redesigning inference to
not be ad-hoc about it would be very difficult at best.

-- Dan

On Tue, Feb 10, 2015 at 3:51 PM, David Feuer david.fe...@gmail.com wrote:

 The problem is that GHC's type system is (almost entirely)
 predicative. I couldn't tell you just what that means, but to a first
 approximation, it means that type variables cannot be instantiated to
 polymorphic types. You write

 trip = Wrap . extract


 which means

 (.) Wrap extract

 (.) :: (b -> c) -> (a -> b) -> a -> c

 Wrap :: (forall f. Functor f => f Int) -> Wrap

 The trouble here is that the type variable b in the type of (.) isn't
 allowed to be polymorphic, but Wrap's argument must be.

 Note that there's a weird exception: ($) actually has an impredicative
 type, because it's a special case in the type checker. This is largely
 for historical reasons.

 On Tue, Feb 10, 2015 at 3:38 PM, Tyson Whitehead twhiteh...@gmail.com
 wrote:
  I came across something that seems a bit strange to me.  Here is a
 simplified version (the original was trying to move from a lens ReifiedFold
 to a lens-action ReifiedMonadicFold)
 
   {-# LANGUAGE RankNTypes #-}
 
   import Control.Applicative
 
   newtype Wrap = Wrap { extract :: forall f. Functor f => f Int }

   trip :: Wrap -> Wrap
   trip a = Wrap (extract a)
 
  The compiler is okay with this.  It chokes on this alternative though
 
   trip :: Wrap -> Wrap
   trip = Wrap . extract
 
  giving (GHC 7.8.2)
 
   Couldn't match type ‘f0 Int’
  with ‘forall (f :: * -> *). Functor f => f Int’
    Expected type: f0 Int -> Wrap
  Actual type: (forall (f :: * -> *). Functor f => f Int) -> Wrap
   In the first argument of ‘(.)’, namely ‘Wrap’
   In the expression: Wrap . extract
 
   I'm guessing this is because the compiler's fancy footwork to handle the
  implicit parameters, something like

    trip a = Wrap (\f fDict -> extract a f fDict)

   where f is the Functor type and fDict is the associated dictionary,
  isn't compatible with the (.) definition of

    f . g = \x -> f (g x)
 
  Is this correct?  I would appreciate anyone insight here.  Is there a
 way combine these (.) style?
 
  Thanks!  -Tyson


Re: [Haskell] Rank-N types with (.) composition

2015-02-10 Thread Dan Doel
There is no special type for ($). The name is simply special cased in the
compiler. The rule is something like:

Whenever you see: f Prelude.$ x
instead try to type check: f x

That may not be the exact behavior, but it's close. To fix (.) (in a
similar fashion) you would have to have a similar rule, like:

Whenever you see: f Prelude.. g
instead try to type check: \x -> f (g x)
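
For what it's worth, the kind of code the ($) special case exists for is the
'runST $ ...' idiom mentioned earlier; a small sketch:

import Control.Monad.ST (runST)
import Data.STRef (newSTRef, modifySTRef, readSTRef)

counter :: Int
counter = runST $ do        -- accepted only because of the ($) rule:
  r <- newSTRef 0           -- runST's argument is polymorphic in s, so an
  modifySTRef r (+1)        -- ordinary operator with ($)'s type would need
  readSTRef r               -- an impredicative instantiation

Replacing ($) there with a user-defined synonym (as in the ($') test quoted
below) gives the usual error.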

-- Dan

On Tue, Feb 10, 2015 at 6:19 PM, Tyson Whitehead twhiteh...@gmail.com
wrote:

 On February 10, 2015 16:28:56 Dan Doel wrote:
  Impredicativity, with regard to type theories, generally refers to types
  being able to quantify over the collection of types that they are then a
  part of. So, the judgment:
 
  (forall (a :: *). a -> a) :: *
 
  is impredicative, because we have a type in * that quantifies over all
  types in *, which includes itself. Impredicativity in general refers to
  this sort of (mildly) self-referential definition.

 Thanks Dan and David,

 That was informative.  Also very interesting that ($) is a special case.
 I tried this

  newtype Wrap = Wrap { extract :: forall f. Functor f => f Int }

  trip'' :: Wrap -> Wrap
  trip'' a = Wrap $ extract a

 and the compiler was happy.  Wrapping ($) as ($') gave an error as you
 implied it would

  trip''' :: Wrap -> Wrap
  trip''' a = Wrap $' extract a
  where ($') = ($)

 With regard to my earlier comment about translating the (.) version

  trip' :: Wrap -> Wrap
  trip' = Wrap . extract

 to core, I can see it's actually okay.  The most you may need is a lambda to
 float the implicit parameters backwards

  trip' :: Wrap -> Wrap
  trip' = Wrap . (\a f fDict -> extract f fDict a)

 as GHC seems to always float them as far leftward as possible

  extract :: Functor f => Wrap -> f Int

 I take it there are no user supplied types a person can give to overcome
 the predicative assumption?

 Out of curiosity, how would you write the special internal type that ($)
 has that separates it from ($') above?

 Thanks!  -Tyson



Re: Proposal: ValidateMonoLiterals - Initial bikeshed discussion

2015-02-06 Thread Dan Doel
Assuming a separate syntax, I believe that the criterion would be as simple
as ensuring that no ValidateFoo constraints are left outstanding. The
syntax would add the relevant validate call, and type variables involved in
a ValidateFoo constraint would not be generalizable, and would have to be
defaulted or inferred from elsewhere, similar to the monomorphism
restriction. I'm not sure how difficult that would be to implement.

I'm not terribly gung ho on this, though. It feels very ad hoc. Making
validation vs. non-validation syntactic rather than just based on
polymorphism seems somewhat less so, though; so I'd prefer that direction.
Finding unused syntax is always a problem, of course.
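
For reference, the Even type being discussed has roughly this shape (my own
reconstruction, not the proposal's actual code); the point is that
fromInteger is partial, and the extension would move that check on
monomorphic literal uses to compile time:

newtype Even = Even Integer deriving (Eq, Show)

instance Num Even where
  -- The only method that matters for the discussion:
  fromInteger n
    | even n    = Even n
    | otherwise = error (show n ++ " is not an even literal")
  -- The rest is stubbed out to keep the sketch short.
  (+)    = undefined
  (*)    = undefined
  negate = undefined
  abs    = undefined
  signum = undefined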

On Fri, Feb 6, 2015 at 11:38 AM, Ryan Trinkle ryan.trin...@gmail.com
wrote:

 My greatest concern here would be that, as an application is maintained, a
 literal might go from monomorphic to polymorphic, or vice versa, without
 anybody noticing.  It sounds like this could result in a value silently
 becoming partial, which would be a big problem for application stability;
 in the opposite case - a partial value becoming a compile-time error - I am
 somewhat less concerned, but it could still be confusing and disruptive.

 I would prefer that there be some syntactic indication that I want my
 literal to be checked at compile time.  This syntax could also add whatever
 monomorphism requirement is needed, and then it would become a compile-time
 error for the value to become polymorphic.  I don't know nearly enough
 about the type system to know whether this is possible.

 Also, it seems to me that it might not be so clean as monomorphic versus
 polymorphic.  For example, suppose I have this:

 newtype PostgresTableName s = PostgresTableName String

 where 's' is a phantom type representing the DB schema that the name lives
 in.  The validation function is independent of the schema - it simply fails
 if there are illegal characters in the name, or if the name is too long.
  So, ideally, ("foo\0bar" :: forall s. PostgresTableName s) would fail at
 compile time, despite being polymorphic.


 Ryan

 On Fri, Feb 6, 2015 at 11:16 AM, Merijn Verstraaten 
 mer...@inconsistent.nl wrote:

 Ryan,

 Unfortunately, yes, you are understanding that correctly.

  The reason I qualified it with monomorphic only is that I want to
 avoid breakage that would render the extension practically unusable in real
 code.

 Let's say I right now have:

  foo :: Num a => [a] -> [a]
 foo = map (+1)

  I have two options: 1) we compile this as currently, using fromIntegral, and
  it WILL break for Even; or 2) we reject any polymorphic use of literals like
 this. Given the amount of numerical code relying on the polymorphism of
 Num, I think the option of not being able to compile Num polymorphic code
 is completely out of the question. Almost no application  would work.

 I would advocate in favour of not requiring an IsList/IsString instance
 for the validation class, this would allow you to write a conversion that
 ONLY converts literals in a validated way and will never successfully
 convert literals without the extension, since with the extension disabled
 GHC would try to use the fromList/fromString from the IsString/IsList
 classes which do not exist.

 Unfortunately, given how deeply fromIntegral is tied to the Num class I
 don't see any way to achieve the same for Num. The only option would be to
 not make Even an instance of Num, that way the same trick as above could
 work. Removing fromIntegral from Num is obviously not going to happen and
 without doing that I don't see how we could prevent someone using
 fromIntegral manually to convert to Even in a way that won't break Num
 polymorphic functions. If you have any ideas on how to tackle this, I'm all
 open to hearing them!

 I agree with you that this is ugly, but I console myself with the thought
 that being able to check all monomorphic literals is already a drastic
 improvement over the current state. And in the case of lists and strings we
 could actually ensure that things work well, since almost no one writes
 IsString polymorphic code.

 Cheers,
 Merijn

  On 6 Feb 2015, at 16:59, Ryan Trinkle ryan.trin...@gmail.com wrote:
 
  I think the idea of compile-time validation for overloaded literals is
 fantastic, and doing it with nicer syntax than quasiquoting would really
 improve things.  However, I'm a bit confused about specifically how the
 requirement that it be monomorphic will play into this.  For example, if I
 have:
 
  x = 1
 
  Presumably this will compile, and give a run-time error if I ever
 instantiate its type to Even.  However, if I have:
 
  x :: Even
  x = 1
 
  it will fail to compile?  Furthermore, if I have the former, and type
 inference determines that its type is Even, it sounds like that will also
 fail to compile, but if type inference determines that its type is forall
 a. Nat a = a, then it will successfully compile and then fail at runtime.
 
  Am I understanding this 

Re: Overlapping and incoherent instances

2014-08-11 Thread Dan Doel
On Mon, Aug 11, 2014 at 11:36 AM, Twan van Laarhoven twa...@gmail.com
wrote:

 To me, perhaps naively, IncoherentInstances is way more scary than
 OverlappingInstances.


It might be a bit naive. Most things that incoherent instances would allow
are allowed with overlapping instances so long as you partition your code
into two modules. So unless such a partitioning is impossible, overlapping
instances are almost as scary as incoherent instances (unless the module
separation somehow makes it less scary).

And actually, with the way GHC handles instances, you can get more
incoherent behavior than incoherent instances allow without enabling any
extensions, just using modules:

module A where
  class Foo a where foo :: a

module B where
  import A
  instance Foo Int where foo = 5
  bar :: Int ; bar = foo

module C where
  import A
  instance Foo Int where foo = 6
  baz :: Int ; baz = foo

module D where
  import B
  import C

  quux = bar + baz -- 11

-- Dan


Re: Bang Patterns

2014-04-02 Thread Dan Doel
Filed. Bug #8952.


On Wed, Apr 2, 2014 at 3:41 PM, wren romano winterkonin...@gmail.com wrote:

 On Tue, Apr 1, 2014 at 3:02 PM, Dan Doel dan.d...@gmail.com wrote:
  Specifically, consider:
 
  case Nothing of
    !(~(Just x)) -> 5
    Nothing -> 12
 
  Now, the way I'd expect this to work, and how I think the spec says it
  works, is that my Nothing is evaluated, and then the irrefutable ~(Just
 x)
  matches Nothing, giving a result of 5. In fact, GHC warns about
 overlapping
  patterns for this.

 It's sensible to give an overlap warning --that is, assuming we don't
 want overlap to be an error-- since the irrefutable pattern matches
 everything, and adding bangs doesn't change what values are matched
 (it only changes whether we diverge or not).

 However, I have no idea how top-level bang in case-expressions is
 supposed to be interpreted. If anything, it should be ignored since we
 must already force the scrutinee to WHNF before matching *any* of the
 clauses of a case-expression. However, I thought bangs were restricted
 to (1) immediately before variables, and (2) for top-level use in
 let/where clauses...

 In any case, following the standard desugaring of the specs:

 case Nothing of !(~(Just x)) -> 5 ; Nothing -> 12

 === { next to last box of
 http://www.haskell.org/ghc/docs/latest/html/users_guide/bang-patterns.html,
 the proposed clause (t) for section 3.17.3, figure 4 }

 Nothing `seq` case Nothing of ~(Just x) -> 5 ; Nothing -> 12

 === { Haskell Report, section 3.17.3, figure 3, clause (d) }

 Nothing `seq` (\x -> 5) (case Nothing of Just x -> x)

 Which most definitely does not evaluate to 12. Either the specs are
 wrong (dubious) or the implementation is. File a bug report.

 --
 Live well,
 ~wren



Eta Reduction

2014-04-01 Thread Dan Doel
In the past year or two, there have been multiple performance problems in
various areas related to the fact that lambda abstraction is not free,
though we
tend to think of it as so. A major example of this was deriving of Functor.
If we
were to derive Functor for lists, we would end up with something like:

instance Functor [] where
  fmap _ [] = []
  fmap f (x:xs) = f x : fmap (\y -> f y) xs

This definition is O(n^2) when fully evaluated, because it causes O(n) eta
expansions of f, so we spend time following indirections proportional to the
depth of the element in the list. This has been fixed in 7.8, but there are
other examples. I believe lens, [1] for instance, has some stuff in it that
works very hard to avoid this sort of cost; and it's not always as easy to
avoid
as the above example. Composing with a newtype wrapper, for instance,
causes an
eta expansion that can only be seen as such at the core level.

The obvious solution is: do eta reduction. However, this is not
operationally
sound currently. The problem is that seq is capable of telling the
difference
between the following two expressions:

undefined
\x -> undefined x

The former causes seq to throw an exception, while the latter is considered
defined enough to not do so. So, if we eta reduce, we can cause terminating
programs to diverge if they make use of this feature.

Luckily, there is a solution.

Semantically one would usually identify the above two expressions. While I
do
believe one could construct a semantics that does distinguish them, it is
not
the usual practice. This suggests that there is a way to not distinguish
them,
perhaps even including seq. After all, the specification of seq is monotone
and
continuous regardless of whether we unify ⊥ with \x -> ⊥ x or insert an
extra
element for the latter.

The currently problematic case is function spaces, so I'll focus on it. How
should:

seq : (a -> b) -> c -> c

act? Well, other than an obvious bottom, we need to emit bottom whenever our
given function is itself bottom at every input. This may first seem like a
problem, but it is actually quite simple. Without loss of generality, let us
assume that we can enumerate the type a. Then, we can feed these values to
the
function, and check their results for bottom. Conal Elliott has prior art for
this sort of thing with his unamb [2] package. For each value x :: a, simply
compute 'f x `seq` ()' in parallel, and look for any successes. If we ever
find
one, we know the function is non-bottom, and we can return our value of c.
If we
never finish searching, then the function must be bottom, and seq should not
terminate, so we have satisfied the specification.

Now, some may complain about the enumeration above. But this, too, is a
simple
matter. It is well known that Haskell programs are denumerable. So it is
quite
easy to enumerate all Haskell programs that produce a value, check whether
that
value has the type we're interested in, and compute said value. All of this
can
be done in Haskell. Thus, every Haskell type is programatically enumerable
in
Haskell, and we can use said enumeration in our implementation of seq for
function types. I have discussed this with Russell O'Connor [3], and he
assures
me that this argument should be sufficient even if we consider semantic
models
of Haskell that contain values not denoted by any Haskell program, so we
should
be safe there.

The one possible caveat is that this implementation of seq is not
operationally
uniform across all types, so the current fully polymorphic type of seq may
not
make sense. But moving back to a type class based approach isn't so bad, and
this time we will have a semantically sound backing, instead of just having
a
class with the equivalent of the current magic function in it.

Once this machinery is in place, we can eta reduce to our hearts' content,
and
not have to worry about breaking semantics. We'd no longer put the burden on
programmers to use potentially unsafe hacks to avoid eta expansions. I
apologize
for not sketching an implementation of the above algorithm, but I'm sure it
should be elementary enough to make it into GHC in the next couple versions.
Everyone learns about this type of thing in university computer science
programs, no?

Thoughts? Comments? Questions?

Cheers,
-- Dan

[1] http://hackage.haskell.org/package/lens
[2] http://hackage.haskell.org/package/unamb
[3] http://r6.ca/
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Bang Patterns

2014-04-01 Thread Dan Doel
Greetings,

I've been thinking about bang patterns as part of implementing our own
Haskell-like compiler here, and have been testing out GHC's implementation
to see how it works. I've come to one case that seems like it doesn't work
how I think it should, or how it is described, and wanted to ask about it.

Specifically, consider:

case Nothing of
  !(~(Just x)) - 5
  Nothing - 12

Now, the way I'd expect this to work, and how I think the spec says it
works, is that my Nothing is evaluated, and then the irrefutable ~(Just x)
matches Nothing, giving a result of 5. In fact, GHC warns about overlapping
patterns for this.

However, this actually evaluates to 12, meaning that !(~p) appears to
cancel out and be equivalent to p. It seems to me this might be a side
effect of the logic used to implement 'let !p = ...', but I'm not certain.

So, my question is whether this is intentional. If it is, then the bang
patterns description should probably mention it, since it's subtly
different than the rest of the specification. Also the warning should be
removed, because there is no overlapping in the above case statement. If it
is unintentional, we should probably decide either to make it intentional
(and perform the above changes), or to change the implementation. :)

Cheers,
-- Dan
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Desugaring do-notation to Applicative

2013-10-02 Thread Dan Doel
Unfortunately, in some cases, function application is just worse. For
instance, when the result is a complex arithmetic expression:

do x - expr1; y - expr2; z - expr3; return $ x*y + y*z + z*x

In cases like this, you have pretty much no choice but to name intermediate
variables, because the alternative is incomprehensible. But applicative
notation:

(\x y z - x*y + y*z + z*x) $ expr1 * expr2 * expr3

moves the variable bindings away from the expressions they're bound to, and
we require extra parentheses to delimit things, and possibly more.

Desugaring the above do into applicative is comparable to use of plain let
in scheme (monad do is let*, mdo was letrec). And sometimes, let is nice,
even if it has an equivalent lambda form.

And as Jake mentioned, syntax isn't the only reason for Applicative.
Otherwise it'd just be some alternate names for functions involving Monad.



On Wed, Oct 2, 2013 at 5:12 AM, p.k.f.holzensp...@utwente.nl wrote:

  I thought the whole point of Applicative (at least, reading Connor’s
 paper) was to restore some function-application-style to the whole
 effects-thing, i.e. it was the very point **not** to resort to binds or
 do-notation.

 ** **

 That being said, I’m all for something that will promote the use of the
 name “pure” over “return”.

 ** **

 +1 for the Opt-In

 ** **

 Ph.

 ** **

 ** **

 ** **

 *From:* Glasgow-haskell-users [mailto:
 glasgow-haskell-users-boun...@haskell.org] *On Behalf Of *Iavor Diatchki

 

 ** **

 do x1 - e1

 ** **

-- The following part is `Applicative`

(x2,x3) - do x2 - e2 x1

  x3 - e3

  pure (x2,x3)

 ** **

f x1 x2 x3

 ___
 Glasgow-haskell-users mailing list
 Glasgow-haskell-users@haskell.org
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] ScopedTypeVariables

2013-08-06 Thread Dan Doel
This is already a separate extension: PatternSignatures. However, that
extension is deprecated for some reason.


On Tue, Aug 6, 2013 at 2:46 PM, Evan Laforge qdun...@gmail.com wrote:

 Occasionally I have to explicitly add a type annotation, either for
 clarity or to help choose a typeclass instance.  Usually top-level
 type annotations take care of this, but sometimes it's convenient to
 only annotate a certain value, e.g. one argument of a lambda.

 I've noticed that while vanilla haskell is happy to allow me to put
 type annotations on variables where they are used (e.g. '\x - f (x ::
 T)'), if I put it on the variable where it is bound (e.g. '\(x :: T)
 - f x'), it wants me to turn on ScopedTypeVariables.

 I think ScopedTypeVariables is a nice extension and I'm sure it comes
 from a perfectly respectable family and all, but it feels like
 annotations on arguments comes in as a side-effect.

 Would it make sense to split it into a separate extension like
 TypesOnArguments so I can more accurately express my deviation from
 haskell2010 orthodoxy?  Or is there some deeper tie between scoped
 type variables and annotations on arguments?

 Now that I think of it, it seems inconsistent that 'x :: A - B; x a =
 ...' is valid, but 'x = \(a :: A) - (...) :: B' is not.  Doesn't the
 former desugar to the latter?

 And what about getting ScopedTypeVariables into haskell prime?  As far
 as I know everyone loves it, or at least no one actually hates it :)

 ___
 Haskell-Cafe mailing list
 haskell-c...@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] ScopedTypeVariables

2013-08-06 Thread Dan Doel
This is already a separate extension: PatternSignatures. However, that
extension is deprecated for some reason.


On Tue, Aug 6, 2013 at 2:46 PM, Evan Laforge qdun...@gmail.com wrote:

 Occasionally I have to explicitly add a type annotation, either for
 clarity or to help choose a typeclass instance.  Usually top-level
 type annotations take care of this, but sometimes it's convenient to
 only annotate a certain value, e.g. one argument of a lambda.

 I've noticed that while vanilla haskell is happy to allow me to put
 type annotations on variables where they are used (e.g. '\x - f (x ::
 T)'), if I put it on the variable where it is bound (e.g. '\(x :: T)
 - f x'), it wants me to turn on ScopedTypeVariables.

 I think ScopedTypeVariables is a nice extension and I'm sure it comes
 from a perfectly respectable family and all, but it feels like
 annotations on arguments comes in as a side-effect.

 Would it make sense to split it into a separate extension like
 TypesOnArguments so I can more accurately express my deviation from
 haskell2010 orthodoxy?  Or is there some deeper tie between scoped
 type variables and annotations on arguments?

 Now that I think of it, it seems inconsistent that 'x :: A - B; x a =
 ...' is valid, but 'x = \(a :: A) - (...) :: B' is not.  Doesn't the
 former desugar to the latter?

 And what about getting ScopedTypeVariables into haskell prime?  As far
 as I know everyone loves it, or at least no one actually hates it :)

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Proposal: Non-recursive let

2013-07-10 Thread Dan Doel
On Wed, Jul 10, 2013 at 3:08 AM, Andreas Abel andreas.a...@ifi.lmu.dewrote:

 Another instance (cut-down) are let-guards like

   let Just x | x  0 = e in x

 The x  0 is understood as an assertion here, documenting an invariant.
  However, Haskell reads this as

   let Just x = case () of { () | x  0 - e } in x

 leading to non-termination.  A non-recursive version

   let' Just x | x  0 = e in x

 could be interpreted as

   case e of { Just x | x  0 - x }

 instead, which is meaningful (in contrast to the current interpretation).


I still don't understand how this is supposed to work. It is a completely
different interpretation of guarded definitions that can only apply to
certain special cases.

Specifically, you have a let with one definition with one guard. This lets
you permute the pieces into a case, because there is one of everything.
But, if we instead have two guards, then we have two possible bodies of the
definition, and only one body of the let. But, the case has only one
discriminee (which comes from the definition body), and two branches (which
are supposed to come from the let body). We have the wrong number of pieces
to shuffle everything around.

The only general desugaring is the one in the report, but keeping the
non-recursive let. This means that the x in the guard refers to an outer
scope. But it is still equivalent to:

let' Just x = case x  0 of True - e in x

let' x = case (case x  0 of True - e) of ~(Just y) - y in x

not:

case e of Just x | x  0 - x

case e of Just x - case x  0 of True - x
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: [Haskell-cafe] GHC bug? Let with guards loops

2013-07-09 Thread Dan Doel
The definition

Just x | x  0 = Just 1

is recursive. It conditionally defines Just x as Just 1 when x  0 (and as
bottom otherwise). So it must know the result before it can test the guard,
but it cannot know the result until the guard is tested. Consider an
augmented definition:

Just x | x  0  = Just 1
   | x = 0 = Just 0

What is x?


On Tue, Jul 9, 2013 at 10:42 AM, Andreas Abel andreas.a...@ifi.lmu.dewrote:

 Hi, is this a known bug or feature of GHC (7.4.1, 7.6.3)?:

 I got a looping behavior in one of my programs and could not explain why.
  When I rewrote an irrefutable let with guards to use a case instead, the
 loop disappeared.  Cut-down:

   works = case Just 1 of { Just x | x  0 - x }

   loops = let Just x | x  0 = Just 1 in x

 works returns 1, loops loops.  If x is unused on the rhs, the
 non-termination disappears.

   works' = let Just x | x  0 = Just 1 in 42

 Is this intended by the Haskell semantics or is this a bug?  I would have
 assumed that non-recursive let and single-branch case are interchangeable,
 but apparently, not...

 Cheers,
 Andreas

 --
 Andreas AbelDu bist der geliebte Mensch.

 Theoretical Computer Science, University of Munich
 Oettingenstr. 67, D-80538 Munich, GERMANY

 andreas.a...@ifi.lmu.de
 http://www2.tcs.ifi.lmu.de/~**abel/ http://www2.tcs.ifi.lmu.de/~abel/

 __**_
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/**mailman/listinfo/haskell-cafehttp://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC bug? Let with guards loops

2013-07-09 Thread Dan Doel
With pattern guards, it's difficult to say whether it is never 'useful' to
have things like the following work:

C x | C' y z - f x = ...

But I'd also shy away from changing the behavior because it causes a lot of
consistency issues. In

let
  f vs1 | gs1 = es1
  h vs2 | gs2 = es2
  ...

we have that f and h are in scope in both gs1 and gs2. Does it make sense
to call f in gs1? It's easy to loop if you do. So should f not be in scope
in gs1, but h is, and vice versa for gs2? But they're both in scope for es1
and es2?

And if we leave the above alone, then what about the case where there are
no vs? Is that different? Or is it only left-hand patterns that get this
treatment?

Also, it might have some weird consequences for moving code around. Like:

let Just x | x  0 = Just 1

let Just x | y  0 = Just 1
y = x

let Just x | b = Just 1
  where b = x  0

let Just x | b = Just 1
b = x  0

These all behave the same way now. Which ones should change?

If Haskell had a non-recursive let, that'd probably be a different story.
But it doesn't.



On Tue, Jul 9, 2013 at 1:12 PM, Andreas Abel andreas.a...@ifi.lmu.dewrote:

 Thanks, Dan and Roman, for the explanation.  So I have to delete the
 explanation non-recursive let = single-branch case from my brain.

 I thought the guards in a let are assertations, but in fact it is more
 like an if.  Ok.

 But then I do not see why the pattern variables are in scope in the guards
 in

   let p | g = e

 The variables in p are only bound to their values (given by e) if the
 guard g evaluates to True.  But how can g evaluate if it has yet unbound
 variables?  How can ever a pattern variable of p be *needed* to compute the
 value of the guard?  My conjecture is that it cannot, so it does not make
 sense to consider variables of g bound by p.  Maybe you can cook up some
 counterexample.

 I think the pattern variables of p should not be in scope in g, and
 shadowing free variables of g by pattern variables of p should be forbidden.

 Cheers,
 Andreas

 On 09.07.2013 17:05, Dan Doel wrote: The definition

 
   Just x | x  0 = Just 1
 
  is recursive. It conditionally defines Just x as Just 1 when x  0 (and
  as bottom otherwise). So it must know the result before it can test the
  guard, but it cannot know the result until the guard is tested. Consider
  an augmented definition:
 
   Just x | x  0  = Just 1
  | x = 0 = Just 0
 
  What is x?

 On 09.07.2013 17:49, Roman Cheplyaka wrote:

 As Dan said, this behaviour is correct.

 The confusing thing here is that in case expressions guards are attached
 to the patterns (i.e. to the lhs), while in let expressions they are
 attached to the rhs.

 So, despite the common Just x | x  0 part, your examples mean rather
 different things.

 Here's the translation of 'loops' according to the Report:

loops =
  let Just x =
case () of
  () | x  0 - Just 1
  in x

 Here it's obvious that 'x' is used in the rhs of its own definition.

 Roman

 * Andreas Abel andreas.a...@ifi.lmu.de [2013-07-09 16:42:00+0200]

 Hi, is this a known bug or feature of GHC (7.4.1, 7.6.3)?:

 I got a looping behavior in one of my programs and could not explain
 why.  When I rewrote an irrefutable let with guards to use a case
 instead, the loop disappeared.  Cut-down:

works = case Just 1 of { Just x | x  0 - x }

loops = let Just x | x  0 = Just 1 in x

 works returns 1, loops loops.  If x is unused on the rhs, the
 non-termination disappears.

works' = let Just x | x  0 = Just 1 in 42

 Is this intended by the Haskell semantics or is this a bug?  I would
 have assumed that non-recursive let and single-branch case are
 interchangeable, but apparently, not...

 Cheers,
 Andreas

 --
 Andreas AbelDu bist der geliebte Mensch.

 Theoretical Computer Science, University of Munich
 Oettingenstr. 67, D-80538 Munich, GERMANY

 andreas.a...@ifi.lmu.de
 http://www2.tcs.ifi.lmu.de/~**abel/ http://www2.tcs.ifi.lmu.de/~abel/

 __**_
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/**mailman/listinfo/haskell-cafehttp://www.haskell.org/mailman/listinfo/haskell-cafe




 --
 Andreas AbelDu bist der geliebte Mensch.

 Theoretical Computer Science, University of Munich
 Oettingenstr. 67, D-80538 Munich, GERMANY

 andreas.a...@ifi.lmu.de
 http://www2.tcs.ifi.lmu.de/~**abel/ http://www2.tcs.ifi.lmu.de/~abel/

 __**_
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/**mailman/listinfo/haskell-cafehttp://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Int is broken [Was: Different answers on different machines]

2013-06-02 Thread Dan Doel
There is a package that implements an Int that throws an exception on
overflow:

http://hackage.haskell.org/package/safeint

Since Int's existence is pretty much all about trading for performance, I
wouldn't recommend holding your breath on the above becoming the default.

If you want things to work like Scheme, that's exactly what Integer is (in
GHC, anyhow). And Integer is what you get by default(ing) unless you use
something else that is specifically defined to use Int, or specify it
yourself.



On Sun, Jun 2, 2013 at 5:02 PM, Tommy Thorn tt1...@yahoo.com wrote:

 On Jun 2, 2013, at 12:52 , Henry Laxen nadine.and.he...@pobox.com wrote:

  Yes, that was it.  The dell was a 32 bit system, and the desktop a 64.  I
  changed everything from Int to Integer, and now both agree.  Thanks for
 the
  pointer.

 Isn't that just terrible? I hate the fact that Haskell was defined to
 neither trap the overflow
 or just treat everything as Integer [like Scheme]. A sacrifice of program
 safety in the name
 of efficiency.

 I disagree with this choice and posit that a clever implementation can
 minimize the cost
 of the overflow checking in most relevant cases.

 I wish this fatal flaw would be reconsidered for the next major revision.

 Tommy


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Stream fusion and span/break/group/init/tails

2013-04-29 Thread Dan Doel
On Mon, Apr 29, 2013 at 10:05 AM, Duncan Coutts 
duncan.cou...@googlemail.com wrote:

 On Thu, 2013-04-25 at 00:52 +0200, Gábor Lehel wrote:
  On Wed, Apr 24, 2013 at 7:56 PM, Bryan O'Sullivan b...@serpentine.com
 wrote:
 
   On Wed, Apr 24, 2013 at 10:47 AM, Duncan Coutts 
   duncan.cou...@googlemail.com wrote:
  
   I address it briefly in my thesis [1], Section 4.8.2. I think it's a
   fundamental limitation of stream fusion.
  
  
   See also concat, where the naive fusion-based implementation has
 quadratic
   performance:
  
   concat :: [Text] - Text
   concat txts = unstream (Stream.concat (List.map stream txts))
  
   I've never figured out how to implement this with sensible
 characteristics
   within the fusion framework.
  
 
  If you could solve concat, might that also lead to be being able to do
  without the Skip constructor?

 Dan is right, we still need Skip. My suggested solution to the
 concatmap problem is also mostly independent of the skip issue.

 You shouldn't think of skip as being a hack. It's not. It's how we
 express a more general class of producers in a way that is productive.


 To further this, note that in a total language, with the type:

codata Stream a = End | Yield a (Stream a)

filter is not definable; it is not a total function. At least, barring an
extra proof of some sort that the predicate will yield true after a finite
amount of time. concat is similar.

Also, adding Skip (Stream a) is a relatively standard way of explicitly
representing lazy, partial values. (This is opposed to the partiality
monad, which is like an encoding of strict general recursion). That is, if
νF is the type of total values, then ν(F + Id) is the type of partial
values. I don't know how easy it is to delete from a more complex tree
using just that extension, but in theory you could productively represent
arbitrary manipulations with just that, I believe.

-- Dan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why were datatype contexts removed instead of fixing them?

2013-04-25 Thread Dan Doel
It is not completely backwards compatible, because (for instance) the
declaration:

newtype C a = Foo a = Foo a

was allowed, but:

newtype Foo a where
  Foo :: C a = a - Foo a

is an illegal definition. It can only be translated to a non-newtype data
declaration, which changes the semantics.


On Thu, Apr 25, 2013 at 10:35 AM, Gábor Lehel illiss...@gmail.com wrote:

 I've wondered this too. What would have been wrong with a simple
 source-to-source translation, where a constraint on the datatype itself
 translates to the same constraint on each of its constructors? Perhaps it
 would be unintuitive that you would have to pattern match before gaining
 access to the constraint? On a superficial examination it would have been
 backwards-compatible, allowing strictly more programs than the previous
 handling.

 On Thu, Apr 25, 2013 at 12:38 PM, harry volderm...@hotmail.com wrote:

 If I understand correctly, the problem with datatype contexts is that if
 we
 have e.g.
   data Eq a = Foo a = Foo a
 the constraint Eq a is thrown away after a Foo is constructed, and any
 method using Foos must repeat Eq a in its type signature.

 Why were these contexts removed from the language, instead of fixing
 them?

 PS This is following up on a discussion on haskell-beginners, How to
 avoid
 repeating a type restriction from a data constructor. I'm interested in
 knowing whether there's a good reason not to allow this, or if it's just a
 consequence of the way type classes are implemented by compilers.


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




 --
 Your ship was destroyed in a monadic eruption.
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why were datatype contexts removed instead of fixing them?

2013-04-25 Thread Dan Doel
I can't think of any at the moment that are still in force. However, one
that might have been relevant at the time is:

data C a = Foo a = Foo a a

foo :: Foo a - (a, a)
foo ~(Foo x y) = (x, y)

Irrefutable matches used to be disallowed for GADT-like things, which would
break the above if it were translated to GADTs. Now they just don't
introduce their constraints.

However, another thing to consider is that getting rid of data type
contexts was accepted into the language standard. It's not really possible
to fix them by translation to GADTs in the report, because GADTs aren't in
the report, and probably won't be for some time, if ever. And putting a
fixed version natively into the report would require nailing down a lot of
details. For instance, are the contexts simply invalid on newtypes, or do
they just work the old way?

I don't really think they're worth saving in general, though. I haven't
missed them, at least.

-- Dan


On Thu, Apr 25, 2013 at 3:19 PM, Gábor Lehel illiss...@gmail.com wrote:

 Good point, again. Is that the only problem with it?


 On Thu, Apr 25, 2013 at 5:57 PM, Dan Doel dan.d...@gmail.com wrote:

 It is not completely backwards compatible, because (for instance) the
 declaration:

 newtype C a = Foo a = Foo a

 was allowed, but:

 newtype Foo a where
   Foo :: C a = a - Foo a

 is an illegal definition. It can only be translated to a non-newtype data
 declaration, which changes the semantics.


 On Thu, Apr 25, 2013 at 10:35 AM, Gábor Lehel illiss...@gmail.comwrote:

 I've wondered this too. What would have been wrong with a simple
 source-to-source translation, where a constraint on the datatype itself
 translates to the same constraint on each of its constructors? Perhaps it
 would be unintuitive that you would have to pattern match before gaining
 access to the constraint? On a superficial examination it would have been
 backwards-compatible, allowing strictly more programs than the previous
 handling.

 On Thu, Apr 25, 2013 at 12:38 PM, harry volderm...@hotmail.com wrote:

 If I understand correctly, the problem with datatype contexts is that
 if we
 have e.g.
   data Eq a = Foo a = Foo a
 the constraint Eq a is thrown away after a Foo is constructed, and any
 method using Foos must repeat Eq a in its type signature.

 Why were these contexts removed from the language, instead of fixing
 them?

 PS This is following up on a discussion on haskell-beginners, How to
 avoid
 repeating a type restriction from a data constructor. I'm interested in
 knowing whether there's a good reason not to allow this, or if it's
 just a
 consequence of the way type classes are implemented by compilers.


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




 --
 Your ship was destroyed in a monadic eruption.
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe





 --
 Your ship was destroyed in a monadic eruption.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Stream fusion and span/break/group/init/tails

2013-04-24 Thread Dan Doel
Presumably concat also has to use skip, for the same reason as filter.
Otherwise it has to recursively process the outer stream until it gets to a
non-empty inner stream, which breaks the rule that only the final consumer
is recursive.

concat [[1,2,3],[4,5],[],[6,7]]

probably looks something like:

Yield 1, Yield 2, Yield 3, Skip, Yield 4, Yield 5, Skip, Skip, Yield 6,
Yield 7, Skip, Done

-- Dan



On Wed, Apr 24, 2013 at 6:52 PM, Gábor Lehel illiss...@gmail.com wrote:



 On Wed, Apr 24, 2013 at 7:56 PM, Bryan O'Sullivan b...@serpentine.comwrote:

 On Wed, Apr 24, 2013 at 10:47 AM, Duncan Coutts 
 duncan.cou...@googlemail.com wrote:

 I address it briefly in my thesis [1], Section 4.8.2. I think it's a
 fundamental limitation of stream fusion.


 See also concat, where the naive fusion-based implementation has
 quadratic performance:

 concat :: [Text] - Text
 concat txts = unstream (Stream.concat (List.map stream txts))

 I've never figured out how to implement this with sensible
 characteristics within the fusion framework.


 If you could solve concat, might that also lead to be being able to do
 without the Skip constructor? Skip was added explicitly to be able to
 efficiently handle things like filter (without it the Step datatype is
 exactly the base functor for lists), but if concat works, then filter p
 can be expressed as concat . map (\x - if (p x) then [x] else []). Of
 course, presumably filter isn't the only function that requires Skip, I
 don't know what the others are. (Also of course solve and works are
 intentionally fuzzy, with the point being that I don't know if solving
 concat implies that the desirable properties would survive composition in
 the suggested manner.)

 --
 Your ship was destroyed in a monadic eruption.
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Why is GHC so much worse than JHC when computing the Ackermann function?

2013-04-20 Thread Dan Doel
There's something strange going on in this example. For instance, setting
(-M) heap limits as low as 40K seems to have no effect, even though the
program easily uses more than 8G. Except, interrupting the program in such
a case does seem to give a message about heap limits being exceeded (it
won't stop on its own, though). Also, the program compiled without
optimizations uses very little memory (though it's slow), which is odd,
because the optimized version works exclusively on Int#, which shouldn't
cause heap allocation.

I filed a ticket earlier: http://hackage.haskell.org/trac/ghc/ticket/7850
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Need some advice around lazy IO

2013-03-18 Thread Dan Doel
Do note that deepSeq alone won't (I think) change anything in your
current code. bug will deepSeq the file contents. And the cons will
seq bug. But nothing is evaluating the cons. And further, the cons
isn't seqing the tail, so none of that will collapse, either. So the
file descriptors will still all be opened at once.

Probably the best solution if you choose to go this way is:

bug - evaluate (fileContents2Bug $!! str)

which ties the evaluation of the file contents into the IO execution.
At that point, deepSeqing the file is probably unnecessary, though,
because evaluating the bug will likely allow the file contents to be
collected.

On Mon, Mar 18, 2013 at 6:42 AM, C K Kashyap ckkash...@gmail.com wrote:
 Thanks Konstantin ... I'll try that out too...



 Regards,
 Kashyap


 On Mon, Mar 18, 2013 at 3:31 PM, Konstantin Litvinenko
 to.darkan...@gmail.com wrote:

 On 03/17/2013 07:08 AM, C K Kashyap wrote:

 I am working on an automation that periodically fetches bug data from
 our bug tracking system and creates static HTML reports. Things worked
 fine when the bugs were in the order of 200 or so. Now I am trying to
 run it against 3000 bugs and suddenly I see things like - too  many open
 handles, out of memory etc ...

 Here's the code snippet - http://hpaste.org/84197

 It's a small snippet and I've put in the comments stating how I run into
 out of file handles or simply file not getting read due to lazy IO.

 I realize that putting ($!) using a trial/error approach is going to be
 futile. I'd appreciate some pointers into the tools I could use to get
 some idea of which expressions are building up huge thunks.


 You problem is in

 let bug = ($!) fileContents2Bug str

 ($!) evaluate only WHNF and you need NF. Above just evaluate to first char
 in a file, not to all content. To fully evaluate 'str' you need something
 like

 let bug = Control.DeepSeq.rnf str `seq` fileContents2Bug str






 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Need some advice around lazy IO

2013-03-17 Thread Dan Doel
One thing that typically isn't mentioned in these situations is that
you can add more laziness. I'm unsure if it would work from just your
snippet, but it might.

The core problem is that something like:

mapM readFile names

will open all the files at once. Applying any processing to the file
contents is irrelevant unless the results of that processing is
evaluated sufficiently to allow the file to be closed.

Now, most people will tell you that this means lazy I/O is evil, and
you should make it all strict. But, consider an analogous situation
where instead of opening a file handle, we do something that allocates
a lot of memory, and can only free it after processing. We'd run out
of memory allocating 3,000 * X, but X alone is fine. Then people
usually suggest delaying the allocation until you need it, i.e. lazy
evaluation.

Unfortunately, there's no combinator for this in the standard
libraries, but you can write one:

mapMI :: (a - IO b) - [a] - IO [b]
mapMI _ [] = return []
-- You can play with this case a bit. This will open a file for
the head of the list,
-- and then when each subsequent cons cell is inspected. You could probably
-- interleave 'f x' as well.
mapMI f (x:xs) = do y - f x ; ys - unsafeInterleaveIO (mapMI f
xs) ; return (y:ys)

Now, mapMI readFile only opens the handle when you match on the list,
so if you process the list incrementally, it will open the file
handles one-by-one.

As an aside, you should never use hClose when doing lazy I/O. That's
kind of like solving the above, i've allocated too much memory,
problem with, just overwrite some expensive stuff with some other
cheap stuff to free up space.

-- Dan


On Sun, Mar 17, 2013 at 1:08 AM, C K Kashyap ckkash...@gmail.com wrote:
 Hi,

 I am working on an automation that periodically fetches bug data from our
 bug tracking system and creates static HTML reports. Things worked fine when
 the bugs were in the order of 200 or so. Now I am trying to run it against
 3000 bugs and suddenly I see things like - too  many open handles, out of
 memory etc ...

 Here's the code snippet - http://hpaste.org/84197

 It's a small snippet and I've put in the comments stating how I run into
 out of file handles or simply file not getting read due to lazy IO.

 I realize that putting ($!) using a trial/error approach is going to be
 futile. I'd appreciate some pointers into the tools I could use to get some
 idea of which expressions are building up huge thunks.


 Regards,
 Kashyap

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell + RankNTypes + (forall p. p Char - p Bool) sound?

2013-03-05 Thread Dan Doel
Just because you can't use the 'magic primitive' in question to
produce an element of the empty type doesn't mean the system is sound
(nor does type soundness have anything to do with proving 'false').

The question is what the primitive is supposed to do. If it's supposed
to work as a witness of equality between Char and Bool, then (sym prim
. prim :: p Char - p Char) must be refl, the identity function. But
then if we choose p = Identity, we have f :: Char - Char via
round-trip through Bool that must be the identity. In a sufficiently
capable language, that can be shown impossible via the pigeonhole
principle, but I'm not sure if 'just rank-n types' is up to the task.

Some other food for thought is that 'true = false' (true and false
beeing booleans) is not sufficient to derive false in dependent type
theory _unless_ there is some kind of large elimination, either
directly or via universes. Without those, type theory admits trivial
models in which every type denotes a set of at most one element. One
can see then that it might take the ability to do case analysis on
types to gain a contradiction from 'Char = Bool' in GHC (the
pigeonhole route doesn't seem like it'd be feasible), although I don't
know that that's the case.

Anyhow, soundness is with respect to a model. In the usual model of
Haskell, every domain is nonempty, including the one for p Char - p
Bool. So if you don't specify anything about the primitive, it could
be undefined, and there'd be no problem with type soundness. And it
may also be the case that you can introduce a primitive that is not
parametric in p, and arbitrarily applies functions f :: Char - Bool,
g :: Bool - Char in the right places for each particular p definable
in the language. That will fail the properties of an equality witness,
but if you don't specify any properties at all, anything goes (and you
can't really prove anything about the action of Leibniz or any other
equality in GHC anyhow, so it can't contradict anything there).

i don't really know the answer to whether TypeFamilies/GADTs is merely
sufficient or necessary, though.


On Tue, Mar 5, 2013 at 3:54 AM, Shachaf Ben-Kiki shac...@gmail.com wrote:
 I was trying to figure out a way to write absurd :: (forall p. p Char
 - p Bool) - Void using only rank-n types. Someone suggested that
 Haskell with RankNTypes and a magic primitive of type (forall p. p
 Char - p Bool) might be sound (disregarding the normal ways to get ⊥,
 of course).

 Is that true? Given either TypeFamilies or GADTs, you can write
 absurd. But it doesn't seem like you can write it with just
 RankNTypes. (This is related to GeneralizedNewtypeDeriving, which is
 more or less a version of that magic primitive.)

 This seems like something that GADTs (/TypeFamilies) give you over
 Leibniz equality: You can write

   data Foo a where
 FooA :: Foo Char
 FooB :: Void - Foo Bool

   foo :: Foo Bool - Void
   foo (FooB x) = x

 Without any warnings. On the other hand

   data Bar a = BarA (Is a Char) | BarB (Is a Bool) Void

   bar :: Bar Bool - Void
   bar (BarB _ x) = x
   bar (BarA w) = -- ???

 Doesn't seem possible. If it's indeed impossible, what's the minimal
 extension you would need to add on top of RankNTypes to make it work?
 GADTs seems way too big.

 Shachaf

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell syntax/indentation for vim

2013-03-04 Thread Dan Doel
I hadn't seen this before, but I tried it out, and the parts I'm interested
in are nice. The indenting is less flaky than what I was using before
(comments had issues).

If you're rewriting things, though, it'd be nice to be able to customize
indentation a little more. For instance, I like laying out ifs like:

if foo
  then bar
  else baz

But I like to lay out wheres as:

foo = ...
 where
 bar = ...

But both the indents here are based on shiftwidth, so they're tied together.

Another 'nice to have' would be some intelligent outdenting. For instance,
if you type a let block right now:

let foo = zig
bar = zag
in ...

That's what you'll get. It'd be nice if typing the 'in' snapped back to the
let. I know it's possible to implement something like this, because the
scala indentation mode I use frequently outdents when I type '=' (which
annoys the hell out of me, because it's almost never correct), but I don't
know if it can be done intelligently enough to be useful (which would be
important). Something to keep in mind, though.

-- Dan


On Sun, Mar 3, 2013 at 9:48 AM, dag.odenh...@gmail.com 
dag.odenh...@gmail.com wrote:

 I see now in your README that you have seen vim2hs.  I'd love to hear what
 you disliked about it, especially given my plan to rewrite the whole thing
 [1]! :)

 [1] https://github.com/dag/vim2hs/issues/45


 On Sun, Mar 3, 2013 at 3:38 PM, dag.odenh...@gmail.com 
 dag.odenh...@gmail.com wrote:

 Hi

 Have you seen vim2hs?

 https://github.com/dag/vim2hs


 On Sat, Mar 2, 2013 at 9:11 PM, Tristan Ravitch travi...@cs.wisc.eduwrote:

 Cafe,

 I've recently been playing with vim and wasn't quite satisfied with the
 existing syntax highlighting and indentation, so I thought I'd try my
 hand at a new Haskell mode:

 https://github.com/travitch/hasksyn

 It is minimal in that it doesn't provide support for running external
 commands over code or anything fancy.  It just does syntax highlighting
 and reasonably-smart indentation.  There is no support for literate
 Haskell since supporting both with one mode is very tricky.

 It might be useful to some people.  Comments, bug reports, and
 suggestions
 welcome.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell syntax/indentation for vim

2013-03-04 Thread Dan Doel
They're hard to come by, but some other examples might be...

1) If you type something that is recognizably a guard, it could pop
back other guards:

foo x y z
  | guard1 = do ...
| guard2 = -- outdent now

Not sure how feasible that one is.

2) When you type 'else', outdent to the same indentation as the
associated 'then'.

There may be others, but those are what I could think of after a few
minutes of pondering.


On Mon, Mar 4, 2013 at 12:40 PM, Tristan Ravitch travi...@cs.wisc.edu wrote:
 I like automatic outdenting too, but I only came up with three cases
 where I felt like I could do it reliably:

  * With let/in as you described
  * After a catchall case:

case ... of
  C1 - ...
  C2 - ...
  _ - ...
-- dedent back to here

  * And similarly after a do block ending in a return

 Even that last one is slightly questionable, I feel, but probably works
 for almost all cases.  Are there any others?

 On Mon, Mar 04, 2013 at 12:20:12PM -0500, Dan Doel wrote:
 I hadn't seen this before, but I tried it out, and the parts I'm interested
 in are nice. The indenting is less flaky than what I was using before
 (comments had issues).

 If you're rewriting things, though, it'd be nice to be able to customize
 indentation a little more. For instance, I like laying out ifs like:

 if foo
   then bar
   else baz

 But I like to lay out wheres as:

 foo = ...
  where
  bar = ...

 But both the indents here are based on shiftwidth, so they're tied together.

 Another 'nice to have' would be some intelligent outdenting. For instance,
 if you type a let block right now:

 let foo = zig
 bar = zag
 in ...

 That's what you'll get. It'd be nice if typing the 'in' snapped back to the
 let. I know it's possible to implement something like this, because the
 scala indentation mode I use frequently outdents when I type '=' (which
 annoys the hell out of me, because it's almost never correct), but I don't
 know if it can be done intelligently enough to be useful (which would be
 important). Something to keep in mind, though.

 -- Dan


 On Sun, Mar 3, 2013 at 9:48 AM, dag.odenh...@gmail.com 
 dag.odenh...@gmail.com wrote:

  I see now in your README that you have seen vim2hs.  I'd love to hear what
  you disliked about it, especially given my plan to rewrite the whole thing
  [1]! :)
 
  [1] https://github.com/dag/vim2hs/issues/45
 
 
  On Sun, Mar 3, 2013 at 3:38 PM, dag.odenh...@gmail.com 
  dag.odenh...@gmail.com wrote:
 
  Hi
 
  Have you seen vim2hs?
 
  https://github.com/dag/vim2hs
 
 
  On Sat, Mar 2, 2013 at 9:11 PM, Tristan Ravitch 
  travi...@cs.wisc.eduwrote:
 
  Cafe,
 
  I've recently been playing with vim and wasn't quite satisfied with the
  existing syntax highlighting and indentation, so I thought I'd try my
  hand at a new Haskell mode:
 
  https://github.com/travitch/hasksyn
 
  It is minimal in that it doesn't provide support for running external
  commands over code or anything fancy.  It just does syntax highlighting
  and reasonably-smart indentation.  There is no support for literate
  Haskell since supporting both with one mode is very tricky.
 
  It might be useful to some people.  Comments, bug reports, and
  suggestions
  welcome.
 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 
 
 
 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Inference for RankNTypes

2013-01-02 Thread Dan Doel
Your example doesn't work for the same reason the following doesn't work:

id runST (some st code)

It requires the inferencer to instantiate certain variables of id's type to
polymorphic types based on runST (or flip's based on one), and then use
that information to check some st code (id in your example) as a
polymorphic type. At various times, GHC has had ad-hoc left-to-right
behavior that made this work, but it no longer does. Right now, I believe
it only has an ad-hoc check to make sure that:

runST $ some st code

works, and not much else. Note that even left-to-right behavior covers all
cases, as you might have:

f x y

such that y requires x to be checked polymorphically in the same way. There
are algorithms that can get this right in general, but it's a little
tricky, and they're rather different than GHC's algorithm, so I don't know
whether it's possible to make GHC behave correctly.

The reason it works when you factor out or annotate flip one 'x' is that
that is the eventual inferred type of the expression, and then it knows to
expect the id to be polymorphic. But when it's all at once, we just have a
chain of unifications relating things like: (forall a. a - a) ~ beta ~
(alpha - alpha), where beta is part of type checking flip, and alpha -
alpha is the instantiation of id's type with unification variables, because
we didn't know that it was supposed to be a fully polymorphic use. And that
unification fails.


On Wed, Jan 2, 2013 at 8:21 AM, Francesco Mazzoli f...@mazzo.li wrote:

 At Wed, 2 Jan 2013 14:49:51 +0200,
 Roman Cheplyaka wrote:
  I don't see how this is relevant.

 Well, moving `flip one' in a let solves the problem, and The fact that
 let-bound
 variables are treated differently probably has a play here.  I originally
 thought that this was because the quantifications will be all to the left
 in the
 let-bound variable while without a let-bound variable the types are used
 directly.  However this doesn’t explain the behaviour I’m seeing.

  GHC correctly infers the type of flip one 'x':
 
*Main :t flip one 'x'
flip one 'x' :: (forall a. a - a) - Char
 
  But then somehow it fails to apply this to id. And there are no bound
  variables here that we should need to annotate.

 Right.  The weirdest thing is that annotating `flip one' (as in `three' in
 my
 code) or indeed `flip one 'x'' with the type that shows up in ghci makes
 things work:

 five = (flip one 'x' :: (forall a. a - a) - Char) id

 Francesco

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Inference for RankNTypes

2013-01-02 Thread Dan Doel
On Wed, Jan 2, 2013 at 11:20 AM, Dan Doel dan.d...@gmail.com wrote:

 Note that even left-to-right behavior covers all cases, as you might have:

 f x y

 such that y requires x to be checked polymorphically in the same way.
 There are algorithms that can get this right in general, but it's a little
 tricky, and they're rather different than GHC's algorithm, so I don't know
 whether it's possible to make GHC behave correctly.


Oops, that should have been, note that not even left-to-right behavior
covers all cases.

Also, I don't mean to imply that GHC is behaving wrongly. Situations like
these are usually the result of necessary trade-offs in the algorithms. GHC
does a lot of things that algorithms that successfully check this type of
examples don't have to deal with at all. You have to pick your poison.

-- Dan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Inference for RankNTypes

2013-01-02 Thread Dan Doel
If you want to know the inner workings, you probably need to read the
OutsideIn(X) paper.*

I'm not that familiar with the algorithm. But what happens is something
like this When GHC goes to infer the type of 'f x' where it knows that
f's argument is expected to be polymorphic, this triggers a different code
path that will check that x can be given a type that is at least as general
as is necessary for the argument.

However, flip one 'x' id gives flip a type like (alpha - beta - gamma)
- beta - alpha - gamma. Then, we probably get some constraints collected
up like:

alpha ~ (forall a. a - a)
alpha ~ (delta - delta)

That is, it does not compute the higher-rank type of flip one 'x' and
then decide how the application of that to id should be checked; it decides
how all the arguments should be checked based only on flip's type, and flip
does not have a higher-rank type on its own. And solving the above
constraints cannot trigger the alternate path.

However, when you factor out or annotate flip one 'x', it knows that it's
applying something with a higher-rank type (whether because it inferred it
separately, or you gave it), and that does trigger the alternate code path.

If that's still too vague, you'll have to refer to the paper.

-- Dan

*
http://research.microsoft.com/en-us/um/people/simonpj/papers/constraints/jfp-outsidein.pdf


On Wed, Jan 2, 2013 at 11:47 AM, Francesco Mazzoli f...@mazzo.li wrote:

 At Wed, 2 Jan 2013 11:20:46 -0500,
 Dan Doel wrote:
  Your example doesn't work for the same reason the following doesn't work:
 
  id runST (some st code)
 
  It requires the inferencer to instantiate certain variables of id's type
 to
  polymorphic types based on runST (or flip's based on one), and then use
  that information to check some st code (id in your example) as a
  polymorphic type. At various times, GHC has had ad-hoc left-to-right
  behavior that made this work, but it no longer does. Right now, I believe
  it only has an ad-hoc check to make sure that:
 
  runST $ some st code
 
  works, and not much else. Note that even left-to-right behavior covers
 all
  cases, as you might have:
 
  f x y
 
  such that y requires x to be checked polymorphically in the same way.
 There
  are algorithms that can get this right in general, but it's a little
  tricky, and they're rather different than GHC's algorithm, so I don't
 know
  whether it's possible to make GHC behave correctly.
 
  The reason it works when you factor out or annotate flip one 'x' is
 that
  that is the eventual inferred type of the expression, and then it knows
 to
  expect the id to be polymorphic. But when it's all at once, we just have
 a
  chain of unifications relating things like: (forall a. a - a) ~ beta ~
  (alpha - alpha), where beta is part of type checking flip, and alpha -
  alpha is the instantiation of id's type with unification variables,
 because
  we didn't know that it was supposed to be a fully polymorphic use. And
 that
  unification fails.

 Hi Dan,

 Thanks a lot for the answer, one forgets that with HM you always replace
 the
 quantified variables immediately.

 However I am still confused on how GHC makes it work when I annotate or put
 things in separate variables.  In other words, can you provide links or
 clarify
 how this procedure works:

 The reason it works when you factor out or annotate flip one 'x' is
 that
 that is the eventual inferred type of the expression, and then it
 knows to
 expect the id to be polymorphic.

 Thanks,
 Francesco

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Status of Haskell'?

2012-12-02 Thread Dan Doel
This is a significant problem for even some of the more ubiquitous
extensions. For instance, there are multiple compilers that implement
RankNTypes, but I would not be surprised at all if programs using that
extension were not portable across implementations (they're not really even
portable across GHC versions).

The problem is that RankNTypes is not just about the fact that you are
allowed to use such types; every compiler must decide on an inference
algorithm that incorporates such types while defaulting to Hindley-Milner.
But, there are several such algorithms, and they have different trade offs
as far as where annotations must be placed, or even whether certain
conceivably well-typed terms are type checkable (for instance, GHC used to
do some level of impredicative instantiation; forall a. a - a could be
instantiated to (forall a. a) - (forall a. a); but this no longer works).

So, even if we have ubiquitous agreement on the fact that RankNTypes are
useful, and implementable, we don't have ubiquitous agreement on the
algorithms for implementing them, and which set of trade offs to make. But
any standard would have to nail all that down, or else programs won't be
portable.

And this is not the only extension for which this kind of thing is an issue.

-- Dan


On Sun, Dec 2, 2012 at 6:42 AM, Ross Paterson r...@soi.city.ac.uk wrote:

 On Fri, Nov 30, 2012 at 11:05:41PM +, Gábor Lehel wrote:
  Well, I'm not so sure it's a great idea to just bake what GHC does at
  this moment (for any particular extension) into the standard without
  really thinking about it. Even then, you have to figure out, in great
  detail, what GHC does, and write it all down! That's not negligible
  effort, either.

 And that is the core of the problem.  The standard isn't just a list
 of approved features.  It needs to describe them in such detail that a
 programmer can tell, from the Report alone, whether a particular program
 is legal, and if so what it's supposed to do.  We don't have that level
 of description for these extensions, and creating it will be a lot of
 hard work.

 Relying on what GHC does at the moment has obvious risks for
 programmers, it also puts an unfair responsibility on GHC itself.  How can
 they improve a feature if it's current implementation is the standard?

 ___
 Haskell-prime mailing list
 Haskell-prime@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-prime

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Status of Haskell'?

2012-12-02 Thread Dan Doel
As far as I know, MPTCs alone do not have this issue. But functional
dependencies do, as there are at least two ways they can behave. One is the
way they traditionally behave in GHC, and another is the way they would
behave if they were sugar for type families.

I can't think of anything about MPTCs alone that would be a problem, though.


On Sun, Dec 2, 2012 at 2:56 PM, Gábor Lehel illiss...@gmail.com wrote:

 On Sun, Dec 2, 2012 at 7:06 PM, Dan Doel dan.d...@gmail.com wrote:
  This is a significant problem for even some of the more ubiquitous
  extensions. For instance, there are multiple compilers that implement
  RankNTypes, but I would not be surprised at all if programs using that
  extension were not portable across implementations (they're not really
 even
  portable across GHC versions).
 
  The problem is that RankNTypes is not just about the fact that you are
  allowed to use such types; every compiler must decide on an inference
  algorithm that incorporates such types while defaulting to
 Hindley-Milner.
  But, there are several such algorithms, and they have different trade
 offs
  as far as where annotations must be placed, or even whether certain
  conceivably well-typed terms are type checkable (for instance, GHC used
 to
  do some level of impredicative instantiation; forall a. a - a could be
  instantiated to (forall a. a) - (forall a. a); but this no longer
 works).
 
  So, even if we have ubiquitous agreement on the fact that RankNTypes are
  useful, and implementable, we don't have ubiquitous agreement on the
  algorithms for implementing them, and which set of trade offs to make.
 But
  any standard would have to nail all that down, or else programs won't be
  portable.
 
  And this is not the only extension for which this kind of thing is an
 issue.
 

 Out of curiosity, to what degree does MultiParamTypeClasses have this
 issue? It seems to me like one of the few extensions which is
 straightforward, widely implemented, uncontroversial, and very useful.
 For some reason it's been held up by the FDs vs TFs debate, but I
 don't see why it has to be. Vanilla MPTCs on the one hand, and MPTCs
 together with either FDs or TFs on the other hand, serve different use
 cases. If you want a plain type-level relation on types, you use
 MPTCs. If you want some types to be determined by others, then you use
 either FDs or TFs. If we standardize support for the former, that's
 useful, even if we continue to procrastinate on the FDs vs TFs
 question. So if the idea is to do yearly incremental updates to the
 standard, MPTCs looks like the biggest low-hanging fruit to me.
 (Assuming they aren't similarly problematic as RankNTypes.)

 --
 Your ship was destroyed in a monadic eruption.

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: [Haskell-cafe] Why Kleisli composition is not in the Monad signature?

2012-11-30 Thread Dan Doel
Lists! The finite kind.

This could mean Seq for instance.
On Nov 30, 2012 9:53 AM, Brent Yorgey byor...@seas.upenn.edu wrote:

 On Fri, Nov 30, 2012 at 02:33:54AM +0100, Ben Franksen wrote:
  Brent Yorgey wrote:
   On Thu, Nov 29, 2012 at 03:52:58AM +0100, Ben Franksen wrote:
   Tony Morris wrote:
As a side note, I think a direct superclass of Functor for Monad is
 not
a good idea, just sayin'
   
class Functor f where
  fmap :: (a - b) - f a - f b
   
class Functor f = Apply f where
  (*) :: f (a - b) - f a - f b
   
class Apply f = Bind f where
  (=) :: (a - f b) - f a - f b
   
class Apply f = Applicative f where
  unit :: a - f a
   
class (Applicative f, Bind f) = Monad f where
   
Same goes for Comonad (e.g. [] has (=) but not counit)
... and again for Monoid, Category, I could go on...
  
   Hi Tony
  
   even though I dismissed your mentioning this on the Haskell' list, I
 do
  have
   to admit that the proposal has a certain elegance. However, before I
 buy
   into this scheme, I'd like to see some striking examples for types
 with
   natural (or at least useful) Apply and Bind instances that cannot be
 made
   Applicative resp. Monad.
  
   Try writing an Applicative instances for (Data.Map.Map k).  It can't
   be done, but the Apply instance is (I would argue) both natural and
  useful.
 
  I see. So there is one example. Are there more? I'd like to get a feeling
  for the abstraction and this is hard if there is only a single example.

 Any data type which admits structures of arbitrary but *only finite*
 size has a natural zippy Apply instance but no Applicative (since
 pure would have to be an infinite structure).  The Map instance I
 mentioned above falls in this category.  Though I guess I'm having
 trouble coming up with other examples, but I'm sure some exist.  Maybe
 Edward knows of other examples.
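A minimal sketch of the zippy Apply being described for Data.Map (my own
illustration; the instance actually shipped in the semigroupoids package may
differ in detail):

import qualified Data.Map as Map
import Data.Map (Map)

-- Apply without pure: combine only the keys present in both maps.
-- A lawful 'pure' would have to be defined at every possible key,
-- which a finite Map cannot represent.
apMap :: Ord k => Map k (a -> b) -> Map k a -> Map k b
apMap = Map.intersectionWith ($)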

 -Brent

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why Kleisli composition is not in the Monad signature?

2012-10-16 Thread Dan Doel
On Tue, Oct 16, 2012 at 10:37 AM, AUGER Cédric sedri...@gmail.com wrote:
 join IS the most important from the categorical point of view.
 In a way it is natural to define 'bind' from 'join', but in Haskell, it
 is not always possible (see the Monad/Functor problem).

 As I said, from the mathematical point of view, join (often noted μ in
 category theory) is the (natural) transformation which with return (η
 that I may have erroneously written ε in some previous mail) defines a
 monad (and requires some additional laws).

This is the way it's typically presented. Can you demonstrate that it
is the most important presentation?

I'd urge caution in doing so, too. For instance, there is a paper,
Monads Need Not Be Endofunctors, that describes a generalization of
monads to monads relative to another functor. And there, bind is
necessarily primary, because join isn't even well typed. I don't think
it's written by mathematicians per se (rather, computer
scientists/type theorists). But mathematicians have their own
particular interests that affect the way they frame things, and that
doesn't mean those ways are better for everyone.

 As some often point out,
 Haskellers are not very right in their definition of Monad, not because
 of the bind vs join (in fact in a Monad either of them can be defined
 from the other one), but because of the functor status of a Monad. A
 monad should always be a functor (at least to fit its mathematical
 definition). And this problem with the functor has probably led to the
 use of bind (which is polymorphic in two type variables) rather than
 join (which has only one type variable, and thus is simpler).
 The problem, is that when 'm' is a Haskell Monad which does not belong
 to the Functor class, we cannot define 'bind' in general from 'join'.

I don't think Functor being a superclass of Monad would make much
difference. For instance, Applicative is properly a subclass of
Functor, but we don't use the minimal definition that cannot reproduce
fmap on its own:

class Functor f => LaxMonoidal f where
  unit :: f ()
  pair :: f a -> f b -> f (a, b)

Instead we use a formulation that subsumes fmap:

fmap f x = pure f <*> x

Because those primitives are what we actually want to use, and it's
more efficient to implement those directly than to go through some set
of fully orthogonal combinators purely for the sake of mathematical
purity.

And this goes for Monad, as well. For most of the monads in Haskell,
the definition of join looks exactly like a definition of (>>= id),
and it's not very arduous to extend that to (>>= f). But if we do
define join, then we must recover (>>=) by traversing twice; once for
map and once for join. There are some examples where join can be
implemented more efficiently than bind, but I don't actually know of
any Haskell Monads for which this is the case.
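To make the comparison concrete, here is a sketch of the two derivations
(standard definitions, not specific to any one monad):

-- join from (>>=): for most Haskell monads this is a single traversal.
joinFromBind :: Monad m => m (m a) -> m a
joinFromBind mm = mm >>= id

-- (>>=) recovered from join: two traversals, one for fmap and one for join.
bindFromJoin :: (Functor m, Monad m) => m a -> (a -> m b) -> m b
bindFromJoin m f = joinFromBind (fmap f m)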

And as I mentioned earlier, despite many Haskell folks often not
digging into monads as done by mathematicians much (and that's fine),
(>>=) does correspond to a nice operation: variable substitution. This
is true in category theory, when you talk about algebraic theories,
and it's true in Haskell when you start noticing that various little
languages are described by algebraic theories. But from this angle, I
see no reason why it's _better_ to split variable substitution into
two operations, first turning a tree into a tree of trees, and then
flattening.
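A standard illustration of that reading (my own example, not from the
original message): a tiny expression language in which (>>=) is exactly
substitution of expressions for variables.

-- Variables of type a sit at the leaves; (>>=) grafts an expression in
-- place of every variable in a single traversal, rather than first
-- building a tree of trees and then flattening it.
data Expr a = Var a | Add (Expr a) (Expr a)

instance Functor Expr where
  fmap f (Var a)   = Var (f a)
  fmap f (Add l r) = Add (fmap f l) (fmap f r)

instance Applicative Expr where
  pure = Var
  ef <*> ex = ef >>= \f -> fmap f ex

instance Monad Expr where
  Var a   >>= f = f a
  Add l r >>= f = Add (l >>= f) (r >>= f)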

It'd be nice if Functor were a prerequisite for Monad, but even
then there are still reasons for making (>>=) a primitive.

-- Dan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why Kleisli composition is not in the Monad signature?

2012-10-15 Thread Dan Doel
On Mon, Oct 15, 2012 at 10:05 PM, damodar kulkarni
kdamodar2...@gmail.com wrote:
 @Jake


 In my opinion, this is not as nice as the do-notation version, but at
 least it's compositional:


 That's an important point you have made, as Haskellers value code
 composition so much.
 If code composition is the holy grail, why not encourage the monadic code,
 too, to be compositional? Nicety can wait; some syntax sugar might take care
 of it.

 And as you have pointed out, arrows make a superior choice in this regard,
 but they are rather newer than monads.

I'm uncertain where this "compositional means written as the
composition of functions" idea started. But it is not what I, and
I'm sure many others, mean by the term "compositional".

For instance, one of the key properties of denotational semantics is
that they are compositional. But this does not mean that the semantics
is written as the composition of functions. Perhaps it could be, but
that is a rather superficial criterion. What it means is that the
semantics of compound expressions are merely functions of the
semantics of the constituent pieces, so one can stick together well
understood pieces to get well understood wholes, and the whole does
not have to be analyzed as an entire unit.

Now, one can give almost anything a compositional semantics in this
sense, by denoting things by pieces that pass context along. And this
is a reason to avoid effects and whatnot. But this is true whether one
writes things as a pipeline of functions or as some other sort of
expression. Context may be threaded through a series of expressions,
or through a series of composed functions. Choosing either way of
writing makes no difference.

So I don't really care whether people write their code involving
monads as the composition of Kleisli arrows, except for which makes
the code look nicer. And the arrow option does not always do so,
especially when one must artificially extend things of type 'M a' into
constant functions. Kleisli arrows aren't the end all and be all of
monads (if you read books on the math end, the Eilenberg-Moore
category tends to be emphasized far more, and the Kleisli category
might not even be presented in the same way as it typically is amongst
Haskellers).

As for why (>>=) is a good primitive... For one, it works very nicely
for writing statement sequences without any sugar (for when you want
to do that):

getLine >>= \x ->
getLine >>= \y ->
print (x, y)

For two, it corresponds to the nice operation of substitution of an
expression for a variable (which is a large part of what monads are
actually about in category theory, but doesn't get a lot of play in
Haskell monad tutorials).

For three, I can't for the life of me think of how anyone would write
(>=>) as a primitive operation _except_ for writing (>>=) and then '(f
>=> g) x = f x >>= g'. The function cannot be inspected to get the
result except by applying it.  I suppose one _can_ write (>=>)
directly. For instance:

data Free f a = Pure a | Roll (f (Free f a))

(g >=> h) x = case g x of
  Pure b -> h b
  Roll f -> Roll $ fmap (id >=> h) f

But the (id >=> h) is there because we want to put (>>= h) and don't
have it. The g is sort of an illusion. We use it to get an initial
value, but all our recursive calls use g = id, so we're subsequently
defining (>>=) in disguise. (And, amusingly, we're even using
polymorphic recursion, so if this isn't in a class, or given an
explicit type signature, inference will silently get the wrong
answer.)

So, there are good reasons for making (>>=) primitive, and not really
any (that I can see) for making (>=>) more than a derived function.
I'd be down with putting join in the class, but that tends to not be
terribly important for most cases, either.

-- Dan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Church vs Boehm-Berarducci encoding of Lists

2012-09-25 Thread Dan Doel
On Wed, Sep 26, 2012 at 12:41 AM,  o...@okmij.org wrote:
 First of all, the Boehm-Berarducci encoding is inefficient only when
 doing an operation that is not easily representable as a fold. Quite
 many problems can be efficiently tackled by a fold.

Indeed. Some operations are even more efficient than usually expected
when operating on folds. For instance:

snoc xs x f = xs f . f x

People often recommend using ([a] -> [a]) for efficiently building up
lists without worrying about introducing O(n^2) costs through bad
associativity, but the Boehm-Berarducci encoding gets you that and
more (map g xs f = xs (f . g) ; map h (map g xs) = \f -> xs (f . g .
h)).
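A small self-contained sketch of the operations being described (the
newtype wrapper and names are mine, not from the message):

{-# LANGUAGE RankNTypes #-}

newtype BBList a = BBList { foldBB :: forall r. (a -> r -> r) -> r -> r }

-- O(1) snoc, the same trick difference lists give you:
snocBB :: BBList a -> a -> BBList a
snocBB xs x = BBList $ \f -> foldBB xs f . f x

-- maps fuse: mapping twice composes the functions rather than building
-- an intermediate list.
mapBB :: (a -> b) -> BBList a -> BBList b
mapBB g xs = BBList $ \f -> foldBB xs (f . g)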

-- Dan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: PolyKind issue in GHC 7.6.1rc1: How to make a kind a functional dependency?

2012-09-19 Thread Dan Doel
On Sun, Sep 16, 2012 at 11:49 AM, Simon Peyton-Jones
simo...@microsoft.com wrote:
 I don't really want to eagerly eta-expand every type variable, because (a) 
 we'll bloat the constraints and (b) we might get silly error messages.  For 
 (b) consider the insoluble constraint
 [W]  a~b
 where a and b are both skolems of kind '(k1,k2). If we eta-expand both we'll 
 get two insoluble constraints (Fst a ~ Fst b) and (Snd a ~ Snd b), and we 
 DEFINITELY don't want to report that as a type error!

I almost forgot to mention this, but...

You should perhaps talk to the agda implementors. They've done a lot
of work to avoid eta expanding as much as possible, because it was
killing performance (but it does also make for some nicer display). So
they probably know some tricks.

-- Dan

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Church vs Boehm-Berarducci encoding of Lists

2012-09-19 Thread Dan Doel
On Wed, Sep 19, 2012 at 8:36 PM, wren ng thornton w...@freegeek.org wrote:
 P.S. It is actually possible to write zip function using Boehm-Berarducci
 encoding:
 http://okmij.org/ftp/ftp/Algorithms.html#zip-folds


 Of course it is; I just never got around to doing it :)

If you do, you might want to consider not using the above method, as I
seem to recall it doing an undesirable amount of extra work (repeated
O(n) tail). One can do better with some controversial types (this is
not my idea originally; ski (don't know his real name) on freenode's
#haskell showed it to me a long time ago):

 snip 

{-# LANGUAGE PolymorphicComponents #-}

module ABC where

newtype L a = L { unL :: forall r. (a -> r -> r) -> r -> r }

nil :: L a
nil = L $ \_ z -> z

cons :: a -> L a -> L a
cons x (L xs) = L $ \f -> f x . xs f

newtype A a c = Roll { unroll :: (a -> A a c -> L c) -> L c }

type B a c = a -> A a c -> L c

zipWith :: (a -> b -> c) -> L a -> L b -> L c
zipWith f (L as) (L bs) = unroll (as consA nilA) (bs consB nilB)
 where
 -- nilA :: A a c
 nilA = Roll $ const nil

 -- consA :: a -> A a c -> A a c
 consA x xs = Roll $ \k -> k x xs

 -- nilB :: B a c
 nilB _ _ = nil

 -- consB :: b -> B a c -> B a c
 consB y ys x xs = f x y `cons` unroll xs ys

 snip 

This traverses each list only once.

-- Dan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Church vs Boehm-Berarducci encoding of Lists

2012-09-18 Thread Dan Doel
This paper:

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.26.957

Induction is Not Derivable in Second Order Dependent Type Theory,
shows, well, that you can't encode naturals with a strong induction
principle in said theory. At all, no matter what tricks you try.

However, A Logic for Parametric Polymorphism,

http://www.era.lib.ed.ac.uk/bitstream/1842/205/1/Par_Poly.pdf

Indicates that in a type theory incorporating relational parametricity
of its own types,  the induction principle for the ordinary
Church-like encoding of natural numbers can be derived. I've done some
work here:

http://code.haskell.org/~dolio/agda-share/html/ParamInduction.html

for some simpler types (although, I've been informed that sigma was
novel, it not being a Simple Type), but haven't figured out natural
numbers yet (I haven't actually studied the second paper above, which
I was pointed to recently).

-- Dan

On Tue, Sep 18, 2012 at 5:41 PM, Ryan Ingram ryani.s...@gmail.com wrote:
 Oleg, do you have any references for the extension of lambda-encoding of
 data into dependently typed systems?

 In particular, consider Nat:

 nat_elim :: forall P:(Nat -> *). P 0 -> (forall n:Nat. P n -> P (succ
 n)) -> (n:Nat) -> P n

 The naive lambda-encoding of 'nat' in the untyped lambda-calculus has
 exactly the correct form for passing to nat_elim:

 nat_elim pZero pSucc n = n pZero pSucc

 with

 zero :: Nat
 zero pZero pSucc = pZero

 succ :: Nat -> Nat
 succ n pZero pSucc = pSucc (n pZero pSucc)

 But trying to encode the numerals this way leads to Nat referring to its
 value in its type!

type Nat = forall P:(Nat -> *). P 0 -> (forall n:Nat. P n -> P (succ n))
 -> P ???

 Is there a way out of this quagmire?  Or are we stuck defining actual
 datatypes if we want dependent types?

   -- ryan



 On Tue, Sep 18, 2012 at 1:27 AM, o...@okmij.org wrote:


 There has been a recent discussion of ``Church encoding'' of lists and
 the comparison with Scott encoding.

 I'd like to point out that what is often called Church encoding is
 actually Boehm-Berarducci encoding. That is, often seen

  newtype ChurchList a =
  CL { cataCL :: forall r. (a -> r -> r) -> r -> r }

 (in http://community.haskell.org/%7Ewren/list-extras/Data/List/Church.hs )

 is _not_ Church encoding. First of all, Church encoding is not typed
 and it is not tight. The following article explains the other
 difference between the encodings

 http://okmij.org/ftp/tagless-final/course/Boehm-Berarducci.html

 Boehm-Berarducci encoding is very insightful and influential. The
 authors truly deserve credit.

 P.S. It is actually possible to write zip function using Boehm-Berarducci
 encoding:
 http://okmij.org/ftp/ftp/Algorithms.html#zip-folds




 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Church vs Boehm-Berarducci encoding of Lists

2012-09-18 Thread Dan Doel
On Tue, Sep 18, 2012 at 11:19 PM, Ryan Ingram ryani.s...@gmail.com wrote:
 Fascinating!

 But it looks like you still 'cheat' in your induction principles...

 ×-induction : ∀{A B} {P : A × B → Set}
 → ((x : A) → (y : B) → P (x , y))
 → (p : A × B) → P p
 ×-induction {A} {B} {P} f p rewrite sym (×-η p) = f (fst p) (snd p)

 Can you somehow define

 x-induction {A} {B} {P} f p = p (P p) f

No, or at least I don't know how.

The point is that with parametricity, I can prove that if p : A × B,
then p = (fst p , snd p). Then when I need to prove P p, I change the
obligation to P (fst p , snd p). But I have (forall x y. P (x , y))
given.

I don't know why you think that's cheating. If you thought it was
going to be a straight-forward application of the Church (or some
other) encoding, that was the point of the first paper (that's
impossible). But parametricity can be used to prove statements _about_
the encodings that imply the induction principle.

 On Tue, Sep 18, 2012 at 4:09 PM, Dan Doel dan.d...@gmail.com wrote:

 This paper:

 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.26.957

 Induction is Not Derivable in Second Order Dependent Type Theory,
 shows, well, that you can't encode naturals with a strong induction
 principle in said theory. At all, no matter what tricks you try.

 However, A Logic for Parametric Polymorphism,

 http://www.era.lib.ed.ac.uk/bitstream/1842/205/1/Par_Poly.pdf

 Indicates that in a type theory incorporating relational parametricity
 of its own types,  the induction principle for the ordinary
 Church-like encoding of natural numbers can be derived. I've done some
 work here:

 http://code.haskell.org/~dolio/agda-share/html/ParamInduction.html

 for some simpler types (although, I've been informed that sigma was
 novel, it not being a Simple Type), but haven't figured out natural
 numbers yet (I haven't actually studied the second paper above, which
 I was pointed to recently).

 -- Dan

 On Tue, Sep 18, 2012 at 5:41 PM, Ryan Ingram ryani.s...@gmail.com wrote:
  Oleg, do you have any references for the extension of lambda-encoding of
  data into dependently typed systems?
 
  In particular, consider Nat:
 
  nat_elim :: forall P:(Nat -> *). P 0 -> (forall n:Nat. P n -> P
  (succ
  n)) -> (n:Nat) -> P n
 
  The naive lambda-encoding of 'nat' in the untyped lambda-calculus has
  exactly the correct form for passing to nat_elim:
 
  nat_elim pZero pSucc n = n pZero pSucc
 
  with
 
  zero :: Nat
  zero pZero pSucc = pZero
 
  succ :: Nat -> Nat
  succ n pZero pSucc = pSucc (n pZero pSucc)
 
  But trying to encode the numerals this way leads to Nat referring to
  its
  value in its type!
 
 type Nat = forall P:(Nat -> *). P 0 -> (forall n:Nat. P n -> P (succ
  n))
  -> P ???
 
  Is there a way out of this quagmire?  Or are we stuck defining actual
  datatypes if we want dependent types?
 
-- ryan
 
 
 
  On Tue, Sep 18, 2012 at 1:27 AM, o...@okmij.org wrote:
 
 
  There has been a recent discussion of ``Church encoding'' of lists and
  the comparison with Scott encoding.
 
  I'd like to point out that what is often called Church encoding is
  actually Boehm-Berarducci encoding. That is, often seen
 
   newtype ChurchList a =
   CL { cataCL :: forall r. (a -> r -> r) -> r -> r }
 
  (in
  http://community.haskell.org/%7Ewren/list-extras/Data/List/Church.hs )
 
  is _not_ Church encoding. First of all, Church encoding is not typed
  and it is not tight. The following article explains the other
  difference between the encodings
 
  http://okmij.org/ftp/tagless-final/course/Boehm-Berarducci.html
 
  Boehm-Berarducci encoding is very insightful and influential. The
  authors truly deserve credit.
 
  P.S. It is actually possible to write zip function using
  Boehm-Berarducci
  encoding:
  http://okmij.org/ftp/ftp/Algorithms.html#zip-folds
 
 
 
 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 
 
 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Fwd: How Type inference work in presence of Functional Dependencies

2012-09-13 Thread Dan Doel
Copying the mailing list, because I forgot.

On Thu, Sep 13, 2012 at 5:18 AM, satvik chauhan mystic.sat...@gmail.com wrote:
 Consider the code below :

 {-# LANGUAGE
 MultiParamTypeClasses,FlexibleInstances,FunctionalDependencies,UndecidableInstances,FlexibleContexts
 #-}
 class Foo a c | a -> c
 instance Foo Int Float
 f :: (Foo Int a) => Int -> a
 f = undefined

 Now when I see the inferred type of f in ghci

 :t f

 f :: Int -> Float

 Now If I add the following code

 g :: Int -> Float
 g = undefined

 h :: (Foo Int a) => Int -> a
 h = g


 I get the error

 Could not deduce (a ~ Float)


 I am not able to understand what has happened here? The restriction Foo
 Int a should have restricted the type of h to Int -> Float as shown in
 the inferred type of f.

The answer, I believe, is that the difference between the fundep
implementation and type families is local constraint information.
Fundeps do no local propagation.

So in your first definition, you've locally provided 'Int -> a', which
is acceptable to GHC. Then it figures out externally to the function
that '(Foo Int a) => Int -> a' is actually Int -> Float.

In the second definition, you're trying to give 'Int -> Float', but
GHC only knows locally that you need to provide 'Int -> a' with a
constraint 'Foo Int a' which it _won't_ use to determine that a ~
Float.

This is not inherent to fundeps. One could make a version of fundeps
that has the local constraint rules (easily so by translating to the
new type families stuff). But, the difference is also the reason that
overlapping instances are supported for fundeps and not type families.
But I won't get into that right now.
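For contrast, a minimal sketch (mine, not from the thread) of the
type-family version, where the equation for F Int is local information
the checker will use:

{-# LANGUAGE TypeFamilies #-}

type family F a
type instance F Int = Float

g :: Int -> Float
g = undefined

-- From the given a ~ F Int and the instance F Int = Float, GHC concludes
-- a ~ Float locally, so h = g is accepted, unlike the fundep version above.
h :: (a ~ F Int) => Int -> a
h = g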

-- Dan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: PolyKind issue in GHC 7.6.1rc1: How to make a kind a functional dependency?

2012-08-31 Thread Dan Doel
On Fri, Aug 31, 2012 at 9:06 AM, Edward Kmett ekm...@gmail.com wrote:
 I know Agda has to jump through some hoops to make the refinement work on
 pairs when they do eta expansion. I wonder if this could be made less
 painful.

To flesh this out slightly, there are two options for defining pairs in Agda:

  data Pair1 (A B : Set) : Set where
    _,_ : A -> B -> Pair1 A B

  record Pair2 (A B : Set) : Set where
    constructor _,_
    field
      fst : A
      snd : B

Now, if we have something similar to Thrist indexed by Pair2, we have
no problems, because:

A p -> M A p

is equal to:

A (fst p , snd p) -> M A (fst p , snd p)

Which is what we need for the irt definition to make sense. If we
index by Pair1, this won't happen automatically, but we have an
alternate definition of irt:

irt : {I J : Set} {A : Pair1 I J -> Set} {p : Pair1 I J} -> A p ->
Thrist A p
irt {I} {J} {A} {i , j} ap = ap :- Nil

The pattern match {i , j} there refines p to be (i , j) everywhere, so
the definition is justified.

Without one of these two options, these definitions seem like they're
going to be cumbersome. Ed seems to have found a way to do it, by what
looks kind of like hand implementing the record version, but it looks
unpleasant.

On another note, it confuses me that m -> k would be necessary at all
in the IMonad definition. k is automatically determined by being part
of the kind of m, and filling in anything else for k would be a type
error, no? It is the same kind of reasoning that goes on in
determining what 'a' is for 'id :: forall a. a -> a' based on the type
of the first argument.

-- Dan

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: New INLINE pragma syntax idea, and some questions

2012-08-04 Thread Dan Doel
On Aug 3, 2012 11:13 PM, Brandon Simmons brandon.m.simm...@gmail.com
wrote:
 In particular I don't fully understand why these sorts of contortions...


http://hackage.haskell.org/packages/archive/base/latest/doc/html/src/GHC-List.html#foldl

 ...are required. It seems like a programmer has to throw equational
 reasoning, separation of concerns, and all the little elegant bits
 about the language out the window just to indicate something boring to
 the compiler.

 Disclaimer: The following is less a proposal meant to be taken
 seriously, and more me trying to better understand things.

 Could the following be used as syntax for indicating inlining? Rather
 than relying on the syntactic LHS, instead let that be specified in
 the type signature...

 foldl :: (a -> b -> a) -> a -> [b] -> {-# INLINE #-} a
 foldl f z [] = z
 foldl f z (x:xs) = foldl f (f z x) xs

 ...indicating, in this case, that foldl should be inlined when fully
 applied, which here means given its first three arguments (I guess
 that's the intent of the original version linked above?). Then (waves hands) the
 compiler could do the necessary transformations that the programmer
 had to do to foldl above. Maybe what I'm proposing is actually a
 separate NORECURSIVE_TRANSFORM pragma or something

That's not quite the effect. What has been done to foldl there is known as
the static argument transform. It avoids passing constant arguments along
in recursion. f is the only static argument to foldl (foldr by contrast has
two).

This can be important for multiple reasons. Sometimes it frees up
registers. Here, we may inline foldl and possibly specialize the loop to a
statically known f. That is often a big win. For instance, if you write sum
with foldl, you can inline, do a worker wrapper transform, and work on
unboxed integers with raw adds (probably) instead of going through multiple
layers of indirection.
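Here is a sketch of the static argument transform being described,
roughly what the GHC.List definition linked above does by hand (the real
library code differs in detail):

import Prelude hiding (foldl)

foldl :: (a -> b -> a) -> a -> [b] -> a
foldl f = go
  where
    -- 'f' is the static argument: it is captured once instead of being
    -- passed through every recursive call, and once foldl is inlined at a
    -- call site GHC can specialize 'go' to the known 'f'.
    go z []     = z
    go z (x:xs) = go (f z x) xs
{-# INLINE foldl #-}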

There was some work on making GHC automatically SAT, but it's a bit
tricky with regard to when it's worth it, so I don't think it's been put in.

I have code that relies on this sort of thing a lot, so if someone comes up
with a good way to automate it, I wouldn't complain.

Dan
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Call to arms: lambda-case is stuck and needs your help

2012-07-07 Thread Dan Doel
If we're voting

I think \of is all right, and multi-argument case could be handy,
which rules out using 'case of' for lambda case, because it's the
syntax for a 0-argument case:

case of
  | guard1 -> ...
  | guard2 -> ...

Then multi-argument lambda case could use the comma syntax of
multi-argument case.

One thing I don't think makes sense in combination is \of with
0 arguments, since any desugaring of that is not going to involve an
actual lambda expression.

-- Dan

On Thu, Jul 5, 2012 at 5:04 PM, Edward Kmett ekm...@gmail.com wrote:
 I really like the \of proposal!

 It is a clean elision with \x - case x of becoming \of

 I still don't like it directly for multiple arguments.

 One possible approach to multiple arguments is what we use for multi-argument
 case/alt here in our little Haskell-like language, Ermine, at S&P Capital IQ:
 we allow ','-separated patterns, but without surrounding parens, to be
 treated as a multi-argument case and alt pair. Internally we desugar our
 usual top level bindings directly to this representation. When mixed with
 the \of extension, this would give you:

 foo :: Num a => Maybe a -> Maybe a -> Maybe a
 foo = \of
   Just x, Just y -> Just (x*y)
   _, _ -> Nothing

 but it wouldn't incur parens for the usual constructor pattern matches and it 
 sits cleanly in another syntactic hole.

 A similar generalization can be applied to the expression between case and of 
 to permit a , separated list of expressions so this becomes applicable to the 
 usual case construct. A naked unparenthesized , is illegal there currently as 
 well. That would effectively be constructing then matching on an unboxed 
 tuple without the (#, #) noise, but that can be viewed as a separate 
 proposal; then the above is just the elision of the case component of:

 foo mx my = case mx, my of
   Just x, Just y -> Just (x*y)
   _, _ -> Nothing

 On Jul 5, 2012, at 2:49 PM, wagne...@seas.upenn.edu wrote:

 Quoting wagne...@seas.upenn.edu:

 Well, for what it's worth, my vote goes for a multi-argument \case. I

 Just saw a proposal for \of on the reddit post about this. That's even 
 better, since:

 1. it doesn't change the list of block heralds
 2. it doesn't mention case, and therefore multi-arg \of is perhaps a bit 
 less objectionable to those who expect case to be single-argument
 3. 40% less typing!

 Can I change my vote? =)
 ~d

 ___
 Glasgow-haskell-users mailing list
 Glasgow-haskell-users@haskell.org
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users

 ___
 Glasgow-haskell-users mailing list
 Glasgow-haskell-users@haskell.org
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] I Need a Better Functional Language!

2012-04-05 Thread Dan Doel
On Thu, Apr 5, 2012 at 10:14 AM, Grigory Sarnitskiy sargrig...@ya.ru wrote:
 First, what are 'functions' we are interested at? It can't be the usual 
 set-theoretic definition, since it is not constructive. The constructive 
 definition should imply functions that can be constructed, computed. Thus 
 these are computable functions that should be of our concern. But computable 
 functions in essence are just a synonym for programs.

This is a flawed premise. The point of working with functions is
abstraction, and that abstraction is given by extensional equality of
functions:

f = g  iff  forall x. f x = g x

So functions are not synonymous with programs or algorithms, they
correspond to an equivalence class of algorithms that produce the same
results from the same starting points. If you can access the source of
functions within the language, this abstraction has been broken.

And this abstraction is useful, because it allows you to switch freely
between programs that do the same thing without having to worry that
someone somewhere is relying on the particular algorithm or source.
This is the heart of optimization and refactoring and the like.
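A tiny concrete instance of that equivalence-class view (my own example):

-- Two different programs, but extensionally one and the same function:
-- they agree on every argument, so no pure context can tell them apart,
-- and an optimizer or refactoring is free to swap one for the other.
double1, double2 :: Int -> Int
double1 x = x + x
double2 x = 2 * x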

There are places for metaprogramming, or perhaps even a type of
algorithms that can be distinguished by means other than the functions
they compute. But to say that functions are that type is false, and
that functions should mean that is, I think, wrong headed.

-- Dan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monad laws in presence of bottoms

2012-02-21 Thread Dan Doel
On Tue, Feb 21, 2012 at 10:44 AM, wren ng thornton w...@freegeek.org wrote:
 That's a similar sort of issue, just about whether undefined ==
 (undefined,undefined) or not. If the equality holds then tuples would be
 domain products[1], but domain products do not form domains!
 ...
 [1] Also a category-theoretic product.

This doesn't make much sense to me, either. If it's a category
theoretic product in a category of domains, then the product must be a
domain, no?

 In order to get
 a product which does form a domain, we'd need to use the smash product[2]
 instead. Unfortunately we can't have our cake and eat it too (unless we get
 rid of bottom entirely).

You don't have to get rid of bottom entirely (I think). If you make
matches against products irrefutable, then you're again in the
situation of seq being the only thing able to distinguish between _|_
and (_|_, _|_), so we could keep the current implementation (which is
efficient) without it being possible to observe within the language.
You just have to make seq not be magic on products. Miranda did this,
except it still had a seq which exposed the lie.

The problem with this is that you can easily build up a product that
is a bunch of thunks and cause a stack overflow; I _think_ that's more
of a concern than doing the same with a function. So in practice it
might be harder to do without seq on tuples.

-- Dan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monad laws in presence of bottoms

2012-02-21 Thread Dan Doel
On Wed, Feb 22, 2012 at 1:40 AM, wren ng thornton w...@freegeek.org wrote:
 It's a category-theoretic product, but not for the category of domains. Let
 Set be the category of sets and set-theoretic functions. And let pDCPO be
 the category of (pointed) domains and their homomorphisms.

 The (domain-theoretic) domain product was discussed in my previous message
 to MigMit. It's a category-theoretic product in Set (among other categories)
 because the necessary morphisms exist and they satisfy the necessary
 equations. Moreover, in Set the domain product coincides with the cartesian
 product (we just forget about the orderings on the input domains and the
 resulting product). Hence, since the cartesian product is a
 category-theoretic product for Set, we know that the domain product must be
 a category-theoretic product in Set.

Well, like Miguel, I don't see the problem with choosing the ordering:

(a,b) <= (a', b') iff a <= a' and b <= b'

This makes (a0, b0) the bottom element. The only difference with
Haskell's tuple types is that it lacks an extra element below (a0,
b0); it's unlifted.

It also seems to me that this _is_ the correct product for domains,
unless I'm still sketchy on what you mean by domain. I don't think it
matters that we're only considering strict homomorphisms.

 Conversely, the smash product is a category-theoretic product in pDCPO, but
 not in Set. Since every domain homomorphism must map bottoms to bottoms, it
 follows that f(d0) = a0 and g(d0) = b0. From this we have the necessary
 continuity to ensure that C, pi_A, and pi_B all exist in pDCPO. However,
 since there exist set-theoretic functions f and g which do not have that
 special property, the smash product is going to lose information about the
 non-bottom component of the product and so it cannot satisfy the necessary
 category-theoretic equations (in Set).

The smash product is not a product for domains (in general), either.
You are not allowed to throw away the components for such a product,
because given a homomorphism f that maps everything to a0, pi_B .
<f,g> maps everything to b0, regardless of what g is.

-- Dan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Holes in GHC

2012-01-26 Thread Dan Doel
On Thu, Jan 26, 2012 at 12:45 PM, Thijs Alkemade
thijsalkem...@gmail.com wrote:
 Let me try to describe the goal better. The intended users are people
 new to Haskell or people working with existing code they are not
 familiar with.

Also me. I want this feature. It pretty much single handedly makes
prototyping things in Agda and then porting them to Haskell a better
experience than writing them straight in Haskell to begin with. I can
partially implement functions and get feedback on what I need to
provide and what is available, add candidate terms, have them type
checked and filled in if they work. Etc.

It's significantly better than any Haskell editor I'm aware of, and
adding undefined or ?foo and poking at things in ghci isn't
comparable.

-- Dan

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Holes in GHC

2012-01-26 Thread Dan Doel
On Thu, Jan 26, 2012 at 2:36 PM, Simon Peyton-Jones
simo...@microsoft.com wrote:
 |   Let me try to describe the goal better. The intended users are people
 |   new to Haskell or people working with existing code they are not
 |   familiar with.
 |
 |  Also me. I want this feature.

 My question remains: what is the feature?   Agda has a sophisticated IDE; is 
 that a key part of the feature.  I expect so.

The Agda piece of the feature is support for keeping a type checker
running continuously, which has the ability to type check expressions
with holes, like:

foo { }0 bar baz { }1

and can be queried for the type of expression that must be put into
the holes to make them type check, a list of all the variables in
scope at each hole together with their types, and the ability to type
check expressions against the necessary type for each hole without
rechecking the entire file (at least in the Agda case, checking the
part can be much faster than checking the whole; perhaps it'd matter
less in GHC). It probably must also be possible to tell the system
you're filling in a hole, because that might refine the type of the
other holes.

The rest is an emacs mode that interacts with the running type checker
appropriately.

Another view might be that the holes allow you to capture and interact
with contexts in the middle of type checking. A hole captures the
continuation, and then proceeds as if successful for any type, but the
continuation may be queried for the above information. Then you can
call the continuation to fill in the holes (possibly with expressions
having their own holes). But I'm not going to suggest that's a
feasible way to implement it.

The support of the compiler is necessary before you can build any
editor making use of it, though.

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Not an isomorphism, but what to call it?

2012-01-19 Thread Dan Doel
A is a retract of B.

http://nlab.mathforge.org/nlab/show/retract

g is the section, f is the retraction. You seem to have it already.
The definition needn't be biased toward one of the functions.
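A tiny concrete instance of the definition (my own illustration): Bool is a
retract of Int.

-- g, the section: injects Bool into Int.
sec :: Bool -> Int
sec False = 0
sec True  = 1

-- f, the retraction: collapses Int back onto Bool.
ret :: Int -> Bool
ret 0 = False
ret _ = True

-- ret . sec is the identity on Bool, while sec . ret is not the identity
-- on Int; that asymmetry is exactly the situation described below.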

On Thu, Jan 19, 2012 at 4:24 PM, Sean Leather leat...@cs.uu.nl wrote:
 I have two types A and B, and I want to express that the composition of two
 functions f :: B -> A and g :: A -> B gives me the identity idA = f . g :: A
 -> A. I don't need g . f :: B -> B to be the identity on B, so I want a
 weaker statement than isomorphism.

 I understand that:
 (1) If I look at it from the perspective of f, then g is the right inverse
 or section (or split monomorphism).
 (2) If I look at from g, then f is the left inverse or retraction (or split
 epimorphism).

 But I just want two functions that give me an identity on one of the two
 types and I don't care which function's perspective I'm looking at it from.
 Is there a word for that?

 Regards,
 Sean

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why were unfailable patterns removed and fail added to Monad?

2012-01-19 Thread Dan Doel
On Thu, Jan 19, 2012 at 10:43 PM, Gregory Crosswhite
gcrosswh...@gmail.com wrote:
 first, that the notion of unfailable was not removed from the language
 so much as not added in the first place

No, this is not correct. Unfailable patterns were specified in Haskell
1.4 (or, they were called failure-free there; they likely existed
earlier, too, but I'll leave the research to people who are
interested). They were new in the sense that they were introduced
only for the purposes of desugaring do/comprehensions, whereas
refutable vs. irrefutable patterns need to be talked about for other
purposes.

-- Dan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why were unfailable patterns removed and fail added to Monad?

2012-01-19 Thread Dan Doel
On Thu, Jan 19, 2012 at 11:11 PM, Dan Doel dan.d...@gmail.com wrote:
 No, this is not correct. Unfailable patterns were specified in Haskell
 1.4 (or, they were called failure-free there; they likely existed
 earlier, too, but I'll leave the research to people who are
 interested). They were new in the sense that they were introduced
 only for the purposes of desugaring do/comprehensions, whereas
 refutable vs. irrefutable patterns need to be talked about for other
 purposes.

I should also note: GHC already implements certain unfailable patterns
the 1.4 way when using RebindableSyntax (possibly by accident):

{-# LANGUAGE RebindableSyntax, MonadComprehensions #-}

module Test where

import qualified Prelude
import Prelude (String, Maybe(..))

import Control.Applicative

class Applicative m => Monad m where
  (>>=) :: m a -> (a -> m b) -> m b

return :: Applicative f => a -> f a
return = pure

class Monad m => MonadZero m where
  mzero :: m a
  fail  :: String -> m a

  mzero = fail "mzero"
  fail _ = mzero

foo :: MonadZero m => m (Maybe a) -> m a
foo m = do Just x <- m
           pure x

bar :: Monad m => m (a, b) -> m a
bar m = do (x, y) <- m
           pure x

baz :: MonadZero m => m (Maybe a) -> m a
baz m = [ x | Just x <- m ]

quux :: Monad m => m (a, b) -> m a
quux m = [ x | (x, y) <- m ]

It doesn't work for types defined with data, but it works for built-in tuples.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Unit unboxed tuples

2012-01-11 Thread Dan Doel
On Wed, Jan 11, 2012 at 8:41 AM, Simon Marlow marlo...@gmail.com wrote:
 On 10/01/2012 16:18, Dan Doel wrote:

 Copying the list, sorry. I have a lot of trouble replying correctly
 with gmail's interface for some reason. :)

 On Tue, Jan 10, 2012 at 11:14 AM, Dan Doeldan.d...@gmail.com  wrote:

 On Tue, Jan 10, 2012 at 5:01 AM, Simon Marlowmarlo...@gmail.com  wrote:

 On 09/01/2012 04:46, wren ng thornton wrote:

 Shouldn't (# T #) be identical to T?


 No, because (# T #) is unlifted, whereas T is lifted.  In operational
 terms,
 a function that returns (# T #) does not evaluate the T before returning
 it,
 but a function returning T does.  This is used in GHC for example to
 fetch a
 value from an array without evaluating it, for example:

  indexArray :: Array# e -> Int# -> (# e #)


  I don't really understand this explanation. (# T #) being unlifted
  would mean it's isomorphic to T under the correspondence e -> (# e
  #). _|_ = (# _|_ #) : (# T #), so this works.

  Does the difference have to do with unboxed types? For instance:

     foo :: () -> Int#
     foo _ = foo ()
     bar :: () -> (# Int# #)
     bar _ = (# foo () #)

     baz = case bar () of _ -> 5  -- 5
     quux = case foo () of _ -> 5 -- non-termination

 Because in that case, either (# Int# #) is lifted, or the Int# is
 effectively lifted when inside the unboxed tuple. The latter is a bit
 of an oddity.


 Unboxed types cannot be lifted, so in fact bar compiles to this:

  bar = \_ -> case foo () of x -> (# x #)

 and both baz and quux diverge.

 It might help to understand (# T #) by translating it to (# T, () #).
 There's really no difference.

Then I'm afraid I still don't understand the difference. Is it that
case in core always evaluates? So:

case undefined of x -> ...

blows up, while

case (# undefined #) of (# x #) -> ...

does not?

Also, if so, how is (core-wise):

foo :: ... -> (# T #)
case foo v of (# x #) -> ...

different from:

foo :: ... -> T
let x = foo v in ...

Stack vs. heap allocation?

Sorry for being rather thick.

-- Dan

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Unit unboxed tuples

2012-01-10 Thread Dan Doel
Copying the list, sorry. I have a lot of trouble replying correctly
with gmail's interface for some reason. :)

On Tue, Jan 10, 2012 at 11:14 AM, Dan Doel dan.d...@gmail.com wrote:
 On Tue, Jan 10, 2012 at 5:01 AM, Simon Marlow marlo...@gmail.com wrote:
 On 09/01/2012 04:46, wren ng thornton wrote:
 Shouldn't (# T #) be identical to T?

 No, because (# T #) is unlifted, whereas T is lifted.  In operational terms,
 a function that returns (# T #) does not evaluate the T before returning it,
 but a function returning T does.  This is used in GHC for example to fetch a
 value from an array without evaluating it, for example:

  indexArray :: Array# e -> Int# -> (# e #)

I don't really understand this explanation. (# T #) being unlifted
would mean it's isomorphic to T under the correspondence e -> (# e
#). _|_ = (# _|_ #) : (# T #), so this works.

Does the difference have to do with unboxed types? For instance:

   foo :: () -> Int#
   foo _ = foo ()
   bar :: () -> (# Int# #)
   bar _ = (# foo () #)

   baz = case bar () of _ -> 5  -- 5
   quux = case foo () of _ -> 5 -- non-termination

Because in that case, either (# Int# #) is lifted, or the Int# is
effectively lifted when inside the unboxed tuple. The latter is a bit
of an oddity.

-- Dan

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] On the purity of Haskell

2012-01-01 Thread Dan Doel
On Sun, Jan 1, 2012 at 3:26 PM, Ketil Malde ke...@malde.org wrote:
 Chris Smith cdsm...@gmail.com writes:

 I wonder: can writing to memory be called a “computational effect”? If
 yes, then every computation is impure.

 I wonder if not the important bit is that pure computations are unaffected by
 other computations (and not whether they can affect other computations).
 Many pure computations have side effects (increases temperature,
 modifies hardware registers and memory, etc), but their effect can only
 be observed in IO.

 (E.g. Debug.Trace.trace provides a non-IO interface by use of
 unsafePerformIO which is often considered cheating, but in
 this view it really is pure.)

The point of purity (and related concepts) is to have useful tools for
working with and reasoning about your code. Lambda calculi can be seen
as the prototypical functional languages, and the standard ones have
the following nice property:

  The only difference between reduction strategies is termination.

Non-strict strategies will terminate for more expressions than strict
ones, but that is the only difference.

This property is often taken to be the nub of what it means to be a
pure functional language. If the language is an extension of the
lambda calculus that preserves this property, then it is a pure
functional language. Haskell with the 'unsafe' stuff removed is such a
language by this definition, and most GHC additions are too, depending
on how you want to argue. It is even true with respect to the output
of programs, but not when you're using Debug.Trace, because:

flip (+) (foo `trace` 1) (bar `trace` 1)

will print different things with different evaluation orders.

A similar property is referential transparency, which allows you to
factor your program however you want without changing its denotation.
So:

(\x -> x + x) e

is the same as:

e + e

This actually fails for strict evaluation strategies unless you relax
it to be 'same denotation up to bottoms' or some such. But not having
to worry about whether you're changing the definedness of your
programs by factoring/inlining is actually a useful property that
strict languages lack.

Also, the embedded IO language does not have this property.

do x <- m ; f x x

is different from

do x <- m ; y <- m ; f x y

and so on. This is why you shouldn't write your whole program with IO
functions; it lacks nice properties for working with your code. But
the embedded IO language lacking this property should not be confused
with Haskell lacking this property.

Anyhow, here's my point: these properties can be grounded in useful
features of the language. However, for the vast majority of people,
being able to factor their code differently and have it appear exactly
the same to someone with a memory sniffer isn't useful. And unless
you're doing serious crypto or something, there is no correct amount
of heat for a program to produce. So if we're wondering about whether
we should define purity or referential transparency to incorporate
these things, the answer is an easy, no. We throw out the
possibility that 'e + e' may do more work than '(\x -> x + x) e' for
the same reason (indeed, we often want to factor it so that it
performs better, while still being confident that it is in some sense
the same program, despite the fact that performing better might by
some definitions mean that it isn't the same program).

But the stuff that shows up on stdout/stderr typically is something we
care about, so it's sensible to include that. If you don't care what
happens there, go ahead and use Debug.Trace. It's pure/referentially
transparent modulo stuff you don't care about.

-- Dan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] strict, lazy, non-strict, eager

2011-12-24 Thread Dan Doel
On Sat, Dec 24, 2011 at 2:31 AM, Albert Y. C. Lai tre...@vex.net wrote:
 1. a function f is strict if  f ⊥ = ⊥
 2. ⊥ represents any computation which does not terminate, i.e. an exception
 or an infinite loop
 3. strict describes the denotational semantics

All three of these statements are true. The only potentially
controversial one is 2, but any term that the operational semantics
would identify as simple non-termination (which is invariably what
they're talking about when they say 2; not some partially defined
term) will be given denotation ⊥.

 B. Actually there are more, but apparently two is already enough to cause
 all kinds of incoherent statements. If I draw your attention to algebraic
 semantics, will you start saying it is too lazy, need to make it more
 idempotent?

Yes, there are more than two. And they don't exist in completely
separate vacuums from one another. Denotational and operational
properties are sometimes (often?) correlated. And algebraic semantics
is often the sweet spot for reasoning about the structure of the
operational or denotational semantics of your code, without bringing
in all the irrelevant details from the latter two. I can make a
denotational semantics for System F where each term is denoted by its
normal form (an operational concept).

I think it's good to be clear on all these specifics, and people could
do with a better recognition of the difference between (non-)strict
and (lazy)eager (hint: you can have an eager, non-strict language).
But it isn't necessarily a problem that people speak in terms of more
than one at once. The different kinds of semantics aren't in conflict
with one another.

The main problem would be that such casual mixing prevents newcomers
from learning the distinctions by osmosis.

-- Dan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] strict, lazy, non-strict, eager

2011-12-24 Thread Dan Doel
On Sun, Dec 25, 2011 at 12:14 AM, Eugene Kirpichov ekirpic...@gmail.com wrote:
 On Sat, Dec 24, 2011 at 10:49 PM, Dan Doel dan.d...@gmail.com wrote:
 I think it's good to be clear on all these specifics, and people could
 do with a better recognition of the difference between (non-)strict
 and (lazy)eager (hint: you can have an eager, non-strict language).

 Can you elaborate? That's apparently my blind spot.

A while back, there was a paper on something called (I believe)
optimistic evaluation. The strategy goes like this: when you evaluate
'f x', first you start evaluating 'x'. If that takes too long, or you
encounter an exception, you (re)thunk it, and continue evaluating the
body of f lazy style, in case you don't really need x.

This is arguably eager, since you reduce arguments to functions
immediately if possible. And it has some advantages over lazy
evaluation in common cases. For instance, it avoids foldl building up
a huge nested thunk that would cause a stack overflow. But it isn't
strict, because

const 5 undefined

is still 5.

You can also imagine sparking on every function application, so that
arguments automatically get reduced in parallel, and as soon as
possible. I think that's been determined to not be very effective,
though.

-- Dan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] If you'd design a Haskell-like language, what would you do different?

2011-12-22 Thread Dan Doel
On Thu, Dec 22, 2011 at 4:19 AM, Heinrich Apfelmus
apfel...@quantentunnel.de wrote:
 Alexander Solla wrote:

 And denotational semantics is not just nice. It is useful. It's the best
 way to understand why the program we just wrote doesn't terminate.


 Denotational semantics is unrealistic.  It is a Platonic model of
 constructive computation.  Alan Turing introduced the notion of an
 oracle
 to deal with what we are calling bottom.  An oracle is a thing that
 (magically) knows what a bottom value denotes, without having to wait
 for
 an infinite number of steps.  Does Haskell offer oracles?  If not, we
 should abandon the use of distinct bottoms.  The /defining/ feature of a
 bottom is that it doesn't have an interpretation.


 Huh? I don't see the problem.

 Introducing bottom as a value is a very practical way to assign a
 well-defined mathematical object to each expression that you can write down
 in Haskell. See

  http://en.wikibooks.org/wiki/Haskell/Denotational_semantics

 It's irrelevant whether _|_ is unrealistic, it's just a mathematical model
 anyway, and a very useful one at that. For instance, we can use it to reason
 about strictness, which gives us information about lazy evaluation and
 operational semantics.

As another example Not that long ago, Bob Harper was complaining
on his blog about how you can't use induction to reason about Haskell
functions. But, that's incorrect. What you can't use is induction
based on a model where all data types are the expected inductively
defined sets, and non-termination is modeled as an effect. This isn't
surprising, of course, because Haskell's models (i.e. denotational
semantics) aren't like that.

But, once you know what Haskell's models are---they model types as
domains, and data types are inductively defined _domains_, not
sets---then you in fact get induction principles based on those models
(see for instance, Fibrational Induction Rules for Initial Algebras).
You need to prove two or three additional cases, but it works roughly
the same as the plain ol' induction you seem to lose for having
non-strict evaluation.

And then you have one less excuse for not using a language with lazy
evaluation. :)

-- Dan

* http://personal.cis.strath.ac.uk/~patricia/csl2010.pdf

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] More liberal than liberal type synonyms

2011-12-07 Thread Dan Doel
On Wed, Dec 7, 2011 at 5:48 AM, Dmitry Kulagin dmitry.kula...@gmail.com wrote:
 I am still pretty new in Haskell, but this problem annoys me already.

 If I define a certain monad as a type synonym:

    type StateA a = StateT SomeState SomeMonad a

 Then I can't declare a new monad based on the synonym:

    type StateB a = StateT SomeOtherState StateA a

 The only way I know to overcome this is to declare StateA without `a':

    type StateA = StateT SomeState SomeMonad

 But it is not always possible with existing code base.

I'm afraid my proposal doesn't make this work. You could perhaps
define StateB, but when you expand in a type you get:

StateB a = StateT SomeOtherState StateA a

which has a partially applied StateA, and is rejected. The only way to
make this work is to eta reduce StateA manually, or make GHC recognize
when a synonym can be eta reduced in this way (which might be both
possible and useful as a separate proposal).

My extension fell within the liberal type synonym space, which says
that if you have:

F G

where F and G are both synonyms, and G is partially applied, then it
is okay as long as expansion of F (and any subsequent expansions)
cause G to become fully applied. My extension of this is just to allow
partial application inside aliases as long as it meets these same
criteria.

The reason to disallow partially applied type aliases is that they
make inference pretty much impossible, unless you only infer them in
very limited circumstances perhaps. And if you can't get inference of
them, you probably need to start having explicit annotations to tell
the type checker what you want to happen, which has some of its own
complications with the way quantifiers work in GHC and such. It'd
cascade into some thorny issues.

Hopefully that covers all the other subsequent stuff folks have been
talking about.

-- Dan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


More liberal than liberal type synonyms

2011-12-06 Thread Dan Doel
Greetings,

In the process of working on a Haskell-alike language recently, Ed
Kmett and I realized that we had (without really thinking about it)
implemented type synonyms that are a bit more liberal than GHC's. With
LiberalTypeSynonyms enabled, GHC allows:

type Foo a b = b -> a
type Bar f = f String Int

baz :: Bar Foo
baz = show

because Bar expands to saturate Foo. However, we had also implemented
the following, which fails in GHC:

type Foo a b = b -> a
type Bar f = f (Foo Int) (Foo Int)
type Baz f g = f Int -> g Int

quux :: Bar Baz
quux = id

That is: type synonyms are allowed to be partially applied within
other type synonyms, as long as similar transitive saturation
guarantees are met during their use.
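Spelling out the expansion in the second example (just substitution of the
synonyms above):

--   Bar Baz
-- = Baz (Foo Int) (Foo Int)        -- expand Bar
-- = Foo Int Int -> Foo Int Int     -- expand Baz; both Foos are now saturated
-- = (Int -> Int) -> (Int -> Int)   -- expand Foo
-- so quux = id type checks.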

I don't know how useful it is, but I was curious if anyone can see
anything wrong with allowing this (it seems okay to me after a little
thought), and thought I'd float the idea out to the GHC developers, in
case they're interested in picking it up.

-- Dan

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


[Haskell-cafe] More liberal than liberal type synonyms

2011-12-06 Thread Dan Doel
Greetings,

In the process of working on a Haskell-alike language recently, Ed
Kmett and I realized that we had (without really thinking about it)
implemented type synonyms that are a bit more liberal than GHC's. With
LiberalTypeSynonyms enabled, GHC allows:

type Foo a b = b -> a
type Bar f = f String Int

baz :: Bar Foo
baz = show

because Bar expands to saturate Foo. However, we had also implemented
the following, which fails in GHC:

type Foo a b = b - a
type Bar f = f (Foo Int) (Foo Int)
type Baz f g = f Int - g Int

quux :: Bar Baz
quux = id

That is: type synonyms are allowed to be partially applied within
other type synonyms, as long as similar transitive saturation
guarantees are met during their use.

I don't know how useful it is, but I was curious if anyone can see
anything wrong with allowing this (it seems okay to me after a little
thought), and thought I'd float the idea out to the GHC developers, in
case they're interested in picking it up.

-- Dan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Evaluating type expressions in GHCi

2011-09-20 Thread Dan Doel
type family Bar a :: *
type instance Bar () = String

data Foo a = Bar a | Baz a a

ghci> Bar ()

What happens?

There is a lot of ambiguity between term and type levels in Haskell.
(,); []; etc. It's only the overall structure of the language that
disambiguates them; you can't necessarily tell just by looking at an
expression and the existing definitions in scope.
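
To make that concrete, here is a small sketch using only the definitions
above; the same tokens make sense at both levels, and only the surrounding
context picks one:

    termLevel :: Foo ()
    termLevel = Bar ()        -- 'Bar' the data constructor of Foo

    type TypeLevel = Bar ()   -- 'Bar' the type family; TypeLevel is String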

-- Dan

On Tue, Sep 20, 2011 at 7:02 PM,  wagne...@seas.upenn.edu wrote:
 Would it be possible to have no command at all? Types are distinguished by
 upper-case letters, so it should be possible to tell whether a given
 expression is a value-level or a type-level expression.

 I guess that's not strictly true, since the expression could be _only_ type
 variables -- but then I think it would be forgivable to just use the
 value-level evaluator for those ambiguous ones.

 ~d

 Quoting Simon Peyton-Jones simo...@microsoft.com:

 Sean

 Yes, this has been asked for before, and it wouldn't be hard to implement.


 What should the GHCi command be *called*?

 We already have :kind, which displays the kind of a type.  Maybe :kind!
 should evaluate the type as well?  Or perhaps :kind should evaluate anyway
 (although that would be a bit  inconsistent with :type which does not
 evaluate the expression)

 Or :normtype?   short for normalise type

 Simon

 From: glasgow-haskell-users-boun...@haskell.org
 [mailto:glasgow-haskell-users-boun...@haskell.org] On Behalf Of Sean Leather
 Sent: 20 September 2011 11:34
 To: GHC Users List
 Subject: Evaluating type expressions in GHCi

 I would like to ask GHCi for the type that a type expression will evaluate
 to, once all definitions of type synonyms and (when possible) type families
 have been inlined.

 It appears that I can do some part of this for type T by using :t
 undefined :: T:

 type family F a
 type instance F Int = Bool
 type instance F Bool = Int
 type instance F (a, b) = (F a, F b)

 ghci> :t undefined :: F (Int, Bool)
 undefined :: F (Int, Bool) :: (Bool, Int)

 I also get what I expect here:

 ghci> :t undefined :: F (a, Bool)
 undefined :: F (a, Bool) :: (F a, Int)

 Of course, this doesn't work on types of kinds other than *.

 Is it possible and worth having another GHCi command to perform this
 operation for any types? It could be the type analogue to :t such that it
 evaluates the type and gives its kind.

 Regards,
 Sean




 ___
 Glasgow-haskell-users mailing list
 Glasgow-haskell-users@haskell.org
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Smarter do notation

2011-09-04 Thread Dan Doel
On Sun, Sep 4, 2011 at 12:24 AM, Ivan Lazar Miljenovic
ivan.miljeno...@gmail.com wrote:
 On 4 September 2011 12:34, Daniel Peebles pumpkin...@gmail.com wrote:
 Hi all,
 For example, if I write in a do block:
 x <- action1
 y <- action2
 z <- action3
 return (f x y z)
 that doesn't require any of the context-sensitivity that Monads give you, and
 could be processed a lot more efficiently by a clever Applicative instance
 (a parser, for instance).

 What advantage is there in using Applicative rather than Monad for
 this?  Does it _really_ lead to an efficiency increase?

Forget about efficiency. What if I just want nicer syntax for some
applicative stuff? For instance, this is applicative:

  do x <- fx ; y <- fy ; z <- fz ; pure (x*x + y*y + z*z)

But my only option for writing it to require just applicative is something like:

  (\x y z -> x*x + y*y + z*z) <$> fx <*> fy <*> fz

Even if I had idiom brackets, it'd just be:

  (| (\x y z -> x*x + y*y + z*z) fx fy fz |)

Basically the situation boils down to this: applicatives admit a form
of let as sugar:

  let
    x = ex
    y = ey
    z = ez
   in ...

where the definitions are not recursive, and x is not in scope in ey
and so on. This is desugarable to (in lambda calculus):

  (\x y z -> ...) (ex) (ey) (ez)

but we are currently forced to write in the latter style, because
there's no support for the sugared syntax. So if anyone's looking for
motivation, ask yourself if you've ever found let or where useful. And
of course, in this case, we can't just beta reduce the desugared
expression, because of the types involved.
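
To spell that last point out with a concrete (made-up) instantiation: with
fx, fy, fz :: Maybe Int, the desugared form is

    (\x y z -> x*x + y*y + z*z) <$> fx <*> fy <*> fz

and inside the lambda x has type Int while fx has type Maybe Int, so
substituting fx for x, as beta reduction would for an ordinary let, is not
even well typed.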

Comprehensions are rather like an expression with a where:

  [ x*x + y*y + z*z | x <- ex, y <- ey, z <- ez ]

-- Dan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Type families difference between 7.0.4 and 7.2.1

2011-08-16 Thread Dan Doel
Classes are not always exported from a module. Only instances are. It
is even possible to export methods of a class that isn't itself
exported, making it impossible to write the types for them explicitly
(GHC will infer qualified types that you can't legally write given the
imports).
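
A minimal sketch of that situation (module and names made up):

    module M (method) where   -- the class K itself is deliberately not exported

    class K a where
      method :: a -> Int

An importer of M can use 'method' freely, but cannot write its type
'K a => a -> Int' in a signature, because K is not in scope there.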

I don't really understand why it would be impossible not to export a
data family, given that (instances I understand). And of course, you
can selectively export methods of a class, so why not associated
types?

-- Dan

On Tue, Aug 16, 2011 at 2:16 AM, Brandon Allbery allber...@gmail.com wrote:
 (I'm adding glasgow-haskell-users to this; if I'm remembering incorrectly
 someone should correct me, if not then the namespace bit should be at least
 mentioned if not filed as a bug.)
 On Tue, Aug 16, 2011 at 00:44, Luite Stegeman stege...@gmail.com wrote:

 On Tue, Aug 16, 2011 at 6:33 AM, Brandon Allbery allber...@gmail.com
 wrote:
  On Mon, Aug 15, 2011 at 08:12, Luite Stegeman stege...@gmail.com
  wrote:
  -- C.hs
  {-# LANGUAGE TypeFamilies #-}
  module C where
 
  class C1 a where
   data F a :: *
 
  I believe this is supposed to be syntactic sugar for a data family, so
  7.0.4
  is wrong.  (I also think it was a known deficiency.)

 In that case, why does module B export F, even though I imported C
 qualified. Within B it can only be referred to as C.F

 My specific recollection is that 7.0.x treated F as a data family without
 calling it one, which introduced some needless duplication in the code base
 and some oddities in usage, including possible core dumps for orphan
 instances.  Again, 7.2.x is the correct reference; behavior of class ...
 where data ... in 7.0 is not consistent.
 And yes, not exporting the data-family-not-called-such was one of the
 inconsistencies in 7.0, 7.2's behavior being considered a bug fix for it.
  7.0's behavior is actually a fairly serious bug, IIRC:  instances of C1 not
 defined within C.hs would not correctly associate with the non-exported data
 family F and the code generated for them would crash at runtime.
  (Typeclasses are always global over an entire program; in effect, they are
 always exported, and you can't suppress it.  Therefore a data family
 associated with a typeclass must also be exported always.)
 I suspect Within B it can only be referred to as C.F is a namespace bug,
 given that F must always be implicitly exported/imported to match the
 implicit export/import of C1.
 --
 brandon s allbery                                      allber...@gmail.com
 wandering unix systems administrator (available)     (412) 475-9364 vm/sms


 ___
 Glasgow-haskell-users mailing list
 Glasgow-haskell-users@haskell.org
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users



___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


[Haskell] ANNOUNCE: vector-algorithms 0.5.2

2011-08-03 Thread Dan Doel
Greetings,

Following some work at hac-phi, I've finally put together a new
release of vector-algorithms. It should now be available via hackage,
or you can pull from code.haskell.org if you prefer:

  hackage: http://hackage.haskell.org/package/vector-algorithms/
  latest: darcs get http://code.haskell.org/~dolio/vector-algorithms/

The highlights of the new release are as follows:

  - Some bit rot in the test suite has been fixed
  - Some strictness has been added to the merge sort to improve unboxing
    on current GHCs
  - A couple out-of-bounds errors have been caught and fixed
  - An entirely new sort---American flag sort---has been added to the line-up
    * American flag sort is an in-place bucket sort, which only uses a
      constant amount of auxiliary heap space (determined by the element
      type). And unlike a typical radix sort, it is actually suitable for
      sorting arrays of variable-length (O(1) lookup) strings. To this end,
      there is a (strict) ByteString instance for the relevant class, and
      possibly more such instances to come.

Bug reports, patches, requests, etc. can be sent to me at this address.

Enjoy,

-- Dan

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: Superclass Cycle via Associated Type

2011-07-22 Thread Dan Doel
2011/7/22 Gábor Lehel illiss...@gmail.com:
 Yeah, this is pretty much what I ended up doing. As I said, I don't
 think I lose anything in expressiveness by going the MPTC route, I
 just think the two separate but linked classes way reads better. So
 it's just a would be nice thing. Do recursive equality superclasses
 make sense / would they be within the realm of the possible to
 implement?

Those equality superclasses are not recursive in the same way, as far
as I can tell. The specifications for classes require that there is no
chain:

C ... => D ... => E ... => ... => C ...

However, your example just had (~) as a context for C, but C is not
required by (~). And the families involved make no reference to C,
either. A fully desugared version looks like:

type family Frozen a :: *
type family Thawed a :: *

class (..., Thawed (Frozen t) ~ t) => Mutable t where ...

I think this will be handled if you use a version where equality
superclasses are allowed.

-- Dan

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Simple GADTs, type families and type classes combination with type error.

2011-07-22 Thread Dan Doel
On Fri, Jul 22, 2011 at 11:12 AM, Serguey Zefirov sergu...@gmail.com wrote:
 -
 {-# LANGUAGE GADTs, TypeFamilies #-}

 class CPU cpu where
        type CPUFunc cpu

 data Expr cpu where
        EVar :: String -> Expr cpu
        EFunc :: CPU cpu => CPUFunc cpu -> Expr cpu

 class CPU cpu => FuncVars cpu where
        funcVars :: CPUFunc cpu -> [String]

 exprVars :: FuncVars cpu => Expr cpu -> [String]
 exprVars (EVar v) = [v]
 -- an offending line:
 exprVars (EFunc f) = funcVars f
 -

 I tried to split creation and analysis constraints. CPU required for
 creation of expressions, FuncVars required for analysis. It all looks
 nice but didn't work.

 (In our real code EVar is slightly more complicated, featuring Var
 cpu argument)

 It looks like GHC cannot relate parameters inside and outside of
 GADT constructor.

 Not that I hesitate to add a method to a CPU class, but I think it is
 not the right thing to do. So if I can split my task into two classes,
 I will feel better.

GHC cannot decide what instance of FuncVars to use. The signature of
funcVars is:

funcVars :: FuncVars cpu => CPUFunc cpu -> [String]

This does not take any arguments that allow cpu to be determined. For
instance, if there were instances (rolling them into one declaration
for simplicity):

instance FuncVars Int where
  type CPUFunc Int = Int
  ...

instance FuncVars Char where
  type CPUFunc Char = Int

Then GHC would see that CPUFunc cpu = Int, but from this, it cannot
determine whether cpu = Int or cpu = Char. CPUFunc is not
(necessarily) injective.

Making CPUFunc a data family as Felipe suggested fixes this by CPUFunc
essentially being a constructor of types, not a function that
computes. So it would be impossible for CPUFunc a = CPUFunc b unless a
= b.
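
A sketch of that data-family variant (the instance and constructor names
below are made up, not from the original code):

    {-# LANGUAGE TypeFamilies #-}

    class CPU cpu where
      data CPUFunc cpu

    data SomeCPU = SomeCPU

    instance CPU SomeCPU where
      data CPUFunc SomeCPU = SomeFunc String

Because a data family is injective by construction, knowing CPUFunc cpu
pins down cpu, and the funcVars call in exprVars no longer leaves the
instance undetermined.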

Also, if you have a class whose only content is an associated type,
there's really no need for the class at all. It desugars into:

type family CPUFunc a :: *

class CPU a

So it's just a type family and an empty class, which will all have
exactly the same cases defined. You could instead use just the family.

-- Dan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Simple GADTs, type families and type classes combination with type error.

2011-07-22 Thread Dan Doel
On Fri, Jul 22, 2011 at 4:11 PM, Serguey Zefirov sergu...@gmail.com wrote:
 But cpu variable is the same in all places. If we don't dive into
 CPUFunc reduction (to Int or whatever) we can safely match funcVars
 argument and unify cpu.

 This is the case when we write generic functions over type family application.

Here is approximately what the checking algorithm knows in the problematic case:

  exprVars (EFunc f) = funcVars f

  exprVars :: FuncVars cpu1 => Expr cpu1 -> [String]
  EFunc f :: Expr cpu1
  funcVars :: FuncVars cpu2 => CPUFunc cpu2 -> [String]
  f :: CPUFunc cpu1

Thus, it can determine:

  CPUFunc cpu2 = CPUFunc cpu1

Now it needs to decide which instance of FuncVars to feed to funcVars.
But it only knows that cpu2 should be such that the above type
equation holds. However, since CPUFunc is a type family, it is not
sound in general to reason from:

  CPUFunc cpu1 = CPUFunc cpu2

to:

  cpu1 = cpu2

So the type checker doesn't. You have nothing there that determines
cpu1 to be the same as cpu2. So you need to make some change that does
determine them to be the same.

In situations like these, it would be handy if there were a way to
specify what type certain variables are instantiated to, but it's sort
of understandable that there isn't, because it'd be difficult to do in
a totally satisfactory way.
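
(One workaround, sketched here as an assumption rather than something from
the thread: give funcVars an extra argument that mentions cpu directly, so
the inner and outer variables are forced to coincide. With the declarations
from the earlier message plus ScopedTypeVariables:

    data Proxy cpu = Proxy

    class CPU cpu => FuncVars cpu where
      funcVars :: Proxy cpu -> CPUFunc cpu -> [String]

    exprVars :: forall cpu. FuncVars cpu => Expr cpu -> [String]
    exprVars (EVar v)  = [v]
    exprVars (EFunc f) = funcVars (Proxy :: Proxy cpu) f

The Proxy argument determines the instance variable to be the cpu from the
signature, so the CPUFunc equation never needs to be inverted.)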

-- Dan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Superclass Cycle via Associated Type

2011-07-20 Thread Dan Doel
This isn't a GHC limitation. The report specifies that the class
hierarchy must be a DAG. So C cannot require itself as a prerequisite,
even if it's on a 'different' type.

Practically, in the implementation strategy that GHC (and doubtless
other compilers) use, the declaration:

class C (A x) => C x ...

means that a C x dictionary contains a C (A x) dictionary, which
contains a C (A (A x)) dictionary, and so on. And dictionaries are strictly
evaluated, so this sort of infinite definition cannot work.
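
A rough sketch of what that dictionary translation looks like (the names
here are made up):

    {-# LANGUAGE TypeFamilies #-}

    type family A x :: *

    data CDict x = CDict { superDict :: CDict (A x) }

Building a CDict x value requires a CDict (A x), which requires a
CDict (A (A x)), and so on, with no base case for the recursion to bottom
out at.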

-- Dan

On Wed, Jul 20, 2011 at 12:37 PM, Ryan Trinkle ryant5...@gmail.com wrote:
 The following code doesn't compile, but it seems sensible enough to me.  Is
 this a limitation of GHC or is there something I'm missing?

 class C (A x) => C x where
   type A x :: *

 instance C Int where
   type A Int = String

 instance C String where
   type A String = Int

 The error I get is:

 SuperclassCycle.hs:1:1:
 Cycle in class declarations (via superclasses):
 SuperclassCycle.hs:(1,1)-(2,15): class C (A x) => C x where {
type family A x :: *; }


 Ryan

 ___
 Glasgow-haskell-users mailing list
 Glasgow-haskell-users@haskell.org
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users



___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: TypeFamilies vs. FunctionalDependencies type-level recursion

2011-06-15 Thread Dan Doel
On Wed, Jun 15, 2011 at 3:25 AM, Simon Peyton-Jones
simo...@microsoft.com wrote:
 Wait.  What about
        instance C [a] [b]
 ?  Should that be accepted?  The Coverage Condition says no, and indeed it 
 is rejected. But if you add -XUndecidableInstances it is accepted.

This 'clearly' violates the functional dependency as well.

However, I must admit, it surprises me that GHC or Hugs ever detected
this, and I imagine there's no general way to detect 'acceptable'
instances.

 Do you think the two are different?  Do you argue for unconditional rejection 
 of everything not satisfying the Coverage Condition, regardless of flags?

One obvious difference from the instances that appear (depending on
how smart you're pretending to be as a compiler) bad but are
nevertheless okay is that these have no contexts. If you can detect
that, then:

instance C a b
instance C [a] [b]

clearly have multiple independent instantiations on both sides, and so
the relation is clearly non-functional. A simple heuristic might be to
reject those, but allow:

instance (..., D .. b .., ...) => C a b

trusting that the context determines b in the right way. Is this
possibly what GHC used to do? Of course, that allows 'Show b => C a b'
so it's pretty weak.

A slightly more intelligent heuristic might be to see if the fundeps
in the context determine b, but that sounds like it might be leaving
the realm of what's checkable.

-- Dan

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: TypeFamilies vs. FunctionalDependencies type-level recursion

2011-06-14 Thread Dan Doel
On Tue, Jun 14, 2011 at 11:27 AM, Andrea Vezzosi sanzhi...@gmail.com wrote:
    class C a b | a -> b
    instance C a R
    instance C T U

 Are you sure that worked before?

80%

 The following still does anyhow:

    data R
    data T
    data U
    class C a b | a -> b
    instance TypeCast R b => C a b
    instance TypeCast U b => C T b

 In fact this is how IsFunction was implemented in 2005[1], and the
 same transformation appears to work for the Eq class too.
 If we allow TypeFamilies we can use (~) instead of the TypeCast hack,
 fortunately.

So it does.

instance (b ~ R) => C a b
instance (b ~ U) => C T b

Which is odd. I don't think there's a way to justify this working.
Either the preconditions are being taken into account, in which case
there is no reason for this to behave differently from:

instance C a R
instance C T U

or the preconditions are not being taken into account (which is the
type class way), in which case both of the qualified instances are
illegal, because they declare instances C T b for all b (plus a
constraint on the use), which violates the fundep. I've seen examples
like this before, and I think what GHC ends up doing (now) is fixing
the 'b' to whatever instance it needs first. But I don't think that's
very good behavior.

In this case it happens that it's impossible to use at more than one
instance, but if you accept the instances, you're back to the
questions of type soundness that are only evaded because fundeps don't
actually use all the information they allegedly express.

-- Dan

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: TypeFamilies vs. FunctionalDependencies type-level recursion

2011-06-14 Thread Dan Doel
Sorry about the double send, David. I forgot to switch to reply-all in
the gmail interface.

On Tue, Jun 14, 2011 at 11:49 AM,
dm-list-haskell-pr...@scs.stanford.edu wrote:
 You absolutely still can use FunctionalDependencies to determine type
 equality in GHC 7.  For example, I just verified the code below with
 GHC 7.02:

 *Main> typeEq True False
 HTrue
 *Main> typeEq (1 :: Int) (2 :: Int)
 HTrue
 *Main> typeEq (1 :: Int) False
 HFalse

 As always, you have to make sure one of the overlapping instances is
 more specific than the other, which you can do by substituting a
 parameter c for HFalse in the false case and fixing c to HFalse using
 another class like TypeCast in the context.  (As contexts play no role
 in instance selection, they don't make the instance any more
 specific.)

 While I don't have convenient access to GHC 6 right this second, I'm
 pretty sure there has been no change for a while, as the HList paper
 discussed this topic in 2004.

Okay. I don't really write a lot of code like this, so maybe I missed
the quirks.

In that case, HList has been relying on broken behavior of fundeps for
7 years. :) Because the instance:

   instance TypeEq a b c

violates the functional dependency, by declaring:

   instance TypeEq Int Int Int
   instance TypeEq Int Int Char
   instance TypeEq Int Int Double
   ...
   instance TypeEq Int Char Int
   instance TypeEq Int Char Char
   ...

and adding the constraint doesn't actually affect which instances are
being declared, it just adds a constraint requirement for when any of
the instances are used. It appears I was wrong, though. GHC doesn't
actually fix the instance for such fundeps, and the following compiles
and runs fine:

    class C a b | a -> b where
      foo :: a -> b
      foo = error "Yo dawg."

    instance C a b where

    bar :: Int
    bar = foo "x"

    baz :: Char
    baz = foo "x"

so we're using an instance C String Int and an instance C String Char
despite the fact that there's a functional dependency from the first
argument to the second.

-- Dan

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: TypeFamilies vs. FunctionalDependencies type-level recursion

2011-06-14 Thread Dan Doel
On Tue, Jun 14, 2011 at 1:19 PM,
dm-list-haskell-pr...@scs.stanford.edu wrote:
 No, these are not equivalent.  The first one TypeEq a b c is just
 declaring an instance that works forall c.  The second is declaring
 multiple instances, which, if there were class methods, could have
 different code.  The second one is illegal, because given just the
 first two types, a and b, you cannot tell which instance to pick.

Then why am I not allowed to write:

class C a b | a -> b
instance C T [a]

without undecidable instances? GHC actually complains in that case
that the coverage condition is violated. But it is a single well
specified instance that works for all a.

The answer is that such an instance actually violates the functional
dependency, but UndecidableInstances just turns off the checks to make
sure that fundeps are actually functional. It's a, trust me, switch
in this case (not just a, my types might loop, switch).

So I guess HList will still work fine, and UndecidableInstances are
actually more evil than I'd previously thought (thanks for the
correction, Andrea :)).

 A functional dependency such as | a b -> c d just guarantees that a
 and b uniquely determine the instance.  Hence, it is okay to have
 class methods that do not mention type variables c and d, because the
 compiler will still know which instance to pick.

It specifies that a and b uniquely determine c and d, so the choice of
instances is unambiguous based only on a and b. This is the basis for
type level computation that people do with fundeps, because a fundep
'a b -> c' allows one to compute a unique c for each pair of types.

If it were just about variable sets determining the instance, then we
could just list those. But I can write:

class C a b c d | a b -> c

And it will actually be a, b and d that determine the particular
instance, but just listing 'a b d', or using the fundep 'a b d -> c'
is wrong, because the fundep above specifies that there is a single
choice of c for each a and b. So we could have:

C Int Int Char Int
C Int Int Char Double
C Int Int Char Float

but trying to add:

C Int Int Int Char

to the first three would be an error, because the first two parameters
determine the third, rather than the first, second and fourth.

Being allowed to elide variables from the types of methods is one of
the uses of fundeps, and probably why they were introduced, but it is
not what fundeps mean.

 On the other hand, though the compiler won't accept it, you could at
 least theoretically allow code such as the following:

        instance C [Char] where
             type Foo [Char] = forall b. () => b

The fundep equivalent of this is not 'instance C [Char] b'. It is:

instance C [Char] (forall b. b)

except types like that aren't allowed in instances (for good reason in general).

The closest you could come to 'instance C T b' would be something like:

type instance F T = b

except that is probably interpreted by GHC as being (forall b. b).

-- Dan

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: TypeFamilies vs. FunctionalDependencies type-level recursion

2011-05-29 Thread Dan Doel
On Sun, May 29, 2011 at 6:45 PM, Ben Millwood hask...@benmachine.co.uk wrote:
 It would seem very strange to me if haskell-prime made the choice of
 fundeps/type families based on the behaviour with
 OverlappingInstances. I'm under the impression that Overlapping is
 generally considered one of the more controversial extensions, and on
 the LanguageQualities wiki page [1] it's explicitly given as an
 example of something which violates them. So not only is Overlapping
 not in the language, but I imagine there are many people (myself
 included) who would like to ensure it stays out.

 My personal opinion is that if Haskell wants a more complete facility
 for type-level programming, that should be addressed directly, instead
 of via creative abuse of the class system and related machinery.

It should also be noted: the fact that functional dependencies work
with overlapping instances, while type families don't is not really
inherent in functional dependencies, but is instead related to choices
about how functional dependencies work that differ from how type
families do. And mainly, this is because functional dependencies fail
to incorporate local information, meaning they fail to work
appropriately in various situations (for instance, matching on a GADT
may refine a type, but that new information may not propagate through
a fundep).

In my experience, you can construct examples that should lead to type
soundness issues with fundeps, and only fail because of peculiarities
in fundep handling. But fundeps could (and arguably should, to
interact with GADTs and the like) be reworked to behave 'properly'.
It's just that type families already do.

I can't really recall what example I used in the past, but here's one
off the cuff:

  module A where
    class C a b | a -> b where

    instance C a a where

    data T a where
      CT :: C a b => b -> T a

  module B where
    import A

    instance C Int Char where

    c :: Char
    c = case t of { CT x -> x }

So, the question is: what should happen here?

We've created a T Int in a context in which C Int Int, so it wraps an
Int. Then we match in a context in which C Int Char. But the fundep
tells us that there can only be one S such that C Int S. So we have
some choices:

1) Disallow the overlapping instance C Int Char, because it is
incompatible with the C Int Int from the other module. This is what
GHC 7 seems to do.

2) Pretend that there may in fact be more than one instance C Int a,
and so we can't infer what a is in the body of c. I think this is what
GHC used to do, but it means that whether a fundep a - b actually
allows us to determine what b is from knowing a is kind of ad-hoc and
inconsistent.

3) Allow c to type check. This is unsound.

1 is, I think, in line with type families. But it rules out the
computation that fundeps + overlapping can do and type families
cannot.

Also, in an unrelated direction: there are conditions on type families
that can allow some overlapping to be permitted. For instance, if you
simply want a closed type function, like, taking the above as an
example:

type family F a :: * where
  instance F Int = Char
  instance F a   = a

Then this is a sufficient condition for overlapping to be permissible.
And it covers a lot of the use cases for overlapping instances, I
think.

-- Dan

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: [Haskell-cafe] Is fusion overrated?

2011-05-18 Thread Dan Doel
On Wed, May 18, 2011 at 2:04 AM, Ryan Ingram ryani.s...@gmail.com wrote:
 Yes, the goal isn't so much to improve complexity (both are O(1)) but to
 reduce the constant factor on that O(1).

 In an inner loop like that, allocator/gc calls by far dominate the cost of
 the program.  If you can remove them, you've improved the performance of the
 program by 10-100x.

Yes, this is important. Fusion is an obvious win on strict structures,
because it can make the space usage asymptotically better. However,
even if this doesn't happen for lazy structures, fusion can save a lot
of _time_. It won't make the time complexity asymptotically better,
but it can save an amount of work proportional to the complexity of
the algorithm.

For instance, I was playing around with uvector a while back, and
needed foldr or something similar that wasn't available. So I wrote
something like:

foldr f z s = if null s then z else f (head s) (foldr f z (tail s))

This all ran in constant space, but tail re-unfolds the entire stream
every time, so this function has time complexity O(n^2). The nth
element chugs through n allocation-followed-by-deallocations before it
becomes usable, which can all be done in constant space, but takes
linear time.

Fusion won't save you in this example (to my knowledge). But if you
compose k functions from lists to lists together, you'll have k
allocations-followed-by-deallocations on every element that makes it
all the way through the pipeline. You'll see O(1) space usage, but
your time usage has a c*k*n term simply from being expressed by a
composition pipeline, where c is the cost of the unnecessary boxing.
Fusion eliminates this term.
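
As a toy illustration of such a pipeline (nothing special about the stages):

    pipeline :: [Int] -> [Int]
    pipeline = map (+ 1) . map (* 2) . filter even

Here k = 3: without fusion, every element that survives the filter is boxed
into a fresh cons cell at each stage; the fused version walks the input
once and allocates only the final list.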

-- Dan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Robert Harper on monads and laziness

2011-05-03 Thread Dan Doel
On Tue, May 3, 2011 at 2:26 AM, Dominique Devriese
dominique.devri...@cs.kuleuven.be wrote:
 What I find interesting is that he considers (non-)termination an
 effect, which Haskell does not manage to control like it does other
 types of effects. Dependently typed purely functional languages like
 Coq (or Agda if you prefer ;)) do manage to control this (at the cost
 of replacing general recursion with structural recursion) and require
 you to model non-termination in a monad (or Applicative functor) like
 in YNot or Agda's partiality monad (written _⊥) which models just
 non-termination.

Dependent typing isn't really necessary. Only totality. Of course,
there's some agreement that dependent types help you get back some of
the power you'd lose by going total (by helping you write precise
enough types for your programs to be accomplished in the more limited
recursive manner).

 I have the impression that this separation of the partiality effect
 provides a certain independence of evaluation order which neither ML
 (because of side-effects) nor Haskell (because of non-strict
 semantics) manage to provide. Such an independence seems very useful
 for optimization and parallel purposes.

Total lambda calculi tend to yield the same results irrespective of
evaluation strategy. I guess that's useful for optimization, because
you can apply transformations wherever you want without worrying about
changing the definedness of something (because everything is
guaranteed to be well defined regardless of your evaluation strategy).

I don't see how it obviates strictness analysis, though. For instance:

sumAcc a (x:xs) = sumAcc (a + x) xs
sumAcc a [] = a

... case sumAcc 0 l of { n -> ... }

Even in a total language, accumulating lazy thunks is likely to be
inefficient for when we go to use the accumulator, whereas one can
also construct examples (even in a total and inductive fragment) where
lazy evaluation will be superior. So you need to do analysis to
determine which things should be strict and which should be lazy for
efficiency, even if you aren't obligated to do it to determine whether
your optimizations are semantics-preserving.
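
For illustration, the strictness such an analysis would want to discover in
the accumulator corresponds to (a sketch):

    {-# LANGUAGE BangPatterns #-}

    sumAcc' :: Int -> [Int] -> Int
    sumAcc' !a (x:xs) = sumAcc' (a + x) xs
    sumAcc' !a []     = a

which forces the accumulator at each step instead of building up a chain of
thunks that only gets collapsed at the end.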

-- Dan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Python is lazier than Haskell

2011-04-28 Thread Dan Doel
(Sorry if you get this twice, Ertugrul; and if I reply to top. I'm
stuck with the gmail interface and I'm not used to it.)

On Thu, Apr 28, 2011 at 11:27 AM, Ertugrul Soeylemez e...@ertes.de wrote:
 I don't see any problem with this.  Although I usually have a bottom-up
 approach, so I don't do this too often, it doesn't hurt, when I have to.

I do. It's low tech and inconvenient.

Whenever I program in Haskell, I miss Agda's editing features, where I
can write:

   foo : Signature
   foo x y z = ?

Then compile the file. The ? stands in for a term of any type, and
becomes a 'hole' in my code. The editing environment will then tell me
what type of term I have to fill into the hole, and give me
information on what is available in the scope. Then I can write:

   foo x y z = { partialImpl ? ? }

and execute another command. The compiler will make sure that
'partialImpl ? ?' has the right type to fill in the hole (with ?s
again standing in for terms of arbitrary type). If the type checking
goes through, it expands into:

   foo x y z = partialImpl { } { }

and the process repeats until my function is completely written. And
of course, let's not forget the command for automatically going from:

   foo x y z = { x }

to

   foo Con1 y z = { }
   foo Con2 y z = { }
   foo Con3 y z = { }
   ...

I don't think there's anything particularly Agda-specific to the
above. In fact, the inference required should be easier with
Haskell/GHC. Features like this would be pretty killer to have for
Haskell development; then I wouldn't have to prototype in Agda. :)

-- Dan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Broken ghc-7.0.3/vector combination?

2011-04-20 Thread Dan Doel
On Wed, Apr 20, 2011 at 3:01 PM, Daniel Fischer
daniel.is.fisc...@googlemail.com wrote:
 I'm sure it's not criterion, because after I've found that NaNs were
 introduced to the resamples vectors during sorting (check the entire
 vectors for NaNs before and aftersorting, tracing the count; before: 0,
 afterwards often quite a number, sometimes close to 10%), the further tests
 didn't involve criterion anymore. criterion is simply the most obvious
 place to see the NaNs show up (with 5-10% NaNs among the resamples, it
 won't take too long to see one pop up).

 It could be a bug in statistics, but I'm pretty sure this one's not due to
 statistics either, since fiddling with vector-algorithms made the NaNs
 disappear - btw., Bryan, using the heap sort instead of introsort, I
 haven't found any NaNs in my tests, so temporarily switching the algorithm
 might cure the symptoms.

It's not a statistics bug. I'm reproducing it here using just vector-algorithms.

Fill a vector of size N with [N..1], and (intro) sort it, and you get
NaNs. But only with -O or above. Without optimization it doesn't
happen (and nothing seems to be reading/writing out of bounds, as I
compiled vector with UnsafeChecks earlier and it didn't complain).

Filling the vector with [1..N] also doesn't trigger the NaNs. [0,0..0]
and [0,0..1] trigger it.

I don't know what's going on yet. I have trouble believing it's a bug
in vector-algorithms code, though, as I don't think I've written any
RULEs (just INLINEs), and that's the one thing that comes to mind in
library code that could cause a difference between -O0 and -O. So I'd
tentatively suggest it's a vector, base or compiler bug.

The above testing is on 64-bit windows running a 32-bit copy of GHC,
for reference.

My ability to investigate this will be a bit limited for the near
future. If someone definitively tracks it down to bugs in my code,
though, let me know, and I'll try and push a new release up on
hackage.

-- Dan

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Higher-kinded Quantification

2011-04-12 Thread Dan Doel
On Monday 11 April 2011 8:31:54 PM Leon Smith wrote:
 I have a type constructor (Iterator i o m a) of kind (* -> * -> (* ->
 *) -> *),  which is a monad transformer,  and I'd like to use the type
 system to express the fact that some computations must be pure,  by
 writing the impredicative type (Iterator i o (forall m. m) a).
 However I've run into a bit of difficulty expressing this,  due to the
 kind of m.   I've attached a minimal-ish example.   Is there a way to
 express this in GHC?

I think the simplest way is 'Iterator i o Id a'. Then there's a function:

  embed :: Iterator i o Id a -> Iterator i o m a

with the obvious implementation. This means your NeedAction case is no longer 
undefined, too. You can either peel off NeedActions (since they're just 
delays) or leave them in place.

However, another option is probably:

[i] -> (forall m. Iterator i o m a) -> (forall m. Iterator i o m a)

which will still have the 'this is impossible' case. You know that the 
incoming iterator can't take advantage of what m is, though, so it will be 
impossible.

-- Dan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Higher-kinded Quantification

2011-04-12 Thread Dan Doel
On Tuesday 12 April 2011 11:27:31 AM Leon Smith wrote:
 I think impredicative polymorphism is actually needed here;  if I write
 ...
 Then I get a type error
 ...

I'm not going to worry about the type error, because that wasn't what I had in 
mind for the types. The type for loop I had in mind was:

  [i] -> Iterator i o m a -> Iterator i o m a

Then, feedPure cracks open the first (forall m. ...), instantiates it to the m 
for the result, and runs loop on it. If we had explicit type application, it'd 
look like:

  feedPure l it = /\m -> loop l (it@m)

but as it is it's just:

  feedPure l it = loop l it

Which cannot be eta reduced.

 But what I find rather befuddling is the kind error:
  feedPure' :: [i] -> Iterator i o (forall (m :: * -> *) . m) a -> Iterator
  i o (forall (m :: * -> *) . m) a feedPure' = undefined
 
 Iterator.hs:193:58:
 `m' is not applied to enough type arguments
 Expected kind `*', but `m' has kind `* -> *'
 In the type signature for `feedPure'':
   feedPure' :: [i]
-> Iterator i o (forall m :: (* -> *). m) a
   -> Iterator i o (forall m :: (* -> *). m) a
 
 Is impredicative polymorphism restricted to the kind *?

The problem is that (forall (m :: * -> *). m) is not a valid type, and forall
is not a valid way to construct things with kind * -> *. You have:

  m :: * -> * |- T[m] :: *   ==>   |- (forall (m :: * -> *). T[m]) :: *

but that is the only way forall works.

-- Dan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [Agda] Defining subtraction for naturals

2011-03-19 Thread Dan Doel
On Thursday 17 March 2011 6:41:33 PM wren ng thornton wrote:
 How about pragmatically efficacious?

Well...

 (3) Use type hackery to disallow performing subtraction when the result
 would drop below zero, e.g. by requiring a proof that the left argument
 is not less than the right.

As far as this goes, one has to ask where we'd get this proof. It's unlikely 
that we'd just have one already, so the most likely answer is that we'd have 
to compute the proof.

But, the way to compute the proof of whether we're allowed to do subtraction 
is to *do subtraction*. So, if we don't want saturating subtraction, arguably 
we don't want subtraction at all, but a prior *comparison* of our two numbers, 
which will tell us:

  compare m n
    m = n + k + 1
    m = n
    m + k + 1 = n

in which case we already have the information we want. I think this article is 
relevant:

  http://existentialtype.wordpress.com/2011/03/15/boolean-blindness/

Inasmuch as we shouldn't be throwing away all propositional information by 
crushing to a boolean, we also shouldn't be throwing away information that we 
will later have to recompute by deciding the wrong proposition.

-- Dan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] linear and dependent types

2011-02-19 Thread Dan Doel
On Saturday 19 February 2011 1:11:23 AM Vasili I. Galchin wrote:
  BTW I was thinking of http://www.ats.org when I asked this question.

Technically speaking, if one considers ATS to be dependently typed, then one 
might as well also consider GHC to be dependently typed (with the right 
extensions enabled). ATS would easily be a nicer language in that respect, 
though.

-- Dan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Injective type families?

2011-02-14 Thread Dan Doel
On Monday 14 February 2011 5:51:55 PM Daniel Peebles wrote:
 I think what you want is closed type families, as do I and many others :)
 Unfortunately we don't have those.

Closed type families wouldn't necessarily be injective, either. What he wants 
is the facility to prove (or assert) to the compiler that a particular type
family is in fact injective.
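
(Purely as an illustration of what such an assertion could look like; the
syntax below is an assumption on my part, not a feature being referenced:

    type family F a = r | r -> a   -- "the result determines the argument"

i.e. an injectivity annotation written in the style of a functional
dependency.)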

It's something that I haven't really even seen developed much in fancy 
dependently typed languages, though I've seen the idea before. That is: prove 
things about your program and have the compiler take advantage of it.

-- Dan

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] ($) not as transparent as it seems

2011-02-03 Thread Dan Doel
On Thursday 03 February 2011 5:12:54 PM Tim Chevalier wrote:
 On Thu, Feb 3, 2011 at 2:03 PM, Luke Palmer lrpal...@gmail.com wrote:
  This is probably a result of strictness analysis.  error is
  technically strict, so it is reasonable to optimize to:
  
 let e = error "foo" in e `seq` error e
 
 Yes, and you can see this in the Core code that Don posted: in version
 (A), GHC optimized away the outer call to error. But in version (B),
 the demand analyzer only knows that ($) is strict in its first
 argument -- it's not strict in its second. So it's not obviously safe
 to do the same optimization: the demand analyzer doesn't look
 through higher-order function arguments IIRC. (You can confirm this
 for yourself if you also want to read the demand analyzer output.)
 
 If ($) were getting inlined, the code would look the same coming into
 demand analysis in both cases, so you wouldn't see a difference. So
 I'm guessing you're compiling with -O0.

Whatever is going on, it has to be active during ghci, because all these 
differences can be seen during interpretation (in 7.0.1, at least).

  Prelude> error (error "foo")
  *** Exception: foo
  Prelude> error $ error "foo"
  *** Exception: *** Exception: foo
  Prelude> let g :: (a -> b) -> a -> b ; g f x = f x in g error (error "foo")
  *** Exception: foo
  Prelude> let g :: (a -> b) -> a -> b ; g f x = f x
  Prelude> g error (error "foo")
  *** Exception: *** Exception: foo
  Prelude> let foo = error "foo" in error foo
  *** Exception: foo
  Prelude> let foo = error "foo"
  Prelude> error foo
  *** Exception: *** Exception: foo

Actually compiling seems to remove the difference in 7.0.1, at least, because 
the output is always:

  Foo: foo

regardless of ($) or not ('fix error' hangs without output as well, which 
isn't what I thought would happen).

Anyhow, that rules out most general-purpose optimizations (including 
strictness analysis, I thought).

- Dan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: In opposition of Functor as super-class of Monad

2011-01-04 Thread Dan Doel
On Tuesday 04 January 2011 5:24:21 am o...@okmij.org wrote:
 Method A: just define bind as usual
 
  instance (Functor (Iteratee el m), Monad m) => Monad (Iteratee el m) where
  
  return = IE_done
  
  IE_done a   >>= f = f a
  IE_cont e k >>= f = IE_cont e (\s -> k s >>= docase)
  
   where
   docase (IE_done a, stream)   = case f a of
   
   IE_cont Nothing k -> k stream
   i -> return (i,stream)
   
   docase (i, s)  = return (i >>= f, s)
 
 Although we must state the constraint (Functor (Iteratee el m)) to
 satisfy the super-class constraint, we have not made any use of the
 constraint.

This, at least, is false. If Functor is a superclass of Monad, then Monad m 
implies Functor m, which implies Functor (Iteratee el m). So Monad m is a 
sufficient constraint for the instance.

As for the other concerns, I think the closest fix I've seen is to allow 
subclasses to specify defaults for superclasses, and allow instances for 
subclasses to include methods for superclasses. So:

  class Functor m => Monad m where
    return :: a -> m a
    (>>=)  :: m a -> (a -> m b) -> m b

    fmap f x = x >>= return . f

This has its own caveats of course. And in this case, it seems to 
overconstrain the functor instance, since presumably we'd end up with:

  instance Monad m => Monad (Iteratee el m) where ...
    ==>
  instance Monad m => Functor (Iteratee el m) where ...

I'm not sure what to do about that.

-- Dan

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: [Haskell-cafe] What's the motivation for η rules?

2011-01-03 Thread Dan Doel
On Monday 03 January 2011 9:23:17 pm David Sankel wrote:
 The following is a dependent type example where equality of open terms
 comes up.
 
 z : (x : A) → B
 z = ...
 
 w : (y : A) → C
 w = z
 
 Here the compiler needs to show that the type B, with x free,
 is equivalent to C, with y free. This isn't always decidable, but eta
 rules, as an addition to beta and delta rules, make more of
 these equivalence checks possible (I'm assuming this is
 what extensionality means here).

Extensionality of functions (in full generality) lets you prove the following:

  ext f g : (forall x. f x = g x) -> f = g

This subsumes eta for every situation it applies in, because:

  ext f (\x -> f x) (\x -> refl (f x)) : f = (\x -> f x)

although making the equality that type checking uses extensional in this 
fashion leads to undecidability (because determining whether two types are 
equal may require deciding if two arbitrary functions are equal at all points, 
which is impossible in general).

The reverse is not usually the case, though, because even if we have eta:

  eta f : f = (\x -> f x)

and the premise:

  pointwise : forall x. f x = g x

we cannot use pointwise to substitute under the lambda and get

  necessary : (\x -> f x) = (\x -> g x)

Proving necessary would require use of ext to peel away the binder 
temporarily, but ext is what we're trying to prove.

So, although eta is often referred to as extensional equality (confusingly, if 
you ask me), because it is not part of the computational behavior of the terms 
(reduction of lambda terms doesn't usually involve producing eta-long normal 
forms, and it certainly doesn't involve rewriting terms to arbitrary other 
extensionally equal terms), it is usually weaker than full extensionality in 
type theories.

 What would be example terms for B and C that would require invoking the eta
 rule for functions, for example?

It could be as simple as:

  z : T f
  z = ...

  w : T (\y -> f y)
  w = z

On the supposition that those types aren't reducible. For instance, maybe we 
have:

  data T (f : Nat -> Nat) : Set where
    whatever : T f

Without eta, the types aren't equal, so this results in a type error.

-- Dan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Proof in Haskell

2010-12-23 Thread Dan Doel
On Thursday 23 December 2010 12:52:07 pm Daniel Peebles wrote:
 Fair enough :) that'll teach me to hypothesize something without thinking
 about it! I guess I could amend my coinductive proof:
 
 http://hpaste.org/paste/42516/mirrormirror_with_bottom#p42517
 
 does that cover bottom-ness adequately? I can't say I've thought through it
 terribly carefully.

That isn't the usual way to model partially defined values. For instance, you 
can write functions with that datatype that would not be monotone:

  pdefined :: {A : Set} -> Tree A -> Bool
  pdefined ⊥ = false
  pdefined _ = true

It's somewhat more accurate to take the partiality monad:

  data D (A : Set) : Set where
    now   : A -> D A
    later : ∞ (D A) -> D A

  ⊥ : {A : Set} -> D A
  ⊥ = later (♯ ⊥)

Then we're interested in an equivalence relation on D As where two values are 
equal if they either both diverge, or both converge to equal inhabitants of A 
(not worrying about how many steps each takes to do so).

Then, the partially defined trees are given by:

  mutual {A}
    Tree = D Tree'

    data Tree' : Set where
      node : Tree -> A -> Tree -> Tree'
      tip  : Tree'

And equivalence of these trees makes use of the above equivalence for D-
wrapped things. (It's actually a little weird that this works, I think, if you 
expect Tree' to be a least fixed point; but Agda does not work that way).

If you're just interested in proving mirror (mirror x) = x, though, this is 
probably overkill. Your original type should be isomorphic to the Haskell type 
in a set theoretic way of thinking, and the definition of mirror does what the 
Haskell function would do at all the values. So the fact that you can write 
functions on that Agda type that would have no corresponding implementation in 
Haskell is kind of irrelevant.

-- Dan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Categorical description of systems with dependent types

2010-12-02 Thread Dan Doel
On Thursday 02 December 2010 10:13:32 am Petr Pudlak wrote:
   Hi,
 
 recently, I was studying how cartesian closed categories can be used to
 describe typed functional languages. Types are objects and morphisms are
 functions from one type to another.
 
 Since I'm also interested in systems with dependent types, I wonder if
 there is a categorical description of such systems. The problem (as I
 see it) is that the codomain of a function depends on a value passed to
 the function.
 
 I'd happy if someone could give me some pointers to some papers or other
 literature.

There are many different approaches to modelling dependent types in category 
theory. Roughly, they are all similar to modelling logic, but they all differ 
in details.

I think the easiest one to get a handle on is locally Cartesian closed 
categories, although there's some non-intuitive stuff if you're used to type 
theory. The way it works is this: a locally Cartesian closed category C is a 
category such that all its slice categories are cartesian closed. This gets 
you the following stuff:

---

Since C is isomorphic to C/1, where 1 is a terminal object, C is itself 
Cartesian closed assuming said 1 exists. This may be taken as a defining 
quality as well, I forget.

---

Each slice category C/A should be viewed as the category of A-indexed families 
of types. This seems kind of backwards at first, since the objects of C/A are 
pairs like (X, f : X -> A). However, the way of interpreting such f as a
family of types F : A -> * is that F a corresponds to the 'inverse image' of f
over a. So X is supposed to be like the disjoint union of the family F in 
question, and f identifies the index of any particular 'element' of X.

Why is this done? It has nicer theoretical properties. For instance, A -> *
can't sensibly be a morphism, because * corresponds to the entire category of 
types we're modelling. It'd have to be an object of itself, but that's 
(likely) paradox-inducing.

---

Given a function f : B -> A, there's a functor f* : C/A -> C/B, which re-
indexes the families. If you think of this in the more usual type theory 
style, it's just composition: B -f-> A -F-> *. In the category theoretic case,
it's somewhat more complex, but I'll just leave it at that for now.

Now, this re-indexing functor is important. In type theories, it's usually 
expected to have two adjoints:

  Σf ⊣ f* ⊣ Πf

These adjoints are the dependent sum and product. To get a base type that 
looks like:

  Π x:A. B

from type theory. Here's how we go:

  B should be A-indexed, so it's an object of C/A

  For any A, there's an arrow to the terminal object ! : A -> 1

  This induces the functor !* : C/1 -> C/A

  This has an adjoint Π! : C/A -> C/1

  C/1 is isomorphic to C, so C has an object that corresponds to Π!B, which is
  the product of the family B. This object is the type Π x:A. B

In general, ΠfX, with f : A -> B, and X an A-indexed family, F : A -> *, is
the B-indexed family, G : B -> * for ease, where G b = Π x : f⁻¹ b. F x. That
is, the product of the sub-family of X corresponding to each b. In the case of 
B = 1, this is the product of the entire family X.
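
Spelled out as hom-set isomorphisms, with f : A -> B, X an object of C/A
(an A-indexed family) and Y an object of C/B (a B-indexed family), the two
adjunctions say:

  Hom_{C/B}(Σf X, Y) ≅ Hom_{C/A}(X, f* Y)
  Hom_{C/A}(f* Y, X) ≅ Hom_{C/B}(Y, Πf X)

which is just the quantifier-style reading: sum is left adjoint to
re-indexing, and product is right adjoint to it.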

This adjointness stuff is (no surprise) very similar to the way quantifiers 
are handled in categorical logic.

---

That's the 10,000 foot view. I wouldn't worry if you don't understand much of 
the above. It isn't easy material. In addition to locally Cartesian closed 
categories, you have:

  toposes
  hyperdoctrines
  categories with families
  contextual categories
  fibred categories

And I'm probably forgetting several. If you read about all of these, you'll 
probably notice that there are a lot of similarities, but they mostly fail to 
be perfectly reducible to one another (although toposes are locally Cartesian 
closed).

I don't have any recommendations on a best introduction for learning this. 
Some that I've read are:

  From Categories with Families to Locally Cartesian Closed Categories
  Locally Cartesian Closed Categories and Type Theory

but I can't say that any one paper made everything snap into place. More that 
reading quite a few gradually soaked in. And I still feel rather hazy on the 
subject.

Hope that helps some.

-- Dan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Categorical description of systems with dependent types

2010-12-02 Thread Dan Doel
On Thursday 02 December 2010 7:54:18 pm Larry Evans wrote:
 [snip]
 *Maybe* the computer science people are trying to minimize the concepts.
 In this case, the one concept common to both the sum and product ( in
 the math peoples viewpoint) is there's a function:
 
   field2type: field_name -> Type
 
 in the case of a product or record, r, it's:
 
   product = f:fieldname -> field2type(f)
 
 in the case of a disjoint sum its:
 
   sum = (f:fieldname, field2type(f))
 
 or something like that.

I'll be honest: I don't really know what you're saying above. However, I can 
throw in my 2 cents on the naming thing.

The product-function naming obviously comes from asking: what if the type of
later components of a tuple could depend on earlier ones, and what if the
result type of a function could depend on its argument? These are quite
ordinary questions for a practicing programmer to think about, which is
probably why computer science people (supposedly) favor this naming. I might
even use this naming myself when appropriate, although I'd say 'tuple' (or
record) instead of 'product' to (hopefully) avoid confusion.

The sum-product naming, by contrast, comes from a different observation: we
have n-ary sums and products for any natural n; for instance, A + B and A * B
are binary.
This can be viewed as sums and products of families of types indexed by finite 
sets. What if we generalize this to sums and products of families indexed by 
*arbitrary* types? Unlike the above, I don't think this is something that is 
likely to be sparked naturally during programming. However, it's quite close 
to similar constructions in set theory, which probably explains why 
mathematicians favor it.

If you get into category theoretic views of programming languages, I think the 
sum-product naming makes sense there, and it's hard to make the product-
function naming work. For instance:

  The A-indexed product Π x:A. F[x] has an A-indexed family of projections:
   proj a : (Π x:A. F[x]) -> F[a]

  The A-indexed sum Σ x:A. F[x] has an A-indexed family of injections:
   in a : F[a] -> (Σ x:A. F[x])

Which are visibly generalizations of the definitions of (co)products you'd 
encounter modelling non-dependently typed languages. Perhaps this is on the 
math end of things, though.
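
For what it's worth, here's a rough Haskell rendering of those two signatures,
using a made-up Bool-indexed family and singletons for the index (all the
names here are mine, purely for illustration):

  {-# LANGUAGE DataKinds, GADTs, KindSignatures, RankNTypes, TypeFamilies #-}

  data SBool (b :: Bool) where
    STrue  :: SBool 'True
    SFalse :: SBool 'False

  -- The indexed family F[x]
  type family F (b :: Bool) where
    F 'True  = Int
    F 'False = String

  -- Π x:Bool. F[x], with its Bool-indexed family of projections
  type Pi = forall b. SBool b -> F b

  proj :: SBool b -> Pi -> F b
  proj b f = f b

  -- Σ x:Bool. F[x], with its Bool-indexed family of injections
  data Sigma where
    MkSigma :: SBool b -> F b -> Sigma

  inj :: SBool b -> F b -> Sigma
  inj = MkSigma

With a two-element index, proj STrue and proj SFalse are just the two
projections out of a product, and inj STrue and inj SFalse are the two
injections into a sum, which is the generalization being pointed at.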

-- Dan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Serialization of (a -> b) and IO a

2010-11-12 Thread Dan Doel
On Thursday 11 November 2010 9:23:13 pm Luke Palmer wrote:
> Admittedly, the class of reasoning I usually use in my Haskell
> programs, and the one that you talked about using earlier this
> message, is essentially "seq doesn't exist".  However, I prefer to use
> this class of reasoning because I would prefer if seq actually didn't
> exist (er, I think the implication goes the other way).

seq can still exist, I think. And I still want it (well, I could leave it for 
functions, really, I think). What doesn't exist, loosely speaking, is bottom, 
which means:

  forall x y. x `seq` y = y

And so seq = flip const. That makes things like:

  foo ... = ... (x `seq` y) ...

appear useless, unless we remember that denotational semantics aren't the end-
all and be-all, in which case we can recognize that seq is used as an 
operational hint to the compiler, same as par and pseq. It just happens to be 
the case that in Haskell's ordinary semantics, merely giving the denotational 
semantics of seq is sufficient to induce the right operational behavior, 
provided the compiler isn't bone headed (and further, is lenient enough to 
allow sufficiently smart compilers to disregard our naive 'evaluate x before 
y' reading of seq if it's more efficient to do so).
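
As a small illustration of that reading (my own example, nothing from Luke's
mail): in a strict left fold, seq is there purely to keep the accumulator
evaluated. Under the 'no bottom' assumption it denotes flip const, but we
still want the operational effect:

  -- seq used as an operational hint: keep the accumulator evaluated.
  -- Denotationally (ignoring bottom) this is the same function as the lazy
  -- version; operationally it avoids building a long chain of thunks.
  sumStrict :: [Integer] -> Integer
  sumStrict = go 0
    where
      go acc []     = acc
      go acc (x:xs) = let acc' = acc + x
                      in acc' `seq` go acc' xs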

> Not so for
> serialize: I would like a serialize function, but I don't want the
> semantic burden it brings.  If only there were a way to...
>
> oh yeah.
>
> serialize :: (a -> b) -> IO String
>
> I still don't really get what we're arguing about.

I don't know.

-- Dan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Serialization of (a -> b) and IO a

2010-11-12 Thread Dan Doel
It took me a bit to decide whether this was an adequate counter to my 
objection, but ultimately, I don't think it is. I'll try to explain as well as 
possible.

On Friday 12 November 2010 1:40:10 pm rocon...@theorem.ca wrote:
> As you are well aware in Coq, and in Agda we don't have an extensionality
> axiom; however this is not a problem because we simply use setoid equality
> to capture extensional reasoning and prove that in every specific case
> where we want to use extensional reasoning it is sound to do so.

If we are talking about writing programs in a closed world, and proving things 
about said world, this is fine. But this is not always what we are doing in 
Haskell. For instance, if I am writing a library, am I justified in changing:

  f (g x) (g x)

to:

  let y = g x in f y y

without bumping the version number to flag a semantics-breaking change? In a
closed world, where I have proved that all the relevant code I have written
admits extensionality, the answer is yes. But in an open world, where people
are free to use serialize to inspect my implementations in pure code, the
answer is no.
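
As a minimal sketch of the kind of change I mean (made-up library code, with g
standing in for some helper and (+) for f):

  -- Two versions of the same exported definition. They denote the same
  -- function, so today I may swap one for the other freely; a pure
  -- serialize would let clients observe the difference in pure code.
  g :: Integer -> Integer
  g = (* 3)                       -- stand-in helper

  libF, libF' :: Integer -> Integer
  libF  x = g x + g x             -- before: f (g x) (g x)
  libF' x = let y = g x in y + y  -- after:  let y = g x in f y y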

> Now suppose I add the following consistent axiom to Coq:
>
> Axiom Church-Turing :
>   forall f:Nat -> Nat, exists e:Nat, forall n:Nat, {e}(n) = f(n)

So, your argument is, if I'm not mistaken, that intensional type theory can be 
consistently extended with this axiom. I believe this. In fact, I'd even be 
willing to accept that purely as an axiom, it might be consistent to add it to 
extensional type theory. However, this is not the whole story.

One question I have to ask is: how does this compute? In Agda, and I suspect 
Coq, axioms simply do not compute (and this is the reason I'd be willing to 
believe Church-Turing is consistent with an extensional theory). However, 
serialize *will* compute, and I expect it to compute like this:

  forall e:Nat. serialize {e} = e

And I can believe that even this is consistent with intensional type theory.
In an intensional theory, I can imagine (Nat -> Nat) being interpreted not as
a type of functions, but as a type of algorithms. There may be many algorithms
that compute the same function, and Nat -> Nat contains them all.
serialize/Church-Turing returns the source code of each one.
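
A crude way to picture that intensional reading (a toy model of my own, not
anything from Coq or the thread, and using Integer in place of Nat): pretend
inhabitants of Nat -> Nat are algorithms that carry their own code, so
serialize is just a projection and it computes exactly as above:

  -- Toy model: an 'algorithm' is a function paired with its source code e.
  data Algorithm = Algorithm
    { source :: String               -- the code e in {e}
    , run    :: Integer -> Integer   -- the function it computes
    }

  -- serialize {e} = e, by projection
  serializeAlg :: Algorithm -> String
  serializeAlg = source

  -- Two algorithms with different code computing the same function;
  -- quotienting by 'run' (extensionality) is exactly what serializeAlg
  -- fails to respect.
  algA, algB :: Algorithm
  algA = Algorithm "\\x -> x + x" (\x -> x + x)
  algB = Algorithm "\\x -> 2 * x" (\x -> 2 * x)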

When we assert extensionality, though, Nat -> Nat can no longer be inhabited
by mere algorithms. In an extensional theory, Nat -> Nat is a quotient
of the set of algorithms. And then we have only a few options:

  1) serialize/Church-Turing violates the quotient
  2) serialize/Church-Turing computes extensional equality of algorithms, and
 chooses a single 'source code' representation for each equivalence class
  3) It is impossible for two different pieces of source code to compute the
 same function (so serialize {e} = e is valid because there is no e' /= e
 such that forall x. {e} x = {e'} x)
  *) ...

2 is, I think, impossible, and 3 is simply false, so the choice is 1, meaning 
that ITT + a Church-Turing that actually computes cannot be consistently 
extended with extensionality. And it is this that I care about, and what I was 
referring to in the mail you replied to:

  - Intensional type theory can be consistently extended to extensional type
theory.
  - serialize is anti-extensional (similar to the way impredicative Set
and injective type constructors are anti-classical).

And it is this consistent extension that I care about. And, going back to my 
library example, the reason to care about this is abstraction. I want clients 
of my library to program to the abstraction of my code as functions, not as 
algorithms/source code, because that allows me leeway to tweak the algorithms 
as I see fit, so long as they compute the same function.

And, for that matter, the compiler can do this kind of mucking around with 
algorithms. I think it's a big deal if, to enable optimization of a function, 
we have to prove that all code in our program treats it extensionally (I don't 
believe the compiler can do it, currently, barring serialize simply not being 
used; and say goodbye to separate compilation).

> [1] Actually the realizer for serialize is *weaker* than this axiom.  The
> realizer for serialize would be (Nat -> Nat) -> IO Nat instead of (Nat ->
> Nat) -> Nat, so should have less impact than the Church-Turing axiom.

I have no problem with serialize :: (Nat -> Nat) -> IO Nat. The problem is
with (Nat -> Nat) -> Nat. The former can respect the quotient in question via
IO hand waving. Perhaps this distinction is frivolous, but it seems wrong to
make the language in general anti-extensional when it could instead be put
inside the sin bin. My quarrel is more with the suggestion that we dump out
the sin bin and just be careful.

-- Dan

[*] I have seen a paper about a type theory with quotients that has a 
function:

  choose : T / R -> T

such that:

  choose (squish x) = x

but it was meticulously designed to allow that in ways that I don't really 
remember, so I'm going to pretend it 

Re: [Haskell-cafe] Serialization of (a -> b) and IO a

2010-11-11 Thread Dan Doel
On Thursday 11 November 2010 4:25:46 am Thomas Davie wrote:
> I don't think I agree, I didn't see a rule f == g => serialise f ==
> serialise g anywhere.

That equal things can be substituted for one another is one of the fundamental 
ideas of equality (the other is that anything is equal to itself). In fact, in 
second order logic, equality can be *defined* as (roughly):

  (x = y) means (forall P. P x -> P y)

That is, x is equal to y if all predicates satisfied by x are also satisfied 
by y. Using this, one can derive other typical laws for equality. Transitivity 
is pretty easy, but surprisingly, even symmetry can be gotten:

  If P z is z = x, using substitution we get x = x -> y = x,
  and x = x is trivially true.

And of course, we also get congruence:

  Given a function f, let P z be f x = f z,
  substitution yields f x = f x -> f x = f y,
  where f x = f x is again trivial.
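
For the curious, that second-order definition and both little derivations can
be transcribed almost literally into Haskell as Leibniz equality (a standard
encoding; the particular names here are mine):

  {-# LANGUAGE RankNTypes #-}

  -- x = y  means  forall P. P x -> P y
  newtype Equal a b = Equal { subst :: forall p. p a -> p b }

  refl :: Equal a a
  refl = Equal id                  -- anything is equal to itself

  -- Symmetry, via the predicate P z := (z = a)
  newtype Flip a z = Flip { unFlip :: Equal z a }

  symm :: Equal a b -> Equal b a
  symm ab = unFlip (subst ab (Flip refl))

  -- Congruence, via the predicate P z := (f a = f z)
  newtype Push f a z = Push { unPush :: Equal (f a) (f z) }

  cong :: Equal a b -> Equal (f a) (f b)
  cong ab = unPush (subst ab (Push refl))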

The equality that people typically expect to hold for Haskell expressions is 
that two such expressions are equal if they denote the same thing, as Max 
said. Expressions with function type denote mathematical functions, and so if 
we have something like:

  serialize :: (Integer -> Integer) -> String

it must be a mathematical function. Further, its arguments will denote
functions, too, and equality on mathematical functions can be given point-wise:

  f = g iff forall x. f x = g x

Now, here are two expressions with type (Integer -> Integer) that denote equal
functions:

  \x -> x + x
  \x -> 2 * x

So, for all this to work out, serialize must produce the same String for both 
of those. But in general, it isn't possible to decide if two functions are 
point-wise equal, let alone select a canonical representative for each 
equivalence class of expressions that denote a particular function. So there's 
no feasible way to implement serialize without breaking Haskell's semantics.

How does making serialize :: (Integer -> Integer) -> IO String solve this?
Well, that's another story.

-- Dan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

