Re: [Haskell-cafe] generalized newtype deriving allows the definition of otherwise undefinable functions

2010-03-10 Thread wren ng thornton

Wolfgang Jeltsch wrote:

Hello,

some time ago, it was pointed out that generalized newtype deriving could be 
used to circumvent module borders. Now, I found out that generalized newtype 
deriving can even be used to define functions that would be impossible to define 
otherwise. To me, this is surprising since I thought that generalized newtype 
deriving was only intended to save the programmer from writing boilerplate 
code, not to extend expressiveness.


Let's dig down and figure out the problem. When you annotate the type 
(Wrapped a) with deriving (Iso a), what are you saying? You're saying 
that the compiler should derive an instance (Iso a (Wrapped a)). Well, 
that instance gives you the method conv :: forall f. f a -> f (Wrapped a). 
The funny thing is that the only implementation for ---something like--- 
that type would be fmap Wrap. But that implementation would introduce 
the requirement (Functor f), which is missing from the type of conv. 
The derivation is possible because of the representation model, where a 
and (N a) have the same runtime representation; but it violates the 
intensional distinction between those types by presuming functoriality 
of every type constructor of kind (k -> *). We can't do that legally in 
the language, so the GeneralizedNewtypeDeriving implementation is buggy.
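
For concreteness, here is a minimal sketch of the kind of definition 
under discussion (reconstructed from Wolfgang's example as quoted, so 
treat the details as illustrative rather than exact):

{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances,
             GeneralizedNewtypeDeriving #-}

-- Sketch of the problematic derivation: conv claims to work for
-- *every* f of kind (* -> *), with no Functor constraint anywhere.
class Iso a b where
    conv :: f a -> f b

instance Iso a a where
    conv = id

-- GeneralizedNewtypeDeriving reuses the (Iso a a) dictionary at the
-- type (Iso a (Wrapped a)), handing us
--     conv :: forall f. f a -> f (Wrapped a)
-- which could otherwise only be written as (fmap Wrap), i.e. only
-- under a (Functor f) constraint.
newtype Wrapped a = Wrap a deriving (Iso a)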


Regarding the use of GeneralizedNewtypeDeriving for implementing 
functions that are faster than otherwise possible: these particular 
derivations should not be done without the (Functor f) restriction in 
the type of the derived function. It doesn't matter that the derived 
function never actually uses the fmap implementation (assuming that 
implementation is lawful); it matters that the derived function cannot 
be written at all ---efficiently or otherwise--- without assuming such 
an fmap exists.


Special-casing things like this so they require the (Functor f) 
restriction won't solve everything; I'm sure we could create a different 
example that violates a different class. The general solution would seem 
to be making sure that the newtype only occurs in top-level positions 
within the types of the derived functions, where "top-level" means that 
it is not embedded within an unknown type constructor, though we can 
legitimately bake in support for well-known type constructors like (->), 
(,), Either, Maybe, [],...


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] If wishes were horses... (was: Re: definition of sum)

2010-03-11 Thread wren ng thornton

David Leimbach wrote:

Note that foldl' has a ' to indicate that it's not the same as foldl
exactly.  I would propose that sum' exist as well as sum, and that sum be
lazy.


I wish Haskell allowed ! to occur (non-initially) in alphanum_' 
identifiers as well as in symbolic ones. Then we could be more 
consistent about having ! mean strictness like it does with ($!), bang 
patterns, strict fields,... (too bad about (!) and (!!)). The prime has 
so many other uses, it's a shame it gets used up for strict/lazy 
variants, as if there were no other variations.


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] generalized newtype deriving allows the definition of otherwise undefinable functions

2010-03-12 Thread wren ng thornton

Wolfgang Jeltsch wrote:

On Thursday, 11 March 2010 00:37:18, wren ng thornton wrote:

Wolfgang Jeltsch wrote:

Hello,

some time ago, it was pointed out that generalized newtype deriving could
be used to circumvent module borders. Now, I found out that generalized
newtype deriving can even be used to define functions that would be
impossible to define otherwise. To me, this is surprising since I thought
that generalized newtype deriving was only intended to save the
programmer from writing boilerplate code, not to extend expressiveness.

Let's dig down and figure out the problem. When you annotate the type
(Wrapped a) with deriving (Iso a), what are you saying? You're saying
that the compiler should derive an instance (Iso a (Wrapped a)). Well,
that instance gives you the method conv :: forall f. f a -> f (Wrapped a).
The funny thing is that the only implementation for ---something like---
that type would be fmap Wrap.


If the parameter of f is contravariant then we would need a “contraMap”, not 
an fmap. Example:


Right, but it's the same basic idea, just violating the (nonexistent?) 
ContraFunctor class instead of the Functor class. The underlying problem 
---which is what I was trying to identify--- is that generalized newtype 
deriving assumes that every tycon of kind * -> * is a functor (i.e., a 
co- or contravariant endofunctor on all of Hask), and it's that 
assumption which causes breakage.
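
To illustrate with a made-up contravariant example (mine, not Wolfgang's):

-- A contravariant position: b occurs to the left of the arrow.
newtype Pred b = Pred (b -> Bool)

-- For Pred, the analogue of fmap maps over the *inverse* direction:
contraMap :: (b -> a) -> Pred a -> Pred b
contraMap g (Pred p) = Pred (p . g)

-- So the derived conv :: Pred a -> Pred (Wrapped a) could only be
-- written honestly as (contraMap unWrap), writing unWrap for the
-- inverse of Wrap; that requires a ContraFunctor-style constraint
-- rather than a Functor one.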


My conservative solution of disallowing derived methods where the 
newtype occurs beneath an unknown tycon would, I think, still work. The 
only difference is that in addition to considering all instances of 
Functor[1] as well known (along with a few special cases: e.g., the 
first argument to (->) or (,)), we could consider all instances of 
ContraFunctor to be well known as well. That is, if there were an 
official version of that class.


My solution is conservative in that it doesn't offer any support for 
GADTs or type families, which are the particular concern in the bug 
tracker ticket. The problem for them is the same one, only it's even 
more pertinent, since some things given the kind * -> * should really 
have a different kind like |*| -> *, since their argument is an index 
instead of a type--- which means they're _really_ not functors on Hask.



Let us look at the Set example from John Meacham. Set is a (covariant) 
functor, not a contravariant functor. However, it isn’t a functor from and to 
the category of Haskell types and functions


Right, which is why the assumption that kind * -> * implies functoriality 
is broken. Again, in some sense even the kind is wrong; it's something 
more like Ord -> *, but that only underscores the point.



[1] And Monad since Monad doesn't state the Functor requirement.

--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: If wishes were horses...

2010-03-12 Thread wren ng thornton

Ketil Malde wrote:

What should the type look like?  If memory serves, Clean allows bangs in
type signatures, something like:
 
  foldl' :: (a -> b -> a) -> !a -> [b] -> a


but I thought it just added a seq under the hood, much like bang
patterns like

  foldl' f !z xs = ...

do in Haskell, so it's not like !a is a separate type.  Or?



The usual approach I've seen is not to distinguish strict and lazy 
datatypes, but rather to distinguish strict and lazy functions, e.g. by 
having two different arrows: (->) for lazy functions and (!->) for 
strict ones.[1]


There are reasonable arguments for wanting to distinguish strict/lazy 
datatypes instead, though that's usually called distinguishing lifted vs 
unlifted types, which has different effects on the semantics.


I don't recall which approach Clean uses.



Would something like this work?  It seems to me that strictness doesn't
apply to result types


Which is one benefit of the two arrows version: there's no way to talk 
about making the result strict.



and you need to track the
relationship between a and a', since you probably want to allow a lazy
value as e.g. the second parameter of foldl'.


Usually, if you're going to be distinguishing lifted and unlifted types 
then you'll want to be able to say that the unlifted values can always 
be coerced into the corresponding lifted type. Whether that involves 
subtyping, autoboxing, etc. will depend on the compiler and the language.



[1] Actually, just having two is insufficiently expressive. What you 
actually want is to make the (->) type constructor take three arguments: 
the parameter type, the return type, and an index belonging to the set 
{Strict,Lazy}. That way you can use type variables to unify the 
strictness behaviors of higher-order functions with those of their 
function arguments (i.e., the HOF is strict or lazy in various arguments 
depending on the strictness properties of the functional arguments).
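
As a rough rendering of that idea (entirely hypothetical; I'm sketching 
with GADTs and phantom index types rather than proposing surface syntax):

{-# LANGUAGE GADTs, EmptyDataDecls #-}

-- Hypothetical index types for the strictness of an arrow.
data Strict
data Lazy

data Fn s a b where
    SFn :: (a -> b) -> Fn Strict a b
    LFn :: (a -> b) -> Fn Lazy   a b

-- Application forces the argument exactly when the index says Strict.
app :: Fn s a b -> a -> b
app (SFn f) x = x `seq` f x
app (LFn f) x = f x

-- A higher-order function polymorphic in s inherits the strictness
-- of its functional argument:
twice :: Fn s a a -> a -> a
twice f = app f . app f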


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: If wishes were horses...

2010-03-15 Thread wren ng thornton

Ben Millwood wrote:

In general, laziness behaviour can
get complicated quickly and so I'm not convinced that the type
signature is a good home for that information.


Certainly it can. A lot of the same problems arise in the logic 
programming community under the topic of modes, i.e. whether a logic 
variable must be ground (or rather, how defined it must be) before it's 
safe to run the function backwards. Though, at present AFAIK, they've 
contented themselves with rough approximations like lifted/unlifted 
value types or strict/lazy arrows, either of which can catch the simple 
cases.


Personally, I think the only way to capture *all* strictness information 
is if we have full dependent types (or worse, for logic languages, since 
their dependencies lack the directionality of usual dependent types). If 
we had full dependent types, and went with the lifted/unlifted 
distinction instead of the strict/lazy one, then we could give `maybe` 
the type:


maybe :: (_:b) -> (_:a -> b') -> (m:Maybe a)
      -> (case m of Nothing => b ; Just _ => b')

Would such a type be helpful? <shrug> I think adding full dependent 
types is a bit much if all we're interested in is strictness behavior. 
But, as you say, it seems very unlikely that we can encode strictness 
behaviors which may depend on particular runtime values without DTs.


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Parsec to parse tree structures?

2010-03-17 Thread wren ng thornton

david fries wrote:

My concern was how you would perform random access in a functional
parser. Your points are interesting too. I guess if we really had
wanted to work with parsed objects, retaining the shared references
would have been a must.


Out of curiosity, was there *really* a need for random access? It sounds 
like you could've written things to do it in one forward pass, keeping a 
side table of the targets of dangling pointers, and then back-patching 
once you've reached the object being pointed at. You may even be able to 
use laziness to do the back-patching instead of needing STRefs. The only 
problem I could see with that approach is if there was funny bit-packing 
going on so that objects were overlapped in the same memory region.
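
For instance, here's a minimal sketch of the lazy back-patching trick 
(my own toy illustration: nodes carry integer ids, and children are 
named by id, possibly as forward references):

import qualified Data.Map as M

data Raw  = Raw  Int [Int]   -- a node id and its children's ids
data Node = Node Int [Node]  -- the linked-up structure

-- Tie the knot: 'table' is consulted lazily while it is being built,
-- so forward references resolve themselves with no STRefs and no
-- second pass, and shared references stay shared.
link :: [Raw] -> M.Map Int Node
link raws = table
  where
    table = M.fromList
        [ (i, Node i [table M.! j | j <- js]) | Raw i js <- raws ]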


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Parsec to parse tree structures?

2010-03-18 Thread wren ng thornton

Stephen Tetley wrote:

hi wren

Where I've used it, random access does seem conceptually more
satisfactory than trying to avoid it.


I was thinking more about performance issues (avoiding disk seeks) which 
would also alleviate the problem of needing random access when it's not 
available.




For complex formats e.g. PECOFF or TrueType, you might only want to
parse certain tables [*]. After parsing the index table you could
build a list of parsers to run sequentially on the body of the file
(including parsers that just drop unwanted tables), but this seems too
much work (and too much to go wrong)


Even so, you could linearize the access in order to minimize seeking 
back and forth. The linearization could even be done automatically by 
having the table of what needs backpatching also serve as a work queue 
(when done with a block, seek to the closest next block; when the queue 
is empty, you're done); see the sketch below. The backpatching queue 
also preserves structure sharing.
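
Something like this, say (hypothetical types; 'parse' stands in for 
whatever reads one block and reports the further offsets it needs 
backpatched, and is assumed never to re-request a processed offset):

import qualified Data.Set as S

-- Process blocks in a seek-minimizing order: always jump to the
-- pending offset nearest the current file position.
processAll :: Monad m => (Integer -> m [Integer]) -> [Integer] -> m ()
processAll parse = go 0 . S.fromList
  where
    go cur pending = case nearest cur pending of
        Nothing   -> return ()   -- queue empty: done
        Just next -> do
            wanted <- parse next
            go next (S.delete next pending `S.union` S.fromList wanted)

    -- the pending offset nearest the current position, if any
    nearest cur s = case (S.lookupLE cur s, S.lookupGE cur s) of
        (Just lo, Just hi) -> Just (if cur - lo <= hi - cur then lo else hi)
        (Just lo, Nothing) -> Just lo
        (_,       hi)      -> hi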


Perhaps I've just been parsing too many multigigabyte files lately... :)

--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] haskell platform questions

2010-03-21 Thread wren ng thornton

Gregory Collins wrote:

Warren Harris warrensomeb...@gmail.com writes:


I downloaded the new haskell-platform-2010.1.0.0-i386.dmg today... ran the
uninstaller, ghc installer and the platform installer. When I run  ghci, it
seems to work fine, but when I try cabal, I get this crash:

$ cabal --version
dyld: unknown required load command 0x8022
Trace/BPT trap

Any suggestions? (I'm on 10.5.8). Maybe I'll try the macport install process
instead.


Is anyone else still on Leopard who can test this? I am worried that the
binaries (which were linked on Snow Leopard) are not working on Leopard.


I'm still on 10.5.8. I don't have cabal-install installed yet, but I 
just installed GHC-6.12.1/HP-2010.1.0.0. I can verify that ghci works 
fine so far. I'll check out cabal-install in the next couple days.


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] haskell platform questions

2010-03-22 Thread wren ng thornton

Gregory Collins wrote:

wren ng thornton w...@freegeek.org writes:


I'm still on 10.5.8. I don't have cabal-install installed yet, but I just
installed GHC-6.12.1/HP-2010.1.0.0. I can verify that ghci works fine so
far. I'll check out cabal-install in the next couple days.


If there is an issue here it'd be with the binaries that ship with the
platform, not GHC; can you check /usr/local/bin/cabal to see if yours
has the same issue?


w...@semiramis:~ $ ls /usr/local
ls: /usr/local: No such file or directory
w...@semiramis:~ $ ls /usr/bin/cabal
ls: /usr/bin/cabal: No such file or directory


But http://hackage.haskell.org/platform/new/contents.html tells me 
cabal-install is supposed to ship with the platform...


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] haskell platform questions

2010-03-24 Thread wren ng thornton

Gregory Collins wrote:

wren ng thornton w...@freegeek.org writes:


w...@semiramis:~ $ ls /usr/local
ls: /usr/local: No such file or directory
w...@semiramis:~ $ ls /usr/bin/cabal
ls: /usr/bin/cabal: No such file or directory

But http://hackage.haskell.org/platform/new/contents.html tells me
cabal-install is supposed to ship with the platform...


It should; there's an installation script that's *supposed* to put a
symlink into /usr/local/bin/cabal from
/Library/Frameworks/HaskellPlatform.framework/bin/.

Binaries should be there regardless.


Looks like they are. Perhaps the script is missing a `mkdir -p 
/usr/local/bin` ?  Does the installer keep a log anywhere? (Looks like 
neither GHC nor HP give installation receipts...)


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] haskell platform questions

2010-03-24 Thread wren ng thornton

Don Stewart wrote:

You should file a bug on the Haskell Platform bug tracker.

http://haskell.org/haskellwiki/Haskell_Platform#Trouble_shooting

And I'm CC'ing the dmg maintainer -- it may also be a GHC issue as well.

-- Don

warrensomebody:
I downloaded the new haskell-platform-2010.1.0.0-i386.dmg today... ran  
the uninstaller, ghc installer and the platform installer. When I run  
ghci, it seems to work fine, but when I try cabal, I get this crash:


$ cabal --version
dyld: unknown required load command 0x8022
Trace/BPT trap


Same thing here.

w...@semiramis:~ $ cabal --version
dyld: unknown required load command 0x8022
Trace/BPT trap

OSX = 10.5.8
CPU = Core 2 Duo


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC vs GCC

2010-03-27 Thread wren ng thornton

David Menendez wrote:

On Sat, Mar 27, 2010 at 12:56 AM, Thomas DuBuisson
thomas.dubuis...@gmail.com wrote:

Using bang patterns barely helped at all here. Using rem
instead of mod made the time go from 45s to 40s. Now, using -fvia-C
really helped (when I used rem, but not with mod): it went down to
10s.

Bang patterns should have helped tons - it isn't GHC that's at fault
here, and yes, it does TCO. I attached a version w/ excessive bangs
below. Did you compile with ghc --make -O3 -fforce-recomp?


Does -O3 actually do anything? GHC only goes up to -O2.


Well, GHC does have an -O3,[1] but it's not a good idea to use it. Some 
of the optimizations that -O3 performs can result in slower code for 
particular programs, whereas -O2 is safe and never results in 
pessimizations.



[1] Unless recent versions have removed it.

--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Are there any female Haskellers?

2010-03-27 Thread wren ng thornton

Alberto G. Corona wrote:

because math abilities  are not a -primary- reason for survival.
 Tools engineering and mastering is.


I don't see the difference. Being able to use a lever, wheel, pulley, 
fire,... is obviously helpful for survival. But intellectual tools 
like mathematics, logic, and computer science don't bear any particular 
relation to physical tools. If someone's adept at using hammers, 
pliers,... there's no reason to suspect that they'd be adept with 
reasoning puzzles. And just because someone's good at category theory is 
no reason to suspect they'd be able to repair a car.


Do not be misled by the fact that CS departments are often lumped in 
with engineering. For that matter, do not be misled by modern 
engineering which bears little resemblance to any tool-using 
evolutionary advantage that may have molded homo sapiens.


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Are there any female Haskellers?

2010-03-27 Thread wren ng thornton

wren ng thornton wrote:

Alberto G. Corona wrote:

because math abilities  are not a -primary- reason for survival.
 Tools engineering and mastering is.


I don't see the difference. 


(That is, the difference between CS and mathematics. Conversely, I don't 
see the similarity between physical tools and intellectual tools.)


Being able to use a lever, wheel, pulley, 
fire,... is obviously helpful for survival. But intellectual tools 
like mathematics, logic, and computer science don't bear any particular 
relation to physical tools. If someone's adept at using hammers, 
pliers,... there's no reason to suspect that they'd be adept with 
reasoning puzzles. And just because someone's good at category theory is 
no reason to suspect they'd be able to repair a car.


Do not be misled by the fact that CS departments are often lumped in 
with engineering. For that matter, do not be misled by modern 
engineering which bears little resemblance to any tool-using 
evolutionary advantage that may have molded homo sapiens.


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Are there any female Haskellers?

2010-03-28 Thread wren ng thornton

Jon Fairbairn wrote:

Another (provocative) observation is that most of the women
programmers I've known were good at it and thought they might
not be, but most of the men claimed to be good at it but
were not.


I've observed this too, but it's a bit droll. Let:

p = proportion of people who think they're good but aren't
q = proportion who think they're not good but are
M = number of men in CS
W = number of women in CS

Given that M > W, we'll naturally find that p*M > q*W if p and q are 
even remotely comparable, regardless of whether p and q are independent 
of gender or not.


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Are there any female Haskellers?

2010-03-28 Thread wren ng thornton

Günther Schmidt wrote:
One thing that I keep hearing is "I'm not trying to be offensive."  I 
think it's easy to get so caught up in not being offensive that we 
don't make any progress. It's impossible not to offend people -- but 
it is possible to take the time to listen, correct problematic 
behavior, and communicate what you've learned to others.


One thing I do notice: one starts with a harmless question and, out of 
the blue, it suddenly becomes political. In both ways. Is there really a 
need for this?


Trying to offend (or not) bears no particular relation to causing 
offense (or not). In particular, claiming you weren't trying to offend 
is itself likely to offend many feminists. To understand why you should 
read through


http://www.derailingfordummies.com/

Not that you were intending to derail, but because derailing is a fact 
of social interaction which intentional communities must defend against. 
Dealing with derailing and similar issues is a fact of life for 
feminists. And all the women I know in CS or mathematics count 
themselves as feminists.


Your "harmless question" was, by its very nature, a political question, 
because it touches upon many issues about the presence and role of women 
within society (the HCafe community in particular). The "harmless" 
question gave license to others to make misogynistic comments on this 
thread, comments you'd now like to distance yourself from accepting 
culpability for. If the question were really so harmless, surely you 
wouldn't be so keen to distance yourself from the responses it created.


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Are there any female Haskellers?

2010-03-28 Thread wren ng thornton

Jason Dagit wrote:

On Sun, Mar 28, 2010 at 8:29 PM, wren ng thornton w...@freegeek.org wrote:


Jon Fairbairn wrote:


Another (provocative) observation is that most of the women
programmers I've known were good at it and thought they might
not be, but most of the men claimed to be good at it but
were not.


I've observed this too, but it's a bit droll. Let:

   p = proportion of people who think they're good but aren't
   q = proportion who think they're not good but are
   M = number of men in CS
   W = number of women in CS

Given that M > W, we'll naturally find that p*M > q*W if p and q are even
remotely comparable, regardless of whether p and q are independent of gender
or not.


I recall going to a PhD defense several years ago about gender differences
in computer science.  The dissertation is here:
http://ir.library.oregonstate.edu/dspace/bitstream/1957/4954/1/FinalVersion.pdf

A few take-away points I recall from the defense:


Oh sure :)  I was merely stating that the null hypothesis is sufficient 
to account for the observations made. (As it almost always is for 
psycho/social studies of gender.) There's also an interesting result 
that there's an inverse correlation between actual skill and claimed 
skill (regardless of the particular skill, and AFAIR regardless of gender).


But surely this discussion is more appropriate to cognitive-c...@haskell.org

--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] More Language.C work for Google's Summer of Code

2010-03-30 Thread wren ng thornton

Stephen Tetley wrote:

Much of the behaviour of CPP is not defined and
often inaccurately described, certainly it wouldn't appear to make an
ideal one summer, student project.


But to give Language.C integrated support for preprocessing, one needn't
implement CPP; one only needs to implement the right API for a 
preprocessor to communicate with the parser/analyzer.


Considering all the folks outside of C who use the CPP
*cough*Haskell*cough*, having a stand-alone CPP would be good in its own
right. In fact, I seem to recall there's already one of those floating
around somewhere... ;)

I think it'd be far cooler and more useful to give Language.C integrated
preprocessor support without hard-wiring it to the CPP. Especially given
that there are divergent semantics among different CPP implementations,
and that we could easily imagine wanting to use another preprocessor
(e.g., for annotations, documentation, etc.).
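
To sketch the shape of such an API (invented names for illustration 
only, not Language.C's actual interface):

-- Hypothetical plug-in point: anything that can expand a source file
-- into text the parser can consume may serve as the preprocessor.
data CppOption = Define String String | IncludeDir FilePath

class Preprocessor pp where
    preprocess :: pp -> [CppOption] -> FilePath
               -> IO (Either String String)

The parser entry point would then be parameterized over any such pp, 
rather than shelling out to one hard-wired cpp.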

--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Data Structures GSoC

2010-03-31 Thread wren ng thornton

Nathan Hunter wrote:

Hello.

I am hoping to take on the Data Structures project proposed two years ago by
Don Stewart here <http://hackage.haskell.org/trac/summer-of-code/ticket/1549>,
this summer.
Before I write up my proposal to Google, I wanted to gauge the reaction of
the Haskell community to this project.
Particularly:

-What Data Structures in the current libraries are in most dire need of
improvement?
-How necessary do you think a Containers Library revision is?


One thing I've seen come up repeatedly is the issue of presenting a 
unified and general interface for Data.Map, Data.IntMap, and related 
things like Data.Set, bytestring-trie, pqueue, etc., which are intended 
to mimic their interface. That alone isn't big enough for a GSoC, but it 
would be a very nice thing to have. Every few months there's a request 
on the libraries@ list to alter, generalize, or reunify the map 
interface in some way.


** Just to be clear, I do not mean coming up with a typeclass, nor doing 
things like the generalized-trie tricks; I just mean a good 
old-fashioned standard API. **


There are countervailing forces in making a good API. On the one hand 
we want functions to do whatever we need; on the other hand we want the 
API to be small enough to be usable/memorable. In the bytestring-trie 
library I attempted to resolve this conflict by offering a small set of 
highly efficient ueber-combinators in the internals module, a 
medium-sized set of functions for standard use in the main module, and 
then pushing most everything else off into a convenience module.


The containers library would do well to follow this sort of design. The 
Data.Map and Data.IntMap structures don't provide the necessary 
ueber-combinators, which has led to the proliferation of convenience 
functions which are more general than the standard-use functions but not 
general enough to make the interface complete. Also, these generalized 
functions are implemented via code duplication rather than having a 
single implementation, which has been known to lead to cut&paste bugs 
and maintenance issues. Provided the correct ueber-combinators are 
chosen, there is no performance benefit to this code duplication either 
(so far as I've discovered with bytestring-trie).
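
For a concrete flavor of the pattern, using Data.Map's own alter (which 
isn't quite the ueber-combinator I mean, since it can't return the old 
value, for instance, but it shows how the everyday functions fall out of 
one shared implementation):

import qualified Data.Map as M

-- alter :: Ord k => (Maybe a -> Maybe a) -> k -> M.Map k a -> M.Map k a

-- The standard-use functions as one-line specializations:
insert' :: Ord k => k -> a -> M.Map k a -> M.Map k a
insert' k v = M.alter (const (Just v)) k

delete' :: Ord k => k -> M.Map k a -> M.Map k a
delete' = M.alter (const Nothing)

adjust' :: Ord k => (a -> a) -> k -> M.Map k a -> M.Map k a
adjust' f = M.alter (fmap f)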


Additionally, it'd be nice if some of the guts were made available for 
public use (e.g., the bit-twiddling tricks of Data.IntMap so I don't 
have to duplicate them in bytestring-trie).


Also it would be nice to develop a cohesive test and benchmarking suite, 
which would certainly be a large enough task for GSoC, though perhaps 
not fundable.


I would be willing to co-mentor API and algorithm design for cleaning 
the cobwebs out of the containers library. I wouldn't have the time for 
mentoring the choice of datastructures or benchmarking however.


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] ANN: logfloat 0.12.1

2010-03-31 Thread wren ng thornton


-- logfloat 0.12.1


This package provides a type for storing numbers in the log-domain, 
primarily useful for preventing underflow when multiplying many 
probabilities as in HMMs and other probabilistic models. The package 
also provides modules for dealing with floating numbers correctly.
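
The core idea, as a stripped-down sketch (the real LogFloat is far more 
careful about infinities, NaN, and operations like addition in the log 
domain):

newtype LogProb = LogProb Double   -- invariant: stores (log p), not p

fromProb :: Double -> LogProb
fromProb = LogProb . log

toProb :: LogProb -> Double
toProb (LogProb x) = exp x

-- Multiplying probabilities is mere addition in the log domain, so a
-- product of thousands of tiny probabilities never underflows to zero.
mulLP :: LogProb -> LogProb -> LogProb
mulLP (LogProb x) (LogProb y) = LogProb (x + y)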



-- Changes since 0.12.0.1


* Fixed a number of bugs where LogFloat values would become NaN when 
they should not. These bugs involved using normal-space positive 
infinity and so would not affect clients using the package for 
probabilities.


The fixes do introduce some extra checks though. If anyone is using the 
package for probabilities in a large enough project and could run some 
benchmarks to see how 0.12.1 compares to 0.12.0.1 I'd love to hear the 
results. (I'm not sure that doing microbenchmarks would actually give 
much insight about real programs.) If the overhead is bad enough I can 
add a LogProb type which ensures things are proper probabilities in 
order to avoid the new checks.





-- Compatibility / Portability


The package is compatible with Hugs (September 2006) and GHC (6.8, 6.10, 
6.12). For anyone still using GHC 6.6, the code may still work if you 
replace the LANGUAGE pragmas with equivalent OPTIONS_GHC pragmas.


The package is not compatible with nhc98 and Yhc because 
Data.Number.RealToFrac uses MPTCs. The other modules should be compatible.




-- Links


Homepage:
http://code.haskell.org/~wren/

Hackage:
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/logfloat

Darcs:
http://code.haskell.org/~wren/logfloat/

Haddock (Darcs version):
http://code.haskell.org/~wren/logfloat/dist/doc/html/logfloat/

--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Apparently, Erlang does not have a static type system, since with hot code loading, this is intrinsically difficult.

2010-04-05 Thread wren ng thornton

Jason Dusek wrote:

2010/04/03 Casey Hawthorne cas...@istar.ca:

Apparently, Erlang does not have a static type system, since with hot
code loading, this is intrinsically difficult.


  It is doubtless hard to statically check a program that is
  not statically available :)


Well, so long as you get the hot-loaded component all at once, you can 
do analysis on the static code before you try running it. Just because 
some code arrives after the program started running doesn't mean that 
code needs to be interpreted and have dynamic safety checks inserted.


Now, if you don't get your code in nice cohesive chunks, but have it 
streaming in all willy-nilly, then you'd need something like gradual 
typing, or what Epigram does, in order to do partial analysis of what 
static code is available, even if it has holes in it.


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Hackage accounts and real names

2010-04-06 Thread wren ng thornton

Ertugrul Soeylemez wrote:

Human identity is much more than just a file descriptor or a map key,
and people from academia often don't get this, because they don't have
to fear using their real names.  Particularly in economically illiberal
countries being known as the author of a certain Haskell package can get
you into trouble either at work or even with the government.  It can
also prevent you from getting a job.


FWIW, I think this reason alone is enough justification to lift the 
restriction.


Personally: after using a consistent pseudonym for years, I was 
eventually convinced[1] that real names are best when you're involved in 
contributing to online communities--- and I do mean *communities*, not 
mere interaction. But, while I feel it's probably in my best interest to 
have my community deeds associated with my real name, I'm under no 
delusions that it is in everyone's best interests that their deeds be 
so. I've never had to deal with illiberal governments. I have, however, 
worked with a number of excellent hackers who live with them.


Also, I don't believe there's anything sacrosanct about "real" names. Is 
the persona we have in the workplace more "real" than the one we have 
with friends? To say nothing of the countless friends of mine who've 
legally changed their names for this or that reason. Our identity and 
the status accorded to us do not come from a name legally given at 
birth; they come from the personae with which we participate in the world.



[1] http://www.catb.org/~esr/faqs/hacker-howto.html#status

--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Haskell.org re-design

2010-04-08 Thread wren ng thornton

Ivan Lazar Miljenovic wrote:

Thomas Schilling nomin...@googlemail.com writes:


http://i.imgur.com/kFqP3.png   Didn't know about CSS's rgba to
describe transparency.  Very useful.


It's a vely nice!! (in a Borat voice)


+1. Both for the design, and for the content.

--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] haskell gsoc proposal for richer numerical type classes and supporting algorithms

2010-04-08 Thread wren ng thornton

Gregory Crosswhite wrote:

On Apr 8, 2010, at 12:25 PM, Casey McCann wrote:


Seriously, floating point so-called numbers don't even have
reflexive equality!


They don't?  I am pretty sure that a floating point number is always equal to 
itself, with possibly a strange corner case for things like +/- 0 and NaN.


Exactly. NaN /= NaN.

Other than that, I believe that (let x = ... in x == x) is true (because 
both sides are the same bitfield by definition); however, it is easy to 
have 'the same number' without having the same bitfield representation, 
due to loss of precision and the like. To say nothing of failures of 
other laws leading to overflow, underflow, etc.
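
A quick demonstration in GHCi:

Prelude> let nan = 0/0 :: Double
Prelude> nan == nan
False
Prelude> 0.1 + 0.2 == (0.3 :: Double)
False

The first is the non-reflexivity at NaN; the second is 'the same number' 
arrived at by two routes that yield different bitfields.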


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Cabal dependency hell

2010-04-13 Thread wren ng thornton

Duncan Coutts wrote:

On Sun, 2010-04-11 at 18:43 +0200, Maciej Piechotka wrote:

 - Privacy problem. I don't want the software to call home with data
without asking.


Obviously it is important that the data be anonymous and that we do not
send stuff without the user's knowledge. While there is not any directly
identifying information in the existing anonymous build reports, one has
to be very careful with how much access the server provides to the
reports or it may become possible to infer identifying information.


One possibility for mitigating the issues here is to have cabal present 
the entire message to the user for scrubbing prior to being 
submitted,[1] similar to how version control systems generally provide a 
summary of the patch (albeit uneditable) when asking for a patch 
description.


That poses other problems (e.g., reports which are too incomplete to be 
helpful or which are intentionally erroneous), and doesn't cover 
everything (e.g., taking advantage of outside knowledge that Duncan is 
one of the few users on Sparc/Linux), but it helps to solve the 
declassification problem (i.e., what data the user is willing to reveal 
to the server).



[1] Ideally in a way which allows scripting the scrubbing so folks can 
just specify preferences once. If we wanted to keep things simple for 
the implementors, then hooking into $EDITOR and assuming folks know how 
to script their favorite editor is one approach. Otherwise we'll want a 
(E)DSL that can be specified in config files.


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Cabal dependency hell

2010-04-13 Thread wren ng thornton

Ketil Malde wrote:

Perhaps it would also be possible to suggest library upgrades likely to
remedy the problem in case of a build failure? 


+1 for good error messages.

+2 for "should I try upgrading libfoo? [yn]" integration (if 
configurable as AlwaysYes, AlwaysAsk, or AlwaysNo).


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Move MonadIO to base

2010-04-18 Thread wren ng thornton
This bounced because I have different emails registered for cafe@ and 
libraries@, so forwarding it along to the cafe.



wren ng thornton wrote:

wren ng thornton wrote:

Heinrich Apfelmus wrote:

Anders Kaseorg wrote:

This concept can also be generalized to monad transformers:

class MonadTrans t => MonadTransMorph t where
    morph :: Monad m => (forall b. (t m a -> m b) -> m b) -> t m a


[...]
However, not all control operators can be lifted this way. Essentially,
while you may downgrade an arbitrary selection of (t m a) values, you
may only promote one (m a) in return, and all have to share the same
return type a. In particular, it's not possible to implement

lift :: (Monad m, MonadTrans t) => m a -> t m a


Why not?
* morph says m (t m a) is a subset of (t m a)
* Monad m says we can fmap :: (a -> b) -> (m a -> m b)
* Monad (t m) says we can return :: a -> t m a

lift ma = morph (\k -> k (fmap return ma))


Or rather,

lift ma = morph (\k -> join (fmap (k . return) ma))

That's what I get for typing without checking. The type of morph 
requires us to Church-encode things needlessly; what we mean to say is: 
morph (fmap return ma).


Again, having m (t m a) -> (t m a) is strictly more expressive than only 
having (m a) -> (t m a), because the former may avail itself of the 
operations/operators of t.


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: cabal: other-modules

2010-04-18 Thread wren ng thornton

Ivan Lazar Miljenovic wrote:

Why are people suddenly using the term "morally" when they mean "why
doesn't this do what I think it should"?  None of its definitions seem
to match what you mean:


The usage on this thread seems a bit nonstandard, but I'm assuming it's 
based off the more general idiom of things being "morally equivalent"--- 
that is, things which *should* be equal because we mean for them to be 
(regardless of what a particular model (e.g., a programming language) says).

In other words, with the right set of beliefs (i.e., moral beliefs, or 
the right religion) they are indeed equal, but the world is violating 
those beliefs somehow. The implication being that the world (model, 
PL,...) should be changed, rather than the beliefs.


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Move MonadIO to base

2010-04-19 Thread wren ng thornton

Anders Kaseorg wrote:

Isaac Dupree wrote:

Do you see the difference? The effects are sequenced in different places.
The return/join pair moves all the effects *outside* the operations such
as catch... thus defeating the entire purpose of morphIO. 


Yes; my question is more whether Wren has a more clever way to get an 
isomorphism (forall b. (m a -> IO b) -> IO b) -> IO (m a) that would make 
the simpler interface work out.  (Or maybe I misunderstood what he was 
getting at.)


Yeah no, that's what I was getting at. Since it doesn't quite work out, 
I should probably rethink my appeal to parametricity re Kleisli arrows.[1]


That is, when we take the monad to be the identity monad, or equivalently 
to be no monad, then parametricity yields: (forall b. (m a -> b) -> b) 
-> (m a). Apparently this makes some sort of appeal to special 
properties of the identity monad (e.g., being both pointed and 
copointed), since it doesn't generalize to every monad in the way I 
suggested.


<musing>
Perhaps the correct version is this?

forall a n. Monad n =>
    (forall b. (m a -> n b) -> n b) -> n (m (n a))

Of course that may not solve your H98 concerns. Not all monads m provide 
a universal law (forall n. n.m.n -> n.m), so to define the law you'd need 
MPTCs to relate m and n. But if we monomorphize to just n=IO, that would 
simplify things; but then we'd need (Traversable m) in order to collapse 
the two layers of IO...
</musing>


[1] Oleg discusses a similar need to be careful about appeals to 
parametricity when dealing with monads:

http://okmij.org/ftp/Computation/lem.html

--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Move MonadIO to base

2010-04-19 Thread wren ng thornton

wren ng thornton wrote:

Anders Kaseorg wrote:

Isaac Dupree wrote:
Do you see the difference? The effects are sequenced in different 
places.

The return/join pair moves all the effects *outside* the operations such
as catch... thus defeating the entire purpose of morphIO. 


Yes; my question is more whether Wren has a more clever way to get an 
isomorphism (forall b. (m a - IO b) - IO b) - IO (m a) that would 
make the simpler interface work out.  (Or maybe I misunderstood what 
he was getting at.)


Yeah no, that's what I was getting at. Since it doesn't quite work out, 
I should probably rethink my appeal to parametricity re Kleisli arrows.[1]


No, my parametricity was correct, just the implementations were wrong:

{-# LANGUAGE RankNTypes #-}
module MorphIO where
import Prelude hiding (catch)
import Control.Monad
import qualified Control.Exception as E
import Control.Exception (NonTermination(..))

class Monad m => MonadMorphIO m where
    morphIO :: (forall b. (m a -> IO b) -> IO b) -> m a

class Monad m => MonadJoinIO m where
    -- | Embed the IO into the monad m
    joinIO :: IO (m a) -> m a

    -- | Extract the IO computation to the top level,
    -- rendering the m pure from IO.
    partIO :: m a -> IO (m a)

joinIO'  m = morphIO (m >>=)
morphIO' f = joinIO (f partIO)

instance MonadMorphIO IO where
    morphIO f = f id

instance MonadJoinIO IO where
    joinIO = join
    partIO = fmap return -- N.B. fmap return /= return

catch  m h = morphIO  $ \w -> w m `E.catch` \e -> w (h e)
catch' m h = morphIO' $ \w -> w m `E.catch` \e -> w (h e)

test  = E.throwIO NonTermination `catch`  \NonTermination -> return "moo"
test' = E.throwIO NonTermination `catch'` \NonTermination -> return "moo"



--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Bulk Synchronous Parallel

2010-04-23 Thread wren ng thornton

Peter Gammie wrote:

Alice/ML is the place to look for this technology.

http://www.ps.uni-saarland.de/alice/

The project may be dead (I don't know), but they did have the most
sophisticated take on pickling that I've seen. It's an ML variant,
with futures, running on top of the same platform used by Mozart/Oz of
CTM fame.


I seem to recall AliceML decided to move away from the Mozart/Oz runtime 
and develop their own (and that the project died shortly thereafter). 
I'd love to be wrong, though; AliceML was very nice.


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Proper Handling of Exceptional IEEE Floating Point Numbers

2010-04-23 Thread wren ng thornton

Casey McCann wrote:

The only correct solution would be to strip floating point types of
their instances for Ord, Eq, and--therefore, by extension--Num. For
some reason, no one else seems to like that idea. I can't imagine
why...


I'm not terribly opposed to it. But then, I've also defined classes for 
partial orderings[1] and for types containing transfinite values[2] in 
order to try to render floats usable. Also, don't forget some of the 
other bugs[3] in Hugs.



[1] 
http://hackage.haskell.org/packages/archive/logfloat/0.12.1/doc/html/Data-Number-PartialOrd.html


[2] 
http://hackage.haskell.org/packages/archive/logfloat/0.12.1/doc/html/Data-Number-Transfinite.html


[3] 
http://hackage.haskell.org/packages/archive/logfloat/0.12.1/doc/html/Hugs-RealFloat.html



--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] IO (Either a Error) question

2010-05-09 Thread wren ng thornton

Brandon S. Allbery KF8NH wrote:
It's not a call, it's a definition as shown above.  The simpler 
translation is:


  x <- y

becomes

  y >>= \x ->

(note incomplete expression; the next line must complete it) and the 
refutable pattern match takes place in the lambda binding.  But because 
of the whole fail thing, instead of letting pattern match failure be 
caught by the lambda binding it gets handled specially beforehand, which 
is especially silly when in most cases fail is defined to do the same 
thing as the lambda binding would.


I'm suggesting (as is David, I think) that a saner definition would let 
the lambda binding handle refutable patterns, and for something like 
Maybe, (>>=) can decide how to deal with it in the usual way. Otherwise 
you're either using a default fail that duplicates the lambda binding, 
or, if you want custom handling (as with Maybe and Either, which 
propagate Nothing/Left _ respectively), you end up reimplementing part 
of (>>=) as fail, which is just dumb.


+1.

I've never understood what exactly the goal of 'fail'-able patterns was. 
It's a *solution* to the problem of pattern matching, but what is the 
*goal* of allowing pattern matching in the first place? What semantics 
is the solution trying to capture?


The vast majority of code I've written or seen uses plain variables as 
the binding pattern, in which case the definition of (>>=) should handle 
issues like this. And in the cases where we want more than just a plain 
variable, we usually want to handle the exceptional branch on a 
case-by-case basis, so the pattern gets boiled out of the <- syntax 
anyways.


The only examples I can think of where we'd want 'fail'-able patterns 
are entirely pedagogical (and are insignificantly altered by not using 
'fail'-able patterns). I can't think of any real code where it would 
actually help with clarity.


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell and the Software design process

2010-05-09 Thread wren ng thornton

Rafael Cunha de Almeida wrote:

I don't think that safeSecondElement is worse than secondElement. I think it's
better for the program to crash right away when you try to do something that
doesn't make sense.

Getting the secondElement of a list with one or fewer elements doesn't make
sense, so you are definitely doing something wrong if you expected the list to
have two elements but it doesn't. It's better that the program crash right
there than propagate the error and crash somewhere else or, worse, not crash
at all and give a wrong answer.


There are two different issues at stake here, one is reactive and the 
other is proactive. You're responding to the reactive question: how do I 
deal with unexpected errors when they arise? But those who argue against 
partial functions are usually responding to the proactive question: how 
do I prevent unexpected errors from arising in the first place?


By allowing partial functions we allow for the introduction of 
unexpected errors, which then forces us to handle the reactive question 
of what to do when those errors show up. But if partial functions are 
avoided, then there are no unexpected errors (of that sort) which could 
ever show up.


We already have to deal with bottom; that's a fact of life for 
general-purpose programming. But partial functions allow us to introduce 
bottom in new ways. When we have bottom meaning just non-termination, 
then we have a good idea about how we should treat it. But once we start 
introducing other bottoms for things like non-exhaustive pattern 
matching or recoverable exceptions, then it's no longer clear what 
exactly bottom means. This in turn makes it hard to know whether the 
semantics of the program match the semantics we want. In particular, I'm 
a fan of semantics which say "my program will not crash". Sure, crashing 
is better than silently corrupting everything; but not crashing is 
better than crashing.


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell and the Software design process

2010-05-09 Thread wren ng thornton

Gregory Crosswhite wrote:

Yes, but I think that it is also important to distinguish between cases
where an error is expected to be able to occur at runtime, and cases
where an error could only occur at runtime *if the programmer screwed up*.


Well sure, but how can you demonstrate that you (the programmer) haven't 
screwed up? That's the point that I'm most interested in.


Certainly there is a difference between exceptional behavior at runtime 
(e.g., due to malformed inputs) vs programming errors. We can't stop the 
former without broadening the scope of the typesystem to include runtime 
inputs; but we can, at least idealistically, stop the latter. While 
complex invariants require full dependent types to capture, there are a 
surprising number of invariants that Haskell's type system can already 
capture (even without GADTs!).




If you structure your code to preserve and rely on the invariant that a
given list has at least two elements, then it makes sense to call secondElement
because if the list doesn't have two elements then you screwed up.  Furthermore,
there is no way to recover from such an error because you don't necessarily
know where the invariant was broken because if you did you would have
expected the possibility and already fixed it.


If you're structuring your code with that invariant, then why aren't you 
using the correct type?


data AtLeastTwo a = ALT a a [a]

If your response is that you use too many of the standard list 
functions, then write a module with ALT versions (because evidently you 
need one). If your response is that you convert between lists and ALT 
too often, then you probably need to restructure the code. At the very 
least you should be using a newtype and smart constructors, so that you 
can actually prove your invariant whenever necessary:


newtype AtLeastTwo a = Hidden_ALT [a]
altEnd   :: a -> a -> AtLeastTwo a
altCons  :: a -> AtLeastTwo a -> AtLeastTwo a
altList  :: a -> a -> [a] -> AtLeastTwo a
fromList :: [a] -> Maybe (AtLeastTwo a)
toList   :: AtLeastTwo a -> [a]
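
For concreteness, one way to implement those (a sketch; the point is 
just that the invariant is established in a single module, which exports 
only the smart constructors):

altEnd :: a -> a -> AtLeastTwo a
altEnd x y = Hidden_ALT [x, y]

altCons :: a -> AtLeastTwo a -> AtLeastTwo a
altCons x (Hidden_ALT xs) = Hidden_ALT (x : xs)

altList :: a -> a -> [a] -> AtLeastTwo a
altList x y zs = Hidden_ALT (x : y : zs)

fromList :: [a] -> Maybe (AtLeastTwo a)
fromList xs@(_:_:_) = Just (Hidden_ALT xs)
fromList _          = Nothing

toList :: AtLeastTwo a -> [a]
toList (Hidden_ALT xs) = xs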

Without smart constructors or an ADT to enforce your invariants, why 
should you be convinced that you (the programmer) haven't messed up? The 
cheap and easy access to ADTs is one of the greatest benefits of Haskell 
over mainstream languages, don't fear them.


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] mixing map and mapM ?

2010-05-09 Thread wren ng thornton

Pierre-Etienne Meunier wrote:

This way:

do
    times <- mapM PF.getFileStatus filenames >>= return . map PF.modificationTime

Or also:

do
    times <- mapM (PF.getFileStatus >=> return . PF.modificationTime) filenames
    let sorted = ...

I do not know exactly how GHC compiles the IO monad, but it seems to me that 
the latter would allocate a little less.


FWIW, (a >>= return . f) == (liftM f a) ~= (fmap f a)

Where available, the fmap version is the most efficient. The liftM 
function can be less efficient since it's defined generically (namely 
with the bind/return definition above), whereas fmap can take advantage 
of knowing the specific monad it's working on. But then, not everyone 
defines Functor instances for their monads...
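
So the example above collapses to a single line (assuming PF is 
System.Posix.Files, as the function names suggest):

times <- fmap (map PF.modificationTime) (mapM PF.getFileStatus filenames)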


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Speed of Error handling with Continuations vs. Eithers

2010-05-11 Thread wren ng thornton

Max Cantor wrote:

Based on some discussions in #haskell, it seemed to be a consensus
that using a modified continuation monad for error handling instead
of Eithers would be a significant optimization, since it would
eliminate a lot of conditional branching (every time (>>=) is called
in the Either monad, there is a conditional).


I implemented a ErrCPS monad which does exactly that, but the speed
has been disappointing.  It runs almost exactly 3x slower than a
drop in replacement using the MonadError instance of Either from mtl.



I have noticed speedup in my CPS version of Maybe[1] (kidnapped from the 
Wiki) so the difference is curious. Jan-Willem's comments about closures 
are significant when doing CPS work, but I'd expect Maybe and Either to 
perform similarly, whatever their performance is. It's been a while 
since I've benchmarked MaybeCPS, so perhaps I now have the slowdown too. 
Let's look at the code and see if we can find other differences...


[1] 
http://community.haskell.org/~wren/wren-extras/src/Control/Monad/MaybeCPS.hs



Here's one big difference:


newtype ErrCPS e m a = ErrCPS { runErrCPS ::
    forall r . (e -> m r)  --  error handler
            -> (a -> m r)  --  success handler
            -> m r }


The analogous version I use is:

newtype MaybeCPS a = MaybeCPS
    (forall r. (a -> Maybe r) -> Maybe r)

While I also offer a transformer version of MaybeCPS, the transformer 
*does* suffer from significant slowdown. Also, for MaybeCPS it's better 
to leave the handlers inline in client code rather than to abstract them 
out; that helps to keep things concrete. So perhaps you should first try 
a direct CPS translation:


newtype ErrCPS e a = ErrCPS
    (forall r. (a -> Either e r) -> Either e r)

runErrCPS :: ErrCPS e a -> Either e a
runErrCPS (ErrCPS f) = f return

I'd be curious if this version suffers the same slowdown.
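
For completeness, the instances for that direct version would be 
something like this (a sketch; Functor and Applicative are included 
just so the snippet stands alone, with Control.Applicative imported on 
older GHCs):

instance Functor (ErrCPS e) where
    fmap f (ErrCPS m) = ErrCPS $ \k -> m (k . f)

instance Applicative (ErrCPS e) where
    pure a = ErrCPS ($ a)
    ErrCPS mf <*> ErrCPS ma = ErrCPS $ \k -> mf (\f -> ma (k . f))

instance Monad (ErrCPS e) where
    return = pure
    ErrCPS m >>= f = ErrCPS $ \k -> m (\a -> case f a of ErrCPS m' -> m' k)

-- The short-circuiting throw never invokes its continuation:
throwErr :: e -> ErrCPS e a
throwErr e = ErrCPS $ \_ -> Left e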

--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Speed of Error handling with Continuations vs. Eithers

2010-05-11 Thread wren ng thornton

wren ng thornton wrote:

Here's one big difference:


newtype ErrCPS e m a = ErrCPS { runErrCPS ::
    forall r . (e -> m r)  --  error handler
            -> (a -> m r)  --  success handler
            -> m r }


The analogous version I use is:

newtype MaybeCPS a = MaybeCPS
    (forall r. (a -> Maybe r) -> Maybe r)

While I also offer a transformer version of MaybeCPS, the transformer 
*does* suffer from significant slowdown. Also, for MaybeCPS it's better 
to leave the handlers inline in client code rather than to abstract them 
out; that helps to keep things concrete. So perhaps you should first try 
a direct CPS translation:


newtype ErrCPS e a = ErrCPS
    (forall r. (a -> Either e r) -> Either e r)

runErrCPS :: ErrCPS e a -> Either e a
runErrCPS (ErrCPS f) = f return

I'd be curious if this version suffers the same slowdown.



With this change [1] I can't notice any difference for your 
benchmark[2]. Then again, all the runTest calls take 0 msec and I've had 
no luck making the computation take much time; perhaps your computer can 
detect a difference.


You may want to see what standard benchmarking tools like Microbench[3] 
or the magnificent Criterion[4] have to say. I'd do it myself, but I 
haven't had a chance to reinstall everything since getting my new 
computer (due to the installation issues on newer versions of OSX).



[1] 
http://community.haskell.org/~wren/wren-extras/src/Control/Monad/ErrCPS.hs


[2] 
http://community.haskell.org/~wren/wren-extras/test/Control/Monad/ErrCPS/MaxCantorBenchmark.hs


[3] http://hackage.haskell.org/package/microbench

[4] http://hackage.haskell.org/package/criterion

--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Speed of Error handling with Continuations vs. Eithers

2010-05-13 Thread wren ng thornton

Andrea Vezzosi wrote:

wren ng thornton  wrote:

With this change [1] I can't notice any difference for your benchmark[2].
Then again, all the runTest calls take 0 msec and I've had no luck making
the computation take much time; perhaps your computer can detect a
difference.


On my machine, with ghc-6.12.1, yours and the original ErrCPS give
quite similar results, both ~2x slower than Either.
However it's important to note that these results are highly dependent
on the monadic expressions being evaluated, with a different benchmark
you can get a huge speedup with the CPS versions.



That's very curious. After installing Criterion, my machine (OSX 10.5.8, 
2.8GHz Intel Core2Duo, GHC 6.12.1 with -O2) shows only a 1% difference 
between my ErrCPS and Either on this benchmark. Alas, I can't print 
kernel density graphs since Criterion's charts are broken on 6.12. It 
seems buggy that your platform would behave so much differently...




mkEMA is in fact quite peculiar, since there's no catchError and the
throwError call is rarely (or never?) made, and thanks to foldM you
get that (=) is only used in a right associated way, which is the
ideal situation for Either.


Indeed, mkEMA is a sort of worst-case comparison that doesn't take 
advantage of the ability to short-circuit.


--
Live well,
~wren


Re: [Haskell-cafe] Speed of Error handling with Continuations vs. Eithers

2010-05-13 Thread wren ng thornton

Antoine Latter wrote:

While I also offer a transformer version of MaybeCPS, the transformer *does*
suffer from significant slowdown. Also, for MaybeCPS it's better to leave
the handlers inline in client code rather than to abstract them out; that
helps to keep things concrete. So perhaps you should first try a direct CPS
translation:


Is the CPS transformed MaybeT slower because it's done in 2-CPS,
rather than in 1-CPS like the MaybeCPS? I only did MaybeT in 2-CPS
because it was the easiest, not because I thought it would be fastest.


I'm not sure how much of it is due to the 2-CPS vs how much is due to 
the loss of concrete case-analysis and tail-calls in crucial areas. As I 
noted in comments on the non-transformer version, there are a number of 
subtle issues such as the choice between let-binding and case analysis 
which have major effects on performance, so it's tricky to make a 
MaybeCPS which doesn't impose a performance overhead.


A big part of the problem is that once you move to the transformer 
version, you can't just jump to the next handler--- you also need to 
carry around whatever the other monad would add to Nothing. Once you 
move to 2-CPS, the representation is similar enough to LogicT 
(==ListCPST) that you seem to lose the benefits of Maybe over [].


--
Live well,
~wren


Re: [Haskell-cafe] Speed of Error handling with Continuations vs. Eithers

2010-05-13 Thread wren ng thornton

Andrea Vezzosi wrote:

On Thu, May 13, 2010 at 10:51 AM, wren ng thornton w...@freegeek.org wrote:

Andrea Vezzosi wrote:

wren ng thornton  wrote:

With this change [1] I can't notice any difference for your benchmark[2].
Then again, all the runTest calls take 0 msec and I've had no luck making
the computation take much time; perhaps your computer can detect a
difference.

On my machine, with ghc-6.12.1, yours and the original ErrCPS give
quite similar results, both ~2x slower than Either.
However it's important to note that these results are highly dependent
on the monadic expressions being evaluated, with a different benchmark
you can get an huge speedup with the CPS versions.


That's very curious. After installing Criterion, my machine (OSX 10.5.8
2.8GHz Intel Core2Duo, GHC 6.12.1 with -O2) shows only 1% difference between
my ErrCPS and Either on this benchmark. Alas, I can't print kernel density
graphs since Crieterion charts are broken on 6.12. It seems buggy that your
platform would behave so much differently...


I got the measurements from the original code, could you share the
code that uses criterion instead?


The 1% number was buggy because I hadn't factored the generation of 
random lists out of the benchmark. But, having fixed that, I still can't 
replicate your numbers: I get 12us for Either, vs 17us for EitherCPS.


http://community.haskell.org/~wren/wren-extras/test/Control/Monad/ErrCPS/CriterionBenchmark.hs



Yet another version of the same benchmark, this time using Microbench:

http://community.haskell.org/~wren/wren-extras/test/Control/Monad/ErrCPS/MicrobenchBenchmark.hs

Microbench seems to replicate your numbers better: 2551.930ns vs 
4466.832ns (or 391.86 vs 223.87 calls per second)--- though this is 
getting into the range where there might be Int overflow issues 
corrupting the results (a similar problem showed up when benchmarking 
Data.Trie vs Data.Map), so it may warrant further investigation.



--
Live well,
~wren


Re: [Haskell-cafe] double2Float is faster than (fromRational . toRational)

2010-05-23 Thread wren ng thornton

Daniel Fischer wrote:
There are more rules elsewhere. If you compile with optimisations, GHC 
turns your realToFrac into double2Float# nicely, so it's okay to use 
realToFrac.
However, without optimisations, no rules fire, so you'll get 
(fromRational . toRational).


That must be new, because it didn't used to be the case. Also, rewrite 
rules can be fragile. Not to mention that the (fromRational . 
toRational) definition is incorrect for converting between Float and 
Double because Rational cannot properly encode the transfinite values in 
Float/Double.
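
A quick way to observe the problem (my example, not from the original message; the exact garbage value varies by platform, but it is never NaN, since no Rational converts back to NaN):

-- toRational decodes the Double's raw mantissa and exponent, so NaN
-- becomes an ordinary (meaningless) fraction and does not round-trip
broken :: Float
broken = fromRational (toRational (0/0 :: Double))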


The robust solution is to use the RealToFrac class from the logfloat 
package:


http://hackage.haskell.org/packages/archive/logfloat/0.12.1/doc/html/Data-Number-RealToFrac.html

--
Live well,
~wren


Re: [Haskell-cafe] currying combinators

2010-05-27 Thread wren ng thornton

David Sankel wrote:

keep :: ((t -> b) -> u -> b) -> ((t1 -> t) -> b) -> (t1 -> u) -> b

On Wed, May 26, 2010 at 12:49 PM, Lennart Augustsson lenn...@augustsson.net

wrote:



There are no interesting (i.e. total) functions of that type.



I wonder how one would prove that to be the case. I tried and didn't come up
with anything.


By parametricty, presumably.

We must ultimately construct some value of type b, where b is 
universally quantified. Therefore, the only 'constructors' available for 
b are the ((t -> b) -> u -> b) and ((t1 -> t) -> b) arguments. However, 
since b is universally quantified, these arguments have no way of 
actually constructing some b, other than by returning bottom.


Remember, if a language lacks typecase, (forall a. X -> a) is just 
another way of saying (X -> Void).


--
Live well,
~wren


Re: [Haskell-cafe] currying combinators

2010-05-27 Thread wren ng thornton

wren ng thornton wrote:

David Sankel wrote:

keep :: ((t -> b) -> u -> b) -> ((t1 -> t) -> b) -> (t1 -> u) -> b

Lennart Augustsson  wrote:

There are no interesting (i.e. total) functions of that type.


I wonder how one would prove that to be the case. I tried and didn't 
come up with anything.


By parametricty, presumably.

We must ultimately construct some value of type b, where b is 
universally quantified. Therefore, the only 'constructors' available for 
b are the ((t -> b) -> u -> b) and ((t1 -> t) -> b) arguments. However, 
since b is universally quantified, these arguments have no way of 
actually constructing some b, other than by returning bottom.


Er, that's not quite right. That's only true if those arguments are 
rank-2 quantified. I'd had a longer (correct) explanation and tried 
shortening it. So here's the better proof:



In order to produce a value of type b, keep must either use one of those 
two arguments or return bottom.


If it uses the ((t -> b) -> u -> b) argument, then keep can only return 
non-bottom if that function [1] ignores its arguments and returns an 
arbitrary b, or [2] uses the (t -> b) argument to construct the b. If we 
assume #1 then keep is not total, because we have no way of proving that 
the assumption is valid. So we must expect #2; so in order for keep to 
be total it must be able to construct a total function (t -> b). In 
order to construct such a function it must use one of the original two 
arguments, so this is only possible if we can construct a b via 
((t1 -> t) -> b).


If it uses the ((t1 -> t) -> b) argument, then keep can only return 
non-bottom if that function [1] ignores its arguments, or [2] uses the 
(t1 -> t) argument. We can't assume #1 and be total, so we must expect 
#2. In order to construct (t1 -> t) we must construct a t. But, since t 
is universally quantified, keep knows of no total functions which return 
t. Thus, keep can only construct a function which returns bottom.


Thus, keep can only return non-bottom under the assumption that the 
((t -> b) -> u -> b) and ((t1 -> t) -> b) arguments ignore their 
arguments to return a constant b.
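
To make that concrete, here is a sketch (mine) of the only shape a would-be implementation can take; it typechecks, but it is total only under exactly that assumption about its second argument:

keep :: ((t -> b) -> u -> b) -> ((t1 -> t) -> b) -> (t1 -> u) -> b
keep _f g _h = g (\_ -> undefined)  -- safe only if g never applies its argument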


--
Live well,
~wren


Re: [Haskell-cafe] currying combinators

2010-05-27 Thread wren ng thornton

Dan Doel wrote:

On Thursday 27 May 2010 3:27:58 am wren ng thornton wrote:

By parametricty, presumably.


Actually, I imagine the way he proved it was to use djinn, which uses a 
complete decision procedure for intuitionistic propositional logic. The proofs 
of theorems for that logic correspond to total functions for the analogous 
type. Since djinn is complete, it will either find a total function with the 
right type, or not, in which case there is no such function.


At that point, all you have left to do is show that djinn is in fact complete. 
For that, you can probably look to the paper it's based on: Contraction-Free 
Sequent Calculi for Intuitionistic Logic* (if I'm not mistaken) by Roy 
Dyckhoff.


Sure, that's another option. But the failure of exhaustive search isn't 
a constructive/intuitionistic technique, so not everyone would accept 
the proof. Djinn is essentially an implementation of reasoning by 
parametricity, IIRC, so it comes down to the same first principles.



(Sorry, just finished writing a philosophy paper on intuitionism :)

--
Live well,
~wren


Re: [Haskell-cafe] Re: Chuch encoding of data structures in Haskell

2010-05-27 Thread wren ng thornton

Stefan Monnier wrote:

churchedBool :: t -> t -> t


Important detail: the precise type is ∀t. t → t → t.


encodeBool x = \t e -> if x then t else e


So the type of encodeBool should be:

 Bool → ∀t. t → t → t

whereas Haskell will infer it to be

  ∀t. Bool → t → t → t



Those are the same type.



which means that a given object can only be eliminated to one type.



No, the t is universally quantified just fine. Perhaps you're thinking 
of the need for rank-2 polymorphism in functions which take 
Church-encoded arguments?
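
As a quick sketch one can check that GHC accepts both signatures for the very same definition (the first needs RankNTypes):

{-# LANGUAGE RankNTypes #-}

encodeBool1 :: Bool -> (forall t. t -> t -> t)
encodeBool1 x = \t e -> if x then t else e

encodeBool2 :: forall t. Bool -> t -> t -> t
encodeBool2 x = \t e -> if x then t else e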


--
Live well,
~wren


Re: [Haskell-cafe] currying combinators

2010-05-27 Thread wren ng thornton

Dan Doel wrote:

On Thursday 27 May 2010 1:49:36 pm wren ng thornton wrote:

Sure, that's another option. But the failure of exhaustive search isn't
a constructive/intuitionistic technique, so not everyone would accept
the proof. Djinn is essentially an implementation of reasoning by
parametricity, IIRC, so it comes down to the same first principles.


How, exactly, is it non-constructive to encode the propositional calculus and 
its proofs as, say, types in intuitionistic type theory, write the algorithm 
djinn uses in the same (it was specially crafted to be provably terminating, 
after all), and prove the algorithm complete (again, hopefully in the type 
theory)? I realize this has not all been done, strictly speaking, but I see 
nowhere that it is necessarily non-constructive.



All I'm saying is that a proof of (\forall x. ~ P x) does not constitute 
a proof of (~ \exists x. P x) for some intuitionists. The issue is one 
of the range of quantification and whether \forall truly exhausts every 
possibility. The BHK interpretation says the two propositions are the 
same, but others say the latter is a stronger claim.


If you believe that Djinn truly does exhaust the search space, then it's 
fine to convert Djinn's proof of (\forall x. ~ P x) into a proof of (~ 
\exist x. P x). However, that just pushes the question back to why you 
feel justified in believing that Djinn truly exhausts the search space. 
I'm not saying you shouldn't believe in Djinn, I'm saying that belief in 
Djinn is not justification for a theorem unless you have justification 
for believing in Djinn.


--
Live well,
~wren


Re: [Haskell-cafe] currying combinators

2010-05-28 Thread wren ng thornton

Lennart Augustsson wrote:

So what would you consider a proof that there are no total Haskell
functions of that type?
Or, using Curry-Howard, a proof that the corresponding logical formula
is unprovable in intuitionistic logic?



It depends on what kind of proof I'm looking for. If I'm looking for an 
informal proof to convince myself, then I'd probably trust Djinn. If I'm 
trying to convince others, am deeply skeptical, or want to understand 
the reasoning behind the result, then I'd be looking for a more rigorous 
proof. In general, that rigorous proof would require metatheory (as you 
say)--- either my own, or understanding the metatheory behind some tool 
I'm using to develop the proof. For example, I'd only trust Djinn for a 
rigorous proof after fully understanding the algorithms it's using and 
the metatheory used to prove its correctness (and a code inspection, if 
I didn't trust the developers).




If Djinn correctly implements the decision procedure that has been
proven to be total (using meta theory), then I would regard Djinn
saying no as a proof that there is no function of that type.


So would I. However, that's adding prerequisites for trusting Djinn--- 
which was my original point: that Djinn says there isn't one is not 
sufficient justification for some folks, they'd also want justification 
for why we should believe Djinn actually does exhaust every possibility.


--
Live well,
~wren


Re: [Haskell-cafe] Re: Chuch encoding of data structures in Haskell

2010-05-28 Thread wren ng thornton

Ivan Miljenovic wrote:

On 28 May 2010 15:18, wren ng thornton w...@freegeek.org wrote:

Stefan Monnier wrote:

churchedBool :: t -> t -> t

Important detail: the precise type is ∀t. t → t → t.


encodeBool x = \t e -> if x then t else e

So the type of encodeBool should be:

 Bool → ∀t. t → t → t

whereas Haskell will infer it to be

 ∀t. Bool → t → t → t


Those are the same type.


I can see a slight distinction between them, based upon when the
quantification occurs (the former returns a function that work on all
t, the latter will do that encoding for all t).


The isomorphism in System F:

in : (∀t. Bool → t → t → t) → (Bool → ∀t. t → t → t)
in f = \b. /\t. f @t b


out : (Bool → ∀t. t → t → t) → (∀t. Bool → t → t → t)
out g = /\t. \b. g b @t


If you're using the raw System F, then there's a slight difference ---as 
there always is between isomorphic things which are non-identical--- but 
that difference is not theoretically significant. If you're using a 
higher-level language like Haskell, then the conversions are done for 
you, so there's even less difference.


The expression 1+1 is distinct from 2: it has a different representation 
in memory, it takes a different number of steps to normalize, etc. 
Sometimes these differences are important for engineering (e.g., dealing 
with space leaks), but they're rarely significant for theory.


--
Live well,
~wren


Re: [Haskell-cafe] currying combinators

2010-05-28 Thread wren ng thornton

Lennart Augustsson wrote:

Yes, of course you have to trust Djinn to believe its proof.
That's no different from having to trust me if I had done the proof by hand.
Our you would have to trust yourself if you did the proof.


True, though I think I didn't make my point clearly.


The question is, when can we consider a proof of (\forall x. ~ P x) as a 
valid *constructive* proof of (~ \exists x. P x)?


If you tell me you've checked every x and found none supporting P, and I 
believe you have the capability to have done so, and I believe things 
you say are true, then I may take your statement that (\forall x. ~ P x) 
and prove to myself that (~ \exists x. P x). However, my proof ---no 
matter how much I believe it--- is not the sort of thing I can pick up 
and hand to someone else. If they do not trust your diligence, then they 
have no reason to trust my proof. I haven't constructed a witness to the 
proposition, I've only convinced myself of it.


If, however, you construct a proof of (\forall x. ~ P x) which shows 
that all options have been exhausted, but does so in a finite way that's 
open to inspection and analysis, well then that's a different story. In 
this case, because we've actually constructed some mathematical object 
that we can poke and prod to understand what it is and how it works, we 
do have something that we can hand to a skeptic to convince them. We 
needn't argue from authority or put our faith in an elder or prophet, 
the proof stands on its own.


When someone says prove it, it's not always clear whether they mean 
they want to be convinced it's true, or whether they mean they want 
concrete evidence that witnesses its truth. Exhaustive search may be 
convincing, but it's not constructive. When the OP was asking how we'd 
go about proving that no total function exists with a particular type, I 
was assuming they were seeking a witness rather than a convincing. For a 
witness, I'd think you'd have to reason from parametricity, understand 
the process that Djinn is automating, or similar; just running Djinn is 
merely convincing.


--
Live well,
~wren


Re: [Haskell-cafe] A question on existential types and Church encoding

2010-05-30 Thread wren ng thornton

Jason Dagit wrote:
In Church's λ-calc the types are ignored, 



Not so. Church-style lambda calculus is the one where types matter; 
Curry-style is the one that ignores types and evaluates as if it were 
the untyped lambda calculus.


Church encodings are based on the untyped LC rather than Church's 
simply-typed LC however, which is why they don't typecheck with 
Hindley--Milner.


--
Live well,
~wren


[Haskell-cafe] ANN: list-extras 0.4.0

2010-06-01 Thread wren ng thornton


-- list-extras 0.4.0


A minor (but interface-changing) release for common not-so-common 
functions for lists.




-- Changes (since 0.3.0)


* (by Tom Lokhorst) Added Data.List.Extras.list: a function for 
non-recursive case matching on lists. In the spirit of `maybe` and 
`either` (if we don't interpret them as catamorphisms); a sketch of its 
behavior appears after this change list.


* Changed the type of Data.List.Extras.Pair.zipBy to be correct. I have 
no idea where the old more restrictive type came from.
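
For reference, the new `list` function presumably behaves like the 
following sketch (the actual argument order in the package may differ):

-- non-recursive case analysis on lists, in the spirit of maybe/either
list :: b -> (a -> [a] -> b) -> [a] -> b
list nil _    []     = nil
list _   cons (x:xs) = cons x xs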




-- Links


Homepage:
http://code.haskell.org/~wren/

Hackage:
http://hackage.haskell.org/package/list-extras

Darcs:
http://code.haskell.org/~wren/list-extras/

Haddock (Darcs version):
http://code.haskell.org/~wren/list-extras/dist/doc/html/list-extras/

--
Live well,
~wren


Re: [Haskell-cafe] Proposal: Sum type branches as extended types (as Type!Constructor)

2010-06-03 Thread wren ng thornton

Jake McArthur wrote:

On 06/03/2010 10:14 AM, Gabriel Riba wrote:

No need for runtime errors or exception control

hd :: List!Cons a -> a

hd (Cons x _) = x


This is already doable using GADTs:

data Z
data S n

data List a n where
  Nil :: List a Z
  Cons :: a -> List a n -> List a (S n)

hd :: List a (S n) -> a
hd (Cons x _) = x

tl :: List a (S n) -> List a n
tl (Cons _ xs) = xs


Sure, it is the whipping boy of dependent types afterall. However,

* Haskell's standard lists (and Maybe, Either,...) aren't GADTs,
* last I heard GADTs are still a ways off from being accepted into haskell',
* and, given that this is the whipping boy of dependent types, you may 
be surprised to learn that pushing the proofs of correctness through for 
the standard library of list functions is much harder than it may at 
first appear. Which is why, apparently, there is no standard library for 
length-indexed lists in Coq.[1]



But more to the point, this proposal is different. Gabriel is advocating 
for a form of refinement types (aka weak-sigma types), not for type 
families. This is something I've advocated for in the past (under the 
name of difference types)[2] and am still an avid supporter of-- i.e., 
I'm willing to contribute to the research and implementation for it, 
given a collaborator familiar with the GHC code base and given 
sufficient community interest.


The reason why length-indexed lists (and other type families) are 
surprisingly difficult to work with is because of the need to manipulate 
the indices in every library function. With refinement types predicated 
on the head constructor of a value, however, there is no need to 
maintain this information throughout all functions. You can always get 
the proof you need exactly when you need it by performing case analysis. 
The benefit of adding this to the type system is that (a) callees can 
guarantee that the necessary checks have already been done, thereby 
improving both correctness and efficiency, and (b) it opens the door for 
the possibility of moving the witnessing case analysis further away from 
the use site of the proof.


While #b in full generality will ultimately lead to things like type 
families, there are still a number of important differences. Perhaps 
foremost is that you needn't define the proof index at the same point 
where you define the datatype. This is particularly important for adding 
post-hoc annotations to standard types like lists, Maybe, Either, etc. 
And a corollary to this is that you needn't settle on a single index for 
an entire programming community, nor do you impose the cost of multiple 
indices on users who don't care about them.


In short, there's no reason why the equality proofs generated by case 
analysis should be limited to type equivalences of GADT indices. Value 
equivalences for ADTs are important too.




[1] Though I'm working on one:
http://community.haskell.org/~wren/coq/vecs/docs/toc.html

[2] http://winterkoninkje.livejournal.com/56979.html

--
Live well,
~wren


Re: [Haskell-cafe] ANN: random-fu 0.1.0.0

2010-06-03 Thread wren ng thornton

Richard O'Keefe wrote:

There's something in that package that I don't understand,
and I feel really stupid about this.

data RVarT m a

type RVar = RVarT Identity

class Distribution d t where
    rvar  :: d t -> RVar t
    rvarT :: d t -> RVarT n t

Where does n come from?


Presumably from universal quantification in rvarT? That is, the 
implementation of rvarT should be polymorphic in n, in which case the 
particular n doesn't matter (as well it shouldn't, since if it did 
that'd interfere with the composability of the transformer).


Though, since RVar is a synonym for RVarT, I can't imagine why rvar is a 
method instead of a shorthand defined outside of the class. (If RVar 
were primitive then I could imagine performance reasons, but since it 
isn't...)
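
That is, assuming rvarT really is polymorphic in n, the shorthand would presumably be just (a sketch):

rvar :: Distribution d t => d t -> RVar t
rvar = rvarT  -- instantiates the underlying monad n to Identity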


--
Live well,
~wren


Re: [Haskell-cafe] Re: Proposal: Sum type branches as extended types (as Type!Constructor)

2010-06-07 Thread wren ng thornton

Gabriel Riba wrote:

New proposal draft:

Proposal: Type supplement for constructor specific uses of sum types

Purpose: Avoid error clauses (runtime errors), exception control or Maybe
 types in partially defined (constructor specific) functions on sum types.

As an example, with

   data List a = Nil | Cons a (List a)  

* Actual system, with runtime errors (as in GHC Data.List head) or 
exception throwing or optional Maybe results.


   hd :: List a -> a
   hd (Cons x _) = x
   hd Nil = error "error: hd: empty list" -- error or exception throwing

* Proposed system extending types with a suffix @ Constructor or @
{Constructor1, Constructor2, ..}

   hd :: List @ Cons a -> a
   hd (Cons x _) = x

The caller must do pattern matching before applying the constructor-specific
function.


Since the goal is to propagate case analysis information, the syntax 
should reflect that. That is, there should be support for nested 
patterns, e.g.,


cdar :: [a]@{_:_:_} -> a
cdar (_:x:_) = x

cadr :: [[a]]@{(_:_):_} -> [a]
cadr ((_:xs):_) = xs

headFromJust :: [Maybe a]@{Just _ : _} -> a
...

The T@Cons syntax is a nice shorthand, but we need to express the 
arguments to the data constructors in the extended syntax in order to 
support nested patterns.



For delimiting multiple alternatives, it's not clear that comma is the 
best delimiter to use, especially since it could be the data constructor 
for tuples. Perhaps using ; or | would be better. Unless there's a 
syntactic reason for preferring braces over parentheses, perhaps we 
should just use parentheses for symmetry with as-patterns.



Finally, there should also be support for negative patterns, i.e., 
propagation of the failure to match a pattern. One place this is useful 
is for distinguishing 0 from other numbers, which allows removing the 
error branches from functions like division. Sometimes we can't 
enumerate all the positive patterns we want to allow, but it's easy to 
express what should be disallowed.


To match case analysis we should allow for a conjunction of negative 
patterns followed by a positive pattern. Or, if we want to incorporate 
multiple positive patterns, then a conjunction of negative patterns 
followed by a disjunction of positive patterns. (Disjunctive case 
matching has been an independent proposal in the past, and there's 
nothing prohibiting supporting it.)


Thus, if we use | to delimit disjunctions, & to delimit conjunctions, 
and \\ to separate the disjuncts from the conjuncts, given the following 
case analysis:


case x : T of
    p1 y1... -> e1
    p2 y2... -> e2
    _        -> eF

The variable x has type T outside of the case expression. Within the 
branch e1 it is given the refinement type T@{p1 _...} where variables 
bound by the pattern are replaced with wildcards. In branch e2 it is 
given the refinement type T@{p2 _... \\ p1 _...}. This can be simplified 
to T@{p2 _...} if the head constructors of p2 and p1 are distinct. And 
in the eF branch x would be given the refinement type 
T@{_ \\ p1 _... & p2 _...}.


If this semantics is too hard to implement, we could instead require the 
use of as-patterns for introducing the refinements. The variable 
introduced by @ in the as-pattern would be given the refinement type, 
but the scrutinee would continue to have the unrefined type. This latter 
semantics is common in dependently typed languages, but it's verbose and 
ugly so it'd be nice to avoid it if we can.


Other notes:

Case matching on non-variable expressions would gain no extra support, 
since we have no variable to associate the refinement information with 
(unless as-patterns are used).


A refinement type T@{p1} can always be weakened to T@{p1 | p2}. 
Similarly, a refinement type can always be weakened by erasing it.


For type inference, I'd suggest that functions which do not have rigid 
signatures are treated the way they currently are; that is, all 
refinement information is weakened away unless it is explicitly 
requested or returned by a function's type signature. This could be 
improved upon later, but seems like the most reasonable place to start. 
One complication of trying to infer refinement types is that if we are 
too liberal then we won't catch bugs arising from non-exhaustive pattern 
matching.


Syntax-wise, there's no particular reason for distinguishing difference 
from conjunctions under difference. That is, the type T@{... \\ p1 & p2} 
could just as well be written T@{... \\ p1 \\ p2}. And there's no need 
for conjunctions under disjunction because we can unify the patterns to 
get their intersection. Thus, it might be best to just have disjunction 
and difference for simplicity.


--
Live well,
~wren


Re: [Haskell-cafe] ANN: random-fu 0.1.0.0

2010-06-07 Thread wren ng thornton

James Andrew Cook wrote:

In particular, functions such as 'uniform' and 'normal' which directly construct RVars are very useful in 
defining the rvar implementation of other types.  I have been reluctant to drop the rvar function from the 
Distribution class because it is very useful to be able to define other Distribution instances in terms of 
these functions instead of the uglier explicit use of rvarT - e.g. rvarT StdUniform in place of 
just 'stdUniform'. Eventually I'll probably give up this particular objection – probably sooner rather than 
later now that you've made me think about it – but I'm presently in the state where I know it isn't 
"right" to have both (for some value of "right") but I don't know yet what a better 
solution is, given my conflicting objectives (one of which is to refrain from changing everything at once, so 
users have a chance to keep up with the library's evolution).


Oh I'm sure writing the rvar implementation is prettier. However, since 
clients have to define rvarT too, they can't escape the ugliness :)


If RVar were a datatype, then I could imagine there being performance 
reasons for wanting to define them separately. However, since it isn't, 
it seems like having both methods in the class is just begging for code 
duplication and inconsistency bugs. Without the performance benefit, I 
can't see a reason for having both---


---unless, perhaps, you have a way of deriving a definition of 
rvarT from rvar. If so, then there could be efficiency issues in the 
other direction. I could see some people just giving a pretty 
implementation of rvar and using the inefficient rvarT, whereas other 
people would put up with the ugly in order to have an efficient rvarT... 
(I haven't looked at the package to know if this is actually a possibility)




Making the convenience functions mentioned above return RVarT seems natural, 
but when I tried it I found it made their usage require type annotations in 
many other places to fix the type of the underlying monad, and I have not yet 
decided whether it is worth all that.  I may yet introduce separate RVarT-typed 
convenience functions as well, but I'm not sure I want to do that either.


That's what I was going to propose :)

You could define the RVarT versions, but then keep the RVar versions 
around for their restricted type. That would allow for moving rvar out 
of the class (eliminating possible code duplication bugs) and still 
allow for using uniformT, normalT, etc when defining rvarT.


--
Live well,
~wren


Re: [Haskell-cafe] Span function

2010-06-07 Thread wren ng thornton

R J wrote:

Can someone provide a hand calculation of:
span (< 0) [-1, -2, -3, 0, 1, 2, -3, -4, -5]?
I know the result is ([-1, -2, -3], [0, 1, 2, -3, -4, -5]), but the recursion 
flummoxes me.
Here's the Prelude definition:


First, let's simplify the definition.

span _ [] = ([], [])
span p xs@(x:xs')
    | p x       = (x:ys, zs)
    | otherwise = ([], xs)
    where
    (ys, zs) = span p xs'

===

span _ [] = ([], [])
span p xs@(x:xs')
    | p x       = let (ys, zs) = span p xs' in (x:ys, zs)
    | otherwise = ([], xs)

===

span p xs =
    case xs of
    []      -> ([], [])
    (x:xs') ->
        if   p x
        then let (ys, zs) = span p xs' in (x:ys, zs)
        else ([], xs)


And finally we can pretty up that recursive call (possibly changing 
strictness, but otherwise keeping the same meaning).


span = \p xs ->
    case xs of
    []      -> ([], [])
    (x:xs') ->
        if   p x
        then first (x:) (span p xs')
        else ([], xs)
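
Here `first` is the usual pair combinator, i.e. Control.Arrow's first 
specialized to functions:

first :: (a -> b) -> (a, c) -> (b, c)
first f (x, y) = (f x, y)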

And now, the reduction:

span (< 0) (-1:-2:-3:0:1:2:-3:-4:-5:[])

== {delta: unfold the definition of span}
(\ p xs ->
    case xs of
    []      -> ([], [])
    (x:xs') ->
        if   p x
        then first (x:) (span p xs')
        else ([], xs)) (< 0) (-1:-2:-3:0:1:2:-3:-4:-5:[])

== {beta: reduction of application}
(let p = (< 0) in \ xs ->
    case xs of
    []      -> ([], [])
    (x:xs') ->
        if   p x
        then first (x:) (span p xs')
        else ([], xs)) (-1:-2:-3:0:1:2:-3:-4:-5:[])


N.B. this next step isn't entirely accurate in Haskell since it loses 
sharing.


== {zeta: reduction of let-binding}
(\ xs ->
    case xs of
    []      -> ([], [])
    (x:xs') ->
        if   (< 0) x
        then first (x:) (span (< 0) xs')
        else ([], xs)) (-1:-2:-3:0:1:2:-3:-4:-5:[])

== {beta, zeta}
case (-1:-2:-3:0:1:2:-3:-4:-5:[]) of
    []      -> ([], [])
    (x:xs') ->
        if   (< 0) x
        then first (x:) (span (< 0) xs')
        else ([], (-1:-2:-3:0:1:2:-3:-4:-5:[]))

== {iota: reduction of case matching}
let x   = -1
    xs' = (-2:-3:0:1:2:-3:-4:-5:[])
in
    if   (< 0) x
    then first (x:) (span (< 0) xs')
    else ([], (-1:-2:-3:0:1:2:-3:-4:-5:[]))

== {zeta, zeta}
if   (< 0) (-1)
then first (-1:) (span (< 0) (-2:-3:0:1:2:-3:-4:-5:[]))
else ([], (-1:-2:-3:0:1:2:-3:-4:-5:[]))

== {beta}
if   -1 < 0
then first (-1:) (span (< 0) (-2:-3:0:1:2:-3:-4:-5:[]))
else ([], (-1:-2:-3:0:1:2:-3:-4:-5:[]))

Just to be pedantic :)
== {delta}
if   -1 <# 0
then first (-1:) (span (< 0) (-2:-3:0:1:2:-3:-4:-5:[]))
else ([], (-1:-2:-3:0:1:2:-3:-4:-5:[]))

== {primitive}
if   True
then first (-1:) (span (< 0) (-2:-3:0:1:2:-3:-4:-5:[]))
else ([], (-1:-2:-3:0:1:2:-3:-4:-5:[]))

== {iota}
first (-1:) (span (< 0) (-2:-3:0:1:2:-3:-4:-5:[]))

We'll ignore that `first` and evaluate under it for clarity, even though 
this isn't accurate for Haskell's call-by-need strategy. Technically 
we'd unfold the definition of `first`, do beta/zeta reduction, and then 
do the following since we'd need to evaluate the scrutinee of the case 
match. I'll go a bit faster now since it's the same.


== {delta, beta, beta, iota, zeta, zeta, beta, delta, primitive, iota}
first (-1:) (first (-2:) (span (< 0) [-3,0,1,2,-3,-4,-5]))

== {delta, beta, beta, iota, zeta, zeta, beta, delta, primitive, iota}
first (-1:) (first (-2:) (first (-3:) (span (< 0) [0,1,2,-3,-4,-5])))

Now things are a little different, so we'll slow down.

== {delta, beta, beta}
first (-1:) (first (-2:) (first (-3:) (
case [0,1,2,-3,-4,-5] of
    []      -> ([], [])
    (x:xs') ->
        if   (< 0) x
        then first (x:) (span (< 0) xs')
        else ([], [0,1,2,-3,-4,-5]) )))

== {iota, zeta, zeta, beta}
first (-1:) (first (-2:) (first (-3:) (
if   0 < 0
then first (0:) (span (< 0) [1,2,-3,-4,-5])
else ([], [0,1,2,-3,-4,-5]) )))

== {delta, primitive, iota}
first (-1:) (first (-2:) (first (-3:) ([], [0,1,2,-3,-4,-5])))

Now for those `first`s. Again, with pure call-by-need evaluation we'll 
have already done most of this, and will have done it in reverse order. 
For clarity we'll do it inside-out a la call-by-value. (Because of 
strictness analysis, Haskell could perform a hybrid CBN/CBV strategy and 
could actually follow this order if it determined that `first` was 
strict in its second argument.)


== {delta}
first (-1:) (first (-2:) (
(\ f p -> case p of (x,y) -> (f x, y))
(-3:) ([], [0,1,2,-3,-4,-5])))

== {beta, zeta}
first (-1:) (first (-2:) (
(\ p -> case p of (x,y) -> ((-3:) x, y))
([], [0,1,2,-3,-4,-5])))

== {beta, zeta}
first (-1:) (first (-2:) (
case ([], [0,1,2,-3,-4,-5]) of (x,y) -> ((-3:) x, y)))

== {iota, zeta, 

[Haskell-cafe] QuickCheck 2

2010-06-10 Thread wren ng thornton
Since GHC 6.12 ships with QC2 it looks like it's finally time to get 
around to converting some old testing scripts. Unfortunately, one of the 
things I couldn't figure out last time I looked (and hence why I haven't 
switched) is how to reconfigure the configuration parameters to the 
driver function. Is there a porting guide anywhere, or how else can I 
adjust the configuration parameters (in particular, the configMaxTest 
and configMaxFail parameters)?


--
Live well,
~wren



Re: [Haskell-cafe] QuickCheck 2

2010-06-10 Thread wren ng thornton

wren ng thornton wrote:
Since GHC 6.12 ships with QC2 it looks like it's finally time to get 
around to converting some old testing scripts. Unfortunately, one of the 
things I couldn't figure out last time I looked (and hence why I haven't 
switched) is how to reconfigure the configuration parameters to the 
driver function. Is there a porting guide anywhere, or how else can I 
adjust the configuration parameters (in particular, the configMaxTest 
and configMaxFail parameters)?


Ah, nevermind. I found what they renamed things to.

--
Live well,
~wren


Re: [Haskell-cafe] QuickCheck 2

2010-06-10 Thread wren ng thornton

Ivan Lazar Miljenovic wrote:

wren ng thornton w...@freegeek.org writes:


Since GHC 6.12 ships with QC2 it looks like it's finally time to get
around to converting some old testing scripts.


Well, the Haskell Platform does, not GHC...


Fair enough (it was one of the two :)



Unfortunately, one of the things I couldn't figure out last time I
looked (and hence why I haven't switched) is how to reconfigure the
configuration parameters to the driver function.  Is there a porting
guide anywhere, or how else can I adjust the configuration parameters
(in particular, the configMaxTest and configMaxFail parameters)?


I'm not sure what you mean by driver function, but there is
quickCheckWith:
http://hackage.haskell.org/packages/archive/QuickCheck/2.1.0.3/doc/html/Test-QuickCheck.html#v%3AquickCheckWith
(and also quickCheckWithResult) which let you customise the number of
tests you want, etc.  Is that what you were after?


Yeah, that's the one.
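
For the archives, a minimal usage sketch against the QC2 Args record 
(field names per the linked docs; the numbers are arbitrary):

import Test.QuickCheck

prop_reverse :: [Int] -> Bool
prop_reverse xs = reverse (reverse xs) == xs

main :: IO ()
main = quickCheckWith stdArgs { maxSuccess = 500, maxDiscard = 1000 } prop_reverse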

--
Live well,
~wren


[Haskell-cafe] ANN: bytestring-trie 0.2.2 (major bugfix)

2010-06-10 Thread wren ng thornton


-- bytestring-trie 0.2.2 (major bugfix)


Another release for efficient finite maps from (byte)strings to values. 
This version corrects a major bug affecting all users who merge tries.




-- Changes (since 0.1.4)


* Corrected a major bug in mergeBy which caused conflicting values to be 
merged in the wrong order. Since this is a core function, all other 
merging-based functions are affected as well (e.g., unionR, unionL,...). 
The bug was filed by Gregory Crosswhite (thanks!).


* added Data.Trie.Convenience.fromListWith --- A variant of 'fromListR' 
that takes a function for combining values on conflict.


* generalized the type of toListBy

* removed KeyString and KeyElem aliases

* documentation tweaks



-- Future work


* Worked on explicit definitions for foldl and foldr, as opposed to the 
default definitions given by Foldable using Endo. The suggested 
definitions are around 3x faster than the old definitions, however the 
order of folding is different than foldrWithKey (which seems to have the 
correct order). These definitions are not currently being used, but 
perhaps the future will see faster folding functions for Tries.




-- Links


Homepage:
http://code.haskell.org/~wren/

Hackage:
http://hackage.haskell.org/package/bytestring-trie

Darcs:
http://community.haskell.org/~wren/bytestring-trie/

Haddock (Darcs version):

http://community.haskell.org/~wren/bytestring-trie/dist/doc/html/bytestring-trie/

--
Live well,
~wren


Re: [Haskell-cafe] GATD and pattern matching

2010-06-12 Thread wren ng thornton

Felipe Lessa wrote:

Well, I guess it can't be compiled at all :(
[...]
  T.lhs:4:12:
  Duplicate instance declarations:
instance [incoherent] (Show a) => MaybeShow a
  -- Defined at T.lhs:4:12-32
instance [incoherent] MaybeShow a -- Defined at T.lhs:7:12-22


Indeed, it can't be compiled because contexts don't work like that :)



I think this solution still requires OverlappingInstances and
UndecidableInstances.


The standard solution for this kind of thing is to use a newtype wrapper 
to capture the Show constraint, define the Nothing instance for @a@, and 
use IncoherentInstances. It's ugly, but it works... mostly.
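
A sketch of the trick (the method name maybeShow is mine, chosen to 
match the class named in the quoted error messages):

{-# LANGUAGE FlexibleInstances, IncoherentInstances #-}

newtype Showable a = Showable a

class MaybeShow a where
    maybeShow :: a -> Maybe String

-- anything tagged Showable gets to use its Show instance...
instance Show a => MaybeShow (Showable a) where
    maybeShow (Showable x) = Just (show x)

-- ...while everything else falls through to Nothing
instance MaybeShow a where
    maybeShow _ = Nothing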


It works best if you're in an architecture where everything is a 
functor--- e.g., if you're using the Data Types a la Carte[1] trick. 
That way you can enforce the discipline of using the newtype wrapper, 
since you'll have to be using some kind of functor wrapper to get things 
to typecheck. Otherwise, it can be too easy to forget to tag something 
as Showable, and then you'll end up using the default Nothing instance 
when you didn't mean to.



[1] http://www.cs.nott.ac.uk/~wss/Publications/DataTypesALaCarte.pdf

--
Live well,
~wren


Re: [Haskell-cafe] Re: Mining Twitter data in Haskell and Clojure

2010-06-15 Thread wren ng thornton

braver wrote:

On Jun 14, 11:40 am, Don Stewart d...@galois.com wrote:

Oh, you'll want insertWith'.

You might also consider bytestring-trie for the Graph, and IntMap for
the AdJList ?


Yeah, I saw jsonb using Trie and thought there's a reason for it.  But
it's very API-poor compared with Map, e.g. there's not even a fold --
should one toListBy first?


I find that surprising. Have you looked in Data.Trie.Convenience? The 
API of Data.Map is rather bloated so I've pushed most of it out of the 
main module in order to clean things up. There are only a small number 
of functions in the Data.Map interface I haven't had a chance to 
implement yet.


For folding, the `foldMap`, `foldr`, and `foldl` functions are provided 
via the Data.Foldable interface. The Data.Traversable class is also 
implemented if you need to make changes to the trie along the way. These 
all give generic folding over the values stored in the trie. If you need 
access to the keys during folding you can use `foldrWithKey`, though it 
has to reconstruct the keys, which doesn't sound good for your use case. 
`toListBy` is a convenience wrapper around `foldrWithKey` which supports 
list fusion, so it has the same advantages and disadvantages compered to 
the Foldable/Traversable functions.


If there's a particular function you still need, let me know and I can 
add an implementation for it.




In terms of optimizing your code, one thing you'll surely want to do is 
to construct an intern table (Trie Int, IntMap ByteString) so that you 
only have to deal with Ints internally rather than ByteStrings. I 
haven't looked at your code yet to see how this would fit in, but it's 
almost always a requisite trick for handling large text corpora.
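
A sketch of that intern table (the names are mine, not a library API):

import           Data.ByteString (ByteString)
import qualified Data.IntMap     as IM
import qualified Data.Trie       as T

data Interner = Interner
    { toId   :: !(T.Trie Int)            -- ByteString -> Int
    , fromId :: !(IM.IntMap ByteString)  -- Int -> ByteString
    , next   :: !Int
    }

emptyInterner :: Interner
emptyInterner = Interner T.empty IM.empty 0

intern :: ByteString -> Interner -> (Int, Interner)
intern s tbl@(Interner t m n) =
    case T.lookup s t of
        Just i  -> (i, tbl)
        Nothing -> (n, Interner (T.insert s n t) (IM.insert n s m) (n + 1))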


--
Live well,
~wren


Re: [Haskell-cafe] Terminology

2010-06-15 Thread wren ng thornton

Emmanuel Castro wrote:

I am looking for the name of the property linking two functions f and g
when :
[f(a),f(b),f(c)] = g([a,b,c])

Is there a standard name?


Generally these sorts of things are called homomorphisms. It's a 
terribly general term, but that's the one I've always seen to describe 
that pattern.


It's also a bit like distributivity, though that probably doesn't 
highlight what you're interested in focusing on.


--
Live well,
~wren


Re: [Haskell-cafe] Re: How does one get off haskell?

2010-06-18 Thread wren ng thornton

Edward Z. Yang wrote:

Excerpts from Paul Lotti's message of Thu Jun 17 15:33:30 -0400 2010:

Same feelings here.  I work in a company that uses C++/Java and the best I could
manage was to use Haskell for prototyping and then deliver in Java.  This worked
out twice so far.  The downside is having to translate it later. 


*Shudders at the thought.* Must be a what, x10 size blow-up?


I've done that too. It works fairly well for certain kinds of 
programs/problems, but you have to be careful about your abstractions.


For instance, Java Generics are no substitute for real parametric 
polymorphism. They only work for the simplest kind of container/element 
polymorphism, interact poorly (i.e., not at all) with subclassing, and 
explode with many of the higher-order tricks common in idiomatic 
Haskell. Any sort of higher-order programming (HOFs, point-free style, 
CPS, parametricity,...) rarely translates well--- especially if you want 
to avoid code bloat and have anything resembling idiomatic Java. Though 
sometimes defunctionalization[1] can help.


The code blow up varies. 10x is on the good side of things and indicates 
a good match of abstractions. 20x or 30x is more common I think. But if 
you're relying on any libraries or fancy datastructures, you'll be lucky 
not to have to reimplement everything...



[1]
http://blog.plover.com/prog/defunctionalization.html
http://www.brics.dk/RS/01/23/
http://cristal.inria.fr/~fpottier/publis/fpottier-gauthier-popl04.pdf

--
Live well,
~wren


Re: [Haskell-cafe] Re: Mining Twitter data in Haskell and Clojure

2010-06-18 Thread wren ng thornton

braver wrote:

Wren -- thanks for the clarification!  Someone said that Foldable on
Trie may not be very efficient -- is that true?


That was probably me saying that I had worked on some more efficient 
implementations than those currently in use. Alas, the more efficient 
ones seem to alter the order of enumerating the values, and I haven't 
had a chance to fix that (so they're not being used currently).


As for general efficiency, I haven't run any benchmarks for folds vs 
other datastructures so I can only speculate. If you're using foldl or 
foldr instead of foldrWithKey[1] then the performance should be 
comparable to Data.Map. For both data structures, all they do is 
traverse the spine and enumerate the leaves, so the indexing data on the 
spine nodes is irrelevant.




What's the exact relationship between Trie and Map and their
respective performance?


Data.Trie is a big-endian patricia tree with node fusion, which is the 
same datastructure as Data.IntMap. The only difference is that IntMap 
can only take fixed-length keys (namely Ints) whereas Trie can take 
variable length keys (i.e., any string of bits). On the other hand, 
Data.Map is a form of balanced tree, which is rather different. The API 
documentation gives links to the relevant papers.


The API documentation also gives their asymptotic performance. I don't 
have all the numbers for Trie yet because my sources didn't give them 
and I haven't had the chance to prove them myself, but they will be 
similar to those for IntMap. In general, because it is specialized for 
the key type, Trie will be more efficient than (Map Bytestring) just as 
IntMap is more efficient than (Map Int).


The exact performance characteristics can vary depending on the specific 
program, however. Because patricia trees and balanced trees are such 
different structures, they excel for different usage patterns. For 
example, if you wanted to get (or enumerate) the submap containing only 
the keys beginning with foo, tries will be much more efficient than 
the alternatives. However, if you need to use functions like 
foldrWithKey often, then tries will be at a disadvantage since they must 
reconstruct the keys.[1] As I recall, tries are also at an advantage if 
you often merge/meld maps; though I don't recall the algorithm Data.Map 
uses to be sure of that.
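
For instance, the prefix query mentioned above is a one-liner with 
Data.Trie (a small sketch):

import qualified Data.ByteString.Char8 as B
import qualified Data.Trie             as T

-- the subtrie of all entries whose keys begin with "foo"
fooEntries :: T.Trie a -> T.Trie a
fooEntries = T.submap (B.pack "foo")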


Similarly, since Map, IntMap, and Trie are persistent data structures, 
if you're not using that persistence then you may get better performance 
out of an ephemeral data structure like hash tables. Though you should 
first check to make sure that IntMap + interning is insufficient, since 
a big part of hash table's performance characteristics comes from the 
interning implicit in the hash function.


Choosing the right tool depends on how you want to use it.



[1] If you do need to use foldrWithKey often, an easy way to improve 
performance is to use a Trie (ByteString,Foo) and store the complete key 
alongside the value. This way you can use the faster foldl/foldr and 
still have access to the key. This will increase memory consumption, of 
course, though Trie does try to maximize sharing of key segments, so it 
won't cost as much as initially expected.
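
A sketch of that suggestion (the helper names are mine):

import           Data.ByteString (ByteString)
import qualified Data.Foldable   as F
import qualified Data.Trie       as T

type KeyedTrie a = T.Trie (ByteString, a)

insertKeyed :: ByteString -> a -> KeyedTrie a -> KeyedTrie a
insertKeyed k v = T.insert k (k, v)

-- uses the fast keyless Foldable fold, yet still sees the keys
foldrKeyed :: (ByteString -> a -> b -> b) -> b -> KeyedTrie a -> b
foldrKeyed f = F.foldr (\(k, v) b -> f k v b)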


--
Live well,
~wren


Re: [Haskell-cafe] lecture notes for Finally Tagless - benefit of explicit fix combinator

2010-06-19 Thread wren ng thornton

Günther Schmidt wrote:

Hi Stephen,

I'm glad I asked. This sure sounds more interesting than I had 
anticipated. Is this an old hat for your off-the-shelf haskeller or 
something only found in the more seasoned haskellers tool box?


I think it's pretty much the first time I encounter it.


It depends on what kind of seasoning you take :)

Another use case which is really common is to use open recursion for 
things like memoization. That is, if we write an open-recursive 
function, using `fix` turns it into a normal recursive function but we 
can use a different fixed-point combinator in order to memoize the 
results. Luke Palmer has packaged up a version of this on Hackage[1].
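
The classic small example of the trick (a sketch, independent of that package):

import Data.Function (fix)

-- the recursion is abstracted out as a parameter...
fibF :: (Int -> Integer) -> Int -> Integer
fibF _    0 = 0
fibF _    1 = 1
fibF self n = self (n - 1) + self (n - 2)

-- ...so fix ties the ordinary (slow) knot,
fib :: Int -> Integer
fib = fix fibF

-- while a different fixed point memoizes via a lazy list
memoFib :: Int -> Integer
memoFib = (map (fibF memoFib) [0 ..] !!)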


You can perform similar tricks on the type level. Wouter Swierstra 
combats the expression problem by using open recursion to build ad-hoc 
union types[2]. Whereas Tim Sheard uses a somewhat different version of 
open recursion to create parameterized modules[3], with a specific use 
case being to separate variables from structure in unification 
algorithms[4].



[1] http://hackage.haskell.org/package/data-memocombinators
[2] http://www.cs.nott.ac.uk/~wss/Publications/DataTypesALaCarte.pdf
[3] http://web.cecs.pdx.edu/~sheard/papers/JfpPearl.ps
[4] http://web.cecs.pdx.edu/~sheard/papers/generic.ps

--
Live well,
~wren


Re: [Haskell-cafe] Type-Level Programming

2010-06-25 Thread wren ng thornton

Jason Dagit wrote:

On Fri, Jun 25, 2010 at 2:26 PM, Walt Rorie-Baety black.m...@gmail.comwrote:


I've noticed over the - okay, over the months - that some folks enjoy the
puzzle-like qualities of programming in the type system (poor Oleg, he's
become #haskell's answer to the Chuck Norris meme commonly encountered in
MMORPGs).

Anyway,... are there any languages out there whose term-level programming
resembles Haskell type-level programming, and if so, would a deliberate
effort to appeal to that resemblance be an advantage (leaving out for now
the hair-pulling effort that such a change would entail)?


I'm not a prolog programmer, but I've heard that using type classes to do
your computations leads to code that resembles prolog.


Indeed. If you like the look of Haskell's type-level programming, you 
should look at logic programming languages based on Prolog. Datalog 
gives a well understood fragment of Prolog. ECLiPSe[1] extends Prolog 
with constraint programming. Mercury[2], lambda-Prolog[3], and Dyna give 
a more modern take on the paradigm.


If you're just a fan of logic variables and want something more 
Haskell-like, there is Curry[4]. In a similar vein there's also 
AliceML[5] which gives a nice futures/concurrency story to ML. AliceML 
started out on the same VM as Mozart/Oz[6], which has similar futures, 
though a different overall programming style.


And, as Jason said, if you're just interested in having the same 
programming style at both term and type levels, then you should look 
into dependently typed languages. Agda is the most Haskell-like, Epigram 
draws heavily from the Haskell community, and Coq comes more from the ML 
tradition. There's a menagerie of others too, once you start looking.



[1] http://eclipse-clp.org/ is currently down, but can be accessed at 
http://87.230.22.228/

[2] http://www.mercury.csse.unimelb.edu.au/
[3] http://www.lix.polytechnique.fr/~dale/lProlog/
[4] http://www-ps.informatik.uni-kiel.de/currywiki/
[5] http://www.ps.uni-saarland.de/alice/
[6] http://www.mozart-oz.org/

--
Live well,
~wren


Re: [Haskell-cafe] Type-Level Programming

2010-06-26 Thread wren ng thornton

Gregory Crosswhite wrote:

On 6/25/10 9:49 PM, wren ng thornton wrote:
[1] http://eclipse-clp.org/ is currently down, but can be accessed at 
http://87.230.22.228/

[2] http://www.mercury.csse.unimelb.edu.au/
[3] http://www.lix.polytechnique.fr/~dale/lProlog/
[4] http://www-ps.informatik.uni-kiel.de/currywiki/
[5] http://www.ps.uni-saarland.de/alice/
[6] http://www.mozart-oz.org/


Are any of those compatible with Haskell, so that we could mix code in 
that language with Haskell code?


Your best bets would be Agda and Curry. I'm not familiar enough with 
either of them to know what sort of FFI or cross-linking they support, 
but both are (by design) rather similar to Haskell. For Curry, it may 
also vary depending on the compiler.


With all the others, interaction will be restricted to generic FFI support.

--
Live well,
~wren


Re: [Haskell-cafe] Type-Level Programming

2010-06-26 Thread wren ng thornton

Andrew Coppin wrote:

Stephen Tetley wrote:

On 26 June 2010 08:07, Andrew Coppin andrewcop...@btinternet.com wrote:

Out of curiosity, what the hell does dependently typed mean anyway?


How about:

The result type of a function may depend on the argument value
(rather than just the argument type)


Hmm. This sounds like one of those things where the idea is simple, but 
the consequences are profound...


The simplest fully[1] dependently typed formalism is one of the many 
variants of LF. LF is an extension of the simply typed lambda calculus, 
extending the arrow type constructor to be ((x:a) -> b) where the 
variable x is bound to the argument value and has scope over b. In order 
to make use of this, we also allow type constructors with dependent 
kinds, for example with the type family (P : A -> *) we could have a 
function (f : (x:A) -> P x). Via Curry--Howard isomorphism this gives us 
first-order intuitionistic predicate calculus (whereas STLC is 1st-order 
propositional calculus). The switch from atomic propositions to 
predicates is where the profundity lies.


A common extension to LF is to add dependent pairs, generalizing the 
product type constructor to be ((x:a) * b), where the variable x is 
bound to the first component of the pair and has scope over b. This 
extension is rather trivial in the LF setting, but it can cause 
unforeseen complications in more complex formalisms.


Adding dependencies is orthogonal to adding polymorphism or to adding 
higher-orderness. Though orthogonal should probably be said with 
scare-quotes. In the PTS presentation of Barendregt's lambda cube these 
three axes are indeed syntactically orthogonal. However, in terms of 
formal power, the lambda cube isn't really a cube as such. There are 
numerous shortcuts and wormholes connecting far reaches in obscure 
non-Euclidean ways. The cube gives a nice overview to start from, but 
don't confuse the map for the territory.


One particular limitation of LF worth highlighting is that, even though 
term-level values may *occur* in types, they may not themselves *be* 
types. That is, in LF, we can't have any function like (g : (x:a) -> x). 
In the Calculus of Constructions (CC)[2], this restriction is lifted in 
certain ways, and that's when the distinction between term-level and 
type-level really breaks down. Most current dependently typed languages 
(Coq, Agda, Epigram, LEGO, NuPRL) are playing around somewhere near 
here, whereas older languages tended to play around closer to LF or ITT 
(e.g., Twelf).


And as a final note, GADTs and type families are forms of dependent 
types, so GHC Haskell is indeed a dependently typed language (of sorts). 
They're somewhat kludgy in Haskell because of the way they require code 
duplication for term-level and type-level definitions of the same 
function. In real dependently typed languages it'd be cleaner to work 
with these abstractions since we could pass terms into the type level 
directly, however that cleanliness comes with other burdens such as 
requiring that all functions be provably terminating. Managing to 
balance cleanliness and the requirements of type checking is an ongoing 
research area. (Unless you take the Cayenne route and just stop caring 
about whether type checking will terminate :)
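
To make that duplication concrete, here's a minimal sketch (plain GHC 
Haskell with EmptyDataDecls and TypeFamilies; the names are mine, not 
from any library): the same addition function has to be written once at 
the term level and once again at the type level.

    {-# LANGUAGE EmptyDataDecls, TypeFamilies #-}

    -- Term-level naturals and addition:
    data Nat = Zero | Succ Nat

    plus :: Nat -> Nat -> Nat
    plus Zero     m = m
    plus (Succ n) m = Succ (plus n m)

    -- ...and the very same function again, at the type level:
    data Z
    data S n

    type family Plus n m :: *
    type instance Plus Z     m = m
    type instance Plus (S n) m = S (Plus n m)

In a real dependently typed language the first definition alone would 
serve both purposes.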



[1] Just as Hindley--Milner is an interesting stopping point between 
STLC and System F, there are also systems between STLC and LF.


[2] In terms of formal power: CC == F_omega + LF == ITT + System F.

--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Type-Level Programming

2010-06-26 Thread wren ng thornton

Andrew Coppin wrote:
I think I looked at Coq (or was it Epigram?) and found it utterly 
incomprehensible. Whoever wrote the document I was reading was obviously 
very comfortable with advanced mathematical abstractions which I've 
never even heard of.


One of the things I've found when dealing with--- er, reading the 
documentation for languages like Coq, is that the class of problems 
which motivate the need to move to such powerful formalisms are often so 
abstract that it is hard to come up with simple practical examples of 
why anyone should care. There are examples everywhere, but complex 
machinery is only motivated by complex problems, so it's hard to find 
nice simple examples.


In particular, I've noticed that once you start *using* Coq (et al.) and 
trying to prove theorems about your programs, the subtle issues that 
motivate the complex machinery begin to make sense. For example, there's 
a lot of debate over whether Axiom K is a good thing or not. Just 
reading the literature doesn't shed any light on the real ramifications 
of having the axiom vs not; you really need to go about trying to prove 
definitional equalities and seeing the places where you can't proceed 
without it, before you can appreciate what all the hubbub is about.



It's a bit like trying to learn Prolog from 
somebody who thinks that the difference between first-order and 
second-order logic is somehow common knowledge. (FWIW, I have 
absolutely no clue what that difference is.


First-order means you can only quantify over simple things. Second-order 
means you can also quantify over quantification, as it were.


For example, the type system of simply-typed lambda calculus is 
1st-order intuitionistic propositional logic, and System F (i.e., STLC + 
rank-n polymorphism) is 2nd-order. (F_omega is higher-order, but that's 
one of those wormholes in the lambda cube.)
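
A rough Haskell illustration of the jump (a sketch, not a formal claim): 
with ordinary prenex polymorphism every forall sits outermost, whereas 
RankNTypes lets a forall appear to the left of an arrow, i.e. lets us 
quantify over something which is itself quantified.

    {-# LANGUAGE RankNTypes #-}

    -- Prenex (1st-orderish) polymorphism: the forall is outermost.
    twice :: forall a. (a -> a) -> a -> a
    twice f = f . f

    -- Rank-2: the argument must itself be polymorphic; the forall
    -- sits to the left of an arrow.
    useId :: (forall a. a -> a) -> (Int, Bool)
    useId f = (f 0, f True)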


Though higher-order is a much abused term, which may cause some of the 
confusion. The higher-orderness discussed above has to do with 
quantification within types, which has to do with higher-orderness of 
the related logics. But we can also talk about a different sort of 
higher-orderness, namely what distinguishes System F from F_omega, or 
distinguishes LF from ITT. In STLC we add a simple language of types at 
a layer above the term layer, in order to make sure the term layer 
behaves itself. After hacking around we eventually decide it'd be cool 
to have functions at the type level. But how to we make sure that our 
types are well-formed? Well, we add a new layer of simple types on top 
of the type layer--- which gives us a second-order system. We could 
repeat the process again once we decide we want kind-level functions too.[1]


To take a different track, in cognitive science people talk about 
theory of mind, i.e. the idea that we each theorize that other people 
have minds, desires, beliefs, etc. A first-order theory of mind is when 
we attribute a mind to other people. A second-order theory of mind is 
when we attribute a theory of mind to other people (i.e., we believe 
that they believe that we have a mind). Etc.


In epistemic/doxastic logics we can talk about first-order 
knowledge/beliefs, that is beliefs in simple propositions. But we can 
also talk about second-order beliefs, that is beliefs about beliefs. Etc.



[1] See Tim Sheard's Omega for the logical conclusion of this process.

--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Type-Level Programming

2010-07-01 Thread wren ng thornton

Andrew Coppin wrote:

wren ng thornton wrote:

Andrew Coppin wrote:
It's a bit like trying to learn Prolog from somebody who thinks that 
the difference between first-order and second-order logic is somehow 
common knowledge. (FWIW, I have absolutely no clue what that 
difference is.


First-order means you can only quantify over simple things. 
Second-order means you can also quantify over quantification, as it were.


Sure. I get the idea that second-order means like first-order, but 
you can apply it to itself (in most contexts, anyway).


Kind of. More accurately you can apply it to itself once. That is, 
continuing the quantification version:


First we have a set S. In the 0th-order world we can only pick out 
particular elements of S by name; we can't say "oh, for any old S". In 
the 1st-order world we are allowed to quantify over elements of S (i.e., 
pick out subsets). However, in so doing, we implicitly create a new set 
S' which contains the names of all possible 1st-order quantifications 
over S. And in the 1st-order setting we have no way of quantifying over 
S', we must pick out elements of S' by name. But in the 2nd-order 
setting we can use 2nd-order quantification in order to say any old 
element of S'[1]. Of course, this will implicitly create a set S'' of 
2nd-order quantifications... ... ...


In practice, there seems to be something magical about 2nd-order 
systems. That is, while there are differences between 2nd-order and 
higher-order (i.e., >2, including the limit), these differences tend to 
be trivial/obvious, whereas the leap from 1st-order systems to 2nd-order 
systems causes remarkable and catastrophic[2] changes. So technically 
2nd-order systems are restricted to only quantifying over their 
1st-order subsystems, but they introduce the notion of self-reference, 
which is as different from quantification as quantification is from naming.




But what the heck does quantification mean?


In the simplest sense, it's the ability to enumerate some collection. 
Though that only raises more questions: "which collection?", "how big of 
a collection can it be?" (or equivalently, "how powerful of an enumerator 
does it use?"), "is it guaranteed to yield *every* element of the 
collection?",... In an only somewhat more general sense, it's a question 
philosophers (and logicians) have argued over for centuries.


But more apropos, the distinction between orders is really a distinction 
about the power/nature of a system/theory. Quantification over sets is 
an example of a theory where we'd want to distinguish orders, but it 
isn't the only one. The categorization of lambda calculi into layers of 
terms, types, kinds, etc. is another example (where each layer is an 
order). The other example I have about Theory of Mind is yet another 
example, and perhaps the clearest for explaining what the distinction is 
ultimately trying to get at. In 1st-order theories we can generate 
hypotheses about the world, but in 2nd-/higher-order theories we can 
also generate hypotheses about our theories. Again, the 2nd-order 
introduces the notion of self-reference, the idea of a theory containing 
itself as an object, and the infinite regress of impredicativity.[3]





[1] And we can still use 1st-order quantification in order to enumerate 
S. Frequently mathematicians don't bother distinguishing the different 
varieties of quantification available in a given system; so long as we 
know what the upper limit is, we use the same notation for everything 
under that limit. Sometimes, however, we do need to distinguish the 
different quantifiers. For example, this shows up in the stratification 
of universes to avoid impredicativity in Coq, Agda, etc... this is 
usually hidden/implicit from the programmer, though it shows up in the 
compiler/type checker.


[2] That is, world-altering. Not necessarily bad.

[3] Though no one can really say what impredicativity *is* either ;)

--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Type-Level Programming

2010-07-01 Thread wren ng thornton

Andrew Coppin wrote:
I did wonder what the heck a type function is or why you'd want one. 
And then a while later I wrote some code along the lines of


 class Collection c where
   type Element c :: *
   empty :: c -> Bool
   first :: c -> Element c

So now it's like Element is a function that takes a collection type and 
returns the type of its elements - a *type function*.


Suddenly the usual approach of doing something like

 class Collection c where
   empty :: c x -> Bool
   first :: c x -> x

seems like a clumsy kludge, and the first snippet seems much nicer. (I 
gather that GHC's implementation of all this is still fragile though? 
Certainly I regularly get some very, very strange type errors if I try 
to use this stuff...)


Adding type functions introduces a lot of theoretical complexity to the 
type system. It's very easy to end up losing decidability of type 
inference/checking, not giving the power/properties you want, or both. I 
get the impression that folks are still figuring out exactly how to 
balance these issues in Haskell.


For example, should type functions be injective (i.e., (F x = F y) ==> (x 
= y)) or not? If we make them injective then that rules out writing 
non-injective functions. But if we don't, then we cripple the type 
inferencer. This is why there's a distinction between associated 
datatypes (injective) vs associated type aliases (not).
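
A small sketch of the difference (container names invented for 
illustration): a type alias family may map two different indices to the 
same type, which is exactly what injectivity would forbid, whereas each 
data family instance introduces a brand new datatype and so can always 
be inverted.

    {-# LANGUAGE EmptyDataDecls, TypeFamilies #-}

    data IntBag   -- two hypothetical containers
    data IntList

    -- Type alias family: NOT injective. Both instances map to Int,
    -- so from (Elem c ~ Int) the inferencer cannot recover c.
    type family Elem c :: *
    type instance Elem IntBag  = Int
    type instance Elem IntList = Int

    -- Data family: injective. Each instance is a distinct datatype,
    -- so (ElemD c ~ ElemD c') forces c ~ c'.
    data family ElemD c
    data instance ElemD IntBag  = BagElem  Int
    data instance ElemD IntList = ListElem Int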




the only kinds available are * and (k1 -> k2)


Does # not count?


The other kinds like #, ?, and ?? are just implementation details in 
GHC. By and large they are not theoretically significant. Also, other 
compilers (e.g. jhc) have different kinding systems for optimizations.


Some folks have advocated for distinguishing the kind of proper types 
from the kind of type indices. This is theoretically significant, and I 
think it's a fabulous idea. But it hasn't been implemented AFAIK.


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: chart broken under 6.12 according to criterion

2010-07-01 Thread wren ng thornton

Neil Brown wrote:

On 01/07/10 10:19, Tom Doris wrote:
According to the criterion.cabal file shipped with the latest 
(0.5.0.1) version of criterion, the Chart package is broken under GHC 
6.12:

flag Chart
   description: enable use of the Chart package
   -- Broken under GHC 6.12 so far
Does anyone know the status of this problem? It's been a little 
frustrating getting Criterion up and running - it didn't work at all 
under 6.10 due to a compiler bug (The impossible happened error on 
uvector install) and now it works under 6.12 but without the nice 
charts that are so useful. Appreciate any insight or workarounds for 
this, thanks



Hi,

cabal install criterion -fChart --reinstall builds fine for me on GHC



OS= OSX 10.5.8
GHC   = 6.12.1
Cabal-Install = 0.8.2
Cabal = 1.8.0.2

$ cabal install criterion -fChart --reinstall
   Resolving dependencies...
...
   Configuring cairo-0.11.0...
   setup: gtk2hsC2hs is required but it could not be found.
...
   Configuring glib-0.11.0...
   setup: gtk2hsC2hs is required but it could not be found.
...
   cabal: Error: some packages failed to install:
   Chart-0.13.1 depends on glib-0.11.0 which failed to install.
   cairo-0.11.0 failed during the configure step. The exception was:
   ExitFailure 1
   criterion-0.5.0.1 depends on glib-0.11.0 which failed to install.
   gio-0.11.0 depends on glib-0.11.0 which failed to install.
   glib-0.11.0 failed during the configure step. The exception was:
   ExitFailure 1
   gtk-0.11.0 depends on glib-0.11.0 which failed to install.
   pango-0.11.0 depends on glib-0.11.0 which failed to install.


Presumably I just need to install some additional non-Hackage libraries, 
though it's not clear how much gtk cruft needs pulling in. FWIW.


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How easy is it to hire Haskell programmers

2010-07-02 Thread wren ng thornton

Andrew Coppin wrote:
Hmm, interesting. Applicative and Traversable are two classes I've never 
used and don't really understand the purpose of.


Their main purpose is to avoid the list bias so prevalent from the 
Lispish side of FP. Namely, there are many different kinds of 
collections which can be traversed or folded in ways similar to lists, 
the Foldable and Traversable classes give a common interface for all 
those collections thereby removing the excuse of using lists because 
they have a nicer/more-FP interface than Set, Map, Sequence,...


More abstractly, they capture essential properties of concrete functors 
which represent containers. So for folks who think of functors as (only) 
being containers, these are the methods they expect to have.
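
For instance, something like the following works uniformly over any such 
container (a small sketch; total is my name for it, not a library 
function):

    import Data.Foldable (Foldable, foldMap)
    import Data.Monoid (Sum(..))
    import qualified Data.Map as Map
    import qualified Data.Set as Set

    -- One definition, no list bias:
    total :: (Foldable f, Num a) => f a -> a
    total = getSum . foldMap Sum

    -- total [1,2,3]                          == 6
    -- total (Set.fromList [1,2,3])           == 6
    -- total (Map.fromList [('a',1),('b',2)]) == 3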




I have no idea what hsc2hs is.


One of the preprocessors for generating bindings between C and Haskell. 
Edward Yang gives a general discussion[0] of the many tools for 
C--Haskell binding and how they compare.



I keep hearing finger trees mentioned, but only in connection 
to papers that I can't access.


Check out Data.Sequence[1] and Data.FingerTree[2] for canonical 
examples. There's also Edward Kmett's Data.Rope[3] (not to be confused 
with Pierre-Etienne Meunier's Data.Rope[3']). Also, see apfelmus's post 
[4] about finger trees and monoids for a more general perspective.



[0] http://blog.ezyang.com/2010/06/the-haskell-preprocessor-hierarchy/
[1] http://hackage.haskell.org/package/containers
[2] http://hackage.haskell.org/package/fingertree
[3] http://hackage.haskell.org/package/rope
[3'] http://hackage.haskell.org/package/Data-Rope
[4] http://apfelmus.nfshost.com/articles/monoid-fingertree.html

--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Rewriting a famous library and using the same name: pros and cons

2010-07-05 Thread wren ng thornton

Ivan Lazar Miljenovic wrote:

Stephen Tetley stephen.tet...@gmail.com writes:


I think it was Hugs compliant as least for some revisions - I seem to
remember looking at it before I switched to GHC.


People still use Hugs? :p


MPJ uses it for teaching Haskell because it's a lot easier to install 
than GHC. I've heard tale of people using it in the embedded world 
(again because of easy portability), though I haven't witnessed it to 
know how recent that is.


I do my best to retain compatibility with Hugs for my Hackage libraries, 
and I use Hugs to define the boundary between semiportable (i.e., not 
H98 but still portable) vs nonportable (i.e., GHC-only). Doing so has 
highlighted some simple bugs in Cabal which, AFAIK, haven't been fixed 
yet. Perhaps because I seem to be the only advocate for trying to keep 
Hugs support alive, and I don't know the Cabal code well enough to make 
the fix myself.




A bit more seriously: is there any listing anywhere of which extensions
Hugs supports?


Cabal has a partial listing embedded in its code, though I can't seem to 
find a textual version at the moment. In general, Hugs has all the 
features of GHC 6.6: FFI, CPP, MPTCs, FunDeps, OverlappingInstances,... 
I'm forgetting off-hand whether it has Rank2Types/RankNTypes, but I 
think so. The one notable difference between Hugs and GHC6.6 is that it 
does not have IncoherentInstances, and instead supports a 
different/incompatible way of trying to solve that problem.


Of course, since it's GHC6.6-era that means it doesn't support LANGUAGE 
pragma to enable these features. The -98 flag enables most of the 
extensions with separate flags for OverlappingInstances, 
Not-IncoherentInstances, and CPP (or any other preprocessor); and in 
order to use the FFI you must first run ffihugs in order to compile the 
bindings. Because the shift from GHC6.6 to GHC6.8 was so significant, 
it'd be nice if someone put out a maintenance version of Hugs which adds 
support for LANGUAGE pragma and similar compilation interface issues, 
even if the underlying code base remains the same. One can dream, eh?




Definitely serious: if anything, I would care more about ensuring
compatability with JHC, UHC, etc. than Hugs nowadays.


Certainly JHC and UHC should be targeted for compatibility. However, 
doing so seems unlikely to come at the cost of compatibility with Hugs. 
Hugs implements H98 along with the most venerable and widely used 
extensions. Consequently, the language Hugs understands happens to be 
the same one that's the goal of modern Haskell compilers like JHC and 
UHC. However little maintenance Hugs is getting, it captures all of 
classic Haskell. The hallmarks of new Haskell like GADTs and type 
functions are, so far as I'm aware, considered a more distant goal for 
alternative compilers.


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Rewriting a famous library and using the same name: pros and cons

2010-07-05 Thread wren ng thornton

Ivan Lazar Miljenovic wrote:

Stephen Tetley stephen.tet...@gmail.com writes:


On 3 July 2010 14:00, Ivan Lazar Miljenovic ivan.miljeno...@gmail.com wrote:


So this argument isn't valid ;-)

I think it was Hugs compliant as least for some revisions - I seem to
remember looking at it before I switched to GHC.


Actually, how would you call it in Hugs?  Just start it explicitly with
the -98 flag?


For most extensions you use the -98 flag, for OverlappingInstances you 
use +o, and for CPP you use -F'cpp -P -traditional -D__Hugs__' (or 
similar). If FFI is required then call the program with ffihugs first. E.g.


$ ffihugs -98 +o -F'cpp -P -traditional' Bar.hs
$ hugs -98 +o -F'cpp -P -traditional' Bar.hs

Hugs does not know about LANGUAGE pragmas, as those were added in GHC 
6.8. And Cabal does not infer the need for -98 or +o based on the 
Extensions: field, so you'll have to add them to Hugs-Options: manually. 
Also, there are some bugs in Cabal preventing the mixture of CPP and 
FFI, even though this is fine in Hugs. See 
http://community.haskell.org/~wren/cabal-ffihugstest for more details.


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] More experiments with ATs

2010-07-05 Thread wren ng thornton

Andrew Coppin wrote:

Brent Yorgey wrote:
On Sun, Jul 04, 2010 at 10:31:34AM +0100, Andrew Coppin wrote:   
I have literally no idea what a type family is. I understand ATs (I 
think!), but TFs make no sense to me.



ATs are just TFs which happen to be associated with a particular
class.  So if you understand ATs then you understand TFs too, you just
didn't know it. =)
  


How would that make sense though? I'm having trouble forming a mental 
image of how / why you'd use that...


Type families ---in the traditional dependent-types sense, not 
necessarily the GHC sense--- are a collection of related types which are 
the same in some sense, but which are distinguished from one another 
by indices.


There are two ways of interpreting this. The first approach is to think 
about type families where the index gives you some type-level 
information about the term-level value; this is the same thing as GADTs. 
That is, if we take the standard whipping boy:


data List :: * -> * -> * where
    Nil  :: forall a. List a Z
    Cons :: forall a. a -> List a n -> List a (S n)

then we'll want some way of talking about all Lists, irrespective of 
their lengths. Thus, List A (for some A) is a type family, but not a 
type. There's a fundamental difference here between parametric 
polymorphism and type indices. In Coq it's made more explicit (though 
it's still rendered subtle), but in Haskell it can be a bit harder to see.



The other perspective is to think of type families as a function from 
indices to types, where the particular function constitutes the type 
family. This is the approach taken by associated types and type families 
in GHC. TFs, as separate from ATs, are useful for implementing 
type-level functions. For example, we could define addition on Z and S, 
which would allow us to give the type


(++) :: List a n -> List a m -> List a (Plus n m)

The type family Plus takes two indices and returns a type, but it does 
so in a way that it is allowed to perform case analysis on the input 
arguments--- and therefore is not parametric.
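
Spelled out as a self-contained sketch (the standard example; I've 
renamed (++) to append to dodge the Prelude clash):

    {-# LANGUAGE EmptyDataDecls, GADTs, KindSignatures, TypeFamilies #-}

    data Z
    data S n

    data List :: * -> * -> * where
        Nil  :: List a Z
        Cons :: a -> List a n -> List a (S n)

    -- Case analysis on the indices, hence not parametric:
    type family Plus n m :: *
    type instance Plus Z     m = m
    type instance Plus (S n) m = S (Plus n m)

    append :: List a n -> List a m -> List a (Plus n m)
    append Nil         ys = ys
    append (Cons x xs) ys = Cons x (append xs ys)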


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] cereal vs. binary

2010-07-06 Thread wren ng thornton

braver wrote:

I dump results of a computation as a Data.Trie of [(Int,Float)].  It
contains about 5 million entries, with the lists of 35 or less pairs
each.  It takes 8 minutes to load with Data.Binary and lookup a single
key.  What can take so long?  If I change from compressed to
uncompressed (and then decode), it's the same time...  It's not IO,
CPU is loaded 100%.


The Binary instance for Trie is based on the old Binary instance for 
Data.IntMap. There were some corner-case performance issues with the 
latter which were recently fixed[1], but I haven't had a chance to look 
at the new instance or to figure out if the changes would also be 
relevant for Trie. So this might be a potential source of your problems.


[1] Alas, I can't find the thread discussing the new instance ATM.



I'm now thinking of using cereal.  Given I have Data.Binary in place,
what needs to be changed to work with cereal?  Is it binary-
compatible?  How can one construct a cereal instance for Data.Trie?


If you send me an instance, I can apply the patch.

--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: The state of Hugs

2010-07-06 Thread wren ng thornton

Ketil Malde wrote:

wren ng thornton w...@freegeek.org writes:


A bit more seriously: is there any listing anywhere of which extensions
Hugs supports?



Cabal has a partial listing embedded in its code, though I can't seem
to find a textual version at the moment. In general, Hugs has all the
features of GHC 6.6: FFI, CPP, MPTCs, FunDeps,
OverlappingInstances,... I'm forgetting off-hand whether it has
Rank2Types/RankNTypes, but I think so.


Indeed Rank2Types are supported.



Of course, since it's GHC6.6-era that means it doesn't support
LANGUAGE pragma to enable these features.


I'm not entirely up to speed on Hugs nor Haskell 2010, but it seems that
it's fairly close to compliance?  


Well here's the list of accepted proposals for 2010:

http://hackage.haskell.org/trac/haskell-prime/query?state=accepted&milestone=Haskell+2010

So far these ones are already implemented:

HierarchicalModules
FixityResolution
ForeignFunctionInterface
RelaxedDependencyAnalysis

Which leaves the following (seemingly simple) syntactic changes:

DoAndIfThenElse
LineCommentSyntax
NoNPlusKPatterns

And the following more significant changes:

EmptyDataDeclarations
PatternGuards
LanguagePragma

How significant these last three actually are would depend on the code 
base. They shouldn't be too difficult I wouldn't think, but then I don't 
know much about Hugs' code nor what sorts of implementation details the 
extensions might run into.


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: the state of Hugs 2

2010-07-06 Thread wren ng thornton

Daniel Fischer wrote:

On Tuesday 06 July 2010 07:04:18, wren ng thornton wrote:

Cabal has a partial listing embedded in its code, though I can't seem to
find a textual version at the moment. In general, Hugs has all the
features of GHC 6.6: FFI, CPP, MPTCs, FunDeps, OverlappingInstances,...
I'm forgetting off-hand whether it has Rank2Types/RankNTypes, but I
think so. The one notable difference between Hugs and GHC6.6 is that it
does not have IncoherentInstances, and instead supports a
different/incompatible way of trying to solve that problem.


Another notable difference are BangPatterns.


Were BangPatterns in 6.6? I never used them until 6.8, so...



If support for BangPatterns could be added, that would be very convenient.
Of course you can do it with seq, but BangPatterns are, well, very 
convenient compared to seq.

Of course my plea for BangPatterns extends to JHC/UHC etc.


Since Hugs doesn't do strictness analysis (AFAIK), it seems like it 
should be implementable in a desugaring pass. At the expense of writing 
a Haskell parser, it could be done via the -F flag without even updating 
Hugs. Though that'd be an ugly hack.
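
The translation itself is simple enough; roughly (a sketch with a 
made-up function body):

    -- With BangPatterns:
    --     f !x y = x + y
    -- a desugaring pass could emit plain Haskell 98:
    f :: Int -> Int -> Int
    f x y = x `seq` (x + y)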


I don't know whether anyone's still reading them, but you could post a 
ticket on http://hackage.haskell.org/trac/hugs/ for it.


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Functional dependencies and Peano numbers

2010-07-09 Thread wren ng thornton

Brandon S Allbery KF8NH wrote:

On 7/6/10 15:37 , Oscar Finnsson wrote:

but can they also be on a form similar to


a b c d e f g h | b c -> d e f | b d g -> h

 (i.e. d,e,f are decided by the b,c-combination while h is decided by
the b,d,g-combination)?


I think the answer to this is yes, but if you have an MPTC with 8
parameters then you desperately need to refactor.


Yes, you can add multiple dependencies. The syntax is to use , after the 
first |.


While having eight parameters is surely a desperate need for 
refactoring, there are times when you'd want multiple dependencies. For 
example, you can say


class F a b | a -> b, b -> a where...

to express a bijective function on types (that is, for every pair of A 
and B, if you know one of them then you know what the other must be 
uniquely).
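
For example (all names invented for illustration), a two-way conversion 
where each type determines the other:

    {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}

    newtype Celsius    = Celsius    Double
    newtype Fahrenheit = Fahrenheit Double

    class Convert a b | a -> b, b -> a where
        convert :: a -> b

    instance Convert Celsius Fahrenheit where
        convert (Celsius c) = Fahrenheit (c * 9/5 + 32)

    instance Convert Fahrenheit Celsius where
        convert (Fahrenheit f) = Celsius ((f - 32) * 5/9)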


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Comments on Haskell 2010 Report

2010-07-09 Thread wren ng thornton

Julian Fleischer wrote:

Hi,


8. [...] Saying 0**0 is undefined seems reasonable,
but why 0**y?

I agree on 0**y being 0 (not undefined), but why should 0**0 be undefined? x**0 
:= 1, by convention.


I'm not familiar with that convention. So far as I'm aware, the x**0=1 
vs 0**y=0 conflict leads to 0**0 being best handled as undefined. That 
is, I've not seen any arguments supporting either solution as somehow 
more natural or more helpful in mathematics. /source-please


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Comments on Haskell 2010 Report

2010-07-09 Thread wren ng thornton

Christopher Done wrote:

On 10 July 2010 01:22, Ivan Lazar Miljenovic ivan.miljeno...@gmail.com wrote:

Brandon S Allbery KF8NH allb...@ece.cmu.edu writes:


On 7/8/10 22:25 , Alex Stangl wrote:

1. I.E. and e.g. should be followed by commas -- unless UK usage
differs from US standards. (Page 3 and elsewhere, although FFI chapter

I don't think I've ever seen them *followed* by commas.  Preceded,
always.



From The Haskell 98 Library Report:



 partition takes a predicate and a list and returns a pair of lists: those 
elements of the argument list that do and do not satisfy the predicate, 
respectively; i.e., [...]


I don't think you should bother nitpicking about commas here. As
Gregory said, anywhere there is i.e., you can substitute it for
that is. Consider if the spec. was written with that is instead of
i.e., would you then criticise where and where not commas are used?
Does this arbitrary prescription really matter?


This is LaTeX folks! All prescriptivism can be solved by judicious use 
of macros. Honestly, for dealing with all the different journals' style 
guides, stuff like this should be standard fare for mere punctuation 
differences:


\NeedsTeXFormat{LaTeX2e}
\ProvidesPackage{latin}[2010/07/09 resolve style disputes]
\RequirePackage{xspace}

\newif\if@comma
\@commatrue
\DeclareOption{comma}{\@commatrue}
\DeclareOption{nocomma}{\@commafalse}
\ProcessOptions

% According to most style guides we should not italicize common
% Latin, since it needlessly draws attention to unimportant text.
% For declarativity we specify this command so that clients can
% choose to italicize if desired.
\newcommand{\latin}[1]{#1}

\newcommand{\.}{% Or choose another name if you please
\if@comma
.,\xspace
\else
.\,
\fi
}

\newcommand{\Cf}{\latin{Cf}\.}
\newcommand{\cf}{\latin{cf}\.}
\newcommand{\Eg}{\latin{E.g}\.}
\newcommand{\eg}{\latin{e.g}\.}
\newcommand{\Ie}{\latin{I.e}\.}
\newcommand{\ie}{\latin{i.e}\.}

--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: [Haskell] ANNOUNCE: jhc 0.7.4

2010-07-10 Thread wren ng thornton

Brandon S Allbery KF8NH wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 7/10/10 17:01 , Antoine Latter wrote:

* The way you use sed doesn't work with the BSD sed that ships with my
Mac Book. Installing GNU sed and using it works. Similarly, BSD find
doesn't know about '-name', so make hl-clean results in sadness.


Haven't looked at sed, but -name should work in any version of find.  Are
you sure about this?


FWIW -name (and -iname) are supported by OSX-10.5 find. And it seems 
really odd to imagine a find that wouldn't support them...


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: [Haskell] ANNOUNCE: jhc 0.7.4

2010-07-10 Thread wren ng thornton

John Meacham wrote:

On Sat, Jul 10, 2010 at 04:01:53PM -0500, Antoine Latter wrote:

* running DrIFT on src/E/TypeCheck.hs fails with an illegal
bytesequence in hGetContents. I'm guessing that this is only an issue
when building DrIFT with GHC 6.12+, and that the file contains bytes
illegal in UTF8. I deleted everything funny looking in the file and
then it went smooth


Hi, are you compiling from the tarball or the darcs repository? the
tarball shouldn't require DrIFT to be installed. I had not tested DrIFT
with 6.12 but that file should be in UTF8. Hmm... on OSX, is the default
locale a UTF8 one? does ghc 6.12 properly encode to/from utf8 on it by
default? could you check, I don't have a mac handy.


Generally OSX takes UTF16 to be the standard encoding (I don't recall if 
it's LE or BE), though UTF8 is supported almost everywhere. I haven't 
checked to see whether that would affect this particular task though.


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: [Haskell] ANNOUNCE: jhc 0.7.4

2010-07-15 Thread wren ng thornton

Brandon S Allbery KF8NH wrote:

wren is half right:  at the level of Unixy APIs (and this includes anything
that goes on in a Terminal window and anything that you will be doing from
Haskell) you use UTF8, but OSX APIs --- that is, Carbon and Cocoa --- use
UTF16.  So for the purposes of ghc/jhc OSX is UTF8, but if someone wrote a
fully native Cocoa-based runtime it would use UTF16.  (Compare Win32 APIs
to mingw APIs; very similar situation.)


Yeah, I've been doing too much Cocoa lately ;)  Unfortunately, certain
parts of Cocoa aren't UTF16, which is annoying. Standard terminal stuff
should all be UTF8 though.

--
Live well,
~wren

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: lambda calculus and equational logic

2010-07-15 Thread wren ng thornton

Patrick Browne wrote:

Heinrich Apfelmus wrote:

3) Not sure what you mean by proof theoretic semantics. Apparently, the
trace of any program execution like, say

   product [1..5] -> 1 * product [2..5] -> ... -> 120

is a proof that the initial and the final expression denote the same value.

The Curry--Howard correspondence is about the type system, viewing types
as logical propositions and programs as their proofs.


This seems to address the first point (the role of lambda calculus.
But what of the second point, the role equational logic.
I assume (perhaps incorrectly) that lambda calculus is not *a logic*.
See: http://lambda-the-ultimate.org/node/3719



That depends a lot on what you think logic means. Certainly we should 
be willing to call Haskell's type system a logic, with the term-level 
serving as proofs of the types/propositions (by Curry/Howard). It's a 
woefully inconsistent logic[1], sure, but a logic nevertheless.


Whether we would want to call the term-level a logic is a different 
matter. And this is where we get into terminological disputes; e.g., to 
what extent are algebraic systems different/similar to logical systems? 
(and to what extent are computational systems similar/different to 
algebraic systems?) This isn't just an issue of terminology: it throws a 
spotlight on elephants hiding in the room, elephants such as whether all 
mathematics can be reduced to logic, what the relationship is between 
logic and language, etc. Mathematicians, logicians, philosophers, 
theoretical linguists, and computer scientists all have a stake in those 
arguments--- a stake which is, at best, tangential to the actual 
question, and therefore makes it hard to get a decent answer/discussion.


Equational logic is used heavily for reasoning about programs in Haskell 
(and pure functional languages generally). It shows up in the safety 
proofs for the type system, correctness proofs for compilers, strictness 
analysis, optimizations, etc. However, as you mention in that LtU post, 
equational reasoning doesn't really show up *in* Haskell, only in 
reasoning *about* Haskell. Without dependent types or a more developed 
formal language for reasoning with RULES and proving their correctness, 
it does not seem possible to support equational reasoning within Haskell 
(except in the most trivial sense that defining functions is defining a 
system of equations). It's not clear to me whether you're asking about 
equational reasoning for theory or metatheory.
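
As a tiny example of the *about* kind, here's the usual map-fusion law, 
proved by unfolding definitions (a sketch; it holds for finite, total 
lists):

    -- Claim: map f . map g = map (f . g)
    --
    -- Case []:
    --     map f (map g [])     = map f []      = []
    --     map (f . g) []                       = []
    --
    -- Case (x:xs), assuming the claim for xs:
    --     map f (map g (x:xs)) = map f (g x : map g xs)
    --                          = f (g x) : map f (map g xs)
    --                          = (f . g) x : map (f . g) xs
    --                          = map (f . g) (x:xs)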


Also, because the type system is inconsistent, that raises the question 
of what it could mean to reason in such a logic. Logicians throw up 
their hands whenever a logic is inconsistent, but computer scientists 
have been reasoning with inconsistency for years. In discussing a 
constructive LEM[2], Oleg highlights some interesting connections between 
the idea of monads-as-computation and well-known results about how the 
boundary of computability separates the space of Classical and 
Intuitionistic theorems. Issues of inconsistency also show up in 
reasoning about language-based security systems where sometimes we may 
wish to allow inconsistent states during computation so long as there is 
a formal guarantee that consistency is restored soon. But overall, the 
question of what it means to reason formally and correctly in the face 
of an inconsistent system is still an open question.



[1] Because of bottom, as well as many other reasons:
http://okmij.org/ftp/Haskell/types.html#fix

[2] http://okmij.org/ftp/Computation/lem.html

--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] trees and pointers

2010-07-15 Thread wren ng thornton

Jake McArthur wrote:

On 07/15/2010 05:33 PM, Victor Gorokhov wrote:

From the docs, lookup is O(min(n,W))

Actually worse than O(log n).


Perhaps I am misunderstanding you, but O(min(n,W)) is either better than 
or the same as O(log n), depending on how you look at things, but I 
don't see any way that the former could be *worse* than the latter.


For n < W: min(n,W) = n > log n

So, when you can guarantee that n < W ---which is almost always the case 
for IntMap---, then O(min(n,W)) is linear and therefore worse than O(log n).


But even so, if your constant factors are k < c, then k*n < c*log n is 
perfectly possible for all n < W, and therefore what matters in the real 
world here is the constant factors. The reason why is that for 
asymptotic purposes O(min(n,W)) and O(log n) belong to the same class of 
functions between constant and linear, so they're effectively the same 
(in asymptotic-land).


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A question about State Monad and Monad in general

2010-07-15 Thread wren ng thornton

C K Kashyap wrote:

Thanks Daniel,

Better refactorability.

If you're using monadic style, changing from, say,
State Thing
to
StateT Thing OtherMonad

or from
StateT Thing FirstMonad
to
StateT Thing SecondMonad

typically requires only few changes. Explicit state-passing usually
requires more changes.


So, performance gain (runtime/memory) is not a side effect of Monadic style
right?


Generally speaking, right: monadic style won't improve performance.

However, using monadic notation may allow you to change the backing 
monad to a different representation of the same computation, thereby 
giving asymptotic improvements[1]. However, the improvements themselves 
are coming from the different representation; the use of monadic 
notation just allows you to switch the representation without altering 
the code.
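
For instance (a small sketch against the mtl classes): code written to 
the MonadState interface runs unchanged whichever concrete state monad 
you pick, so the representation can be swapped out from under it.

    {-# LANGUAGE FlexibleContexts #-}
    import Control.Monad.State  -- from the mtl package

    tick :: MonadState Int m => m Int
    tick = do
        n <- get
        put (n + 1)
        return n

    -- The same tick runs in any state monad over Int:
    --     evalState  tick 0  :: Int
    --     evalStateT tick 0  :: IO Int   (for m = StateT Int IO)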



[1] http://www.iai.uni-bonn.de/~jv/mpc08.pdf

--
Live well,
~wren

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Functional dependencies and Peano numbers (and hoogle-bug?)

2010-07-15 Thread wren ng thornton

Oscar Finnsson wrote:

Anyone made a module/package that solves this problem already? I
cannot be the first that needs generic type safe conversion... .


There's a restricted version in logfloat:Data.Number.RealToFrac[1] which 
generalizes the Prelude's realToFrac to improve performance and correctness.


To do much more than that you'll probably have to use something like 
Data.Data, Data.Typeable, or similar. Generally speaking, arbitrary 
casts from one type to another go against Haskell ideology because they 
don't make a lot of sense. Oftentimes there are multiple intelligible 
ways to convert between two fixed types, so how can we choose? Things 
like realToFrac, read, show, etc make sense precisely because they are 
more restricted and therefore make explicit the semantic intentions of 
how the conversion should be done.



[1] 
http://hackage.haskell.org/packages/archive/logfloat/0.12.1/doc/html/Data-Number-RealToFrac.html


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Refactoring type-class madness

2010-07-15 Thread wren ng thornton

Andrew Webb wrote:

Because, at the basic level all of the experiments share this type of
data, it seems that I should be able to write analysis functions that
work for any experiment. However, the experiments differ in the
stimuli used, and associated with each stimulus set is a set of
milestones that give times at which important things happen in the
stimuli, and regions of interest that give areas of the visual scene
that are considered important. For a single experiment I would have:

data Experiment = Exp [Trial]
data Trial = Trial [Event] (Map MileStone Time)
data MileStone = M1 | M2 | ...

[...]

So, I was wondering whether there was something wrong with my basic
model which leads to this ugly type class, or whether this is the
proper way forward. Either is fine, really, it would just be nice to
know for certain.


It sounds like you're trying to use typeclasses as if they were 
OO-classes, which is a good way to confuse yourself. What you probably 
want is just parametric polymorphism. For example, if every experiment 
is a sequence of trials, and every trial is a sequence of events with 
some milestones, then you can get the generality you want with:


data Experiment m = Exp [Trial m]
data Trial m = Trial [Event] (Map m Time)
data Event = Fixation {...} | Saccade {...}

data A = MS_A1 | MS_A2 | ... deriving (Ord, Enum)
data B = MS_B1 | MS_B2 | ... deriving (Ord, Enum)
...

then you would pass around (Experiment A), (Experiment B), etc. The 
reason for the Ord instances is so you can use them as keys in Map, and 
the reason for Enum is just so you have a generic interface for listing 
all the milestones (though getting the keys of the map may suffice).


The main reason for wanting to use typeclasses is when you have a common 
interface (i.e. set of function names and types), but the 
implementations of that interface are structurally/algorithmically 
different. If the structure of the implementation is the same and only 
the type of some component changes, then parametric polymorphism is the 
way to go.



Off-topic to your original question, it seems like a better model for 
your data might be to treat milestones as a third kind of event. Thus, a 
trial is just a sequence of events, which could be subject events 
(fixation, saccades) or experimental events (milestones, etc). This is 
assuming that your processing only cares about how patient events occur 
relative to experimental events, and that you don't need access to 
experimental events separately.


If you need to be able to jump around to different events, then you 
could use Trial [Event] (Map m [Event]) and construct the map after 
reading input by walking over the list of events and storing pointers to 
the subsequence beginning with each milestone:


computeMilestones :: Ord m => Trial m -> Trial m
computeMilestones (Trial es m) = Trial es (go es m)
    where
    go []          m = m
    go es@(e:es') m = go es' (m' e)
        where
        -- assumes Event gains a milestone constructor (MS m) as
        -- suggested above, and Data.Map is imported qualified as Map
        m' (MS x) = Map.insert x es m
        m' _      = m


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] trees and pointers

2010-07-16 Thread wren ng thornton

Jan-Willem Maessen wrote:

As you observe, it's really down to constant factors.  The reason
IntMap (or any digital trie) is so interesting is that it is simple
enough that the constant factors are quite good---in particular we
don't waste a lot of time figuring out if we're going to need to
rearrange the tree structure on the fly.  That turns out to amortize
quite a few extra level traversals in a lot of settings.


This simplicity of not rebalancing also allows for more sharing in a 
persistent setting, which can be a big gain for programs with the right 
kinds of data distributions.




It also offers really elegant implementations of union and unions.
Whether that means they're quickish I leave to the benchmarkers.


In my experience (NLP work), the performance of unions for tries is one 
of their major selling points. In Okasaki's venerable benchmarks[1], 
they vastly outperform all competitors for merging. Even in 
imperative-land, that's one of the reasons tries are so common in NLP 
and are often chosen over hashmaps for certain tasks.



[1] http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.37.5452

--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] in-equality type constraint?

2010-07-17 Thread wren ng thornton

Christopher Lane Hinson wrote:

On Fri, 16 Jul 2010, Paul L wrote:


Does anybody know why the type families only supports equality test
like a ~ b, but not its negation?


I would suggest that type equality is actually used for type inference,
whereas proof of type inequality would have no consequence (that I can 
think of) for the compiler.


Also, it's a lot easier to solve a system of constraints when you only 
have positive constraints. Adding negative constraints greatly 
complicates the solver. In many cases it's still doable, though the 
structure of types might pose some additional challenges above the usual 
ones.


(And they're usually called disequality constraints, since 
inequalities are only interesting when you have subtyping.)


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Hot-Swap with Haskell

2010-07-17 Thread wren ng thornton

Brandon S Allbery KF8NH wrote:

On 7/16/10 05:21 , Andy Stewart wrote:

IMO, haskell interpreter is perfect solution for samll script job. But
i'm afraid haskell interpreter is slow for *large code*, i don't know,
i haven't try this way...


Hugs?


Or you can try implementing (or finding) a SASL interpreter[1]. SASL is 
just the untyped[2] lambda calculus with named functions. ADTs are 
implemented by function application, greatly simplifying the 
compiler/interpreter. On their benchmarks, the version presented in the 
paper is competitive with GHCi 6.4 and outperforms Hugs Jan2005.


Interpreted code is always slower than compiled code. But how much that 
matters is a separate issue. Folks seem to like their Perl, Python, 
Ruby,... even for large projects.



[1] Jan Jansen (2005) /Data Types and Pattern Matching by Function 
Application/, Proc. 17th IFL.


[2] The version they present in the paper uses the untyped calculus 
since their pattern matching doesn't fit in Hindley--Milner. The problem 
is the need for a fixpoint type operator, and polymorphism under the 
fixpoint; thus, it fits perfectly fine in System F with iso-recursive 
types. So just turn on -XRankNTypes and use them directly:


newtype List a = List { caseList
    :: forall r. r -> (a -> List a -> r) -> r }
nil       = List (\n c -> n)
cons x xs = List (\n c -> c x xs)

length xs = caseList xs 0 (\x xs' -> 1 + length xs')

--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] RE: Design for 2010.2.x series Haskell Platform site (Don Stewart)

2010-07-17 Thread wren ng thornton

Niemeijer, R.A. wrote:

Here's my take on the new design:

Screenshot: http://imgur.com/9LHvk.jpg
Live version: http://dl.dropbox.com/u/623671/haskell_platform_redesign/index.htm


Is it just me, or does aligning [OSX,Win,Linux] `zip` [Comprehensive, 
Robust, CuttingEdge] send the wrong message...


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Design for 2010.2.x series Haskell Platform site (Don Stewart)

2010-07-19 Thread wren ng thornton

Niemeijer, R.A. wrote:

Is it just me, or does aligning [OSX,Win,Linux] `zip` [Comprehensive,
Robust, CuttingEdge] send the wrong message...


Yeah, I noticed that too when designing it, but at the time it didn't bother me 
too much.


I know folks who'd refute all three of those associations, so... :)



I do see how it could confuse some people though, so I've made a design 
alternative
that solves this problem. I think it also makes for a better landing page for 
people who
don't yet know anything about Haskell, since the current design doesn't explain 
what
Haskell actually is. I'll let you guys be the judge though.

New alternative: 
http://dl.dropbox.com/u/623671/haskell_platform_redesign/index2.htm


I like that one. The intro blurbs for Haskell/HP are nice for newcomers, 
it avoids the aforementioned issue, and it's less ostentatious in its 
advertising.


+1.

--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Design for 2010.2.x series Haskell Platform site

2010-07-19 Thread wren ng thornton

Malcolm Wallace wrote:
I still like the original design on http://imgur.com/NjiVh a lot 
better, It has a simple modern design to it in my opinion :)


+1.  It is simply beautiful.  Much more striking and memorable than the 
blue diver.


I really like the background image; it's nicely striking and much more 
memorable than the diver. The font selection, color scheme, and central 
logo are decent, but I'm not a fan of the presentation of the OS icons 
nor the content of the footer. The two-box What is Haskell?,What is 
the Haskell Platform? footer works a lot better IMO.  The icons are a 
bit harder to fix though; right now they look bolted on and 
discontinuous with the rest of the design.


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] On documentation

2010-07-22 Thread wren ng thornton

David Waern wrote:

2010/7/21 Richard O'Keefe o...@cs.otago.ac.nz:

One of the really nice ideas in the R statistics system is that
documentation pages can contain executable examples, and when you
wrap up a package for distribution, the system checks that the
examples run as advertised.


The next version of Haddock will support something like this,
implemented by SImon Hengel.

Here's an example of how to use the new markup:

-- | An example:
--
-- ghci fib 10
-- 55


Awesome. This is similar to Python's thing.



Haskell has a *perfect* fit for this idea in QuickCheck.


We currently only support concrete examples (i.e. unit tests), but the
plan is to add support for QuickCheck properties.


Also support SmallCheck/LazySmallCheck, pretty please? :)

--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] cabal, Setup.lhs example

2010-07-22 Thread wren ng thornton

Magnus Therning wrote:

On Thu, Jul 22, 2010 at 11:52, Ross Paterson r...@soi.city.ac.uk wrote:

On Thu, Jul 22, 2010 at 11:31:21AM +0100, Magnus Therning wrote:

On Thu, Jul 22, 2010 at 10:59, Ross Paterson r...@soi.city.ac.uk wrote:

Magnus is building by directly running the Setup.hs himself, which ignores
the Build-Type.  To get cabal-install to use his Setup.hs, the Build-Type
must be set to Custom.

Oh, why*2?

Why is the header there if it's not used by Cabal, and why does cabal care?

The field allows cabal to avoid compiling the Setup.hs in this case.
It might also be used by other tools, e.g. one might only trust Simple
packages.  Not all fields are used by all tools, and several of them
do not affect the operation of the library (e.g. Home-page).


All right, so why would cabal want to avoid compiling the Setup.hs?

Of course this behaviour of cabal's means that I in the future will use *Custom*
all the time, since I otherwise have to remember this surprising feature of a
tool I never use.  Is there any reason *not* to do this?


The main reason I could think of to avoid compiling it is for 
performance reasons. I'm not sure how compelling that is, but...


As for why not to always use Custom, as mentioned there are cabal-aware 
tools out there besides cabal-install. For these other tools, there is a 
big difference between Simple and Custom. With Simple we (ideally) 
already know all the semantics of what Setup.hs does, and so we can wire 
that into our tools. With Custom we're forced into the position of doing 
Haskell source analysis since we now have to discover the semantics of 
an arbitrary Turing machine. That's a very high wall to climb if you're 
just wanting to write a simple tool for doing some kind of package analysis.


(I don't think the behavior is surprising since I interpret Simple to 
mean that the Setup.hs file is unused/optional. Though clearly YMMV)


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

