Re: [Haskell-cafe] Tutorial on JS with Haskell: Fay or GHCJS?

2013-09-04 Thread Chris Smith
I second the recommendation to look at Haste.  It's what I would pick for a
project like this today.

In the big picture, Haste and GHCJS are fairly similar.  But when it comes
to the ugly details of the runtime system, GHCJS adopts the perspective
that it's basically an emulator, where compatibility is the number one
goal.  Haste goes for a more native approach; while the evaluation
semantics and such are completely faithful to Haskell, it doesn't go out of
its way to emulate the gritty details of GHC's runtime system.
On Sep 4, 2013 3:38 AM, Nathan Hüsken nathan.hues...@posteo.de wrote:

  In my opinion, Haste is somewhere between Fay and GHCJS. It supports more
 than Fay, but unlike GHCJS, some PrimOps are not supported (weak
 pointers, for example).

 It is a little bit more direct than GHCJS, in the sense that it does not
 need such a big RTS written in JS.

 I like haste :).

 What I wonder is how the outputs of these 3 compilers compare speed wise.

 On 09/04/2013 11:11 AM, Alejandro Serrano Mena wrote:

 I haven't looked at Haste too much, I'll give it a try.

  My main problem is that I would like to find a solution that will
 keep working for years to come (one that somehow becomes the standard way of
 generating JS from Haskell code). That's why I see GHCJS (which just
 includes some patches to mainstream GHC) as the preferred solution: it
 seems the most likely to keep working as new versions of GHC
 appear.


 2013/9/4 Niklas Hambüchen m...@nh2.me

 Hi, I'm also interested in that.

 Have you already evaluated haste?

 It does not seem to have any of your cons, but maybe others.

 What I particularly miss from all solutions is the ability to simply
 call parts written in Haskell from Javascript, e.g. to write `fib` and
 then integrate it into an existing Javascript application (they are all
 more interested in doing the other direction).

 On Wed 04 Sep 2013 17:14:55 JST, Alejandro Serrano Mena wrote:
  Hi,
  I'm currently writing a tutorial on web applications using Haskell. I
  know the pros and cons of each server-side library (Yesod, Snap,
  Scotty, Warp, Happstack), but I'm looking for the right choice for
  client-side programming that converts Haskell to JavaScript. I've
  finally come to Fay vs. GHCJS, and would like your opinion on what's
  the best to tackle. My current list of pros and cons is:
 
  Fay
  ===
  Pros:
  - Does not need GHC 7.8
  - Easy FFI with JS
  - Has libraries for integration with Yesod and Snap
 
  Cons:
  - Only supports a subset of GHC Haskell (in particular, no type classes)
 
 
  GHCJS
  ==
  Pros:
  - Supports full GHC
  - Easy FFI with JS
  - Highly opinionated point: will stay around longer than Fay (and it's very
  important that the tutorial not be outdated in a few months)
 
  Cons:
  - Needs GHC 7.8 (but provides a Vagrant image)
 
 
   ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe






Re: [Haskell-cafe] Possible extension to Haskell overloading behavior

2013-07-09 Thread Chris Smith
Oh, yes.  That looks great!  Also seems to work with OverloadedStrings
in the natural way in GHC 7.6, although that isn't documented.

Now if only it didn't force NoImplicitPrelude, since I really want to
-hide-package base and -package my-other-prelude.  Even adding
-XImplicitPrelude doesn't help.

On Tue, Jul 9, 2013 at 11:46 AM, Aleksey Khudyakov
alexey.sklad...@gmail.com wrote:
 On 08.07.2013 23:54, Chris Smith wrote:

 So I've been thinking about something, and I'm curious whether anyone
 (in particular, people involved with GHC) think this is a worthwhile
 idea.

 I'd like to implement an extension to GHC to offer a different
 behavior for literals with polymorphic types.  The current behavior is
 something like:

 Probably RebindableSyntax [1] could work for you. According to its
 description, it allows changing the meaning of literals.

 [1]
 http://www.haskell.org/ghc/docs/7.6.3/html/users_guide/syntax-extns.html#rebindable-syntax
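To make the suggestion concrete, here is a minimal sketch (names and the target type are our own choices, not from any particular package): with RebindableSyntax, integer literals desugar through whichever fromInteger is in scope, rather than Prelude's.

```haskell
{-# LANGUAGE RebindableSyntax #-}
module Main where

import Prelude hiding (fromInteger)
import qualified Prelude

-- Any `fromInteger` in scope is used to desugar integer literals.
-- Here we pin literals to a single monomorphic type, Int.
fromInteger :: Integer -> Int
fromInteger n = Prelude.fromInteger n

x :: Int
x = 42  -- elaborated as our `fromInteger (42 :: Integer)`

main :: IO ()
main = print x
```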



Re: [Haskell-cafe] Possible extension to Haskell overloading behavior

2013-07-09 Thread Chris Smith
Oh, never mind.  In this case, I guess I don't need an extension at all!

On Tue, Jul 9, 2013 at 1:47 PM, Chris Smith cdsm...@gmail.com wrote:
 Oh, yes.  That looks great!  Also seems to work with OverloadedStrings
 in the natural way in GHC 7.6, although that isn't documented.

 Now if only it didn't force NoImplicitPrelude, since I really want to
 -hide-package base and -package my-other-prelude.  Even adding
 -XImplicitPrelude doesn't help.

 On Tue, Jul 9, 2013 at 11:46 AM, Aleksey Khudyakov
 alexey.sklad...@gmail.com wrote:
 On 08.07.2013 23:54, Chris Smith wrote:

 So I've been thinking about something, and I'm curious whether anyone
 (in particular, people involved with GHC) think this is a worthwhile
 idea.

 I'd like to implement an extension to GHC to offer a different
 behavior for literals with polymorphic types.  The current behavior is
 something like:

 Probably RebindableSyntax [1] could work for you. According to its
 description, it allows changing the meaning of literals.

 [1]
 http://www.haskell.org/ghc/docs/7.6.3/html/users_guide/syntax-extns.html#rebindable-syntax



Re: [Haskell-cafe] Possible extension to Haskell overloading behavior

2013-07-09 Thread Chris Smith
Ugh... I take back the never mind.  So if I replace Prelude with an
alternate definition, but don't use RebindableSyntax, and then hide
the base package, GHC still uses fromInteger and such from base even
though it should be inaccessible.  But if I do use RebindableSyntax,
then the end-user has to add 'import Prelude' to the top of their
code.  Am I missing something?

On Tue, Jul 9, 2013 at 1:51 PM, Chris Smith cdsm...@gmail.com wrote:
 Oh, never mind.  In this case, I guess I don't need an extension at all!

 On Tue, Jul 9, 2013 at 1:47 PM, Chris Smith cdsm...@gmail.com wrote:
 Oh, yes.  That looks great!  Also seems to work with OverloadedStrings
 in the natural way in GHC 7.6, although that isn't documented.

 Now if only it didn't force NoImplicitPrelude, since I really want to
 -hide-package base and -package my-other-prelude.  Even adding
 -XImplicitPrelude doesn't help.

 On Tue, Jul 9, 2013 at 11:46 AM, Aleksey Khudyakov
 alexey.sklad...@gmail.com wrote:
 On 08.07.2013 23:54, Chris Smith wrote:

 So I've been thinking about something, and I'm curious whether anyone
 (in particular, people involved with GHC) think this is a worthwhile
 idea.

 I'd like to implement an extension to GHC to offer a different
 behavior for literals with polymorphic types.  The current behavior is
 something like:

 Probably RebindableSyntax [1] could work for you. According to its
 description, it allows changing the meaning of literals.

 [1]
 http://www.haskell.org/ghc/docs/7.6.3/html/users_guide/syntax-extns.html#rebindable-syntax



Re: [Haskell-cafe] Possible extension to Haskell overloading behavior

2013-07-09 Thread Chris Smith
This is working now.  Trying to use -XRebindableSyntax with
-XImplicitPrelude seems to not work (Prelude is still not loaded) when the
exposed Prelude is from base, but it works fine when the Prelude is from a
different package.  Counterintuitive, but it does everything I need it to.
Thanks for the suggestion!
On Jul 9, 2013 4:20 PM, Aleksey Khudyakov alexey.sklad...@gmail.com
wrote:

 On 10.07.2013 01:13, Chris Smith wrote:

 Ugh... I take back the never mind.  So if I replace Prelude with an
 alternate definition, but don't use RebindableSyntax, and then hide
 the base package, GHC still uses fromInteger and such from base even
 though it should be inaccessible.  But if I do use RebindableSyntax,
 then the end-user has to add 'import Prelude' to the top of their
 code.  Am I missing something?

  If base is hidden, GHCi refuses to start because it can't import Prelude
 (with -XNoImplicitPrelude it starts just fine).

 According to the documentation, GHC will use whatever fromInteger is in
 scope, but I have never used the extension in that way.



[Haskell-cafe] Possible extension to Haskell overloading behavior

2013-07-08 Thread Chris Smith
So I've been thinking about something, and I'm curious whether anyone
(in particular, people involved with GHC) think this is a worthwhile
idea.

I'd like to implement an extension to GHC to offer a different
behavior for literals with polymorphic types.  The current behavior is
something like:

1. Give the literal a polymorphic type, like (Integral a => a)
2. Type check the whole program, possibly giving the term a more
constrained type.
3. If the type is still ambiguous, apply defaulting rules.

I'd like to add the option to do this instead.

1. Take the polymorphic type, and immediately apply defaulting rules
to get a monomorphic type.
2. Type check the program with the monomorphic type.
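As an illustration of the difference (our own example, not taken from the proposal — this module is intentionally ill-typed):

```haskell
-- Illustration only: this does not compile, and that is the point.
bad :: Int
bad = length "abc" + 1.5
-- Today GHC reports a class-constraint error along the lines of
-- "No instance for (Fractional Int) arising from the literal '1.5'",
-- which presumes the reader understands type classes.
-- Under the proposed immediate defaulting, 1.5 would be Double from
-- the start, and the error would be a plain Int-vs-Double mismatch.
```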

Mostly, this would reduce the set of valid programs, since the type is
chosen before considering whether it meets all the relevant
constraints.  So what's the purpose?  To simplify type errors for
programmers who don't understand type classes.  What I have in mind is
domain-specific dialects of Haskell that replace the Prelude and are
aimed at less technical audiences - in my case, children around 10 to
13 years old; but I think the ideas apply elsewhere, too.  Type
classes are (debatably) the one feature of Haskell that tends to be
tricky for non-technical audiences, and yet pops up in very simple
programs (and more importantly, their error messages) even when the
programmer wasn't aware of its existence, because of its role in
overloaded literals.

In some cases, I think it's a good trade to remove overloaded
literals, in exchange for simpler error messages.  This leaves new
programmers learning a very small, simple language, and not staring so
much at cryptic error messages.  At the same time, it's not really
changing the language, except for the need to explicitly use type
classes (via conversion functions like fromInteger) rather than get
them thrown in implicitly.  With GHC's extended defaulting rules that
apply for OverloadedStrings, this could also be used to treat all
string literals as Text, which might make some people happy.
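That extended-defaulting behavior can be seen today in a small example (a sketch using per-module default declarations, since — as noted elsewhere in this thread — defaults cannot yet be exported from a replacement Prelude):

```haskell
{-# LANGUAGE OverloadedStrings, ExtendedDefaultRules #-}
module Main where

import Data.Text (Text)
import qualified Data.Text.IO as T

-- With ExtendedDefaultRules, the default list may include non-numeric
-- types, so an ambiguous IsString constraint defaults to Text.
default (Text, Integer, Double)

greeting = "hello"  -- inferred as Text via the default list

main :: IO ()
main = T.putStrLn greeting
```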

Of course, the disadvantage is that for numeric types, you would lose
the convenience of overloaded operators, since this is only a sensible
thing to do if you're replacing the Prelude with one that doesn't use
type classes.  But in at least my intended use, I prefer to have a
single Number type anyway (and a single Text type that's not sometimes
called [Char]).  In the past, explaining these things has eaten up far
too much time that I'd rather have spent on more general skills and
creative activities.



Re: [Haskell-cafe] Possible extension to Haskell overloading behavior

2013-07-08 Thread Chris Smith
Oops, when I wrote this, I'd assumed it was possible to export
defaults from a module, like an alternate Prelude.  But it looks like
they only affect the current module.  So this whole thing depends on
also being able to either define defaults in an imported module, or in
options to GHC.

On Mon, Jul 8, 2013 at 12:54 PM, Chris Smith cdsm...@gmail.com wrote:
 So I've been thinking about something, and I'm curious whether anyone
 (in particular, people involved with GHC) think this is a worthwhile
 idea.

 I'd like to implement an extension to GHC to offer a different
 behavior for literals with polymorphic types.  The current behavior is
 something like:

 1. Give the literal a polymorphic type, like (Integral a => a)
 2. Type check the whole program, possibly giving the term a more
 constrained type.
 3. If the type is still ambiguous, apply defaulting rules.

 I'd like to add the option to do this instead.

 1. Take the polymorphic type, and immediately apply defaulting rules
 to get a monomorphic type.
 2. Type check the program with the monomorphic type.

 Mostly, this would reduce the set of valid programs, since the type is
 chosen before considering whether it meets all the relevant
 constraints.  So what's the purpose?  To simplify type errors for
 programmers who don't understand type classes.  What I have in mind is
 domain-specific dialects of Haskell that replace the Prelude and are
 aimed at less technical audiences - in my case, children around 10 to
 13 years old; but I think the ideas apply elsewhere, too.  Type
 classes are (debatably) the one feature of Haskell that tends to be
 tricky for non-technical audiences, and yet pops up in very simple
 programs (and more importantly, their error messages) even when the
 programmer wasn't aware of its existence, because of its role in
 overloaded literals.

 In some cases, I think it's a good trade to remove overloaded
 literals, in exchange for simpler error messages.  This leaves new
 programmers learning a very small, simple language, and not staring so
 much at cryptic error messages.  At the same time, it's not really
 changing the language, except for the need to explicitly use type
 classes (via conversion functions like fromInteger) rather than get
 them thrown in implicitly.  With GHC's extended defaulting rules that
 apply for OverloadedStrings, this could also be used to treat all
 string literals as Text, which might make some people happy.

 Of course, the disadvantage is that for numeric types, you would lose
 the convenience of overloaded operators, since this is only a sensible
 thing to do if you're replacing the Prelude with one that doesn't use
 type classes.  But in at least my intended use, I prefer to have a
 single Number type anyway (and a single Text type that's not sometimes
 called [Char]).  In the past, explaining these things has eaten up far
 too much time that I'd rather have spent on more general skills and
 creative activities.



[Haskell-cafe] Announcing postgresql-libpq-0.8.2.3

2013-07-08 Thread Leon Smith
I just fixed a fairly serious performance problem with postgresql-libpq's
binding to PQescapeStringConn;   it was exhibiting a non-linear slowdown
as more strings were escaped and retained.

https://github.com/lpsmith/postgresql-libpq/commit/adf32ff26cdeca0a12fa59653b49c87198acc9ae

If you are using postgresql-libpq's escapeStringConn,  or a library that
uses it (e.g.  postgresql-simple,  or persistent-postgresql),  I do
recommend upgrading.   You may or may not see a performance improvement,
 depending on your particular use case,   but if you do it can be quite
substantial.

It's not entirely clear to me what the root cause really is,  but it
certainly appears as though it's related to the (direct) use of
mallocBytes,   which was replaced with (indirect) calls to
mallocForeignPtrBytes / mallocPlainForeignPtrBytes (through the bytestring
package).   In this case,  it resulted in an asymptotic improvement in time
complexity of some algorithms.
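The shape of the change can be sketched as follows (a simplified illustration, not the library's actual code): raw mallocBytes plus an explicit free is replaced by GC-managed allocation, matching how bytestring allocates internally.

```haskell
import Foreign.ForeignPtr (ForeignPtr, mallocForeignPtrBytes, withForeignPtr)
import Foreign.Ptr (Ptr)
import Data.Word (Word8)

-- Before (conceptually): mallocBytes n, fill, manual free later.
-- After: a ForeignPtr whose memory the garbage collector reclaims,
-- so retained escaped strings no longer interact badly with malloc.
allocFilled :: Int -> (Ptr Word8 -> IO ()) -> IO (ForeignPtr Word8)
allocFilled n fill = do
    fp <- mallocForeignPtrBytes n
    withForeignPtr fp fill
    return fp
```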

Best,
Leon


Re: [Haskell-cafe] [database-devel] Announcing postgresql-libpq-0.8.2.3

2013-07-08 Thread Leon Smith
I'll have to benchmark withMVar on my system,  but (at least on my old
laptop) a safe foreign function call is also on the order of 100ns.   As
c_PQescapeStringConn and c_PQescapeByteaConn are currently safe calls,
that would cap the maximum time saved at roughly 50%.

Perhaps it would make sense to make these unsafe calls as well,  but the
justification I used at the time was that the amount of time consumed by
these functions is bounded by the length of the string being escaped,
 which is itself unbounded.Certainly postgresql-libpq is currently
overly biased towards safe calls;  though when I took the bindings over it
was overly biased towards unsafe calls.   (Though, arguably,  it's worse to
err on the side of making ffi calls unsafe.)
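For readers unfamiliar with the distinction, the two import flavors look like this (illustrative declarations; the C signature follows libpq's documented PQescapeStringConn, but the Haskell names are ours):

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}
import Foreign.Ptr (Ptr)
import Foreign.C.Types (CSize(..), CInt(..))
import Foreign.C.String (CString)

data PGconn  -- opaque connection type

-- A `safe` call lets other Haskell threads keep running while the C
-- function executes, but each call carries ~100ns of overhead.
foreign import ccall safe "PQescapeStringConn"
  c_escapeSafe :: Ptr PGconn -> CString -> CString -> CSize -> Ptr CInt
               -> IO CSize

-- An `unsafe` call is a bare C call: much cheaper, but it blocks the
-- calling capability for its whole duration -- which matters when the
-- time taken is bounded only by the length of the string being escaped.
foreign import ccall unsafe "PQescapeStringConn"
  c_escapeUnsafe :: Ptr PGconn -> CString -> CString -> CSize -> Ptr CInt
                 -> IO CSize
```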

I've also considered integrating calls to c_PQescapeStringConn with
blaze-builder and/or bytestring-builder,  which could help a fair bit,  but
would also introduce dependencies on the internals of these libraries when
currently there is none.

There is certainly a lot of room for optimizing query generation in
postgresql-simple,  this I've been well aware of since the beginning.   And
it probably would be worthwhile to move to protocol-level parameters which
would avoid the need for escaping value parameters altogether,  and open up
the possibility of binary formats as well, which would be a huge
performance improvement for things like numerical values and timestamps.
 Although IIRC,  one downside is that this prevents multiple DML commands
from being issued in a single request,  which would subtly change the
interface postgresql-simple exports.

Best,
Leon


On Mon, Jul 8, 2013 at 10:00 PM, Joey Adams joeyadams3.14...@gmail.comwrote:

 On Mon, Jul 8, 2013 at 9:03 PM, Leon Smith leon.p.sm...@gmail.com wrote:

 I just fixed a fairly serious performance problem with postgresql-libpq's
 binding to PQescapeStringConn;   in was exhibiting a non-linear slowdown
 when more strings are escaped and retained.


  I'd like to point out a somewhat related bottleneck in postgresql-simple
 (but not postgresql-libpq).  Every PQescapeStringConn or PQescapeByteaConn
 call involves a withMVar, which is about 100ns on the threaded RTS on my
 system.  Taking the Connection lock once for the whole buildQuery call
 might be much faster, especially for multi-row inserts and updates.



[Haskell-cafe] runghc dies silently when given large numbers of arguments

2013-06-07 Thread Gareth Smith

Hi all,

The other day, I wrote the following program in /tmp/test.hs

--8---cut here---start-8---
main = error "An Error message"
--8---cut here---end---8---

Then I ran the following:

,----
| gds@lithium:/tmp$ runghc ./test.hs `seq 1 10`
| gds@lithium:/tmp$ ghc -o test test.hs
| gds@lithium:/tmp$ ./test `seq 1 10`
| test: An Error message
| gds@lithium:/tmp$
`----

Notice that when I use runghc, my main function doesn't appear to run at
all. But when I run the compiled program, my main function does seem to
run, and produce the error message I expected.

It seems to me like runghc can't cope with that large a number of
arguments or size of argument string, or similar. That seems entirely
reasonable, but if so, then might it be a good idea to have an error
message when we accidentally hit that limit?
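One way to gauge whether the kernel's argument-size limit is even in play (a hypothetical diagnosis — the limit runghc trips over may be internal and smaller than the kernel's ARG_MAX):

```shell
# The kernel's limit on the total argument/environment area (Linux):
getconf ARG_MAX

# Bytes occupied by the expansion of `seq 1 100000` (one number per line;
# as command-line arguments the separators are spaces, so the size is
# the same):
seq 1 100000 | wc -c
```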

I'm using ghc 7.6.2 on Ubuntu 13.04.

Thanks!

Gareth.



[Haskell-cafe] A use case for *real* existential types

2013-05-10 Thread Leon Smith
I've been working on a new Haskell interface to the linux kernel's inotify
system, which allows applications to subscribe to and be notified of
filesystem events.   An application first issues a system call that returns
a file descriptor that notification events can be read from,  and then
issues further system calls to watch particular paths for events.   These
return a watch descriptor (which is just an integer) that can be used to
unsubscribe from those events.

Now,  of course an application can open multiple inotify descriptors,  and
when removing watch descriptors,  you want to remove it from the same
inotify descriptor that contains it;  otherwise you run the risk of
deleting a completely different watch descriptor.

So the natural question here is if we can employ the type system to enforce
this correspondence.   Phantom types immediately come to mind,  as this
problem is almost the same as ensuring that STRefs are only ever used in a
single ST computation.   The twist is that the inotify interface has
nothing analogous to runST,  which does the heavy lifting of the type
magic behind the ST monad.

This twist is very simple to deal with if you have real existential types,
 with the relevant part of the interface looking approximately like

init :: exists a. IO (Inotify a)
addWatch :: Inotify a -> FilePath -> IO (Watch a)
rmWatch :: Inotify a -> Watch a -> IO ()

UHC supports this just fine,  as demonstrated by a mockup attached to this
email.  However a solution for GHC is not obvious to me.
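For comparison, the closest GHC-friendly approximation sketches out like this (hypothetical stand-in definitions, a mock rather than a real binding): the existential is traded for a rank-2 continuation, ST-style.

```haskell
{-# LANGUAGE RankNTypes #-}

-- Hypothetical stand-ins for the real inotify types.
newtype Inotify a = Inotify Int  -- phantom `a` ties a Watch to its descriptor
newtype Watch a = Watch Int

-- The existential becomes a rank-2 callback, so the phantom
-- variable cannot escape the block.
withInotify :: (forall a. Inotify a -> IO r) -> IO r
withInotify k = k (Inotify 0)  -- stand-in for inotify_init

addWatch :: Inotify a -> FilePath -> IO (Watch a)
addWatch (Inotify fd) _path = return (Watch fd)  -- stand-in

rmWatch :: Inotify a -> Watch a -> IO ()
rmWatch _ _ = return ()  -- stand-in

main :: IO ()
main = withInotify $ \ino -> do
  w <- addWatch ino "/tmp"
  rmWatch ino w  -- accepted: same descriptor
  -- Removing `w` via a different withInotify block would be a type
  -- error, because its phantom variable is distinct.
```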


inotify.hs
Description: Binary data


Re: [Haskell-cafe] A use case for *real* existential types

2013-05-10 Thread Leon Smith
On Fri, May 10, 2013 at 9:00 AM, Andres Löh and...@well-typed.com wrote:


  This twist is very simple to deal with if you have real existential
 types,
  with the relevant part of the interface looking approximately like
 
  init :: exists a. IO (Inotify a)
  addWatch :: Inotify a -> FilePath -> IO (Watch a)
  rmWatch :: Inotify a -> Watch a -> IO ()

 You can still do the ST-like encoding (after all, the ST typing trick
 is just an encoding of an existential), with init becoming like
 runST:

  init :: (forall a. Inotify a -> IO b) -> IO b
  addWatch :: Inotify a -> FilePath -> IO (Watch a)
  rmWatch :: Inotify a -> Watch a -> IO ()


Right, but in my interface the Inotify descriptor has an indefinite extent,
 whereas your interface enforces a dynamic extent.   I'm not sure to what
degree this would impact use cases of this particular library,  but in
general moving a client program from the first interface to the second
can require significant changes to the structure of the program,   whereas
moving a client program from the second interface to the first is trivial.
   So I'd say my interface is more expressive.

Best,
Leon


Re: [Haskell-cafe] A use case for *real* existential types

2013-05-10 Thread Leon Smith
On Fri, May 10, 2013 at 9:04 AM, MigMit miguelim...@yandex.ru wrote:

 With that kind of interface you don't actually need existential types. Or
 phantom types. You can just keep Inotify inside the Watch, like this:


Right, that is an alternative solution,  but phantom types are a relatively
simple and well understood way of enforcing this kind of property in the
type system without incurring run-time costs.   My inotify binding is
intended to be as thin as possible,  and given my proposed interface,   you
could implement your interface in terms of mine,  making the phantom types
disappear using the restricted existentials already available in GHC,   and
such a wrapper should be just as efficient as if you had implemented your
interface directly.
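A sketch of that wrapper (again with hypothetical stand-in types), using GHC's existential data declarations to hide the phantom parameter:

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- Hypothetical stand-ins for the phantom-typed interface.
newtype Inotify a = Inotify Int
newtype Watch a = Watch Int

-- Pack a Watch together with the descriptor it belongs to; consumers
-- can no longer mix descriptors, by construction.
data SomeWatch = forall a. SomeWatch (Inotify a) (Watch a)

-- Removal recovers the matching pair from under the existential.
removeSomeWatch :: SomeWatch -> IO ()
removeSomeWatch (SomeWatch ino w) = rmWatch ino w
  where
    rmWatch :: Inotify a -> Watch a -> IO ()
    rmWatch _ _ = return ()  -- stand-in for the real call
```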

Best,
Leon


Re: [Haskell-cafe] A use case for *real* existential types

2013-05-10 Thread Leon Smith
On Fri, May 10, 2013 at 5:49 PM, Alexander Solla alex.so...@gmail.comwrote:

 I'm not sure if it would work for your case, but have you considered using
 DataKinds instead of phantom types?  At least, it seems like it would be
 cheap to try out.


 http://www.haskell.org/ghc/docs/7.4.2/html/users_guide/kind-polymorphism-and-promotion.html


I do like DataKinds a lot,  and I did think about them a little bit with
respect to this problem,  but a solution isn't obvious to me,  and perhaps
more importantly I'd like to be able to support older versions of GHC,
 probably back to 7.0 at least.

The issue is that every call to init needs to return a slightly different
type,  and whether this is achieved via phantom types or datakinds,  it
seems to me some form of existential typing is required.  As both Andres
and MigMit pointed out,  you can sort of achieve this by using a
continuation-like construction and higher-ranked types (is there a name for
this transform?  I've seen it a number of times and it is pretty well
known...),  but this enforces a dynamic extent on the descriptor whereas
the original interface I proposed allows an indefinite extent.

Best,
Leon


Re: [Haskell-cafe] A use case for *real* existential types

2013-05-10 Thread Leon Smith
A value has an indefinite extent if its lifetime is independent of any
block of code or related program structure,  think malloc/free or new/gc.
 A value has a dynamic extent if its lifetime is statically determined
relative to the dynamic execution of the program (e.g. a stack variable):
 in this case the type system ensures that no references to the inotify
descriptor can exist after the callback returns.

Best,
Leon


On Fri, May 10, 2013 at 6:52 PM, Alexander Solla alex.so...@gmail.comwrote:




 On Fri, May 10, 2013 at 3:31 PM, Leon Smith leon.p.sm...@gmail.comwrote:

 On Fri, May 10, 2013 at 5:49 PM, Alexander Solla alex.so...@gmail.comwrote:

 I'm not sure if it would work for your case, but have you considered
 using DataKinds instead of phantom types?  At least, it seems like it would
 be cheap to try out.


 http://www.haskell.org/ghc/docs/7.4.2/html/users_guide/kind-polymorphism-and-promotion.html


 I do like DataKinds a lot,  and I did think about them a little bit with
 respect to this problem,  but a solution isn't obvious to me,  and perhaps
 more importantly I'd like to be able to support older versions of GHC,
  probably back to 7.0 at least.

 The issue is that every call to init needs to return a slightly different
 type,  and whether this is achieved via phantom types or datakinds,  it
 seems to me some form of existential typing is required.  As both Andres
 and MigMit pointed out,  you can sort of achieve this by using a
 continuation-like construction and higher-ranked types (is there a name for
 this transform?  I've seen it a number of times and it is pretty well
 known...),  but this enforces a dynamic extent on the descriptor whereas
 the original interface I proposed allows an indefinite extent.


 I know what extensions (of predicates and the like) are, but what exactly
 does dynamic and indefinite mean in this context?



Re: [Haskell-cafe] Markdown extension for Haddock as a GSoC project

2013-04-28 Thread Chris Smith
I think it's worth backing up here, and remembering the original point
of the proposal, by thinking about what is and isn't a goal.  I think
I'd classify things like this:

Goals:
- Use a lightweight, common, and familiar core syntax for simple formatting.
- Still allow haddock-specific stuff like links to other symbols.

Non-Goals:
- Compliance/compatibility with any specification or other system.
- Have any kind of formal semantics.

The essence of this proposal is about making Haddock come closer to
matching all the other places where people type formatted text on the
Internet.  As Johan said, markdown has won.  But markdown won because
it ended up being a collection of general conventions with
compatibility for the 95% of commonly used bits... NOT a formal
specification.  If there are bits of markdown that are just really,
really awkward to use in Haddock, modify them or leave them out.  I
think the whole discussion is getting off on the wrong start by
looking for the right specification against which this should be
judged, when it's really just about building the most natural possible
ad-hoc adaptation of markdown to documentation comments.  Just doing
the easy stuff, like switching from /foo/ to *foo* for emphasis,
really is most of the goal.  Anything beyond that is even better.

Compatibility or compliance to a specification are non-issues: no one
is going to be frequently copying documentation comments to and from
other markdown sources.  Haddock will unavoidably have its own
extensions for references to other definitions anyway, as will the
other system, so it won't be compatible.  Let's just accept that.

Formal semantics is a non-issue: the behavior will still be defined by
the implementation, in that people will write their documentation, and
if it looks wrong, they will fix it.  We don't want to reason formally
about the formatting of our comments, or prove that it's correct.
Avoiding unpleasant surprises is good; but avoiding *all* possible
ambiguous corner cases in parsing, even when they are less likely than
typos, is not particularly important.  If some ambiguity becomes a big
problem, it will get fixed later as a bug.

I think the most important thing here is to not get caught up in
debates about advanced syntax or parsing ambiguities, or let that stop
us from being able to emphasize words the same way we do in the whole
rest of the internet.

-- 
Chris Smith



Re: [Haskell-cafe] Markdown extension for Haddock as a GSoC project

2013-04-28 Thread Chris Smith
On Apr 28, 2013 6:42 PM, Alexander Solla alex.so...@gmail.com wrote:
 I think that much has to do with the historical division in computer
science.  We have mathematics on the right hand, and electrical engineering
on the wrong one.

I've been called many things, but electrical engineer is a new one!

My point was not anything at all to do with programming.  It was about
writing comments, which is fundamentally a communication activity.  That
makes a difference.  It's important to keep in mind that the worst possible
consequence of getting corner cases wrong here is likely that some
documentation will be confusing because the numbering is messed up in an
ordered list.

Pointing out that different processors treat markdown differently with
respect to bold or italics and such is ultimately missing the point.  For
example, I am aware that Reddit treats *foo* as italics while, say,
Google+ puts it in bold... but I really don't care.  What is really of any
importance is that both of them take reasonable conventions from plain text
and render them reasonably.  As far as I'm concerned, you can flip a coin
as to whether it ends up in bold or italics.

That doesn't mean the choices should not be documented.  Sure they should.
But it seems ridiculous to sidetrack the proposal to do something nice by
concerns about the minutiae of the syntax.


[Haskell-cafe] Markdown extension for Haddock as a GSoC project

2013-04-27 Thread Chris Smith
Oops, forgot to reply all.
-- Forwarded message --
From: Chris Smith cdsm...@gmail.com
Date: Apr 27, 2013 12:04 PM
Subject: Re: [Haskell-cafe] Markdown extension for Haddock as a GSoC project
To: Bryan O'Sullivan b...@serpentine.com
Cc:

I don't agree with this at all.  Far more important than which convention
gets chosen is that Haskell code can be read and written without learning
many dialects of Haddock syntax.  I see an API for pluggable haddock syntax
as more of a liability than a benefit.  Better to just stick to what we
have than fragment into more islands.

I do think that changing Haddock syntax to include common core pieces of
Markdown could be a positive change... but not if it spawns a battle of
fragmented documentation syntax that lasts a decade.
On Apr 27, 2013 11:08 AM, Bryan O'Sullivan b...@serpentine.com wrote:

 On Sat, Apr 27, 2013 at 2:23 AM, Alistair Bayley alist...@abayley.org wrote:

 How's about Creole?
 http://wikicreole.org/

 Found it via this:

 http://www.wilfred.me.uk/blog/2012/07/30/why-markdown-is-not-my-favourite-language/

 If you go with Markdown, I vote for one of the Pandoc implementations,
 probably Pandoc (strict):
 http://johnmacfarlane.net/babelmark2/

 (at least then we're not creating yet another standard...)


 Probably the best way to deal with this is by sidestepping it: make the
 support for alternative syntaxes as modular as possible, and choose two to
 start out with in order to get a reasonable shot at constructing a suitable
 API.

 I think it would be a shame to bikeshed on which specific syntaxes to
 support, when a lot of productive energy could more usefully go into
 actually getting the work done. Better to say "prefer a different markup
 language? code to this API, then submit a patch!"





Re: [Haskell-cafe] unsafeInterleaveST (and IO) is really unsafe [was: meaning of referential transparency]

2013-04-12 Thread Chris Smith
On Fri, Apr 12, 2013 at 1:44 AM,  o...@okmij.org wrote:
 As to alternatives -- this is may be the issue of
 familiarity or the availability of a nice library of combinators.

It is certainly not just a matter of familiarity, nor availability.
Rather, it's a matter of the number of names that are required in a
working set.  Any Haskell programmer, regardless of whether they use
lazy I/O, will already know the meanings of (.), length, and filter.
On the other hand, ($), count_i, and filterL are new names that must
be learned from yet another library -- and much harder than learned,
also kept in a mental working set of fluency.

This ends up being a rather strong argument for lazy I/O.  Not that
the code is shorter, but that it (surprisingly) unifies ideas that
would otherwise have required separate vocabulary.
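As an editorial illustration of that point (not code from the thread; the file name is made up): with lazy I/O, the ordinary list vocabulary works unchanged on file contents.

```haskell
import Data.Char (isSpace)

-- Count the non-whitespace characters in a file using only the ordinary
-- list vocabulary: (.), length, filter.  No stream-specific combinators.
countNonSpace :: FilePath -> IO Int
countNonSpace path = do
    s <- readFile path                     -- lazy: contents read on demand
    return (length (filter (not . isSpace) s))

main :: IO ()
main = do
    writeFile "demo.txt" "hello world"     -- hypothetical sample input
    n <- countNonSpace "demo.txt"
    print n                                -- prints 10
```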

I'm not saying it's a sufficient argument, just that it's a much
stronger one than familiarity, and that it's untrue that some better
library might achieve the same thing without the negative
consequences.  (If you're curious, I do believe that it often is a
sufficient argument in certain environments; I just don't think that's
the kind of question that gets resolved in mailing list threads.)

-- 
Chris Smith



[Haskell-cafe] Haskel let

2013-04-01 Thread A Smith
Hi
I think I've mastered much of functional programming with Haskell, or at
least I can write programs which behave as desired.
However, the existence of the let statement evades me, apart from being a
quick way to define a function. Then there are the in and where parts.
It's been suggested it's to do with scope and is obvious. I think I need to
understand it in a C/C++/Perl style of code to have a Eureka moment, when
it will be obvious.
Can someone refer me to something which will assist me?
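As an editorial sketch (not part of the original message): roughly speaking, `let ... in` binds local names before the expression that uses them, while `where` attaches the bindings after the whole equation. The two definitions below are equivalent.

```haskell
-- let ... in: bindings come first, then the expression that uses them,
-- much like declaring locals at the top of a C function body.
areaLet :: Double -> Double
areaLet r = let square z = z * z
                piApprox = 3.14159
            in piApprox * square r

-- where: the expression comes first, the bindings follow, scoped to
-- the entire right-hand side of the equation.
areaWhere :: Double -> Double
areaWhere r = piApprox * square r
  where
    square z = z * z
    piApprox = 3.14159

main :: IO ()
main = print (areaLet 2 == areaWhere 2)    -- prints True
```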

Andrew


Re: [Haskell-cafe] GSOC application level

2013-03-06 Thread Chris Smith
Mateusz Kowalczyk fuuze...@fuuzetsu.co.uk wrote:

 I know that this year's projects aren't up
 yet

Just to clarify, there isn't an official list of projects for you to choose
from.  The project that you propose is entirely up to you.  There is a list
of recommendations at
http://hackage.haskell.org/trac/summer-of-code/report/1 and another list of
ideas at http://reddit.com/r/haskell_proposals -- but keep in mind that you
ultimately make your own choice about what you propose, and it doesn't have
to be selected from those lists.  You can start writing your proposal today
if you like.

Having an unusually good idea is a great way to get selected even if you
don't have an established body of work to point to.  Just keep in mind that
proposals are evaluated not just on the benefit if they are completed, but
also on their likelihood of success... a good idea is both helpful and
realistic.  They are also evaluated on their benefit to the actual Haskell
community... so if that's not something you have a good feel for, I'd
suggest getting involved.  Follow reddit.com/r/haskell, read this mailing
list, read Haskell blogs from planet.haskell.org, and get familiar with
what Haskellers are concerned about and interested in.


Re: [Haskell-cafe] Extensible Type Unification

2013-02-08 Thread Leon Smith
It finally occurred to me how to get most of what I want,  at least from a
functional perspective.Here's a sample GADT,  with four categories of
constructor:

data Foo :: Bool -> Bool -> Bool -> Bool -> * where
    A :: Foo True b c d
    B :: Foo True b c d
    C :: Foo a True c d
    D :: Foo a b True d
    E :: Foo a b c True

Given an Eq instance,  we can easily compare two constructors for equality;
we can put some constructors in a list and tell which categories appear
in the list,  and it plays reasonably nicely with the case coverage
analyzer,  which I think is important from a software engineering
standpoint.  For example,  this function will compile without warning:

fooVal :: Foo a b False False -> Int
fooVal x =
    case x of
        A -> 0
        B -> 1
        C -> 2

The only thing this doesn't do that I wish it did is infer the type of
fooVal above.  Rather,  GHC infers the type  :: Foo a b c d -> Int,
and then warns me that I'm missing cases.   But this is admittedly a
strange form of type inference,   completely alien to the Haskell
landscape,   which I realized only shortly after sending my original email.
My original description of ranges of sets of types wasn't sufficient to
fully capture my intention.

That said,  this solution falls rather flat in several software engineering
respects.  It doesn't scale with complexity;   types quickly become
overbearing even with modest numbers of categories.  If you want to add
additional categories,   you have to modify every type constructor instance
you've ever written down.  You could mitigate some of this via the
existing type-level machinery,  but it seems a band-aid,  not a solution.

For comparison,  here is the best I had when I wrote my original email:

data Category = W | X | Y | Z

data Foo :: [Category] -> * where
    A :: (Member W ub) => Foo ub
    B :: (Member W ub) => Foo ub
    C :: (Member X ub) => Foo ub
    D :: (Member Y ub) => Foo ub
    E :: (Member Z ub) => Foo ub

class Member (a :: x) (bs :: [x])
instance Member a (a ': bs)
instance Member a bs => Member a (b ': bs)

The code is closer to what I want,  in terms of expression,  though it
mostly fails on the functional requirements.  Case analysis doesn't work,
I can't compare two constructors without additional type annotations,
and while I can find out which categories of constructors appear in a list,
it doesn't seem I can actually turn those contexts into many useful
things,  automatically.

So while I didn't have any trouble computing set operations over unordered
lists at the type level,   I couldn't figure out how to put it to use in
the way I wanted.  Perhaps there is a clever way to emulate unification,
but I really like the new features that reduce the need to be clever when
it comes to type-level computation.

Best,
Leon


[Haskell-cafe] Extensible Type Unification

2013-02-07 Thread Leon Smith
I've been toying with some type-level programming ideas I just can't quite
make work,   and it seems what I really want is a certain kind of type
unification.

Basically,  I'd like to introduce two new kind operators:

kind Set as   -- a finite set of ground type terms of kind as

kind Range as = Range (Set as) (Set as)  -- a greatest lower bound and
a least upper bound


A type expression of kind (Set as)  would either be a type variable of kind
(Set as),  or a set of *ground* type terms of kind (as).  A type
expression of kind (Range as) would be a type variable
of kind (Range as),  or two type expressions of kind (Set as).   To unify
ground terms of these types,  one would compute as follows:


unifySets xs ys
   |  xs == ys  = Just xs
   |  otherwise = Nothing

unifyRanges (Range glb0 lub0) (Range glb1 lub1)
| glb' `subset` lub' = Just (Range glb' lub')
| otherwise  = Nothing
  where
glb' = glb0 `union` glb1
lub' = lub0 `isect` lub1
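The unification rules above can be modeled at the value level; here is a self-contained sketch (my own elaboration, using Data.Set for the finite sets and String as a stand-in for ground types):

```haskell
import qualified Data.Set as Set
import Data.Set (Set)

-- A value-level model of the unification rules sketched above:
-- a Range is a greatest lower bound and a least upper bound.
data Range a = Range (Set a) (Set a)
  deriving (Eq, Show)

-- Ground sets unify only when they are identical.
unifySets :: Ord a => Set a -> Set a -> Maybe (Set a)
unifySets xs ys
    | xs == ys  = Just xs
    | otherwise = Nothing

-- Ranges unify by joining the lower bounds and meeting the upper bounds,
-- provided the result is still a valid range.
unifyRanges :: Ord a => Range a -> Range a -> Maybe (Range a)
unifyRanges (Range glb0 lub0) (Range glb1 lub1)
    | glb' `Set.isSubsetOf` lub' = Just (Range glb' lub')
    | otherwise                  = Nothing
  where
    glb' = glb0 `Set.union` glb1
    lub' = lub0 `Set.intersection` lub1

main :: IO ()
main = do
    let r1 = Range (Set.fromList ["Channel"]) (Set.fromList ["Channel", "Presence"])
        r2 = Range Set.empty (Set.fromList ["Channel", "Conversation"])
    -- glb' = {Channel}, lub' = {Channel}: the ranges unify.
    print (unifyRanges r1 r2
             == Just (Range (Set.fromList ["Channel"]) (Set.fromList ["Channel"])))
```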


I say sets of ground types,  because I have no idea what unification
between sets of non-ground types would mean,  and I really don't need it.

Among applications that came to mind,   one could use this to create a
restricted IO abstraction that could tell you which kinds of actions might
be taken  (read from mouse,  talk to network,  etc),   and be able to run
only those scripts that are restricted to certain resources. Or,  one
could define a singular GADT that represents messages/events decorated by
various categories,   and then define functions that
only operate on selected categories of messages.  E.g. for something
vaguely IRC-ish,  one could write something like:

data EventCategory
= Channel
| Presence
| Information
| Conversation

data Event :: Range EventCategory -> * where
    ChanCreate    :: ... -> Event (Range (Set '[Channel]) a)
    ChanJoin      :: ... -> Event (Range (Set '[Channel]) a)
    ChanLeave     :: ... -> Event (Range (Set '[Channel]) a)
    PresAvailable :: ... -> Event (Range (Set '[Presence]) a)
    PresAway      :: ... -> Event (Range (Set '[Presence]) a)
    WhoisQuery    :: ... -> Event (Range (Set '[Information]) a)
    WhoisResponse :: ... -> Event (Range (Set '[Information]) a)
    Message       :: ... -> Event (Range (Set '[Conversation]) a)

And then be able to write functions such as


dispatch :: Event (Range a (Set '[Channel, Conversation])) -> IO ()
dispatch e =
    case e of
        ChanCreate{..} -> ...
        ChanJoin{..}   -> ...
        ChanLeave{..}  -> ...
        Message{..}    -> ...


In this case,  the case analysis tool would be able to warn me if I'm
missing any possible events in this dispatcher,  or if I have extraneous
cases for events that can't be passed to it (according to my type).

Anyway,  I've been trying to see if I can't come up with something similar
using existing type-level functionality in 7.6,   with little success.
(Though I'm not very experienced with type-level programming.) If not,
 might it be possible to add some kind of extensible unification mechanism,
 in furtherance of type-level programming?

Best,
Leon


Re: [Haskell-cafe] lambda case (was Re: A big hurray for lambda-case (and all the other good stuff))

2012-12-30 Thread Chris Smith
On Sun, Dec 30, 2012 at 8:51 AM, David Thomas davidleotho...@gmail.com wrote:

 Jon's suggestion sounds great.

 The bike shed should be green.


There were plenty of proposals that would work fine.  `case of` was great.
 `\ of` was great.  It's less obvious to me that stand-alone `of` is never
ambiguous... but if that's true, it's reasonable.  Sadly, the option that
was worse than doing nothing at all is what was implemented.

The bikeshedding nonsense is frustrating.  Bikeshedding is about wasting
time debating the minutia of a significant improvement, when everyone
agrees the improvement is a good idea.  Here, what happened was that
someone proposed a minor syntax tweak (from `\x - case x of` to `case
of`), other reasonable minor syntax tweaks were proposed instead to
accomplish the same goal, and then in the end, out of the blue, it was
decided to turn `case` into a layout-inducing keyword (or even worse, only
sometimes but not always layout-inducing).

There is no bike shed here.  There are just colors (minor syntax tweaks).
 And I don't get the use of bikeshedding as basically just a rude comment
to be made at people who don't like the same syntax others do.

-- 
Chris


Re: [Haskell-cafe] Categories (cont.)

2012-12-21 Thread Chris Smith
It would definitely be nice to be able to work with a partial Category
class, where for example the objects could be constrained to belong to a
class.  One could then restrict a Category to a type level representation
of the natural numbers or any other desired set.  Kind polymorphism should
make this easy to define, but I still don't have a good feel for whether it
is worth the complexity.
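One way such a constrained Category might look (an editorial sketch using ConstraintKinds, not code from the thread; the class and instance names are made up):

```haskell
{-# LANGUAGE ConstraintKinds, KindSignatures, MultiParamTypeClasses #-}
import GHC.Exts (Constraint)

-- A Category-like class whose objects are restricted to types
-- satisfying the constraint `obj`, rather than all Haskell types.
class RestrictedCategory (obj :: * -> Constraint) (k :: * -> * -> *) where
    rid   :: obj a => k a a
    rcomp :: (obj a, obj b, obj c) => k b c -> k a b -> k a c

-- Plain functions between Ord-able types form such a category.
newtype OrdFun a b = OrdFun { runOrdFun :: a -> b }

instance RestrictedCategory Ord OrdFun where
    rid                         = OrdFun id
    rcomp (OrdFun g) (OrdFun f) = OrdFun (g . f)

main :: IO ()
main = print (runOrdFun (rcomp (OrdFun (+1)) (OrdFun (*2))) (5 :: Int))  -- prints 11
```

The constraint parameter plays the role of the object restriction: morphisms only exist between types that satisfy it, which is a weak form of choosing the objects of the category.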
On Dec 21, 2012 6:37 AM, Tillmann Rendel ren...@informatik.uni-marburg.de
wrote:

 Hi,

 Christopher Howard wrote:

 instance Category ...


 The Category class is rather restricted:

 Restriction 1:
 You cannot choose what the objects of the category are. Instead, the
 objects are always all Haskell types. You cannot choose anything at all
 about the objects.

 Restriction 2:
 You cannot freely choose what the morphisms of the category are. Instead,
 the morphisms are always Haskell values. (To some degree, you can choose
 *which* values you want to use).


 These restrictions disallow many categories. For example, the category
 where the objects are natural numbers and there is a morphism from m to n
 if m is greater than or equal to n cannot be expressed directly: Natural
 numbers are not Haskell types; and "is bigger than or equal to" is not a
 Haskell value.

   Tillmann




Re: [Haskell-cafe] containers license issue

2012-12-17 Thread Chris Smith
Ketil Malde ke...@malde.org wrote:
 The point of the point is that neither of these are translations of
 literary works, there is no precedence for considering them as such, and
 that reading somebody's work (whether literary or source code) before
 writing one's own does not imply that the 'somebody' will hold any
 rights to the subsequent work.

So IANAL, but I do have an amateur interest in copyright law.  The debate
over the word "translation" is completely irrelevant.  The important point
is whether it is a "derived work".  That phrase certainly includes more
than mere translation.  For example, it includes writing fiction that's set
in the same fantasy universe or uses the same characters as another author's
works.  It also includes making videos with someone else's music playing in
the background.  If you create a derived work, then the author of the
original definitely has rights to it, regardless of whether it is a mere
translation.  That's also why the word "derived" in a comment was
particularly dicey to the legal staff and probably caused them to overreact
in this case.

The defense in the case of software is to say that the part that was copied
was not a work of authorship in the sense that, say, a fiction character
is.  This is generally not a hard case to win, since courts see computer
software as dominated by its practical function.  But if you copied
something that was clearly a matter of expression and not related to the
function of the software, you could very well be creating a derived work
over which the original author could assert control.

That said, I agree that in this particular case it's very unlikely that the
original author could have won an infringement case.  I just balked a
little at the statements about translation, which was really just an
example.


Re: [Haskell-cafe] How can I avoid buffered reads?

2012-12-09 Thread Leon Smith
On Thu, Dec 6, 2012 at 5:23 PM, Brandon Allbery allber...@gmail.com wrote:

 Both should be cdevs, not files, so they do not go through the normal
 filesystem I/O pathway in the kernel and should support select()/poll().
  (ls -l, the first character should be c instead of - indicating
 character-mode device nodes.)  If ghc is not detecting that, then *that* is
 indeed an I/O manager issue.


The issue here is that if you look at the source of fdReadBuf,  you see
that it's a plain system call without any reference to GHC's (relatively
new) IO manager.

Best,
Leon


[Haskell-cafe] PLMW: Mentoring at POPL. Second Call for Participation

2012-12-04 Thread Gareth Smith

Students! You have until Sunday to apply for funding to come to both
POPL and PLMW!

CALL FOR PARTICIPATION

SIGPLAN Programming Languages Mentoring Workshop, Rome

Tuesday January 22, 2013

Co-located with POPL 2013

PLMW web page: http://www.doc.ic.ac.uk/~gds/PLMW/index.html

After the resounding success of the first Programming Languages
Mentoring Workshop at POPL 2012, we proudly announce the 2nd SIGPLAN
Programming Languages Mentoring Workshop (PLMW), co-located with POPL
2013 and organised by Nate Foster, Philippa Gardner, Alan Schmitt,
Gareth Smith, Peter Thieman and Tobias Wrigstad.

The purpose of this mentoring workshop is to encourage graduate
students and senior undergraduate students to pursue careers in
programming language research. This workshop will provide technical
sessions on cutting-edge research in programming languages, and
mentoring sessions on how to prepare for a research career. We will
bring together leaders in programming language research from academia
and industry to give talks on their research areas. The workshop will
engage students in a process of imagining how they might contribute to
our research community.

We especially encourage women and underrepresented minority students
to attend PLMW. Since PLMW will be in Rome this year, we particularly
look forward to seeing Eastern European students at the workshop.

This workshop is part of the activities surrounding POPL, the
Symposium on Principles of Programming Languages, and takes place the
day before the main conference. One goal of the workshop is to make
the POPL conference more accessible to newcomers. We hope that
participants will stay through the entire conference, and will also 
attend the POPL tutorials on Monday 21st January which are free to 
PLMW registered attendees. 

Through the generous donation of our sponsors, we are able to provide
scholarships to fund student participation. These scholarships will
cover reasonable expenses (airfare, hotel and registration fees) for
attendance at both the workshop and the POPL conference.

Students attending this year will get one year free student membership
of SIGPLAN

The workshop registration is open to all. Students with alternative
sources of funding  are welcome.

APPLICATION for PLMW scholarship: 

The scholarship application can be accessed from the workshop web site
(http://www.doc.ic.ac.uk/~gds/PLMW/index.html). The deadline for full
consideration of funding is 9th December, 2012. Selected participants
will be notified from Friday 14th December, and will need to register
for the workshop by December 24th.


SPONSORS:

Google
Imperial College London
Jane Street
Monoidics
NSF
Resource Reasoning
SIGPLAN
vmware



Re: [Haskell-cafe] How can I avoid buffered reads?

2012-11-29 Thread Leon Smith
Well,  I took Bardur's suggestion and avoided all the complexities of GHC's
IO stack and simply used System.Posix.IO and Foreign.  This appears to
work,  but for better or worse,   it is using blocking calls to the read
system call and is not integrated with GHC's IO manager.   This shouldn't
be an issue for my purposes,  but I suppose it's worth pointing out.

{-# LANGUAGE BangPatterns, ViewPatterns #-}

import           Control.Applicative
import           Data.Bits
import           Data.Word (Word64)
import qualified Data.ByteString as S
import qualified Data.ByteString.Lazy as L
import           Data.ByteString.Internal (c2w)
import           Control.Exception
import           System.Posix.IO
import           Foreign
import qualified System.IO as IO
import qualified Data.Binary.Get as Get

showHex :: Word64 -> S.ByteString
showHex n = s
  where
    (!s,_) = S.unfoldrN 16 f n

    f n = Just (char (n `shiftR` 60), n `shiftL` 4)

    char (fromIntegral -> i)
      | i < 10    = (c2w '0' -  0) + i
      | otherwise = (c2w 'a' - 10) + i

twoRandomWord64s :: IO (Word64,Word64)
twoRandomWord64s = bracket openRd closeRd readRd
  where
    openRd = openFd "/dev/urandom" ReadOnly Nothing defaultFileFlags {
        noctty = True }
    readRd = \fd -> allocaBytes 16 $ \ptr -> do
        fdReadAll fd ptr 16
        x <- peek (castPtr ptr)
        y <- peek (castPtr ptr `plusPtr` 8)
        return (x,y)
    closeRd = closeFd
    fdReadAll fd ptr n = do
        n' <- fdReadBuf fd ptr n
        if n /= n'
          then fdReadAll fd (ptr `plusPtr` fromIntegral n') (n - n')
          else return ()

main = do
    (x,y) <- twoRandomWord64s
    S.hPutStrLn IO.stdout (S.append (showHex x) (showHex y))


On Wed, Nov 28, 2012 at 6:05 PM, Leon Smith leon.p.sm...@gmail.com wrote:

 If you have rdrand,  there is no need to build your own PRNG on top of
 rdrand.   RdRand already incorporates one so that it can produce random
 numbers as fast as they can be requested,  and this number is continuously
 re-seeded with the on-chip entropy source.

 It would be nice to have a little more information about /dev/urandom and
 how it varies by OS and hardware,   but on Linux and FreeBSD at least it's
 supposed to be a cryptographically secure RNG that incorporates a PRNG to
 produce numbers in case you exhaust the entropy pool.

 On Wed, Nov 28, 2012 at 5:00 PM, Vincent Hanquez t...@snarc.org wrote:

 On 11/28/2012 09:31 PM, Leon Smith wrote:

 Quite possibly,  entropy does seem to be a pretty lightweight
 dependency...

 Though doesn't recent kernels use rdrand to seed /dev/urandom if it's
 available?   So /dev/urandom is the most portable source of random numbers
 on unix systems,  though rdrand does have the advantage of avoiding system
 calls,  so it certainly would be preferable, especially if you need large
 numbers of random numbers.

  There's not much information on this, I think, but if you need large numbers
  of random numbers you should build a PRNG yourself on top of the best
 random seed you can get, and make sure you reseed your prng casually with
 more entropy bytes. Also if
 you don't have enough initial entropy, you should block.

 /dev/urandom is not the same thing on every unix system. leading to
 various assumptions broken when varying the unixes. It also varies with the
 hardware context: for example on an embedded or some virtualized platform,
 giving you really terrible entropy.

 --
 Vincent





[Haskell-cafe] How can I avoid buffered reads?

2012-11-28 Thread Leon Smith
I have some code that reads (infrequently) small amounts of data from
/dev/urandom,  and because this is pretty infrequent,  I simply open the
handle and close it every time I need some random bytes.

The problem is that I recently discovered that,  thanks to buffering within
GHC,   I was actually reading 8096 bytes when I only need 16 bytes,  and
thus wasting entropy.   Moreover  calling hSetBuffering  handle NoBuffering
did not change this behavior.

I'm not sure if this behavior is a bug or a feature,  but in any case it's
unacceptable for dealing with /dev/urandom.   Probably the simplest way to
fix this is to write a little C helper function that will read from
/dev/urandom for me,  so that I have precise control over the system calls
involved. But I'm curious if GHC can manage this use case correctly;
I've just started digging into the GHC.IO code myself.

Best,
Leon

{-# LANGUAGE BangPatterns, ViewPatterns #-}
import           Control.Applicative
import           Data.Bits
import           Data.Word (Word64)
import qualified Data.ByteString as S
import qualified Data.ByteString.Lazy as L
import           Data.ByteString.Internal (c2w)
import qualified System.IO as IO
import qualified Data.Binary.Get as Get

showHex :: Word64 -> S.ByteString
showHex n = s
  where
    (!s,_) = S.unfoldrN 16 f n

    f n = Just (char (n `shiftR` 60), n `shiftL` 4)

    char (fromIntegral -> i)
      | i < 10    = (c2w '0' -  0) + i
      | otherwise = (c2w 'a' - 10) + i

twoRandomWord64s :: IO (Word64,Word64)
twoRandomWord64s = IO.withBinaryFile "/dev/urandom" IO.ReadMode $ \handle -> do
    IO.hSetBuffering handle IO.NoBuffering
    Get.runGet ((,) <$> Get.getWord64host <*> Get.getWord64host) <$>
        L.hGet handle 16

main = do
    (x,y) <- twoRandomWord64s
    S.hPutStrLn IO.stdout (S.append (showHex x) (showHex y))

{- Relevant part of strace:

open("/dev/urandom", O_RDONLY|O_NOCTTY|O_NONBLOCK) = 3
fstat(3, {st_mode=S_IFCHR|0666, st_rdev=makedev(1, 9), ...}) = 0
ioctl(3, SNDCTL_TMR_TIMEBASE or TCGETS, 0x7367e528) = -1 EINVAL
(Invalid argument)
ioctl(3, SNDCTL_TMR_TIMEBASE or TCGETS, 0x7367e528) = -1 EINVAL
(Invalid argument)
read(3, "N\304\4\367/\26c\\3218\237f\214yKg~i\310\r\262\\224H\340y\n\376V?\265\344"...,
8096) = 8096
close(3)= 0

-}


Re: [Haskell-cafe] How can I avoid buffered reads?

2012-11-28 Thread Leon Smith
Quite possibly,  entropy does seem to be a pretty lightweight dependency...

Though don't recent kernels use rdrand to seed /dev/urandom if it's
available?   So /dev/urandom is the most portable source of random numbers
on unix systems,  though rdrand does have the advantage of avoiding system
calls,  so it certainly would be preferable, especially if you need large
numbers of random numbers.

Best,
Leon

On Wed, Nov 28, 2012 at 2:45 PM, Thomas DuBuisson 
thomas.dubuis...@gmail.com wrote:

 As an alternative, If there existed a Haskell package to give you fast
 cryptographically secure random numbers or use the new Intel RDRAND
 instruction (when available) would that interest you?

 Also, what you are doing is identical to the entropy package on
 hackage, which probably suffers from the same bug/performance issue.

 Cheers,
 Thomas

 On Wed, Nov 28, 2012 at 11:38 AM, Leon Smith leon.p.sm...@gmail.com
 wrote:
  I have some code that reads (infrequently) small amounts of data from
  /dev/urandom,  and because this is pretty infrequent,  I simply open the
  handle and close it every time I need some random bytes.
 
  The problem is that I recently discovered that,  thanks to buffering
 within
  GHC,   I was actually reading 8096 bytes when I only need 16 bytes,  and
  thus wasting entropy.   Moreover  calling hSetBuffering  handle
 NoBuffering
  did not change this behavior.
 
  I'm not sure if this behavior is a bug or a feature,  but in any case
 it's
  unacceptable for dealing with /dev/urandom.   Probably the simplest way
 to
  fix this is to write a little C helper function that will read from
  /dev/urandom for me,  so that I have precise control over the system
 calls
  involved. But I'm curious if GHC can manage this use case correctly;
  I've just started digging into the GHC.IO code myself.
 
  Best,
  Leon
 
  {-# LANGUAGE BangPatterns, ViewPatterns #-}
 
  import   Control.Applicative
  import   Data.Bits
  import   Data.Word(Word64)
  import qualified Data.ByteString as S
  import qualified Data.ByteString.Lazy as L
  import   Data.ByteString.Internal (c2w)
  import qualified System.IO as IO
  import qualified Data.Binary.Get as Get
 
  showHex :: Word64 -> S.ByteString
  showHex n = s
where
  (!s,_) = S.unfoldrN 16 f n
 
  f n = Just (char (n `shiftR` 60), n `shiftL` 4)
 
  char (fromIntegral -> i)
    | i < 10    = (c2w '0' -  0) + i
| otherwise = (c2w 'a' - 10) + i
 
  twoRandomWord64s :: IO (Word64,Word64)
  twoRandomWord64s = IO.withBinaryFile "/dev/urandom" IO.ReadMode $
 \handle ->
  do
 IO.hSetBuffering handle IO.NoBuffering
 Get.runGet ((,) <$> Get.getWord64host <*> Get.getWord64host) <$>
 L.hGet
  handle 16
 
  main = do
 (x,y) <- twoRandomWord64s
 S.hPutStrLn IO.stdout (S.append (showHex x) (showHex y))
 
 
  {- Relevant part of strace:
 
  open("/dev/urandom", O_RDONLY|O_NOCTTY|O_NONBLOCK) = 3
  fstat(3, {st_mode=S_IFCHR|0666, st_rdev=makedev(1, 9), ...}) = 0
  ioctl(3, SNDCTL_TMR_TIMEBASE or TCGETS, 0x7367e528) = -1 EINVAL
 (Invalid
  argument)
  ioctl(3, SNDCTL_TMR_TIMEBASE or TCGETS, 0x7367e528) = -1 EINVAL
 (Invalid
  argument)
  read(3,
 
 N\304\4\367/\26c\\3218\237f\214yKg~i\310\r\262\\224H\340y\n\376V?\265\344...,
  8096) = 8096
  close(3)= 0
 
  -}
 
 
 



Re: [Haskell-cafe] How can I avoid buffered reads?

2012-11-28 Thread Leon Smith
If you have rdrand,  there is no need to build your own PRNG on top of
rdrand.   RdRand already incorporates one so that it can produce random
numbers as fast as they can be requested,  and this number is continuously
re-seeded with the on-chip entropy source.

It would be nice to have a little more information about /dev/urandom and
how it varies by OS and hardware,   but on Linux and FreeBSD at least it's
supposed to be a cryptographically secure RNG that incorporates a PRNG to
produce numbers in case you exhaust the entropy pool.

On Wed, Nov 28, 2012 at 5:00 PM, Vincent Hanquez t...@snarc.org wrote:

 On 11/28/2012 09:31 PM, Leon Smith wrote:

 Quite possibly,  entropy does seem to be a pretty lightweight
 dependency...

 Though doesn't recent kernels use rdrand to seed /dev/urandom if it's
 available?   So /dev/urandom is the most portable source of random numbers
 on unix systems,  though rdrand does have the advantage of avoiding system
 calls,  so it certainly would be preferable, especially if you need large
 numbers of random numbers.

  There's not much information on this, I think, but if you need large numbers
  of random numbers you should build a PRNG yourself on top of the best
 random seed you can get, and make sure you reseed your prng casually with
 more entropy bytes. Also if
 you don't have enough initial entropy, you should block.

 /dev/urandom is not the same thing on every unix system. leading to
 various assumptions broken when varying the unixes. It also varies with the
 hardware context: for example on an embedded or some virtualized platform,
 giving you really terrible entropy.

 --
 Vincent



[Haskell-cafe] Call for Participation: Programming Languages Mentoring Workshop - a POPL workshop.

2012-11-17 Thread Gareth Smith

Apologies for any duplicates:

CALL FOR PARTICIPATION

SIGPLAN Programming Languages Mentoring Workshop, Rome

Tuesday January 22, 2013

Co-located with POPL 2013

PLMW web page: http://www.doc.ic.ac.uk/~gds/PLMW/index.html

After the resounding success of the first Programming Languages
Mentoring Workshop at POPL 2012, we proudly announce the 2nd SIGPLAN
Programming Languages Mentoring Workshop (PLMW), co-located with POPL
2013 and organised by Nate Foster, Philippa Gardner, Alan Schmitt,
Gareth Smith, Peter Thieman and Tobias Wrigstad.

The purpose of this mentoring workshop is to encourage graduate
students and senior undergraduate students to pursue careers in
programming language research. This workshop will provide technical
sessions on cutting-edge research in programming languages, and
mentoring sessions on how to prepare for a research career. We will
bring together leaders in programming language research from academia
and industry to give talks on their research areas. The workshop will
engage students in a process of imagining how they might contribute to
our research community.

We especially encourage women and underrepresented minority students
to attend PLMW. Since PLMW will be in Rome this year, we particularly
look forward to seeing Eastern European students at the workshop.

This workshop is part of the activities surrounding POPL, the
Symposium on Principles of Programming Languages, and takes place the
day before the main conference. One goal of the workshop is to make
the POPL conference more accessible to newcomers. We hope that
participants will stay through the entire conference, and will also 
attend the POPL tutorials on Monday 21st January which are free to 
PLMW registered attendees. 

Through the generous donation of our sponsors, we are able to provide
scholarships to fund student participation. These scholarships will
cover reasonable expenses (airfare, hotel and registration fees) for
attendance at both the workshop and the POPL conference.

Students attending this year will get one year's free student membership
of SIGPLAN.

The workshop registration is open to all. Students with alternative
sources of funding are welcome.

APPLICATION for PLMW scholarship: 

The scholarship application can be accessed from the workshop web site
(http://www.doc.ic.ac.uk/~gds/PLMW/index.html). The deadline for full
consideration of funding is 9th December, 2012. Selected participants
will be notified from Friday 14th December, and will need to register
for the workshop by December 24th.


SPONSORS:

Imperial College London
Jane Street
Monoidics
NSF
Resource Reasoning
SIGPLAN
vmware



Re: [Haskell-cafe] Call for discussion: OverloadedLists extension

2012-09-23 Thread Chris Smith
Michael Snoyman mich...@snoyman.com wrote:
 That said, it would be great to come up with ways to mitigate the
 downsides of unbounded polymorphism that you bring up. One idea I've
 seen mentioned before is to modify these extension so that they target
 a specific instance of IsString/IsList, e.g.:

 {-# STRING_LITERALS_AS Text #-}

 "foo" == (fromString "foo" :: Text)

That makes sense for OverloadedStrings, but probably not for
OverloadedLists or overloaded numbers... String literals have the
benefit that there's one type that you probably always really meant.
The cases where you really wanted [Char] or ByteString are rare.  On
the other hand, there really is no sensible "I always want this"
answer for lists or numbers.  It seems like a kludge to do it
per-module if each module is going to give different answers most of
the time.
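For reference, the mechanism both proposals build on: with OverloadedStrings, each string literal is elaborated to a fromString call at whatever IsString instance is inferred (the Name type below is invented for illustration):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.String (IsString (..))

newtype Name = Name String deriving (Eq, Show)

instance IsString Name where
  fromString = Name

greeting :: Name
greeting = "hello"  -- elaborated to: fromString "hello" :: Name
```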

-- 
Chris



Re: [Haskell-cafe] Over general types are too easy to make.

2012-09-02 Thread Chris Smith
On Sun, Sep 2, 2012 at 9:40 AM,  timothyho...@seznam.cz wrote:
 The thing is, that one ALWAYS wants to create a union of types, and not
 merely an ad-hoc list of data declarations.  So why does it take more code
 to do the right thing(tm) than to do the wrong thing(r)?

You've said this a few times, that you run into this constantly, or
even that everyone runs into this.  But I don't think that's the case.
 It's something that happens sometimes, yes, but if you're running
into this issue for every data type that you declare, that is
certainly NOT just normal in Haskell programming.  So in that sense,
many of the answers you've gotten - to use a GADT, in particular -
might be great advice in the small subset of cases where average
Haskell programmers want more complex constraints on types; but it's
certainly not a good idea to do to every data type in your
application.

I don't have the answer for you about why this always happens to you,
but it's clear that there's something there - perhaps a stylistic
issue, or a domain-specific pattern, or something... - that's causing
you to face this a lot more frequently than others do.  If I had to
take a guess, I'd say that you're breaking things down into fairly
complex monolithic parts, where a lot of Haskell programmers will have
a tendency to work with simpler types and break things down into
smaller pieces.  But... who knows... I haven't seen the many cases
where this has happened to you.

-- 
Chris



Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-22 Thread Leon Smith
I think we actually agree more than we disagree; I do think distinguishing
hard and soft upper bounds (no matter what they are called) would help,
and I'm just trying to justify them against some of the more dismissive
attitudes towards the idea.

The only thing I think we (might) disagree on is the relative importance of
distinguishing hard and soft bounds versus being able to change bounds
easily after the fact (and *without* changing the version number associated
with the package.)

And on that count,  given the choice,  I pick being able to change bounds
after the fact, hands down.   I believe this is more likely to
significantly improve the current situation than distinguishing the two
types of bound alone.   However,  being able to specify both (and change
both) after the fact may prove to be even better.

Best,
Leon

On Sat, Aug 18, 2012 at 11:52 PM, wren ng thornton w...@freegeek.org wrote:

 On 8/17/12 11:28 AM, Leon Smith wrote:

 And the
 difference between reactionary and proactive approaches I think is a
 potential justification for the hard and soft upper bounds;  perhaps
 we
 should instead call them reactionary and proactive upper bounds
 instead.


 I disagree. A hard constraint says this package *will* break if you
 violate me. A soft constraint says this package *may* break if you
 violate me. These are vastly different notions of boundary conditions, and
 they have nothing to do with a proactive vs reactionary stance towards
 specifying constraints (of either type).

 The current problems of always giving (hard) upper bounds, and the
 previous problems of never giving (soft) upper bounds--- both stem from a
 failure to distinguish hard from soft! The current/proactive approach fails
 because the given constraints are interpreted by Cabal as hard constraints,
 when in truth they are almost always soft constraints. The
 previous/reactionary approach fails because, when the future breaks, no one
 bothered to write down when the last time things were known to work.

 To evade both problems, one must distinguish these vastly different
 notions of boundary conditions. Hard constraints are necessary for
 blacklisting known-bad versions; soft constraints are necessary for
 whitelisting known-good versions. Having a constraint at all shows where
 the grey areas are, but it fails to indicate whether that grey is most
 likely to be black or white.

 --
 Live well,
 ~wren





Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-17 Thread Leon Smith
I see good arguments on both sides of the upper bounds debate,  though at
the current time I think the best solution is to omit upper bounds (and I
have done so for most/all of my packages on hackage).But I cannot agree
with this enough:

On Thu, Aug 16, 2012 at 4:45 AM, Joachim Breitner
m...@joachim-breitner.de wrote:

 I think what we’d need is a more relaxed policy with modifying a
 package’s meta data on hackage. What if hackage would allow uploading a
 new package with the same version number, as long as it is identical up
 to an extended version range? Then the first person who stumbles over an
 upper bound that turned out to be too tight can just fix it and upload
 the fixed package directly, without waiting for the author to react.


I think that the constraint ranges of a given package should be able to be
both extended and restricted after the fact.  Those in favor of the
reactionary approach (as I am at the moment, or Bryan O'Sullivan) would
find the ability to restrict the version range useful, while those in
favor of the proactive approach (like Joachim Breitner or Doug Beardsley)
would find the ability to extend the version range useful.

I suspect that attitudes towards upper bounds may well change if we can set
version ranges after the fact.I know mine very well might.And the
difference between reactionary and proactive approaches I think is a
potential justification for the hard and soft upper bounds;  perhaps we
should instead call them reactionary and proactive upper bounds instead.


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-16 Thread Chris Smith
I am tentatively in agreement that upper bounds are causing more
problems than they are solving.  However, I want to suggest that
perhaps the more fundamental issue is that Cabal asks the wrong person
to answer questions about API stability.  As a package author, when I
release a new version, I know perfectly well what incompatible changes
I have made to it... and those might include, for example:

1. New modules, exports or instances... low risk
2. Changes to less frequently used, advanced, or internal APIs...
moderate risk
3. Completely revamped commonly used interfaces... high risk

Currently *all* of these categories have the potential to break
builds, so require the big hammer of changing the first-dot version
number.  I feel like I should be able to convey this level of risk,
though... and it should be able to be used by Cabal.  So, here's a
proposal just to toss out there; no idea if it would be worth the
complexity or not:

A. Cabal files should get a new Compatibility field, indicating the
level of compatibility from the previous release: low, medium, high,
or something like that, with definitions for what each one means.

B. Version constraints should get a new syntax:

bytestring ~ 0.10.* (allow later versions that indicate low or
moderate risk)
bytestring ~~ 0.10.* (allow later versions with low risk; we use
the dark corners of this one)
bytestring == 0.10.* (depend 100% on 0.10, and allow nothing else)

Of course, this adds a good bit of complexity to the constraint
solver... but not really.  It's more like a pre-processing pass to
replace fuzzy constraints with precise ones.
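As a rough illustration of that pre-processing idea (all names are invented; the proposal defines no concrete semantics), the solver could decide whether a later release satisfies a fuzzy constraint purely from its declared Compatibility level:

```haskell
-- Declared compatibility risk of a release, per the proposed
-- Compatibility field in the .cabal file.
data Compat = Low | Moderate | High
  deriving (Eq, Ord, Show)

-- The three proposed constraint operators.
data Fuzzy
  = Tilde      -- '~'  : allow later versions with low or moderate risk
  | TildeTilde -- '~~' : allow later versions with low risk only
  | Exact      -- '==' : allow nothing beyond the stated range
  deriving (Eq, Show)

-- Would a later release with the given declared risk still satisfy
-- the operator?  This is the "fuzzy to precise" expansion step.
allows :: Fuzzy -> Compat -> Bool
allows Tilde      risk = risk <= Moderate
allows TildeTilde risk = risk == Low
allows Exact      _    = False
```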

-- 
Chris



Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-16 Thread Chris Smith
Twan van Laarhoven twa...@gmail.com wrote:
 Would adding a single convenience function be low or high risk? You say it
 is low risk, but it still risks breaking a build if a user has defined a
 function with the same name.

Yes, it's generally low-risk, but there is *some* risk.  Of course, it
could be high risk if you duplicate a Prelude function or a name that
you know is in use elsewhere in a related or core library... these
decisions would involve knowing something about the library space,
which package maintainers often do.

 I think the only meaningful distinction you can make are:

Except that the whole point is that this is *not* the only distinction
you can make.  It might be the only distinction with an exact
definition that can be checked by automated tools, but that doesn't
change the fact that when I make an incompatible change to a library
I'm maintaining, I generally have a pretty good idea of which kinds of
users are going to be fixing their code as a result.  The very essence
of my suggestion was that we accept the fact that we are working in
probabilities here, and empower package maintainers to share their
informed evaluation.  Right now, there's no way to provide that
information: the PVP is caught up in exactly this kind of legalism
that only cares whether a break is possible or impossible, without
regard to how probable it is.  The complaint that this new mechanism
doesn't have exactly such a black and white set of criteria associated
with it is missing the point.

-- 
Chris



Re: [Haskell-cafe] What does unpacking an MVar really mean?

2012-07-31 Thread Leon Smith
On Tue, Jul 31, 2012 at 7:37 AM, Bertram Felgenhauer 
bertram.felgenha...@googlemail.com wrote:

 Note that MVar# itself cannot be unpacked -- the StgMVar record will
 always be a separate heap object.


One could imagine a couple of techniques to unpack the MVar# itself, and I
was curious whether GHC might employ one of them.

So, really, unpacking the MVar does not eliminate a layer of indirection;
it just eliminates the need to check a pointer tag (and possibly execute a
thunk or follow some redirects if you don't have a pointer to an MVar#).
I think this is what I was ultimately after.

Best,
Leon


[Haskell-cafe] What does unpacking an MVar really mean?

2012-07-30 Thread Leon Smith
I admit I don't know exactly how MVars are implemented, but given that
they can be aliased and have indefinite extent, I would think that they
look something vaguely like a C "datatype **var", basically a pointer to an
MVar (which is itself a pointer, modulo some other things such as a thread
queue).

And I would think that unpacking such a structure would basically be
eliminating one layer of indirection, so it would then look vaguely like a
C "datatype *var".  But again, given aliasing and indefinite extent, this
would seem to be a difficult thing to do.

Actually, this isn't too difficult if an MVar only exists in a single
unpacked structure: other references to the MVar can simply be pointers
into the structure.  But the case where an MVar is unpacked into two
different structures suggests that, at least in some cases, an unpacked
MVar is still a "datatype **var".

So, is my understanding more or less correct?  Does anybody have a good,
succinct explanation of how MVars are implemented,  and how they are
unpacked?

One final question: assuming that unpacking an MVar really does eliminate
a layer of indirection, and that other references to that MVar are simply
pointers into a larger structure, what happens to that larger structure
when there are no more references to it (but still some references to the
MVar)?  Given the complications that must arise out of a doubly
unpacked MVar, I'm going to guess that the larger structure does get
garbage collected in this case, and that the MVar becomes dislodged from
this structure.  Would that MVar then be placed directly inside another
unpacked reference, if one is available?

Best,
Leon


Re: [Haskell-cafe] What does unpacking an MVar really mean?

2012-07-30 Thread Leon Smith
Let me clarify a bit.

I am familiar with the source of Control.Concurrent.MVar, and I do see
{-# UNPACK #-}'ed MVars around, for example in GHC's IO manager.  What I
should have asked is: what does an MVar# look like?  This cannot be
inferred from Haskell source, though I suppose I could have tried to read
the runtime source.

Now, one would hope that an (MVar# RealWorld footype) would
approximately correspond to a "footype *mvar;" variable in C.  The
problem is that this cannot _always_ be the case, because you can alias the
(MVar# RealWorld footype) by placing a single MVar into two unpacked
columns in two different data structures.  So you would need to be able
to still sometimes represent an MVar# by a "footype **mvar" at runtime,
even though one would hope that it would be represented by a "footype
*mvar" in one particular data structure.
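The kind of unpacking being referred to looks like this in source form (the Worker record is a made-up example in the style of GHC's IO manager, not its actual code):

```haskell
import Control.Concurrent.MVar

-- With the UNPACK pragma on a strict field, GHC stores the MVar's
-- underlying MVar# pointer directly in the Worker constructor,
-- instead of going through an intermediate MVar box.
data Worker = Worker
  { wakeSignal :: {-# UNPACK #-} !(MVar ())
  , workerId   :: !Int
  }

newWorker :: Int -> IO Worker
newWorker n = do
  sig <- newEmptyMVar
  return (Worker sig n)
```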

On Tue, Jul 31, 2012 at 1:04 AM, Ryan Ingram ryani.s...@gmail.com wrote:

 Because of this, boxed MVars can be garbage collected without necessarily
 garbage-collecting the MVar# it holds, if a live reference to that MVar#
 still exists elsewhere.


I was asking the dual question:  if the MVar# exists in some data
structure,  can that data structure still be garbage collected when there
is a reference to the MVar#,  but not the data structure it is contained
within.

Best,
Leon


Re: [Haskell-cafe] Call to arms: lambda-case is stuck and needs your help

2012-07-06 Thread Chris Smith
Whoops, my earlier answer forgot to copy mailing lists... I would love to
see \of, but I really don't think this is important enough to make case
sometimes introduce layout and other times not.  If it's going to obfuscate
the lexical syntax like that, I'd rather just stick with \x -> case x of.
On Jul 6, 2012 3:15 PM, Strake strake...@gmail.com wrote:

 On 05/07/2012, Mikhail Vorozhtsov mikhail.vorozht...@gmail.com wrote:
  Hi.
 
  After 21 months of occasional arguing the lambda-case proposal(s) is in
  danger of being buried under its own trac ticket comments. We need fresh
  blood to finally reach an agreement on the syntax. Read the wiki
  page[1], take a look at the ticket[2], vote and comment on the proposals!
 

 +1 for \ of multi-clause lambdas

 It looks like binding "of" to me, which it ain't, but it is nicely brief...




Re: [Haskell-cafe] Question on proper use of Data.IORef

2012-06-22 Thread Adam Smith
theValueRef isn't a pointer to theValue that you can use to somehow change
theValue (which is immutable).
theValueRef is a reference to a box that contains a totally separate,
mutable value.

When you use newIORef to create theValueRef, it's *copying* theValue into
the box. When you mutate theValueRef, you're mutating the value inside the
box - theValue remains unchanged.
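A version that reads the mutated value back out of the box behaves as the original poster expected (a minimal sketch):

```haskell
import Data.IORef

bump :: IORef Int -> IO ()
bump theRef = modifyIORef theRef (+ 1)  -- read, increment, write back

-- Returns the boxed value before and after bumping.
demo :: IO (Int, Int)
demo = do
  theValueRef <- newIORef 1
  before <- readIORef theValueRef
  bump theValueRef
  after <- readIORef theValueRef
  return (before, after)  -- (1, 2)
```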

Cheers,
Adam

On 22 June 2012 11:30, Captain Freako capn.fre...@gmail.com wrote:

 Hi experts,


 I fear I don't understand how to properly use *Data.IORef*.
 I wrote the following code:


 -- Testing Data.IORef
 module Main where

 import Data.IORef

 bump :: IORef Int -> IO ()
 bump theRef = do
   tmp <- readIORef theRef
   let tmp2 = tmp + 1
   writeIORef theRef tmp2

 main = do
   let theValue = 1
   print theValue
   theValueRef <- newIORef theValue
   bump theValueRef
   return theValue


 and got this, in ghci:


 *Main> :load test2.hs
 [1 of 1] Compiling Main ( test2.hs, interpreted )
 Ok, modules loaded: Main.
 *Main> main
 1
 1


 I was expecting this:


 *Main> :load test2.hs
 [1 of 1] Compiling Main ( test2.hs, interpreted )
 Ok, modules loaded: Main.
 *Main> main
 1
 2


 Can anyone help me understand what I'm doing wrong?


 Thanks!
 -db





[Haskell-cafe] Current uses of Haskell in industry?

2012-06-13 Thread Chris Smith
It turns out I'm filling in for a cancelled speaker at a local open
source user group, and doing a two-part talk, first on Haskell and
then Snap.  For the Haskell part, I'd like a list of current places
the language is used in industry.  I recall a few from Reddit stories
and messages here and other sources, but I wonder if anyone is keeping
a list.

-- 
Chris



[Haskell-cafe] import IO

2012-05-16 Thread A Smith
Hi folks
I need a little help.
I had a hiccup upgrading my Ubuntu system, and eventually did a fresh
install.
It's mostly fixed to my old favourite ways, but I cannot remember what's
needed to install the packages that the "import IO" statement uses!
--
Andrew


[Haskell-cafe] Fwd: Problem with forall type in type declaration

2012-05-04 Thread Chris Smith
Oops, forgot to reply-all again...

-- Forwarded message --
From: Chris Smith cdsm...@gmail.com
Date: Fri, May 4, 2012 at 8:46 AM
Subject: Re: [Haskell-cafe] Problem with forall type in type declaration
To: Magicloud Magiclouds magicloud.magiclo...@gmail.com


On Fri, May 4, 2012 at 2:34 AM, Magicloud Magiclouds
magicloud.magiclo...@gmail.com wrote:
 Sorry, it was just pseudocode. This might be more clear:

 run :: (Monad m) => m IO a -> IO a

Unfortunately, that's not more clear.  For the constraint (Monad m) to
hold, m must have the kind (* - *), so then (m IO a) is meaningless.
I assume you meant one of the following:

   run :: MonadTrans m => m IO a -> IO a

or

   run :: MonadIO m => m a -> IO a

(Note that MonadIO is the class from the mtl package; there is no space there).

Can you clarify which was meant?  Or perhaps you meant something else entirely?

--
Chris Smith



Re: [Haskell-cafe] Specify compile error

2012-05-03 Thread Adam Smith
Nope - because at compile time, there's no way to know whether
createB's argument is a Safe or an Unsafe. That information only
exists at run time. Consider the following functions.

f :: Int -> A
f x = if x < 0 then Unsafe x else Safe x

g :: IO B
g = do x <- getLine
       return $ createB $ f (read x)

Here, read x will convert the input (entered at runtime) to an Int
(assuming it can; failure leads to a runtime exception), and then f
will convert the resulting Int to an A, which is then passed to
createB. But there's no way for the compiler to know whether that A
will be a Safe or an Unsafe A, since that depends on the value entered
at runtime.

If you want the compiler to typecheck two things differently, they
need to be of different types. If you give a bit more context about
what you're trying to do, someone might be able to suggest a safer way
of doing it.
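One such safer approach, sketched here (it assumes, mirroring f above, that negative values are the unsafe ones), is to make safety a type rather than a constructor, so unchecked values can never reach createB:

```haskell
data B = B Int deriving (Eq, Show)

-- Safety is now a type, not a runtime tag.
newtype SafeInt = SafeInt Int deriving (Eq, Show)

-- createB accepts only values that are safe by construction;
-- passing an unchecked Int is a compile-time type error.
createB :: SafeInt -> B
createB (SafeInt i) = B i

-- The sole way to obtain a SafeInt is this validating smart
-- constructor, so the runtime check happens exactly once, up front.
mkSafe :: Int -> Maybe SafeInt
mkSafe x
  | x >= 0    = Just (SafeInt x)
  | otherwise = Nothing
```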

Cheers,
Adam

On 3 May 2012 11:36, Ismael Figueroa Palet ifiguer...@gmail.com wrote:

 Hi, I'm writing a program like this:

 data B = B Int
 data A = Safe Int | Unsafe Int

 createB :: A -> B
 createB (Safe i) = B i
 createB (Unsafe i) = error "This is not allowed"

 Unfortunately, the situation when createB is called with an Unsafe value is 
 only checked at runtime.
 If I omit the second case, it is not an error to not be exhaustive :-(

 Is there a way to make it a compile time error??

 Thanks!
 --
 Ismael






Re: [Haskell-cafe] twitter election on favorite programming language

2012-05-01 Thread Leon Smith
Out of curiosity, was this a plurality election (vote for one), or an
approval election (vote for many)?

On Tue, May 1, 2012 at 12:11 AM, Kazu Yamamoto k...@iij.ad.jp wrote:

 Hello,

 A twitter election on favorite programming language was held in Japan
 and it appeared that Haskell is the No. 10 most-loved language in Japan. :-)

http://twisen.com/election/index/654

 Regards,

 --Kazu




Re: [Haskell-cafe] ANNOUNCE: pipes-core 0.1.0

2012-04-17 Thread Chris Smith
Paolo,

This new pipes-core release looks very nice, and I'm happy to see
exception and finalizer safety while still retaining the general
structure of the original pipes package.  One thing that Gabriel and
Michael have been talking about, though, that seems to be missing
here, is a way for a pipe to indicate that it's finished with its
upstream portion, so that upstream finalizers can be immediately run
without waiting for the downstream parts of the pipe to complete.

Do you have an answer for this?  I've been puzzling it out this
morning, but it's unclear to me how something like this interacts with
type safety and exception handling.

-- 
Chris



Re: [Haskell-cafe] open source project for student

2012-04-11 Thread Chris Smith
Hmm, tough to answer without more to go on.  I think if I were in your
shoes I'd ask myself where I'm most happy outside of programming.  A lot of
good entry level open source work involves combining programming with other
skills.

Are you an artist?  Have a talent for strong design and striking expression?

Are you an organizer or a communicator?  The sort of person who draws
diagrams and talks to themselves, practicing better ways to explain cool
ideas in simple terms?

Are you a scrappy tinkerer?  Someone who knows how to get your hands dirty
in a productive way before you're an expert?  A wiz with unit testing and
profiling tools?

I do have an education-related project I'm working on where being a smart
but inexperienced programmer might be an advantage.  But it's a question of
whether it's a good fit for what you're looking for.  Email me if you may
be interested in that.
On Apr 11, 2012 3:53 PM, Dan Cristian Octavian danoctavia...@gmail.com
wrote:

 Hello,

 I am a second year computer science student who is very interested in
 working on a Haskell open source project. I have no particular focus on a
 certain type of application. I am open to ideas and to exploring new
 fields. What kind of project should I look for considering that I am a
 beginner? (Any particular project proposals would be greatly appreciated).

 Is the entry bar too high for most projects out there for somebody lacking
 experience such as me so that I should try getting some experience on my
 own first?

 Would it be a better idea to try to hack on my own project rather than
 helping on an existing one?

 Thank you very much for your help.








Re: [Haskell-cafe] adding the elements of two lists

2012-03-26 Thread Chris Smith
Jerzy Karczmarczuk jerzy.karczmarc...@unicaen.fr wrote:
 On 26/03/2012 02:41, Chris Smith wrote:
 Of course there are rings for which it's possible to represent the
 elements as lists.  Nevertheless, there is definitely not one that
 defines (+) = zipWith (+), as did the one I was responding to.

 What?

 The additive structure does not define a ring.
 The multiplication can be a Legion, all different.

I'm not sure I understand what you're saying there.  If you were
asking about why there is no ring on [a] that defines (+) = zipWith
(+), then here's why.  By that definition, you have [1,2,3] + [4,5] =
[5,7].  But also [1,2,42] + [4,5] = [5,7].  Addition by [4,5] is not
one-to-one, so [4,5] cannot be invertible.

-- 
Chris Smith



Re: [Haskell-cafe] adding the elements of two lists

2012-03-26 Thread Chris Smith
On Mon, Mar 26, 2012 at 10:18 AM, Jerzy Karczmarczuk
jerzy.karczmarc...@unicaen.fr wrote:
 So, *the addition* is not invertible, why did you introduce rings ...

My intent was to point out that the Num instance that someone
suggested for Num a = Num [a] was a bad idea.  I talked about rings
because they are the uncontroversial part of the laws associated with
Num: I think everyone would agree that the minimum you should expect
of an instance of Num is that its elements form a ring.

In any case, the original question has been thoroughly answered... the
right answer is that zipWith is far simpler than the code in the
question, and that defining a Num instance is possible, but a bad idea
because there's not a canonical way to define a ring on lists.  The
rest of this seems to have devolved into quite a lot of bickering and
one-upmanship, so I'll back out now.

-- 
Chris Smith



Re: [Haskell-cafe] adding the elements of two lists

2012-03-25 Thread Chris Smith
Jonathan Grochowski jongrocho...@gmail.com wrote:
 As Michael suggests, using zipWith (+) is the simplest solution.

 If you really want to be able to write [1,2,3] + [4,5,6], you can define
 the instance as

 instance (Num a) => Num [a] where
   xs + ys = zipWith (+) xs ys

You can do this in the sense that it's legal Haskell... but it is a bad
idea to make lists an instance of Num, because there are situations where
the result doesn't act as you would like (if you've had abstract algebra,
the problem is that it isn't a ring).

More concretely, it's not hard to see that the additive identity is
[0,0,0...], the infinite list of zeros.  But if you have a finite list x,
then x - x is NOT equal to that additive identity!  Instead, you'd only get
a finite list of zeros, and if you try to do math with that later, you're
going to accidentally truncate some answers now and then and wonder what
went wrong.
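Concretely, with the hypothetical instance's definitions pulled out as plain functions, the truncation looks like this:

```haskell
-- The (+) and (-) the proposed instance would give to lists.
listPlus, listMinus :: Num a => [a] -> [a] -> [a]
listPlus  = zipWith (+)
listMinus = zipWith (-)

-- x `listMinus` x is [0,0,0]: a *finite* list of zeros, not the
-- infinite additive identity [0,0,0...], so later sums silently
-- truncate against it.
example :: ([Integer], [Integer])
example =
  let x = [1, 2, 3]
  in (x `listMinus` x, (x `listMinus` x) `listPlus` [7, 8, 9, 10])
  -- = ([0,0,0], [7,8,9])  -- the 10 is lost to truncation
```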

In general, most type classes in Haskell are like this... the compiler only
cares that you provide operations with certain types, but the type class
also carries around additional laws that you should obey when writing
instances.  Here there's no good way to write an instance that obeys the
laws, so it's better to write no instance at all.

-- 
Chris Smith


Re: [Haskell-cafe] adding the elements of two lists

2012-03-25 Thread Chris Smith
Jerzy Karczmarczuk jerzy.karczmarc...@unicaen.fr wrote:
 On 26/03/2012 01:51, Chris Smith wrote:

     instance (Num a) => Num [a] where
       xs + ys = zipWith (+) xs ys

 You can do this in the sense that it's legal Haskell... but it is a bad idea 
 [...]

 It MIGHT be a ring or not. The real problem is that one should not confuse
 structural and algebraic (in the classical sense) properties of your
 objects.

Of course there are rings for which it's possible to represent the
elements as lists.  Nevertheless, there is definitely not one that
defines (+) = zipWith (+), as did the one I was responding to.  By the
time you get a ring structure back by some *other* set of rules,
particularly for multiplication, the result will so clearly not be
anything like a general Num instance for lists that it's silly to even
be having this discussion.

-- 
Chris Smith



Re: [Haskell-cafe] Are there arithmetic composition of functions?

2012-03-19 Thread Chris Smith
If you are willing to depend on a recent version of base where Num is no
longer a subclass of Eq and Show, it is also fine to do this:

instance Num a => Num (r -> a) where
    (f + g) x = f x + g x
    fromInteger = const . fromInteger

and so on.
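For completeness, here is a runnable sketch of such an instance on a modern base (where Num no longer has Eq and Show superclasses); the methods beyond (+) and fromInteger are my own filling-in of the "and so on":

```haskell
-- Arithmetic on functions is defined pointwise; numeric literals
-- become constant functions via fromInteger.
instance Num a => Num (r -> a) where
  f + g         = \x -> f x + g x
  f - g         = \x -> f x - g x
  f * g         = \x -> f x * g x
  negate f      = negate . f
  abs f         = abs . f
  signum f      = signum . f
  fromInteger n = const (fromInteger n)

main :: IO ()
main = do
  print ((id + 10) (5 :: Integer))      -- 5 + 10 = 15
  print ((id * id - 1) (4 :: Integer))  -- 4*4 - 1 = 15
```

Note that the literal 10 is itself a function here, thanks to fromInteger.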


Re: [Haskell-cafe] Are there arithmetic composition of functions?

2012-03-19 Thread Chris Smith
On Mar 19, 2012 11:40 AM, Ozgur Akgun ozgurak...@gmail.com wrote:
 {-# LANGUAGE FlexibleInstances #-}

 instance Num a => Num (a -> a) where

You don't want (a -> a) there.  You want (b -> a).  There is nothing about
this that requires functions to come from a numeric type, much less the
same one.

-- 
Chris Smith


Re: [Haskell-cafe] Google Summer of Code idea of project application

2012-03-19 Thread Chris Smith
Damien Desfontaines ddfontai...@gmail.com wrote:
 Thanks for your answer. I must admit that I do not really realize how much 
 work
 such a project represents. I will probably need the help of someone who is 
 more
 experienced than me to decide my timeline, and perhaps to restrict the final
 goal of my work (perhaps to a syntactic subset of Haskell?).

I'll be a bit blunt, in the interest of encouraging you to be
realistic before going too far down a doomed path.  I can't imagine
anyone at all thinking that a translator from a toy subset of Haskell
into a different language would be useful in any way whatsoever.  The
goal of GSoC is to find a well-defined project that's reasonable for a
summer, and is USEFUL to a language community.  Restricting the
project to some syntactic subset of Haskell is what people are
*afraid* will happen, and why you've gotten some not entirely
enthusiastic answers.  It just won't do us any good, especially when
there's no visible community of people ready to pick up the slack and
finish the project later.

One possible way out of this trap would be if, perhaps, the variant of
Haskell you picked were actually GHC's core language.  That could
have a lot of advantages: avoiding parsing entirely; removing type
classes, laziness (I think... GHC did make the swap to a strict core,
didn't it?), and many other advanced type system features; and being at
least a potentially useful result that works with arbitrary code and
all commonly used Haskell language extensions on top of the entire
language.  At least then you are back into plausible territory.

It still seems far too ambitious for GSoC, though.  And I remain
unconvinced how useful it really is likely to be.  I'll grant there
are other people that care a lot more about ML than I do.

-- 
Chris Smith



Re: [Haskell-cafe] Are there arithmetic composition of functions?

2012-03-19 Thread Chris Smith
On Mon, Mar 19, 2012 at 7:16 PM, Richard O'Keefe o...@cs.otago.ac.nz wrote:
 One problem with hooking functions into the Haskell numeric
 classes is right at the beginning:

    class (Eq a, Show a) => Num a

This is true in base 4.4, but is no longer true in base 4.5.  Hence my
earlier comment about if you're willing to depend on a recent version
of base.  Effectively, this means requiring a recent GHC, since I'm
pretty sure base is not independently installable.

-- 
Chris Smith



Re: [Haskell-cafe] Google Summer of Code idea of project application

2012-03-19 Thread Chris Smith
On Mon, Mar 19, 2012 at 7:52 PM, Richard O'Keefe o...@cs.otago.ac.nz wrote:
 As just one example, a recent thread concerned implementing
 lock-free containers.  I don't expect converting one of those
 to OCaml to be easy...

If you translate to core first, then the only missing bit is the
atomic compare-and-swap primop that these structures will depend on.
Maybe that exists in OCaml, or maybe not... I wouldn't know.  If not,
it would be perfectly okay to refuse to translate the atomic
compare-and-swap primop that lockless data structures will use.  That
said, though, there are literally *hundreds* of GHC primops for tiny
little things like comparing different sized integers and so forth,
that would need to be implemented all on top of the interesting
task of doing language translation.  That should be kept in mind when
estimating the task.

 If, however, you want to make it possible for someone to
 write code in a sublanguage of Haskell that is acceptable
 to a Haskell compiler and convert just *that* to OCaml, you
 might be able to produce something useful much quicker.

I'm quite sure, actually, that implementing a usable sublanguage of
Haskell in this way would be a much larger project even than
translating core.  A usable sublanguage of Haskell would need a
parser, which could be a summer project all on its own if done well
with attention to errors and a sizeable test suite.  It would need an
implementation of lazy evaluation, which can be quite tricky to get
right in a thread-safe and efficient way.  It would need type checking
and type inference that's just different enough from OCaml that you'd
probably have to write a new HM+extensions type checker and inference
engine on your own, and *that* could again be far more than a summer
project on its own, if you plan to build something of production
quality.  It would need a whole host of little picky features that
involve various kinds of desugarings that represent man-decades worth
of work just on their own.

After a bit of thought, I'm pretty confident that the only reasonable
way to approach this project is to let an existing compiler tackle the
task of converting from Haskell proper to a smaller language that's
more reasonable to think about (despite the problems with lots of
primops... at least those are fairly mechanical).  Not because of all
the advanced language features or libraries, but just because
re-implementing the whole front end of a compiler for even a limited
but useful subset of Haskell is a ludicrously ambitious and risky
project for GSoC.

-- 
Chris Smith



Re: [Haskell-cafe] Google Summer of Code - Lock-free data structures

2012-03-18 Thread Chris Smith
On Mar 18, 2012 6:39 PM, Florian Hartwig florian.j.hart...@gmail.com
wrote:
 GSoC stretches over 13 weeks. I would estimate that implementing a data
 structure, writing tests, benchmarks, documentation etc. should not take
more
 than 3 weeks (it is supposed to be full-time work, after all), which means
 that I could implement 4 of them in the time available and still have some
 slack.

Don't underestimate the time required for performance tuning, and be
careful to leave yourself learning time, unless you have already
extensively used ThreadScope, read GHC Core, and worked with low-level
strictness, unpacking, possibly even rewrite rules.  I suspect that the
measurable performance benefit from lockless data structures might be
tricky to tease out of the noise created by unintentional strictness or
unboxing issues.  And we'd be much happier with one or two really
production quality implementations than even six or seven at a student
project level.

-- 
Chris Smith


Re: [Haskell-cafe] Theoretical question: are side effects necessary?

2012-03-17 Thread Chris Smith
On Sat, Mar 17, 2012 at 5:41 AM, Donn Cave d...@avvanta.com wrote:
 I hope the answer is not that in computer science we regard all
 effects as side effects because the ideal computer program simply
 exists without consequence.

The answer is that side effects has become something of a figure of
speech, and now has a specialized meaning in programming languages.

When we're talking about different uses of the word function in
programming languages, side effects refer to any effect other than
evaluating to some result when applied to some argument.  For example,
in languages like C, printf takes some arguments, and returns an int.
When viewed as just a function, that's all there is to it; functions
exist to take arguments and produce return values.  But C extends the
definition of a function to include additional effects, like making
"Hello world" appear on a nearby computer screen.  Because those
effects are aside from the taking of arguments and returning of
values that functions exist to do, they are side effects... even
though in the specific case of printf, the effect is the main goal and
everyone ignores the return value, still for functions in general, any
effects outside of producing a resulting value from its arguments are
side effects.

I suppose Haskell doesn't have side effects in that sense, since its
effectful actions aren't confused with functions.  (Well, except for
silly examples like "the CPU gives off heat" or FFI/unsafe stuff like
unsafePerformIO.)  So maybe we should ideally call them just
"effects".  But since so many other languages use functions to
describe effectful actions, the term has stuck.  So pretty much when
someone talks about side effects, even in Haskell, they mean stateful
interaction with the world.

-- 
Chris Smith



Re: [Haskell-cafe] Theoretical question: are side effects necessary?

2012-03-16 Thread Chris Smith
On Fri, Mar 16, 2012 at 3:43 PM, serialhex serial...@gmail.com wrote:
 an interesting question emerges:  even though i may be able to implement an
 algorithm with O(f(n)) in Haskell, and write a program that is O(g(n)) <
 O(f(n)) in C++ or Java...  could Haskell be said to be more efficient if
 time spent programming / maintaining Haskell is < C++ or Java??

There are two unrelated issues: (a) the efficiency of algorithms
implementable in Haskell, and (b) the efficiency of programmers
working in Haskell.  It makes no sense to ask a question that
conflates the two.  If you're unsure which definition of efficient
you meant to ask about, then first you should stop to define the words
you're using, and then ask a well-defined question.

That being said, this question is even more moot given that real
Haskell, which involves the IO and ST monads, is certainly no
different from any other language in its optimal asymptotics.  Even if
you discount IO and ST, lazy evaluation alone *may* recover optimal
asymptotics in all cases... it's known that a pure *eager* language
can add a log factor to the best case sometimes, but my understanding
is that for all known examples where that happens, lazy evaluation
(which can be seen as a controlled benign mutation) is enough to
recover the optimal asymptotics.

-- 
Chris Smith



Re: [Haskell-cafe] ANNOUNCE: pipes-core 0.0.1

2012-03-12 Thread Chris Smith
On Mon, Mar 12, 2012 at 3:26 AM, Paolo Capriotti p.caprio...@gmail.com wrote:
 I wouldn't say it's unsound, more like not yet proved to be bug-free :)

 Note that the latest master fixes all the issues found so far.

I was referring to the released version of pipes-core, for which
"known to be unsound" is an accurate description.  Good to hear that
you've got a fix coming, though.  Given the history here, maybe
working out the proofs of the category laws sooner rather than later
would be a good thing.  I'll have a look today and see if I can bang
out a proof of the category laws for your new code without ensure.

It will then be interesting to see how that compares to Gabriel's
approach, which at this point we've heard a bit about but I haven't
seen.

-- 
Chris Smith



Re: [Haskell-cafe] Empty Input list

2012-03-12 Thread Chris Smith
On Mon, Mar 12, 2012 at 2:41 PM, Kevin Clees k.cl...@web.de wrote:
 what can I do, if a function gets an empty input list? I want, that it only 
 returns nothing.
 This is my source code:

 tmp :: [(Int, Int)] -> Int -> (Int, Int)
 tmp (x:xs) y
        | y == 1 = x
        | y > 1 = tmp xs (y-1)

It's not clear what you mean by "returns nothing" when the result is
(Int, Int)... there is no "nothing" value of that type.  But you can
add another equation to handle empty lists once you decide what to
return in that case.  For example, after (or before) the existing
equation, add:

tmp [] y = (-1, -1)

Or, you may want to use a Maybe type for the return... which would
mean there *is* a Nothing value you can return:

tmp :: [(Int, Int)] -> Int -> Maybe (Int, Int)
tmp (x:xs) y
       | y == 1 = Just x
       | y > 1  = tmp xs (y-1)
tmp [] y = Nothing

Does that help?
-- 
Chris Smith



Re: [Haskell-cafe] Empty Input list

2012-03-12 Thread Chris Smith
Oh, and just to point this out, the function you're writing already
exists in Data.List.  It's called (!!).  Well, except that it's zero
indexed, so your function is more like:

tmp xs y = xs !! (y-1)



Re: [Haskell-cafe] Empty Input list

2012-03-12 Thread Chris Smith
On Mon, Mar 12, 2012 at 3:14 PM, Kevin Clees k.cl...@web.de wrote:
 Now my function looks like this:

 tmp :: [(Int, Int)] -> Int -> (Int, Int)
 tmp [] y = (0,0)
 tmp xs y = xs !! (y-1)

Just a warning that this will still crash if the list is non-empty but
the index exceeds the length.  That's because your function is no
longer recursive, so you only catch the case where the top-level list
is empty.  The drop function doesn't crash when dropping too many
elements though, so you can do this and get a non-recursive function
that's still total:

tmp :: [(Int,Int)] -> Int -> (Int, Int)
tmp xs y = case drop (y-1) xs of
  []    -> (0,0)
  (x:_) -> x
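As a sanity check, the drop-based total version can be exercised on a few made-up inputs (the tuples here are arbitrary):

```haskell
-- Total, non-recursive 1-based lookup with a (0,0) default,
-- as described above.
tmp :: [(Int, Int)] -> Int -> (Int, Int)
tmp xs y = case drop (y - 1) xs of
  []    -> (0, 0)
  (x:_) -> x

main :: IO ()
main = do
  print (tmp [(1,2),(3,4)] 2)  -- normal 1-based lookup
  print (tmp [(1,2)] 5)        -- index past the end: no crash
  print (tmp [] 1)             -- empty list: no crash
```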

-- 
Chris Smith



Re: [Haskell-cafe] ANNOUNCE: pipes-core 0.0.1

2012-03-11 Thread Chris Smith
On Sun, Mar 11, 2012 at 7:09 AM, Paolo Capriotti p.caprio...@gmail.com wrote:
 Someone actually implemented a variation of Pipes with unawait:
 https://github.com/duairc/pipes/blob/master/src/Control/Pipe/Common.hs
 (it's called 'unuse' there).

 I actually agree that it might break associativity or identity, but I
 don't have a counterexample in mind yet.

Indeed, on further thought, it looks like you'd run into problems here:

unawait x >> await == return x
(idP >+> unawait x) >> await == ???

The monadic operation is crucial there: without it, there's no way to
observe which side of idP knows about the unawait, so you can keep it
local and everything is fine... but throw in the Monad instance, and
those pipes are no longer equivalent because they act differently in
vertical composition.  There is no easy way to fix this with (idP ==
pipe id).  You could kludge the identity pipes and make that law hold,
and I *think* you'd even keep associativity in the process so you
would technically have a category again.  But this hints to me that
there is some *other* law you should expect to hold with regard to the
interaction of Category and Monad, and now that is being broken.

-- 
Chris Smith



Re: [Haskell-cafe] ANNOUNCE: pipes-core 0.0.1

2012-03-11 Thread Chris Smith
On Sun, Mar 11, 2012 at 10:30 AM, Mario Blažević blama...@acanac.net wrote:
    It's difficult to say without having the implementation of both unawait
 and all the combinators in one package. I'll assume the following equations
 hold:

   (p1 >> unawait x) >+> p2 = (p1 >+> p2) <* unawait x       -- this one
 tripped me up

I don't think this could reasonably hold.  For example, you'd expect
that for any p, idP >> p == idP, since idP never terminates at all.
But then let p1 == idP, and you get something silly.  The issue is
with early termination: if p2 terminates first in the left hand side,
you don't want the unawait to occur.

-- 
Chris Smith



Re: [Haskell-cafe] ANNOUNCE: pipes-core 0.0.1

2012-03-11 Thread Chris Smith
On Sun, Mar 11, 2012 at 11:22 AM, Mario Blažević blama...@acanac.net wrote:
    No, idP does terminate once it consumes its input. Your idP  p first
 reproduces the complete input, and then runs p with empty input.

This is just not true.  idP consumes input forever, and (idP >> p) =
idP, for all pipes p.

If it is composed with another pipe that terminates, then yes, the
*composite* pipe can terminate, so for example ((q >+> idP) >> p) may
actually do something with p.  But to get that effect, you need to
compose before the monadic bind... so for example (q >+> (idP >> p)) =
(q >+> idP) = q.  Yes, q can be exhausted, but when it is, idP will
await input, which will immediately terminate the (idP >> p) pipe,
producing the result from q, and ignoring p entirely.

-- 
Chris Smith



Re: [Haskell-cafe] ANNOUNCE: pipes-core 0.0.1

2012-03-11 Thread Chris Smith
On Sun, Mar 11, 2012 at 2:33 PM, Twan van Laarhoven twa...@gmail.com wrote:
 I think you should instead move unwaits in and out of the composition on the
 left side:

    unawait x >> (p1 >+> p2) === (unawait x >> p1) >+> p2

 This makes idP a left-identity for (>+>), but not a right-identity, since
 you can't move unawaits in and out of p2.

Not sure how we got to the point of debating which of the category
laws pipes should break... messy business there.  I'm going to be in
favor of not breaking the laws at all.  The problem here is that
composition of chunked pipes requires agreement on the chunk type,
which gives the type-level guarantees you need that all chunked pipes
in a horizontal composition (by which I mean composition in the
category... I think you were calling that vertical?  no matter...)
share the same chunk type.  Paolo's pipes-extra does this by inventing
a newtype for chunked pipes, in which the input type appears in the
result as well.  There are probably some details to quibble with, but
I think the idea there is correct.  I don't like this idea of
implicitly just throwing away perfectly good data because the types
are wrong.  It shows up in the category-theoretic properties of the
package as a result, but it also shows up in the fact that you're
*throwing* *away* perfectly good data just because the type system
doesn't give you a place to put it!  What's become obvious from this
is that a (ChunkedPipe a b m r) can NOT be modelled correctly as a
(Pipe a b m r).

-- 
Chris Smith



Re: [Haskell-cafe] ANNOUNCE: pipes-core 0.0.1

2012-03-11 Thread Chris Smith
On Sun, Mar 11, 2012 at 8:53 PM, Mario Blažević blama...@acanac.net wrote:
    May I enquire what was the reason for the non-termination of idP? Why was
 it not defined as 'forP yield' instead? The following command runs the way I
 expected.

With pipes-core (which, recall, is known to be unsound... just felt
this is a good time for a reminder of that, even though I believe the
subset that adds tryAwait and forP to be sound), you do get both (pipe
id) and (forP yield).  So to discover which is the true identity, we can
try:

idP >+> forP yield == forP yield
forP yield >+> idP == forP yield

Yep, looks like idP is still the identity.

Of course, the real reason (aside from the fact that you can check and
see) is that forP isn't definable at all in Gabriel's pipes package.

-- 
Chris Smith



Re: [Haskell-cafe] Summer of Code idea: Haskell Web Toolkit

2012-03-06 Thread Chris Smith
My first impression on this is that it seems a little vague, but
possibly promising.

I'd make it clearer that you plan to contribute to the existing UHC
stuff.  A first glance left me with the impression that you wanted to
re-implement a JavaScript back end, which would of course be a
non-starter as a GSoC project.  Since the actual proposal is to work
on the build system and libraries surrounding the existing UHC back
end, I'd maybe suggest revising the proposal to be clearer about that,
and more specific about what parts of the current UHC compiler, build
system, and libraries you propose working on.



Re: [Haskell-cafe] Are all monads functions?

2011-12-31 Thread Chris Smith
On Dec 31, 2011 8:19 AM, Yves Parès limestrael+hask...@gmail.com wrote:
 -- The plain Maybe type
 data Maybe a = Just a | Nothing

 -- The MaybeMonad
 newtype MaybeMonad a = MM ( () -> Maybe a )

 That's what using Maybe as a monad semantically means, doesn't it?

I'd have to say no.  That Maybe types are isomorphic to functions from ()
is not related to their being monads... indeed it's true of all types.  I'm
not sure what meaning you see in the function, but I don't see anything of
monads in it.


Re: [Haskell-cafe] On the purity of Haskell

2011-12-30 Thread Chris Smith
On Fri, 2011-12-30 at 18:34 +0200, Artyom Kazak wrote:
 I wonder: can writing to memory be called a “computational effect”? If  
 yes, then every computation is impure. If no, then what’s the difference  
 between memory and hard drive?

The difference is that our operating systems draw an abstraction
boundary such that memory is private to a single program, while the hard
drive is shared between independent entities.  It's not the physical
distinction (which has long been blurred by virtual memory and caches
anyway), but the fact that they are on different sides of that
abstraction boundary.

-- 
Chris Smith





Re: [Haskell-cafe] On the purity of Haskell

2011-12-30 Thread Chris Smith

 time t:  f 42   (computational process implementing func application
 begins…)
 t+1:   keystroke = 1
 t+2:  43   (… and ends)
 
 
 time t+3:  f 42
 t+4:  keystroke = 2
 t+5:  44
 
 
 Conclusion:  f 42 != f 42

That conclusion would only follow if the same IO action always produced
the same result when performed twice in a row.  That's obviously untrue,
so the conclusion doesn't follow.  What you've done is entirely
consistent with the fact that f 42 = f 42... it just demonstrates that
whatever f 42 is, it doesn't always produce the same result when you do
it twice.

What Conal is getting at is that we don't have a formal model of what an
IO action means.  Nevertheless, we know because f is a function, that
when it is applied twice to the same argument, the values we get back
(which are IO actions, NOT integers) are the same.

-- 
Chris Smith






Re: [Haskell-cafe] On the purity of Haskell

2011-12-30 Thread Chris Smith
On Fri, 2011-12-30 at 12:45 -0600, Gregg Reynolds wrote:
 I spent some time sketching out ideas for using random variables to provide
 definitions (or at least notation) for stuff like IO.  I'm not sure I could
 even find the notes now, but my recollection is that it seemed like a
 promising approach.  One advantage is that this eliminates the kind of 
 informal
 language (like user input) that seems unavoidable in talking about IO.
 Instead of defining e.g. readChar or the like as an action that does
 something and returns an char (or however standard Haskell idiom puts it),
 you can just say that readChar is a random char variable and be done with
 it.  The notion of doing an action goes away.  The side-effect of actually
 reading the input or the like can be defined generically by saying that
 evaluating a random variable always has some side-effect; what specifically
 the side effect is does not matter.

Isn't this just another way of saying the same thing that's been said
already?  It's just that you're saying random variable instead of I/O
action.  But you don't really mean random variable, because there's all
this stuff about side effects thrown in which certainly isn't part of
any idea of random variables that anyone else uses.  What you really
mean is, apparently, I/O action, and you're still left with all the
actual issues that have been discussed here, such as when two I/O
actions (aka random variables) are the same.

There is one difference, and it's that you're still using the term
evaluation to mean performing an action.  That's still a mistake.
Evaluation is an idea from operational semantics, and it has nothing to
do with performing effects.  The tying of effects to evaluation is
precisely why it's so hard to reason about programs in, say, C
denotationally, because once there is no such thing as an evaluation
process, modeling the meaning of terms becomes much more complex and
amounts to reinventing operational semantics in denotational clothing.

I'd submit that it is NOT an advantage to any approach that the notion
of doing an action goes away.  That notion is *precisely* what programs
are trying to accomplish, and obscuring it inside functions and
evaluation rather than having a way to talk about it is handicapping
yourself from a denotational perspective.  Rather, what would be an
advantage (but also rather hopeless) would be to define the notion of
doing an action more precisely.

-- 
Chris Smith




Re: [Haskell-cafe] On the purity of Haskell

2011-12-30 Thread Chris Smith
On Fri, 2011-12-30 at 12:24 -0600, Gregg Reynolds wrote:
 No redefinition involved, just a narrowing of scope.  I assume that,
 since we are talking about computation, it is reasonable to limit  the
 discussion to the class of computable functions - which, by the way,
 are about as deeply embedded in orthodox mathematics as you can get,
 by way of recursion theory.  What would be the point of talking about
 non-computable functions for the semantics of a programming language?

Computability is just a distraction here.  The problem isn't whether
getAnIntFromUser is computable... it is whether it's a function at
all!  Even uncomputable functions are first and foremost functions, and
not being computable is just a property that they have.  Clearly this is
not a function at all.  It doesn't even have the general form of a
function: it has no input, so clearly it can't map each input value to a
specific output value.  Now, since it's not a function, it makes little
sense to even try to talk about whether it is computable or not (unless
you first define a notion of computability for something other than
functions).

If you want to talk about things that read values from the keyboard or
such, calling them uncomputable is confusing, since the issue isn't
really computability at all, but rather needing information from a
constantly changing external environment.  I suspect that at least some
people talking about functions are using the word to mean a
computational procedure, the sort of thing meant by the C programming
language by that word.  Uncomputable is a very poor word for that idea.


-- 
Chris Smith




Re: [Haskell-cafe] On the purity of Haskell

2011-12-30 Thread Chris Smith
On Fri, 2011-12-30 at 23:16 +0200, Artyom Kazak wrote:
 Thus, your function “f” is a function indeed, which generates a list of  
 instructions to kernel, according to given number.

Not my function, but yes, f certainly appears to be a function.

Conal's concern is that if there is no possible denotational meaning for
values of IO types, then f can't be said to be a function, since its
results are not well-defined, as values.

This is a valid concern... assigning a meaning to values of IO types
necessarily involves some very unsatisfying hand-waving about
indeterminacy, since for example IO actions can distinguish between
bottoms that are considered equivalent in the denotational semantics of
pure values (you can catch a use of 'error', but you can't catch
non-termination).  Nevertheless, I'm satisfied that to the extent that
any such meaning can be assigned, f will be a valid function on
non-bottom values.  Not perfect, but close.

-- 
Chris Smith




Re: [Haskell-cafe] On the purity of Haskell

2011-12-29 Thread Chris Smith
Entering tutorial mode here...

On Thu, 2011-12-29 at 10:04 -0800, Donn Cave wrote:
 We can talk endlessly about what your external/execution results 
 might be for some IO action, but at the formulaic level of a Haskell
 program it's a simple function value, e.g., IO Int.

Not to nitpick, but I'm unsure what you might mean by "function value"
there.  An (IO Int) is not a function value: there is no function
involved at all.  I think the word "function" is causing some confusion,
so I'll avoid calling things functions when they aren't.

In answer to the original question, the mental shift that several people
are getting at here is this: a value of the type (IO Int) is itself a
meaningful thing to get your hands on and manipulate.  IO isn't just
some annotation you have to throw in to delineate where your non-pure
stuff is or something like that; it's a type constructor, and IO types
have values, which are just as real and meaningful as any other value in
the system.  For example,

Type: Int
Typical Values: 5, or 6, or -11

Type: IO Int
Typical Values: (choosing a random number from 1 to 10 with the default
random number generator), or (doing nothing and always returning 5), or
(writing "hello" to temp.txt in the current working directory and
returning the number of bytes written)

These are PURE values... they do NOT have side effects.  Perhaps they
describe side effects in a sense, but that's a matter of how you
interpret them; it doesn't change the fact that they play the role of
ordinary values in Haskell.  There are no special evaluation rules for
them.

Just like with any other type, you might then consider what operations
you might want on values of IO types.  For example, the operations you
might want on Int are addition, multiplication, etc.  It turns out that
there is one major operation you tend to want on IO types: combine two
of them by doing them in turn, where what you do second might depend on
the result of what you do first.  So we provide that operation on values
of IO types... it's just an ordinary function, which happens to go by
the name (=).  That's completely analogous to, say, (+) for Int...
it's just a pure function that takes two parameters, and produces a
result.  Just like (+), if you apply (=) to the same two parameters,
you'll always get the same value (of an IO type) as a result.
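A small sketch of this view: IO values can be stored, passed around, and combined with (>>=) like any other values, with nothing executed until the combined action is finally run (the names and numbers here are illustrative):

```haskell
-- Values of type IO Int are ordinary first-class values.  We can put
-- them in a list and fold them together with (>>=) -- all pure
-- manipulation -- before anything is ever performed.
actions :: [IO Int]
actions = [pure 1, pure 2, pure 3]

-- Combine the list into one IO Int that performs each action in turn
-- and sums the results.  Building this value has no effects.
total :: IO Int
total = foldr combine (pure 0) actions
  where
    combine a acc = a >>= \x -> acc >>= \y -> pure (x + y)

main :: IO ()
main = total >>= print  -- only here is anything actually performed
```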

Now, of course, behind the scenes we're using these things to describe
effectful actions... which is fine!  In fact, our entire goal in writing
any computer program in any language is *precisely* to describe an
effectful action, namely what we'd like to see happen when our program
is run.  There's nothing wrong with that... when Haskell is described as
pure, what is meant by that is that it lets us get our hands on these
things directly, manipulate them by using functions to construct more
such things, in exactly the same way we'd do with numbers and
arithmetic.  This is a manifestly different choice from other languages
where those basic manipulations even on the simple types are pushed into
the more nebulous realm of effectful actions instead.

If you wanted to make a more compelling argument that Haskell is not
pure, you should look at termination and exceptions from pure code.
This is a far more difficult kind of impurity to explain away: we do it,
by introducing a special family of values (one per type) called
bottom or _|_, but then we also have to introduce some special-purpose
rules about functions that operate on that value... an arguably clearer
way to understand non-termination is as a side-effect that Haskell does
NOT isolate in the type system.  But that's for another time.

-- 
Chris Smith


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] On the purity of Haskell

2011-12-29 Thread Chris Smith
On Thu, 2011-12-29 at 18:07 +, Steve Horne wrote:
 By definition, an intentional effect is a side-effect. To me, it's by 
 deceptive redefinition - and a lot of arguments rely on mixing 
 definitions - but nonetheless the jargon meaning is correct within 
 programming and has been for decades. It's not going to go away.
 
 Basically, the jargon definition was coined by one of the pioneers of 
 function programming - he recognised a problem and needed a simple way 
 to describe it, but in some ways the choice of word is unfortunate.

I don't believe this is true.  "Side effect" refers to having a FUNCTION
-- that is, a map from input values to output values -- such that when
it is evaluated there is some effect in addition to computing the
resulting value from that map.  The phrase "side effect" refers to a
very specific confusion: namely, conflating the performing of effects
with computing the values of functions.

Haskell has no such things.  Its values of IO types are not functions
at all, and their effects do not occur as a side effect of evaluating a
function.  Kleisli arrows in the IO monad -- that is, functions whose
result type is an IO type, for example String -> IO () -- are common,
yes, but note that even then, the effect still doesn't occur as a side
effect of evaluating the function.  Evaluating the function just gives
you a specific value of the IO type, and performing the effect is still
a distinct step that is not the same thing as function evaluation.
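A minimal sketch of that distinction (the `greet` function is hypothetical): evaluating `greet "world"` merely yields a value of type IO (); performing that value is a separate step, done here by main, and the same value can even be performed twice.

```haskell
-- A Kleisli arrow: a pure function whose result type is an IO type.
greet :: String -> IO ()
greet name = putStrLn ("Hello, " ++ name)

main :: IO ()
main = do
  let action = greet "world"  -- evaluation: builds an IO value, prints nothing
  putStrLn "evaluated, nothing performed yet"
  action                      -- performing the value: now the line is printed
  action                      -- performing the very same value a second time
```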

 You can argue pedantry, but the pedantry must have a point - a 
 convenient word redefinition will not make your bugs go away. People 
 tried that with it's not a bug it's a feature and no-one was impressed.

This most certainly has a point.  The point is that Haskell being a pure
language allows you to reason more fully about Haskell programs using
basic language features like functions and variables.  Yes, since
Haskell is sufficiently powerful, it's possible to build more and more
complicated constructs that are again harder to reason about... but even
when you do so, you end up using the core Haskell language to talk
*about* such constructs... you retain the ability to get your hands on
them and discuss them directly and give them names, not merely as side
aspects of syntactic forms as they manifest themselves in impure
languages.

That is the point of what people are saying here (pedantry or not is a
matter of your taste); it's directly relevant to day to day programming
in Haskell.

-- 
Chris Smith




Re: [Haskell-cafe] On the purity of Haskell /Random generators

2011-12-29 Thread Chris Smith
On Thu, 2011-12-29 at 21:04 +, Steve Horne wrote:
 AFAIK there's no hidden unsafePerformIO sneaking any entropy in behind
 the scenes. Even if there was, it might be a legitimate reason for
 unsafePerformIO - random numbers are in principle non-deterministic,
 not determined by the current state of the outside world and
 which-you-evaluate-first should be irrelevant.

This is certainly not legitimate.  Anything that can't be memoized has
no business advertising itself as a function in Haskell.  This matters
quite a lot... programs might change from working to broken due to
something as trivial as inlining by the compiler (see the ugly NOINLINE
annotations often used with unsafePerformIO tricks for initialization
code for an example).

-- 
Chris Smith




Re: [Haskell-cafe] On the purity of Haskell

2011-12-29 Thread Chris Smith
Sorry to cut most of this out, but I'm trying to focus on the central
point here.

On Thu, 2011-12-29 at 22:01 +, Steve Horne wrote:
 In pure functional terms, the result should be equivalent to a fully
 evaluated value - but putStrLn isn't pure. It cannot be fully
 evaluated until run-time.

And here it is, I think.  You're insisting on viewing the performing of
some effect as part of the evaluation of an expression, even though the
language is explicitly and intentionally designed not to conflate those
two ideas.  Effects do not happen as a side-effect of evaluating
expressions.  Instead they happen because you define the symbol 'main'
to be the effect that you want to perform, and then set the runtime
system to work on performing it by running your program.

Evaluation and effects are just not the same thing, and it makes no
sense to say something isn't evaluated just because the effects it
describes haven't been performed.  It's exactly that distinction -- the
refusal to conflate evaluation with performing effects -- that is
referred to when Haskell is called a pure language.

-- 
Chris Smith





Re: [Haskell-cafe] On the purity of Haskell

2011-12-29 Thread Chris Smith
On Fri, 2011-12-30 at 00:44 +, Steve Horne wrote:
 So, to resurrect an example from earlier...
 
 f :: Int -> IO Int
 f = getAnIntFromTheUser >>= \i -> return (i+1)

Did you mean  f :: IO Int ?  If not, then I perhaps don't understand
your example, and your monad is not IO.  I'll continue assuming the
former.

 Are you claiming that the expression (i+1) is evaluated without knowing 
 the value of i?

I'm not sure what you mean by evaluated here.  I'd say it's in normal
form, but it has free variables so it's not even meaningful by itself;
it doesn't have a value in the first place.  On the other hand, the
larger expression, \i -> return (i+1), is closed *and* effectively in
normal form, so yes, I'd definitely say it is evaluated so far as that
word has any meaning at all.

 If not, at run-time your Haskell evaluates those expressions that 
 couldn't be fully evaluated at compile-time.

I certainly agree that the GHC runtime system, and any other Haskell
implementation's runtime system as well, evaluates expressions (some
representation of them anyway), and does lots of destructive updates to
boot.  This isn't at issue.  What is at issue is whether to shoehorn
those effects into the language semantics as a side-effect of evaluation
(or equivalently, force evaluation of expressions to be seen as an
effect -- when you only allow for one of these concepts, it's a silly
semantic game as to which name you call it by), or to treat effects as
semantically first-class concepts in their own right, different from the
simplification of expressions into values.

 If you do, we're back to my original model. The value returned by main 
 at compile-time is an AST-like structure wrapped in an IO monad 
 instance.

Here you're introducing implementation detail that's rather
irrelevant to the semantics of the language.  Who knows whether the compiler
and the runtime implementation build data structures corresponding to an
AST and run a reduction system on them, or use some other mechanism.
One could build implementations that do it many different ways.  In
fact, what most will do is generate machine code that directly performs
the desired effects and use closures with pointers to the generated
machine code.  But that's all beside the point.  If you need to know how
your compiler is implemented to answer questions about language
semantics, you've failed already.

Purity isn't about the RTS implementation, which is of course plenty
effectful and involves lots of destructive updates.  It's about the
language semantics.

-- 
Chris Smith




Re: [Haskell-cafe] On the purity of Haskell

2011-12-29 Thread Chris Smith
On Fri, 2011-12-30 at 02:40 +, Steve Horne wrote:
 Well, we're playing a semantic game anyway. Treating effects as
 first-class concepts in themselves is fine, but IMO doesn't make
 Haskell pure.

Okay, so if you agree that:

(a) IO actions are perfectly good values in the Haskell sense.
(b) They are first class (can be passed to/returned from functions, etc.).
(c) When used as plain values, they have no special semantics.
(d) An IO action is no more a Haskell function than an Int is.
(e) All effects of a Haskell program are produced by the runtime system
performing the IO action called main (evaluating expressions lazily as
needed to do so), and NOT as a side-effect of the evaluation of
expressions.

Then we are completely in agreement on everything except whether the
word pure should apply to a programming language with those semantics.
Certainly the rest of the Haskell community describes this arrangement,
with effects being first-class values and performing those effects being
something done by the runtime system completely separate from evaluating
expressions, as Haskell being pure.  You can choose your terminology.

 I don't know the first thing about denotational semantics, but I do
 know this - if you place run-time behaviour outside the scope of your
 model of program semantics, that's just a limitation of your model. It
 doesn't change anything WRT the program itself - it only limits the
 understanding you can derive using that particular model.

The important bit about purity is that programs with I/O fit in to the
pure model just fine!  The pure model doesn't fully explain what the I/O
actions do, of course, but crucially, they also do not BREAK the pure
model.  It's a separation of concerns: I can figure out the higher-level
stuff, and when I need to know about the meaning of the values of
specific hairy and opaque data types like IO actions, or some complex
data structure, or whatever... well, then I can focus in and work out
the meaning of that bit when the time comes up.  The meanings of values
in those specific complex types don't affect anything except those
expressions that deal explicitly with that type.  THAT is why it's so
crucial that values of IO types are just ordinary values, not some kind
of magic thing with special evaluation rules tailored to them.

-- 
Chris Smith




Re: [Haskell-cafe] Level of Win32 GUI support in the Haskell platform

2011-12-29 Thread Chris Smith
On Fri, 2011-12-30 at 01:53 +, Steve Horne wrote:
 I've been looking for functions like GetMessage, TranslateMessage and 
 DispatchMessage in the Haskell Platform Win32 library - the usual 
 message loop stuff - and not finding them. Hoogle says no results found.

I see them in the Win32 package.

http://hackage.haskell.org/packages/archive/Win32/2.2.1.0/doc/html/Graphics-Win32-Window.html#v:getMessage

 Alternatively, should I be doing dialog-based coding and leaving Haskell 
 to worry about message loops behind the scenes?

Various people recommend the gtk (aka Gtk2Hs) and wx packages for that.
I've never been able to get wx to build, but gtk works fine.  Others
(mostly those using macs) report the opposite.

-- 
Chris Smith




Re: [Haskell-cafe] Time zones and IO

2011-11-06 Thread Chris Smith
On Sun, 2011-11-06 at 17:25 -0500, Heller Time wrote:
  unless the machine running the program using time-recurrence was traveling 
 across timezones (and the system was updating that fact)

Note that this is *not* an unusual situation at all these days.  DST was
already mentioned, but also note that more and more software is running
on mobile devices that do frequently update their time zone information.
Unpredictably breaking code when this occurs is going to get a lot worse
when people starting building Haskell for their Android and iOS phones,
as we're very very close to seeing happen.

-- 
Chris




[Haskell-cafe] Haskell Cloud and Closures

2011-10-01 Thread Fred Smith
I've built a little program to compute the plus function remotely by
using Cloud Haskell:
http://pastebin.com/RK4AcWFM

but i faced an unfortunate complication, what i would like to do is to
send a function to another host, no matter if the function is locally
declared or at the top level.

It seems to me that in the Cloud Haskell library the function's closures
can be computed only with top-level ones, is it possible to compute
the closure at runtime of any function and to send it to another host?

here's the code that i would like to compile:
http://pastebin.com/7F4hs1Kk


thank you all in advance!
Fred



Re: [Haskell-cafe] Haskell Cloud and Closures

2011-10-01 Thread Fred Smith


On 1 Ott, 12:03, Erik de Castro Lopo mle...@mega-nerd.com wrote:
 I was at the Haskell Symposium where this paper was presented. This
 limitation is a known limitation and cannot currently be worked around
 other than by moving whatever is required to the top level.

 Erik
 --

do you know if there is another way either to compute the closure of a
function or to serialize it in order to send the computation to
another host?



Re: [Haskell-cafe] Haskell Cloud and Closures

2011-10-01 Thread Chris Smith
On Sat, 2011-10-01 at 02:16 -0700, Fred Smith wrote:
 It seems to me that in the Cloud Haskell library the function's closures
 can be computed only with top-level ones, is it possible to compute
 the closure at runtime of any function and to send it to another host?

The current rule is a bit overly restrictive, true.  But I just wanted
to point out that there is a good reason for having *some* restriction
in place.  There are certain types that should *not* be sent to other
processes or nodes.  Take MVar, for example.  It's not clear what it
would mean to send an MVar over a channel to a different node.

By extension, allowing you to send arbitrary functions not defined at
the top level is also problematic, because such functions might close
over references to MVars, making them essentially a vehicle for
smuggling MVars to new nodes.  And since the types of free variables
don't occur in the types of terms, there is no straight-forward Haskell
type signature that can express this limitation.  So the compiler is
obliged to specify some kind of sufficient restrictions to prevent you
from sending functions that close over MVar or other node-specific
types.

For now, you'll have to move all of your functions to the top level.
Hopefully, in the future, some relaxation of those rules can occur.

-- 
Chris




Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-27 Thread Chris Smith
On Tue, 2011-09-27 at 00:29 -0700, Donn Cave wrote:
 It doesn't appear to me to be a technicality about the representation -
 the value we're talking about excluding is not just represented as
 greater than 0.3, it is greater than 0.3 when applied in computations.

Sure, the exact value is greater than 0.3.  But to *predict* that, you
have to know quite a bit about the technicalities of how floating point
values are represented.  For example, you need to know that 0.1 has no
exact representation as a floating point number, and that the closest
approximation is greater than the exact real number 0.1, and that the
difference is great enough that adding it twice adds up to a full ulp of
error.
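A sketch one can run to see exactly this (assuming GHC's Double is IEEE binary64; the names `exactTenth` and `sumOfThree` are made up here):

```haskell
-- 1/10 has no finite binary representation, so the Double literal 0.1
-- denotes the nearest representable number, which is slightly above the
-- exact real 0.1.  Adding it repeatedly accumulates a full ulp of error.
exactTenth :: Rational
exactTenth = toRational (0.1 :: Double)

sumOfThree :: Double
sumOfThree = 0.1 + 0.1 + 0.1

main :: IO ()
main = do
  print exactTenth            -- 3602879701896397 % 36028797018963968
  print (exactTenth > 0.1)    -- True: the literal overshoots 1/10
  print (sumOfThree == 0.3)   -- False
```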

 For example you can subtract 0.3 and get a nonzero value (5.55e-17.)

Again, if you're working with floating point numbers and your program
behaves in a significantly different way depending on whether you get 0
or 5.55e-17 as a result, then you're doing something wrong.

 The disappointment with iterative addition is not that
 its fifth value [should be] omitted because it's technically greater,
 it's that range generation via iterative addition does not yield the
 values I specified.

I certainly don't agree that wanting the exact value from a floating
point type is a reasonable expectation.  The *only* way to recover those
results is to do the math with the decimal or rational values instead of
floating point numbers.  You'll get the rounding error from floating
point regardless of how you do the computation, because the interval
just isn't really 0.1.  The difference between those numbers is larger
than 0.1, and when you step by that interval, you won't hit 0.5.

You could calculate the entire range using Rational and then convert
each individual value after the fact.  That doesn't seem like a
reasonable default, since it has a runtime performance cost.  Of course
you're welcome to do it when that's what you need.

Prelude> last ([0.1, 0.2 .. 0.5]) == 0.5
False

Prelude> last (map fromRational [0.1, 0.2 .. 0.5]) == 0.5
True

-- 
Chris





Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-27 Thread Chris Smith
On Tue, 2011-09-27 at 09:23 -0700, Donn Cave wrote:
 I think it's more than reasonable to expect
 
   [0.1,0.2..0.5] == [0.1,0.2,0.3,0.4,0.5]
 
 and that would make everyone happy, wouldn't it?

But what's the justification for that?  It *only* makes sense because
you used short decimal literals.  If the example were:

let a = someComplicatedCalculation
b = otherComplicatedCalculation
c = thirdComplicatedCalculation
in  [a, b .. c]

then it would be far less reasonable to expect the notation to fudge the
numbers in favor of obtaining short decimal representations, which is
essentially what you're asking for.

-- 
Chris




Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-27 Thread Chris Smith
On Tue, 2011-09-27 at 09:47 -0700, Iavor Diatchki wrote:
 As Ross pointed out in a previous e-mail the instance for Rationals is
 also broken:
 
  last (map fromRational [1,3 .. 20])
  21.0

Sure, for Int, Rational, Integer, etc., frankly I'd be in favor of a
runtime error when the last value isn't in the list.  You don't need
approximate behavior for those types, and if you really mean
takeWhile (<= 20) [1,3..], then you should probably write that, rather
than a list range notation that doesn't mean the same thing.
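To see the difference side by side (a sketch; `overshoots` and `explicit` are names made up here), the current Rational desugaring really does run past the stated bound, while spelling out the intent with takeWhile stops where expected:

```haskell
-- The range notation overshoots even for an exact type: the list below
-- ends at 21, past the stated bound of 20.
overshoots :: Rational
overshoots = last [1, 3 .. 20]

-- Writing the intent out explicitly stops at 19, as one would expect.
explicit :: Rational
explicit = last (takeWhile (<= 20) [1, 3 ..])

main :: IO ()
main = do
  print overshoots  -- 21 % 1
  print explicit    -- 19 % 1
```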

-- 
Chris Smith




Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-27 Thread Chris Smith
On Tue, 2011-09-27 at 12:36 -0400, Steve Schafer wrote:
 [0.1,0.2..0.5] isn't the problem. The problem is coming up with
 something that not only works for [0.1,0.2..0.5], but also works for
 [0.1,0.2..1234567890.5].
 
 A good rule of thumb: For every proposal that purports to eliminate
 having to explicitly take into consideration the limited precision of
 floating-point representations, there exists a trivial example that
 breaks that proposal.

If by trivial you mean easy to construct, sure.  But if you mean
typical, that's overstating the case by quite a bit.

There are plenty of perfectly good uses for floating point numbers, as
long as you keep in mind a few simple rules:

1. Don't expect any exact answers.

2. Don't add or subtract values of vastly different magnitudes if you
expect any kind of accuracy in the results.

3. When you do depend on discrete answers (like with the Ord functions)
you assume an obligation to check that the function you're computing is
continuous around the boundary.

If you can't follow these rules, you probably should find a different
type.  But there's a very large class of computing tasks where these
rules are not a problem at all.  In your example, you're breaking rule
#2.  It's certainly not a typical case to be adding numbers like 0.1 to
numbers in the billions, and if you're doing that, you should know in
advance that an approximate type is risky.
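A quick illustration of rule 2, with numbers chosen to echo the example above (assuming IEEE double precision; `big` and `recovered` are names invented here):

```haskell
-- Near 1.2e9 the spacing between adjacent Doubles is about 2.4e-7, so a
-- small addend like 0.1 cannot survive the round trip intact.
big :: Double
big = 1234567890.5

recovered :: Double
recovered = (big + 0.1) - big

main :: IO ()
main = do
  print recovered           -- close to 0.1, but not 0.1
  print (recovered == 0.1)  -- False
```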

-- 
Chris




Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-26 Thread Chris Smith
On Mon, 2011-09-26 at 18:53 +0200, Lennart Augustsson wrote:
 If you do [0.1, 0.2 .. 0.3] it should leave out 0.3.  This is floating
 point numbers and if you don't understand them, then don't use them.
 The current behaviour of .. for floating point is totally broken, IMO.

I'm curious, do you have even a single example of when the current
behavior doesn't do what you really wanted anyway?  Why would you write
an upper bound of 0.3 on a list if you don't expect that to be included
in the result?  I understand that you can build surprising examples with
stuff that no one would really write... but when would you really *want*
the behavior that pretends floating point numbers are an exact type and
splits hairs?

I'd suggest that if you write code that depends on whether 0.1 + 0.1 +
0.1 = 0.3, for any reason other than to demonstrate rounding error,
you're writing broken code.  So I don't understand the proposal to
change this notation to create a bunch of extra broken code.

-- 
Chris




Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-26 Thread Chris Smith
On Mon, 2011-09-26 at 18:52 +0300, Yitzchak Gale wrote:
 Chris Smith wrote:
  class Ord a = Range a where...
 
 Before adding a completely new Range class, I would suggest
 considering Paul Johnson's Ranged-sets package:

Well, my goal was to try to find a minimal and simple answer that
doesn't break anything or add more complexity.  So I don't personally
find the idea of adding multiple *more* type classes appealing.

In any case, it doesn't make much difference either way.  It's clear
that no one is going to be satisfied here, so there's really no point in
making any change.  In fact, if this conversation leads to changes, it
looks like it will just break a bunch of code and make Haskell harder to
use.

-- 
Chris Smith





Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-26 Thread Chris Smith
On Mon, 2011-09-26 at 19:54 -0700, Donn Cave wrote:
 Pardon the questions from the gallery, but ... I can sure see that
 0.3 shouldn't be included in the result by overshooting the limit
 (i.e., 0.30004), and the above expectations about
 [0,2..9] are obvious enough, but now I have to ask about [0,2..8] -
 would you not expect 8 in the result?  Or is it not an upper bound?

Donn,

[0, 2 .. 8] should be fine no matter the type, because integers of those
sizes are all exactly representable in all of the types involved.  The
reason for the example with a step size of 0.1 is that 0.1 is actually
an infinitely repeating number in base 2 (because the denominator has a
prime factor of 5).  So actually *none* of the exact real numbers 0.1,
0.2, or 0.3 are representable with floating point types.  The
corresponding literals actually refer to real numbers that are slightly
off from those.

Furthermore, because the step size is not *exactly* 0.1, when it's added
repeatedly in the sequence, the result has some (very small) drift due
to repeated rounding error... just enough that by the time you get in
the vicinity of 0.3, the corresponding value in the sequence is actually
*neither* the rational number 0.3, *nor* the floating point literal 0.3.
Instead, it's one ulp larger than the floating point literal because of
that drift.
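Concretely, assuming IEEE double precision, the drift is easy to observe (the name `steps` is made up here):

```haskell
-- The value near 0.3 in the sequence is one ulp above the Double
-- literal 0.3, because the step is slightly more than 1/10 and the
-- rounding error accumulates across the additions.
steps :: [Double]
steps = [0.1, 0.2 .. 0.3]

main :: IO ()
main = do
  print steps                -- [0.1,0.2,0.30000000000000004]
  print (last steps == 0.3)  -- False: one ulp above the literal
```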

So there are two perspectives here.  One is that we should think in
terms of exact values of the type Float, which means we'd want to
exclude it, because it's larger than the top end of the range.  The
other is that we should think of approximate values of real numbers, in
which case it's best to pick the endpoint closest to the stated one, to
correct for what's obviously unintended drift due to rounding.

So that's what this is about: do we think of Float as an approximate
real number type, or as an exact type with specific values.  If the
latter, then of course you exclude the value that's larger than the
upper range.  If the former, then using comparison operators like '<'
implies a proof obligation that the result of the computation remains
stable (loosely speaking, that the function is continuous) at that boundary
despite small rounding error in either direction.  In that case,
creating a language feature where, in the *normal* case of listing the
last value you expect in the list, 0.3 (as an approximate real number)
is excluded from this list just because of technicalities about the
representation is an un-useful implementation, to say the least, and
makes it far more difficult to satisfy that proof obligation.

Personally, I see floating point values as approximate real numbers.
Anything else is unrealistic: the *fact* of the matter is that no one is
reasoning about ulps or exact rational values when they use Float and
Double.  In practice, even hardware implementations of some floating
point functions have indeterminate results in the exact sense.  Often,
the guarantee provided by an FPU is that the result will be within one
ulp of the correct answer, which means the exact value of the answer
isn't even known!  So, should we put all floating point calculations in
the IO monad because they aren't pure functions?  Or can we admit that
people use floating point to approximate reals and accept the looser
reasoning?

-- 
Chris Smith





Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-25 Thread Chris Smith
Would it be an accurate summary of this thread that people are asking
for (not including quibbles about naming and a few types):

class Ord a => Enum a where
  succ :: a -> a
  pred :: a -> a
  fromEnum :: a -> Int(eger)
  toEnum :: Int(eger) -> a
  -- No instance for Float/Double

class Ord a => Range a where
  rangeFromTo :: a -> a -> [a] -- subsumes Ix.range / Enum.enumFromTo
  rangeFromThenTo :: a -> a -> a -> [a]
  inRange :: (a, a) -> a -> Bool
  -- Does have instances for Float/Double.  List ranges desugar to this.
  -- Also has instances for tuples

class Range a => InfiniteRange a where -- [1]
  rangeFrom :: a -> [a]
  rangeFromThen :: a -> a -> [a]
  -- Has instances for Float/Double
  -- No instances for tuples

class Range a => Ix a where
  index :: (a, a) -> a -> Int
  rangeSize :: (a, a) -> Int

-- Again no instances for Float/Double.  Having an instance here implies
-- that the rangeFrom* are complete, containing all 'inRange' values

class (RealFrac a, Floating a) => RealFloat a where
  ... -- existing stuff
  (.<.), (.<=.), (.>.), (.>=.), (.==.) :: a -> a -> Bool
  -- these are IEEE semantics when applicable

instance Ord Float where ... -- real Ord instance where NaN has a place

There would be the obvious properties stated for types that are
instances of both Enum and Range, but this allows for non-Enum types to
still be Range instances.
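As a toy rendering of the Range part of the proposal — none of these names exist in any real library, and the instance simply mimics today's half-step-past-the-endpoint behaviour of enumFromThenTo rather than specifying anything new:

```haskell
-- Hypothetical, self-contained version of the proposed Range class.
class Ord a => Range a where
  rangeFromTo     :: a -> a -> [a]
  rangeFromThenTo :: a -> a -> a -> [a]
  inRange         :: (a, a) -> a -> Bool

-- An approximate-real instance: keep values within half a step of the
-- stated endpoint, mirroring the current Enum Double behaviour.
instance Range Double where
  rangeFromTo lo hi = rangeFromThenTo lo (lo + 1) hi
  rangeFromThenTo lo next hi =
    let step = next - lo
        keep x | step >= 0 = x <= hi + step / 2
               | otherwise = x >= hi + step / 2
    in  takeWhile keep (iterate (+ step) lo)
  inRange (lo, hi) x = lo <= x && x <= hi

main :: IO ()
main = print (rangeFromThenTo 0.1 0.2 (0.5 :: Double))
       -- five elements, ending just past 0.5, like [0.1, 0.2 .. 0.5] today
```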

If there's general agreement on this, then we at least have a proposal,
and one that doesn't massively complicate the existing system.  The next
step, I suppose would be to implement it in an AltPrelude module and
(sadly, since Enum is changing meaning) a trivial GHC language
extension.  Then the real hard work of convincing more people to use it
would start.  If that succeeds, the next hard work would be finding a
compatible way to make the transition...

I'm not happy with InfiniteRange, but I imagine the alternative (runtime
errors) won't be popular in the present crowd.

-- 
Chris





Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-25 Thread Chris Smith
 Don't mix range and arithmetic sequences. I want arithmetic sequences for 
 Double, Float and Rational, but not range.
 (For Float and Double one could implement range [all values between the 
 given bounds, in increasing order, would be the desired/expected semantics 
 for that, I think?],

Okay, fine, I tried.  Obviously, I'm opposed to just flat removing
features from the language, especially when they are so useful that they
are being used without any difficulty at all by the 12 year olds I'm
teaching right now.

Someone (sorry, not me) should really write up the proposed change to
Ord for Float/Double and shepherd them through the haskell-prime
process.  That one shouldn't even be controversial; there's already an
isNaN people should be using for NaN checks, and any code relying on the
current behavior is for all intents and purposes broken anyway.  The
only question is whether to add the new methods to RealFloat (breaking
on the bizarre off chance that someone has written a nonstandard
RealFloat instance), or add a new IEEE type class.

-- 
Chris Smith





Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-22 Thread Chris Smith
On Fri, 2011-09-23 at 11:02 +1200, Richard O'Keefe wrote:
 I do think that '..' syntax for Float and Double could be useful,
 but the actual definition is such that, well, words fail me.
 [1.0..3.5] = [1.0,2.0,3.0,4.0]   Why did anyone ever think
 _that_ was a good idea?

In case you meant that as a question, the reason is this:

Prelude> [0.1, 0.2 .. 0.3]
[0.1,0.2,0.30000000000000004]

Because of rounding error, an implementation that meets your proposed
law would have left out 0.3 from that sequence, when of course it was
intended to be there.  This is messy for the properties you want to
state, but it's almost surely the right thing to do in practice.  If the
list is longer, then the most likely way to get it right is to follow
the behavior as currently specified.  Of course it's messy, but the
world is a messy place, especially when it comes to floating point
arithmetic.

If you can clear this up with a better explanation of the properties,
great!  But if you can't, then we ought to reject the kind of thinking
that would remove useful behavior when it doesn't fit some theoretical
properties that looked nice until you consider the edge cases.

-- 
Chris




