Re: [Haskell-cafe] class Bytestringable or ToBytestring

2012-11-23 Thread Silvio Frischknecht
I recently found the convertible package:

http://hackage.haskell.org/packages/archive/convertible/1.0.11.1/doc/html/Data-Convertible-Base.html

convert :: Convertible a b => a -> b

I've only used it once but it looks good to me.

Sure, the type checker does not guarantee that you get a ByteString back, but
if you only use your own types and write all the instances yourself, you
should be safe.
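For comparison, the class-based approach from the subject line could be
sketched like this (a minimal sketch; the class and the UserName type are
illustrative, not from any package):

```haskell
-- A dedicated conversion class pins down the result type, unlike the
-- more general Convertible. (ByteString comes from the bytestring
-- package, which ships with GHC.)
import qualified Data.ByteString.Char8 as B

class ToByteString a where
  toByteString :: a -> B.ByteString

newtype UserName = UserName String

instance ToByteString UserName where
  toByteString (UserName s) = B.pack s

main :: IO ()
main = B.putStrLn (toByteString (UserName "silvio"))  -- prints "silvio"
```

The trade-off is exactly the one above: you write an instance per type, but
the type checker now guarantees a ByteString comes out.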

silvio

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Cabal failures...

2012-11-23 Thread kudah
Personally, I successfully use Wine to build, ship and test for Windows.
There are some pitfalls related to -optl-mwindows and encodings,
but if you launch your program with $LANG set to a proper Windows
encoding like cp1251 and the std handles closed with 0>&- 1>&- 2>&-,
it should crash on related errors the same way as on Windows.

I am not (yet) aware of any Haskell programs that don't run under Wine.

On Wed, 21 Nov 2012 13:05:45 +1100 Erik de Castro Lopo
mle...@mega-nerd.com wrote:

 So is it difficult for an open source contributor to test on windows?
 Hell yes! You have no idea how hard windows is in comparison to say
 FreeBSD. Even Apple's OS X is easier than windows, because I have
 friends who can give me SSH access to their machines.



Re: [Haskell-cafe] Compilers: Why do we need a core language?

2012-11-23 Thread Jacques Carette

On 22/11/2012 11:52 AM, Brandon Allbery wrote:
On Thu, Nov 22, 2012 at 7:56 AM, Jacques Carette care...@mcmaster.ca wrote:


On 20/11/2012 6:08 PM, Richard O'Keefe wrote:

On 21/11/2012, at 4:49 AM, c...@lavabit.com wrote:

Well, I don't know. Would it save some time? Why bother
with a core language?

For a high level language (and for this purpose, even Fortran 66 counts
as high level) you really don't _want_ a direct translation from
source code to object code.  You want to eliminate unused code and
you want to do all sorts of analyses and improvements.  It is *much*
easier to do all that to a small core language than to the full
source language.


Actually, here I disagree.  It might be much 'easier' for the
programmers to do it for a small core language, but it may turn
out to be much, much less effective.  I 'discovered' this when
(co-)writing a partial evaluator for Maple: 



You're still using a core language, though; just with a slightly 
different focus than Haskell's.  I already mentioned gcc's internal 
language, which similarly is larger (semantically; syntactically it's 
sexprs).  What combination is more appropriate depends on the language 
and the compiler implementation.


Right, we agree: it is not 'core language' I disagreed with, it is 
'smaller core language'.  And we also agree that smaller/larger depends 
on the eventual application.  But 'smaller core language' is so 
ingrained as conventional wisdom that I felt compelled to offer 
evidence against this bit of folklore.


Jacques



Re: [Haskell-cafe] Compilers: Why do we need a core language?

2012-11-23 Thread Jacques Carette

On 22/11/2012 7:37 PM, Richard O'Keefe wrote:

On 23/11/2012, at 1:56 AM, Jacques Carette wrote:

Actually, here I disagree. It might be much 'easier' for the 
programmers to do it for a small core language, but it may turn out 
to be much, much less effective. I 'discovered' this when 
(co-)writing a partial evaluator for Maple: we actually made our 
internal language *larger*, so that we could encode more invariants 
syntactically. This ended up making our jobs considerably easier, 
because we did not have to work so hard on doing fancy analyses to 
recover information that would otherwise have been completely 
obvious. Yes, there were a lot more cases, but each case was 
relatively easy; the alternative was a small number of extremely 
difficult cases. 

I don't think we do disagree.  We are talking about the same thing:
``not hav[ing] to work so hard on doing fancy analyses''.
The key point is that an (abstract) syntax *designed for the compiler*
and a syntax *designed for programmers* have to satisfy different
design goals and constraints; there's no reason they should be the same.


I must have mis-interpreted what you said then.  We definitely agree on 
this key point.


Jacques





[Haskell-cafe] Portability of Safe Haskell packages

2012-11-23 Thread Roman Cheplyaka
It has been pointed out before that in order for Safe Haskell to be
useful, libraries (especially core libraries) should be annotated
properly with Safe Haskell LANGUAGE pragmas.

However, that would make these libraries unusable with alternative
Haskell implementations, even if these libraries are otherwise
Haskell 2010.

To quote the standard:

  If a Haskell implementation does not recognize or support a particular
  language feature that a source file requests (or cannot support the
  combination of language features requested), any attempt to compile or
  otherwise use that file with that Haskell implementation must fail
  with an error. 

Should it be advised to surround safe annotations with CPP #ifs?
Or does anyone see a better way out of this contradiction?

Roman



Re: [Haskell-cafe] Compilers: Why do we need a core language?

2012-11-23 Thread Mike Meyer


Jacques Carette care...@mcmaster.ca wrote:

On 22/11/2012 11:52 AM, Brandon Allbery wrote:
 On Thu, Nov 22, 2012 at 7:56 AM, Jacques Carette care...@mcmaster.ca

 mailto:care...@mcmaster.ca wrote:

 On 20/11/2012 6:08 PM, Richard O'Keefe wrote:

 On 21/11/2012, at 4:49 AM, c...@lavabit.com
 mailto:c...@lavabit.com wrote:

 Well, I don't know. Would it save some time? Why bother
 with a core language?

 For a high level language (and for this purpose, even Fortran
 66 counts as
 high level) you really don't _want_ a direct translation
 from source code
 to object code.  You want to eliminate unused code and you
 want to do all
 sorts of analyses and improvements.  It is *much* easier to
do
 all that to
 a small core language than to the full source language.


 Actually, here I disagree.  It might be much 'easier' for the
 programmers to do it for a small core language, but it may turn
 out to be much, much less effective.  I 'discovered' this when
 (co-)writing a partial evaluator for Maple: 


 You're still using a core language, though; just with a slightly 
 different focus than Haskell's.  I already mentioned gcc's internal 
 language, which similarly is larger (semantically; syntactically it's

 sexprs).  What combination is more appropriate depends on the
language 
 and the compiler implementation.

Right, we agree: it is not 'core language' I disagreed with, it is 
'smaller core language'.  And we also agree that smaller/larger depends
on the eventual application.  But 'smaller core language' is so 
ingrained as conventional wisdom that I felt compelled to offer 
evidence against this bit of folklore.

I don't think your evidence contradicts that bit of folklore. But as stated
it's vague. In particular, is 'smaller' relative to the full source language,
or is it absolute (in which case you should compile to a RISC architecture
and optimize that :-)? Since the latter seems silly, I have to ask: was your
core language for Maple larger than Maple?
-- 
Sent from my Android tablet with K-9 Mail. Please excuse my swyping.



[Haskell-cafe] Problem with benchmarking FFI calls with Criterion

2012-11-23 Thread Janek S.
I am using the Criterion library to benchmark C code called via FFI
bindings, and I've run into a problem that looks like a bug.

The first benchmark that uses FFI runs correctly, but subsequent benchmarks
run much longer. I created demo code (about 50 lines, available at github:
https://gist.github.com/4135698 ) in which a C function copies a vector of
doubles. I benchmark that function a couple of times. The first run results
in an average time of about 17us; subsequent runs take about 45us. In my
real code the additional time was about 15us, and it seemed to be a constant
factor, not relative to the correct run time. The surprising thing is that
if my C function only allocates memory and does no copying:

double* c_copy( double* inArr, int arrLen ) {
  double* outArr = malloc( arrLen * sizeof( double ) );

  return outArr;
}

then all is well: all runs take a similar amount of time. I also noticed
that sometimes in my demo code all runs take about 45us, but this does not
seem to happen in my real code - the first run is always shorter.

Does anyone have an idea what is going on?

Janek



Re: [Haskell-cafe] Compilers: Why do we need a core language?

2012-11-23 Thread Jacques Carette

On 23/11/2012 9:59 AM, Mike Meyer wrote:
[...] I have to ask if your core language for Maple was larger than 
Maple? 


Yes. Maple 10 had 62 cases in its AST, we had 75 (p.13 of [1])

Jacques

[1] http://www.cas.mcmaster.ca/~carette/publications/scp_MaplePE.pdf



Re: [Haskell-cafe] Problem with benchmarking FFI calls with Criterion

2012-11-23 Thread Edward Z. Yang
Hello Janek,

What happens if you do the benchmark without unsafePerformIO involved?

Edward

Excerpts from Janek S.'s message of Fri Nov 23 10:44:15 -0500 2012:
 I am using Criterion library to benchmark C code called via FFI bindings and 
 I've ran into a 
 problem that looks like a bug. 
 
 The first benchmark that uses FFI runs correctly, but subsequent benchmarks 
 run much longer. I 
 created demo code (about 50 lines, available at github: 
 https://gist.github.com/4135698 ) in 
 which C function copies a vector of doubles. I benchmark that function a 
 couple of times. First 
 run results in avarage time of about 17us, subsequent runs take about 45us. 
 In my real code 
 additional time was about 15us and it seemed to be a constant factor, not 
 relative to correct 
 run time. The surprising thing is that if my C function only allocates memory 
 and does no 
 copying:
 
 double* c_copy( double* inArr, int arrLen ) {
   double* outArr = malloc( arrLen * sizeof( double ) );
 
   return outArr;
 }
 
 then all is well - all runs take similar amount of time. I also noticed that 
 sometimes in my demo 
 code all runs take about 45us, but this does not seem to happen in my real 
 code - first run is 
 always shorter.
 
 Does anyone have an idea what is going on?
 
 Janek
 



Re: [Haskell-cafe] Problem with benchmarking FFI calls with Criterion

2012-11-23 Thread Janek S.
 What happens if you do the benchmark without unsafePerformIO involved?
I removed unsafePerformIO, changed copy to have type
Vector Double -> IO (Vector Double), and modified the benchmarks like this:

bench "C binding" $ whnfIO (copy signal)

I see no difference - one benchmark runs fast, the remaining ones run slow.

Janek


 Excerpts from Janek S.'s message of Fri Nov 23 10:44:15 -0500 2012:
  I am using Criterion library to benchmark C code called via FFI bindings
  and I've ran into a problem that looks like a bug.
 
  The first benchmark that uses FFI runs correctly, but subsequent
  benchmarks run much longer. I created demo code (about 50 lines,
  available at github: https://gist.github.com/4135698 ) in which C
  function copies a vector of doubles. I benchmark that function a couple
  of times. First run results in avarage time of about 17us, subsequent
  runs take about 45us. In my real code additional time was about 15us and
  it seemed to be a constant factor, not relative to correct run time.
  The surprising thing is that if my C function only allocates memory and
  does no copying:
 
  double* c_copy( double* inArr, int arrLen ) {
double* outArr = malloc( arrLen * sizeof( double ) );
 
return outArr;
  }
 
  then all is well - all runs take similar amount of time. I also noticed
  that sometimes in my demo code all runs take about 45us, but this does
  not seem to happen in my real code - first run is always shorter.
 
  Does anyone have an idea what is going on?
 
  Janek





Re: [Haskell-cafe] Portability of Safe Haskell packages

2012-11-23 Thread Bas van Dijk
On 23 November 2012 15:47, Roman Cheplyaka r...@ro-che.info wrote:
 Should it be advised to surround safe annotations with CPP #ifs?
 Or does anyone see a better way out of this contradiction?

I think that would be good advice. Note that even if you're only using
GHC, you still want to use CPP in order to support older GHC
versions which don't support Safe Haskell, as in:

http://hackage.haskell.org/packages/archive/usb/1.1.0.4/doc/html/src/System-USB-Internal.html

Arguably, in that example it would be better to move the check for the
availability of Safe Haskell to the cabal file, which would define a
CPP macro SAFE_HASKELL that can be used in source files.
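A guard of that sort might look like this (a sketch, assuming GHC's
__GLASGOW_HASKELL__ macro and that Safe Haskell arrived in GHC 7.2):

```haskell
{-# LANGUAGE CPP #-}
#if defined(__GLASGOW_HASKELL__) && __GLASGOW_HASKELL__ >= 702
{-# LANGUAGE Safe #-}
#endif
-- Implementations that don't define __GLASGOW_HASKELL__ (or predate
-- Safe Haskell) never see the pragma, so the file stays portable.
module Main where

main :: IO ()
main = putStrLn "ok"
```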

Bas



Re: [Haskell-cafe] Problem with benchmarking FFI calls with Criterion

2012-11-23 Thread Edward Z. Yang
Running the sample code on GHC 7.4.2, I don't see the "one fast,
rest slow" behavior.  What version of GHC are you running?

Edward

Excerpts from Janek S.'s message of Fri Nov 23 13:42:03 -0500 2012:
  What happens if you do the benchmark without unsafePerformIO involved?
 I removed unsafePerformIO, changed copy to have type Vector Double - IO 
 (Vector Double) and 
 modified benchmarks like this:
 
 bench C binding $ whnfIO (copy signal)
 
 I see no difference - one benchmark runs fast, remaining ones run slow.
 
 Janek
 
 
  Excerpts from Janek S.'s message of Fri Nov 23 10:44:15 -0500 2012:
   I am using Criterion library to benchmark C code called via FFI bindings
   and I've ran into a problem that looks like a bug.
  
   The first benchmark that uses FFI runs correctly, but subsequent
   benchmarks run much longer. I created demo code (about 50 lines,
   available at github: https://gist.github.com/4135698 ) in which C
   function copies a vector of doubles. I benchmark that function a couple
   of times. First run results in avarage time of about 17us, subsequent
   runs take about 45us. In my real code additional time was about 15us and
   it seemed to be a constant factor, not relative to correct run time.
   The surprising thing is that if my C function only allocates memory and
   does no copying:
  
   double* c_copy( double* inArr, int arrLen ) {
 double* outArr = malloc( arrLen * sizeof( double ) );
  
 return outArr;
   }
  
   then all is well - all runs take similar amount of time. I also noticed
   that sometimes in my demo code all runs take about 45us, but this does
   not seem to happen in my real code - first run is always shorter.
  
   Does anyone have an idea what is going on?
  
   Janek



Re: [Haskell-cafe] Portability of Safe Haskell packages

2012-11-23 Thread Roman Cheplyaka
* Herbert Valerio Riedel h...@gnu.org [2012-11-24 00:06:44+0100]
 Roman Cheplyaka r...@ro-che.info writes:
  It has been pointed out before that in order for Safe Haskell to be
  useful, libraries (especially core libraries) should be annotated
  properly with Safe Haskell LANGUAGE pragmas.
 
  However, that would make these libraries unusable with alternative
  Haskell implementations, even if otherwise they these libraries are
  Haskell2010.
 
  To quote the standard:
 
If a Haskell implementation does not recognize or support a particular
language feature that a source file requests (or cannot support the
combination of language features requested), any attempt to compile or
otherwise use that file with that Haskell implementation must fail
with an error. 
 
  Should it be advised to surround safe annotations with CPP #ifs?
  Or does anyone see a better way out of this contradiction?
 
 ...but IIRC CPP isn't part of Haskell2010, or is it?

It isn't indeed. But:

1) it's a very basic extension which is supported by (almost?) all
   existing implementations; or
2) if you want to be 100% Haskell2010, you can name your file *.cpphs and
   let Cabal do preprocessing.

1) is a compromise and 2) is not very practical, so I'm eager to hear
other alternatives.

Roman



[Haskell-cafe] ANNOUNCE: rethinkdb 0.1.0

2012-11-23 Thread Etienne Laurin
Greetings,

I am pleased to announce a Haskell client library for RethinkDB[1].
RethinkDB[2] is a newly released, open-source, distributed database.
Its simple yet powerful API seemed well suited to being accessed from a
language like Haskell. This Haskell library is modelled on the
existing JavaScript and Python APIs and adds static type checking.

Here is an example from the RethinkDB javascript tutorial[3] ported to Haskell:

run h $ orderBy ["reduction"]
      . groupedMapReduce (! "Stname") mapF (0 :: NumberExpr) (R.+)
      . filter' filterF
      . pluck ["Stname", "POPESTIMATE2011", "Dem", "GOP"]
      . zip'
      $ eqJoin (table "county_stats") "Stname" (table "polls")

    where mapF doc = ((doc ! "POPESTIMATE2011") R.*
                      ((doc ! "GOP") R.- (doc ! "Dem"))) R./ (100 :: Int)
          filterF doc = let dem = doc ! "Dem" :: NumberExpr
                            gop = doc ! "GOP" in
                        (dem R.< gop) `and'` ((gop R.- dem) R.< (15 :: Int))

What is the advantage of RethinkDB? [4]

What's really special about all RethinkDB queries is that the program
you wrote gets sent to the server, broken up into chunks, sent to
relevant shards, executed completely in parallel (to the degree that
the query makes possible -- complex queries often have multiple
parallelization and recombination stages), but as a user you don't
have to care about that at all. You just get the results.

[...]

Of course the visceral experience you get from using RethinkDB isn't
just about the query language, or parallelization, or pleasant client
libraries. It's about hundreds upon hundreds of examples like it, some
large, some small, that add up to a completely different product. The
team is fastidious about every command, every pixel, every algorithm,
sometimes even every assembly instruction, so that people can use the
product and at every step say `Wow, I can't believe how easy this was,
how do they do that?!!' 

I hope this library can help other Haskell programmers use this new database.

Etienne Laurin

[1] http://hackage.haskell.org/package/rethinkdb
[2] http://www.rethinkdb.com
[3] http://www.rethinkdb.com/docs/tutorials/elections/
[4] 
http://www.quora.com/RethinkDB/What-is-the-advantage-of-RethinkDB-over-MongoDB



Re: [Haskell-cafe] ANNOUNCE: rethinkdb 0.1.0

2012-11-23 Thread Brandon Allbery
On Fri, Nov 23, 2012 at 7:03 PM, Etienne Laurin etie...@atnnn.com wrote:

 What is the advantage of RethinkDB? [4]
 [4]
 http://www.quora.com/RethinkDB/What-is-the-advantage-of-RethinkDB-over-MongoDB


How about a URL for those who prefer not to be sold by Facebook?

-- 
brandon s allbery kf8nh   sine nomine associates
allber...@gmail.com  ballb...@sinenomine.net
unix/linux, openafs, kerberos, infrastructure  http://sinenomine.net


Re: [Haskell-cafe] ANNOUNCE: rethinkdb 0.1.0

2012-11-23 Thread Etienne Laurin
 http://www.quora.com/RethinkDB/What-is-the-advantage-of-RethinkDB-over-MongoDB

 How about a URL for those who prefer not to be sold by Facebook?

Sorry, I did not realise there was an intrusive login dialog on that
page. You can click the "close" link in the login dialog to view
the content of the page.

Etienne Laurin



Re: [Haskell-cafe] Cabal failures...

2012-11-23 Thread Erik de Castro Lopo
kudah wrote:

 Personally, I successfully use Wine to build, ship and test for Windows.
 There are some pitfalls related to -optl-mwindows and encodings,
 but, if you launch your program with $LANG set to proper windows
 encoding like cp1251 and the std handles closed with  0- 1- 2-,
 it should crash on related errors the same way as on windows.
 
 I am not (yet) aware of any Haskell programs that don't run under Wine.

That's a very interesting solution. I use Wine to run the test suite
when I cross compile one of my C projects from Linux to Windows.

Would you consider documenting the process of setting everything up
to build Haskell programs under Wine on the Haskell Wiki?

Erik
-- 
--
Erik de Castro Lopo
http://www.mega-nerd.com/



Re: [Haskell-cafe] Compilers: Why do we need a core language?

2012-11-23 Thread wren ng thornton

On 11/20/12 6:54 AM, c...@lavabit.com wrote:

Hello,

I know nothing about compilers and interpreters. I checked several
books, but none of them explained why we have to translate a
high-level language into a small (core) language. Is it impossible
(very hard) to directly translate high-level language into machine
code?


It is possible to remove stages in the standard compilation pipeline, 
and doing so can speed up compilation time. For example, Perl doesn't 
build an abstract syntax tree (for now-outdated performance reasons), 
and instead compiles the source language directly into bytecode (which 
is then interpreted by the runtime). This is one of the reasons why Perl 
is (or was?) so much faster than other interpreted languages like Python 
etc. But there are some big problems to beware of:


* Not having a concrete representation for intermediate forms can rule 
out performing obvious optimizations. And I do mean *obvious* 
optimizations; I can talk more about this problem in Perl, if you really 
care.


* Not having a concrete representation for intermediate forms means 
mixing together code from many different stages of the compilation 
process. This sort of spaghetti code is hard to maintain, and even 
harder to explain to new developers.


* Not having a concrete representation for intermediate forms can lead 
to code duplication (in the compiler) because there's no convenient way 
to abstract over certain patterns. And, of course, repeating code is 
just begging for inconsistency bugs due to the maintenance burden of 
keeping all the copies in sync.


All three points are major driving forces in having intermediate forms. 
Joachim Breitner gave some illustrations for why intermediate forms are 
inevitable. But then, once you have intermediate forms, if you're 
interested in ensuring correctness and having a formal(izable) 
semantics, then it makes sense to try to turn those intermediate forms 
into an actual intermediate language. Intermediate forms are just an 
implementation detail, but intermediate languages can be reasoned about 
in the same ways as other languages. So it's more about shifting 
perspective in order to turn systems problems (implementation details) 
into language problems (semantics of the Core).


Furthermore, if you're a PL person and really are trying to ensure 
correctness of your language (e.g., type safety), you want to try to 
make your proof obligation as small as possible. For convenience to 
programmers, source code is full of constructs which are all more or 
less equivalent. But this is a problem for making proofs because when we 
perform case analysis on an expression we have to deal with all those 
different syntactic forms. Whereas if you first compile everything down 
into a small core language, then the proof has far fewer syntactic forms 
it has to deal with and so the proof is much easier. Bear in mind that 
this isn't just a linear problem. If we have N different syntactic 
forms, then proving something like confluence will require proving 
O(N^2) cases since you're comparing two different terms.


--
Live well,
~wren



Re: [Haskell-cafe] isLetter vs. isAlpha

2012-11-23 Thread wren ng thornton

On 11/21/12 4:59 PM, Artyom Kazak wrote:

I saw a question on StackOverflow about the difference between isAlpha
and isLetter today. One of the answers stated that the two functions are
interchangeable, even though they are implemented differently.

I decided to find out whether the difference in implementation
influences performance, and look what I found:


import Criterion.Main
import Data.Char
fTest name f list = bgroup name $ map (\(n,c) -> bench n $ whnf f c) list
tests = [("latin", 'e'), ("digit", '8'), ("symbol", '…'), ("greek", 'λ')]
main = defaultMain [fTest "isAlpha" isAlpha tests,
                    fTest "isLetter" isLetter tests]


produces this table (times are in nanoseconds):

  latin digit symbol greek
  - - -- -
isAlpha  | 156   212   368310
isLetter | 349   344   383310

isAlpha is twice as fast on latin inputs! Does it mean that isAlpha
should be preferred? Why isn’t isLetter defined in terms of isAlpha in
Data.Char?


FWIW, testing on an arbitrary snippet of Japanese yields:

benchmarking nf (map isAlpha)
mean: 26.21897 us, lb 26.17674 us, ub 26.27707 us, ci 0.950
std dev: 251.4027 ns, lb 200.4399 ns, ub 335.3004 ns, ci 0.950

benchmarking nf (map isLetter)
mean: 26.95068 us, lb 26.91681 us, ub 26.99481 us, ci 0.950
std dev: 197.5631 ns, lb 158.9950 ns, ub 239.4986 ns, ci 0.950

I'm curious what the difference is between the functions, and whether 
isLetter is ever preferable...
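One way to probe that empirically is a brute-force scan over part of the
code space (just an experiment on whatever base you have installed, not a
specification of either function):

```haskell
import Data.Char (isAlpha, isLetter)

-- Count the code points below U+3000 on which the two predicates
-- disagree; with recent base the count is typically zero, but that is
-- an observation, not a guarantee.
main :: IO ()
main = do
  let diffs = [c | c <- ['\0' .. '\x2FFF'], isAlpha c /= isLetter c]
  putStrLn ("disagreements: " ++ show (length diffs))
```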


--
Live well,
~wren



Re: [Haskell-cafe] Cabal failures...

2012-11-23 Thread kudah
On Sat, 24 Nov 2012 13:46:37 +1100 Erik de Castro Lopo
mle...@mega-nerd.com wrote:

 kudah wrote:
 
  Personally, I successfully use Wine to build, ship and test for
  Windows. There are some pitfalls related to -optl-mwindows and
  encodings, but, if you launch your program with $LANG set to proper
  windows encoding like cp1251 and the std handles closed with  0-
  1- 2-, it should crash on related errors the same way as on
  1windows.
  
  I am not (yet) aware of any Haskell programs that don't run under
  Wine.
 
 Thats a very interesting solution. I use Wine to run the test suite
 when I cross compile one of my C projects from Linux to Wine.
 
 Would you consider documenting the process of setting everything up
 to build Haskell programs under Wine on the Haskell Wiki?
 
 Erik

Aside from what I posted above, it's the same as on Windows: just install
the Haskell Platform. There's already a page on the Haskell Wiki,
http://www.haskell.org/haskellwiki/GHC_under_Wine though it seems very
outdated. I can update it with my own observations when I get an HW
account; they seem to have switched to manual registration while I
wasn't looking.



[Haskell-cafe] Quasiquotation page on HaskellWiki needs updating

2012-11-23 Thread Erik de Castro Lopo
Hi all,

It seems the Quasiquotation page on HaskellWiki

http://www.haskell.org/haskellwiki/Quasiquotation

has fallen behind the actual Quasiquotation implementation that
is in ghc-7.4.2 and later.

Specifically, the QuasiQuoter constructor that the Wiki shows takes two
parameters:

data QuasiQuoter
    = QuasiQuoter
    { quoteExp :: String -> Q Exp
    , quotePat :: String -> Q Pat
    }

while the one in ghc-7.4 and later takes four:

data QuasiQuoter
    = QuasiQuoter
    { quoteExp :: String -> Q Exp
    , quotePat :: String -> Q Pat
    , quoteType :: String -> Q Type
    , quoteDec :: String -> Q [Dec]
    }
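For reference, a minimal quasiquoter filling in all four fields might look
like this (a sketch; the name str and its behaviour are made up for
illustration, not taken from the Wiki):

```haskell
-- A toy quasiquoter: quotes the raw string as a literal in expression
-- and pattern contexts, and refuses type/declaration contexts.
import Language.Haskell.TH (litE, litP, runQ, stringL)
import Language.Haskell.TH.Quote (QuasiQuoter (..))

str :: QuasiQuoter
str = QuasiQuoter
  { quoteExp  = \s -> litE (stringL s)
  , quotePat  = \s -> litP (stringL s)
  , quoteType = \_ -> fail "str: not usable in a type context"
  , quoteDec  = \_ -> fail "str: not usable in a declaration context"
  }

-- runQ lets us inspect the generated syntax from plain IO.
main :: IO ()
main = runQ (quoteExp str "hello") >>= print  -- LitE (StringL "hello")
```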

I'm just starting out with quasiquotation and am not yet qualified
to update this page myself.

Erik
-- 
--
Erik de Castro Lopo
http://www.mega-nerd.com/



[Haskell-cafe] Is there a tool like ri from ruby?

2012-11-23 Thread Magicloud Magiclouds
ri is a very easy-to-use tool that searches and shows the documentation of
Ruby modules, functions, etc.
Using ri, I can get help while programming with a few commands. Quick and
simple.
And with the ri backend (libs), I could also simply integrate ri into my IDE
(like one hotkey to show a function summary).

So I am wondering if there is such a thing in the Haskell world. I know haddock
saves the .haddock files in the documentation folder. But I do not know if
there are any existing tools to index and view them.
-- 
竹密岂妨流水过
山高哪阻野云飞

And for G+, please use magiclouds#gmail.com.


[Haskell-cafe] Survey: What are the more common Haskell IDEs in use ?

2012-11-23 Thread Dan
Because I see there are many preferences on what IDE to use for Haskell,
I've created a quick survey on this topic.

Please click here and select your choices from the lists.

http://kwiksurveys.com/s.asp?sid=oqr42h4jc8h0nbc53652


Any comments/suggestions are welcome
(e.g. if any option is missing).

Apologies for the ad they show after you select the preferences - it is a free
survey tool - I guess they have to live somehow too :)


Re: [Haskell-cafe] Is there a tool like ri from ruby?

2012-11-23 Thread Tikhon Jelvis
Have you tried Hoogle? I know you can install it locally and use it from
GHCi or Emacs. I'm not familiar with ri, but from your description I think
a local Hoogle would serve the same purpose with the added benefit of being
able to search by types.

Here's the wiki page about it: http://www.haskell.org/haskellwiki/Hoogle


On Fri, Nov 23, 2012 at 11:18 PM, Magicloud Magiclouds 
magicloud.magiclo...@gmail.com wrote:

 RI is a very easy using tool that search and show the documents of ruby
 modules/functions, etc.
 Using RI, I could get help when programming with a few commands. Quick and
 simple.
 And with RI backend (libs), I could also simply integrate ri to my IDE
 (like one hotkey to show function summary).

 So I am wondering if there is such thing in Haskell world. I know haddock
 saves the .haddock files in document folder. But I do not know if there is
 any existing tools to index and view them.
 --
 竹密岂妨流水过
 山高哪阻野云飞

 And for G+, please use magiclouds#gmail.com.





Re: [Haskell-cafe] Survey: What are the more common Haskell IDEs in use ?

2012-11-23 Thread Erik de Castro Lopo
Dan wrote:

 Because I see there are many preferences on what IDE to use for Haskell 
 I've created a quick survey on this topic.
 
 Please click here and select your choices from the lists.
 
 http://kwiksurveys.com/s.asp?sid=oqr42h4jc8h0nbc53652
 
 
 Any comments/suggestions are welcome.

I use Geany, which is not on the list.

Erik
-- 
--
Erik de Castro Lopo
http://www.mega-nerd.com/
