Re: [Haskell-cafe] emacs literate haskell mode

2011-09-27 Thread Ivan Lazar Miljenovic
On 28 September 2011 16:25, Mathijs Kwik  wrote:
> I tried mmm-mode with a few configurations, but I get into trouble
> when using other haskell-mode features. Also, the wiki page on
> haskell-mode (
> http://www.haskell.org/haskellwiki/Haskell_mode_for_Emacs#Literate_Haskell
> ) specifically mentions that mmm-mode tricks are no longer needed and
> shouldn't be used.
>
> Its built-in support does a great job of keeping all code blocks working
> the way I want, but the latex parts are just dead text.
>
> I wouldn't mind switching manually, as most of the time I'm either
> coding (touching only small parts of latex), or writing (leaving the
> code parts as-is).
> However, latex mode seems to trip over certain code parts ($ sign in
> haskell code for example).
> So it seems it's not smart enough to just ignore code blocks.
>
> Probably I need to look into latex mode a bit more, so it becomes
> off-topic for this list.

If you're using AucTeX, there's a way that you can specify that
\begin{code}...\end{code} is recognised as a verbatim (i.e. "not
LaTeX") environment:
http://stackoverflow.com/questions/3274091/auctex-emacs-problem-with-character

-- 
Ivan Lazar Miljenovic
ivan.miljeno...@gmail.com
IvanMiljenovic.wordpress.com

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] emacs literate haskell mode

2011-09-27 Thread Mathijs Kwik
I tried mmm-mode with a few configurations, but I get into trouble
when using other haskell-mode features. Also, the wiki page on
haskell-mode (
http://www.haskell.org/haskellwiki/Haskell_mode_for_Emacs#Literate_Haskell
) specifically mentions that mmm-mode tricks are no longer needed and
shouldn't be used.

Its built-in support does a great job of keeping all code blocks working
the way I want, but the latex parts are just dead text.

I wouldn't mind switching manually, as most of the time I'm either
coding (touching only small parts of latex), or writing (leaving the
code parts as-is).
However, latex mode seems to trip over certain code parts ($ sign in
haskell code for example).
So it seems it's not smart enough to just ignore code blocks.

Probably I need to look into latex mode a bit more, so it becomes
off-topic for this list.

Thanks for your help
Mathijs


On Wed, Sep 28, 2011 at 1:27 AM, Ivan Lazar Miljenovic
 wrote:
> On 28 September 2011 07:42, Rogan Creswick  wrote:
>> On Tue, Sep 27, 2011 at 11:24 AM, Mathijs Kwik  
>> wrote:
>>> Hi all,
>>>
>>> I'm using haskell-mode for emacs and I'm using it to open a literate
>>> haskell file which uses latex.
>>> This works fine, haskell code has syntax highlighting, and special
>>> symbols like lambda get used.
>> However, the latex itself is dull and green, with no highlighting/coloring there.
>>> Does anyone know if it's possible to turn on latex highlighting in
>>> literate haskell mode?
>>> I tried switching to latex-mode, which does the trick (but it chokes
>> on the haskell code in between), so I'm pretty sure emacs has
>>> everything it needs, but haskell-mode needs to enable this somehow.
>>
>> I'm not certain this /is/ easily within Emacs's capabilities.  Emacs isn't
>> really set up to support more than one major mode at a time -- there
>> is, however, an extension that can do this.  The challenge is defining
>> the start and end of the areas of each 'mode' in the buffer; I've
>> never had very much success, but depending on the delimiters used in
>> the literate haskell syntax you're working with, you may be able to set
>> it up:
>>
>> http://www.emacswiki.org/emacs/MultipleModes
>
> There's a more detailed listing of configurations, etc. at:
>
> * 
> http://www.haskell.org/haskellwiki/Literate_programming#Multi-mode_support_in_Emacs
> * haskell-latex.el at http://www.loveshack.ukfsn.org/emacs/ (mentioned
> in the MultipleModes page on the emacs wiki)
>
> But in general, I agree: multiple modes suck in Emacs.  I tried all of
> the available attempts at multiple modes when trying to get Markdown +
> literate Haskell working; the best I could get was using multi-mode.el
> (and there are still a few glitches).
>
> In general, Emacs tends to go a bit nuts when it's time to switch modes :/
>
> --
> Ivan Lazar Miljenovic
> ivan.miljeno...@gmail.com
> IvanMiljenovic.wordpress.com
>

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-27 Thread Steve Schafer
On Tue, 27 Sep 2011 13:13:39 -0600, you wrote:

>On Tue, 2011-09-27 at 12:36 -0400, Steve Schafer wrote:
>> [0.1,0.2..0.5] isn't the problem. The problem is coming up with
>> something that not only works for [0.1,0.2..0.5], but also works for
>> [0.1,0.2..1234567890.5].
>> 
>> A good rule of thumb: For every proposal that purports to eliminate
>> having to explicitly take into consideration the limited precision of
>> floating-point representations, there exists a trivial example that
>> breaks that proposal.
>
>If by "trivial" you mean easy to construct, sure.  But if you mean
>typical, that's overstating the case by quite a bit.
>
>There are plenty of perfectly good uses for floating point numbers, as
>long as you keep in mind a few simple rules:

Your "rules" are what I meant by "...having to explicitly take into
consideration the limited precision of floating-point representations."

The problem, of course, is that people would rather not have to follow
any rules, and would prefer that floating-point arithmetic just worked
the way they think it ought to.

-Steve Schafer

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Much faster complex monad stack based on CPS state

2011-09-27 Thread Ryan Ingram
My guess is that Cont plays really nicely with GHC's inliner, so things that
end up looking like

 return x >>= \y -> ...

get optimized really well:

return x >>= f
-- inline >>=
= ContState $ \s0 k -> runCS (return x) s0 $ \a s1 -> runCS (f a) s1 k
-- inline return
= ContState $ \s0 k -> runCS (ContState $ \s2 k2 -> k2 x s2) s0 $ \a s1 -> runCS (f a) s1 k
-- runCS record selector
= ContState $ \s0 k -> (\s2 k2 -> k2 x s2) s0 $ \a s1 -> runCS (f a) s1 k
-- beta
= ContState $ \s0 k -> (\k2 -> k2 x s0) $ \a s1 -> runCS (f a) s1 k
-- beta
= ContState $ \s0 k -> (\a s1 -> runCS (f a) s1 k) x s0
-- beta
= ContState $ \s0 k -> runCS (f x) s0 k

and then further inlining of f can take place.
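
For reference, the definitions this derivation assumes look roughly like
this (a reconstructed sketch of the ContState type from the earlier
thread, not necessarily what Nicu ran; the Functor/Applicative instances
are only there so the snippet compiles standalone):

newtype ContState r s a = ContState { runCS :: s -> (a -> s -> r) -> r }

instance Functor (ContState r s) where
  fmap f m = ContState $ \s k -> runCS m s (\a s' -> k (f a) s')

instance Applicative (ContState r s) where
  pure x = ContState $ \s k -> k x s
  mf <*> mx = ContState $ \s k ->
    runCS mf s (\f s' -> runCS mx s' (\x s'' -> k (f x) s''))

instance Monad (ContState r s) where
  return = pure
  m >>= f = ContState $ \s0 k -> runCS m s0 (\a s1 -> runCS (f a) s1 k)

-- the usual state operations
get :: ContState r s s
get = ContState $ \s k -> k s s

put :: s -> ContState r s ()
put s' = ContState $ \_ k -> k () s'

-- run with an initial state, returning the result and the final state
runContState :: ContState (a, s) s a -> s -> (a, s)
runContState m s = runCS m s (\a s' -> (a, s'))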

On Mon, Sep 26, 2011 at 4:07 PM, Nicu Ionita  wrote:

> Hello list,
>
> Starting from these emails
> (http://web.archiveorange.com/archive/v/nDNOvSM4JT3GJRSjOm9P) I could
> refactor my code (a UCI chess engine, with complex functions, in which
> the search has a complex monad stack) to run twice as fast as with even
> some hand-unrolled state transformer! So from 23-24 kilo nodes per
> second it now does 45 to 50 kNps! And it looks like there is still some
> room for improvement (I have to play a little bit with strictness
> annotations and so on).
>
> (Previously I tried specializations, then I removed a lot of polymorphism,
> but nothing helped; it was like hitting a wall.)
>
> Even more amazing is that I could program it although I don't really
> understand Cont & ContT: I just took the code example from Ryan Ingram
> (newtype ContState r s a = ...), looked a bit at the code for ContT
> (from the transformers library), and after fixing some compilation
> errors, it worked and was fast.
>
> I wonder why the transformers library does not use this kind of state monad
> definition. Or does it, and what I got is just because of the unrolling? Are
> there monad (transformers) libraries which are faster? I saw the library
> kan-extensions but I did not understand (yet) how to use it.
>
> Nicu
>
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
>
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] emacs literate haskell mode

2011-09-27 Thread Ivan Lazar Miljenovic
On 28 September 2011 07:42, Rogan Creswick  wrote:
> On Tue, Sep 27, 2011 at 11:24 AM, Mathijs Kwik  
> wrote:
>> Hi all,
>>
>> I'm using haskell-mode for emacs and I'm using it to open a literate
>> haskell file which uses latex.
>> This works fine, haskell code has syntax highlighting, and special
>> symbols like lambda get used.
>> However, the latex itself is dull and green, with no highlighting/coloring there.
>> Does anyone know if it's possible to turn on latex highlighting in
>> literate haskell mode?
>> I tried switching to latex-mode, which does the trick (but it chokes
>> on the haskell code in between), so I'm pretty sure emacs has
>> everything it needs, but haskell-mode needs to enable this somehow.
>
> I'm not certain this /is/ easily within Emacs's capabilities.  Emacs isn't
> really set up to support more than one major mode at a time -- there
> is, however, an extension that can do this.  The challenge is defining
> the start and end of the areas of each 'mode' in the buffer; I've
> never had very much success, but depending on the delimiters used in
> the literate haskell syntax you're working with, you may be able to set
> it up:
>
> http://www.emacswiki.org/emacs/MultipleModes

There's a more detailed listing of configurations, etc. at:

* 
http://www.haskell.org/haskellwiki/Literate_programming#Multi-mode_support_in_Emacs
* haskell-latex.el at http://www.loveshack.ukfsn.org/emacs/ (mentioned
in the MultipleModes page on the emacs wiki)

But in general, I agree: multiple modes suck in Emacs.  I tried all of
the available attempts at multiple modes when trying to get Markdown +
literate Haskell working; the best I could get was using multi-mode.el
(and there are still a few glitches).

In general, Emacs tends to go a bit nuts when it's time to switch modes :/

-- 
Ivan Lazar Miljenovic
ivan.miljeno...@gmail.com
IvanMiljenovic.wordpress.com

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] [ANN] crypto-api-tests

2011-09-27 Thread Thomas DuBuisson
The crypto-api test modules have been split out into their own
package, crypto-api-tests.  Additionally, the tests now use the
test-framework package.  This should make it much easier for
hash/cipher maintainers to integrate into their existing testing
infrastructure.  For example:

$ cabal update ; cabal install cryptocipher crypto-api crypto-api-tests

{- BEGIN CODE -}
import Test.Framework
import Test.AES (makeAESTests)
import Crypto.Cipher.AES (AES128)

main = do
  ts <- makeAESTests (undefined :: AES128)
  defaultMain ts
{- END CODE -}

$ ghc test.hs ; ./test
...
snip
...
OFBVarTxt128d.txt-125: [OK]
OFBVarTxt128d.txt-126: [OK]
OFBVarTxt128d.txt-127: [OK]
Block Cipher tests (ident):
  ECBEncDecID: [OK, passed 100 tests]
  CBCEncDecID: [OK, passed 100 tests]
  CFBEncDecID: [OK, passed 100 tests]
  OFBEncDecID: [OK, passed 100 tests]
  CTREncDecID: [OK, passed 100 tests]
Block Cipher tests (lazy/strict bytestring equality):
  ECBStringLazyEq: [OK, passed 100 tests]
  CBCStrictLazyEq: [OK, passed 100 tests]
  CFBStrictLazyEq: [OK, passed 100 tests]
  OFBStrictLazyEq: [OK, passed 100 tests]
  CTRStrictLazyEq: [OK, passed 100 tests]

          Properties  Test Cases  Total
 Passed   10          2272        2282
 Failed   0           0           0
 Total    10          2272        2282

Patches for more algorithms and/or property tests for classes of
algorithms are certainly welcome.

Cheers,
Thomas

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] I for one welcome our new Robotic Overlords

2011-09-27 Thread Anthony Cowley
On Sep 27, 2011, at 2:01 PM, Jeremy Shaw  wrote:

> When the robots take over, do you want them to be developed using a sane 
> language like Haskell or Agda? Or some dangerous untyped OO language? I think 
> the answer is obvious.
> 
> The question is, "How?". The robots will not be developed by us, but by the 
> children of today. So, we must reach their pure minds before they have been 
> unsafely coerced by the evil unbelievers who do not worship the gods λ, Π, 
> and ω.

Timing: you have it. 

I presented the work behind https://github.com/acowley/roshask at IROS 2011 
just this morning. ROS is possibly the most widely used robotics middleware 
today, and you can now use Haskell to work with existing ROS components. 

While FP isn't hugely popular among the robotics community (I've been pitching 
functional approaches here for several years), this time around I am optimistic 
that we've turned the corner, or at least started that process. There was a lot 
of support, and developers behind other large projects expressed eagerness to 
rely more heavily on the compositionality of good old functions.

I am not aware of as good a story for Arduino-level development. Atom may be an 
appropriate foundation for such an effort, but I also hope that we can get GHC 
ARM support sorted out, and then use platforms like the forthcoming Raspberry 
Pi as the computational core of an inexpensive robotics platform.

In short, you can just about achieve your vision today with a TurtleBot from 
Willow Garage and roshask.

Anthony


> 
> My long term vision is:
> 
> A company which produces an extensible robotics platform for children and 
> adults ages 8 and up. The platform would be very open, extensible, and 
> hackable.
> 
> The robotic programming languages would be based around concepts like 
> functional reactive programming, dependent types, etc.
> 
> Children would begin with a simple FRP language to control the robot. They 
> would solve simple problems like "go forward until an object is encountered." 
> As the young masters grow, they can tackle more difficult problems such as 
> maze solving. Even later they can delve into more advanced subjects like 
> computer vision, speech recognition and synthesis, or mind control rays.
> 
> The short term vision can be summarized in one word "leverage".
> 
> We need to find an existing robotic platform which can be easily targeted 
> somehow using Haskell or Agda. Perhaps something that can be targeted using 
> atom or lava? Maybe something Arduino based?
> 
> I have created a wiki page here to record your suggestions and ideas:
> 
> http://haskell.org/haskellwiki/RoboticOverlords
> 
> The requirements now are something that is:
> 
> - hackable/open
> - easily obtained
> - reasonable in price
> - can easily be targeted via Haskell
> 
> The only candidate I know of so far is lego mindstorms via the NXT package on 
> hackage, though some could argue that lego mindstorms are not reasonably
> priced.
> 
> http://hackage.haskell.org/package/NXT
> 
> Let's hear your ideas!
> - jeremy
> 
> 
> 
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] emacs literate haskell mode

2011-09-27 Thread Rogan Creswick
On Tue, Sep 27, 2011 at 11:24 AM, Mathijs Kwik  wrote:
> Hi all,
>
> I'm using haskell-mode for emacs and I'm using it to open a literate
> haskell file which uses latex.
> This works fine, haskell code has syntax highlighting, and special
> symbols like lambda get used.
> However, the latex itself is dull and green, with no highlighting/coloring there.
> Does anyone know if it's possible to turn on latex highlighting in
> literate haskell mode?
> I tried switching to latex-mode, which does the trick (but it chokes
> on the haskell code in between), so I'm pretty sure emacs has
> everything it needs, but haskell-mode needs to enable this somehow.

I'm not certain this /is/ easily within Emacs's capabilities.  Emacs isn't
really set up to support more than one major mode at a time -- there
is, however, an extension that can do this.  The challenge is defining
the start and end of the areas of each 'mode' in the buffer; I've
never had very much success, but depending on the delimiters used in
the literate haskell syntax you're working with, you may be able to set
it up:

http://www.emacswiki.org/emacs/MultipleModes

--Rogan


> Any help would be great.
> Greetings,
> Mathijs
>
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
>

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] I for one welcome our new Robotic Overlords

2011-09-27 Thread Antoine Latter
On Tue, Sep 27, 2011 at 4:01 PM, Jeremy Shaw  wrote:
>
> Let's hear your ideas!

Here's a post outlining Arduino + Haskell via Atom:

http://leepike.wordpress.com/2010/05/31/twinkle-twinkle-little-haskell/

Atom might be a tool to use for any number of targets, but I haven't
used it so I really don't know how it might fit into FRP style
programming.

Antoine

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] I for one welcome our new Robotic Overlords

2011-09-27 Thread Jeremy Shaw
When the robots take over, do you want them to be developed using a  
sane language like Haskell or Agda? Or some dangerous untyped OO  
language? I think the answer is obvious.


The question is, "How?". The robots will not be developed by us, but  
by the children of today. So, we must reach their pure minds before  
they have been unsafely coerced by the evil unbelievers who do not  
worship the gods λ, Π, and ω.


My long term vision is:

A company which produces an extensible robotics platform for children  
and adults ages 8 and up. The platform would be very open, extensible,  
and hackable.


The robotic programming languages would be based around concepts like  
functional reactive programming, dependent types, etc.


Children would begin with a simple FRP language to control the robot.  
They would solve simple problems like "go forward until an object is  
encountered." As the young masters grow, they can tackle more  
difficult problems such as maze solving. Even later they can delve  
into more advanced subjects like computer vision, speech recognition  
and synthesis, or mind control rays.


The short term vision can be summarized in one word "leverage".

We need to find an existing robotic platform which can be easily  
targeted somehow using Haskell or Agda. Perhaps something that can be  
targeted using atom or lava? Maybe something Arduino based?


I have created a wiki page here to record your suggestions and ideas:

http://haskell.org/haskellwiki/RoboticOverlords

The requirements now are something that is:

 - hackable/open
 - easily obtained
 - reasonable in price
 - can easily be targeted via Haskell

The only candidate I know of so far is lego mindstorms via the NXT  
package on hackage, though some could argue that lego mindstorms are
not reasonably priced.


http://hackage.haskell.org/package/NXT

Let's hear your ideas!
- jeremy



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-27 Thread Chris Smith
On Tue, 2011-09-27 at 12:36 -0400, Steve Schafer wrote:
> [0.1,0.2..0.5] isn't the problem. The problem is coming up with
> something that not only works for [0.1,0.2..0.5], but also works for
> [0.1,0.2..1234567890.5].
> 
> A good rule of thumb: For every proposal that purports to eliminate
> having to explicitly take into consideration the limited precision of
> floating-point representations, there exists a trivial example that
> breaks that proposal.

If by "trivial" you mean easy to construct, sure.  But if you mean
typical, that's overstating the case by quite a bit.

There are plenty of perfectly good uses for floating point numbers, as
long as you keep in mind a few simple rules:

1. Don't expect any exact answers.

2. Don't add or subtract values of vastly different magnitudes if you
expect any kind of accuracy in the results.

3. When you do depend on discrete answers (like with the Ord functions)
you assume an obligation to check that the function you're computing is
continuous around the boundary.

If you can't follow these rules, you probably should find a different
type.  But there's a very large class of computing tasks where these
rules are not a problem at all.  In your example, you're breaking rule
#2.  It's certainly not a typical case to be adding numbers like 0.1 to
numbers in the billions, and if you're doing that, you should know in
advance that an approximate type is risky.
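
To illustrate rule #2 concretely, here's a quick check (my own example,
in the spirit of the [0.1,0.2..1234567890.5] case):

-- the 0.1 gets rounded to the grid of doubles representable near 1.2e9,
-- so what actually got added is not quite 0.1
main :: IO ()
main = do
  let big = 1234567890.5 :: Double
  print ((big + 0.1) - big)         -- close to, but not exactly, 0.1
  print ((big + 0.1) - big == 0.1)  -- False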

-- 
Chris


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-27 Thread Chris Smith
On Tue, 2011-09-27 at 09:47 -0700, Iavor Diatchki wrote:
> As Ross pointed out in a previous e-mail the instance for Rationals is
> also broken:
> 
> > last (map fromRational [1,3 .. 20])
> > 21.0

Sure, for Int, Rational, Integer, etc., frankly I'd be in favor of a
runtime error when the last value isn't in the list.  You don't need
approximate behavior for those types, and if you really mean
takeWhile (<= 20) [1,3..], then you should probably write that, rather
than a list range notation that doesn't mean the same thing.

-- 
Chris Smith


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] emacs literate haskell mode

2011-09-27 Thread Mathijs Kwik
Hi all,

I'm using haskell-mode for emacs and I'm using it to open a literate
haskell file which uses latex.
This works fine, haskell code has syntax highlighting, and special
symbols like lambda get used.
However, the latex itself is dull and green, with no highlighting/coloring there.
Does anyone know if it's possible to turn on latex highlighting in
literate haskell mode?
I tried switching to latex-mode, which does the trick (but it chokes
on the haskell code in between), so I'm pretty sure emacs has
everything it needs, but haskell-mode needs to enable this somehow.

Any help would be great.
Greetings,
Mathijs

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-27 Thread Chris Smith
On Tue, 2011-09-27 at 09:23 -0700, Donn Cave wrote:
> I think it's more than reasonable to expect
> 
>   [0.1,0.2..0.5] == [0.1,0.2,0.3,0.4,0.5]
> 
> and that would make everyone happy, wouldn't it?

But what's the justification for that?  It *only* makes sense because
you used short decimal literals.  If the example were:

let a = someComplicatedCalculation
b = otherComplicatedCalculation
c = thirdComplicatedCalculation
in  [a, b .. c]

then it would be far less reasonable to expect the notation to fudge the
numbers in favor of obtaining short decimal representations, which is
essentially what you're asking for.

-- 
Chris


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-27 Thread Ketil Malde
Iavor Diatchki  writes:

>>> last ([0.1, 0.2 .. 0.5]) == 0.5
>> False

>>> last (map fromRational [0.1, 0.2 .. 0.5]) == 0.5
>> True

> As Ross pointed out in a previous e-mail the instance for Rationals is
> also broken:

>> last (map fromRational [1,3 .. 20])
>> 21.0

But only because it tries to mimic the behavior of Float/Double, I
think.  Rational could easily have produced 19 here.

-k
-- 
If I haven't seen further, it is by standing in the footprints of giants

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-27 Thread Iavor Diatchki
Hello,

On Tue, Sep 27, 2011 at 8:49 AM, Chris Smith  wrote:
> You could calculate the entire range using Rational and then convert
> each individual value after the fact.  That doesn't seem like a
> reasonable default, since it has a runtime performance cost.  Of course
> you're welcome to do it when that's what you need.
>
>> last ([0.1, 0.2 .. 0.5]) == 0.5
> False
>
>> last (map fromRational [0.1, 0.2 .. 0.5]) == 0.5
> True
>

As Ross pointed out in a previous e-mail the instance for Rationals is
also broken:

> last (map fromRational [1,3 .. 20])
> 21.0

-Iavor

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-27 Thread Steve Schafer
On Tue, 27 Sep 2011 09:23:20 -0700 (PDT), you wrote:

>I think it's more than reasonable to expect
>
>  [0.1,0.2..0.5] == [0.1,0.2,0.3,0.4,0.5]
>
>and that would make everyone happy, wouldn't it?

[0.1,0.2..0.5] isn't the problem. The problem is coming up with
something that not only works for [0.1,0.2..0.5], but also works for
[0.1,0.2..1234567890.5].

A good rule of thumb: For every proposal that purports to eliminate
having to explicitly take into consideration the limited precision of
floating-point representations, there exists a trivial example that
breaks that proposal.

-Steve Schafer

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-27 Thread Donn Cave
Quoth Chris Smith ,
...
> I certainly don't agree that wanting the exact value from a floating
> point type is a reasonable expectation.  The *only* way to recover those
> results is to do the math with the decimal or rational values instead of
> floating point numbers.  You'll get the rounding error from floating
> point regardless of how you do the computation, because the interval
> just isn't really 0.1.  The difference between those numbers is larger
> than 0.1, and when you step by that interval, you won't hit 0.5.

You may have misunderstood - you're right, it isn't reasonable to expect
`exact values' out of 0.1, 0.2, 0.3, etc., in the sense of the values
classically denoted by those terms on paper.  But I believe they do have
specific values of type Double, and it isn't unreasonable to expect the
range to produce those values and not some approximation that may have
been convenient to compute.

I think it's more than reasonable to expect

  [0.1,0.2..0.5] == [0.1,0.2,0.3,0.4,0.5]

and that would make everyone happy, wouldn't it?

If it's expensive to compute, hopefully people won't write code that
makes intensive use of Double range generation.

Donn

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-27 Thread Chris Smith
On Tue, 2011-09-27 at 00:29 -0700, Donn Cave wrote:
> It doesn't appear to me to be a technicality about the representation -
> the value we're talking about excluding is not just represented as
> greater than 0.3, it is greater than 0.3 when applied in computations.

Sure, the exact value is greater than 0.3.  But to *predict* that, you
have to know quite a bit about the technicalities of how floating point
values are represented.  For example, you need to know that 0.1 has no
exact representation as a floating point number, and that the closest
approximation is greater than the exact real number 0.1, and that the
difference is great enough that adding it twice adds up to a full ulp of
error.

> For example you can subtract 0.3 and get a nonzero value (5.55e-17.)

Again, if you're working with floating point numbers and your program
behaves in a significantly different way depending on whether you get 0
or 5.55e-17 as a result, then you're doing something wrong.

> The disappointment with iterative addition is not that
> its fifth value [should be] omitted because it's "technically" greater,
> it's that range generation via iterative addition does not yield the
> values I specified.

I certainly don't agree that wanting the exact value from a floating
point type is a reasonable expectation.  The *only* way to recover those
results is to do the math with the decimal or rational values instead of
floating point numbers.  You'll get the rounding error from floating
point regardless of how you do the computation, because the interval
just isn't really 0.1.  The difference between those numbers is larger
than 0.1, and when you step by that interval, you won't hit 0.5.

You could calculate the entire range using Rational and then convert
each individual value after the fact.  That doesn't seem like a
reasonable default, since it has a runtime performance cost.  Of course
you're welcome to do it when that's what you need.

> last ([0.1, 0.2 .. 0.5]) == 0.5
False

> last (map fromRational [0.1, 0.2 .. 0.5]) == 0.5
True
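
Wrapped up as a helper, that workaround looks like this (a sketch; the
name is mine, not something from a library):

-- enumerate exactly in Rational, then convert each element to Double
doubleRangeViaRational :: Rational -> Rational -> Rational -> [Double]
doubleRangeViaRational a b c = map fromRational [a, b .. c]

-- > doubleRangeViaRational 0.1 0.2 0.5
-- [0.1,0.2,0.3,0.4,0.5]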

-- 
Chris



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] CFG specification and analysis directly in Haskell

2011-09-27 Thread Joachim Breitner
Hi,

Am Dienstag, den 27.09.2011, 14:58 +0200 schrieb Heinrich Apfelmus:
> By the way, the old  mdo  notation was better suited to this task; the 
> new  rec  notation has some problems in this regard that will hopefully 
> be rectified soon.

wasn’t mdo rec’tified? If you rectify rec, will that then be called
recrec notation? Is fixrec the fixed point of rectification?

Greetings,
Joachim

-- 
Joachim "nomeata" Breitner
  m...@joachim-breitner.de  |  nome...@debian.org  |  GPG: 0x4743206C
  xmpp: nome...@joachim-breitner.de | http://www.joachim-breitner.de/



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] CFG specification and analysis directly in Haskell

2011-09-27 Thread Anton Tayanovskyy
> John Meacham's frisby library [1] did something similar, though the
> technique is not as well-known as it should be.

Looks like an excellent library, thank you!

> Note that you don't need to give explicit names to your rules anymore, the
> monad can do that for you.

I was using the names for a Show instance. I am assuming there is no
syntax sugar to recover the name of the variable used in a binder as a
String.

Thanks,

Anton

-- 
Kind Regards,
Anton Tayanovskyy

WebSharper™ - type-safe JavaScript in F#
http://intellifactory.com

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] CFG specification and analysis directly in Haskell

2011-09-27 Thread Heinrich Apfelmus

Anton Tayanovskyy wrote:

> As a weekend hack, I just realized that Haskell has this wonderful
> DoRec syntax that among other things seems to be able to let the user
> express context-free grammars together with their processing rules in
> normal Haskell code, without template Haskell or anything like that,
> just like parser combinators.
>
> I am just wondering if this is a known and useful result? Are
> there libraries doing similar stuff?


John Meacham's frisby library [1] did something similar, though the 
technique is not as well-known as it should be.


Note that you don't need to give explicit names to your rules anymore, 
the monad can do that for you.
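
To make the pattern concrete, here is a self-contained toy (my own
sketch, not frisby's actual API): a little monad hands out fresh rule
names, and mdo (RecursiveDo) lets a rule refer to itself, just like
parser combinators:

{-# LANGUAGE RecursiveDo #-}
import Control.Monad.State

-- a tiny grammar AST; Var names break the cycles
data G = T Char | G :|: G | G :>: G | Var Int deriving Show

-- state: a fresh-name counter plus the rule table built so far
type Gram = State (Int, [(Int, G)])

-- the monad picks the rule's name, so the user never has to
rule :: G -> Gram G
rule body = do
  (n, defs) <- get
  put (n + 1, (n, body) : defs)
  return (Var n)

-- S -> 'a' S 'b' | 'c', with S referring to itself via mdo
example :: Gram G
example = mdo
  s <- rule ((T 'a' :>: s :>: T 'b') :|: T 'c')
  return s

main :: IO ()
main = print (execState example (0, []))  -- prints the finite rule table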


By the way, the old  mdo  notation was better suited to this task; the 
new  rec  notation has some problems in this regard that will hopefully 
be rectified soon.


  [1]: http://repetae.net/computer/frisby/#v%3AnewRule


Best regards,
Heinrich Apfelmus

--
http://apfelmus.nfshost.com


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] mapM is supralinear?

2011-09-27 Thread Arseniy Alekseyev
Ketil, I suppose your argument is correct for some implementations of
GC (hopefully not the ones I use). However, a trivial optimisation of
running GC with a frequency logarithmic in the (allocation rate / heap
size) seems to make almost any kind of GC amortized O(1) while keeping
the total heap bounded within a constant factor of the reachable heap
size. So, we optimize step (1.) in your algorithm, and in the case of
mapM we should get a logarithmic (instead of linear) number of GCs with
exponentially (instead of linearly) increasing costs, reaching O(N) at
the end and totalling O(N) too!

Does anyone know if such worst-case complexity precautions are taken
in GHC? If not, why?

Malcolm, I fail to see how "They are the same thing" is compatible
with "Indeed I would". They together imply that O(1) (GC amortized
over the amount allocated) and O(+inf) (GC amortized over the amount
reclaimed) are the same thing. Also, I don't see how OOM condition is
relevant here. I may have enough memory for a lot of useful things
even without GC.

On 27 September 2011 12:42, Ketil Malde  wrote:
>
> Here's my feeble understanding of GC:
>
> 1. GC runs after some specified amount of allocation
> 2. GC runs in time proportional to the live heap, which it needs to
>   traverse.
>
> This means that for a long running mapM, like any other computation
> generating a long list, (1) GC will run a number of times proportional to
> the length of the list, and (2) each run will have a cost
> proportional to the length of the list.  I.e. a linear algorithm is now
> quadratic.
>
> A lazy mapM (or mapM_), consuming the list as fast as it is generated,
> will of course keep the list short/heap small, and thus the cost of each
> GC is constant (for some value of "constant").
>
> I suppose generational GC will help in practice, since the young
> generation gets to start near the end of the list, but it will still be
> linear in generated length, and you still need major GCs too,
> occasionally.
>
> Also, I guess mapM is more vulnerable to this, since the operations (IO,
> say) involved in building the list likely do short-lived allocations,
> triggering more GCs than simpler, pure computations would.
>
> Do let me know if this is probably a terribly naive view.
>
> -k
> --
> If I haven't seen further, it is by standing in the footprints of giants
>


On 27 September 2011 12:35, Malcolm Wallace  wrote:
>
> On 27 Sep 2011, at 11:23, Arseniy Alekseyev wrote:
>
>> Malcolm, one should amortize the cost of the collection over the
>> amount of free space allocated rather than recovered
>
> They are the same thing.  You can only allocate from the space that has been 
> recovered.  It is true that generational GC has a nursery area of largely 
> constant size, which is always used for fresh allocation, but that is usually 
> considered an optimisation (albeit a considerable one), which does not 
> fundamentally change the underlying asymptotic costs of the major 
> collections.  When you have large heap residency, the proportion of time 
> spent in GC increases.
>
>> (there are cases
>> when no space is recovered, would you call the GC cost infinite
>> then?).
>
> Indeed I would.  When that happens, usually the program aborts without 
> completing its computation, so the computation is infinitely delayed.
>
> Regards,
>    Malcolm
>

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] mapM is supralinear?

2011-09-27 Thread Ketil Malde

Here's my feeble understanding of GC:

1. GC runs after some specified amount of allocation
2. GC runs in time proportional to the live heap, which it needs to
   traverse.

This means that for a long running mapM, like any other computation
generating a long list, (1) GC will run a number of times proportional to
the length of the list, and (2) each run will have a cost
proportional to the length of the list.  I.e. a linear algorithm is now
quadratic.
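
To spell out the arithmetic behind that (a back-of-the-envelope check,
assuming one GC every a allocations, each costing c per live cell): at
the i-th collection the list has length about i*a, so the total GC work
for a list of length N is

  sum_{i=1..N/a} c*i*a  =  (c*a/2) * (N/a) * (N/a + 1)  =  Theta(N^2 / a)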

A lazy mapM (or mapM_), consuming the list as fast as it is generated,
will of course keep the list short/heap small, and thus the cost of each
GC is constant (for some value of "constant").

I suppose generational GC will help in practice, since the young
generation gets to start near the end of the list, but it will still be
linear in generated length, and you still need major GCs too,
occasionally.

Also, I guess mapM is more vulnerable to this, since the operations (IO,
say) involved in building the list likely do short-lived allocations,
triggering more GCs than simpler, pure computations would.

Do let me know if this is probably a terribly naive view.

-k
-- 
If I haven't seen further, it is by standing in the footprints of giants

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] mapM is supralinear?

2011-09-27 Thread Malcolm Wallace

On 27 Sep 2011, at 11:23, Arseniy Alekseyev wrote:

> Malcolm, one should amortize the cost of the collection over the
> amount of free space allocated rather than recovered

They are the same thing.  You can only allocate from the space that has been 
recovered.  It is true that generational GC has a nursery area of largely 
constant size, which is always used for fresh allocation, but that is usually 
considered an optimisation (albeit a considerable one), which does not 
fundamentally change the underlying asymptotic costs of the major collections.  
When you have large heap residency, the proportion of time spent in GC 
increases.

> (there are cases
> when no space is recovered, would you call the GC cost infinite
> then?).

Indeed I would.  When that happens, usually the program aborts without 
completing its computation, so the computation is infinitely delayed. 

Regards,
Malcolm

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] 9th Ghent Functional Programming Group meeting on Tuesday, the 4th of October, 2011

2011-09-27 Thread Jasper Van der Jeugt
Hello all,

We would like to remind you of our 9th GhentFPG [1] meeting and set
some things straight. In the previous announcement, we mistakenly put
“Thursday, the 4th of October”. This should be “Tuesday, the 4th of
October”. We hope this mistake has not caused any major inconvenience.

There is also a small issue regarding the room where we will host
the meeting: it is unreachable through the regular building entrance
because of construction works. Hence, attendees need to take the
alternative side entrance. This entrance can be found on the right
side (as seen from the street) of the building -- it will remain open
until 20:00. If you walk up to the Technicum building and go right at
the bicycle stands, you should see “Ingang Blok 1 & Blok 2” indicated.
Follow this, and you’ll be able to get inside, at which point you’ll
see arrows pointing to the meeting room. As a safety measure, the blue
doors will be locked, so if you arrive after 19:30, please give us a
call at +32 (0) 9 264 3370, so we can come and open the doors for you.

If you can’t find the entrance, or any other problems arise, you can
also contact us at +32476264847.

[1]: http://www.haskell.org/haskellwiki/Ghent_Functional_Programming_Group

Hoping to see you there,
The GhentFPG organizing committee

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] mapM is supralinear?

2011-09-27 Thread Arseniy Alekseyev
Malcolm, one should amortize the cost of the collection over the
amount of free space allocated rather than recovered (there are cases
when no space is recovered, would you call the GC cost infinite
then?).

If one does that, and also runs the expensive collection not too often
[1], the time amortizes to O(1) easily (notice that the amount
allocated between GCs can be kept proportional to H).

I don't know if GC used in GHC does indeed have amortized O(1) cost
per allocation, but if it doesn't, it should.

[1] - a sufficient condition would be that there exists some real
number q such that q > 1 and the next GC runs not sooner than when H
reaches H_0*q where H_0 is the heap size remaining after the last
collection.
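
To spell out why [1] suffices (a sketch, assuming a collection at heap
size H costs c*H): between the i-th and (i+1)-th collections at least

  H_{i+1} - H_i  >=  (1 - 1/q) * H_{i+1}

words are allocated, so the amortized GC cost per allocated word is at
most

  c * H_{i+1} / ((1 - 1/q) * H_{i+1})  =  c*q / (q - 1)  =  O(1)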

On 27 September 2011 10:02, Malcolm Wallace  wrote:
>
> On 26 Sep 2011, at 23:14, Arseniy Alekseyev wrote:
>
>> Garbage collection takes amortized O(1) per allocation, doesn't it?
>
> No.  For Mark-Sweep GC, the cost is proportional to
>
> (H+R) / (H-R)
> where H is the total heap size
>      R is the reachable (i.e. live) heap
>
> This formula amortises the cost of a collection over the amount of free space 
> recovered.
> For two-space copying collection, the cost is proportional to
>
> R / ((H/2)-R)
>
> In both cases, as R approaches H (or H/2), the cost of GC becomes rather 
> large.  So in essence, the more live data you have, the more GC will cost.
>
> Regards,
>    Malcolm
>
>

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Parsing binary data question

2011-09-27 Thread Michael Oswald

Hello all,

I am currently working on parser for some packets received via the 
network. The data structure currently is like that:



data Value = ValUInt8 Int8
   | ValUInt16 Int16
   | ValUInt32 Int32
-- more datatypes

data Parameter = Parameter {
  paramName :: String,
  paramValue :: Value
  }
  | ParameterN {
  paramName :: String,
  paramValue :: Value
  }deriving (Show)

data TCPacket = TCPacket {
  tcAPID :: Word16,
  tcType :: Word8,
  tcSubType :: Word8,
  tcParameters :: [Parameter]
  }

The output should be a parsed packet (I am using cereal for this). The
packet structure can vary depending on the type and the configuration,
so I have a function which takes a TCPacket as an "input template" which
already has the correct list of parameters, which are then parsed:


parseTCPacket :: Word16 -> Word8 -> Word8 -> ByteString -> TCPacket
              -> TCPacket
parseTCPacket apid t st pktData tmplate =
    TCPacket apid t st params
  where
    tmplParams = tcParameters tmplate
    params     = zipWith (\p v -> p { paramValue = v }) tmplParams values'
    values     = map paramValue tmplParams
    values'    = binValues values pktData

getBinGet :: Value -> Get Value
getBinGet (ValUInt8 _)  = getWord8    >>= \x -> return $ ValUInt8  $ fromIntegral x
getBinGet (ValUInt16 _) = getWord16be >>= \x -> return $ ValUInt16 $ fromIntegral x

-- many more datatypes

getBinValues :: [Value] -> Get [Value]
getBinValues inp = mapM getBinGet inp


binValues :: [Value] -> ByteString -> [Value]
binValues inp bytes = case runGet (getBinValues inp) bytes of
    Left err -> throw $ DecodeError ("binValues: " ++ err)
    Right x  -> x


This works quite well and does what I want. Now I have the problem that
there are some parameters which can be so-called "group repeaters"
(the ParameterN constructor above). This means that if such a parameter
N is encountered during parsing (it has to be an int type), all
following parameters are repeated N times, with possible nesting.


So for example if the template (where the values are all 0) is like this:

[Parameter "Param1" (ValUInt32 0), ParameterN "N1" (ValUInt8 0),
 Parameter "Param2" (ValUInt16 0), ParameterN "N2" (ValUInt8 0),
 Parameter "Param3" (ValUInt8 0)]

This means there is a group consisting of the last three parameters
which is repeated N1 times, and that group contains another group which
is repeated N2 times.
If the binary data based on the template above were like this (datatypes
omitted):


10, 2, 439, 2, 12, 13, 65535, 2, 22, 23

then a valid packet after parsing would be:

[Parameter "Param1" (ValUInt32 10), ParameterN "N1" (ValUInt8 2),
 Parameter "Param2" (ValUInt16 439), ParameterN "N2" (ValUInt8 2),
 Parameter "Param3" (ValUInt8 12), Parameter "Param3" (ValUInt8 13),
 Parameter "Param2" (ValUInt16 65535), ParameterN "N2" (ValUInt8 2),
 Parameter "Param3" (ValUInt8 22), Parameter "Param3" (ValUInt8 23)]

Now I am a bit lost on how to implement such a parser. It would be much
easier if the structure were already encoded in the binary data, but
I have to stick to this template approach. I have a C++ parser which
does this, but of course it's very imperative and implemented a little
bit quirkily, so if anybody has an idea on how to proceed (cereal,
attoparsec, whatever), please tell me.
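
The closest I've come is something like the following untested sketch
using the types above (valueToInt would be a small helper for extracting
the repeat count from a Value; it is not defined here), but I'm not sure
it's the right approach:

import Control.Monad (replicateM)

-- expand a template against the byte stream; on a ParameterN, read its
-- count n and then parse the *remaining* template n times, so nesting
-- falls out of the recursion
parseParams :: [Parameter] -> Get [Parameter]
parseParams [] = return []
parseParams (p@(Parameter _ v) : rest) = do
  v' <- getBinGet v
  ps <- parseParams rest
  return (p { paramValue = v' } : ps)
parseParams (p@(ParameterN _ v) : rest) = do
  v' <- getBinGet v
  groups <- replicateM (valueToInt v') (parseParams rest)  -- valueToInt: placeholder
  return (p { paramValue = v' } : concat groups)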



lg,
Michael




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] mapM is supralinear?

2011-09-27 Thread Malcolm Wallace

On 26 Sep 2011, at 23:14, Arseniy Alekseyev wrote:

> Garbage collection takes amortized O(1) per allocation, doesn't it?

No.  For Mark-Sweep GC, the cost is proportional to

(H+R) / (H-R)
where H is the total heap size
  R is the reachable (i.e. live) heap

This formula amortises the cost of a collection over the amount of free space 
recovered.
For two-space copying collection, the cost is proportional to

R / ((H/2)-R)

In both cases, as R approaches H (or H/2), the cost of GC becomes rather large. 
 So in essence, the more live data you have, the more GC will cost.
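
To get a feel for the formula (illustrative numbers of my own, not
measurements):

  H = 4R:     (H+R) / (H-R)  =  5R / 3R       ~  1.7  per word recovered
  H = 1.25R:  (H+R) / (H-R)  =  2.25R / 0.25R  =  9

so the cost per word recovered grows without bound as R approaches H.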

Regards,
Malcolm


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-27 Thread Donn Cave
Quoth Chris Smith ,
...
> So that's what this is about: do we think of Float as an approximate
> real number type, or as an exact type with specific values.  If the
> latter, then "of course" you exclude the value that's larger than the
> upper range.  If the former, then using comparison operators like '<'
> implies a proof obligation that the result of the computation remains
> stable (loosely speaking, the function continuous) at that boundary
> despite small rounding error in either direction.  In that case,
> creating a language feature where, in the *normal* case of listing the
> last value you expect in the list, 0.3 (as an approximate real number)
> is excluded from this list just because of technicalities about the
> representation is an un-useful implementation, to say the least, and
> makes it far more difficult to satisfy that proof obligation.

It doesn't appear to me to be a technicality about the representation -
the value we're talking about excluding is not just represented as
greater than 0.3, it is greater than 0.3 when applied in computations.
For example you can subtract 0.3 and get a nonzero value (5.55e-17.)

Isn't the problem here with ranges really that they're computed in a
way that doesn't work for FP?  I mean, when I specify [0.1,0.2..0.5],
I really do mean to include 0.5 among those values, as you surmise -
so I imply a computation that actually produces that value, i.e.,
0.5::Double.  The disappointment with iterative addition is not that
its fifth value [should be] omitted because it's "technically" greater,
it's that range generation via iterative addition does not yield the
values I specified.

In other words, a range is a shorthand expression, where the correct
value is the one that would have resulted from evaluating the corresponding
list of constant expressions.  If we don't know how to generate that
list correctly for Double, then I suppose range shouldn't be supported,
but I think I'm seeing hints that we may indeed know how to do this?
(I sure don't! though of course for a specific case it can be easy,
e.g., (map (/ 10.0) [1..5]).)

Donn

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe