Re: [Haskell-cafe] strict, lazy, non-strict, eager

2011-12-28 Thread Albert Y. C. Lai
There are two flavours of MonadState, Control.Monad.State.Lazy and 
Control.Monad.State.Strict. There are two flavours of ByteString, 
Data.ByteString.Lazy and Data.ByteString (whose doc says "strict"). 
There are two flavours of I/O libraries, lazy and strict. There is 
advice of the form: "the program uses too much memory because it is too 
lazy; try making this part more strict". Eventually, someone will ask 
what "lazy" and "strict" are. Perhaps you answer this (but there are 
other answers, we'll see):


"Lazy" refers to such-and-such evaluation order. "Strict" refers to f ⊥ = 
⊥, but it doesn't specify evaluation order.
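To make the denotational reading concrete, here is a minimal sketch (the function names are my own, not from the thread):

```haskell
-- constOne ignores its argument, so constOne ⊥ = 1 ≠ ⊥: it is non-strict.
constOne :: Int -> Int
constOne _ = 1

-- forceFirst satisfies forceFirst ⊥ y = ⊥: it is strict in its first
-- argument, whatever evaluation order the compiler actually picks.
forceFirst :: Int -> Int -> Int
forceFirst x y = x `seq` y

main :: IO ()
main = print (constOne undefined)  -- prints 1; the ⊥ argument is never forced
```

Note that strictness here is a property of the function's value, not of how it is evaluated: f ⊥ = ⊥ says nothing about *when* the argument gets forced.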


That doesn't answer the question. It invites the question: why do 
libraries seem to make them a dichotomy, when the two words don't even 
talk about the same level? And the make-it-more-strict advice now 
becomes: the program uses too much memory because of the default, known 
evaluation order; try making this part use an unknown evaluation order, 
and this unknown order is supposed to use less memory because...?


I answer memory questions like this: the program uses too much memory 
because it is too lazy---or never mind "lazy", here is the current 
evaluation order of the specific compiler, and this is why it uses so 
much memory; now change this part to the other order, and it uses less 
memory. I wouldn't bring in the denotational level; there is no need.


(Sure, I use seq to change evaluation order, which may be overridden by 
optimizations in rare cases. So change my answer to: now add seq here, 
which normally uses the other order, but optimizations may override it 
in rare cases, so don't forget to test. Or use pseq.)
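A common concrete instance of this advice (my own sketch, not from the original mail) is replacing foldl with its seq-forced cousin foldl':

```haskell
import Data.List (foldl')

-- foldl builds a million-thunk chain before forcing a single addition;
-- foldl' (defined with seq) forces the accumulator at every step, so the
-- sum runs in constant space. Same denotation here, different evaluation
-- order, very different memory behaviour.
sumLazy, sumStrict :: [Int] -> Int
sumLazy   = foldl  (+) 0
sumStrict = foldl' (+) 0

main :: IO ()
main = print (sumStrict [1..1000000])  -- prints 500000500000
```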


I said "people, make up your mind". I do not mean that a whole group of 
people for the rest of their lives make up the same mind and choose the 
same semantics. I mean this: each individual, in each context, for 
each problem, just how many levels of semantics do you need to solve it? 
(Sure sure, I know contexts that need 4. What about daily programming 
problems: time, memory, I/O order?)


MigMit questioned me on the importance of using the words properly. 
Actually, I am fine with using the words improperly, too: "the program 
uses too much memory because it is too lazy", where "lazy" refers to 
such-and-such evaluation order; "try making this part more strict", 
where "strict" refers to so-and-so evaluation order.



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How hard is it to start a web startup using Haskell?

2011-12-28 Thread Gracjan Polak
Haisheng Wu freizl at gmail.com writes:

 
 
 Turns out that those guys doing start-up with Haskell are already expert
 at Haskell. Hence choosing Haskell is more straightforward.

We hope to become experts while doing Haskell.

 I'm thinking of using Haskell since it looks cool and beautiful.

Sense of internal taste is very important in engineering. And this is true
whatever language you choose.

 However I have little experience and will move slowly during a certain
 beginning period. This does not sound good for a startup company.
 
 Comparing with Django in Python, Rails in Ruby, yesod and snap looks
 not that mature.

Add the whole of Hackage to the list and suddenly Haskell looks on par
with most of the other things out there. 

 
 Also, for instance, I'd like to build up a CRM application company, I
could leverage some open source projects in other languages.  In Haskell, we
need to build from scratch basically.

And that may of course be a downside. If you happen to do what most other
people are doing out there, then using premade components is a good option.

 Appreciate your suggestions/comments.

There was a reason 37signals decided to go with a little-known language and
create Rails from scratch. There was a reason Paul G. started Viaweb in Lisp.
There was a reason that Apple things are programmed in Objective-C instead of
plain C or plain Smalltalk. There was a reason that made Larry create Perl
instead of sticking to awk+sed.

Of course you are the one to decide how many of these reasons apply to your
situation.

Good luck anyway!

-- 
Gracjan



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] MIDI-controlled application

2011-12-28 Thread Tim Baumgartner
Hi Tom!

[...]
 
 
  Currently I'm using a monad that combines Parsec (with MIDI event
 stream) and a Writer (that writes commands that should result in IO). It's
 done in a way that during running the monad, many parses can be done and
 failing parses roll back the parser state so that a new parse can be tried.
 


 Care to share your code?


Yes, great! Have you worked with MIDI in Haskell? Perhaps
parsing/recognizing it? I think it will take a few more days (hopefully not
weeks) until I know what will be the foundation for my app. But then I will
create a project online and send you a message.
In case anybody has time to look at it, I just pasted my aforementioned
monad on hpaste. I thought about it some more and came to the conclusion
that for user-defined triggers (aka parsers), this approach is probably
suboptimal...
After Heinrich's suggestion, I worked through the slot machine example from
reactive-banana. It's a great introduction to FRP for me. The declarative
style is very appealing. I will see how it fits with my ideas.

Some of my code (though probably obsolete): http://hpaste.org/55795

Regards
Tim
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] strict, lazy, non-strict, eager

2011-12-28 Thread Yves Parès
When I explain to people what strict/lazy/eager mean, I often say something
like:

- Adjectives *eager* and *lazy* apply *only* to a global evaluation method: *
eager* is C's evaluation style and *lazy* is that of Haskell.
- Adjective *strict* can be applied *both* to a global evaluation method and
to a specific function: if applied to an eval method then it's a synonym of
strict, and if applied to a function f it means 'f ⊥ = ⊥' (which I detail
a little more), which is true for the strict State monad for instance (>>=
would not allow its left argument to return ⊥).

Thus explaining why datatypes such as State or ByteString exist in *strict *
and* lazy *flavours.
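The State distinction can be made concrete. Below is a hand-rolled lazy State mirroring Control.Monad.State.Lazy (my own sketch, no packages needed): because >>= matches the intermediate (value, state) pair with a lazy pattern, a productive self-referential action terminates where the strict flavour would loop.

```haskell
-- A minimal lazy State monad, in the style of Control.Monad.State.Lazy.
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f m = State $ \s -> let ~(a, s') = runState m s in (f a, s')

instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  mf <*> mx = State $ \s ->
    let ~(f, s')  = runState mf s
        ~(x, s'') = runState mx s'
    in (f x, s'')

instance Monad (State s) where
  -- The lazy pattern ~(a, s') is the whole point: replacing it with a
  -- strict case-match (as the Strict flavour does) makes `ones` diverge.
  m >>= k = State $ \s -> let ~(a, s') = runState m s in runState (k a) s'

ones :: State () [Int]
ones = ones >>= \xs -> return (1 : xs)

main :: IO ()
main = print (take 3 (fst (runState ones ())))  -- prints [1,1,1]
```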


2011/12/28 Albert Y. C. Lai tre...@vex.net

 There are two flavours of MonadState, Control.Monad.State.Lazy and
 Control.Monad.State.Strict. There are two flavours of ByteString,
 Data.ByteString.Lazy and Data.Bytestring (whose doc says strict). There
 are two flavours of I/O libraries, lazy and strict. There are advices of
 the form: the program uses too much memory because it is too lazy; try
 making this part more strict. Eventually, someone will ask what are lazy
 and strict. Perhaps you answer this (but there are other answers, we'll
 see):

 lazy refers to such-and-such evaluation order. strict refers to f ⊥ = ⊥,
 but it doesn't specify evaluation order.

 That doesn't answer the question. That begs the question: Why do libraries
 seem to make them a dichotomy, when they don't even talk about the same
 level? And the make-it-more-strict advice now becomes: the program uses
 too much memory because of the default, known evaluation order; try making
 this part use an unknown evaluation order, and this unknown is supposed to
 use less memory because...?

 I answer memory questions like this: the program uses too much memory
 because it is too lazy---or nevermind lazy, here is the current
 evaluation order of the specific compiler, this is why it uses much memory;
 now change this part to the other order, it uses less memory. I wouldn't
 bring in the denotational level; there is no need.

 (Sure, I use seq to change evaluation order, which may be overriden by
 optimizations in rare cases. So change my answer to: now add seq here,
 which normally uses the other order, but optimizations may override it in
 rare cases, so don't forget to test. Or use pseq.)

 I said people, make up your mind. I do not mean a whole group of people
 for the rest of their lives make up the same mind and choose the same one
 semantics. I mean this: Each individual, in each context, for each problem,
 just how many levels of semantics do you need to solve it? (Sure sure, I
 know contexts that need 4. What about daily programming problems: time,
 memory, I/O order?)

 MigMit questioned me on the importance of using the words properly.
 Actually, I am fine with using the words improperly, too: the program uses
 too much memory because it is too lazy, lazy refers to such-and-such
 evaluation order; try making this part more strict, strict refers to
 so-and-so evaluation order.



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] strict, lazy, non-strict, eager

2011-12-28 Thread Yves Parès
 - Adjective strict can be applied *both* to a global evaluation method
and a specific function: if applied to an eval method then it's a synonym
of strict

I of course meant a synonym of *eager*. Sorry.

I admit this definition might be a little liberal, but it helps understanding.


2011/12/28 Yves Parès limestr...@gmail.com

 When I explain to people what strict/lazy/eager mean, I often say
 something like :

 - Adjectives eager and lazy apply *only* to a global evaluation method: *
 eager* is C evaluation style and *lazy* is that of Haskell.
 - Adjective strict can be applied *both* to a global evaluation method
 and a specific function: if applied to an eval method then it's a synonym
 of strict, and if applied to a function f it means 'f ⊥ = ⊥' (which I
 detail a little more), which is true for strict State monad for istance
 (= would not allow its left argument to return ⊥).

 Thus explaining why datatypes such as State or Bytestring exist in *strict
 *and* lazy *flavours.


 2011/12/28 Albert Y. C. Lai tre...@vex.net

 There are two flavours of MonadState, Control.Monad.State.Lazy and
 Control.Monad.State.Strict. There are two flavours of ByteString,
 Data.ByteString.Lazy and Data.Bytestring (whose doc says strict). There
 are two flavours of I/O libraries, lazy and strict. There are advices of
 the form: the program uses too much memory because it is too lazy; try
 making this part more strict. Eventually, someone will ask what are lazy
 and strict. Perhaps you answer this (but there are other answers, we'll
 see):

 lazy refers to such-and-such evaluation order. strict refers to f ⊥ = ⊥,
 but it doesn't specify evaluation order.

 That doesn't answer the question. That begs the question: Why do
 libraries seem to make them a dichotomy, when they don't even talk about
 the same level? And the make-it-more-strict advice now becomes: the
 program uses too much memory because of the default, known evaluation
 order; try making this part use an unknown evaluation order, and this
 unknown is supposed to use less memory because...?

 I answer memory questions like this: the program uses too much memory
 because it is too lazy---or nevermind lazy, here is the current
 evaluation order of the specific compiler, this is why it uses much memory;
 now change this part to the other order, it uses less memory. I wouldn't
 bring in the denotational level; there is no need.

 (Sure, I use seq to change evaluation order, which may be overriden by
 optimizations in rare cases. So change my answer to: now add seq here,
 which normally uses the other order, but optimizations may override it in
 rare cases, so don't forget to test. Or use pseq.)

 I said people, make up your mind. I do not mean a whole group of people
 for the rest of their lives make up the same mind and choose the same one
 semantics. I mean this: Each individual, in each context, for each problem,
 just how many levels of semantics do you need to solve it? (Sure sure, I
 know contexts that need 4. What about daily programming problems: time,
 memory, I/O order?)

 MigMit questioned me on the importance of using the words properly.
 Actually, I am fine with using the words improperly, too: the program uses
 too much memory because it is too lazy, lazy refers to such-and-such
 evaluation order; try making this part more strict, strict refers to
 so-and-so evaluation order.



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] If you'd design a Haskell-like language, what would you do different?

2011-12-28 Thread AUGER Cédric
On Mon, 26 Dec 2011 19:30:20 -0800,
Alexander Solla alex.so...@gmail.com wrote:

 So we give meaning to syntax through our semantics.  That is what this
 whole conversation is all about.  I am proposing we give Haskell
 bottoms semantics that bring it in line with the bottoms from various
 theories including lattice theory, the theory of sets, the theory of
 logic, as opposed to using denotational semantics' bottom semantic,
 which is unrealistic for a variety of reasons.  Haskell bottoms can't
 be compared, due to Rice's theorem.  Haskell bottoms cannot be given
 an interpretation as a Haskell value.  

There is no problem with Rice's theorem: you can perfectly well compare
bottoms with each other; all you cannot do is make this comparison
function both correct and complete in Haskell.

By correct, I mean (⊥==⊥)≠False, and by complete, I mean (⊥==⊥)≠⊥.
But incompleteness is very common in Haskell, and correctness is the
only real worry when computing (comparing two non-terminating
values and expecting the comparison to terminate is a mistake on the
programmer's part).

On the purely theoretical side (so not in Haskell), there is no problem
either. For instance, the halting problem can be turned into a
mathematical function φ (i.e., φ ∈ ℘(TM×{true,false}) satisfying the
criteria of a total functional relation); the only thing is that this
set is not computable.

 as opposed to using denotational semantics' bottom semantic,
 which is unrealistic for a variety of reasons.  

Indeed, it is realistic, since in the real world things are not always
expected to terminate, so we have to take this into account in the
semantics. Don't forget that semantics is something above Haskell
evaluation; it lives in the purely mathematical world and can rely on
non-computable things as long as they are consistent with the theory.

That is, you cannot define a complete and correct function in pure
Haskell which takes a String that can be interpreted as a Haskell
function, and returns a String that can be interpreted as the semantics
of that Haskell function.

(Of course the following function would be correct:
 semantics :: String -> String
 semantics s = semantics s

 and this one would be complete:
 semantics :: String -> String
 semantics s = []

 and there are many possible functions which are complete and correct
 for many inputs, and many possible functions which are correct and
 terminating for many inputs.)

 Haskell bottoms cannot be given an interpretation as a Haskell
 value.  

As a lot of others said, Haskell bottoms ARE values, so they have an
interpretation as a Haskell value (itself).

If a whole community doesn't agree with you, then either you are right
and know it, and should probably run away from this community or publish
a well-argued book (you won't overturn in 2 lines what the community has
believed for years); or it is the other case (you are right but not
sure, or simply you are wrong), and then don't try to impose your point
of view; rather, ask questions about the points you don't understand.
To me, your behaviour here can be considered trolling.

Note that I don't say the semantics you have in mind is wrong; I only
say that the semantics of ⊥ as described in the community's papers is
not wrong; two different formal semantics can exist which give
the same observational semantics.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] strict, lazy, non-strict, eager

2011-12-28 Thread Thiago Negri
I got a glimpse of understanding of what you are talking about after
reading the wiki [1].

It is still difficult to reason about the difference between lazy and
non-strict without taking a look at the text.

I hope somebody will make an effort to better explain the differences
and persist it in the wiki or in a feed (maybe planet haskell).

[1] http://www.haskell.org/haskellwiki/Lazy_vs._non-strict

2011/12/28 Yves Parès limestr...@gmail.com:
 - Adjective strict can be applied both to a global evaluation method and a
 specific function: if applied to an eval method then it's a synonym of
 strict

 I of course meant a synonym of eager. Sorry.

 I admit this definition might be a little liberal, but it helps understand.



 2011/12/28 Yves Parès limestr...@gmail.com

 When I explain to people what strict/lazy/eager mean, I often say
 something like :

 - Adjectives eager and lazy apply only to a global evaluation method:
 eager is C evaluation style and lazy is that of Haskell.
 - Adjective strict can be applied both to a global evaluation method and a
 specific function: if applied to an eval method then it's a synonym of
 strict, and if applied to a function f it means 'f ⊥ = ⊥' (which I detail
 a little more), which is true for strict State monad for istance (= would
 not allow its left argument to return ⊥).

 Thus explaining why datatypes such as State or Bytestring exist in strict
 and lazy flavours.


 2011/12/28 Albert Y. C. Lai tre...@vex.net

 There are two flavours of MonadState, Control.Monad.State.Lazy and
 Control.Monad.State.Strict. There are two flavours of ByteString,
 Data.ByteString.Lazy and Data.Bytestring (whose doc says strict). There
 are two flavours of I/O libraries, lazy and strict. There are advices of the
 form: the program uses too much memory because it is too lazy; try making
 this part more strict. Eventually, someone will ask what are lazy and
 strict. Perhaps you answer this (but there are other answers, we'll see):

 lazy refers to such-and-such evaluation order. strict refers to f ⊥ = ⊥,
 but it doesn't specify evaluation order.

 That doesn't answer the question. That begs the question: Why do
 libraries seem to make them a dichotomy, when they don't even talk about the
 same level? And the make-it-more-strict advice now becomes: the program
 uses too much memory because of the default, known evaluation order; try
 making this part use an unknown evaluation order, and this unknown is
 supposed to use less memory because...?

 I answer memory questions like this: the program uses too much memory
 because it is too lazy---or nevermind lazy, here is the current evaluation
 order of the specific compiler, this is why it uses much memory; now change
 this part to the other order, it uses less memory. I wouldn't bring in the
 denotational level; there is no need.

 (Sure, I use seq to change evaluation order, which may be overriden by
 optimizations in rare cases. So change my answer to: now add seq here, which
 normally uses the other order, but optimizations may override it in rare
 cases, so don't forget to test. Or use pseq.)

 I said people, make up your mind. I do not mean a whole group of people
 for the rest of their lives make up the same mind and choose the same one
 semantics. I mean this: Each individual, in each context, for each problem,
 just how many levels of semantics do you need to solve it? (Sure sure, I
 know contexts that need 4. What about daily programming problems: time,
 memory, I/O order?)

 MigMit questioned me on the importance of using the words properly.
 Actually, I am fine with using the words improperly, too: the program uses
 too much memory because it is too lazy, lazy refers to such-and-such
 evaluation order; try making this part more strict, strict refers to
 so-and-so evaluation order.



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] strict, lazy, non-strict, eager

2011-12-28 Thread Thiago Negri
I did read the other wiki pages, and I guess I finally got it.
Anyone who still feels lost, take a look at them [1,2,3,4].

If the HaskellWiki is right, then the Wikipedia article on evaluation
strategies [5] is a bit misleading, as it classifies optimistic
evaluation under nondeterministic strategies when it should be under
the non-strict section.
Also, it conflates the word "evaluation" with "strict" and "non-strict", e.g.:
"In non-strict evaluation, arguments to a function are not evaluated
unless they are actually used in the evaluation of the function body."

This is lazy evaluation. A non-strict implementation may evaluate the
arguments of a function before calling it. The main difference is:

Strict semantics: the value of the function is 100% dependent on the
values of all its arguments; any argument yielding
bottom/error/non-termination/exception will result in the same
_error-value_ for the function.

A correct implementation of strict semantics is eager evaluation,
where all the arguments are evaluated before evaluating the function.


Non-strict semantics: the value of the function may not need its
arguments' values to be produced; if the function can produce its
value without needing an argument's value, the function evaluates
correctly even if the unused argument yields
bottom/error/non-termination/exception.

Lazy evaluation is one implementation of non-strict semantics, where
the arguments are evaluated only when they are needed.

Although Haskell (GHC) doesn't do this, (0 * ⊥) may yield 0 in a
non-strict implementation.
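A quick check of the distinction in GHC (my own toy function, not from the thread):

```haskell
-- firstArg never touches its second argument, so it is non-strict in it:
-- firstArg 0 ⊥ = 0. By contrast, GHC's (*) on Int is strict in both
-- arguments, so 0 * undefined raises the undefined error instead of
-- yielding 0, exactly as described above.
firstArg :: Int -> Int -> Int
firstArg x _ = x

main :: IO ()
main = print (firstArg 0 undefined)  -- prints 0
```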


Did I get it right?


[1] http://www.haskell.org/haskellwiki/Non-strict_semantics
[2] http://www.haskell.org/haskellwiki/Strict_semantics
[3] http://www.haskell.org/haskellwiki/Lazy_evaluation
[4] http://www.haskell.org/haskellwiki/Eager_evaluation
[5] http://en.wikipedia.org/wiki/Evaluation_strategy

2011/12/28 Thiago Negri evoh...@gmail.com:
 I got a glimpse of understanding of what you are talking about after
 reading the wiki [1].

 Still difficult to reason about the difference between lazy and
 non-strict without taking a look at the text.

 I hope somebody will make an effort to better explain the differences
 and persist it in the wiki or in a feed (maybe planet haskell).

 [1] http://www.haskell.org/haskellwiki/Lazy_vs._non-strict

 2011/12/28 Yves Parès limestr...@gmail.com:
 - Adjective strict can be applied both to a global evaluation method and a
 specific function: if applied to an eval method then it's a synonym of
 strict

 I of course meant a synonym of eager. Sorry.

 I admit this definition might be a little liberal, but it helps understand.



 2011/12/28 Yves Parès limestr...@gmail.com

 When I explain to people what strict/lazy/eager mean, I often say
 something like :

 - Adjectives eager and lazy apply only to a global evaluation method:
 eager is C evaluation style and lazy is that of Haskell.
 - Adjective strict can be applied both to a global evaluation method and a
 specific function: if applied to an eval method then it's a synonym of
 strict, and if applied to a function f it means 'f ⊥ = ⊥' (which I detail
 a little more), which is true for strict State monad for istance (= would
 not allow its left argument to return ⊥).

 Thus explaining why datatypes such as State or Bytestring exist in strict
 and lazy flavours.


 2011/12/28 Albert Y. C. Lai tre...@vex.net

 There are two flavours of MonadState, Control.Monad.State.Lazy and
 Control.Monad.State.Strict. There are two flavours of ByteString,
 Data.ByteString.Lazy and Data.Bytestring (whose doc says strict). There
 are two flavours of I/O libraries, lazy and strict. There are advices of 
 the
 form: the program uses too much memory because it is too lazy; try making
 this part more strict. Eventually, someone will ask what are lazy and
 strict. Perhaps you answer this (but there are other answers, we'll see):

 lazy refers to such-and-such evaluation order. strict refers to f ⊥ = ⊥,
 but it doesn't specify evaluation order.

 That doesn't answer the question. That begs the question: Why do
 libraries seem to make them a dichotomy, when they don't even talk about 
 the
 same level? And the make-it-more-strict advice now becomes: the program
 uses too much memory because of the default, known evaluation order; try
 making this part use an unknown evaluation order, and this unknown is
 supposed to use less memory because...?

 I answer memory questions like this: the program uses too much memory
 because it is too lazy---or nevermind lazy, here is the current 
 evaluation
 order of the specific compiler, this is why it uses much memory; now change
 this part to the other order, it uses less memory. I wouldn't bring in the
 denotational level; there is no need.

 (Sure, I use seq to change evaluation order, which may be overriden by
 optimizations in rare cases. So change my answer to: now add seq here, 
 which
 normally uses the other order, but optimizations may override it in rare
 cases, so don't forget 

Re: [Haskell-cafe] Reifying case expressions [was: Re: On stream processing, and a new release of timeplot coming]

2011-12-28 Thread Brent Yorgey
On Mon, Dec 26, 2011 at 12:32:13PM +0400, Eugene Kirpichov wrote:
 2011/12/26 Gábor Lehel illiss...@gmail.com
 
  On Sun, Dec 25, 2011 at 9:19 PM, Eugene Kirpichov ekirpic...@gmail.com
  wrote:
   Hello Heinrich,
  
   Thanks, that's sure some food for thought!
  
   A few notes:
   * This is indeed analogous to Iteratees. I tried doing the same with
   Iteratees but failed, so I decided to put together something simple of my
   own.
   * The Applicative structure over this stuff is very nice. I was thinking,
   what structure to put on - and Applicative seems the perfect fit. It's
  also
   possible to implement Arrows - but I once tried and failed; however, I
  was
   trying that for a more complex stream transformer datatype (a hybrid of
   Iteratee and Enumerator).
   * StreamSummary is trivially a bifunctor. I actually wanted to make it an
   instance of Bifunctor, but it was in the category-extras package and I
   hesitated to reference this giant just for this purpose :) Probably
   bifunctors should be in prelude.
 
  Edward Kmett has been splitting that up into a variety of smaller
  packages, for instance:
 
  http://hackage.haskell.org/package/bifunctors
 
 Actually it's not a bifunctor - it's a functor in one argument and
 contrafunctor in the other.
 Is there a name for such a structure?

Bifunctor is usually used for such things as well.  Data.Bifunctor
only covers bifunctors which are covariant in both arguments.
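A minimal sketch of such a structure (my own names, no packages; later libraries call this a Profunctor): covariant mapping over the result and contravariant mapping over the input, much like the StreamSummary discussed upthread.

```haskell
-- A Summary consumes inputs of type i and produces a result of type o:
-- covariant in o (mapOutput) but contravariant in i (mapInput).
newtype Summary i o = Summary { runSummary :: [i] -> o }

-- Contravariant: to accept i' inputs, pre-process them into i.
mapInput :: (i' -> i) -> Summary i o -> Summary i' o
mapInput f (Summary g) = Summary (g . map f)

-- Covariant: post-process the result.
mapOutput :: (o -> o') -> Summary i o -> Summary i o'
mapOutput f (Summary g) = Summary (f . g)

main :: IO ()
main = print (runSummary summarize ["ab", "cde"])
  where
    -- sum of string lengths, doubled: (2 + 3) * 2
    summarize :: Summary String Int
    summarize = mapOutput (* 2) (mapInput length (Summary sum))
```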

-Brent

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] strict, lazy, non-strict, eager

2011-12-28 Thread Jon Fairbairn
Thiago Negri evoh...@gmail.com writes:

 Lazy evaluation is one implementation of non-strict semantics, where
 the arguments are evaluated only when they are needed.

I would say this:

* non-strict semantics require that no argument is evaluated
  unless needed.

* lazy evaluation is an implementation of non-strict semantics
  in which no argument is evaluated more than once.

As an example of something other than lazy, normal order
reduction is non-strict, but arguments may be evaluated multiple
times.
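The "no argument is evaluated more than once" point can be observed with Debug.Trace (a sketch; trace writes its message to stderr):

```haskell
import Debug.Trace (trace)

main :: IO ()
main = do
  let x = trace "forcing x" (2 + 2 :: Int)
  -- Under lazy evaluation, x is a shared thunk: "forcing x" appears once
  -- on stderr even though x is used twice below. Under plain normal-order
  -- reduction by textual substitution, it could be evaluated twice.
  print (x + x)  -- prints 8
```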

-- 
Jón Fairbairn jon.fairba...@cl.cam.ac.uk



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] strict, lazy, non-strict, eager

2011-12-28 Thread Thiago Negri
2011/12/28 Jon Fairbairn jon.fairba...@cl.cam.ac.uk:
 * non-strict semantics require that no argument is evaluated
  unless needed.

That's not the case on optimistic evaluation.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] strict, lazy, non-strict, eager

2011-12-28 Thread Jon Fairbairn
Thiago Negri evoh...@gmail.com writes:

 2011/12/28 Jon Fairbairn jon.fairba...@cl.cam.ac.uk:
 * non-strict semantics require that no argument is evaluated
  unless needed.

 That's not the case on optimistic evaluation.

Oops, yes.  I should have said something like “non-strict
semantics require that evaluation should terminate if there is a
terminating reduction order” but went for something snappier
(and missed).

-- 
Jón Fairbairn jon.fairba...@cl.cam.ac.uk
http://www.chaos.org.uk/~jf/Stuff-I-dont-want.html  (updated 2010-09-14)


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] On the purity of Haskell

2011-12-28 Thread Steve Horne
This is just my view on whether Haskell is pure, being offered up for 
criticism. I haven't seen this view explicitly articulated anywhere 
before, but it does seem to be implicit in a lot of explanations - in 
particular the description of monads in SPJ's "Tackling the Awkward 
Squad". I'm entirely focused on the IO monad here, but aware that it's 
just one concrete case of an abstraction.


Warning - it may look like trolling at various points. Please keep going 
to the end before making a judgement.


To make the context explicit, there are two apparently conflicting 
viewpoints on Haskell...


1. The whole point of the IO monad is to support programming with
   side-effecting actions - ie impurity.
2. The IO monad is just a monad - a generic type (IO actions), a couple
   of operators (primarily return and bind) and some rules - within a
   pure functional language. You can't create impurity by taking a
   subset of a pure language.

My view is that both of these are correct, each from a particular point 
of view. Furthermore, by essentially the same arguments, C is also both 
an impure language and a pure one.


See what I mean about the trolling thing? I'm actually quite serious 
about this, though - and by the end I think Haskell advocates will 
generally approve.


First assertion... Haskell is a pure functional language, but only from 
the compile-time point of view. The compiler manipulates and composes IO 
actions (among other things). The resulting IO actions are ultimately 
swallowed by unsafePerformIO or returned from main. However, Haskell is 
an impure side-effecting language from the run-time point of view - when 
the composed actions are executed. Impurity doesn't magically spring 
from the ether - it results from the compiler's translation of IO 
actions to executable code and the execution of that code.


In this sense, IO actions are directly equivalent to the AST nodes in a 
C compiler. A C compiler can be written in a purely functional way - in 
principle it's just a pure function that accepts a string (source code) 
and returns another string (executable code). I'm fudging issues like 
separate compilation and #include, but all of these can be resolved in 
principle in a pure functional way. Everything a C compiler does at 
compile time is therefore, in principle, purely functional.


In fact, in the implementation of Haskell compilers, IO actions almost 
certainly *are* ASTs. Obviously there's some interesting aspects to that 
such as all the partially evaluated and unevaluated functions. But even 
a partially evaluated function has a representation within a compiler 
that can be considered an AST node, and even AST nodes within a C 
compiler may represent partially evaluated functions.


Even the return and bind operators are there within the C compiler in a 
sense, similar to the do notation in Haskell. Values are converted into 
actions. Actions are sequenced. Though the more primitive form isn't 
directly available to the programmer, it could easily be explicitly 
present within the compiler.
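To make the actions-as-AST reading concrete, here is a toy model (my own 
sketch, not GHC's actual representation): actions are ordinary data 
constructors, composing them is pure, and nothing happens until a separate 
interpreter walks the tree.

```haskell
-- A toy model of IO actions as an AST. Composing actions is pure;
-- effects exist only in whatever interpreter walks the tree.
data Action
  = PutChar Char Action        -- emit a character, then continue
  | GetChar (Char -> Action)   -- read a character, continue with it
  | Done

-- Pure composition: building a bigger tree out of smaller ones.
echoTwice :: Action
echoTwice = GetChar (\c -> PutChar c (PutChar c Done))

-- A pure interpreter over a scripted input; returns the output string.
run :: String -> Action -> String
run _ Done = ""
run input (PutChar c k) = c : run input k
run (c:cs) (GetChar f)  = run cs (f c)
run []     (GetChar _)  = ""

main :: IO ()
main = putStrLn (run "a" echoTwice)  -- prints "aa"
```

Nothing in `echoTwice` performs I/O; `run` happens to be a pure interpreter, 
but a compiler back end could just as well translate the same tree to 
side-effecting machine code.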


What about variables? What about referential transparency?

Well, to a compiler writer (and equally for this argument) an identifier 
is not the same thing as the variable it references.


One way to model the situation is that for every function in a C 
program, all explicit parameters are implicitly within the IO monad. 
There is one implicit parameter too - a kind of IORef to the whole 
system memory. Identifiers have values which identify where the variable 
is within the big implicit IORef. So all the manipulation of identifiers 
and their reference-like values is purely functional. Actual handling of 
variables stored within the big implicit IORef is deferred until run-time.


So once you accept that there's an implicit big IORef parameter to every 
function, by the usual definition of referential transparency, C is as 
transparent as Haskell. The compile-time result of each function is 
completely determined by its (implicit and explicit) parameters - it's 
just that that result is typically a way to look up the run-time result 
within the big IORef later.
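The big-implicit-IORef model can be sketched directly in Haskell (all names 
below - `Mem`, `CFun`, `deref`, `assign` - are my own illustrative 
inventions, not from any real compiler): every "C function" becomes a pure 
transformer of the whole memory.

```haskell
-- Model memory as a list of cells; every "C function" implicitly
-- takes the whole memory and returns an updated copy.
type Addr   = Int
type Mem    = [Int]
type CFun a = Mem -> (a, Mem)

-- Reading through an identifier's address: pure lookup.
deref :: Addr -> CFun Int
deref a mem = (mem !! a, mem)

-- "Mutation" is a pure value describing an update.
assign :: Addr -> Int -> CFun ()
assign a v mem = ((), take a mem ++ [v] ++ drop (a + 1) mem)

-- Sequencing, playing the role of bind.
andThen :: CFun a -> (a -> CFun b) -> CFun b
andThen m f mem = let (x, mem') = m mem in f x mem'

-- "x = x + 1" as a pure description of a read-modify-write.
incr :: Addr -> CFun ()
incr a = deref a `andThen` \v -> assign a (v + 1)

main :: IO ()
main = print (snd (incr 0 [41, 0]))  -- prints [42,0]
```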



What's different about Haskell relative to C therefore...

1. The style of the AST is different. It still amounts to the same
   thing in this argument, but the fact that most AST nodes are simply
   partially-evaluated functions has significant practical
   consequences, especially with laziness mixed in too. There's a deep
   connection between the compile-time and run-time models (contrast
   C++ templates).
2. The IO monad is explicit in Haskell - side-effects are only
   permitted (even at run-time) where the programmer has explicitly
   opted to allow them.
3. IORefs are explicit in Haskell - instead of always having one you
   can have none, one or many. This is relevant to an alternative
   definition of referential transparency. Politicians aren't
   considered transparent when they bury the relevant in a mass 

Re: [Haskell-cafe] On the purity of Haskell

2011-12-28 Thread AUGER Cédric
On Wed, 28 Dec 2011 17:39:52 +,
Steve Horne sh006d3...@blueyonder.co.uk wrote:

 This is just my view on whether Haskell is pure, being offered up for 
 criticism. I haven't seen this view explicitly articulated anywhere 
 before, but it does seem to be implicit in a lot of explanations - in 
 particular the description of Monads in SBC's Tackling the Awkward 
 Squad. I'm entirely focused on the IO monad here, but aware that
 it's just one concrete case of an abstraction.
 
 Warning - it may look like trolling at various points. Please keep
 going to the end before making a judgement.

It is not yet a troll for me as I am a newbie ^^

 To make the context explicit, there are two apparently conflicting 
 viewpoints on Haskell...
 
  1. The whole point of the IO monad is to support programming with
 side-effecting actions - ie impurity.
  2. The IO monad is just a monad - a generic type (IO actions), a
 couple of operators (primarily return and bind) and some rules -
 within a pure functional language. You can't create impurity by
 taking a subset of a pure language.
 
 My view is that both of these are correct, each from a particular
 point of view. Furthermore, by essentially the same arguments, C is
 also both an impure language and a pure one.
 
 See what I mean about the trolling thing? I'm actually quite serious 
 about this, though - and by the end I think Haskell advocates will 
 generally approve.
 
 First assertion... Haskell is a pure functional language, but only
 from the compile-time point of view. The compiler manipulates and
 composes IO actions (among other things). The final resulting IO
 actions are finally swallowed by unsafePerformIO or returned from
 main. However, Haskell is an impure side-effecting language from the
 run-time point of view - when the composed actions are executed.
 Impurity doesn't magically spring from the ether - it results from
 the translation by the compiler of IO actions to executable code and
 the execution of that code.
 
 In this sense, IO actions are directly equivalent to the AST nodes in
 a C compiler. A C compiler can be written in a purely functional way
 - in principle it's just a pure function that accepts a string
 (source code) and returns another string (executable code). I'm
 fudging issues like separate compilation and #include, but all of
 these can be resolved in principle in a pure functional way.
 Everything a C compiler does at compile time is therefore, in
 principle, purely functional.
 
 In fact, in the implementation of Haskell compilers, IO actions
 almost certainly *are* ASTs. Obviously there's some interesting
 aspects to that such as all the partially evaluated and unevaluated
 functions. But even a partially evaluated function has a
 representation within a compiler that can be considered an AST node,
 and even AST nodes within a C compiler may represent partially
 evaluated functions.
 
 Even the return and bind operators are there within the C compiler in
 a sense, similar to the do notation in Haskell. Values are converted
 into actions. Actions are sequenced. Though the more primitive form
 isn't directly available to the programmer, it could easily be
 explicitly present within the compiler.
 
 What about variables? What about referential transparency?
 
 Well, to a compiler writer (and equally for this argument) an
 identifier is not the same thing as the variable it references.
 
 One way to model the situation is that for every function in a C 
 program, all explicit parameters are implicitly within the IO monad. 
 There is one implicit parameter too - a kind of IORef to the whole 
 system memory. Identifiers have values which identify where the
 variable is within the big implicit IORef. So all the manipulation of
 identifiers and their reference-like values is purely functional.
 Actual handling of variables stored within the big implicit IORef is
 deferred until run-time.
 
 So once you accept that there's an implicit big IORef parameter to
 every function, by the usual definition of referential transparency,
 C is as transparent as Haskell. The compile-time result of each
 function is completely determined by its (implicit and explicit)
 parameters - it's just that that result is typically a way to look up
 the run-time result within the big IORef later.

Now as you ask here is my point of view:

IO monad doesn't make the language impure for me, since you can give
another implementation which is perfectly pure and which has the same
behaviour (although completely unrealistic):

-- An alternative to the IO monad

data IO_ = IO_
  { systemFile :: String -> String    -- ^ get the contents of the file
  , handlers   :: Handler -> (Int, String)
                                      -- ^ get the offset of the handler
  }

newtype IO a = IO { trans :: IO_ -> (a, IO_) }

instance Monad IO where
  m >>= f  = IO { trans = \io1 -> let (a, io2) = trans m io1
                                  in trans (f a) io2 }
  return a = IO { trans = \io_ -> (a, io_) }
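The same idea in a self-contained toy form (`World`, `Pure`, `ret`, `bindP` 
are my own illustrative names, simplified from the sketch above): the 
"world" is threaded through explicitly, so running an "effectful" program is 
just function application.

```haskell
-- A miniature, purely functional "IO": the world is an explicit value.
newtype World  = World { fileContents :: String }
newtype Pure a = Pure { trans :: World -> (a, World) }

-- return: yield a value, leave the world untouched.
ret :: a -> Pure a
ret a = Pure (\w -> (a, w))

-- bind: thread the world through two computations in sequence.
bindP :: Pure a -> (a -> Pure b) -> Pure b
bindP m f = Pure (\w -> let (a, w') = trans m w in trans (f a) w')

-- A primitive "effect": reading the (single) file of this toy world.
readFileP :: Pure String
readFileP = Pure (\w -> (fileContents w, w))

main :: IO ()
main = print (fst (trans (readFileP `bindP` \s -> ret (length s))
                         (World "abc")))  -- prints 3
```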


Re: [Haskell-cafe] On the purity of Haskell

2011-12-28 Thread Heinrich Apfelmus

Steve Horne wrote:
This is just my view on whether Haskell is pure, being offered up for 
criticism. I haven't seen this view explicitly articulated anywhere 
before, but it does seem to be implicit in a lot of explanations - in 
particular the description of Monads in SBC's Tackling the Awkward 
Squad. I'm entirely focused on the IO monad here, but aware that it's 
just one concrete case of an abstraction.


Warning - it may look like trolling at various points. Please keep going 
to the end before making a judgement.


To make the context explicit, there are two apparently conflicting 
viewpoints on Haskell...


1. The whole point of the IO monad is to support programming with
   side-effecting actions - ie impurity.
2. The IO monad is just a monad - a generic type (IO actions), a couple
   of operators (primarily return and bind) and some rules - within a
   pure functional language. You can't create impurity by taking a
   subset of a pure language.

My view is that both of these are correct, each from a particular point 
of view. Furthermore, by essentially the same arguments, C is also both 
an impure language and a pure one. [...]


Purity has nothing to do with the question of whether you can express IO 
in Haskell or not.


The word purity refers to the fact that applying a value

   foo :: Int -> Int

(a function) to another value *always* evaluates to the same result. 
This is true in Haskell and false in C.


The beauty of the IO monad is that it doesn't change anything about 
purity. Applying the function


   bar :: Int -> IO Int

to the value 2 will always give the same result:

   bar 2 = bar (1+1) = bar (5-3)

Of course, the point is that this result is an *IO action* of type  IO 
Int , it's not the  Int  you would get when executing this action.
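A hedged illustration of that distinction - `bar` below is my own variant, 
taking an explicit IORef that stands in for the outside world: constructing 
the action is pure and always yields the same value; only executing it 
consults mutable state.

```haskell
import Data.IORef

-- Applying bar to its arguments always yields the same IO action.
bar :: IORef Int -> Int -> IO Int
bar ref n = fmap (+ n) (readIORef ref)

main :: IO ()
main = do
  ref <- newIORef 0
  let action = bar ref 2   -- a value; constructing it is pure
  x <- action              -- executing it: 0 + 2
  writeIORef ref 10
  y <- action              -- the *same* action value, different result: 12
  print (x, y)             -- prints (2,12)
```

The two executions of the one action value give different Ints, yet `bar ref 
2` denoted the same action both times - exactly the point above.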



Best regards,
Heinrich Apfelmus

--
http://apfelmus.nfshost.com


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell meta-programming

2011-12-28 Thread Heinrich Apfelmus

John Lato wrote:

From: Heinrich Apfelmus apfel...@quantentunnel.de

* Meta-programming / partial evaluation. When designing a DSL, it is
often the case that you know how to write an optimizing compiler for
your DSL because it's usually a first-order language. However, trying to
squeeze that into GHC rules is hopeless. Having some way of compiling
code at run-time would solve that. Examples:
** Conal Elliott's image description language Pan
** Henning Thielemann's synthesizer-llvm


I've been thinking about this, and I suspect that meta-programming in
Haskell may not be that far off.  Suppose you have a Meta monad

data Meta a = Meta { runMeta :: a}

with the magical property that the compiler will optimize/recompile
expressions of type `Meta a` when they are run via `runMeta`.  That
would provide usable metaprogramming, and I think it would have all
the desired type properties etc.

Of course no compiler currently has that facility, but we could use a
different monad, perhaps something like this:

data ThMeta a = ThMeta { runThMeta :: Language.Haskell.TH.ExpQ }

now we just need to get the compiler to run an arbitrary TH splice and
check that the types match after `runThMeta` is called.  I think this
is already possible via the GHC api.  It would have the undesirable
property that some expressions could be ill-typed, and this wouldn't
be known until run-time, but it's a start.

That's about as far as I got before I discovered a much more
worked-out version on Oleg's site (with Chung-chieh Shan and Yukiyoshi
Kameyama).  Of course they've tackled a lot of the awkward typing
issues that my simple construct rudely pushes onto the user.

I'm probably showing my naivety here, and I haven't fully digested
their papers yet, but I wouldn't be surprised to see applications
using Haskell metaprogramming within a relatively short time.


You are probably referring to the paper

  Kameyama, Y; Kiselyov, O; Shan, C.
  Computational Effects across Generated Binders:
Maintaining future-stage lexical scope.
  http://www.cs.tsukuba.ac.jp/techreport/data/CS-TR-11-17.pdf

As far as I understand, they are dealing with the problem of binding 
variables in your ThMeta thing.



While this is certainly useful, I think the main problem is that GHC 
currently lacks a way to compile Template Haskell splices (or similar) 
at runtime. For instance, Henning is currently using LLVM to dynamically 
generate code, but the main drawback is that you can't freely mix it 
with existing Haskell code.


Put differently, I would say that DSLs often contain stages which are 
first-order and thus amenable to aggressive optimization, but they 
frequently also contain higher-order parts which should at least survive 
the optimization passes. A simple example would be list fusion on [Int 
-> Int]: the fusion is first-order, but the list elements are 
higher-order. You need some kind of genuine partial evaluation to deal 
with that.



Best regards,
Heinrich Apfelmus

--
http://apfelmus.nfshost.com




[Haskell-cafe] [ANNOUNCE] HaskellNet-0.3 released

2011-12-28 Thread Jonathan Daugherty
Hi all,

I'm pleased to announce the release of HaskellNet, version 0.3.  Get
it on Hackage:

  http://hackage.haskell.org/package/HaskellNet-0.3

This release has quite a few changes; to get a comprehensive summary,
see the CHANGELOG included in the package.  My aim with this release
was to get the code to a state where I could do further work on it and
split it up into protocol-specific packages.  As a result, 0.3 is
likely to be the last release of this code under this name.

-- 
  Jonathan Daugherty



[Haskell-cafe] space-efficient, composable list transformers [was: Re: Reifying case expressions [was: Re: On stream processing, and a new release of timeplot coming]]

2011-12-28 Thread Sebastian Fischer
Hello Heinrich,

On Tue, Dec 27, 2011 at 1:09 PM, Heinrich Apfelmus 
apfel...@quantentunnel.de wrote:

 Sebastian Fischer wrote:

 all functions defined in terms of `ListTo` and `interpret`
 are spine strict - they return a result only after consuming all input
 list
 constructors.

 Indeed, the trouble is that my original formulation cannot return a result
 before it has evaluated all the case expressions. To include laziness, we
 need a way to return results early.

 Sebastian's  ListTransformer  type does precisely that for the special
 case of lists as results,


Hmm, I think it does more than that.. (see below)

but it turns out that there is also a completely generic way of returning
 results early. In particular, we can leverage lazy evaluation for the
 result type.


This is nice! It would be cool if we could get the benefits of ListConsumer
and ListTransformer in a single data type.

I know that you chose these names to avoid confusion, but I would like to
 advertise again the idea of choosing the *same* names for the constructors
 as the combinators they represent [...] This technique for designing data
 structures has the huge advantage that it's immediately clear how to
 interpret it and which laws are supposed to hold.


I also like your names better, although they suggest that there is a single
possible interpretation function. Even at the expense of blinding eyes to
the possibility of other interpretation functions, I agree that it makes
things clearer to use names from a *motivating* interpretation. In
hindsight, my names for the constructors of ListTransformer seem to be
inspired by operations on handles. So, `Cut` should have been named `Close`
instead..


 Especially in the case of lists, I think that this approach clears up a
 lot of confusion about seemingly new concepts like Iteratees and so on.


I share the discomfort with seemingly alien concepts and agree that clarity
of exposition is crucial, both for the meaning of defined combinators and
their implementation. We should aim at combinators that people are already
familiar with, either because they are commonplace (like id, (.), or fmap)
or because they are used by many other libraries (like the Applicative
combinators).

A good way to explain the meaning of the combinators is via the meaning of
the same combinators on a familiar type. Your interpretation function is a
type-class morphism from `ListTo a b` to `[a] -> b`. For Functor we have:

interpret (fmap f a)  =  fmap f (interpret a)

On the left side, we use `fmap` for `ListTo a`, on the right side for `((->)
l)`. Similarly, we have the following properties for the corresponding
Applicative instances:

interpret (pure x)  =  pure x
interpret (a <*> b)  =  interpret a <*> interpret b

Such morphism properties simplify how to think about programs a lot,
because one can think about programs as if they were written in the
*meaning* type without knowing anything about the *implementation* type.
The computed results are the same but they are computed more efficiently.

Your `ListTo` type achieves space efficiency for Applicative composition of
list functions by executing them in lock-step. Because of the additional
laziness provided by the `Fmap` constructor, compositions like

interpret a . interpret b

can also be executed in constant space. However, we cannot use the space
efficient Applicative combinators again to form parallel compositions of
sequential ones because we are already in the meaning type.

We could implement composition for the `ListTo` type as follows

(.) :: ListTo b c -> ListTo a [b] -> ListTo a c
a . b = interpret a <$> b

But if we use a result of this function as argument of <*>, then the
advantage of using `ListTo` is lost. While

interpret ((,) <$> andL <*> andL)

runs in constant space,

interpret ((,) <$> (andL . idL) <*> (andL . idL))

does not.

The ListTransformer type supports composition in lock-step via a category
instance. The meaning of `ListTransformer a b` is `[a] - [b]` with the
additional restriction that all functions `f` in the image of the
interpretation function are incremental:

xs `isPrefixOf` ys  ==>  f xs `isPrefixOf` f ys
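That incrementality condition can be spot-checked on sample inputs (a 
sketch; `incrementalOn` is my own name for the check, and passing on a few 
pairs is evidence, not a proof):

```haskell
import Data.List (isPrefixOf)

-- Check the incrementality implication for one pair of inputs:
-- if xs is a prefix of ys, then f xs must be a prefix of f ys.
incrementalOn :: (Eq a, Eq b) => ([a] -> [b]) -> [a] -> [a] -> Bool
incrementalOn f xs ys =
  not (xs `isPrefixOf` ys) || (f xs `isPrefixOf` f ys)

main :: IO ()
main = do
  print (incrementalOn (map succ) "ab" "abc")  -- True: map is incremental
  print (incrementalOn reverse   "ab" "abc")   -- False: reverse is not
```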

Composition as implemented in the ListTransformer type satisfies morphism
properties for the category instance:

transformList id  =  id
transformList (a . b)  =  transformList a . transformList b

As it is implemented on the ListTransformer type directly (without using
the interpretation function), it can be combined with the Applicative
instance for parallel composition without losing space efficiency.

The Applicative instance for `ListTransformer` is different from the
Applicative instance for `ListTo` (or `ListConsumer`). While

interpret ((,) <$> idL <*> idL)

is of type `[a] -> ([a],[a])`

transformList ((,) <$> idL <*> idL)

is of type `[a] -> [(a,a)]`. We could achieve the latter behaviour with the
former instance by using an additional fmap. But

uncurry zip <$> ((,) <$> idL <*> idL)


Re: [Haskell-cafe] On the purity of Haskell

2011-12-28 Thread Steve Horne

On 28/12/2011 20:44, Heinrich Apfelmus wrote:

Steve Horne wrote:
This is just my view on whether Haskell is pure, being offered up for 
criticism. I haven't seen this view explicitly articulated anywhere 
before, but it does seem to be implicit in a lot of explanations - in 
particular the description of Monads in SBC's Tackling the Awkward 
Squad. I'm entirely focused on the IO monad here, but aware that 
it's just one concrete case of an abstraction.


Warning - it may look like trolling at various points. Please keep 
going to the end before making a judgement.


To make the context explicit, there are two apparently conflicting 
viewpoints on Haskell...


1. The whole point of the IO monad is to support programming with
   side-effecting actions - ie impurity.
2. The IO monad is just a monad - a generic type (IO actions), a couple
   of operators (primarily return and bind) and some rules - within a
   pure functional language. You can't create impurity by taking a
   subset of a pure language.

My view is that both of these are correct, each from a particular 
point of view. Furthermore, by essentially the same arguments, C is 
also both an impure language and a pure one. [...]


Purity has nothing to do with the question of whether you can express 
IO in Haskell or not.



...

The beauty of the IO monad is that it doesn't change anything about 
purity. Applying the function


   bar :: Int -> IO Int

to the value 2 will always give the same result:

Yes - AT COMPILE TIME by the principle of referential transparency it 
always returns the same action. However, the whole point of that action 
is that it might potentially be executed (with potentially 
side-effecting results) at run-time. Pure at compile-time, impure at 
run-time. What is only modeled at compile-time is realized at run-time, 
side-effects included.


Consider the following...

#include <stdio.h>

int main (int argc, char **argv)
{
  char c;
  c = getchar ();
  putchar (c);
  return 0;
}

The identifier c is immutable. We call it a variable, but the 
compile-time value of c is really just some means to find the actual 
value in the big implicit IORef at runtime - an offset based on the 
stack pointer or whatever. Nothing mutates until run-time, and when 
that happens, the thing that mutates (within that big implicit IORef) 
is separate from that compile-time value of c.


In C and in Haskell - the side-effects are real, and occur at run-time.

That doesn't mean Haskell is as bad as C - I get to the advantages of 
Haskell at the end of my earlier post. Mostly unoriginal, but I think 
the bit about explicit vs. implicit IORefs WRT an alternate view of 
transparency is worthwhile.


I hope I've convinced you I'm not making one of the standard newbie 
mistakes. I've done all that elsewhere before, but not today, honest.





[Haskell-cafe] Windows: openFile gives permission denied when file in use

2011-12-28 Thread Michael Snoyman
Hi all,

I just received a bug report from a client that, when an input file is
open in FrameMaker, my program gives a permission denied error. This
bug is reproducible with a simple Haskell program:

import System.IO

main = do
    putStrLn "here1"
    h <- openFile "filename.txt" ReadMode
    putStrLn "here2"

I tried writing a simple C program using fopen, and it ran just fine.
Does anyone have experience with this issue, and know of a workaround?

Thanks,
Michael



Re: [Haskell-cafe] On the purity of Haskell

2011-12-28 Thread Jerzy Karczmarczuk

On 28/12/2011 22:45, Steve Horne wrote:
Yes - AT COMPILE TIME by the principle of referential transparency it 
always returns the same action. However, the whole point of that 
action is that it might potentially be executed (with potentially 
side-effecting results) at run-time. Pure at compile-time, impure at 
run-time. What is only modeled at compile-time is realized at 
run-time, side-effects included.

(...)

I hope I've convinced you I'm not making one of the standard newbie 
mistakes. I've done all that elsewhere before, but not today, honest.

Sorry, perhaps this is not a standard newbie mistake, but you - 
apparently - believe that an execution of an action on the real world 
is a side effect.


I don't think it is.
Even if a Haskell programme fires an atomic bomb, a very impure one, 
/*there are no side effects within the programme itself*/.

If you disagree, show them.

I don't think that speaking about compile-time purity is correct.

Jerzy Karczmarczuk




[Haskell-cafe] [ANNOUNCEMENT] SBV 0.9.24 is out

2011-12-28 Thread Levent Erkok
Hello all,

SBV 0.9.24 is out: http://hackage.haskell.org/package/sbv

In short, SBV allows for scripting SMT solvers directly within
Haskell, with built-in support for bit-vectors and unbounded integers.
(Microsoft's Z3 SMT solver, and SRI's Yices can be used as backends.)

New in this release are SMT based optimization (both quantified and
iterative), and support for explicit domain constraints with vacuity
check.

Full release notes: http://github.com/LeventErkok/sbv/blob/master/RELEASENOTES

Happy new year!

-Levent.



Re: [Haskell-cafe] On the purity of Haskell

2011-12-28 Thread Steve Horne

On 28/12/2011 22:01, Jerzy Karczmarczuk wrote:

On 28/12/2011 22:45, Steve Horne wrote:
Yes - AT COMPILE TIME by the principle of referential transparency it 
always returns the same action. However, the whole point of that 
action is that it might potentially be executed (with potentially 
side-effecting results) at run-time. Pure at compile-time, impure at 
run-time. What is only modeled at compile-time is realized at 
run-time, side-effects included.

(...)

I hope I've convinced you I'm not making one of the standard newbie 
mistakes. I've done all that elsewhere before, but not today, honest.

Sorry, perhaps this is not a standard newbie mistake, but you - 
apparently - believe that an execution of an action on the real 
world is a side effect.


I don't think it is.
Even if a Haskell programme fires an atomic bomb, a very impure one, 
/*there are no side effects within the programme itself*/.

True. But side-effects within the program itself are not the only 
relevant side-effects.


As Simon Baron-Cohen says in Tackling the Awkward Squad...

   Yet the ultimate purpose of running a program is invariably to cause
   some side effect: a changed file, some new pixels on the screen, a
   message sent, or whatever. Indeed it's a bit cheeky to call
   input/output awkward at all. I/O is the raison d'être of every
   program. --- a program that had no observable effect whatsoever (no
   input, no output) would not be very useful.

Of course he then says...

   Well, if the side effect can't be in the functional program, it will
   have to be outside it.

Well, to me, that's a bit cheeky too - at least if taken overliterally. 
Even if you consider a mutation of an IORef to occur outside the 
program, it affects the later run-time behaviour of the program. The 
same with messages sent to stdout - in this case, the user is a part of 
the feedback loop, but the supposedly outside-the-program side-effect 
still potentially affects the future behaviour of the program when it 
later looks at stdin.


A key point of functional programming (including its definitions of 
side-effects and referential transparency) is about preventing bugs by 
making code easier to reason about.


Saying that the side-effects are outside the program is fine from a 
compile-time composing-IO-actions point of view. But as far as 
understanding the run-time behaviour of the program is concerned, that 
claim really doesn't change anything. The side-effects still occur, and 
they still affect the later behaviour of the program. Declaring that 
they're outside the program doesn't make the behaviour of that program 
any easier to reason about, and doesn't prevent bugs.


A final SBC quote, still from Tackling the Awkward Squad...

   There is a clear distinction, enforced by the type system, between
   actions which may have
   side effects, and functions which may not.

SBC may consider the side-effects to be outside the program, but he 
still refers to actions which may have side-effects. The side-effects 
are still there, whether you consider them inside or outside the 
program, and as a programmer you still have to reason about them.




Re: [Haskell-cafe] On the purity of Haskell

2011-12-28 Thread Bernie Pope
On 29 December 2011 10:51, Steve Horne sh006d3...@blueyonder.co.uk wrote:

 As Simon Baron-Cohen says in Tackling the Awkward Squad...

I think you've mixed up your Simons; that should be Simon Peyton Jones.

Cheers,
Bernie.



Re: [Haskell-cafe] On the purity of Haskell

2011-12-28 Thread Steve Horne

On 28/12/2011 23:56, Bernie Pope wrote:

On 29 December 2011 10:51, Steve Horne sh006d3...@blueyonder.co.uk wrote:


As Simon Baron-Cohen says in Tackling the Awkward Squad...

I think you've mixed up your Simons; that should be Simon Peyton Jones.


Oops - sorry about that.

FWIW - I'm diagnosed Aspergers. SBC diagnosed me back in 2001, shortly 
after 9/11.


Yes, I *am* pedantic - which doesn't always mean right, of course.

Not relevant, but what the hell.




Re: [Haskell-cafe] On the purity of Haskell

2011-12-28 Thread Thiago Negri
We can do functional programming in Java. We use all the design patterns
for that.

At the very end, everything is just some noisy, hairy, side-effectful,
goto-filled machine code.

The beauty of Haskell is that it allows you to limit the things you need to
reason about. If I see a function with the type (a, b) -> a I don't need
to read a man page to see where I should use it or not. I know what it can
do by its type. In C I cannot do this. What can I say about a function
int foo(char* bar)? Does it allocate memory? Does it ask the user for a
number on stdin? Or does it return the length of a zero-terminated char
sequence? In fact it can do anything, and I can't forbid that. I can't
guarantee that my function has good behaviour. You need to trust the man
page.
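Parametricity makes the first half of that concrete: the type (a, b) -> a 
by itself essentially forces one total implementation (a standard 
observation, sketched here, not something from this thread):

```haskell
-- The only total, parametric implementation of (a, b) -> a is fst:
-- the function cannot inspect its arguments, allocate, or do I/O,
-- because its type promises none of that.
first :: (a, b) -> a
first (x, _) = x

main :: IO ()
main = print (first (1 :: Int, 'a'))  -- prints 1
```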
On 28/12/2011 22:24, Steve Horne sh006d3...@blueyonder.co.uk wrote:

 On 28/12/2011 23:56, Bernie Pope wrote:

 On 29 December 2011 10:51, Steve Horne sh006d3...@blueyonder.co.uk wrote:

  As Simon Baron-Cohen says in Tackling the Awkward Squad...

 I think you've mixed up your Simons; that should be Simon Peyton Jones.

  Oops - sorry about that.

 FWIW - I'm diagnosed Aspergers. SBC diagnosed me back in 2001, shortly
 after 9/1/1.

 Yes, I *am* pedantic - which doesn't always mean right, of course.

 Not relevant, but what the hell.




Re: [Haskell-cafe] On the purity of Haskell

2011-12-28 Thread Steve Horne
Sorry for the delay. I've written a couple of long replies already, and 
both times when I'd finished deleting all the stupid stuff there was 
nothing left - it seems I'm so focussed on my own view, I'm struggling 
with anything else today. Maybe a third try...


On 28/12/2011 19:38, AUGER Cédric wrote:

On Wed, 28 Dec 2011 17:39:52 +,
Steve Horne sh006d3...@blueyonder.co.uk wrote:


This is just my view on whether Haskell is pure, being offered up for
criticism. I haven't seen this view explicitly articulated anywhere
before, but it does seem to be implicit in a lot of explanations - in
particular the description of Monads in SBC's Tackling the Awkward
Squad. I'm entirely focused on the IO monad here, but aware that
it's just one concrete case of an abstraction.



IO monad doesn't make the language impure for me, since you can give
another implementation which is perfectly pure and which has the same
behaviour (although completely unrealistic):



Now how would this work?
In a first time, you load all your system file before running the
program (a side-effect which does not modify already used structures;
it is just initialization), then you run the program in a perfectly
pure way, and at the end you commit all to the system file (so you
modify structures the running program won't access as it has
terminated).

I don't see how interactivity fits that model. If a user provides input 
in response to an on-screen prompt, you can't do all the input at the 
start (before the prompt is displayed) and you can't do all the output at 
the end.


Other than that, I'm OK with that. In fact if you're writing a compiler 
that way, it seems fine - you can certainly delay output of the 
generated object code until the end of the compilation, and the input 
done at the start of the compilation (source files) is separate from the 
run-time prompt-and-user-input thing.


See - I told you I'm having trouble seeing things in terms of someone 
else's model - I'm back to my current obsession again here.

In Haskell,
'hGetChar h >>= \c -> hPutChar i c' always has the same value, but
'trans (hGetChar h >>= \c -> hPutChar i c) (IO_ A)'
'trans (hGetChar h >>= \c -> hPutChar i c) (IO_ B)'
may have different values according to A and B.

In C, you cannot express this distinction, since you only have:
'read(h, &c, 1); write(i, &c, 1);' and cannot pass the
environment explicitly.

Agreed. Haskell is definitely more powerful in that sense.
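The `trans`-with-explicit-environment distinction can be made concrete with a toy model. This is a hedged sketch, not the actual semantics under discussion: `Env`, `getC`, `envA`, and `envB` are all hypothetical names, standing in for "the same program run against two different environments A and B".

```haskell
import qualified Data.Map as Map
import Data.Map (Map)

-- A hypothetical explicit environment: handle number -> pending input.
type Env = Map Int String

-- A toy "action": a pure function from environment to (result, environment).
getC :: Int -> Env -> (Char, Env)
getC h env = case Map.findWithDefault "" h env of
  (c:rest) -> (c, Map.insert h rest env)
  []       -> ('\0', env)

-- The same program applied to two different environments yields
-- different results - the distinction C's read()/write() cannot express,
-- because in C the environment is implicit and cannot be passed around.
envA, envB :: Env
envA = Map.fromList [(0, "abc")]
envB = Map.fromList [(0, "xyz")]
```

Here `fst (getC 0 envA)` is 'a' while `fst (getC 0 envB)` is 'x': one pure function, two environments, two results.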




Re: [Haskell-cafe] Windows: openFile gives permission denied when file in use

2011-12-28 Thread Antoine Latter
On Wed, Dec 28, 2011 at 3:52 PM, Michael Snoyman mich...@snoyman.com wrote:
 Hi all,

 I just received a bug report from a client that, when an input file is
 open in FrameMaker, my program gives a permission denied error. This
 bug is reproducible with a simple Haskell program:

 import System.IO

 main = do
    putStrLn "here1"
    h <- openFile "filename.txt" ReadMode
    putStrLn "here2"

 I tried writing a simple C program using fopen, and it ran just fine.
 Does anyone have experience with this issue, and know of a workaround?


When GHC opens files for reading, it asks Windows to disallow write
access to the file. I'm guessing that FrameMaker has the file open for
writing, so GHC can't get that permission.

I imagine that the Haskell runtime does this to make lazy I/O less crazy.

Here's the call GHC makes:
https://github.com/ghc/packages-base/blob/0e1a02b96cfd03b8488e3ff4ce232466d6d5ca77/include/HsBase.h#L580

To open a file for reading in your C demo in a similar way you could
do something like:

fd = _sopen(file_name, _O_RDONLY | _O_NOINHERIT, _SH_DENYWR, 0);

Here _SH_DENYWR is telling Windows to deny others write access to this file.

Here's the msdn link for _sopen and _wsopen:
http://msdn.microsoft.com/en-us/library/w7sa2b22%28VS.80%29.aspx

I haven't tested any of that, but it should help you reproduce
how GHC opens files for reading on Windows.

You should be able to use System.Win32.File.createFile with a share
mode of (fILE_SHARE_READ .|. fILE_SHARE_WRITE) and then wrangle a
Haskell Handle out of that, but I haven't tried it.

Or you could modify the call to _wsopen and FFI call that - it takes
fewer parameters and might be less confusing.

Antoine



Re: [Haskell-cafe] On the purity of Haskell

2011-12-28 Thread Antoine Latter
On Wed, Dec 28, 2011 at 3:45 PM, Steve Horne
sh006d3...@blueyonder.co.uk wrote:
 On 28/12/2011 20:44, Heinrich Apfelmus wrote:

 Steve Horne wrote:

 This is just my view on whether Haskell is pure, being offered up for
 criticism. I haven't seen this view explicitly articulated anywhere before,
 but it does seem to be implicit in a lot of explanations - in particular the
 description of Monads in SPJ's "Tackling the Awkward Squad". I'm entirely
 focused on the IO monad here, but aware that it's just one concrete case of
 an abstraction.

 Warning - it may look like trolling at various points. Please keep going
 to the end before making a judgement.

 To make the context explicit, there are two apparently conflicting
 viewpoints on Haskell...

 1. The whole point of the IO monad is to support programming with
   side-effecting actions - ie impurity.
 2. The IO monad is just a monad - a generic type (IO actions), a couple
   of operators (primarily return and bind) and some rules - within a
   pure functional language. You can't create impurity by taking a
   subset of a pure language.

 My view is that both of these are correct, each from a particular point
 of view. Furthermore, by essentially the same arguments, C is also both an
 impure language and a pure one. [...]


 Purity has nothing to do with the question of whether you can express IO
 in Haskell or not.

 ...


 The beauty of the IO monad is that it doesn't change anything about
 purity. Applying the function

   bar :: Int -> IO Int

 to the value 2 will always give the same result:

 Yes - AT COMPILE TIME by the principle of referential transparency it always
 returns the same action. However, the whole point of that action is that it
 might potentially be executed (with potentially side-effecting results) at
 run-time. Pure at compile-time, impure at run-time. What is only modeled at
 compile-time is realized at run-time, side-effects included.


I don't think I would put it that way - the value 'bar 2' is a regular
Haskell value. I can put it in a list, return it from a function and
all other things:

myIOActions :: [IO Int]
myIOActions = [bar 2, bar (1+1), bar (5-3)]

And I can pick any of the elements of the list to execute in my main
function, and I get the same main function either way.
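Antoine's point can be made runnable. The original `bar` is never shown in the thread, so the definition below is a stand-in; the list manipulation, however, is exactly as described - 'bar 2', 'bar (1+1)' and 'bar (5-3)' are all the same value, so any choice yields the same main.

```haskell
-- A stand-in for 'bar'; the thread never gives its real definition.
bar :: Int -> IO Int
bar n = return (n * 10)

-- IO actions are ordinary values: they can live in a list.
myIOActions :: [IO Int]
myIOActions = [bar 2, bar (1 + 1), bar (5 - 3)]

main :: IO ()
main = do
  -- Picking any element executes the same action, because all three
  -- expressions denote the same value.
  r <- myIOActions !! 1
  print r   -- prints 20
```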

 Consider the following...

 #include <stdio.h>

 int main (int argc, char *argv[])
 {
  char c;
  c = getchar ();
  putchar (c);
  return 0;
 }

 The identifier c is immutable. We call it a variable, but the compile-time
 value of c is really just some means to find the actual value in the big
 implicit IORef at runtime - an offset based on the stack pointer or
 whatever. Nothing mutates until run-time, and when that happens, the
 thing that mutates (within that big implicit IORef) is separate from that
 compile-time value of c.

 In C and in Haskell - the side-effects are real, and occur at run-time.

 That doesn't mean Haskell is as bad as C - I get to the advantages of
 Haskell at the end of my earlier post. Mostly unoriginal, but I think the
 bit about explicit vs. implicit IORefs WRT an alternate view of transparency
 is worthwhile.

 I hope I've convinced you I'm not making one of the standard newbie mistakes.
 I've done all that elsewhere before, but not today, honest.





Re: [Haskell-cafe] On the purity of Haskell

2011-12-28 Thread Steve Horne

On 29/12/2011 00:57, Thiago Negri wrote:


We can do functional programming on Java. We use all the design
patterns for that.

At the very end, everything is just some noisy, hairy,
side-effectfull, gotofull machinery code.

The beauty of Haskell is that it allows you to limit the things you
need to reason about. If I see a function with the type (a, b) - a
I don't need to read a man page to see where I should use it or not. I
know what it can do by its type. In C I can not do this. What can I
say about a function int foo(char* bar)? Does it allocate memory?
Does it asks a number for the user on stdin? Or does it returns the
length of a zero-ending char sequence? In fact it can do anything, and
I can't forbid that. I can't guarantee that my function has good
behaviour. You need to trust the man page.
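The claim about (a, b) -> a can be checked directly: by parametricity, a total function of that type has essentially no choice but to return the first component - it knows nothing about a, so it cannot conjure one from anywhere else. A minimal sketch (`pick` is just an illustrative name):

```haskell
-- The only total function of this type returns the first component:
-- the implementation knows nothing about the types a and b, so the
-- returned a must come from the pair itself.
pick :: (a, b) -> a
pick (x, _) = x
```

No man page needed: the type alone rules out allocation, stdin, and everything else.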

Well, I did say (an unoriginal point) that "The IO monad is explicit in 
Haskell - side-effects are only permitted (even at run-time) where the 
programmer has explicitly opted to allow them." So yes.


The "it could do anything!!!" claims are over the top and IMO 
counterproductive, though. The type system doesn't help the way it does 
in Haskell, but nevertheless, plenty of people reason about the 
side-effects in C mostly-successfully.


Mostly /= always, but bugs can occur in any language.



Re: [Haskell-cafe] Windows: openFile gives permission denied when file in use

2011-12-28 Thread Chris Wong
On Thu, Dec 29, 2011 at 2:45 PM, Antoine Latter aslat...@gmail.com wrote:
 [...]

 When GHC opens files for reading, it asks windows to disallow write
 access to the file. I'm guessing that Framemaker has the file open for
 writing, so GHC can't get that permission.

In fact, this is required behavior according to the Haskell Report:

 Implementations should enforce as far as possible, at least locally to the 
 Haskell process, multiple-reader single-writer locking on files. That is, 
 there may either be many handles on the same file which manage input, or just 
 one handle on the file which manages output.

I guess on Windows, "as far as possible" means locking it across the
whole system.

(See 
http://www.haskell.org/onlinereport/haskell2010/haskellch41.html#x49-32800041.3.4
for the gory details)
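The Report's multiple-reader single-writer rule is easy to observe from GHC itself, even without FrameMaker in the picture. A hedged sketch (the file path is arbitrary, and exact behaviour may differ across platforms and GHC versions):

```haskell
import System.IO
import Control.Exception

main :: IO ()
main = do
  let p = "lock-demo.txt"           -- arbitrary scratch file
  writeFile p "hello"
  r1 <- openFile p ReadMode
  r2 <- openFile p ReadMode          -- many readers: fine
  w  <- try (openFile p WriteMode)   -- a writer while readers exist
  case (w :: Either IOException Handle) of
    Left _  -> putStrLn "writer refused while readers exist"
    Right h -> hClose h >> putStrLn "writer allowed"
  hClose r1
  hClose r2
```

On GHC the writer attempt raises a "file is locked" IOException while the read handles are open, matching the Report's many-readers-or-one-writer wording.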

-- Chris



Re: [Haskell-cafe] Windows: openFile gives permission denied when file in use

2011-12-28 Thread Steve Schafer
On Thu, 29 Dec 2011 16:10:03 +1300, chris...@gmail.com wrote:

In fact, this is required behavior according to the Haskell Report:

 Implementations should enforce as far as possible, at least locally
 to the Haskell process, multiple-reader single-writer locking on
 files. That is, there may either be many handles on the same file
 which manage input, or just one handle on the file which manages
 output.

The second sentence somewhat contradicts the first.

The first sentence says "multiple-reader single-writer", which implies
multiple readers AND at most one writer (i.e., at the same time). This
is pretty typical file-locking behavior, and, from the symptoms, appears
to be the way that Framemaker opens the file.

The second sentence, on the other hand, implies that there can be
multiple readers OR one writer, but not both (which appears to be the
way that GHC operates).

I guess on Windows, "as far as possible" means locking it across the
whole system.

Windows does allow finer-grained control (including byte-range locking),
but most applications don't bother.

-Steve Schafer



Re: [Haskell-cafe] On the purity of Haskell

2011-12-28 Thread Steve Horne

On 29/12/2011 01:53, Antoine Latter wrote:

The beauty of the IO monad is that it doesn't change anything about
purity. Applying the function

   bar :: Int -> IO Int

to the value 2 will always give the same result:


Yes - AT COMPILE TIME by the principle of referential transparency it always
returns the same action. However, the whole point of that action is that it
might potentially be executed (with potentially side-effecting results) at
run-time. Pure at compile-time, impure at run-time. What is only modeled at
compile-time is realized at run-time, side-effects included.


I don't think I would put it that way - the value 'bar 2' is a regular
Haskell value. I can put it in a list, return it from a function and
all other things:

myIOActions :: [IO Int]
myIOActions = [bar 2, bar (1+1), bar (5-3)]

And I can pick any of the elements of the list to execute in my main
function, and I get the same main function either way.
Yes - IO actions are first-class values in Haskell. They can be derived 
using all the functional tools of the language. But if this points out a 
flaw in my logic, it's only a minor issue in my distinction between 
compile-time and run-time.


Basically, there is a phase when a model has been constructed 
representing the source code. This model is similar in principle to an 
AST, though primarily (maybe entirely?) composed of unevaluated 
functions rather than node-that-represents-whatever structs. This phase 
*must* be completed during compilation. Of course evaluation of some 
parts of the model can start before even parsing is complete, but that's 
just implementation detail.


Some reductions (if that's the right term for a single evaluation step) 
of that model cannot be applied until run-time because of the dependence 
on run-time inputs. Either the reduction implies the execution of an IO 
action, or an argument has a data dependency on an IO action.


Many reductions can occur either at compile-time or run-time.

In your list-of-actions example, the list is not an action itself, but 
it's presumably a part of the expression defining main which returns an 
IO action. The evaluation of the expression to select an action may have 
to be delayed until run-time with the decision being based on run-time 
input. The function that does the selection is still pure. Even so, this 
evaluation is part of the potentially side-effecting evaluation and 
execution of the main IO action. Overall, the run-time execution is 
impure - a single side-effect is enough.


So... compile-time pure, run-time impure.




Re: [Haskell-cafe] Windows: openFile gives permission denied when file in use

2011-12-28 Thread Antoine Latter
On Wed, Dec 28, 2011 at 3:52 PM, Michael Snoyman mich...@snoyman.com wrote:
 Hi all,

 I just received a bug report from a client that, when an input file is
 open in FrameMaker, my program gives a permission denied error. This
 bug is reproducible with a simple Haskell program:


This bug and its discussion is similar, but not identical:
http://hackage.haskell.org/trac/ghc/ticket/4363

 import System.IO

 main = do
    putStrLn "here1"
    h <- openFile "filename.txt" ReadMode
    putStrLn "here2"

 I tried writing a simple C program using fopen, and it ran just fine.
 Does anyone have experience with this issue, and know of a workaround?

 Thanks,
 Michael
