Re: [Haskell-cafe] Doubting Haskell

2008-02-16 Thread Jonathan Cast

On 16 Feb 2008, at 11:46 PM, Anton van Straaten wrote:


Colin Paul Adams wrote:

"Cale" == Cale Gibbard <[EMAIL PROTECTED]> writes:

Cale> So, the first version:

Cale> import System.IO
Cale> import Control.Exception (try)

Cale> main = do mfh <- try (openFile "myFile" ReadMode)
Cale>           case mfh of
Cale>             Left err -> do putStr "Error opening file for reading: "
Cale>                            print err
Cale>             Right fh ->
Cale>               do mline <- try (hGetLine fh)
Cale>                  case mline of
Cale>                    Left err -> do putStr "Error reading line: "
Cale>                                   print err
Cale>                                   hClose fh
Cale>                    Right line -> putStrLn ("Read: " ++ line)
Left? Right?
Hardly descriptive terms. Sounds like a sinister language to me.


I was thinking along the same lines.  Politically-sensitive left-handed
people everywhere ought to be offended that "Left" is the alternative
used to represent errors, mnemonic value notwithstanding.


Is there a benefit to reusing a generic Either type for this sort  
of thing?


Standardization.  It's already a standard, we need a standard sum  
type anyway, and it'd be kind of silly to have two isomorphic types  
with the same signature in the Prelude.
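
For concreteness, a small illustration (the names are made up): Either stays
the one standard sum type, and a type synonym can restore a descriptive name
at a particular use site.

import System.IO (Handle)

-- Hypothetical alias: the generic sum type underneath, a descriptive name on top.
type OpenResult = Either IOError Handle

report :: OpenResult -> String
report (Left err) = "could not open: " ++ show err
report (Right _)  = "opened fine"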


jcc

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Doubting Haskell

2008-02-16 Thread Anton van Straaten

Colin Paul Adams wrote:

"Cale" == Cale Gibbard <[EMAIL PROTECTED]> writes:


Cale> So, the first version:

Cale> import System.IO
Cale> import Control.Exception (try)

Cale> main = do mfh <- try (openFile "myFile" ReadMode)
Cale>           case mfh of
Cale>             Left err -> do putStr "Error opening file for reading: "
Cale>                            print err
Cale>             Right fh ->
Cale>               do mline <- try (hGetLine fh)
Cale>                  case mline of
Cale>                    Left err -> do putStr "Error reading line: "
Cale>                                   print err
Cale>                                   hClose fh
Cale>                    Right line -> putStrLn ("Read: " ++ line)

Left? Right?

Hardly descriptive terms. Sounds like a sinister language to me.


I was thinking along the same lines.  Politically-sensitive left-handed 
people everywhere ought to be offended that "Left" is the alternative 
used to represent errors, mnemonic value notwithstanding.


Is there a benefit to reusing a generic Either type for this sort of
thing?  For code comprehensibility, wouldn't it be better to use more
specific names?  If I want car and cdr, I know where to find them.


Anton

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Arrows: definition of pure & arr

2008-02-16 Thread Peter Verswyvelen
After having played with some packages that use arrows, and after having
read the very nice "programming with arrows" paper, I wanted to build
some arrows of my own.

Strangely, my code did not work: even the simplest function got stuck in
an infinite loop or gave a stack overflow.

I quickly noticed I had made a really stupid mistake: I forgot to implement
"arr"! However, the compiler did not give a warning about this, so I
wondered how it was possible that the Arrow package could have a default
implementation for something as specific as arr.


The code revealed the following:

-- | Lift a function to an arrow: you must define either this
--   or 'pure'.
arr :: (b -> c) -> a b c
arr = pure

-- | A synonym for 'arr': you must define one or other of them.
pure :: (b -> c) -> a b c
pure = arr

Ah, so the default implementation of arr is pure... and vice versa...

This feels rather incorrect to me, but my feelings are based on an
imperative background, so this might be a perfectly sound design in
Haskell.


Why not force people to implement arr and leave just pure as the 
synonym? And if pure is really a synonym for arr, what does it do inside 
the Arrow type class? Does it ever make sense to have a different 
implementation for arr and pure?
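
To illustrate, a minimal instance sketch (written against a newer
Control.Arrow, where Arrow has a Category superclass and no pure method at
all, so arr plus first is the whole obligation; the MaybeFun type below is
invented for the example):

import Prelude hiding (id, (.))
import Control.Category
import Control.Arrow

-- A simple partial-function arrow.
newtype MaybeFun b c = MaybeFun { runMaybeFun :: b -> Maybe c }

instance Category MaybeFun where
  id = MaybeFun Just
  MaybeFun g . MaybeFun f = MaybeFun (\x -> f x >>= g)

instance Arrow MaybeFun where
  arr f = MaybeFun (Just . f)                        -- lift a pure function
  first (MaybeFun f) =
    MaybeFun (\(x, y) -> fmap (\z -> (z, y)) (f x))  -- act on the first component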



Thanks for any help,
Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Designing a Parser

2008-02-16 Thread PR Stanley
Actually, I haven't sent this question to the list before. So you're 
in no danger of repeating yourself.

Thanks for your kind reply anyway
Paul
At 07:03 17/02/2008, you wrote:

On Feb 17, 2008 6:20 AM, PR Stanley <[EMAIL PROTECTED]> wrote:
>   I can't think of an elegant pattern for the last function. I've
> already tried set comprehension. However, something tells me that the
> answer may lie in a complex recursive pattern. I'm not afraid of
> complex solutions but I naturally see them as an easy way out. It's
> those clear simple patterns that separate men from mice. :-) I'd be
> grateful for any advice on this and indeed my approach as a whole. If
> you think I'm on the wrong path from the start feel free to let me
> know. I'm not asking for the answer. I'd like to work that out by
> myself although some guidance would be most appreciated.

Yes, I definitely think you're on the wrong path from the start.  In
high school when I was learning C++ I wrote an arithmetic expression
parser in something like this fashion.  I scanned the input for the
first opening bracket, then I walked forward to the matching closing
bracket and extracted the subexpression and evaluated it.

The approach turned out to be both complicated and inefficient.  I
think the biggest problem was that it didn't scale well with
implementation complexity; eg. to add support for prefix functions
like sin() was almost impossible.  Think about how you would go about
doing such things using your approach.

I think I've seen you ask questions relating to this on the list
before, and at the risk of repeating others, I suggest the ReadS in
the Haskell Prelude.  You get the benefit of it being a clean,
top-down solution as well as the benefit of having a lot of primitive
Haskell types already implemented for you (such as Integers).

I'll describe the approach again, describing the combinators, and then
afterward showing how you might use them to accomplish this problem.
I'll leave the details up to you, since you can learn a lot by
implementing combinator libraries like this.

First, you build a bunch of functions which are "parsers".  A "parser"
is just a function of type String -> [(a,String)] for some type 'a';
that is, it takes a string and returns a list of "parses", where a
parse is the value parsed paired with the remainder of the string.  So
if you have a function:

parseInt :: String -> [(Int,String)]

Then you can expect this result:

parseInt "123 hello world" = [(123, " hello world")]

Or maybe even:

parseInt "123 hello world" = [(123, " hello world"), (12, "3 hello
world"), (1, "23 hello world")]

But that latter behavior is not recommended in this case (i.e. it is
advisable to force int parsing to be greedy).

Then to combine parsers you can use some combination operations:

sequenceParsers :: (String -> [(a,String)]) -> (a -> String ->
[(b, String)]) -> String -> [(b,String)]

That one may be easier to see if you use the built-in type synonym
ReadS a = String -> [(a, String)]:

sequenceParsers :: ReadS a -> (a -> ReadS b) -> ReadS b

The second argument here is a function, because we have already parsed
the first argument and know its value, so the second argument ought to
be able to use it.  We can also write:

alternateParsers :: ReadS a -> ReadS a -> ReadS a

Which gives all valid parses that the first one recognizes
concatenated with all valid parses that the second one recognizes.

Implementation of these combinators is left to you.  Since the types
of these functions are quite general, you can use a type-directed
approach (i.e. if your implementation uses all available data and it
typechecks, it's probably correct).

Now that you have these, how do you use them to actually parse
something?  Let's parse simple logical expressions.  First we need a
data structure to parse into:

data Exp
= Variable String
| AndExp Exp Exp
| OrExp Exp Exp

Let's do it with a top down coding strategy:  write a function to
parse an expression using whatever helper functions we need but
haven't written yet :-)

parseExp :: ReadS Exp
parseExp = parseAnyOf [ parseVariable, parseAndExp, parseOrExp ]

Where we haven't written parseAnyOf yet.  Write that inductively on the list:

parseAnyOf :: [ReadS a] -> ReadS a
parseAnyOf [] = \input -> []
parseAnyOf (p:ps) = alternateParsers p (parseAnyOf ps)

And jump in to the next thing we haven't written, parseVariable:

parseVariable :: ReadS Exp
parseVariable = mapParser Variable parseString

mapParser doesn't actually parse anything, it just parses whatever its
argument does and applies a function to the result:

mapParser :: (a -> b) -> ReadS a -> ReadS b
mapParser :: (a -> b) -> (String -> [(a,String)]) -> String -> 
[(b,String)]


I rewrote the type signature to help guide your implementation.

That should get you started, and show you how the approach usually
goes.  You seem to already get the idea of writing many 

Re: [Haskell-cafe] Designing a Parser

2008-02-16 Thread Luke Palmer
On Feb 17, 2008 6:20 AM, PR Stanley <[EMAIL PROTECTED]> wrote:
>   I can't think of an elegant pattern for the last function. I've
> already tried set comprehension. However, something tells me that the
> answer may lie in a complex recursive pattern. I'm not afraid of
> complex solutions but I naturally see them as an easy way out. It's
> those clear simple patterns that separate men from mice. :-) I'd be
> grateful for any advice on this and indeed my approach as a whole. If
> you think I'm on the wrong path from the start feel free to let me
> know. I'm not asking for the answer. I'd like to work that out by
> myself although some guidance would be most appreciated.

Yes, I definitely think you're on the wrong path from the start.  In
high school when I was learning C++ I wrote an arithmetic expression
parser in something like this fashion.  I scanned the input for the
first opening bracket, then I walked forward to the matching closing
bracket and extracted the subexpression and evaluated it.

The approach turned out to be both complicated and inefficient.  I
think the biggest problem was that it didn't scale well with
implementation complexity; eg. to add support for prefix functions
like sin() was almost impossible.  Think about how you would go about
doing such things using your approach.

I think I've seen you ask questions relating to this on the list
before, and at the risk of repeating others, I suggest the ReadS in
the Haskell Prelude.  You get the benefit of it being a clean,
top-down solution as well as the benefit of having a lot of primitive
Haskell types already implemented for you (such as Integers).

I'll describe the approach again, describing the combinators, and then
afterward showing how you might use them to accomplish this problem.
I'll leave the details up to you, since you can learn a lot by
implementing combinator libraries like this.

First, you build a bunch of functions which are "parsers".  A "parser"
is just a function of type String -> [(a,String)] for some type 'a';
that is, it takes a string and returns a list of "parses", where a
parse is the value parsed paired with the remainder of the string.  So
if you have a function:

parseInt :: String -> [(Int,String)]

Then you can expect this result:

parseInt "123 hello world" = [(123, " hello world")]

Or maybe even:

parseInt "123 hello world" = [(123, " hello world"), (12, "3 hello
world"), (1, "23 hello world")]

But that latter behavior is not recommended in this case (i.e. it is
advisable to force int parsing to be greedy).

Then to combine parsers you can use some combination operations:

sequenceParsers :: (String -> [(a,String)]) -> (a -> String ->
[(b, String)]) -> String -> [(b,String)]

That one may be easier to see if you use the built-in type synonym
ReadS a = String -> [(a, String)]:

sequenceParsers :: ReadS a -> (a -> ReadS b) -> ReadS b

The second argument here is a function, because we have already parsed
the first argument and know its value, so the second argument ought to
be able to use it.  We can also write:

alternateParsers :: ReadS a -> ReadS a -> ReadS a

Which gives all valid parses that the first one recognizes
concatenated with all valid parses that the second one recognizes.

Implementation of these combinators is left to you.  Since the types
of these functions are quite general, you can use a type-directed
approach (i.e. if your implementation uses all available data and it
typechecks, it's probably correct).
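
For reference, one possible set of implementations matching the types above
(just a sketch, and only one of many ways to fill in the blanks):

-- ReadS a = String -> [(a, String)], as in the Prelude.
sequenceParsers :: ReadS a -> (a -> ReadS b) -> ReadS b
sequenceParsers p f = \input ->
  [ (y, rest') | (x, rest) <- p input, (y, rest') <- f x rest ]

-- All parses of the first parser, followed by all parses of the second.
alternateParsers :: ReadS a -> ReadS a -> ReadS a
alternateParsers p q = \input -> p input ++ q input

-- Parse with the given parser, then apply a function to each result.
mapParser :: (a -> b) -> ReadS a -> ReadS b
mapParser f p = \input -> [ (f x, rest) | (x, rest) <- p input ]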

Now that you have these, how do you use them to actually parse
something?  Let's parse simple logical expressions.  First we need a
data structure to parse into:

data Exp
= Variable String
| AndExp Exp Exp
| OrExp Exp Exp

Let's do it with a top down coding strategy:  write a function to
parse an expression using whatever helper functions we need but
haven't written yet :-)

parseExp :: ReadS Exp
parseExp = parseAnyOf [ parseVariable, parseAndExp, parseOrExp ]

Where we haven't written parseAnyOf yet.  Write that inductively on the list:

parseAnyOf :: [ReadS a] -> ReadS a
parseAnyOf [] = \input -> []
parseAnyOf (p:ps) = alternateParsers p (parseAnyOf ps)

And jump in to the next thing we haven't written, parseVariable:

parseVariable :: ReadS Exp
parseVariable = mapParser Variable parseString

mapParser doesn't actually parse anything, it just parses whatever its
argument does and applies a function to the result:

mapParser :: (a -> b) -> ReadS a -> ReadS b
mapParser :: (a -> b) -> (String -> [(a,String)]) -> String -> [(b,String)]

I rewrote the type signature to help guide your implementation.

That should get you started, and show you how the approach usually
goes.  You seem to already get the idea of writing many small
functions and composing them together.  This is the same idea, except
the functions are abstracting the problem more.

Luke
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

Re: [Haskell-cafe] Doubting Haskell

2008-02-16 Thread Colin Paul Adams
> "Cale" == Cale Gibbard <[EMAIL PROTECTED]> writes:

Cale> So, the first version:

Cale> import System.IO
Cale> import Control.Exception (try)

Cale> main = do mfh <- try (openFile "myFile" ReadMode)
Cale>           case mfh of
Cale>             Left err -> do putStr "Error opening file for reading: "
Cale>                            print err
Cale>             Right fh ->
Cale>               do mline <- try (hGetLine fh)
Cale>                  case mline of
Cale>                    Left err -> do putStr "Error reading line: "
Cale>                                   print err
Cale>                                   hClose fh
Cale>                    Right line -> putStrLn ("Read: " ++ line)

Left? Right?

Hardly descriptive terms. Sounds like a sinister language to me.
-- 
Colin Adams
Preston Lancashire
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Where does ~> come from?

2008-02-16 Thread Cale Gibbard
On 17/02/2008, Steve Lihn <[EMAIL PROTECTED]> wrote:
> While I am reading TypeCompose module, I am having trouble finding
> where ~> comes from? (And its son ~~> and grandson ~~~>, etc.)  This
> is my immediate question. Help is appreciated.

(~>) is typically an infix type variable. If it were a constructor, it
would have to start with a colon. So it doesn't have a definition as
such, you can think of it as any type constructor with at least two
type parameters.

I think that would explain why you were having such trouble searching for it!

Generally, though, if you want to know where something is defined, you
can use ghci's :info command, and it should tell you the file and line
in which it's defined, as well as its type (if it's a value) and its
fixity (if it's an infix operator).

You can then load up whatever documentation or source code is
available for that module.  Of course, in this case, it would report
(correctly) that (~>) is not in scope.

Usually, packages on Hackage have whatever documentation is available
linked directly from their pages on Hackage.

hope this helps!
 - Cale
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Designing a Parser

2008-02-16 Thread PR Stanley

Hi friends
I'm in the process of designing a series of functions which you might 
collectively call a parser. The idea is to evaluate an input string 
for a proof software. So a typical input string would be something 
like "(P & Q) -> q".
There are a number of things to consider here, for example, does the
string belong to the domain, which is the subset of the 7-bit ASCII
table starting at 32 and ending at 126. Then you need to remove
all space characters to make things a bit simpler.  Next we find out
if the logical sentence is bracketed. I start this by looking for a
bracket in the string. If true then I isolate the brackets into a new
set in order to spot any odd ones.


Here is what I've done so far:
domain :: [Char]
domain = [x | x <- ascii7, x >= ' ' && x <= '~']
  where ascii7 = ['\0'..'\127']

-- a legal input character is a member of domain.
isInDomain :: Char -> Bool
isInDomain x = any (==x) domain

-- a legal input string is a subset of domain.
isSubset :: [Char] -> Bool
isSubset input = all isInDomain input

-- delete spaces for easier parsing.
noSpace :: [Char] -> [Char]
noSpace input = [x | x <- input, x /= space]
  where space = ' '

-- set of brackets
brackets = ['(', ')', '[', ']', '{', '}']

-- Are there any brackets in input?
hasBrackets :: [Char] -> Bool
hasBrackets input = or [True | x <- input, any (==x) brackets]

-- filter all brackets into a subset.
getBrackets :: [Char] -> [Char]
getBrackets input = [x | x <- input, any (==x) brackets]

-- What are matching brackets?
areMatched :: Char -> Char -> Bool
areMatched '(' ')' = True
areMatched '[' ']' = True
areMatched '{' '}' = True
areMatched _ _ = False

-- Can all the brackets in input be paired up?
allMatched :: [Char] -> Bool

 I can't think of an elegant pattern for the last function. I've 
already tried set comprehension. However, something tells me that the 
answer may lie in a complex recursive pattern. I'm not afraid of 
complex solutions but I naturally see them as an easy way out. It's 
those clear simple patterns that separate men from mice. :-) I'd be 
grateful for any advice on this and indeed my approach as a whole. If 
you think I'm on the wrong path from the start feel free to let me 
know. I'm not asking for the answer. I'd like to work that out by 
myself although some guidance would be most appreciated.
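
(For the curious, one common shape for such a function is a stack-based
matcher; the sketch below reuses getBrackets and areMatched from above and
is only one of several possibilities.)

-- A stack-based sketch: push openers, pop on a matching closer.
allMatched :: [Char] -> Bool
allMatched input = go [] (getBrackets input)
  where
    go []    []     = True                    -- every bracket was paired up
    go _     []     = False                   -- openers left unclosed
    go stack (x:xs)
      | x `elem` "([{"  = go (x:stack) xs     -- push an opening bracket
    go (s:stack) (x:xs)
      | areMatched s x  = go stack xs         -- close the most recent opener
    go _ _              = False               -- stray or mismatched closer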

Thanks, Paul

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Where does ~> come from?

2008-02-16 Thread Steve Lihn
While reading the TypeCompose module, I am having trouble finding
where ~> comes from (and its son ~~> and grandson ~~~>, etc.).  This
is my immediate question. Help is appreciated.

A bigger (higher-order?) issue I encountered, and which I think other
people could also have, is looking up all these special "symbols" during
the learning curve. As things get higher and higher order in Haskell,
more inventive symbols appear, and it becomes harder to remember what
their exact definitions are. The difficulties are:

1) I used to see a page on haskell.org that lists many symbols, but I
could not find that page any more.
2) You would hope there is a quick way to search for those symbols, but
most search engines do not treat symbols kindly and often just ignore
them. I typed ~> into Hoogle and it also returned nothing.
3) If the module defining the symbol is not in the standard library, it is
not possible to look up the symbol in the core library index.

The only way to find the symbol seems to be to go to the source code,
check every module being imported, and, if I am lucky, find it in the
same package. If I am not lucky, I have to search for that module on
the Internet and hunt it down recursively.

I wonder if anybody else has the same problem. If enough people have the
same problem, maybe there could be a better way to handle this kind of
documentation/learning issue!

Thanks,
Steve
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Doubting Haskell

2008-02-16 Thread Jonathan Cast

On 16 Feb 2008, at 5:04 PM, Donn Cave wrote:



On Feb 16, 2008, at 3:46 PM, Philippa Cowderoy wrote:


On Sat, 16 Feb 2008, Alan Carter wrote:


I'm a Haskell newbie, and this post began as a scream for help.


Extremely understandable - to be blunt, I don't really feel that Haskell
is ready as a general-purpose production environment unless users are
willing to invest considerably more than usual. Not only is it not as
"batteries included" as one might like, sometimes it's necessary to build
your own batteries!


Ironically, the simple task of reading a file is more work than I expect
precisely because I don't want to bother to handle exceptions.  I mean,
in some applications it's perfectly OK to let an exception go to the top.

But in Haskell, you cannot read a file line by line without writing an
exception handler, because end of file is an exception!  As if a file does
not normally have an end where the authors of these library functions
came from?


I agree 100%; to make life tolerable around Haskell I/O, I usually
end up binding the moral equivalent of

tryJust (\exc -> case exc of
           IOException e | isEOFError e -> return ()
           _                            -> Nothing) $
  getLine

somewhere at top level and then calling that where it's needed.

For the author of the original post ... can't make out what you actually
found and tried, so you should know about "catch" in the Prelude, the
basic exception handler.


Also, you might need to know that bracket nests in various ways:

bracket openFile hClose $ bracket readLine cleanUpLine $ proceed

There's also finally, for when the first argument to bracket is
omitted, and (>>) for when the second argument is :)
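
As a concrete (if hedged) sketch of that nesting, with real System.IO calls
and an arbitrary file name:

import System.IO
import Control.Exception (bracket)

-- The outer bracket guarantees hClose; the inner one stands in for any
-- per-line resource and here releases nothing.
main = bracket (openFile "myFile" ReadMode) hClose $ \h ->
         bracket (hGetLine h) (\_ -> return ()) $ \line ->
           putStrLn ("Read: " ++ line)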


jcc

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Doubting Haskell

2008-02-16 Thread Bryan O'Sullivan
Stefan O'Rear wrote:

> Well... that's what I meant by break horribly.

Buh?  That behaviour makes perfect sense to me.

http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Doubting Haskell

2008-02-16 Thread Stefan O'Rear
On Sat, Feb 16, 2008 at 06:23:54PM -0800, Bryan O'Sullivan wrote:
> Stefan O'Rear wrote:
> 
> > I'll bet that breaks horribly in the not-so-corner case of /dev/tty.
> 
> Actually, it doesn't.  It seems to do a read behind the scenes if the
> buffer is empty, so it blocks until you type something.

Well... that's what I meant by break horribly.

Stefan


signature.asc
Description: Digital signature
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Doubting Haskell

2008-02-16 Thread Bryan O'Sullivan
Stefan O'Rear wrote:

> I'll bet that breaks horribly in the not-so-corner case of /dev/tty.

Actually, it doesn't.  It seems to do a read behind the scenes if the
buffer is empty, so it blocks until you type something.

http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC strange behavior

2008-02-16 Thread Ruslan Evdokimov
2008/2/17, Stefan O'Rear <[EMAIL PROTECTED]>:

>
> As far as I can tell, that confirms my explanation.  If you see it
> differently - say how.
>
> Stefan
>

Seems you're right, I changed it to:
  [e,o] = map sum $ [filter even numbers, (filter odd) $ reverse numbers]

It prevents numbers from being collected, and here are the results:

>test.exe
1
12812 ms

>test.exe +RTS -N2
1
16671 ms
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Doubting Haskell

2008-02-16 Thread Stefan O'Rear
On Sat, Feb 16, 2008 at 05:11:59PM -0800, Bryan O'Sullivan wrote:
> Donn Cave wrote:
> 
> > But in Haskell, you cannot read a file line by line without writing an
> > exception handler, because end of file is an exception!
> 
> Ah, yet another person who has never found System.IO.hIsEOF :-)
> 
> Whereas in C or Python you would check the return value of read against
> zero or an empty string, in Haskell you call hIsEOF *before* a read.

I'll bet that breaks horribly in the not-so-corner case of /dev/tty.

Stefan


signature.asc
Description: Digital signature
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Working with multiple time zones

2008-02-16 Thread Ryan Ingram
I don't have anything to answer for the "interesting" part of your
question, but if you're just interested in getting something
working...

On Feb 16, 2008 3:13 PM, Dave Hinton <[EMAIL PROTECTED]> wrote:
> 2. If GHC's implementation is working as designed, how do I translate
> the C program above into Haskell?

You can use the FFI to call localtime() directly; see
http://www.haskell.org/haskellwiki/FFI_Introduction
There's information on the types to use here:
http://haskell.org/ghc/docs/latest/html/libraries/base/Foreign-C.html
http://haskell.org/ghc/docs/latest/html/libraries/base/Foreign.html

Something along the lines of:

{-# LANGUAGE FFI #-}
{-# INCLUDE  #-}
import Foreign
import Foreign.C

data CStructTm = ... -- implement struct tm here
instance Storable CStructTm where ...

foreign import ccall unsafe "localtime" c_localtime :: Ptr CTime -> IO
(Ptr CStructTm)

localtime :: CTime -> IO CStructTm
localtime = do ... -- marshal time into a pointer, call c_localtime,
marshal structure back into a CStructTm and return it.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Doubting Haskell

2008-02-16 Thread Bryan O'Sullivan
Donn Cave wrote:

> But in Haskell, you cannot read a file line by line without writing an
> exception handler, because end of file is an exception!

Ah, yet another person who has never found System.IO.hIsEOF :-)

Whereas in C or Python you would check the return value of read against
zero or an empty string, in Haskell you call hIsEOF *before* a read.
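
A minimal sketch of that pattern (the file name is arbitrary, and as noted
elsewhere in the thread, hIsEOF can block on interactive handles):

import System.IO

-- Loop until hIsEOF reports there is nothing left, echoing each line.
echoLines :: Handle -> IO ()
echoLines h = do
  eof <- hIsEOF h
  if eof
    then return ()
    else do line <- hGetLine h
            putStrLn line
            echoLines h

main :: IO ()
main = withFile "test.txt" ReadMode echoLines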

http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Doubting Haskell

2008-02-16 Thread Donn Cave


On Feb 16, 2008, at 3:46 PM, Philippa Cowderoy wrote:


On Sat, 16 Feb 2008, Alan Carter wrote:


I'm a Haskell newbie, and this post began as a scream for help.


Extremely understandable - to be blunt, I don't really feel that Haskell
is ready as a general-purpose production environment unless users are
willing to invest considerably more than usual. Not only is it not as
"batteries included" as one might like, sometimes it's necessary to build
your own batteries!


Ironically, the simple task of reading a file is more work than I expect
precisely because I don't want to bother to handle exceptions.  I mean,
in some applications it's perfectly OK to let an exception go to the top.

But in Haskell, you cannot read a file line by line without writing an
exception handler, because end of file is an exception!  As if a file does
not normally have an end where the authors of these library functions
came from?

For the author of the original post ... can't make out what you actually
found and tried, so you should know about "catch" in the Prelude, the
basic exception handler.

Donn Cave, [EMAIL PROTECTED]
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC strange behavior

2008-02-16 Thread Stefan O'Rear
On Sun, Feb 17, 2008 at 03:41:52AM +0300, Ruslan Evdokimov wrote:
> 2008/2/17, Stefan O'Rear <[EMAIL PROTECTED]>:
> 
> > This makes perfect sense - -N2 tells GHC to use two threads, and if you
> > run two threads on a single-processor system it's implemented by running
> > the threads alternatingly (around 100/s for modern Linux, probably
> > similar for other systems).  Thus, the two evaluations never get more
> > than a hundredth of a second out of step, and memory usage is still low.
> >
> > Stefan
> 
> Test on windows XP AthlonX2 4200+ 2Gb:
> 
> C:\imp>test
> 1
> 12328 ms
> 
> C:\imp>test +RTS -N2
> 1
> 5234 ms
> 
> C:\imp>test +RTS -N2
> 1
> 3515 ms
> 
> 1st - 1 thread
> 2nd - 2 threads on single core (one core disabled through Task Manager)
> 3rd - 2 threads on different cores

As far as I can tell, that confirms my explanation.  If you see it
differently - say how.

Stefan


signature.asc
Description: Digital signature
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC strange behavior

2008-02-16 Thread Ruslan Evdokimov
2008/2/17, Stefan O'Rear <[EMAIL PROTECTED]>:

> This makes perfect sense - -N2 tells GHC to use two threads, and if you
> run two threads on a single-processor system it's implemented by running
> the threads alternatingly (around 100/s for modern Linux, probably
> similar for other systems).  Thus, the two evaluations never get more
> than a hundredth of a second out of step, and memory usage is still low.
>
> Stefan

Test on windows XP AthlonX2 4200+ 2Gb:

C:\imp>test
1
12328 ms

C:\imp>test +RTS -N2
1
5234 ms

C:\imp>test +RTS -N2
1
3515 ms

1st - 1 thread
2nd - 2 threads on single core (one core disabled through Task Manager)
3rd - 2 threads on different cores
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Doubting Haskell

2008-02-16 Thread Jefferson Heard
  Since everyone's been focusing on the IO so far, I wanted to take a
quick stab at his mention of "green" vs. OS threads...  I like the
term "green", actually, as it's what my grandmother calls
decaffeinated coffee, owing to the fact that decaf taster's choice has
a big green plastic lid.  Distrust all coffee that comes in a plastic
lid, folks.  Life is better that way...

However, Haskell very much has real, caffeinated parallelism
mechanisms.  There is explicit concurrency, which says that things can
happen at the same time (see Control.Concurrent) and there is the
whole question of Glasgow Parallel Haskell and Data Parallel Haskell,
which I won't really begin to cover, as Manuel Chakravarty and Simon
Peyton Jones will do TONS better at explaining these than I can.  I
will however mention Control.Parallel and Control.Parallel.Strategies,
because they're my personal favorite way of being parallel.

The Haskell thread is semantically much like the Java thread, it's
green, in other words, but you can control the number of real OS
threads that Haskell threads are executed on at the command line.
Thus you might call them "half caffeinated".  But, at least with
Control.Parallel.Strategies, they're SO much easier to use.  There are
a couple of caveats, but I'll give an example first.  Let's say you're
doing some heavy computer graphics, but you're doing it all in
spherical coordinates (I do this all the time, which is why I'm using
it as an example) and before you go to OpenGL, you need to transform
everything into Cartesian coordinates.

vertices :: [GL.Vertex3 GL.GLdouble]  -- a list of oh, say, 150,000 vertices
                                      -- or so in spherical coordinates

sphericalToCart :: GL.Vertex3 GL.GLdouble -> GL.Vertex3 GL.GLdouble
sphericalToCart (GL.Vertex3 r a z) =
  GL.Vertex3 (r * cos a * sin z) (r * sin a * sin z) (r * cos z)

Now to convert them all, you'd just do a

map sphericalToCart vertices

and that would do a lazy conversion of everything, but since you know
you're going to use all the vertices, strictness is just as well, and
you can do strict things in parallel this way:

parMap rwhnf sphericalToCart vertices

or even more efficiently,

map sphericalToCart vertices `using` parListChunk 1024 rwhnf

That'll execute on all cores of your processor and do the same
operation much faster, if you were going to have to do the entire
operation anyway.

-- Jeff

On Sat, Feb 16, 2008 at 5:05 PM, Alan Carter <[EMAIL PROTECTED]> wrote:
> Greetings Haskellers,
>
>  I'm a Haskell newbie, and this post began as a scream for help. Having
>  slept on it I find myself thinking of Simon Peyton-Jones' recent
>  request for good use cases. Perhaps a frustrated - and doubting -
>  newbie can also provide a data point. If my worries are unfounded (and
>  I hope they are), I think it's significant that to me, today, they
>  seem real enough. Please understand that I'm not being negative for
>  the sake of it - rather I'm describing what Haskell looks like from
>  the outside.
>
>  Let me put it this way. Imagine that two weeks ago my forward-thinking
>  and risk-embracing boss asked me to evaluate Haskell for the upcoming
>  Project X. Further imagine that she ensured I was able to sit in the
>  corner emitting curses for the whole two weeks, and on Monday I have
>  to provide my report.
>
>  At this point, two weeks in, I would be forced to say that I have no
>  reason to believe that Haskell is useful for real world tasks. ghc is
>  an industrial strength compiler for a toy language. While remarkable
>  claims are made for it, in practice even the experts are often unable
>  to implement the most basic behaviours, and where they are able to
>  implement, they find that their program has become so complex that
>  they are unable to describe or discuss the result. Likely this is a
>  deep problem, not a shallow one. The Haskell community is in denial
>  over this, leading to phenomenal time wasting as one goes round and
>  round in circles playing word games with documentation. This risks a
>  return of the chronic embuggerance that we thought we'd escaped when
>  Vista appeared and the set of people who would have to write Windows
>  device drivers reduced to Hewlett Packard employees, Joanna Rutkowska
>  and criminals. When people enthuse about Haskell, we should run a
>  program called Cat.hs from the haskell.org website, throw fruit at
>  them and laugh.
>
>  Strong words, but in all honesty I *want* to believe, and if I would
>  make such a report I imagine hundreds if not thousands would say the
>  same thing. I'm hoping I'm wrong about this, and what's actually
>  needed is some work on communication (perhaps from a production
>  programming point of view, which I'd be keen to help with).
>
>  What got me started with Haskell was the video of an Intel employee
>  holding a Teraflops in his hand. I still remember the very silly
>  September 1991 edition of Scientific American, which asked if a
>  Teraflops would *ever* be built. What a stupid question! Stack up
>  enough

Re: [Haskell-cafe] Doubting Haskell

2008-02-16 Thread Stefan O'Rear
On Sat, Feb 16, 2008 at 06:50:03PM -0500, Cale Gibbard wrote:
> On 16/02/2008, Alan Carter <[EMAIL PROTECTED]> wrote:
> > Then when all this was going on, question number five appeared: What
> > the hell are these "lightweight Haskell threads"? Are they some kind
> > of green threads running non-preemptively inside a single OS thread?
> > Are they OS threads that could run concurrently on multi-core
> > hardware? If they are green threads (and it sounds like they are) then
> > this is all an academic exercise which has no production application
> > yet.
> >
> > Best wishes - and still hoping I'm wrong after all
> >
> > Alan Carter

Yes, they are green threads.  But not the stupid kind you are used to.

Consider an operating system.  You are running hundreds of threads in a
typical system.  You don't have hundreds of processors - let's be
generous and say you have 8.  Obviously these threads are in some sense
'green'.  But they are still being run with (limited) parallelism!
There is no reason to expect anything less of user-level 'green
threads', and if all the systems you have been using are incapable of
running threads in parallel, then all the systems you have been using
are toys or broken.  GHC is not a toy (in this regard), and contains a
mini-operating system that schedules however many millions of threads
you have onto a number of OS threads specified with the +RTS -N
option.

Stefan


signature.asc
Description: Digital signature
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC strange behavior

2008-02-16 Thread Stefan O'Rear
On Sun, Feb 17, 2008 at 03:07:15AM +0300, Ruslan Evdokimov wrote:
> 2008/2/17, Jonathan Cast <[EMAIL PROTECTED]>:
> >
> > Wild guess?  If you leave o as a thunk, to be evaluated once the
> > program has e, then it has numbers, so you keep the entire 10-million
> > entry list in memory.  Evaluating e and o in parallel allows the
> > system to start garbage collecting cons cells from numbers much
> > earlier, which reduces residency (I'd've been unsurprised at more than
> > two orders of magnitude).  Managing the smaller heap (and especially
> > not having to copy numbers on each GC) then makes the garbage
> > collector go much faster, so you get a smaller run time.
> >
> But I also tested it on P-IV 3.0 with HT and 1GB (single core) running
> Windows-XP (ghc 6.8.2), and it works fine (fast & low GC) in all three
> cases without significant difference. Sure it didn't runs faster with
> -N2 'cause it's not dual-core.

This makes perfect sense - -N2 tells GHC to use two threads, and if you
run two threads on a single-processor system it's implemented by running
the threads alternatingly (around 100/s for modern Linux, probably
similar for other systems).  Thus, the two evaluations never get more
than a hundredth of a second out of step, and memory usage is still low.

Stefan


signature.asc
Description: Digital signature
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC strange behavior

2008-02-16 Thread Ruslan Evdokimov
2008/2/17, Don Stewart <[EMAIL PROTECTED]>:
>
> What flags did you compile the code with?
>
1st case:
ghc -O2 --make

2nd and 3rd cases:
ghc -O2 --make -threaded
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC strange behavior

2008-02-16 Thread Don Stewart
ruslan.evdokimov:
> 2008/2/17, Jonathan Cast <[EMAIL PROTECTED]>:
> >
> > Wild guess?  If you leave o as a thunk, to be evaluated once the
> > program has e, then it has numbers, so you keep the entire 10-million
> > entry list in memory.  Evaluating e and o in parallel allows the
> > system to start garbage collecting cons cells from numbers much
> > earlier, which reduces residency (I'd've been unsurprised at more than
> > two orders of magnitude).  Managing the smaller heap (and especially
> > not having to copy numbers on each GC) then makes the garbage
> > collector go much faster, so you get a smaller run time.
> >
> But I also tested it on P-IV 3.0 with HT and 1GB (single core) running
> Windows-XP (ghc 6.8.2), and it works fine (fast & low GC) in all three
> cases without significant difference. Sure it didn't runs faster with
> -N2 'cause it's not dual-core.

What flags did you compile the code with? 
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC strange behavior

2008-02-16 Thread Ruslan Evdokimov
2008/2/17, Jonathan Cast <[EMAIL PROTECTED]>:
>
> Wild guess?  If you leave o as a thunk, to be evaluated once the
> program has e, then it has numbers, so you keep the entire 10-million
> entry list in memory.  Evaluating e and o in parallel allows the
> system to start garbage collecting cons cells from numbers much
> earlier, which reduces residency (I'd've been unsurprised at more than
> two orders of magnitude).  Managing the smaller heap (and especially
> not having to copy numbers on each GC) then makes the garbage
> collector go much faster, so you get a smaller run time.
>
But I also tested it on P-IV 3.0 with HT and 1GB (single core) running
Windows-XP (ghc 6.8.2), and it works fine (fast & low GC) in all three
cases without significant difference. Sure it didn't runs faster with
-N2 'cause it's not dual-core.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Doubting Haskell

2008-02-16 Thread Cale Gibbard
On 16/02/2008, Alan Carter <[EMAIL PROTECTED]> wrote:
> Then when all this was going on, question number five appeared: What
> the hell are these "lightweight Haskell threads"? Are they some kind
> of green threads running non-preemptively inside a single OS thread?
> Are they OS threads that could run concurrently on multi-core
> hardware? If they are green threads (and it sounds like they are) then
> this is all an academic exercise which has no production application
> yet.
>
> Best wishes - and still hoping I'm wrong after all
>
> Alan Carter

Sorry for missing this question in my first response. The answer of
course depends on the Haskell implementation in question, but of
course, we're talking about GHC here.

Haskell threads, in the sense of Control.Concurrent.forkIO, are
essentially a sort of green thread which is scheduled by the Haskell
runtime system. Threads can either be bound to a particular OS thread,
or (as is default), not be bound to a particular OS thread, allowing
the scheduler to manage n Haskell threads with m OS threads, where
usually you want to set m to something like the number of processors
in your machine.
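
A tiny sketch of the interface (just an illustration, not from GHC's
sources): forkIO creates such a lightweight thread, and an MVar is one way
to wait for it.

import Control.Concurrent

main :: IO ()
main = do
  done <- newEmptyMVar
  _ <- forkIO $ do putStrLn "hello from a lightweight Haskell thread"
                   putMVar done ()          -- signal completion
  takeMVar done                             -- block until the thread is done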

I'm a little hazy on the details, and perhaps someone more familiar
with the GHC runtime can fill in some more details for you if you'd
like.

Aside from Concurrent Haskell (which was originally designed for
single-processor concurrency and later extended to allow for
scheduling threads to execute in multiple OS threads), there is
Parallel Haskell, which is used to annotate pure computations for
parallelism (but since they're pure, there is no concurrency). At its
core, Parallel Haskell has an extremely simple programmer interface:

par :: a -> b -> b

Evaluation of an expression of the form (par x y) will cause x to be
put in a queue of expressions to be evaluated by a worker on some OS
thread, if there is free time, before resulting in y. If there is no
time to evaluate x on some processor before it is eventually needed,
then evaluation just proceeds normally, but if there is, then it won't
need evaluation later, due to the usual sharing from lazy evaluation.
From this extremely simple form of parallel annotation, it's possible
to build lots of interesting mechanisms for carrying out evaluation in
parallel. You can read more about that in a paper titled "Algorithm +
Strategy = Parallelism" by PW Trinder, K Hammond, H-W Loidl and Simon
Peyton Jones, or check out the documentation for
Control.Parallel.Strategies.
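
In the spirit of that paper, a minimal sketch of par in action (the numbers
are arbitrary): spark one summation while evaluating the other, then
combine the two.

import Control.Parallel (par, pseq)

main :: IO ()
main = print (a `par` (b `pseq` (a + b)))
  where
    a = sum [1 .. 10000000 :: Integer]   -- sparked for parallel evaluation
    b = sum [1 .. 20000000]              -- evaluated on the main thread first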

 - Cale
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Doubting Haskell

2008-02-16 Thread Philippa Cowderoy
On Sat, 16 Feb 2008, Alan Carter wrote:

> I'm a Haskell newbie, and this post began as a scream for help.

Extremely understandable - to be blunt, I don't really feel that Haskell 
is ready as a general-purpose production environment unless users are 
willing to invest considerably more than usual. Not only is it not as 
"batteries included" as one might like, sometimes it's necessary to build 
your own batteries! It's also sometimes hard to tell who the experts are, 
especially as many of us mostly work in fairly restricted areas - often 
way away from any IO, which is often a source of woe but whose avoidance 
leaves something of a hole in some coders' expertise.

The current state of error-handling is something of a mess, and there are 
at least two good reasons for this:

* Errors originating in the IO monad have a significantly different nature 
to those generated by pure code
* We don't have[1] extensible variants, leading to the kinds of problem 
you complain about with scalability as the number of potential errors 
increases

It's been a while since I was in the tutorial market, but I don't think 
many tutorials address the first point properly and it's a biggie. Most IO 
functions are written to throw exceptions in the IO monad if they fail, 
which forces you to handle them as such. So, here's an example:

import System.IO

fileName = "foo.bar"

main = (do h <- openFile fileName ReadMode
           catch (hGetContents h >>= putStr)
                 (\e -> do putStrLn "Error reading file"
                           hClose h)
       ) `catch` (\e -> putStrLn "Error opening file")

On my machine, putting this through runhaskell results in a line "Error
opening file", as unsurprisingly there's no foo.bar. Producing an error
on reading is harder work, whereas if I change fileName to the program's
source I get the appropriate output. It may say something about me that I
didn't get this to compile first time - the culprit being a layout error,
followed by having got the params to openFile in the wrong order.

Caveats so far: there are such things as non-IO exceptions in the IO
monad, and catching them requires Control.Exception.catch, which thankfully
also catches the IO exceptions. If putStr were to throw an exception, I'd
need yet another catch statement to distinguish it (though it'd be handled
as-is). The sensible thing though is probably to use Control.Exception.bracket
(which is written in terms of catch) thusly:

import System.IO
import Control.Exception

filename = "foo.bar"

main = bracket (openFile filename ReadMode)   -- acquire the handle
               hClose                         -- release it, even on exceptions
               (\h -> hGetContents h >>= putStr)

So from here, we have two remaining problems:

1) What about pure errors?
2) What about new exception types?

I'll attack the second first, as there's a standard solution for IO and a
similar approach can be adopted in pure code. It's a fairly simple, if
arguably unprincipled, solution - use dynamic typing! Control.Exception offers
us throwDyn and catchDyn, which take advantage of facilities in
Data.Dynamic. Pure code can make use of Data.Dynamic in a similar manner
if needed. Personally I'm not too happy with this as a solution in most
cases, but it's no worse than almost every other language ever - I guess
Haskell's capabilities elsewhere have spoiled me.

As for pure errors, there're essentially two steps:

1) Find a type that'll encode both the errors and the success cases (Maybe 
and Either are in common use)
2) Write the appropriate logic

I'll not go into step 1 much, most of the time you want to stick with 
Maybe or Either (there's a punning mnemonic that "if it's Left it can't 
have gone right" - it's usual to use Right for success and Left for 
failure). The second point is where you get to adopt any approach from 
writing out all the case analysis longhand to using a monad or monad 
transformer based around your error type. It's worth being aware of 
Control.Monad.Error at this point, though personally I find it a little 
irritating to work with.
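
To make the pure side concrete, a small sketch (the names are invented for
illustration): an error type, a check returning Either, and do-notation in
the Either monad so the first Left short-circuits.

data ParseError = EmptyInput | NotANumber String
  deriving Show

parseNat :: String -> Either ParseError Int
parseNat "" = Left EmptyInput
parseNat s
  | all (`elem` ['0'..'9']) s = Right (read s)
  | otherwise                 = Left (NotANumber s)

-- Chain two pure checks; any Left is returned immediately.
addNats :: String -> String -> Either ParseError Int
addNats a b = do x <- parseNat a
                 y <- parseNat b
                 return (x + y)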

By the time you're building customised monads, you're into architecture 
land - you're constructing an environment for code to run in and defining 
how that code interfaces with the rest of the world, it's perhaps the 
closest thing Haskellers have to layers in OO design. If you find you're 
using multiple monads (I ended up with three in a 300 line lambda calculus 
interpreter, for example - Parsec, IO and a custom-built evaluation monad) 
then getting things right at the boundaries is important - if you've got 
that right and the monad's been well chosen then everything else should 
come easily. Thankfully, with a little practice it becomes possible to 
keep your code factored in such a manner that it's easy to refactor your 
way around the occasional snarl-ups that happen when a new change warrants 
re-architecting. That or someone just won buzzword bingo, anyway.

Anyway, I hope this's been helpful. 

[1] There are ways of impl

Re: [Haskell-cafe] GHC strange behavior

2008-02-16 Thread Jonathan Cast

On 16 Feb 2008, at 3:06 PM, Ruslan Evdokimov wrote:


Hi, all!

I have strange GHC behavior. Consider the code:

import Control.Parallel

main = print (o `par` (fromInteger e) / (fromInteger o))
  where
  [e,o] = map sum $ map (`filter` numbers) [even, odd]
  numbers = [1..10000000]


When compiled without -threaded it takes 19068 ms to run, with 396 Mb
total memory in use and %GC time 88.2%; the same with -threaded
and +RTS -N1. But with +RTS -N2 it takes only 3806 ms to run, with
3 Mb total memory in use and %GC time 8.1%. Why is that? Is it
a bug or did I miss something?


Wild guess?  If you leave o as a thunk, to be evaluated once the
program has e, then it has numbers, so you keep the entire 10-million
entry list in memory.  Evaluating e and o in parallel allows the
system to start garbage collecting cons cells from numbers much
earlier, which reduces residency (I'd've been unsurprised at more than
two orders of magnitude).  Managing the smaller heap (and especially
not having to copy numbers on each GC) then makes the garbage
collector go much faster, so you get a smaller run time.
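
Another way to see it (a sketch, not part of this reply): consuming the
list once, accumulating both sums strictly, also keeps residency small
without needing a second core.

import Data.List (foldl')

main :: IO ()
main = print (fromInteger e / fromInteger o :: Double)
  where
    -- One pass over the list; each accumulator is forced as it is updated.
    (e, o) = foldl' step (0, 0) [1 .. 10000000 :: Integer]
    step (se, so) n
      | even n    = let se' = se + n in se' `seq` (se', so)
      | otherwise = let so' = so + n in so' `seq` (se, so')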




I test it on dual-core Athlon X2 4200+ 2Gb running 64bit Gentoo  
system. gcc 4.2.2 and ghc 6.8.2.


jcc

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Working with multiple time zones

2008-02-16 Thread Dave Hinton
(This is a toy program to demonstrate only the part of my real program
that I'm having trouble with.)

Suppose I'm writing a program to print the current time in various
time zones. The time zones are to be given symbolically on the command
line in the form "Europe/London" or "America/New_York". The idea is
that either the operating system or the runtime library keeps track of
what time zones different places are in, and when they are on summer
time, so that my code doesn't have to worry about it.

Haskell has a TimeZone type in Data.Time.LocalTime, but it only
represents constant offsets from UTC — doesn't encode rules for when
the clocks change. And there doesn't seem to be any way of looking up
the time zone for a locality.

Data.Time.LocalTime has the getTimeZone function, which returns the
time zone for a given UTC time on the local machine — this takes care
of summer time, but by itself only works for the local machine's
locality.

If I was writing this program in C, I'd get round this by setting the
TZ environment variable to the locality I was interested in before
doing time conversions.

$ cat cnow.c
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <time.h>
void outTime (time_t utc, char *tzName)
{
  char env[100] = "TZ=";
  strcat (env, tzName);
  putenv (env);
  printf ("%s\t%s", tzName, asctime (localtime (&utc)));
}
int main (int argc, char **argv)
{
  int i;
  time_t utc = time (NULL);
  for (i = 1;  i < argc;  ++i)  outTime (utc, argv[i]);
  return 0;
}
$ gcc cnow.c -o cnow
$ ./cnow Europe/Paris Europe/Moscow Europe/London
Europe/Paris    Sat Feb 16 23:57:22 2008
Europe/Moscow   Sun Feb 17 01:57:22 2008
Europe/London   Sat Feb 16 22:57:22 2008

So far, so good. Here's the equivalent in Haskell:

$ cat hsnow.hs
import Data.Time
import Data.Time.LocalTime
import System.Environment
import System.Posix.Env
outTime utc env
  = do putEnv ("TZ=" ++ env)
       tz <- getTimeZone utc
       putStrLn (env ++ "\t" ++ show (utcToLocalTime tz utc))
main
  = do utc <- getCurrentTime
       mapM_ (outTime utc) =<< getArgs
$ ghc --make hsnow.hs -o hsnow
[1 of 1] Compiling Main ( hsnow.hs, hsnow.o )
Linking hsnow ...
$ ./hsnow Europe/Paris Europe/Moscow Europe/London
Europe/Paris    2008-02-16 23:59:11.776151
Europe/Moscow   2008-02-16 23:59:11.776151
Europe/London   2008-02-16 23:59:11.776151
$ ./hsnow Europe/Moscow Europe/London Europe/Paris
Europe/Moscow   2008-02-17 01:59:28.617711
Europe/London   2008-02-17 01:59:28.617711
Europe/Paris    2008-02-17 01:59:28.617711

Not good. GHC's runtime library seems to be taking the value of TZ the
first time it is called as gospel, and ignoring subsequent changes to
TZ.

So:

1. Is this a bug in GHC's Data.Time.LocalTime.getTimeZone?
2. If GHC's implementation is working as designed, how do I translate
the C program above into Haskell?

I'm running on Debian stable, with GHC 6.6.

Thanks.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Doubting Haskell

2008-02-16 Thread Robert Dockins
I'm going to try to respond the the main practical question in this message; 
perhaps others will feel up to addressing the more philosophical aspects.

(I see now that Cale has beaten me to the punch, but I guess I'll post this 
anyways...)

> Greetings Haskellers,
[snip quite a bit of discussion]

> Great. Next, translate the bit that
> says (pseudocode):
>
>   if(attempt_file_open)
> if(attempt_file_read)
>   process
>
> That's it. No fancy, complex error messages. Just check the error
> returns and only proceed if I have something to proceed with. Like
> grown-ups do. I *will* check my error returns. I have tormented too
> many newbies to *ever* consider doing anything else. If I cannot check
> my error returns I will not write the program.

You'll find in Haskell that the normal way of handling things like I/O errors 
is to use the exception handling mechanism.  There aren't usually "error 
returns" to check.  Instead you usually place error handlers at the positions 
where you want to be notified of errors using the "catch" or "handle" 
functions.  If you want to you can convert any IO action into one with an 
error return by using the "try" function.  The Control.Exception module is 
probably the one you want to check out.

http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Exception.html

[snip more discussion]

> If so, 
> I sincerely suggest an example or two, like the small but well formed
> programs in K&R, Stroustrup or Gosling saying things like:
>
>   if((fp = fopen(...)) != NULL)
>   {
> if(fgets(...) != NULL)
> {
>   printf(...);
> }
>
> fclose(...)
>   }

Here is a quick example I whipped up.  It includes both a pretty direct 
translation of the above code, and another version which is a little more 
idiomatic.

Rob Dockins

--- code follows 
import Control.Exception
import System.IO


main = direct_translation

direct_translation = do
  tryh <- try (openFile "test.txt" ReadMode)
  case tryh of
    Left err -> print err
    Right h -> do
      tryl <- try (hGetLine h)
      case tryl of
        Left err -> do print err; hClose h
        Right l -> do
          putStrLn l
          hClose h

the_way_I_would_do_it = handle (\err -> print err) $
  bracket (openFile "test.txt" ReadMode) hClose $ \h -> do
    l <- hGetLine h
    putStrLn l
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] GHC strange behavior

2008-02-16 Thread Ruslan Evdokimov
Hi, all!

I have strange GHC behavior. Consider the code:

import Control.Parallel

main = print (o `par` (fromInteger e) / (fromInteger o))
  where
  [e,o] = map sum $ map (`filter` numbers) [even, odd]
  numbers = [1..10000000]


When compiled without -threaded it takes 19068 ms to run, with 396 Mb total
memory in use and %GC time 88.2%; the same with -threaded and +RTS -N1.
But with +RTS -N2 it takes only 3806 ms to run, with 3 Mb total memory in use
and %GC time 8.1%. Why is that? Is it a bug or did I miss something?
I tested it on a dual-core Athlon X2 4200+ with 2Gb running a 64-bit Gentoo
system, with gcc 4.2.2 and ghc 6.8.2.

--
Ruslan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Response to unexpected doubt in haskell-cafe (was: Doubting Haskell)

2008-02-16 Thread Albert Y. C. Lai

Alan Carter wrote:

  if((fp = fopen(...)) != NULL)
  {
if(fgets(...) != NULL)
{
  printf(...);
}

fclose(...)
  }


This reminds me of a 1976 article written by David Parnas and Harald 
Würges: Response to undesired events in software systems. Since it's 
old, it is harder to find, but here are a few things to try:


If you have download privilege on ACM Digital Library,
http://portal.acm.org/citation.cfm?id=800253.807717

(If not, you can still see the full citation, the abstract, etc.)

The paper is also collected in this book full of Parnas's papers:
Software Fundamentals: collected papers by David L. Parnas. Edited by 
Daniel M. Hoffman and David M. Weiss.


Someone else made slides to present this paper:
http://www.cs.virginia.edu/~wh5a/personal/Quals/misc/ParnasPaper%20on%20stanley/ResponseToUndesiredEvents.ppt


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Doubting Haskell

2008-02-16 Thread Cale Gibbard
On 16/02/2008, Alan Carter <[EMAIL PROTECTED]> wrote:
> Greetings Haskellers,
>
> I'm still hoping that this is solvable. That the instinctive
> priorities of production programmers are just different to those of
> researchers, and in fact it *is* possible to open a file *and* read
> it, checking *both* error returns, without being driven insane. If so,
> I sincerely suggest an example or two, like the small but well formed
> programs in K&R, Stroustrup or Gosling saying things like:
>
>   if((fp = fopen(...)) != NULL)
>   {
> if(fgets(...) != NULL)
> {
>   printf(...);
> }
>
> fclose(...)
>   }
>
> Best wishes - and still hoping I'm wrong after all
>
> Alan Carter

Well, first of all, have you read the documentation for System.IO?

http://www.haskell.org/ghc/docs/latest/html/libraries/base/System-IO.html

That has all the corresponding functions you need. I'm not sure I
understand completely how you managed to spend two weeks struggling
with this before asking. Two minutes on #haskell, or a quick question
about how to open and read a file should have got you a useful
response. :)

First, I'll write the program in a straightforward, but extremely
explicit manner, handling possible errors and managing clean up
explicitly. This code is rather verbose, so I'll then show some other
less verbose ways to handle things while still maintaining safety.

So, the first version:

import System.IO
import Control.Exception (try)

main = do mfh <- try (openFile "myFile" ReadMode)
          case mfh of
            Left err -> do putStr "Error opening file for reading: "
                           print err
            Right fh ->
              do mline <- try (hGetLine fh)
                 case mline of
                   Left err -> do putStr "Error reading line: "
                                  print err
                                  hClose fh
                   Right line -> putStrLn ("Read: " ++ line)

Okay, so this is hopefully fairly self-explanatory to a C-programmer.
The only potentially-confusing part is the function 'try', imported
from Control.Exception. What it does is to catch all possible
exceptions, and reflect them through the return value of the action.
If an exception is thrown, 'try' will catch it, and give us a value of
the form (Left e), for e being the exception. If instead, the
operation succeeds without an exception, we get a value (Right x),
where x is the normal return value of the action.
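
(For reference, the type of 'try' as it stood in the Control.Exception
of that era, so take this as a sketch:

  try :: IO a -> IO (Either Exception a)

The Left and Right in the code above are just the constructors of the
ordinary Either type.)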

The successive 'case' expressions are used to pattern match on this,
and handle the errors by printing out an explanatory message. Some
example runs of this program:

[EMAIL PROTECTED]:~$ rm myFile
[EMAIL PROTECTED]:~$ ./read
Error opening file for reading: myFile: openFile: does not exist (No
such file or directory)
[EMAIL PROTECTED]:~$ touch myFile
[EMAIL PROTECTED]:~$ ./read
Error reading line: myFile: hGetLine: end of file
[EMAIL PROTECTED]:~$ echo "hello" >> myFile
[EMAIL PROTECTED]:~$ ./read
Read: hello

This program actually does more error handling than your example C
program, so let's tone it down a bit, and make use of some nice IO
operations provided to handle errors and clean things up safely in the
event of a failure.

import System.IO

main = withFile "myFile" ReadMode $ \fh ->
         do line <- hGetLine fh
            putStrLn ("Read: " ++ line)

The function 'withFile' takes a filename, a mode in which to open the
file, and a function from a file handle to the action to be performed
with that handle. It wraps that action up inside an exception handler
which ensures that the file handle is safely closed if an exception is
thrown. (This doesn't matter much in our small example, but I'm sure
you'll appreciate how that's an important thing.)
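
(Its type, for reference:

  withFile :: FilePath -> IOMode -> (Handle -> IO r) -> IO r

which is why the last argument above is written as a lambda.)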

We don't handle the exceptions explicitly in this program, but we
still could. There are a host of exception-handling mechanisms in
Control.Exception, ranging from simple value-oriented things like try,
to more explicit operations for wrapping things in exception handlers,
like catch:

catch :: IO a -> (Exception -> IO a) -> IO a

Or to get more selective:

catchJust :: (Exception -> Maybe b) -> IO a -> (b -> IO a) -> IO a

This takes a function that gets to decide whether to handle the
exception and, at the same time, may transform the exception in some
way before passing it on to the exception handler.
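
For instance, a rough sketch (using the same made-up "myFile" as in the
examples above) that catches only IO errors via the ioErrors selector
from the same module, letting anything else propagate:

import System.IO
import Control.Exception (catchJust, ioErrors)

main = catchJust ioErrors
         (do fh <- openFile "myFile" ReadMode
             line <- hGetLine fh
             putStrLn ("Read: " ++ line)
             hClose fh)
         (\err -> putStr "IO error: " >> print err)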

For more information about exceptions, check out the documentation for
Control.Exception here:

http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Exception.html

I assure you that Haskell is a very reasonable programming language in
which to write safe and correct programs. There are whole companies
founded on writing high-assurance software in Haskell.

If you have more questions, I would be happy to answer them, either
here, or perhaps more comfortably, on IRC, #haskell on
irc.freenode.net. It's a very beginner friendly channel, and asking
questions there is a great way to learn the language.

[Haskell-cafe] Doubting Haskell

2008-02-16 Thread Alan Carter
Greetings Haskellers,

I'm a Haskell newbie, and this post began as a scream for help. Having
slept on it I find myself thinking of Simon Peyton-Jones' recent
request for good use cases. Perhaps a frustrated - and doubting -
newbie can also provide a data point. If my worries are unfounded (and
I hope they are), I think it's significant that to me, today, they
seem real enough. Please understand that I'm not being negative for
the sake of it - rather I'm describing what Haskell looks like from
the outside.

Let me put it this way. Imagine that two weeks ago my forward-thinking
and risk-embracing boss asked me to evaluate Haskell for the upcoming
Project X. Further imagine that she ensured I was able to sit in the
corner emitting curses for the whole two weeks, and on Monday I have
to provide my report.

At this point, two weeks in, I would be forced to say that I have no
reason to believe that Haskell is useful for real world tasks. ghc is
an industrial strength compiler for a toy language. While remarkable
claims are made for it, in practice even the experts are often unable
to implement the most basic behaviours, and where they are able to
implement, they find that their program has become so complex that
they are unable to describe or discuss the result. Likely this is a
deep problem, not a shallow one. The Haskell community is in denial
over this, leading to phenomenal time wasting as one goes round and
round in circles playing word games with documentation. This risks a
return of the chronic embuggerance that we thought we'd escaped when
Vista appeared and the set of people who would have to write Windows
device drivers reduced to Hewlett Packard employees, Joanna Rutkowska
and criminals. When people enthuse about Haskell, we should run a
program called Cat.hs from the haskell.org website, throw fruit at
them and laugh.

Strong words, but in all honesty I *want* to believe, and if I would
make such a report I imagine hundreds if not thousands would say the
same thing. I'm hoping I'm wrong about this, and what's actually
needed is some work on communication (perhaps from a production
programming point of view, which I'd be keen to help with).

What got me started with Haskell was the video of an Intel employee
holding a Teraflops in his hand. I still remember the very silly
September 1991 edition of Scientific American, which asked if a
Teraflops would *ever* be built. What a stupid question! Stack up
enough VIC20s and eventually you'll get a Teraflops. The question
should have been "when". Now it's the size of a CD, and only 80 cores
are needed. Unfortunately keeping 80 cores running is tricky. I know
this from writing some heavy parallel stuff in the mid-90s. It was all
quite clever in its day. Chuck bloated and unguessable CORBA, do
something light with TCP/IP (Beowulf took that to extremes). Neat
linkage like rpcgen gave C, so that I could run fast on an SMP Sequent
with 30 cores or on a floorfull of about 70 Sun pizza boxen at night.

Unfortunately despite having a nice framework, tracing rays is still
hard (the rays and medium were... interesting). Making a problem
parallel required a sneaky and dedicated person's sincere skull-sweat.
Worse, the solutions so produced had a horrible structural instability
about them. Just a small change to the requirement could require a
computed value where it wasn't needed before, so that it resulted in
big changes to the implementation. The skull-sweating would be needed
all over again. (Remember that the big point about objects, which e.g.
Booch always emphasized, was that a well chosen set of classes maps
well to the domain and so reduces such structural instability.) Even
then, it was devilish hard to keep 70 "cores" busy.

So watching the Intel guy got my klaxons going. We now need to be able
to do parallel with ease. Functional programming just got really
important. It's years since I last played with Scheme, but I quickly
moved on because I could see the "which Scheme" problem becoming a
millstone round everyone's necks outside of research contexts. Ditto
Lisp. So Haskell. Grown-up compiler, one standard and (apparently) a
decent corpus of example code and tutorials. I might be an imperative
programmer, but I do lapse - for example I find it very easy to
generate swathes of cross referenced documentation using m4. My head
goes kind of weird after a few hours, such that m4 seems sane and it's
the rest of the world that's ungainly, so maybe it should be banned
like DMT, but I like it. I felt able to enter the functional world.

I'll omit the first week of suffering, which I see is a well
documented rite of passage. (Although most evaluators will have left
the building by the end of week one so it's not helping popularity.
Perhaps there could be Squires of the Lambda Calculus who haven't done
the vigil, mortification of flesh and so on?) Eventually a 3 page
introduction on the O'Reilly website together with a good document
called "Haskell for C Programmers"

[Haskell-cafe] Haskell project support and analysis on ohloh

2008-02-16 Thread Don Stewart
Recently ohloh.net added support for analysing haskell code.
You can see a range of Haskell projects analysed on ohloh here:

http://www.ohloh.net/projects/search?q=haskell

Including:

http://www.ohloh.net/projects/11789?p=nhc98
http://www.ohloh.net/projects/6869?p=xmonad
http://www.ohloh.net/projects/4078?p=HUGS
http://www.ohloh.net/projects/6949?p=yi
http://www.ohloh.net/projects/10496?p=bytestring
http://www.ohloh.net/projects/11766?p=parsec
http://www.ohloh.net/projects/11779?p=mtl
http://www.ohloh.net/projects/11729?p=binary
http://www.ohloh.net/projects/11769?p=X11
http://www.ohloh.net/projects/11790?p=array

Which gives rise to some fun 'reports' (and also helps non-Haskell
people notice that there are lots of Haskell projects, and developers
with experience):

xmonad:

" Over the past twelve months, 34 developers contributed new code to
xmonad.  This is one of the largest open-source teams in the world,
and is in the top 2% of all project teams on Ohloh. "

nhc98:

"nhc has a mature, well established code base. The first lines of
source code were added to nhc98 in 1999. This is a relatively long
time for an open source project to stay active, and can be a very
good sign.

A long source control history like this one shows that the project
has enough merit to hold contributors' interest for a long time. It
might indicate a mature and relatively bug-free code base, and can
be a sign of an organized, dedicated development team. " 

Now, they don't support analysing darcs repos, but they do support git,
and with darcs-to-git and darcs2git I've been able to (slowly) get a
bunch of repos into git form. You can find the git conversions here, if
you're interested:

http://www.cse.unsw.edu.au/~dons/git/

(git clone http://www.cse.unsw.edu.au/~dons/git/parsec for example).

GHC, base and lambdabot are still in the process of conversion.
If you want to do your own conversions, I suggest trying darcs2git
first, which is very fast. If that doesn't succeed, try darcs-to-git,
which is *much* slower (damn ruby), but seems to be more robust.
You can then register the resulting git repo for analysis.

The global language statistics, and individual Haskell statistics will
update in the next few hours, I suspect.

http://www.ohloh.net/languages/38

Though at the time of writing they appear out of date.

Happy hacking.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Designing DSL with explicit sharing [was: I love purity, but it's killing me]

2008-02-16 Thread Matthew Naylor
Hi Oleg,

at the possible risk of straying from Tom's original problem, consider
the following generator that could conceivably occur in practice:

> sklansky f [] = []
> sklansky f [x] = [x]
> sklansky f xs = left' ++ [ f (last left') r | r <- right' ]
>   where
>     (left, right) = splitAt (length xs `div` 2) xs
>     left'  = sklansky f left
>     right' = sklansky f right
>
> test_sklansky n = runState sk exmap0
>   where
>     sk = sequence (Prelude.map unA (sklansky add xs))
>     xs = Prelude.map (variable . show) [1..n]

(Example due to Chalmers folk, Mary Sheeran and Emil Axelsson;
sklansky is similar to scanl1, but contains more parallelism.)
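
(A quick sanity check of what it computes, and my own illustration
rather than part of the Chalmers example: for an associative f,
sklansky f agrees with scanl1 f, it just has a shallower dependency
structure.)

> sanity = sklansky (+) [1..8] == scanl1 (+) [1..8]
>   -- True; both give [1,3,6,10,15,21,28,36]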

If a 128-bit processor were being developed, sklansky could reasonably
be passed 128 elements,

  *Main> test_sklansky 128-- on an AMD64 2200
  (3.71 secs, 296129440 bytes)

and could form part of a larger expression, and be called several
times.  So I think this is close to a real situation where CSE is not
practical.  But like you say, a human would not write such a humungous
expression by hand; hand-written expressions may therefore be ideal
for crunching by your CSE method.

Still, we might like to write generators like sklansky.  Hope is
offered by the new "let_" construct, but it's not clear how to use it
in this case. The problem is that sklansky is a function over lists of
expressions rather than single expressions, and the "let_" construct
can only bind single expressions and return single expressions.  To
write sklansky using your approach, it seems (to me) that the DSL's
expression type needs to be extended to support lists.  Will this not
be complicated and introduce notational inconvenience?  (In an earlier
post, I outlined a method for embedding algebraic data types in
Haskell, and it does have some notational burden.)

Matt.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Help with error

2008-02-16 Thread Jeff φ
2008/2/15 Antoine Latter <[EMAIL PROTECTED]>:

> (sent to the list this time)
>
> The problem is in the type-signature for from_seq:
>
> from_seq :: (Sequence seq) => (seq e) -> (t e)
>
> Neither the From_seq class or the type signature of the from_seq
> function place any restrictions on the type of e, so the type can be
> rewritten as:
>
> from_seq :: forall e seq . Sequence seq => (seq e) -> (t e)
>
> That is, the class explicitly defines from_seq as having no restrictions
> on e.
>
> Your from_seq' function requires the type e (in the error, e1) to
> inhabit IArray a e.
>
> The IArray constraint isn't compatible with the From_seq class
> definition.  You may need to explore multi-parameter type classes:
> http://en.wikibooks.org/wiki/Haskell/Advanced_type_classes
>
> Does this help?
>

Yes, this helped.  I added the type variable, e, to my From_seq class and it
worked.  Thank you for the explanation.   Here are the changes I made:

class From_seq t e where
    from_seq :: (Sequence seq) => (seq e) -> (t e)

instance From_seq [] e where
    from_seq seq
        | snull seq = []
        | otherwise = (shead seq) : (from_seq (stail seq))

instance (Ix i, Num i, IArray a e) => From_seq (a i) e where
    from_seq seq = from_seq' seq
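
A detail worth noting for anyone pasting this in: a two-parameter
class like From_seq, and an instance head of the shape (a i), will
presumably need some extensions switched on (or the old catch-all
-fglasgow-exts). A minimal sketch of what I'd expect at the top of
the module:

{-# LANGUAGE MultiParamTypeClasses #-}  -- From_seq takes two parameters
{-# LANGUAGE FlexibleInstances #-}      -- for the From_seq (a i) e instance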
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe