Re: GHC on IA-64

2006-03-27 Thread Simon Marlow
Thanks for the analysis Duncan.  Would you mind submitting a ticket for 
this?


What does the crash look like in gdb?  Does it look like it has jumped 
into the middle of nowhere, or is it a crash in some actual code that 
you can disassemble?  To find out what was happening just before the 
crash, the recipe is usually:


  - make it deterministic: +RTS -C0

  - set a breakpoint in a common basic block, e.g. stg_upd_frame_info,
stg_upd_frame_1_info, stg_ap_p_info.

  - ignore 1 999, run, info break, ignore 1 N-1, run (where N is the
number of times the breakpoint was hit).

  - single step forwards to find out what was going on just before the
crash.  If you have to single step a long way, it might be better
to pick a different breakpoint - take note of which code blocks
you step through.

Cheers,
Simon

Duncan Coutts wrote:

On Sat, 2006-03-25 at 21:29 +, Duncan Coutts wrote:



If anyone wants to look into it, I could narrow down the threshold of
the -funfolding-use-threshold flag at which the bug is triggered and
compare the assembly output.



I found the threshold is between 14 and 15. With -funfolding-use-threshold14 it
works OK, and with -funfolding-use-threshold15 the mangler complains as
in my original message.

Here is the assembly output for them both:
http://haskell.org/~duncan/ghc/ia64/SHA1-unfold14.raw_S
http://haskell.org/~duncan/ghc/ia64/SHA1-unfold15.raw_S

This does seem to be a critical unfolding threshold, since the file sizes
are dramatically different. On the other hand, much of the two files is the
same (modulo symbol names).

In the case that breaks the mangler (SHA1-unfold15.raw_S) you can see
there is a function that has a massive straight line section of code.

It is the crossing of this unfolding threshold that seems to have
allowed the second version to inline almost everything and give nice
short straight line code.

The specific bit that the mangler complains about is in
SHA1-unfold15.raw_S line 1304.

Prologue junk?: .proc s60A_ret#
s60A_ret:
.save ar.pfs, r107
alloc r107 = ar.pfs, 0, 77, 8, 0
.body

The whole bit is:
s60A_ret:
.prologue 12, 106
.save ar.pfs, r107
alloc r107 = ar.pfs, 0, 77, 8, 0
mov r108 = r1
.save rp, r106
mov r106 = b0
.body
;;

Which is not dramatically different from the same bit from
SHA1-unfold14.raw_S:

s5pF_ret:
.prologue 12, 61
.save ar.pfs, r62
alloc r62 = ar.pfs, 0, 32, 0, 0
mov r63 = r1
.save rp, r61
mov r61 = b0
.body
;;

Interestingly, all other alloc directives in both versions are exactly:
alloc r62 = ar.pfs, 0, 32, 0, 0
There is only that one occurrence of
alloc r107 = ar.pfs, 0, 77, 8, 0
in the whole of SHA1-unfold15.raw_S


From looking at the Evil Mangler it is clear that it does not expect the
different form of alloc directive:

elsif ($TargetPlatform =~ /^ia64-/) {
    $p =~ s/^\t\.prologue .*\n//;
    $p =~ s/^\t\.save ar\.pfs, r\d+\n\talloc r\d+ = ar\.pfs, 0, 3[12], \d+, 0\n//;

So if we relax the regexp from 3[12] to just \d+ then the mangler no
longer barfs.
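To see which alloc directives each pattern accepts, the two regexes can be
compared outside the mangler. Here is a small Python sketch (the patterns are
transliterated from the Perl above; this is an illustration, not the mangler's
actual code):

```python
import re

# Transliteration of the mangler's strict pattern: only 31 or 32 locals.
strict = re.compile(r"alloc r\d+ = ar\.pfs, 0, 3[12], \d+, 0")
# The relaxed variant discussed above: any number of locals.
relaxed = re.compile(r"alloc r\d+ = ar\.pfs, 0, \d+, \d+, 0")

usual   = "alloc r62 = ar.pfs, 0, 32, 0, 0"    # from SHA1-unfold14.raw_S
oddball = "alloc r107 = ar.pfs, 0, 77, 8, 0"   # from SHA1-unfold15.raw_S

print(bool(strict.search(usual)), bool(strict.search(oddball)))    # True False
print(bool(relaxed.search(usual)), bool(relaxed.search(oddball)))  # True True
```

So the strict check deliberately rejects any frame that allocates more than 32
locals, while the relaxed one lets the 77-local frame through.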

Unfortunately it seems that when recompiled with this change to the
mangler and the original -funfolding-use-threshold20, darcs dies with an
illegal instruction error when performing some repo operations. I need
more help to figure out in gdb what's going on. I couldn't figure out
how to get a disassembly in gdb of the code executing just before the
error.

So it's not clear if this illegal instruction bug is caused by my
mangler change or if it's just another effect of the extreme unfolding.

With the -funfolding-use-threshold flag removed, darcs builds and passes
98% of its test suite.

Duncan


___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: GHC on IA-64

2006-03-27 Thread Duncan Coutts
Hi Simon,

On Mon, 2006-03-27 at 10:29 +0100, Simon Marlow wrote:
> Thanks for the analysis Duncan.  Would you mind submitting a ticket for
> this?
>
> What does the crash look like in gdb?  Does it look like it has jumped
> into the middle of nowhere, or is it a crash in some actual code that
> you can disassemble?  To find out what was happening just before the
> crash, the recipe is usually:

Matthew Chapman explained why it happened. Let me quote:


> > > The reason this regex is so specific is that it acts as a sanity check.
> > > The way GHC works on IA64 is that every STG function runs in the same
> > > register stack window (this allows tailcalls to work).  This register
> > > stack window has 32 locals allocated, so a function must not use any
> > > more than this.
> >
> > So that explains why it didn't work when I merely relaxed the regexp in
> > the mangler.
>
> Yep, the Illegal instruction you get is exactly because it's
> accessing past the end of the allocated register window, and that
> particular fault is mapped by Linux to Illegal instruction.


So it's not a surprise. The solution is not to relax the mangler as I
tried, but to make sure gcc never uses more than 32 locals and instead
spills to the stack when register pressure is very high.

With gcc 3.3 the problem only happened on an example of extreme
unrolling in darcs. With gcc 4.1 on ia64 the problem happened when
rebuilding ghc itself. For one function in one ghc module, gcc was
allocating 33 locals, which is one more than allowed. This problem is
only going to get worse as gcc gets smarter on ia64. (Other than that,
gcc 4.1 seems to work with ghc on ia64. Darcs compiled fine when using
gcc 4.1. Interestingly when darcs was compiled with ghc+gcc4.1 rather
than ghc+gcc3.3, the darcs binary stopped causing unaligned access
warnings from the kernel.)

Matthew suggested using -mfixed-range=loc32-loc79 or
-mfixed-range=loc16-loc79 or even -mfixed-range=in0-in7,loc16-loc79.

This does appear to work in reducing the number of locals that gcc
allocates, but then the mangler chokes on the bit of code immediately
after (that appears to be spilling registers to the stack instead).

We haven't figured that latter bit out yet.

If you're interested, Simon, I have a registerised build of ghc-6.4.1
for ia64 with GHCi available. It's currently sitting on the Gentoo
mirrors as distfiles/ghc-bin-6.4.1-ia64.tbz2. Of course it installs
into /usr and would need some sed -i work to relocate. In fact if you
want these things (linked or hosted) for your download page, we've got
x86, amd64, alpha, hppa, sparc, ppc, ppc64 and ia64. We're having
trouble with mips.

If you ever feel like getting it to work on mips linux I can probably
get you access to a 16-way SMP MIPS @ 250MHz with 8GB RAM :-).

Duncan

___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


trying to get a registerised alpha build

2006-03-27 Thread Duncan Coutts
More on the porting theme,

We've had an unregisterised alpha port in gentoo for some time and two
of our alpha devs with an interest in ghc decided to try to get the
registerised build working.

They started by turning off the mangler and trying to get the register
bit working. This is ghc-6.4.1.

It works up to building the libs and then gcc complains:

unable to find a register to spill in class `R27_REG'

This looks similar to an error Ian found:
http://www.haskell.org/pipermail/glasgow-haskell-users/2003-September/005692.html

though in that example the error occurred while compiling C modules,
whereas here it's with a .hc file.


==fptools== make all -wr;
 in /var/tmp/portage/ghc-6.4.1-r2/work/ghc-6.4.1/libraries/base

../../ghc/compiler/ghc-inplace -H32m -O0 -fno-asm-mangling
-fglasgow-exts -cpp -Iinclude -#include HsBase.h -funbox-strict-fields
-ignore-package base -fgenerics  -fgenerics-c GHC/Err.lhs-boot -o
GHC/Err.o-boot  -ohi GHC/Err.hi-boot
../../ghc/compiler/ghc-inplace -H32m -O0 -fno-asm-mangling
-fglasgow-exts -cpp -Iinclude -#include HsBase.h -funbox-strict-fields
-ignore-package base -fgenerics  -fgenerics-c GHC/Base.lhs -o
GHC/Base.o  -ohi GHC/Base.hi
/tmp/ghc9727.hc: In function `s1BE_ret':
/tmp/ghc9727.hc:1688: error: unable to find a register to spill in class
`R27_REG'
/tmp/ghc9727.hc:1688: error: this is the insn:
(insn 24 23 26 1 (parallel [
(set (reg/v:DI 14 $14 [ R1 ])
(div:DI (reg:DI 24 $24 [77])
(reg:DI 25 $25 [78])))
(clobber (reg:DI 23 $23))
(clobber (reg:DI 28 $28))
]) 43 {*divmoddi_internal_er} (insn_list 22 (insn_list 21
(nil)))
(expr_list:REG_UNUSED (reg:DI 23 $23)
(expr_list:REG_UNUSED (reg:DI 28 $28)
(expr_list:REG_DEAD (reg:DI 25 $25 [78])
(expr_list:REG_DEAD (reg:DI 24 $24 [77])
(nil))
/tmp/ghc9727.hc:1688: confused by earlier errors, bailing out
make[2]: *** [GHC/Base.o] Error 1
make[1]: *** [all] Error 1
make[1]: Leaving directory
`/var/tmp/portage/ghc-6.4.1-r2/work/ghc-6.4.1/libraries'
make: *** [build] Error 1

We stared hard at the MachRegs.h file and the Alpha ABI documentation
for some time but couldn't see what would be wrong. We tried not using
so many registers (removing REG_R7 and REG_R8) but that did not make any
difference.

What is odd here is that the class is `R27_REG' rather than a more
general class. The R27 is used as:

Procedure value (PV) register. In a standard call, the procedure
value of the procedure being called is passed in this register.
(See Section 2.2.) In a standard call, this register can be
modified by the called procedure without being saved and
restored. 

http://www.cs.arizona.edu/computer.help/policy/DIGITAL_unix/AA-PY8AC-TET1_html/callCH2.html#GENERAL-PURPOSE-INTEGER-REGISTER

And this is not a register that we're stealing. We were using gcc 3.4.4.
We did not try with gcc 2.95.

When looking at MachRegs.h we found that this bit:
#  define NCG_Reserved_I1 22
#  define NCG_Reserved_I2 27
#  define NCG_Reserved_F1 f29
#  define NCG_Reserved_F2 f30

is no longer used anywhere. It was used in the NCG in ghc 5.x but not in
6.4.x.

Duncan

___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: [GHC] #734: Spurious type variable scope error report

2006-03-27 Thread GHC
#734: Spurious type variable scope error report
--+-
  Reporter:  [EMAIL PROTECTED]   |  Owner:
  Type:  bug  | Status:  closed
  Priority:  normal   |  Milestone:
 Component:  Compiler (Type checker)  |Version:  6.4.1 
  Severity:  normal   | Resolution:  fixed 
  Keywords:   | Os:  Linux 
Difficulty:  Unknown  |   Architecture:  x86_64 (amd64)
--+-
Changes (by simonpj):

  * resolution:  => fixed
  * status:  new => closed

Comment:

 Fixed, thank you.

 Simon

-- 
Ticket URL: http://cvs.haskell.org/trac/ghc/ticket/734
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler
___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: building docs fails for stable ghc-6.4.2 (2006/03/23 snapshot)

2006-03-27 Thread Sven Panne
Am Freitag, 24. März 2006 15:55 schrieb Duncan Coutts:
> I built yesterday's ghc-6.4.2 snapshot (2006/03/23) and had a problem
> building the docs. [...]

Things have improved, but "make html" still fails, this time when building the
Haddock documentation for the base package:

   haddock: modules are recursive: Control.Exception GHC.Conc GHC.TopHandler 
Prelude System.IO GHC.Handle GHC.IO Data.IORef

SimonM: Is "make html" part of the nightly builds? It should definitely be, as
in the past it has turned out to be one of the stages with the most bugs in it...

Cheers,
   S.
___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Still some GHC trouble when building Haddock

2006-03-27 Thread Sven Panne
Some days ago I reported a regression in GHC:

   http://www.haskell.org//pipermail/cvs-all/2006-March/046637.html

I've fixed Happy to produce correct code, so the 2nd problem mentioned is 
already solved. Nevertheless, the regression remains even in yesterday's GHC 
built from darcs HEAD (what's the right terminology here?). Shouldn't this be 
caught by the nightly builds somehow?

Cheers,
   S.
___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


[GHC] #735: Missing case in fgl/Data/Graph/Inductive/Internal/RootPath.hs

2006-03-27 Thread GHC
#735: Missing case in fgl/Data/Graph/Inductive/Internal/RootPath.hs
--+-
Reporter:  [EMAIL PROTECTED]  |Owner: 
Type:  bug|   Status:  new
Priority:  normal |Milestone: 
   Component:  libraries (other)  |  Version:  6.4.1  
Severity:  normal | Keywords: 
  Os:  Multiple   |   Difficulty:  Easy (1 hr)
Architecture:  Multiple   |  
--+-
In the package `fgl', in the file
 Data/Graph/Inductive/Internal/RootPath.hs
 a case is missing from the function `findP', causing it to crash on
 some graphs (but not all).

 Patch that fixes the bug:
 {{{


 --- RootPath.hs 2006-03-27 18:16:26.0 -0500
 +++ RootPath.hs 2006-03-27 18:16:01.0 -0500
 @@ -34,6 +34,7 @@
  -- | Find the first path in a tree that starts with the given node
  findP :: Node -> LRTree a -> [LNode a]
  findP _ []  = []
 +findP v ((LP []):ps)= findP v ps
  findP v ((LP (p@((w,_):_))):ps) | v==w  = p
  | otherwise = findP v ps
 }}}
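The control flow of the fix is easy to model outside Haskell. Below is a
hypothetical Python rendering (names invented for illustration; a path is a
list of (node, label) pairs standing in for an LP value), showing that the
unpatched match falls over on an empty path while the patched version skips
it:

```python
def find_p_unpatched(v, tree):
    """Mirror of the original findP: no clause for an empty path."""
    for path in tree:
        (w, _label) = path[0]      # fails when path == [], like the missing case
        if w == v:
            return path
    return []

def find_p_patched(v, tree):
    """Mirror of the patched findP: empty paths are skipped."""
    for path in tree:
        if not path:               # the added clause: findP v ((LP []):ps) = findP v ps
            continue
        (w, _label) = path[0]
        if w == v:
            return path
    return []

tree = [[], [(3, "a"), (1, "b")]]   # first path empty, as in the crashing graphs
print(find_p_patched(3, tree))      # [(3, 'a'), (1, 'b')]
# find_p_unpatched(3, tree) would raise IndexError on the empty first path
```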

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/735
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler
___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: How to access hs_free_fun_ptr from a Visual Studio DLL

2006-03-27 Thread Simon Marlow

Sven Panne wrote:

Am Samstag, 25. März 2006 20:00 schrieb Brian Hulley:


I've found a workaround to the problem below: instead of trying to use
hs_free_fun_ptr, I instead pass a FunPtr to the Haskell function
freeHaskellFunPtr into my DLL, and use this to free everything, finally
using it to free itself (!) which I assume should be safe. [...]



It has been quite some time since I've worked on GHC's Adjustor.c and Hugs' 
FFI, but IIRC it is in general *not* safe to do this. On some platforms code 
is being generated dynamically for these kinds of callbacks, which has already 
been freed by the time the callback returns. This might appear to work, 
depending on your processor architecture and dynamic memory management 
behaviour, but it's better not to rely on this. Perhaps the FFI spec should 
be clearer here.


I'm pretty sure this *does* work with GHC, at least on some platforms 
(definitely x86 and x86_64, I'm not sure about the others).  We're 
careful not to require any dynamically-allocated code to run when the 
callback returns.


I agree the FFI spec should be clearer here.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Still problems building ghc 6.5 with ghc 6.4

2006-03-27 Thread Simon Marlow

Michael Marte wrote:

Hello *,

when building ghc 6.5 with ghc 6.4 and alex 2.0.1, the build fails as 
follows:


make -C ghc/compiler stage=2
make[2]: Entering directory `/home/marte/fptools/ghc/compiler'
../../ghc/compiler/ghc-inplace -H16m -O  -istage2/utils  
-istage2/basicTypes  -istage2/types  -istage2/hsSyn  -istage2/prelude  
-istage2/rename  -istage2/typecheck  -istage2/deSugar  -istage2/coreSyn  
-istage2/specialise  -istage2/simplCore  -istage2/stranal  
-istage2/stgSyn  -istage2/simplStg  -istage2/codeGen  -istage2/main  
-istage2/profiling  -istage2/parser  -istage2/cprAnalysis  
-istage2/ndpFlatten  -istage2/iface  -istage2/cmm  -istage2/nativeGen  
-istage2/ghci -Istage2 -DGHCI -package template-haskell -threaded 
-package readline -DUSE_READLINE -cpp -fglasgow-exts -fno-generics 
-Rghc-timing -I. -IcodeGen -InativeGen -Iparser -package unix -package 
Cabal -ignore-package lang -recomp -Rghc-timing  -H16M '-#include 
hschooks.h' -ignore-package ghc   -fgenerics  -funbox-strict-fields  
-c parser/Lexer.hs -o stage2/parser/Lexer.o  -ohi stage2/parser/Lexer.hi


parser/Lexer.x:578:10:
   lexical error in string/character literal at character '\955'
ghc: 51822676 bytes, 5 GCs, 98216/98216 avg/max bytes residency (1 
samples), 16M in use, 0.00 INIT (0.00 elapsed), 0.62 MUT (0.84 elapsed), 
0.11 GC (0.13 elapsed) :ghc

make[2]: *** [stage2/parser/Lexer.o] Error 1
make[2]: Leaving directory `/home/marte/fptools/ghc/compiler'
make[1]: *** [stage2] Error 2
make[1]: Leaving directory `/home/marte/fptools'
make: *** [bootstrap2] Error 2

Is this problem caused by alex?


This is with a recently checked-out HEAD?  I thought I fixed this 
problem.  It isn't caused by Alex, the issue is that 6.4 didn't have the 
necessary Unicode functionality in its base package.  The fix was to 
include it in the libcompat library when building GHC, but for some 
reason this appears not to be happening for you.


Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Message GHC/PrimopWrappers.hs:133:29: Not in scope: `GHC.Prim.quotInteger2Exp#' building GHC with additional primitive operation

2006-03-27 Thread Simon Marlow

Thorkil Naur wrote:

Thanks a lot, that removed some obstacles. Unfortunately, not all. Following 
successful "make clean" and "make all" in ghc/compiler and 
libraries/base, a "make all" in the top-level directory reported:


  ../../ghc/compiler/stage1/ghc-inplace -o stage2/ghc-6.4.1 -H16m -O  

... snip...

  /home/tn/tn/Haskell/ghc/unpack/ghc-6.4.1/ghc/rts/libHSrts.a(Linker.o):
(.data+0x41c): undefined reference to `quotInteger2Expzh_fast'
  collect2: ld returned 1 exit status


And that message persisted, even when I tried "make clean" and "make all" in 
the top-level directory.


quotInteger2Expzh_fast is the function you are adding to PrimOps.cmm to 
implement the primop.  The patch in your original message indicated that 
you had added a stub for this function, so it should link ok.  I don't 
understand what has gone wrong.


You could check that indeed ghc/rts/PrimOps.o contains a definition for 
this symbol (nm ghc/rts/PrimOps.o), and also check that the symbol is 
defined in ghc/rts/libHSrts.a.


Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: How to access hs_free_fun_ptr from a Visual Studio DLL

2006-03-27 Thread Brian Hulley

Krasimir Angelov wrote:

Hi Brian,

The problem is that hs_free_fun_ptr is defined in a static library
(the Haskell RTS) while you have declared it with
__declspec(dllimport). In this case the compiler is trying to optimize
the call and is using __imp__hs_free_fun_ptr instead of
hs_free_fun_ptr. You have to remove the dllimport pragma and link
your program with the Haskell RTS library. Unfortunately, the
static library format is different for VC++ and GCC, and I expect that
you may have problems if you try to statically link gcc code with VC++
code.


Hi Krasimir,

Yes I was getting confused about the difference between static and dynamic 
libraries. I've mostly been using static libs in my C++ projects, and when 
you build a static lib it doesn't matter that some functions are not defined 
in it: no linking is done at this point, so the library can be created 
without needing to specify where any extern functions are actually defined. 
However when building a DLL, the linker is invoked and so it needs to know 
the location of any function not defined in the DLL itself. But since my DLL 
needs to be used by multiple Haskell applications, and since the RTS is 
built into the application code as opposed to being supplied by a separate 
DLL itself, there is no way to link the DLL against RTS API functions, even 
if the VC/GCC incompatibilities could be overcome.


However there is no need for this anyway because of the simple workaround of 
just wrapping freeHaskellFunPtr in a FunPtr and passing this into the DLL, 
effectively doing dynamic linking of this function explicitly rather than 
relying on the low level OS-specific linking process to accomplish this.


Thanks, Brian. 


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: How to access hs_free_fun_ptr from a Visual Studio DLL

2006-03-27 Thread Brian Hulley

Sven Panne wrote:

Am Samstag, 25. März 2006 20:00 schrieb Brian Hulley:

I've found a workaround to the problem below: instead of trying to
use hs_free_fun_ptr, I instead pass a FunPtr to the Haskell function
freeHaskellFunPtr into my DLL, and use this to free everything,
finally using it to free itself (!) which I assume should be safe.
[...]


It has been quite some time since I've worked on GHC's Adjustor.c and Hugs'
FFI, but IIRC it is in general *not* safe to do this. On some platforms code
is being generated dynamically for these kinds of callbacks, which has already
been freed by the time the callback returns. This might appear to work,
depending on your processor architecture and dynamic memory management
behaviour, but it's better not to rely on this. Perhaps the FFI spec should
be clearer here.


I've been thinking about this a bit more, and I'm now puzzled as to why there 
should be a problem, even if some dynamic code is generated. In Section 
5.4.2 of the Haskell FFI 1.0 Addendum 98, freeHaskellFunPtr is specified as:


   freeHaskellFunPtr :: FunPtr a -> IO ()

   Release the storage associated with the given FunPtr, which must have
   been obtained from a wrapper stub. This should be called whenever the
   return value from a foreign import wrapper function is no longer
   required; otherwise, the storage it uses will leak.


Thus all the function is supposed to do is just release the storage 
associated with the FunPtr itself. It is not supposed to affect the storage 
of the code for the function pointed to by the FunPtr, other than to inform 
the garbage collector that there is now one less reference to it.


I'd have thought that even if the function code is generated dynamically, 
the FunPtr would just be treated as an extra root (leading to the block of 
memory storing the code) as far as garbage collection is concerned. 
freeHaskellFunPtr would then remove this extra root. However if the code is 
currently being executed, surely the (machine code) program counter would 
act as a root for gc, to prevent the block containing the currently 
executing code (and any blocks on the call stack) from being reclaimed?


Otherwise, freeHaskellFunPtr is not safe to use anywhere (perhaps that's why 
you used a scavenger function?). For example, in:


  foreign import ccall "wrapper" mkIO :: IO () -> IO (FunPtr (IO ()))
  foreign import ccall set_callback :: FunPtr (IO ()) -> IO ()
  foreign import ccall run :: IO ()

  foo1 :: IO ()
  foo1 = do
 set_callback foo2
 something_else   -- does this code still exist?

  foo2 :: IO ()
  foo2 = ...

  main = do
 set_callback foo1
 run  -- causes foo1 to be called at some point

where set_callback wraps its arg in a FunPtr and calls freeHaskellFunPtr if 
a previous callback function has already been specified.


If foo1 is generated as dynamic code, then freeing the FunPtr to it as a 
consequence of changing the callback to foo2, becomes problematic unless 
there is a guarantee that currently executing code is always considered to 
be rooted by the program counter.


So my questions are:

1) Is my understanding of the FFI definition of freeHaskellFunPtr incorrect?

2) Are some implementations not treating the program counter and call stack 
as roots for blocks of memory which contain dynamically generated code that 
is currently executing?


3) Are some implementations treating a FunPtr as the sole owner of the code 
it is pointing to, thus requiring the workaround of a scavenger function to 
safely free FunPtr's at a point where it is known that none of the code 
pointed to by any of the FunPtr's is currently executing?


Thanks, Brian.

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Still problems building ghc 6.5 with ghc 6.4

2006-03-27 Thread Michael Marte

Simon Marlow wrote:


Michael Marte wrote:


Hello *,

when building ghc 6.5 with ghc 6.4 and alex 2.0.1, the build fails as 
follows:


make -C ghc/compiler stage=2
make[2]: Entering directory `/home/marte/fptools/ghc/compiler'
../../ghc/compiler/ghc-inplace -H16m -O  -istage2/utils  
-istage2/basicTypes  -istage2/types  -istage2/hsSyn  
-istage2/prelude  -istage2/rename  -istage2/typecheck  
-istage2/deSugar  -istage2/coreSyn  -istage2/specialise  
-istage2/simplCore  -istage2/stranal  -istage2/stgSyn  
-istage2/simplStg  -istage2/codeGen  -istage2/main  
-istage2/profiling  -istage2/parser  -istage2/cprAnalysis  
-istage2/ndpFlatten  -istage2/iface  -istage2/cmm  
-istage2/nativeGen  -istage2/ghci -Istage2 -DGHCI -package 
template-haskell -threaded -package readline -DUSE_READLINE -cpp 
-fglasgow-exts -fno-generics -Rghc-timing -I. -IcodeGen -InativeGen 
-Iparser -package unix -package Cabal -ignore-package lang -recomp 
-Rghc-timing  -H16M '-#include hschooks.h' -ignore-package ghc   
-fgenerics  -funbox-strict-fields  -c parser/Lexer.hs -o 
stage2/parser/Lexer.o  -ohi stage2/parser/Lexer.hi


parser/Lexer.x:578:10:
   lexical error in string/character literal at character '\955'
ghc: 51822676 bytes, 5 GCs, 98216/98216 avg/max bytes residency (1 
samples), 16M in use, 0.00 INIT (0.00 elapsed), 0.62 MUT (0.84 
elapsed), 0.11 GC (0.13 elapsed) :ghc

make[2]: *** [stage2/parser/Lexer.o] Error 1
make[2]: Leaving directory `/home/marte/fptools/ghc/compiler'
make[1]: *** [stage2] Error 2
make[1]: Leaving directory `/home/marte/fptools'
make: *** [bootstrap2] Error 2

Is this problem caused by alex?



This is with a recently checked-out HEAD?  I thought I fixed this 
problem.  It isn't caused by Alex, the issue is that 6.4 didn't have 
the necessary Unicode functionality in its base package.  The fix was 
to include it in the libcompat library when building GHC, but for some 
reason this appears not to be happening for you.




Yes, I synced my working copy of ghc 6.5 yesterday with darcs pull.
Are there any requirements as to which exact version of ghc 6.4 I am 
supposed to use?
I am using the plain 6.4 release but I am able to build the head of the 
6.4 branch.


BTW. Am I supposed to run configure after syncing the working copy 
and/or clean the source tree before issuing the make command?


Michael

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: How to access hs_free_fun_ptr from a Visual Studio DLL

2006-03-27 Thread Sven Panne
Am Montag, 27. März 2006 14:27 schrieb Brian Hulley:
> [...] For example, in:

foreign import ccall "wrapper" mkIO :: IO () -> IO (FunPtr (IO ()))
foreign import ccall set_callback :: FunPtr (IO ()) -> IO ()
foreign import ccall run :: IO ()

foo1 :: IO ()
foo1 = do
   set_callback foo2
   something_else   -- does this code still exist?

foo2 :: IO ()
foo2 = ...

main = do
   set_callback foo1
   run  -- causes foo1 to be called at some point

> where set_callback wraps its arg in a FunPtr and calls freeHaskellFunPtr if
> a previous callback function has already been specified.

> If foo1 is generated as dynamic code, then freeing the FunPtr to it as a
> consequence of changing the callback to foo2, becomes problematic unless
> there is a guarantee that currently executing code is always considered to
> be rooted by the program counter.

> So my questions are:

> 1) Is my understanding of the FFI definition of freeHaskellFunPtr
> incorrect?

Partially, yes. Your example above leaves out an important point: Calling 
mkIO. Regardless of the implementation details, this has to accomplish two 
things:

 * Making sure that the Haskell function being wrapped is not garbage 
collected. This is important, because the returned FunPtr might contain the 
last reference to the wrapped function internally.

 * Set up some mechanism which does the impedance matching between C and 
Haskell, i.e. put the arguments in the right places, do some stack fiddling, 
etc.

For GHC (and Hugs, IIRC), this is solved by generating a StablePtr to the 
Haskell function and baking it somehow into dynamically generated code (see 
ghc/rts/Adjustor.c), which does the argument/stack fiddling and then jumps to 
some C stub, which in turn uses the StablePtr and the rest of the arguments 
to call the Haskell function. If this sounds tricky and highly 
platform-dependent: Yes, it is! :-)

The main point here is: The Haskell runtime doesn't know the time when it is 
safe to free the StablePtr and the dynamically generated adjustor code. The 
FunPtr's C equivalent could be stored anywhere in C land, perhaps in some GUI 
callback table or some global variables; it could be in use by some other 
(non-Haskell) thread, etc. Consequently, the programmer must help via 
freeHaskellFunPtr/hs_free_fun_ptr, but this should not happen while the 
FunPtr is in use, e.g. while the Haskell callback is currently being 
executed. The technical reason for this is that after returning from Haskell 
land, the adjustor code might need to do some cleanup:

   C -> adjustor -> stub -> Haskell -> stub -> adjustor -> C

It could be the case that the adjustor tail-jumps to the stub, but this is not 
guaranteed to be the case for all platforms.

> 2) Are some implementations not treating the program counter and call stack
> as roots for blocks of memory which contain dynamically generated code that
> is currently executing?

As hopefully explained above, a pointer to the adjustor could be everywhere, 
so GC is not an option here.

> 3) Are some implementations treating a FunPtr as the sole owner of the code
> it is pointing to, thus requiring the workaround of a scavenger function to
> safely free FunPtr's at a point where it is known that none of the code
> pointed to by any of the FunPtr's is currently executing?

Ownership is not the problem here, but lifetime: A callback should simply not 
free its own FunPtr. This requirement is not much different from having to 
avoid accessing memory which has already been freed.
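An analogy from outside Haskell may help illustrate the lifetime rule. This
is Python's ctypes standing in for GHC's adjustors; it is an illustration of
the same discipline, not how GHC implements FunPtrs:

```python
import ctypes

# CFUNCTYPE plays the role of "foreign import ccall wrapper": it builds a
# C-callable trampoline around a Python function, analogous to a FunPtr plus
# its dynamically generated adjustor code.
CALLBACK = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int)

def double(x):
    return 2 * x

cb = CALLBACK(double)   # like mkIO: allocate the foreign-callable wrapper
assert cb(21) == 42     # a call through the wrapper, as C code would make it

# Dropping the last reference (like freeHaskellFunPtr) releases the
# trampoline. As discussed above, this must not happen while foreign code
# can still call cb -- least of all from inside cb itself.
del cb
```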

I know that this is all complicated stuff, but if you would really like to 
understand the mechanisms behind it, it would probably be best to pick a 
platform you know and have a look at dsFExportDynamic in
ghc/compiler/deSugar/DsForeign.lhs and ghc/rts/Adjustor.c.

Cheers,
   S.
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re[2]: Message GHC/PrimopWrappers.hs:133:29: Not in scope: `GHC.Prim.quotInteger2Exp#' building GHC with additional primitive operation

2006-03-27 Thread Bulat Ziganshin
Hello Simon,

Monday, March 27, 2006, 2:57:47 PM, you wrote:

> quotInteger2Expzh_fast is the function you are adding to PrimOps.cmm to

Thorkil, I can't understand why you can't just use the FFI to import the
functions you require. Why do you need to patch the PrimOps list?



-- 
Best regards,
 Bulat  mailto:[EMAIL PROTECTED]

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Still problems building ghc 6.5 with ghc 6.4

2006-03-27 Thread Simon Marlow

Michael Marte wrote:


Yes, I synced my working copy of ghc 6.5 yesterday with darcs pull.
Are there any requirements as to which exact version of ghc 6.4 I am 
supposed to use?
I am using the plain 6.4 release but I am able to build the head of the 
6.4 branch.


BTW. Am I supposed to run configure after syncing the working copy 
and/or clean the source tree before issuing the make command?


The *safest* thing to do after pulling is to completely make clean, 
autoreconf, and build from scratch.  You may be able to avoid being that 
drastic if you know what you're doing.  There are quite a few 
dependencies in the GHC tree that aren't tracked explicitly, for 
practical reasons; for example, the build system doesn't know that when 
the .hi format changes you need to rebuild all your libraries.


I suspect something is out of date in your case, but I'm not sure what. 
   If you don't want to rebuild everything, you could try just 
rebuilding ghc/lib/compat, and then remove ghc/stage1/parser/Lexer.o, 
build stage 1, and then carry on with stage 2.


Cheers,
Simon


Re: How to access hs_free_fun_ptr from a Visual Studio DLL

2006-03-27 Thread Brian Hulley

Sven Panne wrote:

[snip]
being executed. The technical reason for this is that after returning
from Haskell land, the adjustor code might need to do some cleanup:

   C -> adjustor -> stub -> Haskell -> stub -> adjustor -> C

It could be the case that the adjustor tail-jumps to the stub, but
this is not guaranteed to be the case for all platforms.


Thanks Sven. I had naively thought the wrapper stub was just a way to get a 
StablePtr and pass args to the Haskell function, but had not realised that 
adjustor code was needed also in the transition from Haskell back to C (it's 
obvious in retrospect now that you've pointed it out).


So I'll use your idea of a scavenger function, which I can call from my main 
event loop to be sure that no FunPtr's will directly or indirectly be 
freeing themselves (so that the DLL will be safe regardless of the 
particular Haskell implementation).
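
The lifecycle being discussed can be sketched with GHC's "wrapper" and "dynamic" FFI imports; the callback and its names here are hypothetical, and the point is that freeHaskellFunPtr is only called once the callback can no longer be executing:

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}
module Main where

import Foreign (FunPtr, freeHaskellFunPtr)

-- A "wrapper" import turns a Haskell action into a C function pointer;
-- a "dynamic" import calls such a pointer, standing in here for the
-- C event loop that would normally invoke the callback.
foreign import ccall "wrapper"
    mkCallback :: IO () -> IO (FunPtr (IO ()))
foreign import ccall "dynamic"
    callCallback :: FunPtr (IO ()) -> IO ()

main :: IO ()
main = do
    fp <- mkCallback (putStrLn "callback ran")
    callCallback fp          -- the "C side" invokes the callback
    -- Safe point: the callback has returned, so its FunPtr may be freed.
    freeHaskellFunPtr fp
```

Crucially, the free happens from the main control flow, never from inside the callback itself.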


Thanks for the explanations,

Brian. 




Re: How to access hs_free_fun_ptr from a Visual Studio DLL

2006-03-27 Thread Wolfgang Thaller
   C -> adjustor -> stub -> Haskell -> stub -> adjustor -> C

It could be the case that the adjustor tail-jumps to the stub, but
this is not guaranteed to be the case for all platforms.

Hmmm, I thought it was. Well, the FFI addendum is rather vague on this
point; this seems to be all it says about freeHaskellFunPtr:

  freeHaskellFunPtr :: FunPtr a -> IO ()
  Release the storage associated with the given \code{FunPtr}, which
  must have been obtained from a wrapper stub.  This should be called
  whenever the return value from a foreign import wrapper function is
  no longer required; otherwise, the storage it uses will leak.

Do I need the FunPtr value in order to return to C land? I didn't think
so. The addendum doesn't say so. (It doesn't clearly say otherwise,
though.)

All of the implementations in Adjustor.c *except* for the IA-64 one go
to some lengths to ensure that a tail-call is used:

   C -> adjustor -> stub -> Haskell -> stub -> (maybe some static code) -> C

I think this confusion should be fixed...

Cheers,
Wolfgang


Possible runtime overhead of wrapping the IO monad?

2006-03-27 Thread Brian Hulley

Hi,
I'm designing an API for a simple graphics window, and am trying to make the 
correct usage of the API functions explicit and visible to the type system 
by using different monads which ultimately just wrap the IO monad. For 
example, in a callback for rendering stuff to the screen, only operations in 
the RenderM monad are allowed, and when specifying vertex info for a 
primitive, only VertexM operations are allowed.


However I'm wondering if I can rely on all this monad stuff being optimized 
out at compile time. A sample monad is below:


  newtype VertexM a = VertexM (IO a)

  instance Monad VertexM where
     VertexM x >>= fry = VertexM $ do
         ax <- x
         let VertexM y = fry ax
         y

     return x = VertexM $ return x

  instance MonadIO VertexM where
 liftIO = VertexM

The monad doesn't do anything interesting apart from allowing the type 
checker to reject programs that don't use the API the way it was intended 
(all these things you have to keep in your head in C programs), but I don't 
want to use it if I'm going to get a performance hit.


Also, in:

  foreign import ccall duma_vertex3f :: Float -> Float -> Float -> IO ()

  vertex3f :: Float -> Float -> Float -> VertexM ()
  vertex3f x y z = liftIO $ duma_vertex3f x y z

is there a penalty involved in calling vertex3f (from another module) or 
will the worker/wrapper optimization ensure that machine code in the other 
module just calls duma_vertex3f directly since the liftIO operation is just 
an irrelevance at the machine code level?


So far I've just been using ghc --make and not bothering about what kind of 
code is generated. Is there a flag I can use to get ghc to output the stg 
code (or something higher level than just x86 machine code itself) so I can 
look at the output to see what optimizations are being done?


Thanks, Brian.



Re: Possible runtime overhead of wrapping the IO monad?

2006-03-27 Thread Chris Kuklewicz
Brian Hulley wrote:
 Hi,
 I'm designing an API for a simple graphics window, and am trying to make
 the correct usage of the API functions explicit and visible to the type
 system by using different monads which ultimately just wrap the IO
 monad. For example, in a callback for rendering stuff to the screen,
 only operations in the RenderM monad are allowed, and when specifying
 vertex info for a primitive, only VertexM operations are allowed.
 
 However I'm wondering if I can rely on all this monad stuff being
 optimized out at compile time. A sample monad is below:
 
   newtype VertexM a = VertexM (IO a)
 
   instance Monad VertexM where
      VertexM x >>= fry = VertexM $ do
          ax <- x
          let VertexM y = fry ax
          y
 
      return x = VertexM $ return x
 
   instance MonadIO VertexM where
  liftIO = VertexM
 

There should be no overhead for a newtype.  The above can be shortened to one 
line:

newtype VertexM a = VertexM (IO a) deriving (Functor,Monad,MonadIO)

(Needs ghc -fglasgow-exts, I expect)

 
 Also, in:
 
   foreign import ccall duma_vertex3f :: Float -> Float -> Float -> IO ()
 
   vertex3f :: Float -> Float -> Float -> VertexM ()
   vertex3f x y z = liftIO $ duma_vertex3f x y z
 
 is there a penalty involved in calling vertex3f (from another module) or
 will the worker/wrapper optimization ensure that machine code in the
 other module just calls duma_vertex3f directly since the liftIO
 operation is just an irrelevance at the machine code level?

I doubt there is a penalty.

 
 So far I've just been using ghc --make and not bothering about what kind
 of code is generated. Is there a flag I can use to get ghc to output the
 stg code (or something higher level than just x86 machine code itself)
 so I can look at the output to see what optimizations are being done?
 
 Thanks, Brian.
 

Yes, there are several ghc options:

The -ddump-* debugging flags are documented in
http://www.haskell.org/ghc/docs/latest/html/users_guide/options-debugging.html
In particular -ddump-simpl has been helpful for some people, and you
want -ddump-stg, perhaps.





Re: Possible runtime overhead of wrapping the IO monad?

2006-03-27 Thread Brian Hulley

Chris Kuklewicz wrote:

Brian Hulley wrote:

Hi,
I'm designing an API for a simple graphics window, and am trying to
[snip]


There should be no overhead for a newtype.  The above can be
shortened to one line:

newtype VertexM a = VertexM (IO a) deriving (Functor,Monad,MonadIO)


Thanks - that certainly saves a lot of typing! :-)



(Needs ghc -fglasgow-exts, I expect)



Also, in:

  foreign import ccall duma_vertex3f :: Float -> Float -> Float ->
IO ()

  vertex3f :: Float -> Float -> Float -> VertexM ()
  vertex3f x y z = liftIO $ duma_vertex3f x y z

is there a penalty involved in calling vertex3f (from another
module) or will the worker/wrapper optimization ensure that machine
code in the other module just calls duma_vertex3f directly since the
liftIO operation is just an irrelevance at the machine code level?


I doubt there is a penalty.
[snip]
In particular -ddump-simpl has been helpful for some people, and you
want -ddump-stg, perhaps.


I've just now tried compiling with -ddump-stg but this is difficult (for me) 
to understand so I tried with -ddump-simpl as you suggested, and compared 
the outputs when compiling with and without the -O2 optimization flag. 
With -O2 enabled, __ccall_GC duma_vertex3f is indeed called directly instead 
of vertex3f, from a different module, so that proves that different monads 
can indeed be used to wrap IO operations without any performance penalty at 
all. What a great language and compiler!!! :-)


One little thing I wondered about when looking at the -ddump-simpl output: 
for each C API function, there is a warning of the form:


   warning: implicit declaration of function `duma_vertex3f'

Do you know what this means? It doesn't seem to matter as everything 
compiles and runs ok but it would be interesting to know.


Thanks, Brian. 




Re: Possible runtime overhead of wrapping the IO monad?

2006-03-27 Thread John Meacham
On Mon, Mar 27, 2006 at 08:14:40PM +0100, Brian Hulley wrote:
 However I'm wondering if I can rely on all this monad stuff being optimized 
 out at compile time. A sample monad is below:
 
   newtype VertexM a = VertexM (IO a)
 

in GHC you can actually guarantee there is no overhead with the newtype
deriving feature. 

   newtype VertexM a = VertexM (IO a)
       deriving (Monad, Functor, MonadIO)

now it will use the exact same dictionaries as the IO monad.

the newtype deriving feature is very very useful for this sort of
thing, or when you need to make an alternate type that has almost all
the qualities of another one, you can newtype-derive all the same bits
and just provide the instance for the different one.
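
A self-contained sketch of this pattern, assuming a plain print in place of the foreign duma_vertex3f (which isn't available here) and a hypothetical runVertexM unwrapper; note that modern GHC also requires Applicative in the deriving list, since it is now a superclass of Monad:

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
module Main where

import Control.Monad.IO.Class (MonadIO, liftIO)

-- VertexM reuses IO's own instance dictionaries, so the wrapper
-- carries no runtime overhead.
newtype VertexM a = VertexM (IO a)
    deriving (Functor, Applicative, Monad, MonadIO)

-- Hypothetical unwrapper: only the API module would export this, so
-- clients cannot run VertexM actions outside a vertex context.
runVertexM :: VertexM a -> IO a
runVertexM (VertexM io) = io

-- Stand-in for the foreign duma_vertex3f.
vertex3f :: Float -> Float -> Float -> VertexM ()
vertex3f x y z = liftIO (print (x, y, z))

main :: IO ()
main = runVertexM (vertex3f 1 2 3)  -- prints (1.0,2.0,3.0)
```

With GHC 6.4 the deriving clause needed -fglasgow-exts and listed only Functor, Monad and MonadIO.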

John

-- 
John Meacham - ⑆repetae.net⑆john⑈


Re: Possible runtime overhead of wrapping the IO monad?

2006-03-27 Thread Duncan Coutts
On Tue, 2006-03-28 at 01:37 +0100, Brian Hulley wrote:
 Hi Duncan -
 I just declared duma_vertex3f (in another module) by:
 
  foreign import ccall duma_vertex3f :: Float -> Float -> Float -> IO ()
 
 I thought this means that the C function prototype must be:
 
  void duma_vertex3f(HsFloat, HsFloat, HsFloat);
 
 so I don't understand why GHC (or any other Haskell compiler) would ever 
 need to see a C header file to generate the correct code. What info could be 
 in the header that is not already in the Haskell type?

Because ghc does compile via C and does use the C header files to get
the C function prototype. Well it can compile via C and it's recommended
when compiling FFI code since it allows the Haskell type you've declared
to be checked against the C prototype.

The warning you saw was coming from gcc as it compiled the C code
emitted by ghc.

So it's mostly as a sanity check but actually there is some information
in the C prototype that is not reflected in the Haskell type. For
example 'const'ness, which is why you might sometimes see warnings from
gcc about casting away the const attribute. More seriously there is the
single/double issue. In the absence of a prototype C will promote single
to double. If the function really was declared as single then using
double will probably cause a crash. There are probably other examples
where not using a C prototype can lead to trouble.
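
To illustrate why naming the header matters (using libc's sin here rather than the duma functions, which aren't available): when compiling via C, gcc can check the declared Haskell type against the genuine C prototype pulled in from the header.

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}
module Main where

-- With the header named, gcc sees the real prototype
-- (double sin(double)) and can warn if the Haskell type disagrees.
foreign import ccall "math.h sin" c_sin :: Double -> IO Double

main :: IO ()
main = c_sin 0.0 >>= print  -- prints 0.0
```

Had the import been declared with Float instead of Double, compiling via C with the header in scope would flag the mismatch that could otherwise crash at runtime.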

There are probably more details on this issue in the emails from the
FFI standardisation process.

Interestingly, that particular issue is less of a problem when not
compiling via C, since the Haskell type does provide enough information
for making a call using the standard system ABI. C's type system (such as
it is) has slightly more information in it than the system ABI
needs (such as const).

Duncan



Re: [Haskell] Haskell as a disruptive technology?

2006-03-27 Thread Krasimir Angelov
As an author of HSQL and Visual Haskell I do agree that there is still
a lot to be done in order to make Haskell as popular as C++/Java in
the industry. We need strong support for GUI, Database, XML and many
other libraries that the other languages already have. The existing
development environments also need a lot of improvements. I know that
we already have many bits of that but it is still not enough.
Fortunately the situation is improving and now we have much better
libraries and tools than a few years ago.

Cheers,
   Krasimir

2006/3/27, Paul Johnson [EMAIL PROTECTED]:
 Neil Mitchell wrote:

  - Larger memory footprint
 
 
  You are talking about GHC, not Haskell. Take a look at nhc98, which
  has a small memory footprint.

 I don't want to get into a compiler flame war.  The fact is that if you want 
 to do huge datasets or very fast computation (the two tend to go together) 
 then Haskell is the wrong language.  This isn't an attack on Haskell, it's a 
 straightforward look at the strengths and weaknesses of the language.

  - Little integration with GUI tools.  No Visual Haskell or Borland
  Haskell equivalents.
 
 
  http://www.haskell.org/visualhaskell/ - indeed there is  :)

 I've never used Visual Haskell, but from the web page two things stand out:

 1: It's version 0.0.  This is not for production use.

 2: It's just a programming environment.  The point I was making was about 
 integration with GUI tools.  You can't (AFAIK) draw a dialog box and then tie 
 each button to a Haskell function in the IO monad.

  - Little integration with databases.
 
 
  There are quite a few Haskell DB programs, I've never used any, but
  they do exist.

 I have used HSQL, and it does what it does perfectly competently.  But
 it requires a bunch of boilerplate code in the IO monad for every
 query.  There are higher level libraries that build nice type-safe stuff
 on top of this, but they don't seem to be production strength yet.

 Remember that this is not about what Haskell could or should be in the
 future, it's about what Haskell is right now.  If you start a Haskell
 development project then this is the reality that you face.

 Now, we could try to fix these deficiencies.  That means playing
 catch-up with the entire infrastructure that has been erected around
 conventional languages.  We will simply never succeed.  Sure, we can
 probably do it with a 10th the manpower of the conventional languages,
 but they have 100 times our manpower to throw at the problem.

 Or we can play to our strengths by looking for markets where the
 negatives don't matter, or matter less than the positives.  Haskell
 offers a value proposition that is very different to conventional
 languages.  There is likely to be a market out there that is poorly
 served by conventional languages, but where Haskell will fit very
 nicely.  All we have to do is find it and target it.

 Hmmm.  I wonder about simulation and testing.  Some time ago I asked
 here about doing discrete event simulation, and I have succeeded.
 (Unfortunately the resulting code is owned by my employer, so I can't
 share it).  QuickCheck has shown a new approach to testing.  Testing of
 stateful stuff using this approach would require a simulation of the
 thing to be tested.  This might be a winning proposition.

 Paul.

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] Haskell as a disruptive technology?

2006-03-27 Thread Michael Marte

Paul Johnson wrote:



- Little integration with mainstream software development.  No 
analysis or design methods or notations.  Haskell development appears 
to be primarily a matter of paying a bunch of very bright developers 
to cut code.



Writing a functional specification is a very good design method.
The nice thing with Haskell is, that this specification is executable 
and with ghc it usually executes fast enough.


- Concerns about support and maintenance.  Those bright developers may 
not be around in the long term.


This seems to be a hen & egg problem.
Students do not study functional programming deeply because it seems 
irrelevant to finding a job. Decision makers, on the other hand, either 
have never heard of FP or are aware that there are not enough people 
familiar with it and so there are natural concerns about maintenance.


Michael



Re: [Haskell] Haskell as a disruptive technology?

2006-03-27 Thread Jean-Philippe Bernardy
  - Concerns about support and maintenance.  Those bright developers may
  not be around in the long term.

 This seems to be a hen & egg problem.
 Students do not study functional programming deeply because it seems
 irrelevant to finding a job. Decision makers, on the other hand, either
 have never heard of FP or are aware that there are not enough people
 familiar with it and so there are natural concerns about maintenance.

On the other hand, a project is much more likely to succeed if bright
developers are involved. Using Haskell is a very good way to attract
bright developers.
Hence, a project is much more likely to succeed if it uses Haskell.

This is exemplified by the Pugs project.

Cheers,
JP.


[Haskell] Re: Haskell as a disruptive technology?

2006-03-27 Thread Simon Marlow

Neil Mitchell wrote:


- Larger memory footprint


You are talking about GHC, not Haskell. Take a look at nhc98, which
has a small memory footprint.


We should be clear about whether we're talking about the footprint of 
the compiler or the generated binary.  I can well believe that GHC needs 
a lot more memory than nhc98 to run, but I'm not at all sure that the 
same relation holds when you consider the generated binaries, especially 
when GHC's compacting collector is being used and optimisation is turned 
on.  Code size is probably larger with GHC.  It'd be nice to see some 
actual measurements done here.


Cheers,
Simon


Re: [Haskell] Haskell as a disruptive technology?

2006-03-27 Thread Immanuel Litzroth
Donald Bruce Stewart [EMAIL PROTECTED] writes:



 In general I would say GHC/Haskell was very very fast, as long as you
 know what you're doing.

I have created programs that fill an array with the first 100
prime numbers using the sieve of Eratosthenes.
I have done this in Lisp, C++, OCaml and Haskell. Lisp and C++ win
hands down, being approximately 5x faster than OCaml and Haskell. 
Immanuel
-- 
***
I can, I can't.
Tubbs Tattsyrup

--
Immanuel Litzroth
Software Development Engineer
Enfocus Software
Antwerpsesteenweg 41-45
9000 Gent
Belgium
Voice: +32 9 269 23 90
Fax : +32 9 269 16 91
Email: [EMAIL PROTECTED]
web : www.enfocus.be
***


Re: [Haskell] Haskell as a disruptive technology?

2006-03-27 Thread Paul Johnson
So far all the responses apart from the one by jake have taken issue 
with one or other of my assessments of Haskell's weaknesses.  Curiously, 
nobody seems to be arguing with my assessments of its strengths.


I know the feeling: Haskell is so obviously great that it becomes a kind 
of spinal reflex to defend it against even the slightest slight.  I used 
to feel that way about Eiffel, before I learned Haskell.  And we could 
have long and learned arguments about how to optimise GHC, the relative 
benefits of nhc98, wxHaskell vs Gtk2hs, and all the rest of it.


But in my original post I tried to look at a broader question.  I've 
seen other excellent languages wind up as roadkill (did I mention I 
liked Eiffel?), and one thing I have learned is that trying to fight the 
incumbents on their own turf is just suicide.  You will never win, and 
it doesn't matter how long you try or how brilliant your technology is.  
There are lots of reasons for this, some good, some bad, and that's just 
the way the world is.


History has repeatedly shown that the only way you dislodge an incumbent 
technology is through the disruptive technology route, which would be 
better described as disruptive marketing.  Find a niche that is not 
adequately addressed by the incumbents and establish a beachhead there.  
Then move out from that base.


So I tried to summarise the Haskell value proposition compared to the 
incumbent languages.  That's what it looks like to me, and I am not 
exactly ignorant on the subject, so I suggest we take it as a given for 
now and look at the real question:


Is there a market that is poorly served by the incumbent languages for 
which Haskell would be an absolute godsend?


Paul.


Re: [Haskell] Haskell as a disruptive technology?

2006-03-27 Thread Malcolm Wallace
Paul Johnson [EMAIL PROTECTED] wrote:

 Is there a market that is poorly served by the incumbent languages for
 which Haskell would be an absolute godsend?

Yes.  Safety critical systems, encompassing everything from avionics to
railway signalling equipment, to medical devices.  These markets are
relatively small / low-volume, with needs for high assurance, and better
development times.

However, despite these appealing characteristics, I would say Haskell is
still currently unsuitable for those areas.

  * These tend to be embedded systems, with daunting memory, speed, and
power-consumption limits.
  * Analysing and guaranteeing performance characteristics (time,
memory) is something we still can't do well with Haskell.  

Note, this is not a question of raw speed, it is a question of hard
guarantees to meet deadlines.  In this field, a slow implementation that
provably meets a deadline of 2ms is better than a fast implementation
that claims a cycle time of .02ms, but cannot be shown to be
failure-free.

Regards,
Malcolm


Re: [Haskell] Haskell as a disruptive technology?

2006-03-27 Thread Johannes Waldmann
Paul Johnson wrote:

 Is there a market that is poorly served by the incumbent languages for
 which Haskell would be an absolute godsend?

yes - teaching principles of programming (and languages).

E. g. I have my students (2nd year) learn (some) Haskell first,
so they will hopefully understand better what is a (polymorphic) type
and what is behind those OO-design patterns
(Composite = algebraic data type, Visitor = map/fold,
Template Method = higher order function etc.)

But the comparison to e. g. Java also shows quite clearly
that Haskell is lacking some features (ranging from essential
to convenient) in the area of software engineering (module system,
record system, annotations, etc., see some entries on Haskell-prime)
And I don't hide that opinion from the students.

You may say that my teaching is disruptive,
but then, there's no market since the students have no choice ...
But most of them accept the challenge. (I'm afraid they'd accept
an advanced Perl(*) hacking course as well.) (*) - replace with
name of any untyped interpreted scripting language.

Best regards,
-- 
-- Johannes Waldmann -- Tel/Fax (0341) 3076 6479/80 --
 http://www.imn.htwk-leipzig.de/~waldmann/ ---



Re: [Haskell] Haskell as a disruptive technology?

2006-03-27 Thread Dusan Kolar


Malcolm Wallace wrote:

Paul Johnson [EMAIL PROTECTED] wrote:

  

Is there a market that is poorly served by the incumbent languages for
which Haskell would be an absolute godsend?



Yes.  Safety critical systems, encompassing everything from avionics to
railway signalling equipment, to medical devices.  These markets are
relatively small / low-volume, with needs for high assurance, and better
development times.
  
Well, the market is growing and not that small. ;-) Think of mobile 
phones and cars, for instance, they are full of embedded computers.



However, despite these appealing characteristics, I would say Haskell is
still currently unsuitable for those areas.

  * These tend to be embedded systems, with daunting memory, speed, and
power-consumption limits.
  * Analysing and guaranteeing performance characteristics (time,
memory) is something we still can't do well with Haskell.  
  
Well, is the problem making GC a deterministic task, or is it that a 
program may run out of memory unpredictably? Can you be 
more explicit, or link to some article?


Regards

 Dusan



Re: [Haskell] Haskell as a disruptive technology?

2006-03-27 Thread Sebastian Sylvan
On 3/27/06, Paul Johnson [EMAIL PROTECTED] wrote:


 Is there a market that is poorly served by the incumbent languages for
 which Haskell would be an absolute godsend?


As far as I'm concerned, yes almost every market. At least soon.
Within a few years (say, five) we will have tens of cores on the
desktop. The speed of an application will correspond fairly directly
to how well it uses multiple threads. This will only become more and
more true in the future.
Haskell (lightweight threads + STM) really is a godsend in this area.
It's really not feasible to distribute a general application to tens
or hundreds of threads in C++ (or any imperative language, really).
Some specific problems (e.g. raytracing) are simple enough to be
easily parallelised in any language, but in general we need purely
functional programming with good support for concurrency and
parallelism if we are to have any hope of remaining sane while using
these massively multicore systems of the future effectively.
Haskell (or rather GHC) is ideal here. Parallelising pure functions
is fairly easy; for concurrency I think message passing and immutable
values are really the best story right now, and even if you
decide you need some layer of shared-state concurrency we can remain
pretty safe using STM.

/S

--
Sebastian Sylvan
+46(0)736-818655
UIN: 44640862


Re: [Haskell] Haskell as a disruptive technology?

2006-03-27 Thread Johannes Waldmann

   * Analysing and guaranteeing performance characteristics (time,
 memory) is something we still can't do well with Haskell.

Seconded. I think all three of:
specification, prediction and post-morten analysis
of resource consumption of Haskell programs
are largely uncharted territory.
I don't see a widely accepted formal base (a resource calculus)
and that's probably why there is no tool support
(yes, I know about ghc -prof -auto-all).
Ultimately, there should be some refined type system
that allows to express and prove resource consumption guarantees.

best regards,
-- 
-- Johannes Waldmann -- Tel/Fax (0341) 3076 6479/80 --
 http://www.imn.htwk-leipzig.de/~waldmann/ ---



Re: [Haskell] Haskell as a disruptive technology?

2006-03-27 Thread Malcolm Wallace
Dusan Kolar [EMAIL PROTECTED] wrote:

  Yes.  Safety critical systems, encompassing everything from avionics
  to railway signalling equipment, to medical devices.  These markets
  are relatively small / low-volume, with needs for high assurance,
  and better development times.

 Well, the market is growing and not that small. ;-) Think of mobile 
 phones and cars, for instance, they are full of embedded computers.

The embedded computation market is certainly huge (about 10x the size of
the PC market), but how many of those systems are actually safety
critical?  Mobile phones certainly are not.  A safety critical system is
one whose failure could directly lead to loss of life, e.g. gas turbine
engine controller on an Airbus.

* Analysing and guaranteeing performance characteristics (time,
  memory) is something we still can't do well with Haskell.  

 Well, is it a problem to make GC a deterministic task or there is a 
 problem that a program may run out of memory unpredictably? Can you be
 more explicit, or link to some article?

The problem goes much further than amortising the cost of GC.  With lazy
evaluation, in general we do not even know the complexity class of space
usage (constant, linear, exponential) for any given program of moderate
size without doing some empirical measurement or profiling.  There have
been a few attempts at formal analysis in this field (Pareto and Hughes,
Hammond and Michaelson) but nothing is yet truly comprehensive.

Regards,
Malcolm


Re: [Haskell] Haskell as a disruptive technology?

2006-03-27 Thread John Hughes

Dusan Kolar wrote:



Malcolm Wallace wrote:


Paul Johnson [EMAIL PROTECTED] wrote:

 


Is there a market that is poorly served by the incumbent languages for
which Haskell would be an absolute godsend?




Yes.  Safety critical systems, encompassing everything from avionics to
railway signalling equipment, to medical devices.  These markets are
relatively small / low-volume, with needs for high assurance, and better
development times.
  


Well, the market is growing and not that small. ;-) Think of mobile 
phones and cars, for instance, they are full of embedded computers.


Mobile phones are a big market, but they are not safety critical. In 
fact, you can make a phone which crashes quite often and still sell it 
successfully (as I know to my cost!). Low power (==long battery life) is 
probably more important for phone software. Here execution time is 
strongly correlated with energy use, but things like compacting garbage 
collection can actually help--you can turn off part of your memory after 
a GC and save the leakage current. I think low power functional 
programming could be very interesting, but we certainly have no good 
story on this point at the moment.


Car components are commodities and sold under extreme price pressure. 
Because volumes are high, the price per unit is all that matters, and 
there is typically no way to charge for software at all--it's assumed 
that the subcontractor develops it as part of the development cost of 
the component. Maybe that could give Haskell an advantage--lower 
software development costs--as long as the hardware doesn't need to cost 
a penny more. But again, with hard real time constraints, this doesn't 
feel like a natural market for Haskell in its current state.


I think Paul asked just the right question--but I wonder if we'll see 
the answer on this list? After all, anyone with a really *good* answer 
stands to make a lot of money...


John


[Haskell] Speed of development/change? Re: Haskell as a disruptive technology?

2006-03-27 Thread Shae Matijs Erisson
Paul Johnson [EMAIL PROTECTED] writes:


 So I tried to summarise the Haskell value proposition compared to the
 incumbent languages.  That's what it looks like to me, and I am not exactly
 ignorant on the subject, so I suggest we take it as a given for now and look
 at the real question:

 Is there a market that is poorly served by the incumbent languages for which
 Haskell would be an absolute godsend?

What about speed of development, and speed of change?

One of the pragmatic programmer guys recently suggested that software
development could become simple enough that you could have a dev on every
street corner. You'd walk down to the corner and ask for a certain app, then get
it in a few hours.

Abstraction addiction can be a danger, but it also lets you put out working
code in a tiny amount of time, as well as later extend that code to add new
features in equally small amounts of time.

What if development time for new programs was measured in days not years?
-- 
I've tried to teach people autodidactism,| ScannedInAvian.com
but it seems they always have to learn it for themselves.| Shae Matijs Erisson

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] Haskell as a disruptive technology?

2006-03-27 Thread Robert Dockins


On Mar 27, 2006, at 9:03 AM, Sebastian Sylvan wrote:

On 3/27/06, Paul Johnson [EMAIL PROTECTED] wrote:



Is there a market that is poorly served by the incumbent languages  
for

which Haskell would be an absolute godsend?



As far as I'm concerned, yes almost every market. At least soon.
Within a few years (say, five) we will have tens of cores on the
desktop. The speed of an application will correspond fairly directly
to how well it uses multiple threads. This will only become more and
more true in the future.


FWIW, I'd like to voice my agreement with this view.  The hardware
manufacturers are hard at work building the perfect niche for Haskell
-- all of commodity desktop computing!  Moore's law is still packing
more transistors onto the silicon, and it seems like people have run
out of ideas for what to do with them, so they just start duplicating
cores ;-)  All we have to do is be ready for it when it arrives.
When people see that, using Haskell, they can write programs using 1)
fewer man-hours with 2) fewer bugs which 3) scale effortlessly to
highly parallel hardware to beat the pants off C/C++/Java/what-have-you,
they'll be beating down the doors to get it.


I'd love to see Haskell on highly concurrent hardware becoming more  
of a reality: Haskell on the Cell is a great idea; I'd love to see  
Haskell over MPI (using YHC bytecode maybe?); Haskell-over-GPU (you  
know you like it!); and of course, SMP Haskell is also interesting.   
One of the things I love about Haskell is that the language  
definition transcends execution strategy/environment.




Haskell (lightweight threads + STM) really is a godsend in this area.
It's really not feasible to distribute a general application to tens
or hundreds of threads in C++ (or any imperative language, really).
Some specific problems (e.g. raytracing) are simple enough to be
easily parallelised in any language, but in general we need purely
functional programming with good support for concurrency and
parallelism if we are to have any hope of remaining sane while using
these massively multicore systems of the future effectively.
Haskell (or rather GHC) is ideal here. Parallelising pure functions
is fairly easy; for concurrency, I think message passing and immutable
values are really the best story right now, and even if you
decide you need some layer of shared-state concurrency we can remain
pretty safe using STM.


Indeed.  And for this reason, I'm glad to see concurrency very  
seriously considered for Haskell'.  I'd even go so far as to say that  
STM is a good candidate for inclusion, or at least for addendum  
status.  STM makes writing concurrent programs a joy rather than a  
total PITA.






Rob Dockins

Speak softly and drive a Sherman tank.
Laugh hard; it's a long way to the bank.
  -- TMBG
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] ANNOUNCE: lhs2tex-1.11

2006-03-27 Thread Conal Elliott
Hi Andres.  I just successfully installed & tested lhs2TeX on cygwin.  The
only hitch was in an inconsistent translation of /usr/local into
c:/cygwin/usr/local (I guess bash vs Haskell).  The fix was very simple:
./configure --prefix=c:/cygwin/usr/local before make and make install.
I suggest you add a note to that effect in INSTALL.

Thanks for the lhs2TeX update!

Regards, - Conal

On 3/14/06, Andres Loeh [EMAIL PROTECTED] wrote:

  lhs2TeX version 1.11

 We are pleased to announce a new release of lhs2TeX, a preprocessor
 to generate LaTeX code from literate Haskell sources.

 lhs2TeX includes the following features:

 * Highly customized output.
 * Liberal parser -- no restriction to Haskell 98.
 * Generate multiple versions of a program or document from a single source.
 * Active documents: call Haskell to generate parts of the document
   (useful for papers on Haskell).
 * A manual explaining all the important aspects of lhs2TeX.

 Changes (w.r.t. lhs2TeX 1.9)

 [Note that 1.10 has never been released to avoid confusion with some
 privately distributed versions.]

 * Specification code is now handled correctly (that is, ignored) in
   the code and newcode styles.
 * Comments and pragmas are handled in a better way by the newcode style.
 * There are some new forms of implicit formatting directives.
 * The LaTeX code produced in the poly style looks slightly more beautiful.
 * There is a new Library section, containing some frequently used
   formatting directives.
 * Generation of file/linenumber directives in the produced LaTeX code,
   for Stefan Wehr's adjust tool.  Based on a patch submitted by Stefan Wehr.
 * lhs2TeX can now replace ghc's literate preprocessor.
 * Improved efficiency of \eval and \perform (to call ghci or hugs
   from lhs2TeX documents).

 Requirements and Download
 -------------------------

 A source distribution that should be suitable for Unix-based
 environments is available from

 http://www.iai.uni-bonn.de/~loeh/lhs2tex/

 It has been verified to build on Linux and MacOS X.  Binaries will be
 made available on request.

 You need a recent version of GHC (6.4.1 is tested, 6.2.2 might work)
 to build lhs2TeX, and, of course, you need a TeX distribution to make
 use of lhs2TeX's output.  The program includes a configuration that is
 suitable for use with LaTeX.  In theory, there should be no problem
 generating code for other TeX flavors, such as plain TeX or ConTeXt.

 Happy lhs2TeXing,
 Andres Loeh and Ralf Hinze
 [EMAIL PROTECTED]
 [EMAIL PROTECTED]

 ___
 Haskell mailing list
 Haskell@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


[Haskell] Re: Haskell as a disruptive technology?

2006-03-27 Thread Ashley Yakeley

Paul Johnson wrote:


I've never used Visual Haskell, but from the web page two things stand out:

1: It's version 0.0.  This is not for production use.


Also: You may not use or distribute this Software or any derivative 
works in any form for commercial purposes.


___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


RE: [Haskell] Haskell as a disruptive technology?

2006-03-27 Thread Tim Docker
Robert Dockins wrote:

 All we have to do is be ready for it when it arrives.
 When people see that, using Haskell, they can write programs using 1)
 fewer man-hours with 2) fewer bugs which 3) scale effortlessly to
 highly parallel hardware to beat the pants off C/C++/Java/what-have-you,
 they'll be beating down the doors to get it.

 I'd love to see Haskell on highly concurrent hardware becoming more  
 of a reality: Haskell on the Cell is a great idea; I'd love to see  
 Haskell over MPI (using YHC bytecode maybe?); Haskell-over-GPU (you  
 know you like it!); and of course, SMP Haskell is also interesting.   
 One of the things I love about Haskell is that the language  
 definition transcends execution strategy/environment.

I think your enthusiasm outstrips reality a bit here. STM appears
to provide a much improved model for concurrent programming on shared
memory architectures (I say "appears" here because I've read the papers
but haven't used it).

Whilst this applies directly to the limited scalability of new multi-core
desktop machines, I don't think it's going to provide huge benefits to the
more scalable architectures based upon message passing (eg MPI and Cell).
I'd be pleased to be corrected, but I'm not aware of any mainstream Haskell
libs/extensions that make implementing message passing concurrent
systems fundamentally easier than in other languages.

Tim
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] Haskell as a disruptive technology?

2006-03-27 Thread ajb
G'day all.

Quoting Immanuel Litzroth [EMAIL PROTECTED]:

 I have created programs that fill an array with the first 100
 prime numbers using the Sieve of Eratosthenes.
 I have done this in Lisp, C++, OCaml and Haskell. Lisp and C++ win
 hands down, being approximately 5x faster than OCaml and Haskell.

Can we see your programs?

Cheers,
Andrew Bromage
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] Re: Haskell as a disruptive technology?

2006-03-27 Thread Krasimir Angelov
Visual Haskell is now under a BSD license and is open source. See

http://darcs.haskell.org/vshaskell/

Cheers,
   Krasimir

2006/3/28, Ashley Yakeley [EMAIL PROTECTED]:
 Paul Johnson wrote:

  I've never used Visual Haskell, but from the web page two things stand out:
 
  1: It's version 0.0.  This is not for production use.

 Also: You may not use or distribute this Software or any derivative
 works in any form for commercial purposes.

 ___
 Haskell mailing list
 Haskell@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: important news: refocusing discussion

2006-03-27 Thread Malcolm Wallace
John Goerzen [EMAIL PROTECTED] wrote:

 On Fri, Mar 24, 2006 at 11:07:53AM +, Malcolm Wallace wrote:
   I assume that since a non-concurrent implementation has
  only one thread, that thread will be trying to MVar-synchronise with
  something that does not exist, and hence be blocked for ever.
 
 Not necessarily.  An MVar is a useful tool in place of an IORef.  It
 works well when a given hunk of code is used in a threaded program,
 but it also works well in a non-threaded program.  If they are used
 correctly, there is no problem.

Your final sentence is the one that I want to emphasise.  What does it
mean to use an MVar correctly, such that one can avoid blocking in a
non-threaded implementation?
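One reading of "correctly", sketched below with illustrative function names: an MVar that is refilled immediately after every take behaves like an IORef and never blocks a lone thread, whereas one used purely for cross-thread signalling blocks forever without concurrency.

```haskell
import Control.Concurrent.MVar

-- Safe in a single-threaded program: the MVar is full between
-- operations, so takeMVar never waits on another thread.
incCounter :: MVar Int -> IO Int
incCounter v = do
  n <- takeMVar v
  putMVar v (n + 1)   -- refill at once; the MVar acts like an IORef
  return (n + 1)

-- Unsafe without concurrency: if the MVar starts empty and is only
-- ever filled by some *other* thread, a lone thread blocks forever.
waitForSignal :: MVar () -> IO ()
waitForSignal = takeMVar
```

The first pattern works in both threaded and non-threaded implementations; the second is exactly the MVar-synchronisation that a non-concurrent implementation cannot satisfy.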

Regards,
Malcolm
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: Alternatives to . for composition

2006-03-27 Thread Doaitse Swierstra
I am not sure how this looks in other people's editors and mail-readers,
but on my machine (Mac OS X Tiger) there is significantly more white
space after the symbol than before. Is that normal?





For emacs, just bind a key (C-. say) to (ucs-insert
 #X2218). ucs-insert comes from ucs-tables.


You can also activate the TeX input method using C-u C-\ TeX RET (if
leim is installed). Then you can insert many characters by typing
their TeX names. The character ∘ can be inserted by typing \comp,  
for

instance.


 Doaitse Swierstra

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: Alternatives to . for composition

2006-03-27 Thread Josef Svenningsson
FYI, Cayenne used the center dot as composition. See the System$HO module.
http://www.cs.chalmers.se/~augustss/cayenne/system.html
I remember liking it but I think the ring operator would be closer to
mathematics notation and indeed the best choice.

Cheers,

/Josef

On 3/25/06, Dylan Thurston [EMAIL PROTECTED] wrote:
 At http://hackage.haskell.org/trac/haskell-prime/wiki/CompositionAsDot ,
 there is a list of possible Unicode replacements for the '.'
 operator.  Oddly, the canonical one is missing (from
 http://www.unicode.org/charts/PDF/U2200.pdf ):

 2218  RING OPERATOR
   = composite function
   = APL jot
 00B0 degree sign
 25E6 white bullet

 I don't think any other Unicode character should be considered.

 (Is this the approved way to send minor updates like this?)

 Peace,
 Dylan Thurston




 ___
 Haskell-prime mailing list
 Haskell-prime@haskell.org
 http://haskell.org/mailman/listinfo/haskell-prime



___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Concurrency (was: RE: Re[2]: important news: refocusing discussion)

2006-03-27 Thread Simon Marlow
On 26 March 2006 02:31, isaac jones wrote:

 Possible Interests:
  1. I can write tools like filesystems, web servers, and GUIs in
 Haskell'
  2. Libraries that I use are thread-safe
  3. I can compile my code with any Haskell' compiler
  4. Tools such as debuggers and tracers that claim to support Haskell'
 actually work on my code.
  5. That there not be too many Haskell's
  6. That there be a diversity of Haskell' implementations
  7. That concurrency be reasonable to implement for existing
 compilers/interpreters.
  8. That it be reasonable to implement for new compilers/interpreters.
  9. Show off how effective Haskell can be in this area (possibly
 attracting new users).
 
 By 5 I mean that it might be nice to have a core Haskell and a bunch
 of addenda.  But this could cause no two Haskell' implementations to
 be the same. (My Haskell' might have concurrency and FFI, but no class
 system, or something.)  The more optional addenda we have, the more we
 actually fracture the language.  We could be in the same situation
 we're in today.
 
 Isaac's Interests
  * 1-6, 9
 
 Simon's Interests:
  * He's mentioned 9, I'm sure that there are others.

I'd count all of 1-9 as interests - they're all desirable.  But we
haven't found a design that satisfies 1-9, and in the absence of that we
have to compromise somewhere.

But before we get carried away figuring out all the pros and cons of
various options, let me point out once again that
  
  This is just a marketing decision

Because

 (a) we're going to standardise concurrency anyway

 (b) it is unlikely that Hugs or JHC will implement concurrency
 even if it goes into the standard

 (c) the question is just whether the brand Haskell' encompasses
 concurrency or not (I thought I'd use that word because I
 know JL likes it so much :-)

Yes there are several ramifications of this decision, but none of them
are technical.  As I see it, we either specify Concurrency as an
addendum, or NoConcurrency as an addendum, and both options are about
the same amount of work.

So on that note, I'll restate my preference that Haskell' should include
concurrency, and leave it at that.  We can start the standardisation
process without arriving at a conclusion on this issue anyway.

Cheers,
Simon
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: important news: refocusing discussion

2006-03-27 Thread Neil Mitchell
 it's too hard to implement (and it's not always hard - the YHC guys
 managed it in a matter of days)

Tom is the one who implemented it in Yhc, and details can be found at
http://www.haskell.org/haskellwiki/Yhc/RTS/Concurrency

but some of the reasons that it was easier than in other compilers are:

* We compile to byte code, then execute the bytecode. Because of this,
to add support for concurrency only really changes the executer, which
is a standalone program.

* Bytecode also means we can just schedule each process for n instructions.

* It's simulated concurrency: if you have two processors, only one will
ever be used.  The only exception is FFI, where a number of FFI calls
can run in parallel with some Haskell code. This means that no locking
is needed on the global heap.

If compiling to native code, and aiming for proper concurrency at the
operating system level, it would be a lot more work! If you wanted
high performance concurrency, like GHC, you would need to do that
extra work.

Thanks

Neil
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: Re[2]: important news: refocusing discussion

2006-03-27 Thread Ross Paterson
On Mon, Mar 27, 2006 at 09:36:28AM +0100, Simon Marlow wrote:
 On 26 March 2006 03:44, Ross Paterson wrote:
  [...] the key point is that
  a Haskell' module that does not use concurrency, but is thread-safe,
  ought to work with non-concurrent implementations too.
  
  To make that work, we'd need two interfaces:
   * one for applications that make use of concurrency.  This would be
 unavailable on some implementations.
   * one for thread-safe use of state.  This would be available on all
 implementations, and authors not requiring concurrency would be
 encouraged to use it for maximum portability.
 
 Sure, I think this is a point on which we're all agreed.
 
 The portable interface could be Control.Concurrent.MVar, perhaps.

As Malcolm pointed out, using MVars requires some care, even if you were
just aiming to be thread-safe.  Packaged things like atomicModifyIORef
are safe, but awkward, and need extra stuff to handle multiple variables.

How about STM (minus retry/orElse) and TVars as the portable interface?
They're trivial for a single-threaded implementation, and provide a
comfortable interface for everyone.
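A rough sketch of why the single-threaded case is trivial (an illustration of the idea only, not GHC's STM): with no other threads there is nothing to race against, so no transaction log or locking is needed, TVars can be plain IORefs, and atomically can simply run its argument.

```haskell
import Data.IORef

-- A toy single-threaded "STM": transactions are ordinary IO actions,
-- which are already atomic when only one thread exists.
type STM a = IO a

newtype TVar a = TVar (IORef a)

atomically :: STM a -> IO a
atomically = id

newTVar :: a -> STM (TVar a)
newTVar x = TVar <$> newIORef x

readTVar :: TVar a -> STM a
readTVar (TVar r) = readIORef r

writeTVar :: TVar a -> a -> STM ()
writeTVar (TVar r) = writeIORef r
```

Supporting retry/orElse is what would require real machinery (blocking and rollback), which is presumably why Ross proposes leaving them out of the portable subset.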

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: Re[2]: important news: refocusing discussion

2006-03-27 Thread Taral
On 3/27/06, Ross Paterson [EMAIL PROTECTED] wrote:
 How about STM (minus retry/orElse) and TVars as the portable interface?
 They're trivial for a single-threaded implementation, and provide a
 comfortable interface for everyone.

+1 on STM as the core interface. Why do you suggest omitting retry/orElse?

--
Taral [EMAIL PROTECTED]
You can't prove anything.
-- Gödel's Incompetence Theorem
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: Re[2]: important news: refocusing discussion

2006-03-27 Thread Aaron Denney
On 2006-03-28, Taral [EMAIL PROTECTED] wrote:
 On 3/27/06, Ross Paterson [EMAIL PROTECTED] wrote:
 How about STM (minus retry/orElse) and TVars as the portable interface?
 They're trivial for a single-threaded implementation, and provide a
 comfortable interface for everyone.

 +1 on STM as the core interface. Why do you suggest omitting retry/orElse?

-1.  STM is a cool little abstraction making it easy to write dead-lock
free code.  I haven't wrapped my head around writing _quick_ dead-lock free
code, where as the MVar model has all sorts of abstractions built that
make that, well, not _easy_, but the difficulties are understood.

-- 
Aaron Denney
--

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: [Haskell-cafe] Bit streams programs in Haskell

2006-03-27 Thread Spencer Janssen
I have rewritten the Huffman benchmark (see
http://cse.unl.edu/~sjanssen/huffman.hs), resulting in a significant
performance increase.  On my computer it completes one iteration in
about 0.9 seconds.  In comparison, the O'Caml version takes around 3.1
seconds.  These samples seem to indicate that this Haskell program
will compare favorably with Erlang.

I'd say the program is written in fairly idiomatic Haskell.  Turning
the bytes in the buffer into a lazy stream of Bools (see the
bitstream function) is both elegant and surprisingly efficient.
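Spencer's actual bitstream function operates on his program's buffer; a hypothetical list-based version, sketched here only to illustrate the idea, might look like this:

```haskell
import Data.Bits (testBit)
import Data.Word (Word8)

-- Expand each byte into its 8 bits, most significant bit first.
-- Laziness means bits are produced only as the consumer demands them,
-- so the whole input need never be expanded at once.
bitstream :: [Word8] -> [Bool]
bitstream = concatMap byteToBits
  where byteToBits b = [testBit b i | i <- [7,6..0]]
```

For example, `bitstream [0xA5]` yields the bits of 10100101.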

There is one infidelity in my implementation: the program writes the
output once per iteration, and this IO is included in the measured
time.  This results in a small time penalty in comparison to the other
versions.  I am still searching for a solution to this, but for now it
seems that printing the output is the cheapest way to force
evaluation.

Improvements and discussion are solicited.

Feel free to include this code in your website, if you'd like.


Spencer Janssen

On 3/22/06, Per Gustafsson [EMAIL PROTECTED] wrote:

 Haskell gurus,

 We have made a proposal to extend the Erlang `binary' data type from
 being a sequence of bytes (a byte stream) to being a sequence of bits (a
 bitstream) with the ability to do pattern matching at the bit level.

 This proposal has now been fully implemented all
 these at the level of the BEAM virtual machine and in the HiPE
 compiler. Most probably, they will be part of the next major
 Erlang/OTP open source release. (We write "most probably" because we
 do not really control what Ericsson decides to release as part of its
 open source system.)

 We wanted to evaluate the performance of our implementation and the
 succintness of our syntax, particularly against other `similar'
 languages. For this reason, we wrote five reasonably representative
 benchmarks, showing the things that one could employ bit stream pattern
 matching and bit-level comprehensions for.

 The benchmarks (drop3, five11, huffman, uudecode, and uuendode) have
 all been written in Erlang, O'Caml and Haskell. For some of them, C
 and Java versions exist. They can be found in the following homepage:

http://bitbenches.infogami.com/

 As you will see there, the Haskell numbers are significantly slower
 than those of Erlang and O'Caml. We are wondering why this is so.

 For each language, we have spent a considerable effort in writing the
 benchmarks in -- at least what we feel -- is the most natural and
 efficient way one can write them.

 The only constraint we impose is that for functional languages, data
 structures without any explicit mutation have to be used in the part
 of the program for which measurements are taken.

 Our experience in writing efficient (and beautiful) Haskell programs is
 close to (if not below) zero. Also, perhaps our mind might be suffering
 from severe case of strictness and might be completely unable to `think
 lazily'. So, we request your help in noticing obvious NO-NOs and stupid
 mistakes that we might have made. We even welcome completely different
 Haskell programs provided they adhere to the constraint mentioned
 above -- no mutation.

 Best regards,

 Kostis Sagonas and Per Gustafsson


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Bit streams programs in Haskell

2006-03-27 Thread Spencer Janssen
It seems I have spoken too soon!

 There is one infidelity in my implementation: the program writes the
 output once per iteration, and this IO is included in the measured
 time.
Due to a few changes, this caveat no longer applies.  As a result
Haskell performs just a bit better.  The code is still at
http://cse.unl.edu/~sjanssen/huffman.hs.  The old main is left for
reference, renamed to main'.

Run time (in seconds) for 10 iterations:
O'Caml: 35.9
Old Haskell: 25.6
New Haskell: 8.8


Spencer Janssen
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Positive integers

2006-03-27 Thread John Meacham
On Fri, Mar 24, 2006 at 06:54:54PM +, Aaron Denney wrote:
  Without breaking compatibility?
  But class instances become invalid if the hierarchy is modified.
 
 No, compatibility will be broken.  Hopefully not for most uses -- I
 don't think most people define new instances, and those that do will be
 able to do so more reasonably, so hopefully wouldn't mind.

The problem isn't with creating instances, it is with using the classes
at all.

well, in interfaces you are going to end up with some specific class or
another concretely mentioned in your type signatures, which means you
can't interact with code that only knows about the alternate class. like

genericLength :: Integral a => [b] -> a

if you have a different 'Integral' you can't call genericLength with it,
or anything built up on genericLength. basically there would be no way
for 'new' and 'old' polymorphic code to interact. 

the inability to evolve the class hierarchy is a serious issue, enough
that it very well could be impractical for haskell' unless something
like class aliases were widely adopted.

John


-- 
John Meacham - ⑆repetae.net⑆john⑈
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Positive integers

2006-03-27 Thread Jan-Willem Maessen


On Mar 26, 2006, at 4:35 PM, Daniel McAllansmith wrote:

[Discussion of positive integers and Words]



I was thinking about several things in this thread: torsors, overflow
semantics, bounds checking...  I wonder if there would be any merit in
being able to define constrained subsets of integers and the semantics
when/if they overflow.


Oddly, I've just finished coding this up, with type-level bounds  
(represented by type-level ints).  It's a big pile of modules on top  
of John Meacham's type-level Nats library, which add type-level Ints  
(as non-normalized Nat pairs), unknown endpoints (permitting Integer  
to fit in the same framework), and the actual bounded ints themselves.


Naturally, I needed to define my own versions of the functionality in  
Eq, Ord, and Num.  These resolve statically as much as possible  
(including some possibly dubious behavior with singleton intervals).   
That said, I don't try to do everything at the type level---it became  
too tiring, with not enough plausible benefit.


Any and all: Drop me a line if you are interested; it's a stack of
3-4 modules and at best half-baked.  I'd just gotten a mostly-complete
version.


-Jan-Willem Maessen


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Positive integers

2006-03-27 Thread Burton Samograd
 For example, take UNIX nice levels -20 to 19.  You could have
   data ConstrainedInteger = CI {distToMax :: Natural, distToMin :: Natural}
 this would ensure only the 40 integers can be represented.
 Then you could have _something_ that defined what happened on overflow,
 whether it wraps, reflects, errors, truncates or whatever.
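The quoted sketch could be made concrete along these lines. This is only an illustration: the type name, the bounds, and the overflow policies are all invented for the example, not taken from any proposal.

```haskell
-- A hypothetical bounded type for UNIX nice levels (-20..19), with the
-- overflow behaviour made explicit at construction time.
newtype Nice = Nice Int deriving (Eq, Show)

-- Two of the policies mentioned above; wrapping or reflecting could be
-- added as further constructors.
data Overflow = Truncate | Error

mkNice :: Overflow -> Int -> Nice
mkNice policy n
  | n >= -20 && n <= 19 = Nice n
  | otherwise = case policy of
      Truncate -> Nice (max (-20) (min 19 n))  -- clamp to the bounds
      Error    -> error ("nice level out of range: " ++ show n)
```

So `mkNice Truncate 50` clamps to `Nice 19`, while `mkNice Error 50` raises a runtime error, much like the Ada behaviour discussed below.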

Doesn't Ada have constrained number types which have similar behaviour?

--
burton samograd[EMAIL PROTECTED]
kruhft.blogspot.com   www.myspace.com/kruhft   metashell.blogspot.com
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Positive integers

2006-03-27 Thread Dylan Thurston
On Mon, Mar 27, 2006 at 05:02:20AM -0800, John Meacham wrote:
 well, in interfaces you are going to end up with some specific class or
 another concretely mentioned in your type signatures, which means you
 can't interact with code that only knows about the alternate class. like
 
 genericLength :: Integral a => [b] -> a
 
 if you have a different 'Integral' you can't call genericLength with it,
 or anything built up on genericLength. basically there would be no way
 for 'new' and 'old' polymorphic code to interact. 

I think the idea would be that the source for genericLength would
compile using either class hierarchy with no change.  For the case of
genericLength, this is true for the proposed alternate prelude Hennig
Theilemann pointed to.  It would be mostly true in general for that
proposal, with the exception that you would sometimes need to add Show
or Eq instances.

 the inability to evolve the class hierarchy is a serious issue, enough
 that it very well could be impractical for haskell' unless something
 like class aliases were widely adopted.

I think that as long as you're not defining classes source compatibility
would not be hard.  Of course you couldn't hope to link code written
with one hierarchy against another.

Peace,
Dylan Thurston


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Positive integers

2006-03-27 Thread Neil Mitchell
 Doesn't Ada have constrained number types which have similar behaviour?

Yes. Just for comparison, the behaviour of the Ada number is to throw
an exception at runtime if a number overflows its bounds. If these
checks can be eliminated statically, then they are. If an operation
will always give a runtime error then this is given as a warning at
compile time.

Thanks

Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Positive integers

2006-03-27 Thread Henning Thielemann


On Mon, 27 Mar 2006, Neil Mitchell wrote:


Doesn't Ada have constrained number types which have similar behaviour?


Yes. Just for comparison, the behaviour of the Ada number is to throw
an exception at runtime if a number overflows its bounds. If these
checks can be eliminated statically, then they are. If an operation
will always give a runtime error then this is given as a warning at
compile time.


Quite similar to Modula (maybe also Pascal), as I indicated. There the 
bounded integers are called sub-ranges.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Bit streams programs in Haskell

2006-03-27 Thread Per Gustafsson

Hi

Thanks to everyone that have helped, the runtimes for the haskell 
programs have decreased significantly, presently they look like this 
compared to O'Caml:


Benchmark   haskell ocaml
drop3   5.786   3.151
five11  8.657   7.692
huffman 7.134   18.593
uudecode6.042   2.657
uuencode7.775   2.824

The huffman benchmark should not really be that slow, but it seems that 
O'Caml programs that build large lists tend to be slow.


The versions that I have used in this measurement is available at:

http://bitbenches.infogami.com/

Many thanks to the people that have responded, and if you have more 
comments on the programs, particularily uudecode and uuencode (which I 
have mostly written myself), feel free to suggest further improvments.


Note also that uudecode and uuencode for both O'Caml and Haskell break
the rules slightly by creating an array for each row, ending up with a
list of rows that is then turned into one array. Is there a library
function that can do this?


I wrote a function myself, but it is not very pretty.

Per
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] error vs. MonadError vs. fail

2006-03-27 Thread Andrew Pimlott
On Mon, Mar 27, 2006 at 02:53:58PM +1200, Daniel McAllansmith wrote:
 Is there a consensus on how anticipatable failure situations should be 
 handled?
 
 There was a thread, haskell programming guidelines, from 2006-02-25 where 
 John Meacham and Cale Gibbard had a bit of back-and-forth about using 
 Monad.fail or a purpose specific MonadFail class.
 
 Using fail certainly seems quick and easy, but I find it a bit
 distasteful for a few different reasons:

All of your reasons are good, but I recently tripped over an even better
one:  While fail must be defined in all monads, it has no sensible
definition in many, and so throws an exception.  I got burned because I
wrote a function to run some monad of mine, which might result in an
answer or an error, and I used fail for the error case:

run :: Monad m => MyMonad a -> m a
run m = ... if ... then return x else fail e

Then, I accidentally (this was spread across two functions) ran my monad
twice:

run (run m)

This typechecked and crashed.  The inner run was given type

MyMonad a -> MyMonad a

and you can guess what fail does in MyMonad.  Ugh.  If I had used
MonadError for the return value of run, run would only typecheck in a
monad that can sensibly handle errors, catching my bug.

 Apparently the advantage of fail is that user of the library can choose to 
 receive failures as eg Maybes, Eithers, [], or whatever they like.
...
 MonadError is not up to this task as far as I can tell.

Why not?  All that needs to be done is write the missing instances, eg

instance MonadError () Maybe where
  throwError x   = Nothing
  Nothing `catchError` f = f ()
  Just x `catchError` f  = Just x

instance Error () where
  noMsg  = ()
  strMsg s   = () 

As you might tell, I would like to see this instance in MonadError.  An
instance for [] is however questionable, IMO.
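
A minimal sketch of how such an instance gets used, assuming a modern mtl (which nowadays ships the `MonadError () Maybe` instance asked for above); `safeDiv` is an invented example, not from the thread:

```haskell
import Control.Monad.Except (MonadError, throwError)

-- safeDiv reports failure through MonadError, so the *caller* chooses
-- the concrete monad: Maybe, Either (), or anything else with an instance.
safeDiv :: MonadError () m => Int -> Int -> m Int
safeDiv _ 0 = throwError ()
safeDiv x y = return (x `div` y)

main :: IO ()
main = do
  print (safeDiv 10 2 :: Maybe Int)      -- Just 5
  print (safeDiv 10 0 :: Maybe Int)      -- Nothing
  print (safeDiv 10 0 :: Either () Int)  -- Left ()
```

Unlike `fail`, a wrong nesting of `safeDiv`-style code fails to typecheck unless the outer monad really does have a `MonadError` instance.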

BTW, I've posted about these issues several times, eg

http://www.haskell.org/pipermail/haskell-cafe/2005-June/010361.html

Andrew


[Haskell-cafe] multiple computations, same input

2006-03-27 Thread Greg Fitzgerald
How do I perform multiple computations on a long lazy list without
introducing a space leak?

Doing a single computation like this works great:

    f = show . filter (> 1)

But if I do something like this:

    f lst = show (filter (> 1) lst, filter (> 2) lst)

then it looks like GHC won't garbage collect list elements until the
first filter has completely finished, and the second filter has
iterated over them.

Is there an easy way to feed each element into both functions, instead
of feeding all elements to one function, and then all to the next?

Thanks,
Greg


Re: [Haskell-cafe] multiple computations, same input

2006-03-27 Thread Neil Mitchell
Hi Greg,

 But if I do something like this:
   f lst = show (filter (> 1) lst, filter (> 2) lst)
 then it looks like GHC won't garbage collect list elements
 until the first
 filter has completely finished

There is a very good reason for this: show (a,b) is essentially

show (a,b) = "(" ++ show a ++ "," ++ show b ++ ")"

so all the elements of a are shown first, which means that lst is held
up in a thunk of filter, to be evaluated later. In this particular
case, you know that (> 2) implies (> 1), so you can speed it up
with:

f lst = show (part1, part2)
   where
   part1 = filter (> 1) lst
   part2 = filter (> 2) part1

note that part2 only examines part1. As long as part1 is smaller than
lst, this will reduce the space leak.
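
A runnable sketch of the sharing trick above, using the (> 1)/(> 2) predicates from Greg's example:

```haskell
-- part2 is computed from part1, so the second pass never touches lst:
-- once an element has fallen out of part1 it can be collected.
lst :: [Int]
lst = [0 .. 5]

part1, part2 :: [Int]
part1 = filter (> 1) lst
part2 = filter (> 2) part1  -- scans part1, not lst

main :: IO ()
main = print (part1, part2)  -- ([2,3,4,5],[3,4,5])
```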

There is really no way to eliminate the space leak in this
circumstance, since you really do need to hold a part of the data in
memory while you show the first one, so you can then show the second
one.

Thanks

Neil


Re: [Haskell-cafe] multiple computations, same input

2006-03-27 Thread Greg Fitzgerald
  hold a part of the data in memory while you show the first one,

Here would be a better example then.

    f lst = show (sum (filter (> 1) lst), sum (filter (> 2) lst))

It ought to be *possible* to compute both operations without holding
onto any of the list elements.

In the imperative world, you'd say:

    sum1 = 0
    sum2 = 0
    for num in lst
      sum1 += num if num > 1
      sum2 += num if num > 2
    end
    puts sum1, sum2

One could probably hack it together with foldM, the State monad, and
maybe some strictness, but I'd like to make full use of laziness and
stick to the basic list operations if at all possible.

I'm not so concerned with solving this particular problem as I am in
learning the purely functional technique for performing orthogonal
computations on the same input without introducing a space leak.

Maybe something like this?

    arr (sum . filter (> 1)) &&& arr (sum . filter (> 2))

Thanks,
Greg


[Haskell-cafe] Defining instance needs allow-undecidable-instance?

2006-03-27 Thread Daniel McAllansmith
Hi, folks.

I've got a working class and instances thereof.  I would now like to change 
the class such that error behaviour is specified by MonadError, for the 
moment throwing String errors.  When I try to add MonadError into the types I 
eventually hit the requirement for allow-undecidable-instances.  
Is there some way I can avoid having to use this extension?


My starting point consists of:

class (Num i, Bounded i, Monad m) => MonadSource i m | m -> i where
...

newtype SourceT i m a = SourceT (StateT (Store i) m a)
deriving (Functor, Monad, MonadIO, MonadTrans)

instance (Num i, Bounded i, Monad m) => MonadSource i (SourceT i m) where
...

I changed it to:

class (Num i, Bounded i, Monad m, MonadError String m)
=> MonadSource i m | m -> i where


newtype SourceT i m a = SourceT (StateT (Store i) m a)
deriving (Functor, Monad, MonadIO, MonadTrans, MonadError e)

instance (Num i, Bounded i, Monad m, MonadError String m)
=> MonadSource i (SourceT i m) where
...



Thanks
Daniel
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Equirecursive types?

2006-03-27 Thread lee marks
John Hughes wrote:
 ...
 
 Indeed, error messages in common cases would be worsened significantly, 
 because as Ross says, common errors (such as forgetting a parameter in a 
 recursive call) would lead to legal definitions that cause a type error 
 when applied. Partly for this reason, OCaml permits recursion only in 
 object types, if I remember rightly, where it's most useful. In other 
 cases the occurs check is still used.
 

Couldn't this be mitigated by requiring an explicit type annotation on the 
function being called recursively? In other words, infinite types are never 
silently inferred, but can only coincide with explicit signatures.

So this is legal:

  type Fix s a = s a (Fix s a)

  fold :: Functor (s a) => (s a b -> b) -> Fix s a -> b
  fold f = f . fmap (fold f)

but this is not:

  fold f = f . fmap (fold f)


I guess it just seems that banning infinite types altogether is throwing the 
baby out with the bathwater.
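
For comparison, a self-contained sketch of the iso-recursive workaround that today's Haskell forces: the recursion goes through a newtype, and `ListF`/`sumAlg` are made-up examples to exercise `fold`:

```haskell
-- The recursion must go through a newtype instead of an equirecursive
-- type synonym; the explicit 'out' unwrapping is the price paid.
newtype Fix s a = In { out :: s a (Fix s a) }

fold :: Functor (s a) => (s a b -> b) -> Fix s a -> b
fold f = f . fmap (fold f) . out

-- A tiny base functor for lists, to run fold on:
data ListF a b = NilF | ConsF a b

instance Functor (ListF a) where
  fmap _ NilF        = NilF
  fmap f (ConsF a b) = ConsF a (f b)

sumAlg :: ListF Int Int -> Int
sumAlg NilF        = 0
sumAlg (ConsF a b) = a + b

main :: IO ()
main = print (fold sumAlg (In (ConsF 1 (In (ConsF 2 (In NilF))))))  -- 3
```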





Re: [Haskell-cafe] multiple computations, same input

2006-03-27 Thread Neil Mitchell
Hi,

 Here would be a better example then.

 f lst = show (sum (filter (> 1) lst), sum (filter (> 2) lst))

I suspected that you actually wanted to do something cleverer with
the list, so for the sake of argument I'm going to change (> 1) to p1 and
(> 2) to p2 - to show how this can be done in the general case. With the
specific information you know about (> 1) vs (> 2) you can do better, but
this gets across the general point:

f lst = show (sumPairs (> 1) (> 2) lst)

sumPairs :: (Int -> Bool) -> (Int -> Bool) -> [Int] -> (Int, Int)
sumPairs p1 p2 [] = (0, 0)
sumPairs p1 p2 (x:xs) = (add p1 a, add p2 b)
  where
    (a,b) = sumPairs p1 p2 xs
    add pred value = if pred x then x + value else value

[Untested, something like this should work]

You can actually arrive at this solution entirely by reasoning on the
program, i.e. not coming up with a fresh definition.

The above code essentially follows your imperative pseudo code - I
think it's constant space, but I'm not too sure...

Thanks

Neil


Re: [Haskell-cafe] multiple computations, same input

2006-03-27 Thread Greg Buchholz
Neil Mitchell wrote:
 I suspected that you actually wanted to do something cleverer with
 the list, so for the sake of argument I'm going to change (> 1) to p1
 and (> 2) to p2 - to show how this can be done in the general case.
 With the specific information you know about (> 1) vs (> 2) you can
 do better, but this gets across the general point:
 
 f lst = show (sumPairs (> 1) (> 2) lst)
 
 sumPairs :: (Int -> Bool) -> (Int -> Bool) -> [Int] -> (Int, Int)
 sumPairs p1 p2 [] = (0, 0)
 sumPairs p1 p2 (x:xs) = (add p1 a, add p2 b)
   where
     (a,b) = sumPairs p1 p2 xs
     add pred value = if pred x then x + value else value
 
 [Untested, something like this should work]

Nope.  That won't work because you end up creating huge add thunks
which end up causing a stack overflow (tested with GHC -O2).  I
think you are probably going to need strictness in order to skin this
cat in Haskell.  Here's an example that does work...

import Data.List
main = print $ para_filter_sum (> 1) (> 2) lst

twos = 2: twos
lst = take 1000 $ [1,2,3,4,5] ++ twos

-- f lst = show (filter (> 1) lst, filter (> 2) lst)
para_filter_sum f g xs =
    foldl' (\(n,m) elem -> seq n $ seq m $
             (n + if f elem then elem else 0,
              m + if g elem then elem else 0)) (0,0) xs


Greg Buchholz



Re: [Haskell-cafe] multiple computations, same input

2006-03-27 Thread Greg Fitzgerald
Thanks Neil.

How do I add in another ~10 computations, or map a list of 100
computations onto the same input?

Isn't there a way to do this without one computation having to be aware
of the other?

This feels like a situation Parsec users would find themselves in all
the time. When you have a bunch of parsers in a 'choice', does the
start of the input stream linger until the last parser is executed?

Thanks,
Greg

On 3/27/06, Neil Mitchell [EMAIL PROTECTED] wrote:
 Hi,

  Here would be a better example then.

  f lst = show (sum (filter (> 1) lst), sum (filter (> 2) lst))

 I suspected that you actually wanted to do something cleverer with
 the list, so for the sake of argument I'm going to change (> 1) to p1
 and (> 2) to p2 - to show how this can be done in the general case.
 With the specific information you know about (> 1) vs (> 2) you can
 do better, but this gets across the general point:

 f lst = show (sumPairs (> 1) (> 2) lst)

 sumPairs :: (Int -> Bool) -> (Int -> Bool) -> [Int] -> (Int, Int)
 sumPairs p1 p2 [] = (0, 0)
 sumPairs p1 p2 (x:xs) = (add p1 a, add p2 b)
   where
     (a,b) = sumPairs p1 p2 xs
     add pred value = if pred x then x + value else value

 [Untested, something like this should work]

 You can actually arrive at this solution entirely by reasoning on the
 program, i.e. not coming up with a fresh definition.

 The above code essentially follows your imperative pseudo code - I
 think it's constant space, but I'm not too sure...

 Thanks

 Neil



Re: [Haskell-cafe] multiple computations, same input

2006-03-27 Thread Neil Mitchell
 Thanks Neil.  How do I add in another ~10 computations, or map a list of a
 100 computations to the same input?

 Isn't there a way to do this without one computation having to be aware of
 the other?

I guess they have to be aware at some level; perhaps arrows generalise
the awareness they need, or perhaps you'd need something else.
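
One hedged way to generalise the earlier sumPairs to any number of computations (`sums` is an invented helper, not a library function): keep a list of accumulators and force all of them on every step of a strict left fold.

```haskell
import Data.List (foldl')

-- One strict pass that runs any number of predicate/sum computations
-- over the same list, so no prefix of the input has to be retained.
sums :: [Int -> Bool] -> [Int] -> [Int]
sums ps = foldl' step (map (const 0) ps)
  where
    step accs x =
      let accs' = zipWith (\p acc -> if p x then acc + x else acc) ps accs
      in foldr seq accs' accs'  -- force every accumulator each step

main :: IO ()
main = print (sums [(> 1), (> 2), (> 3)] [1 .. 5])  -- [14,12,9]
```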


 This feels like a situation Parsec users would find themselves in all the
 time.  When you have a bunch of parsers in a 'choice', does the start of the
 input stream linger until the last parser is executed?

No, as soon as one token is accepted from any parser, that input is
decided upon, and it will never go back. If you want that behaviour
you have to wrap the particular parser in try, which does give the
backtracking (and the space leak).

Thanks

Neil


Re: [Haskell-cafe] multiple computations, same input

2006-03-27 Thread Josef Svenningsson
On 3/28/06, Neil Mitchell [EMAIL PROTECTED] wrote:
  This feels like a situation Parsec users would find themselves in all the
  time.  When you have a bunch of parsers in a 'choice', does the start of the
  input stream linger until the last parser is executed?

 No, as soon as one token is accepted from any parser, that input is
 decided upon, and it will never go back. If you want that behaviour
 you have to wrap the particular parser in try, which does give the
 backtracking (and space leak)

I personally find this behaviour terribly confusing. It makes writing
the parser highly unmodular. It forces me to know exactly what a
certain parser recognizes to know whether I need to wrap a 'try'
around it when I compose it in parallel with another parser. Which is
why I much prefer to use parsing libraries based on Koen Claessen's
technique which performs all parses at the same time. It works
breadth-first on the parse forest (the set of parse trees).
Text.ParserCombinators.ReadP is one example which uses this technique.
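
A small illustration of the difference, using only base's ReadP (the grammar here is invented): the two branches share the prefix "fo", yet no Parsec-style 'try' is needed because (+++) keeps both alternatives alive.

```haskell
import Text.ParserCombinators.ReadP (ReadP, readP_to_S, string, (+++))

-- Symmetric choice: both branches are pursued in parallel until the
-- input disambiguates them.
p :: ReadP String
p = string "foo" +++ string "for"

main :: IO ()
main = print [ r | (r, "") <- readP_to_S p "for" ]  -- ["for"]
```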

Cheers,

/Josef


Re: [Haskell-cafe] Defining instance needs allow-undecidable-instance?

2006-03-27 Thread Daniel McAllansmith
On Tuesday 28 March 2006 11:12, Daniel McAllansmith wrote:
 Hi, folks.

 I've got a working class and instances thereof.  I would now like to change
 the class such that error behaviour is specified by MonadError, for the
 moment throwing String errors.  When I try to add MonadError into the types
 I eventually hit the requirement for allow-undecidable-instances.
 Is there some way I can avoid having to use this extension?


I've received a link[1] off-list to a patch to GHC changing this behaviour, so 
it looks like in the future I'll be able to remove the 
allow-undecidable-instances flag and rely on just glasgow-exts.

 instance (Num i, Bounded i, Monad m, MonadError String m)
 => MonadSource i (SourceT i m) where
 ...

That aside, is what I have done, including a type in the context, considered 
an acceptable solution for this sort of problem?

Cheers
Daniel

[1] http://article.gmane.org/gmane.comp.lang.haskell.cvs.ghc/13500


[Haskell-cafe] Eval of a syntax tree for reduction

2006-03-27 Thread Steve Downey
I'm working through Pierce's Types and Programming Languages on my own.
I'm attempting to write the typecheckers in Haskell, instead of ML.
However, when it comes to his eval function, I'm a little stuck. The
original is

let rec eval t =
  try let t' = eval1 t
      in eval t'
  with NoRuleApplies -> t

Where eval1 is a single step evaluation of the abstract syntax tree,
and NoRuleApplies is an exception that's raised if there are no rules
that can reduce the expression. Now, there's a footnote that it isn't
good style in ML. It seems even less natural in Haskell, at least to
me, but I still don't know the language that well.

The natural equivalent to what Pierce has in ML is something along the
lines of

data ArithExpr
  = TmTrue
  | TmFalse
  | TmIfExpr ArithExpr ArithExpr ArithExpr
  | TmZero
  | TmSucc ArithExpr
  | TmPred ArithExpr
  | TmIsZero ArithExpr
  deriving (Show, Eq)

eval1 :: ArithExpr -> ArithExpr
eval1 (TmIfExpr TmTrue t _) = t
eval1 (TmIfExpr TmFalse _ t) = t
eval1 (TmIfExpr t1 t2 t3) =
  let t1' = eval1 t1
  in TmIfExpr t1' t2 t3
-- and so on

But, how to terminate the recursion in the full eval?

I'm considering changing eval1 to be ArithExpr -> Maybe ArithExpr.
If the expression is reducible, then I return Just t, and if it's not
reducible, then Nothing.

It makes eval1 a bit more complicated, and not as straightforward a
translation from the type system being described, though. E.g.
reducing If looks more like

eval1 (TmIfExpr t1 t2 t3) =
  let t1' = eval1 t1
  in case t1' of
       { Just t1'' -> Just $ TmIfExpr t1'' t2 t3
       ; Nothing -> Nothing
       }

and eval then looks like

eval t =
  let t' = eval1 t
  in case t' of
       { Just t'' -> eval t''
       ; Nothing -> t'
       }

I'm looking for some suggestions on the direction to proceed.


Re: [Haskell-cafe] Eval of a syntax tree for reduction

2006-03-27 Thread Antti-Juhani Kaijanaho

Steve Downey wrote:
It makes eval1 a bit more complicated, and not as straightforward 
translation from the type system being described, though.

e.g reducing If looks more like

eval1 (TmIfExpr t1 t2 t3) =
    let t1' = eval1 t1
    in  case t1' of
        { Just t1'' -> Just $ TmIfExpr t1'' t2 t3
        ; Nothing -> Nothing
        }


You should use the fact that Maybe is a monad:

eval1 (TmIfExpr t1 t2 t3) =
  do t1' <- eval1 t1
     return $ TmIfExpr t1' t2 t3


and eval then looks like
eval t =
    let t' = eval1 t
    in case t' of
     { Just t'' -> eval t''
     ; Nothing -> t'
     }


(In the above, I suspect you need Nothing -> t, no prime.)

BTW, there's no need to use let here:

eval t = case eval1 t of Just t' -> eval t'
                         Nothing -> t
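
Putting the pieces of the thread together, a self-contained sketch of the Maybe-based evaluator; the extra TmSucc/TmPred/TmIsZero rules are filled in loosely from TAPL, with the numeric-value side conditions omitted:

```haskell
data ArithExpr
  = TmTrue | TmFalse
  | TmIfExpr ArithExpr ArithExpr ArithExpr
  | TmZero
  | TmSucc ArithExpr
  | TmPred ArithExpr
  | TmIsZero ArithExpr
  deriving (Show, Eq)

-- One small step; Nothing means "no rule applies" (a normal form).
eval1 :: ArithExpr -> Maybe ArithExpr
eval1 (TmIfExpr TmTrue  t _) = Just t
eval1 (TmIfExpr TmFalse _ t) = Just t
eval1 (TmIfExpr t1 t2 t3)    = do t1' <- eval1 t1
                                  return (TmIfExpr t1' t2 t3)
eval1 (TmSucc t)             = TmSucc <$> eval1 t
eval1 (TmPred TmZero)        = Just TmZero
eval1 (TmPred (TmSucc t))    = Just t
eval1 (TmPred t)             = TmPred <$> eval1 t
eval1 (TmIsZero TmZero)      = Just TmTrue
eval1 (TmIsZero (TmSucc _))  = Just TmFalse
eval1 (TmIsZero t)           = TmIsZero <$> eval1 t
eval1 _                      = Nothing

-- Iterate until no rule applies.
eval :: ArithExpr -> ArithExpr
eval t = case eval1 t of
           Just t' -> eval t'
           Nothing -> t

main :: IO ()
main = print (eval (TmIfExpr (TmIsZero TmZero) (TmSucc TmZero) TmZero))
```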




Re: [Haskell-cafe] multiple computations, same input

2006-03-27 Thread Tomasz Zielonka
On Mon, Mar 27, 2006 at 03:10:18PM -0800, Greg Fitzgerald wrote:
   hold a part of the data in memory while you show the first one,
 
 Here would be a better example then.
 
 f lst = show (sum (filter (> 1) lst), sum (filter (> 2) lst))
 
 It ought to be *possible* to compute both operations without holding onto
 any of the list elements.

I wonder if it would be possible to remove the space-leak by running both
branches concurrently, and scheduling threads in a way that would
minimise the space-leak. I proposed this before

  http://www.haskell.org/pipermail/haskell-cafe/2005-December/013428.html

I would like to hear opinions from some compiler gurus.

Best regards
Tomasz


Re: [Haskell-cafe] multiple computations, same input

2006-03-27 Thread Greg Buchholz
Tomasz Zielonka wrote:
 I wonder if it would be possible to remove the space-leak by running both
 branches concurrently, and scheduling threads in a way that would
 minimise the space-leak. I proposed this before
 
   http://www.haskell.org/pipermail/haskell-cafe/2005-December/013428.html

FWIW (not much), I asked a similar question over on the
Lambda-the-Ultimate blog a while back...

http://lambda-the-ultimate.org/node/923
http://lambda-the-ultimate.org/node/485

...Anyway, I can't help but think that there might be a happy medium
between eager and lazy evaluation.

Greg Buchholz


Re: [Haskell-cafe] Eval of a syntax tree for reduction

2006-03-27 Thread Greg Buchholz
Steve Downey wrote:
] I'm considering changing eval1 to be ArithExpr -> Maybe ArithExpr
] 
] If the expression is reducible, then I return Just t, and if it's not
] reducible, then Nothing
] 
] It makes eval1 a bit more complicated, and not as straightforward
] translation from the type system being described, though.
] e.g reducing If looks more like
] 
] eval1 (TmIfExpr t1 t2 t3) =
] let t1' = eval1 t1
] in  case t1' of
] { Just t1'' -> Just $ TmIfExpr t1'' t2 t3
] ; Nothing -> Nothing
] }
] 
] I'm looking for some suggestions on the direction to proceed.

If you are looking to get rid of the noise caused by Maybe, you
could package up all of the case and Just stuff into a few reusable
functions.  In fact, it's already been done for you, since Maybe is a
monad...

http://www.nomaware.com/monads/html/maybemonad.html

...You could try something like...

 import Control.Monad  -- for liftM3, etc.

 eval1_ :: ArithExpr -> Maybe ArithExpr
 eval1_ (TmIfExpr TmTrue  t _) = return t
 eval1_ (TmIfExpr TmFalse _ t) = return t
 eval1_ (TmIfExpr t1 t2 t3) = liftM3 TmIfExpr (eval1_ t1) (return t2) (return t3)

...and if it turns out you don't like the resulting Maybeified
program, you can get the original functionality back by changing the
type signature to use the Identity monad instead of Maybe.

Greg Buchholz
