Re: Loop unrolling + fusion ?

2009-03-09 Thread Brandon S. Allbery KF8NH

On 2009 Mar 9, at 9:32, Claus Reinke wrote:
One way out would be to treat the whole mutual recursion as a single
entity, either implicitly, as I indicated, or explicitly, as I
interpret Brandon's somewhat ambiguous comment. In other words, the
peel/unroll limits would apply to a whole group of mutually recursive
definitions.


Sorry, yes, I intended that the unrolling applied explicitly to a
group of mutually recursive functions.  I'm not sure if the unroll/peel
counts should be multiplied by the number of functions, though.


--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] allb...@kf8nh.com
system administrator [openafs,heimdal,too many hats] allb...@ece.cmu.edu
electrical and computer engineering, carnegie mellon university   KF8NH






Re: Loop unrolling + fusion ?

2009-03-09 Thread Max Bolingbroke
2009/3/9 Claus Reinke :
> But if you annotate all your unrolled and peeled new definitions as
> NOINLINE, do you still get the optimizations you want? There are probably a
> few GHC optimizations that can "look through" non-recursive
> lets, but RULES are not among those.

The benefit that comes immediately to mind is extra freedom for the
code generator. If we have several copies of the body of e.g. a loop,
it may be able to schedule instructions much better. This is why GCC
unrolls loops, of course. That said, Core may not be the best place to
do this sort of unrolling, as Roman pointed out earlier in the thread.
But yeah, beyond this I don't /think/ that non-inlined duplications
would help GHC at all (it might be a different story if we did partial
inlining).
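
For concreteness, a minimal hand-written sketch (illustration only, not
GHC output) of the kind of duplication this buys: the unrolled variant
gives the code generator two loop bodies per call to schedule.

    -- a counting loop, and a 2x-unrolled version of it
    loop :: Int -> Int -> Int
    loop acc 0 = acc
    loop acc n = loop (acc + n) (n - 1)

    loop2 :: Int -> Int -> Int
    loop2 acc 0 = acc
    loop2 acc 1 = acc + 1
    loop2 acc n = loop2 (acc + n + (n - 1)) (n - 2)  -- two steps per call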

All the best,
Max


Re: Loop unrolling + fusion ?

2009-03-09 Thread Claus Reinke

let f = ..f.. in f{n,m} -PEEL-> let f = ..f.. in ..f{n-1,m}..



Probably what you intend here is that you create one copy of the
definition every round rather than one per call site, is that right?


I don't think so - ultimately, the point of both peeling and unrolling is to
unfold a definition into a use site, to enable further optimizations, not
just to move from a recursive to a non-recursive definition. We could try to
do it in two steps, as you suggest, but that would expose us to the
heuristics of GHC inlining again (will or won't it inline the
new shared definition?), in the middle of a user-annotation-based unfolding.
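
For concreteness, here is a minimal hand-written sketch (illustration
only) of one PEEL step at a use site, per the rule above with n=1: the
body is copied into the use site, while the recursive call inside the
copy still refers to the shared definition.

    sum' :: [Int] -> Int
    sum' []     = 0
    sum' (x:xs) = x + sum' xs

    -- a use site 'sum' ys', after peeling one level:
    sumUse :: [Int] -> Int
    sumUse ys = case ys of
                  []     -> 0
                  (x:xs) -> x + sum' xs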


Ah - I was thinking of something a bit different, where:

* PEEL / UNROLL pragmas duplicate the method body once per level of
peeling / unrolling and fix up the recursive calls as appropriate
* The user optionally adds an INLINE pragma to the function if he
additionally wants to be SURE that those duplicates get inlined at the
use sites


Ok, I suspected as much. You'd need to make the 'INLINE f' apply
to the generated 'fN', of course.


This means that PEEL / UNROLL represent nice logically-orthogonal bits
of functionality to INLINE-ing.


Usually, I'm all for orthogonality, and for more knobs to allow hand-tuning
of things that have no automatically reachable optimal solutions. In this case,
however, I'm not sure anything would be gained. I recall that your
hand-unrolled code was written in a similar style, and I assumed that this
was a question of style, and that GHC would inline both into the same code.


But if you annotate all your unrolled and peeled new definitions as
NOINLINE, do you still get the optimizations you want? There are
probably a few GHC optimizations that can "look through" non-recursive
lets, but RULES are not among those.

For loop-style recursion, there'd be only one use per definition, so inlining
would be the default and there'd be no difference, but for non-loop-style
recursion, inlining might not happen, and so no further optimizations would
be enabled. Off the top of my head, I can't think of a case where that
would lead to improved code, but as I'm discovering, I'm not very familiar
with the details of what optimizations GHC is actually doing (though this
is quite helpful:
http://hackage.haskell.org/trac/ghc/wiki/Commentary/Compiler/HscMain ),
so I might be missing something?
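
As a minimal illustration of the RULES point (using GHC's standard
"map/map" fusion rule; illustration only): hiding one pipeline stage
behind a NOINLINE binding is enough to block the rule, because RULES
match syntactically and do not look through the binding.

    {-# NOINLINE inner #-}
    inner :: [Int] -> [Int]
    inner = map (+1)

    outer :: [Int] -> [Int]
    outer xs = map (*2) (inner xs)
    -- the map/map rule cannot rewrite this to map ((*2) . (+1)) xs,
    -- because 'inner' hides the inner map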



Furthermore, I'm not too keen on duplicating method bodies at call
sites willy-nilly because it may lead to increased allocations (of the
function closures) in inner loops. At least if you bind the duplicated
methods at the same level as the thing you are duplicating you only
increase the dynamic number of closures created by a constant factor!


Yes, every form of INLINE has its limits. But if users say they want
inlining (or peeling or unrolling or any other form of unfolding), that's
what they should get, including those worrisome duplications. The
idea is to create lots of added code (in order to remove abstractions
that might hide optimization opportunities), which will then be simplified
to something smaller (or at least better performing) than what we
started out with. Providing the means to fine-tune the amount of
duplication might be useful, but preventing it entirely is not an option.


Claus



GHC and Haskell Standard wrt floatRange...

2009-03-09 Thread Tyson Whitehead
I think there might be a difference between C/GHC and the Haskell Standard's
idea of the range of the exponent.  Specifically, the comment in gcc's
float.h (i.e., where GHC appears to get its definition from) says

/* Minimum int x such that FLT_RADIX**(x-1) is a normalized float, emin */

while the Haskell Standard says

"...the lowest and highest values the exponent may assume respectively..."

This results in (GHC 6.10.1 on Debian)

Prelude> (2**128::Float)
Infinity
Prelude> (2**127::Float)
1.7014119e38
Prelude> floatRange (0::Float)
(-125,128)
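
The same point can be checked with a self-contained program (a minimal
sketch, using only the Prelude; the comments show the values implied by
the session above):

    main :: IO ()
    main = do
      let (lo, hi) = floatRange (0 :: Float)
      print (lo, hi)                               -- (-125,128)
      print (2 ** fromIntegral (hi - 1) :: Float)  -- 1.7014119e38, finite
      print (2 ** fromIntegral hi :: Float)        -- Infinity

That is, the reported upper bound itself overflows; only hi - 1 is a
usable power-of-two exponent.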

I can file a bug report/patch if nobody is seeing something I missed here...

Cheers!  -Tyson




Re: Loop unrolling + fusion ?

2009-03-09 Thread Max Bolingbroke
2009/3/9 Claus Reinke :
>>> let f = ..f.. in f{n,m} -PEEL-> let f = ..f.. in ..f{n-1,m}..
>
>> Probably what you intend here is that you create one copy of the
>> definition every round rather than one per call site, is that right?
>
> I don't think so - ultimately, the point of both peeling and unrolling is to
> unfold a definition into a use site, to enable further optimizations, not
> just to move from a recursive to a non-recursive definition. We could try to
> do it in two steps, as you suggest, but that would expose us to the
> heuristics of GHC inlining again (will or won't it inline the
> new shared definition?), in the middle of a user-annotation-based unfolding.

Ah - I was thinking of something a bit different, where:

* PEEL / UNROLL pragmas duplicate the method body once per level of
peeling / unrolling and fix up the recursive calls as appropriate
* The user optionally adds an INLINE pragma to the function if he
additionally wants to be SURE that those duplicates get inlined at the
use sites

This means that PEEL / UNROLL represent nice logically-orthogonal bits
of functionality to INLINE-ing.

Furthermore, I'm not too keen on duplicating method bodies at call
sites willy-nilly because it may lead to increased allocations (of the
function closures) in inner loops. At least if you bind the duplicated
methods at the same level as the thing you are duplicating you only
increase the dynamic number of closures created by a constant factor!
I've actually been thinking about using a different strategy for case
liberation (which duplicates method bodies at call sites) to make it
more constructor-specialisation like (which duplicates method bodies
at the definition site) partly for this reason.
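
For concreteness, a minimal hand-written sketch (illustration only, not
actual compiler output) of the definition-site flavour: the specialised
copy lives next to the definition instead of being copied into every
call site.

    go :: Maybe Int -> Int -> Int
    go Nothing  acc = acc
    go (Just n) acc
      | n > 1     = go (Just (n - 1)) (acc + n)
      | otherwise = go Nothing (acc + n)

    -- specialised copy, generated once at the definition site:
    goJust :: Int -> Int -> Int
    goJust n acc
      | n > 1     = goJust (n - 1) (acc + n)
      | otherwise = acc + n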

Cheers,
Max


Re: Type functions and ambiguity

2009-03-09 Thread Dan Doel
On Monday 09 March 2009 11:56:14 am Simon Peyton-Jones wrote:
> For what it's worth, here's why. Suppose we have
>
> type family N a :: *
>
> f :: forall a. N a -> Int
> f = ...
>
> g :: forall b. N b -> Int
> g x = 1 + f x
>
> The defn of 'g' fails with a very similar error to the one above.  Here's
> why.  We instantiate the occurrence of 'f' with an as-yet-unknown type
> 'alpha', so at the call site 'f' has type N alpha -> Int
> Now, we know from g's type sig that x :: N b.  Since f is applied to x, we
> must have N alpha ~ N b

I think this explains my confusion. I was thinking roughly in terms like, "I 
need 'N b -> Int'; I have 'forall a. N a -> Int', so instantiate 'a' to 'b'." 
Not in terms of collecting constraints and unifying after the fact. From the 
latter perspective the ambiguity makes sense.

> This kind of example encapsulates the biggest single difficulty with using
> type families in practice.  What is worse is that THERE IS NO WORKAROUND.

I suppose one can always add arguments like the "Proxy" to functions to 
disambiguate, but that certainly isn't ideal.
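
(For reference, a minimal sketch of that workaround, with a hand-rolled
Proxy type; the names are illustrative:

    {-# LANGUAGE TypeFamilies #-}

    data Proxy a = Proxy

    type family N a :: *

    f :: Proxy a -> N a -> Int
    f _ _ = 0

    g :: Proxy b -> N b -> Int
    g p x = 1 + f p x  -- the shared proxy forces a ~ b, so N a ~ N b

Passing the proxy pins down the otherwise-ambiguous type variable, at
the cost of an extra argument everywhere.)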

> Anyway I hope this helps clarify the issue.

Yes; thanks.

-- Dan


Re: Cygwin version

2009-03-09 Thread Tuomo Valkonen

On 2009-03-09, Tuomo Valkonen  wrote:
> On 2009-03-09, John Meacham  wrote:
> perhaps the most recent non-cabalized ghc build might be worth a try. I
> think darcs still compiles with ghc 6.6, but am not positive.
>
> Mingw-bootstrap, source, or both?

Tried with both. Got:

ghc.exe: unknown package: unix

-- 
Be an early adopter! Beat the herd! Choose Windows today!



Type functions and ambiguity

2009-03-09 Thread Simon Peyton-Jones
Dan's example fails thus:

|   Map.hs:25:19:
|       Couldn't match expected type `Nest n1 f b'
|              against inferred type `Nest n1 f1 b'
|       In the expression: fmap (deepFMap n f)
|       In the definition of `deepFMap':
|           deepFMap (S n) f = fmap (deepFMap n f)
|
| for reasons I don't really understand. So I tried the following:

For what it's worth, here's why. Suppose we have

type family N a :: *

f :: forall a. N a -> Int
f = ...

g :: forall b. N b -> Int
g x = 1 + f x

The defn of 'g' fails with a very similar error to the one above.  Here's why.  
We instantiate the occurrence of 'f' with an as-yet-unknown type 'alpha', so at 
the call site 'f' has type
N alpha -> Int
Now, we know from g's type sig that x :: N b.  Since f is applied to x, we must
have
N alpha ~ N b

Does that imply that (alpha ~ b)?   Alas no!  If t1=t2 then (N t1 = N t2), of 
course, but *not* vice versa.  For example, suppose that

type instance N [c] = N c

Now we could solve the above with (alpha ~ [b]), or (alpha ~ [[b]]).

You may say
a) There is no such instance.  True, but you can see it pushes the search for a
(unique) solution into new territory.

b) It doesn't matter which solution the compiler chooses: any will do.  True in 
this case, but false if f :: forall a. (Num a) => N a -> Int.  Now it matters 
which instance is chosen.


This kind of example encapsulates the biggest single difficulty with using type 
families in practice.  What is worse is that THERE IS NO WORKAROUND.  You 
*ought* to be able to add an annotation to guide the type checker.  Currently 
you can't.  The most obvious solution would be to allow the programmer to 
specify the types at which a polymorphic function is called, like this:

g :: forall b. N b -> Int
g x = 1 + f {b} x

The {b} says that f takes a type argument 'b', which should be used to 
instantiate its polymorphic type variable 'a'.   Being able to do this would be 
useful in other circumstances (eg impredicative polymorphism). The issue is 
really the syntax, and the order of type variables in an implicit forall.

Anyway I hope this helps clarify the issue.

Simon



| -----Original Message-----
| From: glasgow-haskell-users-boun...@haskell.org [mailto:glasgow-haskell-users-
| boun...@haskell.org] On Behalf Of Dan Doel
| Sent: 06 March 2009 03:08
| To: glasgow-haskell-users@haskell.org
| Subject: Deep fmap with GADTs and type families.
|
| Greetings,
|
| Someone on comp.lang.functional was asking how to map through arbitrary
| nestings of lists, so I thought I'd demonstrate how his non-working ML
| function could actually be typed in GHC, like so:
|
| --- snip ---
|
| {-# LANGUAGE TypeFamilies, GADTs, EmptyDataDecls,
| Rank2Types, ScopedTypeVariables #-}
|
| data Z
| data S n
|
| data Nat n where
|   Z :: Nat Z
|   S :: Nat n -> Nat (S n)
|
| type family Nest n (f :: * -> *) a :: *
|
| type instance Nest Z f a = f a
| type instance Nest (S n) f a = f (Nest n f a)
|
| deepMap :: Nat n -> (a -> b) -> Nest n [] a -> Nest n [] b
| deepMap Z f = map f
| deepMap (S n) f = map (deepMap n f)
|
| --- snip ---
|
| This works. However, the straightforward generalisation doesn't:
|
| --- snip ---
|
| deepFMap :: Functor f => Nat n -> (a -> b) -> Nest n f a -> Nest n f b
| deepFMap Z f = fmap f
| deepFMap (S n) f = fmap (deepFMap n f)
|
| --- snip ---
|
| This fails with a couple errors like:
|
|   Map.hs:25:19:
|       Couldn't match expected type `Nest n1 f b'
|              against inferred type `Nest n1 f1 b'
|       In the expression: fmap (deepFMap n f)
|       In the definition of `deepFMap':
|           deepFMap (S n) f = fmap (deepFMap n f)
|
| for reasons I don't really understand. So I tried the following:
|
...rest snipped...



Re: Loop unrolling + fusion ?

2009-03-09 Thread Simon Marlow

Claus Reinke wrote:

That was one of my questions in the optimization and rewrite rules
thread: shouldn't -fvia-C be supported (as a non-default option)
for at least as long as the alternative isn't a clear win in all cases?


The trouble with supporting multiple backends is that the cost in 
terms of testing and maintenance is high.  And the registerised 
-fvia-C backend is particularly nasty, coming as it does with 
thousands of lines of Perl 4 that regularly get broken by new versions 
of gcc.


Yes, I can understand that you'd like to leave that part behind sometime
before yesterday :-) I assume that this very complexity means that the
-fvia-C route doesn't really get all the way to its most interesting
promises (easy portability, and full backend optimizations inherited
from gcc). And with that in mind, I can also understand that you don't
want to put any further work into trying to improve it, if that
distracts from a better long-term solution.

What I don't understand yet is the roadmap for replacing -fvia-C. We've
seen -fvia-C being demoted from default to backup (fine by me), we've
seen a feature supported only by -fvia-C removed completely, instead of
seeing support for it added to the -fasm route (macro-based APIs
used to work with the ffi, but would now require a wrapper generator, which
doesn't exist yet).
Indications are that -fvia-C still tends to produce better code (even 
though it is not the best that ghc+gcc could produce) than -fasm (is 
that any better for the new backend?). And last, but not least, ghc has 
more limited resources than gcc, so how is ghc going to beat gcc at the 
portability and backend optimizations game while still making progress
in its core competencies (ie, higher-level improvements; there's also 
the interesting side-issue of how the two stages of optimizations are 
going to interact in ghc, if there is a barrier that can only be crossed 
in one direction)?


Ok, thanks for bringing these points up.  Hopefully I'll be able to lay 
your fears to rest:


1. Performance.

-fvia-c currently produces code that is on average about 1% faster than -fasm:

  http://www.cse.unsw.edu.au/~dons/nobench/x86_64/results.html

There's one notable exception: floating-point code on x86 (not x86_64) is 
terrible with -fasm, because our native code generator has a particularly 
simple/stupid implementation of the x87 instruction set.  So we need to 
make the SSE2 code generator in the x86_64 backend work for x86, too.


Having said that, the native backend has much more potential for generating 
faster code than we can with gcc.  Firstly, it can re-use fixed registers 
(e.g. argument registers) within a basic block, whereas gcc can't.  We
don't do this currently because our C-- lacks liveness information on
jumps, but the new backend will be able to do it.  I bet this alone will be
worth more than that 1%.  Secondly we have a much better handle on aliasing 
inside GHC than gcc does, and there's no good way to tell gcc what we know 
about aliasing.


On x86, gcc has a grand total of 2 spare registers, which means it has 
virtually no scope for generating good code.  There's also not much room 
for generating C that is more amenable to gcc's optimisations.  The obvious 
thing to do is to make recursive functions look like loops.  We've tried it 
(there's some experimental code in GHC to do it), IIRC it didn't buy very 
much.  The lack of registers, and the lack of knowledge about aliasing 
(heap doesn't alias with stack) meant that gcc didn't do some 
obvious-looking optimisations.  Trying to do better here is a dead end.


2. Portability.

We haven't had a single new registerised port of GHC in many years now. 
While the via-C backend seems at first glance to offer some portability 
benefits, in practice porting the mangler is still a pain unless your 
platform is very similar to an existing one (e.g. vanilla ELF).


The only C-only registerised port we had was Sparc, and thanks to Ben 
Lippmeier we now have a native backend for that too.  Dropping the C 
backend won't harm any of our existing ports, and it doesn't seem like 
people are making new ports of GHC this way either.


We'll still have the unregisterised porting route, whose only drawback is 
performance.  Still, lots of platforms are successfully using 
unregisterised GHC ports (via Debian).


One day maybe we'll have an LLVM backend, or similar.  My impression is 
that right now we can't make an LLVM backend with as good performance as 
our native backend, without changes in LLVM.  Maybe that will change. 
Nothing that we're doing now precludes adding an LLVM backend later, I believe.


3. Features.

This is a non-issue: -fvia-C vs. -fasm should not affect what programs 
work.  Up until 6.10.1 we had a bug whereby you could use -fvia-C to bind 
to CPP-based C APIs, but that bug was removed in 6.10.1.  Ok, I realise 
that some people considered this to be a feature and its removal to be a 
regression.  However, I bel

Re: Loop unrolling + fusion ?

2009-03-09 Thread Claus Reinke

{-# INLINE f PEEL n #-}
  inline calls *into* recursive f (called loop peeling for loops)
{-# INLINE f UNROLL m #-}
  inline recursive calls to f *inside* f (called loop unrolling for loops)
{-# INLINE f PEEL n UNROLL m #-}
  combine the previous two


The problem here is that this only works for directly recursive
functions which I, for instance, don't normally use in high-performance
code. Most of my loops are pipelines of collective combinators like
map, filter, fold etc. because these are the ones that can be fused
automatically. Unless I'm misunderstanding something, this approach
doesn't handle such cases.


Actually, my first sketch had a problem in that it would work
only too well for mutually recursive functions, making it necessary
to use loop breakers in spite of the explicit limits (even if we limit
unroll to direct recursion, as I intended originally, peeling would
then apply to the calls into other functions in the recursion).

One way out would be to treat the whole mutual recursion as a
single entity, either implicitly, as I indicated, or explicitly, as I
interpret Brandon's somewhat ambiguous comment. In other
words, the peel/unroll limits would apply to a whole group of
mutually recursive definitions, ensuring termination of the inline
process without additional loop breakers. If we do that, then
it might make sense to talk about peeling/unrolling wrt the whole
recursion group.

In any case, I need to refine my spec!-) But this discussion is
very helpful in finding the issues that need to be addressed and
clarified. Another issue that I ran into in manual unrolling is that
I sometimes want to unroll wrt a specific parameter of a multi-
parameter function, usually because that parameter can only
have a very small number of possible values, or just because the
original function encodes multiple loops that I want to disentangle.
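
For example, a hand-written sketch (illustration only) of that last
case: a function whose Bool parameter selects between two interleaved
loops, and the two specialised loops that unrolling wrt that parameter
would produce.

    step :: Bool -> Int -> Int -> Int
    step True  acc n = if n == 0 then acc else step False (acc + n) (n - 1)
    step False acc n = if n == 0 then acc else step True  (acc * 2) (n - 1)

    -- unrolled wrt the Bool parameter:
    stepT, stepF :: Int -> Int -> Int
    stepT acc n = if n == 0 then acc else stepF (acc + n) (n - 1)
    stepF acc n = if n == 0 then acc else stepT (acc * 2) (n - 1)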

Claus



Re: Loop unrolling + fusion ?

2009-03-09 Thread Claus Reinke

let f = ..f.. in f{n,m} -PEEL-> let f = ..f.. in ..f{n-1,m}..



Probably what you intend here is that you create one copy of the
definition every round rather than one per call site, is that right?


I don't think so - ultimately, the point of both peeling and unrolling 
is to unfold a definition into a use site, to enable further optimizations, 
not just to move from a recursive to a non-recursive definition. We 
could try to do it in two steps, as you suggest, but that would expose 
us to the heuristics of GHC inlining again (will or won't it inline the
new shared definition?), in the middle of a user-annotation-based 
unfolding.


As for the remainder of your useful reply, I'll have to think more
about how to make a local-rule-based approach work properly
(without the hiccups of my first sketch) before I can think about
the interactions. I still think it would be useful to have such a
rule-based description, even if a monolithic core2core pass may
be easier to implement: having two independent specs makes it
easier to spot inconsistencies, and if the rule-based form doesn't
get too complicated, it should be more suited for documentation.

Claus



Re: Cygwin version

2009-03-09 Thread Tuomo Valkonen
On 2009-03-09, John Meacham  wrote:
> perhaps the most recent non-cabalized ghc build might be worth a try. I
> think darcs still compiles with ghc 6.6, but am not positive.

Mingw-bootstrap, source, or both?

> remember ghc working on cygwin at some point. I have been in a similar
> position in the past actually in the odd place of developing linux/unix
> software while sitting at a windows box, and cygwin builds of tools were
> more useful in general.

I'm just planning on keeping the good bits of the *nix toolchain
while switching to another OS from Linux (or *BSD), which keep
burying those pieces under layers upon layers of stinking and
unreliable gnomeshit, and which suffer from de facto central control
over software distribution. The only choice is Windows, because Apple
is just too steve-jobs-knows-what-you-want, although it would otherwise
offer a better *nix environment.

-- 
"[Fashion] is usually a form of ugliness so intolerable that we have
 to alter it every six months." -- Oscar Wilde
"The computer industry is the only industry that is more fashion-driven
 than women's fashion." -- RMS



Re: Cygwin version

2009-03-09 Thread Duncan Coutts
On Sun, 2009-03-08 at 12:29 +, Tuomo Valkonen wrote:
> I want a _real_ cygwin version of darcs. The non-deterministic
> pseudo-cygwin *nix/Windows hybrid currently available has just
> too many problems integrating into cygwin, which I want to use as
> my TeXing and minor coding environment. A real cygwin version
> of darcs would seem to depend on a real cygwin version of GHC.
> Is there any easy way to compile one? Otherwise I may have to
> abandon darcs (and Haskell software in general) for Mercurial.
>
> (Thanks to the overbearing cabal and resulting hsc2hs etc.
> build problems with conventional Makefiles, I have pretty
> much abandoned my own Haskell projects already.)

Yes we did introduce a problem with hsc2hs in the most recent ghc
release and I'm sorry about that. Just in case you're interested
however, the fix for your makefiles is to add two flags:

hsc2hs --cc=ghc --ld=ghc

That should work with any version of hsc2hs and it gives the behaviour
of the older hsc2hs versions that came with older ghc releases.

Duncan



Re: Cygwin version

2009-03-09 Thread John Meacham
On Sun, Mar 08, 2009 at 11:46:07PM +, Tuomo Valkonen wrote:
> No:
> 
> Configuring extensible-exceptions-0.1.0.0...
> cabal-bin.exe: Cannot find the program 'ghc' at
> '/c/bin/ghc-6.10.1.20090308/bin/ghc' or on the path
> 
> 'ldd libraries/cabal-bin.exe' finds no cygwin dependencies;
> everything points to /c/WINDOWS/system32. I presume that
> the mingw-ghc used the included mingw gcc, not cygwin's gcc.
> Indeed, I tried deleting that file, and got: 
> 
> ghc.exe: could not execute: C:\bin\ghc-6.10.1.20090308\gcc
> 
> It doesn't seem like it will build cygwin programs.

perhaps the most recent non-cabalized ghc build might be worth a try. I
think darcs still compiles with ghc 6.6, but am not positive. I seem to
remember ghc working on cygwin at some point. I have been in a similar
position in the past, actually, in the odd place of developing linux/unix
software while sitting at a windows box, and cygwin builds of tools were
more useful in general.

John

-- 
John Meacham - ⑆repetae.net⑆john⑈


Re: Cygwin version

2009-03-09 Thread Simon Marlow

Tuomo Valkonen wrote:

On 2009-03-08, John Meacham  wrote:

if you follow those steps, but then don't override the host in the
./configure step to just let it pick up the cygwin environment will it
work properly?

John


No:

Configuring extensible-exceptions-0.1.0.0...
cabal-bin.exe: Cannot find the program 'ghc' at
'/c/bin/ghc-6.10.1.20090308/bin/ghc' or on the path

'ldd libraries/cabal-bin.exe' finds no cygwin dependencies;
everything points to /c/WINDOWS/system32. I presume that
the mingw-ghc used the included mingw gcc, not cygwin's gcc.
Indeed, I tried deleting that file, and got: 


ghc.exe: could not execute: C:\bin\ghc-6.10.1.20090308\gcc

It doesn't seem like it will build cygwin programs.


There's no fundamental reason why there can't be a "real" Cygwin GHC (that 
is, a GHC producing binaries that are linked against the Cygwin DLL). 
Indeed it used to work, a long time ago.  But due to lack of demand it 
bitrotted.


I imagine it would be a fair amount of work to get it going again, but of 
greater concern to me is how we would *keep* it working: there needs to be 
interested people actively maintaining buildbots and catching bitrot as it 
happens.  Without a sustainable process, there isn't a great deal of point 
in updating the port.


Cheers,
Simon


Re: a possibility to redefine built-in GHCi commands

2009-03-09 Thread Simon Marlow

Peter Hercek wrote:

Hi GHCi users,

I would like to be able to redefine the built-in GHCi commands. The idea
is that when searching for a command, the user-defined commands would be
searched first, and only then the built-in commands. If a user wants to
invoke a built-in command regardless of user-defined commands, he/she
would need to start it with two colons (instead of one).


It is a user interface change which may break some scripts, but it
would allow providing different default behavior.

For example:
* when I use GhciExt I want all my ":continue" commands to actually be
":x :continue"
* it would allow specifying a different search order for abbreviated
commands than the default one
* it would allow specifying different default switches for built-in commands

Would such a change be merged upstream if I were to provide a patch?
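
For concreteness, a sketch of the intended behaviour in a .ghci file
(':def' already exists today; the user-first lookup and the
double-colon escape are the proposed parts):

    :def continue \_ -> return ":x :continue"

    -- with the proposal:
    --   :continue   would run the user macro above
    --   ::continue  would run the built-in command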


Seems reasonable to me.

Cheers,
Simon


Reliability of trace?

2009-03-09 Thread Colin Paul Adams
I'm trying to use Debug.Trace to debug some tree-walking that I've
written.

It seems to me that I am missing some traces on intermediate function
calls. I guess that ghc is re-arranging the code in such a way that
some of these intermediate calls disappear. Any way of stopping this? I
already specify -O0.
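
(A minimal sketch of one common cause, unrelated to optimization level:
a trace only fires when the value it wraps is actually forced, so
laziness alone can make traces seem to go missing.

    import Debug.Trace (trace)

    visit :: Int -> Int
    visit x = trace ("visiting " ++ show x) x

    main :: IO ()
    main = do
      let ys = map visit [1, 2, 3]
      print (head ys)  -- only "visiting 1" fires; the rest aren't forced
      print (sum ys)   -- now the remaining traces appear
)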
-- 
Colin Adams
Preston Lancashire


Re: Loop unrolling + fusion ?

2009-03-09 Thread Max Bolingbroke
2009/3/9 Roman Leshchinskiy :
> The problem here is that this only works for directly recursive functions
> which I, for instance, don't normally use in high-performance code. Most of
> my loops are pipelines of collective combinators like map, filter, fold etc.
> because these are the ones that can be fused automatically. Unless I'm
> misunderstanding something, this approach doesn't handle such cases.

Yep, I think this is an orthogonal piece of functionality. I believe
Claus is concerned with getting the compiler to perform some of the
transformations people currently might want to do for their directly
recursive functions. Of course, you could still UNROLL your unstream
definition, but that doesn't give the user any control over the amount
of unrolling that takes place, which as you have pointed out earlier
may not be a great idea!

Cheers,
Max
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: finally part run twice on Ctrl-C

2009-03-09 Thread Neil Mitchell
Hi

I have filed bug http://hackage.haskell.org/trac/ghc/ticket/3081 to
track this issue, marking it as affecting Windows only.

Thanks

Neil

On Fri, Feb 27, 2009 at 8:45 AM, Philip K.F. Hölzenspies
 wrote:
> On Friday 27 February 2009 09:39:14 Neil Mitchell wrote:
>> It looks like you are running in GHCi, which I think works. It's only
>> when the program is compiled and run from the command line (Cygwin or
>> DOS) that I get the above problem.
>
> Dear Neil,
>
> You were right. When I do compile it, though, I get the same (correct)
> behaviour (now I pressed Ctrl-C the first run and let it be for the second):
>
> holze...@ewi1043:~/tmp> ghc Test.hs
> holze...@ewi1043:~/tmp> ./a.out
> goodbye
> holze...@ewi1043:~/tmp> ./a.out
> goodbye
> holze...@ewi1043:~/tmp>
>
> This is on Linux, though, so it may also be OS-dependent.
>
> Regards,
> Philip
>
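
For reference, a minimal sketch of the shape of such a test (a
reconstruction; the original test program isn't included in this
digest): a finaliser that should run exactly once on Ctrl-C.

    import Control.Exception (finally)
    import Control.Concurrent (threadDelay)

    main :: IO ()
    main = threadDelay 10000000 `finally` putStrLn "goodbye"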