Re: Stack traces in ghci

2016-12-08 Thread Simon Marlow
I created a ticket: https://ghc.haskell.org/trac/ghc/ticket/12946

On 7 December 2016 at 16:37, <domi...@steinitz.org> wrote:

> Hi Simon,
>
> Thanks for getting back.
>
> 1. Without -prof and -fexternal-interpreter, the program runs fine.
>
> 2. With just -prof, the program runs fine.
>
> 3. With just -fexternal-interpreter, I get the error below.
>
> Dominic.
>
> On 7 Dec 2016, at 13:52, Simon Marlow <marlo...@gmail.com> wrote:
>
> Hi Dominic - this looks like a problem with loading hmatrix into GHCi.
> Does it load without -prof and -fexternal-interpreter?  How about with just
> -fexternal-interpreter?
>
> Cheers
> Simon
>
> On 5 December 2016 at 12:20, Dominic Steinitz <domi...@steinitz.org>
> wrote:
>
>> I am trying to debug a package in which there is a divide by 0 error and
>> attempting to use Simon Marlow’s stack traces:
>> https://simonmar.github.io/posts/2016-02-12-Stack-traces-in-GHCi.html.
>> However ghci is complaining about  missing symbols. What do I need to add
>> to the command line to coax ghci into giving me a stack trace?
>>
>> > ~/Dropbox/Private/Stochastic/demo $ ghci -fexternal-interpreter -prof
>> fe-handling-example.o -i../../monad-bayes/src
>> -package-db=.cabal-sandbox/x86_64-osx-ghc-8.0.1-packages.conf.d
>> > GHCi, version 8.0.1: http://www.haskell.org/ghc/  :? for help
>> > Prelude> :l app/Main.hs
>> > [...]

Re: Stack traces in ghci

2016-12-07 Thread Simon Marlow
Hi Dominic - this looks like a problem with loading hmatrix into GHCi.
Does it load without -prof and -fexternal-interpreter?  How about with just
-fexternal-interpreter?

Cheers
Simon

On 5 December 2016 at 12:20, Dominic Steinitz  wrote:

> I am trying to debug a package in which there is a divide by 0 error and
> attempting to use Simon Marlow’s stack traces: https://simonmar.github.io/
> posts/2016-02-12-Stack-traces-in-GHCi.html. However ghci is complaining
> about  missing symbols. What do I need to add to the command line to coax
> ghci into giving me a stack trace?
>
> > ~/Dropbox/Private/Stochastic/demo $ ghci -fexternal-interpreter -prof
> fe-handling-example.o -i../../monad-bayes/src  -package-db=.cabal-sandbox/
> x86_64-osx-ghc-8.0.1-packages.conf.d
> > GHCi, version 8.0.1: http://www.haskell.org/ghc/  :? for help
> > Prelude> :l app/Main.hs
> > [ 1 of 16] Compiling Control.Monad.Bayes.LogDomain (
> ../../monad-bayes/src/Control/Monad/Bayes/LogDomain.hs, interpreted )
> > [ 2 of 16] Compiling Control.Monad.Bayes.Primitive (
> ../../monad-bayes/src/Control/Monad/Bayes/Primitive.hs, interpreted )
> > [ 3 of 16] Compiling Control.Monad.Bayes.Class (
> ../../monad-bayes/src/Control/Monad/Bayes/Class.hs, interpreted )
> > [ 4 of 16] Compiling Control.Monad.Bayes.Sampler (
> ../../monad-bayes/src/Control/Monad/Bayes/Sampler.hs, interpreted )
> > [ 5 of 16] Compiling Control.Monad.Bayes.Sequential (
> ../../monad-bayes/src/Control/Monad/Bayes/Sequential.hs, interpreted )
> > [ 6 of 16] Compiling Control.Monad.Bayes.Prior (
> ../../monad-bayes/src/Control/Monad/Bayes/Prior.hs, interpreted )
> > [ 7 of 16] Compiling Control.Monad.Bayes.Rejection (
> ../../monad-bayes/src/Control/Monad/Bayes/Rejection.hs, interpreted )
> > [ 8 of 16] Compiling Control.Monad.Bayes.Weighted (
> ../../monad-bayes/src/Control/Monad/Bayes/Weighted.hs, interpreted )
> > [ 9 of 16] Compiling Control.Monad.Bayes.Population (
> ../../monad-bayes/src/Control/Monad/Bayes/Population.hs, interpreted )
> > [10 of 16] Compiling Control.Monad.Bayes.Deterministic (
> ../../monad-bayes/src/Control/Monad/Bayes/Deterministic.hs, interpreted )
> > [11 of 16] Compiling Control.Monad.Bayes.Conditional (
> ../../monad-bayes/src/Control/Monad/Bayes/Conditional.hs, interpreted )
> > [12 of 16] Compiling Control.Monad.Bayes.Dist (
> ../../monad-bayes/src/Control/Monad/Bayes/Dist.hs, interpreted )
> > [13 of 16] Compiling Control.Monad.Bayes.Coprimitive (
> ../../monad-bayes/src/Control/Monad/Bayes/Coprimitive.hs, interpreted )
> > [14 of 16] Compiling Control.Monad.Bayes.Trace (
> ../../monad-bayes/src/Control/Monad/Bayes/Trace.hs, interpreted )
> > [15 of 16] Compiling Control.Monad.Bayes.Inference (
> ../../monad-bayes/src/Control/Monad/Bayes/Inference.hs, interpreted )
> > [16 of 16] Compiling Main ( app/Main.hs, interpreted )
> >
> > app/Main.hs:92:7: warning: [-Wunused-matches]
> > Defined but not used: ‘a’
> >
> > app/Main.hs:92:9: warning: [-Wunused-matches]
> > Defined but not used: ‘prevP’
> >
> > app/Main.hs:92:15: warning: [-Wunused-matches]
> > Defined but not used: ‘prevZ’
> >
> > app/Main.hs:106:5: warning: [-Wunused-do-bind]
> > A do-notation statement discarded a result of type ‘GHC.Prim.Any’
> > Suppress this warning by saying
> >   ‘_ <- ($)
> >   error (++) "You are here " (++) show state (++) " " show
> p_obs’
> > Ok, modules loaded: Main, Control.Monad.Bayes.LogDomain,
> Control.Monad.Bayes.Primitive, Control.Monad.Bayes.Class,
> Control.Monad.Bayes.Population, Control.Monad.Bayes.Conditional,
> Control.Monad.Bayes.Inference, Control.Monad.Bayes.Sampler,
> Control.Monad.Bayes.Rejection, Control.Monad.Bayes.Weighted,
> Control.Monad.Bayes.Sequential, Control.Monad.Bayes.Trace,
> Control.Monad.Bayes.Dist, Control.Monad.Bayes.Prior, 
> Control.Monad.Bayes.Deterministic,
> Control.Monad.Bayes.Coprimitive.
> > *Main> main
> > ghc-iserv-prof:
> > lookupSymbol failed in relocateSection (relocate external)
> > /Users/dom/Dropbox/Private/Stochastic/demo/.cabal-
> sandbox/lib/x86_64-osx-ghc-8.0.1/hmatrix-0.18.0.0-7aYEqJARQEvKYNyM4UGAPZ/
> libHShmatrix-0.18.0.0-7aYEqJARQEvKYNyM4UGAPZ_p.a: unknown symbol
> `___ieee_divdc3'
> > ghc-iserv-prof: Could not on-demand load symbol '_vectorScan'
> >
> > ghc-iserv-prof:
> > lookupSymbol failed in relocateSection (relocate external)
> > /Users/dom/Dropbox/Private/Stochastic/demo/.cabal-
> sandbox/lib/x86_64-osx-ghc-8.0.1/hmatrix-0.18.0.0-7aYEqJARQEvKYNyM4UGAPZ/
> libHShmatrix-0.18.0.0-7aYEqJARQEvKYNyM4UGAPZ_p.a: unknown symbol
> `_vectorScan'
> > ghc-iserv-prof: Could not on-demand load symbol '_
> hmatrixzm0zi18zi0zi0zm7aYEqJARQEvKYNyM4UGAPZZ_InternalziVectorizzed_
> constantAux_closure'
> >
> > ghc-iserv-prof:
> > lookupSymbol failed in relocateSection (relocate external)
> > /Users/dom/Dropbox/Private/Stochastic/demo/.cabal-
> sandbox/lib/x86_64-osx-ghc-8.0.1/hmatrix-0.18.0.0-7aYEqJARQEvKYNyM4UGAPZ/
> 

Warnings, -Wall, and versioning policy

2016-01-12 Thread Simon Marlow

Hi folks,

We haven't described what guarantees GHC provides with respect to -Wall 
behaviour across versions, and as a result there are some differing 
expectations around this.  It came up in this week's GHC meeting, so we 
thought it would be a good idea to state the policy explicitly.  Here it is:


  We guarantee that code that compiles with no warnings with -Wall
  ("Wall-clean") and a particular GHC version, on a particular
  platform, will be Wall-clean with future minor releases of the same
  major GHC version on the same platform.

(we plan to put this text in the User's Guide for future releases)

There are no other guarantees.  In particular:
- In a new major release, GHC may introduce new warnings into -Wall, 
and/or change the meaning of existing warnings such that they trigger 
(or not) under different conditions.
- GHC may generate different warnings on different platforms. (examples 
of this are -fwarn-overflowed-literals and 
-fwarn-unsupported-calling-conventions)


Some rationale:
- We consider any change to the language that GHC accepts to be a 
potentially code-breaking change, and subject to careful scrutiny. To 
extend this to warnings would be a *lot* of work, and would make it 
really difficult to introduce new warnings and improve the existing ones.
- Warnings can be based on analyses that can change in accuracy over 
time. The -fwarn-unused-imports warning has changed a lot in what it 
rejects, for example.
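A concrete instance of that kind of change (this example relies on the base-4.8 Prelude changes that shipped with GHC 7.10; the module itself is made up for illustration): the import below is needed, and the module Wall-clean, under GHC 7.8, but GHC 7.10's Prelude exports Monoid, mappend, and mempty itself, so the same -Wall run reports the import as redundant.

```haskell
module Example (combine) where

-- Required on GHC 7.8; reported as redundant by -Wall on GHC 7.10,
-- where the Prelude exports all three of these names.
import Data.Monoid (Monoid, mappend, mempty)

combine :: Monoid a => [a] -> a
combine = foldr mappend mempty
```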
- We often introduce new warnings that naturally belong in -Wall. If 
-Wall was required to be a fixed point, we would have to start 
introducing new flags, and versioning, etc. and even keep the old 
implementation of warnings when they change. It would get really messy.


There are some consequences to this.  -Wall -Werror is useful for 
keeping your code warning-clean when developing, but shipping code with 
these options turned on in the build system is asking for trouble when 
building your code with different GHC versions and platforms.  Keep 
those options for development only.  Hackage already rejects packages 
that include -Werror for this reason.


One reason we're raising this now is that it causes problems for the 
3-release policy 
(https://prime.haskell.org/wiki/Libraries/3-Release-Policy) which 
requires that it be possible to write code that is Wall-clean with 3 
major releases of GHC.  GHC itself doesn't guarantee this, so it might 
be hard for the core libraries committee to provide this guarantee.  I 
suggest this requirement be dropped from the policy.


Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users


Re: -prof, -threaded, and -N

2015-06-19 Thread Simon Marlow
That's a leftover from when profiling didn't support -N, I'll fix it. 
Thanks!


Simon

On 03/06/2015 07:03, Lars Kuhtz wrote:

 From https://github.com/ghc/ghc/blob/master/rts/RtsFlags.c#L1238 it seems that 
the behavior described in my email below is intended:

```

 if (rts_argv[arg][2] == '\0') {
#if defined(PROFILING)
 RtsFlags.ParFlags.nNodes = 1;
#else
 RtsFlags.ParFlags.nNodes = getNumberOfProcessors();
#endif
```

So, my question is: what is the reason for this difference between the 
profiling and the non-profiling case?

Lars


On Jun 2, 2015, at 10:20 PM, Lars Kuhtz hask...@kuhtz.eu wrote:

Hi,

The behavior of the -N flag (without argument) with the profiling runtime seems 
inconsistent compared to the behavior without profiling. The following program

```
module Main where

import GHC.Conc

main :: IO ()
main = print numCapabilities
```

when compiled with `ghc -threaded -fforce-recomp Prof.hs` and run as `./Prof 
+RTS -N` prints `2` on my machine. When the same program is compiled with `ghc 
-threaded -fforce-recomp -prof Prof.hs` and executed as `./Prof +RTS -N` it 
prints `1`.

When an argument is provided to `-N` (e.g. `./Prof +RTS -N2`) the profiling and 
non-profiling versions behave the same.

I tested this with GHC-7.10.1 but I think that I already observed the same 
behavior with GHC-7.8.

Is this inconsistency intended?

Lars


Re: Native -XCPP Conclusion

2015-06-19 Thread Simon Marlow
I have no problem with plan 4.  However, I'll point out that 
implementing CPP is not *that* hard... :)


Cheers,
Simon

On 18/06/2015 09:32, Herbert Valerio Riedel wrote:

Hello *,

Following up on the Native -XCPP Proposal discussion, it appears that
cpphs' current LGPL+SLE licensing doesn't pose an *objective*
showstopper problem but is rather more of an inconvenience as it causes
argumentation/discussion overhead (which then /may/ actually result in
Haskell being turned down eventually over alternatives that do without
LGPL components).

In order to acknowledge this discomfort, for GHC 7.12 we propose to follow
plan 4 according to [1] (i.e. calling out to a cpphs-executable as a
separate process), thereby avoiding pulling any LGPL-subjected cpphs
code into produced executables when linking against the 'ghc' package.

Plan 2 (i.e. embedding/linking cpphs' code directly into ghc) would
reduce fork/exec overhead, which can be substantial on Windows [2],
but plan 4 is no worse than what we have now.

Last Call: Are there any objections with GHC adopting plan 4[1]?

  [1]: https://ghc.haskell.org/trac/ghc/wiki/Proposal/NativeCpp

  [2]: http://permalink.gmane.org/gmane.comp.lang.haskell.ghc.devel/8869

Thanks,
   HVR

On 2015-05-06 at 13:08:03 +0200, Herbert Valerio Riedel wrote:

Hello *,

As you may be aware, GHC's `{-# LANGUAGE CPP #-}` language extension
currently relies on the system's C-compiler bundled `cpp` program to
provide a traditional-mode C preprocessor.

This has caused quite a few problems[1] in the past, since a
preprocessor mode designed around C's tokenizer is a poor fit for
parsing Haskell code. I'd like to see GHC 7.12 adopt an
implementation of `-XCPP` that does not rely on the shaky
system-`cpp` foundation. To this end I've created a wiki page

   https://ghc.haskell.org/trac/ghc/wiki/Proposal/NativeCpp

to describe the actual problems in more detail, and a couple of possible
ways forward. Ideally, we'd simply integrate `cpphs` into GHC
(i.e. plan 2). However, due to `cpphs`'s non-BSD3 license this should be
discussed and debated, since it affects the overall license of the GHC
code-base, which may or may not be a problem for GHC's user-base (and
that's what I hope this discussion will help to find out).

So please go ahead and read the Wiki page... and then speak your mind!


Thanks,
   HVR


[1]: ...does anybody remember the issues Haskell packages (and GHC)
  encountered when Apple switched to the Clang tool-chain, thereby
  causing code using `-XCPP` to suddenly break due to subtly
  different `cpp` semantics?
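A small sketch of the kind of code that trips a stricter preprocessor (the USE_FAST flag and the module itself are hypothetical): the prime in the identifier can be misread as the start of an unterminated character literal when `cpp` is not run in traditional mode, which is one of the breakages described above.

```haskell
{-# LANGUAGE CPP #-}
module Tick (len') where

#ifdef USE_FAST
len' :: [a] -> Int
len' = length
#else
-- Note the prime in len': a C-oriented cpp in standard
-- (non-traditional) mode may treat the ' as a character constant
-- and mangle the surrounding lines.
len' :: [a] -> Int
len' = foldr (\_ n -> n + 1) 0
#endif
```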





Re: Thread behavior in 7.8.3

2015-01-21 Thread Simon Marlow

On 21/01/2015 03:43, Michael Jones wrote:

Simon,

The code below hangs on the frameEx function.

But, if I change it to:

f <- frameCreate objectNull idAny "linti-scope PMBus Scope Tool"
rectZero (frameDefaultStyle .|. wxMAXIMIZE)

it will progress, but no frame pops up, except once in many tries. Still hangs, 
but progresses through all the setup code.

However, I did make past statements that a non-GUI version was hanging. So I am 
not blaming wxHaskell. Just noting that in this case it is where things go 
wrong.

Anyone,

Are there any wxHaskell experts around that might have some insight?

(Remember, works on single core 32 bit, works on quad core 64 bit,
fails on 2 core 64 bit. Using GHC 7.8.3. Any recent updates to the
code base to fix problems like this?)


No, there are no recently fixed or outstanding bugs in this area that 
I'm aware of.


From the symptoms I strongly suspect there's an unsafe foreign call 
somewhere causing problems, or another busy-wait loop.
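For reference, the distinction in FFI code looks like this (the slow_device_io import is a made-up example): an unsafe call blocks its capability for the call's whole duration, so a long-running call marked unsafe stalls the other Haskell threads scheduled there, while a safe call releases the capability first.

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}
module FfiExample where

-- Fine as 'unsafe': sin returns quickly, and unsafe calls have
-- lower overhead.
foreign import ccall unsafe "math.h sin"
  c_sin :: Double -> Double

-- Hypothetical long-running call: it should be 'safe', otherwise it
-- blocks the capability and starves other Haskell threads.
foreign import ccall safe "slow_device_io"
  c_slowDeviceIO :: IO ()
```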


Cheers,
Simon




— CODE SAMPLE 

gui :: IO ()
gui = do
    values   <- varCreate []    -- Values to be painted
    timeLine <- varCreate 0     -- Line time
    sample   <- varCreate 0     -- Sample Number
    running  <- varCreate True  -- True when telemetry is active

    -- HANG HERE

    f <- frameEx frameDefaultStyle
             [ text := "linti-scope PMBus Scope Tool" ] objectNull

    -- Setup GUI components code was here

    return ()

go :: IO ()
go = do
    putStrLn "Start GUI"
    start $ gui

exeMain :: IO ()
exeMain = do
    hSetBuffering stdout NoBuffering
    getArgs >>= parse
  where
    parse ["-h"] = usage >> exit
    parse ["-v"] = version >> exit
    parse []     = go
    parse [url, port, session, target] =
        goServer url port (read session) (read target)

    usage   = putStrLn "Usage: linti-scope [url, port, session, target]"
    version = putStrLn "Haskell linti-scope 0.1.0.0"
    exit    = System.Exit.exitWith System.Exit.ExitSuccess
    die     = System.Exit.exitWith (System.Exit.ExitFailure 1)

#ifndef MAIN_FUNCTION
#define MAIN_FUNCTION exeMain
#endif
main = MAIN_FUNCTION

On Jan 20, 2015, at 9:00 AM, Simon Marlow marlo...@gmail.com wrote:


My guess would be that either
- a thread is in a non-allocating loop
- a long-running foreign call is marked unsafe

Either of these would block the other threads.  ThreadScope together with some 
traceEventIO calls might help you identify the culprit.

Cheers,
Simon

On 20/01/2015 15:49, Michael Jones wrote:

Simon,

This was fixed some time back. I combed the code base looking for other busy 
loops and there are no more. I commented out the code that runs the I2C + 
Machines + IO stuff, and only left the GUI code. It appears that just the 
wxhaskell part of the program fails to start. This matches a previous 
observation based on printing.

I’ll see if I can hack up the code to a minimal set that I can publish. All the 
IP is in the I2C code, so I might be able to get it down to one file.

Mike

On Jan 19, 2015, at 3:37 AM, Simon Marlow marlo...@gmail.com wrote:


Hi Michael,

Previously in this thread it was pointed out that your code was doing busy 
waiting, and so the problem can be fixed by modifying your code to not do busy 
waiting.  Did you do this?  The -C flag is just a workaround which will make 
the RTS reschedule more often, it won't fix the underlying problem.

The code you showed us was:

sendTransactions :: MonadIO m => SMBusDevice DeviceDC590 -> TVar Bool ->
ProcessT m (Spec, String) ()
sendTransactions dev dts = repeatedly $ do
  dts' <- liftIO $ atomically $ readTVar dts
  when (dts' == True) (do
  (_, transactions) <- await
  liftIO $ sendOut dev transactions)

This loops when the contents of the TVar is False.

Cheers,
Simon

On 18/01/2015 01:15, Michael Jones wrote:

I have narrowed down the problem a bit. It turns out that many times if
I run the program and wait long enough, it will start. Given an event
log, it may take from 1000-1 entries sometimes.

When I look at a good start vs. slow start, I see that in both cases
things startup and there is some thread activity for thread 2 and 3,
then the application starts creating other threads, which is when the
wxhaskell GUI pops up and IO out my /dev/i2c begins. In the slow case,
it just gets stuck on thread 2/3 activity for a very long time.

If I switch from -C0.001 to -C0.010, the startup is more reliable, in
that most starts result in an immediate GUI and i2c IO.

The behavior suggests to me that some initial threads are starving the
ability for other threads to start, and perhaps on a dual core machine
it is more of a problem than single or quad core machines. For certain,
due to some printing, I know that the main thread is starting, and that
a print just before the first fork is not printing. Code between them is
evaluating wxhaskell functions, but the main frame is not yet asked to
become

Re: Thread behavior in 7.8.3

2015-01-20 Thread Simon Marlow

My guess would be that either
 - a thread is in a non-allocating loop
 - a long-running foreign call is marked unsafe

Either of these would block the other threads.  ThreadScope together 
with some traceEventIO calls might help you identify the culprit.
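A minimal sketch of that kind of instrumentation (event names are arbitrary): compile with -eventlog, run with +RTS -l, and the custom events show up on the timeline in ThreadScope, bracketing the suspect region.

```haskell
import Debug.Trace (traceEventIO)

main :: IO ()
main = do
  traceEventIO "START: gui setup"
  -- ... the suspect initialisation would go here ...
  traceEventIO "END: gui setup"
```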


Cheers,
Simon

On 20/01/2015 15:49, Michael Jones wrote:

Simon,

This was fixed some time back. I combed the code base looking for other busy 
loops and there are no more. I commented out the code that runs the I2C + 
Machines + IO stuff, and only left the GUI code. It appears that just the 
wxhaskell part of the program fails to start. This matches a previous 
observation based on printing.

I’ll see if I can hack up the code to a minimal set that I can publish. All the 
IP is in the I2C code, so I might be able to get it down to one file.

Mike

On Jan 19, 2015, at 3:37 AM, Simon Marlow marlo...@gmail.com wrote:


Hi Michael,

Previously in this thread it was pointed out that your code was doing busy 
waiting, and so the problem can be fixed by modifying your code to not do busy 
waiting.  Did you do this?  The -C flag is just a workaround which will make 
the RTS reschedule more often, it won't fix the underlying problem.

The code you showed us was:

sendTransactions :: MonadIO m => SMBusDevice DeviceDC590 -> TVar Bool ->
ProcessT m (Spec, String) ()
sendTransactions dev dts = repeatedly $ do
  dts' <- liftIO $ atomically $ readTVar dts
  when (dts' == True) (do
  (_, transactions) <- await
  liftIO $ sendOut dev transactions)

This loops when the contents of the TVar is False.

Cheers,
Simon

On 18/01/2015 01:15, Michael Jones wrote:

I have narrowed down the problem a bit. It turns out that many times if
I run the program and wait long enough, it will start. Given an event
log, it may take from 1000-1 entries sometimes.

When I look at a good start vs. slow start, I see that in both cases
things startup and there is some thread activity for thread 2 and 3,
then the application starts creating other threads, which is when the
wxhaskell GUI pops up and IO out my /dev/i2c begins. In the slow case,
it just gets stuck on thread 2/3 activity for a very long time.

If I switch from -C0.001 to -C0.010, the startup is more reliable, in
that most starts result in an immediate GUI and i2c IO.

The behavior suggests to me that some initial threads are starving the
ability for other threads to start, and perhaps on a dual core machine
it is more of a problem than single or quad core machines. For certain,
due to some printing, I know that the main thread is starting, and that
a print just before the first fork is not printing. Code between them is
evaluating wxhaskell functions, but the main frame is not yet asked to
become visible. From last week, I know that an non-gui version of the
app is getting stuck, but I do not know if it eventually runs like this
case.

Is there some convention that when I look at an event log you can tell
which threads are OS threads vs threads from fork?

Perhaps someone that knows the scheduler might have some advice. It
seems odd that a scheduler could behave this way. The scheduler should
have some built in notion of fairness.


On Jan 12, 2015, at 11:02 PM, Michael Jones m...@proclivis.com wrote:


Sorry I am reviving an old problem, but it has resurfaced, such that
one system behaves different than another.

Using -C0.001 solved problems on a Mac + VM + Ubuntu 14. It worked on
a single core 32 bit Atom NUC. But on a dual core Atom MinnowBoardMax,
something bad is going on. In summary, the same code that runs on two
machines does not run on a third machine. So this indicates I have not
made any breaking changes to the code or cabal files. Compiling with
GHC 7.8.3.

This bad system has Ubuntu 14 installed, with an updated Linux 3.18.1
kernel. It is a dual core 64 bit I86 Atom processor. The application
hangs at startup. If I remove the -C0.00N option and instead use -V0,
the application runs. It has bad timing properties, but it does at
least run. Note that a hang hangs an IO thread talking USB, and the
GUI thread.

When testing with the -C0.00N option, it did run 2 times out of 20
tries, so fail means fail most but not all of the time. When it did
run, it continued to run properly. This perhaps indicates some kind of
internal race condition.

In the fail to run case, it does some printing up to the point where
it tries to create a wxHaskell frame. In another non-UI version of the
program it also fails to run. Logging to a file gives a similar
indication. It is clear that the program starts up, then fails during
the run in some form of lockup, well after the initial startup code.

If I run with the strace command, it always runs with -C0.00N.

All the above was done with profiling enabled, so I removed that and
instead enabled eventlog to look for clues.

In this case it lies between good and bad, in that IO to my USB is
working, but the GUI comes up blank and never paints. Running this
case

Re: Thread behavior in 7.8.3

2015-01-19 Thread Simon Marlow

Hi Michael,

Previously in this thread it was pointed out that your code was doing 
busy waiting, and so the problem can be fixed by modifying your code to 
not do busy waiting.  Did you do this?  The -C flag is just a workaround 
which will make the RTS reschedule more often, it won't fix the 
underlying problem.


The code you showed us was:

sendTransactions :: MonadIO m => SMBusDevice DeviceDC590 -> TVar Bool ->
ProcessT m (Spec, String) ()

sendTransactions dev dts = repeatedly $ do
  dts' <- liftIO $ atomically $ readTVar dts
  when (dts' == True) (do
  (_, transactions) <- await
  liftIO $ sendOut dev transactions)

This loops when the contents of the TVar is False.
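One way to avoid the busy wait is to block in STM until the flag becomes True: 'check' retries the transaction, so the thread sleeps until some other thread writes the TVar. (waitUntilEnabled is a hypothetical helper that sendTransactions could run, via liftIO, before awaiting; it is not part of the code quoted above.)

```haskell
import Control.Concurrent.STM (TVar, atomically, readTVar, check)

-- Blocks, without spinning, until the TVar holds True.
waitUntilEnabled :: TVar Bool -> IO ()
waitUntilEnabled dts = atomically $ readTVar dts >>= check
```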

Cheers,
Simon

On 18/01/2015 01:15, Michael Jones wrote:

I have narrowed down the problem a bit. It turns out that many times if
I run the program and wait long enough, it will start. Given an event
log, it may take from 1000-1 entries sometimes.

When I look at a good start vs. slow start, I see that in both cases
things startup and there is some thread activity for thread 2 and 3,
then the application starts creating other threads, which is when the
wxhaskell GUI pops up and IO out my /dev/i2c begins. In the slow case,
it just gets stuck on thread 2/3 activity for a very long time.

If I switch from -C0.001 to -C0.010, the startup is more reliable, in
that most starts result in an immediate GUI and i2c IO.

The behavior suggests to me that some initial threads are starving the
ability for other threads to start, and perhaps on a dual core machine
it is more of a problem than single or quad core machines. For certain,
due to some printing, I know that the main thread is starting, and that
a print just before the first fork is not printing. Code between them is
evaluating wxhaskell functions, but the main frame is not yet asked to
become visible. From last week, I know that an non-gui version of the
app is getting stuck, but I do not know if it eventually runs like this
case.

Is there some convention that when I look at an event log you can tell
which threads are OS threads vs threads from fork?

Perhaps someone that knows the scheduler might have some advice. It
seems odd that a scheduler could behave this way. The scheduler should
have some built in notion of fairness.


On Jan 12, 2015, at 11:02 PM, Michael Jones m...@proclivis.com wrote:


Sorry I am reviving an old problem, but it has resurfaced, such that
one system behaves different than another.

Using -C0.001 solved problems on a Mac + VM + Ubuntu 14. It worked on
a single core 32 bit Atom NUC. But on a dual core Atom MinnowBoardMax,
something bad is going on. In summary, the same code that runs on two
machines does not run on a third machine. So this indicates I have not
made any breaking changes to the code or cabal files. Compiling with
GHC 7.8.3.

This bad system has Ubuntu 14 installed, with an updated Linux 3.18.1
kernel. It is a dual core 64 bit I86 Atom processor. The application
hangs at startup. If I remove the -C0.00N option and instead use -V0,
the application runs. It has bad timing properties, but it does at
least run. Note that a hang hangs an IO thread talking USB, and the
GUI thread.

When testing with the -C0.00N option, it did run 2 times out of 20
tries, so fail means fail most but not all of the time. When it did
run, it continued to run properly. This perhaps indicates some kind of
internal race condition.

In the fail to run case, it does some printing up to the point where
it tries to create a wxHaskell frame. In another non-UI version of the
program it also fails to run. Logging to a file gives a similar
indication. It is clear that the program starts up, then fails during
the run in some form of lockup, well after the initial startup code.

If I run with the strace command, it always runs with -C0.00N.

All the above was done with profiling enabled, so I removed that and
instead enabled eventlog to look for clues.

In this case it lies between good and bad, in that IO to my USB is
working, but the GUI comes up blank and never paints. Running this
case without -v0 (event log) the gui partially paints and stops, but
USB continues.

Questions:

1) Does ghc 7.8.4 have any improvements that might pertain to these
kinds of scheduling/thread problems?
2) Is there anything about the nature of a thread using USB, I2C, or
wxHaskell IO that leads to problems that a pure calculation app would
not have?
3) Any ideas how to track down the problem when changing conditions
(compiler or runtime options) affects behavior?
4) Are there other options besides -V and -C for the runtime that
might apply?
5) What does -V0 do that makes a problem program run?

Mike




On Oct 29, 2014, at 6:02 PM, Michael Jones m...@proclivis.com wrote:


John,

Adding -C0.005 makes it much better. Using -C0.001 makes it behave
more like -N4.

Thanks. This saves my project, as I need to deploy on a single core
Atom and was stuck.


Re: RFC: changes to -i flag for finding source files

2014-05-30 Thread Simon Marlow

On 30/05/14 11:10, John Meacham wrote:

On Fri, May 30, 2014 at 2:45 AM, Daniel Trstenjak
daniel.trsten...@gmail.com wrote:

Well, it might not be terribly surprising in itself, but we
just have quite complex systems; the not-terribly-surprising
things accumulate, and then it might get surprising somewhere.

I really prefer simplicity and explicitness.

If a central tool like GHC adds this behaviour, then all other
tools are forced to follow.



Well, I just proposed it as an alternative to some of the other ideas
floated here. A command line flag like -i would definitely be
GHC-specific and inherently non-portable.


Just to clarify this point, what I proposed was not intended to be 
compiler-specific, because it would be a change to the hs-source-dirs 
field of a .cabal file, and implemented in whatever way each compiler 
wants to do it.  In the case of GHC the hs-source-dirs are just wrapped 
in -i flags.


Still, I'm not planning to push the change into GHC because (a) changes 
are needed in Cabal and a few other places, (b) opinion was divided, and 
(c) it's not a big enough win.


JHC's semantics would also need changes in Cabal, incidentally, for the 
same reason: supporting 'sdist' (and possibly other reasons that I've 
forgotten).


Cheers,
Simon


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: GHC 7.8.3 release

2014-05-30 Thread Simon Marlow

On 27/05/14 09:06, Austin Seipp wrote:

PPS: This might also impact the 7.10 schedule, but last Simon and I
talked, we thought perhaps shooting for ICFP this time (and actually
hitting it) was a good plan. So I'd estimate on that a 7.8.4 might
happen a few months from now, after summer.


FWIW, I think doing 7.10 in October is way too soon.  Major releases 
create a large distributed effort for package maintainers and users, and 
there are other knock-on effects, so we shouldn't do them too often.  A 
lot of our users want stability, while many of them also want progress, 
and 12 months between major releases is the compromise we settled on.


The last major release slipped for various reasons, but I don't believe 
that means we should try to get back on track by having a short time 
between 7.8 and 7.10.  7.8 will be out of maintenance when it has only 
just made it into a platform release.


Anyway, that's my opinion.  Of course if everyone says they don't mind a 
7.10 in October then I withdraw my objection :-)


(as a data point, upgrading to 7.8 at work cost me three weeks, but 
we're probably a special case)


Cheers,
Simon



Re: Future of DYNAMIC_GHC_PROGRAMS?

2014-05-24 Thread Simon Marlow

On 19/05/2014 13:51, harry wrote:

harry wrote

I need to build GHC 7.8 so that Template Haskell will work without shared
libraries (due to a shortage of space).

I understand that this can be done by turning off DYNAMIC_GHC_PROGRAMS and
associated build options. Is this possibility going to be kept going
forward, or will it be deprecated once dynamic GHC is fully supported on
all platforms?


PS This is for Linux x64.


We may yet go back and turn DYNAMIC_GHC_PROGRAMS off by default; it has 
yet to be decided.  The worst situation would be to have to support 
both, so I imagine once we've decided one way or the other we'll 
deprecate the other method.


Is it just shortage of space, or is there anything else that pushes you 
towards DYNAMIC_GHC_PROGRAMS=NO?  Isn't disk space cheap?


Cheers,
Simon


Re: Using mutable array after an unsafeFreezeArray, and GC details

2014-05-13 Thread Simon Marlow

On 12/05/2014 21:28, Brandon Simmons wrote:

The idea is I'm using two atomic counters to coordinate concurrent
readers and writers along an infinite array (a linked list of array
segments that get allocated as needed and garbage collected as we go).
So currently each cell in each array is written to only once, with a
CAS.


Certainly you should freeze arrays when you're done writing to them, 
this will help the GC a lot.
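Simon's freeze-when-done advice can be sketched with the safe ST-array
API from the array package, here standing in for the raw primops this
thread is actually about:

```haskell
import Data.Array
import Data.Array.ST

-- Build mutably, then freeze exactly once when writing is finished.
-- A frozen (immutable) array is cheap for the GC: it never needs to
-- be re-scanned for old-to-new generation references.
squares :: Int -> Array Int Int
squares n = runSTArray $ do
  a <- newArray (0, n - 1) 0
  mapM_ (\i -> writeArray a i (i * i)) [0 .. n - 1]
  return a                      -- runSTArray freezes without copying

main :: IO ()
main = print (squares 5)
```

runSTArray performs the freeze itself, so the mutable array can never
escape and be written to after freezing.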



How large are your arrays? Perhaps the new small array type (in HEAD but not
7.8) would help?


Thanks, maybe so! The arrays can be any size, but probably not smaller
than length 64 (this will be static, at compile-time).

I read through https://ghc.haskell.org/trac/ghc/ticket/5925, and it
seems like the idea is to improve array creation. I'm pretty happy
with the speed of cloning an array (but maybe cloneSmallArray will be
even faster still).

It also looks like stg_casSmallArrayzh (in PrimOps.cmm) omits the card
marking (maybe the idea is if the array is already at ~128 elements or
less, then the card-marking is all just overhead?).


That's right: the cards currently cover 128 elements, and there's also a 
per-array dirty bit, so the card table in an array smaller than 128 elts 
is just overhead.


Cheers,
Simon


Re: AlternateLayoutRule

2014-05-13 Thread Simon Marlow

On 13/05/14 15:04, John Meacham wrote:

Hi, I noticed that ghc now supports an 'AlternateLayoutRule' but am
having trouble finding information about it. Is it based on my
proposal and sample implementation?
http://www.mail-archive.com/haskell-prime@haskell.org/msg01938.html


Yes it is, but I think we had to flesh it out with a few more cases. 
Ian will know more, he implemented it in GHC.


It has never been the default implementation, because it wasn't possible 
to cover 100% of the strange ways that code in the wild currently relies 
on the parse-error behaviour in the layout rule.  You can get it with 
-XAlternateLayoutRule though.
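For illustration (this example is not from the thread): the
parse-error-dependent layout that the Haskell 2010 rule permits, and
that makes a fully faithful alternate rule hard, shows up even in
one-liners:

```haskell
-- The implicit block opened after 'let' has no explicit braces; the
-- layout algorithm closes it via the parse-error rule when 'in' is
-- reached, which is what lets this parse on a single line.
main :: IO ()
main = print (let x = 3 in x + 1)
```

An alternate rule that never consults the parser has to special-case
tokens like 'in' to accept code of this shape.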


I'm not sure what we should do about it.  I think Ian's motivation was 
to experiment with a view to proposing it as a replacement for the 
layout rule in Haskell', but (and this is my opinion) I think it ends up 
not being as clean as we might have hoped, and the cases where it 
doesn't work in the same way as the old rule aren't easily explainable 
to people.


On the other hand, we did find a nice use for it in GHC: the multiline 
parser in GHCi can tell whether you've finished typing a complete 
expression using the alternate layout rule.


Cheers,
Simon


https://ghc.haskell.org/trac/haskell-prime/wiki/AlternativeLayoutRule
implies it has been in use since 6.13. If that is the case, I assume
it has been found stable?

I ask because I was going to rewrite the jhc lexer and would like to
use the new mechanism in a way that is compatible with ghc. If it is
already using my code, so much the better.



Re: Using mutable array after an unsafeFreezeArray, and GC details

2014-05-12 Thread Simon Marlow

On 10/05/2014 21:57, Brandon Simmons wrote:

Another silly question: when card-marking happens after a write or
CAS, does that indicate this segment maybe contains old-to-new
generation references, so be sure to preserve (scavenge?) them from
collection ?


Yes, that's exactly right.

Cheers,
Simon


Re: Using mutable array after an unsafeFreezeArray, and GC details

2014-05-12 Thread Simon Marlow

On 09/05/2014 19:21, Brandon Simmons wrote:

A couple of updates: Edward Yang responded here, confirming the sort
of track I was thinking on:

   http://blog.ezyang.com/2014/05/ghc-and-mutable-arrays-a-dirty-little-secret/

And I can report that:
   1) cloning a frozen array doesn't provide the benefits of creating a
new array and freezing
   2) and anyway, I'm seeing some segfaults when cloning, freezing,
reading then writing in my library

I'd love to learn if there are any other approaches I might take, e.g.
maybe with my own CMM primop variants?


I'm not sure exactly what your workload looks like, but if you have 
arrays that tend to be unmodified for long periods of time it's 
sometimes useful to keep them frozen but thaw before mutating.
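The keep-frozen-but-thaw-before-mutating pattern can be sketched with
the safe (copying) thaw and freeze from the array package; the primop
variants avoid the copy but are only safe if the old array is never
touched again:

```haskell
import Data.Array
import Data.Array.IO

-- Thaw an immutable array, mutate it, and freeze it again.  The safe
-- versions copy on each conversion; unsafeThaw/unsafeFreeze skip the
-- copy at the cost of aliasing the two arrays.
bumpFirst :: Array Int Int -> IO (Array Int Int)
bumpFirst arr = do
  m <- thaw arr :: IO (IOArray Int Int)
  writeArray m 0 . (+ 1) =<< readArray m 0
  freeze m

main :: IO ()
main = bumpFirst (listArray (0, 2) [10, 20, 30]) >>= print
```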


How large are your arrays? Perhaps the new small array type (in HEAD but 
not 7.8) would help?


Cheers,
Simon


Re: how to compile non-dynamic ghc-7.8.2 ?

2014-04-29 Thread Simon Marlow

On 25/04/2014 02:15, John Lato wrote:

Hello,

I'd like to compile ghc-7.8.2 with DynamicGhcPrograms disabled (on
64-bit linux).  I downloaded the source tarball, added

DYNAMIC_GHC_PROGRAMS = NO

to mk/build.mk, and did ./configure && make.

ghc builds and everything seems to work (cabal installed a bunch of
packages, ghci seems to work), however whenever I try to run Setup.hs
dynamically (either 'runghc Setup.hs configure' or loading it with ghci
and executing 'main') it dumps core.  Compiling Setup.hs works, and
nothing else has caused ghci to crash either (this email is a literate
haskell file equivalent to Setup.hs).

Building with DYNAMIC_GHC_PROGRAMS = YES works properly.

With that in mind, I have a few questions:

  How should I compile a non-dynamic ghc?
  Is this a bug in ghc?


I think you are running into this: 
https://ghc.haskell.org/trac/ghc/ticket/8935


It took me a *long* time to track that one down.  I still don't know 
what the root cause is, because I don't understand the system linker's 
behaviour here.  Given that other people are running into this, we ought 
to milestone it for 7.8.3 and do something about it.


Cheers,
Simon


Re: RFC: changes to -i flag for finding source files

2014-04-28 Thread Simon Marlow

On 25/04/2014 17:57, Roman Cheplyaka wrote:

* Edward Kmett ekm...@gmail.com [2014-04-25 11:22:46-0400]

+1 from me. I have a lot of projects that suffer with 4 levels of vacuous
subdirectories just for this.

In theory cabal could support this on older GHC versions by copying all of the
files to a working dir in dist with the expected layout on older GHCs.

That would enable this to get much greater penetration more quickly.


I'd really not want that people start putting ghc-options: -isrc=...
into their cabal files. That'd make them irrecoverably ghc-specific, as no other
tool will know how to process the files unless it reads ghc-options.


No, the idea would be to use hs-source-dirs like this:

  hs-source-dirs: A.B.C=src

Cabal just passes this in a -i option to GHC, so it almost Just Works, 
except that Cabal needs to preprocess some source files so it needs to 
find them, and also sdist needs to find source files.
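Under the proposal, the stanza might have looked like this
(hypothetical sketch; the syntax was never adopted):

```cabal
-- Modules in the Graphics.UI.Gtk.* hierarchy live directly in src/,
-- with no empty Graphics/UI/Gtk directory layers.
library
  hs-source-dirs:  Graphics.UI.Gtk=src
  exposed-modules: Graphics.UI.Gtk.Button
                   Graphics.UI.Gtk.Label
```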


Cheers,
Simon


I'm +1 on adding support for this to cabal. Then the mechanism cabal uses to
achieve this is just that — an implementation detail. And this new -i syntax can
become one of such mechanisms.

But cabal support should really come first, so that people aren't tempted by
ghc-specific solutions.

Roman




Re: RFC: changes to -i flag for finding source files

2014-04-28 Thread Simon Marlow

On 25/04/2014 21:26, Malcolm Wallace wrote:


On 25 Apr 2014, at 14:17, Simon Marlow wrote:


The problem we often have is that when you're writing code for a library that 
lives deep in the module hierarchy, you end up needing a deep directory 
structure, where the top few layers are all empty.


I don't see how this is a problem at all.  Navigating the vacuous structure 
is as simple as pressing the tab key a few times.  But if you change the mapping 
convention between files and module names, you need to do it for all tools, not just the 
compiler.  I imagine all of the following tools would need to know about it:

cabal, hoogle, haddock, happy, alex, hat, hsc2hs


So actually many of these tools don't need to map module names to source 
files, so they work unchanged.  Only Cabal definitely needs changes, I'm 
not sure about hoogle and hat but I suspect they would be fine.



and probably a few more.  The feature seems like a very low power-to-weight 
ratio, so -1 from me.


Fair enough :)

Cheers,
Simon




Regards,
 Malcolm




Re: RFC: changes to -i flag for finding source files

2014-04-28 Thread Simon Marlow
Thanks for all the feedback.  Clearly opinion is divided on this one, so 
I'll sit on it and think it through some more.


Cheers,
Simon

On 25/04/2014 14:17, Simon Marlow wrote:

I want to propose a simple change to the -i flag for finding source
files.  The problem we often have is that when you're writing code for a
library that lives deep in the module hierarchy, you end up needing a
deep directory structure, e.g.

  src/
    Graphics/
      UI/
        Gtk/
          Button.hs
          Label.hs
          ...

where the top few layers are all empty.  There have been proposals of
elaborate solutions for this in the past (module grafting etc.), but I
want to propose something really simple that would avoid this problem
with minimal additional complexity:

   ghc -iGraphics.UI.Gtk=src

the meaning of this flag is that when searching for modules, ghc will
look for the module Graphics.UI.Gtk.Button in src/Button.hs, rather than
src/Graphics/UI/Gtk/Button.hs.  The source file itself is unchanged: it
still begins with module Graphics.UI.Gtk.Button 

The implementation is only a few lines in the Finder (and probably
rather more in the manual and testsuite), but I wanted to get a sense of
whether people thought this would be a good idea, or if there's a better
alternative before I push it.

Pros:

   - simple implementation (but Cabal needs mods, see below)
   - solves the deep directory problem

Cons:

   - It makes the rules about finding files a bit more complicated.
 People need to find source files too, not just compilers.
   - packages that want to be compatible with older compilers can't
 use it yet.
   - you can't use '=' in a source directory name (but we could pick
 a different syntax if necessary)
   - It won't work for Cabal packages until Cabal is modified to
 support it (PreProcess and SrcDist and perhaps Haddock are the only
 places affected, I think)
   - Hackage will need to reject packages that use this feature without
 also specifying ghc >= 7.10 and some cabal-version too.
   - Are there other tools/libraries that will need changes? Leksah?

Cheers,
Simon



RFC: changes to -i flag for finding source files

2014-04-25 Thread Simon Marlow
I want to propose a simple change to the -i flag for finding source 
files.  The problem we often have is that when you're writing code for a 
library that lives deep in the module hierarchy, you end up needing a 
deep directory structure, e.g.


 src/
   Graphics/
     UI/
       Gtk/
         Button.hs
         Label.hs
         ...

where the top few layers are all empty.  There have been proposals of 
elaborate solutions for this in the past (module grafting etc.), but I 
want to propose something really simple that would avoid this problem 
with minimal additional complexity:


  ghc -iGraphics.UI.Gtk=src

the meaning of this flag is that when searching for modules, ghc will 
look for the module Graphics.UI.Gtk.Button in src/Button.hs, rather than 
src/Graphics/UI/Gtk/Button.hs.  The source file itself is unchanged: it 
still begins with module Graphics.UI.Gtk.Button 
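Concretely, src/Button.hs would be an ordinary module carrying its
full hierarchical name (a sketch of the proposed, never-merged
behaviour; the module body is invented):

```haskell
-- src/Button.hs, located via:  ghc -iGraphics.UI.Gtk=src
-- The module header keeps the full hierarchical name even though the
-- file sits directly under src/.
module Graphics.UI.Gtk.Button where

data Button = Button { buttonLabel :: String }
```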


The implementation is only a few lines in the Finder (and probably 
rather more in the manual and testsuite), but I wanted to get a sense of 
whether people thought this would be a good idea, or if there's a better 
alternative before I push it.


Pros:

  - simple implementation (but Cabal needs mods, see below)
  - solves the deep directory problem

Cons:

  - It makes the rules about finding files a bit more complicated.
People need to find source files too, not just compilers.
  - packages that want to be compatible with older compilers can't
use it yet.
  - you can't use '=' in a source directory name (but we could pick
a different syntax if necessary)
  - It won't work for Cabal packages until Cabal is modified to
support it (PreProcess and SrcDist and perhaps Haddock are the only
places affected, I think)
  - Hackage will need to reject packages that use this feature without
also specifying ghc >= 7.10 and some cabal-version too.
  - Are there other tools/libraries that will need changes? Leksah?

Cheers,
Simon


Re: -optl behavior in ghc-7.8.1

2014-04-15 Thread Simon Marlow

On 14/04/2014 15:44, Brandon Allbery wrote:

On Mon, Apr 14, 2014 at 10:42 AM, Simon Marlow marlo...@gmail.com wrote:

The problem I was fixing was that we weren't always passing the
-optl options.  Now when we invoke a program the -optXXX options
always come first - I think before it was kind of random and
different for each of the phases.


Some things do need to come first, like that; but apparently we need
either after-options or a more flexible library syntax. (Or an
other-objects?)


If you need control over the ordering of all the linker options, you 
just put -optl in front of them all (object files, library files, -l, 
-L).  Then you'll need at least one object file (an empty one will do) 
to convince GHC that you actually want to run the linker though.  FWIW, 
this is what our build system does.
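Simon's workaround can be sketched as a single link command (file and
library names here are hypothetical):

```shell
# Pass everything the linker must see, in order, via -optl.
# empty.o exists only to convince GHC that a link step is wanted.
ghc -o prog empty.o \
    -optl objs/extra.o -optl -Llibs -optl -lfoo
```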


Cheers,
Simon


Re: -optl behavior in ghc-7.8.1

2014-04-14 Thread Simon Marlow

On 10/04/2014 18:11, Yuras Shumovich wrote:

On Thu, 2014-04-10 at 18:49 +0200, Karel Gardas wrote:

On 04/10/14 06:39 PM, Yuras Shumovich wrote:

...and other linker options must come after, like in my case. So what?
Are there any ticket where people complain about the old behavior? I'm
not advocating any specific behavior, I'm just asking why it was
changed.


Hmm, I'm not sure if I'm the patch provider, but at least I provided
patch which was merged into HEAD (don't have 7.8 branch here so check
yourself) which fixes linking of binaries failure on Solaris. Please see
b9b94ec82d9125da47c619c69e626120b3e60457

The core of the change is:

-else package_hs_libs ++ extra_libs ++ other_flags
+else other_flags ++ package_hs_libs ++ extra_libs


Thank you for pointing to the commit. I hoped it was incidental change,
but now I see the reason.


Actually this was me: 
https://ghc.haskell.org/trac/ghc/changeset/1e2b3780ebc40d28cd0f029b90df102df09e6827/ghc


The problem I was fixing was that we weren't always passing the -optl 
options.  Now when we invoke a program the -optXXX options always come 
first - I think before it was kind of random and different for each of 
the phases.


Cheers,
Simon





Thanks,
Yuras



the patch contains full explanation in comment so see it for more
information.

If this is not what bugs you, then please ignore me.

Thanks,
Karel







Re: PROPOSAL: Literate haskell and module file names

2014-03-26 Thread Simon Marlow

On 17/03/2014 13:08, Edward Kmett wrote:

Foo+rst.lhs does nicely dodge the collision with jhc.

How does ghc do the search now? By trying each alternative in turn?


Yes - see compiler/main/Finder.hs

Cheers,
Simon







On Sun, Mar 16, 2014 at 1:14 PM, Merijn Verstraaten
mer...@inconsistent.nl wrote:

I agree that this could collide, see my beginning remark that I
believe that the report should provide a minimal specification how
to map modules to filenames and vice versa.

Anyhoo, I'm not married to this specific suggestion. Carter
suggested Foo+rst.lhs on IRC, other options would be Foo.rst+lhs
or Foo.lhs+rst, I don't particularly care what as long as we pick
something. Patching tools to support whatever solution we pick
should be trivial.

Cheers,
Merijn

On Mar 16, 2014, at 16:41 , Edward Kmett wrote:

One problem with Foo.*.hs or even Foo.md.hs mapping to the module
name Foo is that as I recall JHC will look for Data.Vector in
Data.Vector.hs as well as Data/Vector.hs

This means that on a case insensitive file system
Foo.MD.hs matches both conventions.

Do I want to block an change to GHC because of an incompatible
change in another compiler? Not sure, but I at least want to raise
the issue so it can be discussed.

Another small issue is that this means you need to actually scan
the directory rather than look for particular file names, but off
my head really I don't expect directories to be full enough for
that to be a performance problem.

-Edward



On Sun, Mar 16, 2014 at 8:56 AM, Merijn Verstraaten
mer...@inconsistent.nl wrote:

Ola!

I didn't know what the most appropriate venue for this
proposal was so I crossposted to haskell-prime and
glasgow-haskell-users, if this isn't the right venue I welcome
advice where to take this proposal.

Currently the report does not specify the mapping between
filenames and module names (this is an issue in itself, it
essentially makes writing haskell code that's interoperable
between compilers impossible, as you can't know what directory
layout each compiler expects). I believe that a minimal
specification *should* go into the report (hence,
haskell-prime). However, this is a separate issue from this
proposal, so please start a new thread rather than
sidetracking this one :)

The report only mentions that by convention .hs extensions
imply normal haskell and .lhs literate haskell (Section 10.4).
In the absence of guidance from the report GHC's convention of
mapping module Foo.Bar.Baz to Foo/Bar/Baz.hs or
Foo/Bar/Baz.lhs seems the only sort of standard that exists.
In general this standard is nice enough, but the mapping of
literate haskell is a bit inconvenient, it leaves it
completely ambiguous what the non-Haskell content of said
file is, which is annoying for tool authors.

Pandoc has adopted the policy of checking for further file
extensions for literate haskell source, e.g. Foo.rst.lhs and
Foo.md.lhs. Here .rst.lhs gets interpreted as being
reStructured Text with literate haskell and .md.lhs is
Markdown with literate haskell. Unfortunately GHC currently
maps filenames like this to the module names Foo.rst and
Foo.md, breaking anything that wants to import the module Foo.

I would like to propose allowing an optional extra extension
in the pandoc style for literate haskell files, mapping
Foo.rst.lhs to module name Foo. This is a backwards compatible
change as there is no way for Foo.rst.lhs to be a valid module
in the current GHC convention. Foo.rst.lhs would map to module
name Foo.rst but module name Foo.rst maps to filename
Foo/rst.hs which is not a valid haskell module anyway as the
rst is lowercase and module names have to start with an
uppercase letter.
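A sketch of what such a file might contain (a hypothetical
Foo.rst.lhs, which would map to module Foo under this proposal):

```haskell
Some reStructuredText prose describing the module, ignored by the
compiler because only bird-track lines are Haskell.

> module Foo where
>
> answer :: Int
> answer = 42
```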

Pros:
 - Tool authors can more easily determine non-haskell content
of literate haskell files
 - Currently valid module names will not break
 - Report doesn't specify behaviour, so GHC can do whatever it
likes

Cons:
 - Someone has to implement it
 - ??

Discussion: 4 weeks

Cheers,
Merijn












Re: can't load .so/.DLL - undefined symbol

2014-03-17 Thread Simon Marlow

On 11/03/2014 22:11, Henning Thielemann wrote:

I am trying to understand the following linker message. I have started
GHCi, loaded a program and try to run it:

Main main
...
Loading package poll-0.0 ... linking ... done.
Loading package alsa-seq-0.6.0.3 ... can't load .so/.DLL for:
/var/cabal/lib/x86_64-linux-ghc-7.8.0.20140228/alsa-seq-0.6.0.3/libHSalsa-seq-0.6.0.3-ghc7.8.0.20140228.so
(/var/cabal/lib/x86_64-linux-ghc-7.8.0.20140228/alsa-seq-0.6.0.3/libHSalsa-seq-0.6.0.3-ghc7.8.0.20140228.so:
undefined symbol:
alsazmseqzm0zi6zi0zi3_SystemziPosixziPoll_zdfStorableFd_closure)


I assume that GHCi wants to say the following: The instance Storable Fd
defined in module System.Posix.Poll cannot be found in the shared object
file of the alsa-seq package. That's certainly true because that module
is in the package 'poll' and not in 'alsa-seq'. But 'alsa-seq' imports
'poll'. What might be the problem?


It seems to have the idea that System.Posix.Poll is part of the alsa-seq 
package.  Perhaps you have a copy of that module on the search path 
somewhere, or inside the alsa-seq package?
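As an aside for readers decoding such errors: the symbol name is
z-encoded (zm encodes '-', zi encodes '.', zd encodes '$'), so it can
be read off by hand:

```haskell
-- z-encoded: alsazmseqzm0zi6zi0zi3_SystemziPosixziPoll_zdfStorableFd_closure
-- decoded:   alsa-seq-0.6.0.3  System.Posix.Poll  $fStorableFd  (closure)
-- i.e. the dictionary closure for 'instance Storable Fd' that the
-- dynamic linker expects to find when loading the alsa-seq library.
```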


Cheers,
Simon



It's a rather big example that fails here, whereas the small examples in
alsa-seq package work. Thus I first like to know what the message really
means, before investigating further. I installed many packages at once
with cabal-install using a single build directory, like:

$ cabal install --builddir=/tmp/dist --with-ghc=ghc7.8.0.20140228 poll
alsa-seq pkg1 pkg2 pkg3 ...

? Can this cause problems?



Re: RC2 build failures on Debian: sparc

2014-03-12 Thread Simon Marlow

These look suspicious:

/tmp/ghc29241_0/ghc29241_2.hc: In function 'stg_ap_pppv_ret':

/tmp/ghc29241_0/ghc29241_2.hc:2868:30:
 warning: function called through a non-compatible type [enabled by 
default]


/tmp/ghc29241_0/ghc29241_2.hc:2868:30:
 note: if this code is reached, the program will abort

If this is a general problem with unregisterised via-C compilation then 
we can probably fix it.  Could you open a ticket (or point me to the 
existing ticket if there is one)?


Cheers,
Simon

On 05/03/2014 21:54, Joachim Breitner wrote:

Hi,

sparc fails differently than in RC1, and very plainly with a
segmentation fault in dll-split (which happens to be the first program
to be run that is compiled with stage1):
https://buildd.debian.org/status/fetch.php?pkg=ghc&arch=sparc&ver=7.8.20140228-1&stamp=1393975264

Any ideas? Anyone feeling responsible?

It would be a shame to lose a lot of architectures in 7.8 compared to
7.6, but I’m not a porter and don’t know much about this part of the
compiler, so I have to rely on your support in fixing these problems,
preferably before 7.8.1.

Greetings,
Joachim







Re: 7.8.1, template haskell, and dynamic libraries

2014-02-17 Thread Simon Marlow
I think you can summarise all that with "tl;dr: the right thing will 
happen; you don't have to remember to give any new flags to GHC or Cabal."


Cheers,
Simon

On 09/02/2014 21:14, Austin Seipp wrote:

Actually, just to keep it even simpler, so nobody else is confused
further: Cabal will *also* properly turn on dynamic builds for regular
packages if GHC is dynamic, TemplateHaskell or not. So any library you
compile will still work in GHCi as expected.

So here's the breakdown:

   1) Cabal 1.18 will handle a dynamic GHCi correctly, including
compiling things dynamically wherever it must.
   2) Per #1, libraries are compiled dynamically. This means libraries
work in GHCi, just like they did.
   3) -Executables- are statically linked by default, still. (But
because of #1 and #2, it's very easy to turn on dynamic exes as well,
without needing to recompile a lot.)
   4) TemplateHaskell works as expected due to #1 and #2. But there is
one caveat for executables, noted separately below.

The caveat with TemplateHaskell is for executables: This is because if
you end up with an executable that needs TH and profiling, Cabal must
be aware of this. Why? Because GHCi cannot load *profiled* objects,
only normal ones. So we must compile twice: once without profiling,
and once with profiling. The second compilation will use the 'normal'
objects, even though the final executable will be profiled. Cabal
doesn't know to do this if it doesn't know TemplateHaskell is a
requirement.
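The caveat amounts to declaring the extension in the .cabal file,
roughly like this (package and module names are hypothetical):

```cabal
executable my-exe
  main-is:          Main.hs
  build-depends:    base, template-haskell
  -- Declaring TH here lets Cabal know it must build non-profiled
  -- objects as well when profiling is enabled.
  other-extensions: TemplateHaskell
```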

Does this clear things up? My last message might give the impression
some things aren't compiled dynamically, because I merely ambiguously
referred to 'packages'.

On Sun, Feb 9, 2014 at 2:37 PM, Austin Seipp aus...@well-typed.com wrote:

It is correct that Template Haskell now requires dynamic objects.
However, GHC can produce static and dynamic objects at the same time,
so you don't have to recompile a package twice (it's a big
optimization, basically).

Furthermore, if TemplateHaskell is enabled as a requirement for a
package, and GHC is built dynamically, Cabal will do The Right Thing
by building the shared objects for the dependencies as well. It will
save time by doing so using -dynamic-too, if possible. This is because
it queries GHC before compilation to figure this out (you can run 'ghc
--info' with the 7.8.1 RC to see the "GHC Dynamic" and "Supports
dynamic-too" fields.)

Finally, if you simply run 'ghc Foo.hs' on a file that requires
TemplateHaskell, it will also switch on -dynamic-too for the needed
objects in this simple case.

There is one caveat, if I remember correctly: if a package uses
TemplateHaskell, it must declare it as such in the Cabal file. This is
because Cabal does not parse the source to detect if TemplateHaskell
is needed in the dependency graph of the compiled modules. Only GHC
can do this reliably. If you don't specify TemplateHaskell as an
extension, Cabal might not do the right thing. This is noted in the
release notes:


Note that Cabal will correctly handle -dynamic-too for you automatically, 
especially when -XTemplateHaskell is needed - but you *must* tell Cabal you are 
using the TemplateHaskell extension.


However, based on the other suggestions in the thread and confusion
around this, a big "Incompatible changes" section with this listed as
the first thing, with clear detail, would be a good idea. I'll do so.

If something else is going on, please file a bug.

On Sun, Feb 9, 2014 at 1:37 PM, George Colpitts
george.colpi...@gmail.com wrote:

Yes, in general I think the doc needs a section: Incompatible changes. The
hope is that you can take the release and just work as usual, but when (for
good reasons, as in this release) that is not true it is important to have
such a section. Another case that needs to be there is how to compile so
you can load compiled object files into ghci, as what you did in 7.6.3
won't work in this release.


On Sun, Feb 9, 2014 at 1:11 PM, Carter Schonwald
carter.schonw...@gmail.com wrote:


Indeed. The problem is that many folks might have cabal config files that
explicitly disable shared.  (For the compile times!).  They might need clear
information about wiping that field.


On Sunday, February 9, 2014, Brandon Allbery allber...@gmail.com wrote:


On Sun, Feb 9, 2014 at 9:28 AM, Greg Horn gregmainl...@gmail.com wrote:


Is --enable-shared off by default?


It's supposed to be on by default in 7.8. That said, not sure how many
people have played with ~/.cabal/config

--
brandon s allbery kf8nh   sine nomine
associates
allber...@gmail.com
ballb...@sinenomine.net
unix, openafs, kerberos, infrastructure, xmonad
http://sinenomine.net



___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users





Re: Parallel building multiple targets

2014-01-22 Thread Simon Marlow

On 05/01/2014 23:48, John Lato wrote:

(FYI, I expect I'm the source of the suggestion that ghc -M is broken)

First, just to clarify, I don't think ghc -M is obviously broken.
  Rather, I think it's broken in subtle, unobvious ways, such that
trying to develop a make-based project with ghc -M will fail at various
times in a non-obvious fashion, at least without substantial additional
rules.


If I understand you correctly, you're not saying that ghc -M is broken, 
but that it would be easier to use if it did more.  Right?  Maybe you 
could make specific suggestions?  Saying it is broken is a bit 
FUD-ish.  We use it in GHC's build system, so by an existence proof it 
is certainly not broken.


Cheers,
Simon


  For an example of some of the extra steps necessary to make

something like this work, see e.g.
https://github.com/nh2/multishake (which is admittedly for a more
complicated setup, and also has some issues).  The especially
frustrating part is, just when you think you have everything working,
someone wants to add some other tool to a workflow (hsc2hs, .cmm files,
etc), and your build system doesn't support it.

ghc --make doesn't allow building several binaries in one run, however
if you use cabal all the separate runs will use a shared build
directory, so subsequent builds will be able to take advantage of the
intermediate output of the first build.  Of course you could do the same
without cabal, but it's a convenient way to create a common build
directory and manage multiple targets.  This is the approach I would
take to building multiple executables from the same source files.

ghc doesn't do any locking of build files AFAIK.  Running parallel ghc
commands for two main modules that have the same import, using the same
working directory, is not safe.  In pathological cases the two different
main modules may even generate different code *for the imported module*.
  This sort of situation can arise with the IncoherentInstances
extension, for example.

The obvious approach is of course to make a library out of your common
files.  This has the downsides of requiring a bit more work on the
developer's part, but if the common files are relatively stable it'll
probably lead to the fastest builds of your executables.  Also in this
case you could run multiple `ghc --make`s in parallel, using different
build directories, since they won't be rebuilding any common code.
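That last suggestion can be sketched as a couple of shell commands (file
and directory names here are illustrative): separate -outputdir settings
mean the two ghc --make runs never write the same .hi/.o files, so they
are safe to run concurrently.

```
# Each executable gets its own build directory; common modules are
# compiled twice, but the runs cannot trample each other's outputs.
ghc --make -outputdir build/prog1 -o prog1 prog1.hs &
ghc --make -outputdir build/prog2 -o prog2 prog2.hs &
wait
```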

John L.

On Sun, Jan 5, 2014 at 1:47 PM, Sami Liedes sami.lie...@iki.fi
wrote:

Hi,

I have a Haskell project where a number of executables are produced
from mostly the same modules. I'm using a Makefile to enable parallel
builds. I received advice[1] that ghc -M is broken, but that there
is parallel ghc --make in HEAD.

As far as I can tell, ghc --make does not allow building several
binaries in one run, so I think it may not still be a full replacement
for Makefiles.

However I have a question about ghc --make that is also relevant
without parallel ghc --make:

If I have two main modules, prog1.hs and prog2.hs, which have mutual
dependencies (for example, both import A from A.hs), is it safe to run
ghc --make prog1 in parallel with ghc --make prog2? IOW, is there
some kind of locking to prevent both from building module A at the
same time and interfering with each other?

Is there a good way (either in current releases or HEAD) to build
multiple binaries partially from the same sources in parallel?

 Sami


[1]

http://stackoverflow.com/questions/20938894/generating-correct-link-dependencies-for-ghc-and-makefile-style-builds









Re: --split-objs

2014-01-20 Thread Simon Marlow

On 19/12/2013 03:00, Mikhail Glushenkov wrote:

The problem in https://github.com/haskell/cabal/issues/1611 is with
that we have a module (say, A) from which we're only importing a
single value, and this module is a part of the cabal-install source
tree. It would be nice if the whole contents of A weren't linked with
the final executable. So I tried to compile cabal-install with
--split-objs, but apparently this doesn't work because in this case
the linker's input is A.o instead of A_split_0.o A_split_1.o ...
A_split_N.o. And apparently that's why the documentation says that
--split-objs doesn't make sense for executables.

Note that if cabal-install was split into an executable and a library,
then this would work.

So the question is why --split-objs only works for libraries and
whether this behaviour can be fixed.


There is nothing fundamental about -split-objs that prevents it from 
working with executables.  I expect that GHC doesn't take it into 
account during its link step when linking an executable with 
-split-objs, though.  That would be a reasonable enhancement - open a 
ticket?


Cheers,
Simon



Re: love for hpc?

2013-11-13 Thread Simon Marlow

On 07/11/13 05:03, Evan Laforge wrote:

Is anyone out there using HPC?  It seems like it was brought to a
more or less working, if not ideal, state and then abandoned.

Things I've noticed lately:

The GHC runtime just quits on the spot if there's already a tix file.
This bit me when I was parallelizing tests.  It's also completely
unsafe when run concurrently, mostly it just overwrites the file,
sometimes it quits.  Sure to cause headaches for someone trying to
parallelize tests.

You can't change the name of the output tix file, so I worked around
by hardlinking the binary to a bunch of new ones, and then doing 'hpc
sum' on the results.
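That hardlink workaround looks something like this (binary and tix names
are illustrative, and the hpc flag spellings are from memory): because the
runtime names the .tix file after the program name, each hardlink gets its
own output file, and parallel runs no longer fight over a single prog.tix.

```
ln prog prog-test1
ln prog prog-test2
./prog-test1 args1      # writes prog-test1.tix
./prog-test2 args2      # writes prog-test2.tix
hpc sum --output=total.tix prog-test1.tix prog-test2.tix
hpc report total.tix
```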

The hpc command is super slow.  It might have to do with it doing its
parsing with Prelude's 'read', and it certainly doesn't help the error
msgs.

And the whole thing is generally minimally documented.

I can already predict the answer will be yes, HPC could use some
love, roll up your sleeves and welcome!  It does look like it could
be improved a lot with just a bit of effort, but that would be a yak
too far for me, at the moment.  I'm presently just curious if anyone
else out there is using it, and if they feel like it could do with a
bit of polishing.


I think the core functionality of HPC is working pretty well; I gave it 
an overhaul when I combined the internal mechanisms used by HPC, 
Profiling and the GHCi debugger.  The surrounding tooling and 
documentation, as you say, could do with some love.


I think this would be a great way for someone to get involved with GHC 
development, because for the most part it's not deep technology, and 
there are lots of small improvements to make.  A good way to start would 
be to create some feature-request tickets describing some improvements 
that could be made.


Cheers,
Simon




Re: Annotations

2013-11-08 Thread Simon Marlow
Simon, Austin and I discussed this briefly yesterday. There's an 
existing ticket:


  http://ghc.haskell.org/trac/ghc/ticket/4268

I added a comment to the ticket to summarise our conclusion: we won't 
add a flag for the pragma, however we should add a warning when an ANN 
pragma is being ignored.


On 06/11/2013 10:55, Simon Peyton-Jones wrote:

I’ve just noticed that there is no GHC language extension for annotations

http://www.haskell.org/ghc/docs/latest/html/users_guide/extending-ghc.html#annotation-pragmas

That feels like an oversight. Yes, they are in a pragma, but you may get
an error message if you compile with a stage-1 compiler, for example.
Plus, the language extensions should truthfully report what extra stuff
you are using.

I’m inclined to add a language extension “Annotations”.

·Without it {-# ANN … #-} pragmas are ignored as comments

·With it, they are treated as annotations

Do you agree?

I don’t know whether this can (or even should) land in 7.8.1.  Do you
care either way?

Guidance welcome

Simon

/Microsoft Research Limited (company number 03369488) is registered in
England and Wales /

/Registered office is at 21 Station Road, Cambridge, CB1 2FB/



___
ghc-devs mailing list
ghc-d...@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs




Re: Desugaring do-notation to Applicative

2013-10-11 Thread Simon Marlow
Thanks for all the comments.  I've updated the wiki page, in particular 
to make it clear that Applicative do-notation would be an opt-in extension.


Cheers,
Simon

On 02/10/13 16:09, Dan Doel wrote:

Unfortunately, in some cases, function application is just worse. For
instance, when the result is a complex arithmetic expression:

 do x <- expr1; y <- expr2; z <- expr3; return $ x*y + y*z + z*x

In cases like this, you have pretty much no choice but to name
intermediate variables, because the alternative is incomprehensible. But
applicative notation:

 (\x y z -> x*y + y*z + z*x) <$> expr1 <*> expr2 <*> expr3

moves the variable bindings away from the expressions they're bound to,
and we require extra parentheses to delimit things, and possibly more.

Desugaring the above do into applicative is comparable to use of plain
let in scheme (monad do is let*, mdo was letrec). And sometimes, let is
nice, even if it has an equivalent lambda form.

And as Jake mentioned, syntax isn't the only reason for Applicative.
Otherwise it'd just be some alternate names for functions involving Monad.



On Wed, Oct 2, 2013 at 5:12 AM, p.k.f.holzensp...@utwente.nl
wrote:

I thought the whole point of Applicative (at least, reading Conor's
paper) was to restore some function-application-style to the whole
effects-thing, i.e. it was the very point *not* to resort to binds
or do-notation.


That being said, I’m all for something that will promote the use of
the name “pure” over “return”.


+1 for the Opt-In


Ph.


From: Glasgow-haskell-users
[mailto:glasgow-haskell-users-boun...@haskell.org] On Behalf Of
Iavor Diatchki




do x1 <- e1


-- The following part is `Applicative`

(x2,x3) <- do x2 <- e2 x1
              x3 <- e3
              pure (x2,x3)


f x1 x2 x3








Re: Desugaring do-notation to Applicative

2013-10-11 Thread Simon Marlow

On 02/10/13 17:01, Dag Odenhall wrote:

What about MonadComprehensions, by the way? The way I see it, it's an
even better fit for Applicative because the return is implicit.

It would happen automatically, because a Monad comprehension is 
represented using the same abstract syntax as a do-expression internally.


Cheers,
Simon






On Tue, Oct 1, 2013 at 2:39 PM, Simon Marlow marlo...@gmail.com
wrote:

Following a couple of discussions at ICFP I've put together a
proposal for desugaring do-notation to Applicative:

    http://ghc.haskell.org/trac/ghc/wiki/ApplicativeDo

I plan to implement this following the addition of Applicative as a
superclass of Monad, which is due to take place shortly after the
7.8 branch is cut.

Please discuss here, and I'll update the wiki page as necessary.

Cheers,
Simon






Desugaring do-notation to Applicative

2013-10-01 Thread Simon Marlow
Following a couple of discussions at ICFP I've put together a proposal 
for desugaring do-notation to Applicative:


  http://ghc.haskell.org/trac/ghc/wiki/ApplicativeDo

I plan to implement this following the addition of Applicative as a 
superclass of Monad, which is due to take place shortly after the 7.8 
branch is cut.


Please discuss here, and I'll update the wiki page as necessary.

Cheers,
Simon


Re: 7.8 Release Update

2013-09-09 Thread Simon Marlow

On 09/09/13 08:14, Edward Z. Yang wrote:

Excerpts from Kazu Yamamoto (山本和彦)'s message of Sun Sep 08 19:36:19 -0700 2013:


% make show VALUE=GhcLibWays
make -r --no-print-directory -f ghc.mk show
GhcLibWays=v p dyn



Yes, it looks like you are missing p_dyn from this list. I think
this is a bug in the build system.  When I look at ghc.mk
it only verifies that the p way is present, not p_dyn; and I don't
see any knobs which turn on p_dyn.

However, I must admit to being a little confused; didn't we abandon
dynamic by default and switch to only using dynamic for GHCi (in which
case the profiling libraries ought not to matter)?


I think Kazu is saying that when he builds something with profiling 
using cabal-install, it fails because cabal-install tries to build a 
dynamic version too.  We don't want dynamic/profiled libraries (there's 
no point, you can't load them into GHCi).  Perhaps this is something 
that needs fixing in cabal-install?


Cheers,
Simon




Re: 7.8 Release Update

2013-09-09 Thread Simon Marlow
Template Haskell *does* work with profiling, you just have to compile the
code without profiling first (Cabal knows how to do this and does it
automatically).

The big obstacles to loading profiled code into ghci are (a) lack of
support in the byte code compiler and interpreter and (b) ghci itself would
need to be a profiled executable.
On 9 Sep 2013 22:21, Edward Z. Yang ezy...@mit.edu wrote:

 Hello Mikhail,

 It is a known issue that Template Haskell does not work with profiling
 (because
 GHCi and profiling do not work together, and TH uses GHCi's linker). [1]
 Actually,
 with the new linker patches that are landing soon we are not too far off
 from
 having this work.

 Edward

 [1] http://ghc.haskell.org/trac/ghc/ticket/4837

 Excerpts from Mikhail Glushenkov's message of Mon Sep 09 14:15:54 -0700
 2013:
  Hi,
 
  On Mon, Sep 9, 2013 at 10:11 PM, Simon Marlow marlo...@gmail.com
 wrote:
  
   I think Kazu is saying that when he builds something with profiling
 using
   cabal-install, it fails because cabal-install tries to build a dynamic
   version too.  We don't want dynamic/profiled libraries (there's no
 point,
   you can't load them into GHCi).  Perhaps this is something that needs
 fixing
   in cabal-install?
 
  Aren't they needed when compiling libraries that are using Template
  Haskell for profiling? The issue sounds like it could be TH-related.
 



Re: 32-bit libs required for 64-bit install

2013-08-25 Thread Simon Marlow

On 25/08/13 13:48, Yitzchak Gale wrote:

I had trouble installing the generic 64-bit Linux tarball for 7.6.3.
With some help from Ian, who pointed out that the problem was
related to ld-linux.so, I finally figured out the root of the problem:
the installation requires *both* the 64-bit and 32-bit
versions of libc6 plus deps to be available.

Once the installation was complete, I could then remove the
32-bit libs and ghc still seems to work. It appears that the
32-bit libs are only required for some of the auxiliary executables
that come with the tarball, such as ghc-pwd.

Is there any reason in principle that we only allow 64-bit GHC
to be installed on multiarch Linux? That seems like a rather
arbitrary restriction.


It's certainly a bug.  Do you know which executable is 32-bit?

Cheers,
Simon




Re: throwTo semantics

2013-08-13 Thread Simon Marlow

On 28/07/13 14:36, Roman Cheplyaka wrote:

The documentation for throwTo says:

   throwTo does not return until the exception has been raised in the
   target thread. The calling thread can thus be certain that the target
   thread has received the exception. This is a useful property to know
   when dealing with race conditions: eg. if there are two threads that
   can kill each other, it is guaranteed that only one of the threads
   will get to kill the other.

I don't see how the last sentence follows. I understood it so that the
implication

   throwTo has returned => exception has been delivered

is true, but not the reverse. If my understanding is correct, then both
exceptions could be delivered without any of throwTos returning.


Perhaps this needs to be clarified.  The extra information is: if a 
thread's next operation is a throwTo, then it may either receive an 
exception from another thread *or* perform the throwTo, but not both. 
Informally, there's no state of the system in which the exception is in 
flight: it has either been delivered or not.
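A small self-contained demonstration of the documented synchronous
property (the program shape is my own, not from the thread): once throwTo
has returned, the exception has already been raised in the target, which
its handler can observe.

```haskell
import Control.Concurrent
import Control.Exception

main :: IO ()
main = do
  ready <- newEmptyMVar
  done  <- newEmptyMVar
  -- The target records whether it received ThreadKilled.
  target <- forkIO $
    (putMVar ready () >> threadDelay 10000000 >> putMVar done False)
      `catch` \e -> putMVar done (e == ThreadKilled)
  takeMVar ready            -- make sure the target is running
  throwTo target ThreadKilled
  -- throwTo has returned, so the exception is no longer "in flight":
  -- it has been raised in the target thread.
  print =<< takeMVar done   -- prints True
```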


Cheers,
Simon




Re: Log exceptions to eventlog

2013-08-13 Thread Simon Marlow

On 12/08/13 10:20, Roman Cheplyaka wrote:

Hi,

Is there any way to log asynchronous exceptions to the eventlog,
including information on which thread sent the exception and to which
thread it was sent?


You can insert events yourself using Debug.Trace.traceEventIO.  Adding 
some built-in events for throwTo would be a good idea, we don't 
currently have that (you could try adding it yourself if you like, it's 
not too hard).
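A hedged sketch of doing this by hand today with traceEventIO (the wrapper
name is my own invention, and the events only land in an eventlog when the
program is linked with -eventlog and run with +RTS -l):

```haskell
import Control.Concurrent
import Control.Exception
import Debug.Trace (traceEventIO)

-- Hypothetical wrapper: record sender, receiver and exception in the
-- eventlog before delegating to the real throwTo.
tracedThrowTo :: Exception e => ThreadId -> e -> IO ()
tracedThrowTo target e = do
  me <- myThreadId
  traceEventIO ("throwTo: " ++ show me ++ " -> " ++ show target
                ++ ": " ++ show e)
  throwTo target e

main :: IO ()
main = do
  t <- forkIO (threadDelay 5000000)   -- a thread to interrupt
  tracedThrowTo t ThreadKilled
  putStrLn "sent"
```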


Cheers,
Simon




Or are there any other ways to get this information?

Roman









Re: PSA: GHC can now be built with Clang

2013-06-28 Thread Simon Marlow

On 26/06/13 04:13, Austin Seipp wrote:

Thanks Manuel!

I have an update on this work (I am also CC'ing glasgow-haskell-users,
as I forgot last time.) The TL;DR is this:

  * HEAD will correctly work with Clang 3.4svn on both Linux, and OS X.
  * I have a small, 6-line patch to Clang to fix the build failure in
primitive (Clang was too eager to stringify something.) Once this fix
is integrated into Clang (hopefully very soon,) it will be possible to
build GHC entirely including all stage2 libraries without any patches.
The patch is here: http://llvm.org/bugs/show_bug.cgi?id=16371 - I am
hoping this will also make it into XCode 5.
  * I still have to eliminate some warnings throughout the build, which
will require fiddling and a bit of refactoring. The testsuite still
probably won't run cleanly on Linux, at least, until this is done I'm
afraid (but then again I haven't tried...)

As for the infamous ticket #7602, the large performance regression on
Mac OS X, I have some numbers finally between my fast-TLS and slow-TLS
approach.

./gc_bench.slow-tls 19 50 5 22 +RTS -H180m -N7 -RTS  395.57s user
173.18s system 138% cpu 6:50.71 total

vs

./gc_bench.fast-tls 19 50 5 22 +RTS -H180m -N7 -RTS  322.98s user
132.37s system 132% cpu 5:44.65 total

Now, this probably looks totally awful from a scalability POV. And,
well, yeah, it is. But I am almost 100% certain there is something
extremely screwy going on with my machine here. I base this on the
fact that during gc_bench, kernel_task was eating up about ~600% of my
CPU consistently, giving user threads no time to run. I've noticed
this with other applications that were totally unrelated too (close
tweetbot - 800% CPU usage,) so I guess it's time to learn DTrace. Or
turn it on and off again or something. Ugh.

Anyway, if you look at the user times, you get a nice 30% speedup
which is about what we expect!


30% better than before is good, but we need some absolute figures. Can 
you validate that against the performance on Linux, or against the 
performance you get when the RTS is compiled with gcc?  If it's hard to 
get a direct comparison on equivalent hardware, you could compare the 
slowdown with -threaded on Linux and OS X.


Cheers,
Simon



On a related note, due to the source code structure at the moment,
Linux/Clang hilariously suffers from this same bug. That's because
while Clang on Linux supports extremely fast TLS via __thread (like
GCC,) it falls back to pthread_getspecific/setspecific. I haven't
fixed this yet. It'll happen after I fix #7602 and get it merged in.
On my Linux machine, gc_bench also sees a consistent 30% speedup
between these two approaches, so I think this is a relatively accurate
measurement. Well, as accurate as I can be without running nofib just
yet. So if you're just dying to have GHC HEAD built with Clang HEAD on
Linux because you've got reasons, you should probably hold on.

I also may have a similar, better approach to fixing #7602 that is not
entirely as evil and sneaky as crashing the WebKit party. I'll follow
up on this soon when I have more info in a separate thread to confer
with Simon. With nofib results. I hope.

But anyway, 7.8 will be shaping up quite nicely - in particular in the
Mac OS X department, I hope. Please feel free to pester me with
questions or if you attempt something and it doesn't work.

On Tue, Jun 25, 2013 at 7:34 PM, Manuel M T Chakravarty
c...@cse.unsw.edu.au wrote:

Austin,

Thank you very much for taking care of all these clang issues — that is very 
helpful!

Cheers,
Manuel

Austin Seipp ase...@pobox.com:

Hi all,

As of commit 5dc74f it should now be possible to build a working
stage1 and stage2 compiler with (an extremely recent) Clang. With some
caveats.

You can just do:

$ CC=/path/to/clang ./configure --with-gcc=/path/to/clang
$ make

I have done this work on Linux. I don't expect much difficulty on Mac
OS X, but it needs testing. Ditto with Windows, although Clang/mingw
is considered experimental.

The current caveats are:

* The testsuite will probably fail everywhere, because of some
warnings that happen during the linking phase when you invoke the
built compiler. So the testsuite runner will probably be unhappy.
Clang is very noisy about unused options, unlike GCC. That needs to be
fixed somewhere in DriverPipeline I'd guess, but with some
refactoring.
* Some of the stage2 libraries don't build due to a Clang bug. These
are vector/primitive/dph so far.
* There is no buildbot or anything to cover it.

You will need a very recent Clang. Due to this bug (preventing
primitive etc from building,) you'll preferably want to use an SVN
checkout from about 6 hours ago at latest:

http://llvm.org/bugs/show_bug.cgi?id=16363

Hilariously, this bug was tripped on primitive's Data.Primitive.Types
module due to some CPP weirdness. But even with a proper bugfix and no
segfault, it still fails to correctly parse this same module with the
same CPP declarations. I'm fairly certain this is 

Re: 'interrupted' when exiting

2013-03-04 Thread Simon Marlow

On 04/03/13 06:02, Akio Takano wrote:

Hi,

If I compile and run the attached program in the following way, it
crashes on exit:

$ ghc -threaded thr_interrupted.hs
$ ./thr_interrupted
thr_interrupted: Main_dtl: interrupted

This is particularly bad when profiling, because the program
terminates before writing to the .prof file.

Is this a bug in GHC, or am I supposed to terminate the thread before
exiting? If it's a GHC issue, how should it be fixed? It seems that
the error was raised in the C stub code GHC generated for the foreign
import wrapper function.


So this behaviour is by design, that is, I intended it to work this 
way.  That's not to say it's necessarily the best design; maybe a better 
alternative exists.


The issue is this: when the main thread exits, the RTS starts shutting 
down, and it kills all the other threads.  If any of these threads were 
the result of an external in-call from C, then that call returns, with 
the status Interrupted.  Now, if this was a foreign export or a 
foreign import wrapper, then there is no provision for returning an 
error code to the caller, so the foreign export must either


  a) kill the current OS thread
  b) kill the whole process
  c) sleep indefinitely

Right now we do (b) I think.  Perhaps (a) would be better; at least that 
might fix your problem.
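On the caller's side, the workaround implied by the question (terminate,
or wait for, the worker thread before main exits) is the usual MVar
handshake; a minimal sketch with names of my own choosing:

```haskell
import Control.Concurrent

main :: IO ()
main = do
  done <- newEmptyMVar
  _ <- forkIO $ do
    putStrLn "worker finished"   -- stand-in for the real work
    putMVar done ()
  -- main returns, and the RTS starts shutting down, only after the
  -- worker signals completion, so no thread is killed mid-call.
  takeMVar done
```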


Cheers,
Simon




Re: base package -- goals

2013-02-27 Thread Simon Marlow

On 25/02/13 18:05, Ian Lynagh wrote:

On Mon, Feb 25, 2013 at 06:38:46PM +0100, Herbert Valerio Riedel wrote:

Ian Lynagh i...@well-typed.com writes:

[...]


If we did that then every package would depend on haskell2010, which
is fine until haskell2013 comes along and they all need to be changed
(or miss out on any improvements that were made).


...wouldn't there also be the danger of type(class)-incompatible
(e.g. the superclass breakages for startes) changes between say
haskell2010 and haskell2013, that would cause problems when trying to
mix libraries depending on different haskell20xx library versions?


I think that actually, for the Num/Show change, the haskell98/haskell2010
packages just incorrectly re-export the new class.

Personally, I don't think the language report should be specifying the
content of libraries at all,


It's not that straightforward, because the language report refers to 
various library functions, types and classes.  For example, integer 
literals give rise to a constraint on Num, so we have to say what Num 
is.  Guards depend on Bool, the translation of list comprehensions 
refers to map, and so on.


It could be whittled down certainly (we actually removed a few libraries 
in Haskell 2010), but there's still a core that is tied to the language 
definition.


Cheers,
Simon




Re: base package -- goals

2013-02-27 Thread Simon Marlow

On 25/02/13 19:25, Johan Tibell wrote:

Hi all,

Let me add the goals I had in mind last time I considered trying to
split base.

  1. I'd like to have text Handles use the Text type and binary Handles
use the ByteString type. Right now we have this somewhat awkward setup
where the I/O APIs are spread out and bundled with pure types. Splitting
base would let us fix this and write a better I/O layer.

  2. The I/O manager currently has a copy of IntMap inside its
implementation because base cannot use containers. Splitting base would
let us get rid of this code duplication.

I'm less interested in having super fine-grained dependencies in my
libraries. More packages usually means more busy-work managing
dependencies. Taken to its extreme you could imagine having base-maybe,
base-bool, and whatnot. I don't think this is an improvement. Splitting
base into perhaps 3-5 packages (e.g. GHC.*, IO, pure types) should let
us get a bunch of benefits without too many downsides.


+1 to all that.

I'd like to add one other thing that we've been wanting to clean up: the 
unix/Win32 packages should sit low down in the dependency hierarchy, so 
that the IO library can depend on them.  Right now we have bits and 
pieces of unix/Win32 in the base package, some of which have to be 
re-exported via internal modules in base to unix/Win32 proper 
(System.Posix.Internals).


I seem to recall the situation with signal handlers being a bit of a 
mess: the code to handle signals is in base, but the API is in unix. 
Glancing at the code in GHC.Conc.Signals it looks like I even had to use 
Dynamic to get around the dependency problems (shhh!).


Cleaning up things like this is a win.  But I'm with Johan in that 
having fine-grained packages imposes a cost on the clients (where the 
clients in this case includes everyone), so there should be significant 
tangible benefits (e.g. more stability).


Cheers,
Simon




Re: base package

2013-02-21 Thread Simon Marlow

On 20/02/13 15:40, Joachim Breitner wrote:


+-- | This exception is thrown by the 'fail' method of the 'Monad' 'IO' 
instance.
+--
+--   The Exception instance of IOException will also catch this, converting the
+--   IOFail to a UserError, for compatibility and consistency with the Haskell
+--   report
+data IOFail = IOFail String
+
+instance Typeable IOFail -- deriving does not work without package
+instance Show IOFail -- name changes to GHC
+instance Exception IOFail
+


I like the idea of making IOFail a separate exception type.


-instance Exception IOException
+instance Exception IOException where
+toException = SomeException
+fromException e = case cast e of
+Just (IOFail s) -> Just (userError s)
+Nothing -> cast e


I think that should be

 +fromException (SomeException e) = case cast e of
 +Just (IOFail s) -> Just (userError s)
 +Nothing -> cast e

Otherwise it will typecheck but not work (hurrah for dynamic typing).

The trick is indeed neat, but only if it is possible to make IOFail 
completely invisible.  If it isn't possible to make it completely 
invisible, then I would prefer IOFail to be a first-class exception type 
without the special trick to coerce it to IOException, and accept the 
API breakage.  I don't think it's a good idea to have special magic in 
the exception hierarchy - other people would start doing it too, then 
we'd have a mess.
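For reference, the non-magical first-class alternative is just the
ordinary Exception-instance pattern, with the default toException and
fromException methods; this is a sketch of that option, not the patch
under discussion:

```haskell
{-# LANGUAGE DeriveDataTypeable #-}
import Control.Exception
import Data.Typeable

-- A first-class exception type: no special toException/fromException,
-- so it does not masquerade as IOException.
data IOFail = IOFail String
  deriving (Show, Typeable)

instance Exception IOFail

main :: IO ()
main = throwIO (IOFail "boom")
         `catch` \(IOFail msg) -> putStrLn ("caught: " ++ msg)
```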


Cheers,
Simon




Re: base package (Was: GHC 7.8 release?)

2013-02-21 Thread Simon Marlow

On 20/02/13 17:12, Ian Lynagh wrote:

On Fri, Feb 15, 2013 at 02:45:19PM +, Simon Marlow wrote:


Remember that fingerprinting is not hashing.  For fingerprinting we
need to have a realistic expectation of no collisions.  I don't
think FNV is suitable.

I'm sure it would be possible to replace the C md5 code with some
Haskell.  Performance *is* important here though - Typeable is in
the inner loop of certain generic programming libraries, like SYB.


We currently just compare
 hash(str)
for equality, right? Could we instead compare
 (hash str, str)
? That would be even more correct, even if a bad/cheap hash function is
used, and would only be slower for the case where the types match
(unless you're unlucky and get a hash collision).



In fact, we may be able to arrange it so that in the equal case the
strings are normally exactly the same string, so we can do a cheap
pointer equality test (like ByteString already does) to make the equal
case fast too (falling back to actually checking the strings are equal,
if they aren't the same string).
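A minimal sketch of this comparison scheme, with made-up names and a deliberately cheap hash standing in for whatever GHC would actually use (real code, like ByteString, would also try a pointer-equality test before comparing contents):

```haskell
-- A key pairs a precomputed hash with the string it was computed from.
data Key = Key { keyHash :: !Int, keyStr :: String }

-- A cheap, collision-prone hash (djb2-style); collisions are tolerable
-- because equality falls back to the full string comparison.
cheapHash :: String -> Int
cheapHash = foldl (\h c -> h * 33 + fromEnum c) 5381

mkKey :: String -> Key
mkKey s = Key (cheapHash s) s

instance Eq Key where
  Key h1 s1 == Key h2 s2 =
    -- A hash mismatch exits immediately; the slow string comparison
    -- only runs when the values match or the hashes collide.
    h1 == h2 && s1 == s2
```

Unequal keys almost always fail on the hash alone; only the (rare) equal or colliding case pays for the full comparison.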


So it's not a single string: a TypeRep consists of a TyCon applied to 
some arguments, which are themselves TypeReps, etc.


You could do pointer equality, and maybe that would work for the common 
cases.  But I don't see why we have to do that when fingerprinting works 
well and we already have it.  Why add a potential performance pitfall 
when we don't have to?


One other thing: it's useful to be able to use the fingerprint as an 
identifier for the contents, e.g. when sending Typeable values across 
the network.  If you can't do this with the fingerprint, then you need 
another unique Id, which is the situation we used to have before 
fingerprints.


Cheers,
Simon




Re: base package (Was: GHC 7.8 release?)

2013-02-15 Thread Simon Marlow

On 15/02/13 09:36, Simon Peyton-Jones wrote:

|  Doesn't the FFI pull in some part of the I/O layer, though?  In
|  particular threaded programs are going to end up using forkOS?
|
| Another good reason to try to have a pure ground library.

Remember that we have UNSAFE ffi calls and SAFE ones.

The SAFE ones may block, cause GC etc.  They involve a lot of jiggery pokery 
and I would not be surprised if that affected the I/O manager.

But UNSAFE ones are, by design, no more than fat machine instructions that 
are implemented by making an out-of-line call.  They should not block.  They should not 
cause GC.  Nothing.  Think of 'sin' and 'cos' for example.

Fingerprinting is a classic example, I would have thought.

So my guess is that it should not be hard to allow UNSAFE ffi calls in the core 
(non-IO-ish) bits, leaving SAFE calls for higher up the stack.


Actually as far as the Haskell-level API goes, there's no difference 
between safe and unsafe FFI calls, the difference is all in the codegen. 
 I don't think safe calls cause any more difficulties for splitting up 
the base.
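Concretely, the two flavours are distinguished only by a keyword at the Haskell source level; everything else happens in the generated code. A small illustrative sketch:

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}

-- The two declarations below differ only in the safe/unsafe keyword;
-- the generated code differs a lot (a safe call saves enough state
-- that the RTS can GC or run other Haskell threads during the call).
foreign import ccall unsafe "math.h sin"
  c_sin :: Double -> Double        -- cheap; must not block or call back

foreign import ccall safe "math.h sin"
  c_sin_safe :: Double -> Double   -- may block or call back into Haskell

main :: IO ()
main = print (c_sin 0.0, c_sin_safe 0.0)   -- prints (0.0,0.0)
```

Importing `sin` as safe is pointless in practice (it never blocks), but it type-checks and runs identically, which is the point being made above.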


Cheers,
Simon







Re: base package (Was: GHC 7.8 release?)

2013-02-15 Thread Simon Marlow

On 15/02/13 08:36, Joachim Breitner wrote:

Hi,

Am Donnerstag, den 14.02.2013, 21:41 -0500 schrieb brandon s allbery
kf8nh:

On Thursday, February 14, 2013 at 8:14 PM, Johan Tibell wrote:

On Thu, Feb 14, 2013 at 2:53 PM, Joachim Breitner
m...@joachim-breitner.de wrote:
I don't think having FFI far down the stack is a problem. There are
lots of pure data types we'd like in the pure data layer (e.g.
bytestring) that use FFI. As long as the I/O layer itself
(System.IO, the I/O manager, etc) doesn't get pulled in there's no
real problem in depending on the FFI.


I think it would be nice, also to other Haskell implementations that
might not have FFI, to separate the really basic stuff from
pure-but-impurely-implemented stuff. At least as long as it does not
cause trouble.

GHC.Fingerprint does not need to be crippled when it is going to use
pure hashing; I quickly added some simple fingerprinting found via
Wikipedia that was easier than MD5.
https://github.com/nomeata/packages-base/commit/b7f80066a03fd296950e0cafa2278d43a86f82fc
The choice is of course not final, maybe something with more bits is
desirable.


Remember that fingerprinting is not hashing.  For fingerprinting we need 
to have a realistic expectation of no collisions.  I don't think FNV is 
suitable.


I'm sure it would be possible to replace the C md5 code with some 
Haskell.  Performance *is* important here though - Typeable is in the 
inner loop of certain generic programming libraries, like SYB.
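For contrast, here is a standard 64-bit FNV-1a over a String (whether this is the exact variant in the linked commit is an assumption), together with the collision argument in numbers:

```haskell
import Data.Bits (xor)
import Data.Char (ord)
import Data.Word (Word64)

-- 64-bit FNV-1a with the standard offset basis and prime.
fnv1a :: String -> Word64
fnv1a = foldl step 0xcbf29ce484222325
  where step h c = (h `xor` fromIntegral (ord c)) * 0x100000001b3

-- By the birthday bound, a 64-bit digest makes some collision likely
-- once the number of distinct inputs approaches 2^32. That is fine
-- for a hash table with a fallback comparison, but not for a
-- fingerprint that is trusted with no fallback; MD5's 128 bits push
-- the same threshold out to roughly 2^64.
```

This is the hashing-vs-fingerprinting distinction in miniature: FNV is fast and adequate when collisions merely cost time, not when they cost correctness.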


Cheers,
Simon




Re: base package (Was: GHC 7.8 release?)

2013-02-15 Thread Simon Marlow

On 15/02/13 12:22, Joachim Breitner wrote:

Hi,

more progress: On top of base-pure, I created base-io involving GHC/IO
and everything required to build it (which pulled in ST, some of Foreign
and unfortunately some stuff related to Handles and Devices, because it
is mentioned in IOException). This is the list of modules:

 Foreign.C.Types,
 Foreign.ForeignPtr,
 Foreign.ForeignPtr.Imp,
 Foreign.ForeignPtr.Safe,
 Foreign.ForeignPtr.Unsafe,
 Foreign.Ptr,
 Foreign.Storable,
 GHC.ForeignPtr,
 GHC.IO.BufferedIO,
 GHC.IO.Buffer,
 GHC.IO.Device,
 GHC.IO.Encoding.Types,
 GHC.IO.Exception,
 GHC.IO.Handle.Types,
 GHC.IO,
 GHC.IORef,
 GHC.MVar,
 GHC.Ptr,
 GHC.Stable,
 GHC.ST,
 GHC.Storable,
 GHC.STRef


You have a random collection of modules here :)

I think you want to have the IO *monad* (GHC.IO) live in a lower layer, 
separate from the IO *library* (GHC.IO.Device and so on).  Every Haskell 
implementation will need the IO monad, but they might want to replace 
the IO library with something else.


Things like GHC.IORef, GHC.MVar can all live in a low-down layer because 
they're just wrappers over the primops.


Cheers,
Simon




Re: GHC 7.8 release?

2013-02-13 Thread Simon Marlow

On 13/02/13 07:06, wren ng thornton wrote:

On 2/12/13 3:37 AM, Simon Marlow wrote:

One reason for the major version bumps is that base is a big
conglomeration of modules, ranging from those that hardly ever change
(Prelude) to those that change frequently (GHC.*). For example, the new
IO manager that is about to get merged in will force a major bump of
base, because it changes GHC.Event.  The unicode support in the IO
library was similar: although it only added to the external APIs that
most people use, it also changed stuff inside GHC.* that we expose for a
few clients.

The solution to this would be to split up base further, but of course
doing that is itself a major upheaval.  However, having done that, it
might be more feasible to have non-API-breaking releases.


While it will lead to much wailing and gnashing of teeth in the short
term, if it's feasible to break GHC.* off into its own package, then I
think we should. The vast majority of base seems quite stable or else is
rolling along at a reasonable pace. And yet, every time a new GHC comes
out, there's a new wave of fiddling the knobs on cabal files because
nothing really changed. On the other hand, GHC.* moves rather quickly.
Nevertheless, GHC.* is nice to have around, so we don't want to just
hide that churning. The impedance mismatch here suggests that they
really should be separate packages. I wonder whether GHC.* should be
moved in with ghc-prim, or whether they should remain separate...

But again, this depends on how feasible it would be to actually split
the packages apart. Is it feasible?


So I think we'd need to add another package, call it ghc-base perhaps. 
The reason is that ghc-prim sits below the integer package 
(integer-simple or integer-gmp).


It's feasible to split base, but to a first approximation what you end 
up with is base renamed to ghc-base, and then the new base contains just 
stub modules that re-export stuff from ghc-base.  In fact, maybe you 
want to do it exactly like this for simplicity - all the code goes in 
ghc-base.  There would be some impact on compilation time, as we'd have 
twice as many interfaces to read.


I believe Ian has done some experiments with splitting base further, so 
he might have more to add here.


Cheers,
Simon




Re: GHC 7.8 release?

2013-02-12 Thread Simon Marlow

On 11/02/13 23:03, Johan Tibell wrote:

Hi,

I think reducing breakages is not necessarily, and maybe not even
primarily, an issue of releases. It's more about realizing that the cost
of breaking things (e.g. changing library APIs) has gone up as the
Haskell community and ecosystem has grown. We need to be conscious of
that and carefully consider if making a breaking change (e.g. changing a
function instead of adding a new function) is really necessary. Many
platforms (e.g. Java and Python) rarely, if ever, make breaking changes.
If you look at  compiler projects (e.g. LLVM and GCC) you never see
intentional breakages, even in major releases*. Here's a question I
think we should be asking ourselves: why is the major version of base
bumped with every release? Is it really necessary to make breaking
changes this often?


One reason for the major version bumps is that base is a big 
conglomeration of modules, ranging from those that hardly ever change 
(Prelude) to those that change frequently (GHC.*). For example, the new 
IO manager that is about to get merged in will force a major bump of 
base, because it changes GHC.Event.  The unicode support in the IO 
library was similar: although it only added to the external APIs that 
most people use, it also changed stuff inside GHC.* that we expose for a 
few clients.


The solution to this would be to split up base further, but of course 
doing that is itself a major upheaval.  However, having done that, it 
might be more feasible to have non-API-breaking releases.


Of course we do also make well-intentioned changes to libraries, via the 
library proposal process, and some of these break APIs.  But it wouldn't 
do any harm to batch these up and defer them until the next API-changing 
release.


It would be great to have a list of the changes that had gone into base 
in the last few major releases, any volunteers?


Cheers,
Simon





Re: GHC 7.8 release?

2013-02-10 Thread Simon Marlow

On 10/02/13 15:36, Simon Peyton-Jones wrote:

We seem to be circling ever closer to consensus here! Yay!

Indeed!  Good :-)

However, I’m not getting the bit about API changing vs non-API changing.

Firstly I don’t know which APIs are intended.  The GHC API is
essentially GHC itself, so it changes daily.  Maybe you mean the base
package?  Or what?

I suspect you mean that a “non-API-changing” release absolutely
guarantees to compile any package that compiled with the previous
version.  If so, that is a very strong constraint indeed. We do observe
it for patch releases for GHC (e g 7.6.2 should compile anything that
7.6.1 compiles).  But I think it would be difficult to guarantee for
anything beyond a patch release.  Every single commit (and the commit
rate is many/day) would have to be evaluated against this criterion.
And if it failed the criterion, it would have to go on a API-breaking
HEAD. In effect we’d have two HEADs.  I can’t see us sustaining this.
And I don’t yet really see why it’s necessary.  If you don’t want an
API-breaking change, stick with the patch releases.

So, we have a channel for non-API-breaking changes already: the patch
releases.  So that means we already have all three channels!


Mark is asking for major GHC releases every year at the most, preferably 
less frequently.  That means major GHC releases in the sense that we do 
them now, where libraries change, and a wave of package updates are 
required to get everything working.


Johan, Manuel and Carter are saying that they want releases that add 
features but don't break code, i.e. a non-API-breaking release, as a way 
to get the new bits into the hands of the punters sooner.  This is 
something that we don't do right now, and it would entail a change to 
our workflow and release schedule.


It doesn't mean no API changes at all - we would have to allow APIs to 
be extended, because many feature additions come with new primops, or 
new supporting code in the ghc-prim or base packages.  The package 
version policy states precisely what it means to extend an API 
(http://www.haskell.org/haskellwiki/Package_versioning_policy) and most 
third-party packages will still work so long as we only bump the minor 
versions of the packages that come with GHC.


The GHC package itself would have to be exempt, because it contains 
every module in GHC, and hence would be impossible to keep stable if we 
are modifying the compiler to add new features.


Of course it's not practical to maintain an extra branch of GHC for 
non-API-breaking development - two branches is already plenty.  So there 
would need to be an API-freeze for a while between the major release and 
the non-API-breaking release, during which time people developing API 
changes would need to work on branches.


Is it workable?  I'm not sure, but I think it's worth a try.  I wouldn't 
want to see this replace the patchlevel bugfix releases that we already 
do, and as Ian points out, there isn't a lot of room in the release 
schedule for more releases, unless we stretch out the timescales, doing 
major releases less frequently.


Cheers,
Simon



· Haskell Platform

· Patch-level releases

· New releases


if that’s so, all we need is better signposting.   And I’m all for that!

Have I got this right?


Simon

*From:*Mark Lentczner [mailto:mark.lentcz...@gmail.com]
*Sent:* 09 February 2013 17:48
*To:* Simon Marlow; Manuel M T Chakravarty; Johan Tibell; Simon
Peyton-Jones; Mark Lentczner; andreas.voel...@gmail.com; Carter
Schonwald; kosti...@gmail.com; Edsko de Vries; ghc-d...@haskell.org;
glasgow-haskell-users
*Subject:* Re: GHC 7.8 release?

We seem to be circling ever closer to consensus here! Yay!

I think the distinction of non-API breaking and API breaking release is
very important. Refining SPJ's trifecta:

*Haskell Platform* comes out twice a year. It is based on very
stable version of GHC, and intention is that people can just assume
things on Hackage work with it. These are named for the year and
sequence of the release: 2013.2, 2013.2.1, 2013.4,...

*Non-API breaking releases* can come out as often as desired.
However, the version that is current as of mid-Feb. and mid-Aug.
will be the ones considered for HP inclusion. By non-API breaking we
mean the whole API surface including all the libraries bundled with
GHC, as well as the operation of ghc, cabal, ghc-pkg, etc. Additions
of features that must be explicitly enabled are okay. Additions of
new APIs into existing modules are discouraged: Much code often
imports base modules wholesale, and name clashes could easily
result. These should never bump the major revision number: 7.4.1,
7.4.2...

*API breaking releases* happen by being released into a separate
channel when ready for library owners to look at them. This channel
should probably go through several stages: Ready for core package
owners to work with, then HP package owners, then all

Re: GHC 7.8 release?

2013-02-09 Thread Simon Marlow
I agree too - I think it would be great to have non-API-breaking 
releases with new features.  So let's think about how that could work.


Some features add APIs, e.g. SIMD adds new primops.  So we have to 
define non-API-breaking as a minor version bump in the PVP sense; that 
is, you can add to an API but not change it.


As a straw man, let's suppose we want to do annual API releases in 
September, with intermediate non-API releases in February.  Both would 
be classed as major, and bump the GHC major version, but the Feb 
releases would only be allowed to bump minor versions of packages. 
(except perhaps the version of the GHC package, which is impossible to 
keep stable if we change the compiler).


So how to manage the repos.  We could have three branches, but that 
doesn't seem practical.  Probably the best way forward is to develop new 
features on separate branches and merge them into master at the 
appropriate time - i.e. API-breaking feature branches could only be 
merged in after the Feb release.


Thoughts?

Cheers,
Simon

On 09/02/13 02:04, Manuel M T Chakravarty wrote:

I completely agree with Johan. The problem is to change core APIs too
fast. Adding, say, SIMD instructions or having a new type extension
(that needs to be explicitly activated with a -X option) shouldn't break
packages.

I'm all for restricting major API changes to once a year, but why can't
we have multiple updates to the code generator per year or generally
releases that don't affect a large number of packages on Hackage?

Manuel

Johan Tibell johan.tib...@gmail.com mailto:johan.tib...@gmail.com:

On Fri, Feb 8, 2013 at 6:28 AM, Simon Marlow marlo...@gmail.com
mailto:marlo...@gmail.com wrote:

For a while we've been doing one major release per year, and 1-2
minor releases.  We have a big sign at the top of the download
page directing people to the platform.  We arrived here after
various discussions in the past - there were always a group of
people that wanted stability, and a roughly equally vocal group of
people who wanted the latest bits.  So we settled on one
API-breaking change per year as a compromise.

Since then, the number of packages has ballooned, and there's a
new factor in the equation: the cost to the ecosystem of an
API-breaking release of GHC.  All that updating of packages
collectively costs the community a lot of time, for little
benefit.  Lots of package updates contributes to Cabal Hell.  The
package updates need to happen before the platform picks up the
GHC release, so that when it goes into the platform, the packages
are ready.

So I think, if anything, there's pressure to have fewer major
releases of GHC.  However, we're doing the opposite: 7.0 to 7.2
was 10 months, 7.2 to 7.4 was 6 months, 7.4 to 7.6 was 7 months.
We're getting too efficient at making releases!


I think we want to decouple GHC major releases (as in, we did lots
of work) from API breaking releases. For example, GCC has lots of
major (or big) releases, but rarely, if ever, break programs.

I'd be delighted to see a release once in a while that made my
programs faster/smaller/less buggy without breaking any of them.

-- Johan







Re: GHC 7.8 release?

2013-02-08 Thread Simon Marlow
For a while we've been doing one major release per year, and 1-2 minor 
releases.  We have a big sign at the top of the download page directing 
people to the platform.  We arrived here after various discussions in 
the past - there were always a group of people that wanted stability, 
and a roughly equally vocal group of people who wanted the latest bits.  
So we settled on one API-breaking change per year as a compromise.


Since then, the number of packages has ballooned, and there's a new 
factor in the equation: the cost to the ecosystem of an API-breaking 
release of GHC.  All that updating of packages collectively costs the 
community a lot of time, for little benefit.  Lots of package updates 
contributes to Cabal Hell.  The package updates need to happen before 
the platform picks up the GHC release, so that when it goes into the 
platform, the packages are ready.


So I think, if anything, there's pressure to have fewer major releases 
of GHC.  However, we're doing the opposite: 7.0 to 7.2 was 10 months, 
7.2 to 7.4 was 6 months, 7.4 to 7.6 was 7 months. We're getting too 
efficient at making releases!


My feeling is that this pace is too fast.  You might argue that with 
better tools and infrastructure the community wouldn't have so much work 
to do for each release, and I wholeheartedly agree. Perhaps if we stop 
releasing GHC so frequently they'll have time to work on it :)  
Releasing early and often is great, but at the moment it's having 
negative effects on the ecosystem (arguably due to deficiencies in the 
infrastructure).


Does this strike a chord with anyone, or have I got the wrong impression 
and everyone is happy with the pace?


Cheers,
Simon

On 07/02/13 18:15, Simon Peyton-Jones wrote:


It’s fairly simple in my mind. There are two “channels” (if I 
understand Mark’s terminology right):


· Haskell Platform:

o A stable development environment, lots of libraries known to work

o Newcomers, and people who value stability, should use the Haskell 
Platform


o HP comes with a particular version of GHC, probably not the hottest 
new one, but that doesn’t matter.  It works.


· GHC home page downloads:

o More features but not so stable

o Libraries not guaranteed to work

o Worth releasing, though, as a forcing function to fix bugs, and as a 
checkpoint for people to test, so that by the time the HP adopts a 
particular version it is reasonably solid.


So we already have the two channels Mark asks for, don’t we? One is 
called the Haskell Platform and one is called the GHC home page.



That leaves a PR issue: we really /don’t/ want newcomers or Joe Users 
wanting the “new shiny”. They want the Haskell Platform, and as Mark 
says those users should not pay the slightest attention until it 
appears in the Haskell Platform.


So perhaps we principally need a way to point people away from GHC and 
towards HP?  eg We could prominently say at every download point 
“Stop!  Are you sure you want this?  You might be better off with the 
Haskell Platform!  Here’s why...”.


Have I understood aright?  If so, how could we achieve the right 
social dynamics?


Our goal is to let people who value stability get stability, while the 
hot-shots race along in a different channel and pay the price of flat 
tires etc.


PS: absolutely right to use 7.6.2 for the next HP.  Don’t even think 
about 7.8.


Simon

*From:*Mark Lentczner [mailto:mark.lentcz...@gmail.com]
*Sent:* 07 February 2013 17:43
*To:* Simon Peyton-Jones
*Cc:* andreas.voel...@gmail.com; Carter Schonwald; GHC users; Simon 
Marlow; parallel-haskell; kosti...@gmail.com; Edsko de Vries; 
ghc-d...@haskell.org

*Subject:* Re: GHC 7.8 release?

I'd say the window for 7.8 in the platform is about closed. If 7.8 
were to be release in the next two weeks that would be just about the 
least amount of time I'd want to see for libraries in the platform to 
get all stable with the GHC version. And we'd also be counting on the 
GHC team to be quickly responding to bugs so that there could be a 
point release of 7.8 mid-April. Historically, none of that seems likely.


So my current trajectory is to base HP 2013.2.0.0 on GHC 7.6.2.

Since it seems like 7.8 will be released before May, we will be 
faced again with the bad public relations issue: Everyone will want 
the new shiny and be confused as to why the platform is such a 
laggard. We'll see four reactions:


  * New comers who are starting out and figure they should use the
latest... Many will try to use 7.8, half the libraries on hackage
won't work, things will be wonky, and they'll have a poor experience.
  * People doing production / project work will stay on 7.6 and ignore
7.8 for a few months.
  * The small group of people exploring the frontiers will know how to
get things set up and be happy.
  * Eventually library authors will get around to making sure their
stuff will work with it.

I wish GHC would radically change its release process. Things like 
7.8 shouldn't

Re: Extended periods of waking up thread %d on cap %d

2013-01-28 Thread Simon Marlow
?

I've CC'd Edward Yang (who I understand has recently been doing a
rework on the scheduler) and Simon Marlow.

Thanks,

- Ben


[1] https://github.com/bgamari/bayes-stack
[2] http://goldnerlab.physics.umass.edu/~bgamari/Benchmark-wakeup.eventlog
[3] 
http://goldnerlab.physics.umass.edu/~bgamari/Benchmark-wakeup-smaller.eventlog






Re: Size of crosscompiled exectuable

2013-01-28 Thread Simon Marlow

On 26/01/13 08:24, Nathan Hüsken wrote:

On 01/25/2013 05:45 PM, Simon Marlow wrote:

On 25/01/13 16:35, Simon Marlow wrote:

On 25/01/13 15:51, Stephen Paul Weber wrote:

Somebody claiming to be Simon Marlow wrote:

On 25/01/13 13:58, Nathan Hüsken wrote:

A simple hello world application is 1MB on my 64-bit Ubuntu machine.
When I strip it, it is about 750kB.
When I build with a cross-compiler for Android (ARM), the executable has a
size of about 10MB, stripped about 5MB.

That is huge, five times the size on my Linux system.


Not sure what you mean by five times the size on my linux system.
What is 5 times larger than what?


He's saying that the size of the android executable (made by his cross
compiler) is five times the size of the equivalent Ubuntu executable
(made by, I assume, his system's GHC).


Yes, exactly. Sorry for my bad phrasing.


The problem is not the size, but the size ratio.


Ah, I see.  Yes, my executables are a similar size.  I'm not sure why,
I'll try to look into it.


It's just the lack of SPLIT_OBJS.  Also, unregisterised accounts for a
factor of 1.5 or so.


What exactly does SPLIT_OBJS do? Is there a chance to get it working for
cross platform?


SPLIT_OBJS turns on the -split-objs flag to GHC when building libraries, 
which makes it generate lots of little object files rather than one big 
object file for each module.  This means that when linking with a static 
library, we only link in the necessary functions, not the whole module tree.


I haven't tried it, but as far as I know SPLIT_OBJS should work when 
cross-compiling. There was a commit adding support for -split-objs with 
LLVM: 1f9ca81cff59ed6c0078437a992f40c13d2667c7


Cheers,
Simon




Re: How to get started with a new backend?

2013-01-28 Thread Simon Marlow

On 28/01/13 11:21, Simon Peyton-Jones wrote:

I would like to explore making a backend for .NET. I've done a lot of
background reading about previous .NET and JVM attempts for Haskell. It
seems like several folks have made significant progress in the past and,
with the exception of UHC, I can't find any code around the internet
from the previous efforts. I realize that in total it's a huge
undertaking and codegen is only one of several significant hurdles to
success.

Someone should start a wiki page about this!  It comes up regularly.
(That doesn’t mean it’s a bad idea; just that we should share wisdom on
it.)  Maybe there is one, in which case perhaps it could be updated?


There's the FAQ entry about this, which I believe you wrote:

http://www.haskell.org/haskellwiki/GHC:FAQ#Why_isn.27t_GHC_available_for_.NET_or_on_the_JVM.3F

It's on the Haskell wiki, so people could update it with more recent info.

Cheers,
Simon





Re: How to get started with a new backend?

2013-01-28 Thread Simon Marlow

On 28/01/13 01:15, Jason Dagit wrote:

I would like to explore making a backend for .NET. I've done a lot of
background reading about previous .NET and JVM attempts for Haskell. It
seems like several folks have made significant progress in the past and,
with the exception of UHC, I can't find any code around the internet
from the previous efforts. I realize that in total it's a huge
undertaking and codegen is only one of several significant hurdles to
success.

I would like to get a very, very, very simple translation working inside
GHC. If all I can compile and run is fibonacci, then I would be quite
happy. For my first attempt, proof of concept is sufficient.

I found a lot of good documentation on the ghc trac for how the
compilation phases work and what happens in the different parts of the
backend. The documentation is excellent, especially compared to other
compilers I've looked at.

When I started looking at how to write the code, I started to wonder
about the least effort path to getting something (anything?) working.
Here are some questions:
   * Haskell.NET seems to be dead. Does anyone know where their code went?
   * Did lambdavm also disappear? (JVM I know, but close enough to be
useful)
   * Would it make sense to copy/modify the -fvia-C backend to generate
C#? The trac claims that ghc can compile itself to C so that only
standard gnu C tools are needed to build an unregistered compiler. Could
I use this trick to translate programs to C#?
   * What stage in the pipeline should I translate from? Core? STG? Cmm?
   * Which directories/source files should I look at to get familiar
with the code gen? I've heard the LLVM codegen is relatively simple.
   * Any other advice?


Just to put things in perspective a bit, the LLVM backend shares the RTS 
with the native backend, and uses exactly the same ABI.  That limits its 
scope significantly: it only has to replace the stages between Cmm and 
assembly code, everything else works as-is.


You don't have this luxury with .NET (or JVM), because you can't link 
.NET or JVM code to native code directly, and these systems already have 
their own runtimes.  Basically you're replacing not only the code 
generator, but also the runtime, and probably large chunks of the 
libraries.  That's why it's a bigger job.


You can't go from Cmm, because as Simon says it's already too low-level. 
 You'll want .NET/JVM to manage the stack for you, and you'll want to 
have your own compilation scheme for functions and thunks, and so on. 
The right place to start is after CorePrep, where thunks are explicit 
(this is where the bytecode generator starts, incidentally: you might 
want to look at ghci/ByteCodeGen.hs).


Cheers,
Simon




Re: How to get started with a new backend?

2013-01-28 Thread Simon Marlow

On 28/01/13 06:35, Christopher Done wrote:

The trac claims that ghc can compile itself to C so that only standard gnu C 
tools are needed to build an unregistered compiler.


Wait, it can? Where's that?


It used to be able to.  Nowadays we cross-compile.

Cheers,
Simon




On 28 January 2013 02:15, Jason Dagit dag...@gmail.com wrote:

I would like to explore making a backend for .NET. I've done a lot of
background reading about previous .NET and JVM attempts for Haskell. It
seems like several folks have made significant progress in the past and,
with the exception of UHC, I can't find any code around the internet from
the previous efforts. I realize that in total it's a huge undertaking and
codegen is only one of several significant hurdles to success.

I would like to get a very, very, very simple translation working inside
GHC. If all I can compile and run is fibonacci, then I would be quite happy.
For my first attempt, proof of concept is sufficient.

I found a lot of good documentation on the ghc trac for how the compilation
phases work and what happens in the different parts of the backend. The
documentation is excellent, especially compared to other compilers I've
looked at.

When I started looking at how to write the code, I started to wonder about
the least effort path to getting something (anything?) working. Here are
some questions:
   * Haskell.NET seems to be dead. Does anyone know where their code went?
   * Did lambdavm also disappear? (JVM I know, but close enough to be useful)
   * Would it make sense to copymodify the -fvia-C backend to generate C#?
The trac claims that ghc can compile itself to C so that only standard gnu C
tools are needed to build an unregistered compiler. Could I use this trick
to translate programs to C#?
   * What stage in the pipeline should I translate from? Core? STG? Cmm?
   * Which directories/source files should I look at to get familiar with the
code gen? I've heard the LLVM codegen is relatively simple.
   * Any other advice?

Thank you in advance!
Jason



Re: any successfull ghc registerised builds on arm?

2013-01-25 Thread Simon Marlow

On 24/01/13 16:58, Nathan Hüsken wrote:

On 01/24/2013 04:50 PM, Stephen Paul Weber wrote:

Somebody claiming to be Nathan Hüsken wrote:

On 01/24/2013 04:28 PM, Stephen Paul Weber wrote:

Do you think it is specifically the 3.2 that made it work?

Yes. With llvm version 3.1 I was only able to get an unregisterised
build to work.

http://hackage.haskell.org/trac/ghc/attachment/ticket/7621/unregistered-arm-llvm-hack.patch

?


Not exactly, see the patch here:
http://www.haskell.org/pipermail/ghc-devs/2013-January/000118.html
and the changes to compiler/llvmGen/LlvmCodeGen/Ppr.hs


Oh, man, the fact that I don't have that setting for QNX is probably not
doing me any favours...

How the heck am I supposed to figure out what that string should be? :(


Do you mean the data layout? Actually, I have to admit I just copied it
from arm linux.



That said... how did you get an unregisterised build to work with an
LLVM backend?  Everything I've seen in the code implied that the moment
you are unregisterised, it uses via-C...  Which is what my above patch is
primarily about.



I ... it just worked :). I passed --enable-unregisterised to configure and
that did the trick. During building it always said via-C, but it worked.


You're not using LLVM, due to #7622.  I'll push the trivial patch to fix 
that as soon as it has validated here.


Cheers,
Simon





Re: Error building ghc on raspberry pi.

2013-01-25 Thread Simon Marlow

FYI, I created a wiki page for cross-compiling to Raspberry Pi:

http://hackage.haskell.org/trac/ghc/wiki/Building/Preparation/RaspberryPi

I have an unregisterised build using LLVM working now (it just worked, 
modulo the tiny fix for #7622).


Cheers,
Simon

On 21/01/13 16:06, Karel Gardas wrote:

On 01/21/13 04:43 PM, rocon...@theorem.ca wrote:

So the binary-dist has a settings.in file. It is the configure step in
the binary-dist that generates the corrupt settings file.


Perhaps you've forgotten to regenerate bin-dist configure as you did
with build tree configure after applying my patch?


I'll try to poke around to see where and why the stage2 compiler and the
binary-dist compiler differ.


Please post your findings here, I'm really curious what is the culprit
here...

Karel





Re: Error building ghc on raspberry pi.

2013-01-25 Thread Simon Marlow

On 25/01/13 11:23, Neil Davies wrote:

Simon

Looking at the wiki - I take it that the stage 1 compiler can now be used as 
native compiler on the RPi? (last line of entry)?


Do you mean the stage 2 compiler?  If so yes - in principle.  But in 
practice the binary-dist machinery doesn't work properly for 
cross-compilers yet, so it's hard to install it on the RPi.  If you have 
a shared network filesystem then perhaps 'make install' works, or if you 
copy the build tree to your RPi at the same location as your build 
machine, then maybe it will work.


Cheers,
Simon




Neil

On 25 Jan 2013, at 10:46, Simon Marlow marlo...@gmail.com wrote:


FYI, I created a wiki page for cross-compiling to Raspberry Pi:

http://hackage.haskell.org/trac/ghc/wiki/Building/Preparation/RaspberryPi

I have an unregisterised build using LLVM working now (it just worked, modulo 
the tiny fix for #7622).

Cheers,
Simon



Re: Fastest way to reload module with GHC API

2013-01-25 Thread Simon Marlow

On 25/01/13 14:30, JP Moresmau wrote:

Hello, I just want to be sure of what's the fastest way to reload a
module with the GHC API.
I have a file whose path is fp
I load the module with:
addTarget Target { targetId = TargetFile fp Nothing, targetAllowObjCode
= True, targetContents = Nothing }
Then I load the module
load LoadAllTargets
And when I want to reload the module (the contents of fp have changed) I do:
removeTarget (TargetFile fp Nothing)
load LoadAllTargets
and then I rerun my initial code (addTarget, load)


You should be able to just invoke 'load LoadAllTargets' and omit the 
intermediate remove/load step.  Or is there a reason you want to remove 
the target?
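
For reference, the load/reload cycle being discussed can be sketched with the
GHC API names quoted above (a sketch only, not a complete program; exact record
fields and signatures vary between GHC versions):

```haskell
import GHC

-- Register fp once, then compile it and its dependencies.
loadOnce :: FilePath -> Ghc SuccessFlag
loadOnce fp = do
  addTarget Target { targetId           = TargetFile fp Nothing
                   , targetAllowObjCode = True
                   , targetContents     = Nothing }
  load LoadAllTargets

-- After fp changes on disk, rerunning 'load' alone should recompile it;
-- no removeTarget/addTarget round trip is needed.
reload :: Ghc SuccessFlag
reload = load LoadAllTargets
```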


Cheers,
Simon




Re: Size of crosscompiled exectuable

2013-01-25 Thread Simon Marlow

On 25/01/13 13:58, Nathan Hüsken wrote:

A simple hello world application is about 1MB on my 64-bit Ubuntu machine.
When I strip it, it is about 750kB.


GHC statically links all its libraries by default.  If you want a 
dynamically linked executable, use -dynamic (ensure you have the dynamic 
libraries built and/or installed though).



When I build a cross compiler for android (arm), the executable has a
size of about 10MB, stripped about 5MB.

That is huge, five times the size on my linux system.


Not sure what you mean by five times the size on my linux system. 
What is 5 times larger than what?


Static linking is useful when cross compiling, because it means you can 
just copy the binary over to the target system and run it.


Cheers,
Simon




Re: Fastest way to reload module with GHC API

2013-01-25 Thread Simon Marlow
Has the file's modification time changed?  If you're doing this very 
quickly (within 1 second) then you might run into this:


http://hackage.haskell.org/trac/ghc/ticket/7473
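
The timestamp-granularity effect behind that ticket can be seen with a plain
comparison of modification times (a hypothetical helper, not from the thread;
note that older directory/GHC versions return ClockTime rather than UTCTime):

```haskell
import Data.Time.Clock (UTCTime)
import System.Directory (getModificationTime)

-- True if fp's recorded modification time is newer than the time we last
-- loaded it.  With one-second timestamp granularity, two saves within the
-- same second can yield equal timestamps, so this reports False even
-- though the file's contents changed.
hasChangedSince :: FilePath -> UTCTime -> IO Bool
hasChangedSince fp lastLoad = do
  t <- getModificationTime fp
  return (t > lastLoad)
```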

Cheers,
Simon

On 25/01/13 16:02, JP Moresmau wrote:

When I do that (only adding the target once and just doing load after
the file has changed) the changes in the file are not taken into account
(getNamesInScope for example doesn't give me the name of a type added
inside the file). I probably have my stupid hat on (friday
afternoon...), but when I do remove/load in between it works...

Thanks


On Fri, Jan 25, 2013 at 4:33 PM, Simon Marlow marlo...@gmail.com
mailto:marlo...@gmail.com wrote:

On 25/01/13 14:30, JP Moresmau wrote:

Hello, I just want to be sure of what's the fastest way to reload a
module with the GHC API.
I have a file whose path is fp
I load the module with:
addTarget Target { targetId = TargetFile fp Nothing,
targetAllowObjCode
= True, targetContents = Nothing }
Then I load the module
load LoadAllTargets
And when I want to reload the module (the contents of fp have
changed) I do:
removeTarget (TargetFile fp Nothing)
load LoadAllTargets
and then I rerun my initial code (addTarget, load)


You should be able to just invoke 'load LoadAllTargets' and omit the
intermediate remove/load step.  Or is there a reason you want to
remove the target?

Cheers,
 Simon




--
JP Moresmau
http://jpmoresmau.blogspot.com/





Re: ghc passing -undef to preprocessor and thereby eliminating OS specific defines

2013-01-24 Thread Simon Marlow

On 24/01/13 11:21, Nathan Hüsken wrote:

Hey,

I am trying to adapt some code in the libraries when compiling for
android (i.E. because some things are different on android to other
posix systems).

So in C code I would just do #ifdef __ANDROID__.
While in the *.h and *.c files it seems to work, it does not work in
*.hs files.
I noted that the preprocessor is run like this:

arm-linux-androideabi-gcc -E -undef -traditional -fno-stack-protector
-DTABLES_NEXT_TO_CODE

The -undef parameter is causing the __ANDROID__ define to be removed.
Does it make sense to pass -undef to the preprocessor?

Any other Ideas how I could adapt the code for android?


You want to use:

  #ifdef android_HOST_OS

the *_HOST_OS symbol is defined by GHC when it invokes CPP.
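
A minimal sketch of using that symbol from a Haskell source file (the module
and function names here are made up for illustration):

```haskell
{-# LANGUAGE CPP #-}
module Platform (hostOS) where

-- GHC defines <os>_HOST_OS (e.g. android_HOST_OS, linux_HOST_OS) when it
-- runs CPP over a Haskell file, so no extra -D flags are required.
hostOS :: String
#if defined(android_HOST_OS)
hostOS = "android"
#elif defined(linux_HOST_OS)
hostOS = "linux"
#else
hostOS = "other"
#endif
```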

Cheers,
Simon





Re: any successfull ghc registerised builds on arm?

2013-01-21 Thread Simon Marlow

On 19/01/13 07:32, Stephen Paul Weber wrote:

Somebody claiming to be Stephen Paul Weber wrote:

Somebody claiming to be Nathan Hüsken wrote:

Was that an registerised or unregisterised build?
Did anyone succesfully build ghc on an arm system which produces non
crashing executables?


Just finally got a BB10 device set up so I can test my cross-compiler
on the ARM

I'm about to try a configuration with --enable-unregisterised to see
if that helps.


make -r --no-print-directory -f ghc.mk phase=final all
inplace/bin/ghc-stage1 -static  -H64m -O0 -fasm-package-name
integer-simple-0.1.1.0 -hide-all-packages -i
-ilibraries/integer-simple/.
-ilibraries/integer-simple/dist-install/build
-ilibraries/integer-simple/dist-install/build/autogen
-Ilibraries/integer-simple/dist-install/build
-Ilibraries/integer-simple/dist-install/build/autogen
-Ilibraries/integer-simple/. -optP-include
-optPlibraries/integer-simple/dist-install/build/autogen/cabal_macros.h
-package ghc-prim-0.3.1.0  -package-name integer-simple -Wall
-XHaskell98 -XCPP -XMagicHash -XBangPatterns -XUnboxedTuples
-XForeignFunctionInterface -XUnliftedFFITypes -XNoImplicitPrelude -O
-fasm  -no-user-package-db -rtsopts  -odir
libraries/integer-simple/dist-install/build -hidir
libraries/integer-simple/dist-install/build -stubdir
libraries/integer-simple/dist-install/build -hisuf hi -osuf  o -hcsuf hc
-c libraries/integer-simple/./GHC/Integer/Type.hs -o
libraries/integer-simple/dist-install/build/GHC/Integer/Type.o

when making flags consistent: Warning:
 Compiler unregisterised, so compiling via C
/tmp/ghc25891_0/ghc25891_0.hc: In function 'c2pA_entry':

/tmp/ghc25891_0/ghc25891_0.hc:3691:1:
  warning: this decimal constant is unsigned only in ISO C90
[enabled by default]

/tmp/ghc25891_0/ghc25891_0.hc:3691:17:
  error: expected ')' before numeric constant
make[1]: ***
[libraries/integer-simple/dist-install/build/GHC/Integer/Type.o] Error 1
make: *** [all] Error 2


Strange, I didn't see this on my builds, which I think is the same as 
yours (GHC HEAD, cross-compiling for RPi with --enable-unregisterised).


If you make a ticket with full details, I can try to reproduce.

Cheers,
Simon





Re: Should ghc -msse imply gcc -msse

2013-01-17 Thread Simon Marlow

On 17/01/13 20:06, Johan Tibell wrote:

On Thu, Jan 17, 2013 at 12:01 PM, Johan Tibell johan.tib...@gmail.com wrote:

I forgot I once raised this on the GHC bug tracker:
http://hackage.haskell.org/trac/ghc/ticket/7025

Here's what Simon M had to say back then:

The right thing is to put -msse in the cc-options field of your
.cabal file, if that's what you want.

I'm distinctly uneasy about having -msse magically pass through to gcc.

* There are many flags that we do not pass through to gcc, so having
one that we do pass through could be confusing (and lead to lots more
requests for more flags to be passed through)

* What if there is a variant of -msse that gcc supports but we don't?
Wouldn't we have to keep them in sync?

I'm going to close this as wontfix, but please feel free to reopen and
disagree.


One problem with having the user set cc-options in addition to passing
-msse to GHC, is that the user might not realize that he/she needs to
do this. This is bad if you use -fllvm, as your -msse will essentially
just be ignored as the LLVM primitives we use in the LLVM backend
(e.g. for popcnt) won't convert to SSE instructions.

Even worse, LLVM doesn't support a -msse flag, instead you need to use
-mattr=+sse, so the user needs to be aware of this difference and
change his/her flags depending on if we use the native backend or the
LLVM backend.


If the intended meaning of -msse is

  Use SSE instructions in Haskell compilations

then of course we should pass -mattr=+sse to LLVM, because it is the 
backend for Haskell compilations.  But we should not pass it to gcc, 
unless we're using the C backend.


If instead the intended meaning of -msse is

  Use SSE instructions in all compilations

then we should pass it to gcc too.  This just feels a bit too magical to 
me, and since we have a way to say exactly what you want, I'm not sure 
it's necessary.
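
The "say exactly what you want" route might look like this in a .cabal file
(a sketch; the package name is made up, and -optlc forwards the flag to LLVM's
llc when compiling with -fllvm):

```cabal
executable demo
  main-is:     Main.hs
  -- SSE for the C sources in this package:
  cc-options:  -msse
  -- SSE for the Haskell code, covering the LLVM backend explicitly:
  ghc-options: -msse -fllvm -optlc-mattr=+sse
```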


Cheers,
Simon




Re: What is the scheduler type of GHC?

2013-01-16 Thread Simon Marlow

On 16/01/13 08:32, Magicloud Magiclouds wrote:

Hi,
   Just read a post about schedulers in erlang and go lang, which
informed me that erlang is preemptive and go lang is cooperative.
   So which is used by GHC? From ghc wiki about rts, if the question is
only within haskell threads, it seems like cooperative.


GHC is pre-emptive, but see http://hackage.haskell.org/trac/ghc/ticket/367.

Cheers,
Simon




Re: Tagging subjects of emails sent from the list

2012-12-20 Thread Simon Marlow
Right, we deliberately don't add subject tags for exactly that reason. 
There are enough headers in mailing list emails for your mail reader to 
automatically filter/tag messages if you need to.


Cheers,
Simon

On 20/12/12 09:21, Johan Tibell wrote:

I'd prefer if they weren't tagged. My mail reader (GMail) can do the
tagging for me and I'll end up with duplicated tags and the list of
subjects get harder to scan.

On Thu, Dec 20, 2012 at 9:57 AM, Jan Stolarek jan.stola...@p.lodz.pl wrote:

Would it be possible to change mailing list settings so that topics of emails 
begin
with [glasgow-haskell-users] (and for upcoming lists: [ghc-devs], 
[ghc-commits] and so on.
No quotation marks of course). It would make filtering and searching in the 
mailbox easier.

Janek



Re: How to start with GHC development?

2012-12-18 Thread Simon Marlow

On 15/12/12 14:46, Jan Stolarek wrote:

OK, so how can we improve it?

First of all I think that materials in the wiki are well organized for people 
who already have
some knowledge and only need to learn about particular things they don't yet 
know. In this case I
think it is fine to have wiki pages connected between one another so they form 
cycles and don't
have a particular ordering.

However as a beginner I'd expect that pages are ordered in a linear way (sorry 
for repeating
myself, but I think this is important), that every concept is explained in only 
one place and
that there are no links in the text (even at the end - if the text is 
structured linearly then I
don't need to decide what to read next, I just go to next section/chapter). 
This is of course a
perfect situation that seems to be achievable in a book, but probably not in a 
wiki (wikibook
could be a middle ground?).

Right now there are two pages (I know of...) that instruct how to get the 
sources:
http://hackage.haskell.org/trac/ghc/wiki/Building/GettingTheSources
http://hackage.haskell.org/trac/ghc/wiki/WorkingConventions/Git


So the first page here tells you how to get a single source tree so that 
you can build it.  The second page tells you how to create a 2-tree 
setup for use with GHC's validate; the latter is aimed at people doing 
lots of GHC development (which is why it's under WorkingConventions). 
Both scenarios are important, and I think it's more important to deal 
with the simple case first which is why it is right near the start of 
the Building Guide.


Does that help?  Is there something we could add somewhere that would 
make it clearer?



It seems that many informations in the wiki are duplicated. There are two pages 
about
repositories:
http://hackage.haskell.org/trac/ghc/wiki/Repositories
http://hackage.haskell.org/trac/ghc/wiki/WorkingConventions/Repositories
(after reading the first one source tree started to make much more sense - this 
is one of the
informations *I* would like to get first).


The first page lists the repositories and where the upstreams and 
mirrors are.  The second page contains the conventions for working on 
other repositories (which is why it's under WorkingConventions).



In general I think that for beginers it would be good if the wiki had a form of 
a book divided
into chapters. I only wonder if it is possible to adjust the wiki for the 
newbies and at the same
time keep it useful for more experienced developers.


The nice thing about a wiki is that you don't have to move content 
around, you can just make new contents pages that contain whatever 
organisation you want. So maybe what you want is a separate page that 
links to things to read in a particular order?


Cheers,
Simon





Re: Separating build tree from the source tree

2012-12-18 Thread Simon Marlow

On 18/12/12 10:09, Jan Stolarek wrote:

It turns out that running 'perl boot' in a symlinked directory (ghc-build) is not 
enough. I had to
run 'perl boot' in the original ghc-working dir and now configure succeeds in 
ghc-build.


You shouldn't do that, because now you have build files in your source 
directory.


The problem you ran into is that the configure script tries to use git 
to detect the date of the latest patch, to use as the version number of 
GHC (e.g. 7.7.20121218).  If you're in a build tree made by lndir, then 
you don't have a .git directory, so the configure script gives up and 
uses 7.7 as the version.  This will work, but it's not good because if 
you later install some packages for this GHC build using cabal, they 
will conflict with packages from other GHC builds in your ~/.cabal 
directory. (you can use cabal-dev to avoid this, which is what I do 
sometimes).


I've added a note to the wiki about this: 
http://hackage.haskell.org/trac/ghc/wiki/Building/Using#Sourcetreesandbuildtrees


The workaround is to link your .git directory from your build tree, like so:

 $ cd ghc-build
 $ ln -s $source/.git .

where $source is your source tree.

I don't know why configure failed on your Debian box, though.

Cheers,
Simon





Re: Suggested policy: use declarative names for tests instead of increasing integers

2012-12-18 Thread Simon Marlow

On 18/12/12 12:33, Roman Cheplyaka wrote:

* Simon Peyton-Jones simo...@microsoft.com [2012-12-18 10:32:39+]

(This belongs on cvs-ghc, or the upcoming ghc-devs.)

| I find our tests to be quite hard to navigate, as the majority have
| names like tc12345.hs or some such. I suggest we instead use descriptive
| names like GADT.hs or PrimOps.hs instead. What do people think?

We've really moved to a naming convention connected to tickets. Thus test T7490 
is a test for Trac ticket #7490.  This is fantastic.  It eliminates the need 
for elaborate comments in the test to say what is being tested... just look at 
the ticket.

The old serially numbered tests tc032 etc. are history.

If there isn't a corresponding ticket, it'd be a good idea to create one.

Increasingly we refer to tickets in source-code comments.  They are incredibly 
valuable resource to give the detail of what went wrong.

OK?  We should document this convention somewhere.


It is sort of documented at 
http://hackage.haskell.org/trac/ghc/wiki/Building/RunningTests/Adding

   Having found a suitable place for the test case, give the test case a
   name. For regression test cases, we often just name the test case
   after the bug number (e.g. T2047). Alternatively, follow the
   convention for the directory in which you place the test case: for
   example, in typecheck/should_compile, test cases are named tc001,
   tc002, and so on.

But I wonder what if one wants to create a test preventively (say, for a
new feature), and there isn't actually any bug to create a ticket for?


It wouldn't hurt to be more descriptive with test names than we are 
currently in e.g. codeGen and typechecker.  Some parts of the testsuite 
are better, e.g. see libraries/base/tests where the tests are named 
after the function being tested (sort of), or in codeGen/should_run_asm:


test('memcpy',
 unless_platform('x86_64-unknown-linux',skip), compile_cmp_asm, [''])
test('memcpy-unroll',
 unless_platform('x86_64-unknown-linux',skip), compile_cmp_asm, [''])
test('memcpy-unroll-conprop',
 unless_platform('x86_64-unknown-linux',skip), compile_cmp_asm, [''])

ticket numbers are good names for regression tests, but for other tests 
more descriptive names would help. There isn't always a good name for a 
test, but often there is.


Cheers,
Simon




Re: How to start with GHC development?

2012-12-18 Thread Simon Marlow

On 18/12/12 15:51, Simon Peyton-Jones wrote:

|  It seems that many informations in the wiki are duplicated. There are
|  two pages about
|  repositories:
|  http://hackage.haskell.org/trac/ghc/wiki/Repositories
|  http://hackage.haskell.org/trac/ghc/wiki/WorkingConventions/Repositori
|  es (after reading the first one source tree started to make much more
|  sense - this is one of the informations *I* would like to get first).
|
| The first page lists the repositories and where the upstreams and
| mirrors are.  The second page contains the conventions for working on
| other repositories (which is why it's under WorkingConventions).

Simon, I don't find that a clear distinction. Looking at the two, I'm a bit 
confused too!


So Repositories is what repositories there are, and 
WorkingConventions/Repositories is how to work on them.  Isn't that a 
clear distinction?



* The lists on WorkingConventions/Repositories duplicates the table in 
Repositories.


There are two separate workflows, so we have to say which libraries each 
workflow applies to.  I'd be fine with merging this info with the other 
table - it might be slightly more awkward having the info on a separate 
page, but there would be only one list of repositories.



* I believe that perhaps WorkingConventions/Repositories is solely concerned 
with how to *modify* a library; it opening para says as much.  Fine; but it 
shouldn't duplicate the info.


Right.


Maybe the table could do with a column saying GHC or Upstream to specify the 
how to modify convention?  (I wish the table could somehow be narrower.  And that the library 
name was the first column.)  Perhaps the master table can look like this:

What           GHC repo location            Upstream repo exists?
               (under http://darcs.haskell.org/)

GHC            ghc.git
ghc-tarballs   ghc-tarballs.git
...etc...
binary         binary.git                   YES
...etc...

Then we can deal with the complexities of upstream repos in another page.  I 
think that might put the info in a way that's easier to grok.  I can do it if 
Simon and Ian agree; or Ian could.


Ok by me.

Cheers,
Simon




Re: Hoopl vs LLVM?

2012-12-13 Thread Simon Marlow

On 12/12/12 17:06, Greg Fitzgerald wrote:

On Wed, Dec 12, 2012 at 4:35 AM, Simon Marlow marlo...@gmail.com
mailto:marlo...@gmail.com wrote:

Now, all that LLVM knows is that z was read from Sp[8], it has no
more information about its value.


Are you saying Hoopl can deduce the original form from the CPS-version?
  Or that LLVM can't encode the original form?  Or within GHC, LLVM is
thrown in late in the game, where neither Hoopl nor LLVM can be of much use.


We can run Hoopl passes on the pre-CPS code, but LLVM only sees the 
post-CPS code.


Cheers,
Simon




Re: Hoopl vs LLVM?

2012-12-13 Thread Simon Marlow

On 12/12/12 17:37, Johan Tibell wrote:

On Wed, Dec 12, 2012 at 4:35 AM, Simon Marlow marlo...@gmail.com wrote:

On 11/12/12 21:33, Johan Tibell wrote:

I'd definitely be interested in understanding why as it, like you
say, makes it harder for LLVM to understand what our code does and
optimize it well.



The example that Simon gave is a good illustration:

snip


My question was more: why do we CPS transform? I guess it's because we
manage our own stack?


Right.  In fact, LLVM does its own CPS transform (but doesn't call it 
that) when the code contains non-tail function calls.  We give LLVM code 
with tail-calls only.


The choice about whether to manage our own stack is *very* deep, and has 
ramifications all over the system.  Changing it would mean a completely 
new backend and replacing a lot of the RTS, that is if you could find a 
good scheme for tracking pointers in the stack - I'm not sure LLVM is up 
to the job without more work.  It could probably be done, but it's a 
huge undertaking and it's not at all clear that you could do any better 
than GHC currently does.  We generate very good code from idiomatic 
Haskell; where we fall down is in heavy numerical and loopy code, where 
LLVM does a much better job.
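
At the source level, the transform under discussion can be sketched in Haskell
itself: a non-tail call to g becomes a tail call carrying a continuation, and
the continuation's free variable z plays the role of the stack slot Sp[8] in
the earlier example (illustrative code only, not GHC's actual internals):

```haskell
-- Direct style: the call to g is not a tail call, so something must
-- remember z across it.
fDirect :: Int -> Int
fDirect n =
  let x = n + 1
      z = n * 2
      (p, q) = g x
  in z + p - q

-- CPS style: the call is now a tail call; z survives inside the closure
-- of the continuation (the analogue of spilling z to Sp[8]).
fCPS :: Int -> Int
fCPS n =
  let x = n + 1
      z = n * 2
  in gk x (\p q -> z + p - q)

g :: Int -> (Int, Int)
g x = (x + 10, x - 10)

gk :: Int -> (Int -> Int -> Int) -> Int
gk x k = k (x + 10) (x - 10)
```

Both versions compute the same result; the CPS version simply makes the saved
live variable explicit as a closure capture.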


Cheers,
Simon




Re: Haskell Dynamic Loading with -fllvm

2012-12-13 Thread Simon Marlow

On 12/12/12 18:03, David Terei wrote:

Hi Nathan,

So dynamic libraries should be supported on a few platforms but not
all, not as many as the NCG. Support also varies from LLVM version.
What platform and version of LLVM are you trying to utilize? And
specifically what flags are you using?


Also I believe even if it works, the code that LLVM generates for 
-dynamic is not very good.  This is because it makes every symbol 
reference a dynamic reference, whereas the NCG only makes dynamic 
references for symbols in other packages.  It ought to be possible to 
fix this by using the right symbol declarations (I'm guessing, I haven't 
looked into it).


Cheers,
Simon



Cheers,
David

On 11 December 2012 08:53, Nathaniel Neitzke night...@gmail.com wrote:

Essentially I have a use case that, if worked, would save countless hours in
development time.  I am writing a scientific computing web service utilizing
the Repa and Snap libraries.  The Snap framework has a dynamic loader that
will load modules on the fly when the source files change.

This works excellently!  The problem is that the modules must be compiled with
full optimizations (including -fllvm) or web service operations take minutes
instead of a second to execute at run time.  I do not mind the penalty paid
for optimized compilation.  It is still much faster than recompiling and
linking the entire exe from scratch and restarting the server.

The problem is when the code is compiled with -fllvm dynamically, it
crashes.  I believe this is a known issue as listed in this trac -

http://hackage.haskell.org/trac/ghc/ticket/4210

NOTE: it says The LLVM backend doesn't support dynamic libraries at the
moment.

My question is could anyone point me in the right direction as to what might
need to be implemented support for this?  Is anyone currently working on it?
It would be a huge win for the work I am currently doing, to the point where
if I can't find a way to get this working (even if it means diving in and
attacking it myself), I may have to switch to another language/platform.

Thanks,
Nathan




Re: How to start with GHC development?

2012-12-13 Thread Simon Marlow

On 13/12/12 09:54, Yuras Shumovich wrote:

On Thu, 2012-12-13 at 09:41 +, Chris Nicholls wrote:


What's the best way to get started? Bug fixes? Writing a toy plugin? I
don't have a huge amount of time to offer, but I would like to learn to
help!



GHC bug sweep is the way I'm trying to start with:
http://hackage.haskell.org/trac/ghc/wiki/BugSweep
(have no idea whether it is the best way or not :) )


The BugSweep was a great idea at the time, but I think it stalled.  Do 
feel free to carry on though!  I think it's a great way to learn about 
GHC, because it will send you off in random directions investigating things.


While there are lots of old bugs that are now fixed, or have duplicates, 
or aren't relevant, etc. there are also lots of old bugs that are still 
around because they aren't fixed because they aren't worth the effort. 
Still, making even *some* progress on any old bug is worthwhile, even if 
it is to point out a workaround that didn't exist before, or update the 
test case.  This will generate an email, and since there are people that 
read all the Trac emails, occasionally this prods someone else to make 
further progress or just close the ticket.


Cheers,
Simon




Re: Hoopl vs LLVM?

2012-12-12 Thread Simon Marlow

On 11/12/12 21:33, Johan Tibell wrote:

On Tue, Dec 11, 2012 at 11:16 AM, Simon Peyton-Jones
simo...@microsoft.com wrote:

Notice that the stack is now *explicit* rather than implicit, and LLVM has no 
hope of moving the assignment to z past the call to g (which is trivial in the 
original).  I can explain WHY we do this (there is stuff on the wiki) but the 
fact is that we do, and it's no accident.


I'd definitely be interested in understanding why as it, like you
say, makes it harder for LLVM to understand what our code does and
optimize it well.


The example that Simon gave is a good illustration:

f() {
x = blah
z = blah2
p,q = g(x)
res = z + p - q
return res
}

In this function, for example, a Hoopl pass would be able to derive 
something about the value of z from its assignment (blah2), and use that 
information in the assignment to res, e.g. for constant propagation, or 
more powerful partial value optimisations.


However, the code that LLVM sees will look like this:

f () {
x = blah
z = blah2
Sp[8] = z
jump g(x)
}

fr1( p,q ) {
z = Sp[8];
res = z + p - q
return res
}

Now, all that LLVM knows is that z was read from Sp[8], it has no more 
information about its value.
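[Editorial note: the Cmm-level situation has a rough source-level analogue. The sketch below is hypothetical (the function g and the constants are invented for illustration), but it shows a value that is live across a call, which is exactly what the CPS-converted code spills to Sp[8]:]

```haskell
-- Hypothetical Haskell analogue of the Cmm example above.
-- 'z' is computed before the call to 'g' and used after it, so the
-- generated code must save it across the call: that saved value is
-- what the CPS-converted Cmm writes to Sp[8].
g :: Int -> (Int, Int)
g x = (x + 1, x - 1)

f :: Int -> Int
f blah =
  let z      = blah * 2   -- corresponds to "z = blah2"
      (p, q) = g blah     -- the call splits f into two Cmm blocks
  in  z + p - q           -- z is reloaded from the stack here

main :: IO ()
main = print (f 10)       -- prints 22
```

A Hoopl pass sees both halves of f with z's definition intact; LLVM sees only an opaque stack slot.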


Cheers,
Simon




Re: How to use C-land variable from Cmm-land?

2012-12-11 Thread Simon Marlow

On 10/12/12 12:46, Yuras Shumovich wrote:

On Mon, 2012-12-10 at 10:58 +, Simon Marlow wrote:

On 08/12/12 23:12, Yuras Shumovich wrote:

I tried to hack stg_putMVarzh directly:

  if (enabled_capabilities == 1) {
  info = GET_INFO(mvar);
  } else {
  ("ptr" info) = ccall lockClosure(mvar "ptr");
  }


You should use n_capabilities, not enabled_capabilities.  The latter
might be 1, even when there are multiple capabilities actually in use,
while the RTS is in the process of migrating threads.


Could you please elaborate? setNumCapabilities is guarded by
acquireAllCapabilities, so all threads are in the scheduler. And threads
will be migrated from disabled capabilities before they get a chance to
put/take the MVar.
I changed my mind re enabled_capabilities/n_capabilities a number of
times during the weekend. Most likely you are right, and I should use
n_capabilities. But I'd appreciate it if you could find time to explain
it to me.


n_capabilities is the actual number of capabilities, and can only 
increase, never decrease.  enabled_capabilities is the number of 
capabilities we are currently aiming to use, which might be less than 
n_capabilities.  If enabled_capabilities is less than n_capabilities, 
there might still be activity on the other capabilities, but the idea is 
that threads get migrated away from the inactive capabilities as quickly 
as possible.  It's hard to do this immediately, which is why we have 
enabled_capabilities and we don't just change n_capabilities.
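[Editorial note: from the Haskell side, enabled_capabilities is the count reported by getNumCapabilities, and setNumCapabilities is the user-facing call that changes it at runtime (triggering the migration Simon describes). A minimal sketch; the >= 1 check just keeps the output machine-independent:]

```haskell
import Control.Concurrent (getNumCapabilities)

-- getNumCapabilities reports the enabled capability count;
-- setNumCapabilities (not called here) is the API that adjusts it
-- at runtime.
main :: IO ()
main = do
  n <- getNumCapabilities
  print (n >= 1)   -- there is always at least one capability: True
```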



The problem was that movl $enabled_capabilities,%eax loaded the
address of enabled_capabilities, not a value.


Yes, sorry, you are right.


(Again, why does it use a
32-bit register? The value is 32-bit on Linux, but the address is 64-bit,
isn't it?) So the correct way to use a C-land variable is:

if (CInt[enabled_capabilities]) {...}

Not very intuitive, but at least it works.


That's C-- syntax for a memory load of a CInt value (CInt is a CPP 
symbol that expands to a real C-- type, like bits32).  Unlike in C, 
memory loads are explicit in C--.


Cheers,
Simon




Re: GHCi + FFI + global C variables

2012-12-11 Thread Simon Marlow

On 10/12/12 00:11, Nils wrote:

I'm currently working with a C library that needs to use/modify global C
variables, for example:

igraph_bool_t igraphhaskell_initialized = 0;

int igraphhaskell_initialize()
{
  if (igraphhaskell_initialized != 0)
  {
    printf("C: Not initializing. igraphhaskell_initialized = %i\n",
           igraphhaskell_initialized);
    return 1;
  }
  // initialization
}

If I compile an example program using this library, everything works
fine, but if I try to run the same program in GHCi it dies with the message

C: Not initializing. igraphhaskell_initialized = -90

The value (and apparently the address of the global variable) is
completely off, and I have no idea what is causing this or how to solve
this issue and make the library GHCi-friendly. Is it possible to run
this code in GHCi at all? Also, since it's a foreign library I obviously
cannot just change the C code to simply not use any global variables at
all.


Sounds like it could be this: http://hackage.haskell.org/trac/ghc/ticket/781

Compiling your program with -fPIC should fix it.
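[Editorial note: for context, the Haskell side of such a binding typically looks like the sketch below. The symbol name is taken from the C snippet in the question; everything else is hypothetical, and the module only links against the actual C library. It is this foreign import whose symbol GHCi's linker must resolve, which is where ticket #781 bites without -fPIC.]

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}
module IGraphInit (initialize) where

import Foreign.C.Types (CInt (..))

-- Hypothetical binding to the C initializer quoted above.
foreign import ccall unsafe "igraphhaskell_initialize"
  c_igraphhaskell_initialize :: IO CInt

-- Returns True on a fresh (successful) initialization.
initialize :: IO Bool
initialize = do
  rc <- c_igraphhaskell_initialize
  return (rc == 0)
```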

Cheers,
Simon




Re: ANNOUNCE: GHC 7.6.2 Release Candidate 1

2012-12-10 Thread Simon Marlow

On 09/12/12 21:39, Ian Lynagh wrote:


We are pleased to announce the first release candidate for GHC 7.6.2:

 http://www.haskell.org/ghc/dist/7.6.2-rc1/

This includes the source tarball, installers for Windows, and
bindists for Windows, Linux, OS X and FreeBSD, on x86 and x86_64.

We plan to make the 7.6.2 release early in 2013.

Please test as much as possible; bugs are much cheaper if we find them
before the release!


Here's the list of bugs fixed in 7.6.2, assuming we've been good about 
assigning milestones correctly (i.e. there might be one or two missing):


http://hackage.haskell.org/trac/ghc/query?status=closed&status=new&order=id&col=id&col=summary&col=milestone&col=status&col=owner&col=component&col=version&milestone=7.6.2&desc=1&resolution=fixed

Cheers,
Simon




Re: How to use C-land variable from Cmm-land?

2012-12-10 Thread Simon Marlow

On 08/12/12 23:12, Yuras Shumovich wrote:

Hi,

I'm working on that issue as an exercise/playground while studying the
GHC internals: http://hackage.haskell.org/trac/ghc/ticket/693


It's not at all clear that we want to do this.  Perhaps you'll be able 
to put the question to rest and close the ticket!



First I tried just to replace ccall lockClosure(mvar ptr) with
GET_INFO(mvar) in stg_takeMVarzh and stg_putMVarzh and got 60% speedup
(see the test case at the end.)

Then I changed lockClosure to read header info directly when
enabled_capabilities == 1. The speedup was significantly lower, 20%

I tried to hack stg_putMVarzh directly:

 if (enabled_capabilities == 1) {
 info = GET_INFO(mvar);
 } else {
 ("ptr" info) = ccall lockClosure(mvar "ptr");
 }


You should use n_capabilities, not enabled_capabilities.  The latter 
might be 1, even when there are multiple capabilities actually in use, 
while the RTS is in the process of migrating threads.



But got no speedup at all.
The generated asm (amd64):

 movl $enabled_capabilities,%eax
 cmpq $1,%rax
 je .Lcgq
.Lcgp:
 movq %rbx,%rdi
 subq $8,%rsp
 movl $0,%eax
 call lockClosure
 addq $8,%rsp
.Lcgr:
 cmpq $stg_MVAR_CLEAN_info,%rax
 jne .Lcgu
{...}
.Lcgq:
 movq (%rbx),%rax
 jmp .Lcgr


It moves enabled_capabilities into %eax and then compares 1 with %rax.
It looks wrong to me: the upper part of %rax remains uninitialized.


As Axel noted, this is correct: on x86-64, a write to a 32-bit register
such as %eax zero-extends into the full 64-bit %rax, so the upper bits
are well defined.

Cheers,
Simon




Re: Emitting constants to the .data section from the NatM monad

2012-12-07 Thread Simon Marlow

On 06/12/12 22:11, Johan Tibell wrote:

On Thu, Dec 6, 2012 at 1:34 PM, Simon Marlow marlo...@gmail.com wrote:

So are you going to add the two missing MachOps, MO_UF_Conv and MO_FU_Conv?


I'm trying to add those. I'm now thinking that I will use C calls
(which is still much faster than going via Integer) instead of
emitting some assembly, as the former is much easier but still allows
us to do the latter later. The LLVM backend will use the dedicated
LLVM instruction for conversions so it will generate really good code.


Sounds reasonable.

Cheers,
Simon





Re: Dynamic libraries by default and GHC 7.8

2012-12-06 Thread Simon Marlow

On 05/12/12 15:17, Brandon Allbery wrote:

On Wed, Dec 5, 2012 at 12:03 AM, Chris Smith cdsm...@gmail.com wrote:

I'm curious how much of the compile twice situation for static and
dynamic libraries could actually be shared.


Probably none; on most platforms you're actually generating different
code (dynamic libraries require generation of position-independent
code).  That said, the PPC ABI uses position-independent code even for
static libraries and I think Apple decided to go that route on Intel as
well rather than change their build system ... but if you do this then
linking to other platform-native libraries may be more difficult.  Not a
problem for Apple since they control the ABI, but not something ghc can
force on libc or random libraries someone might want to use FFI with.


Sure there's a lot of differences in the generated code, but inside GHC 
these differences only appear at the very last stage of the pipeline, 
native code generation (or LLVM).  All the stages up to that can be 
shared, which accounts for roughly 80% of compilation time (IIRC).


Cheers,
Simon




Re: proposal: separate lists for ghc-cvs commits and ghc-dev chatter

2012-12-06 Thread Simon Marlow

On 06/12/12 17:04, Ian Lynagh wrote:

On Thu, Dec 06, 2012 at 12:29:01PM +, Simon Peyton-Jones wrote:

My own understanding is this:

A GHC *user* is someone who uses GHC, but doesn't care how it is implemented.
A GHC *developer* is someone who wants to work on GHC itself in some way.

The current mailing lists:

* glasgow-haskell-users: for anything that a GHC *user* cares about
* glasgow-haskell-bugs: same, but with a focus on bug reporting


I see glasgow-haskell-bugs as being mainly for developers, who want to
see what bugs are coming in.

It's true that we do give e-mailing it as a (less preferred) way for
users to submit a bug on
 http://hackage.haskell.org/trac/ghc/wiki/ReportABug
but I wonder if we shouldn't change that. It's rare that we get a bug
report e-mailed, and normally we ultimately end up creating a trac
ticket for it anyway. I'm sure that people who really want to submit a
bug report and for whatever reason can't use trac will e-mail it
somewhere sensible.


+1.  ghc-bugs used to be for user-generated bug reports, but now it is 
almost exclusively Trac-generated emails. I don't think anything is 
gained by suggesting that people email bug reports any more.


Cheers,
Simon






Re: proposal: separate lists for ghc-cvs commits and ghc-dev chatter

2012-12-06 Thread Simon Marlow

On 06/12/12 13:23, Sean Leather wrote:

On Thu, Dec 6, 2012 at 1:29 PM, Simon Peyton-Jones wrote:

My own understanding is this:

A GHC *user* is someone who uses GHC, but doesn't care how it is
implemented.
A GHC *developer* is someone who wants to work on GHC itself in some
way.

The current mailing lists:

* glasgow-haskell-users: for anything that a GHC *user* cares about
* glasgow-haskell-bugs: same, but with a focus on bug reporting
* cvs-ghc: for GHC *developers*

I don't think we want to confuse users with developers.  If we flood
users with dev-related conversations they'll get fed up.

I don't see a very useful distinction between glasgow-haskell-users
and glasgow-haskell-bugs.  The distinction was very important before
we had a bug tracker, but it doesn't seem useful now.

I can see a perhaps-useful distinction between two *developer* lists
  (A) human email about implementation aspects of GHC
  (B) machine-generated email from buildbots etc

I rather think that (A) could usefully include Trac ticket creation
and Git commit messages, since both are really human-generated.


I think the last two things (tickets and commit messages) should be
separate from a mailing that is intended for (email-only) discussion.
The content may be human-generated, but:

(1) The number of messages is overwhelming. Alternatively stated, if you
consider each ticket or commit message a different thread (which many
email clients do), the number of different threads is large.
(2) The commit messages do not all lead to conversations, and most of
the discussion on tickets takes place on Trac with every message
duplicated to the list.

Consequently, any email-only discussion threads on the mailing list can
easily get lost among all the other threads.

So that would leave only buildbot logs on (B).


So I would be content to
   * Abolish glasgow-haskell-bugs in favour of glasgow-haskell-users
   * Split out cvs-ghc into two in some way; details to be agreed.

But for me the issue is not a pressing one.


I identify the following different needs:

(1) User email discussion
(2) Developer email discussion
(3) Buildbot reports
(4) Trac reports
(5) Commit messages


Sounds good to me.  I like the idea of separating out the buildbot 
reports too, because there tends to be little signal in those (I 
typically only look at one per day, just to check whether there's 
anything really broken).  Although that problem could be solved a 
different way, by having the build server emit a single email with a 
good summary once per day.


Cheers,
Simon






Re: Does GHC still support x87 floating point math?

2012-12-06 Thread Simon Marlow

On 06/12/12 11:01, Herbert Valerio Riedel wrote:

Ben Lippmeier b...@ouroborus.net writes:


On 06/12/2012, at 12:12 , Johan Tibell wrote:


I'm currently trying to implement word2Double#. Other such primops
support both x87 and sse floating point math. Do we still support x87
fp math? Which compiler flag enables it?


It's on by default unless you use the -sse2 flag. The x87 support is
horribly slow though. I don't think anyone would notice if you deleted
the x87 code and made SSE the default, especially now that we have the
LLVM code generator. SSE has been the way to go for over 10 years now.


btw, iirc GHC uses SSE2 for x86-64 code generation by default, and that
the -msse2 option has only an effect when generating x86(-32) code


Yes, because all x86_64 CPUs support SSE2.  Chips older than P4 don't 
support it.  I imagine there aren't too many of those around that people 
want to run GHC on, and as Ben says, there's always -fllvm.


Cheers,
Simon





Re: Emitting constants to the .data section from the NatM monad

2012-12-06 Thread Simon Marlow

On 06/12/12 00:29, Johan Tibell wrote:

Hi!

I'm trying to implement word2Double# and I've looked at how e.g. LLVM
does it. LLVM outputs quite clever branchless code that uses two
predefined constants in the .data section. Is it possible to add
contents to the current .data section from a function in the NatM
monad e.g.

 coerceWord2FP :: Width -> Width -> CmmExpr -> NatM Register

?


Yes, you can emit data.  Look at the LDATA instruction in the X86 
backend, for example, and see how we generate things like table jumps.


So are you going to add the two missing MachOps, MO_UF_Conv and MO_FU_Conv?
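[Editorial note: MO_UF_Conv is the unsigned-integer-to-float MachOp. The behaviour any fast implementation (C call, branchless SSE sequence, or LLVM's dedicated instruction) must match is the slow route via Integer, sketched here with a hypothetical helper name:]

```haskell
import Data.Word (Word64)

-- Hypothetical reference semantics for MO_UF_Conv at Word64/Double:
-- convert via Integer, which is correct but slow.
word2DoubleViaInteger :: Word64 -> Double
word2DoubleViaInteger = fromInteger . toInteger

main :: IO ()
main = do
  print (word2DoubleViaInteger 5)        -- prints 5.0
  print (word2DoubleViaInteger maxBound) -- a value with the top bit set,
                                         -- the case signed conversion gets wrong
</imports-placeholder>
```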

Cheers,
Simon




Re: Dynamic libraries by default and GHC 7.8

2012-12-06 Thread Simon Marlow

On 06/12/12 21:35, Brandon Allbery wrote:

On Thu, Dec 6, 2012 at 4:04 PM, Simon Marlow marlo...@gmail.com wrote:

On 05/12/12 15:17, Brandon Allbery wrote:

Probably none; on most platforms you're actually generating
different
code (dynamic libraries require generation of position-independent

Sure there's a lot of differences in the generated code, but inside
GHC these differences only appear at the very last stage of the
pipeline, native code generation (or LLVM).  All the stages up to
that can be shared, which accounts for roughly 80% of compilation
time (IIRC).


I was assuming it would be difficult to separate those stages of the
internal compilation pipeline out, given previous discussions of how
said pipeline works.  (In particular I was under the impression
saving/restoring state in the pipeline to rerun the final phase with
multiple code generators was not really possible, and multithreading
them concurrently even less so.)


I don't think there's any problem (unless I've forgotten something).  In 
fact, the current architecture should let us compile one function at a 
time both ways, so we don't get a space leak by retaining all the Cmm code.


Cheers,
Simon





Re: Patch to enable GHC runtime system with thr_debug_p options...

2012-12-04 Thread Simon Marlow

On 03/12/12 20:11, Joachim Breitner wrote:

Dear Michał,

Am Sonntag, den 02.12.2012, 22:44 +0100 schrieb Michał J. Gajda:

On 12/02/2012 09:20 PM, Joachim Breitner wrote:

I noticed that the Ubuntu and Debian packages, as well as the upstream
ones, come without some variants of the threaded debugging binaries.
A recently added change, printing a stack trace with the -xc option,
requires using both the -ticky and profiling compile options,
which in turn forces the program to be compiled against a -debug RTS way.
Since a stack trace is an indispensable debugging tool, and
convenient parallelisation is a strength of Haskell,
I wonder whether there is any remaining reason to leave beginners with a
cryptic error message when they try to debug a parallel or threaded
application and want to take advantage of a stack trace?

The resulting ghc-prof package would be increased by less than 1%.

Here is a patch for Ubuntu/Debian GHC 7.4.2 package, as well as upstream


--- ghc-7.4.2-orig/mk/config.mk.in  2012-06-06 19:10:25.0 +0200
+++ ghc-7.4.2/mk/config.mk.in   2012-12-01 00:22:29.055003842 +0100
@@ -256,7 +256,7 @@
  #   l   : event logging
  #   thr_l   : threaded and event logging
  #
-GhcRTSWays=l
+GhcRTSWays=l thr_debug_p thr_debug

  # Usually want the debug version
  ifeq "$(BootingFromHc)" "NO"


I notice that your patch modifies the defaults of GHC as shipped by
upstream, and I wonder if there is a reason why these ways are not
enabled by default.

Dear GHC HQ: Would you advice against or for providing a RTS in the
thr_debug_p and thr_debug ways in the Debian package?


thr_debug is already enabled by default.  thr_debug_p is not currently 
enabled, but only because we very rarely need it, so there wouldn't be 
any problem with enabling it by default.


Cheers,
Simon





Re: Is the GHC release process documented?

2012-11-30 Thread Simon Marlow

On 30/11/12 03:54, Johan Tibell wrote:

While writing a new nofib benchmark today I found myself wondering
whether all the nofib benchmarks are run just before each release,
which then drove me to go look for a document describing the release
process. A quick search didn't turn up anything, so I thought I'd ask
instead. Is there a documented GHC release process? Does it include
running nofib? If not, may I propose that we do so before each release
and compare the result to the previous release*.

* This likely means that nofib has to be run for the upcoming release
and the prior release each time a release is made, as numbers don't
translate well between machines so storing the results somewhere is
likely not that useful.


I used to do this on an ad-hoc basis: the nightly builds at MSR spit out 
nofib results that I compared against previous releases.


In practice you want to do this much earlier than just before a release, 
because it can take time to investigate and squash any discrepancies.


On the subject of the release process, I believe Ian has a checklist 
that he keeps promising to put on the wiki (nudge :)).


Cheers,
Simon




Re: Dynamic libraries by default and GHC 7.8

2012-11-30 Thread Simon Marlow

On 27/11/12 14:52, Ian Lynagh wrote:

GHC HEAD now has support for using dynamic libraries by default (and in
particular, using dynamic libraries and the system linker in GHCi) for a
number of platforms.

This has some advantages and some disadvantages, so we need to make a
decision about what we want to do in GHC 7.8. There are also some policy
questions we need to answer about how Cabal will work with a GHC that
uses dynamic libraries by default. We would like to make these as soon
as possible, so that GHC 7.6.2 can ship with a Cabal that works
correctly.

The various issues are described in a wiki page here:
 http://hackage.haskell.org/trac/ghc/wiki/DynamicByDefault

If you have a few minutes to read it then we'd be glad to hear your
feedback, to help us in making our decisions


It's hard to know what the best course of action is, because all the 
options have downsides.


Current situation:
 * fast code and compiler
 * but there are bugs in GHCi that are hard to fix, and an ongoing
   maintenance problem (the RTS linker).
 * binaries are not broken by library updates

Switching to dynamic:
 * slower code and compiler (by varying amounts depending
   on the platform)
 * but several bugs in GHCi are fixed, no RTS linker needed
 * binaries can be broken by library updates
 * can't do it on Windows (as far as we know)

Perhaps we should look again at the option that we discarded: making 
-static the default, and require a special option to build objects for 
use in GHCi.  If we also build packages both static+dynamic at the same 
time in Cabal, this might be a good compromise.


Static by default, GHCi is dynamic:
 * fast code and compiler
 * GHCi bugs are fixed, no maintenance problems
 * binaries not broken by library updates
 * we have to build packages twice in Cabal (but can improve GHC to
   emit both objects from a single compilation)
 * BUT, objects built with 'ghc -c' cannot be loaded into GHCi unless
   also built with -dynamic.
 * still can't do this on Windows

Cheers,
Simon




Re: Dynamic libraries by default and GHC 7.8

2012-11-29 Thread Simon Marlow

On 28/11/12 23:15, Johan Tibell wrote:

What does gcc do? Does it link statically or dynamically by default?
Does it depend on if it can find a dynamic version of libraries or
not?


If it finds a dynamic library first, it links against that.

Unlike GHC, with gcc you do not have to choose at compile-time whether 
you are later going to link statically or dynamically, although you do 
choose at compile-time to make an object for a shared library (-fPIC is 
needed).


When gcc links dynamically, it assumes the binary will be able to find 
its libraries at runtime, because they're usually in /lib or /usr/lib. 
Apps that ship with their own shared libraries and don't install into 
the standard locations typically have a wrapper script that sets 
LD_LIBRARY_PATH, or they use RPATH with $ORIGIN (a better solution).


Cheers,
Simon




Re: Dynamic libraries by default and GHC 7.8

2012-11-28 Thread Simon Marlow

On 27/11/12 23:28, Joachim Breitner wrote:

Hi,

Am Dienstag, den 27.11.2012, 14:52 + schrieb Ian Lynagh:

The various issues are described in a wiki page here:
 http://hackage.haskell.org/trac/ghc/wiki/DynamicByDefault

If you have a few minutes to read it then we'd be glad to hear your
feedback, to help us in making our decisions


here comes the obligatory butting in by the Debian Haskell Group:

Given the current sensitivity of the ABI hashes, we really do not want
programs written in Haskell to have a runtime dependency on all the
included Haskell libraries. So I believe we should still link Haskell
programs statically in Debian.

Hence, Debian will continue to provide its libraries built the static
way.

Building them also in the dynamic way for the sake of GHCi users seems
possible.


So let me try to articulate the options, because I think there are some 
dependencies that aren't obvious here.  It's not a straightforward 
choice between -dynamic/-static being the default, because of the GHCi 
interaction.


Here are the 3 options:

(1) (the current situation) GHCi is statically linked, and -static is
the default.  Uses the RTS linker.

(2) (the proposal, at least for some platforms) GHCi is dynamically
linked, and -dynamic is the default.  Does not use the RTS linker.

(3) GHCi is dynamically linked, but -static is the default.  Does not
use the RTS linker.  Packages must be installed with -dynamic,
otherwise they cannot be loaded into GHCi, and only objects
compiled with -dynamic can be loaded into GHCi.

You seem to be saying that Debian would do (3), but we hadn't considered 
that as a viable option because of the extra hoops that GHCi users would 
have to jump through.  We consider it a prerequisite that GHCi continues 
to work without requiring any extra flags.


Cheers,
Simon





Open question: What should GHC on Debian do when building binaries,
given that all libraries are likely available in both ways – shared or
static. Shared means that all locally built binaries (e.g. xmonad!) will
suddenly break when the user upgrades their Haskell packages, as the
package management is ignorant of unpackaged, locally built programs.
I’d feel more comfortable if that could not happen.

Other open question: Should we put the dynamic libraries in the normal
libghc-*-dev package? Con: Package size doubles (and xmonad users are
already shocked by the size of stuff they need to install). Pro: It
cannot happen that I can build Foo.hs statically, but not load it in
GHCi, or vice-versa.

I still find it unfortunate that one cannot use the .so for static
linking as well, but that is a problem beyond the scope of GHC.

Greetings,
Joachim









Re: Dynamic libraries by default and GHC 7.8

2012-11-28 Thread Simon Marlow

On 27/11/12 14:52, Ian Lynagh wrote:


Hi all,

GHC HEAD now has support for using dynamic libraries by default (and in
particular, using dynamic libraries and the system linker in GHCi) for a
number of platforms.

This has some advantages and some disadvantages, so we need to make a
decision about what we want to do in GHC 7.8. There are also some policy
questions we need to answer about how Cabal will work with a GHC that
uses dynamic libraries by default. We would like to make these as soon
as possible, so that GHC 7.6.2 can ship with a Cabal that works
correctly.

The various issues are described in a wiki page here:
 http://hackage.haskell.org/trac/ghc/wiki/DynamicByDefault


Thanks for doing all the experiments and putting this page together, it
certainly helps us to make a more informed decision.


If you have a few minutes to read it then we'd be glad to hear your
feedback, to help us in making our decisions


My personal opinion is that we should switch to dynamic-by-default on 
all x86_64 platforms, and OS X x86. The performance penalty for 
x86/Linux is too high (30%), and there are fewer bugs affecting the 
linker on that platform than OS X.


I am slightly concerned about the GC overhead on x86_64/Linux (8%), but 
I think the benefits outweigh the penalty there, and I can probably 
investigate to find out where the overhead is coming from.


Cheers,
Simon






Re: Dynamic libraries by default and GHC 7.8

2012-11-28 Thread Simon Marlow

On 28/11/12 12:48, Ian Lynagh wrote:

On Wed, Nov 28, 2012 at 09:20:57AM +, Simon Marlow wrote:


My personal opinion is that we should switch to dynamic-by-default
on all x86_64 platforms, and OS X x86. The performance penalty for
x86/Linux is too high (30%),


FWIW, if they're able to move from x86 static to x86_64 dynamic then
there's only a ~15% difference overall:

Run Time
-1 s.d. -   -18.7%
+1 s.d. -   +60.5%
Average -   +14.2%

Mutator Time
-1 s.d. -   -29.0%
+1 s.d. -   +33.7%
Average -   -2.6%

GC Time
-1 s.d. -   +22.0%
+1 s.d. -   +116.1%
Average -   +62.4%


The figures on the wiki are different: x86 static -> x86_64 dynamic has
+2.3% runtime. What's going on here?


I'm not sure I buy the argument that it's ok to penalise x86/Linux users 
by 30% because they can use x86_64 instead, which is only 15% slower. 
Unlike OS X, Linux users using the 32-bit binaries probably have a 
32-bit Linux installation, which can't run 64-bit binaries (32-bit is 
still the recommended Ubuntu installation for desktops, FWIW).


Cheers,
Simon




Leaving Microsoft

2012-11-22 Thread Simon Marlow

Today I'm announcing that I'm leaving Microsoft Research.

My plan is to take a break to finish the book on Parallel and
Concurrent Haskell for O'Reilly, before taking up a position at
Facebook in the UK in March 2013.

This is undoubtedly a big change, both for me and for the Haskell
community.  I'll be stepping back from full-time GHC development and
research and heading into industry, hopefully to use Haskell.  It's an
incredibly exciting opportunity for me, and one that I hope will
ultimately be a good thing for Haskell too.

What does this mean for GHC? Obviously I'll have much less time to
work on GHC, but I do hope to find time to fix a few bugs and keep
things working smoothly. Simon Peyton Jones will still be leading the
project, and we'll still have support from Ian Lynagh, and of course
the community of regular contributors. Things are in a reasonably
stable state - there haven't been any major architectural changes in
the RTS lately, and while we have just completed the switchover to the
new code generator, I've been working over the past few weeks to
squeeze out all the bugs I can find, and I'll continue to do that over
the coming months up to the 7.8.1 release.

In due course I hope that GHC can attract more of you talented hackers
to climb the learning curve and start working on the internals, in
particular the runtime and code generators, and I'll do my best to
help that happen.

Cheers,
Simon




Re: Undocumented(?) magic this package-id in PackageImports

2012-11-13 Thread Simon Marlow

Please submit a bug (ideally with a patch!).  It should be documented.

However, note that we don't really like people to use PackageImports.
It's not a carefully designed feature, we only hacked it in so we could 
build the base-3 wrapper package a while ago. It could well change in 
the future.
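[Editorial note: for the record, the undocumented feature under discussion looks like this; module names are hypothetical, with A and B living in the same package:]

```haskell
{-# LANGUAGE PackageImports #-}
module A where

-- "this" names the current package, so B is taken from the package
-- being compiled even if another exposed package also exports a
-- module called B.
import "this" B
```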


Cheers,
Simon

On 13/11/2012 12:30, Herbert Valerio Riedel wrote:

Hello Simon,

I just found out that in combination with the PackageImports extension
there's a special module name "this" which according to [1] always
refers to the current package. But I couldn't find this rather useful
feature mentioned in the GHC 7.6.1 Manual PackageImports section[2]. Has
this been omitted on purpose from the documentation?

Cheers,
   hvr

  [1]: https://github.com/ghc/ghc/commit/436a5fdbe0c9a466569abf1d501a6018aaa3e49e
  [2]: http://www.haskell.org/ghc/docs/latest/html/users_guide/syntax-extns.html#package-imports





Re: Using DeepSeq for exception ordering

2012-11-13 Thread Simon Marlow

On 12/11/2012 16:56, Simon Hengel wrote:

Did you try -fpedantic-bottoms?


I just tried.  The exception (or seq?) is still optimized away.

Here is what I tried:

 -- file Foo.hs
 import Control.Exception
 import Control.DeepSeq
 main = evaluate (('a' : undefined) `deepseq` return () :: IO ())

 $ ghc -fforce-recomp -fpedantic-bottoms -O Foo.hs && ./Foo && echo bar
 [1 of 1] Compiling Main ( Foo.hs, Foo.o )
 Linking Foo ...
 bar


Sounds like a bug, -fpedantic-bottoms should work here.  Please open a 
ticket.
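[A sketch of a workaround while the flag is broken: force the value itself with `evaluate . force` rather than wrapping an IO action in deepseq, so the exception in the list's tail is raised before anything else runs. The helper name `raisesFirst` is made up for illustration.]

```haskell
import Control.DeepSeq (force)
import Control.Exception (SomeException, evaluate, try)

-- evaluate (force xs) reduces xs all the way to normal form inside
-- evaluate, so the undefined hidden in the tail is guaranteed to be
-- raised here, not optimized away or deferred.
raisesFirst :: IO Bool
raisesFirst = do
  r <- try (evaluate (force ('a' : undefined)))
         :: IO (Either SomeException String)
  return (either (const True) (const False) r)

main :: IO ()
main = do
  ok <- raisesFirst
  putStrLn (if ok then "exception raised first" else "exception lost")
</imports>
```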


Cheers,
Simon




Re: Using DeepSeq for exception ordering

2012-11-12 Thread Simon Marlow

Did you try -fpedantic-bottoms?

Cheers,
Simon

On 08/11/2012 19:16, Edward Z. Yang wrote:

It looks like the optimizer is getting confused when the value being
evaluated is an IO action (nota bene: 'evaluate m' where m :: IO a
is pretty odd, as far as things go). File a bug?

Cheers,
Edward

Excerpts from Albert Y. C. Lai's message of Thu Nov 08 10:04:15 -0800 2012:

On 12-11-08 01:01 PM, Nicolas Frisby wrote:

And the important observation is: all of them throw A if interpreted in
ghci or compiled without -O, right?


Yes.





Re: Building GHC for BB10 (QNX)

2012-11-12 Thread Simon Marlow

On 10/11/2012 19:53, Stephen Paul Weber wrote:

Hey all,

I'm interested in trying to get an initial port for BlackBerry 10 (QNX)
going.  It's a POSIXish environment with primary interest in two
architectures: x86 (for the simulator) and ARMv7 (for devices).

I'm wondering if
http://hackage.haskell.org/trac/ghc/wiki/Building/Porting is fairly
up-to-date or not?  Is there a better place I should be looking?

One big difference (which may turn out to be a problem) is that the
readily-available QNX compilers (gcc ports) are cross-compilers.  I
realise that GHC has no good support to act as a cross-compiler yet, and
anticipate complications arising from trying to build GHC using a
cross-compiler for bootstrapping (since that implies GHC acting as a
cross-compiler at some point in the bootstrapping).

Any suggestions would be very welcome.


Cross-compilation is the way to port GHC at the moment, although 
unfortunately our support for cross-compilation is currently under 
development and is not particularly robust.  Some people have managed to 
port GHC using this route in recent years (e.g. the iPhone port).  For 
the time being, you will need to be able to diagnose and fix problems 
yourself in the GHC build system to get GHC ported.


Ian Lynagh is currently looking into cross-compilation and should be 
able to tell you more.


Cheers,
Simon



