Re: Enabling TypeHoles by default

2014-01-14 Thread Duncan Coutts
On Tue, 2014-01-14 at 17:44 +0100, Johan Tibell wrote:
 I can make another cabal release if needed, if someone submits a pull
 request with the right fix (i.e. add TypedHoles with TypeHoles as a
 synonym.)

Thanks Johan, or I'm happy to do it.

Duncan

 On Tue, Jan 14, 2014 at 5:33 PM, Austin Seipp aus...@well-typed.com wrote:
 
  At the very least, Type(d)Holes would never appear explicitly since it
  would be enabled by default. But it might be turned off (but I don't
  know who would do that for the most part.) Cabal at least might still
  need an update.
 
  In any case, Herbert basically summed it up: the time window is kind
  of close, and we would need to re-release/redeploy a few things most
  likely. I really think it mostly depends on the Cabal team and what
  their priorities are. I've CC'd Duncan and Johan for their opinions.
 
  On Tue, Jan 14, 2014 at 10:27 AM, Herbert Valerio Riedel h...@gnu.org
  wrote:
   Hi,
  
   On 2014-01-14 at 17:14:51 +0100, David Luposchainsky wrote:
   On 14.01.2014 17:07, Austin Seipp wrote:
   We probably won't change the name right now, however. It's already
   been put into Cabal (as a recognized extension), so the name has
   propagated a slight bit. We can however give it a new name and
   deprecate the old -XTypeHoles in the future. Or, we could change
   it, but I'm afraid it's probably a bit too late in the cycle for
   other devs to change.
  
   Removing a name later on is more time-consuming, with or without
   deprecation. People get used to the wrong name and stop caring, but
    I can already picture the "type holes are really typed holes"
    discussions on IRC. I'm strongly in favour of introducing the new name
   (and the deprecation for the synonym) as early as possible. This
   change should not be very extensive anyway, so why not slip it in?
  
   Well, as Austin hinted at, this would also require a Cabal-1.18.x
   release in time for the final 7.8, and a recompile of Hackage to pick it
   up so that people can start using the new 'TypedHoles' token in their
   .cabal files... so there's a bit of coordination required to make this
   happen in a timely manner... Or put differently, somebody has to care
   enough to invest some time and pull this through :-)
  
   Cheers,
 hvr
  
 
 
 
  --
  Regards,
 
  Austin Seipp, Haskell Consultant
  Well-Typed LLP, http://www.well-typed.com/
 


-- 
Duncan Coutts, Haskell Consultant
Well-Typed LLP, http://www.well-typed.com/

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Cabal and cross compilation

2013-01-23 Thread Duncan Coutts
On 23 January 2013 05:41, Nathan Hüsken nathan.hues...@posteo.de wrote:
 Hey,

 I am working on getting ghc to cross compile to android.

 When trying to get haskeline to compile. I want to change the cabal file
 such that it sets a flag when compiling for android.

 For that I changed cabal so that it recognizes android as a OS.
 But cabal seems to get its os information from System.Info.os, and from
 what I can tell this always returns the host os and not the target os.

 Am I getting this right, is cabal unaware of the target os?
 How can we change this?

That's right, currently Cabal only knows about the host OS & arch, not
the target. Adding proper cross-compilation awareness and support to
Cabal will require some hacking in the Cabal library (to pass in the
target platform, toolchain, etc.).

Duncan



Re: Bytestring and GHC 7.6.2

2013-01-13 Thread Duncan Coutts
On 12 January 2013 16:05, Ian Lynagh i...@well-typed.com wrote:
 On Tue, Jan 08, 2013 at 08:10:18PM +, Duncan Coutts wrote:

 Either way, lemme know if this is all fine, and I'll make the 0.10.0.2
 release.

 Looks good, thanks! I've updated the GHC 7.6 repo to match the tag.

Ta muchly!



Re: relocatable packages: GHC_PACKAGE_PATH and package.conf

2012-05-28 Thread Duncan Coutts
On 28 May 2012 05:36, Tim Cuthbertson t...@gfxmonk.net wrote:

  - ghc doesn't seem to support ${pkgroot} prefixes. I thought it did,
 but I'm new to this so I may be misunderstanding where they can be
 used.

I thought it did too, since I think I wrote the code for it. I don't
recall exactly which version it got into; it may well have been only
7.2+.
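
For reference, the intention is that fields in an installed package
description can be given relative to ${pkgroot}, the directory
containing the package database. An illustrative fragment (hypothetical
package name and paths, not from this thread):

```
name:         mypkg
version:      1.0
import-dirs:  ${pkgroot}/mypkg-1.0
library-dirs: ${pkgroot}/mypkg-1.0
```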

 Additionally, the paths that come out of cabal build have the compiler
 name and version hard coded, e.g. lib/packagename/ghc-7.0.4/*. Is there
 any way to configure how this path is constructed to get rid of the
 ghc-7.0.4 part?

By default, yes, cabal produces packages with absolute paths. It does
have support for relocatable packages on some compiler/platform combos:

http://www.haskell.org/cabal/users-guide/installing-packages.html#prefix-independence

sadly ghc on unix is not one of them because we do not have a reliable
way to find the program location (needed to find data files etc).
Actually more specifically it's not easy and nobody has implemented
it, rather than it being impossible.

So at the moment you could work around it in specific cases by hacking
the package registration info before registering. Do something like:
cabal copy --destdir=...
cabal register --gen-pkg-config=blah.pkg
sed -i -e '...' blah.pkg

Obviously your app/library had better not use the Cabal-provided
functions for finding data files at runtime since that'll get
confused.

If you want a proper solution you'll have to help us implement the
Cabal prefix independence feature for the ghc/unix combo.

Duncan



Re: GHC 7.2.2 Distribution.Simple.Program.Ar

2012-05-18 Thread Duncan Coutts
On 18 May 2012 20:20, Joe Buehler as...@cox.net wrote:
 I built GHC 7.2.2 on a LINUX box running RHEL 3.  When compiling a package 
 using
 this GHC it is trying to invoke ar thus:

 execve(/usr/bin/ar, [/usr/bin/ar, -r, -c,
 dist/build/libHSregex-base-0.93, dist/build/Text/Regex/Base.o,
 dist/build/Text/Regex/Base/Regex..., dist/build/Text/Regex/Base/Conte...,
 dist/build/Text/Regex/Base/Impl], [/* 45 vars */]) = 0

 My version of ar does not like being invoked as "/usr/bin/ar -r -c lib.a file
 file file...", it complains that the .a file is missing. I believe it should
 be "/usr/bin/ar rc lib.a file file file".

The -c flag is to tell it to create the archive (so not to complain if
the file is missing).

You're saying it accepts "ar rc" but rejects "ar -r -c"?

I was under the impression that POSIX allowed the '-' on the ar
command-line flags; e.g. http://www.unix.com/man-page/posix/1posix/ar/

 This appears to originate in Distribution.Simple.Program.Ar.

Yes.

 Can someone tell me what is going on here?

I'm very surprised it's not working on some version of Red Hat. This
has worked on many varieties of linux for many years. You don't have
some non-standard ar installed do you? What version of gnu binutils?
(ar -V)

Duncan



Re: GHC 7.2.2 Distribution.Simple.Program.Ar

2012-05-18 Thread Duncan Coutts
On 18 May 2012 22:03, Joe Buehler as...@cox.net wrote:
 Duncan Coutts wrote:

 I'm very surprised it's not working on some version of Red Hat. This
 has worked on many varieties of linux for many years. You don't have
 some non-standard ar installed do you? What version of gnu binutils?
 (ar -V)

 No, it's the RHES ar program.

 # rpm -qf /usr/bin/ar
 binutils-2.14.90.0.4-42

 A CentOS 6 box works fine, so this may be a bug in RHES 3.

 The installed binutils is the latest for RHES 3.  Locally compiled
 versions of binutils have the same bug, so perhaps there is a bug
 elsewhere in the system.  For example, I do not have the very latest C
 library.

As a local workaround you can of course hack your Cabal library
sources and reinstall the lib. Until we work out what's going on I'm a
bit reluctant to change the upstream version since that has been tested
on so many systems (Linuxes, BSDs, other unixes).

Duncan



Re: Taking binary from hackage or GHC?

2012-02-08 Thread Duncan Coutts
On 8 February 2012 10:24, Joachim Breitner nome...@debian.org wrote:
 Dear interested parties :-),

 GHC 7.4.1 started to ship and expose the binary library, version
 0.5.0.3. On hackage is binary-0.5.1.0.

It was firmly my opinion that shipping and exposing binary in GHC was
and is a mistake. Previously it was given a different name to try to
discourage people using it, but apparently that didn't work. The
authors of binary (myself included) don't want to ship it yet as part
of the Haskell Platform because the API isn't right yet (ongoing
work), and shipping it with GHC effectively makes it part of the
platform.

 In Debian, we try to provide one version of each library, so we have to 
 decide:

Yes, you're not put in an easy situation. Nor will we be when we come
to packaging the next HP release.

  * Use the version provided by GHC and drop the independent binary
 package (as we have done with random, for example).

  * Do not expose binary in GHC and continue using the version from
 hackage.

I'm not sure I have the whole answer, you'll also need a response from Ian.

Eventually we will want to propose binary for the HP, but GHC may well
still want to depend on binary and ship it. So it might end up as one
of those HP libs that is shipped with GHC in the long term.

 @Upstream: Do you think binary on hackage will diverge much from the one
 in GHC and would you expect your users to want the new versions before
 they are shipped with GHC?

No, it should not diverge much; GHC picks up the latest code from the
upstream version occasionally.

 And do you expect breakage in any components
 (e.g. haddock) if everything but GHC uses a newer binary package?

At some point, we will have a major version change and that will break
the API and the binary format (we might even split the package in
two).

If they use similar versions but not the same, then probably the only
thing to break would be haddock, since I'm guessing that it makes use
of binary instances provided by the GHC package. But of course haddock
is also shipped with GHC.

Duncan



Re: Way to expose BLACKHOLES through an API?

2011-11-16 Thread Duncan Coutts
On Tue, 2011-11-08 at 15:43 +, Simon Marlow wrote:

 Hmm, but there is something you could do.  Suppose a thread could be in 
 a mode in which instead of blocking on a BLACKHOLE it would just throw 
 an asynchronous exception WouldBlock.  Any computation in progress would 
 be safely abandoned via the usual asynchronous exception mechanism, and 
 you could catch the exception to implement your evaluateNonBlocking 
 operation.
 
 I'm not sure this would actually be useful in practice, but it's 
 certainly doable.

The linux kernel folks have been discussing a similar idea on and off
for the last few years. The idea is to return in another thread if the
initial system call blocks.

Perhaps there's an equivalent here. We have an evaluateThingy function
and when the scheduler notices that thread is going to block for some
reason (either any reason or some specific reason) we return from
evaluateThingy with some info about the blocked thread.

The thing that the kernel folks could never decide on was thread
identity: whether it is the original thread that blocks and a new
thread that returns, or the original thread that returns while a clone
is the one that blocks.

Or perhaps it's a crazy idea and it would never work at all :-)

Duncan




Re: Should GHC default to -O1 ?

2011-11-09 Thread Duncan Coutts
On 9 November 2011 13:53, Greg Weber g...@gregweber.info wrote:
 How much does using ghc without cabal imply a newer programmer? I don't use
 cabal when trying out small bits of code (maybe I should be using ghci), but
 am otherwise always using cabal.

The main reason cabal has always defaulted to -O is because
historically it's been assumed that the user is installing something
rather than just hacking on their own code.

If we can distinguish cleanly in the user interface between the
installing and hacking use cases then we could default to -O0 for the
hacking case.

Duncan

 On Wed, Nov 9, 2011 at 3:18 AM, Duncan Coutts duncan.cou...@googlemail.com
 wrote:

 On 9 November 2011 00:17, Felipe Almeida Lessa felipe.le...@gmail.com
 wrote:
  On Tue, Nov 8, 2011 at 3:01 PM, Daniel Fischer
  daniel.is.fisc...@googlemail.com wrote:
  On Tuesday 08 November 2011, 17:16:27, Simon Marlow wrote:
  most people know about 1, but I think 2 is probably less well-known.
  When in the edit-compile-debug cycle it really helps to have -O off,
  because your compiles will be so much quicker due to both factors 1 
  2.
 
  Of course. So defaulting to -O1 would mean one has to specify -O0 in
  the
  .cabal or Makefile resp. on the command line during development, which
  certainly is an inconvenience.
 
  AFAIK, Cabal already uses -O1 by default.

 Indeed, and cabal check / hackage upload complain if you put -O{n} in
 your .cabal file.

 The recommended method during development is to use:

 $ cabal configure -O0


 Duncan






Re: Incrementally consuming the eventlog

2011-05-01 Thread Duncan Coutts
On Thu, 2011-04-28 at 23:31 +0200, Johan Tibell wrote:

 The RTS would invoke listeners every time a new event is written. This
 design has many benefits:
 
 - We don't need to introduce the serialization, deserialization, and
 I/O overhead of first writing the eventlog to file and then parsing it
 again.

The events are basically generated in serialised form (via C code that
writes them directly into the event buffer). They never exist as Haskell
data structures, or even C structures.

 - Programs could monitor themselves and provide debug output (e.g. via
 some UI component).
 - Users could write code that redirects the output elsewhere e.g. to a
 socket for remote monitoring.
 
 Would invoking a callback on each event add too big of an overhead?

Yes, by orders of magnitude. In fact it's impossible because the act of
invoking the callback would generate more events... :-)

 How about invoking the callback once every time the event buffer is
 full?

That's much more realistic. Still, do we need the generality of pushing
the event buffers through the Haskell code? For some reason it makes me
slightly nervous. How about just setting which output FD the event
buffers get written to?

Turning all events or various classes of events on/off at runtime should
be doable. The design already supports multiple classes, though
currently it just has one class (the 'scheduler' class). The current
design does not support fine grained filtering at the point of event
generation.

Those two features combined (plus control over the frequency of event
buffer flushing) would be enough to implement a monitoring socket
interface (web http or local unix domain socket).

Making the parser in the ghc-events package incremental would be
sensible and quite doable as people have already demonstrated.

Duncan




Re: Package management

2011-05-01 Thread Duncan Coutts
On Tue, 2011-04-26 at 14:05 -0700, Brandon Moore wrote:
 Based on my own misadventures and Albert Y. C. Lai's SICP 
 (http://www.vex.net/~trebla/haskell/sicp.xhtml)
 it seems that the root of all install problems is that reinstalling a
 particular version of a particular package deletes any other existing
 builds of that version, even if other packages already depend on them.
 
 Deleting perfectly good versions seems to be the root of all package
 management problems.

Yes.

 There are already hashes to keep incompatible builds of a package separate. 
 Would anything break if existing packages were left alone when a new
 version was installed? (perhaps preferring the most recent if a
 package flag specifies version but not hash).

That is the nix solution. It is also my favoured long term solution.

 The obvious difficulty is a little more trouble to manually specify packages. 
 Are there any other problems with this idea?

See nix and how it handles the configuration and policy issues thrown up
by allowing multiple instances of the same version of each package. For
example, they introduce the notion of a package environment which is a
subset of the universe of installed packages.
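
The idea of leaving existing builds alone can be sketched as a store
keyed by (name, version, build hash) — illustrative Python only, not
Cabal's or nix's actual representation:

```python
# Sketch: installed packages keyed by (name, version, build hash), so
# installing a new build of the same version never deletes an old one.
store = {}

def install(name, version, build_hash, payload):
    store[(name, version, build_hash)] = payload  # other builds untouched

def builds(name, version):
    # all coexisting builds of one name-version
    return [k[2] for k in store if k[:2] == (name, version)]

install("bytestring", "0.9.1.0", "hash-a", "built against base-3")
install("bytestring", "0.9.1.0", "hash-b", "built against base-4")
print(builds("bytestring", "0.9.1.0"))  # ['hash-a', 'hash-b']
```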

Duncan




Re: build issue: * Missing header file: HsBase.h

2010-12-16 Thread Duncan Coutts
On 16 December 2010 10:02, Simon Marlow marlo...@gmail.com wrote:

 ghc-cabal: Missing dependency on a foreign library:
 * Missing header file: HsBase.h
 This problem can usually be solved by installing the system package that
 provides this library (you may need the -dev version). If the library is
 already installed but in a non-standard location then you can use the
 flags
 --extra-include-dirs= and --extra-lib-dirs= to specify where it is.


 The problem is HsBase.h is where it is on my reference build tree on
 workstation:

 -bash-4.0$ find . -name 'HsBase.h'
 ./libraries/base/include/HsBase.h


 I suppose some external library might be missing, but here the error is
 quite misleading and I cannot find which one might be the culprit of
 this error.

 Do you have any idea what to install in order to proceed?

 I don't know what's going on here, I'm afraid.  Looks like Cabal tried to
 find HsBase.h and couldn't find it - so either it wasn't there (but you say
 it was), or Cabal was looking in the wrong place.  Maybe follow up the
 latter hypothesis?

Cabal will report this error when it cannot compile HsBase.h; that
usually means it is missing, but it's also possible that something
just does not compile. This is like the check that ./configure scripts
do. It's rather hard from the exit code of gcc to work out if it's
genuinely missing, or fails to compile (though we could try doing cpp
and cc phases separately).

One can run with -v3 to see the error that gcc reports.

Duncan



Re: Wadler space leak

2010-11-08 Thread Duncan Coutts
On 8 November 2010 13:28, Simon Marlow marlo...@gmail.com wrote:

 There's another approach in Jan Sparud's paper here:

 http://portal.acm.org/citation.cfm?id=165196

 although it's not clear that this interacts very well with inlining either,
 and it has a suspicious-looking side-effecting operation.  It also looks
 like it creates a circular reference between the thunk and the selectors,
 which might hinder optimisations, and would probably also make things slower
 (by adding extra free variables to the thunk).

This proposal is mentioned favourably by Jörgen Gustavsson and David
Sands in [1] (see section 6, case study 6). They mention that there is a
formalisation in Gustavsson's thesis [2]. That may say something about
inlining, since that's just the kind of transformation they'd want to
show is a space improvement.

[1]: Possibilities and Limitations of Call-by-Need Space Improvement (2001)
  http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.8.4097

[2]: Space-Safe Transformations and Usage Analysis for Call-by-Need
Languages (2001)
  (which I cannot immediately find online)

Duncan


Re: $thisdir for package.conf?

2010-08-12 Thread Duncan Coutts
On 12 August 2010 02:20, Greg Fitzgerald gari...@gmail.com wrote:
 Is there a way for a package.conf file to contain paths that are relative to
 the directory containing the .conf file?  GHC 6.12.1 chokes on relative
 paths.  I see the problem is solved for GHC's core libraries with the
 $topdir variable.  Is there something like a $thisdir we could use in
 inplace .conf files?

We came up with a specification for this but it is not yet implemented:

http://www.haskell.org/pipermail/libraries/2009-May/011772.html
http://hackage.haskell.org/trac/ghc/ticket/3268

Patches welcome.

Duncan


Re: [Haskell-cafe] cabal: problem building ffi shared library and significance of __stginit

2010-05-19 Thread Duncan Coutts
On Tue, 2010-05-18 at 17:31 -0400, Anthony LODI wrote:
 Hello,
 
 I'm trying to build some haskell code as a .so/.dll so that it can
 ultimately be used by msvc.  I have it working when I compile by hand
 (listed below) but I can't get the exact same thing built/linked with
 cabal.  On linux everything builds fine, but when I try to link the
 resulting .so file, I get an error about a missing
 '__stginit_CInterface' reference.  Indeed I couldn't find that name in
 any of the cabal-generated .dyn_o files.  I checked the output of
 'cabal build -v' and it seems to be executing about the same thing
 that I'm executing manually so I'm not sure what could be going wrong.
  On windows cabal won't even configure since '--enable-shared' seems
 to imply '-dynamic' (right?), and that's not currently supported.
 
 Also, when I remove the line 'hs_add_root(__stginit_CInterface);', and
 the corresponding forward declaration, the program runs fine!  Does
 ghc no longer need this call or are my toy programs just being lucky
 so far?

For reference for other people, Anthony and I worked this out today.

full example:
http://pastebin.com/aLdyFMPg

The difference between doing it manually and building a library via
Cabal is the package name.

When building directly with ghc, the default package name is "" (aka the
main package). When building a ghc/Haskell package, the package name gets
set (ghc -package-name test-0.0). This package name gets encoded into
the symbol names. So we get:

   __stginit_testzm0zi0_CInterface
vs __stginit_CInterface

(testzm0zi0 is the Z-encoding of test-0.0)

What is bad here is that the __stginit stuff is even necessary. Anthony
is going to file a ghc ticket and/or complain on the ghc users list,
citing this example.

Duncan



Re: -O vs. -O2

2010-05-08 Thread Duncan Coutts
On Wed, 2010-05-05 at 21:24 +1000, Roman Leshchinskiy wrote:
 Whenever I do cabal sdist on one of my projects, I get this warning:
 
 Distribution quality warnings:
 'ghc-options: -O2' is rarely needed. Check that it is giving a real benefit
 and not just imposing longer compile times on your users.
 
 This finally got me curious and I did a nofib run to compare -O to
 -O2. The results are below (this is with the current HEAD).
 
 Is there a real-world example of -O2 causing significantly longer
 compile times without providing a real benefit? If not, would it
 perhaps make sense for Cabal to use -O2 by default or even for GHC to
 make the two flags equivalent?

It should be -O1 for default/balanced optimisations and -O2 for things
involving a bigger tradeoff in terms of code size or compile time. So
any optimisations in -O2 that GHC HQ believe are a no-brainer for the
majority of packages should be moved into -O1.

It's fine for people writing performance sensitive code to use -O2 in
their packages. It's just not something we need to encourage for random
packages. Before we added that warning, many package authors were not
really thinking and just chucking in -O2 because "2 is bigger than 1 so
it must be better, right?". There certainly used to be packages that took
longer to compile, generated more code, and ran slower when using -O2.
That was some time ago of course.

Duncan



Parallel Haskell: 2-year project to push real world use

2010-04-30 Thread Duncan Coutts

GHC HQ and Well-Typed are very pleased to announce a 2-year project
funded by Microsoft Research to push the real-world adoption and
practical development of parallel Haskell with GHC.

We are seeking organisations to take part: read on for details.

In the last few years GHC has gained impressive support for parallel
programming on commodity multi-core systems. In addition to traditional
threads and shared variables, it supports pure parallelism, software
transactional memory (STM), and data parallelism. With much of this
research and development complete, and more on the way, the next stage
is to get the technology into more widespread use.

This project aims to do the engineering work to solve whatever remaining
practical problems are blocking organisations from making serious use of
parallelism with GHC.  The driving force will be the *applications*
rather than the *technology*.

We will work in partnership with a small number of commercial or
scientific users who are keen to make use of parallel Haskell. We will
work with these partners to identify the issues, major or minor, that
are hindering progress. The project is prepared to handle system issues,
covering everything from compiler and runtime system through to more
mundane platform and tool problems.  Meanwhile our partners will
contribute their domain-specific expertise to use parallel Haskell to
address their application.

We are now seeking organisations to take part in this project.
Organisations do not need to contribute financially but should be
prepared to make a significant commitment of their own time.  We expect
to get final confirmation of the project funding in June and to start
work shortly thereafter.

Well-Typed will coordinate the project, working directly with both the
participating organisations and the Simons at GHC HQ. If you think your
organisation may be interested then get in touch with me, Duncan Coutts,
via i...@well-typed.com.

-- 
Duncan Coutts, Haskell Consultant
Well-Typed LLP, http://www.well-typed.com/



Re: Parallel Haskell: 2-year project to push real world use

2010-04-30 Thread Duncan Coutts
On Fri, 2010-04-30 at 10:25 -0400, Tyson Whitehead wrote:
 On April 30, 2010 06:32:55 Duncan Coutts wrote:
  In the last few years GHC has gained impressive support for parallel
  programming on commodity multi-core systems. In addition to traditional
  threads and shared variables, it supports pure parallelism, software
  transactional memory (STM), and data parallelism. With much of this
  research and development complete, and more on the way, the next stage
  is to get the technology into more widespread use.
 
 Does this mean DPH is ready for abuse?

This project is about pushing the practical use of the parallel
techniques that are already mature, rather than about pushing research
projects along further.

So this project is not really about DPH. On the other hand it's possible
someone might be able to make more immediate use of the dense, regular
parallel arrays which has been a recent spinoff of the DPH project. They
have the advantage of being considerably easier to implement, but much
less expressive than the full sparse, nested parallel arrays.

Duncan



Re: GHC.Prim.ByteArray# - confusing documentation

2009-12-26 Thread Duncan Coutts
On Thu, 2009-12-24 at 18:18 -0500, Antoine Latter wrote:
 Folks,
 
 I found some of the documentation in GHC.Prim confusing - so I thought
 I'd share. The documentation for the ByteArray# type[1] explains
 that it's a raw region of memory that also remembers its size.
 
 Consequently I expected sizeOfByteArray# to return the same number
 that I passed in to newByteArray#. But it doesn't - It returned
 however much it decided to allocate, which on my platform is always a
 multiple of four bytes.

Yes, this is an artefact of the fact that ghc measures heap stuff in
units of words.

 This is something which could be clarified in the documentation.

It would be jolly useful for making short strings if GHC's ByteArray#
were to use a byte length rather than a word length. It'd mean a little
more bit twiddling in the GC code that looks at ByteArray#s, however
it'd save an extra 2 words in a short string type (or allow us to store
'\0' characters in short strings).

It's been on my TODO list for some time to design a portable low level
ByteArray module that could be implemented by hugs, nhc, ghc, etc. The
aim would be to be similar to ForeignPtr + Storable but using native
heap allocated memory blocks.

In turn this would be the right portable layer on which to build
ByteString, Text and probably IO buffers too.

Duncan



Re: GHC-6.12.1: broken configure

2009-12-23 Thread Duncan Coutts
On Wed, 2009-12-23 at 21:49 +, Simon Marlow wrote:

 I personally think we should revert to using the standard config.guess 
 and normalising the result as we used to.

Aye.

 It was changed due to this:
 
 http://hackage.haskell.org/trac/ghc/ticket/1717
 
 so we should find another way around that.

I think we should just tell the ppc linux people to get their change to
config.guess pushed upstream. They cannot expect to change every package
instead.

Duncan



Re: cabal-install-0.8 final testing

2009-12-22 Thread Duncan Coutts
On Mon, 2009-12-21 at 11:34 +, Gracjan Polak wrote:
 Duncan Coutts duncan.coutts at googlemail.com writes:
 
if flag(test)
Buildable: True
Build-depends: base5, bytestring, HUnit, directory
else
Buildable: False
  
 
 Is this solution good for the time being? If so, I'll change it to make peace
 and happiness prevail among cabal users.

I suggest you don't change anything until we decide what the semantics
are supposed to be in this case.

 Side question: mmaptest is meant to be a devel/testing thing only that is not
 built during normal usage. Is there a better way to achieve such purpose?

Not something purpose designed at the moment.

Duncan



Re: cabal-install-0.8 final testing

2009-12-21 Thread Duncan Coutts
On Mon, 2009-12-21 at 12:44 +, Simon Marlow wrote:

   The current Cabal code takes a slightly different approach. It says at
   the end "oh, this exe isn't buildable, so its deps do not contribute to
   the deps of the package".
 
  The problem is what it was doing before that. It sees the dependency on
  HUnit and checks that it can be satisfied. It's only at the end that it
  ends up discarding it. So if it was not actually available then it fails
  earlier.
 
 I was following the description up until this paragraph (too many its 
 and thats, I'm not sure what they all refer to). Don't worry about it 
 if it's hard to explain, but if you have time to elaborate a bit I'd be 
 interested.

Sorry, I'll try again:

There are essentially two stages to the resolution: the main pass and a
simple post-processing pass. The post-processing notes that some
components are not buildable and so ignores the deps from those components.

But that's not good enough because the first stage would already have
failed if those dependencies of the non-buildable component were not
available. So it's no good just doing it as a post-processing stage. We
must properly express the fact that the dependencies are optional and
related to whether or not the component is buildable.

The solver currently does not know that buildable is special in any
way. Indeed, that it should be special is rather irksome. We had this
field before we added conditionals. We would not have added buildable
this way after adding conditionals. Instead we should have added
something with comprehensible semantics, like fail.

Duncan



Static linking and linker search paths

2009-12-21 Thread Duncan Coutts
All,

Conor's problems on OSX with libiconv reminds me of something I've been
thinking about for a little while.

So the problem is that the semantics of static linking do not at all
match what we really want. We can easily and accidentally interpose the
wrong library in place of the one we wanted.

In the iconv case people have hit recently we've got this situation:

libHSbase-4.2.0.0.a
contains references to _iconv_open, _iconv, _iconv_close

${system}/libiconv.a

${macports}/libiconv.a

When the code for libHSbase was built, it was against the header files
for the standard system version of iconv on OSX. Thus it is intended
that we link against the system version of iconv.

Now suppose the system is set up such that only the system iconv is
found by default. So I am assuming there is no LD_LIBRARY_PATH or
similar set. Then when we link something we correctly resolve the
references in base to the system iconv lib.

When we call gcc to link we're doing something like:

-L${ghclibdir} -lHSbase-4.2.0.0.a -liconv

Now suppose we build some FFI package (libHSfoo-1.0.a) that provides a
binding to a C lib installed via macports (libfoo.a). When we link a
program that depends indirectly on this FFI package then the gcc linker
line is like:

-L${ghclibdir} -L/opt/local/lib
-lHSfoo-1.0.a -lfoo -lHSbase-4.2.0.0.a -liconv

and now it all breaks.

We're asking the linker to look in /opt/local/lib first for *all the
remaining libraries* and that includes iconv, so we pick up the wrong
iconv. Our intention was just to look for -lfoo using /opt/local/lib,
but we cannot express that to the linker.

The problem is that our notion of library dependencies is hierarchical
but for the system linker it is linear. We know that package foo-1.0
needs the C library libfoo.a and that it should be found by looking
in /opt/local/lib. But we end up asking the linker to look there first
for all other libs too, including for the -liconv needed by the base
package. The registration for the base package never
mentioned /opt/local/lib of course.

What we'd really like to say is something like:

link HSfoo-1.0.a
push search path /opt/local/lib
link foo
pop search path

If we think we know enough about the behaviour of the system linker and
its search path then we could do just:

-L${ghclibdir}
-lHSfoo-1.0.a /opt/local/lib/libfoo.a -lHSbase-4.2.0.0.a -liconv

that is, specify the .a file exactly by fully resolved path and not add
anything to the linker search path. (Actually we'd do it for the
libHS*.a ones too since we know precisely where they live)

The possible danger is that how we search for the library and how the
system linker searches for it might not match exactly and we could end
up sowing confusion.

Worth thinking about anyway.

Note that some of this is different for shared libs.

Duncan



Re: cabal-install-0.8 final testing

2009-12-19 Thread Duncan Coutts
On Fri, 2009-12-18 at 19:44 -0800, Dave Bayer wrote:
 Hi Duncan,
 
 Installation was easy (I typed cabal-install HTTP zlib
 cabal-install ;-).

Thanks for testing it. I've uploaded it to hackage.

 Overall, seems to work fine. I couldn't build darcs, but I couldn't do
 that by hand either; I used their binary installer. I don't think they
 build yet under GHC 6.12.1.
 
 One oddity, I tried to use cabal install to install mmap, and it
 failed because the HUnit package was missing. I then installed HUnit
 without incident, went back and installed mmap without incident. No
 idea why this didn't work in one pass, but I have sandbox systems if
 you'd like me to see if I can reliably reproduce this.

Mm. This is a worse bug than I thought. It's not trivial to fix. I'll
have to think about it.

The problem is mmap uses:

Executable mmaptest
  Main-is: tests/mmaptest.hs
  if flag(mmaptest)
  Buildable: True
  else
  Buildable: False
  Build-depends: base<5, bytestring, HUnit, directory

Now the question is what this means exactly. The previous version of
Cabal said essentially the executable needs HUnit, thus the package
needs HUnit. This despite the fact that we're not going to actually
build this test executable!

The current Cabal code takes a slightly different approach. It says at
the end oh this exe isn't buildable, so its deps do not contribute to
the deps of the package.

The problem is what it was doing before that. It sees the dependency on
HUnit and checks that it can be satisfied. It's only at the end that it
ends up discarding it. So if it was not actually available then it fails
earlier.

The reason it's then inconsistent between configuring a package and what
the cabal install planner is doing is that the planner assumes all the
packages on hackage are available (sort of) while when we get to
actually configuring a package to install it, only the other installed
packages are available. So that's why the same solver gives us different
answers, because we're making different assumptions about the available
packages.

So the issue is we need to treat Buildable: False specially in the
solver, because if we end up picking a configuration with Buildable:
False for a component then we must not have already required the
dependencies for that component.

Essentially we want to reinterpret it as something like:

  if flag(test)
  Buildable: True
  Build-depends: base<5, bytestring, HUnit, directory
  else
  Buildable: False

So that the dependencies themselves are conditional on the component
being buildable. Then the solver would do the right thing.

In general I guess the transformation would look like:

  if A1
Buildable: False
  if A2
Buildable: False
  ...

  if B1
Build-depends: blah
  if B2 
Build-depends: blah

where A1,A2,... and B1,B2,... are arbitrary conditions (including none)

then this becomes:

  if A1
Buildable: False 
  if A2
Buildable: False
  ...

  if B1 && ! (A1 || A2 || ...)
Build-depends: blah 
  if B2 && ! (A1 || A2 || ...)
Build-depends: blah
  ...

I don't especially like this though. It makes the meaning of buildable
rather magic. In the short term I may have to revert the change in
behaviour so that these dependencies become unconditional again, even
though they're not used. Sigh.

Duncan



Re: (alpha) stick shift cabal install for GHC 6.12.1

2009-12-18 Thread Duncan Coutts
On Thu, 2009-12-17 at 14:00 -0800, Dave Bayer wrote:

 Background: I never got cabal install to work on OS X 10.5 with GHC
 6.10.4, basically because zlib wouldn't work. Odd, because a perfectly
 good version of gunzip already exists on most platforms, and the code
 doesn't fall back to this version if needed.

Do you mean OSX 10.5 or 10.6. I've never heard of major problems on 10.5
and lots of problems on 10.6. The latter are all fixable.

The issue on 10.6 was that gcc defaults to compiling 64bit, but ghc
expects 32bit. The hack for ghc-6.10.4 was to change the wrapper script
to pass -optc-m32 -optl-m32. That's enough to get ghc working, but for
other packages that bind to foreign libs you also need to apply the same
trick to hsc2hs.

That's why so many people bumped into zlib not working, it was because
their hsc2hs was thinking it should be using 64bit when everything else
was expecting 32bit. That manifested in a zlib initialisation error
(because the structure size check fails).

So in short, the problems are not with cabal or zlib, you just need a
fully working ghc installation.

Duncan



cabal-install-0.8 final testing

2009-12-18 Thread Duncan Coutts
All,

If you'd like to help test the new cabal-install-0.8 release then grab
it here:

http://haskell.org/cabal/release/cabal-install-0.8.0/

It should work with ghc-6.10 and 6.12 (and indeed 6.8 and 6.6).

If nobody reports any major show stoppers then I'll upload this to
hackage.

Duncan



Re: ByteString-backed Handles, and another couple of questions

2009-12-15 Thread Duncan Coutts
On Tue, 2009-12-15 at 12:48 -0800, Bryan O'Sullivan wrote:

 
 Yes, that would amount to double-buffering, and would work nicely,
 only the current buffers go through foreign pointers while text uses
 an unpinned array. I can see why this is (so iconv can actually work),
 but it does introduce a fly into the ointment :-)

It should not be strictly necessary to use a ForeignPtr in this case. If
the IO buffers use pinned ByteArray#s then they can still be passed to
iconv for it to write into.

It should also be possible for Text to be constructed from a pinned
ByteArray#.

Duncan



Initial ghc-6.12 + hackage build results

2009-12-14 Thread Duncan Coutts
All,

I've tried building 1324 out of the ~1700 packages from hackage using
ghc-6.10.4 and ghc-6.12.0. This is the subset of packages that I could
build in one go.

Compared to the subset that I could build with ghc-6.10.4, I had to
chuck out 125 packages because their build dependency constraints
precluded them from building with ghc-6.12.

Of the remaining 1324 packages, 113 packages that built OK previously
now do not build.

Amongst those 113, the ones that cause the most knock-on failures are:
(16,packedstring-0.1.0.1)
(12,MissingH-1.1.0.1)
(11,syb-with-class-0.5.1)
(6,uvector-0.1.0.5)
(3,stringtable-atom-0.0.6)
(3,bindings-common-1.3.4)
(3,binary-strict-0.4.6)
(3,base64-string-0.1)

packedstring fails because it needs base 4, but only says
build-depends: base, so cabal uses its compatibility tricks and builds
it against base 3. It should specify build-depends: base == 4.*.

I need to investigate MissingH further. It failed because a dependency
of its non-buildable test program was not found. That should not have
been a problem.

syb-with-class has type errors, I'm guessing due to changes in the
template-haskell package.

uvector does not compile because of the changes in GHC's Handle
implementation.

stringtable-atom now fails with a type error for reasons I don't quite
understand:
src/StringTable/AtomMap.hs:320:0:
Occurs check: cannot construct the infinite type: a = (Int, a)
When generalising the type(s) for `findMax'
Perhaps some change in a containers function?

bindings-common and c2hs fail because the CLDouble FFI type has been
removed.

binary-strict fails because the export of Control.Applicative.many now
clashes with a local definition.

base64-string fails because it sets -Werror in its .cabal file (a
practice which has been banned for some time for just this reason).


Of the remaining 113:

2 failed at the configure step:
MissingH-1.1.0.1  -- this needs investigation, I smell a cabal bug
lax-0.1.0.0

52 failed at the build step (including those singled out above):
ArrayRef-0.1.3.1-- ghc Handle changes
Boolean-0.0.0   -- name clash with Control.Applicative.*
CCA-0.1.1   -- TH changes, missing Lift instances
ChasingBottoms-1.2.4-- name clash with Data.Sequence.partition
ChristmasTree-0.1.2 -- TH changes (Name vs TyVarBndr)
HCodecs-0.1 -- Data.Array.Diff disappeared 
HList-0.2   -- change in GADT syntax (context)
MonadLab-0.0.2  -- Some API change, not immediately obvious
base64-string-0.1   -- described above
binary-strict-0.4.6 -- described above
bindings-common-1.3.4   -- described above
bindings-gsl-0.1.1.6-- CLDouble
bloomfilter-1.2.6   -- declaration of a type or class operator `:*'
bytestringparser-0.3-- -Werror
c2hs-0.16.0 -- CLDouble
cabal2arch-0.6  -- Cabal API changes
cabal2doap-0.2  -- Cabal API changes
compact-map-2008.11.9   -- GHC Handle changes
conjure-0.1 -- Data.Array.Diff disappeared
flock-0.1   -- -Werror fail
gtk2hs-cast-gtk-0.10.1.2-- type error, not immediately obvious
hask-home-2009.3.18 -- Cabal lib API changes
hgalib-0.2  -- Data.Array.Diff disappeared
hircules-0.3.92 -- ambiguity GHC.IO.liftIO vs C.M.State.liftIO
hnop-0.1-- -Werror fail
hommage-0.0.5   -- GHC.IO changes
hxt-filter-8.3.0-- name clash System.IO.utf8 and local name
ideas-0.5.8 -- A pattern match on a GADT requires -XGADTs
ieee-0.6-- CLDouble
ivor-0.1.9  -- 'rec' now a keyword
loch-0.2-- -Werror fail
nano-hmac-0.2.0 -- -Werror fail
nanocurses-1.5.2-- package specifies a non-existent config.h (which used
to accidentally pick up ghc's config.h); this happens because the package
needs to run ./configure but does not say so.
network-fancy-0.1.4 -- GHC Handle changes
pkggraph-0.1-- Cabal lib API changes
plugins-1.4.1   -- Cabal lib API changes
posix-realtime-0.0.0.1  -- type of internal IOError constructor
printf-mauke-0.3-- CLDouble
pugs-HsSyck-0.41-- mdo syntax removed?
random-fu-0.0.3 -- type error, not immediately obvious
rdtsc-1.1.1 -- -Werror fail
repr-0.3.1  -- same as random-fu
safe-lazy-io-0.1-- GHC Handle changes
sendfile-0.5-- GHC Handle changes
sessions-2008.7.18  -- needs -XTypeOperators
stringtable-atom-0.0.6  -- described above
syb-with-class-0.5.1-- described above
type-settheory-0.1.2-- TH lib API changes
uu-parsinglib-2.3.0 -- scoped type var changes?
uuagc-0.9.12-- 'rec' now a keyword
uvector-0.1.0.5 -- GHC Handle changes
word24-0.1.0-- Not in scope: `divZeroError'

The remaining 59 are knock-on failures, ie packages that depended on one
of the failed packages.

Duncan


Re: ANNOUNCE: GHC version 6.12.1

2009-12-14 Thread Duncan Coutts
On Mon, 2009-12-14 at 22:49 +0100, Daniel Fischer wrote:
 Oh great, that's not what I expected:
 
 $ cabal install cabal-install
 cabal: This version of the cabal program is too old to work with ghc-6.12+.
 You will need to install the 'cabal-install' package version 0.8 or higher.
 If you still have an older ghc installed (eg 6.10.4), run:
 $ cabal install -w ghc-6.10.4 'cabal-install >= 0.8'
 $ cabal install -w ghc-6.10.3 'cabal-install >= 0.8'
 Resolving dependencies...
 cabal: There is no available version of cabal-install that satisfies >=0.8
 
 Oops, nothing higher than 0.6.4 on Hackage, even
 darcs.haskell.org/cabal-install is only version 0.7.5. 

Right, the cabal-install 0.8.x release will appear in due course.

It shouldn't be too long since I've already been using it for Hackage
regression testing of ghc-6.12.

 That seems to work, though, but I needed to manually install network, mtl and 
 parsec 
 before bootstrap.sh ran.

Yes, the bootstrap needs updating to take account of the fact that those
packages are no longer shipped with ghc-6.12.

Duncan



RE: GHC 6.12 + zlib + Mac OS 10.6

2009-11-30 Thread Duncan Coutts
On Mon, 2009-11-30 at 08:44 +, Simon Peyton-Jones wrote:
 Should this go in a FAQ? For GHC? Or for a particular architecture?

For ghc-6.10, yes. It should be a section GHC on OSX 10.6 (Snow
Leopard) and should describe the changes required to the shell script
wrappers of ghc and hsc2hs. It should also note that none of this is
necessary for ghc-6.12+.

For ghc-6.12, we should just fix ticket #3681.

http://hackage.haskell.org/trac/ghc/ticket/3681


Duncan



Re: -package-name flag in 6.10.x

2009-11-25 Thread Duncan Coutts
On Wed, 2009-11-25 at 04:56 +, Malcolm Wallace wrote:
 Moreover, when I attempt to use the flag, like so:
 
  $ ghc -package-name hat-2.06 ...
  command line: cannot parse 'hat-2.06' as a package identifier
 
 This used to work with ghc-6.6.x and ghc-6.8.x, but seems to have  
 stopped working with ghc-6.10.x.  I surmise that the leading zero  
 after the version point separator is to blame?  It seems an  
 unfortunate regression.

On the contrary, it is good that it checks this now. The package name
compiled into the code really ought to match the package name registered
with ghc-pkg and the latter is a package identifier, not an arbitrary
string.

In principle it would be possible to parse 2.06 as a Version [2,6],
however I think we decided that allowing that redundancy in the original
string is not a good idea since people might expect 2.6 and 2.06 to be
different. It would also mean we could not rely on lossless round trips
between parser and printer which would be annoying for things like
finding the file or directory containing a package (imagine we go
looking for 2.6/ when the files are really in 2.06/).
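As a rough illustration of the lossy round trip: plain shell arithmetic
below stands in for the version parser and printer, and the 2.06
directory is hypothetical:

```shell
tmp=$(mktemp -d)
mkdir "$tmp/2.06"      # the package's files live under its original string
v="2.06"
# "parse" the string into its numeric components [2,6]...
major=$(expr "${v%%.*}" + 0)
minor=$(expr "${v#*.}" + 0)
# ...then "print" it back: the leading zero is gone
printed="$major.$minor"
echo "$printed"        # 2.6
# so a lookup using the printed form misses the real directory:
[ -d "$tmp/$printed" ] || echo "no $printed/ here, the files are in 2.06/"
```

Rejecting the redundant form at parse time, as ghc-6.10 now does, avoids
ever getting into this situation.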

Duncan



Re: ANNOUNCE: GHC 6.12.1 Release Candidate 2

2009-11-24 Thread Duncan Coutts
On Sun, 2009-11-22 at 22:00 -0600, Tom Tobin wrote:
 On Nov 22, 2009, at 11:53 AM, Ian Lynagh wrote:
  Hi all,
  
  We are pleased to announce the second release candidate for GHC 6.12.1:
  
 http://www.haskell.org/ghc/dist/6.12.1-rc2/
 
 IIRC, an earlier 6.12 RC announcement mentioned that cabal-install
 wasn't working yet; has this been resolved?

The current darcs version of cabal-install works with 6.12.

darcs get --partial http://darcs.haskell.org/cabal-install/



Duncan



Re: 3 questions regarding profiling in ghc

2009-11-13 Thread Duncan Coutts
On Fri, 2009-11-13 at 17:18 +0300, Daniil Elovkov wrote:

  Did you use -auto-all, to automatically create cost centers for all
  top-level functions?  I find that I get very verbose cost info for
  definitions under imported libraries.
 
 Yeah, I've got it. Modules in packages were done by cabal configure -p. 
 That probably doesn't imply -auto-all.

Right, because you usually do not want to see cost centres for all of
the dependent libs (especially not all the way down to the bottom of
the package stack).

What we need is somewhat better control in Cabal so you can say,
actually I do want this package to have cost centres.


Duncan



Re: [Haskell-cafe] Fwd: (Solved) cabal install with external gcc tool chain not the ghc-bundled one

2009-11-12 Thread Duncan Coutts
On Thu, 2009-11-12 at 10:46 +0100, Daniel Kahlenberg wrote:
 to answer this question myself how the use of another gcc is specified
 with effect, I used the following options with the 'cabal install' call:
 
  --ghc-options=-pgmc e:/programme/ghc/mingw-gcc4/bin/gcc.exe -pgml
 e:/programme/ghc/mingw-gcc4/bin/gcc.exe
 
 See
 http://www.haskell.org/ghc/docs/latest/html/users_guide/options-phases.html#replacing-phases
 (searched in the wrong direction, too many trees...). Slightly tuned,
 this should be the way to go in all similar cases.
 
 One thing I haven't considered yet is if the '--with-ld' and
 '--with-gcc' options (if curious too, see logs in my previous mail -
 Subject [Haskell-cafe] caba install with external gcc toolchain not the
 ghc-bundled one) only effect what gets written into the
 setup-config/package.conf file or what other effects these have.

Feel free to file a ticket about this. What makes me somewhat nervous is
that the gcc you want to use for say .c files is not necessarily the
same as the one ghc wants to use to compile .hc files or link stuff.
This is particularly the case on Windows where ghc includes its own copy
of gcc. Similarly on Solaris 10, ghc cannot use the /usr/bin/gcc because
it's a hacked-up gcc that uses the Sun CC backend (which doesn't grok
some of the crazy GNU C stuff that ghc uses).

So it'd certainly be possible to have cabal's --with-gcc/ld override the
ones that ghc uses by default, but the question is should it do so? I
think it's worth asking the ghc hackers about this.

Duncan



Re: Three patches for cabal

2009-11-09 Thread Duncan Coutts
On Fri, 2009-11-06 at 01:13 +0100, Niklas Broberg wrote:

  Can someone please comment on these two proposed changes. I agree with
  Niklas but I'm a bit reluctant to apply the patches without at least
  some sign of agreement from someone else.
 
  Deprecating PatternSignatures seems uncontroversial, but the
  NoMonoPatBinds is potentially controversial. GHC essentially uses
  -XMonoPatBinds by default, even in H98 mode, and the user can use
  -XNoMonoPatBinds to restore H98 behaviour. Niklas's and my point is that
  the list of language extensions in Language.Haskell.Extension are
  differences from H98 so it should be MonoPatBinds to get the difference
  not NoMonoPatBinds to restore H98.
 
  In practice, since ghc uses MonoPatBinds by default it'd mean that
  people who want to get back to H98 would need to use:
 
   ghc-options: -XNoMonoPatBinds
 
  Because the extensions field is additive, not subtractive. Using the
  name MonoPatBinds allows other compilers to implement it without it
  having to be the default.
 
 I had a look at the source for cabal HEAD and was surprised to see
 that this stuff had fallen by the wayside. What's holding it up? I
 can't imagine that anyone would be against the deprecation of
 PatternSignatures at least.

I'd forgotten they were separate patches. I've applied the
PatternSignatures one since that is indeed uncontroversial.

I don't think the discussion on the other ones were conclusive yet.

I think in the end I'm with Ian on his suggestion that we should allow
the No prefix to invert an extension. This would help in this case and
also let us handle things better when the default extensions change.

Duncan



Re: GHC, CPP and stringize

2009-10-30 Thread Duncan Coutts
On Fri, 2009-10-30 at 17:17 +, Neil Brown wrote:
 Hi,
 
 The GHC manual says that if you pass -cpp to GHC, it runs the C
 preprocessor, cpp on your code before compilation
 (http://www.haskell.org/ghc/docs/latest/html/users_guide/options-phases.html#c-pre-processor).
   But why, in that case, does stringize not seem to work when the -cpp flag 
 is given?


 #define TR(f) (trace #f f)


 What am I missing?

That ghc runs cpp in traditional mode, so it does not grok ANSI C
additions like the '#' stringize operator.

As I understand it we have to use traditional CPP because some modern
features break Haskell code.
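Assuming a GNU cpp is on the PATH, the difference between the two modes
is easy to see on that macro:

```shell
tmp=$(mktemp -d)
printf '#define TR(f) (trace #f f)\nTR(x)\n' > "$tmp/stringize.h"
# ANSI mode: '#' stringizes the argument, giving (trace "x" x)
ansi=$(cpp -P "$tmp/stringize.h")
# traditional mode (what ghc -cpp uses): '#' is just an ordinary character
trad=$(cpp -P -traditional-cpp "$tmp/stringize.h")
echo "ANSI:        $ansi"
echo "traditional: $trad"
```

Only the ANSI run produces the quoted "x"; the traditional run leaves
the '#' unprocessed, which is why the macro appears not to work under
ghc's -cpp.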

Really we should all move over to cpphs.

Duncan



Re: Update on GHC 6.12.1

2009-10-29 Thread Duncan Coutts
On Wed, 2009-10-28 at 22:55 +, Simon Peyton-Jones wrote:

 SECOND, we have produced Release Candidate 1 for GHC 6.12.1, and are
 about to produce RC2.  However, before releasing 6.12 we'd like to
 compile all of Hackage, in case doing so reveals bugs in GHC's APIs
 (which are not supposed to change).  But we can't do that until an
 update to cabal-install is ready. (We don't expect this dependency to
 happen all the time, but it does hold for 6.12.)
 
 Duncan has been working on the cabal-install update, and expects
 to release by end November.  So the timetable looks like this:
 
  - Very soon: GHC 6.12.1 release candidate 2
  - End Nov: cabal-install release
  - ...test GHC against Hackage...

An update on this:

People can now grab the current darcs version of cabal-install and build
and test it with ghc-6.12 (or indeed earlier ghc versions).

Duncan



Re: [Fwd: Re: [Gtk2hs-devel] Help with build on Alpha]

2009-10-18 Thread Duncan Coutts
On Sun, 2009-10-18 at 16:01 -0200, Marco Túlio Gontijo e Silva wrote:
 Hi.
 
 I sent a mail to gtk2hs-devel about this bug, and I'm forwarding its
 response here.

This limitation might be different now that ghc is using libffi.

Duncan

  I don't have a clue about this bug:
  http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=540879 .  Do you have
  any idea about what could be causing this?
 (...)
 This is a limitation of ghc. So you should ask the ghc people if they  
 can fix this. Then we could think of a work-around. But the only  
 workaround there is is not to bind functions that are affected by the  
 limitation. That would pretty much rule out all modules in ModelView/  
 which is a rather important part of Gtk2Hs.
 (...)
 



Re: Re[4]: ANNOUNCE: GHC 6.12.1 Release Candidate 1

2009-10-12 Thread Duncan Coutts
On Mon, 2009-10-12 at 19:29 +0400, Bulat Ziganshin wrote:
 Hello Duncan,
 
 Monday, October 12, 2009, 6:58:43 PM, you wrote:
 
  also, i propose to enable +RTS -N by default. Haskell is very popular
  as multithreaded language, don't fool novices!
 
  Note that you'd also have to enable -threaded by default. This would
  have other surprising effects (like breaking most GUI progs).
 
 afair, it's on by default for a few years

With runghc and ghci you get the threaded rts by default. For compiled
standalone programs the single threaded rts is still the default. You
have to link using the -threaded flag to get the threaded rts.

 and yes, i had SERIOUS problems with it in my GUI program :)

Yeah, it's a long-standing tricky issue.

Duncan



Re: ANNOUNCE: GHC 6.12.1 Release Candidate 1

2009-10-12 Thread Duncan Coutts
On Mon, 2009-10-12 at 16:04 -0400, Brent Yorgey wrote:
 What's the canonical way to install a version of ghc but not have it
 be the default?  i.e., I'd like to try testing this release candidate
 but I want to have to call it explicitly; I want 'ghc', 'ghc-pkg'
 etc. to still be aliases to ghc-6.10.4, instead of being overwritten
 by the 6.12.1 install.

What I do is keep my default as /usr/bin/ghc, then when I install
testing versions I just rm the unversioned ghc scripts that get
installed in /usr/local/bin/ (because /usr/local/bin appears on my $PATH
first).
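The mechanics of that can be sketched with stub scripts; the directory
names and version numbers below are made up, only the $PATH shadowing
matters:

```shell
tmp=$(mktemp -d)
mkdir "$tmp/usr_bin" "$tmp/usr_local_bin"   # stand-ins for the real dirs
printf '#!/bin/sh\necho 6.10.4\n'    > "$tmp/usr_bin/ghc"
printf '#!/bin/sh\necho 6.12.1-rc\n' > "$tmp/usr_local_bin/ghc"
printf '#!/bin/sh\necho 6.12.1-rc\n' > "$tmp/usr_local_bin/ghc-6.12.1"
chmod +x "$tmp/usr_bin/ghc" "$tmp/usr_local_bin/ghc" \
         "$tmp/usr_local_bin/ghc-6.12.1"
p="$tmp/usr_local_bin:$tmp/usr_bin"
before=$(PATH="$p" ghc)           # unversioned name picks the test build
rm "$tmp/usr_local_bin/ghc"       # delete just the unversioned script...
after=$(PATH="$p" ghc)            # ...and the stable default is back
explicit=$(PATH="$p" ghc-6.12.1)  # the versioned name still works
echo "$before / $after / $explicit"
```

Deleting only the unversioned wrappers leaves the versioned binaries
callable explicitly while the earlier PATH entry no longer shadows the
default install.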

Duncan



Re: haddock problem. Was: ANNOUNCE: GHC 6.12.1 Release Candidate 1

2009-10-12 Thread Duncan Coutts
On Mon, 2009-10-12 at 18:43 +0200, Christian Maeder wrote:

 P.S. I wonder why Registering is done twice

It's Cabal's fault. It's a new feature to let components within a
package depend on each other. To do that it needs to register the lib
into a local inplace package db. At the moment it's always doing it,
even when it's not strictly necessary. At some point we'll probably tidy
that up so that it only does so when it's needed. On the other hand,
always doing so during the testing phase has already caught a couple
configuration bugs so I'm not in any great rush to add the optimisation.

Duncan



Re: Ghci fails to load modules, but ghc compiles OK

2009-10-09 Thread Duncan Coutts
On Thu, 2009-10-08 at 19:27 +0100, Colin Paul Adams wrote:
 I've been using ghc 6.10.3 on 64-bit Linux to compile my application,
 and it runs OK, modulo bugs.
 
 I want to debug a problem, so I load it in ghci, but when i type main
 I get:
 
  Loading package network-2.2.1.1 ... 
 
 GHCi runtime linker: fatal error: I found a duplicate definition for symbol
my_inet_ntoa
 whilst processing object file
/usr/lib64/ghc-6.10.3/network-2.2.1.1/HSnetwork-2.2.1.1.o
 This could be caused by:
* Loading two different object files which export the same symbol
* Specifying the same object file twice on the GHCi command line
* An incorrect `package.conf' entry, causing some object to be
  loaded twice.
 GHCi cannot safely continue in this situation.  Exiting now.  Sorry.
 
 Why would ghci have a problem, but not ghc?

Because the system linker does not care about duplicate definitions for
symbols. It just merrily picks the first one and resolves all references
to point at that first one. The GHCi linker is a tad more careful.

What you're probably doing is loading two versions of the network
package (eg indirectly as dependencies of other packages) and they
both have an unversioned C symbol in them. The Haskell symbols are all
versioned, which is why they do not clash.

Duncan



Re: GHC on Snow Leopard: best practices?

2009-10-08 Thread Duncan Coutts
On Thu, 2009-10-08 at 18:32 +1100, Manuel M T Chakravarty wrote:
 David Menendez:
  Is there any consensus about what needs to be done to get a working
  ghc installation on a Snow Leopard (Mac OS X 10.6) system? The Mac OS
  X wiki page[1] currently links to a blog post[2] that recommends
  manually patching /usr/bin/ghc, but I have also seen recommendations
  that people patch ghci, runhaskell, runghc, and hsc2hs. Is that also
  recommended? If so, there should probably be an updated how-to on the
  wiki.
 
 Patching /usr/bin/ghc is sufficient to get a version of GHC that  
 passes the regression tests suite in fast mode (the same setting  
 that the validate script uses).  If you want to use hsc2hs, you need  
 to patch that, too.  I haven't found a need to patch the interpreter,  
 though.

And they almost certainly do want hsc2hs (even if they don't know it)
because it's used by all sorts of other libs. The first one people hit
is the zlib binding which is used by cabal-install. It appears to
compile ok but then fails the version check performed by the zlib C
library (eg when someone does cabal update).

Duncan



Re: How to solve ghc-stage2: mkTextEncoding: invalid argument (Invalid argument) issue.

2009-09-15 Thread Duncan Coutts
On Tue, 2009-09-15 at 16:56 +0200, Karel Gardas wrote:
 Hello,
 
 recently I've found out that my solaris-based GHC buildbot is completely
 unusable since it always (when it gets that far, i.e. when it does not fail
 with the usual magic number mismatch: old/corrupt interface file?) fails with:

 ghc-stage2: mkTextEncoding: invalid argument (Invalid argument)

So that's when it calls iconv_open with the names of some text
encodings. Apparently that is failing. You should be able to confirm
this with some tracing.

 I've even tried to update my building GHC from 6.8.3 to 6.10.4, but it
 still does not help.

No, it wouldn't.

 Since I would really like to resurrect my GHC buildbot, do you have any
 idea how to fix this issue?

Dig into base/GHC/IO/Encoding/Iconv.hs mkTextEncoding function where it
calls iconv_open, see what it's being called with.

In particular check if HAVE_LANGINFO_H is getting defined. If it's not
then the code assumes GNU iconv.
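For what it's worth, on a modern GHC that probing can be done from the Haskell side without digging into the library source; a minimal sketch (the candidate encoding names are just illustrative, and mkTextEncoding here is the public System.IO wrapper over the Iconv module):

```haskell
import System.IO (TextEncoding, mkTextEncoding)
import Control.Exception (SomeException, try)

-- Probe which encoding names the underlying iconv_open accepts.
probe :: String -> IO ()
probe name = do
  r <- try (mkTextEncoding name) :: IO (Either SomeException TextEncoding)
  putStrLn $ name ++ ": " ++ either (const "invalid argument") (const "ok") r

main :: IO ()
main = mapM_ probe ["UTF-8", "UTF8", "ISO8859-1", "646"]
```

Running this on the buildbot host would show directly which spellings the system iconv rejects.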

 PS: for reference, please have a look at
 http://darcs.haskell.org/buildbot/all/builders/kgardas%20head

See also

http://darcs.haskell.org/buildbot/head/builders/sparky%20head

which is running Solaris 10 on sparc and seems to be working fine.

Duncan



Re: no backspace, delete or arrow keys in ghci

2009-09-08 Thread Duncan Coutts
On Mon, 2009-09-07 at 11:24 -0700, Judah Jacobson wrote:

 I'm not sure I understand.  Are you saying that you can't use
 backspace/arrows/etc when the getLine command itself is waiting for
 input?  But otherwise at the Prelude prompt, where you type in the
 commands, everything behaves fine?
 
 If so, that is normal behavior for the getLine function.

For what it's worth, while ghci has behaved this way for a long time
(since at least 6.4), hugs seems to work more nicely in this regard. In
hugs, calling getLine seems to switch into cooked mode so you at
least get backspace etc.

Duncan



Re: Libraries in the repo

2009-08-28 Thread Duncan Coutts
On Fri, 2009-08-28 at 11:42 +0100, Simon Marlow wrote:

 Can anyone think of a good reason not to upgrade darcs to 2.3.0 on 
 darcs.haskell.org?  I can think of 3 reasons to do so:
 
   - this script, for preventing accidental divergence from upstream
   - faster pushes, due to transfer-mode
   - hide those annoying Ignore-this: x messages

By the way, people who regularly work with the ghc repos (at least on
Linux) and who are thinking of upgrading to darcs-2.3.0 should heed this
advice:

Use darcs get to get your repos again. Not remotely, just
locally. This switches them from darcs1 traditional format to
darcs1 hashed format.

If you do this, then darcs whatsnew gets ~4 times quicker.

If you do not do this, then darcs whatsnew gets ~100 times
slower.

All times measured on Linux, local ext3 filesystem, ghc testsuite repo.
All times are the second of two runs to allow for OS caching. The
results may well be quite different on different file systems, such as
Windows NTFS.

Perhaps someone can suggest a way of doing this using the ./darcs-all
script, that would not mess up what the default push/pull address is. Of
course doing a get means the copy doesn't have the changes from the
working directory. As far as I know darcs currently does not provide a
way to do an inplace upgrade to the faster format.

I've emailed the darcs list to raise this issue, that:
 1. we get no warning or advice from darcs that we should switch
format
 2. that there is not a really convenient way of doing the switch

Duncan



Re: Libraries in the repo

2009-08-26 Thread Duncan Coutts
On Wed, 2009-08-26 at 17:15 +0100, Simon Marlow wrote:

   * Sometimes we want to make local modifications to INDEPENDENT
 libraries:
   - when GHC adds a new warning, we need to fix instances of the
 warning in the library to keep the GHC build warning-free.

I have to say I think this one is rather dubious. What is wrong with
just allowing warnings in these independent libs until they get fixed
upstream? I know ghc's build system sets -Werror on them, but I don't
see that as essential, especially for new warnings added in ghc head.


 Experience with Cabal and bytestring has shown that (1) can work for
 INDEPENDENT libraries, but only if we're careful not to get too
 out-of-sync (as we did with bytestring).  In the case of Cabal, we never 
 have local changes in our branch that aren't in Cabal HEAD, and that 
 works well.

It requires an attentive maintainer to notice when people forget to push
upstream (as they inevitably do on occasion). If it goes unnoticed for
too long then ghc ends up with a forked repo that cannot sanely be
synced from the upstream repo (like bytestring).

I suggest if we stick with the independent repo approach that we have
some automation to check that changes are indeed getting pushed
upstream.

Duncan



Re: ghc 6.10.4 infix declarations and '\' bug or not ?

2009-08-22 Thread Duncan Coutts
On Sat, 2009-08-22 at 15:52 +1000, John Lask wrote:
 in declaring fixity for an operator (\\) to get it to compile using ghc 
 6.10.4, I needed to use the following code
 
 infixl 9 \\\
 (\\) a b = etc ...
 
 where I assume the first \ escapes the second \, using infixl 9 \\ generates 
 a syntax error

 infixl 9 \\  used to compile with no problems under ghc 6.8.2
 
 what is going on here ?

Usually this problem is related to cpp, since \\ at the end of a line
has special meaning to cpp.

Are you sure that you're comparing like with like when you say it worked
in ghc-6.8.2? If you're using cpp for the module now and in the past you
were not then that would explain it.

Another trick I've seen is:

infixl 9 \\  -- this comment is here to defeat evil cpp mangling
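To see the effect concretely, here is a minimal module (the operator definition itself is just a placeholder) where the trailing comment keeps cpp from treating the final backslash as a line continuation:

```haskell
{-# LANGUAGE CPP #-}
module Main where

-- Without the comment, cpp would see the trailing '\' and splice this
-- line onto the next one, producing a syntax error.
infixl 9 \\  -- comment defeats cpp backslash-newline mangling

-- Placeholder definition, just so there is something to declare fixity for.
(\\) :: Int -> Int -> Int
a \\ b = a - b

main :: IO ()
main = print (10 \\ 3 \\ 2)  -- left associative: (10 - 3) - 2
```

Without the CPP pragma the comment is unnecessary, which is why the same source can behave differently once cpp is switched on.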

Duncan



Re: gcc version for GHC 6.10.4 under Sparc/Solaris

2009-08-10 Thread Duncan Coutts
On Fri, 2009-08-07 at 10:55 +0200, Christian Maeder wrote:
 Duncan Coutts wrote:
  I should also note that there is a GHC 6.10.4 binary for Sparc/Linux
  that is now included with Gentoo. It's got all features turned on except
  for split objects (which fails due to mixing ld -r and --relax flags).
  In particular it's a registerised via-C build with ghci, TH and
  profiling working.
 
 Does compiling using gcc-4.3.x work if -fvia-C is added?

There's no Sparc NCG in the 6.10 series so it's only -fvia-C. The new
Sparc NCG is in 6.12. This build used gcc-4.1.2 which is the latest
stable one on Gentoo for sparc.

  It's a distro package not a generic relocatable GHC binary tarball so
  there's no point putting it on the ghc download page, but it's there
  nevertheless if people want it (look for the gentoo ghc ebuild).
 
 I've found
 http://packages.gentoo.org/package/dev-lang/ghc
 
 Where are ebuilds or downloadable binaries?

For non-gentoo users, the ebuilds are available from cvs:
http://sources.gentoo.org/viewcvs.py/gentoo-x86/dev-lang/ghc/
eg 6.10.4:
http://sources.gentoo.org/viewcvs.py/gentoo-x86/dev-lang/ghc/ghc-6.10.4.ebuild?view=markup

The binaries are not set up for separate distribution but if you inspect
the ebuild you find:
mirror://gentoo/ghc-bin-${PV}-sparc.tbz2

which means ghc-bin-6.10.4-sparc.tbz2 in the distfiles of any gentoo
mirror, eg:
http://www.mirrorservice.org/sites/www.ibiblio.org/gentoo/distfiles/ghc-bin-6.10.4-sparc.tbz2

It expects to unpack to / and installs under /usr; however, it's quite
possible to unpack to a temp dir and relocate the scripts and
package.conf using a bit of sed. In fact this is what the ebuild does
for the bootstrapping binary.

Duncan



Re: use gtar and not tar under solaris

2009-08-10 Thread Duncan Coutts
On Fri, 2009-08-07 at 13:14 +0200, Christian Maeder wrote:
 Christian Maeder wrote:
  Matthias Kilian wrote:
  However, to create an archive, you can use something like
 
  $ pax -wf foo.tar directory
  
  Do you think gtar --format=posix would be different from pax?

I would expect they are the same. The USTAR format is standardised by a
POSIX standard from 1988 while the pax extensions are standardised by
POSIX from 2001 I think. The pax program has an -x format flag and can
use pax, ustar or cpio formats. The pax format is an extension of the
ustar format.

  The only question is, if we should create archives using the ustar,
  posix/pax, or gnu format. ustar seems to be the least common
  denominator. Does ustar have any disadvantages?

For source code distribution I think the ustar format is ideal. This is
what cabal-install's sdist mode uses. As you say it's the lowest common
denominator. The limitations of the format (file sizes, lack of extended
file meta-data) are not a practical problem for source code or binaries.

 My plain tar command under solaris cannot handle the pax files either.
 So ustar archives should be created (at least under solaris).

That's odd since pax is supposed to be a compatible extension of ustar
that just adds extra meta-data entries. Older programs should either
ignore those entries or extract them as if they were ordinary files.

 But I don't know why the ustar format can handle long file names,

The ustar format can handle file names up to 100+155 characters (bytes)
long. The reason it is 100+155 rather than simply 255 is that the split
into the 100- and 155-byte fields must happen at a directory separator.
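That constraint is easy to check mechanically; a small sketch (my own helper, not part of any tar library) of when a path fits the ustar name+prefix fields:

```haskell
import Data.List (elemIndices)

-- A path fits ustar if the name is <= 100 bytes, or it can be split at
-- some '/' into a prefix of <= 155 bytes and a name of <= 100 bytes.
fitsUstar :: String -> Bool
fitsUstar path
  | length path <= 100 = True
  | otherwise          = any ok (elemIndices '/' path)
  where
    -- split at the separator: prefix excludes the '/', name follows it
    ok i = i <= 155 && length path - (i + 1) <= 100

main :: IO ()
main = mapM_ (print . fitsUstar)
  [ replicate 100 'a'                            -- fits in the name field
  , replicate 150 'a' ++ "/" ++ replicate 90 'b' -- fits via the split
  , replicate 200 'a'                            -- no separator: too long
  ]
```

A 200-byte name with no '/' in a suitable place cannot be stored, which is why the gnu and pax formats grew their @LongLink/PaxHeader escape hatches.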

 whereas the gnu format creates a @LongLink file and pax a PaxHeader
 file (when unpacked with tar).

Right, those files are the extended entries.

Duncan



Re: Haskell Platform 2009.2.0.2

2009-08-10 Thread Duncan Coutts
On Thu, 2009-07-30 at 15:12 +0200, Christian Maeder wrote:
 Don Stewart wrote:
  Heads up lads, we're about 24 hours from Haskell Platform 2009.2.0.2
  
  http://code.haskell.org/haskell-platform/haskell-platform.cabal
 
 I still see
 
 time  ==1.1.2.4,
 
 although ghc-6.10.4 comes with:
 
time-1.1.4
 
 http://trac.haskell.org/haskell-platform/ticket/74
 
 Is this on purpose (possibly installing two time versions)?

Yes, it is on purpose. The HP policy is to keep the same API for a whole
major release series: the API of the packages included in each minor
release of 2009.2.0.x is the same. The initial release, 2009.2.0,
included time-1.1.2.4 and so subsequent releases must use a compatible
version.

Version of time in the extralibs tarball in ghc-6.10.x releases:

 * GHC-6.10.1: time-1.1.2.4
 * GHC-6.10.2: none
 * GHC-6.10.3: time-1.1.3
 * GHC-6.10.4: time-1.1.4

So we have the current situation where the HP supplies one version and
ghc's extralibs tarball includes a different version. As far as I know,
the windows and osx HP installers (which include ghc) only include one
version of time. For the source installer of course we cannot control
other pre-existing versions.

 I only know that time-1.1.4 and time-1.1.3 supply more Typeable
 instances (which may break existing code)

Right, that's not allowed in a HP minor release as it's an API change.

Duncan



Re: use gtar and not tar under solaris

2009-08-06 Thread Duncan Coutts
On Tue, 2009-08-04 at 10:15 +0200, Christian Maeder wrote:
 Hi,
 
 I've just been informed that unpacking the binary (i386) solaris
 distribution using bunzip2 and tar:

It may work better in future if you use a non-GNU tar to pack it up in
the first place. GNU tar uses a non-standard tar format by default.
Solaris tar would likely have more luck unpacking a POSIX/USTAR tar
format file. It's also possible to use gnu tar to make standard tar
format files, using --format ustar rather than gnu tar's default of
--format gnu.

Duncan
(who now knows an unhealthy amount about tar file formats after writing
a Haskell package to read them)



Re: gcc version for GHC 6.10.4 under Sparc/Solaris

2009-08-06 Thread Duncan Coutts
On Thu, 2009-08-06 at 10:04 +0200, Christian Maeder wrote:
 Hi Ian,
 
 could you add a note on the download page that
 GCC version 4.3.x is not suited for:
 
 http://www.haskell.org/ghc/dist/6.10.4/maeder/ghc-6.10.4-sparc-sun-solaris2.tar.bz2
 
 The binary-dist was compiled using gcc-4.2.2 (but also works i.e. for
 gcc-3.4.4)

I should also note that there is a GHC 6.10.4 binary for Sparc/Linux
that is now included with Gentoo. It's got all features turned on except
for split objects (which fails due to mixing ld -r and --relax flags).
In particular it's a registerised via-C build with ghci, TH and
profiling working.

It's a distro package not a generic relocatable GHC binary tarball so
there's no point putting it on the ghc download page, but it's there
nevertheless if people want it (look for the gentoo ghc ebuild).

Duncan



Re: use gtar and not tar under solaris

2009-08-06 Thread Duncan Coutts
On Thu, 2009-08-06 at 12:30 +0100, Duncan Coutts wrote:
 On Tue, 2009-08-04 at 10:15 +0200, Christian Maeder wrote:
  Hi,
  
  I've just been informed that unpacking the binary (i386) solaris
  distribution using bunzip2 and tar:
 
 It may work better in future if you use a non-GNU tar to pack it up in
 the first place. GNU tar uses a non-standard tar format by default.
 Solaris tar would likely have more luck unpacking a POSIX/USTAR tar
 format file. It's also possible to use gnu tar to make standard tar
 format files, using --format ustar rather than gnu tar's default of
 --format gnu.

In fact I think I'd always advocate using the USTAR tar format over the
GNU tar format when distributing software, since portability is of prime
concern. This is what cabal-install does. I'd recommend ghc do it too. I
also filed a ticket for darcs dist about this some time ago.

Duncan



RE: 6.12.1 planning

2009-06-30 Thread Duncan Coutts
On Tue, 2009-06-30 at 14:45 +0100, Simon Peyton-Jones wrote:
 I've dumped all this on a release plans wiki page:
   http://hackage.haskell.org/trac/ghc/wiki/Status/Releases
 
 Manuel, Duncan: maybe you can modify the wiki directly?

Done. Fortunately there's not much left to do for shared libs, at least
on Linux.

Duncan



Re: Three patches for cabal

2009-06-17 Thread Duncan Coutts
On Wed, 2009-06-03 at 16:41 +0200, Niklas Broberg wrote:

 Second there's the constructor NoMonoPatBinds, which actually
 describes the default Haskell 98 behavior, even if GHC has a different
 default. It's GHC's behavior that is the extension, so the constructor
 in cabal should really be named MonoPatBinds.
 
 Also, the PatternSignatures constructor has been deprecated in GHC and
 superceded by ScopedTypeVariables.
 
 The attached patches (three in one file) adds the proposed new
 constructors, deprecates the old ones, and adds documentation.

Can someone please comment on these two proposed changes? I agree with
Niklas but I'm a bit reluctant to apply the patches without at least
some sign of agreement from someone else.

Deprecating PatternSignatures seems uncontroversial, but the
NoMonoPatBinds is potentially controversial. GHC essentially uses
-XMonoPatBinds by default, even in H98 mode, and the user can use
-XNoMonoPatBinds to restore H98 behaviour. Niklas's and my point is that
the list of language extensions in Language.Haskell.Extension describes
differences from H98, so it should be MonoPatBinds to express the
difference, not NoMonoPatBinds to restore H98 behaviour.

In practice, since ghc uses MonoPatBinds by default it'd mean that
people who want to get back to H98 would need to use:

  ghc-options: -XNoMonoPatBinds

Because the extensions field is additive, not subtractive. Using the
name MonoPatBinds allows other compilers to implement it without it
having to be the default.

Duncan



Re: [Fwd: OSX installer -- first draft]

2009-06-04 Thread Duncan Coutts
On Wed, 2009-06-03 at 15:59 -0400, David Menendez wrote:
 On Tue, Jun 2, 2009 at 5:38 AM, Duncan Coutts
 duncan.cou...@worc.ox.ac.uk wrote:
  OSX users,
 
  please could you try out Gregory's Haskell Platform package below and
  send commentary to the platform list, or file tickets in the platform
  trac, that'd be great.
  http://trac.haskell.org/haskell-platform/newticket?component=OSX%20installer
 
 Is this a universal binary, or Intel-only?

Like the existing ghc .pkg, it's Intel-only and requires OS X 10.5.

  The plan is that for ghc-6.12 and onwards, that this will be the primary
  way that end-users get their Haskell goodness on OSX, so it's important
  that you provide feedback now so that we can get the thing working
  nicely and try to make everyone happy.
 
 Will there be some integration with the existing distribution schemes
 for Mac OS X (fink and macports), or are end users expected to use
 cabal install?

We hope that, since fink and macports are essentially traditional unix
ports systems, they'll provide haskell-platform meta-packages, just as
we hope other linux/bsd distros will do.

What we (the HP team) provide ourselves will be generic packages for
unix (possibly generic binary packages for linux) and packages for the
systems that have no distro system of their own, ie windows and os x
(for non-macports/fink users). We also provide the specification for
distro people to make native distro packages.

Duncan



RE: Three patches for cabal

2009-06-04 Thread Duncan Coutts
On Thu, 2009-06-04 at 08:43 +0100, Simon Peyton-Jones wrote:
 I'd be quite happy to rename the flag to GeneralisedListComp, and
 clarify the user manual.  Would that suit everyone?
 
 I suppose the alternative is to leave it as TransformListComp, and
 document that fact.  But I rather agree that GeneralisedListComp fits
 the literature better.

Looking at the paper, it doesn't really give the extension any specific
name. Perhaps that is a good thing, because we can change the ghc user
guide relatively easily, but not published papers.

I appreciate Max's point that GeneralisedListComp is a bit vague, though
TransformListComp doesn't really speak to me. I would not (and did not)
guess it has any relation to the extension in question. My only other
suggestion is QueryListComp or GeneralisedQueryListComp (bikeshed
dangers approaching...).

The main point is that the docs and the extension name should be
consistent (and should refer to each other).

Duncan



Re: Three patches for cabal

2009-06-03 Thread Duncan Coutts
On Wed, 2009-06-03 at 16:33 +0100, Max Bolingbroke wrote:
 2009/6/3 Niklas Broberg niklas.brob...@gmail.com:
  First there's the constructor called TransformListComp, which should
  really be named GeneralizedListComp, since the constructor should
  describe the extension and not the implementation scheme.
 
 It's called TransformListComp because the then f syntax transforms a
 list using f (which has type [a] -> [a]), not because the
 implementation works by transformation or anything like that! We
 considered but rejected GeneralizedListComp because it's too vague -
 what if someone comes up with another list comprehension
 generalisation in the future?

In that case, can the documentation, eg in the ghc user guide[1], be
updated to use that name? It's jolly confusing to go looking for the
extension name corresponding to the feature. The ghc user guide
section on it calls it generalised and mentions no flag or extension
name. We initially thought that the extension had never been registered;
I only found it by accident when, looking in the flag reference section
of the ghc user guide[2], I noticed that transform list comprehensions
actually links to the section on generalised list comprehensions.

Duncan

[1]
http://haskell.org/ghc/docs/latest/html/users_guide/syntax-extns.html#generalised-list-comprehensions
[2]
http://haskell.org/ghc/docs/latest/html/users_guide/flag-reference.html#id2949842



[Fwd: OSX installer -- first draft]

2009-06-02 Thread Duncan Coutts
OSX users,

please could you try out Gregory's Haskell Platform package below and
send commentary to the platform list, or file tickets in the platform
trac, that'd be great.
http://trac.haskell.org/haskell-platform/newticket?component=OSX%20installer

The plan is that for ghc-6.12 and onwards this will be the primary
way that end-users get their Haskell goodness on OSX, so it's important
that you provide feedback now so that we can get the thing working
nicely and try to make everyone happy.

I'm sure Gregory would also appreciate help in working out how to make
the OSX .pkg tools behave properly.

Duncan

 Forwarded Message 
 From: Gregory Collins g...@gregorycollins.net
 To: haskell-platform haskell-platf...@projects.haskell.org
 Subject: OSX installer -- first draft
 Date: Tue, 02 Jun 2009 00:20:18 -0400
 
 Hi all,
 
 After months of intense frustration I have something approaching a
 reasonable OSX installer for the Haskell Platform. I'd appreciate it if
 some OSX hackers could try it out.
 
 The installer can be downloaded from:
 http://gregorycollins.net/static/haskell/haskell-platform-2009.2.0.1-alpha1.pkg
 
 Please do me a favour and don't link there from the haskell platform
 website :)
 
 Features/caveats:
 
   * it presupposes GHC-6.10.3 is installed from the binary distro. The
 final release will bundle the two together in a .dmg file.
 
   * it installs the platform libraries and executables to
 /Library/Framework/HaskellPlatform.framework, registers the
 libraries with GHC, and symlinks the binaries to /usr/local/bin
 
   * I had to build the distro package by hand using Apple's GUI tool
 because I can't figure out how to do it otherwise -- and not for
 lack of trying, either, I reckon I've put 20-30 man-hours into
 trying to figure it out -- thanks Apple! Similarly, for the life of
 me I cannot figure out how to bundle GHC and the platform libs
 together into one installer.
 
   * There's some haddock documentation in there but I'm not sure how
 nicely it's cross-linked.
 
   * The code is a mess, I need to clean it up (and systematize the
 process) before it's fit for inclusion into the project.
 
   * I've done some (very) limited testing but given how difficult the
 whole project has been, I fully expect problems.
 
 Sorry about how long it's taken -- I'm getting married on Saturday (!)
 so finding time to work on this has been difficult indeed.
 
 G.



Re: haddock-2.3.0 literate comments discarded from .lhs input

2009-05-29 Thread Duncan Coutts
On Thu, 2009-05-28 at 23:40 +0100, Claus Reinke wrote:
  If you don't want to move from absolute paths for non-core packages,
  the current system should just work, right?
  
  Yes.
 
 The current system being the $topdir one.

Yep. It works, it's just not nice: it's ghc-specific and only makes sense
when ghc is installed in a prefix-independent way.

  Though it also allows for the possibility of relocatable sets of
  packages that are not installed relative to the compiler. But more
  importantly it's more general and simpler than the current '$topdir'
  that ghc uses.
 
 'it' now being the new system evolving in this thread, or have I missed
 anything?

The new system I've been proposing.
 
  (a) making ghc-pkg (optionally) instantiate any variables in its
  database in (all of) its command-line output and 
  
  Yes, though I'm only asking for two vars (previously one), not an ad-hoc
  set of vars.
  
  (b) allowing non-core packages to be relocated without having to
  update ghc-pkg's database.
  
  In my suggested system this is possible if that set of packages use
  their own package db (containing relative paths).
 
 That is news to me - was that specified before this thread moved
 to ghc-users?

It was in the first email that was cc'ed to ghc-users:

How about this: a way to specify paths in the package
registration info that are relative to the location of the
package db they are in. That makes sense beyond just ghc and
even with would allow other sets of relocatable packages, not
just those installed with ghc.

  In your system it's possible by updating some var in a central registry
  and having that set of packages use paths relative to that var.
 
 So, essentially, your system would have to keep a file listing the
 various package.conf locations (currently, GHC only knows about
 two: system/user, everything else would have to be passed on the
 commandline..). While my system would have to keep a file listing
 the variable bindings, so that tools processing the package db can
 instantiate the variables.

If you want multiple relocatable sets of packages that are immediately
available in the environment.

 I could see both approaches being useful, even together.
  
  So ghc's current system uses two vars, $topdir and $httptopdir. 
 
 This is GHC's view of its database. It should be useable independently,
 via ghc-pkg and ghc api clients (such as GHC, GHCi, Haddock, ..) -
 all of which should be able to resolve the variable bindings, in the
 same way.

It's not usable independently, ghc does not always have a topdir. This
makes life hard for tools. It's also not clear what topdir would mean in
the context of other compilers.

 Btw, it would really be nice if the package handling code 
 was shared rather than duplicated.

It would be nice, yes.

  I'm proposing to replace those with a standardised ${pkgroot} and
  ${pkgrooturl} vars which are usable by all compilers and in more
  situations.
 
 Now you are talking about Cabal's view of its database. 

Cabal does not own the package databases, however it does expect that
they are in the format described by the Cabal spec, which places
obligations on Haskell implementations to be somewhat package-aware.

 It doesn't have to expose the underlying implementation's view,
 especially since the other implementations organise their package
 handling differently.

All compilers use the same information (it's in the Cabal spec). They do
store it differently but they all identify the location of the
information using a file path. That seems pretty universal, compared to
$topdir.

 And why just two variables? Is $pkgroot about .hi files, .a/.so./.dll
 files, or about include files, or haddock indices, or ..?

You only need one variable to identify the location of the installed
package description. All relative paths can be constructed from that.
The second variable is to allow for two representations of the same
location, one as a native system path, the other as a URL. We do not
need different variables for different kinds of files (except in as much
as some fields use paths and some urls).

 In windows, these tend to end in a common sub-hierarchy, but you're
 aiming for something general, right?

If you're making a relocatable package then these files will be in a
common sub-hierarchy and you would use relative paths. If you're not
making a relocatable package (eg following the Linux FSH) then you would
not use relative paths.

So that should be general. It does not remove any existing capability
and it adds the ability to have relative paths for relocatable packages.

Perhaps what you're saying is that we should be able to take any package
whether it lives in a common sub-hierarchy or not and relocate it. In
general this is problematic since packages can embed paths and if those
paths are not relative to a common root then you have to specify them
all (Cabal enables this by setting environment variables). Assuming

draft proposal for relative paths in installed package descriptions

2009-05-29 Thread Duncan Coutts
All,

This is a draft proposal for a common mechanism for implementing
relative paths in installed package descriptions.

Comments and feedback are welcome. I'm cc'ing this to the cabal and
ghc-users lists but let's keep the discussion on the libraries list.

There has been some discussion of this issue on the cabal and ghc-users
list, but it's a relatively long thread and the idea has evolved
somewhat so this is an attempt to present the proposal clearly.


Proposal


This proposal is an extension to the Cabal spec section 3
http://haskell.org/cabal/proposal/x272.html


Motivation
--

Being able to have relative paths in the installed package description
is useful as it makes it possible to construct relocatable
(prefix-independent) package installations. This is useful when
distributing compilers along with packages and there may be other uses.

This proposal does not require that all paths are relative. It is still
perfectly ok to use absolute paths where appropriate.  It just adds an
option to use relative paths.

The aim is for a single simple specification that any compiler can
implement and have the tools work uniformly, rather than ad-hoc
implementations for each compiler.


Details
---

In the installed package description, we will allow paths that begin
with the variables ${pkgroot} or ${pkgrooturl}. For example:

library-dirs: ${pkgroot}/foo-1.0
haddock-html: ${pkgrooturl}/doc/foo-1.0

They may only appear at the beginning of the paths and must be followed
by a directory separator (platform-dependent '/' or '\'). This is
because the vars refer to a directory so it does not make sense to
construct extensions like ${pkgroot}blah. The use of '{}' is required
and is to avoid any ambiguity (especially since the string $pkgrooturl
is otherwise an extension of $pkgroot).

Directly relative file paths like blah or ./blah are not allowed.
Fields containing paths must be absolute or begin with ${pkgroot}.
Fields containing URLs must be absolute or begin with ${pkgrooturl}.

The var ${pkgroot} is to be interpreted as the directory containing the
installed package description. For ghc this will be the dir containing
the package.conf db; for hugs/nhc each package has its own installed
package description file, and ${pkgroot} is thus the directory
containing that file. The syntax of the string representing this
directory is the usual system-dependent filepath syntax, e.g.
windows: c:\ghc\ghc-6.12.1
unix:    /opt/ghc-6.12.1

The var ${pkgrooturl} is to be interpreted as a url representation of
the directory containing the installed package description. For ghc this
will be the dir containing the package.conf db; for hugs/nhc each
package has a specific installed package description file, and
${pkgrooturl} is thus the url of the directory containing that file. The
syntax of the string representing this directory is a valid file url,
including the file:// prefix, e.g.
windows: file:///c:/ghc/ghc-6.12.1
unix:    file:///opt/ghc-6.12.1

This is similar to how relative paths in .cabal files are interpreted
relative to the directory containing the .cabal file; here, however, we
mark relative paths explicitly using a variable.

So in the original example

library-dirs: ${pkgroot}/foo-1.0
haddock-html: ${pkgrooturl}/doc/foo-1.0

If we assume this installed package description is at
c:\ghc\ghc-6.12.1\package.conf on windows or
/opt/ghc-6.12.1/package.conf on unix, then the compiler and other tools
would interpret these as

library-dirs: c:\ghc\ghc-6.12.1/foo-1.0
haddock-html: file:///c:/ghc/ghc-6.12.1/doc/foo-1.0

or on unix:

library-dirs: /opt/ghc-6.12.1/foo-1.0
haddock-html: file:///opt/ghc-6.12.1/doc/foo-1.0
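
To make the interpretation concrete, here is a minimal Haskell sketch of
the substitution a tool might perform. This is illustrative only, not
the real Cabal implementation: expandPkgRoot is a made-up name, and the
windows drive-letter-to-url conversion is glossed over.

```haskell
import Data.List (stripPrefix)

-- Expand a leading ${pkgroot} or ${pkgrooturl} in a field value.
-- 'pkgroot' is the directory containing the installed package
-- description, discovered by some compiler-specific means.
expandPkgRoot :: FilePath -> String -> String
expandPkgRoot pkgroot path
  | Just rest <- stripVar "${pkgrooturl}" path = "file://" ++ pkgroot ++ rest
  | Just rest <- stripVar "${pkgroot}"    path = pkgroot ++ rest
  | otherwise                                  = path  -- absolute, left alone
  where
    -- the var may only appear at the start and must be followed
    -- by a directory separator, so ${pkgroot}blah is rejected
    stripVar var s = case stripPrefix var s of
      Just rest@(c:_) | c == '/' || c == '\\' -> Just rest
      _                                       -> Nothing

main :: IO ()
main = do
  putStrLn (expandPkgRoot "/opt/ghc-6.12.1" "${pkgroot}/foo-1.0")
  putStrLn (expandPkgRoot "/opt/ghc-6.12.1" "${pkgrooturl}/doc/foo-1.0")
```

Note that because the '{}' braces are required, there is no prefix
ambiguity between the two variable names to worry about here.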


Tools
-----

Tools which process the installed package description format will be
expected to interpret these relative paths. This requires that they can
discover the path for the ${pkgroot}. How they discover this is
dependent on the Haskell implementation. Some implementations use
separate files for each installed package description, some embed them
in library package files, some use databases of installed package
descriptions. Haskell implementations should provide a mechanism to
discover the path for an installed package description.


Duncan

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: haddock-2.3.0 literate comments discarded from .lhs input

2009-05-28 Thread Duncan Coutts
On Thu, 2009-05-28 at 11:16 +0100, Claus Reinke wrote:

   How about this: a way to specify paths in the package registration info 
   that
   are relative to the location of the package db they are in. 
  ahem. That sounds like a backwards step, being dependent on two
  locations instead of one.
  I don't follow this. Which two?
 
 package db + package path: in the current system, you only have to
 update the package db if you move a package that isn't installed under
 the GHC tree; in your suggestion, you also have to update it if you move 
 the package db/GHC itself while having non-core packages installed
 outside the GHC tree.

But if you're registering global packages that are installed outside of
the GHC tree then you wouldn't register them using relative paths. I'm
not saying everything must use relative paths.

  With your variant, just about any change would need updating.
  I must be missing something. If you move package.conf and the packages
  in one go, then nothing needs changing as far as I can see.
 
 You seem to be assuming that everything is under a common root?

Well it is on Windows which is the main case where people want
relocatable installations.

If we wanted relocatable installations on Unix then it'd all have to be
under one root too, eg /opt/whatever.

 That isn't the case for most unixes (different locations for bin/ doc/
 lib/ .., docs installed or not), and even on windows, it stopped being 
 the case with cabal insisting on 'Program Files/Haskell/...' as the 
 default install.

Sure, extra packages should not be installed in the ghc tree and so
those should not use paths relative to the ghc location.

 Since ghc traditionally installs into 'c:/ghc/ghc-version' 
 (on my system, at least, but I think that no-spaces-location was 
 suggested by one of the GHC installers originally, and spaces in
 tool paths still confuse the GHC build system), I have two locations.
 
 If I move GHC, nothing needs changing. If I move packages that
 didn't come with GHC, package.db needs updating. If the packages 
 had been registered wrt to a $cabaltopdir, no changes would be 
 needed in either case.

For some reason I really dislike the idea that we make up specific vars
like $cabaltopdir for specific purposes. Perhaps that's just me. I want
a general solution, not something that forces everyone to adopt
conventions like installing everything in ~/.cabal/. That's just a
sensible default, but the user rightly has full control over --prefix,
--libdir etc etc.

 In your suggestion, if I move GHC but not the packages, package.db 
 needs updating,

No it does not. That would only be the case if you always registered
things relative to ghc, but that'd be silly for things not actually
installed in the ghc install tree.

 if I move the packages but not GHC, package.dg needs updating, only if
 I move both, and by the same relative path, no update is needed.

Are you suggesting that we need to be able to move core libs that are
distributed with ghc, independently of where the ghc binary is?

  Assuming that  the parts are independently located by whatever the OS
  packaging  conventions say, and can be independently relocated
  otherwise, it  seems simpler to continue with the variable scheme, but
  with improved support and documentation for it.
  
  My suggestion seems very simple! I'm clearly missing some problem which
  you can see.
  
  To be clear, here's what I'm imagining:
  
  blah/package.conf
  blah/lib/foo-1.0/libfoo-1.0.a
 
 That is everything under one tree, right?

Not necessarily. For the things in the same tree it'd be sensible to use
relative paths. For things not in the same tree it'd be sensible to use
absolute paths.

This scheme also allows other sets of relocatable packages, so long as
ghc gets told where to find the package.conf.

 And since package.conf is GHC's register, GHC would have to be in that
 tree as well.

For core packages shipped with ghc/hp, yes.

  and package.conf would contain foo-1.0 with paths looking like
  $dbdir/lib/foo-1.0. That is, we interpret $dbdir (or whatever var name
  we agree on) as being blah/ because that's the dir containing the db.
  
  So crucially, it doesn't really matter where ghc.exe is. Assuming ghc
  can find the package conf then it can find all the files. So it'd let
  you create multiple relocatable package collections. If the primary
  package db is kept relative to ghc (eg in ghc's topdir) then the whole
  ghc installation including libs is relocatable
 
 That is what GHC did on windows before cabal changed the package
 locations away to a path that neither GHC nor its build tools can use.

Do you mean installing binaries in C:\Program Files\Haskell\bin by
default? That decision was made by the Windows users.

It's true that the GHC build system cannot work in a directory
containing spaces, and that's probably too hard to fix. However using
tools (eg happy, alex) that are in a dir containing spaces should not be
nearly so hard to fix.

Re: haddock-2.3.0 literate comments discarded from .lhs input

2009-05-28 Thread Duncan Coutts
On Thu, 2009-05-28 at 14:12 +0100, Claus Reinke wrote:
  But if you're registering global packages that are installed outside of
  the GHC tree then you wouldn't register them using relative paths. I'm
  not saying everything must use relative paths.
 
 Please don't move your windmills while I'm fighting them!-)
 
 If you don't want to move from absolute paths for non-core packages,
 the current system should just work, right?

Yes.

Though it also allows for the possibility of relocatable sets of
packages that are not installed relative to the compiler. But more
importantly it's more general and simpler than the current '$topdir'
that ghc uses.

 I thought we were talking about

 (a) making ghc-pkg (optionally) instantiate any variables in its
 database in (all of) its command-line output and 

Yes, though I'm only asking for two vars (previously one), not an ad-hoc
set of vars.

 (b) allowing non-core packages to be relocated without having to
 update ghc-pkg's database.

In my suggested system this is possible if that set of packages use
their own package db (containing relative paths).

In your system it's possible by updating some var in a central registry
and having that set of packages use paths relative to that var.

  For some reason I really dislike the idea that we make up specific vars
  like $cabaltopdir for specific purposes. Perhaps that's just me. I want
  a general solution, not something that forces everyone to adopt
  conventions like installing everything in ~/.cabal/. That's just a
  sensible default, but the user rightly has full control over --prefix,
  --libdir etc etc.
 
 Personally, I only dislike the idea of hardcoding specific variable names 
 in ghc-pkg, which is why I suggested a name-independent approach
 (I also dislike the current duplication of code in ghc-pkg/ghc api/..).
 
 $cabaltopdir would just improve the handling of the default cabal
 install locations, without dictating where users say those default locations
 should be - and if users move specific packages/package parts to
 different absolute locations, those absolute locations would still have
 to appear in the package database, but I'd expect that to be an 
 exception.

So ghc's current system uses two vars, $topdir and $httptopdir. 

I'm proposing to replace those with a standardised ${pkgroot} and
${pkgrooturl} vars which are usable by all compilers and in more
situations.

You're proposing a central registry of vars and to have ghc-pkg
(optionally) expand these vars which could be used anywhere in the
installed package descriptions. Presumably you're also suggesting some
mechanism to query and update this registry of variables.

Is that a fair summary?

 Let's say I wanted to move a GHC/Cabal/HP installation to a 
 USB drive: moving GHC/corelibs is straightforward (it doesn't
 care under what drive name the USB drive gets mounted on the
 lecture theatre computer), but how would I move Cabal-installed 
 non-core packages (not to mention Cabal itself?)? Is that use case
 documented in some faq?

Ok, so you want to construct a set of relocatable packages. This needs
to be decided from the beginning when you compile said packages because
otherwise packages can have paths baked into them. There are some
restrictions on making relocatable packages, eg you can't set --libdir
to an absolute path, it has to be relative to the --prefix.

In addition to making the package relocatable, we would have to register
the package into a package db that lives relative to the packages in
question. This db would contain relative paths (using ${pkgroot}).

Once this is done then the whole lot would be relocatable onto a USB
drive or whatever. To use this set of packages you would need to specify
--package-conf= to ghc, or --package-db= to cabal.

 If the extra package paths are absolute, it would involve something 
 like searchreplace on the concrete representation of the supposedly 
 abstract package database, but as long as that representation is a
 simple text file, that might not be too troublesome; 

Aye, so if you want to be able to move them then it's better if they're
relative.

 if the extra package paths are relative to a $cabaltopdir, it would 
 involve telling GHC about the new location prefix whenever calling 
 it directly (or telling Cabal about its new location, and Cabal passing 
 that on when calling GHC).

So that's the bit in your suggestion that corresponds to using
--package-conf= in my suggestion. And it assumes that you don't need to
set $cabaltopdir to two values simultaneously, eg if the machine you've
moved it to on the USB stick also has cabal packages that it needs to
use.

  It's true that the GHC build system cannot work in a directory
  containing spaces, and that's probably too hard to fix. However using
  tools (eg happy, alex) that are in a dir containing spaces should not be
  nearly so hard to fix.
 
 Maybe so, but last time (end of January) I asked about the GHC build 
 (in a space-free path) 

Re: haddock-2.3.0 literate comments discarded from .lhs input

2009-05-27 Thread Duncan Coutts
On Wed, 2009-05-27 at 15:10 +0100, Alistair Bayley wrote:
 Andrea,
 
 2009/3/19 Andrea Vezzosi sanzhi...@gmail.com:
  It turns out that those variables are there to allow relocation, in
  fact $topdir is expanded by
  Distribution.Simple.GHC.getInstalledPackages, it seems that
  $httptopdir has been overlooked.
  I'd be tempted to say that it's ghc-pkg dump/describe responsibility
  to expand those vars instead, like it does for ghc-pkg field.
 
 Do you (or anyone else) intend to work on this? If not, I'd like to
 fix it, but I'll need some guidance. Like, is
 Distribution.Simple.GHC.getInstalledPackages where the variable
 expansion code should go, or should it be somewhere else?

I don't think we should be hacking around this in Cabal without any
discussion with the ghc folks on what is supposed to be there, what
variables are allowed.

We need a clear spec on what variables tools are expected to handle and
how they are to be interpreted. The output of ghc-pkg describe/dump is
not just for ghc to define and play around with. It's supposed to be
defined by the Cabal spec.

Supporting relocatable sets of packages is a good idea. We should aim to
have something that is usable by each compiler, not just ghc, so
interpreting paths relative to ghc's libdir doesn't seem ideal. How
about this: a way to specify paths in the package registration info that
are relative to the location of the package db they are in. That makes
sense beyond just ghc, and would even allow other sets of relocatable
packages, not just those installed with ghc.

Then perhaps as a compat hack we should get Cabal to handle older ghc
versions that do use these funny vars.

Duncan



Re: [Haskell-cafe] A problem with par and modules boundaries...

2009-05-23 Thread Duncan Coutts
On Fri, 2009-05-22 at 05:30 -0700, Don Stewart wrote:
 Answer recorded at:
 
 http://haskell.org/haskellwiki/Performance/Parallel

I have to complain, this answer doesn't explain anything. This isn't
like straight-line performance, there's no reason as far as I can see
that inlining should change the operational behaviour of parallel
evaluation, unless there's some mistake in the original such as
accidentally relying on an unspecified evaluation order.

Now, I tried the example using two versions of ghc and I get different
behaviour from what other people are seeing. With the original code, (ie
parallelize function in the same module) with ghc-6.10.1 I get no
speedup at all from -N2 and with 6.11 I get a very good speedup (though
single threaded performance is slightly lower in 6.11).

Original code
  ghc-6.10.1,   -N1 -N2
  real  0m9.435s0m9.328s
  user  0m9.369s0m9.249s

  ghc-6.11, -N1 -N2
  real  0m10.262s   0m6.117s
  user  0m10.161s   0m11.093s

With the parallelize function moved into another module I get no change
whatsoever. Indeed even when I force it *not* to be inlined with {-#
NOINLINE parallelize #-} then I still get no change in behaviour (as
indeed I expected).

So I view this advice to force inlining with great suspicion (at worst
it encourages people not to think and to look at it as magic). That
said, why it does not get any speedup with ghc-6.10 is also a mystery to
me (there's very little GC going on).
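
For reference, a minimal sketch of the kind of 'parallelize' combinator
in question. Assumptions: 'par' and 'pseq' are taken from GHC.Conc in
base (the parallel package merely re-exports them), and nfib is just a
placeholder workload; names and structure are illustrative, not the
original poster's exact code.

```haskell
import GHC.Conc (par, pseq)

-- Evaluate both arguments, potentially in parallel, and pair them up.
-- 'par a b' sparks 'a' for possible parallel evaluation and returns
-- 'b'; whether the spark actually runs in parallel is up to the RTS
-- (-threaded, +RTS -N2), which is why inlining should be irrelevant.
parallelize :: a -> b -> (a, b)
parallelize a b = a `par` (b `pseq` (a, b))
{-# NOINLINE parallelize #-}  -- forcing no inlining, as in the test above

-- A deliberately slow function to give the spark something to do.
nfib :: Int -> Int
nfib n | n < 2     = 1
       | otherwise = nfib (n - 1) + nfib (n - 2)

main :: IO ()
main = print (uncurry (+) (parallelize (nfib 24) (nfib 24)))
```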

Don: can we change the advice on the wiki please? It currently makes it
look like a known and understood issue. If anything we should suggest
using a later ghc version.

Duncan



Re: [Haskell-cafe] A problem with par and modules boundaries...

2009-05-23 Thread Duncan Coutts
On Fri, 2009-05-22 at 16:34 +0200, Daniel Fischer wrote:

  That's great, thank you. I am still baffled, though.

I'm baffled too! I don't see the same behaviour at all (see the other
email).

  Must every exported function that uses `par' be INLINEd? Does every
  exported caller of such a function need the same treatment?

It really should not be necessary.

  Is `par' really a macro, rather than a function?

It's a function.

 As far as I understand, par doesn't guarantee that both arguments are
 evaluated in parallel, it's just a suggestion to the compiler, and if
 whatever heuristics the compiler uses say it may be favourable to do
 it in parallel, it will produce code to calculate it in parallel
 (given appropriate compile- and run-time flags), otherwise it produces
 purely sequential code.
 
 With parallelize in a separate module, when compiling that, the
 compiler has no way to see whether parallelizing the computation may
 be beneficial, so doesn't produce (potentially) parallel code. At the
 use site, in the other module, it doesn't see the 'par', so has no 
 reason to even consider producing parallel code.

I don't think this is right. As I understand it, par always creates a
spark. It has nothing to do with heuristics.

Whether the spark actually gets evaluated in parallel depends on the
runtime system and whether the spark fizzles before it gets a chance
to run. Of course when using the single threaded rts then the sparks are
never evaluated in parallel. With the threaded rts and given enough
CPUs, the rts will try to schedule the sparks onto idle CPUs. This
business of getting sparks running on other CPUs has improved
significantly since ghc-6.10. The current development version uses a
better concurrent queue data structure to manage the spark pool. That's
probably the underlying reason for why the example works well in
ghc-6.11 but works badly in 6.10. I'm afraid I'm not sure of what
exactly is going wrong that means it doesn't work well in 6.10.

Generally I'd expect the effect of par to be pretty insensitive to
inlining. I'm cc'ing the ghc users list so perhaps we'll get some expert
commentary.

Duncan



Re: --out-implib when linking shared libraries

2009-05-16 Thread Duncan Coutts
On Sat, 2009-05-16 at 11:07 +0100, Neil Mitchell wrote:

 I don't, although having that option wouldn't be a bad thing - having
 a minimal .lib is perfectly reasonable as a default. Having a massive
 .lib seems crazy. (The fact that .lib is named .dll.a isn't too much
 of an issue)

It's possible to create a minimal import lib via a .def file (which
lists the exports). I think the dlltool helps with that.

  So my suggestion is remove it, if you're linking using gcc it should
  work.
 
 I'm not linking the .dll at all, only using dynamic linking, which
 works without the .lib. But I don't really want to start removing
 files - doing that in a build system seems like a bad idea.

Sure, so at least you don't have to install them.

Duncan



Re: possible alternative to libFFI

2009-05-16 Thread Duncan Coutts
On Sat, 2009-05-16 at 22:31 +0400, Bulat Ziganshin wrote:
 Hello glasgow-haskell-users,
 
 http://www.nongnu.org/cinvoke/faq.html

From the page:

How does C/Invoke compare to libFFI?

At the C API level they're pretty similar, aside from some minor
quibbles. libFFI has been around longer and is much more
portable, but the last release was in 1998.

Note that there are separate libffi releases again:

http://sourceware.org/libffi/

libffi-3.0.8 was released on December 19, 2008. You can ftp it
from ftp://sourceware.org/pub/libffi/libffi-3.0.8.tar.gz


Duncan



Re: --out-implib when linking shared libraries

2009-05-15 Thread Duncan Coutts
On Fri, 2009-05-15 at 15:31 +0100, Neil Mitchell wrote:
 Hi,
 
 I've just built a Haskell dll on Windows. As part of the process it
 generated an 14Mb foo.dll, and a 40Mb foo.dll.a. Looking at the flags
 passed to ld I see --out-implib=foo.dll.a. What is the purpose of the
 .a file? What might it be needed for? Is it possible to suppress it?

I'm less familiar with the windows dlls as I've been working on the unix
case first, but as I understand it, .lib files serve a dual purpose as
static libs and as import libs for corresponding dlls. To add confusion
the windows gnu tools use the .dll.a extension rather than .lib which
the MS tools use.

It looks like what you're getting is an import lib that also contains a
full copy of all the code.

I think it's possible to have minimal .lib files that do not contain any
code and only refer to the corresponding dll. Further, I think recent
gnu ld versions can link directly against dlls without using an import
lib (though you may still need the import lib if you want to use MSVC to
link to your dll).

So my suggestion is remove it, if you're linking using gcc it should
work.

See also:
http://www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/gnu-linker/win32.html

Duncan



Re: strictness of interpreted haskell implementations

2009-05-07 Thread Duncan Coutts
On Tue, 2009-05-05 at 00:43 +0100, Geraint Jones wrote:
 Sorry to revive a year-old thread, but...
 
 On  Fri, 25 Apr 2008 at 20:17:53 +0100 Duncan Coutts wrote:
  On Fri, 2008-04-25 at 09:08 -0700, Don Stewart wrote:
   Geraint.Jones:
Are there well-known differences in the implementations of Haskell in
ghci and hugs?  I've got some moderately intricate code (simulations
of pipelined processors) that behave differently - apparently because
ghci Haskell is stricter than hugs Haskell, and I cannot find any
obviously relevant claims about strictness in the documentation.
 
  I think they should give the same answer. It sounds like a bug in one
  implementation or the other.
 
   Hugs does no optimisations, while GHC does a truckload, including
   strictness analysis. Some of these optimisations prevent space leaks.
 
  Though none should change the static semantics.
 
  Post the code. Even if you don't have time to track down the difference,
  someone might.
 
 At the time I was reluctant to impose all the code on anyone and I found 
 it hard to cut the example down to a manageable size.  I've just got it 
 down to a one-liner: it's the implementation of what I think ought to be
 strict fields in records:
 
   data S = S { a :: Int, b :: ! Int }
 
 I think ghci is correct:
 
   *Main> a (S { a = 0, b = 1 })
   0
   *Main> a (S { a = 0, b = undefined })
   *** Exception: Prelude.undefined
 
 and that hugs had been concealing a bug in my program by not demanding
 one of the fields of S when it ought to:
 
   Main> a (S { a = 0, b = 1 })
   0
   Main> a (S { a = 0, b = undefined })
   0
 
 Ho hum.  Is this a known difference?

It's certainly a bug. I suspect it is not well known. It's not
documented at
http://cvs.haskell.org/Hugs/pages/users_guide/haskell98.html#BUGS-HASKELL98

Also, if we instead define:

data S' = S' Int !Int

a' (S' x _) = x
b' (S' _ x) = x

Then:

Main> a' (S' 0 undefined)

Program error: Prelude.undefined

Which is clearly inconsistent. There's something wrong in hugs with the
strictness annotations on data defined using the record syntax.
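
The behaviour described above can be checked with a small
self-contained program. This is a sketch of the test, not code from the
thread: in a correct implementation, constructing a value with
undefined in a strict (!) field must throw, regardless of which field
is later accessed.

```haskell
import Control.Exception (SomeException, evaluate, try)

data S = S { a :: Int, b :: !Int }

main :: IO ()
main = do
  -- the lazy field is fine even though we never touch b
  r1 <- try (evaluate (a (S { a = 0, b = 1 }))) :: IO (Either SomeException Int)
  print r1
  -- the strict field must be forced at construction time
  r2 <- try (evaluate (a (S { a = 0, b = undefined }))) :: IO (Either SomeException Int)
  case r2 of
    Left  _ -> putStrLn "strict field forced: exception (correct behaviour)"
    Right _ -> putStrLn "strict field not forced (the hugs bug)"
```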

 (What makes you think I'm teaching the same course again this year?)

:-)

As an ex teaching assistant my recommendation is "Use ghci!".

Duncan



Re: runhaskell a parallel program

2009-05-07 Thread Duncan Coutts
On Thu, 2009-05-07 at 15:12 +0100, Neil Mitchell wrote:

  This is a test framework that spawns system commands. My guess is the
  Haskell accounts for a few milliseconds of execution per hour. Running
  two system commands in parallel gives a massive boost.
 
  That still doesn't explain why you need +RTS -N2.  You can spawn multiple
  processes by making multiple calls to runProcess or whatever. If you want to
  wait for multiple processes simultaneously, compile with -threaded and use
  forkIO.
 
 I do need to wait for them - so I don't end up firing too many at
 once. I was hoping to avoid the compile, which the ghc -e will give
 me.

Right, it works because ghc itself is compiled with the threaded rts. So
you don't need the +RTS -N when you call ghc -e.

Note that for the next ghc release the process library will use a
different implementation of waitForProcess (at least on Unix) so will
not need multiple OS threads to wait for multiple processes
simultaneously.
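
The pattern under discussion can be sketched with the process library
like this. Assumptions: the 'sleep' commands are placeholder workloads,
and with older GHCs this needs -threaded so that the blocking
waitForProcess calls can each occupy an OS thread.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Monad (forM)
import System.Exit (ExitCode (..))
import System.Process (runProcess, waitForProcess)

main :: IO ()
main = do
  -- fire off the external commands, one waiting thread per process
  dones <- forM ["0.2", "0.1"] $ \t -> do
    done <- newEmptyMVar
    h <- runProcess "sleep" [t] Nothing Nothing Nothing Nothing Nothing
    _ <- forkIO (waitForProcess h >>= putMVar done)
    return done
  -- collect all the exit codes; this also bounds how many run at once
  codes <- mapM takeMVar dones
  print (all (== ExitSuccess) codes)
```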

Duncan



Re: Static library to redefine entrypoint

2009-04-25 Thread Duncan Coutts
On Fri, 2009-04-24 at 12:56 +0200, Philip K.F. Hölzenspies wrote:
 Dear GHCers,
 
 I am trying to write a wrapper library for lab work to give to students.
 My problem is, that the libraries I use require initialization that I
 really want to hide from our students. The wrapper I'm writing is
 compiled to a static library and installed with cabal, so students can
 just ghc --make or ghci their sources. Here comes the problem: I
 want to define main and let students just define labMain as an entry
 point to their program.
 
 How can I have my library use labMain without a definition? Keep in
 mind that I want to give them a cabalized library that they can just
 link to, so I can't give them a template file that they fill and compile
 together with my code. Is it at all possible to have external
 functions and letting the linker sort stuff out?

When I've set practicals like this I've just provided them with a 3 line
Main module that imports functions exported by the module(s) that the
students write. Eg I have them fill in Draw.lhs but tell them to compile
the thing using ghc --make Main.hs. I never found that this confused any
of them (indeed some were interested in looking at the code).

Duncan



Re: 6.10.3 plans

2009-04-23 Thread Duncan Coutts
On Wed, 2009-04-22 at 18:55 -0700, Sigbjorn Finne wrote:
 Hi Ian,
 
 thanks for the update on plans and the willingness to jump in and do another
 release cycle so soon after 6.10.2. The suggested fixes seem agreeable to
 me, but I have one _minor_ additional request for 6.10.3 if you end having
 to rebuild 'base' -- add a DEPRECATED (or some such) to
 Foreign.ForeignPtr.{newForeignPtr,addForeignPtrFinalizer} to indicate
 that the operational behaviour of these have changed.
 
 Small change, but could be helpful to package users/authors when migrating
 beyond 6.10.1

I agree that it's a little unfortunate that this change is in a minor
release.

I'm not sure what can be done as far as automatic messages go however.
The notice about the change is in the release notes. The functions are
not deprecated (they're part of the FFI spec).

Duncan



Re: 6.10.3 plans

2009-04-23 Thread Duncan Coutts
On Thu, 2009-04-23 at 05:59 -0700, Sigbjorn Finne wrote:
 On 4/23/2009 02:05, Duncan Coutts wrote:
  On Wed, 2009-04-22 at 18:55 -0700, Sigbjorn Finne wrote:

  Hi Ian,
 
  thanks for the update on plans and the willingness to jump in and do 
  another
  release cycle so soon after 6.10.2. The suggested fixes seem agreeable to
  me, but I have one _minor_ additional request for 6.10.3 if you end having
  to rebuild 'base' -- add a DEPRECATED (or some such) to
  Foreign.ForeignPtr.{newForeignPtr,addForeignPtrFinalizer} to indicate
  that the operational behaviour of these have changed.
 
  Small change, but could be helpful to package users/authors when migrating
  beyond 6.10.1
  
 
  I agree that it's a little unfortunate that this change is in a minor
  release.
 
  I'm not sure what can be done as far as automatic messages go however.
  The notice about the change is in the release notes. The functions are
  not deprecated (they're part of the FFI spec).

 Sorry, didn't mean to imply that they were. Just offered it as a 
 pragmatic solution to deliver extra help to folks without spending the
 dev. time to implement a more appropriate pragma like WARNING/INFO. If
 such a thing already existed...

For INFO we'd want a mechanism to have it tell us the first time but
once we acknowledge the info, for it not to keep bugging us or our users
every time. Hmm, tricky.

Duncan



Re: Re[2]: bug: unstable myThreadId in GHC 6.6.1

2009-04-16 Thread Duncan Coutts
On Sat, 2009-04-11 at 21:07 +0400, Bulat Ziganshin wrote:
 Hello Bertram,
 
 Saturday, April 11, 2009, 8:09:46 PM, you wrote:
 
  What does same thread mean? I'll risk a guess.
 
 well, that's possible - i'll ask on gtk2hs list too
 
 currently, i believe that mainGUI just runs endless loop processing
 queue of GUI events


You are both right. mainGUI does just run an endless event processing
loop but callbacks for events (like button clicks) are indeed ffi
wrapper callbacks and so do get run in a fresh thread.

Your 'guiThread' is blocked in the mainGUI call, so nothing ever happens
in that Haskell thread until the event loop terminates.

Duncan



Re: [Haskell] Re: ANNOUNCE: GHC version 6.10.2

2009-04-02 Thread Duncan Coutts
On Thu, 2009-04-02 at 13:47 +0900, Benjamin L.Russell wrote:
 On Wed, 1 Apr 2009 18:48:13 -0700, Lyle Kopnicky li...@qseep.net
 wrote:
 
 Great! But what happened to the time package? It was in 6.10.1. Has it been
 intentionally excluded from 6.10.2?

Yes, the maintainer of the time package asked for it to be removed:

 Can I remove the time package from the GHC build process? I
 want to update it but I don't want to deal with GHC's
 autotools stuff or break the GHC build.


 Then I should probably hold off on installing the new version for now.
 Any estimate on when this problem will be fixed?

The time package will be part of the first platform release (assuming we
get enough volunteers to do the platform release!)

In the mean time you can just:

$ cabal install time


Duncan



RE: compilation of pattern-matching?

2009-03-25 Thread Duncan Coutts
On Wed, 2009-03-25 at 09:18 +, Simon Peyton-Jones wrote:

 * More promising might be to say this is the hot branch.  That information 
 about frequency could in principle be used by the back end to generate better 
 code.  However, I am unsure how
 a) to express this info in source code
 b) retain it throughout optimisation

Claus, last time I asked about this approach Simon filed the following
ticket:

http://hackage.haskell.org/trac/ghc/ticket/849

If you add a new commentary page then it is at least worth
cross-referencing this ticket.

Duncan



Re: ANNOUNCE: GHC 6.10.2 Release Candidate 1

2009-03-19 Thread Duncan Coutts
On Thu, 2009-03-19 at 16:34 -0700, Don Stewart wrote:
 We must have the gtk2hs team involved in this discussion. They were
 using an undocumented feature. It may be trivial to fix.

   This will need to be fixed in gtk2hs.  Previously GHC allowed finalizers
   to call back into Haskell, but this was never officially supported.  Now
   it is officially unsupported, because finalizers created via
   Foreign.mkForeignPtr are run directly by the garbage collector.

I had a quick look but so far I cannot see where any callback into
Haskell is happening. The only interesting case I've found is one
finaliser which is implemented in C and uses hs_free_stable_ptr.

Duncan



Re: Under Solaris: GHC 6.10.2 Release Candidate 1

2009-03-18 Thread Duncan Coutts
On Tue, 2009-03-17 at 21:12 -0400, Brandon S. Allbery KF8NH wrote:
 On 2009 Mar 17, at 20:28, Duncan Coutts wrote:

  It works for me under Solaris 10. Perhaps Solaris 9 or older do not  
  have a standard compliant /bin/sh program. What do you suggest we use
  instead as a workaround?

 For backward compatibility reasons sh in Solaris 9 and earlier is not  
 POSIX compliant.  Use /usr/xpg4/bin/sh or /bin/bash instead.

 (Unfortunately you can't cheat and define a shell function, although  
 you could create a program called !:

  if ${1+"$@"}; then
  exit 1
  else
  exit 0
  fi

Actually this is what the script used 'til someone pointed out to me
that sh has the ! syntax :-). I'll switch it back to using this style
with a note to say why.

Duncan


-  ! grep "${PKG}-${VER_MATCH}" ghc-pkg.list > /dev/null 2>&1
+  if grep "${PKG}-${VER_MATCH}" ghc-pkg.list > /dev/null 2>&1
+  then
+    return 1;
+  else
+    return 0;
+  fi
+  #Note: we cannot use "! grep" as Solaris 9 /bin/sh doesn't like it.




Re: ghci finding include files exported from other packages?

2009-03-17 Thread Duncan Coutts
On Tue, 2009-03-17 at 08:53 +0000, Simon Marlow wrote:
 Duncan Coutts wrote:
  On Mon, 2009-03-16 at 12:13 +0000, Simon Marlow wrote:
  
  Yes, if we know we're using it. If we specify -package blah on the
  command line then we do know we're using it and everything works
  (because ghc uses the include-dirs when it calls cpp). If we don't
  specify -package then ghc does not know we need the package until after
  import chasing is done. Import chasing requires that we run cpp on
  the .hs file first and that brings us full circle.
 
 I don't see a reason why we shouldn't pass *all* the include paths for the 
 exposed packages to CPP.  Indeed that's what I thought we did, but I've 
 just checked and I see we don't.  Wouldn't that fix Conal's problem?

Yes, it probably would. On my system that'd only be between 25 and 50
include directories, which I guess is not too bad. Let's hope not too
many packages decide they need a config.h file.

Presumably, passing the include dirs for all exposed packages would
still take -package flags into account, so when Cabal uses
-hide-all-packages it still gets its desired behaviour.

Duncan



Re: Under Solaris: GHC 6.10.2 Release Candidate 1

2009-03-17 Thread Duncan Coutts
On Tue, 2009-03-17 at 11:09 +0100, Christian Maeder wrote:
 GHC 6.10.2 will have a problem with cabal-install-0.6.2!
 
 When I tried to install cabal-install-0.6.2 for ghc-6.10.1.20090314
 I needed to change #!/bin/sh to #!/bin/bash in bootstrap.sh to avoid the
 following errors:
 
 -bash-3.00$ ./bootstrap.sh
 Checking installed packages for ghc-6.10.1.20090314...
 ./bootstrap.sh: !: not found

 Under Solaris sh is not bash!

Indeed.

According to the OpenGroup that syntax should be fine:
http://www.opengroup.org/onlinepubs/009695399/utilities/xcu_chap02.html#tag_02_09_02

It works for me under Solaris 10. Perhaps Solaris 9 or older do not have
a standard compliant /bin/sh program. What do you suggest we use instead
as a workaround?

 Next, ghc-6.10.1.20090314 comes with package unix-2.4.0.0, but
 cabal-install.cabal requests:
 
   unix >= 2.0 && < 2.4
 
 Changing to <= 2.4 was not sufficient, so I changed it to < 2.5.
 This will affect any OS!

Hmm, it's a bit suspicious that the major version number is changing in
a minor ghc release. Do we know what the API breakage is? This could
affect any program.

Duncan



Re: ghci finding include files exported from other packages?

2009-03-16 Thread Duncan Coutts
On Mon, 2009-03-16 at 12:13 +0000, Simon Marlow wrote:

  This sounds like a chicken and egg problem. To know which package
  include directories to use GHCi needs to know which packages your module
  uses. However to work out which packages it needs it has to load the
  module which means pre-processing it!
  
  With cabal we get round this problem because Cabal calls ghc with
  -package this -package that etc and so when ghc cpp's the module it does
  know which package include directories to look in.
 
 Perhaps I'm missing something, but if applicative-numbers is an exposed 
 package, shouldn't we be adding its include-dirs when invoking CPP?

Yes, if we know we're using it. If we specify -package blah on the
command line then we do know we're using it and everything works
(because ghc uses the include-dirs when it calls cpp). If we don't
specify -package then ghc does not know we need the package until after
import chasing is done. Import chasing requires that we run cpp on
the .hs file first and that brings us full circle.

Duncan



Re: ghci finding include files exported from other packages?

2009-03-16 Thread Duncan Coutts
On Mon, 2009-03-16 at 16:04 -0700, Conal Elliott wrote:
 On Mon, Mar 16, 2009 at 2:47 PM, Duncan Coutts
 duncan.cou...@worc.ox.ac.uk wrote:
 On Mon, 2009-03-16 at 12:13 +0000, Simon Marlow wrote:

  Perhaps I'm missing something, but if applicative-numbers is an
  exposed package, shouldn't we be adding its include-dirs when
  invoking CPP?
 
 Yes, if we know we're using it. If we specify -package blah on the
 command line then we do know we're using it and everything works
 (because ghc uses the include-dirs when it calls cpp). If we don't
 specify -package then ghc does not know we need the package until
 after import chasing is done. Import chasing requires that we run cpp
 on the .hs file first and that brings us full circle.
 
 Duncan
 
 Unless you drop the cpp-first requirement and have import-chasing look
 into #include'd files, as I described earlier.  - Conal

Yes, that's what I said earlier about re-implementing cpp, possibly via
cpphs. That would let us chase #includes and, assuming we know which
packages provide which include files, work out which packages are
needed for cpp-ing.

Or you could play it fast and loose and assume that you can usually
ignore # cpp lines and still work out what the Haskell imports are.
That's not correct, but it is easier and would probably work most of
the time.

Neither approach is easy or very nice.
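The fast-and-loose option might be sketched as below; `naiveImports` is a hypothetical helper, not GHC's actual code, and it deliberately ignores comments, line continuations and other corner cases:

```haskell
-- Sketch of the fast-and-loose approach: collect import lines while
-- simply skipping cpp (#) directives. naiveImports is a hypothetical
-- helper, not GHC's implementation.
import Data.Char (isSpace)
import Data.List (isPrefixOf, stripPrefix)

naiveImports :: String -> [String]
naiveImports src =
  [ takeWhile (\c -> not (isSpace c) && c /= '(') modName
  | l <- map (dropWhile isSpace) (lines src)
  , not ("#" `isPrefixOf` l)               -- ignore cpp lines entirely
  , Just rest <- [stripPrefix "import " l]
  , let modName = dropWhile isSpace
          (maybe rest id (stripPrefix "qualified " rest))
  ]
```

On a module like the one in this thread, the #include line would simply be skipped and only the genuine imports reported.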

Duncan



Re: ghci finding include files exported from other packages?

2009-03-15 Thread Duncan Coutts
On Sat, 2009-03-14 at 23:43 -0700, Conal Elliott wrote:
 The applicative-numbers package [1] provides an include file.  With
 ghci, the include file isn't being found, though with cabal+ghc it is
 found.
 
 My test source is just two lines:
 
 {-# LANGUAGE CPP #-}
 #include "ApplicativeNumeric-inc.hs"
 
 I'd sure appreciate it if someone could take a look at the .cabal file
 [2] and tell me if I'm doing something wrong.  And/or point me to one
 or more working examples of cabal packages that export include files
 that are then findable via ghci.

This sounds like a chicken and egg problem. To know which package
include directories to use GHCi needs to know which packages your module
uses. However to work out which packages it needs it has to load the
module which means pre-processing it!

With cabal we get round this problem because Cabal calls ghc with
-package this -package that etc and so when ghc cpp's the module it does
know which package include directories to look in.

So if you did ghci -package applicative-numbers then it should work. I'm
afraid I don't have any good suggestion for how to make it work with
ghci without having to specify any options at all.

Duncan



Re: ghci finding include files exported from other packages?

2009-03-15 Thread Duncan Coutts
On Sun, 2009-03-15 at 09:13 -0700, Conal Elliott wrote:
 That did it.  I've added :set -package applicative-numbers to
 my .ghci and am back in business.  Thanks!
 
 IIUC, there's an inconsistency in ghci's treatment of modules vs
 include files, in that modules will be found without -package, but
 include files won't.  Room for improvement, perhaps.

But that's because of the circularity I described. GHC can chase Haskell
imports because it can parse Haskell, but chasing CPP #includes would
require us to re-implement cpp. Perhaps we could do it by modifying
cpphs.

Duncan



Re: Cygwin version

2009-03-09 Thread Duncan Coutts
On Sun, 2009-03-08 at 12:29 +, Tuomo Valkonen wrote:
 I want a _real_ cygwin version of darcs. The non-deterministic
 pseudo-cygwin *nix/Windows hybrid currently available has just 
 too many problems integrating into cygwin, that I want to use as
 my TeXing and minor coding environment. A real cygwin version
 of darcs would seem to depend on a real cygwin version of GHC.
 Is there any easy way to compile one? Otherwise I may have to
 abandon darcs (and Haskell software in general) for Mercurial.
 
 (Thanks to the over-bearing cabal and resulting hsc2hs etc.
 build problems with conventional Makefiles, I have pretty much
 already abandoned my own Haskell projects.)

Yes we did introduce a problem with hsc2hs in the most recent ghc
release and I'm sorry about that. Just in case you're interested
however, the fix for your makefiles is to add two flags:

hsc2hs --cc=ghc --ld=ghc

That should work with any version of hsc2hs and it gives the behaviour
of the older hsc2hs versions that came with older ghc releases.

Duncan



Re: hsc2hs and HsFFI.h

2009-02-18 Thread Duncan Coutts
On Tue, 2009-02-10 at 17:15 +0000, Ian Lynagh wrote:
 Hi all,
 
 Currently, hsc2hs (as shipped with GHC) cannot be used with just
 hsc2hs Foo.hsc
 as it cannot find HsFFI.h (http://hackage.haskell.org/trac/ghc/ticket/2897).
 To make it work you need to run something like
 hsc2hs -I /usr/lib/ghc-6.10.1/include Foo.hsc
 (it also works when called by Cabal, as Cabal passes it this flag
 automatically). However, we would like to have it work without needing
 to use any special flags, and without having to use it from within a
 Cabal package.
 
 The obvious solution to this problem would seem to be to put HsFFI.h in
 /usr/lib/hsc2hs/include
 and have hsc2hs automatically add that to the include path. However,
 hsc2hs is supposed to be a compiler-independent tool, and HsFFI.h isn't
 a compiler-independent header file; for example, GHC's implementation
 defines HsInt to be a 64-bit integer type on amd64, whereas hugs's
 implementation defines it to be a 32-bit type. We therefore need a
 different HsFFI.h depending on which compiler we are using.
 
 One option would be to have hsc2hs (when installed with GHC) append
 -I /usr/lib/ghc-6.10.1/include to the commandline. If the user gives a
 -I /usr/lib/hugs/include flag then this path will be looked at first,
 and the hugs HsFFI.h will be used.
 
 Another option would be for the user to tell hsc2hs which compiler
 they're using, e.g.
 hsc2hs --compiler=/usr/bin/ghc Foo.hsc
 (this compiler is distinct from the C compiler that hsc2hs will use).
 hsc2hs will then pass the appropriate -I flag, depending on what sort of
 compiler it is told to use. The hsc2hs that comes with GHC would
 probably default to using the GHC that it is installed with, but
 standalone hsc2hs would probably default to searching for /usr/bin/ghc,
 /usr/bin/hugs, etc.
 
 This last approach would also make it possible for hsc2hs to take
 -package foo flags, and add the include paths for the requested
 packages too.
 
 The downside is that it's pushing a lot more knowledge into hsc2hs,
 which means there is one more thing to keep in sync.
 
 
 Has anyone got any other alternatives to consider? Or opinions on which
 solution is best?

I don't see any nice solutions here. It's not nice to have each
compiler ship its own variant/wrapper of hsc2hs (which one gets to
be /usr/bin/hsc2hs?). It's also not nice for hsc2hs to have to know
about each different compiler. Worst of all would be to teach ghc how
to compile .hsc files.

My suggestion is to avoid the problem. Why does hsc2hs need to know
anything about which Haskell compiler is in use? It's because it
#includes HsFFI.h in its default hsc template. Why does it need to
include HsFFI.h? Well, actually it probably doesn't need to at all.

HsFFI.h is not needed by the code in the hsc template itself, nor is
it needed by other code generated by hsc2hs as far as I can tell. Does
anyone remember why HsFFI.h was included in the default hsc template?
My guess is that it's there as a convenience for those modules that
need things from HsFFI.h. I speculate that the number of .hsc modules
that actually need this header file is very low.

So my suggestion is that in those few cases where it is needed, it
should be specified explicitly in the .hsc file. In such cases it is the
responsibility of the build system to use the right -I include dir.
Cabal does this ok and others can do it too if needed.
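For the few modules that do need it, the explicit include would look something like this hypothetical Foo.hsc (module name and contents illustrative only; the build system supplies the matching -I directory as described above):

```haskell
-- Hypothetical Foo.hsc: it needs HsFFI.h, so under this proposal it
-- includes the header itself instead of relying on the default hsc2hs
-- template.
module Foo (hsIntSize) where

#include "HsFFI.h"

-- sizeof(HsInt), computed by hsc2hs via the #size directive
hsIntSize :: Int
hsIntSize = (#size HsInt)
```

Processing it with something like `hsc2hs -I /usr/lib/ghc-6.10.1/include Foo.hsc` would then produce an ordinary Foo.hs.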

To see if this is viable we'd want to check that building a bunch of
packages from hackage that use hsc2hs works with a modified template
file. This test should be relatively easy to perform, though would take
several hours of building.

Sound plausible?

Duncan



Re: [Haskell-cafe] Gtk2HS 0.10.0 Released

2009-02-17 Thread Duncan Coutts
On Tue, 2009-02-17 at 08:47 +0000, Simon Marlow wrote:
 Duncan Coutts wrote:

  Maybe. Dealing with linker scripts properly is probably rather tricky
  and we get it for free when we switch to shared libraries.
 
 I don't follow this last point - how does switching to shared libraries for 
 Haskell code change things here?

It means that ghci will not need to link to system shared libs except
when someone uses -lblah on the ghci command line. That's because when
we link a Haskell package as a shared lib the system linker interprets
any linker scripts and embeds the list of dependencies on other shared
libs (other Haskell packages and system libs). Then ghci just dlopens
the shared libs for the directly used Haskell packages, and that
automatically resolves all their deps on other Haskell and system
shared libs.

Duncan



Re: [Haskell-cafe] createProcess shutting file handles

2009-02-15 Thread Duncan Coutts
On Sun, 2009-02-15 at 11:06 +0000, Duncan Coutts wrote:
 On Sun, 2009-02-15 at 09:24 +0000, Neil Mitchell wrote:
  Hi
  
   What have I done wrong? Did createProcess close the handle, and is
   there a way round this?
  
   The docs for runProcess says:
  
  Any Handles passed to runProcess are placed immediately in the
  closed state.
  
   but the equivalent seems to be missing from the documentation for
   createProcess.
  
  However the createProcess command structure has the close_fds flag,
  which seems like it should override that behaviour, and therefore this
  seems like a bug in createProcess.
 
 close_fds :: Bool
 
 Close all file descriptors except stdin, stdout and stderr in
 the new process
 
 This refers to inheriting open unix file descriptors (or Win32 HANDLEs)
 in the child process. It's not the same as closing the Haskell98 Handles
 in the parent process that you pass to the child process.
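A small sketch of that distinction, using the process package's createProcess (`runEcho` is a hypothetical helper):

```haskell
-- close_fds controls which file descriptors the *child* inherits; it
-- is unrelated to the parent-side Handles discussed above, which
-- createProcess closes in the parent when passed via UseHandle.
import System.IO (hGetContents)
import System.Process

runEcho :: IO String
runEcho = do
  (_, Just hout, _, ph) <-
    createProcess (proc "echo" ["hello"])
      { std_out   = CreatePipe  -- give the parent a pipe to read
      , close_fds = True }      -- child inherits only stdin/out/err
  out <- hGetContents hout
  _ <- length out `seq` waitForProcess ph
  return out
```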

So let's not talk about whether the current behaviour is a bug or not.
It's reasonably clear (if not brilliantly well documented) that it's
the intended behaviour.

What we want to talk about is the reason for the current behaviour,
whether it is necessary, and whether it is a sensible default. As I
said before I don't know why it is the way it is. I'm cc'ing the ghc
users list in the hope that someone there might know.

Duncan



Re: [Haskell-cafe] Gtk2HS 0.10.0 Released

2009-02-12 Thread Duncan Coutts
On Thu, 2009-02-12 at 10:11 +0100, Christian Maeder wrote:
 Duncan Coutts wrote:
  On Wed, 2009-02-11 at 15:49 +0100, Lennart Augustsson wrote:
  Does this version work from ghci?
 
-- Lennart
  
  Specifically I believe Lennart is asking about Windows. It's worked in
  ghci in Linux for ages and it worked in ghci in Windows prior to the
  0.9.13 release.
  
  In the 0.9.13 release on Windows there was something funky with linking
  (possibly due to using a newer mingw) and ghci's linker could not
  understand what was going on and could not load the packages.
 
 I'm having trouble
 http://hackage.haskell.org/trac/ghc/ticket/2615
 (cairo depends on pthread, which has a linker script)
 Is there an easy workaround?

The way it used to work was that the Gtk2Hs ./configure script just
filtered out pthread on linux systems. Of course that's just a hack.

 Maybe that ticket can be considered in Plans for GHC 6.10.2

Maybe. Dealing with linker scripts properly is probably rather tricky
and we get it for free when we switch to shared libraries.

Duncan



Re: Pragma not recognised when wrapped in #ifdef

2009-02-11 Thread Duncan Coutts
On Tue, 2009-02-10 at 13:43 +0000, Simon Marlow wrote:
 Simon Peyton-Jones wrote:
  I'm guessing a bit here, but it looks as if you intend this:
  
  * GHC should read Foo.hs, and see {-# LANGUAGE CPP #-}
  * Then it should run cpp
  * Then it should look *again* in the result of running cpp,
to see the now-revealed {-# LANGUAGE DeriveDataTypeable #-}
  
  I'm pretty sure we don't do that; that is, we get the command-line flags 
  once for all from the pre-cpp'd source code.  Simon or Ian may be able to 
  confirm.
 
 Spot on.
 
  If so, then this amounts to
 a) a documentation bug: it should be clear what GHC does
 
 Right, I checked the docs and it doesn't explicitly say this.
 
 b) a feature request, to somehow allow cpp to affect in-file flags
I'm not sure what the spec would be
 
 It needs a bit of thought - what should happen to pragmas that are there 
 pre-CPP but not post-CPP, for example?

If we ever make Cabal do the cpp'ing instead of Cabal getting ghc to do
it then it would operate differently. Cabal would look for the LANGUAGE
CPP pragma and use that to decide if the module needs to be cpp'ed. Then
ghc would get the result after cpp and so get all pragmas post-cpp.

Is there any problem with having ghc do something similar now? So ghc
reads the file and collects the pragmas. From that set of pragmas it
discovers that it needs to run cpp on the file. It should now *discard*
all the pragmas and info that it collected, run cpp, and read the
resulting .hs file normally. That means it will accurately pick up
pragmas that are affected by cpp, just as if the user had run cpp on
the file manually.

Duncan



Re: Temporarily overriding Data.Generic

2009-02-04 Thread Duncan Coutts
On Thu, 2009-02-05 at 00:11 +0100, Deniz Dogan wrote:
 I'm currently working on hacking Data.Generics for my master thesis.
 I'm basically trying to find out whether it can be made any faster
 using e.g. rewrite rules. The problem I'm having is that I need an
 easy way to import my own modified version of Data.Generics (currently
 located in the same directory as my testing program) without
 unregistering or hiding syb-0.1.0.0 as base seems to depend on it.

This should just work. If ./Data/Generics.hs exists relative to the
current directory then by default it overrides the module of the same
name from the syb package. There's clearly some specific problem you're
hitting; can you tell us more about it?

When you say "currently located in the same directory as my testing
program", do you mean you've got Generics.hs in the same dir as your
Test.hs module, or do you mean you've got ./Test.hs
and ./Data/Generics.hs, i.e. in a subdirectory?

The problems you're likely to run into will be with other code that
already uses the syb:Data.Generics module as the types are necessarily
not substitutable for each other.

 I've read the GHC user manual trying to find nice ways to do this
 using a bunch of different parameters to ghc, but I can't figure it
 out. Does anyone here know?

The command line options for controlling the module search path are
basically the -package flags and the -i flag. The default, if you don't
say anything, is -i., meaning look first in the current directory.
Duncan



Re: [Haskell-cafe] Ready for testing: Unicode support for Handle I/O

2009-02-03 Thread Duncan Coutts
On Tue, 2009-02-03 at 11:03 -0600, John Goerzen wrote:

 Will there also be something to handle the UTF-16 BOM marker?  I'm not
 sure what the best API for that is, since it may or may not be present,
 but it should be considered -- and could perhaps help autodetect encoding.

I think someone else mentioned this already, but utf16 (as opposed to
utf16be/le) will use the BOM if it's present.

I'm not quite sure what happens when you switch encoding, presumably
it'll accept and consider a BOM at that point.

  Thanks to suggestions from Duncan Coutts, it's possible to call
  hSetEncoding even on buffered read Handles, and the right thing
  happens.  So we can read from text streams that include multiple
  encodings, such as an HTTP response or email message, without having
  to turn buffering off (though there is a penalty for switching
  encodings on a buffered Handle, as the IO system has to do some
  re-decoding to figure out where it should start reading from again).
 
 Sounds useful, but is this the bit that causes the 30% performance hit?

No. You only pay that penalty if you switch encoding. The standard case
has no extra cost.

  Performance is about 30% slower on hGetContents >>= putStr than
  before.  I've profiled it, and about 25% of this is in doing the
  actual encoding/decoding, the rest is accounted for by the fact that
  we're shuffling around 32-bit chars rather than bytes in the Handle
  buffer, so there's not much we can do to improve this.
 
 Does this mean that if we set the encoding to latin1, we should see
 performance 5% worse than present?

No, I think that's 30% for latin1. The cost is not really the character
conversion but the copying from a byte buffer via iconv to a char
buffer.

 30% slower is a big deal, especially since we're not all that speedy now.

Bear in mind that's talking about the [Char] interface, and nobody using
that is expecting great performance. We already have an API for getting
big chunks of bytes out of a Handle, with the new Handle we'll also want
something equivalent for a packed text representation. Hopefully we can
get something nice with the new text package.

Duncan



Re: [Haskell-cafe] Ready for testing: Unicode support for Handle I/O

2009-02-03 Thread Duncan Coutts
On Tue, 2009-02-03 at 17:39 -0600, John Goerzen wrote:
 On Tue, Feb 03, 2009 at 10:56:13PM +0000, Duncan Coutts wrote:
Thanks to suggestions from Duncan Coutts, it's possible to call
hSetEncoding even on buffered read Handles, and the right thing
happens.  So we can read from text streams that include multiple
encodings, such as an HTTP response or email message, without having
to turn buffering off (though there is a penalty for switching
encodings on a buffered Handle, as the IO system has to do some
re-decoding to figure out where it should start reading from again).
   
   Sounds useful, but is this the bit that causes the 30% performance hit?
  
  No. You only pay that penalty if you switch encoding. The standard case
  has no extra cost.
 
 I'm confused.  I thought the standard case was conversion to the
 system's local encoding?  How is that different than selecting the
 same encoding manually?

Sorry, I think we've been talking at cross purposes.

 There always has to be *some* conversion from a 32-bit Char to the
 system's selection, right?

Yes. In text mode there is always some conversion going on. Internally
there is a byte buffer and a char buffer (ie UTF32).

 What exactly do we have to do to avoid the penalty?

The penalty we're talking about here is not the cost of converting bytes
to characters, it's in switching which encoding the Handle is using. For
example you might read some HTTP headers in ASCII and then switch the
Handle encoding to UTF8 to read some XML.

Switching the Handle encoding has a penalty. We have to discard the
characters that we pre-decoded and re-decode the byte buffer in the new
encoding. It's actually slightly more complicated because we do not
track exactly how the byte and character buffers relate to each other
(it'd be too expensive in the normal cases) so to work out the
relationship when switching encoding we have to re-decode all the way
from the beginning of the current byte buffer.

The point is, in terms of performance we get the ability to switch
handle encoding more or less for free. It has a cost in terms of code
complexity. The simpler alternative design was that you would not be
able to switch encoding on a read handle that used any buffering at the
character level without losing bytes. The performance penalty when
switching encoding is the downside to the ordinary code path being fast.
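For illustration, the mixed-encoding read described above might look like this (`readMixed` and the file layout are hypothetical, assuming the new hSetEncoding API):

```haskell
-- Read one header line as latin1, then switch the same buffered
-- Handle to utf8 for the remainder. The re-decoding penalty discussed
-- above is paid at the second hSetEncoding call.
import System.IO

readMixed :: FilePath -> IO (String, String)
readMixed path = do
  h <- openFile path ReadMode
  hSetEncoding h latin1
  header <- hGetLine h         -- decoded as latin1
  hSetEncoding h utf8          -- switch encodings mid-stream
  body <- hGetContents h       -- decoded as utf8
  length body `seq` hClose h   -- force before closing
  return (header, body)
```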

  No, I think that's 30% for latin1. The cost is not really the character
  conversion but the copying from a byte buffer via iconv to a char
  buffer.
 
 Don't we already have to copy between a byte buffer and a char buffer,
 since read() and write() use a byte buffer?

In the existing Handle mechanism we read() into a byte buffer and then
when doing say getLine or getContents we allocate [Char]'s in a loop
reading bytes directly from the byte buffer. There is no separate
character buffer.

Duncan



permissions api in directory package is useless (and possibly harmful)

2009-01-28 Thread Duncan Coutts
All,

We need to think about a new better permissions api for the
System.Directory module. The current api and also the implementation are
at best useless and possibly harmful.

I've been trying to debug various permission problems related to
installing files with Cabal on Unix and Windows. Half the problems seem
to stem from the permissions api and the copying of permissions in
copyFile.

data Permissions = Permissions {
  readable :: Bool,
  writable :: Bool,
  executable :: Bool,
  searchable :: Bool
}

getPermissions :: FilePath -> IO Permissions
setPermissions :: FilePath -> Permissions -> IO ()

These are clearly designed for the unix permissions model, however they
do not map onto it very usefully. get/setPermissions only get the user
permissions, not the group or other. So for example if I have a file:

-rw-rw-rw- 1 duncan users 0 2009-01-28 12:34 foo

then

setPermissions "foo" (Permissions True False False False)

only removes write permissions from me, not from everyone else:

-r--rw-rw- 1 duncan users 0 2009-01-28 12:34 foo

which cannot be what we wanted.

It's also pretty useless for installing files globally. For that we want
to say that a file is readable by everyone and only writable by the
owner (usually root). It might even be ok to say only readable by
everyone and writable by nobody, but we cannot even do that. Combine
that with copyFile copying permissions and we can easily lock people out
or install world-writable files depending on the umask of the user
building the software.

On windows getPermissions tells us almost nothing. In particular if it
says the file is readable or writable, this is no guarantee that we can
read or write the file. getPermissions does not look at permissions. It
only consults the old DOS read-only attribute (and the file extension to
tell us if a file is executable).

Similarly, on windows setPermissions also only sets the read-only
attribute. The read-only attribute should really be avoided. It has
rather unhelpful semantics: for example, moving a file over a read-only
file fails, whereas if windows permissions (ACLs) are used then it
works as expected (same as on POSIX). The read-only attribute is only
there for compatibility with legacy software, and we should really
never set it.

There is also an implementation of copyPermissions, it's not actually
exposed in System.Directory but it is used by copyFile. It calls stat to
get the permissions and chmod to set them. The implementation is the
same between unix and windows, on windows it uses the stat and chmod
from the msvcrt library.

On Unix copyPermissions does work and is probably the right thing for
copyFile to do. It's the default behaviour of /bin/cp for example.
However it does mean there is no easy efficient way to make a copy of a
file where the destination file gets the default permissions, given the
umask of the current user.

This means that copyFile is not appropriate for installing files, since
the user building a package is not necessarily the same as the one
installing it, and their umasks can and often do differ. The unix
install program does not copy permissions; instead it sets them
explicitly. We have no portable way of doing the same, but we can at
least use System.Posix.Files.setFileMode to do it.
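A sketch of that install-style approach (`installFile` is a hypothetical helper, assuming the unix package; note the mode is set in a second, non-atomic step):

```haskell
-- Copy the file, then explicitly set the destination mode rather than
-- keeping whatever permissions copyFile copied over. 0644 here means
-- rw-r--r--: readable by everyone, writable only by the owner.
import System.Directory (copyFile)
import System.Posix.Files

installFile :: FilePath -> FilePath -> IO ()
installFile src dest = do
    copyFile src dest          -- copies data (and, today, permissions)
    setFileMode dest mode0644  -- then override with an explicit mode
  where
    mode0644 = foldr1 unionFileModes
      [ ownerReadMode, ownerWriteMode, groupReadMode, otherReadMode ]
```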

One nice thing about copyFile is that it replaces the destination file
atomically. However, if we have to set the permissions in a second step
then we lose this nice property. Ideally we would create the temporary
file in the destination directory (exclusively, and with permissions
such that no other user could write to our file), we'd copy the data
over, set the new destination permissions and atomically replace the
destination file.

The copyPermissions function on windows is useless. It does not copy
permissions. The way that msvcrt implements stat and chmod means that
the only permission that gets copied is the old DOS read-only
attribute. Perhaps copying the read-only attribute is the right thing
for copyFile to do, since it's what the Win32 CopyFile function does,
but it's never what I want to do when installing software package files.


So, can we craft a useful API for permissions? We should consider what
the defaults for copyFile etc. should be with respect to permissions.
Where those defaults are not helpful, can we think of a way to let us do
what we want (e.g. get default or specific permissions instead of copying
permissions)?
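One purely hypothetical shape such an API could take (every name here is
invented for illustration, nothing like it exists in System.Directory) is
to make the permissions behaviour an explicit parameter of the copy:

```haskell
-- Hypothetical design sketch only: the point is to make the permissions
-- behaviour of a copy an explicit, portable choice rather than an
-- accident of the implementation.
import System.Posix.Types (FileMode)

data PermissionsPolicy
  = CopyPermissions        -- what copyFile effectively does on Unix today
  | DefaultPermissions     -- what a freshly created file would get (umask)
  | ExplicitMode FileMode  -- install-style: set exactly these permissions
  deriving (Eq, Show)

copyFileWith :: PermissionsPolicy -> FilePath -> FilePath -> IO ()
copyFileWith _policy _src _dest =
  error "design sketch only, not implemented"
```

Each policy has an obvious meaning on Unix; what ExplicitMode should mean
on Windows is exactly the kind of question such a design would have to
answer.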

Or is it impossible, so that we just have to use system-specific functions
whenever we need something non-standard? In that case we need to make
those functions better; they appear to be mostly lacking for the Win32
case.

In either case I'm sure the existing permissions API needs to be
deprecated with big flashing warnings.


Duncan


Oh, and while I'm thinking about it: it is not currently possible to open
new/exclusive files with specific permissions, and the open-temp-file
functions on Windows do not ensure the file is writable only by the
current user.
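On Unix the unix package can already express the first of these in a
single step; there is just no portable equivalent. A sketch (openNewFile
is a hypothetical name, and the openFd signature is the one from the unix
package of this era, where passing Just a mode implies O_CREAT):

```haskell
{- Unix-only sketch: create a new file exclusively, with specific
   permissions, in a single open(2) call.  openNewFile is a
   hypothetical name, not an existing API. -}
import System.IO (Handle)
import System.Posix.IO

openNewFile :: FilePath -> IO Handle
openNewFile path = do
  -- O_CREAT | O_EXCL: fail if the file already exists, so no other
  -- user can have pre-created it; 0o600 makes it owner-only.
  fd <- openFd path WriteOnly (Just 0o600)
                defaultFileFlags { exclusive = True }
  fdToHandle fd
```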


Re: permissions api in directory package is useless (and possibly harmful)

2009-01-28 Thread Duncan Coutts
On Wed, 2009-01-28 at 14:57 +0100, Johan Tibell wrote:
 On Wed, Jan 28, 2009 at 2:13 PM, Duncan Coutts
 duncan.cou...@worc.ox.ac.uk wrote:
  We need to think about a new better permissions api for the
  System.Directory module. The current api and also the implementation are
  at best useless and possibly harmful.
 
 Perhaps there's something we can learn from the rearchitecture of
 Java's file handling that's happening in NIO 2. They're overhauling
 how files, directories, file systems, links, and metadata are handled.
 They address things such as providing both a lowest common denominator
 layer and platform specific extensions.
 
 See for example
 http://javanio.info/filearea/nioserver/WhatsNewNIO2.pdf starting at
 slide 17.

Yes, I'm sure there are some good ideas to consider there. I looked at
NIO1 and it's got some fairly sensible stuff.

Duncan

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: gcc version to use on sparc/solaris

2009-01-11 Thread Duncan Coutts
On Sun, 2009-01-11 at 14:48 +, Duncan Coutts wrote:

 I built four versions of gcc and used them to build ghc-6.8.3.

While on the subject, the annoyances I bumped into while doing this are:

Discovering that a ghc/gcc combination does not work is not obvious. The
first symptom is ./configure failing to find the path to the top of the
build tree. Of course I realised this happened because we build a
Haskell pwd program during configure, but it will confuse other users. It
would be nice if, after detecting the location and version of
ghc, ./configure would explicitly test that ghc to make sure it can
compile, link and run a hello-world program. Each of these stages can
fail. On different occasions I've managed to get all three stages to
fail independently, so it would be nice if each were checked explicitly.
The compile phase can fail if the ghc/gcc combo is bad. The link phase
can fail if ghc cannot find libgmp/libreadline/libwhatever. The run
phase can fail if the ghc/gcc combo generated bad code, or if libgmp
is not on the runtime linker path but was on the static linker path.

It's rather tricky to configure ghc to use libraries from a non-standard
location, one that is not on the standard linker link-time or runtime
paths. See http://hackage.haskell.org/trac/ghc/ticket/2933

It's also tricky to configure ghc to use a gcc from a non-standard
location. The ./configure --with-gcc flag works for the gcc used at build
time, but the installed ghc still uses the gcc from the $PATH.


Duncan



Re: gcc version to use on sparc/solaris

2009-01-11 Thread Duncan Coutts
On Sun, 2009-01-11 at 10:29 -0500, Brandon S. Allbery KF8NH wrote:
 On 2009 Jan 11, at 9:48, Duncan Coutts wrote:
  On Fri, 2009-01-02 at 21:06 +1100, Ben Lippmeier wrote:
  I'm running into some huge compile times that I'm hoping someone will
  have some suggestions about. When compiling Parser.hs the  
  intermediate
  .hc file is 4MB big, and is taking GCC 4.2.1 more than 2 hours to get
  through.
 
  Here is what I've discovered...
 
  I built four versions of gcc and used them to build ghc-6.8.3. I
  selected the last point release of the last four major gcc releases:
  gcc-4.0.4
  gcc-4.1.2
  gcc-4.2.4
  gcc-4.3.2
 
  Summary: gcc-4.0.4 or gcc-4.1.2 seems to be the best choice at the
  moment for ghc on sparc/solaris.
 
 FWIW I've built 6.8.2 with gcc-4.2.2; it was slow but it built and the
 testsuite didn't look too horrible.

Yes, I built ghc-6.8.3 with gcc-4.2.4 and it builds OK; it just takes
two days to do so. So it's not really usable for ghc development. It may
be fine for using ghc, but not for hacking on ghc.

In future this should not matter, once we get the -fasm route working:
then we'll only need gcc to build the rts and to assemble things, not to
compile massive .hc files.

Duncan



Re: Build system idea

2009-01-10 Thread Duncan Coutts
Just cleaning out my inbox and realised I meant to reply to this about 4
months ago :-)

On Thu, 2008-09-04 at 23:15 -0700, Iavor Diatchki wrote:

 On Thu, Sep 4, 2008 at 1:30 PM, Duncan Coutts
  Packages are not supposed to expose different APIs with different
 flags
  so I don't think that's right. Under that assumption cabal-install
 can
  in principle resolve everything fine. I'm not claiming the current
  resolution algorithm is very clever when it comes to picking flags
  (though it should always pick ones that give an overall valid
 solution)
  but there is certainly scope for a cleverer one. Also, the user can
  always specify what features they want, which is what systems like
  Gentoo do.
 
  Do you have any specific test cases where the current algorithm is less
  than ideal? It'd be useful to report those for the next time someone
  hacks on the resolver.
 
 The examples that I was thinking of arise when libraries can provide
 conditional functionality, depending on what is already installed on
 the system, a kind of co-dependency.  [...]
 
 I guess you could say that we structured the library wrong---perhaps
 we should have had a core package that only provides manual parsing
 (no external libraries required), and then have separate packages
 for each of the parsers that use a different parsing combinator
 library.
 
 Conceptually, this might be better, but in practice it seems like a
 bit of a pain---each parser is a single module, but it would need a
 whole separate directory, with a separate cabal file, license, and a
 setup script, all of which would be almost copies of each other.

Right, I admit it might be handy. Unfortunately we could not translate
such packages into other packaging systems because I don't know of any
standard native packaging systems that allow such co-dependencies. They
have to be translated into multiple packages.

If we did support such conditional stuff it would have to be explicit to
the package manager because otherwise choices about install order would
change the exposed functionality (indeed it might not even be stable /
globally solvable).

In particular, I've no idea what we should do about instances, where we'd
like to provide an instance for a class defined in another package that
we do not directly need (except to be able to provide the instance).

If we did not have the constraint of wanting to generate native packages
then there are various more sophisticated things we could do, but
generating native packages is really quite important to our plans for
world domination.

Duncan


