Re: Enabling TypeHoles by default

2014-01-14 Thread Duncan Coutts
On Tue, 2014-01-14 at 17:44 +0100, Johan Tibell wrote:
 I can make another cabal release if needed, if someone submits a pull
 request with the right fix (i.e. add TypedHoles with TypeHoles as a
 synonym.)

Thanks Johan, or I'm happy to do it.

Duncan

 On Tue, Jan 14, 2014 at 5:33 PM, Austin Seipp aus...@well-typed.com wrote:
 
  At the very least, Type(d)Holes would never appear explicitly since it
  would be enabled by default. But it might be turned off (but I don't
  know who would do that for the most part.) Cabal at least might still
  need an update.
 
  In any case, Herbert basically summed it up: the time window is kind
  of close, and we would need to re-release/redeploy a few things most
  likely. I really think it mostly depends on the Cabal team and what
  their priorities are. I've CC'd Duncan and Johan for their opinions.
 
  On Tue, Jan 14, 2014 at 10:27 AM, Herbert Valerio Riedel h...@gnu.org
  wrote:
   Hi,
  
   On 2014-01-14 at 17:14:51 +0100, David Luposchainsky wrote:
   On 14.01.2014 17:07, Austin Seipp wrote:
   We probably won't change the name right now however. It's already
   been put into Cabal (as a recognized extension,) so the name has
   propagated a slight bit. We can however give it a new name and
   deprecate the old -XTypeHoles in the future. Or, we could change
   it, but I'm afraid it's probably a bit too late in the cycle for
   other devs to change.
  
   Removing a name later on is more time-consuming, with or without
   deprecation. People get used to the wrong name and stop caring, but
   I can already picture the "type holes are really typed holes"
   discussions on IRC. I'm strongly in favour of introducing the new name
   (and the deprecation for the synonym) as early as possible. This
   change should not be very extensive anyway, so why not slip it in?
  
   Well, as Austin hinted at, this would also require a Cabal-1.18.x
   release in time for the final 7.8, and a recompile of Hackage to pick it
   up so that people can start using the new 'TypedHoles' token in their
   .cabal files... so there's a bit of coordination required to make this
   happen in a timely manner... Or put differently, somebody has to care
   enough to invest some time and pull this through :-)
  
   Cheers,
 hvr
  
 
 
 
  --
  Regards,
 
  Austin Seipp, Haskell Consultant
  Well-Typed LLP, http://www.well-typed.com/
 


-- 
Duncan Coutts, Haskell Consultant
Well-Typed LLP, http://www.well-typed.com/

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Hackage 2 switchover imminent

2013-09-26 Thread Duncan Coutts
On 24 September 2013 22:38, Duncan Coutts dun...@well-typed.com wrote:
 Hi everyone,

 Having been running the Hackage 2 alpha & beta for several months, we
 intend to do the final switchover tomorrow (Wednesday 25th Sept).

 This will involve disabling uploads to the old server for a few hours.
 If all goes well we will switch the DNS over. If anything goes wrong
 we will just re-enable the old server.

The DNS has now been switched over. It should propagate for everyone
within an hour or so.

Uploads are open on the new server.


 If you run into problems after the switch there are site
 administrators you can contact and there is an issue tracker. Please
 report bugs in the site issue tracker:

 https://github.com/haskell/hackage-server/issues

 For issues with accounts or permissions please contact the
 administrators by email at ad...@hackage.haskell.org


-- 
Duncan Coutts, Haskell Consultant
Well-Typed LLP, http://www.well-typed.com/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hackage 2 switchover imminent

2013-09-25 Thread Duncan Coutts
On 24 September 2013 22:38, Duncan Coutts dun...@well-typed.com wrote:

 For issues with accounts or permissions please contact the
 administrators by email at adm...@hackage.haskell.org

Sorry, that should be: ad...@hackage.haskell.org

I.e. singular "admin", not plural. I got myself confused because there are
several admins.

-- 
Duncan Coutts, Haskell Consultant
Well-Typed LLP, http://www.well-typed.com/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Hackage 2 switchover imminent

2013-09-24 Thread Duncan Coutts
Hi everyone,

Having been running the Hackage 2 alpha & beta for several months, we
intend to do the final switchover tomorrow (Wednesday 25th Sept).

This will involve disabling uploads to the old server for a few hours.
If all goes well we will switch the DNS over. If anything goes wrong
we will just re-enable the old server.

After the switch, everyone with a hackage account will need to do a
one-time self-service account upgrade. A small handful of hackage
early adopters will need assistance from an administrator to upgrade
their accounts. We will email these people individually with
instructions.

If you wish to help or keep an eye on the switchover process then you
are welcome to join us on the #hackage IRC channel on Freenode.

If you run into problems after the switch there are site
administrators you can contact and there is an issue tracker. Please
report bugs in the site issue tracker:

https://github.com/haskell/hackage-server/issues

For issues with accounts or permissions please contact the
administrators by email at adm...@hackage.haskell.org

-- 
Duncan Coutts, Haskell Consultant
Well-Typed LLP, http://www.well-typed.com/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Which builder to choose?

2013-09-20 Thread Duncan Coutts
On Fri, 2013-09-20 at 14:57 +0200, Alejandro Serrano Mena wrote:
 Hi,
 I'm looking at the packages blaze-builder and bytestring, and both provide
 builders for ByteString.
 Which one should I use? In which situations is one more convenient than the
 other?

I'd say the usual answer here would be bytestring. Both implementations
are by Simon Meier, but the bytestring one is the later one, with various
refinements and more API review. It is intended to replace the
blaze-builder one, and it is in the Haskell Platform.

The blaze-builder one still has some extra special-case functionality
such as for html encoding. I *think* (but not 100% sure) that Simon's
plan is to make blaze-builder re-export the bytestring builder and just
add the few extra functions on top.
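
For anyone wondering what the bytestring builder looks like in use, here
is a small sketch (my own illustrative example, assuming bytestring >=
0.10, which provides Data.ByteString.Builder):

  import qualified Data.ByteString.Builder as B
  import qualified Data.ByteString.Lazy    as BL
  import           Data.Monoid             ((<>))

  -- Assemble output from small pieces; the builder handles chunk
  -- filling and we pay for a single conversion at the end.
  greeting :: String -> BL.ByteString
  greeting name = B.toLazyByteString $
       B.stringUtf8 "Hello, "
    <> B.stringUtf8 name
    <> B.charUtf8 '!'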


Duncan
(bytestring co-maintainer)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [Haskell] Hackage 2 now available for beta testing

2013-09-10 Thread Duncan Coutts
On Tue, 2013-09-10 at 12:10 +0100, Ross Paterson wrote:
 On Mon, Sep 09, 2013 at 07:23:59PM +0100, Duncan Coutts wrote:
  Well-Typed and the Industrial Haskell Group (IHG) are very pleased to
  announce that Hackage 2 is now available for public beta testing. The
  plan is to do the final switchover in late September, to coincide
  with ICFP.
  
  http://beta.hackage.haskell.org/
 
 What's the story with haddock documentation?  I see that some packages
 have docs imported from the old server, some have newly generated docs,
 and some have none, but no indication whether a bot has tried to build
 it or not. 

Right, for old docs we imported them (building the really old packages is
rather tricky). The doc builder (hackage-build) is a client included in
the hackage-server package.

The doc builder does keep track of which packages it cannot build. That
information is reported via the build report mechanism, so we can
actually end up with many reports (from different clients) that a
package failed to build. Currently we do not present any of the build
report info on the site. While we can link to the raw build results,
what we really need is a way to digest the build reports and turn them
into useful information.

 There's mention of maintainer uploads of docs as a fallback,
 but I couldn't find where one would do that.  (It would also need to
 document the flags needed to get the links right.)

It's not yet well documented, but one can figure it out from the API
page:

http://beta.hackage.haskell.org/api#documentation-core

/package/:package/docs
  * GET: tar -- Download documentation
  * PUT: tar -- Upload documentation

That is, you can currently upload the doc tarball using a client like
curl -X PUT.

Yes, we'd need documentation to tell maintainers how to get the links
set up right.
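
For the impatient, the upload looks roughly like this (illustrative only;
the exact content-type header and authentication details may differ, and
foo-1.0 is a made-up package):

  curl -X PUT \
       -u myhackageuser \
       -H 'Content-Type: application/x-tar' \
       --data-binary @foo-1.0-docs.tar \
       http://beta.hackage.haskell.org/package/foo-1.0/docs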

-- 
Duncan Coutts, Haskell Consultant
Well-Typed LLP, http://www.well-typed.com/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell] Hackage 2 now available for beta testing

2013-09-09 Thread Duncan Coutts
Well-Typed and the Industrial Haskell Group (IHG) are very pleased to
announce that Hackage 2 is now available for public beta testing. The
plan is to do the final switchover in late September, to coincide
with ICFP.

http://beta.hackage.haskell.org/

Read on for details of how to help with the public beta, an overview
of the new features and what the IHG has been doing to help.


Support from the Industrial Haskell Group
=========================================

The IHG is a consortium of companies that rely on Haskell. The IHG
members have funded the effort to get Hackage 2 up to feature parity
and get it ready for the switchover. The IHG funded this effort because
while the volunteer effort got us the first 90% of the way there
(including adding a number of new features) there was still the last
90% to do to get it production ready.

The IHG members decided to fund Hackage 2 not just because they are
good citizens, but out of enlightened self-interest. Hackage has over
5000 packages written by over 1000 people -- including the world's best
Haskell developers. This is a massive resource. The IHG members
recognise that improvements to the tools and infrastructure that the
community uses helps the community to produce more and better code.
This is a benefit to everyone in the community -- including the
commercial users.

The IHG is keen to increase its membership so that more resources can
be dedicated to improving the Haskell development platform. If your
organisation relies on Haskell in some way then you may want to
consider joining. See the IHG website for more details or contact
i...@industry.haskell.org.

[IHG website]: http://industry.haskell.org/

Despite the help of the IHG in getting to this point, Hackage is a
community project, and its success depends on the community maintaining
and further improving the new server. The code is now on github so it
is easier to contribute, and now that the server is live there is more
immediate gratification for volunteers contributing fixes and new
features.


Public beta
===========

We would like to encourage you to take part in the public beta testing.
We need help both from package authors as well as other users of the
site.

Please report any problems you find using the issue tracker on the
hackage-server github site.

[issue tracker]: https://github.com/haskell/hackage-server/issues
[github site]:   https://github.com/haskell/hackage-server

We are mirroring packages from the old server (every 30min) so it is
suitable to use as your main hackage server with some caveats: we are
allowing package authors to upload (as well as doing the mirroring) so
you may find a slightly different set of packages on this server.

If you are a package author then you are welcome to poke about and
upload packages. We have imported user accounts from the old server
(except for a small number of early adopters of the original server who
will need to contact an administrator). Note that before we do the
final switchover we will *delete everything* from the beta instance
and do a fresh import from the old hackage server.


Configuring cabal-install
-------------------------

Edit your ~/.cabal/config file. Comment-out the existing remote-repo
line near the top of the file and add in a new one like this:

--remote-repo: hackage.haskell.org:http://hackage.haskell.org/packages/archive

remote-repo: beta.hackage.haskell.org:http://beta.hackage.haskell.org/


New features
============

Though our main priority has been feature parity so that we can switch
over, volunteers have contributed several new features, including
better package search, a new site theme, improved security, the ability
to fix package dependencies after a release, changelogs, and a
REST-style interface.

See the beta site for more details on these new features, plus details
of other features that are partially implemented or are in need of
improvement.

[new features]: http://beta.hackage.haskell.org/new-features


Contributing to the development
===============================

The code is on github and we welcome pull requests.

There are open tickets describing existing bugs and features that we
want or that are in need of improvement. Help on any of these would be
greatly appreciated.

There is some developer and user documentation on the github wiki,
including a quick guide to getting your own server instance up and
running.

You can ask questions on the cabal-devel mailing list or on IRC in
the #hackage channel on freenode.

[code]:https://github.com/haskell/hackage-server
[github wiki]: https://github.com/haskell/hackage-server/wiki
[cabal-devel]: http://www.haskell.org/mailman/listinfo/cabal-devel


-- 
Duncan Coutts, Haskell Consultant
Well-Typed LLP, http://www.well-typed.com/


___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


[Haskell-cafe] Hackage 2 now available for beta testing

2013-09-09 Thread Duncan Coutts
Well-Typed and the Industrial Haskell Group (IHG) are very pleased to
announce that Hackage 2 is now available for public beta testing. The
plan is to do the final switchover in late September, to coincide
with ICFP.

http://beta.hackage.haskell.org/

Read on for details of how to help with the public beta, an overview
of the new features and what the IHG has been doing to help.


Support from the Industrial Haskell Group
=========================================

The IHG is a consortium of companies that rely on Haskell. The IHG
members have funded the effort to get Hackage 2 up to feature parity
and get it ready for the switchover. The IHG funded this effort because
while the volunteer effort got us the first 90% of the way there
(including adding a number of new features) there was still the last
90% to do to get it production ready.

The IHG members decided to fund Hackage 2 not just because they are
good citizens, but out of enlightened self-interest. Hackage has over
5000 packages written by over 1000 people -- including the world's best
Haskell developers. This is a massive resource. The IHG members
recognise that improvements to the tools and infrastructure that the
community uses helps the community to produce more and better code.
This is a benefit to everyone in the community -- including the
commercial users.

The IHG is keen to increase its membership so that more resources can
be dedicated to improving the Haskell development platform. If your
organisation relies on Haskell in some way then you may want to
consider joining. See the IHG website for more details or contact
i...@industry.haskell.org.

[IHG website]: http://industry.haskell.org/

Despite the help of the IHG in getting to this point, Hackage is a
community project, and its success depends on the community maintaining
and further improving the new server. The code is now on github so it
is easier to contribute, and now that the server is live there is more
immediate gratification for volunteers contributing fixes and new
features.


Public beta
===========

We would like to encourage you to take part in the public beta testing.
We need help both from package authors as well as other users of the
site.

Please report any problems you find using the issue tracker on the
hackage-server github site.

[issue tracker]: https://github.com/haskell/hackage-server/issues
[github site]:   https://github.com/haskell/hackage-server

We are mirroring packages from the old server (every 30min) so it is
suitable to use as your main hackage server with some caveats: we are
allowing package authors to upload (as well as doing the mirroring) so
you may find a slightly different set of packages on this server.

If you are a package author then you are welcome to poke about and
upload packages. We have imported user accounts from the old server
(except for a small number of early adopters of the original server who
will need to contact an administrator). Note that before we do the
final switchover we will *delete everything* from the beta instance
and do a fresh import from the old hackage server.


Configuring cabal-install
-------------------------

Edit your ~/.cabal/config file. Comment-out the existing remote-repo
line near the top of the file and add in a new one like this:

--remote-repo: hackage.haskell.org:http://hackage.haskell.org/packages/archive

remote-repo: beta.hackage.haskell.org:http://beta.hackage.haskell.org/


New features
============

Though our main priority has been feature parity so that we can switch
over, volunteers have contributed several new features, including
better package search, a new site theme, improved security, the ability
to fix package dependencies after a release, changelogs, and a
REST-style interface.

See the beta site for more details on these new features, plus details
of other features that are partially implemented or are in need of
improvement.

[new features]: http://beta.hackage.haskell.org/new-features


Contributing to the development
===============================

The code is on github and we welcome pull requests.

There are open tickets describing existing bugs and features that we
want or that are in need of improvement. Help on any of these would be
greatly appreciated.

There is some developer and user documentation on the github wiki,
including a quick guide to getting your own server instance up and
running.

You can ask questions on the cabal-devel mailing list or on IRC in
the #hackage channel on freenode.

[code]:https://github.com/haskell/hackage-server
[github wiki]: https://github.com/haskell/hackage-server/wiki
[cabal-devel]: http://www.haskell.org/mailman/listinfo/cabal-devel


-- 
Duncan Coutts, Haskell Consultant
Well-Typed LLP, http://www.well-typed.com/


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hackage upload problem

2013-08-07 Thread Duncan Coutts
On Wed, 2013-08-07 at 14:32 +0200, Thomas Hallgren wrote:
 Hi,
 
 I get the following error when I try to upload gf-3.5.tar.gz [1] to Hackage.
 
   400 Error in upload
 could not extract gf-3.5 directory from gf-3.5.tar.gz
 
 
 I get the same error when I try to Check the previous version, gf-3.4.tar.gz
 [2], which was uploaded without problems 6 months ago [3], so it seems that
 something has changed on the server.
 
 Does anyone know what it could be that is going wrong? Is there a way to get a
 more informative error message?

You're welcome to try uploading it to the new-hackage.haskell.org and
see if that gives a more informative error message. If it's a poor error
message there then we can fix that.

Duncan


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Contracts

2013-08-07 Thread Duncan Coutts
On Mon, 2013-08-05 at 23:13 -0400, Julian Arni wrote:
 I've come across interesting papers - and old, sometimes inaccessible,
 repositories - related to contracts in Haskell (Dana Xu and SPJ's papers;
 haskellcontracts and the Programatica Project). And then some newer,
 apparently not quite mature projects (Dan Rosen's repositories, for
 instance). Is there anything in between - mature, but not abandoned?

We did an implementation of the contracts idea for one of our customers
which is available (under AGPL or commercial license) on github:

https://github.com/netrium/Netrium

-- 
Duncan Coutts, Haskell Consultant
Well-Typed LLP, http://www.well-typed.com/


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Contracts

2013-08-07 Thread Duncan Coutts
On Wed, 2013-08-07 at 17:48 +0100, Colin Adams wrote:
 I thought the OP was talking about software contracts (as in Eiffel /
 Design By Contract ).

Oh oops, you're right. SPJ has too many papers about contracts! :-)

 On 7 August 2013 16:15, Duncan Coutts dun...@well-typed.com wrote:
 
  On Mon, 2013-08-05 at 23:13 -0400, Julian Arni wrote:
   I've come across interesting papers - and old, sometimes inaccessible,
   repositories - related to contracts in Haskell (Dana Xu and SPJ's papers;
   haskellcontracts and the Programatica Project). And then some newer,
   apparently not quite mature projects (Dan Rosen's repositories, for
   instance). Is there anything in between - mature, but not abandoned?
 
  We did an implementation of the contracts idea for one of our customers
  which is available (under AGPL or commercial license) on github:
 
  https://github.com/netrium/Netrium
 
  --
  Duncan Coutts, Haskell Consultant
  Well-Typed LLP, http://www.well-typed.com/
 
 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 


-- 
Duncan Coutts, Haskell Consultant
Well-Typed LLP, http://www.well-typed.com/


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell] Well-Typed are hiring: Haskell developer

2013-07-11 Thread Duncan Coutts

Fellow Haskellers,

We are looking to hire a Haskell expert to work with us at Well-Typed as
a Haskell developer. This is an exciting opportunity for someone who is
passionate about Haskell and who is keen to improve and promote Haskell
in a professional context.

The role is quite general and could cover any of the projects and
activities that we are involved in as a company. The tasks may involve:

  * working on the Haskell compiler, libraries and tools;
  * Haskell application development;
  * working directly with clients to solve their problems;
  * teaching Haskell, and developing training materials.

At the moment, we are particularly hoping to find someone with an
interest in supporting the development and maintenance of GHC.
Therefore, some knowledge or interest in compiler internals, operating
systems, the foreign function interface (FFI), and/or deployment issues
would be welcome.

Well-Typed has a variety of clients. For some we do proprietary Haskell
development and consulting. For others, much of the work involves
open-source development and cooperating with the rest of the Haskell
community: the commercial, open-source and academic users.

Our ideal candidate has excellent knowledge of Haskell, whether from
industry, academia, or personal interest. Familiarity with other
languages, low-level programming, and good software engineering
practices are also useful. Good organisation and ability to manage your
own time, and reliably meet deadlines, is important. You should also
have good communication skills. Being interested or having experience in
teaching Haskell (or other technical topics) is a bonus. Experience of
consulting, or running a business, is also a bonus. You are likely to
have a bachelor's degree or higher in computer science or a related
field, although this isn't a requirement.

The offer is initially for a one-year full time contract. We are also
happy to receive applications for part-time work. The annual salary is
from GBP 34,800 or pro rata for part-time or flexible work. We also
operate a bonus scheme. We offer flexible hours and work from home.
Living in England is not required. We may be able to offer either
employment or sub-contracting, depending on the jurisdiction in which
you live.

If you are interested, please apply via i...@well-typed.com. Tell us why
you are interested and why you would be a good fit for the job, and
attach your CV. Please also indicate how soon you might be able to
start. We are more than happy to answer informal enquiries. Contact
Duncan Coutts, Ian Lynagh or Andres Löh for further information, either
by email or IRC.

To ensure we can properly consider your application, please get it to us
by July 25th, 2013, though we may be able to consider applications
received later.

-- 
Duncan Coutts, Haskell Consultant
Well-Typed LLP, http://www.well-typed.com/



___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


[Haskell-cafe] Well-Typed are hiring: Haskell developer

2013-07-11 Thread Duncan Coutts
Fellow Haskellers,

We are looking to hire a Haskell expert to work with us at Well-Typed as
a Haskell developer. This is an exciting opportunity for someone who is
passionate about Haskell and who is keen to improve and promote Haskell
in a professional context.

The role is quite general and could cover any of the projects and
activities that we are involved in as a company. The tasks may involve:

  * working on the Haskell compiler, libraries and tools;
  * Haskell application development;
  * working directly with clients to solve their problems;
  * teaching Haskell, and developing training materials.

At the moment, we are particularly hoping to find someone with an
interest in supporting the development and maintenance of GHC.
Therefore, some knowledge or interest in compiler internals, operating
systems, the foreign function interface (FFI), and/or deployment issues
would be welcome.

Well-Typed has a variety of clients. For some we do proprietary Haskell
development and consulting. For others, much of the work involves
open-source development and cooperating with the rest of the Haskell
community: the commercial, open-source and academic users.

Our ideal candidate has excellent knowledge of Haskell, whether from
industry, academia, or personal interest. Familiarity with other
languages, low-level programming, and good software engineering
practices are also useful. Good organisation and ability to manage your
own time, and reliably meet deadlines, is important. You should also
have good communication skills. Being interested or having experience in
teaching Haskell (or other technical topics) is a bonus. Experience of
consulting, or running a business, is also a bonus. You are likely to
have a bachelor's degree or higher in computer science or a related
field, although this isn't a requirement.

The offer is initially for a one-year full time contract. We are also
happy to receive applications for part-time work. The annual salary is
from GBP 34,800 or pro rata for part-time or flexible work. We also
operate a bonus scheme. We offer flexible hours and work from home.
Living in England is not required. We may be able to offer either
employment or sub-contracting, depending on the jurisdiction in which
you live.

If you are interested, please apply via i...@well-typed.com. Tell us why
you are interested and why you would be a good fit for the job, and
attach your CV. Please also indicate how soon you might be able to
start. We are more than happy to answer informal enquiries. Contact
Duncan Coutts, Ian Lynagh or Andres Löh for further information, either
by email or IRC.

To ensure we can properly consider your application, please get it to us
by July 25th, 2013, though we may be able to consider applications
received later.

-- 
Duncan Coutts, Haskell Consultant
Well-Typed LLP, http://www.well-typed.com/


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Stream fusion and span/break/group/init/tails

2013-04-29 Thread Duncan Coutts
On Thu, 2013-04-25 at 00:52 +0200, Gábor Lehel wrote:
 On Wed, Apr 24, 2013 at 7:56 PM, Bryan O'Sullivan b...@serpentine.com wrote:
 
  On Wed, Apr 24, 2013 at 10:47 AM, Duncan Coutts 
  duncan.cou...@googlemail.com wrote:
 
  I address it briefly in my thesis [1], Section 4.8.2. I think it's a
  fundamental limitation of stream fusion.
 
 
  See also concat, where the naive fusion-based implementation has quadratic
  performance:
 
  concat :: [Text] -> Text
  concat txts = unstream (Stream.concat (List.map stream txts))
 
  I've never figured out how to implement this with sensible characteristics
  within the fusion framework.
 
 
 If you could solve concat, might that also lead to being able to do
 without the Skip constructor?

Dan is right, we still need Skip. My suggested solution to the
concatMap problem is also mostly independent of the skip issue.

You shouldn't think of skip as being a hack. It's not. It's how we
express a more general class of producers in a way that is productive. 

You can think of a stream as being a little state machine and sometimes
the state machine needs to be able to make transitions without producing
any output. One solution to that is to hide those transitions (by
running the state machine until it does produce something, ie using
recursion/loops) and the other is to expose the transition as a skip.
The skip approach where we don't use recursion/loops allows us to do the
various transformations we need to be able to effectively optimise the
whole thing.
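
To make that concrete, here is a minimal sketch of the datatypes in the
usual stream fusion formulation (illustrative, not the library code):

  {-# LANGUAGE ExistentialQuantification #-}

  data Step s a = Done
                | Skip    s     -- state transition, no output produced
                | Yield a s     -- produce one element and a new state

  data Stream a = forall s. Stream (s -> Step s a) s

  -- filter shows why Skip matters: when the predicate rejects an element
  -- we must advance the state without producing output, and without
  -- resorting to a recursive loop inside the step function.
  filterS :: (a -> Bool) -> Stream a -> Stream a
  filterS p (Stream next s0) = Stream next' s0
    where
      next' s = case next s of
        Done       -> Done
        Skip s'    -> Skip s'
        Yield x s'
          | p x       -> Yield x s'
          | otherwise -> Skip s'  -- the non-productive transition, made explicit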

If you're interested in this stuff, you can look at the section of my
thesis that goes on about this state machine perspective on things. I
think it's quite a useful way to understand it (and understand how we
optimise stream functions by composing these state machines). More
generally, that chapter explains why stream fusion should actually be an
optimisation.

As for Step and the list base functor: yes, absolutely. And adding skip
does make things harder to prove, because it adds more junk values.
The other major chapter of my thesis explains why it's all still true,
even when we have skip, or rather how we have to do things carefully so
that it does still remain valid.

Duncan


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Stream fusion and span/break/group/init/tails

2013-04-29 Thread Duncan Coutts
On Mon, 2013-04-29 at 20:19 +0200, Gábor Lehel wrote:

 Thanks for the explanation. I looked at your thesis previously, but only
 read through a couple of sections (including the one about concatMap). I
 might go through the state machine parts as well now that I know the
 significance/relevance.
 
 The thing in particular that was motivating me is that if it weren't for
 Skip, it seems that to some extent (I haven't had time to investigate
 precisely what extent) you could write a stream fusion framework in a
 datatype-generic way, parameterized over the base functor. But it wasn't
 obvious to me how (or whether) you would translate Skip. But maybe the
 state machine perspective will provide some insight into that. I'll think
 about it.

Oh I think you can write it in a data-type generic way.

If your datatype is described by a base functor F, then the skip version
is a simple transformation on that functor.

F_skip a = F a + a

And then the stream type for F is  nu a. F_skip a

See section 3.6.

In most of my theory chapter I write it in this style, rather than using
the list functor specifically.
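
In code, the construction above might be rendered like this (my own
sketch of the idea, not text from the thesis; the names SkipF, ListF and
so on are made up for illustration):

  {-# LANGUAGE DeriveFunctor, ExistentialQuantification #-}

  -- The list base functor: F a = 1 + Element * a
  data ListF e a = NilF | ConsF e a
    deriving Functor

  -- The skip transformation on an arbitrary base functor: F_skip a = F a + a
  data SkipF f a = Step (f a) | Skip a
    deriving Functor

  -- A stream over base functor f, read coalgebraically (nu a. F_skip a):
  -- a hidden state plus a step function producing one SkipF layer at a time.
  data Stream f = forall s. Stream (s -> SkipF f s) s

  -- Converting a list to and from such a stream, as a sanity check.
  streamList :: [e] -> Stream (ListF e)
  streamList = Stream next
    where
      next []     = Step NilF
      next (x:xs) = Step (ConsF x xs)

  unstreamList :: Stream (ListF e) -> [e]
  unstreamList (Stream next s0) = go s0
    where
      go s = case next s of
        Skip s'           -> go s'    -- no output for this transition
        Step NilF         -> []
        Step (ConsF x s') -> x : go s'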

Duncan


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell] Haskell training in San Francisco Bay Area and New York

2013-04-25 Thread Duncan Coutts
Well-Typed are offering Haskell courses in the San Francisco Bay Area
and New York in early June.

They are for professional developers who want to learn Haskell or
improve their skills. There is a 2-day introductory course and a 2-day
advanced course.

Full course and registration details:
http://www.well-typed.com/services_training

Well-Typed are running these courses in partnership with FP Complete and
Skills Matter.

Locations, dates


San Francisco Bay Area

* Introductory Course: June 4-5th, 2013
* Advanced Course: June 6-7th, 2013


New York

* Introductory Course: June 10-11th, 2013
* Advanced Course: June 12-13th, 2013
* Early bird discount before April 29th


-- 
Duncan Coutts, Haskell Consultant
Well-Typed LLP, http://www.well-typed.com/


___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


[Haskell-cafe] Haskell training in San Francisco Bay Area and New York

2013-04-25 Thread Duncan Coutts
Well-Typed are offering Haskell courses in the San Francisco Bay Area
and New York in early June.

They are for professional developers who want to learn Haskell or
improve their skills. There is a 2-day introductory course and a 2-day
advanced course.

Full course and registration details:
http://www.well-typed.com/services_training

Well-Typed are running these courses in partnership with FP Complete and
Skills Matter.

Locations, dates


San Francisco Bay Area

* Introductory Course: June 4-5th, 2013
* Advanced Course: June 6-7th, 2013


New York

* Introductory Course: June 10-11th, 2013
* Advanced Course: June 12-13th, 2013
* Early bird discount before April 29th


-- 
Duncan Coutts, Haskell Consultant
Well-Typed LLP, http://www.well-typed.com/


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Stream fusion and span/break/group/init/tails

2013-04-24 Thread Duncan Coutts
On Sun, 2013-04-21 at 18:07 -0700, Edward Z. Yang wrote:
 Hello all, (cc'd stream fusion paper authors)
 
 I noticed that the current implementation of stream fusion does
 not support multiple-return stream combinators, e.g.
 break :: (a -> Bool) -> [a] -> ([a], [a]).  I thought a little
 bit about how might one go about implement this, but the problem
 seems nontrivial. (One possibility is to extend the definition
 of Step to support multiple return, but the details are a mess!)
 Nor, as far as I can tell, does the paper give any treatment of
 the subject.  Has anyone thought about this subject in some detail?

I address it briefly in my thesis [1], Section 4.8.2. I think it's a
fundamental limitation of stream fusion.

It looks like fold and unfold fusion systems have dual limitations:
fold-based fusion cannot handle zip style functions, while unfold-based
fusion cannot handle unzip style functions. That is fold-based cannot
consume multiple inputs, while unfold-based cannot produce multiple
outputs.

I'll be interested to see in more detail the approach that Ben is
talking about. As Ben says, intuitively the problem is that when you've
got multiple outputs so you need to make sure that someone is consuming
them and that that consumption is appropriately synchronised so that you
don't have to buffer (buffering would almost certainly eliminate the
gains from fusion). That might be possible if ultimately the multiple
outputs are combined again in some way, so that overall you still have a
single consumer, that can be turned into a single lazy or eager loop.

[1]: http://code.haskell.org/~duncan/thesis.pdf

Duncan


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Stream fusion and span/break/group/init/tails

2013-04-24 Thread Duncan Coutts
On Wed, 2013-04-24 at 10:56 -0700, Bryan O'Sullivan wrote:
 On Wed, Apr 24, 2013 at 10:47 AM, Duncan Coutts 
 duncan.cou...@googlemail.com wrote:
 
  I address it briefly in my thesis [1], Section 4.8.2. I think it's a
  fundamental limitation of stream fusion.
 
 
 See also concat, where the naive fusion-based implementation has quadratic
 performance:
 
 concat :: [Text] -> Text
 concat txts = unstream (Stream.concat (List.map stream txts))
 
 I've never figured out how to implement this with sensible characteristics
 within the fusion framework.

Well of course concatMap is another issue. I address that in section
4.8.3 :-)

Summary there is that I don't think it is a fundamental limitation, but
certainly we don't do it properly in practice now. I have a suggestion
in that section for how we might do it.

Duncan


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] unsafeInterleaveST (and IO) is really unsafe

2013-04-18 Thread Duncan Coutts
On Mon, 2013-04-15 at 20:44 +0200, David Sabel wrote:

 A very interesting discussion,  I may add my 2 cents:
 making unsafeInterleaveIO nondeterministic indeed seems to make it safe,
 more or less this was proved in our paper:
 
 http://www.ki.informatik.uni-frankfurt.de/papers/sabel/chf-conservative-lics.pdf
 slides: 
 http://www.ki.informatik.uni-frankfurt.de/persons/sabel/chf-conservative.pdf
 
 there we proposed an extension to Concurrent Haskell which adds a primitive
 
 future :: IO a -> IO a
 
 Roughly speaking future is like unsafeInterleaveIO, but creates a new 
 concurrent thread
 to compute the result of the IO-action interleaved without any fixed order.

That's very interesting to hear. It has always been my intuition that
the right way to understand unsafeInterleaveIO is using a concurrency
semantics (with a demonic scheduler). And whenever this
"unsafeInterleaveIO is unsound" issue comes up, that's the argument I
make to whoever will listen! ;-)

That intuition goes some way to explain why unsafeInterleaveIO is fine
but unsafeInterleaveST is right out: ST is supposed to be deterministic,
but IO can be non-deterministic.

 We have shown that adding this primitive to the functional core language 
 is 'safe' in the sense
 that all program equations of the pure language still hold in the 
 extended language
 (which we call a conservative extension in the above paper)
 
 The used equality is contextual equivalence
 (with may- and a variant of must-convergence in the concurrent case).

Ok.

 We also showed that adding unsafeInterleaveIO (called lazy futures in 
 the paper..)
 - which delays until its result is demanded - breaks this conservativity,
 since the order of evaluation can be observed.

My conjecture is that with a concurrent semantics with a demonic
scheduler, unsafeInterleaveIO is still fine, essentially because the
semantics would not distinguish it from your 'future' primitive. That
said, it might not be such a useful semantics because we often want the
lazy behaviour of a lazy future.
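
As a tiny self-contained illustration of the observable ordering being
discussed (plain GHC, nothing exotic):

  import System.IO.Unsafe (unsafeInterleaveIO)

  main :: IO ()
  main = do
    -- The action is not run here; it is deferred until x is demanded.
    x <- unsafeInterleaveIO (putStrLn "computing x" >> return (42 :: Int))
    putStrLn "before demanding x"
    print x   -- "computing x" appears here, interleaved by demand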

Duncan


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Cabal and cross compilation

2013-01-23 Thread Duncan Coutts
On 23 January 2013 05:41, Nathan Hüsken nathan.hues...@posteo.de wrote:
 Hey,

 I am working on getting ghc to cross compile to android.

 When trying to get haskeline to compile, I want to change the cabal file
 such that it sets a flag when compiling for android.

 For that I changed cabal so that it recognizes android as a OS.
 But cabal seems to get its os information from System.Info.os, and from
 what I can tell this always returns the host os and not the target os.

 Am I getting this right, is cabal unaware of the target os?
 How can we change this?

That's right, currently Cabal only knows about the host OS & arch, not
the target. Adding proper cross-compilation awareness and support to
Cabal will require some hacking in the Cabal library (to pass in the
target platform, toolchain, etc.).
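
Concretely, this is the information Cabal consults today (a one-liner
you can try yourself; it reports the host, never a cross target):

  import qualified System.Info

  main :: IO ()
  main = putStrLn (System.Info.os ++ " / " ++ System.Info.arch)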

Duncan

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Bytestring and GHC 7.6.2

2013-01-13 Thread Duncan Coutts
On 12 January 2013 16:05, Ian Lynagh i...@well-typed.com wrote:
 On Tue, Jan 08, 2013 at 08:10:18PM +, Duncan Coutts wrote:

 Either way, lemme know if this is all fine, and I'll make the 0.10.0.2
 release.

 Looks good, thanks! I've updated the GHC 7.6 repo to match the tag.

Ta muchly!

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] cabal sdist warns about optimization levels

2013-01-13 Thread Duncan Coutts
On 13 January 2013 20:27, Petr P petr@gmail.com wrote:

 to the cabal file. Now cabal sdist complains with:

 'ghc-options: -O2' is rarely needed. Check that it is giving a real
 benefit and not just imposing longer compile times on your users.

 I wonder:

 (1) Is there a way to disable the warning? As the main aim of the
 library is speed, I believe -O2 is appropriate here. And since the code is
 quite short, I'm quite sure the increased compile time won't be noticeable.

No, but you can just ignore it. You clearly have checked and you're
satisfied it's the right thing to do, so it's fine.

You don't need to hit 0 warnings, nobody is going to give you or your
package black marks because of it! :-)

 (2) Why does cabal complain about it at the first place?

There are lots of programs where it makes no measurable difference
except to make compile times longer. To some extent the warning is there to
break the habit of C programmers who always default to -O2. With gcc,
-O2 will almost always be significantly better than -O, but with ghc
that's not the case: -O is the sensible default (almost by definition,
to a first approximation: things that are always a win get put into
-O, things that are sometimes a win and sometimes not go into -O2).

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: In opposition of Functor as super-class of Monad

2012-10-24 Thread Duncan Coutts
On 24 October 2012 11:16, S. Doaitse Swierstra doai...@swierstra.net wrote:
 There are very good reasons for not following this road; indeed everything 
 which is a Monad can also be made an instance of Applicative. But more often 
 than not we want to have a more specific implementation. Because Applicative 
 is less general, there is in general more that you can do with it.

I don't think anyone is suggesting that we force all types that are
both Monad and Applicative to use (<*>) = ap as the implementation.
As you say, that'd be crazy.

The details and differences between the various superclass proposals
are to do with how you provide the explicit instance vs getting the
default.
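
For reference, the generic instance that every Monad admits looks like
this (Wrap is a made-up example type, just to keep the sketch
self-contained):

  import Control.Applicative (Applicative(..))
  import Control.Monad (ap)

  newtype Wrap a = Wrap (Maybe a)

  instance Functor Wrap where
    fmap f (Wrap m) = Wrap (fmap f m)

  -- The generic definitions: pure is return and (<*>) is ap. A library
  -- author may well prefer a hand-written, more specific (<*>), which is
  -- exactly what the superclass proposals have to allow for.
  instance Applicative Wrap where
    pure  = Wrap . Just
    (<*>) = ap

  instance Monad Wrap where
    return = pure
    Wrap Nothing  >>= _ = Wrap Nothing
    Wrap (Just x) >>= f = f x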

The wiki page explains it and links to the other similar proposals:

http://hackage.haskell.org/trac/ghc/wiki/DefaultSuperclassInstances

Duncan

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: [Haskell-cafe] Cabal dependencies

2012-10-06 Thread Duncan Coutts
On 6 October 2012 17:25, José Lopes jose.lo...@ist.utl.pt wrote:
 OK.

 But, wouldn't it be possible for xmobar to use mtl-2.0.1.0 and for parsec to
 use mtl-2.1.1, while xmobar would use this parsec version?
 In this case, I am assuming that mtl-2.0.1.0 and mtl-2.1.1 are considered
 two different libraries.

Possibly, yes, but cabal doesn't know that. It has to make the
conservative assumption that you might use them together and so they'd
better be the same type.

If cabal knew for sure that parsec did not expose types from mtl, then
it'd be fine for it to use parsec built against a different version of
mtl, because there would be no way to end up trying to equate types
from two different package instances.

This is the idea behind private or encapsulated dependencies: we would
declare in .cabal files that our use of some dependency does not leak
out. But to be clear: this feature has not yet been implemented.

But actually in this case I think parsec does expose the fact that it
uses types from mtl. So it actually would not help here.

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANN: cabal-install-1.16.0 (and Cabal-1.16.0.1)

2012-10-03 Thread Duncan Coutts
On 3 October 2012 18:22, José Lopes jose.lo...@ist.utl.pt wrote:
 Hey,

 I have debian 6.0.5 with GHC version 6.12.1 and base version
 4.2.0.0 (although base 3.0.3.2 is also installed).

I've pushed a fix for this. We'll wait a couple days to give a chance
for other issues to be discovered, and then Johan will make a bug-fix
release.

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell] Hackage2 server testing

2012-09-25 Thread Duncan Coutts
On 25 September 2012 12:41, Herbert Valerio Riedel h...@gnu.org wrote:
 Ian Lynagh i...@well-typed.com writes:

 I am pleased to announce that a test instance of the Hackage 2 server is
 now running at
 http://new-hackage.haskell.org/

 The plan is to test the new server for 3 weeks (until Monday 24
 September), before switching over to Hackage 2 for real.

 ...as it's now past monday... what's the status? is the switch-over going to 
 happen now?

Hi Herbert,

Ian has handed the task over to me, so I'll be leading the switchover,
along with the admins we've recruited.

We have a few issues we're still working on, mainly to do with
importing the old data and making the account switchover smooth.

https://github.com/haskell/cabal/issues?labels=hackage2%2Cimportantstate=open

I'll post more details on the cabal-devel mailing list in the coming
days, so you can follow it there.

-- 
Duncan Coutts, Haskell Consultant
Well-Typed LLP, http://www.well-typed.com/

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell-cafe] not enough fusion?

2012-06-25 Thread Duncan Coutts
On 25 June 2012 02:04, Johannes Waldmann waldm...@imn.htwk-leipzig.de wrote:
 Dear all,

 while doing some benchmarking (*)
 I noticed that function  s1  is considerably faster than  s2
 (but I wanted  s2  because it looks more natural)
 (for n = 1,  s1 takes 20 s, s2 takes 13 s; compiled by ghc-7.4.2 -O2)

 s1 :: Int -> Int
 s1 n = sum $ do
        x <- [ 0 .. n-1 ]
        return $ sum $ do
            y <- [ 0 .. n-1 ]
            return $ gcd x y

 s2 :: Int -> Int
 s2 n = sum $ do
      x <- [ 0 .. n-1 ]
      y <- [ 0 .. n-1 ]
      return $ gcd x y

 I was expecting that in both programs,
 all lists will be fused away (are they?)

Almost certainly not.

 so the code generator essentially can produce straightforward
 assembly code (no allocations, no closures, etc.)

Unless it changed recently, sum does not fuse (as it is currently
defined, using the current implementation of foldr/build fusion).
Also, lists built using do notation do not (I think) translate into
instances of foldr and build, only list comprehension syntax.

On the first point: sum is a foldl and the current implementation of
foldr/build fusion does not cope well with foldl. While foldl can be
defined in terms of foldr, the result is lots of closure allocations.
This could in principle be fixed with an arity raising transformation,
and GHC now does have a simple implementation of this transformation,
so it may be possible to get sum as a foldl to fuse. I'm not sure that
anyone has yet tried changing the sum implementation to try to get it
to fuse. It would be an interesting experiment.
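
For concreteness, the encoding in question is the standard one (written
out here just to show where the closures come from):

  -- foldl via foldr: each element contributes a closure of type (b -> b),
  -- and the chain of closures is only collapsed once the initial
  -- accumulator z is supplied at the end, unless the compiler can raise
  -- the arity and eliminate them.
  foldlViaFoldr :: (b -> a -> b) -> b -> [a] -> b
  foldlViaFoldr f z xs = foldr (\x k acc -> k (f acc x)) id xs z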

On the second point: ghc has a special desugaring for list
comprehensions (in -O mode) where it turns them into uses of foldr and
build. On the other hand, do notation desugars into bind and return.
I'm not sure how well the result fuses; it uses: foldr ((++) . k) [].

You can find out, just look at the core. If all goes well then you
should see a single list being built and then consumed by sum.

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] vector operations

2012-05-29 Thread Duncan Coutts
On 29 May 2012 11:49, Evan Laforge qdun...@gmail.com wrote:
 Good question.. I copied both to a file and tried ghc-core, but it
 inlines big chunks of Data.Vector and I can't read it very well, but
 it looks like the answer is no, it still builds the the list of sums.
 I guess the next step is to benchmark and see how busy the gc is on
 each version.

 But my impression was that stream fusion can't handle early aborts,
 which was why I was wondering why Vector lacks a foldAbort type
 function.

Note that foldr allows early abort so that's fine. Also, there's no
fundamental restriction due to stream fusion. Stream fusion can be
used for lazy lists after all and can implement Data.List.foldr just
fine.
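
A tiny list-based illustration of the early-abort point (plain lists
rather than Data.Vector, just to show the shape):

  -- foldr can stop early: if we never use 'rest', the remainder of the
  -- structure is never traversed.
  firstOver :: Int -> [Int] -> Maybe Int
  firstOver limit = foldr step Nothing
    where
      step x rest
        | x > limit = Just x   -- abort here; 'rest' is never forced
        | otherwise = rest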

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: relocatable packages: GHC_PACKAGE_PATH and package.conf

2012-05-28 Thread Duncan Coutts
On 28 May 2012 05:36, Tim Cuthbertson t...@gfxmonk.net wrote:

  - ghc doesn't seem to support ${pkgroot} prefixes. I thought it did,
 but I'm new to this so I may be misunderstanding where they can be
 used.

I thought it did too, since I think I wrote the code for it. I don't
recall exactly which version it got into; it may well have been only
7.2+.

 Additionally, the paths that come out of cabal build have the compiler
 name and version hard coded, e.g. lib/packagename/ghc-7.0.4/*. Is there
 any way to configure how this path is constructed to get rid of the
 ghc-7.0.4 part?

By default, yes, cabal produces absolute packages. It does have
support for relocatable packages on some compiler/platform combos:

http://www.haskell.org/cabal/users-guide/installing-packages.html#prefix-independence

sadly ghc on unix is not one of them because we do not have a reliable
way to find the program location (needed to find data files etc).
Actually more specifically it's not easy and nobody has implemented
it, rather than it being impossible.

So at the moment you could work around it in specific cases by hacking
the package registration info before registering. Do something like:
cabal copy --destdir=...
cabal register --gen-pkg-config=blah.pkg
sed -i -e '...' blah.pkg

Obviously your app/library had better not use the Cabal-provided
functions for finding data files at runtime since that'll get
confused.

If you want a proper solution you'll have to help us implement the
Cabal prefix independence feature for the ghc/unix combo.

Duncan

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: GHC 7.2.2 Distribution.Simple.Program.Ar

2012-05-18 Thread Duncan Coutts
On 18 May 2012 20:20, Joe Buehler as...@cox.net wrote:
 I built GHC 7.2.2 on a LINUX box running RHEL 3.  When compiling a package 
 using
 this GHC it is trying to invoke ar thus:

 execve(/usr/bin/ar, [/usr/bin/ar, -r, -c,
 dist/build/libHSregex-base-0.93, dist/build/Text/Regex/Base.o,
 dist/build/Text/Regex/Base/Regex..., dist/build/Text/Regex/Base/Conte...,
 dist/build/Text/Regex/Base/Impl], [/* 45 vars */]) = 0

 My version of ar does not like being invoked as /usr/bin/ar -r -c lib.a file
 file file..., it complains that the .a file is missing.  I believe it should 
 be
 /usr/bin/ar rc lib.a file file file

The -c flag is to tell it to create the archive (so not to complain if
the file is missing).

You're saying it accepts 'ar rc' but rejects 'ar -r -c'?

I was under the impression that POSIX allowed the '-' on the ar
command-line flags, e.g. http://www.unix.com/man-page/posix/1posix/ar/

 This appears to originate in Distribution.Simple.Program.Ar.

Yes.

 Can someone tell me what is going on here?

I'm very surprised it's not working on some version of Red Hat. This
has worked on many varieties of Linux for many years. You don't have
some non-standard ar installed, do you? What version of GNU binutils?
(ar -V)

Duncan

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: GHC 7.2.2 Distribution.Simple.Program.Ar

2012-05-18 Thread Duncan Coutts
On 18 May 2012 22:03, Joe Buehler as...@cox.net wrote:
 Duncan Coutts wrote:

 I'm very surprised it's not working on some version of Red Hat. This
 has worked on many varieties of linux for many years. You don't have
 some non-standard ar installed do you? What version of gnu binutils?
 (ar -V)

 No, it's the RHES ar program.

 # rpm -qf /usr/bin/ar
 binutils-2.14.90.0.4-42

 A CentOS 6 box works fine, so this may be a bug in RHES 3.

 The installed binutils is the latest for RHES 3.  Locally compiled
 versions of binutils have the same bug, so perhaps there is a bug
 elsewhere in the system.  For example, I do not have the very latest C
 library.

As a local workaround you can of course hack your Cabal library
sources and reinstall the lib. Until we work out what's going on I'm a
bit reluctant to change the upstream version since that has been tested
on so many systems (Linuxes, BSDs, other unixes).

Duncan

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Offer to mirror Hackage

2012-04-24 Thread Duncan Coutts
On Thu, 2012-04-19 at 11:12 -0400, Ryan Newton wrote:
 Hello all,
 
 
 Right now I'm trying to answer a simple question:
   * Would the current Haskell.org / hackage infrastructure benefit
 from the donation of a dedicated VM with good
 bandwidth/uptime?
 Whoever already knows how to do this could configure it.  
 
 
 In trying to answer the above question I found this long email thread
 from 1.5 years ago.  Duncan said the following:
 
 On Thu, Dec 9, 2010 at 6:47 PM, Duncan Coutts
 duncan.cou...@googlemail.com wrote:
 That's certainly what we've been planning on, that anyone can
 run a
 mirror, no permissions needed. The issue people have raised is
 what
 about having public mirrors that are used automatically or
 semi-automatically by clients.
 
 
 Are there any updates to this in the last year?  Is anybody running a
 mirror?

Yes, we're running a public testing instance of the new hackage server
at:

http://hackage.factisresearch.com/

It has live mirroring running.

This is in a VM donated by factis research, at least on a temporary
basis to help with the testing of the new hackage server code.

I think the answer for the longer term is still yes. We have not yet
discussed with Galois if the new hackage server should be hosted on
their infrastructure. The new code does take more resources and is not
based on apache, so it may not be appropriate to host it on the same
machine as is currently used.

There's two options I think:
 1. a machine for the central hackage server,
 2. a machine for doing package builds

The former will require more organisation, partly because we need the
haskell.org people to have some degree of control over the system. The
latter is easier because the design allows for multiple clients to do
builds rather than just one central machine. So all that requires is a
user account to upload the data (plus the small matter of working
build-bot client software, which is where scoutess may help).

Duncan



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Announce: Hackager - new version of hackage-test

2012-04-23 Thread Duncan Coutts
Good work David.

I used to do something like this for Cabal regression testing but the
method I used didn't scale well as hackage grew. I'll look into using
your tool next time for testing a major Cabal / cabal-install release.

Duncan

On 23 April 2012 21:37, David Terei dave.te...@gmail.com wrote:
 Hi all,

 I've updated the old hackage-test tool and renamed to hackager.

 http://hackage.haskell.org/package/hackager

 Hackager is a tool to automate the compiling of all packages on
 Hackage. It builds each package on hackage in isolation and records
 the results. The purpose being to catch regressions caused by changes
 to GHC (and Cabal although this was not the motivation). Two runs of
 Hackager can be compared, so the first run is done with a known
 version of GHC and the next run with a new, experimental version of
 GHC... etc.

 The improvements to Hackager over hackage-test are:
 * Parallelized the build process. Can now specify how many packages to
 build in parallel, which cuts total run time down greatly (e.g. 2 days
 -> 5 hours)
 * hackage-test and hackage-report are now one tool, 'hackager' that
 works as a multi-command tool.
 * Proper option handling
 * Fixed some stability issues

 The new homepage for development can be found here:
 https://github.com/dterei/Hackager

 Cheers,
 David

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: String != [Char]

2012-03-19 Thread Duncan Coutts
On 17 March 2012 01:44, Greg Weber g...@gregweber.info wrote:
 the text library and Text data type have shown the worth in real world
 Haskell usage with GHC.
 I try to avoid String whenever possible, but I still have to deal with
 conversions and other issues.
 There is a lot of real work to be done to convert away from [Char],
 but I think we need to take it out of the language definition as a
 first step.

I'm pretty sure the majority of people would agree that if we were
making the Haskell standard nowadays we'd make the String type abstract.

Unfortunately I fear making the change now will be quite disruptive,
though I don't think we've collectively put much effort yet into
working out just how disruptive.

In principle I'd support changing to reduce the number of string types
used in interfaces. From painful professional experience, I think that
one of the biggest things where C++ went wrong was not having a single
string type that everyone would use (I once had to write a C++
component integrating code that used 5 different string types). Like
Python 3, we should have two common string types used in interfaces:
string and bytes (with implementations like our current Text and
ByteString).

BTW, I don't think taking it out of the language would be a helpful
step. We actually want to tell people "use *this* string type in
interfaces", not leave everyone to make their own choice. I think
taking it out of the language would tend to encourage everyone to make
their own choice.

Duncan

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: [Haskell-cafe] Hackage 2 maintainership

2012-02-15 Thread Duncan Coutts
On 14 February 2012 10:43, Kirill Zaborsky qri...@gmail.com wrote:
 I apologize,
 But does hackage.haskell.org being down for some hours already have something
 to do with the process of bringing up Hackage 2?

No, completely independent. The server was down for a few hours (and I
think restored in the morning by the sysadmin at Galois).

It was while I was mirroring to the new test server, but I don't think
that was the cause of the outage.

Some people were joking that the old server didn't want to be retired
and so was shutting down to prevent the mirroring :-)

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hackage 2 maintainership

2012-02-13 Thread Duncan Coutts
Hi Ben,

On 13 February 2012 23:44, Ben Gamari bgamari.f...@gmail.com wrote:
 Hey all,

 Those of you who follow the Haskell subreddit no doubt saw today's post
 regarding the status of Hackage 2. As has been said many times in the
 past, the primary blocker at this point to the adoption of Hackage 2
 appears to be the lack of an administrator.

Yes, much of it is lack of an individual to keep momentum up and keep
everyone else motivated. While I'm keen that hackage moves forward, my
volunteer time is spread too thin to be that person keeping everything
organised. That said, where I spend my volunteer time is to a large
part directed by what other people are doing, it's much more fun and
motivating if there's other people working with you.

Speaking of which, I spent much of this evening fixing things, more
details below.

 It seems to me this is a poor reason for this effort to be held
 up. Having taken a bit of time to consider, I would be willing to put in
 some effort to get things moving and would be willing to maintain the
 haskell.org Hackage 2.0 instance going forward if necessary.

That would be great. So in the short term I'm very happy to get the
help and in the longer term I'm happy to hand over to anyone sensible
who puts in the effort. That person could be you, someone else or a
team of several people.

More immediately, my general policy with commit access is to give it
to anyone who's sent a few good patches. Currently there are 7 people
with write access to the darcs repo on code.h.o. It is of course also
fine for people to maintain their own public branches (which they can
do using git or darcs, whichever).

 I currently have a running installation on my personal machine and
 things seem to be working as they should. On the whole, installation was
 quite trivial, so it seems likely that the project is indeed at a point
 where it can take real use (although a logout option in the web
 interface would make testing a bit easier).

Yes, we're at the stage where we can run a public testing instance.
You'll see there's a bit more to implement and test for a switchover.

 That being said, it would in my opinion be silly to proceed without
 fixing the Hackage trac. It was taken down earlier this year due to
 spamming[1] and it seems the recovery project has been orphaned. I would
 be willing to help with this effort, but it seems like someone more
 familiar with the haskell.org infrastructure might be better equipped to
 handle the situation.

I spent a couple of hours on this this evening and I've finally fixed it
(I hope). I still need to purge a bit of wiki/ticket spam (help
appreciated there). Sadly I've had to blow away the previous login
accounts, but I've semi-restored them by copying the ghc trac accounts. So
if you happen to have an account on the ghc trac, then your login
should work for the hackage trac. Otherwise you'll need to re-register
as if it was a new account.

 It seems that this process will go something like this,
  1) Bring Hackage trac back from the dead

Check.

  2) Bring up a Hackage 2 instance along-side the existing
     hackage.haskell.org

Yes, now that the trac is back, you can see what notes we have on the
switchover process at:
http://hackage.haskell.org/trac/hackage/wiki/HackageDB/2.0

Note also that the nice people at factisresearch.com have given us a
VM with enough memory (8GB) for the purpose of running a public test
with the full package set (in principle it should not need so much
memory, but we currently keep unnecessary package metadata in memory).

So thanks to you and others this evening motivating me, I've also
taken Max's latest patches to my tar package (which coincidentally I
released yesterday) and the corresponding hackage-server patch and set
it running at:

http://hackage.factisresearch.com/

This is running the latest upstream darcs version. I have also fired
off a one-shot mirroring operation. This will mirror all the existing
packages from hackage. It'll probably take half a day or so to
complete since there's something like 30-40k tarballs to be copied
over. I'll check the logs tomorrow hopefully and after that kick off a
live/continuous mirror so it'll get new updates from the main hackage
within 20-30 min or so.

Last time Max and I tried this we were able to mirror almost all
packages. Most of the unmirrorable ones at the time were due to
packages with quirks in their tar format, which is what his tar
patches were aimed at. So I'm hopeful we'll now have only a tiny
handful of unmirrorable packages.

  3) Enlist testers
  4) Let things simmer for a few weeks/months ensuring nothing explodes
  5) After it's agreed that things are stable, eventually swap the
     Hackage 1 and 2 instances

Right, that's more or less it. Other details on the wiki (and if
there's anything missing, edit it).

You'll see that there are still some missing components. In particular
while I finished the live mirroring client, and Max has done a doc
builder client, I 

Re: [Haskell-cafe] Hackage 2 maintainership

2012-02-13 Thread Duncan Coutts
On 14 February 2012 01:53, Duncan Coutts duncan.cou...@googlemail.com wrote:
 Hi Ben,

 On 13 February 2012 23:44, Ben Gamari bgamari.f...@gmail.com wrote:
 Hey all,

 Those of you who follow the Haskell subreddit no doubt saw today's post
 regarding the status of Hackage 2. As has been said many times in the
 past, the primary blocker at this point to the adoption of Hackage 2
 appears to be the lack of an administrator.

 Yes, much of it is lack of an individual to keep momentum up and keep
 everyone else motivated. While I'm keen that hackage moves forward, my
 volunteer time is spread too thin to be that person keeping everything
 organised. That said, where I spend my volunteer time is to a large
 part directed by what other people are doing, it's much more fun and
 motivating if there's other people working with you.

Ah, here's the link to my last go at getting people to self-organise.

http://www.haskell.org/pipermail/cabal-devel/2011-October/007803.html

You should find it somewhat useful. It gives an overview of people who
are / have been involved.

We did get another reasonable push at the time. In particular Max did
a lot of good work. I'm not quite sure why it petered out again, I'd
have to ask Max what went wrong, if it was my fault for letting things
block on me or if it was just holidays/christmas. Maintaining momentum
is hard.

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [Haskell] Support tools for development with Cabal and Darcs

2012-02-10 Thread Duncan Coutts
On 10 February 2012 12:06, Henning Thielemann
lemm...@henning-thielemann.de wrote:

 I have uploaded three packages to Hackage that shall simplify maintaining
 cabal packages under darcs revision control. They simplify tasks like
 uploading packages after some tests, cabal version handling, compiling
 multiple local packages in the right order, replacing identifiers in
 multiple files, renaming modules.

 * http://hackage.haskell.org/package/darcs-scripts
 * http://hackage.haskell.org/package/cabal-scripts
 * http://hackage.haskell.org/package/cabal-sort

Hi Henning,

Would you mind giving me a brief explanation of the "compiling
multiple local packages in the right order" feature? I'd like to
understand what is missing in cabal-install in this respect.

Currently with cabal-install you can say:

cabal install ./a ./b ./c

and it will install the packages in those directories in the right
order. (It is also possible to list other install targets including
local and remote tarballs, and of course packages from hackage).
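
The right order here is essentially a topological sort of the local
dependency graph. A minimal sketch of that computation (the names are
illustrative, not cabal-install's internals):

import Data.Graph (graphFromEdges, topSort)

data LocalPkg = LocalPkg
  { lpName :: String     -- package name
  , lpDeps :: [String]   -- names of other local packages it depends on
  }

-- topSort lists each vertex before the vertices it points at (its
-- dependencies), so reversing the result gives a valid build order.
buildOrder :: [LocalPkg] -> [String]
buildOrder pkgs =
    reverse [ name | v <- topSort graph, let (_, name, _) = fromVertex v ]
  where
    (graph, fromVertex, _) =
      graphFromEdges [ (p, lpName p, lpDeps p) | p <- pkgs ]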

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [Haskell] Support tools for development with Cabal and Darcs

2012-02-10 Thread Duncan Coutts
On 10 February 2012 14:00, Henning Thielemann
lemm...@henning-thielemann.de wrote:

 On Fri, 10 Feb 2012, Duncan Coutts wrote:

 Would you mind giving me a brief explanation of the "compiling
 multiple local packages in the right order" feature? I'd like to
 understand what is missing in cabal-install in this respect.

 Currently with cabal-install you can say:

 cabal install ./a ./b ./c

 and it will install the packages in those directories in the right
 order. (It is also possible to list other install targets including
 local and remote tarballs, and of course packages from hackage).


 I believe that the first version of cabal-sort is older than this
 cabal-install feature.

Ah yes, looks like we developed it concurrently. cabal-install 0.10.2
came out in March 2011 and cabal-sort 0.0.1 in May.

 Nonetheless, cabal-sort can be used for other
 purposes, too. E.g. you can use plain Cabal installation, when 'cabal
 install' does not work due to problems in constraint solving. You can upload
 packages in topologically sorted order to Hackage, such that building on
 Hackage works correctly. Is this also done by cabal-install's upload
 command?

No it's not, it's an interesting point. In practice with the current
builder you'd be very unlucky to hit the exact time when it kicks off
builds (only a few times a day). With the new server it's likely to be
more of an issue since that will allow lots of builder clients.

 You can also use cabal-sort for stepping into package directories
 and do some manual updates before installing.

 However, I will add references to the new cabal-install features in the
 documentation of cabal-sort.

Cheers.

I wonder if you have any suggestions for UI changes/improvements for
cabal-install for working with related sets of local packages
(something it's fairly weak on at the moment). Are there things in
cabal-sort you think we should just lift directly into cabal-install
or variations etc?

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Mersenne-random and standard random API

2012-02-09 Thread Duncan Coutts
On 9 February 2012 10:59, Aleksey Khudyakov alexey.sklad...@gmail.com wrote:

 So is it possible to use the fast and efficient mersenne generator with
 the convenient and general random API?

 I think design of Random type class basically precludes efficient generators
 with large periods and consequently large state.
 Look at next function:

 next :: g -> (Int, g)

 It means that state has to be copied but for efficiency we want to
 mutate it in place. I consider Random type class a failure and ignore
 it.

Actually it is not true that the state has to be copied. Using the
lazy ST monad we can implement this interface and internally use
mutable ST arrays.

See for example
http://web.archive.org/web/20090108050217/http://www.augustsson.net/Darcs/MT/MersenneTwister.hs

It ends up with this function that generates the infinite lazy
sequence of words from a seed word. This can be then made to fit the
next :: g -> (Int, g) interface, with g = [W].

mersenneTwister :: W - [W]
mersenneTwister s = L.runST (mersenneTwisterST s)

For reference:

import Control.Monad.ST.Lazy as L
type W = Word32
mersenneTwisterST :: W -> L.ST s [W]
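
As a rough illustration of that wrapping (a sketch only: MTGen and
nextMT are illustrative names, and mersenneTwister here is a placeholder
for the generator linked above):

import Data.Word (Word32)

type W = Word32

-- Placeholder standing in for the lazy-ST generator linked above.
mersenneTwister :: W -> [W]
mersenneTwister = undefined

-- Wrap the infinite lazy word stream so it fits  next :: g -> (Int, g).
newtype MTGen = MTGen [W]

mkMTGen :: W -> MTGen
mkMTGen seed = MTGen (mersenneTwister seed)

nextMT :: MTGen -> (Int, MTGen)
nextMT (MTGen (w:ws)) = (fromIntegral w, MTGen ws)
nextMT (MTGen [])     = error "unreachable: the word stream is infinite"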

So yes, this loses a small constant factor because of the extra lazy
list (which could be reduced somewhat), so it's not suitable for the
absolute maximum-performance interface, but it certainly allows the use
of traditional mutable PRNGs with the nice interface with decent
performance.

Certainly none of these PRNGs support split, but that's a separate
issue. Overall, I think the Random type class is salvageable given a
few changes. In the end, the Random module probably needs both an
ST-monadic interface and a pure one. People can use the ST one if it
happens to be more convenient or if they need to squeeze the last drop
of performance out.

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Taking binary from hackage or GHC?

2012-02-08 Thread Duncan Coutts
On 8 February 2012 10:24, Joachim Breitner nome...@debian.org wrote:
 Dear interested parties :-),

 GHC 7.4.1 started to ship and expose the binary library, version
 0.5.0.3. On hackage is binary-0.5.1.0.

It was firmly my opinion that shipping and exposing binary in GHC was
and is a mistake. Previously it was given a different name to try to
discourage people using it, but apparently that didn't work. The
authors of binary (myself included) don't want to ship it yet as part
of the Haskell Platform because the API isn't right yet (ongoing
work), and shipping it with GHC effectively makes it part of the
platform.

 In Debian, we try to provide one version of each library, so we have to 
 decide:

Yes, you're not put in an easy situation. Nor will we be when we come
to packaging the next HP release.

  * Use the version provided by GHC and drop the independent binary
 package (as we have done with random, for example).

  * Do not expose binary in GHC and continue using the version from
 hackage.

I'm not sure I have the whole answer, you'll also need a response from Ian.

Eventually we will want to propose binary for the HP, but GHC may well
still want to depend on binary and ship it. So it might end up as one
of those HP libs that is shipped with GHC in the long term.

 @Upstream: Do you think binary on hackage will diverge much from the one
 in GHC and would you expect your users to want the new versions before
 they are shipped with GHC?

No, it should not diverge much, GHC picks up the latest code from the
upstream version occasionally.

 And do you expect breakage in any components
 (e.g. haddock) if everything but GHC uses a newer binary package?

At some point, we will have a major version change and that will break
the API and the binary format (we might even split the package in
two).

If they use similar versions but not the same, then probably the only
thing to break would be haddock, since I'm guessing that it makes use
of binary instances provided by the GHC package. But of course haddock
is also shipped with GHC.

Duncan

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] ST not strict enough?

2011-11-18 Thread Duncan Coutts
On 18 November 2011 13:17, Daniel Fischer
daniel.is.fisc...@googlemail.com wrote:
 On Friday 18 November 2011, 13:05:06, Yves Parès wrote:
 ...so there is no way to do that inside the function passed to
 modifySTRef? In other words, there is no way to ensure inside a
 function that its result will be evaluated strictly?

 Well,

 modifySTRef ref fun = do
    val <- readSTRef ref
    writeSTRef ref  (fun val)

But note therefore that you can just:

val <- readSTRef ref
writeSTRef ref  $! fun val

rather than using modifySTRef.
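
Packaged up as a reusable helper, that pattern looks like this (a small
sketch; modifySTRefStrict is our own name, not a library function):

import Control.Monad.ST (ST)
import Data.STRef (STRef, readSTRef, writeSTRef)

-- Apply the function and force its result before writing it back, so
-- no chain of thunks builds up inside the STRef.
modifySTRefStrict :: STRef s a -> (a -> a) -> ST s ()
modifySTRefStrict ref fun = do
  val <- readSTRef ref
  writeSTRef ref $! fun val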

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Way to expose BLACKHOLES through an API?

2011-11-16 Thread Duncan Coutts
On Tue, 2011-11-08 at 15:43 +0000, Simon Marlow wrote:

 Hmm, but there is something you could do.  Suppose a thread could be in 
 a mode in which instead of blocking on a BLACKHOLE it would just throw 
 an asynchronous exception WouldBlock.  Any computation in progress would 
 be safely abandoned via the usual asynchronous exception mechanism, and 
 you could catch the exception to implement your evaluateNonBlocking 
 operation.
 
 I'm not sure this would actually be useful in practice, but it's 
 certainly doable.

The linux kernel folks have been discussing a similar idea on and off
for the last few years. The idea is to return in another thread if the
initial system call blocks.

Perhaps there's an equivalent here. We have an evaluateThingy function
and when the scheduler notices that thread is going to block for some
reason (either any reason or some specific reason) we return from
evaluateThingy with some info about the blocked thread.

The thing that the kernel folks could never decide on was to do with
thread identity: if it was the original thread that blocked and we
return in a new thread, or if the original thread returns and a clone is
the one that blocks.

Or perhaps it's a crazy idea and it would never work at all :-)

Duncan


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Should GHC default to -O1 ?

2011-11-09 Thread Duncan Coutts
On 9 November 2011 13:53, Greg Weber g...@gregweber.info wrote:
 How much does using ghc without cabal imply a newer programmer? I don't use
 cabal when trying out small bits of code (maybe I should be using ghci), but
 am otherwise always using cabal.

The main reason cabal has always defaulted to -O is because
historically it's been assumed that the user is installing something
rather than just hacking on their own code.

If we can distinguish cleanly in the user interface between the
installing and hacking use cases then we could default to -O0 for the
hacking case.

Duncan

 On Wed, Nov 9, 2011 at 3:18 AM, Duncan Coutts duncan.cou...@googlemail.com
 wrote:

 On 9 November 2011 00:17, Felipe Almeida Lessa felipe.le...@gmail.com
 wrote:
  On Tue, Nov 8, 2011 at 3:01 PM, Daniel Fischer
  daniel.is.fisc...@googlemail.com wrote:
  On Tuesday 08 November 2011, 17:16:27, Simon Marlow wrote:
  most people know about 1, but I think 2 is probably less well-known.
  When in the edit-compile-debug cycle it really helps to have -O off,
  because your compiles will be so much quicker due to both factors 1
  and 2.
 
  Of course. So defaulting to -O1 would mean one has to specify -O0 in
  the
  .cabal or Makefile resp. on the command line during development, which
  certainly is an inconvenience.
 
  AFAIK, Cabal already uses -O1 by default.

 Indeed, and cabal check / hackage upload complain if you put -O{n} in
 your .cabal file.

 The recommended method during development is to use:

 $ cabal configure -O0


 Duncan

 ___
 Glasgow-haskell-users mailing list
 Glasgow-haskell-users@haskell.org
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users



___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


[Haskell] Parallel GHC project: new opportunity for an organisation to participate

2011-06-08 Thread Duncan Coutts

GHC HQ and Well-Typed are pleased to announce a new opportunity for an
organisation to take part in the Parallel GHC Project.

The project started in November 2010 with four industrial partners, and
consulting and engineering support from Well-Typed. Each organisation is
working on its own particular project making use of parallel Haskell.
The overall goal is to demonstrate successful use of parallel Haskell
and along the way to apply engineering effort to any problems with the
tools that the partner organisations might run into.

We have capacity to support another partner organisation for the
remaining duration of the project (at least another 12 months).
Organisations do not need to contribute financially but should be
prepared to make a significant commitment of their own time. Familiarity
with Haskell would be helpful, but Haskell expertise is not needed.
Partner organisations' choice of projects is similarly open-ended and
could be based on anything from pre-existing code bases to green field
endeavours.

We would welcome organisations interested in pure parallelism,
concurrency and/or distributed Haskell. Presently, two of our partner
organisations are using mainly pure parallelism and two are using
concurrency. What would be especially interesting for us is to diversify
this mix further by working with an organisation interested in making
use of of distributed Haskell, in particular the work highlighted in the
recent paper Haskell for the Cloud [1].

To help give an idea of what participating in the Parallel GHC Project is
like, here is some of what our current partner organisations have
to say:


The Parallel GHC Project has enabled us to make steady progress
towards our goals. Well-typed has provided support in the form
of best practice recommendations, general engagement with the
project, and directly coding up key components.

I have been getting lots of help from Well-Typed, and enjoy
our weekly meetings.
  -- Finlay Thompson, Dragonfly


My organization is now trying to implement highly concurrent Web
servers. After GHC 7 was released we faced several critical bugs
in the new IO manager and one guy at Well-Typed kindly fixed all
the bugs. This has been a big benefit for our organization.

Another benefit is feedback/suggestions from Well-Typed.
Well-Typed and our organization have meetings every other week
and we report progress to each other. During the discussions, we
can set the right direction to go in.
  -- Kazu Yamamoto, IIJ Innovation Institute Inc.


Well-Typed is coordinating the project, working directly with the
participating organisations and the Simons at GHC HQ. If you think your
organisation may be interested then get in touch with me via
i...@well-typed.com

[1] 
http://research.microsoft.com/en-us/um/people/simonpj/papers/parallel/remote.pdf


-- 
Duncan Coutts, Haskell Consultant
Well-Typed LLP, http://www.well-typed.com/



___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell



Re: Incrementally consuming the eventlog

2011-05-01 Thread Duncan Coutts
On Thu, 2011-04-28 at 23:31 +0200, Johan Tibell wrote:

 The RTS would invoke listeners every time a new event is written. This
 design has many benefits:
 
 - We don't need to introduce the serialization, deserialization, and
 I/O overhead of first writing the eventlog to file and then parsing it
 again.

The events are basically generated in serialised form (via C code that
writes them directly into the event buffer). They never exist as Haskell
data structures, or even C structures.

 - Programs could monitor themselves and provide debug output (e.g. via
 some UI component).
 - Users could write code that redirects the output elsewhere e.g. to a
 socket for remote monitoring.
 
 Would invoking a callback on each event add too big of an overhead?

Yes, by orders of magnitude. In fact it's impossible because the act of
invoking the callback would generate more events... :-)

 How about invoking the callback once every time the event buffer is
 full?

That's much more realistic. Still, do we need the generality of pushing
the event buffers through the Haskell code? For some reason it makes me
slightly nervous. How about just setting which output FD the event
buffers get written to.

Turning all events or various classes of events on/off at runtime should
be doable. The design already supports multiple classes, though
currently it just has one class (the 'scheduler' class). The current
design does not support fine grained filtering at the point of event
generation.

Those two features combined (plus control over the frequency of event
buffer flushing) would be enough to implement a monitoring socket
interface (web http or local unix domain socket).

Making the parser in the ghc-events package incremental would be
sensible and quite doable as people have already demonstrated.
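
The shape of such an incremental consumer might look roughly like this
(a sketch only: DecoderState and feedChunk are hypothetical stand-ins,
not functions from the ghc-events package):

import qualified Data.ByteString as BS
import Control.Concurrent (threadDelay)
import System.IO (IOMode(ReadMode), withFile, hSetBinaryMode)

-- Hypothetical incremental decoder interface.
data DecoderState = DecoderState

feedChunk :: DecoderState -> BS.ByteString -> (DecoderState, [String])
feedChunk st _chunk = (st, [])   -- stand-in: would return decoded events

-- Tail a growing .eventlog file, feeding each chunk to the decoder.
tailEventLog :: FilePath -> IO ()
tailEventLog path = withFile path ReadMode $ \h -> do
  hSetBinaryMode h True
  let loop st = do
        chunk <- BS.hGetSome h 4096
        if BS.null chunk
          then threadDelay 100000 >> loop st   -- wait for more output
          else do
            let (st', evs) = feedChunk st chunk
            mapM_ putStrLn evs
            loop st'
  loop DecoderState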

Duncan


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Package management

2011-05-01 Thread Duncan Coutts
On Tue, 2011-04-26 at 14:05 -0700, Brandon Moore wrote:
 Based on my own misadventures and Albert Y. C. Lai's SICP 
 (http://www.vex.net/~trebla/haskell/sicp.xhtml)
 it seems the that root of all install problems is that reinstalling a
 particular version of a particular package deletes any other existing
 builds of that version, even if other packages already depend on them.
 
 Deleting perfectly good versions seems to be the root of all package
 management problems.

Yes.

 There are already hashes to keep incompatible builds of a package separate. 
 Would anything break if existing packages were left alone when a new
 version was installed? (perhaps preferring the most recent if a
 package flag specifies version but not hash).

That is the nix solution. It is also my favoured long term solution.

 The obvious difficulty is a little more trouble to manually specify packages. 
 Are there any other problems with this idea?

See nix and how it handles the configuration and policy issues thrown up
by allowing multiple instances of the same version of each package. For
example, they introduce the notion of a package environment which is a
subset of the universe of installed packages.
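
The core of the idea fits in a few lines (a sketch; the names are ours,
not GHC's or Cabal's):

import qualified Data.Set as Set

-- An installed instance is identified by name, version *and* a hash
-- over its build inputs (dependencies, flags, compiler), so installing
-- a new instance never needs to delete an existing one.
data InstalledInstance = InstalledInstance
  { instName      :: String
  , instVersion   :: [Int]
  , instBuildHash :: String
  } deriving (Eq, Ord, Show)

-- A package environment (as in nix) is a chosen, consistent subset of
-- the universe of installed instances.
type PackageEnvironment = Set.Set InstalledInstance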

Duncan


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell] Creating a Project

2011-04-18 Thread Duncan Coutts
On Sat, 2011-04-09 at 21:27 +0100, Dominic Steintiz wrote:
 Hi,
 
 I am trying to create a project using
 http://community.haskell.org/admin/project_request.html but I am getting
 Service Temporarily Unavailable. Does anyone know when it will be back
 on line?

BTW, this is back online now.

Similarly, the account request form is working again along with the
other things mentioned at http://community.haskell.org/admin/


Duncan


___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: Proposal: Define UTF-8 to be the encoding of Haskell source files

2011-04-17 Thread Duncan Coutts
On Thu, 2011-04-07 at 15:44 +0200, Roel van Dijk wrote:
 On 7 April 2011 14:11, Duncan Coutts duncan.cou...@googlemail.com wrote:
  I would be happy to work with you and others to develop the report text
  for such a proposal. I posted my first draft already :-)
 
 What would be a good way to proceed? Looking at the process I think we
 should create a wiki page and a ticket for this proposal. If necessary
 I'll volunteer to be the proposal owner.

Ok, I can give you permissions on the wiki. What is your username on the
haskell-prime wiki?

Duncan


___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Proposal: Define UTF-8 to be the encoding of Haskell source files

2011-04-06 Thread Duncan Coutts
On 4 April 2011 23:48, Roel van Dijk vandijk.r...@gmail.com wrote:
 * Proposal

 The Haskell 2010 language specification states that: "Haskell uses the
 Unicode character set" [2]. It does not state what encoding should be
 used. This means, strictly speaking, it is not possible to reliably
 exchange Haskell source files on the byte level.

 I propose to make UTF-8 the only allowed encoding for Haskell source
 files. Implementations must discard an initial Byte Order Mark (BOM)
 if present [3].

 * Next step

 Discussion! There was already some discussion on the haskell-cafe
 mailing list [7].

This is a simple and obviously sensible proposal. I'm certainly in favour.

I think the only area where there might be some issue to discuss is
the language of the report. As far as I can see, the report does not
require that modules exist as files, does not require the .hs
extension and does not give the standard mapping from module name to
file name.

So since the goal is interoperability of source files then perhaps we
should also have a section somewhere with interoperability guidelines
for implementations that do store Haskell programs as OS files. The
section would describe the one module per file convention, the .hs
extension (this is already obliquely mentioned in the section on
literate Haskell syntax) and the mapping of module names to file names
in common OS file systems. Then this UTF8 stipulation could go there
(and it would be clear that it applies only to conventional
implementations that store Haskell programs as files).

e.g.

Interoperability Guidelines


This Report does not specify how Haskell programs are represented or
stored. There is however a conventional representation using OS files.
Implementations that conform to these guidelines will benefit from the
portability of Haskell program representations.

Haskell modules are stored as files, one module per file. These
Haskell source files are given the file extension .hs for usual
Haskell files and .lhs for literate Haskell files (see section
10.4).

Source files must be encoded as UTF-8 \cite{utf8}. Implementations
must discard an initial Byte Order Mark (BOM) if present.

To find a source file corresponding to a module name used in an import
declaration, the following mapping from module name to OS file name is
used. The '.' character is mapped to the OS's directory separator
string while all other characters map to themselves. The .hs or
.lhs extension is added. Where both .hs and .lhs files exist for
the same module, the .lhs one should be used. The OS's standard
convention for representing Unicode file names should be used.

For example, on a UNIX based OS, the module A.B would map to the file
name A/B.hs for a normal Haskell file or to A/B.lhs for a literate
Haskell file. Note that because it is rare for a Main module to be
imported, there is no restriction on the name of the file containing
the Main module. It is conventional, but not strictly necessary, that
the Main module use the .hs or .lhs extension.
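
By way of illustration only (not part of the proposed report text), the
mapping from module name to candidate file names could be written as:

import System.FilePath (joinPath, (<.>))

-- Candidate source file names for a hierarchical module name such as
-- "A.B", following the mapping described above.
moduleSourceCandidates :: String -> [FilePath]
moduleSourceCandidates modName = [ base <.> "hs", base <.> "lhs" ]
  where
    base = joinPath (splitOn '.' modName)

    splitOn :: Char -> String -> [String]
    splitOn c s = case break (== c) s of
      (chunk, [])       -> [chunk]
      (chunk, _ : rest) -> chunk : splitOn c rest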


Duncan

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Proposal: Define UTF-8 to be the encoding of Haskell source files

2011-04-06 Thread Duncan Coutts
On Wed, 2011-04-06 at 16:09 +0100, Ben Millwood wrote:
 On Wed, Apr 6, 2011 at 2:13 PM, Duncan Coutts
 duncan.cou...@googlemail.com wrote:
 
  Interoperability Guidelines
  
 
  [...]
 
  To find a source file corresponding to a module name used in an import
  declaration, the following mapping from module name to OS file name is
  used. The '.' character is mapped to the OS's directory separator
  string while all other characters map to themselves. The .hs or
  .lhs extension is added. Where both .hs and .lhs files exist for
  the same module, the .lhs one should be used. The OS's standard
  convention for representing Unicode file names should be used.
 
 
 This standard isn't quite universal. For example, jhc will look for
 Data.Foo in Data/Foo.hs but also Data.Foo.hs [1]. We could take this
 as an opportunity to discuss that practice, or we could try to make
 the changes to the report orthogonal to that issue.

Indeed. But it's true to say that if you do support the common
convention then you get portability. This does not preclude JHC from
supporting something extra, but sources that take advantage of JHC's
extension are not portable to implementations that just use the common
convention.

 In some sense I think it's cute that the Report doesn't specify
 anything about how Haskell modules are stored or represented, but I
 don't think that freedom is actually used, so I'm happy to see it go.
 I'd think, though, that in that case there would be more to discuss
 than just the encoding, so if we could separate out the issues here, I
 think that would be useful.

It's not going. I hope I was clear in the example text that the
interoperability guidelines were not forcing implementations to use
files etc., just that if they do and they use these conventions, then
sources will be portable between implementations.

It doesn't stop an implementation using URLs, sticking multiple modules
in a file or keeping modules in a database.

Duncan


___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


[Haskell] ANNOUNCE: c2hs 0.16.3

2011-03-24 Thread Duncan Coutts
All,

c2hs version 0.16.3 is out and is available from hackage
http://hackage.haskell.org/package/c2hs


What is c2hs


c2hs is an FFI preprocessor tool, a bit like hsc2hs, that helps with the
development of Haskell bindings to C libraries.

The major advantages of using c2hs compared to writing foreign imports
by hand (or using hsc2hs) are:

  * Cross-language type safety:
C functions are imported with the correct Haskell types.

  * Saves time:
boilerplate marshaling code is generated.

It works by extracting interface information from C header files and it
generates Haskell code with foreign imports and marshalling.


Changes in this release
===

  * In and out marshallers may now have arguments, rather than just
being single function names.
  * Foreign function imports will now use stdcall if the C headers
specify this calling convention. This will help with bindings to
some C libraries on Windows.
  * New {# alignof #} hook, like the existing {# sizeof #} hook.
This is useful for the alignment in Storable instances.
  * Fewer marshalling functions are needed from the bundled C2HS
module. The trend is towards eliminating the need for any
    auxiliary marshalling module.


Credits
===

Thanks to Jonathan Rockway, r...@gamr7.com and Dmitry Astapov for
contributions to this release.

Links
=

Home page:
http://www.cse.unsw.edu.au/~chak/haskell/c2hs/

Bug tracker:
http://hackage.haskell.org/trac/c2hs/


Duncan


___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell-cafe] Status update on {code, trac, projects, planet, community}.haskell.org

2011-02-18 Thread Duncan Coutts
On Thu, 2011-02-17 at 19:30 +0100, Henning Thielemann wrote:
 Duncan Coutts wrote:
 
  Several people have asked about the new host key. Yes, there is a new
  RSA host key for the community server, the fingerprint of which is:
  
  21:b8:59:ff:39:69:58:7a:51:ef:c1:d8:c6:24:6e:f7
  
  ssh will likely give you a scary warning and you'll need to delete the
  old entry in your ~/.ssh/known_hosts file. You don't need to enter a new
  one, just delete the old one. When you next log into the server, ssh
  will ask you if you're happy with the new key. If you're paranoid, you
  can double check that it matches the key fingerprint above.
 
 Do you think it is paranoid?

Sorry, I didn't mean it literally (or pejoratively).

 Unfortunately it has become quite common to ignore SSH warnings
 because admins often do not care about restoring keys when updating
 the operating system or moving the machine, even not telling users
 that the host key has changed. But if I had  ignored the SSH warning
 on code.haskell.org recently I might have logged in and from there
 maybe to other servers, thus giving my passwords to the attackers. I
 think generally that just deleting a host from known_hosts in response
 to an SSH warning and blindly accepting a new host key is not a fix. Am
 I too afraid?

No, you're quite right. It was these warnings that initially alerted us
to the problem.

Duncan


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Status update on {code, trac, projects, planet, community}.haskell.org

2011-02-16 Thread Duncan Coutts
On Wed, 2011-02-16 at 02:12 +0000, Duncan Coutts wrote:
  We have not yet re-enabled user login accounts, nor re-enabled access
  to code repositories. We will send a further update when these are
  re-enabled, or procedures for people to re-enable them are finalised.
 
 Logging in
 ==
 
 We have restored ssh logins for around 250 user accounts (ie darcs push
 will work).

Several people have asked about the new host key. Yes, there is a new
RSA host key for the community server, the fingerprint of which is:

21:b8:59:ff:39:69:58:7a:51:ef:c1:d8:c6:24:6e:f7

ssh will likely give you a scary warning and you'll need to delete the
old entry in your ~/.ssh/known_hosts file. You don't need to enter a new
one, just delete the old one. When you next log into the server, ssh
will ask you if you're happy with the new key. If you're paranoid, you
can double check that it matches the key fingerprint above.

Duncan


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Status update on {code, trac, projects, planet, community}.haskell.org

2011-02-15 Thread Duncan Coutts
On Wed, 2011-02-02 at 01:33 +0000, Duncan Coutts wrote:
 All,
 
 As you will be aware, some of the *.haskell.org websites have been down
 recently, specifically:
 
 code.haskell.org
 trac.haskell.org
 projects.haskell.org
 planet.haskell.org
 community.haskell.org

[...]

 We have not yet re-enabled user login accounts, nor re-enabled access
 to code repositories. We will send a further update when these are
 re-enabled, or procedures for people to re-enable them are finalised.

Logging in
==

We have restored ssh logins for around 250 user accounts (ie darcs push
will work).

If you are not one of those 250 and you cannot log in then you will need
to email supp...@community.haskell.org. Give your real name, your unix
user name and attach your current ssh public key.


Once you have logged in
===

Personal webspace
-

public URL: http://code.haskell.org/~$username/
server-side: ~/public_html(-disabled)

You will notice that your ~/public_html directory has been renamed to
~/public_html-disabled. There is a slim possibility that the data was
altered when the server was compromised. We recommend that you check it
first and then to restore use: mv ~/public_html-disabled ~/public_html


Code repositories
-

public URL: http://code.haskell.org/$projname/
server-side: /srv/code/$projname/
or: /srv/srv-from-nun/code/{checked-failed,checked-strayfiles}/$projname/

Similarly, many code repositories (44) have not been re-enabled. Ones
that we could check automatically have already been restored. 

If the /srv/code/$project directory for your project is empty or missing
then you will find it in one of the directories
in /srv/srv-from-nun/code/, either checked-failed/ if darcs check
failed on that repository, or in checked-strayfiles/ if the repository
contains extra unrecorded files that we could not check automatically.

You should check that you are satisfied that the repository contains
just what you expect and then email supp...@community.haskell.org to ask
for it to be moved back to the usual location.


Project websites


public URL: http://projects.haskell.org/$projname/
server-side: /srv/projects/$projname/
or: /srv/srv-from-nun/projects/$projname/

If the /srv/projects/$project directory for your project is empty or
missing then you will find the project website
in /srv/srv-from-nun/projects/$project.

You should check that you are satisfied that the website directory
contains just what you expect and then email
supp...@community.haskell.org to ask for it to be moved back to the
usual location.



Explanation
===

We believe that when the server was compromised, the attacker was mainly
interested in collecting usernames and passwords. Since we do not use
password based logins, we think the attacker was not successful in this.
However we are unable to trust any of the ~/.ssh/authorized_keys because
the attacker could have modified them to give access at a later date.

We were able to verify the ~/.ssh/authorized_keys for around 250 users
by comparing the current file against the key that was originally
submitted in the account creation request. People who have added keys or
changed keys since initial account creation have not had their login
access restored and they must resend their current key.

For html, css, javascript files etc, there was the slight concern that
the attacker may have defaced sites or made malicious files available
for download. While we have not found any instance of this so far, we
need the help of project owners to check this.


Duncan
(On behalf of the Haskell infrastructure team)



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Cabal license combinations

2011-02-10 Thread Duncan Coutts
On Wed, 2011-02-09 at 03:47 +1300, Vivian McPhail wrote:

  license: Foo, Bar
 
 
 Could this be computed automatically from the source files by Cabal?

I would not want to rely on that.


 Looking specifically at hmatrix, there are three kinds of modules
 
   i) bindings to GSL        GPL
   ii) bindings to LAPACK    BSD
   iii) pure Haskell         hmatrix author's choice
 
 1) Am I correct in thinking that even the bindings modules (the
 Haskell parts, not the C files) can be under any licence, FOO, chosen
 by the author, but the binary _linked_ to, say, GSL has to comply with
 FOO and GPL?

Yes. Code you write can be under any license you like. When someone uses
the combination then they have to comply with all the licenses (so
implicitly there's the constraint that the license you pick must be
compatible with the license of the other parts).

 2) If someone uses hmatrix but no GSL functions (hence there are no
 GSL functions in the linked binary) can they get away with not
 complying with the GSL requirement?

Not sure. In practice I don't think you can achieve that with our
toolchain.

Duncan


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [Cabal-devel] Cabal license combinations

2011-02-10 Thread Duncan Coutts
On Wed, 2011-02-09 at 18:35 -0500, Dan Knapp wrote:
 I haven't heard anyone mention this yet, and it's a biggie, so I
 guess I'd better de-lurk and explain it.  The issue is this:  There is
 a legal distinction between static and dynamic linking, or at least
 some licenses (the GPL is the one I'm aware of) believe that there is.
  In particular, they assert that you are legally creating a derived
 work if you statically link with their library, and that your
 library, even if it's just a thin bindings wrapper, must therefore
 comply by their license terms.  They make no such claim for dynamic
 linking.

I think you're in danger of adding confusion rather than removing it.
The LGPL provides some alternative terms for dynamic linking but you
still must comply with the license.

 Of course, Haskell on most platforms and default configurations
 links everything statically!  So I believe this means that you have to
 comply by the licenses of all your dependencies!

You always have to do that.

 Now, there's a difference between complying by those licenses and
 being under them yourself, but for example I believe this means that
 if we have a package named hs-save-the-whales that is under the GPL,
 and a front-end package hs-redeem-them-for-valuable-cash-prizes
 which makes use of the functionality in hs-save-the-whales, the
 front-end MUST be offered under the GPL, and, additionally, CANNOT be
 offered under BSD (I think).

No, that is wrong. There is no difference between GPL code depending on
BSD code and BSD code depending on GPL code. The direction of the
dependency is irrelevant. In both cases the end user/distributor must
comply with both licenses.

 I think it would be a very useful and valuable thing for Cabal to
 detect this situation and warn appropriately!  Contamination by
 undesired licenses is a serious flaw in the packaging of a package; it
 just happens to be a legal flaw rather than a technical one.  Indeed,
 I would argue that this is far more important than any hypothetical
 per-file licensing.

We are already working on a feature that will show the full set of
licenses that the end user must comply with (a patch has been submitted
and it's been through one round of review so far). In your example that
would mean you expect the set to be {BSD} but the tool will show you
that it is in fact {BSD, GPL}. You can then use that as your warning
that the set of licenses is not what you expected.

The tool will not claim that {BSD, GPL} is wrong, because it isn't! What
is wrong is expectations not matching reality, and hopefully that's what
the tool can help with.

Duncan


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Cabal license combinations

2011-02-10 Thread Duncan Coutts
On Thu, 2011-02-10 at 19:00 +1300, Vivian McPhail wrote:
   It seems then that a package should be the least restrictive
   combination of all the licenses in all the contained modules.
 
  Omit the words least restrictive and I think you are correct.
 
  To combine licences, just aggregate them.  There is no lattice of
  subsumption; no more or less restrictive ordering.
 
 
 I was thinking that the lattice was already flattened into a list of
 licences.  Currently the top-level package has a single licence field
 which is an arbitrary disjunctive choice.  Much better is a
 conjunctive aggregation which is just as or less restrictive than the
 arbitrary disjunctive choice.

Note that the license field in the .cabal file specifies the license *for
that package only*, not the license of dependencies, nor any notion of an
effective licence given the dependencies.

Each package specifies its own license, then given a dependency tree we
can calculate the set of licenses that users must simultaneously comply
with.
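
A minimal sketch of that calculation (the types here are illustrative,
not Cabal's own):

import qualified Data.Set as Set

data Pkg = Pkg
  { pkgName    :: String
  , pkgLicence :: String   -- the licence declared by this package alone
  , pkgDeps    :: [Pkg]
  }

-- The set of licences a user of the package must simultaneously comply
-- with: its own licence plus those of its whole dependency tree.
licencesToComplyWith :: Pkg -> Set.Set String
licencesToComplyWith p =
  Set.insert (pkgLicence p)
             (Set.unions (map licencesToComplyWith (pkgDeps p)))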

Duncan


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [Cabal-devel] Cabal license combinations

2011-02-10 Thread Duncan Coutts
On Thu, 2011-02-10 at 12:44 +0100, Stefan Kersten wrote:
 On 10.02.11 12:12, Duncan Coutts wrote:
  We are already working on a feature that will show the full set of
  licenses that the end user must comply with (a patch has been submitted
  and it's been through one round of review so far). In your example that
  would mean you expect the set to be {BSD} but the tool will show you
  that it is in fact {BSD, GPL}. You can then use that as your warning
  that the set of licenses is not what you expected.
 
 that would be quite useful for those who want to distribute executables!
 
 may i also suggest the following feature: collect all the copyright notices 
 and
 license files from each dependency (many licenses require those to be 
 displayed
 and distributed along with the executable) and optionally bundle them with the
 applicaton's data-files? i briefly looked into this, but couldn't find a way 
 to
 retrieve the installed License-File from ~/.cabal using Cabal. doing this by
 hand turns out to be quite tedious when there are many dependencies ...

Right, that feature was also in the patch. From experience I know that
this would indeed be very useful (I've also had to do it manually).

Duncan


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Cabal license combinations

2011-02-08 Thread Duncan Coutts
On Mon, 2011-02-07 at 14:42 +0000, Malcolm Wallace wrote:
  It seems then that a package should be the least restrictive  
  combination of all the licenses in all the contained modules.
 
 Omit the words least restrictive and I think you are correct.
 
 To combine licences, just aggregate them.  There is no lattice of  
 subsumption; no more or less restrictive ordering.  It's simple:  
 you must obey all of them.  Some aggregations introduce a  
 contradiction of terms, so you cannot legally aggregate those modules  
 without breaking some term.  But if the terms of the aggregated  
 licences are compatible rather than contradictory, then all is good.

Right, so the effect of per-file/mixed licenses could be achieved by
letting packages specify a list of licenses:

license: Foo, Bar

Meaning you may copy/distribute provided you comply with all these
licenses.

Note that this does not cover dual licensing, e.g. Foo or Bar at
distributor's choice.

Duncan


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Status update on {code, trac, projects, planet, community}.haskell.org

2011-02-07 Thread Duncan Coutts
On Wed, 2011-02-02 at 01:33 +0000, Duncan Coutts wrote:

 As you will be aware, some of the *.haskell.org websites have been down
 recently, specifically:
 
 code.haskell.org
 trac.haskell.org
 projects.haskell.org
 planet.haskell.org
 community.haskell.org

[...]

 We have not yet re-enabled user login accounts, nor re-enabled access to
 code repositories. The support ticket system is not yet enabled.
 
 We will send a further update when these are re-enabled, or procedures
 for people to re-enable them are finalised.

We have restored read-only access to the majority of projects on:
code.haskell.org      (code repositories)
projects.haskell.org  (project webspace)

A small number have been held back because they contain .js, .html
or .tar.gz snapshots that are not recorded in any repository. We will
ask project owners to check these before they are restored.

Duncan
(On behalf of the Haskell infrastructure team)


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Status update on {code, trac, projects, planet, community}.haskell.org

2011-02-03 Thread Duncan Coutts
On Thu, 2011-02-03 at 10:37 +0200, Roman Cheplyaka wrote:
 * Duncan Coutts duncan.cou...@googlemail.com [2011-02-02 01:33:22+0000]
  These are all hosted on the community server. The community server was
  hacked on the 26th January and we took it offline. The server was
  running an old version of debian that was no longer supported with
  security updates. (Ironically, two days previously the infrastructure
  team had been discussing the fact that nobody seemed to have any time
  available to do the planned migration to a new host). The hacker
  replaced sshd which we noticed because the ssh host signature changed
  and it started prompting for passwords (we use key-based rather than
  password based logins).
 
 Might be related:
 http://sourceforge.net/blog/sourceforge-attack-full-report/

Yes, it's quite possible.

One difference to note is that we use ssh key based logins, not
passwords. We suspect this saved us from the worst case scenarios.

Nevertheless, while we don't have to reset passwords, we are concerned
about the potential that the attacker replaced or added to users
~/.ssh/authorized_keys lists, which is why we have not yet re-enabled
user accounts.

We will try and provide as full a picture as we can when we're satisfied
we've got as much info and confidence as we're likely to get.

Duncan


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Status update on {code, trac, projects, planet, community}.haskell.org

2011-02-01 Thread Duncan Coutts
All,

As you will be aware, some of the *.haskell.org websites have been down
recently, specifically:

code.haskell.org
trac.haskell.org
projects.haskell.org
planet.haskell.org
community.haskell.org

These are all hosted on the community server. The community server was
hacked on the 26th January and we took it offline. The server was
running an old version of debian that was no longer supported with
security updates. (Ironically, two days previously the infrastructure
team had been discussing the fact that nobody seemed to have any time
available to do the planned migration to a new host). The hacker
replaced sshd which we noticed because the ssh host signature changed
and it started prompting for passwords (we use key-based rather than
password based logins).

MSR kindly allowed Ian to take time off from the GHC 7.0.2 release to
work on migrating the services to the new host (that had previously been
partially prepared). Thanks to Ian's efforts and some help from other
members of the infrastructure team, the migration is now nearly
complete.

planet.haskell.org should now be working, along with email, project
mailing lists and trac instances.

We have not yet re-enabled user login accounts, nor re-enabled access to
code repositories. The support ticket system is not yet enabled.

We will send a further update when these are re-enabled, or procedures
for people to re-enable them are finalised.

On a more positive note, the new VM that we are using is a lot more
powerful than the old one, in particular about 5x more memory. As many
of you will have experienced, services on the old server tended to go
AWOL which was primarily due to running up against memory limits,
causing services to get killed off. In the case of the web server it was
mainly due to having to use a very small number of concurrent
connections, again to minimise memory use. So all in all we expect the
new server to be a good deal more reliable.


Duncan
(On behalf of the Haskell infrastructure team)


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Cabal ignores ld-options? - 2011 remix

2011-01-31 Thread Duncan Coutts
On Mon, 2011-01-31 at 01:13 -0500, Ryan Newton wrote:
 Ok, I've made some progress in the direction Duncan suggested.
 
 I can filter out the extra library before the postConf hook gets it.
 And calling make from the hooks is pretty easy.
 
 I've got a hack working that does allow full build/link/install given
 a hardcoded path (e.g. hack).  It works by passing -Wl,-rpath=/opt/...
 directly through the .cabal file (ld-options).  I need to un-hardcode
 that directory but I can't figure out how to achieve the same effect
 as that ld-options from my Setup.hs (I tried adding buildProgramArgs
 in the buildHook but no luck yet).

Just modify the package description instead, e.g. in the configure step.
The build program args mechanism is intended for users, not for package
authors.

(It's so that cabal build --$PROG-options= works, and that's not
something you're supposed to interfere with. After all, users are in
control and can always add whatever options they want by adding wrapper
scripts etc; this just makes it more convenient.)

 There's also the question of how to find the install dir so that rpath
 can be set to it.  Unfortunately, I haven't been able to dig that out
 yet.  When I try to use absoluteInstallDirs via something like below,
 I get this error: setup: internal error InstallDirs.libsubdir
 
   putStrLn $ "Install dirs: " ++ show (absoluteInstallDirs desc linfo
 NoCopyDest)

You're doing the right thing, just don't look at the libsubdir or
datasubdir after calling absoluteInstallDirs, because those do not make
sense after making the paths absolute (they're inherently relative).
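
For illustration, a minimal sketch of how that call can be used safely
(only the absolute fields such as libdir are inspected, never libsubdir or
datasubdir; the function name installedLibDir is just a placeholder):

  import Distribution.PackageDescription (PackageDescription)
  import Distribution.Simple.LocalBuildInfo (LocalBuildInfo, absoluteInstallDirs)
  import Distribution.Simple.InstallDirs (InstallDirs (..), CopyDest (NoCopyDest))

  -- The absolute library install directory, e.g. usable as an rpath.
  installedLibDir :: PackageDescription -> LocalBuildInfo -> FilePath
  installedLibDir desc linfo = libdir (absoluteInstallDirs desc linfo NoCopyDest)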

The reason it's labelled as an internal error is because if cabal were
ever to call those, then it would be an internal error. When you call it
however, it's your fault ;-)

Duncan


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Cabal ignores ld-options? - 2011 remix

2011-01-30 Thread Duncan Coutts
On Sun, 2011-01-30 at 05:20 -0500, Ryan Newton wrote:

 
 I am trying to link a .a file built by a separate makefile into my
 library.  GHC has no problem with it, but I need to convince cabal to
 do it for the package to be hackage-friendly.  There's a thread about
 this back in 2007:
 
http://www.haskell.org/pipermail/cabal-devel/2007-April/000507.html
 
 Is there a method for this now, or is the state of affairs the same as
 before?

Having a Haskell package link against an external C library is
straightforward, you just list it in the extra-libraries field. The
external library must already be installed on the system however.

Bundling a copy of the external library with a cabal package is not
quite so easy. You will have to write code in the Setup.hs to call the
separate makefile during the build (and perhaps extra steps at configure
time too).

Your best bet is probably to use the simple build system's hooks API.
You can list the library in the extra-libraries field in the .cabal file
but you will also need to modify the check for external libraries that
is done in the postConf hook (since the separate .a library will not be
available at configure time): modify the package description that gets
passed to the postConf hook to exclude the library.
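
To make that concrete, here is a minimal Setup.hs sketch along those lines
(the library name "mylib" and the Makefile location ./cbits are placeholders
and not from the original question; error handling is omitted):

  import Distribution.Simple
  import Distribution.PackageDescription
  import System.Process (rawSystem)

  main :: IO ()
  main = defaultMainWithHooks simpleUserHooks
    { postConf  = \args flags pkg lbi ->
        -- run the default configure-time checks, but against a package
        -- description with the not-yet-built library filtered out
        postConf simpleUserHooks args flags (dropLib "mylib" pkg) lbi
    , buildHook = \pkg lbi hooks flags -> do
        -- build the bundled C library with its own Makefile first
        _ <- rawSystem "make" ["-C", "cbits"]
        buildHook simpleUserHooks pkg lbi hooks flags
    }

  -- Remove one name from the extra-libraries fields throughout the package.
  dropLib :: String -> PackageDescription -> PackageDescription
  dropLib name pkg = pkg
    { library     = fmap fixLib (library pkg)
    , executables = map fixExe (executables pkg)
    }
    where
      fixLib l = l { libBuildInfo = strip (libBuildInfo l) }
      fixExe e = e { buildInfo    = strip (buildInfo e) }
      strip bi = bi { extraLibs   = filter (/= name) (extraLibs bi) }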

Duncan


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Lazy cons, Stream-Fusion style?

2011-01-02 Thread Duncan Coutts
On 2 January 2011 13:35, Stephen Tetley stephen.tet...@gmail.com wrote:
 Hello all

 Can a lazy cons be implemented for (infinite) Streams in the
 Stream-Fusion style?

In the standard stream fusion style, all stream operations are strict
in the stream itself (that is, the pair of stepper function and
initial state, not strict in the unfolded sequence or elements). Thus
it is not possible to write things like:

 bad_ones :: Stream Int
 bad_ones = s where s = 1 `S.cons` s

I'm not sure if making the stream operations lazy in the stream
argument (e.g. using an irrefutable ~pattern) would interfere with the
usual stream optimisations. It's an interesting question. The usual
optimisations rely on having access to the definitions of input
streams and will not apply to a circular definition such as the above.

This strictness issue isn't a semantic problem when applying
stream fusion to data structures like lists. A circular definition of
a list will simply not fuse (there is nowhere to apply the
stream/unstream rule).
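
For reference, a minimal sketch of the standard stream representation and
of cons, just to make the strictness point concrete (a simplification, not
the actual stream-fusion library source):

  {-# LANGUAGE ExistentialQuantification #-}

  data Step s a = Done | Skip s | Yield a s

  data Stream a = forall s. Stream (s -> Step s a) s

  -- 'cons' pattern matches on the Stream constructor, so it is strict in
  -- the stream argument (the stepper/state pair), though lazy in the
  -- unfolded elements.
  cons :: a -> Stream a -> Stream a
  cons x (Stream next s0) = Stream next' Nothing
    where
      next' Nothing  = Yield x (Just s0)
      next' (Just s) = case next s of
                         Done       -> Done
                         Skip s'    -> Skip (Just s')
                         Yield a s' -> Yield a (Just s')

  -- Hence a circular definition like  s = 1 `cons` s  diverges: forcing s
  -- forces the pattern match on s itself.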

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] cabal dependency on code repositories.

2010-12-19 Thread Duncan Coutts
On 19 December 2010 17:44, Greg Weber g...@gregweber.info wrote:
 Michael Snoyman and I were discussing the need for beta releases of Yesod
 and he encourage me to post this to the cafe. Beta releases could be built
 into the hackage system. However, this can be viewed as a more general
 problem of distributing multiple versions of code (stable vs. experimental,
 forks, etc). This is a problem that has been largely solved by version
 control, but in this instance just needs some integration with an installer
 system.

The first system I want to add is to allow cabal-install to install
from tarballs identified by a URL, e.g.

cabal install http://example.org/~me/foo-1.0.tar.gz


As you know, .cabal files can specify source repos so in principle one
could write a tool to install from a source repo. In practice there
are a number of things to solve involving how you manage local caches
of remote repos etc. The easiest thing to get working first is
installing from tarball URLs, but I would welcome patches for the
source repo feature, in particular a specific plan for how users
interact with the system.

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: build issue: * Missing header file: HsBase.h

2010-12-16 Thread Duncan Coutts
On 16 December 2010 10:02, Simon Marlow marlo...@gmail.com wrote:

 ghc-cabal: Missing dependency on a foreign library:
 * Missing header file: HsBase.h
 This problem can usually be solved by installing the system package that
 provides this library (you may need the -dev version). If the library is
 already installed but in a non-standard location then you can use the
 flags
 --extra-include-dirs= and --extra-lib-dirs= to specify where it is.


 The problem is HsBase.h is where it is on my reference build tree on
 workstation:

 -bash-4.0$ find . -name 'HsBase.h'
 ./libraries/base/include/HsBase.h


 I suppose some external library might be missing, but here the error is
 quite misleading and I cannot find which one might be the culprit of
 this error.

 Do you have any idea what to install in order to proceed?

 I don't know what's going on here, I'm afraid.  Looks like Cabal tried to
 find HsBase.h and couldn't find it - so either it wasn't there (but you say
 it was), or Cabal was looking in the wrong place.  Maybe follow up the
 latter hypothesis?

Cabal will report this error when it cannot compile HsBase.h; that
usually means it is missing, but it's also possible that something
just does not compile. This is like the check that ./configure scripts
do. It's rather hard, from the exit code of gcc, to work out whether it's
genuinely missing or just fails to compile (though we could try doing the
cpp and cc phases separately).

One can run with -v3 to see the error that gcc reports.

Duncan

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Cabal message problem.

2010-12-16 Thread Duncan Coutts
On 16 December 2010 13:38, Daniel Fischer
daniel.is.fisc...@googlemail.com wrote:

 The problem is that without upper bounds, things will break a lot when
 packages undergo API changes, but probably more often things will also work
 with the new API. So with upper bounds, you prevent breakage at the cost of
 preventing builds which would work.

It's a tradeoff.

One way to look at it is to say that upper bounds are just bad because
there's a chance it might work if you were not using the bit of the
API that changed.

The other is to look at it from the point of users of the package and
what kind of error messages they get. If there's no upper bound they
get a random compile failure and they don't know what is wrong, who is
to blame or how to fix it. If there is an upper bound then we at least
have the chance to tell the users that the package does not work (or
at least has not been tested by anyone) with that version of a
dependency. We also have the possibility of picking different deps
that are known to work. Yes, this stuff depends on having a reasonably
clever dependency resolution algorithm, but I think we can improve in
that area; there are plenty of ideas floating about, but less time to
implement them.

 Maybe a flag "ignore upper bounds and try with the latest" for cabal would
 be a solution. Would that be hard to implement or easy?

That suggestion has come up quite a few times. I think it's probably a
good idea.

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Conditional compilation for different versions of GHC?

2010-12-13 Thread Duncan Coutts
On 1 December 2010 03:54, Michael Snoyman mich...@snoyman.com wrote:
 On Wed, Dec 1, 2010 at 4:07 AM, Thomas Schilling
 nomin...@googlemail.com wrote:
 I think a nicer way to solve that issue is to use Cabal's MIN_VERSION macros.

  1. Add CPP to your extensions.  This will cause cabal to
 auto-generate a file with MIN_VERSION_pkg macros for each pkg in
 build-depends.

  2. GHC 6.12.* comes with template-haskell 2.4, so to test for that use:

 #if MIN_VERSION_template_haskell(2,4,0)
  .. ghc-6.12.* code here.
 #endif

 This should make it more transparent to the user.

 Also, I think

 #if GHC7
 ...
 #endif

 is more transparent than a check on template-haskell.

Note that in many cases it is inappropriate to use the ghc version as
a proxy for a library version since most libraries are upgradable
independently of GHC.
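
For instance, a sketch of testing the library version directly, using the
MIN_VERSION_<pkg> macros that Cabal generates for each build-depends entry
when the CPP extension is enabled (module name and comments are placeholders):

  {-# LANGUAGE CPP #-}
  module Compat where

  #if MIN_VERSION_template_haskell(2,4,0)
  -- code for template-haskell >= 2.4 (the version shipped with ghc-6.12.*)
  #else
  -- code for older template-haskell versions
  #endif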

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] gtk2hs and hmake websites

2010-12-12 Thread Duncan Coutts
On 12 December 2010 20:55, Florian Weimer f...@deneb.enyo.de wrote:

 So the net effect will be a migration from haskell.org to
 haskell.cs.yale.edu, and not to community.haskell.org.  I think that's
 pretty odd, but people will have their reasons for doing it this way.

In addition to what Thomas says, note that haskell.cs.yale.edu will be
shut down in approximately a month (there is a full filesystem backup
in case people have not moved things by then).

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell] Please help me to reconstruct the Yarrow website! Re: New haskell.org server

2010-12-11 Thread Duncan Coutts
On 11 December 2010 14:52, Frank Rosemeier fr...@rosemeier.info wrote:

 Is there any chance to obtain an account for the old internet address for a
 sufficient time?

You can ask for a redirect. That is, you can ask the haskell.org
admins to make the URL http://haskell.org/yarrow/ redirect users to a
new URL (wherever you move the new site to, e.g. somewhere on
community.haskell.org), so there is no need for old links to be broken.

To make such a request, see: http://community.haskell.org/admin/

Of course that does not help with the issue of moving stuff over, but
if it's just static content then that bit is quite easy.

Duncan

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell-cafe] Offer to mirror Hackage

2010-12-09 Thread Duncan Coutts
On 9 December 2010 21:04, Richard O'Keefe o...@cs.otago.ac.nz wrote:

 On 10/12/2010, at 12:18 AM, Markus Läll wrote:

 My take on the issue is that we should make it possible to easily mirror 
 hackage (what the OP asked for), so that people could use it when they 
 wanted to, and have a list of the mirrors on the wiki. This way those who 
 are interested can use them. Like when the mirror is faster/closer to them 
 or to help out when hackage is temporarily down. Those who need the security 
 can choose not to use mirrors, or make their own (private), or develop a 
 secure scheme, when it doesn't exist yet.

 Have I misunderstood something?
 I thought "X is a mirror of Y" meant X would be a read-only replica of Y,
 with some sort of protocol between X and Y to keep X up to date.
 As long as the material from Y replicated at X is *supposed* to be
 publicly available, I don't see a security problem here.  Only Y accepts
 updates from outside, and it continues to do whatever authentication it
 would do without a mirror.  The mirror X would *not* accept updates.

That's certainly what we've been planning on, that anyone can run a
mirror, no permissions needed. The issue people have raised is what
about having public mirrors that are used automatically or
semi-automatically by clients.

The suggestion about DNS round robin is transparent to clients but
requires all the mirrors to be masters, or to have some forwarding
system. Any transparent system also needs trust.

My opinion is that at this stage it is not really worth doing anything
complicated. We do not yet have a bandwidth problem. Once there are
more (unprivileged) public and private mirrors then temporary
downtime on the main server is less problematic. Eventually we'll get
a bandwidth problem but I think we've got a fair bit of time to
prepare, and in the mean time we can get simple unprivileged mirroring
working. That is mostly an issue of specifications and tools. The spec
for package archives is not as clear or as good as we'd like. We've
been discussing it recently on the cabal-devel mailing list.

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] hackage dependencies for cabal setup scripts

2010-12-08 Thread Duncan Coutts
On 8 December 2010 17:02, Sönke Hahn sh...@cs.tu-berlin.de wrote:
 Hi!

 If you write a cabal setup script with user hooks, is there a way to tell
 cabal-install that the setup script itself depends on some package from
 hackage? (The dependency would be cabal-macosx in my case.)

No, the Cabal spec has no support for that. It is assumed that
Setup.hs scripts have minimal dependencies, since the original idea
was that these would be the first user interface, so they must
compile.

My plan is to move away from Setup.hs being the primary user interface
and that would in principle make it possible for Setup.hs scripts to
have other dependencies, however we will need to extend the Cabal
specification and .cabal format to enable it.

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Offer to mirror Hackage

2010-12-07 Thread Duncan Coutts
On 4 December 2010 16:31, Dan Knapp dan...@gmail.com wrote:
 With Hackage down, now seemed like a good time to push this issue
 again.  It's such an important site to us that it's really rather a
 shame there are no mirrors of it.  I have a personal-and-business
 server in a data center in Newark, with a fair chunk of bandwidth,
 which I'd like to offer for a permanent mirror.  Is there interest in
 this?  Who do I need to talk to for it to happen?

At the recent hackathon we were working on hackage mirroring.

By this we do not mean just using rsync to sync the current
combination of filestore + cgi programs that make up the current
hackage implementation. We want to make it easy to set up dumb or
smart package archives and to do nearly-live mirroring.

We have a prototype hackage-mirror client that can poll two servers and
copy packages from one instance to the other. This assumes the target
is a smart mirror (e.g. an instance of the new hackage-server impl).
We also need to be able to target local dumb mirrors that are just
passive collections of files.

 Strategy-wise, I think the best approach is round-robin DNS, since
 that's transparent to the end user - everything would still appear at
 the URL it's at now, but behind-the-scenes magic would let things keep
 working when one or the other site is down.  I haven't personally set
 up such a system before but I'm willing to take on the burden of
 figuring it out.

This is a somewhat orthogonal issue since I think you're talking about
multiple master smart servers that can accept uploads.

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Cabal, xmonad-contrib, X11-xft, pkg-config ... questioning my sanity

2010-12-06 Thread Duncan Coutts
On 6 December 2010 11:02, Jacek Generowicz jacek.generow...@cern.ch wrote:

 It appears that a program which does not exist on my computer seems to
 insist on a package version which does not exist in my universe, I'm
 starting to wonder whether I have lost my marbles. Could some kind soul
 please point me in some sensible direction?

 cabal install xmonad-contrib
 Resolving dependencies...
 Configuring X11-xft-0.3...
 cabal: pkg-config version >=0.9.0 is required but it could not be found.

 X11-xft version 0.9.0 doesn't even exist, as far a I can tell

In the development version of cabal we have changed that error message
to try and make it clear that it is looking for a program called
pkg-config, not a Haskell package or a C lib.

cabal: The program pkg-config version >=0.9.0 is required but it could not be
found.

Do you think that message would have helped avoid your confusion? Is
there an alternative message that would have been better?

pkg-config [1] is a tool used by C libraries to describe things like
dependencies on other C libs and what C compiler flags are needed to
use the packages.

The Haskell package X11-xft is a binding to the C library xft. On most
modern unix systems the xft C library provides meta-data that pkg-config
can use. For example, on my system I can run:

$ pkg-config --cflags xft
-I/usr/include/freetype2

$ pkg-config --libs xft
-lXft -lXrender -lfontconfig -lfreetype -lX11

Cabal does exactly the same thing, to work out what flags are needed
to use the xft C library.

The problem on your system is that the pkg-config program is not
installed. Perhaps on OSX it comes with Xcode, I'm not sure. It may
well also be the case that you don't have the development files for
X11 or xft installed either (e.g. C header files). Hopefully some OSX
person can advise you on what you need to install to do X11
development on OSX.

[1]: http://pkg-config.freedesktop.org/wiki/

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Cabal, xmonad-contrib, X11-xft, pkg-config ... questioning my sanity

2010-12-06 Thread Duncan Coutts
On 6 December 2010 14:48, Jacek Generowicz jacek.generow...@cern.ch wrote:

 Another thing that would probably have made it easier to understand: the
 message is strictly correct, but is *too* specific: it talks about
 pkg-config version >=0.9.0 not being found, while the problem was that *no
 version at all* was present.

 I would understand if making the error message change according to such
 subtle distinctions would be more effort that it is worth.

Yes. The version is there so that when you do go and install it, you
will know what version to look for, so you don't end up in the
situation where you've gone to all the effort of installing
pkg-config, only to find it now complains that it found it, but it was
too old.

Perhaps something like this would be better:

cabal: The program 'pkg-config' is required but it could not be found
on the system (version 0.9.0 or later of pkg-config is required).


Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] build-type: configure on windows

2010-11-11 Thread Duncan Coutts
On 11 November 2010 11:23, Nils Schweinsberg m...@n-sch.de wrote:
Configuring pcre-light-0.4...
cabal: Missing dependency on a foreign library:
* Missing C library: pcre

On 11 November 2010 12:41, Stephen Tetley stephen.tet...@gmail.com wrote:
 Do you have the headers installed as well as the dlls?

With this error message, it really is the C libs. If cabal cannot find
the headers then it says so explicitly.

Cabal does the check by making a trivial .c program and linking it
with (approximately):

gcc main.o -lpcre

If that fails then cabal declares that it cannot find the C lib. It's
possible that the lib is present but that there is some other linking
error; it's a bit tricky to distinguish without looking at the error
messages from ld.exe.

So the first thing to try is the above test, or run cabal configure
-v3 and see what error message ld reports.

Duncan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Wadler space leak

2010-11-08 Thread Duncan Coutts
On 8 November 2010 13:28, Simon Marlow marlo...@gmail.com wrote:

 There's another approach in Jan Sparud's paper here:

 http://portal.acm.org/citation.cfm?id=165196

 although it's not clear that this interacts very well with inlining either,
 and it has a suspicious-looking side-effecting operation.  It also looks
 like it creates a circular reference between the thunk and the selectors,
 which might hinder optimisations, and would probably also make things slower
 (by adding extra free variables to the thunk).

This proposal is mentioned favourably by Jörgen Gustavsson and David Sands
in [1] (see section 6, case study 6). They mention that there is a
formalisation in Gustavsson's thesis [2]. That may say something about
inlining, since that's just the kind of transformation they'd want to
show is a space improvement.

[1]: Possibilities and Limitations of Call-by-Need Space Improvement (2001)
  http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.8.4097

[2]: Space-Safe Transformations and Usage Analysis for Call-by-Need
Languages (2001)
  (which I cannot immediately find online)

Duncan
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Re: How to make cabal pass flags to happy?

2010-10-20 Thread Duncan Coutts
On Wed, 2010-10-20 at 10:37 +0200, Niklas Broberg wrote:

  I think the right solution is for the happy source file to specify
  what kind of grammar it is / should be produced. Yes, that would mean
  modifying happy.
 
 Hmm, I agree with you in this particular case, regarding the grammar,
 since it really only makes sense for any given happy grammar to
 generate either kind of parser. But such a solution still wouldn't
 answer the general question about passing flags to preprocessors.
 Certainly not all such use cases are best fixed by pushing the flags
 into the pre-processed files.

I like to see it as an issue of the design of the whole system, not just
adapting cabal to existing half-thought-out tools.

With preprocessor flags the first question is who picks the flags. If it
is the person doing the build then we handle that already. If it is the
package author then it needs more careful analysis. Yes we could go and
add a foo-options for every single pre-processor but that's not
obviously a good idea.

Yes you can have full control by writing code in the Setup.hs (and it
could be easier to do that) but again it's not a very nice solution in
general. I think we have a habit of just adding flags to such tools and
saying "let the build system handle it" without really thinking
carefully about where the right place to specify things should be.

My view is that for the most part, source modules ought to be
self-descriptive so that they can be built with zero configuration. For
basic Haskell modules we do this already: we have standard conventions
for file extensions, the file -> module name mapping, language pragmas,
etc. This is what allows ghci / ghc --make to work in most cases (and in
principle should allow a future cabal to work even without a .cabal file
for many simple projects -- like hmake did for years).

For preprocessors we have some conventions; we mostly use file name
extensions. For more detailed variations I think it does make sense to
stick it in the source module. The harder cases are things like
preprocessors which genuinely have multiple modes. I don't mean things
like happy's GLR since as I understand it you really design the grammar
specifically for the approach. There are other tools where from the same
source you can and do generate multiple outputs. There it is harder to
establish a convention that allows for zero-configuration.

Similarly, documentation is a mess because there are few established
conventions, so much more grungy configuration is needed.

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: How to make cabal pass flags to happy?

2010-10-18 Thread Duncan Coutts
On Sat, 2010-10-16 at 16:44 +0100, Thomas Schilling wrote:

 I'm CC'ing Duncan, in case he has a better idea.

 On 13 October 2010 19:44, Niklas Broberg niklas.brob...@gmail.com wrote:

  What I want specifically is to have happy produce a GLR parser from my
  .ly file, and I want this to happen during 'cabal install'. Which in
  turn means I want cabal to pass the --glr flag to happy during
  setup. My best guess is that I might want to use 'ppHappy' [1], or
  something in the vicinity, but there's no documentation for the
  combinator and it's far from obvious how to pass arguments to it.

I think the right solution is for the happy source file to specify
what kind of grammar it is / should be produced. Yes, that would mean
modifying happy.

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] allocation for pure FFI functions

2010-10-15 Thread Duncan Coutts
On Thu, 2010-10-14 at 17:45 +, Johannes Waldmann wrote:
 Hi. I wonder how to do the following properly.
 
 I have one (large) C  type, let's call it T,
 and I want to sell it as an abstract type in Haskell.
 
 I want to use C functions as if they were of type T -> T  
 (pure function, returns a modified copy of the input)
 and the question is, how to do the memory allocation for that,
 in particular, how to avoid  IO  showing up 
 in the (visible) types on the Haskell side:
 
 I don't want IO because I don't want to declare some artificial
 order of execution - instead I want lazy evaluation.
 E.g., I might have some Haskell record with a T component
 which may or may not be evaluated (accessed) at all.

It is exactly for this purpose that the Haskell FFI library includes
unsafePerformIO. This is basically *the* legitimate use case for it, so
you don't need to feel bad about it.

The FFI spec says:

Sometimes an external entity is a pure function, except that it
passes arguments and/or results via pointers. To permit the
packaging of such entities as pure functions, Foreign provides
the following primitive: 

unsafePerformIO :: IO a -> a

http://www.cse.unsw.edu.au/~chak/haskell/ffi/ffi/ffise5.html#x8-240005.1
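
As a minimal sketch of that pattern (the type T, its size, and the C
function c_transform below are made-up placeholders, not from the original
post; the C side is assumed to read one buffer and fill a freshly allocated
output buffer, with no other side effects):

  {-# LANGUAGE ForeignFunctionInterface #-}
  module T (T, transform) where

  import Foreign
  import System.IO.Unsafe (unsafePerformIO)

  newtype T = T (ForeignPtr ())   -- abstract on the Haskell side

  tSize :: Int
  tSize = 1024                    -- assumed size of the C struct, in bytes

  -- assumed C prototype: void c_transform(const void *in, void *out);
  foreign import ccall unsafe "c_transform"
    c_transform :: Ptr () -> Ptr () -> IO ()

  -- Pure wrapper: the allocation and the call are hidden behind
  -- unsafePerformIO, so results are only computed if and when demanded.
  transform :: T -> T
  transform (T inFP) = unsafePerformIO $
    withForeignPtr inFP $ \inP -> do
      outFP <- mallocForeignPtrBytes tSize
      withForeignPtr outFP $ \outP -> c_transform inP outP
      return (T outFP)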


Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Curious why cabal upgrade parsec not installing latest version

2010-09-17 Thread Duncan Coutts
On 17 September 2010 04:44, Peter Schmitz ps.hask...@gmail.com wrote:
 This gets a little hilarious (but better to laugh than cry).

 Well, I decided to try Parsec version 3 (i.e., 3.1.0) after all

[...]

 I did cabal upgrade parsec, which went great.

[...]

 (This is where I began laughing instead of crying  :-)


For the benefit of other haskell-cafe readers I should note that
problems like this are why in the next release of cabal-install I have
removed the upgrade command entirely. It does not do what people
expect it to do. For the benefit of people used to using upgrade,
cabal now tells them to use install instead.

Duncan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A new cabal odissey: cabal-1.8 breaking its own neck by updating its dependencies

2010-09-15 Thread Duncan Coutts
On 13 September 2010 20:54, Paolo Giarrusso p.giarru...@gmail.com wrote:
 On Sun, Sep 12, 2010 at 20:46, Tillmann Rendel
 ren...@mathematik.uni-marburg.de wrote:
 Paolo Giarrusso wrote:
 in a tracker entry you linked to,
 http://hackage.haskell.org/trac/hackage/ticket/704, duncan argues that
 we also want to be able to do things like linking multiple versions
 of a Haskell package into a single application.
 [snip]
 Even with the technical ability to link all of foo, bar, pair-1 and pair-2
 together, I don't see how this program could be reasonably compiled.
 Therefore, I think that the notion of consistent install plans is relevant
 semantically, not just to work around some deficiency in the linking system.

 Your case is valid, but OTOH there are other cases to support: if I link
 together two programs which use _internally_ different versions of
 regex packages, cabal should support that - and here I guess we agree.

Paolo,

If I've understood correctly, in this series of emails you're pointing
out two problems:

1. upgrading packages can break dependencies (and Cabal does not do a
lot to prevent/avoid this)

2. cabal ought to allow using multiple versions of a single package in
more circumstances than it does now


Both of these issues are known to the Cabal hackers (i.e. me and a few
other people). I'll share my views on the problem and the solution.

1. This is certainly a problem. The current situation is not awful but
it is a bit annoying sometimes. We do now accurately track when
packages get broken by upgrading dependencies so it should not be
possible to get segfaults by linking incompatible ABIs.

My preferred solution is to follow the example of Nix and use a
persistent package store. Then installing new packages (which includes
what people think of as upgrading) become non-destructive operations:
no existing packages would be broken by an upgrade. It would be
necessary to allow installing multiple instances of the same version
of a package.

If we do not allow multiple instances of a package then breaking
things during an upgrade will always remain a possibility. We could
work harder to avoid breaking things, or to try rebuilding things that
would become broken but it could never be a 100% solution.


2. This is a problem of information and optimistic or pessimistic
assumptions. Technically there is no problem with typechecking or
linking in the presence of multiple versions of a package. If we have
a type Foo from package foo-1.0 then that is a different type to Foo
from package foo-1.1. GHC knows this.

So if for example a package uses regex or QC privately then other
parts of the same program (e.g. different libs) can also use different
versions of the same packages. There are other examples of course
where types from some common package get used in interfaces (e.g.
ByteString or Text). In these cases it is essential that the same
version of the package be used on both sides of the interface
otherwise we will get a type error because text-0.7:Data.Text.Text
does not unify with text-0.8:Data.Text.Text.

The problem for the package manager (i.e. cabal) is knowing which of
the two above scenarios apply for each dependency and thus whether
multiple versions of that dependency should be allowed or not.
Currently cabal does not have any information whatsoever to make that
distinction so we have to make the conservative assumption. If for
example we knew that particular dependencies were private
dependencies then we would have enough information to do a better job
in very many of the common examples.

My preference here is for adding a new field, build-depends-private
(or some such similar name) and to encourage packages to distinguish
between their public/visible dependencies and their private/invisible
deps.

Duncan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hackage package download count

2010-09-09 Thread Duncan Coutts
On 9 September 2010 14:56, Tony Hannan tonyhann...@gmail.com wrote:
 Is there anywhere we can see the number of download for a particular
 package, especially ones you maintain yourself?

And the new hackage server implementation maintains download counts.

See the server test instance here: http://sparky.haskell.org:8080/

Duncan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A cabal odyssey

2010-08-20 Thread Duncan Coutts
On 19 August 2010 21:15, Andrew Coppin andrewcop...@btinternet.com wrote:
 Daniel Fischer wrote:

 Yes, when cabal runs haddock on a package, it generates a comprehensive
 index if none is present or expands it with the new docs.
 Quite cool that :)


 It's something I've always _wanted_ Cabal to do, but this is the first time
 I've ever seen it happen. I don't know what particularly I did to make this
 happen, and now it seems to be gone, so...

If you have documentation enabled then it is recreated every time you
install a package.

(Though only for user packages, since we have not yet worked out
somewhere sensible we can stick a global index).

 I gathered. Apparently there's no cabal uninstall or even merely a cabal
 unregister yet... (There must surely be a ticket for that already?)

Yup, there's a ticket for it.

 Well, the worst thing that can happen is I get no documentation, which isn't
 exactly a disaster. I'm just wondering how these files got created to start
 with; adding more packages doesn't appear to recreate it. I suppose I could
 try reinstalling all of them...

If you have documentation enabled (ie use --enable-documentation on
the command line, or have documentation: True in the ~/.cabal/config
file) then docs get created for each package you install, and the
haddock index/contents of all installed docs gets updated.

 Then again, all the links were broken anyway. They all had paths like
 C:\Program Files\Haskell\...whatever, and Mozilla apparently expects
 them to say file://C:/Program Files/Haskell/...whatever. It kept
 whining that the C:\ protocol is not registered

 Apparently, haddock links to absolute paths. That's of course not the
 right thing to do if the path begins with an invalid protocol specifier
 (C:). And it's annoying if you want to move the docs.


 I imagine it's so that each package can be placed in a completely arbitrary
 place in the filesystem, and the links still work. I'd actually be surprised
 if these URLs work on Linux either; they don't appear to follow the requisite
 web standards.

You may be right, or perhaps URL syntax is just liberal enough to let
unix style paths work. It's still a bug of course that we're not using
the file:// protocol which makes it not work on windows. I filed it
here:
http://hackage.haskell.org/trac/hackage/ticket/516#comment:6

Duncan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Creating binary distributions with Cabal

2010-08-20 Thread Duncan Coutts
On 20 August 2010 10:18, Christopher Done chrisd...@googlemail.com wrote:
 Does Cabal have a way to produce binary distributions from a package?

No but it's not too hard to do.

If you actually want an RPM or a DEB etc, then look into the cabal2rpm
etc tools; they help automate the process.

If you want a generic binary then:

You first prepare an image by using:

cabal copy --destdir=./tmp/image/

Now you tar up the image directory, unpack it on the target.

Note that the prefix/paths you specified at configure time need to be
the same on the target machine. There is no support yet on unix for
relocatable / prefix independent binaries. In particular it needs the
paths to be correct to be able to find data files.

 I need to create a binary distribution of my project which does not
 depend on GHC or any development tools. The package should include all
 required data files and configuration files. I've got the latter
 covered with Data-Files and getDataFileName, but not sure about what
 to do regarding configuration files -- read/write to
 $HOME/.myproject/config or $HOME/.myprojectrc, etc., or what?

Right, for config files you should just look in a per-user or global
location. You can use a data file to store a default so that the
program can work with no config file.

 I'm specifically targeting Redhat because that's the production
 server, but I'm wondering if there is or will be a way to agnostically
 access data files and configuration files without having to think
 about what OS it will be running on, in the same way I can use sockets
 or file access without worrying about the particular OS.

 Something like cabal sdist --binary --rpm/deb/arch/win/etc?

We might eventually add something for generic binaries but we will
leave specific distros and packaging systems to specialised tools.

 How does everyone else package up their Haskell programs for binary
 distribution?

As I mentioned there are also tools like cabal2rpm that help build
binary packages for specific distros.

Duncan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] cabal-install profiling libraries

2010-08-20 Thread Duncan Coutts
On 20 August 2010 14:20, Johannes Waldmann waldm...@imn.htwk-leipzig.de wrote:
 Here's another instance of the machine (*) telling me what to do,
 instead of doing it (or am I missing something):

 I have a large set of cabal packages installed with ghc.
 Then suddenly I need some package Foo with profiling.
 So I switch to library-profiling: True  in  my .cabal/config,
 and then cabal install Foo -  failing with the message:

  Perhaps you haven't installed the profiling libraries for package `Bar'

 for some package Bar that Foo depends upon. - Dear Cabal: Yes!
 I know that I haven't installed them! I want you to install them for me!
 But it isn't listening ...

 Of course you know that I have the highest respect for the work
 of the cabal authors. I'm just suggesting that the above feature
 (auto-re-install dependencies) would be helpful.

As usual the problem is lack of developer time to implement all these
nice features we all want.

http://hackage.haskell.org/trac/hackage/ticket/282

 Perhaps it's already there?
 If not - would it be hard to specify? To build? Or would it have
 bad consequences?

From the ticket:

    Our current thinking on this issue is that we should track each way
    separately. That is, we should register profiling, vanilla and any
    other ways with ghc-pkg as independent package instances. This needs
    coordination with ghc since it means a change to the package
    registration information to include the way.

The idea is that once we track each way separately then Cabal will
know if the profiling way is installed or not and we can install the
profiling instance if it is missing without messing up any existing
instances.

 Is it cabal upgrade --reinstall? But that was deprecated?

Yes, upgrade is deprecated, use install instead. (The meaning /
behaviour of upgrade just sowed confusion.)

 Here I really want reinstall with exactly the same versions.

Use: cabal install --reinstall foo-x.y.z

 Is it the problem that their sources may have vanished, meanwhile?
 Could it be solved by having cabal install storing a copy of
 the source package that it used?

No, the problem is we don't actually know if the profiling versions of
libs are installed or not. The ghc-pkg database does not contain this
information. Also, if we did know and started reinstalling packages,
what happens if we get half way and fail? We'd have messed up existing,
installed, working packages. Having profiling instances be separate
will make it all much easier.

Duncan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] cabal, haddock, hscolour

2010-08-18 Thread Duncan Coutts
On 13 August 2010 15:49, Johannes Waldmann waldm...@imn.htwk-leipzig.de wrote:
 How is the cabal magic that would run hscolour?

 I am using  cabal haddock --executables  --hyperlink-source
 and this generates the API docs for all modules,
 and all have source links, but the actual html-ized source
 is only generated for the main module, and missing for all others.

Your .cabal file probably does not list the other-modules as it
should, so Cabal does not know that the other modules exist.

Duncan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] and from standard Prelude

2010-08-18 Thread Duncan Coutts
On 18 August 2010 10:05, Oleg Lobachev
lobac...@mathematik.uni-marburg.de wrote:
 Hello all,

 the and function, and :: [Bool] -> Bool, is defined in two different ways in 
 the latest Prelude.

 I would expect it to be

 and = foldr (&&) True

 However, there is a further recursive definition, and it is the one used!

This is just an issue of specification vs implementation. The spec
from the H98 report is

and = foldr (&&) True

An H98 implementation must provide an implementation of 'and' that is
equal to this specification. So the above can be used as the
implementation, or a directly recursive implementation or versions
using build/fold or unfold/destroy fusion, or any other implementation
would also be ok so long as they are equal to the spec. As Ivan says,
GHC's implementation of 'and' uses build/fold fusion.

Note that 'equal' includes all partial and total lists, so you can
rely on the above spec to reason about the behaviour on infinite lists
and expect that reasoning to work for correct H98 implementations.
That said, there are a couple of functions where the obviously sensible
and standard implementations differ from the H98 spec for some partial
values. Notably splitAt is specified badly in the H98 report (it is
too lazy).
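
For concreteness, the spec next to one directly recursive implementation
that is equal to it (a sketch; GHC's actual definition is a third variant
written so that build/fold fusion applies):

  andSpec :: [Bool] -> Bool
  andSpec = foldr (&&) True

  andRec :: [Bool] -> Bool
  andRec []     = True
  andRec (b:bs) = b && andRec bs

  -- Both agree on partial and infinite lists too, e.g.
  --   andSpec (False : undefined) == False == andRec (False : undefined)
  --   andSpec (repeat False)      == False == andRec (repeat False)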

Duncan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] and from standard Prelude

2010-08-18 Thread Duncan Coutts
On 18 August 2010 13:29, Oleg Lobachev
lobac...@mathematik.uni-marburg.de wrote:

 By the way, does some good reading on streams in Haskell exist? I am 
 interested primarily in a theoretical foundation, which should however be 
 somehow related to Haskell. My guess would be the relation of streams, 
 represented as lazy, not completely evaluated lists, and the evaluation 
 strategies. The practical part is easier, feeding [1..] to various functions 
 in GHCi. I have found [Ennals2003optimistic], but it seems to go in wrong 
 direction.

If you mean streams as in stream fusion then I hope chapter 3 of my
PhD thesis will be of some help (when it is finally published). If you
mean streams as in always-infinite co-inductive sequences then a good
starting point is probably the Functional Pearl "Streams and Unique Fixed
Points" by Ralf Hinze.

Duncan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: cabal, haddock, hscolour

2010-08-18 Thread Duncan Coutts
On 18 August 2010 12:55, Johannes Waldmann waldm...@imn.htwk-leipzig.de wrote:
 Duncan Coutts duncan.coutts at googlemail.com writes:

 Your .cabal file probably does not list the other-modules as it
 should, so Cabal does not know that the other modules exist.

 Ah.

 Strange though that it seems to be able to do lots
 of reasonable things (build the executable, build all the API docs)
 without this other-modules section.

Right. It is required but not enforced. If you forget with executables
then usually nothing horrible happens. If you forget with libraries
then you end up with a broken library. Enforcing it properly is on the
TODO list.

 I figure it's a cabal design choice, but I don't see the reason for it.

In general it is impossible to use dependency chasing to automatically
discover all the modules/files required by a lib/exe because of things
like conditional compilation. You can find out what is needed for the
current configuration for the current machine, but that does not help
with preparing source distributions where e.g. you might have
different modules for windows vs unix.

 It seems it just forces me to construct manually
 some information that the machine already knows.

Almost, see above. Certainly we could check when the information
provided is wrong (by Cabal doing the dependency chasing rather than
leaving it for the tools like ghc --make or haddock). What we cannot
check is if it is right.

Duncan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: String vs ByteString

2010-08-18 Thread Duncan Coutts
On 18 August 2010 15:04, Michael Snoyman mich...@snoyman.com wrote:

 For me, the whole point of this discussion was to
 determine whether we should attempt porting to UTF-8, which as I understand
 it would be a rather large undertaking.

And the answer to that is: yes, but only if we have good reason to
believe it will actually be faster, and that's where we're most
interested in benchmarks rather than hand waving.

As Johan and others have said, the original choice to use UTF16 was
based on benchmarks showing it was faster (than UTF8 or UTF32). So if
we want to counter that then we need either to argue that these were
the wrong choice of benchmarks that do not reflect real usage, or that
with better implementations the balance would shift.

Now there is an interesting argument to claim that we spend more time
shovelling strings about than we do actually processing them in any
interesting way and therefore that we should pick benchmarks that
reflect that. This would then shift the balance to favour the internal
representation being identical to some particular popular external
representation --- even if that internal representation is slower for
many processing tasks.

Duncan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: cabal, haddock, hscolour

2010-08-18 Thread Duncan Coutts
On 18 August 2010 15:13, Johannes Waldmann waldm...@imn.htwk-leipzig.de wrote:
 Duncan Coutts duncan.coutts at googlemail.com writes:

  Your .cabal file probably does not list the other-modules ...
 Enforcing it properly is on the TODO list.

 Please don't ... there's the design principle of
 making the typical case easy, but the complicated case possible
 (or something similar to that effect). It seems that
 you plan to make the typical case cumbersome.

Enforce is the wrong word. The more friendly thing is to tell users
when the list is incomplete.

As I mentioned before, in general it is necessary to list all the
modules because in general it is impossible to discover them all
automatically. Further, even when it is possible, there is value in
not requiring all tools that process .cabal files to embed all the
dependency chasing logic. Listing other modules also means we can
discover the full set of modules without having to configure and build
the package (in general it requires configuring the package, and then
compiling and running preprocessors).

Alternatively we could decide that we will allow users to specify
other modules just for the cases where discovering all the modules is
hard/impossible and then require all tools that handle .cabal files to
contain the logic for dependency chasing.

IMHO, the ideal is when cabal can mostly work without any .cabal file
for simple programs and then provides help to make and maintain
accurate .cabal files.

 The question in this thread is about generating hscolourized source.
 Everything else is working fine already - haddock is run on all modules
 (in the transitive import closure - how is it computed?),
 it's just that hscolour is not run.
 I hoped there is an easy fix for that (= hack, if you want).

 Actually, I don't need cabal here. Perhaps haddock should compute
 the module import closure (and then run hscolour)?

That would also work. That is independent of the need to list all
modules in the .cabal file.

 For reference, you don't need to list all source files here:
 http://download.oracle.com/javase/1.5.0/docs/tooldocs/windows/javadoc.html#processingofsourcefiles

Right, if you do not have any preprocessing or conditional compilation
then import chasing is relatively easy and will cover all modules.

Duncan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A cabal odyssey

2010-08-18 Thread Duncan Coutts
On 18 August 2010 18:13, Andrew Coppin andrewcop...@btinternet.com wrote:

 Then again, all the links were broken anyway. They all had paths like
 C:\Program Files\Haskell\...whatever, and Mozilla apparently expects them
 to say file://C:/Program Files/Haskell/...whatever. It kept whining that
 the C:\ protocol is not registered.

 Does anybody know anything about this feature?

Andrew, I just wanted to let you know that Cabal/cabal-install/hackage
has a bug tracker:

http://hackage.haskell.org/trac/hackage/

We rely on users reporting bugs so that we know when there is anything
wrong. There is a guest account so that anyone can file tickets. There
is a search feature to see if anyone has reported a similar problem
previously (though we prefer getting duplicates rather than no reports at
all). For bugs that really annoy you, you can add your email address
to the cc list so that you get notified when the ticket is updated.
That also counts as a vote for that ticket; we take that into
account when prioritising things.

Thanks!

Duncan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: $thisdir for package.conf?

2010-08-12 Thread Duncan Coutts
On 12 August 2010 02:20, Greg Fitzgerald gari...@gmail.com wrote:
 Is there a way for a package.conf file to contain paths that are relative to
 the directory containing the .conf file?  GHC 6.12.1 chokes on relative
 paths.  I see the problem is solved for GHC's core libraries with the
 $topdir variable.  Is there something like a $thisdir we could use in
 inplace .conf files?

We came up with a specification for this but it is not yet implemented:

http://www.haskell.org/pipermail/libraries/2009-May/011772.html
http://hackage.haskell.org/trac/ghc/ticket/3268

Patches welcome.

Duncan
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] data.binary get reading beyond end of input bytestring?

2010-07-29 Thread Duncan Coutts
On 29 July 2010 07:53, Conrad Parker con...@metadecks.org wrote:

 Something smells fishy here. I have a hard time believing that binary is
 reading more input than is available? Could you post more code please?

 The issue seems to just be the return value for bytes consumed from
 getLazyByteString. Here's a small example.

http://hackage.haskell.org/packages/archive/binary/0.5.0.2/doc/html/Data-Binary-Get.html#v%3AgetLazyByteString

getLazyByteString :: Int64 -> Get ByteString
An efficient get method for lazy ByteStrings. Does not fail if fewer
than n bytes are left in the input.


Because it does it lazily it cannot check if it's gone past the end of
the input. Arguably this is crazy and the function should not exist.

Duncan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] data.binary get reading beyond end of input bytestring?

2010-07-29 Thread Duncan Coutts
On Thu, 2010-07-29 at 19:01 +0900, Conrad Parker wrote:
 On 29 July 2010 17:46, Duncan Coutts duncan.cou...@googlemail.com wrote:
  On 29 July 2010 07:53, Conrad Parker con...@metadecks.org wrote:
 
  Something smells fishy here. I have a hard time believing that binary is
  reading more input than is available? Could you post more code please?
 
  The issue seems to just be the return value for bytes consumed from
  getLazyByteString. Here's a small example.
 
  http://hackage.haskell.org/packages/archive/binary/0.5.0.2/doc/html/Data-Binary-Get.html#v%3AgetLazyByteString
 
  getLazyByteString :: Int64 -> Get ByteString
  An efficient get method for lazy ByteStrings. Does not fail if fewer
  than n bytes are left in the input.
 
 
  Because it does it lazily it cannot check if it's gone past the end of
  the input. Arguably this is crazy and the function should not exist.
 
 cheers Duncan, that confirms my guess about the reason. Would you
 accept a patch quoting you on that last point to the comment? ;-)

The consensus plan amongst the binary hackers is to eliminate lazy
lookahead functions and to rebuild binary on top of a continuation style
using strict chunks (then with lazy decoding built on top).

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

