Re: cpphs bug with pathnames beginning with more than one slash

2016-06-09 Thread Dan Aloni
On Thu, Jun 09, 2016 at 07:44:55PM -0500, Andrés Sicard-Ramírez wrote:
> Hi Dan,
> 
> FYI, cpphs has a (new) bug tracker in
> 
>   https://github.com/malcolmwallace/cpphs/issues

It would be worth linking to it from the docs and the main site. I see
there's an open issue there for that :)

-- 
Dan Aloni
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: cpphs bug with pathnames beginning with more than one slash

2016-06-09 Thread Andrés Sicard-Ramírez
Hi Dan,

FYI, cpphs has a (new) bug tracker in

  https://github.com/malcolmwallace/cpphs/issues

Best,



On 9 June 2016 at 16:11, Dan Aloni  wrote:
> Hi Malcolm,
>
> If we pass pathnames starting with more than one slash to '-include',
> cpphs generates invalid output. These are valid UNIX pathnames.
> I've tested with version 1.20.1 on Linux.
>
> Example:
>
> $ touch empty.hs
> $ cpphs --cpp -include //dev/null empty.hs
> #line 1 "test.hs"
> #line 1 "
> #line 2 "test.hs"
> #line 1 "test.hs"
>
> If I remove the extra '/', I get a good output:
>
> $ touch empty.hs
> $ cpphs --cpp -include /dev/null empty.hs
> #line 1 "test.hs"
> #line 1 "/dev/null"
> #line 2 "test.hs"
> #line 1 "test.hs"
>
> Thanks.
>
> --
> Dan Aloni



--
Andrés

"The information contained in this email is addressed to its recipient only and 
may contain confidential information, privileged material or information 
protected by copyright. Its prohibited any copy, use, improper retention, 
modification, dissemination, distribution or total or partial reproduction. If 
you receive this message by error, please contact the sender and delete it. The 
information contained herein is the sole responsibility of the sender therefore 
Universidad EAFIT is not responsible for what the message contains."


cpphs bug with pathnames beginning with more than one slash

2016-06-09 Thread Dan Aloni
Hi Malcolm,

If we pass pathnames starting with more than one slash to '-include',
cpphs generates invalid output. These are valid UNIX pathnames.
I've tested with version 1.20.1 on Linux.

Example:

$ touch empty.hs
$ cpphs --cpp -include //dev/null empty.hs
#line 1 "test.hs"
#line 1 "   
#line 2 "test.hs"
#line 1 "test.hs"

If I remove the extra '/', I get a good output:

$ touch empty.hs
$ cpphs --cpp -include /dev/null empty.hs
#line 1 "test.hs"
#line 1 "/dev/null"
#line 2 "test.hs"
#line 1 "test.hs"
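For context on why such paths are valid: POSIX reserves an implementation-defined
meaning only for a path beginning with exactly two slashes; on Linux a leading
double slash resolves exactly like a single one, as a quick check (a sketch, not
part of the original report) shows:

```shell
# POSIX gives special meaning only to exactly two leading slashes;
# Linux treats them the same as one, so //dev/null names /dev/null.
[ //dev/null -ef /dev/null ] && echo "same file"
cat //dev/null && echo "readable"
```

So cpphs should accept these paths rather than producing a truncated `#line` directive.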

Thanks.

--
Dan Aloni


Re: Why upper bound version numbers?

2016-06-09 Thread Howard B. Golden
On June 9, 2016 10:43:00 -0700 David Fox wrote:

> It seems to me that if you have any thought at all for your library's
> clients the chances of this happening are pretty insignificant.

Sadly (IMO), this happens all too frequently. Upward compatibility suffers 
because most package authors are (naturally) interested in solving their own 
problems, and they don't get paid to think about others using their package who 
might be affected by an upward-incompatible change. This is a hard problem to 
solve unless we can find a way to pay package authors to take the extra time 
and effort to satisfy their users. Most open source communities are stricter 
about requiring upward compatibility than the Haskell community is.


Howard


From: David Fox 
To: Herbert Valerio Riedel  
Cc: "ghc-devs@haskell.org" 
Sent: Thursday, June 9, 2016 10:43 AM
Subject: Re: Why upper bound version numbers?

On Mon, Jun 6, 2016 at 11:15 PM, Herbert Valerio Riedel wrote:

> or even worse silent failures where the code behaves
> subtly wrong or different than expected. Testsuites mitigate this to
> some degree, but they too are an imperfect solution to this hard
> problem.

It seems to me that if you have any thought at all for your library's clients
the chances of this happening are pretty insignificant.



Re: Why upper bound version numbers?

2016-06-09 Thread David Fox
On Mon, Jun 6, 2016 at 11:15 PM, Herbert Valerio Riedel 
wrote:

> or even worse silent failures where the code behaves
> subtly wrong or different than expected. Testsuites mitigate this to
> some degree, but they too are an imperfect solution to this hard
> problem.
>

It seems to me that if you have any thought at all for your library's
clients the chances of this happening are pretty insignificant.


Re: Harbourmaster is still not building diffs

2016-06-09 Thread Ben Gamari
Matthew Pickering  writes:

> Since a couple of months ago, harbourmaster no longer builds diffs.
> This is quite a large barrier to entry for new contributors as running
> ./validate takes a long time.

Hi Matthew,

Indeed it has been a very long time since Harbormaster has built diffs.

The story is this: in the past our Harbormaster infrastructure was
rather brittle due to its reliance on `arc` to apply differentials.
With recent work in Phabricator this fragility can now be addressed,
but at the cost of reworking some of our existing build infrastructure.
Moreover, the new Harbormaster story seems to not have been designed
with a public project such as ours in mind, so there are a few security
issues which need to be worked out (namely it requires a git repository
to which all Phab users can push commits).

Getting to this point has unfortunately taken significantly longer than
expected, and it's still not entirely clear how the new Harbormaster
roll-out will work. At this point I suspect we ought to
just roll back to the previous Harbormaster automation script unless there
is a clear path forward with the new infrastructure.

Austin, what do you think? Can we set a concrete timeline for bringing
Harbormaster back up?

Cheers,

- Ben




Re: Why upper bound version numbers?

2016-06-09 Thread Oleg Grenrus
My five cents:

There is discussion/work in Cabal development about splitting the solver out [1]. I
hope that, in the end, there will be functionality to construct a build/install
plan by whatever means and use cabal's machinery to execute it.

Another piece of functionality, brought by the work on hackage-security, is that
01-index.tar.gz is an append-only file, so it’s “easy” to travel back in time in
Hackage history.

Combining those, one can run an experiment on how much:
- existing upper bounds prevent cabal from finding a working install plan, and
how this changes over time because of maintainers' activity;
- missing upper bounds let a found install plan fail to compile, and how this
changes over time because of a) maintainers' own activity, b) Hackage trustee
activity;
- other stats.

Until a quantitative report justifies that upper bounds are clearly a bad idea, I
would not propose any dramatic changes to how the default solver works or to the
Cabal definition-format spec. With custom solvers it would be easy to experiment
and to provide one's own metadata.

As a Hackage Trustee I see many more full-of-red matrices (failed builds) on
matrix.h.h.o than dark green ones (no install plan).
This is unfortunately only my gut feeling, no hard numbers, but GHC <7.4/7.6 tends
to be often red (because of base ==4.*, or base <5), or there are full lines of
red because a newer version of a dependency causes breakage (i.e. no upper
bounds).


For example, fast-logger introduced a new type alias and new functionality in
System.Log.FastLogger in minor version 2.4.4 [3]:

--- Diff for | 2.4.3 → 2.4.4 | ---

+ System.Log.FastLogger.Date
  + type FormattedTime = ByteString
  + type TimeFormat = ByteString
  + newTimeCache :: TimeFormat -> IO (IO FormattedTime)
  + simpleTimeFormat :: TimeFormat
  + simpleTimeFormat' :: TimeFormat
× System.Log.FastLogger
  + type FastLogger = LogStr -> IO ()
  + LogCallback :: (LogStr -> IO ()) -> (IO ()) -> LogType
  + LogFile :: FilePath -> BufSize -> LogType
  + LogFileAutoRotate :: FileLogSpec -> BufSize -> LogType
  + LogNone :: LogType
  + LogStderr :: BufSize -> LogType
  + LogStdout :: BufSize -> LogType
  + data LogType
  + type TimedFastLogger = (FormattedTime -> LogStr) -> IO ()
  + newFastLogger :: LogType -> IO (FastLogger, IO ())
  + newTimedFastLogger :: (IO FormattedTime) -> LogType -> IO (TimedFastLogger, IO ())
  + withFastLogger :: LogType -> (FastLogger -> IO a) -> IO ()
  + withTimedFastLogger :: (IO FormattedTime) -> LogType -> (TimedFastLogger -> IO a) -> IO ()
× System.Log.FastLogger.File
  × New: check :: FilePath -> IO ()
Old: check :: FileLogSpec -> IO ()

[+ Added] [- Removed] [× Modified] [· Unmodified]

And according to the PVP you may do that while bumping only the minor version. (The
change in System.Log.FastLogger.File is a breaking change, so it should have been a
major bump, but our example holds even if it had been 2.4.4 -> 2.5.)

The wai-logger package depends on fast-logger, and broke after the new fast-logger
release [4] because it:
- didn't have upper bounds;
- imported modules improperly [5].

A fixed version was released immediately, so I suspect that whether the issue
had been breakage or a restrictive upper bound (pinged from Stackage), it would
have been fixed just as fast.
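The "import modules properly" point [5] deserves a concrete illustration. A minimal
sketch, using Data.List from base rather than fast-logger purely so it is
self-contained: an open import can break under a PVP-legal minor release, while an
explicit import list cannot.

```haskell
module Main where

-- Fragile: `import Data.List` (open) silently brings every export into
-- scope, so a minor release that adds a name clashing with a local
-- definition stops this module from compiling.
--
-- Robust: an explicit import list is immune to additions; only the
-- removal of a listed name (a major bump under the PVP) can break it.
import Data.List (nub, sort)

main :: IO ()
main = print (sort (nub [3, 1, 2, 3 :: Int]))  -- prints [1,2,3]
```

This is exactly the failure mode that hit wai-logger: the new fast-logger minor
release added names that an open import pulled into scope.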

Yet now there is still lots of red in the matrix [6]. It's highly unlikely that
old versions would be picked by the solver, but not impossible; it happened to
me: ekg-json picked a very old aeson version (again, no bounds there!) and
broke. [7]

To summarise this example, I'd rather make PRs to relax version bounds than
try to guess which bounds I have to adjust to get a working install plan.

And to conclude:
- Please set up CI, e.g. using Herbert's script [8]. It's very easy!
- Have base bounds restricting the GHC versions to the ones you test.
- Contributing (e.g. bumping version bounds) will then be extremely easy, for
others and for you.
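For the CI point, a minimal sketch of what a multi-GHC Travis matrix in the spirit
of [8] looks like; the GHC versions and tool versions below are illustrative, and
the real, maintained template should be copied from the multi-ghc-travis repository:

```yaml
# Illustrative sketch only -- take the actual template from hvr/multi-ghc-travis.
sudo: required
env:
  - GHCVER=7.8.4
  - GHCVER=7.10.3
  - GHCVER=8.0.1
before_install:
  - travis_retry sudo add-apt-repository -y ppa:hvr/ghc
  - travis_retry sudo apt-get update
  - travis_retry sudo apt-get install ghc-$GHCVER cabal-install-1.24
  - export PATH=/opt/ghc/$GHCVER/bin:/opt/cabal/1.24/bin:$PATH
script:
  - cabal update
  - cabal install --only-dependencies --enable-tests
  - cabal configure --enable-tests && cabal build && cabal test
```

Each matrix row builds against one GHC, so a bounds problem shows up per compiler
version instead of only on your own machine.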

Herbert is also very fast at making packages for newer (even unreleased) versions
of GHC, so we were able to test and fix many packages before GHC 8.0 was
officially released, making adaptations as early as January this year.

- Oleg

- [1]: https://github.com/haskell/cabal/pull/3222
- [2]: http://matrix.hackage.haskell.org/packages see red-alert tag
- [3]: 
http://hackage.haskell.org/package/fast-logger-2.4.4/docs/System-Log-FastLogger.html
- [4]: https://github.com/kazu-yamamoto/logger/issues/88
- [5]: https://wiki.haskell.org/Import_modules_properly
- [6]: http://imgur.com/bTYI8KC
- [7]: https://github.com/tibbe/ekg-json/pull/3
- [8]: https://github.com/hvr/multi-ghc-travis



> On 09 Jun 2016, at 12:22, Jeremy .  wrote:
> 
> Versions of package dependencies can be categorised into 5 sets:
> 
> D1) Versions which the maintainer has tested and found to work
> D2) Versions which the maintainer has not tested but expects to work
> D3) Versions which the maintainer has tested and found to not work
> D4) Versions which the maintainer has not tested but expects to not work
> D5) Everything else

Re: Why upper bound version numbers?

2016-06-09 Thread Jeremy .
Versions of package dependencies can be categorised into 5 sets:

D1) Versions which the maintainer has tested and found to work
D2) Versions which the maintainer has not tested but expects to work
D3) Versions which the maintainer has tested and found to not work
D4) Versions which the maintainer has not tested but expects to not work
D5) Everything else

Cabal, however, only knows of 3 sets:

C1) Versions which satisfy build-depends
C2) Versions which satisfy build-depends with --allow-newer
C3) Everything else

The problem arises from the fact that the D sets do not map well onto the C
sets, even after combining D1 and D3. Perhaps this could be solved with a
new .cabal property, breaks-with. The solver would then prefer packages in this
order:

1) Versions that satisfy build-depends
2) Versions that are not in breaks-with, unless a flag such as 
--strict-build-depends is applied

This may also lead to clearer build-depends: instead of multiple ranges with
gaps to skip known broken versions, build-depends can list a single range, and
breaks-with can list the bad versions.
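To make the proposal concrete, here is a hypothetical sketch of the two styles;
note that breaks-with is the proposed field and does not exist in any released
Cabal, and the package name and versions are made up for illustration:

```cabal
-- Today: a known-broken version is encoded as a gap in build-depends.
build-depends: foo (>=1.2 && <1.4) || (>=1.4.1 && <1.8)

-- With the proposed (hypothetical) breaks-with field, the range stays
-- simple and the bad version is listed separately:
build-depends: foo >=1.2 && <1.8
breaks-with:   foo ==1.4.0
```

Under --strict-build-depends the solver would treat breaks-with as hard; by
default it would merely deprioritise those versions.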


Re: Why upper bound version numbers?

2016-06-09 Thread Erik Hesselink
Sure, I'm just wondering about how this plays out in reality: of
people getting unsolvable plans, how many are due to hard upper bounds
and how many due to soft upper bounds? We can't reliably tell, of
course, since we don't have this distinction currently, but I was
trying to get some anecdotal data to add to my own.

Erik

On 9 June 2016 at 10:07, Alan & Kim Zimmerman  wrote:
> I think "hard" upper bounds would come about in situations where a new
> version of a dependency is released that breaks things in a package, so
> until the breakage is fixed a hard upper bound is required.  Likewise for
> hard lower bounds.
>
> And arguments about "it shouldn't happen with the PVP" don't hold, because
> it does happen, PVP is a human judgement thing.
>
> Alan
>
>
> On Thu, Jun 9, 2016 at 10:01 AM, Erik Hesselink  wrote:
>>
>> What do you expect will be the distribution of 'soft' and 'hard' upper
>> bounds? In my experience, all upper bounds currently are 'soft' upper
>> bounds. They might become 'hard' upper bounds for a short while after
>> e.g. a GHC release, but in general, if a package maintainer knows that
>> a package fails to work with a certain version of a dependency, they
>> fix it.
>>
>> So it seems to me that this is not so much a choice between 'soft' and
>> 'hard' upper bounds, but a choice on what to do when you can't resolve
>> dependencies in the presence of the current (upper) bounds. Currently,
>> as you say, we give pretty bad error messages. The alternative you
>> propose (just try) currently often gives the same result in my
>> experience: bad error messages, in this case not from the solver, but
>> unintelligible compiler errors in an unknown package. So it seems the
>> solution might just be one of messaging: make the initial resolver
>> error much friendlier, and give a suggestion to use e.g.
>> --allow-newer=foo. The opposite might also be interesting to explore:
>> if installing a dependency (so not something you're developing or
>> explicitly asking for) fails to install and doesn't have an upper
>> bound, suggest something like --constraint=foo<…
>> Do you have different experiences regarding the number of 'hard' upper
>> bounds that exist?
>>
>> Regards,
>>
>> Erik
>>
>> On 8 June 2016 at 22:01, Michael Sloan  wrote:
>> > Right, part of the issue with having dependency solving at the core of
>> > your
>> > workflow is that you never really know who's to blame.  When running
>> > into
>> > this circumstance, either:
>> >
>> > 1) Some maintainer made a mistake.
>> > 2) Some maintainer did not have perfect knowledge of the future and has
>> > not
>> > yet updated some upper bounds.  Or, upper bounds didn't get
>> > retroactively
>> > bumped (usual).
>> > 3) You're asking cabal to do something that can't be done.
>> > 4) There's a bug in the solver.
>> >
>> > So the only thing to do is to say "something went wrong".  In a way it
>> > is
>> > similar to type inference, it is difficult to give specific, concrete
>> > error
>> > messages without making some arbitrary choices about which constraints
>> > have
>> > gotten pushed around.
>> >
>> > I think upper bounds could potentially be made viable by having both
>> > hard
>> > and soft constraints.  Until then, people are putting 2 meanings into
>> > one
>> > thing.  By having the distinction, I think cabal-install could provide
>> > much
>> > better errors than it does currently.  This has come up before, I'm not
>> > sure
>> > what came of those discussions.  My thoughts on how this would work:
>> >
>> > * The dependency solver would prioritize hard constraints, and tell you
>> > which soft constraints need to be lifted.  I believe the solver even
>> > already
>> > has this.  Stack's integration with the solver will actually first try
>> > to
>> > get a plan that doesn't override any snapshot versions, by specifying
>> > them
>> > as hard constraints.  If that doesn't work, it tries again with soft
>> > constraints.
>> >
>> > * "--allow-soft" or something would ignore soft constraints.  Ideally
>> > this
>> > would be selective on a per package / upper vs lower.
>> >
>> > * It may be worth having the default be "--allow-soft" + be noisy about
>> > which constraints got ignored.  Then, you could have a
>> > "--pedantic-bounds"
>> > flag that forces following soft bounds.
>> >
>> > I could get behind upper bounds if they allowed maintainers to actually
>> > communicate their intention, and if we had good automation for their
>> > maintenance.  As is, putting upper bounds on everything seems to cause
>> > more
>> > problems than it solves.
>> >
>> > -Michael
>> >
>> > On Wed, Jun 8, 2016 at 1:31 AM, Ben Lippmeier 
>> > wrote:
>> >>
>> >>
>> >> On 8 Jun 2016, at 6:19 pm, Reid Barton  wrote:
>> >>
>> >>>  Suppose you maintain a library that is used by a lot of first year
>> >>> uni
>> >>> students (like gloss). Suppose the next GHC version comes 

Re: Why upper bound version numbers?

2016-06-09 Thread Alan & Kim Zimmerman
I think "hard" upper bounds would come about in situations where a new
version of a dependency is released that breaks things in a package, so
until the breakage is fixed a hard upper bound is required.  Likewise for
hard lower bounds.

And arguments about "it shouldn't happen with the PVP" don't hold, because
it does happen, PVP is a human judgement thing.

Alan


On Thu, Jun 9, 2016 at 10:01 AM, Erik Hesselink  wrote:

> What do you expect will be the distribution of 'soft' and 'hard' upper
> bounds? In my experience, all upper bounds currently are 'soft' upper
> bounds. They might become 'hard' upper bounds for a short while after
> e.g. a GHC release, but in general, if a package maintainer knows that
> a package fails to work with a certain version of a dependency, they
> fix it.
>
> So it seems to me that this is not so much a choice between 'soft' and
> 'hard' upper bounds, but a choice on what to do when you can't resolve
> dependencies in the presence of the current (upper) bounds. Currently,
> as you say, we give pretty bad error messages. The alternative you
> propose (just try) currently often gives the same result in my
> experience: bad error messages, in this case not from the solver, but
> unintelligible compiler errors in an unknown package. So it seems the
> solution might just be one of messaging: make the initial resolver
> error much friendlier, and give a suggestion to use e.g.
> --allow-newer=foo. The opposite might also be interesting to explore:
> if installing a dependency (so not something you're developing or
> explicitly asking for) fails to install and doesn't have an upper
> bound, suggest something like --constraint=foo<…
> Do you have different experiences regarding the number of 'hard' upper
> bounds that exist?
>
> Regards,
>
> Erik
>
> On 8 June 2016 at 22:01, Michael Sloan  wrote:
> > Right, part of the issue with having dependency solving at the core of
> your
> > workflow is that you never really know who's to blame.  When running into
> > this circumstance, either:
> >
> > 1) Some maintainer made a mistake.
> > 2) Some maintainer did not have perfect knowledge of the future and has
> not
> > yet updated some upper bounds.  Or, upper bounds didn't get retroactively
> > bumped (usual).
> > 3) You're asking cabal to do something that can't be done.
> > 4) There's a bug in the solver.
> >
> > So the only thing to do is to say "something went wrong".  In a way it is
> > similar to type inference, it is difficult to give specific, concrete
> error
> > messages without making some arbitrary choices about which constraints
> have
> > gotten pushed around.
> >
> > I think upper bounds could potentially be made viable by having both hard
> > and soft constraints.  Until then, people are putting 2 meanings into one
> > thing.  By having the distinction, I think cabal-install could provide
> much
> > better errors than it does currently.  This has come up before, I'm not
> sure
> > what came of those discussions.  My thoughts on how this would work:
> >
> > * The dependency solver would prioritize hard constraints, and tell you
> > which soft constraints need to be lifted.  I believe the solver even
> already
> > has this.  Stack's integration with the solver will actually first try to
> > get a plan that doesn't override any snapshot versions, by specifying
> them
> > as hard constraints.  If that doesn't work, it tries again with soft
> > constraints.
> >
> > * "--allow-soft" or something would ignore soft constraints.  Ideally
> this
> > would be selective on a per package / upper vs lower.
> >
> > * It may be worth having the default be "--allow-soft" + be noisy about
> > which constraints got ignored.  Then, you could have a
> "--pedantic-bounds"
> > flag that forces following soft bounds.
> >
> > I could get behind upper bounds if they allowed maintainers to actually
> > communicate their intention, and if we had good automation for their
> > maintenance.  As is, putting upper bounds on everything seems to cause
> more
> > problems than it solves.
> >
> > -Michael
> >
> > On Wed, Jun 8, 2016 at 1:31 AM, Ben Lippmeier 
> wrote:
> >>
> >>
> >> On 8 Jun 2016, at 6:19 pm, Reid Barton  wrote:
> >>
> >>>  Suppose you maintain a library that is used by a lot of first year uni
> >>> students (like gloss). Suppose the next GHC version comes around and
> your
> >>> library hasn’t been updated yet because you’re waiting on some
> dependencies
> >>> to get fixed before you can release your own. Do you want your
> students to
> >>> get a “cannot install on this version” error, or some confusing build
> error
> >>> which they don’t understand?
> >>
> >>
> >> This is a popular but ultimately silly argument. First, cabal dependency
> >> solver error messages are terrible; there's no way a new user would
> figure
> >> out from a bunch of solver output about things like "base-4.7.0.2" and
> >> "Dependency tree 

Re: Why upper bound version numbers?

2016-06-09 Thread Erik Hesselink
What do you expect will be the distribution of 'soft' and 'hard' upper
bounds? In my experience, all upper bounds currently are 'soft' upper
bounds. They might become 'hard' upper bounds for a short while after
e.g. a GHC release, but in general, if a package maintainer knows that
a package fails to work with a certain version of a dependency, they
fix it.

So it seems to me that this is not so much a choice between 'soft' and
'hard' upper bounds, but a choice on what to do when you can't resolve
dependencies in the presence of the current (upper) bounds. Currently,
as you say, we give pretty bad error messages. The alternative you
propose (just try) currently often gives the same result in my
experience: bad error messages, in this case not from the solver, but
unintelligible compiler errors in an unknown package. So it seems the
solution might just be one of messaging: make the initial resolver
error much friendlier, and give a suggestion to use e.g.
--allow-newer=foo. The opposite might also be interesting to explore:
if installing a dependency (so not something you're developing or
explicitly asking for) fails to install and doesn't have an upper
bound, suggest something like --constraint=foo<…

Do you have different experiences regarding the number of 'hard' upper
bounds that exist?

Regards,

Erik

On 8 June 2016 at 22:01, Michael Sloan wrote:
> Right, part of the issue with having dependency solving at the core of your
> workflow is that you never really know who's to blame.  When running into
> this circumstance, either:
>
> 1) Some maintainer made a mistake.
> 2) Some maintainer did not have perfect knowledge of the future and has not
> yet updated some upper bounds.  Or, upper bounds didn't get retroactively
> bumped (usual).
> 3) You're asking cabal to do something that can't be done.
> 4) There's a bug in the solver.
>
> So the only thing to do is to say "something went wrong".  In a way it is
> similar to type inference, it is difficult to give specific, concrete error
> messages without making some arbitrary choices about which constraints have
> gotten pushed around.
>
> I think upper bounds could potentially be made viable by having both hard
> and soft constraints.  Until then, people are putting 2 meanings into one
> thing.  By having the distinction, I think cabal-install could provide much
> better errors than it does currently.  This has come up before, I'm not sure
> what came of those discussions.  My thoughts on how this would work:
>
> * The dependency solver would prioritize hard constraints, and tell you
> which soft constraints need to be lifted.  I believe the solver even already
> has this.  Stack's integration with the solver will actually first try to
> get a plan that doesn't override any snapshot versions, by specifying them
> as hard constraints.  If that doesn't work, it tries again with soft
> constraints.
>
> * "--allow-soft" or something would ignore soft constraints.  Ideally this
> would be selective on a per package / upper vs lower.
>
> * It may be worth having the default be "--allow-soft" + be noisy about
> which constraints got ignored.  Then, you could have a "--pedantic-bounds"
> flag that forces following soft bounds.
>
> I could get behind upper bounds if they allowed maintainers to actually
> communicate their intention, and if we had good automation for their
> maintenance.  As is, putting upper bounds on everything seems to cause more
> problems than it solves.
>
> -Michael
>
> On Wed, Jun 8, 2016 at 1:31 AM, Ben Lippmeier  wrote:
>>
>>
>> On 8 Jun 2016, at 6:19 pm, Reid Barton  wrote:
>>
>>>  Suppose you maintain a library that is used by a lot of first year uni
>>> students (like gloss). Suppose the next GHC version comes around and your
>>> library hasn’t been updated yet because you’re waiting on some dependencies
>>> to get fixed before you can release your own. Do you want your students to
>>> get a “cannot install on this version” error, or some confusing build error
>>> which they don’t understand?
>>
>>
>> This is a popular but ultimately silly argument. First, cabal dependency
>> solver error messages are terrible; there's no way a new user would figure
>> out from a bunch of solver output about things like "base-4.7.0.2" and
>> "Dependency tree exhaustively searched" that the solution is to build with
>> an older version of GHC.
>>
>>
>> :-) At least “Dependency tree exhaustively searched” sounds like it’s not
>> the maintainer’s problem. I prefer the complaints to say “can you please
>> bump the bounds on this package” rather than “your package is broken”.
>>
>> Ben.
>>


Call for talks: Haskell Implementors Workshop 2016, Sep 24 (FIXED), Nara

2016-06-09 Thread Edward Z. Yang
(...and now with the right date in the subject line!)

Call for Contributions
   ACM SIGPLAN Haskell Implementors' Workshop

http://haskell.org/haskellwiki/HaskellImplementorsWorkshop/2016
Nara, Japan, 24 September, 2016

Co-located with ICFP 2016
   http://www.icfpconference.org/icfp2016/

Important dates
---
Proposal Deadline:  Monday, 8 August, 2016
Notification:   Monday, 22 August, 2016
Workshop:   Saturday, 24 September, 2016

The 8th Haskell Implementors' Workshop is to be held alongside ICFP
Call for talks: Haskell Implementors Workshop 2016, Sep 24, Nara

2016-06-09 Thread Edward Z. Yang
Call for Contributions
   ACM SIGPLAN Haskell Implementors' Workshop

http://haskell.org/haskellwiki/HaskellImplementorsWorkshop/2016
Nara, Japan, 24 September, 2016

Co-located with ICFP 2016
   http://www.icfpconference.org/icfp2016/

Important dates
---
Proposal Deadline:  Monday, 8 August, 2016
Notification:   Monday, 22 August, 2016
Workshop:   Saturday, 24 September, 2016

The 8th Haskell Implementors' Workshop is to be held alongside ICFP
2016 this year in Nara. It is a forum for people involved in the
design and development of Haskell implementations, tools, libraries,
and supporting infrastructure, to share their work and discuss future
directions and collaborations with others.

Talks and/or demos are proposed by submitting an abstract, and
selected by a small program committee. There will be no published
proceedings; the workshop will be informal and interactive, with a
flexible timetable and plenty of room for ad-hoc discussion, demos,
and impromptu short talks.

Scope and target audience
-------------------------

It is important to distinguish the Haskell Implementors' Workshop from
the Haskell Symposium which is also co-located with ICFP 2016. The
Haskell Symposium is for the publication of Haskell-related research. In
contrast, the Haskell Implementors' Workshop will have no proceedings --
although we will aim to make talk videos, slides and presented data
available with the consent of the speakers.

In the Haskell Implementors' Workshop, we hope to study the underlying
technology. We want to bring together anyone interested in the
nitty-gritty details behind turning plain-text source code into a
deployed product. Having said that, members of the wider Haskell
community are more than welcome to attend the workshop -- we need your
feedback to keep the Haskell ecosystem thriving.

The scope covers any of the following topics. There may be some topics
that people feel we've missed, so by all means submit a proposal even if
it doesn't fit exactly into one of these buckets:

  * Compilation techniques
  * Language features and extensions
  * Type system implementation
  * Concurrency and parallelism: language design and implementation
  * Performance, optimisation and benchmarking
  * Virtual machines and run-time systems
  * Libraries and tools for development or deployment

Talks
-----

At this stage we would like to invite proposals from potential speakers
for talks and demonstrations. We are aiming for 20 minute talks with 10
minutes for questions and changeovers. We want to hear from people
writing compilers, tools, or libraries, people with cool ideas for
directions in which we should take the platform, proposals for new
features to be implemented, and half-baked crazy ideas. Please submit a
talk title and abstract of no more than 300 words.

Submissions should be made via HotCRP. The website is:
  https://icfp-hiw16.hotcrp.com/

We will also have a lightning talks session which will be organised on
the day. These talks will be 5-10 minutes, depending on available time.
Suggested topics for lightning talks are to present a single idea, a
work-in-progress project, a problem to intrigue and perplex Haskell
implementors, or simply to ask for feedback and collaborators.

Organisers
----------

  * Joachim Breitner        (Karlsruher Institut für Technologie)
  * Duncan Coutts           (Well Typed)
  * Michael Snoyman         (FP Complete)
  * Luite Stegeman          (ghcjs)
  * Niki Vazou              (UCSD)
  * Stephanie Weirich       (University of Pennsylvania)
  * Edward Z. Yang - chair  (Stanford University)
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs