[Haskell-cafe] Haskell type system and the lambda cube

2009-05-24 Thread Petr Pudlak
Hi, I'm trying to get some better understanding of the theoretical foundations
behind Haskell. I wonder, where exactly does Haskell's type system fit within the
lambda cube? http://en.wikipedia.org/wiki/Lambda_cube
I guess it could also vary depending on what extensions are turned on.

Thanks, Petr
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell type system and the lambda cube

2009-05-24 Thread Eugene Kirpichov
Haskell has terms depending on types (polymorphic terms) and types
depending on types (type families?), but no dependent types.
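Eugene's two axes can be sketched in GHC (a minimal sketch; RankNTypes and TypeFamilies are GHC extensions, not Haskell 98, and the names here are made up for illustration):

```haskell
{-# LANGUAGE RankNTypes, TypeFamilies #-}

-- Terms depending on types: ordinary parametric polymorphism
-- (the lambda2 axis of the cube).
identity :: forall a. a -> a
identity x = x

-- Types depending on types: a type-level function, here via a
-- GHC type family (roughly the lambda-omega axis).
type family Elem c
type instance Elem [a] = a

firstElem :: [a] -> Elem [a]
firstElem = head

main :: IO ()
main = print (identity (firstElem [1 :: Int, 2, 3]))
```

Dependent types (terms appearing in types) have no counterpart here, which is Eugene's point.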





-- 
Eugene Kirpichov
Web IR developer, market.yandex.ru


Re: [Haskell-cafe] Haskell type system and the lambda cube

2009-05-24 Thread Petr Pudlak
On Sun, May 24, 2009 at 12:18:40PM +0400, Eugene Kirpichov wrote:
 Haskell has terms depending on types (polymorphic terms) and types
 depending on types (type families?), but no dependent types.

But how about undecidability? I'd say that lambda2 or lambda-omega have
undecidable type checking, whereas Haskell's Hindley-Milner type system is
decidable. From this I get that Haskell's type system can't be one of the
vertices of the cube.

(BTW, will some future version of Haskell consider including some kind of
dependent types?)

Petr

 


Re: [Haskell-cafe] Haskell type system and the lambda cube

2009-05-24 Thread Eugene Kirpichov
2009/5/24 Petr Pudlak d...@pudlak.name:
 On Sun, May 24, 2009 at 12:18:40PM +0400, Eugene Kirpichov wrote:
 Haskell has terms depending on types (polymorphic terms) and types
 depending on types (type families?), but no dependent types.

 But how about undecidability? I'd say that lambda2 or lambda-omega have
 undecidable type checking, whereas Haskell's Hindley-Milner type system is
 decidable.

It isn't, in the presence of type classes (what else could the {-# LANGUAGE
UndecidableInstances #-} pragma be for? :)) and type families.

If all Haskell had were HM, it would be System F.

 From this I get that Haskell's type system can't be one of the
 vertices of the cube.

Well, it surely has some features not described by the cube, but I'd
leave an explanation to the gurus around here :)


-- 
Eugene Kirpichov
Web IR developer, market.yandex.ru


Re: [Haskell-cafe] Haskell type system and the lambda cube

2009-05-24 Thread voigt
2009/5/24 Eugene Kirpichov:
 If all Haskell had were HM, it would be System F.

That cannot be quite right, can it? System F has more powerful
polymorphism than HM.
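Janis's point can be made concrete with a standard example (a sketch; RankNTypes is a GHC extension, and the function name is invented for illustration):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Typeable in System F (and in GHC with RankNTypes), but rejected by
-- plain Hindley-Milner: the argument f must itself be polymorphic,
-- and HM only allows quantifiers at the outermost level of a type.
applyToBoth :: (forall a. a -> a) -> (Int, Bool)
applyToBoth f = (f 0, f True)

main :: IO ()
main = print (applyToBoth id)
```

HM buys decidable inference precisely by giving up such higher-rank types.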

Ciao,
Janis.




Re: [Haskell-cafe] Cabal, Time GHC 6.10.2

2009-05-24 Thread Dominic Steinitz
Duncan Coutts wrote:
 On Sun, 2009-05-17 at 09:17 +0100, Dominic Steinitz wrote:
 I get

 d...@linux-6ofq:~/asn1 runghc Setup.hs configure
 Configuring PER-0.0.20...
 Setup.hs: At least the following dependencies are missing:
time -any
 but I have time

 d...@linux-6ofq:~/asn1 ghc-pkg list | grep time
 old-locale-1.0.0.1, old-time-1.0.0.2, packedstring-0.1.0.1,
 time-1.1.2.4
 I think I can see why cabal isn't finding it:

 ghc-pkg dump --global | grep time-1.1.2.4
 finds nothing and I believe that is what cabal uses to find things.
 
 The default for runghc Setup.hs configure is --global, but the default
 for cabal configure is --user. So if you're using the cabal program
 to install packages, then you can also use it to configure other
 packages. If you need to use the runghc Setup.hs interface (e.g. in
 some system build scripts) and you want it to pick up packages from the
 user package db, then use the --user flag. If you're constantly having to
 use the runghc Setup.hs interface and doing per-user installs is a pain,
 then you can set the default for the cabal program to be global installs
 in the cabal config file (~/.cabal/config).
 
 I'll add this issue to the FAQ, it comes up enough. If anyone else
 reading would like to eliminate this FAQ, then implementing this ticket
 is the answer:
 
 suggest use of --user if configure fails with missing deps that
 are in the user db
 http://hackage.haskell.org/trac/hackage/ticket/384
 
 Duncan
 
 
 
Duncan,

Thanks very much. I'm happy to update the FAQ this weekend unless you
have already done it.

Dominic.



[Haskell-cafe] FGL Question

2009-05-24 Thread Hans van Thiel
Hello,

I want to get the top or the bottom elements of a graph, but the
following code appears to give the wrong answer in most cases, and the
right answer in a few cases. Any ideas?

-- get the most general or the least general elements
graphMLGen :: Bool -> Gr [Rule] () -> Gr [Rule] ()
graphMLGen pm gr = buildGr unctxls where
  unctxls = map remadj ctxls
  remadj (_,n,r,_) = ([],n,r,[])
  ctxls | pm        = gsel (\x -> outdeg' x == 0) gr
        | otherwise = gsel (\x -> indeg' x == 0) gr


Many thanks,

Hans van Thiel



Re: [Haskell-cafe] Data.Binary suboptimal instance

2009-05-24 Thread Khudyakov Alexey
On Saturday 23 May 2009 23:23:05 Henning Thielemann wrote:
  Interesting solution, however it does not perform very well. I wrote a
  microbenchmark
 
  ... skipped ...
 
  I didn't do any profiling so I have no idea why writing is so slow.

 If you use the top-level definition 'xs' the program might cache the list
 and the second write is faster. You may change the order of the tests in
 order to check whether that is the cause.

There was a separate process for each test, so caching could not affect the results.



Re: [Haskell-cafe] Cabal, Time GHC 6.10.2

2009-05-24 Thread Duncan Coutts
On Sun, 2009-05-24 at 12:04 +0100, Dominic Steinitz wrote:


 Thanks very much. I'm happy to update the FAQ this weekend unless you
 have already done it.

Oh that'd be great. The Cabal website is managed as a darcs repo so
just:

darcs get http://haskell.org/cabal/

then edit the FAQ.markdown and run make to update the FAQ.html (uses
pandoc). Then darcs send the patches to me or the cabal-devel mailing
list.

Duncan



Re: [Haskell-cafe] hackage version scheme survey

2009-05-24 Thread Duncan Coutts
On Sat, 2009-05-23 at 19:57 -0500, br...@lorf.org wrote:
 On Saturday, 23.05.09 at 17:26, Don Stewart wrote:
  http://haskell.org/haskellwiki/Package_versioning_policy  ?
 
 That helps a lot. I should have found that. But putting the policy on a
 web page doesn't seem to be working; there are a lot of non-compliant
 packages. I guess I'm surprised that 'cabal check' doesn't complain
 about it and HDB doesn't reject them.

We cannot force maintainers to follow the PVP, however we do have a plan
to encourage adoption. The key is to get maintainers to opt-in. For
packages that opt-in we will enforce it.

Following the PVP is extra work for a maintainer, so there need to be two
sides to the bargain. The benefit to maintainers is that we can enforce
that all newly uploaded hackage packages that depend on their
PVP-following package do actually specify an upper version bound. This
benefits the maintainer because it lets them release new versions
knowing that they are not breaking dependent packages.

The bargain on the other side is that it's a benefit to you as a package
author if you can rely on the proper versioning of the packages you
depend on. In return however you must actually make proper use of that
by specifying appropriate upper bounds (and the tools should be able to
give helpful suggestions).
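Concretely, the bounds in question look like this (a hypothetical .cabal fragment, not from any package in this thread): under the PVP a breaking change bumps the major version, so the upper bound excludes the next major series.

```cabal
build-depends:
  base       >= 4   && < 5,
  bytestring >= 0.9 && < 0.10
```

A new bytestring-0.10 then cannot be silently picked up and break this package; the maintainer relaxes the bound once they have checked compatibility.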

However, like most of our grand plans, there's nobody actually working
on implementing them at the moment. The key part of this plan is the PVP
checker tool.

Duncan



Re: Re: Re: [Haskell-cafe] Re: A problem with par and modules boundaries...

2009-05-24 Thread Duncan Coutts
On Sat, 2009-05-23 at 20:42 -0400, Mario Blažević wrote:
 On Sat 23/05/09  2:51 PM , Duncan Coutts duncan.cou...@worc.ox.ac.uk sent:
  On Sat, 2009-05-23 at 13:31 -0400, Mario Blažević wrote:
  ...
  So the function is not strict, and I don't understand
  why GHC should evaluate the arguments before the call.
  
  Right, it's lazy in the first and strict in the second argument. As far
   as I can see we have no evidence that it is evaluating anything before
  the call.
 
 
 When I look at the Core definition of `test', it begins with
 
 
 \ (n1axl::integer:GHCziIntegerziInternals.Integer)
   (n2axn::integer:GHCziIntegerziInternals.Integer) ->
 %let as1sU :: integer:GHCziIntegerziInternals.Integer =
base:DataziList.prod1
(main:Main.factors2
 (base:DataziList.prod1
  (base:GHCziNum.upzulist main:Main.lvl main:Main.lvl n1axl)
  base:DataziList.lvl1))
base:DataziList.lvl1
 %in %case integer:GHCziIntegerziInternals.Integer 
 (ghczmprim:GHCziPrim.parzh
@
 integer:GHCziIntegerziInternals.Integer
as1sU)
 %of (dsapq::ghczmprim:GHCziPrim.Intzh)

I recommend using -ddump-simpl, as it produces more readable output.

 To my untrained eyes, this looks like it's evaluating

  product $ factors $ product [1..n1]
 
 which is the first argument to `parallelize'.

That core code is doing:

 let blah = product $ factors $ product thelist
  in case par# blah of
       _ -> ...

So it's calling the primitive par# but nothing is forced to WHNF (except
the unused dummy return value of par#).

 I assume that %case in Core evaluates the argument to WHNF, just like
 case in Haskell.

Yes.

(In fact a case in Core always reduces its scrutinee to WHNF, whereas in
Haskell  case (...) of bar -> (...)  does not force anything.)
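That source-level distinction can be checked directly (a small sketch; `par#` is replaced here by `seq`, which forces to WHNF just as a Core case does):

```haskell
{-# LANGUAGE ScopedTypeVariables #-}
import Control.Exception (SomeException, evaluate, try)

-- A source-level case with a variable pattern does not force its scrutinee:
notForced :: Int
notForced = case (undefined :: Int) of _ -> 42

main :: IO ()
main = do
  print notForced  -- the bottom is never demanded; prints 42
  -- seq, like a Core case, does reduce its argument to WHNF:
  r <- try (evaluate ((undefined :: Int) `seq` (0 :: Int)))
  case r of
    Left (_ :: SomeException) -> putStrLn "seq forced the bottom"
    Right n                   -> print n
```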

 Then again, I could be completely misinterpreting what Core is, because
 I can't find any call to `parallelize' before or after that. It appears
 to be inlined in Core, regardless of whether the pragma
 
  {-# INLINE parallelize #-}
 
 is there or not.

Yes, because even without the pragma, ghc decides it's profitable to
inline.

 Actually, I can't see any effect of that pragma in the
 core files whatsoever, but it certainly has effect on run time.

How about diffing the whole core output (and using -ddump-simpl). If
there's a performance difference then there must be a difference in the
core code too.

 Or do you mean to say that *your* installation of GHC behaves
 the same when the function `parallelize' is defined in the same module and 
 when
 it's imported?

Yes, exactly. With both ghc-6.10.1 and a very recent build of ghc-6.11

Duncan



[Haskell-cafe] HPC Website FAQ

2009-05-24 Thread Dominic Steinitz
There's a nice website for HPC but it looks a bit out of date.

http://projects.unsafeperformio.com/hpc/

I wanted to send a patch to the FAQ for using HPC with .lhs files (you
have to run ghc -E to generate .hs files and strip some of the lines
ghc generates: {-# LINE 1 "ASNTYPE.lhs" #-} and #line 1 "ASNTYPE.lhs").
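The stripping step might look like this (a hypothetical helper, assuming the pragmas each occupy a whole line):

```haskell
import Data.List (isPrefixOf)

-- Remove the {-# LINE ... #-} and #line pragmas that ghc -E inserts,
-- leaving the rest of the preprocessed source untouched.
stripLinePragmas :: String -> String
stripLinePragmas = unlines . filter (not . isPragma) . lines
  where
    isPragma l = "{-# LINE" `isPrefixOf` l || "#line" `isPrefixOf` l

main :: IO ()
main = putStr (stripLinePragmas "{-# LINE 1 \"ASNTYPE.lhs\" #-}\nmodule ASNTYPE where\n")
```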

Would it be possible for the author of HPC to make the website into a
darcs repo so that we send in patches? Apparently the Cabal website is
managed like that and it sounds like a good way of doing things in
general to me.

Thanks, Dominic.



Re: Re: Re: Re: [Haskell-cafe] Re: A problem with par and modules boundaries...

2009-05-24 Thread Mario Blažević

 I recommend using -ddump-simpl, as it produces more readable output.
 
 Actually, I can't see any effect of that pragma in the
 core files whatsoever, but it certainly has effect on
 run time.
 
 How about diffing the whole core output (and using -ddump-simpl). If
 there's a performance difference then there must be a difference in the
 core code too.

Ok.

$ ghc-6.11.20090421 --make primes-test.hs -threaded -O2 -ddump-simpl > main.simpl
$ time ./primes-test +RTS -N2
4001

real    0m9.636s
user    0m18.201s
sys     0m0.088s

$ ghc-6.11.20090421 --make primes-test.hs -threaded -O2 -ddump-simpl > imported.simpl
$ time ./primes-test +RTS -N2
4001

real    0m17.547s
user    0m17.453s
sys     0m0.052s


I can't exactly use diff because the generated identifier names are not the 
same,
but after poring over with Emacs ediff I have found only one difference that's
not attributable to identifiers:

$ diff main.simpl imported.simpl
...
223c232
<    a_s1rs [ALWAYS Just L] :: GHC.Integer.Internals.Integer
---
>    a_s1sV [ALWAYS Just S] :: GHC.Integer.Internals.Integer
...
...


Does this S vs. L difference have anything to do with strictness and laziness?
That line is a part of the `Main.test' definition:

$ diff -C 3 main.simpl imported.simpl
*** 217,244 ****
  [Arity 2
   Str: DmdType LL]
  Main.test =
!   \ (n1_ahR :: GHC.Integer.Internals.Integer)
!     (n2_ahT :: GHC.Integer.Internals.Integer) ->
  let {
!   a_s1rs [ALWAYS Just L] :: GHC.Integer.Internals.Integer
    LclId
    [Str: DmdType]
!   a_s1rs =
      Data.List.prod1
        (Main.factors2
           (Data.List.prod1
!             (GHC.Num.up_list Main.lvl Main.lvl n1_ahR) Data.List.lvl1))
        Data.List.lvl1 } in
!   case GHC.Prim.par# @ GHC.Integer.Internals.Integer a_s1rs
    of _ { __DEFAULT ->
    case Data.List.prod1
           (Main.factors2
              (Data.List.prod1
!                (GHC.Num.up_list Main.lvl Main.lvl n2_ahT) Data.List.lvl1))
           Data.List.lvl1
!   of x1_aUS { __DEFAULT ->
!   case GHC.Real.$wdivMod x1_aUS a_s1rs of _ { (# ww1_aUo, _ #) ->
!   ww1_aUo
    }
    }
    }


 Or do you mean to say that *your* installation of GHC
 behaves the same when the function `parallelize' is defined in
 the same module and when it's imported?
 
 Yes, exactly. With both ghc-6.10.1 and a very recent build of ghc-6.11

As you can see, I'm using the 6.11.20090421 build. I looked at recent ones, but
the Linux builds seem to be less stable in May. I have the same results (though 
I
didn't use -ddump-simpl) with 6.8.2. Can you recommend a different version to 
try?




Re: [Haskell-cafe] ANN: Haskell Hackathon in Philadelphia

2009-05-24 Thread Andrew Wagner
Is there a list of projects that will be worked on during this, or how will
that work?

On Thu, May 21, 2009 at 5:39 PM, Brent Yorgey byor...@seas.upenn.edu wrote:

 Hi all!

 We are in the early stages of planning a Haskell hackathon/get
 together, Hac φ, to be held this summer at the University of
 Pennsylvania, in Philadelphia.  Right now we're looking at two
 possible dates:

  June 19-21   or   July 24-26

 If you might be interested in attending, please add your name on the
 wiki page:

  http://www.haskell.org/haskellwiki/Hac_%CF%86

 You can also note whether either of those dates absolutely doesn't
 work for you.  (If you don't have an account on the wiki, you can
 email Ashley Yakeley for an account [1], or feel free to just respond
 to this email.)  Expect more details (such as a nailed-down date)
 soon, once we have gauged the level of interest.

 Hope to see you in Philadelphia!

 Brent (byorgey)
 Daniel Wagner (dmwit)

 [1] http://haskell.org/haskellwiki/HaskellWiki:New_accounts



Re: Re: Re: Re: [Haskell-cafe] Re: A problem with par and modules boundaries...

2009-05-24 Thread Duncan Coutts
On Sun, 2009-05-24 at 12:48 -0400, Mario Blažević wrote:

  How about diffing the whole core output (and using -ddump-simpl). If
  there's a performance difference then there must be a difference in the
  core code too.

 I can't exactly use diff because the generated identifier names are not the 
 same,
 but after poring over with Emacs ediff I have found only one difference that's
 not attributable to identifiers:
 
 $ diff main.simpl imported.simpl
 ...
 223c232
 <    a_s1rs [ALWAYS Just L] :: GHC.Integer.Internals.Integer
 ---
 >    a_s1sV [ALWAYS Just S] :: GHC.Integer.Internals.Integer
 ...

Good find!

 Does this S vs. L difference have anything to do with strictness and laziness?

Yes.

So, I think we should open a GHC bug report with all the details,
particularly the example's source, the GHC version, and a highlight of
that strictness difference.

Duncan



[Haskell-cafe] `seq` and categories

2009-05-24 Thread Jason Dusek
  On the IRC channel a few days ago, it was said that, as long
  as we allow `seq`, Hask is not a valid category.

  Doesn't this basically mean that a very large amount of
  Haskell -- anything with strictness annotations -- can not be
  described in a category Hask?
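One concrete symptom (a sketch; the helper name is invented): `seq` distinguishes `undefined` from its eta-expansion `\x -> undefined x`, so two terms that should denote the same morphism have different observable behaviour, which is what breaks the category laws.

```haskell
{-# LANGUAGE ScopedTypeVariables #-}
import Control.Exception (SomeException, evaluate, try)

f :: Int -> Int
f = undefined

g :: Int -> Int
g = \x -> f x          -- eta-expansion of f

-- True if reducing the argument to WHNF succeeds, False if it diverges.
whnfOk :: a -> IO Bool
whnfOk x = either (\(_ :: SomeException) -> False) (const True)
             <$> try (evaluate (x `seq` ()))

main :: IO ()
main = do
  bf <- whnfOk f   -- f is bottom: seq diverges
  bg <- whnfOk g   -- g is a lambda, already in WHNF
  print (bf, bg)   -- seq tells the eta-equivalent f and g apart
```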

--
Jason Dusek


Re: [Haskell-cafe] FGL Question

2009-05-24 Thread Neil Brown

Hans van Thiel wrote:


Hi Hans,

I believe the problem is due to the inductive nature of the FGL 
library.  A graph in FGL is a series of contexts, each corresponding to 
a node.  Each context contains lists of links to/from the latest node to 
nodes in previous contexts.  Each link is only recorded once, and 
whether it appears in the context for the source or destination node 
depends on the order in which the graph is constructed.  For example, here's 
a simple graph:


graph :: Gr Int String
graph = mkGraph (zip [0..4] [0..4])
                [(0,1,"a"), (0,2,"b"), (1,2,"c"), (2,1,"d"), (3,0,"e")]

main :: IO ()
main = putStrLn $ show $ gsel (const True) graph

The output is (here, the last/outermost context is listed first):

[([("c",1),("b",0)],2,2,[("d",1)]),([("a",0)],1,1,[]),([],3,3,[("e",0)]),([],0,0,[]),([],4,4,[])]

Even though 0 has two outgoing links, its context shows no links in 
either direction -- hence why your code wasn't always working.


Generally, don't use the Context functions much, and use the functions 
involving Node instead.  Here's some simple solutions:


If all you need afterwards is the node labels, I'd suggest something like:

labelsZeroOut gr = [l | (n, l) - labNodes gr, outdeg gr n == 0]

If you do need to retain the graph structure, I'd suggest:

graphZeroOut gr = delNodes (filter ((== 0) . outdeg gr) $ nodes gr) gr
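The same idea, selecting sinks by their degree in the whole graph rather than by their local context, can be sketched without fgl on a plain successor map (hypothetical names; Data.Map comes with GHC's containers package):

```haskell
import qualified Data.Map as M

type AdjGraph = M.Map Int [Int]   -- node -> list of successors

-- Nodes with no outgoing edges, i.e. the sinks ("bottom" elements).
sinks :: AdjGraph -> [Int]
sinks g = [n | (n, succs) <- M.toList g, null succs]

main :: IO ()
main = print (sinks (M.fromList [(0,[1,2]), (1,[2]), (2,[]), (3,[0])]))
```

The point is the same as Neil's: the answer is a property of the whole edge relation, not of any single context.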

Hope that helps,

Neil.


Re: [Haskell-cafe] `seq` and categories

2009-05-24 Thread Don Stewart
jason.dusek:
   On the IRC channel a few days ago, it was said that, as long
   as we allow `seq`, Hask is not a valid category.
 
   Doesn't this basically mean that a very large amount of
   Haskell -- anything with strictness annotations -- can not be
   described in a category Hask?

I'm not sure of the category theoretic story, but I'd imagine so. For
background on how seq confuses reasoning (and how to restore it), see
Janis Voigtlaender's extensive, thorough work on the topic.

http://wwwtcs.inf.tu-dresden.de/~voigt/

Also, mail:

seq does not preclude parametricity
http://www.mail-archive.com/haskell-cafe@haskell.org/msg19820.html

-- Don


[Haskell-cafe] Defining a CPP macro when building documentation

2009-05-24 Thread Maxime Henrion
Hello all,


I have been exposed to a problem that has happened to others too, and
since I have found a (scary) solution with the help of Duncan Coutts,
I'm now sharing it with you.

The reason I wanted to pass specific CPP flags to haddock was to allow
the documentation of the full module to get built, even when only parts
of the module would end up getting compiled because of restrictions on
the build platform.

In this mail, Duncan Coutts gives a very helpful bit of code to override
the haddockHook and pass additional options to haddock:

http://www.haskell.org/pipermail/haskell-cafe/2008-December/051785.html

In my case, I needed something very similar, except that I needed to
pass those additional options to hsc2hs, which my code uses.  This worked
fine as long as the cabal command sequence was [configure, haddock]
and not [configure, build, haddock].  The reason is that in the latter
case the library wasn't being processed again; the output of previous
runs of hsc2hs was reused.

So I worked around this with an ugly hack that removes the .hs files
before running haddock, in order to force re-processing!  Here's the
final code I ended up with:

import Distribution.Simple
import Distribution.Simple.LocalBuildInfo (LocalBuildInfo(withPrograms), 
buildDir)
import Distribution.Simple.Program (userSpecifyArgs)

import System.Directory
import System.FilePath

-- Define __HADDOCK__ when building documentation.
main = defaultMainWithHooks simpleUserHooks {
  haddockHook = \pkg lbi h f -> do
    let progs = userSpecifyArgs "hsc2hs" ["-D__HADDOCK__"] (withPrograms lbi)
    removePreProcessedFiles (buildDir lbi)
    haddockHook simpleUserHooks pkg lbi { withPrograms = progs } h f
}

-- Horrible hack to force re-processing of the .hsc file.  Otherwise
-- the __HADDOCK__ macro doesn't end up being defined.
removePreProcessedFiles :: FilePath -> IO ()
removePreProcessedFiles dir =
  removeFile (dir </> "System/BSD/Sysctl.hs")
    `catch` \_ -> return ()

Cheers,
Maxime



[Haskell-cafe] ANN: bsd-sysctl 1.0.3

2009-05-24 Thread Maxime Henrion
I'm pleased to announce the release of bsd-sysctl 1.0.3, a package that
provides a System.BSD.Sysctl module allowing access to the C sysctl(3)
API.

It should fully work on FreeBSD, NetBSD and Mac OS X platforms; it
should also work on OpenBSD and Linux, although looking up sysctls by
name isn't supported there, and I believe the sysctl API doesn't return
anything useful under Linux.

The package is available here:

http://hackage.haskell.org/cgi-bin/hackage-scripts/package/bsd-sysctl-1.0.3

As this is my first package on hackage, I'm eager to hear about any
comments on this code.

Cheers,
Maxime



Re: [Haskell-cafe] the problem of design by negation

2009-05-24 Thread Richard O'Keefe

Design-by-negativity can *be* a way of being creative.
I've lost count of the number of times that I've been
explaining to someone why something can't be done, and
suddenly realised that one of the reasons was invalid
and seen how to do it.

The key is not whether you explore the design space
from a positive end or from a negative end, but whether
you *explore* it.



Re: [Haskell-cafe] Data.Binary suboptimal instance

2009-05-24 Thread Tim Docker
andrewcoppin:
 The problem seems to boil down to this: The Binary instance for
 Double (and Float, by the way) is... well I guess you could argue
 it's very portable, but efficient it isn't. As we all know, an
 IEEE-754 double-precision floating-point number occupies 64 bits;
 1 sign bit, 11 exponent bits, and 52 mantissa bits (implicitly 53).
 I had assumed that the Binary instance for Double would simply write
 these bits to disk, requiring approximately 0 computational power, and
 exactly 64 bits of disk space. I was wrong.

 Is there any danger that there might be some kind of improvement to the
 Double instance in the next version of Data.Binary?

This was discussed last week. A patch was posted implementing more
efficient low-level double encodings. Google for the thread "Data.Binary
and little endian encoding".
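For reference, the bit-for-bit reinterpretation Andrew describes can be sketched with the casts in GHC.Float (these arrived in later base versions, after this thread; the Data.Binary of the time had no such primitive):

```haskell
import GHC.Float (castDoubleToWord64, castWord64ToDouble)  -- base >= 4.10
import Data.Word (Word64)

-- Reinterpret the 64 bits of an IEEE-754 double, and back: this is the
-- "approximately 0 computational power, exactly 64 bits" encoding.
toBits :: Double -> Word64
toBits = castDoubleToWord64

fromBits :: Word64 -> Double
fromBits = castWord64ToDouble

main :: IO ()
main = print (fromBits (toBits 3.14) == 3.14)
```

Serialising the resulting Word64 is then a fixed-size write, unlike the portable-but-slow decodeFloat-based instance.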

Tim






Re: [Haskell-cafe] the problem of design by negation

2009-05-24 Thread Michael P Mossey

Richard O'Keefe wrote:

Design-by-negativity can *be* a way of being creative.
I've lost count of the number of times that I've been
explaining to someone why something can't be done, and
suddenly realised that one of the reasons was invalid
and seen how to do it.

The key is not whether you explore the design space
from a positive end or from a negative end, but whether
you *explore* it.


Hi Richard,

I think we are using "positive" and "negative" in a bit of a different sense (which 
may be my fault for not explaining perfectly in the first post). There are both 
positive and negative *facts* about design. There are things you can do, and 
things you can't. These are facts. I'm referring more to a specific kind of 
process (a specific kind of exploration)---in my terms, design by negation 
means that your dominant activity in design is cutting away possibilities, and 
what's left (however awkward) is what you must build. I have done this by habit, 
but I would like to shift into a mode of design that is focused on construction 
rather than destruction---to view design as an opportunity to meet most goals by 
cleverly combining facets.


Thanks,
Mike


Re: [Haskell-cafe] the problem of design by negation

2009-05-24 Thread Conal Elliott
The main objection I have to the negative process ("can't be done") is that it is
so often bogus.  Proof by lack of imagination.  I guess it works for
Richard, though not for Michael's architect, because Richard is able to
catch his bogus reasoning *and is willing* to do so, which requires
humility and ego-strength.

   - Conal

On Sun, May 24, 2009 at 6:35 PM, Michael P Mossey m...@alumni.caltech.edu wrote:

