Re: [Haskell-cafe] An APL library for Haskell

2013-09-16 Thread Austin Seipp
The message was held by Mailman, because it thought you had too many
recipients in the message. Gershom noticed this while we were doing
some maintenance, and released it. We also bumped the recipient limit
to 20 people, so this shouldn't be a problem again.

On Mon, Sep 16, 2013 at 4:32 PM, Simon Peyton-Jones
simo...@microsoft.com wrote:
 PS: Oddly, I sent this message in March 2012. I don’t know why it has taken
 over a year for it to be delivered!


 Simon



 From: Haskell-Cafe [mailto:haskell-cafe-boun...@haskell.org] On Behalf Of
 Simon Peyton-Jones
 Sent: 08 March 2012 13:45
 To: hask...@haskell.org; Haskell Cafe
 Cc: Lennart Augustsson; John Scholes; nic...@chalmers.se; Nate Foster; Andy
 Gill; Mary Sheeran; Fritz Henglein
 Subject: [Haskell-cafe] An APL library for Haskell



 Friends



 Many of you will know the array language APL.   It focuses on arrays and in
 particular has a rich, carefully-thought-out array algebra.



 An obvious idea is: what would a Haskell library that embodies APL’s array
 algebra look like?  In conversation with John Scholes and some of his
 colleagues in the APL community a group of us developed some ideas for a
 possible API, which you can find on the Haskell wiki here:
 http://www.haskell.org/haskellwiki/APL
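
 For a taste of the flavour (purely illustrative, not the API sketched on
 the wiki): APL-style scalar operations lift pointwise over whole arrays.
 Here lists stand in for arrays, and the names are ours, not a settled
 design:

 -- illustrative sketch only
 plus :: Num a => [a] -> [a] -> [a]
 plus = zipWith (+)

 iota :: Int -> [Int]             -- APL's monadic iota
 iota n = [1 .. n]

 example :: [Int]
 example = iota 5 `plus` iota 5   -- [2,4,6,8,10]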



 However, we have all gone our separate ways, and I think it’s entirely
 possible that the idea will go no further.  So this message is to ask:

 ·   Is anyone interested in an APL-style array library in Haskell?

 ·   If so, would you like to lead the process of developing the API?



 I think there are quite a few people who would be willing to contribute,
 including some core gurus from the APL community: John Scholes,  Arthur
 Whitney, and Roger Hui.



 Simon






-- 
Regards,
Austin - PGP: 4096R/0x91384671


Re: [Haskell-cafe] Renumbered mailing list posts

2013-08-10 Thread Austin Seipp
Henning,

Thanks for the report. I'm currently investigating this, and think it
should be possible to keep all of the old URLs intact.

On Sat, Aug 10, 2013 at 11:01 AM, Niklas Hambüchen m...@nh2.me wrote:
 On 11/08/13 00:50, Brandon Allbery wrote:
 Those at least are recoverable, just replace hpaste.org with
 lpaste.net (content is still there). But still.

 Unfortunately I cannot amend emails that I have sent.

 Could we not just have kept the domain and set a CNAME entry to the new one?




-- 
Regards,
Austin - PGP: 4096R/0x91384671



Re: [Haskell-cafe] Why isn't Program Derivation a first class citizen?

2013-02-13 Thread Austin Seipp
On Tue, Feb 12, 2013 at 4:47 PM, Nehal Patel nehal.a...@gmail.com wrote:

 A few months ago I took the Haskell plunge, and all goes well...
 ... snip ...
 And so my question is, that in 2013, why isn't this process a full fledged 
 part of the language? I imagine I just don't know what I'm talking about, so 
 correct me if I'm wrong, but this is how I understand the workflow used in 
 practice with program derivation:  1) state the problem pedantically in code, 
 2) work out a bunch of proofs with pen and paper, 3) and then translate that 
 back into code, perhaps leaving you with function_v1, function_v2, 
 function_v3, etc   -- that seems fine for 1998, but why is it still being 
 done this way?

There is no one-stop shop for this sort of stuff. There are plenty of
ways to show equivalence properties for certain classes of programs.

I am unsure why you believe people tend to use pen and paper -
machines are actually pretty good at this stuff! For example, I have
been using Cryptol lately, which is a functional language from Galois
for cryptography. One feature it has is equivalence checking: you can
ask if two functions are provably equivalent. This is used in the
examples to prove that a naive Rijndael implementation is equivalent
to an optimized AES implementation.

You might be asking why you can't do this for Haskell. Well, you can -
kind of. There is a library called SBV which is very similar in spirit
to Cryptol but expressed as a Haskell DSL (and not really for Crypto
specifically.) All SBV programs are executable as Haskell programs -
but we can also compile them to C, and prove properties about them by
using an SMT solver. This is 'free' and requires no programmer
intervention beyond stating the property. So we can specify faster
implementations, and properties that show they're equivalent. SMT
solvers have a very low cost and are potentially very useful for
certain problems.
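
As a minimal sketch of the idea (assuming a recent SBV; the two doubling
functions are illustrative, not from any real codebase):

import Data.SBV

-- Two implementations we claim are equivalent.
slowDouble, fastDouble :: SWord8 -> SWord8
slowDouble x = x + x
fastDouble x = 2 * x

-- Ask the SMT solver to prove they agree on every input.
main :: IO ()
main = print =<< prove (\x -> slowDouble x .== fastDouble x)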

So there are a lot of good approaches to tackling these problems in
certain domains, some of them more automated than others. You can't
really provide constructive 'proofs' like you do in Agda, though; the
language just isn't meant for it, and isn't really expressive enough
for it, even when GHC features are heavily abused.

 What I'm asking about might sound related to theorem provers, -- but if so ,I 
 feel like what I'm thinking about is not so much the very difficult problem 
 of automated proofs or even proof assistants, but the somewhat simpler task 
 of proof verification. Concretely, here's a possibility of how I imagine   
 the workflow could be:

I believe you are actually precisely talking about theorem provers, or
at least this is the impression I get. The reason for that is the next
part:

 ++ in code, pedantically setup the problem.
 ++ in code, state a theorem, and prove it -- this would require a revision to 
 the language (Haskell 201x) and perhaps look like Isabella's ISAR -- a 
 -structured- proof syntax that is easy for humans to construct and understand 
 -- in particular it would possible to easily reuse previous theorems and 
 leave out various steps.  At compile time, the compiler would check that the 
 proof was correct, or perhaps complain that it can't see how I went from step 
 3 to step 4, in which case I might have to add in another step (3b) to  help 
 the compiler verify the proof.
 ++ afterwards, I would use my new theorems to create semantically identical 
 variants of my original functions (again this process would be integrated 
 into the language)

Because people who write things in theorem provers reuse things! Yes,
many large programs in Agda and Coq for example use external libraries
of reusable proofs. An example of this is Agda's standard library for
a small case, or CoRN for Coq in the large. And the part about
'leaving things out' and filling them in if needed - well, both of
these provers have had implicit arguments for quite a while. But you
can't really make a proof out of axioms and letting the compiler
figure out everything. It's just not how those tools in particular
work.

I find it very difficult to see the difference between what you are
proposing and what people are doing now - it's often possible to give
a proof of something regardless of the underlying object. You may
prove some law holds over a given structure, for example, and then
show some other definitive 'thing' is classified by that structure. An
example is a monoid, which has an associative binary operation. The
/fact/ that the operation is associative is a law about a monoid's
behavior.

And if you think about it, we have monoids in Haskell, and we expect
those laws to hold. That's pretty
much the purpose of a type class in Haskell: to say some type abides
by a set of laws and properties regardless of its definitive
structure. And it's also like an interface in Java, sort of: things
which implement that basic interface must abide by *some* rules of
nature that make it so.
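
To make that concrete, here is the monoid associativity law for lists,
stated as a QuickCheck property rather than a machine-checked proof -
which is about the level of rigor Haskell itself gives you (a sketch;
the property names are mine):

import Test.QuickCheck

-- Lists form a monoid under (++); associativity is a law we expect
-- but which the compiler never verifies.
prop_assoc :: [Int] -> [Int] -> [Int] -> Bool
prop_assoc xs ys zs = (xs ++ ys) ++ zs == xs ++ (ys ++ zs)

prop_leftIdentity :: [Int] -> Bool
prop_leftIdentity xs = [] ++ xs == xs

main :: IO ()
main = quickCheck prop_assoc >> quickCheck prop_leftIdentity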

My point is, the 

Re: [Haskell-cafe] How far compilers are allowed to go with optimizations?

2013-02-06 Thread Austin Seipp
This is pretty much a core idea behind Data Parallel Haskell - it
transforms nested data parallel programs into flat ones. That's
crucial to actually making it perform well and is an algorithmic
change to your program. If you can reason about your program, and
perhaps have an effective cost model for reasoning about how it *will*
behave, then I don't see why not.*

Now, on a slight tangent, in practice, I guess it depends on your
target market. C programs don't necessarily expose the details to make
such rich optimizations possible. And Haskell programmers generally
rely on optimizations to make order-of-magnitude performance
differences in constant factors, although perhaps not in direct big-O
terms (and no, this isn't necessarily bad) - you will rarely see such
a change from various optimizing C compilers. The kinds and scope of
optimization opportunities are simply different. We're also not immune
to violations of intuition because of Compiler Things that have
nothing to do with data structures; even 'functionally equivalent'
programs can have drastically different space characteristics, just
due to sharing changes from CSE, eta reduction, or how the compiler is
feeling on any particular day of the week (I kid, I kid.) But overall,
we really do have vast amounts of robust, general optimization
opportunities - so I don't see why not try and exploit them if
reasonable.
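
As a tiny illustration of the sharing point (a sketch, not from any
real program):

-- 'Functionally equivalent', drastically different space behavior:
-- xs is shared here, so the whole list stays live until both folds
-- have finished with it...
pairUp :: Int -> (Int, Int)
pairUp n = (sum xs, length xs)
  where xs = [1 .. n]

-- ...whereas here each fold can consume its own list in constant
-- space - unless the compiler CSEs the two lists back into one
-- shared binding behind your back.
pairUp' :: Int -> (Int, Int)
pairUp' n = (sum [1 .. n], length [1 .. n])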

* Note that intuitions about how the compiler performs tasks like
inlining/rewrites/boxing are not necessarily the most theoretically
nice or even sane ways to reason about things. Which is what we do
now. In practice you do need to know about performance characteristics
with pretty much any language and a million other specific things, and
I'm not sure Haskell is necessarily worse off here than anything else.

On Wed, Feb 6, 2013 at 5:45 AM, Jan Stolarek jan.stola...@p.lodz.pl wrote:
 Hi all,

 some time ago me and my friend had a discussion about optimizations performed 
 by GHC. We wrote a
 bunch of nested list comprehensions that selected objects based on their 
 properties (e.g. finding
 products with a price higher than 50). Then we rewrote our queries in a more 
 efficient form by
 using tree-based maps as indices over the data. My friend knows how to turn 
 nested list
 comprehensions into queries that use maps and claims this could be automated. 
 He
 argued that it would be fine if the compiler did such an optimization
 automatically, since we can
 guarantee that both versions have exactly the same semantics and if they have 
 the same semantics
 we are free to use faster version. From a functional point of view this is a 
 good point, but
 nevertheless I objected to his opinion, claiming that if the compiler performed
 such a high-level
 optimization - replace underlying data structure with a different one and 
 turn one algorithm into
 a completely different one - the programmer wouldn't be able to reason about
 space behaviour of a
 program. I concluded that such a solution should not be built into a compiler 
 but instead turned
 into an EDSL.

 I would like to hear your opinion on this. How far can a compiler go in
 transforming code? I
 don't want to concentrate on discussing whether such an optimization is 
 possible to implement
 from a technical point of view. What I'm interested in is whether such kind 
 of high-level
 optimization could be accepted.

 Janek




-- 
Regards,
Austin



Re: [Haskell-cafe] 9.3 - (2 * 4.5) = 0.3000000000000007

2013-01-16 Thread Austin Seipp
This is a rounding error. It will happen in any language due to the
imprecision of floats; for example, using Ruby:

$ irb
1.9.3-p286 :001 > 9.3 - (2 * 4.5)
 => 0.3000000000000007
1.9.3-p286 :001 > ^D
$

Read this:

http://floating-point-gui.de/
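
The same thing happens in Haskell, for comparison (a sketch; the exact
digits printed may vary):

main :: IO ()
main = do
  print (9.3 - (2 * 4.5) :: Double)  -- prints 0.3000000000000007
  -- the Double actually stored for 9.3 is not exactly 9.3:
  print (toRational (9.3 :: Double))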

On Wed, Jan 16, 2013 at 7:25 AM, ivan dragolov i...@dragolov.net wrote:

 9.3 - (2 * 4.5) = 0.3000000000000007

 I expected 0.3

 ?

 --
 Иван Драголов
 dragolov.net

 GSM: 0888 63 19 46
 GSM for SMS: 0878 82 83 93
 facebook.com/ivan.dragolov
 twitter.com/dragolov





-- 
Regards,
Austin



Re: [Haskell-cafe] using/building ghc head?

2013-01-14 Thread Austin Seipp
Frankly, I wouldn't be surprised if the snapshots are dodgy. As far as
I'm aware, most people just build GHC HEAD
themselves, from the git repository. And a snapshot build does not
necessarily guarantee anything works in any case (there could be a
million things wrong in the repository causing failures in a binary
distribution you might not see in a developer build.)

So, if you want to build GHC, you can. This can be done in well under
an hour (or even 30 minutes) on a moderately powerful machine if you
pay attention to the build settings.

Check out the first section, 'Building GHC', here

http://hackage.haskell.org/trac/ghc/wiki/Building

And this page which has some good tricks for speeding up the build,
like using 'mk/build.mk':

http://hackage.haskell.org/trac/ghc/wiki/Building/Hacking

On Mon, Jan 14, 2013 at 1:10 PM, Johannes Waldmann
waldm...@imn.htwk-leipzig.de wrote:
 Hi. I wanted to do some experiments with GHC Head but

 * I cannot use the snapshot bindist:

 ./configure --prefix=/opt
 checking for path to top of build tree... ./configure: line 2138:
 utils/ghc-pwd/dist-install/build/tmp/ghc-pwd-bindist: No such file or 
 directory

 * I cannot compile snapshot from source (with ghc-7.6.1)

   utils/haddock/src/Haddock/Convert.hs:201:42:
 Not in scope: data constructor `HsBang'

 this is for ghc-7.7.20121213 (date Dec 21),
 which is the latest snapshot according to
 http://www.haskell.org/ghc/dist/current/dist/

 what am I missing? - J.W.





-- 
Regards,
Austin



Re: [Haskell-cafe] ANN: cabal-install-1.16.0 (and Cabal-1.16.0.1)

2012-10-03 Thread Austin Seipp
Hi,

Just a heads up: on Ubuntu 12.04 with GHC 7.4.1 out of apt (no
haskell-platform), the bootstrap.sh script fails because the
constraints for CABAL_VER_REGEXP are too lax:

$ sh ./bootstrap.sh
Checking installed packages for ghc-7.4.1...
Cabal is already installed and the version is ok.
transformers is already installed and the version is ok.
mtl is already installed and the version is ok.
deepseq is already installed and the version is ok.
text is already installed and the version is ok.
parsec is already installed and the version is ok.
network is already installed and the version is ok.
time is already installed and the version is ok.
HTTP is already installed and the version is ok.
zlib is already installed and the version is ok.
random is already installed and the version is ok.
cleaning...
Linking Setup ...
Configuring cabal-install-1.16.0...
Setup: At least the following dependencies are missing:
Cabal >=1.16.0 && <1.18

Error during cabal-install bootstrap:
Configuring the cabal-install package failed
$ ghc-pkg list Cabal
/var/lib/ghc/package.conf.d
   Cabal-1.14.0
/home/a/.ghc/x86_64-linux-7.4.1/package.conf.d
$

In bootstrap.sh, we see:

CABAL_VER="1.16.0";CABAL_VER_REGEXP="1\.(13\.3|1[4-7]\.)"  # >= 1.13.3 && < 1.18

The constraint should be updated so that it requires 1.16. It can be
fixed by saying:

CABAL_VER="1.16.0";CABAL_VER_REGEXP="1\.(1[6-7]\.)"  # >= 1.16 && < 1.18

Otherwise, you can't get a new cabal-install without the platform, or
you have to get 1.14 first and then install 1.16 via 'cabal install'.

I'll file a bug later today.

On Wed, Oct 3, 2012 at 11:06 AM, Johan Tibell johan.tib...@gmail.com wrote:
 On the behalf of the many contributors to cabal, I'm proud to present
 cabal-install-1.16.0. This release contains almost a year worth of
 patches. Highlights include:

  * Parallel installs (cabal install -j)
  * Several improvements to the dependency solver.
  * Lots of bugfixes

 We're also simultaneously releasing Cabal-1.16.0.1, which addresses a few bugs.

 To install:

 cabal update
 cabal install cabal-install-1.16.0 Cabal-1.16.0.1

 Complete list of changes in cabal-install-1.16.0:

 * Bump cabal-install version number to 1.16.0
 * Extend the unpack command for the .cabal file updating
 * On install, update the .cabal file with the one from the index
 * Make compatible with `network-2.4` API
 * Update ZLIB_VER in bootstrap.sh for ghc-7.6 compatibility
 * cabal-install.cabal: add Distribution.Client.JobControl and
 Distribution.Compat.Time
 * Adapt bootstrap.sh to ghc-7.6 changes
 * Move comment that was missed in a refactoring
 * cabal-install: Adapt for GHC 7.6
 * Ensure that the cabal-install logfile gets closed
 * Make cabal-install build with Cabal-1.16.
 * Initialise the 'jobs' config file setting with the current number of
 CPU cores.
 * Update version bounds for directory.
 * Update bootstrap.sh to match platform 2012.2.0.0
 * Relax dependency on containers.
 * Bump versions.
 * Better output for parallel install.
 * Fix warnings.
 * Remove 'tryCachedSetupExecutable'.
 * Redundant import.
 * Use a lock instead of 'JobControl 1'.
 * Comments, cosmetic changes.
 * Implement the setup executable cache.
 * Add the missing JobControl module
 * Fix missing import after merge of par build patches
 * Fix impl of PackageIndex.allPackagesByName
 * Drop the ghc-options: -rtsopts on cabal-install. We do not need it.
 * Parallelise the install command This is based on Mikhail Glushenkov's 
 patches.
 * InstallPlan: Add a Processing package state.
 * Add a '-j' flag for the 'install' command.
 * Add -threaded and -rtsopts to cabal-install's ghc-options.
 * Fix typos.
 * Fix warnings.
 * 80-col violation.
 * Spelling.
 * Fix warnings.
 * Extended a comment.
 * Force the log for the error to be printed in parallel with the complete 
 trace.
 * Remove goal choice nodes after reordering is completed.
 * Make modular solver handle manual flags properly.
 * Store manual flag info in search tree.
 * Maintain info about manual flags in modular solver.
 * Fix cabal-install build.
 * Merge pull request #6 from pcapriotti/master
 * Adapt to change in GHC package db flags.
 * Merge pull request #1 from iustin/master
 * Add support for Apache 2.0 license to cabal-install
 * Handle test and bench stanzas without dependencies properly in modular 
 solver.
 * Updated repo location in cabal files.
 * last-minute README changes
 * updated copyright year for Duncan
 * updated changelog
 * added deepseq to bootstrap.sh
 * handling the solver options properly in config file
 * handling the optimization option properly in config file
 * Update cabal-install bootstrap.sh
 * treat packages that are unknown no longer as an internal error in
 modular solver
 * minor wording change when printing install plans
 * no longer pre-filter broken packages for modular solver
 * for empty install plans, print the packages that are installed
 * make the reinstall check less noisy
 * disable line-wrapping for solver 

Re: [Haskell-cafe] Over general types are too easy to make.

2012-09-01 Thread Austin Seipp
On Sat, Sep 1, 2012 at 5:18 AM,  timothyho...@seznam.cz wrote:
 So after having played with it a little, it looks like GADTs are the way to
 go.  The method of manipulating the module system won't work because of two
 reasons:

 A) How can a user pattern match against data constructors that are hidden by
 the module system?

You can't directly, but the correct way to do this if you want it is
to use view patterns and export a view specifically for your hidden
data type. This gives you some extra flexibility in what you give to
clients, although view patterns add a little verbosity.
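
A sketch of the idiom (the module and names are illustrative):

module Hidden (Foo, FooView(..), view, mkBar, mkFrog) where

data Foo = Bar Int | Frog String Int   -- constructors not exported

data FooView = BarV Int | FrogV String Int

view :: Foo -> FooView
view (Bar n)    = BarV n
view (Frog s n) = FrogV s n

mkBar :: Int -> Foo
mkBar = Bar

mkFrog :: String -> Int -> Foo
mkFrog = Frog

A client enabling {-# LANGUAGE ViewPatterns #-} can then write
f (view -> FrogV s _) = s without ever seeing the real constructors.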

 B) It's an awful hack.

To each his own. The module system in Haskell really serves as nothing
more than a primitive namespace, so I don't see why hiding things by
default in namespaces seems like a hack (you should of course export
the hidden stuff too, but in another module, as a last-ditch escape
hatch in case users need it. Sometimes this is very useful.)

 Do I understand correctly that with the GADTs I have to create my own record
 selectors/lenses separately?

I don't know. I suppose it depends on the lens library in question.
Ones that use Template Haskell to drive accessors may, or may not,
work (I suppose it would depend on whether or not the derivation is
prepared to deal with GADTs when invoked, which isn't guaranteed -
Template Haskell is kind of awful like that.)

 Thanks,

 Timothy


 -- Original message --
 From: Austin Seipp mad@gmail.com


 Date: 31. 8. 2012
 Subject: Re: [Haskell-cafe] Over general types are too easy to make.

 What you are essentially asking for is a refinement on the type of

 'BadFoo' in the function type, such that the argument is provably
 always of a particular constructor.

 The easiest way to encode this kind of property safely with Haskell
 2010 as John suggested is to use phantom types and use the module
 system to ban people from creating BadFrog's etc directly, by hiding
 the constructors. That is, you need a smart constructor for the data
 type. This isn't an uncommon idiom and sometimes banning people (by
 default) from those constructors is exactly what you have to do. It's
 also portable and easy to understand.

 Alternatively, you can use GADTs to serve as a witness to a type equality
 constraint, and this will discharge some of the case alternatives you
 need to write. It's essentially the kind of refinement you want:

 data Frog
 data Bar

 data Foo x where
   Bar :: Int -> Foo Bar
   Frog :: String -> Int -> Foo Frog

 You can't possibly then pattern match on the wrong case if you specify
 the type, because that would violate the type equality constraint:

 deFrog :: Foo Frog -> String
 deFrog (Frog x _) = x
 -- not possible to define 'deFrog (Bar ...) ...', because that
 -- would violate the constraint 'Foo x' ~ 'Foo Frog'

 It's easier to see how this equality constraint works if you
 deconstruct the GADT syntax into regular equality constraints:

 data Bar
 data Frog

 data Foo x =
     (x ~ Bar)  => Bar Int
   | (x ~ Frog) => Frog String Int

 It's then obvious the constructor carries around the equality
 constraint at its use sites, such as the definition of 'deFrog'
 above.

 Does this solve your problem?

 On Fri, Aug 31, 2012 at 1:00 PM, timothyho...@seznam.cz wrote:
 I'd have to say that there is one(and only one) issue in Haskell that bugs
 me to the point where I start to think it's a design flaw:

 It's much easier to type things over generally than it is to type things
 correctly.

 Say we have a

 data BadFoo
   = BadBar  { badFoo :: Int }
   | BadFrog { badFrog :: String, badChicken :: Int }

 This is fine, until we want to write a function that acts on Frogs but not
 on Bars. The best we can do is throw a runtime error when passed a Bar and
 not a Foo:

 deBadFrog :: BadFoo -> String
 deBadFrog (BadFrog s _) = s
 deBadFrog BadBar{}      = error "Error: This is not a frog."

 We cannot type our function such that it only takes Frogs and not Bars.
 This makes what should be a trivial compile time error into a nasty
 runtime
 one :(

 The only solution I have found to this is a rather ugly one:

data Foo = Bar BarT | Frog FrogT

 If I then create new types for each data constructor.

data FrogT = FrogT{
 frog::String,
 chicken::Int}

data BarT = BarT{
 foo :: Int}

 Then I can type deFrog correctly.

 deFrog :: FrogT -> String
 deFrog (FrogT s _) = s

 But it costs us much more code to do it correctly. I've never seen it done
 correctly. It's just too ugly to do it right :/ and for no good reason. It
 seems to me, that it was a design mistake to make data constructors and
 types as different entities and this is not something I know how to fix
 easily with the number of lines of Haskell code in existence today :/

 main = do
   frog <- return (Frog (FrogT "Frog" 42))
   print $
     case frog of
       (Frog myFrog) -> deFrog myFrog
   badFrog <- return (BadBar 4)
   print $
     case badFrog of
       (notAFrog@BadBar{}) -> deBadFrog notAFrog

 The ease with which we make bad design choices and write bad code

Re: [Haskell-cafe] Over general types are too easy to make.

2012-08-31 Thread Austin Seipp
What you are essentially asking for is a refinement on the type of
'BadFoo' in the function type, such that the argument is provably
always of a particular constructor.

The easiest way to encode this kind of property safely with Haskell
2010 as John suggested is to use phantom types and use the module
system to ban people from creating BadFrog's etc directly, by hiding
the constructors. That is, you need a smart constructor for the data
type. This isn't an uncommon idiom and sometimes banning people (by
default) from those constructors is exactly what you have to do. It's
also portable and easy to understand.

Alternatively, you can use GADTs to serve as a witness to a type equality
constraint, and this will discharge some of the case alternatives you
need to write. It's essentially the kind of refinement you want:

data Frog
data Bar

data Foo x where
  Bar  :: Int -> Foo Bar
  Frog :: String -> Int -> Foo Frog

You can't possibly then pattern match on the wrong case if you specify
the type, because that would violate the type equality constraint:

deFrog :: Foo Frog -> String
deFrog (Frog x _) = x
-- not possible to define 'deFrog (Bar ...) ...', because that
-- would violate the constraint 'Foo x' ~ 'Foo Frog'
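
To be explicit about the payoff, a use site (illustrative):

-- the expected constructor is now visible in the type and checked
-- at compile time:
main :: IO ()
main = putStrLn (deFrog (Frog "ribbit" 42))
-- whereas 'deFrog (Bar 1)' is a type error: Bar builds a 'Foo Bar',
-- not a 'Foo Frog'.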

It's easier to see how this equality constraint works if you
deconstruct the GADT syntax into regular equality constraints:

data Bar
data Frog

data Foo x =
    (x ~ Bar)  => Bar Int
  | (x ~ Frog) => Frog String Int

It's then obvious the constructor carries around the equality
constraint at its use sites, such as the definition of 'deFrog'
above.

Does this solve your problem?

On Fri, Aug 31, 2012 at 1:00 PM,  timothyho...@seznam.cz wrote:
 I'd have to say that there is one(and only one) issue in Haskell that bugs
 me to the point where I start to think it's a design flaw:

 It's much easier to type things over generally than it is to type things
 correctly.

 Say we have a

 data BadFoo
   = BadBar  { badFoo :: Int }
   | BadFrog { badFrog :: String, badChicken :: Int }

 This is fine, until we want to write a function that acts on Frogs but not
 on Bars.  The best we can do is throw a runtime error when passed a Bar and
 not a Foo:

 deBadFrog :: BadFoo -> String
 deBadFrog (BadFrog s _) = s
 deBadFrog BadBar{}      = error "Error: This is not a frog."

 We cannot type our function such that it only takes Frogs and not Bars.
 This makes what should be a trivial compile time error into a nasty runtime
 one :(

 The only solution I have found to this is a rather ugly one:

data Foo = Bar BarT | Frog FrogT

 If I then create new types for each data constructor.

data FrogT = FrogT{
 frog::String,
 chicken::Int}

data BarT = BarT{
 foo :: Int}

 Then I can type deFrog correctly.

 deFrog :: FrogT -> String
 deFrog (FrogT s _) = s

 But it costs us much more code to do it correctly.  I've never seen it done
 correctly.  It's just too ugly to do it right :/ and for no good reason.  It
 seems to me, that it was a design mistake to make data constructors and
 types as different entities and this is not something I know how to fix
 easily with the number of lines of Haskell code in existence today :/

 main = do
   frog <- return (Frog (FrogT "Frog" 42))
   print $
     case frog of
       (Frog myFrog) -> deFrog myFrog
   badFrog <- return (BadBar 4)
   print $
     case badFrog of
       (notAFrog@BadBar{}) -> deBadFrog notAFrog

 The ease with which we make bad design choices and write bad code(in this
 particular case) is tragically astounding.

 Any suggestions on how the right way could be written more cleanly are very
 welcome!

 Timothy Hobbs





-- 
Regards,
Austin



Re: [Haskell-cafe] how to check thunk

2012-07-02 Thread Austin Seipp
Hi,

Just a word of note: a while back, I decided to take up maintainership
of Vacuum and some associated stuff. In the process of doing this, I
realized that the ClosureType code in vacuum may not accurately model
reality depending on the GHC version. In particular, the definition of
ClosureType in vacuum as currently released on Hackage is woefully out
of date with respect to modern GHC. It seems Matt Morrow originally
just copied the definition wholesale when he developed it, but as GHC
has evolved, so too has the ClosureType definition. Vacuum has not
kept up.

I have a github repository[1] containing several cleanups of Vacuum
which I'm calling Vacuum 2.0, one of these cleanups being that the
ClosureType definition is automatically generated from the GHC
definition, and compiled-in accordingly. I have also dropped support
for older GHCs (although it could be re-added.)

In this case, based on the changes I made I believe it is fairly
unlikely that isThunk would turn up being wrong here. The THUNK*
definitions seem to have been relatively untouched over GHC's
evolution. But for more advanced functionality, it could possibly
result in Vacuum as it stands constructing an invalid heap
representation based on object misinterpretations.

I have been busy with a new job but I am not totally absent, and I
think I will take this opportunity to release v2.0 in short order,
which is mostly intended as a polish and maintenance release for
modern GHCs. In the future I would like to see vacuum deprecated and
move all its functionality into GHC - much of the code is deeply
intertwined with GHC's runtime representations and in some cases I
have seen code in the package that was literally copied verbatim from
the GHC sources itself, obviously dating several years. It's hard to
assess the validity of much of this code. All of this feels brittle
and makes me uneasy, and given the complexity of the package as it
stands, will make it extremely difficult to maintain as GHC moves
forward.

Sorry for the ramble, but I figured it might be relevant if people are
going to rely on this functionality.

[1] https://github.com/thoughtpolice/vacuum

On Mon, Jul 2, 2012 at 2:54 AM, Erik Hesselink hessel...@gmail.com wrote:
 There is also the 'isevaluated' package (which depends on vacuum, but
 seems to do something more involved than your code).

 Erik

 On Mon, Jul 2, 2012 at 7:40 AM, Chaddaï Fouché chaddai.fou...@gmail.com 
 wrote:
 On Mon, Jul 2, 2012 at 5:29 AM, Kazu Yamamoto k...@iij.ad.jp wrote:
 Hello,

 Are there any ways to see if a value is a thunk or memoized?
 I would like to have a function like:
   isThunk :: a -> IO Bool


 vacuum allows that and much more, though I don't know if it still works
 correctly on GHC 7.4. Anyway, your isThunk is

 isThunk a = fmap GHC.Vacuum.ClosureType.isThunk (GHC.Vacuum.closureType a)

 (Feel free to arrange your imports to make that a bit more readable ;)

 http://hackage.haskell.org/package/vacuum
 --
 Jedaï




-- 
Regards,
Austin



Re: [Haskell-cafe] how to check thunk

2012-07-02 Thread Austin Seipp
Hi Rico,

This is certainly a possibility I have not considered. However, I
still think the Right Way Forward is to ultimately deprecate the
current package as quickly as possible and move its outstanding
functionality into GHC. Vacuum is still undeniably useful for older
GHCs, so I would be willing to maintain the current package past that
point. But GHC is quick-moving, and considering how Vacuum is so
closely linked to its internals, I think overall this is the right
thing to do to reduce maintenance burden in the long haul (and it
would certainly reduce a lot of code, since a good portion of it
exists in GHC already as I mentioned.)

I'm not completely opposed to other options here, and it's not
entirely my decision either - in particular, it does add more code to
GHC, which GHC HQ has to maintain possibly past my lifetime as a
contributor. So I'm open to being convinced otherwise.

On Mon, Jul 2, 2012 at 6:15 AM, Rico Moorman rico.moor...@gmail.com wrote:
 Dear Austin,

 Wouldn't it be a good idea to link the Vacuum version number to the
 related GHC version number directly, as its functionality is directly
 tied to the GHC version anyway?

 vacuum 7.4 for ghc 7.4; vacuum 7.2 for ghc 7.2 aso.

 Best regards,

 Rico Moorman

 On Mon, Jul 2, 2012 at 12:04 PM, Austin Seipp mad@gmail.com wrote:
 Hi,

 Just a word of note: a while back, I decided to take up maintainership
 of Vacuum and some associated stuff. In the process of doing this, I
 realized that the ClosureType code in vacuum may not accurately model
 reality depending on the GHC version. In particular, the definition of
 ClosureType in vacuum as currently released on Hackage is woefully out
 of date with respect to modern GHC. It seems Matt Morrow originally
 just copied the definition wholesale when he developed it, but as GHC
 has evolved, so too has the ClosureType definition. Vacuum has not
 kept up.

 I have a github repository[1] containing several cleanups of Vacuum
 which I'm calling Vacuum 2.0, one of these cleanups being that the
 ClosureType definition is automatically generated from the GHC
 definition, and compiled-in accordingly. I have also dropped support
 for older GHCs (although it could be re-added.)

 In this case, based on the changes I made I believe it is fairly
 unlikely that isThunk would turn up being wrong here. The THUNK*
 definitions seem to have been relatively untouched over GHC's
 evolution. But for more advanced functionality, it could possibly
 result in Vacuum as it stands constructing an invalid heap
 representation based on object misinterpretations.

 I have been busy with a new job but I am not totally absent, and I
 think I will take this opportunity to release v2.0 in short order,
 which is mostly intended as a polish and maintenance release for
 modern GHCs. In the future I would like to see vacuum deprecated and
 move all its functionality into GHC - much of the code is deeply
 intertwined with GHC's runtime representations and in some cases I
 have seen code in the package that was literally copied verbatim from
 the GHC sources itself, obviously dating several years. It's hard to
 assess the validity of much of this code. All of this feels brittle
 and makes me uneasy, and given the complexity of the package as it
 stands, will make it extremely difficult to maintain as GHC moves
 forward.

 Sorry for the ramble, but I figured it might be relevant if people are
 going to rely on this functionality.

 [1] https://github.com/thoughtpolice/vacuum

 On Mon, Jul 2, 2012 at 2:54 AM, Erik Hesselink hessel...@gmail.com wrote:
 There is also the 'isevaluated' package (which depends on vacuum, but
 seems to do something more involved than your code).

 Erik

 On Mon, Jul 2, 2012 at 7:40 AM, Chaddaï Fouché chaddai.fou...@gmail.com 
 wrote:
 On Mon, Jul 2, 2012 at 5:29 AM, Kazu Yamamoto k...@iij.ad.jp wrote:
 Hello,

 Are there any ways to see if a value is a thunk or memoized?
 I would like to have a function like:
   isThunk :: a -> IO Bool


 vacuum allows that and much more, though I don't know if it still works
 correctly on GHC 7.4. Anyway, your isThunk is

 isThunk a = fmap GHC.Vacuum.ClosureType.isThunk (GHC.Vacuum.closureType a)

 (Feel free to arrange your imports to make that a bit more readable ;)

 http://hackage.haskell.org/package/vacuum
 --
 Jedaï




 --
 Regards,
 Austin




-- 
Regards,
Austin


Re: [Haskell-cafe] how to check thunk

2012-07-02 Thread Austin Seipp
On Mon, Jul 2, 2012 at 6:56 AM, Ivan Lazar Miljenovic
ivan.miljeno...@gmail.com wrote:
 If you've taken over maintainership, should we remove it from
 haskell-pkg-janitors?

I haven't removed it from haskell-pkg-janitors because I haven't made
a release and the current package points there as the homepage and
source repository. My intentions were to remove the original
repository (as I'm a member of the github group) once a release was
made.

 (I would also appreciate having my copyright being listed in
 https://github.com/thoughtpolice/vacuum/blob/master/pkgs/graphviz/src/GHC/Vacuum/GraphViz.hs
 as that looks suspiciously like my code [1,2]).

I've pushed a commit to fix this. I'm very sorry - I imagine I was
copy-pasting module-header boilerplate across the source tree to get
some decent haddock results, and forgot about that (I also put the
code for vacuum-graphviz up as LGPLv3 to match Vacuum's license even
though I prefer BSD. Perhaps this should be changed as well, if you're
not opposed.)

I'll probably make a release later today after doing a glance over the
source tree and adding attributions I otherwise forgot (including
Conrad's work.)

-- 
Regards,
Austin



Re: [Haskell-cafe] how to check thunk

2012-07-02 Thread Austin Seipp
On Mon, Jul 2, 2012 at 7:33 AM, Ivan Lazar Miljenovic
ivan.miljeno...@gmail.com wrote:
 I'm OK with BSD for this.  And I understand that copy-pasting
 boilerplate could mess things up ;-)

I think I'll change it then, thanks :)

 There is a 2999.13.* series of graphviz out; I haven't actually tested
 this, but I think the limited part of the API that gets used remains
 the same.

I can test this later today before pushing out something on Hackage.

 When you get around to updating vacuum-cairo, it should then also dep
 on vacuum-graphviz and use customise vacuumParams if need be and then
 a custom call to graphToDotParams (that was the point of why I wrote
 the graphviz usage like that; Don copied a lot of the old Dot usage
 out just to be able to customise the attributes).

I actually have not updated the Cairo code to work with the updated
packages; it is likely broken as hell - I merely put it in the source
tree to keep track of it.

 Well, you should probably mention Matt Morrow in Authors.txt ... ;-)
 (and the only thing I did was remove the old manual Dot manipulation,
 so it could be argued that my name isn't as important in there).

Matt is in the AUTHORS.txt at the very top :) Although he should
rightfully be accredited with Copyright in all the modules, which I
will get to later today as well.

(Sorry for hijacking this thread, by the way.)


 --
 Regards,
 Austin



 --
 Ivan Lazar Miljenovic
 ivan.miljeno...@gmail.com
 http://IvanMiljenovic.wordpress.com



-- 
Regards,
Austin



Re: [Haskell-cafe] bytestring, array and Safe Haskell

2012-05-08 Thread Austin Seipp
The reasoning is outlined in the user manual here:

http://www.haskell.org/ghc/docs/7.4.1/html/users_guide/safe-haskell.html#safe-inference

Basically, these modules will compile without error if they were to be
compiled with -XSafe enabled. Thus, they are safe-inferred. The check
does not scrutinize individual functions in any way; all of SH works
on the level of module boundaries, as well as packages (whether or not
to enforce package trust when compiling clients.) As to why they are
safe-inferred, considering they are importing primitive
libraries/modules, and the module inlinePerformIO lives in should be
marked unsafe, well, I don't know. This is just a simple check, and
anything more powerful would likely be considerably more complex and
difficult to get right.

They should be marked as -XUnsafe since their use in a safe program
could cause crashes. It's likely not a conscious choice that this is
the case, as much as it is probably an oversight. Many of the core
libraries have needed refactoring/changes at the module level due to
Safe Haskell (and hence there is now a proliferation of *.Unsafe and
*.Safe modules, a la ForeignPtr.)
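
For illustration, the kind of marking I mean (a sketch, not the actual
bytestring source):

{-# LANGUAGE Unsafe #-}
-- Explicitly opt out of safe inference: any module re-exporting
-- unsafePerformIO-style primitives should carry this marker.
module Sketch.Unsafe (dodgyPerformIO) where

import System.IO.Unsafe (unsafePerformIO)

dodgyPerformIO :: IO a -> a
dodgyPerformIO = unsafePerformIO

A module compiled with {-# LANGUAGE Safe #-} then simply cannot import
Sketch.Unsafe.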

I do not know if the next major version of the ByteString library
(0.10) or array has marked these as unsafe or not. They should be if
not.

Perhaps someone else who's more aware of the new Safe Haskell design
can comment further.

On Tue, May 8, 2012 at 3:03 PM, Francesco Mazzoli f...@mazzo.li wrote:
 Why are
 http://hackage.haskell.org/packages/archive/bytestring/0.9.2.1/doc/html/Data-ByteString-Unsafe.html
 and
 http://hackage.haskell.org/packages/archive/array/0.4.0.0/doc/html/Data-Array-Unsafe.html
 Safe-inferred?

 The first one uses inlinePerformIO, so it clearly shouldn't be marked as
 Safe. Maybe Safe Haskell doesn't check that function?
 The second is a bit messier since it uses unboxed types and primitive
 operations... But they clearly should be marked as Unsafe, and it surprises
 me that Safe Haskell is that relaxed when checking for safe functions.

 Francesco.




-- 
Regards,
Austin



Re: [Haskell-cafe] Can Haskell outperform C++?

2012-05-06 Thread Austin Seipp
In this case it doesn't matter; while it isn't technically tail
recursive, GCC is very capable of transforming it into a direct loop,
likely because it knows about the associative/commutative properties
of +, so it's able to re-arrange the body as it sees fit - combined,
both calls are in 'tail position.'
With GCC 4.6.3 at the -O3 optimization level on my Ubuntu 12.04
machine (Intel Core i5-2520M CPU @ 2.50GHz, dual core with 4 total
logical cores), the example in the linked article executes in 9s, and
the body of fib does not contain any call instructions - in fact, GCC
seems to not only transform the body into a loop, but unroll a very
good portion of it, leading to very fast code.

Technically, if you want to be a pedant, the benchmarks aren't even
equivalent anyway; GCC is able to use the native 'long long' datatype
while GHC promotes the results of 'fib' to Integer, and thus is backed
by GMP and a lot of extra code. GMP is fast, but it's not as fast as a
native data type. It should use Int64. That change alone speeds up the
benchmark by approximately 30% for me using GHC 7.4.1 (~30s at -N4 vs
~19s at -N4.)
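
For reference, the shape of the benchmark with that change (a sketch
along the lines of the post, not its exact code):

import Control.Parallel (par, pseq)
import Data.Int (Int64)

-- Naive parallel fib over Int64 rather than a defaulted Integer.
fib :: Int64 -> Int64
fib n
  | n < 2     = n
  | otherwise = l `par` (r `pseq` l + r)
  where
    l = fib (n - 1)
    r = fib (n - 2)

main :: IO ()
main = print (fib 38)

Compile with -threaded and run with +RTS -N4 to use all the cores.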

I don't think the point of that post was to outwit the C compiler, but
show how easy it is to add parallelism to a Haskell program
(ironically, pseq/par has proven quite difficult to leverage for nice
wall-clock speedups in practice, hence the advent of libraries like
strategies and more recently monad-par, that make speedups easier to
get.) I do think it's much easier to add parallelism to Haskell
programs - even if they are not as fast, it's far easier to write,
understand, and safer to do so. And ultimately all this code is very
old (5+ years) and not reflective of either toolchain anymore. GHC has
had countless overhauls in its parallelism support as
well as the libraries supporting it, and both GHC and GCC have far
better optimisation capabilities.

On Sun, May 6, 2012 at 4:09 AM, Roman Cheplyaka r...@ro-che.info wrote:
 It is not tail-recursive.

 * Yves Parès yves.pa...@gmail.com [2012-05-06 10:58:45+0200]
 I do not agree: the fib function is tail-recursive, any good C compiler is
 able to optimize away the calls and reduce it to a mere loop.
 At least that's what I learnt about tail recursion in C with GCC.

 2012/5/6 Artur apeka1...@gmail.com

  On 06.05.2012 10:44, Ivan Lazar Miljenovic wrote:
 
  On 6 May 2012 16:40, Janek S.fremenz...@poczta.onet.pl  wrote:
 
  Hi,
 
  a couple of times I've encountered a statement that Haskell programs can
  have performance
  comparable to programs in C/C++. I've even read that thanks to
  functional nature of Haskell,
  compiler can reason and make guarantess about the code and use that
  knowledge to automatically
  parallelize the program without any explicit parallelizing commands in
  the code. I haven't seen
  any sort of evidence that would support such claims. Can anyone provide
  a code in Haskell that
  performs better in terms of execution speed than a well-written C/C++
  program? Both Haskell and C
  programs should implement the same algorithm (merge sort in Haskell
  outperforming bubble sort in
  C doesn't count), though I guess that using Haskell-specific idioms and
  optimizations is of
  course allowed.
 
  How about
  http://donsbot.wordpress.com/2007/11/29/use-those-extra-cores-and-beat-c-today-parallel-haskell-redux/
  ?
 
   Hi,
 
   isn't it that this particular Haskell code is outperforming C (22 seconds
  vs. 33) just because the author uses recursion in C? I surely love Haskell,
  and the way its code is easily parallelized, but that example seems not
  fair.
 
 


 --
 Roman I. Cheplyaka :: http://ro-che.info/




-- 
Regards,
Austin



Re: [Haskell-cafe] Prettier pretty-printing of data types?

2012-03-13 Thread Austin Seipp
It's not exactly hierarchical, but Groom most certainly should help
with getting much prettier output:

http://hackage.haskell.org/package/groom
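
For example, something like this (a sketch; assuming groom's
Text.Groom.groom :: Show a => a -> String):

import Text.Groom (groom)

data Tree = Leaf | Bin Int Tree Tree
  deriving Show

main :: IO ()
main = putStrLn (groom (Bin 1 (Bin 2 Leaf Leaf) (Bin 3 Leaf Leaf)))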

On Tue, Mar 13, 2012 at 5:33 PM, Johan Tibell johan.tib...@gmail.com wrote:
 Hi all,

 The derived Show instance is useful, but I sometimes wish for
 something that's easier to read for big data types. Does anyone have
 an implementation of show that draws things in a hierarchical manner?
 Example:

 Given

    data Tree a = Leaf | Bin a (Tree a) (Tree a)

 and

    value = Bin 1 (Bin 2 Leaf Leaf) (Bin 3 Leaf Leaf)

 draw as

 Bin
  1
  Bin
    2
    Leaf
    Leaf
  Bin
    3
    Leaf
    Leaf

 Cheers,
 Johan




-- 
Regards,
Austin



Re: [Haskell-cafe] Haskell development in Mac OS X after Gatekeeper

2012-02-22 Thread Austin Seipp
Manuel,

Thanks for the references and follow-up. I had seen Kenneth's posts
about the new command line tools for Xcode, but hadn't seen John
Gruber's take! Much appreciated.

On Tue, Feb 21, 2012 at 2:52 AM, Manuel M T Chakravarty
c...@cse.unsw.edu.au wrote:
 Austin Seipp:
 On Sun, Feb 19, 2012 at 8:39 PM, Tom Murphy amin...@gmail.com wrote:
 On the other hand,
 it's impossible for a software company to maintain a sense of
 professionalism, when a user has to know a weird secret handshake to
 disable what they may perceive as equivalent to antivirus software.

 I'll also just add that if you're an actual software company, large or
 small, the $100 for the developer ID, certificate, ability to do
 iOS/App store apps, whatever, is a business expense, that is utterly
 dominated by a million other factors, as developing high quality
 applications isn't exactly cheap, and the price of a license is really
 the last thing you're going to worry about.

 If you're more worried about the potential to impact individual
 developers and small open source teams who want to get their work out
 there, you are right it's a concern.

 I think, Apple has made their stance quite clear by releasing the command 
 line dev tools:

  http://kennethreitz.com/xcode-gcc-and-homebrew.html

 Manuel




-- 
Regards,
Austin



Re: [Haskell-cafe] Haskell development in Mac OS X after Gatekeeper

2012-02-19 Thread Austin Seipp
On Sun, Feb 19, 2012 at 6:01 PM, Tom Murphy amin...@gmail.com wrote:
 0) Distributing non-Cocoa-built apps, even if you're approved by Apple

Do you just mean binaries that you expect users run under
/usr/local/bin or something, not app bundles? If that's the case, I
cannot say if the same restrictions will apply. From my reading on the
issue, though, in the case of an app bundle you don't have to be
'approved' per se by Apple. By having a Developer ID, you have a key
you can then sign your binaries with. A binary signed with a valid key
from Apple will run on any OS X machine, provided the Gatekeeper
settings allow it - should that app later be discovered as malware, or
should the *key* be used to sign some other piece of malware (because
maybe it got stolen), they just blacklist your key and no users can
then run it.

As a result, none of your applications you distribute outside of the
Mac App Store have to be 'approved' - you just need to sign the
binaries with your key before distributing them. It's blacklisting
based, not whitelisted.

 1) Writing software for widespread use (a security setting is to only
 run software from the App Store, and I'd like to have my software
 function on users' computers.)

Settings like this are beyond your control; it's just a fact of life.
This basically affects every single thing that *isn't* in the Mac App
Store, and if users decide to enable this, there's nothing you can
really do other than telling them to change gatekeeper settings. Users
can always over-ride this on a temporary, per-app basis, by holding
control and clicking on the binary instead.

     1.0) Aren't you unable to put software under the GPL or certain
 other open-source licenses on the App Store?

Someone more familiar with the AS on Mac/iOS will have to comment
here. I'm fairly certain the iOS store does not do GPL'd applications,
but I don't know about the Mac App Store.

-- 
Regards,
Austin



Re: [Haskell-cafe] Haskell development in Mac OS X after Gatekeeper

2012-02-19 Thread Austin Seipp
On Sun, Feb 19, 2012 at 8:19 PM, Tom Murphy amin...@gmail.com wrote:
     Actually, what I was more concerned about was the ability to
 distribute a full Mac application, with a GUI, made with a method
 other than calling Haskell from Objective-C.
     It seems that *none* of these applications will be usable by
 anyone except users with all security settings turned off (it doesn't
 sound great in a user manual: "Every time you run this program, be
 sure to turn the malware-detector all the way off")

     The reason I'm concerned is that having a security signature
 requires a membership to the Apple Developers program, which is
 exclusively for XCode [0]. Isn't it logical to assume that the
 signature-bundling process [1] occurs within XCode?
     (I'm assuming the digital summary of the contents of the
 application is a hash, which (I think) would imply that
 XCode-compilation would have to be the final step in the development
 chain)

On OS X, you can sign applications or paths, including a .app bundle,
with any certificate you like using the 'codesign' utility. If you're
going to distribute an OS X application to average users, let's face
it: you're going to give them an .app bundle.

You can do it yourself with a self-trusted code signing certificate
already. Building LLDB on OS X for example, requires self signing in
this manner, because the debugging engine needs permissions granted by
the signature (AFAIK.) Regular LLDB with XCode already comes signed by
Apple, obviously.

     Which (again, unless I'm reading it wrong) means that most
 Haskell OS X GUI work (incl. FRP) goes out the window?!

No. Just sign your .app bundle with your Developer ID cert using
codesign after the build and bundling process, and it'll work unless
they only have Gatekeeper enabled to allow Mac App store apps. There's
nothing you can do about this if they have it enabled if you're not
willing to put it on the store, other than advise them to disable it.
If it's on the store, you've already paid for the developer license
and signed it anyway.

The only difference Mountain Lion adds is that now you must at least
sign those applications which you intend to distribute to regular
users by whatever means, but not put them on the App Store. That's
really it at the core. And tech demos and code examples will never be
relevant if the target is programmers, because developers are just
going to have it disabled (equivalent to the way OS X is now, in
effect.)

The only two things not clear at this point, at least to me, are:

1) Will Apple require the paid development program, as opposed to the
free one, if you only want to self-sign applications with a cert they
trust?
2) What will the default Gatekeeper setting in Mountain Lion be?

These 2 factors will control whether or not you'd have to pay and the
user impact. In an ideal world, you won't require the paid dev ID (I
don't know the expense of giving out certs however,) and the default
setting would be App store + Dev signed. Unfortunately we'll just have
to wait and see on that note.

-- 
Regards,
Austin



Re: [Haskell-cafe] Haskell development in Mac OS X after Gatekeeper

2012-02-19 Thread Austin Seipp
On Sun, Feb 19, 2012 at 8:39 PM, Tom Murphy amin...@gmail.com wrote:
 On the other hand,
 it's impossible for a software company to maintain a sense of
 professionalism, when a user has to know a weird secret handshake to
 disable what they may perceive as equivalent to antivirus software.

I'll also just add that if you're an actual software company, large or
small, the $100 for the developer ID, certificate, ability to do
iOS/App store apps, whatever, is a business expense, that is utterly
dominated by a million other factors, as developing high quality
applications isn't exactly cheap, and the price of a license is really
the last thing you're going to worry about.

If you're more worried about the potential to impact individual
developers and small open source teams who want to get their work out
there, you are right it's a concern. We'll have to wait and see when
more details arise.

-- 
Regards,
Austin



Re: [Haskell-cafe] Transactional memory going mainstream with Intel Haswell

2012-02-09 Thread Austin Seipp
Duncan Coutts talked a bit about this on Reddit here:

http://www.reddit.com/r/programming/comments/pfnkx/intel_details_hardware_transactional_memory/c3p4oq7

On Thu, Feb 9, 2012 at 12:43 PM, Ben midfi...@gmail.com wrote:
 http://arstechnica.com/business/news/2012/02/transactional-memory-going-mainstream-with-intel-haswell.ars

 would any haskell STM expert care to comment on the possibilities of hardware 
 acceleration?

 best, ben



-- 
Regards,
Austin



Re: [Haskell-cafe] The State of Testing?

2012-02-07 Thread Austin Seipp
If you're writing a library, you need to compile the library with
`-fhpc`, i.e. put it in the library stanza, not the testsuite stanza,
and then you can compile the test program using your library - the
resulting 'tix' file will contain the library coverage reports. You
can link a HPC-built library into an executable not compiled with HPC
just fine.

Normally I only compile the library under HPC mode, link it in a test,
and distribute the results from that. That way your coverage reports
don't include the test module (which may or may not be relevant.)

I normally add a cabal flag called 'hpc' which optionally enables
coverage reports for my library, e.g.

flag hpc
  default: False

library
  ...
  ...
  if flag(hpc)
    ghc-options: -fhpc

Then when you want coverage reports, just say 'cabal install -fhpc
--enable-tests' and the resulting properties executable will spit out
the results when run.

On Tue, Feb 7, 2012 at 3:16 PM, Michael Craig mks...@gmail.com wrote:
 Thanks for the advice, all. I've got test-framework, quickcheck, and cabal's
 test-suite all working together nicely.

 Cabal seems to support using hpc to check test coverage. If I add -fhpc to
 the ghc-options under the test-suite, I get output like Test coverage
 report written to dist/hpc/html/tests/hpc_index.html and Package coverage
 report written to dist/hpc/html/test-0.0.0/hpc_index.html, but those html
 files are just empty tables. How does this work?

 Mike Craig




 On Thu, Feb 2, 2012 at 8:45 PM, Ivan Lazar Miljenovic
 ivan.miljeno...@gmail.com wrote:

 On 03/02/2012 12:22 PM, Johan Tibell johan.tib...@gmail.com wrote:
 
  On Thu, Feb 2, 2012 at 4:46 PM, Conrad Parker con...@metadecks.org
  wrote:
 
  On 3 February 2012 08:30, Johan Tibell johan.tib...@gmail.com wrote:
   On Thu, Feb 2, 2012 at 4:19 PM, Conrad Parker con...@metadecks.org
   wrote:
  
   I've followed what Johan Tibbell did in the hashable package:
  
  
   If I had known how much confusion my childhood friends would unleash
   on the
   Internet when they, at age 7, gave me a nickname that's spelled
   slightly
   differently from my last name, I would have asked them to pick
   another one.
   ;)
 
  lol, sorry, I actually double-checked the number of l's before writing
  that but didn't consider the b's. For future reference I've produced a
  handy chart:
 
 
 
  Letter | Real-name count | Nickname count
  -------+-----------------+---------------
  b      | 1               | 2
  l      | 2               | 0
  -------+-----------------+---------------
  SUM    | 3               | 2
 
 
  Excellent. I will tattoo it on my forehead.

 There is, of course, a simpler (but not necessarily easier :p) solution:
 change your name to match your nickname!

 
 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Regards,
Austin

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Switching GHC Version

2012-02-06 Thread Austin Seipp
Personally I prefer just using 'virthualenv' these days, which
installs copies of GHC locally that you can then activate with your
shell, similar to 'virtualenv' in python. It's how I test packages on
multiple copies of GHC.

http://hackage.haskell.org/package/virthualenv

The nicest part is that it sandboxes cabal packages, but because it
works via shell scripts, the workflow for any version of GHC remains
the same: you can invoke ghci, cabal, etc all as you would expect, but
it only takes place in the virtual environment if you have activated
it.

On Mon, Feb 6, 2012 at 5:27 PM, HASHIMOTO, Yusaku nonow...@gmail.com wrote:
 Hi, I wrote a simple shell function for switching GHC version on the
 system. It works only under Mac OSX, and only switch GHCs installed
 via .pkg installers. It's useful to experiment newer features without
 worrying breaking environment.

 GHC_BASE_DIR=/Library/Frameworks/GHC.framework/Versions/

 ghcs () {
  VERSION=$1
  sudo $GHC_BASE_DIR/$VERSION/Tools/create-links . /Library/Frameworks /
 }

 Usage:
 ~/work/today 08:21 ghcs 7.4.0.20111219-x86_64
 Password:
 ~/work/today 08:21 ghc --version
 The Glorious Glasgow Haskell Compilation System, version 7.4.0.20111219
 ~/work/today 08:21 ghcs 7.4.1-x86_64
 ~/work/today 08:22 ghc --version
 The Glorious Glasgow Haskell Compilation System, version 7.4.1

 Now I'm curious about a better way to achieve this. This has the
 limitations described above. It even requires sudo because it modifies
 symbolic links under `/usr`.

 Suggestions?
 -nwn

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



-- 
Regards,
Austin

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Loading a texture in OpenGL

2012-02-06 Thread Austin Seipp
It's a precise GC of course (conservative collection would be madness
considering how much memory Haskell programs chew through.) That still
doesn't ensure your finalizer will run during the next GC even if all the
references are gone by then.

Sent from my iPhone^H^H^H^H^HPortable Turing machine

On Feb 6, 2012, at 10:09 PM, Clark Gaebel cgae...@csclub.uwaterloo.ca
wrote:

Is the Haskell garbage collector conservative, or precise?

If it's conservative, then this will only usually work. If it's precise, it
should always work.

On Mon, Feb 6, 2012 at 10:56 PM, Ben Lippmeier b...@ouroborus.net wrote:


 On 07/02/2012, at 2:50 PM, Clark Gaebel wrote:

  I would be running the GC manually at key points to make sure it gets
 cleaned up. Mainly, before any scene changes when basically everything gets
 thrown out anyways.


 From the docs:

 newForeignPtr :: FinalizerPtr a -> Ptr a -> IO (ForeignPtr a)
 Turns a plain memory reference into a foreign pointer, and associates a
 finalizer with the reference. The finalizer will be executed after the last
 reference to the foreign object is dropped. There is no guarantee of
 promptness, however the finalizer will be executed before the program exits.


 No guarantee of promptness. Even if the GC knows your pointer is
 unreachable, it might choose not to call the finaliser. I think people have
 been bitten by this before.

 Ben.



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Loading a texture in OpenGL

2012-02-06 Thread Austin Seipp
Just to clarify, this guarantee could possibly be made; GHC just doesn't do
it now. In the past GHC never guaranteed a finalizer would ever be run.

Regardless, I would be wary of trusting finalizers to clean up very scarce
resources. A malloc'd buffer is probably fine to have a finalizer for;
texture objects or file descriptors are a different matter. Predictability
matters in those cases.
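
For scarce resources like those, explicit bracketing gives predictable
cleanup where a finalizer can't (a minimal sketch; the Resource type and
the acquire/release actions are hypothetical stand-ins for e.g. a texture
object):

import Control.Exception (bracket)

newtype Resource = Resource Int

acquire :: IO Resource
acquire = putStrLn "acquire" >> return (Resource 1)

release :: Resource -> IO ()
release _ = putStrLn "release"

-- bracket runs 'release' as soon as the body finishes (or throws),
-- instead of waiting on the GC to maybe run a finalizer:
main :: IO ()
main = bracket acquire release (\_ -> putStrLn "use resource")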

Sent from my iPhone^H^H^H^H^HPortable Turing machine

On Feb 6, 2012, at 10:16 PM, Austin Seipp mad@gmail.com wrote:

It's a precise GC of course (conservative collection would be madness
considering how much memory Haskell programs chew through.) That still
doesn't ensure your finalizer will run during the next GC even if all the
references are gone by then.

Sent from my iPhone^H^H^H^H^HPortable Turing machine

On Feb 6, 2012, at 10:09 PM, Clark Gaebel cgae...@csclub.uwaterloo.ca
wrote:

Is the Haskell garbage collector conservative, or precise?

If it's conservative, then this will only usually work. If it's precise, it
should always work.

On Mon, Feb 6, 2012 at 10:56 PM, Ben Lippmeier b...@ouroborus.net wrote:


 On 07/02/2012, at 2:50 PM, Clark Gaebel wrote:

  I would be running the GC manually at key points to make sure it gets
 cleaned up. Mainly, before any scene changes when basically everything gets
 thrown out anyways.


 From the docs:

 newForeignPtr :: FinalizerPtr a -> Ptr a -> IO (ForeignPtr a)
 Turns a plain memory reference into a foreign pointer, and associates a
 finalizer with the reference. The finalizer will be executed after the last
 reference to the foreign object is dropped. There is no guarantee of
 promptness, however the finalizer will be executed before the program exits.


 No guarantee of promptness. Even if the GC knows your pointer is
 unreachable, it might choose not to call the finaliser. I think people have
 been bitten by this before.

 Ben.



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] strict version of Haskell - does it exist?

2012-01-29 Thread Austin Seipp
The strict-ghc-plugin (under my maintenance) is just a continuation of
one of the original demos Max had for plugin support in the compiler.
The idea is fairly simple: 'let' and 'case' are the forms for creating
lazy/strict bindings in Core. It just systematically replaces all
occurrences of 'let' in Core with 'case'. So 'let b = e1 in e2'
becomes 'case e1 of { b - e2 }', making 'b' strict. It also replaces
all applications of the form 'App e1 e2' (which is also lazy, as e2 is
evaluated on demand) with an equivalent binding like 'case e2 of { x
- App e1 x }'. Pretty simple, and results in a totally strict
program.
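
In source-level terms, the difference is roughly the following (a sketch;
the plugin itself rewrites Core, where 'case' always evaluates its
scrutinee, so 'seq' stands in for that here):

lazyBind :: Int -> Int
lazyBind x = let b = x * x in b + 1   -- 'b' is allocated as a thunk

strictBind :: Int -> Int
strictBind x = b `seq` b + 1          -- 'b' is evaluated before the body
  where b = x * x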

The idea is just a proof of concept; in particular, I (and likely Max
although I cannot speak for him) am not using it as a position to say
that sometimes you want everything strict. You don't; at some point,
you're not even using Haskell anymore I suppose (remember: non-strict
semantics.) I can't think of any instance in which I would need or
want to use this plugin, honestly. But maybe someone else would - I
did refactor it to where you can strictify individual functions, as
opposed to full-blown modules, via annotations. So you could
selectively strictify things if you found it beneficial on certain
identifiers. But then there's the question of what effect that has on
the rest of GHC's optimizers, which I can't answer: the strictifier
modifies the pipeline to be the *first* pass, and the remaining ones
run afterwards. Compilers are built on heuristics and built for
especially with code that may deviate from 'the norm.' Once you're
fighting the optimizer, it can become a very difficult battle to win.
Careful analysis and selective optimization is probably going to take
you farther than hitting it with a giant hammer.

Having lazy and strict data structures and knowing when/where to use
them is crucial for good performance, and both have their merits (same
with every other thing under the sun, like by-ref/by-val semantics in
`data` types, which you can control with UNPACK etc.) I think we could
most certainly use better tools for analyzing low-level performance
details and the tradeoff between strictness/laziness (especially
in large codebases), but I don't think systematically making
everything strict is going to be the right idea in a vast majority of
situations.
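
As a tiny illustration of that tradeoff at the data-type level (a sketch):

-- Lazy pair: each field is stored as a (possibly unevaluated) thunk.
data LazyPair = LazyPair Int Int

-- Strict, unpacked pair: fields are forced at construction and stored
-- as raw machine integers.
data StrictPair = StrictPair {-# UNPACK #-} !Int {-# UNPACK #-} !Int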

On Sun, Jan 29, 2012 at 4:12 PM, Chris Wong
chrisyco+haskell-c...@gmail.com wrote:
 On Mon, Jan 30, 2012 at 10:13 AM, Marc Weber marco-owe...@gmx.de wrote:
 A lot of work has been gone into GHC and its libraries.
 However for some use cases C is still preferred, for obvious speed
 reasons - because optimizing an Haskell application can take much time.

 As much as any other high-level language, I guess. Don't compare
 apples to oranges and complain oranges aren't crunchy enough ;)

 Is there any document describing why there is no ghc --strict flag
 making all code strict by default?

 Yes -- it's called the Haskell Report :)

 GHC does a lot of optimization already. If making something strict
 won't change how it behaves, GHC will make it strict, using a process
 called strictness analysis.

 The reason why there is no --strict flag is that strictness isn't just
 something you turn on and off willy-nilly: it changes how the whole
 language works. Structures such as infinite lists and Don Stewart's
 lazy bytestrings *depend* on laziness for their performance.

 Wouldn't this make it easier to apply Haskell to some additional fields
 such as video processing etc?

 Wouldn't such a '--strict' flag turn Haskell/GHC into a better C/gcc
 compiler?

 See above.

 Projects like this: https://github.com/thoughtpolice/strict-ghc-plugin
 show that the idea is not new.

 Not sure what that does, but I'll have a look at it.

 E.g. some time ago I had to do some logfile analysis. I ended up doing it in
 PHP because optimizing the Haskell code took too much time.

 That's probably because you're using linked lists for strings. For
 intensive text processing, it's better to use the text package instead
 [1]

 Chris

 [1] http://hackage.haskell.org/package/text

 Marc Weber

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



-- 
Regards,
Austin

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] black Wikipedia

2012-01-18 Thread Austin Seipp
Aside from being a horrible oversimplification of the matter (because
it's *never* that simple - Wikipedia is not in this movement for
commercial interest or the side of SV/HW, but because it opposes the
censoring of the internet; neither are people like Dan Kaminsky, who
are also opposing from the point of the large-scale security
ramifications due to the subversion of DNS' universal nature,) and the
grounds at stake extending far beyond either SV or Hollywood [1] - I
don't really think boiling it down to 2 contenders is terribly
important: It passes, and we lose. Or we'll end up having to fight an
even tougher battle.

This bill cannot be fixed; it must be killed. - The EFF

On Wed, Jan 18, 2012 at 5:42 PM, Hans Aberg haber...@telia.com wrote:
 Actually, it is a battle between the Hollywood and Silicon Valley industries.

 Hans


[1] It will cause substantial damage to a large number of existing jobs
in plenty of places as a result of massive amounts of litigation; it
will stunt investment in anything which could potentially suffer from
such litigation; it sets horrific precedents, goes beyond just
'piracy' (with the Monster case, as John pointed out), and could result
in follow-up laws in similar countries.

-- 
Regards,
Austin

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC 7.4 and binutils-gold

2011-12-27 Thread austin seipp
I encountered this problem approximately a month ago building HEAD and
reported it to Ian:

http://www.haskell.org/pipermail/cvs-ghc/2011-November/068562.html

His fix worked - but I was doing a build from source. The problem now
is that this is a -build-time- option, not a runtime option, but
you're using pre-built binaries: ones that were built on Linux systems
using GNU ld, not gold. So removing gold is basically your only hope
for the 7.4.1 RC.

Alternatively, you could probably tell GHC which ld to use by aliasing
GHC to something like 'ghc -pgml ld.ld' - Oneiric installs gold under
'ld.gold' and moves GNU ld to 'ld.ld' so you still have both
installed. It just updates the ld symlink to point to the gold binary
by default.

So if 7.4.1 final wants to support gold, this logic needs to be moved
to runtime somehow.

This should probably be discussed on cvs-ghc or glasgow-haskell-users
with Ian et al.

On Tue, Dec 27, 2011 at 4:00 PM, Michael Snoyman mich...@snoyman.com wrote:
 Hi all,

 One other little GHC 7.4 note. When I first tried building code with
 it, I got the following error message:

 /usr/bin/ld: --hash-size=31: unknown option

 Once I uninstalled binutils-gold, everything went just fine. Has
 anyone else experienced this? I'm running Ubuntu Oneiric.

 Michael

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



-- 
Regards,
Austin

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] could not create compact unwind

2011-12-01 Thread austin seipp
The 'could not create compact unwind' message is a known (and still
outstanding) linking issue on OS X. It should be harmless - it refers
to the fact that OS X 10.6 uses compact unwind info for exceptions
instead of DWARF unwind information, when possible. The exact cause
isn't (yet) known. Generally this just means that the exception
information falls back to being DWARF-based.

If you want to blow away all your local packages, say:

$ rm -rf ~/.ghc

Note that will blow away every local package for every GHC version. If
you have multiple versions installed, you should look in that
directory first and then blow away the appropriate subdirectory (named
something like 'arch-darwin-ghc-ver'.)

On Thu, Dec 1, 2011 at 7:07 PM, Richard O'Keefe o...@cs.otago.ac.nz wrote:
 I just did
        cabal install cabal-install
 on a Mac running Mac OS 10.6.8 and got the eventual response
 [44 of 44] Compiling Main             ( Main.hs, 
 dist/build/cabal/cabal-tmp/Main.o )
 Linking dist/build/cabal/cabal ...
 ld: warning: could not create compact unwind for .LFB3: non-standard register 
 5 being saved in prolog
 Installing executable(s) in /home/cshome/o/ok/.cabal/bin

 I also had this problem today:
 m% cabal install quickcheck
 Resolving dependencies...
 cabal: dependencies conflict: ghc-6.12.3 requires pretty ==1.0.1.2 however
 pretty-1.0.1.2 was excluded because ghc-6.12.3 requires pretty ==1.0.1.1

 What's the procedure for wiping everything out and starting again?


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



-- 
Regards,
Austin

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] SMP parallelism increasing GC time dramatically

2011-10-07 Thread austin seipp
It's GHC, and partly the OS scheduler in some sense. Oversaturating,
i.e. using an -N option > your number of logical cores (including
hyperthreads) will slow down your program typically. This isn't
uncommon, and is well known - GHC's lightweight threads have an M:N
threading model, but for good performance, you typically want the OS
threads and cores to have a 1:1 correspondence. Creating a massive
amount of OS threads will cause much context switching between them,
which is going to slow things down. When GHC needs to synchronize OS
threads in order to do a GC, there are going to be a lot of threads
being context swapped in/out in order to achieve this (because the GC
must halt all mutator threads to do its thing.)

Furthermore, oversaturation isn't the only problem - having the same
number of threads as cores will mean *some* thread is going to get
de-scheduled. Many times this means that the thread in which GHC does
GC will get descheduled by the OS. A corollary of this descheduling
phenomenon is that even using the same # of OS threads as you have
cores could result in -worse- performance than N-1 OS threads. This
was mitigated a bit in 7.2.1 I think. Linux was affected much more
drastically than others (OS X and Solaris have vastly different
schedulers, and as a result the performance wouldn't just tank - it
would actually get better IIRC, it just wouldn't scale as well at that
point.) At the very least, on Linux, using an -N option equivalent to
your number of logical cores should not drastically slow things down
anymore - but it won't make them faster either. This is the dreaded
last core slowdown bug that's been known about for a while, and as a
result, you typically only see parallel speedup on Linux up to N-1
threads, where N = the number of cores you have.

As a result, with dual-core only (and no hyperthreading,) on Linux,
you're very unlikely to be able to get parallel speedup in any case.
There's work to fix this in the garbage collector among other things,
but it's not clear if it's going into GHC just yet.

On Fri, Oct 7, 2011 at 2:31 PM, Oliver Batchelor saul...@gmail.com wrote:
 I'm not sure if this is at all related, but if I run a small Repa program
 with more threads than I have cores/CPUs then it gets drastically slower, I
 have a dual core laptop - and -N2 makes my small program take approximately
 0.6 of the time. Increasing to -N4 and we're running about 2x the time, -N8
 and it's taking 20x or more. I guess this is probably more down to the
 design of Repa rather than GHC itself?
 Oliver

 On Sat, Oct 8, 2011 at 1:21 AM, Tom Thorne thomas.thorn...@gmail.com
 wrote:

 I have made a dummy program that seems to exhibit the same GC
 slowdown behavior, minus the segmentation faults. Compiling with -threaded
 and running with -N12 I get very bad performance (3x slower than -N1),
 running with -N12 -qg it runs approximately 3 times faster than -N1. I don't
 know if I should submit this as a bug or not? I'd certainly like to know why
 this is happening!
 import Numeric.LinearAlgebra
 import Numeric.GSL.Special.Gamma
 import Control.Parallel.Strategies
 import Control.Monad
 import Data.IORef
 import Data.Random
 import Data.Random.Source.PureMT
 import Debug.Trace

 subsets s n = (subsets_stream s) !! n

 subsets_stream [] = [[]] : repeat []
 subsets_stream (x:xs) =
   let r = subsets_stream xs
       s = map (map (x:)) r
   in [[]] : zipWith (++) s (tail r)

 testfun :: Matrix Double -> Int -> [Int] -> Double
 testfun x k cs = lngamma (det (c+u))
   where
     (m,c) = meanCov xx
     m' = fromRows [m]
     u = (trans m') <> m'
     xx = fromColumns ( [(toColumns x)!!i] ++ [(toColumns x)!!j] ++ [(toColumns x)!!k] )
     i = cs !! 0
     j = cs !! 1

 test :: Matrix Double -> Int -> Double
 test x i = sum p
   where
     p = parMap (rdeepseq) (testfun x (i+1)) (subsets [0..i] 2)

 ranMatrix :: Int -> RVar (Matrix Double)
 ranMatrix n = do
   xs <- mapM (\_ -> mapM (\_ -> uniform 0 1.0) [1..n]) [1..n]
   return (fromLists xs)

 loop :: Int -> Double -> Int -> RVar Double
 loop n s i = traceShow i $ do
   x <- ranMatrix n
   let r = sum $ parMap (rdeepseq) (test x) [2..(n-2)]
   return (r+s)

 main = do
   let n = 100
   let iter = 5
   rng <- newPureMT
   rngr <- newIORef rng
   p <- runRVar (foldM (loop n) 0.0 [1..iter]) rngr
   print p
 I have also found that the segmentation faults in my code disappear if I
 switch from Control.Parallel to Control.Monad.Par, which is quite strange. I
 get slightly better performance with Control.Parallel when it completes
 without a seg. fault, and the frequency with which it does so seems to
 depend on the number of sparks that are being created.
 On Thu, Oct 6, 2011 at 1:56 PM, Tom Thorne thomas.thorn...@gmail.com
 wrote:

 I'm trying to narrow it down so that I can submit a meaningful bug
 report, and it seems to be something to do with switching off parallel GC
 using -qg, whilst also passing -Nx.
 Are there any known issues with this that people are aware of? At the
 moment I am using the latest haskell platform release on 

Re: [Haskell-cafe] Turn GC off

2011-09-29 Thread austin seipp
On Thu, Sep 29, 2011 at 3:14 PM, David Barbour dmbarb...@gmail.com wrote:

 minor collections of this nursery do not result in whole system pauses.

Yes, they do. GHC has a parallel garbage collector (so collection
pauses the mutator threads, and collects garbage -in parallel- on
multiple CPUs) but it in no way has a concurrent one. Every OS thread
has its own young-gen heap space, and the old-generation heap space is
shared amongst every CPU. But any young-gen GC will cause all mutator
threads to pause no matter what the state of the others is.

That said, Simon^2 has done research on this problem recently. They
recently published a paper about 'local' garbage collection for
individual OS threads, where every thread has its own, private
nursery, that may be collected independently of all other CPUs, which
promotes objects into the global heap when necessary for access from
other threads. The design is reminiscent of older work on the same
topic (thread-local heaps,) but adds a bunch of tasty work they did.

You can find this branch of GHC with 'local heaps' here, in the
local-gc branch of the git repository:

https://github.com/ghc/ghc/tree/local-gc

This new local collector branch is not, I repeat, not part of GHC and
hasn't been merged. It's not certain if it ever will be, I think. The
paper conclusion addresses the fact the scalability improvements as a
result of this new collector are nowhere near as dramatic as they had
hoped, and it's not certain the improvements they did get are worth
the substantial complexity increase. It doesn't address the old-gen GC
- any old-gen GCs still pause all mutator threads before continuing.
They do note however that this local allocation strategy could be
combined with a real concurrent/incremental GC for the old-generation,
which would help control pause times more over the whole lifetime of
an application.

You can find all the juicy details in their paper Multicore garbage
collection with thread local heaps, located near the top of Simon's
webpage here:

http://research.microsoft.com/en-us/people/simonpj/

-- 
Regards,
Austin

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] fyi GHC 7.2.1 release on the benchmarks game

2011-08-12 Thread austin seipp
Hello Isaac,

On Fri, Aug 12, 2011 at 10:18 AM, Isaac Gouy igo...@yahoo.com wrote:
 1) Some of the GHC programs contributed to the benchmarks game have problems 
 with recent GHC releases

 - meteor-contest #5 - Ambiguous occurrence `permutations'

 http://shootout.alioth.debian.org/u64q/program.php?test=meteorlang=ghcid=5#log

This can be fixed by changing the line:

import Data.List

to:

import Data.List hiding (permutations)

Also, the program needs to be compiled with -XBangPatterns

 - regex-dna #2 - Precedence parsing error

 http://shootout.alioth.debian.org/u64q/program.php?test=regexdnalang=ghcid=2#log

This can be fixed by changing this line (line 74):

mapM_ putStrLn $ results `using` parList rdeepseq

to:

mapM_ putStrLn (results `using` parList rdeepseq)

There are also some deprecation warnings related to `rwhnf` among
others being renamed, but those are harmless (may want to fix them
anyway though.)

 - reverse-complement #2 - parse error on input `-'

 http://shootout.alioth.debian.org/u64q/program.php?test=revcomplang=ghcid=2#log


You can fix this by adding this pragma:

{-# LANGUAGE MagicHash, UnboxedTuples #-}

or compiling with -XMagicHash and -XUnboxedTuples

Also, this program imports 'GHC.IOBase', which triggers a warning; that
import should be changed to 'GHC.IO' instead.

 - reverse-complement #3 - Could not find module `Monad'

 http://shootout.alioth.debian.org/u64q/program.php?test=revcomplang=ghcid=3#log

You can fix this one by adding this pragma to the top of revcomp.hs

{-# LANGUAGE BangPatterns #-}

Alternatively you can compile with -XBangPatterns. Also, change the line

import Monad

to:

import Control.Monad

 2) I noticed `-fvia-C` has now gone away and there were half-a-dozen programs 
 that had been written to use `-fvia-C` so how might that effect performance 
 of those programs?

I can't foresee the potential performance ramifications, but frankly
-fvia-C has been deprecated/not-advised-for-use for quite a while now,
and I wonder how many of these programs just have not been
updated/tested with the native code generator since they were written.

In any case it's not an option anymore, so your only choice is to nuke
it from orbit (orbit being the Makefiles.)

-- 
Regards,
Austin

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [Haskell] ANNOUNCE: GHC version 7.2.1

2011-08-11 Thread austin seipp
VECTORISE is for Data Parallel Haskell. It's only relevant to GHC's
internal vectorisation pass - I don't actually think there is any use
case for it in user code at the moment, it's only used by the DPH
libraries/special prelude, etc.

On Thu, Aug 11, 2011 at 12:16 PM, Henning Thielemann
schlepp...@henning-thielemann.de wrote:
 On 09.08.2011 22:01, Ian Lynagh wrote:

 The GHC Team is pleased to announce a new major release of GHC, 7.2.1.

 The 7.2 branch is intended to be more of a technology preview than
 normal GHC stable branches; in particular, it supports a significantly
 improved version of DPH, as well as new features such as compiler
 plugins and safe Haskell. The design of these new features may evolve
 as we get more experience with them. See the release notes for more
 details of what's new and what's changed.

 These sound like a lot of exciting news. The release notes also mention a
 VECTORISE pragma, that is not mentioned in
   http://www.haskell.org/ghc/docs/7.2.1/html/users_guide/pragmas.html
 and not in the index. Is it about the planned GHC support for SSE/AltiVec
 vector units or is it about DPH?

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Regards,
Austin

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A language that runs on the JVM or .NET has the advantage of Oracle Microsoft making those layers more parallelizable.

2011-07-30 Thread austin seipp
No, there aren't. At least none that I know of. Don Stewart did work
years ago on a JVM backend for GHC for his Bachelors thesis. You may
be able to find it online (I don't know the name, sorry.) This was
never integrated mainline however.

These questions have been asked many many times, but the real answer
is it's a whole lot of work. Not impossible, but a whole lot of
work. And it's not clear what the ultimate tradeoffs are. See this for
some more info:

http://www.haskell.org/haskellwiki/GHC:FAQ#.NET.2FJVM_Availability

In particular I'm reminded of the story of Don Syme, F# author, who
initially did work I believe for a Haskell.NET compiler, but
inevitably abandoned it and went to create F#. See some of the history
behind F#, SML.NET and Haskell.NET here:

http://www.infoq.com/interviews/F-Sharp-Don-Syme#

In particular you can just look at Don's answers to the related questions.

Hope it helps.

On Sat, Jul 30, 2011 at 5:07 PM, KC kc1...@gmail.com wrote:
 Are there plans a foot (or under fingers) to make a version of Haskell
 that runs on the JVM?

 --
 --
 Regards,
 KC

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Regards,
Austin

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Software patents covered in GHC?

2011-06-20 Thread austin seipp
*sigh* CC'ing to the rest of haskell-cafe for completeness. I need to
change 'reply all' to a default in my email I guess.

On Mon, Jun 20, 2011 at 12:19 PM, austin seipp a...@hacks.yi.org wrote:
 Hello,

 Realistically, there probably is. Considering everything down to
 linked lists are patented by the USPO, I'd be willing to bet (at least
 a dollar or two) that there are many patents out there that apply to
 the code used in GHC. I don't know if you could say anything different
 about *any* nontrivial software, to be quite honest.

 IANAL however, so I won't attempt to claim in what countries these
 patents apply, nor could I cite you patent numbers (legalese being
 terse and software patents so seemingly open to interpretation
 sometimes, it would be hard to narrow down anyway I think.)

 On Mon, Jun 20, 2011 at 12:10 PM, Shakthi Kannan shakthim...@gmail.com 
 wrote:
 Hi,

 I would like to know if there are any software algorithms used in the
 Glasgow Haskell Compiler that have been patented? If yes, in which
 countries do they apply?

 Just curious to know.

 SK

 --
 Shakthi Kannan
 http://www.shakthimaan.com

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




 --
 Regards,
 Austin




-- 
Regards,
Austin

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Building Haskell Platform natively for 64bit Windows

2011-06-10 Thread austin seipp
It's worth mentioning 'foreign prim' is still a bit different from
inline code - while you can certainly write Cmm and have GHC link it
into your program, it is not really inline. GHC has two different
kinds of primitive operations: inline primops, and out of line
primops. foreign primops are currently always out of line. Inline
primops require actual GHC hacking, as they are short code sequences
emitted directly by the code generator, that don't block or require
allocation. As far as I know, GHC will naturally still generate a call
to these out of line primops when you use them - in the form of a
'jmp' instruction, which is all that exists in Cmm (if someone out
there knows more, like Johan perhaps who recently did memcpy inline
primops, please correct me.) GHC will not necessarily have to spill
STG registers (as it would in the case of a true foreign call,) but
the instructions certainly won't be inline.

Truthfully the use case for foreign prim is frighteningly small I
think. It's been around for a while, but currently its only client is
integer-gmp, which was part of the motivation for the implementation
anyway (to split GMP away from the GHC RTS, so it would be possible to
build full BSD3 Haskell executables for commercial purposes, etc.)
Outside of perhaps, an integer-openssl binding that instead used
OpenSSL bignums, I can't think of many use cases personally. And a
true 64bit GHC port to windows is of course, going to help the
situation much more than any amount of hacking around the issue with
32bits and 8 registers will, while trying to retain some sense of
efficiency.

On Fri, Jun 10, 2011 at 12:16 PM, Andrew Coppin
andrewcop...@btinternet.com wrote:
 On 10/06/2011 01:44 AM, Jason Dagit wrote:

 On Thu, Jun 9, 2011 at 2:06 PM, Andrew Coppin
 andrewcop...@btinternet.com  wrote:

 Too bad GHC doesn't support inline assembly yet... (Or does it? I know it
 supports inline Core now.)

 Really?  I found this in the manual so I think either the docs need to
 be updated or you are mistaken:

 Apparently I am mistaken. It allows inline C-- (as of 6.12.1):

 http://www.haskell.org/ghc/docs/6.12.1/html/users_guide/ffi.html#ffi-prim

 Of course, C-- is not nearly the same as Core. Sorry about that...

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Regards,
Austin

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Building Haskell Platform natively for 64bit Windows

2011-06-09 Thread austin seipp
On Thu, Jun 9, 2011 at 1:53 PM, Andrew Coppin
andrewcop...@btinternet.com wrote:
 I'm still left wondering if using 32-bit instructions to manipulate 64-bit
 values is actually that much slower. Back in the old days of non-pipelined,
 uniscalar CPUs, it would certainly have been the case. Today's processors
 are far more complex than that. Things like cache misses tend to have a way,
 way bigger performance hit than anything arithmetic-related.


The problem is you're probably going to need to spill things
(somewhere) in order to operate on the upper 32 bits of any given
register in the nontrivial case. x86 is already pathetic with its 8 GP
registers. amd64 brings it up to 16. GHC already has to put a
significant amount of stuff on the stack on x86 because of that, IIRC
(I think only 3 STG registers are actually pinned, and the remaining 5
are for use by the register allocator.)

 I'm wondering if you could write the operations you want in small C stub
 functions, and FFI to them and do it that way. I don't really know enough
 about this sort of thing to know whether that'll work...

It's highly unlikely that the cost of a foreign call (even an unsafe
one) in terms of just CPU cycles will be cheaper than executing a few
primitive arithmetic ops on the CPU. GHC will at least need to spill
to the stack before and reload after every call. If the arithmetic ops
were to somehow pipeline, that would make matters even worse,
because the cycle count for your core operation would go *down* as a
result of the pipelining, making the foreign call that much more
expensive.

Furthermore, GHC will invoke the C compiler on an FFI'd .c file to
produce an object file. It's unlikely the linker will be able to
inline copies of the function at call sites at link time, so if you
use C, the function will likely exist as object-code far away from the
call sites, and that will probably hurt your L1i cache locality a bit.
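
For what it's worth, the kind of binding being discussed would look
something like this ('add64' being a hypothetical C stub):

{-# LANGUAGE ForeignFunctionInterface #-}
import Data.Word (Word64)

-- Hypothetical C stub: uint64_t add64(uint64_t a, uint64_t b);
-- Even as an 'unsafe' import this is still a real call at the machine
-- level, with the register traffic described above.
foreign import ccall unsafe "add64"
  add64 :: Word64 -> Word64 -> Word64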

-- 
Regards,
Austin

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Enabling GADTs breaks Rank2Types code compilation - Why?

2011-06-01 Thread austin seipp
On Tue, May 31, 2011 at 11:13 PM,
dm-list-haskell-c...@scs.stanford.edu wrote:
 It definitely felt like I was running up against something like the
 monomorphism restriction, but my bindings were function and not
 pattern bindings, so I couldn't understand what was going on.  I had
 even gone and re-read the GHC documentation
 (http://www.haskell.org/ghc/docs/7.0-latest/html/users_guide/data-type-extensions.html#gadt),
 which says that -XGADTs enables -XRelaxedPolyRec, but makes no mention
 of -XMonoLocalBinds.

 It might save users some frustration if the GHC manual and/or the
 error message mentioned something about let bindings being monomorphic
 by default.

I'd agree it should be documented behavior and put somewhere in the
users guide. It's bitten a few people before, but Simon's blog post is
the only reference I've really seen on the matter I think.

 However, is it reasonable to conclude that if
 I'm going to use GADTs anyway, then additionally enabling
 ScopedTypeVariables doesn't really make my code any less future-proof?

I'd say so. For nontrivial extensions like this, I'd wager you're
likely to rely on GHC inadvertently in some form anyway - one result of
all this new typechecking stuff, for example, is that GHC can correctly
infer types for more programs involving these complex extensions. You
may end up relying on such inference in the future, as we've already
seen happen here, which could break things down the road.
ScopedTypeVariables seems much more simple and future proof in this
regard, IMO.

-- 
Regards,
Austin

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Enabling GADTs breaks Rank2Types code compilation - Why?

2011-05-31 Thread austin seipp
Hi David,

It seems to be a result of the new typechecker and more specifically
the new behavior for GADTs in GHC 7.

The short story is thus: when you turn on GADTs, it also now turns on
another extension implicitly (MonoLocalBinds) which restricts let
generalization. More specifically, it causes GHC to not generalize the
inferred type of polymorphic let bindings (and where clauses too.)
This results in the error you see: GHC telling you that the type of
the argument to mkUnit (in this case mkConst_r) is not polymorphic
enough.

The correct fix is to simply give an explicit type to any polymorphic
let binding. This will let GHC happily compile your program with GADTs
with otherwise no changes needed for correctness. The reasoning for
this behavior is because SPJ et al found it a lot more difficult to
build a robust type inference engine, when faced with non-monomorphic
local bindings. Luckily the overall impact of such a change was
measured to be relatively small, but indeed, it will break some
existing programs. The changes aren't huge though - this program
successfully typechecks with GHC 7.0.3:

---
{-# LANGUAGE Rank2Types #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE GADTs #-}

module Main where

mkUnit :: (forall t. t -> m t) -> m ()
mkUnit f = f ()

data Const b a = Const { unConst :: b }

sillyId :: forall r. r -> r
sillyId r = unConst (mkUnit mkConst_r) -- This line can't compile with GADTs
  where mkConst_r :: t -> Const r t
        mkConst_r t = mkConst r t
        mkConst :: b -> t -> Const b t
        mkConst b _ = Const b

main :: IO ()
main = return ()
---

There is also another 'fix' that may work, which Simon talks about
himself: you can enable GADTs, but turn off enforced monomorphic local
bindings by giving the extension 'NoMonoLocalBinds'. This will merely
tell GHC to try anyway, but it'd be better to just update your
program, as you can't give as many guarantees about the type
inferencer when you give it such code.

You can find a little more info about the change here:

http://hackage.haskell.org/trac/ghc/blog/LetGeneralisationInGhc7

And in Simon's paper (Let should not be generalized.)

Hope it helps,

On Tue, May 31, 2011 at 6:42 PM,  dm-list-haskell-c...@scs.stanford.edu wrote:
 I'm using GHC 7.0.2 and running into a compiler error that I cannot
 understand.  Can anyone shed light on the issue for me?  The code does
 not make use of GADTs and compiles just fine without them.  But when I
 add a {-# LANGUAGE GADTs #-} pragma, it fails to compile.

 Here is the code:

 

 {-# LANGUAGE Rank2Types #-}
 {-# LANGUAGE GADTs #-}

 mkUnit :: (forall t. t -> m t) -> m ()
 mkUnit f = f ()

 data Const b a = Const { unConst :: b }

 sillyId :: r -> r
 sillyId r = unConst (mkUnit mkConst_r) -- This line can't compile with GADTs
    where mkConst_r t = mkConst r t
          mkConst :: b -> t -> Const b t
          mkConst b _ = Const b

 

 And the error I get is:

    Couldn't match type `t0' with `t'
      because type variable `t' would escape its scope
    This (rigid, skolem) type variable is bound by
       a type expected by the context: t -> Const r t
    The following variables have types that mention t0
       mkConst_r :: t0 -> Const r t0
        (bound at /u/dm/hack/hs/gadt.hs:11:11)
    In the first argument of `mkUnit', namely `mkConst_r'
    In the first argument of `unConst', namely `(mkUnit mkConst_r)'
    In the expression: unConst (mkUnit mkConst_r)

 I've found several workarounds for the issue, but I'd like to
 understand what the error message means and why it is caused by GADTs.

 Thanks in advance for any help.

 David

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Regards,
Austin

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] The Lisp Curse

2011-05-19 Thread austin seipp
I too am not all that concerned about the library proliferation, and I
think such work can definitely help find the best design for certain
abstractions. There are no less than 3 iteratee libraries - 4
including liboleg's original IterateeM formulation - and a number of
FRP implementations as well, but part of that is because we haven't
found the best designs for such things yet. Doing that will take time
and effort on the part of the community, whether there's 3 competing
libraries or 30.

In the case of Unicode, I think the community has clearly spoken in
its adoption of the `text` package into the Platform, as well as it
being one of the most depended on hackage packages. Certainly the
platform doesn't recommend a 'blessed' package for every need. JSON is
a simple example, and there are many various JSON implementations on
Hackage. Outside of the platform, I think there should definitely be a
better way to communicate to users what library might be best. I don't
know how that would look - Hackage 2 will have commenting and reverse
dependencies I think, but when that future hackage will rise is not
clear.

One thing is for certain: in the Haskell ecosystem, if your code does
not exist on Hackage, and *especially* if it does not use Cabal,
realistically speaking, it is dead as far as the larger community is
concerned (the exception being GHC itself, perhaps. Even gtk2hs - which
was one of the biggest outliers for a long time - is now cabalized.) I
think I would ultimately rather encourage people to submit code - even
if it is small or perhaps duplicate. At least give it and its ideas
the chance to survive, be used and talked about, and die if not -
rather than resign it to such a fate prematurely.

On Thu, May 19, 2011 at 5:00 PM, Daniel Peebles pumpkin...@gmail.com wrote:
 The way I understand it, you're saying not that we shouldn't be doing it
 this way (since it isn't centrally managed, it's the only possible way), but
 that we shouldn't be bragging (for lack of a better word) that we have
 lots of libraries that do a specific thing. Or if not that, then at least
 that it isn't a clear win.
 I agree that from an end-user's perspective it isn't always a clear win, but
 I do think that having a bunch of libraries (even ones that do the same
 thing) an indicator of a healthy, active, and enthusiastic community. Sure,
 it's decentralized and people will often duplicate effort, but different
 variations on the same idea can also help explore the design space and will
 reveal to everyone interested what works and what doesn't.
 But yeah, if you want to do X and you encounter 15 libraries that do X and
 can't find a clear consensus on what's best, I can understand why that might
 be frustrating. I don't think there's really a clear solution to that
 though, other than gently encouraging collaboration and scoping out of
 existing work before starting new work. But people generally hate working
 with other people's code, so I doubt that'll have much of an effect :)
 Dan

 On Thu, May 19, 2011 at 4:56 PM, Andrew Coppin andrewcop...@btinternet.com
 wrote:

 On 19/05/2011 09:34 PM, vagif.ve...@gmail.com wrote:

 Andrew, you are being non constructive.

 It seems I'm being misunderstood.

 Some people seem to hold the opinion that more libraries = better. I'm
 trying to voice the opinion that there is such a thing as too many
 libraries. The article I linked to explains part of why this is the case, in
 a better way than I've been able to phrase it myself.

 I'm not trying to say OMG, the way it is now completely sucks! I'm not
 trying to say you must do X right now! I'm just trying to put forward an
 opinion. The opinion that having too many libraries can be a problem, which
 some people don't seem to agree with. (Obviously it isn't *always* bad, I'm
 just saying that sometimes it can be.)

 That's all I was trying to say.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe





-- 
Regards,
Austin

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] >>= definition for list monad in ghc

2011-05-16 Thread austin seipp
Looking at the Core for an utterly trivial example (test x = concatMap
k x where k i = [i..i*2]), the foldr definition seems to cause a
few extra optimization rules to fire, but the result seems pretty
big. The definition using concatMap results in core like this:

main_go2 =
  \ (ds_aqV :: [Int]) ->
    case ds_aqV of _ {
      [] -> [] @ Int;
      : y_ar0 ys_ar1 ->
        case y_ar0 of _ { I# x_arj ->
        let {
          y1_ase [Dmd=Just L] :: Int#

          y1_ase = *# x_arj 2 } in
        let {
          n_sRv :: [Int]

          n_sRv = main_go2 ys_ar1 } in
        case ># x_arj y1_ase of _ {
          False ->
            letrec {
              go_sRx [Occ=LoopBreaker] :: Int# -> [Int]

              go_sRx =
                \ (x1_asi :: Int#) ->
                  :
                    @ Int
                    (I# x1_asi)
                    (case ==# x1_asi y1_ase of _ {
                       False -> go_sRx (+# x1_asi 1);
                       True -> n_sRv
                     }); } in
            go_sRx x_arj;
          True -> n_sRv
        }
        }
    }

But with the foldr definition, we get:

Main.main_go [Occ=LoopBreaker] :: GHC.Prim.Int# -> [GHC.Types.Int]
[GblId, Arity=1, Caf=NoCafRefs, Str=DmdType L]
Main.main_go =
  \ (x_asu :: GHC.Prim.Int#) ->
    GHC.Base.++
      @ GHC.Types.Int
      (GHC.Enum.eftInt x_asu (GHC.Prim.*# x_asu 2))
      (case x_asu of wild_B1 {
         __DEFAULT -> Main.main_go (GHC.Prim.+# wild_B1 1);
         10 -> GHC.Types.[] @ GHC.Types.Int
       })
end Rec }

There seems to be a binding for my 'test' example, but it seems to be
abandoned in the final core for some reason (there are no references
to it, but it's not eliminated as an unused binding?) Main simply
calls the simplified/inlined version above.

As you can see, with the foldr definition, GHC is able to fuse the
iteration of the input list with the generation of the result - note
the 'GHC.Base.++' call with the first argument being a list from
[x..x*2], and the second list to append being a recursive call. So it
simplifies the code quite a bit - it doesn't really end up traversing
a list, but instead only generating one in this case, as it stores the
current iteration in an Int# and has the upper limit inlined into the
case statement.

I don't know why GHC doesn't have this rule by default, though. We can
at least rig it with a RULES pragma, however:

$ cat concatmap.hs
module Main where

{-# RULES
"concatMap/foldr" forall x k. concatMap k x = foldr ((++) . k) [] x
  #-}

test :: [Int] -> [Int]
--test x = foldr ((++) . k) [] x
test x = concatMap k x
  where k i = [i..i*2]

main :: IO ()
main = do
  print $ test [1..10]

$ ghc -fforce-recomp -O2 -ddump-simpl concatmap.hs

[1 of 1] Compiling Main ( concatmap.hs, concatmap.o )

 Tidy Core 
Rec {
Main.main_go [Occ=LoopBreaker] :: GHC.Prim.Int# -> [GHC.Types.Int]
[GblId, Arity=1, Caf=NoCafRefs, Str=DmdType L]
Main.main_go =
  \ (x_ato :: GHC.Prim.Int#) ->
    GHC.Base.++
      @ GHC.Types.Int
      (GHC.Enum.eftInt x_ato (GHC.Prim.*# x_ato 2))
      (case x_ato of wild_B1 {
         __DEFAULT -> Main.main_go (GHC.Prim.+# wild_B1 1);
         10 -> GHC.Types.[] @ GHC.Types.Int
       })
end Rec }
...
...
...
-- Local rules for imported ids 
concatMap/foldr [ALWAYS]
forall {@ b_aq7 @ a_aq8 x_abH :: [a_aq8] k_abI :: a_aq8 -> [b_aq7]}
  GHC.List.concatMap @ a_aq8 @ b_aq7 k_abI x_abH
  = GHC.Base.foldr
  @ a_aq8
  @ [b_aq7]
  (GHC.Base..
 @ [b_aq7]
 @ ([b_aq7] - [b_aq7])
 @ a_aq8
 (GHC.Base.++ @ b_aq7)
 k_abI)
  (GHC.Types.[] @ b_aq7)
  x_abH


Linking concatmap ...
$

Maybe it should be added to the base libraries?
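
As a quick sanity check that the two definitions agree on some inputs (a
small sketch, not from the original message):

prop_equiv :: [Int] -> Bool
prop_equiv xs = concatMap k xs == foldr ((++) . k) [] xs
  where k i = [i .. i * 2]

main :: IO ()
main = print (all prop_equiv [[], [1], [1 .. 10], [5, 3, 9]])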

On Mon, May 16, 2011 at 1:03 PM, Andrew Coppin
andrewcop...@btinternet.com wrote:
 On 16/05/2011 10:07 AM, Michael Vanier wrote:

 Usually in monad tutorials, the >>= operator for the list monad is
 defined as:

 m >>= k = concat (map k m) -- or concatMap k m

 but in the GHC sources it's defined as:

 m >>= k = foldr ((++) . k) [] m

 As far as I can tell, this definition is equivalent to the previous one
 (correct me if I'm wrong), so I was wondering why this definition was
 chosen instead of the other one. Does anybody know?

 Any time you see a more convoluted definition which ought to be equivilent
 to a simpler one, the answer is usually because this way makes some
 important compiler optimisation fire. It's even possible that the
 optimisation in question would fire anyway now, but way back when the code
 was written, the compiler wasn't as smart.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Regards,
Austin

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org

Re: [Haskell-cafe] >>= definition for list monad in ghc

2011-05-16 Thread austin seipp
You're both right indeed - I didn't look for the definition of
concatMap in GHC.List.

I thought it could be some behavior with the new inliner, so I defined
concatMap in terms of foldr, put it in a seperate module, and then
included it and used it in my test:

Concatmap2.hs:
module Concatmap2 (concatMap2) where

concatMap2 :: (a -> [b]) -> [a] -> [b]
concatMap2 f x = foldr ((++) . f) [] x

And concatmap.hs:

module Main where
import Concatmap2

test :: [Int] - [Int]
test x = concatMap2 k x
  where k i = [i..i*2]

main :: IO ()
main = do
  print $ test [1..10]

Initially I thought it might be something to do with the new inliner
heuristics (something about only inlining if call sites are 'fully
saturated' with the amount of arguments they explicitly take,) but
defining concatMap as a partial function or in terms of 'x' didn't
make a difference - both resulted in generating the longer version of
core.

Attaching an INLINEABLE pragma to the definition of concatMap2
however, causes its definition in the interface file (Concatmap2.hi)
to change, and it results in the core turning into the small version.
Compiling with the pragma causes the persisted version of concatMap2
in the iface file to change from:

8d333e8d08e5926fd304bc7152eb286d
  concatMap2 :: forall a b. (a -> [b]) -> [a] -> [b]
    {- Arity: 2, HasNoCafRefs, Strictness: LS,
       Unfolding: (\ @ a @ b f :: a -> [b] x :: [a] ->
       letrec {
         go :: [a] -> [b] {- Arity: 1, Strictness: S -}
         = \ ds :: [a] ->
           case @ [b] ds of wild {
             [] -> GHC.Types.[] @ b : y ys -> GHC.Base.++
                                                @ b (f y) (go ys) }
       } in
       go x) -}

To:

075ec6b9bcabc12777955494312f5e61
  concatMap2 :: forall a b. (a -> [b]) -> [a] -> [b]
{- Arity: 2, HasNoCafRefs, Strictness: LS,
   Inline: INLINABLE[ALWAYS],
   Unfolding: stable (\ @ a @ b f :: a -> [b] x :: [a] ->
GHC.Base.foldr
  @ a
  @ [b]
  (\ x1 :: a -> GHC.Base.++ @ b (f x1))
  (GHC.Types.[] @ b)
  x) -}

Which I assume exposes the needed code (namely the foldr) for
additional RULES to fire later, resulting in the small code.

So perhaps we should mark concatMap INLINEABLE, instead?
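
Concretely, the suggested change in GHC.List would look something like
this (a sketch):

import Prelude hiding (concatMap)

{-# INLINABLE concatMap #-}
concatMap :: (a -> [b]) -> [a] -> [b]
concatMap f = foldr ((++) . f) []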

On Mon, May 16, 2011 at 2:46 PM, Daniel Fischer
daniel.is.fisc...@googlemail.com wrote:
 On Monday 16 May 2011 20:49:35, austin seipp wrote:
 Looking at the Core for an utterly trivial example (test x = concatMap
 k x where k i = [i..i*2]), the foldr definition seems to cause a
 little extra optimization rules to fire, but the result seems pretty
 big. The definition using concatMap results in core like this:


 Hmm, something seems to be amiss, writing

 test :: [Int] -> [Int]
 test x = concat (map k x)
  where
    k :: Int -> [Int]
    k i = [i .. 2*i]

 the core I get is

 Rec {
 ConcatMap.test_go [Occ=LoopBreaker]
  :: [GHC.Types.Int] -> [GHC.Types.Int]
 [GblId, Arity=1, Caf=NoCafRefs, Str=DmdType S]
 ConcatMap.test_go =
  \ (ds_aoS :: [GHC.Types.Int]) ->
    case ds_aoS of _ {
      [] -> GHC.Types.[] @ GHC.Types.Int;
      : y_aoX ys_aoY ->
        case y_aoX of _ { GHC.Types.I# x_aom ->
        GHC.Base.++
          @ GHC.Types.Int
          (GHC.Enum.eftInt x_aom (GHC.Prim.*# 2 x_aom))
          (ConcatMap.test_go ys_aoY)
        }
    }
 end Rec }

 which is identical to the core I get for foldr ((++) . k) [].
 Writing concatMap, I get the larger core (I'm not sure which one's better,
 the concatMap core uses only (:) to build the result, that might make up
 for the larger code).

 But, as Felipe noted, concatMap is defined as

 -- | Map a function over a list and concatenate the results.
 concatMap               :: (a -> [b]) -> [a] -> [b]
 concatMap f             =  foldr ((++) . f) []

 in GHC.List. There are no RULES or other pragmas involving concatMap either
 there or in Data.List. In the absence of such pragmas, I would expect
 concatMap to be inlined and thus to yield exactly the same core as
 foldr ((++) . f) []




-- 
Regards,
Austin

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Python is lazier than Haskell

2011-04-28 Thread austin seipp
Dan,

I believe there was some work on this functionality for GHC some time
ago (agda-like goals for GHC, where ? in agda merely becomes
'undefined' in haskell.) See:

https://github.com/sebastiaanvisser/ghc-goals

This work was done a few years ago during a hackathon (the 09 Utrecht
hackathon.) There is a frontend-executable providing goal information,
as well as a patch to GHC to support it. It was never integrated into
GHC and was left dead in the water essentially (for exactly what
reasons, I do not know.)

I find myself using the 'undefined' trick somewhat often as well. I'm
not very familiar with Agda, but familiar enough to have seen goals
before in the interactive Emacs mode, and I think this would be a nice
feature for people who find themselves doing similar things.
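
For reference, the 'undefined' trick amounts to something like this (a
trivial sketch, with 'step' as a hypothetical stub):

foo :: Int -> Int -> Int
foo x y = step x + step y
  where
    step :: Int -> Int
    step = undefined  -- a stand-in 'hole'; the module typechecks meanwhile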

On Thu, Apr 28, 2011 at 3:18 PM, Dan Doel dan.d...@gmail.com wrote:
 (Sorry if you get this twice, Ertugrul; and if I reply to top. I'm
 stuck with the gmail interface and I'm not used to it.)

 On Thu, Apr 28, 2011 at 11:27 AM, Ertugrul Soeylemez e...@ertes.de wrote:
 I don't see any problem with this.  Although I usually have a bottom-up
 approach, so I don't do this too often, it doesn't hurt, when I have to.

 I do. It's low tech and inconvenient.

 Whenever I program in Haskell, I miss Agda's editing features, where I
 can write:

    foo : Signature
    foo x y z = ?

 Then compile the file. The ? stands in for a term of any type, and
 becomes a 'hole' in my code. The editing environment will then tell me
 what type of term I have to fill into the hole, and give me
 information on what is available in the scope. Then I can write:

    foo x y z = { partialImpl ? ? }

 and execute another command. The compiler will make sure that
 'partialImpl ? ?' has the right type to fill in the hole (with ?s
 again standing in for terms of arbitrary type). If the type checking
 goes through, it expands into:

    foo x y z = partialImpl { } { }

 and the process repeats until my function is completely written. And
 of course, let's not forget the command for automatically going from:

    foo x y z = { x }

 to

    foo Con1 y z = { }
    foo Con2 y z = { }
    foo Con3 y z = { }
    ...

 I don't think there's anything particularly Agda-specific to the
 above. In fact, the inference required should be easier with
 Haskell/GHC. Features like this would be pretty killer to have for
 Haskell development; then I wouldn't have to prototype in Agda. :)

 -- Dan

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Regards,
Austin

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Questioning seq

2011-04-14 Thread austin seipp
As usual, I'm foolish and forget to hit 'reply to all'. Original
message unedited below, so it can be sent to -cafe.

To answer question #3, pseq and seq are semantically equivalent
(indeed, if you look at the source for Control.Parallel, if you are
not using GHC, pseq is defined as 'pseq = seq'.) There is a subtle
operational difference however: seq is strict in both of its
arguments, while pseq is only strict in its first argument as far as
GHC is concerned.

When you annotate code for parallelism, seq is a little more
problematic, because you want more control over the evaluation order.
For example, given a `seq` b, the compiler can freely rearrange this
in a number of ways - but if we are evaluating code in parallel,
that's a little too strict. So typically you want to say a `pseq` b -
you will strictly evaluate 'a' before 'b', presumably because 'b' has
already been sparked off in parallel using `par`. If you were using
seq, the compiler can rearrange a `seq` b into say, b `seq` a `seq` b.
Which won't gain you anything from a parallel perspective, for the
most part.
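
As a tiny illustration of that idiom (a made-up example, but it
compiles against the parallel package):

    import Control.Parallel (par, pseq)

    main :: IO ()
    main = print (b `par` (a `pseq` (a + b)))
      where
        a = sum [1 .. 1000000] :: Int
        b = sum [1 .. 2000000] :: Int

Here b is sparked off with par, and pseq pins down that a is forced
before we touch b - exactly the ordering that seq alone would not
guarantee.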

For more info, see the Control.Parallel module:

http://hackage.haskell.org/packages/archive/parallel/3.1.0.1/doc/html/Control-Parallel.html

Also see Simon  Simon's paper, 'Runtime support for Multicore
Haskell', particularly section 2, 'Background: programming model'.

http://community.haskell.org/~simonmar/papers/multicore-ghc.pdf

On Thu, Apr 14, 2011 at 12:57 PM, Andrew Coppin
andrewcop...@btinternet.com wrote:
 A couple of questions:

 1. Why is the existence of polymorphic seq bad?

 2. Why would the existence of polymorphic rnf be worse?

 3. How is pseq different from seq?

 That is all...

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Regards,
Austin

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Emscripten: compiling LLVM to JavaScript

2011-04-11 Thread austin seipp
I do wonder how Emscripten handles the GHC calling convention that is
part of LLVM. In particular, global register declarations in the RTS
scare me from a side line view, and LLVM's calling convention support
is what makes the combination work at all in a registered environment.
It's currently not even possible to compile code using global register
variables to LLVM bitcode - it requires support everywhere from the
frontend all the way to code generation. This is why LLVM uses a
custom calling convention for GHC. The custom CC is also needed to
make sure things like GC etc can work accurately - GC roots for
example are always in predictable physical registers or somesuch.

The RTS will probably need to be heavily modified or flat-out
re-implemented in JS for this to work, I'd think (similar to ghcjs.)
It's possible to get GHC to generate code that does not use global
register variables (normally used when compiling via C, use ack and
search for 'NO_REGS' in the source. It's primarily for unregisterized
builds.) The result is fairly POSIX-compliant code that could be
compiled with, say, clang to produce bitcode.

However, forcing NO_REGS across GHC will mean that certain STG
'registers' are instead mapped to stack locations, not real physical
registers. So it changes the calling convention. The LLVM calling
convention is currently only built to handle a registerized GHC from
what I've seen, and upon entry to STG-land, it's going to expect that
certain STG registers are pinned to physical machine registers.
Result: disasterous CC mismatch.

Of course, perhaps re-implementing the RTS in JS isn't a terrible,
terrible idea. You could for example, swap out GHC's lightweight
threading to use webworkers or somesuch. This would probably be a hell
of a lot more difficult if you were compiling the real C RTS code into
JS indirectly via LLVM.

I don't know if Emscripten is the way to go for compiling Haskell to
JS, but it does open up more possibilities. It's going to be a lot of
work no matter how you approach it though - perhaps we need some
interested team of people willing to do the research.

(NB: I could be wrong on the LLVM note; David T. or one of the Simons
would probably know more and be more qualified to answer.)

On Mon, Apr 11, 2011 at 7:54 AM, Sönke Hahn sh...@cs.tu-berlin.de wrote:
 Hi all!

 I haven't tried the tool myself, but it seems interesting to the Haskell
 efforts to compile to JavaScript:

 http://syntensity.blogspot.com/2011/04/emscripten-10.html

 Cheers,
 Sönke



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Regards,
Austin

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell for Gingerbread

2010-12-28 Thread austin seipp
There was work ongoing for an ARM port of GHC. See here:

http://tommd.wordpress.com/2010/01/19/ghc-on-arm/

Also see:

http://alpheccar.org/en/posts/show/94

Alpheccar's build uses the work of Stephen Blackheath to cross
compile, which originated in the GHC-iPhone project, based on ghc
6.10.2 I believe. They are both based on the unregistered C backend,
i.e. using the -fvia-C backend.

None of this work has been accepted upstream as of me typing this, as
far as I know. Certainly the large majority of Stephen's work is not
in GHC (in particular things like the POOLSIZE wrapper, which is a
workaround for not being able to generate code at runtime via libffi,
because the iPhone doesn't allow it. I don't think Android has the
same restrictions, here.)

There are some plans underway for adding cross compilation support to
the GHC build system, see here:

http://hackage.haskell.org/trac/ghc/wiki/CrossCompilation

In particular this avoids the need for non-standard and painful build
setups like alpheccar went through, or the peculiar dance that e.g.
HalVM has to go through to cross compile GHC for Xen.

Overwhelmingly, I would say focusing efforts on GHC is the way to go.
JHC is fiddly (and still fails badly on some simple programs,) ditto
for LHC which is highly experimental and not quite as portable. nhc98
would probably be a good baseline, but the lack of language extensions
and support from the community probably means you're not going to get
a lot to work with it. yhc seems unmaintained. GHC also generates
better code than pretty much anything else, too (except when JHC
manages to actually work.) GHC is what everybody uses and knows, so it
seems best to focus efforts here.

Getting GHC to actually support cross compilation in the build system
is IMO probably the biggest thing that needs to be done, before any
sort of registered ARM port, or adding support for android to the
runtime (if that would even be necessary, I have not looked at the
Android Gingerbread NDK notes - it may be possible to just have a
library that takes care of interfacing with the NDK.) I say this
because I think GHC being cross compilable is ultimately a much
greater goal than just a port to work on Android/ARM (even debian has
had unregistered arm/armel GHC builds for a while, IIRC) and would
make using GHC with more exotic toolchains much, much easier (android
included.) Until then, your only hope is porting the unregistered
compiler, which can be quite frustrating and delicate (and doesn't
always work - see the GHC wiki page on Porting.)

Perhaps we should be paging Mr. Simon Marlow in here, since he can
probably give a better and more concrete answer.

On Tue, Dec 28, 2010 at 1:27 PM, Chris Kuklewicz
hask...@list.mightyreason.com wrote:
 Hi folks,

  I have been looking at developing for my Android phone which is
 running Gingerbread (Android version 2.3).  The important thing about
 the official development kit is this:

 The new native development kit (C/C++ cross compiler for ARM) is that
 the you can create android applications from pure C without using the
 Dalvik/Java virtual machine at all.  The thinking behind this was
 probably for game developers to be able to avoid their VM.

 So all that might be needed is a Haskell compiler with a C-backend that
 emits ARM-compatible code and an initially minimal android runtime.
 Implementing to the new native_activity.h allow for the usual
 application life-cycle: onStart, onPause, onResume, onStop...

 Some options I have not had a chance to look into:

 1) GHC arm port and -via-C
 2) jhc (and lhc) generated C
 3) port nhc98
 4) port yhc bytecode runtime

 Does anyone know who else is thinking along any of these lines?  Are
 there 5th or 6th routes to take?

 Cheers,
  Chris Kuklewicz



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Regards,
Austin

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Proof in Haskell

2010-12-21 Thread austin seipp
Patrick,

Dependent types are program types that depend on runtime values. That
is, they are essentially a type of the form:

f :: (a :: X) -> T

where 'a' is a *value* of type 'X', which is mentioned in the *type* 'T'.

You do not see such things in Haskell, because Haskell separates
values from types - the language of runtime values, and the language
of compile-time values are separate by design. In dependently typed
languages, values and types are in fact, the same terms and part of
the same language, not separated by compiler phases. The distinction
between runtime and compile time becomes blurry at this stage.

By the Curry-Howard isomorphism, we would like to make a proof of some
property, particularly that mirror (mirror x) = x - but by CH, types
are the propositions we must prove, and values are the terms we must
prove them with (such as in first-order or propositional logics.) This
is what Eugene means when he says it is not possible to prove this at
the value level. Haskell cannot directly express the following type:

forall a. forall x:Tree a, mirror (mirror x) = x

We can think of the 'forall' parts of the type as bringing the
variables 'a' and 'x' into scope with fresh names, and they are
universally quantified so that this property is established for any
given value of type 'a'. As you can see, this is a dependent type - we
establish we will be using something of type a, and then we establish
that we also have a *value* x of type 'Tree a', which is mentioned
finally by saying that 'mirror (mirror x) = x'.

Because we cannot directly express the above type (as we cannot mix
runtime values and type level values) we cannot also directly express
a proof of such a proposition directly in Haskell.

However, if we can express such a type - that is, a type which can
encode the 'Mirror' function - it is possible to construct a
value-level term which is a proof of such a proposition. By CH, this
is also possible, only not nearly as palatable because we encode the
definition of 'mirror' at the value level as well as the type level -
again, types and values in haskell exist in different realms. With
dependent types we only need to write 'mirror' *once* because we can
use it at both the type and value level. Also, because dependently
typed languages have more expressive type systems in general, it
quickly becomes a burden to do dependent-type 'tricks' in Haskell,
although some are manageable.

For amusement, I went ahead and actually implemented 'Mirror' as a
type family, and used a little bit of hackery thanks to GHC to prove
that indeed, 'mirror x (mirror x) = x' since with a type family we can
express 'mirror' as a type level function via type families. It uses
Ryan Ingram's approach to lightweight dependent type programming in
Haskell.

https://gist.github.com/750279
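
If you don't want to click through, the heart of it looks roughly
like this - note this is a from-memory sketch using closed type
families and DataKinds (later GHC extensions), not the closed-class
encoding the gist actually uses:

    {-# LANGUAGE DataKinds, PolyKinds, TypeFamilies #-}

    data Tree a = Tip | Node (Tree a) a (Tree a)

    -- mirror again, but this time as a type-level function
    type family Mirror (t :: Tree k) :: Tree k where
      Mirror 'Tip          = 'Tip
      Mirror ('Node l x r) = 'Node (Mirror r) x (Mirror l)

    -- GHC can now check Mirror (Mirror t) ~ t for concrete trees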

As you can see, it is quite unsatisfactory since we must encode Mirror
at the type level, as well as 'close off' a typeclass to constrain the
values over which we can make our proof (in GHC, typeclasses are
completely open and can have many nonsensical instances. We could make
an instance for, say 'Mirror Int = Tip', for example, which breaks our
proof. Ryan's trick is to encode 'closed' type classes by using a type
class that implements a form of 'type case', and any instances must
prove that the type being instantiated is either of type 'Tip' or type
'Node' by parametricity.) And well, the tree induction step is just a
little bit painful, isn't it?

So that's an answer, it's just probably not what you wanted.

However, at one point I wrote about proving exactly such a thing (your
exact code, coincidentally) in Haskell, only using an 'extraction
tool' that extracts Isabelle theories from Haskell source code,
allowing you to prove such properties with an automated theorem
prover.

http://hacks.yi.org/~as/posts/2009-04-20-A-little-fun.html

Of course, this depends on the extractor tool itself being correct -
if it is not, it could incorrectly translate your Haskell to similar
but perhaps subtly wrong Isabelle code, and then you'll be proving the
wrong thing! Haskabelle also does support much more than Haskell98 if
I remember correctly, although that may have changed. I also remember
reading that Galois has a project of having 'provable haskell' using
Isabelle, but I can't verify that I suppose.

On Tue, Dec 21, 2010 at 5:53 AM, Patrick Browne patrick.bro...@dit.ie wrote:
 Hi,
 In a previous posting[1] I asked was there a way to achieve a proof of
 mirror (mirror x) = x

 in Haskell itself. The code for the tree/mirror is below:

  module BTree where
  data Tree a = Tip | Node (Tree a) a (Tree a)

  mirror ::  Tree a -> Tree a
  mirror (Node x y z) = Node (mirror z) y (mirror x)
  mirror Tip = Tip

 The reply from Eugene Kirpichov was:
 It is not possible at the value level, because Haskell does not
 support dependent types and thus cannot express the type of the
 proposition
 forall a . forall x:Tree a, mirror (mirror x) = x,
 and 

Re: [Haskell-cafe] Quick Question for QuickCheck2

2010-08-30 Thread austin seipp
You can create a wrapper with a newtype and then define an instance for that.

newtype Char2 = Char2 Char

instance Arbitrary Char2 where
  arbitrary = ...

You'll have to do some wrapping and unwrapping when calling your
properties to get/set the underlying Char, but this is probably the
easiest way to 'constrain' the possible arbitrary results when the
default instance for Char can be too much.
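
Filled in with the generator from your example, the whole thing would
look something like this (an untested sketch):

    import Test.QuickCheck

    newtype Char2 = Char2 Char deriving Show

    instance Arbitrary Char2 where
      arbitrary = fmap Char2 $
        elements (['A'..'Z'] ++ ['a'..'z'] ++ " ~!@#$%^&*()")

    -- properties then take a Char2 and unwrap it:
    prop_inRange :: Char2 -> Bool
    prop_inRange (Char2 c) =
      c `elem` (['A'..'Z'] ++ ['a'..'z'] ++ " ~!@#$%^&*()")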

On Mon, Aug 30, 2010 at 10:12 AM, Sebastian Höhn
sebastian.ho...@googlemail.com wrote:
 Hello,

 perhaps I am just blind or is it a difficult issue: I would like to
 generate Char values in a given Range for QuickCheck2. There is this
 simple example from the haskell book:

 instance Arbitrary Char where
   arbitrary = elements (['A'..'Z'] ++ ['a' .. 'z'] ++ " ~!@#$%^&*()")

 This does not work in QuickCheck2 since the instance is already
 defined. How do I achieve this behaviour in QC2?

 Thanks for helping.

 Sebastian
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
- Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: Takusen 0.8.6

2010-08-01 Thread austin seipp
A reasonable guess (I think, anyway): the reason is that support
for ODBC, Oracle, Postgres etc. isn't compiled in by default. You have
to specify it with a flag to cabal install to get support for those
things. But the reason they show up in the API docs, I would guess, is
that Haddock doesn't check constraints on what modules are
'exposed' given certain flags in the cabal file; it just looks over
every 'exposed' module no matter what.

In this case, it's not really a huge burden because I believe those
modules each have almost identical interfaces, in particular they
specify a 'connect' like function to get a database handle, and that's
about all. The rest is common code under Database.Enumerator.

On Sun, Aug 1, 2010 at 1:49 PM, aditya siram aditya.si...@gmail.com wrote:
 I meant the links to the API docs.
 -deech

 [1]
 http://hackage.haskell.org/packages/archive/Takusen/0.8.6/doc/html/Database-ODBC-Enumerator.html

 On Sun, Aug 1, 2010 at 1:46 PM, Don Stewart d...@galois.com wrote:

 aditya.siram:
  Why are the Takusen module links on Hackage dead?

 Hmm.  The links look fine:

    http://hackage.haskell.org/package/Takusen-0.8.6

  this opportunity to request a Takusen tutorial and to thank you for this




 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe





-- 
- Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: Takusen 0.8.6

2010-08-01 Thread austin seipp
Hi Jason,

I've had my eye on the 'Takusen' approach for a while. In particular I
think it's a wonderful idea to use the left-fold based interface.
Takusen is also well supported and pretty stable, having been around
for a while.

Despite this, it seems to have a couple faults:
 * Few tutorials, aside from the Haddocks in Database.Enumerator
 * Very crufty old code in some sports (I see lots of references to
GHC 6.6 and the 'extensible exceptions' changes in the cabal file
among other places, which I believe we're all well beyond now. There
also seems to be random tidbits that could be removed in favor of a
library/removed because they're not necessary.) This should IMO all be
removed under the argument Get a more recent GHC although people may
disagree here.
 * It would be nice if we could make it depend on nicer libraries
instead of rolling its own stuff - for example, we have Lato's
excellent iteratee package, and Bas van Dijk has written a (IMO
woefully underappreciated!) 'regions' package with encapsulate the
lightweight monadic regions idea Oleg proposed. Of course, due to
design, neither of these may work properly for Takusen's case, and if
they did they would very likely facilitate API changes, but it's
something I've been thinking about in the name of making the library
smaller and more lightweight.

These things were somewhat motivating me to perhaps write a small
competitor library, but if Takusen is looking for a maintainer, it may
certainly be more productive to remove some of the old cruft and
modernize a bit of the existing code.

Does anybody else have similar feelings? I am somewhat busy, but if
someone else has some spare time to dedicate with me - either to
coding or simply brainstorming right now - perhaps we could transform
Takusen into a much more lightweight, better documented library. If it
comes down to this I think I might be willing to maintain it, but I'd
like feedback before just going out and wildly making changes to a
venerable library like this.

On Sat, Jul 31, 2010 at 1:10 PM, Jason Dagit da...@codersbase.com wrote:
 Hello,
 The Takusen team would like to announce the latest release of Takusen,
 0.8.6.  This is primarily a bug fix and test suite enhancement
 release.  The most notable new feature is limited support for string
 encodings with ODBC.  The full list of changes is included at the
 at the end of this announcement.
 = Interested in Takusen development? =
 Takusen is looking for a new long term maintainer.  I have agreed to
 fill the role of maintainer for now, but we are seeking an
 enthusiastic individual with spare time and a desire to lead Takusen
 development.
 = How to get it =
 This release is available on hackage:
   cabal update && cabal install Takusen
 The source code is available on code.haskell.org:
   darcs get http://code.haskell.org/takusen

 = New features =
 - Alistair Bayley:
   * Database/PostgreSQL/PGFunctions.lhs: show instance for UUID prints
     Word64s in hex.
   * Database/PostgreSQL/Enumerator.lhs: add UUID and marshaling
     functions.
   * Database/PostgreSQL/PGFunctions.lhs: add UUID and marshalling
     functions.
   * Database/ODBC/OdbcFunctions.hsc: add support for different String
     encodings. New functions to marshal to/from various encodings
     (Latin1, UTF8, UTF16), and bind/get functions changed to use
     these.
 - Daniel Corson
   * binary data with postgres
 = Bug fixes =
 - Alistair Bayley:
   * Database/ODBC/OdbcFunctions.hsc: fix bug in
     mkBindBufferForStorable for Nothing (null) case: pass -1
     (sqlNullData) as value size.
   * Database/ODBC/OdbcFunctions.hsc: use sqlNullData in
     bindParamString Nothing case, rather than -1. A bit more
     descriptive.
   * Database/ODBC/Enumerator.lhs: store bind buffers in stmt object,
     so we retain reference to them for lifetime of statement. Destroy
     with statement (well, lose the reference). Should fix bind errors
     found by Jason Dagit.
   * Database/ODBC/Enumerator.lhs: Oracle only supports two transaction
     isolation levels (like Postgres). String output bind parameters
     have max size 8000 (we use 7999 because module OdbcFunctions adds
     one to the size).
   * Database/ODBC/OdbcFunctions.hsc: string parameters have different
     SQL data types when binding columns (SQL_CHAR) and parameters
     (SQL_VARCHAR). Oracle only supports two transaction isolation
     levels.
   * Database/PostgreSQL/PGFunctions.lhs: fix byteaUnesc and add
     byteaEsc.

 = Refactoring =
 - Jason Dagit:
   * update urls in cabal file
 - Alistair Bayley:
   * Takusen.cabal: fixed QuickCheck version spec.
   * Takusen.cabal: bump version to 0.8.6.
   * Database/ODBC/OdbcFunctions.hsc: makeUtcTimeBuffer: pass struct
     size as both buffer size and data size in call to mkBindBuffer.
   * Database/ODBC/OdbcFunctions.hsc: mkBindBufferForStorable calls
     mkBindBuffer (reduces duplicated code).
   * Database/ODBC/Enumerator.lhs: add instance for EnvInquiry to
     

Re: [Haskell-cafe] Hot-Swap with Haskell

2010-07-16 Thread austin seipp
Sorry Andy! CC'ing to the rest of -cafe in case anybody notices (I
need to stop haskelling so early in the morning...)

On Fri, Jul 16, 2010 at 8:59 AM, austin seipp a...@0xff.ath.cx wrote:
 You also may like one project I wrote, an IRC bot that used hs-plugins
 to do hot code reloading (only works on GHC 6.8.) Inspired by
 lambdabot, it could swap the entire running program as well as its own
 plugins, on the fly and maintain state pretty successfully. I haven't
 touched the source code in a long time - it was one of my first bigger
 Haskell projects after reading Dons paper about yi. I tried a bunch of
 things with various versions of it.

 If I wrote the code again today (providing plugins worked) it would
 likely be 1/10th the size and more manageable as I have become much
 better at Haskell, but oh well. Looking over it now though, most of
 the code is pretty simple. :)

 http://code.haskell.org/infinity/src/

 On Fri, Jul 16, 2010 at 8:04 AM, Andy Stewart lazycat.mana...@gmail.com 
 wrote:
 Bartek Ćwikłowski paczesi...@gmail.com writes:

 Hello Andy,

 2010/7/16 Andy Stewart lazycat.mana...@gmail.com:

 There are some problems with re-compile solution:

 1) You can't save *all* state with some FFI code, such as gtk2hs, you
 can't save state of GTK+ widget. You will lost some state after
 re-launch new entry.

 For my 2008 GSOC application I wrote a demo app that used hot code
 reloading and it maintained gtk state just fine. The code (and video)
 is available at http://paczesiowa.dw.pl/ , it worked in ghc 6.8 and
 needed hs-plugins, so it won't work now, but the approach should work.
 Wow, thanks for your hint, i will wait dons fix hs-plugins then try this
 cool feature..

 Thanks,

  -- Andy
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




 --
 - Austin




-- 
- Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] More Language.C work for Google's Summer of Code

2010-03-30 Thread austin seipp
(sorry for the dupe aaron! forgot to add haskell-cafe to senders list!)

Perhaps the best course of action would be to try and extend cpphs to
do things like this? From the looks of the interface, it can already
do some of these things e.g. do not strip comments from a file:

http://hackage.haskell.org/packages/archive/cpphs/1.11/doc/html/Language-Preprocessor-Cpphs.html#t%3ABoolOptions

Malcolm would have to attest to how complete it is w.r.t. say, gcc's
preprocessor, but if this were to be a SOC project, extending cpphs to
include needed functionality would probably be much more realistic
than writing a new one.
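
Judging from those Haddocks, driving it from Haskell would look
roughly like this (a from-memory sketch - do double-check the field
names against the docs above):

    import Language.Preprocessor.Cpphs

    preprocessKeepingComments :: FilePath -> String -> IO String
    preprocessKeepingComments path src = runCpphs opts path src
      where
        opts = defaultCpphsOptions
          { boolopts = (boolopts defaultCpphsOptions)
              { stripC89 = False  -- keep /* ... */ comments
              , stripEol = False  -- keep // comments
              }
          }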

On Tue, Mar 30, 2010 at 12:30 PM, Aaron Tomb at...@galois.com wrote:
 Hello,

 I'm wondering whether there's anyone on the list with an interest in doing
 additional work on the Language.C library for the Summer of Code. There are
 a few enhancements that I'd be very interested seeing, and I'd love be a
 mentor for such a project if there's a student interested in working on
 them.

 The first is to integrate preprocessing into the library. Currently, the
 library calls out to GCC to preprocess source files before parsing them.
 This has some unfortunate consequences, however, because comments and macro
 information are lost. A number of program analyses could benefit from
 metadata encoded in comments, because C doesn't have any sort of formal
 annotation mechanism, but in the current state we have to resort to ugly
 hacks (at best) to get at the contents of comments. Also, effective
 diagnostic messages need to be closely tied to original source code. In the
 presence of pre-processed macros, column number information is unreliable,
 so it can be difficult to describe to a user exactly what portion of a
 program a particular analysis refers to. An integrated preprocessor could
 retain comments and remember information about macros, eliminating both of
 these problems.

 The second possible project is to create a nicer interface for traversals
 over Language.C ASTs. Currently, the symbol table is built to include only
 information about global declarations and those other declarations currently
 in scope. Therefore, when performing multiple traversals over an AST, each
 traversal must re-analyze all global declarations and the entire AST of the
 function of interest. A better solution might be to build a traversal that
 creates a single symbol table describing all declarations in a translation
 unit (including function- and block-scoped variables), for easy reference
 during further traversals. It may also be valuable to have this traversal
 produce a slightly-simplified AST in the process. I'm not thinking of
 anything as radical as the simplifications performed by something like CIL,
 however. It might simply be enough to transform variable references into a
 form suitable for easy lookup in a complete symbol table like I've just
 described. Other simple transformations such as making all implicit casts
 explicit, or normalizing compound initializers, could also be good.

 A third possibility, which would probably depend on the integrated
 preprocessor, would be to create an exact pretty-printer. That is, a
 pretty-printing function such that pretty . parse is the identity.
 Currently, parse . pretty should be the identity, but it's not true the
 other way around. An exact pretty-printer would be very useful in creating
 rich presentations of C source code --- think LXR on steroids.

 If you're interested in any combination of these, or anything similar, let
 me know. The deadline is approaching quickly, but I'd be happy to work
 together with a student to flesh any of these out into a full proposal.

 Thanks,
 Aaron

 --
 Aaron Tomb
 Galois, Inc. (http://www.galois.com)
 at...@galois.com
 Phone: (503) 808-7206
 Fax: (503) 350-0833

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
- Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Just 3 >>= (1+)?

2009-05-09 Thread Austin Seipp
Excerpts from michael rice's message of Sat May 09 14:31:20 -0500 2009:
 Why doesn't this work?
 
 Michael
 
 
 
 data Maybe a = Nothing | Just a
 
 instance Monad Maybe where
     return = Just
     fail _ = Nothing
     Nothing  >>= f = Nothing
     (Just x) >>= f = f x
     
 instance MonadPlus Maybe where
     mzero = Nothing
     Nothing `mplus` x = x
     x `mplus` _   = x
 
 
 
 [mich...@localhost ~]$ ghci
 GHCi, version 6.10.1: http://www.haskell.org/ghc/  :? for help
 Loading package ghc-prim ... linking ... done.
 Loading package integer ... linking ... done.
 Loading package base ... linking ... done.
 Prelude> Just 3 >>= (1+)
 
 <interactive>:1:0:
     No instance for (Num (Maybe b))
       arising from a use of `it' at <interactive>:1:0-14
     Possible fix: add an instance declaration for (Num (Maybe b))
     In the first argument of `print', namely `it'
     In a stmt of a 'do' expression: print it
 Prelude> 
 

Look at the types:

Prelude> :t (>>=)
(>>=) :: (Monad m) => m a -> (a -> m b) -> m b
Prelude> :t (+1)
(+1) :: (Num a) => a -> a
Prelude> 

The return type of '(+1)' in this case should be 'm b' but it instead
only returns 'b'. If we tag a return on there, it will work fine:

Prelude> Just 3 >>= return . (+1)
Just 4
Prelude> 

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GLFW - Mac OS X

2009-05-07 Thread Austin Seipp
Excerpts from Günther Schmidt's message of Thu May 07 14:12:04 -0500 2009:
 Hi,
 
 has anybody recently install the GLFW package on Mac OS X?
 
 It won't install on my machine.
 
 Günther
 

I ran into this problem - with GHC 6.10.2 or above if you try to
install GLFW with cabal install you get a host of errors that are
SSE-based and for that matter make absolutely no sense at all, but
they do appear.

The solution (unfortunately) is to manually alter the GLFW.cabal file
to pass the '-msse2' option to gcc *while under OS X*. That is, the
cabal file should look something like:

if os(darwin)
  include-dirs: glfw/include glfw/lib glfw/lib/macosx
  c-sources:
 
--   cc-options: -msse2
  frameworks: AGL Carbon OpenGL

So just add that line marked with '--'.

This will make sure GCC does not die while building. I'm honestly not
sure why this change has occurred, but it's pretty annoying, and I
guess I should probably send a patch to Paul H. Liu...

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to get program command line arguments in Unicode-aware way (Unix/Linux)?

2009-03-11 Thread Austin Seipp
Excerpts from Dimitry Golubovsky's message of Wed Mar 11 21:42:14 -0500 2009:
 Hi,
 
 I am trying to process command line arguments that may contain Unicode
 (cyrillic in this example) characters.
 
 The standard GHC's getArgs seems to pass whatever was obtained from
 the underlying C library
 without any regard to encoding, e. g the following program (testarg.hs):
 
 module Main where
 
 import System.Environment
 
 main = do
   x <- getArgs
   mapM (putStrLn . show) x
 
 being invoked (ghc 6.10.1)
 
 runghc testarg -T 'привет'
 
 prints the following:
 
 -T
 \208\191\209\128\208\184\208\178\208\181\209\130
 
 (not correct, all bytes were passed without proper encoding)
 
 Is there any way to get program arguments in GHC Unicode-aware? Or at
 least assuming that they are always in UTF-8?
 Something like System.IO.UTF8, but for command line arguments?
 
 Thanks.
 
 PS: BTW  runhugs testarg -T 'привет' prints:
 
 -T
 \1087\1088\1080\1074\1077\1090
 
 which is correct.
 

Hello,

Would this approach work using utf8-string?

import Codec.Binary.UTF8.String
import System.Environment
import Control.Monad

main = do
  x <- liftM (map decodeString) getArgs
  mapM_ (putStrLn . encodeString) x

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] I want to write a compiler

2009-03-09 Thread Austin Seipp
Excerpts from John Meacham's message of Mon Mar 09 07:28:25 -0500 2009:
 On Sat, Mar 07, 2009 at 07:45:06PM -0600, Austin Seipp wrote:
  (On that note, I am currently of the opinion that most of LHC's major
  deficiencies, aside from a few parser bugs or some needed
  optimizations, comes from the fact that compiling to C is currently
  our only option; because of it, we have no exception handling or
  proper garbage collection at all. As well, the runtime system is a
  little needlessly 'clever' (if small and understandable) so it can
  deal with that.)
 
 It would be interesting if you could revive the ghc back end I wrote for
 jhc in lhc. the code is still in the repository but was disabled a while
 ago, and I was just fretting over whether I should just delete it from
 the codebase as an interesting experiment. I mainly used it as a
 debugging aid once upon a time, but it was difficult to keep up to date
 with the C back end. I know it is sort of a silly back end, but it might
 be interesting.

Indeed, I stumbled upon it whilst looking at how unsafeCoerce worked
(to find out it is super-duper-special and implemented as part of E.)
I think it's actually pretty clever, and who knows, maybe it could be
useful as at least a debugging aid. :)

 I think a big deciding factor here would be the answer to one question
 do you want to deal with unboxed values in your compiler internally?
 As in, you plan on a lazy language, so, do you ever want to open up
 those thunks and deal with unboxed values in your compiler guts or do
 you want to treat them as abstract boxes to be evaluated by the runtime?
 if you do want to think about unboxed values, for optimization or other
 purposes, bite the bullet and go for something like GRIN as the back end
 and support unboxed values all the way through to the front end from the
 get go. If you really only want to support lazy thunks, go with one of
 the quasi virtual machine style implementations like STG.
 
 John
 

This is a very good point I hadn't even thought about! Indeed, since
GRIN represents thunks in a defunctionalized way - encoded as nodes -
dealing with boxed/unboxed values becomes more of the compiler's job,
since the nature of unboxed values etc. becomes more transparent.

Since you bring this up, I figure this decision also had some
influence on E in lhc/jhc, considering its type system is rich enough
to distinguish values in whnf/boxed/unboxed etc.?

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] I want to write a compiler

2009-03-07 Thread Austin Seipp
Hi,

(Please note this is coming from my own experience working with the LHC haskell
compiler, as well as a compiler I'm currently working on in SML. I'm
not an authority, but as another greenhorn compiler hacker I thought I
might give some advice.)

Excerpts from Loup Vaillant's message of Sat Mar 07 17:32:48 -0600 2009:
 Ideally, I would very much like to compile to C.
 
 The requirements are easily stated. My language must
 - be lazy by default,
 - support algebraic data types and case expressions (unless I can get
 away with encoding them as functions),
 - have some kind of FFI with C (I suppose it imply some support for
 unboxed values).
 
 There is also the first class functions thing, but lambda lifting is okay.

Unfortunately, after working on LHC, I can verifiably say (like all
the researchers who wrote the papers which I *didn't* read
beforehand,) that for a lot of purposes, C is an unsuitable fit for a
compilers' target language. It works pretty well if you can make the
source language execution model fit well enough, but if you can't,
you've a price to pay (both in sanity and/or performance.)

(On that note, I am currently of the opinion that most of LHC's major
deficiencies, aside from a few parser bugs or some needed
optimizations, comes from the fact that compiling to C is currently
our only option; because of it, we have no exception handling or
proper garbage collection at all. As well, the runtime system is a
little needlessly 'clever' (if small and understandable) so it can
deal with that.)

However, since your goal is *not* efficiency, you will be happy to
know that the issues with compiling to C (like garbage collection and
exception handling) can be worked around viably.

For garbage collection, please see.

Accurate Garbage Collection in an Uncooperative Environment -
http://citeseer.ist.psu.edu/538613.html

This strategy is currently used in Mercury as well as Ben L.'s DDC
language; on that note, I think if you spent some time looking through
the runtime/generated code of DDC, you can see exactly what the paper
is talking about, because it's actually a very simple strategy for
holding onto GC roots:

http://code.haskell.org/ddc/ddc-head/runtime/

However, it's reasons like this (C is really hard to compile to
effectively) that Simon  co. have spent so much time on the C--
project. C-- is one of the backend languages used in GHC, and it is
designed to be a 'uniform assembly language' for high level languages
to compile to.

You can find a lot of info on C-- here; I recommend the paper at the
bottom to start off:

http://research.microsoft.com/en-us/um/people/simonpj/papers/c--/

These papers should further illustrate the issues with compiling to C;
for a pedagogical excursion, these issues can all be (simply) worked
around for the most part like Henderson's paper illustrates, but
they're worth keeping in mind, too.

Hopefully those papers should help you concerning your compilation
model and the route you would like to take. I can't say much about
laziness, other than read Simon Peyton-Jones' actual full book (it's
an amazing read):

http://research.microsoft.com/en-us/um/people/simonpj/papers/slpj-book-1987/

That should help you concerning laziness/compilation etc..

As for the FFI, I don't really have any papers on 'how to implement an
FFI', but Matthias Blume's paper might be of interest:

No-Longer-Foreign: Teaching an ML compiler to speak C natively

http://ttic.uchicago.edu/~blume/papers/nlffi-entcs.pdf

libffi may also be worth investigating:

http://sourceware.org/libffi/


 I have chosen the STG machine because I thought i would be easier to
 get an FFI and case exprs out of it. Maybe I am wrong, and template
 instantiation is really easier. There is also the BRISK machine, and
 GRIN. But the paper I read were less readable to me than the three
 mentioned.

How far into GRIN have you looked? It is one of the intermediate
languages for lhc/jhc, and for a low-level intermediate representation
it works quite well I think; it is a strict, monadic intermediate
language - being strict, we must still be able to model closures
('suspendeded applications' or thunks,) so we model closures/partial
applications in a sort of 'defunctionalized' way (making grin a sort
of 'defunctionalized intermediate language' really,) by turning them
into something along the lines of an algebraic data type in GRIN. A
good benefit of this is that you actually /don't/ have to write
eval/apply yourself, because the compiler is supposed to generate it
*for* you.

In fact, this is a core part of GRIN - the generation of eval by the
compiler is critical for many important optimizations to take
place. It also makes the runtime a bit simpler.
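
To make that concrete, here is a toy model in Haskell - very much a
caricature, since real GRIN is a first-order, monadic language - of
thunks-as-nodes with a compiler-generated eval:

    data Node = CInt Int
              | CNil
              | CCons Node Node
              | Fupto Node Node  -- suspended call: upto m n
              | Fsum Node        -- suspended call: sumN xs

    -- the 'generated' eval: force a node to weak head normal form
    eval :: Node -> Node
    eval (Fupto m n) = upto (eval m) (eval n)
    eval (Fsum xs)   = sumN (eval xs)
    eval n           = n

    upto :: Node -> Node -> Node
    upto (CInt m) (CInt n)
      | m > n     = CNil
      | otherwise = CCons (CInt m) (Fupto (CInt (m + 1)) (CInt n))
    upto _ _      = error "upto: ill-typed node"

    sumN :: Node -> Node
    sumN CNil         = CInt 0
    sumN (CCons x xs) = case (eval x, sumN (eval xs)) of
      (CInt a, CInt b) -> CInt (a + b)
      _                -> error "sumN: ill-typed node"
    sumN _            = error "sumN: ill-typed node"

Because eval is ordinary first-order code dispatching on ordinary
constructors, the compiler can inline and specialize it at each call
site - which is where many of GRIN's optimizations come from.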

This comes at the price that GRIN is by design a whole-program
intermediate form; in order to pull off any of these sophisticated
transformations everything must be in memory at once. But as we have
seen with LHC/JHC, it can make a *huge* difference (apps that are 10x

Re: [Haskell-cafe] bytestring vs. uvector

2009-03-07 Thread Austin Seipp
Excerpts from Alexander Dunlap's message of Sun Mar 08 00:23:01 -0600 2009:
 For a while now, we have had Data.ByteString[.Lazy][.Char8] for our
 fast strings. Now we also have Data.Text, which does the same for
 Unicode. These seem to be the standard for dealing with lists of bytes
 and characters.
 
 Now we also have the storablevector, uvector, and vector packages.
 These seem to be also useful for unpacked data, *including* Char and
 Word8 values.
 
 What is the difference between bytestring and these new fast array
 libraries? Are the latter just generalizations of the former?
 
 Thanks for any insight anyone can give on this.
 
 Alex


Data.Text provides functions for unicode over bytestrings, with several
encoding/decoding methods. So, I think that bytestring+text now solves
the general problem with the slow String type - we get various
international encodings, and fast, efficient packed strings.

(It's also worth mentioning utf8-string, which gives you utf8 over
bytestrings. text gives you more encodings and is probably still quite
efficient, however.)

But this is pretty much a separate effort to that of packages like
uvector/vector etc. etc.. To clarify, uvector and vector are likely to
be merged in the future I think - vector is based on the idea of
'recycling arrays' so that array operations are still very efficient,
while uvector only has the tested stream fusion technique behind it.

Actually, I think the inevitable plan is to merge the technology
behind both vector and uvector into the Data Parallel Haskell
project. Array recylcing and stream fusion goes into creating
extremely efficient sequential code, while the vectorisation pass
turns that into efficient multicore code at the same time.

In any case, I suppose that hypothetically if someone wanted to use a
package like uvector to create an efficient string type, they could,
but if they want that, why not just use bytestring? It's already
optimized, battle tested and in extremely wide use.

I think some library proliferation is good; in this case, the
libraries mentioned here are really for some different purposes, and
that's great, because they all lead to some nice, fast code with low
conceptual overhead when put together (hopefully...) But I'm not even
going to begin examining/comparing the different array interfaces or
anything, because that's been done many times here, so you best check
the archives if you want the 'in-depth' on the matter.

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] bytestring vs. uvector

2009-03-07 Thread Austin Seipp
Excerpts from Bryan O'Sullivan's message of Sun Mar 08 00:45:03 -0600 2009:
 uvector is, if my memory serves me correctly, a fork of the vector library.
 It uses modern stream fusion, but is under active development and is a
 little scary. I'm a little unclear on the exact difference between uvector
 and vector. Both use arrays that are not pinned, so they can't be readily
 used with foreign code. If you want to use either library, understand that
 you're embarking on a bracing adventure.

vector and uvector are roughly based on the same technology; uvector
is - as far as I remember - a fork of some of the old DPH code which
uses stream fusion which Don cleaned up and worked on (and it's proven
pretty useful, and people are still hacking on it.)

vector however, has the notion of 'recycling arrays' when it does
array operations. The technique is in fact quite similar to stream
fusion. Roman L. built this from scratch I think, so it's quite a bit
less used and less stable than even uvector, maybe, but I suppose
you could say it's kind of a superset of uvector. Hopefully though
it should mature a little, and the plans are to have the technology
from both of these folded into the Data Parallel Haskell project so we
get fast array operations+automatic parallelisation.

For info, see Roman's paper, 'Recycle your arrays!'

http://www.cse.unsw.edu.au/~rl/publications/recycling.html

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] help optimizing memory usage for a program

2009-03-02 Thread Austin Seipp
Excerpts from Bulat Ziganshin's message of Mon Mar 02 10:14:35 -0600 2009:
 let's calculate. if at GC moment your program has allocated 100 mb of
 memory and only 50 mb was not a garbage, then memory usage will be 150
 mb

? A copying collector allocates a piece of memory (say 10mb) which is
used as the heap, and only one *half* of it ever has data in it. When
a copy occurs, all live data is transferred from one half to the other
half. 

So, if you have a 10mb heap initially, that means you have two sides
of the heap, both which are 5mb big. If your program has 2mb of live
data at the time of a GC, then those 2mb of data are copied over to
the other 5mb side. This copying can be done with pointer arithmetic
in a basic manner, so you don't need to allocate any more memory for a
copy to occur. So the program never uses more than 10mb heap. GHC
might be a little different (so correct me if I'm wrong in the case of
GHC.)

Copying collection has the advantage that for low-lived allocations,
GC moments are pretty quick, because the time spent copying is
proportional to the size of the live data. It gets slower if your
program has lots of live data at once, not only because collections
become more frequent, but because copies will become longer in time as
well.

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Verifying Haskell Programs

2009-02-03 Thread Austin Seipp
Excerpts from Paulo J. Matos's message of Tue Feb 03 02:31:00 -0600 2009:
 Any references to publications related to this?

While it's not Haskell, this code may be of interest to you:

http://pauillac.inria.fr/~xleroy/bibrefs/Leroy-compcert-06.html

This paper is about the development of a compiler backend using the
Coq proof assistant, which takes Cminor (a C-like language) and
outputs PowerPC assembly code. Coq is used both to program the
compiler and prove it is correct.

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Verifying Haskell Programs

2009-02-03 Thread Austin Seipp
Excerpts from Austin Seipp's message of Tue Feb 03 03:40:47 -0600 2009:
 ...

After noticing that I didn't give a link to the code in the last
message, I searched and found this more up to date page I think:

http://compcert.inria.fr/doc/index.html

 Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANN: HDBC v2.0 now available

2009-01-30 Thread Austin Seipp
Excerpts from John Goerzen's message of Fri Jan 30 18:31:00 -0600 2009:
 Why would cabal-install select a different base than running Setup
 manually?  
 
 I can't hard-code base = 4 into .cabal because that would break for
 GHC 6.8 users.  I have CPP code that selects what to compile based on
 GHC version.
 
 -- John

I think it would be easiest to simply not use CPP and set a dependency
on the extensible-exceptions package:

http://hackage.haskell.org/cgi-bin/hackage-scripts/package/extensible-exceptions

Using this package, you don't need CPP at all - just import
'Control.Exception.Extensible' instead of Control.Exception and you're
done. This package works under both ghc 6.8 and 6.10 (when compiled
under 6.10, it just re-exports all of Control.Exception, and on =
6.10 it actually compiles the code) and means you don't have to have
two awkward copies of the same code that use different
exception-handling mechanisms.

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: databases in Haskell type-safety

2009-01-29 Thread Austin Seipp
Excerpts from Günther Schmidt's message of Thu Jan 29 07:42:51 -0600 2009:
 Hi Austin,
 
 could you post the patch please?
 
 So far there is no updated version of takusen that builds with ghc
 6.10
 
 Günther

Hi Gunther,

I recently got an email back from Alistair Bayley who is one of the
Takusen authors, and they said they are preparing a GHC 6.10 release
(I was *not* the only person to submit a patch for ghc 6.10 building)
but it may take a little while. You might want to get in contact with
Alistair and ask what the current progress is.

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell not ready for Foo [was: Re: Hypothetical Haskell job in New York]

2009-01-08 Thread Austin Seipp
Excerpts from John A. De Goes's message of Thu Jan 08 12:14:18 -0600 2009:
 But really, what's the point? FFI code is fragile, often uncompilable  
 and unsupported, and doesn't observe the idioms of Haskell nor take  
 advantage of its powerful language features.

This is a completely unfair generalization. The FFI is an excellent
way to interoperate with an extraordinary amount of external
libraries, and if you ask me, it's *worth* taking those pieces of C
code and wrapping them up in a nice, safe haskell interface. I will also
mention that Haskell has *the* simplest FFI I have ever used, which to
me only means it's easier to get it right (the fact that there are
customized *languages* like e.g. cython to make writing python
extensions easier makes me wonder.)

I suggest you take a look at the haskell llvm bindings - they are
extraordinarily well documented, and the high level interface uses
*many* haskell idioms that make the library safe and easy to use:

http://hackage.haskell.org/cgi-bin/hackage-scripts/package/llvm-0.4.2.0

This code most certainly takes advantage of powerful features and
idioms that only Haskell can provide. Please do not take your bad
experiences with a few crappy binding (or not even crappy bindings,
perhaps bindings that just aren't very abstract) and extend them to
the bindings that are excellent with a sweeping statement like that.

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Taking Exception to Exceptions

2009-01-07 Thread Austin Seipp
Excerpts from Immanuel Litzroth's message of Wed Jan 07 16:53:30 -0600 2009:
 I'm trying to use the new (for me at least) extensible exceptions and
 I am little amazed that I cannot get catch, try or mapException to work
 without telling them which exceptions I want to catch.
 What is the rationale behind this?

The rational is that it's normally not a good idea to simply gobble
all exceptions; although the library makes it possible to do this
anyway.

You can either use the ScopedTypeVariables extension and do:

... `catch` \(e::SomeException) -> ...

Or without an extension you can do:

... `catch` handler
  where handler :: SomeException -> IO a
        handler e = ...

(It's really a matter of taste if you want to use a non-haskell98
extension, although considering that the new extensible exceptions
library uses deriving data/typeable and existentials anyway, I think
ScopedTypeVariables are the way to go.)
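
A tiny runnable example of the ScopedTypeVariables style, for
reference:

    {-# LANGUAGE ScopedTypeVariables #-}
    import Control.Exception

    main :: IO ()
    main = do
      r <- try (evaluate (1 `div` 0 :: Int))
      case r of
        Left (e :: SomeException) -> putStrLn ("caught: " ++ show e)
        Right v                   -> print v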

 How does bracket manage to catch all exceptions?
 What should onException do?

onException takes an IO action and what to do if it fails - if the IO
action fails, it is caught and your 'failure action' is run, followed by
onException re-throwing the error.

 Is there some example code that uses these exceptions, or better
 documentation?

The GHC docs don't have source-code links (don't know why,) but
luckily in order to aid in using the new extensions system with older
GHCs there has been a hackage package uploaded that provides the
identical functionality:

http://hackage.haskell.org/cgi-bin/hackage-scripts/package/extensible-exceptions

The source is here:

http://hackage.haskell.org/packages/archive/extensible-exceptions/0.1.1.0/doc/html/src/Control-Exception-Extensible.html

As for documentation e.g. haddock stuff, this is currently a bug as
there is none:

http://hackage.haskell.org/trac/ghc/ticket/2655

I recommend this paper for info, it's very easy to follow:

http://www.haskell.org/~simonmar/papers/ext-exceptions.pdf

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] databases in Haskell type-safety

2009-01-03 Thread Austin Seipp
Excerpts from Gour's message of Sat Jan 03 03:48:44 -0600 2009:
 Hi!
 
 I'd like to use sqlite3 as application storage in my haskell project...
 
 Browsing the available database options in Haskell it seems that:
 
 a) HSQL is dead (hackage reports build-failure with 6.8 & 6.10)
 
 b) haskelldb is also not in a good shape - build fails with 6.8 & 6.10
 
 For Haskell-newbie as myself, it looks that haskelldb is the one which
 provide(ed)s the most secure API (I was reading draft paper about
 MetaHDBC but, apparently, the type inference support in open-source
 databases is poor and that's why, according to the author This is
 unfortunately as it makes MetaHDBC a lot less valuable. 
 
 What remains is:
 
 c) Takusen which is also not up-to-date (it fails with 6.10) and
 
 d) HDBC and sqlite bindings which are the only packages which build with
 6.10.

Have you tried the simple sqlite3 bindings available?
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/sqlite

 I'm not familiar with Takusen which says: Takusen's unique selling
 point is safety and efficiency... and I would appreciate if someone
 could shed some more light to its 'safety' and the present status?

Takusen is based on the (unique) concept of a left-fold
enumerator. Having a left-fold interface guarantees timely (nearly
perfect, really) deallocation of resources while still having the
benefits of a 'lazy' stream. This interface has (as shown by Oleg and
others) proven to be very efficient in a number of cases as well as
favorable for many. The idea is very novel, and truly worth exploring
if you ask me.
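
A stripped-down sketch of the shape of that interface (names made up
here, not Takusen's actual API):

    -- the iteratee: a fold step that can stop early
    type Iteratee row seed = seed -> row -> IO (Either seed seed)
    -- Left s = stop now with s, Right s = keep folding with s

    -- the enumerator owns the resource (cursor, handle, ...) and can
    -- release it deterministically the moment the fold finishes
    enumRows :: [row] -> Iteratee row seed -> seed -> IO seed
    enumRows []     _    seed = return seed
    enumRows (r:rs) iter seed = do
      step <- iter seed r
      case step of
        Left  s -> return s            -- early exit, resource freed
        Right s -> enumRows rs iter s

The point is that the consumer never holds the resource; it only
supplies the fold, so cleanup cannot be forgotten or delayed.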

For more information about left-fold enumerators and takusen, see here:

http://okmij.org/ftp/papers/LL3-collections-enumerators.txt
http://okmij.org/ftp/Haskell/fold-stream.lhs
http://okmij.org/ftp/Haskell/misc.html#takusen

NB: I have *just* (about 5 minutes ago) sent in a patch for takusen
to get it to build on GHC 6.10.1 to Oleg. Hopefully an updated version
will appear on hackage in the next few days.

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Type Family Relations

2009-01-03 Thread Austin Seipp
Excerpts from Thomas M. DuBuisson's message of Sat Jan 03 09:22:47 -0600 2009:
 Mandatory contrived example:
 
  type family AddressOf h
  type family HeaderOf a
 
  -- I'm looking for something to the effect of:
  type axiom HeaderOf (AddressOf x) ~ x
 
  -- Valid:
  type instance AddressOf IPv4Header = IPv4
  type instance HeaderOf IPv4 = IPv4Header
 
  -- Invalid
  type instance AddressOf AppHeader = AppAddress
  type instance HeaderOf AppAddress = [AppHeader]
 
 So this is  a universally enforced type equivalence.  The stipulation
 could be arbitrarily complex, checked at compile time, and must hold
 for all instances of either type family.
 
 Am I making this too hard?  Is there already a solution I'm missing?
 

The problem is that type families, like type classes, are open -
anybody can extend the family, e.g.:

 type instance AddressOf IPv4Header = IPv4
 type instance HeaderOf IPv4 = IPv4Header
 type instance AddressOf IPv6Header = IPv4

 ipv4func :: (AddressOf x ~ IPv4) => x -> ...

And it will happily accept arguments of type IPv6Header.

There is a proposal for extending GHC to support type invariants of
this nature, but it is not implemented:

  http://tomschrijvers.blogspot.com/2008/11/type-invariants-for-haskell.html
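
In the meantime, one partial workaround is to demand the round-trip
equality at each use site. A sketch (the 'roundTrip' function is made
up for illustration):

  {-# LANGUAGE TypeFamilies, FlexibleContexts, EmptyDataDecls #-}

  data IPv4
  data IPv4Header

  type family AddressOf h
  type family HeaderOf a

  type instance AddressOf IPv4Header = IPv4
  type instance HeaderOf  IPv4       = IPv4Header

  -- The constraint is checked per call site: it cannot stop someone
  -- from writing a mismatched instance elsewhere, but code with this
  -- signature will not typecheck against one.
  roundTrip :: (HeaderOf (AddressOf h) ~ h) => h -> h
  roundTrip = id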

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Install Yi editor with GHC 6.10.1

2008-12-02 Thread Austin Seipp
Excerpts from lazycat.manatee's message of Tue Dec 02 23:18:50 -0600 2008:
 Hi all,
 
 I have install GHC 6.10.1 and Gtk2hs (darcs version) in Debian.
 So i want to ask, have anyone install Yi (darcs version) with GHC 6.10.1
 successfully?

Yes. cabal install is basically the easiest way to do it:

 $ cabal update
 $ cabal install yi

 And the best way to install darcs version Yi?

 $ darcs get http://code.haskell.org/yi yi-darcs
 $ cd yi-darcs
 $ cabal install

You will need darcs v2. (I normally use the yi darcs repo, FWIW; jpb
pushes lots of little fixes/optimizations regularly, and in yi
0.5.2 and the darcs version some major memory leaks seem to have been
fixed.)

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Compilers

2008-11-29 Thread Austin Seipp
Hi Daniel,

 1. cabal install lhc
 20 minutes later I have an lhc executable installed (and the graphviz 
 package), great, can't be any simpler.

Awesome! Glad it worked for you.

A tidbit: unfortunately, due to a mistake in the first upload of lhc,
you will need to provide an exact version if you want the latest and
greatest.

The reason behind this is that when David uploaded lhc initially,
he gave it a version of '20081121'. After a few days of hacking the
source code, I decided to upload a new version - but I changed the
version number to 0.6.20081127 (it is 0.6 because jhc is currently at
0.5, and I see the improvements we made as worthy of a minor version
bump.)

So, as far as cabal is concerned, 20081121 > 0.6.20081127, so it will
by default install the older version.

If you would be so kind as to try the latest lhc instead by running:

$ cabal install lhc-0.6.20081127

And reporting back, I would like to hear the results and if it went
well. :)

 Unfortunately:
 $ lhc -o lhcHeap heapMain
 lhc: user error (LibraryMap: Library base not found!)
 
 Oops.

There is a reason this is happening, and there isn't an easy way to
get around it right now, it seems.

The problem is that when you just install lhc, it has no libraries. To
install the base library, you are going to need a copy of the lhc
source code - this cannot be automated by hackage.

Why? Because we are afraid that uploading lhc's version of base -
simply called 'base' - to hackage would inadvertently break every
package that builds against it, and cabal install could stop working
too. Scary thought, huh? :)

The easiest way to fix this problem is by doing the following:

1. You probably want the darcs version of LHC anyway if you're willing
   to try it. Good improvements are being made pretty much every day.
2. After you get the darcs repository, just go into it and do 'cabal
   install'
3. To install base, you are probably going to want the latest versions
   of both cabal and cabal-install from the darcs repository - they
   include support for LHC already (cabal 1.7.x.)
4. After you've installed lhc and the latest cabal/cabal install, you
   can just do:
   $ cd lhc/lib/base
   $ cabal install --lhc

   And it should Just Work.

All of these instructions can be found here: 

http://lhc.seize.it/#development

Don Stewart just brought up this point in #haskell, so I think I will
modify the wiki page a bit (http://lhc.seize.it) and highlight these
notes and why it's currently like this.

I apologize for it being so cumbersome right now. We're trying to
figure out a good solution.

 Okay, ./configure --help and searching through the configure script (which I 
 completely don't know the syntax of) lead me to try 
 ./configure --prefix=$HOME DRIFTGHC=/home/dafis/.cabal/bin/DrIFT
 which successsfully completes the configure step, but shouldn't configure 
 find 
 executables in the path?

The reason is that the configure.ac script is designed to search
for an executable named 'drift-ghc', not 'DrIFT'. I have no idea why.

 import System (getArgs). Now
 ... myriads of lines of output ...
 jhc: user error (Grin.FromE - Unknown primitive: (eqRef__,[EVar (::ELit 
 (Data.IORef.Ref__ (ELit ([EMAIL PROTECTED]::ESort *))::ESort #)),EVar 
 (6670::ELit 
 (Data.IORef.Ref__ (ELit ([EMAIL PROTECTED]::ESort *))::ESort #))]))
 
 What? 
 And I get the same error for every nontrivial programme I tried to compile, 
 but not for a couple of trivial programmes.

LHC and JHC are still extremely incomplete. They're nowhere near as
supportive of extensions or libraries as GHC is. Don't count on them
compiling anything non-trivial just yet.

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] IRC question

2008-11-26 Thread Austin Seipp
 Does anyone have an IRC client hiding somewhere that is console friendly (I
 IRC from a screen session) which is also extensible in Haskell?
 

http://www.haskell.org/hircules/

Last update was over 5 years ago - you could try to still build
it. But it uses gtk2hs, not ncurses.

Personally, I've thought about this as a project *several* times in
the past, and I might have a few bits of code thrown together lying
around somewhere.

The -main- reasons I haven't worked on it any more than I have already
is because:

A) I was under the impression I was still the only one looking for
something like this - or maybe lots of people want an 'xmonad
equivalent' of their IRC client? :)
B) My university blocks port 6667 for some reason, so in order to
connect with a terminal IRC client from here, I need to use SSL.
And there are *no* good SSL wrappers out there at all for Haskell as
it stands - this alone is a major inhibitor; I know I always want my
clients to have SSL support.
C) I've been busy.

That said, if you would like to really get something started and
perhaps hack on some stuff, that would be terrifically fun and
interesting. Does anybody have name candidates? Perhaps we should go
with the yi scheme from Confucianism - the zhi (knowledge) IRC client? ;)

 Thanks,
 Jason

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to use Unicode strings?

2008-11-22 Thread Austin Seipp
Excerpts from Dmitri O.Kondratiev's message of Sat Nov 22 05:40:41 -0600 2008:
 Please advise how to write Unicode string, so this example would work:
 
 main = do
   putStrLn "Les signes orthographiques inclus les accents (aigus, grâve,
 circonflexe), le tréma, l'apostrophe, la cédille, le trait d'union et la
 majuscule."
 
 I get the following error:
 hello.hs:4:68:
 lexical error in string/character literal (UTF-8 decoding error)
 Failed, modules loaded: none.
 Prelude
 
 Also, how to read Unicode characters from standard input?
 
 Thanks!
 

Hi,

Check out the utf8-string package on hackage:

http://hackage.haskell.org/cgi-bin/hackage-scripts/package/utf8-string

In particular, you probably want the System.IO.UTF8 functions, which
are identical to their non-UTF8 counterparts in System.IO except,
well, they handle Unicode properly.

More specifically, you will probably want to mainly look at
Codec.Binary.UTF8.String.encodeString and decodeString, respectively
(in fact, most of the System.IO.UTF8 functions are defined in terms of
these, e.g. 'putStrLn x = IO.putStrLn (encodeString x)' and 'getLine =
liftM decodeString IO.getLine'.)
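
A small, self-contained example of both styles (the strings here are
arbitrary):

  import qualified System.IO.UTF8 as UTF8
  import Codec.Binary.UTF8.String (encodeString, decodeString)

  main :: IO ()
  main = do
    -- the drop-in replacement...
    UTF8.putStrLn "le tréma, l'apostrophe, la cédille"
    -- ...or explicit encoding/decoding around the ordinary functions
    putStrLn (encodeString "le trait d'union et la majuscule")
    line <- getLine
    putStrLn (encodeString ("you said: " ++ decodeString line))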

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] What *not* to use Haskell for

2008-11-14 Thread Austin Seipp
Excerpts from Andrew Coppin's message of Fri Nov 14 14:13:01 -0600 2008:
 Yeah. I figure if I knew enough about this stuff, I could poke code 
 numbers directly into RAM representing the opcodes of the machine 
 instructions. Then I only need to figure out how to call it from 
 Haskell. It all sounds pretty non-trivial if you ask me though... ;-)

Save yourself some time:
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/harpy

Using harpy, you can generate x86 assembly /inside/ your code and
execute it, using a DSL. This makes it excellent for code generators
and playing around with code generation.

Here's a calculator I wrote using it:
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/calc

For more information, http://uebb.cs.tu-berlin.de/harpy/

 Never heard of LLVM, but from the Wikipedia description it sound like 
 warm trippy goodness. Pitty there's no Haddock. :-(

It's a pretty excellent little system, to be honest. One of the
cleanest APIs I've ever used, too (especially for C++.)

 [From the build log, it looks like it failed because the build machine 
 doesn't have the LLVM library installed. Is that really necessary just 
 for building the docs?]

It's necessary even to get through the 'cabal configure' step, since
that is when the configure script bundled with the Haskell LLVM
bindings runs, and it checks for the llvm-c headers.

Austin.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell Weekly News: Issue 92 - November 8, 2008

2008-11-10 Thread Austin Seipp
 Anyway, I don't see it anywhere in the release notes, but I get the vibe 
 that type families are supposed to be fully working now. Is that 
 correct? If so, why no mention anywhere?

Type families have been completely reimplemented and should be stable
now, but there are some bugs - notably equality constraints in
superclasses are not supported in GHC 6.10.1, e.g.:

 class (F a ~ b) => C a b where
   type F a

As indicated by this bug report: http://hackage.haskell.org/trac/ghc/ticket/2715
And here: http://haskell.org/haskellwiki/GHC/Indexed_types#Equality_constraints

 Also, the release notes tantelisingly hint that the long-awaited 
 parallel-array stuff is finally working in this release, but I can't 
 find any actual description of how to use it. All the DPH stuff seems on 
 the wiki was last updated many months ago. You would have thought that 
 such a big deal would be well-documented. It must have taken enough 
 effort to get it to work! You'd think somebody would want to shout about 
 it...

I put up a DPH version of the binarytrees benchmark from the shootout:

http://haskell.org/haskellwiki/Shootout/Parallel/BinaryTreesDPH

There are some notes there; the only documentation I really used was
the documentation the GHC build process generates for the 'dph-*'
libraries (you can see them in 6.10 by just doing 'ghc-pkg list' and
looking through the output.)

I was thinking of porting more of the parallel shootout entries to use
DPH, but I'm busy right now - results could be interesting.

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Automatic parallelism in Haskell, similar to make -j4?

2008-11-05 Thread Austin Seipp
Excerpts from Chad Scherrer's message of Tue Nov 04 21:34:01 -0600 2008:
 Does anyone have any thought what it would take to get this going?
 
 Chad
 

Currently, franchise supports building in parallel with a -j flag, but
the code could definitely be optimized (in my experience, running with
something like -j3 on my dual core reduces compile times with
franchise on arbitrary projects by about 20%.) During the 2008
SOC, there was also work on adding this support to cabal, which
eventually ended up as the hbuild project.

http://hackage.haskell.org/trac/hackage/wiki/HBuild

darcs get http://darcs.net/repos/franchise

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Automatic parallelism in Haskell, similar to make -j4?

2008-11-03 Thread Austin Seipp
Excerpts from t.r.willingham's message of Sun Nov 02 17:28:08 -0600 2008:
 What would it take to implement a -j equivalent for, say, GHC?  Or if
 this is not possible, what is wrong with my reasoning?
 
 Thanks,
 TW

Hi,

The main issue is the set of decisions the compiler needs to make
in order to generate adequate code for the general case - in
particular, decisions about the *granularity* of the threading. The
generated code may, for example, waste a lot of time sparking off
threads (using 'par' or the like) for computations that take less time
than the thread creation itself, so each spark is a net loss; in that
case the threading needs to be more coarse-grained. On the other hand,
the compiler may not sprinkle enough par's throughout the program, and
then we miss opportunities where the parallelism could give us a
speedup - in that case, we need more fine-grained threading.

So the problem is (in the general case): given an arbitrary input
program, how do you adequately decide what should and should not be
parallelized in that program? Even then, how do you decide the
granularity at which the threads should operate?
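
To make the granularity problem concrete, here is a toy sketch using
'par' and 'pseq' from Control.Parallel (compile with -threaded, run
with +RTS -N2); the function names are made up:

  import Control.Parallel (par, pseq)

  -- Coarse-grained: one spark for a whole expensive half of the work.
  sumTwoHalves :: [Int] -> Int
  sumTwoHalves xs = left `par` (right `pseq` (left + right))
    where
      (ls, rs) = splitAt (length xs `div` 2) xs
      left     = sum ls
      right    = sum rs

  -- Absurdly fine-grained: one spark per element; each spark costs
  -- more than the single addition it is supposed to speed up.
  sumSparky :: [Int] -> Int
  sumSparky = foldr (\x acc -> x `par` (x + acc)) 0

  main :: IO ()
  main = print (sumTwoHalves [1 .. 1000000], sumSparky [1 .. 1000])

An automatic compiler has to guess, for every candidate expression,
which of these two situations it is in.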

It's a pretty tough problem, and I don't think we're even close to
full-blown automagically-parallelizing compilers (although with things
like parallel strategies and DPH we can get close.) But there is work
in this area using profiling information to guide these optimizations,
see "Feedback directed implicit parallelism" here:

http://research.microsoft.com/~tharris/papers/2007-fdip.pdf

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Syntax question: class (Monad m) = MonadReader r m | m - r where

2008-11-02 Thread Austin Seipp
This message is literate Haskell.

 {-# LANGUAGE FunctionalDependencies, MultiParamTypeClasses #-}
 {-# LANGUAGE TypeFamilies, EmptyDataDecls, FlexibleContexts #-}

Just to add on for people watching, a fundep pretty much just says that if:

 class Foo a b | a -> b where
   bar :: a -> b
   baz :: b -> a

Then if you have an instance:

 data A -- we hold types abstract for simplicity
 data B
 data C
 
 u = undefined -- convenience
 
 instance Foo A B where 
   bar x = u
   baz y = u

Then there can be no other instance for the class Foo of the form 'A
x' where x /= B, e.g. it would be invalid to then have:

 -- uncommenting this will error:
 -- instance Foo A C where
 --   bar x = u
 --   baz y = u

Why? In the class declaration we established that 'a' determines 'b',
and in an actual instance of that class we stated concretely that the
type A determines the type B; that relation must hold for every usage
of the class (in something like a constraint).

The only way the typechecker can resolve that /relation/ is if
(roughly speaking) the set of types that 'b' can range over, given an
already-known type 'a', is constrained in some way.

If you do not have the fundep, then the class will compile and so will
its instances, but you will get ambiguous type errors when trying to
use the class methods.

 class Foo2 a b where
   foobar :: a -> b
   foobaz :: b -> a

When 'foobar' is used with no functional dependency, the compiler
cannot determine the type 'b': the instances of 'Foo2' do not bind
the two types together in any way, so for instances of the form
'Foo2 Bool x', any x could work here:

 instance Foo2 Bool Int where
   foobar x = u
   foobaz y = u
 instance Foo2 Bool Char where
   foobar x = u
   foobaz y = u
 

These instances will compile, but attempting to use them does not:

$ ghci ...
GHCi, version 6.8.3: http://www.haskell.org/ghc/  :? for help
Loading package base ... linking ... done.
[1 of 1] Compiling Main ( fundeps-assoc.lhs, interpreted )
Ok, modules loaded: Main.
*Main> :t foobar
foobar :: (Foo2 a b) => a -> b
*Main> let z = foobar True

<interactive>:1:8:
No instance for (Foo2 Bool b)
  arising from a use of `foobar' at <interactive>:1:8-18
Possible fix: add an instance declaration for (Foo2 Bool b)
In the expression: foobar True
In the definition of `z': z = foobar True
*Main>


The compiler is not able to determine what to declare the result type
of 'foobar True' as, since there are instances of 'Foo2 Bool'
for multiple types!

However, in the original Foo class we do have a fundep, and so we know
that if we give 'bar' a value of type 'A' then we get a 'B' in return
- hence, the fundep is a relation and constrains the possible values
of the type variable 'b' in the declaration, so the compiler can
figure this out.

*Main> let z = bar (u::A)
*Main> :t z
z :: B
*Main>

Functional dependencies are a little tricky, but they can basically
set up relations between types however you want them (for example,
using a fundep of 'a -> b, b -> a' specifies a 1-to-1 correspondence
between the types which instantiate 'a' and 'b'.)
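
A small sketch of that 1-to-1 case:

  {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}

  -- 'a' determines 'b' AND 'b' determines 'a'.
  class Iso a b | a -> b, b -> a where
    to   :: a -> b
    from :: b -> a

  instance Iso Bool Int where
    to   b = if b then 1 else 0
    from n = n /= 0

  -- Either side now determines the other: 'from (1 :: Int)' is
  -- inferred to be a Bool with no annotation on the result.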

However, there is an alternative which I think is far easier to grok
and can express the same 'set of equations' - associated types. I
think they are nicer mainly because they make the connections between
types in a /functional/ way, not a relational one (despite the name of
functional dependencies.)

For instance, the Foo class above could be rewritten as:

 class (Show a, Show (Boom a), Eq (Boom a)) => Foo3 a where
   type Boom a :: *
   barfoo :: a -> Boom a
   bazfoo :: Boom a -> a

This states that along with method declarations, an instance
must also supply a type declaration - 'Boom' is an associated type
synonym that takes one type 'a', and the overall kind of 'Boom a' is *
(that is, it is a 'fully saturated' type.) The functional part is seen
in the instance.

 instance Foo3 Bool where
   type Boom Bool = Int
   barfoo x = 10
   bazfoo y = u

So we can see 'Boom' as a function at the type level, indexed by
types.

*Main> let z = barfoo True
*Main> :t z
z :: Boom Bool
*Main> print (z::Int)
10
*Main>

The inferred type of z is 'Boom Bool', but the typechecker can unify
'Boom Bool' with Int - which is why the print works - even though we
have to give the Int annotation explicitly.

For an even better example, I think the type-vector trick is pretty
cute too (uses GADTs): 

http://thoughtpolice.stringsandints.com/code/haskell/misc/type-vectors.hs

This message is getting somewhat long, but I hope that's an adequate
explanation for those who might be hazy on this or interested. For
more on associated types you should probably look here:

http://haskell.org/haskellwiki/GHC/Indexed_types#An_associated_type_synonym_example

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] any idea why cabal install cabal-install cant update itself in windows?

2008-10-22 Thread Austin Seipp
Windows will not let you modify/delete binaries while they're running
as a process, and it won't let you delete .DLL files that are in use by
applications either (mapped into shared memory, that is.) So cabal
install cannot overwrite the cabal.exe binary after it builds it,
because it's already running.

I'm not sure anybody's found an adequate solution yet (Duncan?) - .bat
files maybe?

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] There is something wrong in derive-0.1.2 package.

2008-10-14 Thread Austin Seipp
Excerpts from Magicloud Magiclouds's message of Mon Oct 13 23:58:58 -0500 2008:
 Hi,
 I wanted to install it with cabal. Well
 $ cabal install derive
 Resolving dependencies...
 cabal: Couldn't read cabal file ./derive/0.1.2/derive.cabal
 As I traced a little, it seemed that line: 'build-depends: base == 
 4.*, syb' was wrong.

Hi,

I apologize; I was helping to get derive to build on GHC 6.10, and was
unaware that 'pkg == x.*' was new syntax - so it's my offending patch.

I have removed that, replaced it with 1.4-supported syntax, and
uploaded the result to hackage; if you could 'cabal update', try
again, and report back, that would be nice. (I'm using GHC 6.10 RC1,
so I don't have 6.8 anymore - the OS X installer pkg got rid of it.)
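
For anyone hitting the same thing in their own packages, the
Cabal-1.4-friendly spelling is the explicit range - presumably
something like:

  -- 'base == 4.*' needs Cabal >= 1.6; this works with 1.4 as well:
  build-depends: base >= 4 && < 5, syb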

Austin.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Packages

2008-09-23 Thread Austin Seipp
Excerpts from Cetin Sert's message of Tue Sep 23 05:55:21 -0500 2008:
 Let's say I go and compile a library from sources and install it through
 Cabal.
 How can I update the binary version of the library Cabal installed after
 recompiling the library using newer/modified sources?

I'm not quite sure I understand the question - if you, for example,
install foo 0.1 using cabal and then want foo 0.2, you just
install it with the same procedures. In this case, the higher version
number means the older one is 'shadowed' by it, and if you explicitly
need to use foo 0.1, you can either specify this as a constraint of
the form:

 build-depends: foo == 0.1

in the package's .cabal file, or you will have to explicitly pass
ghc/ghci the flag:

 -hide-package foo-0.2

Otherwise, GHC will default to building anything that depends on foo
against the newest version.

OTOH, if you for example install foo 0.1 from stable sources, then
check out the HEAD version, which is also listed as v0.1 in the .cabal
file, reinstalling it will simply overwrite the old version entirely.

Does that answer your query?

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Packages

2008-09-23 Thread Austin Seipp
Excerpts from Dougal Stanton's message of Tue Sep 23 06:09:58 -0500 2008:
 That should happen automatically with cabal-install if the version
 number in the .cabal file has changed.
 
 There doesn't seem to be a good way of forcing cabal-install to
 recreate a build (eg, if you want to rebuild/reinstall with
 profiling). I think you need to unregister the package manually first.
 :-(
 

With cabal-install 0.5.2, you simply have to pass the '--reinstall'
flag to the 'install' command and it will reconfigure, rebuild and
reinstall the packages with whatever flags you specify.

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Packages

2008-09-21 Thread Austin Seipp
Excerpts from Andrew Coppin's message of Sun Sep 21 02:44:10 -0500 2008:
 1. How is putting something into a Cabal package different from just 
 handing somebody the source code and telling them to run ghc --make?

Cabal can handle things for you like depending on external data
files: when cabal compiles the code, it autogenerates a module which
you import, and which gives you the path to wherever the data is
installed. This is nifty if, for example, you are shipping an
interpreter for a language and want to store the language's prelude
somewhere.
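
As a sketch: for a package named 'myinterp' (a made-up name) with
'data-files: prelude.lang' in its .cabal file, Cabal generates a
Paths_myinterp module you can import:

  module Main where

  -- generated by Cabal at build time; knows the install location
  import Paths_myinterp (getDataFileName)

  main :: IO ()
  main = do
    preludePath <- getDataFileName "prelude.lang"
    putStrLn ("the prelude lives at: " ++ preludePath)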

It also allows tighter constraints on dependent package versions,
i.e. you can specify the exact package and version range you need to
build (for example, parsec > 2.1 && < 3.0, which a few packages do.)
With --make you would have to include a lot of -hide-package flags and
a lot of other things, too.

It is also naturally the easiest way to actually install libraries
since it will register that library with ghc-pkg for you.

 2. If we already have a Cabal package, why do we also need seperate 
 packages for Arch, Gentoo, Debian...? Isn't Cabal cross-platform?

Having these packages available as native packages lets you distribute
applications more easily. If someone just wants to install 'appname'
from, say, the Arch Linux user repository (which is source-based, mind
you - the official repo is binary-based), it may also depend on parsec,
haskell98, HTTP, etc. Having these packages available natively to the
manager means it can track the dependencies and install the
application by itself - this is useful, for example, if someone just
wants to use a Haskell application but doesn't write Haskell (example:
darcs or pandoc.)

Of course, if you are doing Haskell development, the best possible way
to go (IMO) is full-blown cabal install, since you will always get the
most up-to-date code - plus, mixing and matching native package
managers with e.g. cabal install doesn't really work well IME. For
example, if you want to install alex from MacPorts, it will
automatically install ghc too, even if you already have it installed
but didn't install it through MacPorts (I just used the .dmg file
that Manuel Chak. made.) Likewise, you may want to get something that
depends on the HTTP library - it could very well already be installed
via cabal-install, but the native manager won't dig into GHC
to find out; it will instead just work from what it knows is
installed in its own package database.

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Drawing an existing image into a cairo surface?

2008-09-21 Thread Austin Seipp
Excerpts from Rafal Kolanski's message of Sun Sep 21 07:28:37 -0500 2008:
 The best I can find is withImageSurfaceFromPNG, but I can't
 make it work.

Why not? Seems to me all you need to do is:

 withImageSurfaceFromPNG "blah.png" $ \surface -> do
...

Lots of code is written this way (create a resource, pass it to a
function, and then release it after the function returns.)
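
For instance, here is a sketch that loads a PNG, stamps it onto a
fresh surface, and writes the result back out (file names made up;
both with* functions free their surfaces once the inner action
returns):

  import Graphics.Rendering.Cairo

  main :: IO ()
  main =
    withImageSurfaceFromPNG "input.png" $ \png ->
      withImageSurface FormatARGB32 400 300 $ \canvas -> do
        renderWith canvas $ do
          setSourceSurface png 50 50   -- place the PNG at (50, 50)
          paint                        -- composite it onto the canvas
        surfaceWriteToPNG canvas "output.png"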

 I tried playing around with unsafePerformIO but that just gets me into:

Without further context as to what you are doing, I really see no
reason why you would have to use unsafePerformIO at all.

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Mac OS X dylib woes

2008-09-17 Thread Austin Seipp
I'm not getting this issue, but to fix it: whatever shell you use
with your terminal program (Terminal.app, iTerm, etc.), just stick
this into its rc file:

 export DYLD_LIBRARY_PATH=/opt/local/lib:$DYLD_LIBRARY_PATH

For example, in this case it would exist in my ~/.zshrc - it should
solve the problem for the most part.

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Consequences of implementing a library in Haskell for C consumption?

2008-09-04 Thread Austin Seipp
Excerpts from Justin Bailey's message of Thu Sep 04 17:00:58 -0500 2008:

 Looking at the package, I think would be pretty painful though. It
 seems I'd have to build the AST by hand,

The AST Language.C defines for C is actually fairly regular once you
wrap your head around it - I got it to generate working programs that
I could compile in approximately an hour, essentially after looking
through nothing but the documentation.

The AST is very 'raw' though: I found that defining some simple
functions for things like just creating a global variable, creating
declarations and the like cut down on overall AST size tremendously
(although hints of the AST types/constructors were still around, naturally.)

Here's the example I put on HPaste a few days ago:

http://hpaste.org/10059#a1

You'll notice that the actual shim you're looking at - with the help
of the defined functions - is actually fairly small, and those helper
functions cut the noise down *a lot*. That was the first thing I
wrote with it, though, so the functions could probably be further
generalized and abstracted.

On that note (although a little OT), does anybody think it would be
nice to have a higher-level library designed specifically around
emitting C, built on top of Language.C? A lot of repetitive stuff
could be cut down considerably, I think.

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] a recent Haskell contribution?

2008-08-20 Thread Austin Seipp
Hi,

Perhaps you are talking about Communicating Haskell Processes (CHP)?

http://hackage.haskell.org/cgi-bin/hackage-scripts/package/chp

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Houston-area FPers?

2008-08-17 Thread Austin Seipp

Hi,

In less than a week I'll be moving to Houston, TX to start
school at the University of Houston. I'm wondering if there
are any functional programmers (particularly haskellers) in that part
of the state? If so, a group meeting and perhaps eventually a
user group would be lovely - if only to chat about what everybody is
working on, at least until something more formal can be set up.

I'm posting this a little early to try and get the most responses
possible. Unfortunately, at the moment I will not have my own mode of
transportation around town (see: car), and I'm not a Houston native as
you can see, so it will take time for me to get up to speed with
college *and* getting around town - but if anybody is in that area I'd
like to hear back from you.

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell garbage collector notes?

2008-08-06 Thread Austin Seipp
Excerpts from Galchin, Vasili's message of Wed Aug 06 04:09:58 -0500 2008:
  Is http://hackage.haskell.org/trac/ghc/wiki/GarbageCollectorNotes a
 reliable source of info on the ghc garbage collector?

The page seems to be a little light for the most part, and it does not
seem to take into account the recent changes to give GHC a parallel
garbage collector (which is now integrated into the HEAD as far as I
can tell.)

For notes on the new parallel GC, you will want to see this paper:

http://research.microsoft.com/%7Esimonpj/papers/parallel-gc/index.htm

You may also want to look through the GHC Commentary regarding the
RTS, as I am sure it also contains information you will want to be
aware of (there are likely many cross-cutting concerns):

http://hackage.haskell.org/trac/ghc/wiki/Commentary/Rts

In particular, the storage manager overview should be of interest:

http://hackage.haskell.org/trac/ghc/wiki/Commentary/Rts/Storage

Hope it helps.

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: poll: how can we help you contribute to darcs?

2008-08-03 Thread Austin Seipp
Excerpts from Andrew Coppin's message of Sun Aug 03 04:35:32 -0500 2008:
 Correct me if I'm wrong, but... I was under the impression that Darcs is 
 a revision control system. It controls revisions.
 
 Well Darcs already does that. So... what's to develop? It's not like 
 it's slow or buggy. I can't actually think of any features it doesn't 
 have that I want. So... what now?
 
 In case that sounds really negative and critical, let me phrase it 
 another way: Given that Darcs is more or less perfect now, what's to add?

At this point I feel it is an issue of what can be fixed rather than
what can be added.

The current issues I see with darcs right now are:

 * Predictability. There is no such thing with darcs. If I try to get
 copies of the base libraries up to date in the latest GHC-head
 (via a ./darcs-all pull -a,) it might pull all the libraries. It
 might not. It might randomly stop at identifying the repository
 format of 'base/array.' It might get all the way to 'base/unix' and
 begin pulling patches before it stops. In every case, darcs never
 actually quits; I'm never sure what state it is in during the pull,
 and frankly I just don't know what's going on sometimes.

 * Usability is dodgy under high load. To date, I think GHC is the
 best candidate for qualifying as 'high load,' though I'm not sure
 the comparison is entirely fair while GHC is still using an
 old-fashioned darcs format (i.e. a darcs1-style repo.) Regardless, I
 am still skeptical at this point of darcs's ability to handle high
 load.

 * Random quirks, e.g. some file edits may unknowingly introduce an
 explicit dependency between patches, meaning that while two patches A
 and B are unrelated in every sense of the word, darcs will still mark
 'A' as dependent on 'B'. So you've basically lost the ability to
 unrecord flexibly, since unrecord will skip the patches that are
 marked as dependent on something else, when they really are not.

Quite honestly I think darcs 2 is a fine release and a very good
improvement; it fixed a lot of the older nasty quirks (the
exponential-time merge bug in particular), and any project considering
darcs should use the new format immediately. For small-to-medium-sized
projects I think it is great, and it will continue to fit those
projects very well - but when push comes to shove, darcs won't cut it,
I feel.

To answer the question and be honest about it: I would work on darcs
if I was getting paid.

Darcs lost, I think, because of its internals: it simply was not
feasible for people to get involved and make core improvements. You
basically were either David or you weren't. Other VCSs generally
speaking have a much simpler foundation: git has only 4 base
objects that you encounter every day in your workflow. These are the
primitives of all operations in git, and everything else is built on
top of them. Because of this, anybody with some experience in using
git can probably get reasonably comfortable with the source code in a
non-crazy amount of time. There is a lower barrier to entry for the
source code, the ideas, and the development process. The simpler
design simply facilitates contributions where it matters, IMO.

While I don't want to sound unsupportive of the excellent work
everybody involved in darcs has contributed, that is currently the way
I see darcs, and unless I were getting paid I would not be compelled
to work on the source code, as I have already moved to another system
(git.)

I think darcs set a landmark in usability for a DVCS, and I also
think the way in which patches are managed with darcs is very
nice. However, I've found git's approach sufficiently simple (to the
point where my limited brain can comprehend it), and cherry-picking is
a common feature among version control systems these days (git add
-i.) So I've found a home elsewhere.

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Cabal + yi + alex problem.

2008-07-30 Thread Austin Seipp
Excerpts from Yann Golanski's message of Wed Jul 30 02:34:05 -0500 2008:
 I cannot seem to be able to install yi via cabal install.  The error I
 get is as follows.  I suspect alex is not installed in the correct
 place.  
 ...

Hi,

cabal-install will put installed binaries in $HOME/.cabal/bin by
default, as far as I can tell; you should move the alex binary
somewhere on your $PATH if you want cabal-install to pick it up when
installing yi.

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Cabal + yi + alex problem.

2008-07-30 Thread Austin Seipp
Excerpts from John Dorsey's message of Wed Jul 30 13:58:26 -0500 2008:
 Is something amiss with cabal-install?  Shouldn't it have automatically
 installed alex?  Or does it only do that with libraries, by design?

AFAICT, dependencies are only downloaded and installed if they are
listed in a .cabal file's dependency field ('build-depends'), provided
they aren't already installed - which yes, means libraries by design.
I've never had cabal-install track down necessary applications in my
experience.

As well, this issue came up on #haskell the other day and my
conclusion is that no, cabal does not track down and install any
*applications* that are necessary (someone tried to install yi, needed
alex, build failed, and after 'cabal-install alex' everything was
peachy.)

 For that matter, ghc-pkg list | grep -i alex doesn't list anything,
 after I cabal-installed it.  How does cabal verify the prerequisite alex
 version?  (Or does it?)

ghc-pkg lists libraries that are registered with the ghc package
manager (globally and locally); alex is simply an application, not a
library, so there's really nothing for GHC to register. As for telling
what version is necessary, I haven't the slightest.

Duncan, can you shed some light here?

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] ANN: RandomDotOrg-0.1

2008-07-28 Thread Austin Seipp
Hi,

I've just uploaded a package to hackage which is an interface to the
random.org random number generator.

For those who don't know, random.org provides random data through the
use of atmospheric noise rather than a PRNG that would typically be
invoked if you were to use the System.Random module or somesuch.

Currently, it can only generate a list of random numbers. However,
random.org also provides interfaces to generate random strings, to
randomize an input list, and to generate random sequences (the
difference being that the sequence generator will not repeat itself,
while the integer one may.)

Would anybody find these interfaces useful? I think the random number
generator alone is pretty nice, but I'm thinking of expanding it in
the future with these other things since they're nice to have if you
want randomness anyway.

You can get it from [1] or just:

 $ cabal install RandomDotOrg

The development repository is tracked using git; you can get yourself
a clone by doing

 $ git clone git://github.com/thoughtpolice/randomdotorg.git


Austin

[1] http://hackage.haskell.org/cgi-bin/hackage-scripts/package/RandomDotOrg-0.1
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] carry state around ....

2008-07-28 Thread Austin Seipp
Excerpts from Galchin, Vasili's message of Mon Jul 28 21:14:56 -0500 2008:
 ok guys .. what is this phantom type concept? Is it a type theory thing or
 just Haskell type concept?
 
 Vasili

Phantom types are more of an idiom than anything else; they are types
with no real concrete representation, i.e. they only exist at compile
time, and not at runtime.

They can be used to hijack the type system, essentially. For example:

 data Expr = EInt  Int
   | EBool Bool
   | ECond Expr Expr Expr
   | EAdd Expr Expr
  deriving (Show)

This basically represents the constructs for a simple expression
language. However, wouldn't it be nice if we could say that 'EAdd' can
only take 'EInt's, or that ECond's first parameter must be an 'EBool',
and check that at *compile* time? As written, we can't. We would
have to check at runtime, when we try to evaluate things, that the
types indeed fit together properly for our little 'expression language.'

But not all is lost.

With phantom types, we can 'hijack' the type system so that
it'll verify this for us. We can do this by simply making a newtype
over 'Expr' and giving it a type variable that does not show up on the
right side:

 newtype Expression a = E Expr
  deriving (Show)

In this case, the 'a' is the phantom type. It does not show up in the
constructors, so it does not exist at runtime.

Now all we simply have to do is 'lift' all of the constructors of Expr
into their own functions that stick the data into an 'Expression'
using the E constructor:

 intE :: Int -> Expression Int
 intE = E . EInt

 boolE :: Bool -> Expression Bool
 boolE = E . EBool

 condE :: Expression Bool
   -> Expression a
   -> Expression a
   -> Expression a
 condE (E a) (E b) (E c) = E $ ECond a b c

 addE :: Expression Int -> Expression Int -> Expression Int
 addE (E a) (E b) = E $ EAdd a b

You'll notice: in the type signatures above, we give the phantom type
of Expression a concrete type such as 'Int' or 'Bool'.

What does this get us? It means that if we construct values via intE
and boolE, and then subsequently use them with condE or addE, we can't
use values of the wrong type in place of one another - and the type
system will make sure of it.

For example:

$ ghci
...
Prelude> :l phantom.lhs
[1 of 1] Compiling Main ( phantom.lhs, interpreted )
Ok, modules loaded: Main.
*Main> let e1 = boolE True
*Main> let e2 = intE 12
*Main> let e3 = intE 21
*Main> condE e1 e2 e3
E (ECond (EBool True) (EInt 12) (EInt 21))
*Main> condE e2 e1 e3

<interactive>:1:6:
Couldn't match expected type `Bool' against inferred type `Int'
  Expected type: Expression Bool
  Inferred type: Expression Int
In the first argument of `condE', namely `e2'
In the expression: condE e2 e1 e3
*Main>

As you can see, we 'hijack' the type system so we can specify exactly
what type of 'Expression' functions like intE/boolE will return. We
then simply have other functions which operate over them, and which
also specify *exactly* what kind of expression they need in order to
work. The phantom type never exists at runtime; it is only there so
the typechecker can verify we're doing the right thing.

That was a bit OT though.

In the case of empty data declarations, it's essentially the same
idea: a type with no actual runtime representation. You can exploit
them to enforce invariants in your types, among numerous other things;
a good example is type arithmetic, which can be found on the wiki:

The example here is based on Lennart's blog post about
representing DSLs in Haskell; he also shows a version that uses GADTs,
which achieve (roughly) the same thing. So I'd say give the credit
wholly to him. :)

http://augustss.blogspot.com/2007/06/representing-dsl-expressions-in-haskell.html
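
For comparison, here is a sketch of the same little language as a
GADT - each constructor states its result type directly, so ill-typed
terms are rejected at compile time just as with the phantom-type
version:

  {-# LANGUAGE GADTs #-}

  data Expr a where
    EInt  :: Int  -> Expr Int
    EBool :: Bool -> Expr Bool
    ECond :: Expr Bool -> Expr a -> Expr a -> Expr a
    EAdd  :: Expr Int  -> Expr Int -> Expr Int

  -- Pattern matching refines the type, so eval needs no casts.
  eval :: Expr a -> a
  eval (EInt  n)     = n
  eval (EBool b)     = b
  eval (ECond c t e) = if eval c then eval t else eval e
  eval (EAdd  x y)   = eval x + eval y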

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC Error: FATAL:Symbol _XJv_srt already defined.

2008-07-21 Thread Austin Seipp
I can replicate this error with 6.8.3 on my MacBook (OS X 10.5.4), and
it also appears to fail with a copy of the GHC HEAD:

$ uname -a
Darwin existential.local 9.4.0 Darwin Kernel Version 9.4.0: Mon Jun  9
19:30:53 PDT 2008; root:xnu-1228.5.20~1/RELEASE_I386 i386
$ ghc --version
The Glorious Glasgow Haskell Compilation System, version 6.8.3
$ ~/ghc-head/bin/ghc --version
The Glorious Glasgow Haskell Compilation System, version 6.9.20080615
$ ghc --make DerivingError.hs
[1 of 1] Compiling DerivingError( DerivingError.hs, DerivingError.o )

/var/folders/27/27CWjmd8HK8wG5FT3nu3pk+++TI/-Tmp-//ghc39082_0/ghc39082_0.s:6080:0:
FATAL:Symbol _XxH_srt already defined.
$ rm DerivingError.hi
$ ~/ghc-head/bin/ghc --make DerivingError.hs
[1 of 1] Compiling DerivingError( DerivingError.hs, DerivingError.o )

/var/folders/27/27CWjmd8HK8wG5FT3nu3pk+++TI/-Tmp-/ghc39116_0/ghc39116_0.s:6207:0:
FATAL:Symbol _XyQ_srt already defined.
$

However, things get even wackier because I *can* get it to build on
the HEAD, only through a very odd compilation step (I figured this out
while trying to examine the ASM output from the -S flag):

$ ~/ghc-head/bin/ghc --version   
The Glorious Glasgow Haskell Compilation System, version 6.9.20080615
$  ~/ghc-head/bin/ghc --make DerivingError.hs 
[1 of 1] Compiling DerivingError( DerivingError.hs, DerivingError.o )

/var/folders/27/27CWjmd8HK8wG5FT3nu3pk+++TI/-Tmp-/ghc44257_0/ghc44257_0.s:6207:0:
FATAL:Symbol _XyP_srt already defined.
$ rm DerivingError.hi  
$ ~/ghc-head/bin/ghc -S DerivingError.hs 
$ ~/ghc-head/bin/ghc --make DerivingError.hs
[1 of 1] Compiling DerivingError( DerivingError.hs, DerivingError.o )
$ 

In 6.8.3 it doesn't work regardless; but if we build the object file
through this odd step, we can then link it with a main stub:

$ cat > main.hs
main = putStrLn "hello world"
$ ~/ghc-head/bin/ghc --make main.hs DerivingError.o
[1 of 1] Compiling Main ( main.hs, main.o )
Linking main ...
$ 

In particular the issue appears to be related to generics (duh,) since
if we search through the .s file generated when using -S, we can see
what it's referring to (6.8.3):

.align 2
_Xxn_srt:  ;; line 5757
.long   _base_DataziGenericsziBasics_mkDataType_closure
.long   _s1Zg_closure
.long   _s1Zi_closure
.const_data
.align 2
_Xw9_srt:
.long   _Xxn_closure
...

_Xxn_srt: ;; line 6080
.long   _base_DataziGenericsziBasics_mkDataType_closure
.long   _s1Zs_closure
.long   _s1Zu_closure
.data
.align 2
...

So it seems like a bug in the native code generator for generics.

The ghc-6.9 version was actually a snapshot of the HEAD from a few
weeks back, as you can see. I'm building the latest HEAD from darcs as
we speak, but in the meantime I would file a bug report with the code
attached:
http://hackage.haskell.org/trac/ghc/newticket?type=bug

If you do, I'll be sure to post what I've done here so the devs might
be able to track it down more easily. If the latest HEAD turns out to
work I'll get back to you as well.

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell code contributions

2008-07-21 Thread Austin Seipp
From the looks of the User Accounts page on hackage, Ross Patterson
seems to be responsible; you can contact him here:

[EMAIL PROTECTED]

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC Error: FATAL:Symbol _XJv_srt already defined.

2008-07-21 Thread Austin Seipp
Status update: after checking out the latest HEAD and building it, the
above error does not occur:

$ ~/ghc-head/bin/ghc --version
The Glorious Glasgow Haskell Compilation System, version 6.9.20080720
$ ~/ghc-head/bin/ghc --make DerivingError.hs 

no location info:
Warning: -fallow-overlapping-instances is deprecated: Use the
OverlappingInstances language instead
[1 of 1] Compiling DerivingError( DerivingError.hs, DerivingError.o )
$ 

However, this doesn't exactly help your immediate problem, and it
likely won't until 6.10.1 is released, as 6.8.3 is going to be the
last release of the 6.8 branch.

Truly sorry I couldn't help you more.

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Building NDP with latest GHC

2008-07-21 Thread Austin Seipp
Hi,

After my last issue with GHC's HEAD, I tried checking it out again and
getting the patches for the libraries and lo and behold, it worked. So
now I'm up to date with the latest libraries and the compiler, but it
appears that building NDP itself is proving to be troublesome.

(This is on GHC 6.9.20080720, btw)

I've pulled the latest ndp package from
http://darcs.haskell.org/packages/ndp into ./libraries.

Instructions that are listed in [1]:

 % cd libraries
 % darcs get http://darcs.haskell.org/packages/ndp/
 % make make.library.ndp
 % cd ndp/examples
 % make

This fails on step 3; the Makefile in the libraries subdirectory
doesn't recognize 'make.library.ndp' as a valid target.

Instructions in the actual README that comes with package ndp:

 cd ndp
 make boot
 make
 cd examples
 make

This fails on the 'make boot' step, because if you look at the
Makefile, it thinks the top of the GHC source tree is at '..', when
it's actually '../..' in relation to ./libraries/ndp:

 TOP=..
 include $(TOP)/mk/boilerplate.mk
 include ndp.mk
 ... rest of makefile ...

No big deal in this case - just a matter of changing TOP to '../..'.
Having done that, however, it now fails because it cannot correctly
parse bootstrapping.conf, which is located in ./libraries:

 $ make boot
 ../../mk/target.mk:454: warning: overriding commands for target `libHSndp'
 ../../mk/package.mk:249: warning: ignoring old commands for target `libHSndp'
 ../../mk/target.mk:454: warning: overriding commands for target `ghc-prim.a'
 ../../mk/package.mk:249: warning: ignoring old commands for target 
 `ghc-prim.a'
 /Users/austinseipp/src/ghc-head/ghc/stage1-inplace/bin/ghc -M
 -optdep-f -optdep.depend  -osuf o -package-conf
 /Users/austinseipp/src/ghc-head/libraries/bootstrapping.conf -H32m
 -O -fasm -Rghc-timing -package-name ndp ghc-prim-0.1 -O -fgenerics
 -package base -XGenerics -fglasgow-exts -fbang-patterns -O2
 -funbox-strict-fields -fdicts-cheap -fno-method-sharing
 -fno-spec-constr-threshold -fmax-simplifier-iterations20 -threaded
 -XTypeFamilies -fcpr-off
 
 ... lots of files trying to get compiled here ...
 
 no location info:
 Warning: -fgenerics is deprecated: Use the Generics language
 instead
 
 no location info:
 Warning: -fbang-patterns is deprecated: Use the BangPatterns
 language instead
 ghc:
 /Users/austinseipp/src/ghc-head/libraries/bootstrapping.conf:1:62:
 parse error on input `"'
 ghc: 41436660 bytes, 4 GCs, 118784/118784 avg/max bytes residency (1
 samples), 31M in use, 0.00 INIT (0.00 elapsed), 0.08 MUT (0.12
 elapsed), 0.01 GC (0.02 elapsed) :ghc
 make: *** [depend] Error 1

For the record, the thing it fails on is the *first* occurrence of a
quotation mark, and that's here in bootstrapping.conf:

 [InstalledPackageInfo {package = PackageIdentifier {pkgName =
 "filepath", pkgVersion = ...

So it's failing when it sees the quote at the start of "filepath".

I have also attempted to follow the instructions at the top of
./libraries/Makefile, which are:

 # To do a fresh build:
 #
 #   make clean
 #   make boot
 #   make
 #
 # To rebuild a particular library package:
 #
 #   make clean.library.package
 #   make make.library.package
 #
 # or the following is equivalent:
 #
 #   make remake.library.package
 #
 # To add a new library to the tree, do
 #
 #   darcs get http://darcs.haskell.org/packages/foo
 #   [ -e foo/configure.ac ]  ( cd foo  autoreconf )
 #   make make.library.foo

As said above, 'make.library.ndp' does not work; doing a full clean,
boot, and make again doesn't pick up NDP either; and in the last case
(adding a new library) it doesn't matter, because there is no
configure.ac at the top level of ./libraries/ndp.

I think I might simply have a *lot* of outdated information. For the
record, this is what I want to try and compile:

 {-# OPTIONS_GHC -fglasgow-exts -fparr -fvectorise #-}
 module Main where
 import GHC.PArr
 
 dotp :: Num a => [:a:] -> [:a:] -> a
 dotp xs ys = sumP [:x * y | x - xs, y - ys :]
 
 main = print $ dotp [:1..5:] [:6..10:]

Naturally when trying, it fails because DPH isn't there:

 $ ~/ghc-head/bin/ghc --make dotp.hs
 
 no location info:
 Warning: -fparr is deprecated: Use the PArr language instead
 [1 of 1] Compiling Main ( dotp.hs, dotp.o )
 GHC error in desugarer lookup in main:Main:
   Failed to load interface for `Data.Array.Parallel.Lifted.PArray':
 no package matching dph-par was found
 ghc: panic! (the 'impossible' happened)
   (GHC version 6.9.20080720 for i386-apple-darwin):
   initDs user error (IOEnv failure)
 
 Please report this as a GHC bug:
 http://www.haskell.org/ghc/reportabug

(This leads me to believe I actually want
http://darcs.haskell.org/packages/dph, but it's not mentioned. Anywhere.)

If I'm just doing everything wrong, I'd really appreciate knowing, and
I'd be *more* than happy to update the wiki pages so that more people
can try it and have it build successfully - because as it stands, I'm
coming to the conclusion that basically none of the documented build
instructions work as written.

Re: [Haskell-cafe] Trouble with non-exhaustive patterns

2008-07-21 Thread Austin Seipp
Hi Fernando,

 final []  = [] -- I consider that the end of an empty list is the empty list
 final [a] = a
 final (_:t) = final t
 
 Suddenly, the function stoped working with a rather cryptic (for a newbie 
 at least) error message:
 
 *Temp> final [4,5]
 
 <interactive>:1:9:
 No instance for (Num [a])
   arising from the literal `5' at <interactive>:1:9
 Possible fix: add an instance declaration for (Num [a])
 In the expression: 5
 In the first argument of `final', namely `[4, 5]'
 In the expression: final [4, 5]


The problem is that returning [] in the first equation forces the
result of final to be a list: from 'final [a] = a' the type is
[b] -> b, and 'final [] = []' then forces b to itself be a list, so
final ends up with the type [[a]] -> [a]. Calling it on [4,5]
therefore asks for numbers that are lists, hence the
'No instance for (Num [a])' complaint.

If you wish to handle the empty list, you can wrap the result in a Maybe:

final :: [a] -> Maybe a
final []    = Nothing
final [a]   = Just a
final (_:t) = final t

$ cat > final.hs
final :: [a] -> Maybe a
final []    = Nothing
final [a]   = Just a
final (_:t) = final t
$ ghci final.hs
GHCi, version 6.8.3: http://www.haskell.org/ghc/  :? for help
Loading package base ... linking ... done.
[1 of 1] Compiling Main ( final.hs, interpreted )
Ok, modules loaded: Main.
*Main> final [1,2]
Just 2
*Main> final []
Nothing
*Main>
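
If a caller wants a plain value back, the Maybe is easy to discharge
with a default - a tiny sketch on top of the 'final' above ('finalOr'
is a made-up name):

  import Data.Maybe (fromMaybe)

  -- the last element of xs, or def when xs is empty
  finalOr :: a -> [a] -> a
  finalOr def = fromMaybe def . final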

Austin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Getting latest GHC-head/libraries takes forever

2008-07-15 Thread Austin Seipp
For the purpose of experimenting with NDP I went through the
process of getting the GHC head from darcs.haskell.org. As
specified in the developer wiki[1], using darcs get is basically
not possible because there are so many patches. So I downloaded

http://darcs.haskell.org/ghc-HEAD-2008-06-06-ghc-corelibs-testsuite.tar.bz2

This alone took over *7 hours* using wget because it could never get
a connection faster than 5kb/s.

After downloading I changed to the directory and ran 'darcs pull -a'.
Nothing bad happened here, darcs pulled most of the changes pretty quick
so I had an up to date GHC.

However, when I tried to use the ./darcs-all script to pull in the
latest library/testsuite changes, things went horribly, horribly wrong.

I let darcs sit for over another hour and a half trying to pull patches
from darcs.haskell.org for the libraries and the testsuite.

It never got past packages/base. In fact, it never even got to
*getting* the patches from the base repository; it sat there, stuck at
the 'identifying repository' stage (darcs 2.0.2) for over an hour,
making my processor go round and round at 99% CPU usage the entire
way, never accomplishing anything. What makes it even stranger is that
it got the latest patches for the testsuite and the array package
pretty quickly. It just got permanently stuck on base for some reason
and never went further.

Also, I tried pulling package-ndp a few days ago while using an older
development snapshot of GHC (not the darcs repository, a nightly HEAD
tarball of ghc.) Just pulling 500 patches from
http://darcs.haskell.org/packages/ndp took over an hour, but at least
it got done. It didn't build properly, so I thought I'd get the latest
darcs version of GHC instead, and here I am.

Has anybody experienced anything like this recently? Off the top of
my head, the only things that come to mind are:

1) darcs.haskell.org is just really slow or under -constant- heavy load.
2) the darcs repositories are in darcs-1 format, so darcs-2 is having
   issues when trying to pull the patches.
3) Diabolical connection-killing ninjas.

None of these 3 possibilities are really 'great,' and none of them
help me get the latest version of GHC, the libs, and NDP any faster
than a narcoleptic snail.

If instead this is a technical issue with, say, darcs that someone is
aware of: I'm using OS X Leopard 10.5.4 with a statically linked
darcs-2.0.2 binary from [2].

[1] http://hackage.haskell.org/trac/ghc/wiki/Building/GettingTheSources
[2] 
http://wiki.darcs.net/DarcsWiki/CategoryBinaries#head-25c8108e9d719be30a8cc6bcc86ce243a78b8c25
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Trying to build a stand-alone executable GLUT app with ghc, Windows XP

2008-04-25 Thread Austin Seipp
Perhaps try:

$ ghc --make -static -optl-static -l<path to libHSGLUT.a here> x.hs

The -optl-static passes the '-static' argument to ld so it will link
statically; you may also need a copy of a compatible GLUT library in
.a format on your Windows machine, which should be linked in with -l
as well (where you can get this, I do not know.)

-- 
It was in the days of the rains that their prayers went up, 
not from the fingering of knotted prayer cords or the spinning 
of prayer wheels, but from the great pray-machine in the
monastery of Ratri, goddess of the Night.
 Roger Zelazny
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

