Re: [Haskell-cafe] bytestring vs. uvector

2009-03-07 Thread Don Stewart
bos:
> On Sat, Mar 7, 2009 at 10:23 PM, Alexander Dunlap wrote:
> 
> Hi all,
> 
> For a while now, we have had Data.ByteString[.Lazy][.Char8] for our
> fast strings. Now we also have Data.Text, which does the same for
> Unicode. These seem to be the standard for dealing with lists of bytes
> and characters.
> 
> Now we also have the storablevector, uvector, and vector packages.
> These seem to be also useful for unpacked data, *including* Char and
> Word8 values.
> 
> What is the difference between bytestring and these new "fast array"
> libraries? Are the latter just generalizations of the former?
> 
> 
> There are quite a few overlaps and differences among them.
> 
> bytestring is mature and useful for low-level byte buffer manipulations, and
> also for efficient I/O. This is in part because it uses pinned pointers that
> can interoperate easily with foreign code. It used to have an early fusion
> rewriting framework, but that was abandoned. So it will not fuse multiple
> ByteString traversals into single loops. This library is widely used, and also
> somewhat abused for text I/O.
> 
> storablevector is not mature (I'm not even sure if it's actually used) and is a
> derivative of an old version of the bytestring library, and so has similar
> characteristics for interacting with foreign code. It contains some old fusion
> code that is sketchy in nature and somewhat likely to be broken. I'm not sure I
> would recommend using this library.
> 
> uvector is, if my memory serves me correctly, a fork of the vector library. It
> uses modern stream fusion, but is under active development and is a little
> scary. I'm a little unclear on the exact difference between uvector and vector.
> Both use arrays that are not pinned, so they can't be readily used with foreign
> code. If you want to use either library, understand that you're embarking on a
> bracing adventure.
> 
> text is not mature, and is based on the same modern fusion framework as uvector
> and vector. It uses unpinned arrays, but provides functions for dealing with
> foreign code. It uses a denser encoding than uvector for text, and provides
> text-oriented functions like splitting on word and line boundaries. Although
> it's intended for use with Unicode text, it does not yet provide proper
> Unicode-aware functions for things like case conversion. It interacts with
> bytestring to perform conversion to and from standard representations like
> UTF-8, and (via the text-icu package) ICU for others (SJIS, KOI-8, etc). If you
> want to use this library, understand that you're embarking on a bracing
> adventure.

I endorse this message.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] bytestring vs. uvector

2009-03-07 Thread Austin Seipp
Excerpts from Bryan O'Sullivan's message of Sun Mar 08 00:45:03 -0600 2009:
> uvector is, if my memory serves me correctly, a fork of the vector library.
> It uses modern stream fusion, but is under active development and is a
> little scary. I'm a little unclear on the exact difference between uvector
> and vector. Both use arrays that are not pinned, so they can't be readily
> used with foreign code. If you want to use either library, understand that
> you're embarking on a bracing adventure.

vector and uvector are roughly based on the same technology; uvector
is - as far as I remember - a fork of some of the old DPH code using
stream fusion, which Don cleaned up and worked on (it's proven
pretty useful, and people are still hacking on it.)

vector, however, has the notion of 'recycling arrays' when it does
array operations. The technique is in fact quite similar to stream
fusion. Roman L. built this from scratch, I think, so it's even less
widely used and less stable than uvector, but you could say it's
roughly a superset of uvector. Hopefully it will mature a little;
the plan is to fold the technology from both of these into the Data
Parallel Haskell project, so we get fast array operations plus
automatic parallelisation.
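
For reference, the stream-fusion representation these libraries build on
can be sketched in miniature. This is a toy version of the Stream/Step
types from the stream-fusion papers, with illustrative names - not the
actual uvector/vector internals:

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- A Stream is a non-recursive step function plus a seed. Combinators
-- like mapS build a new step function rather than a new data structure,
-- so a whole pipeline collapses into a single loop.
data Step s a = Done | Skip s | Yield a s

data Stream a = forall s. Stream (s -> Step s a) s

streamList :: [a] -> Stream a
streamList xs0 = Stream next xs0
  where
    next []     = Done
    next (x:xs) = Yield x xs

unstream :: Stream a -> [a]
unstream (Stream next s0) = go s0
  where
    go s = case next s of
      Done       -> []
      Skip s'    -> go s'
      Yield x s' -> x : go s'

mapS :: (a -> b) -> Stream a -> Stream b
mapS f (Stream next s0) = Stream next' s0
  where
    next' s = case next s of
      Done       -> Done
      Skip s'    -> Skip s'
      Yield x s' -> Yield (f x) s'

sumS :: Num a => Stream a -> a
sumS (Stream next s0) = go 0 s0
  where
    go acc s = case next s of
      Done       -> acc
      Skip s'    -> go acc s'
      Yield x s' -> go (acc + x) s'

main :: IO ()
main = print (sumS (mapS (*2) (streamList [1..10 :: Int])))
```

Because mapS only rewires the step function, sumS . mapS f . streamList
consumes its input in one pass with no intermediate structure once the
stream/unstream pairs are rewritten away.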

For info, see Roman's paper, 'Recycle your arrays!'

http://www.cse.unsw.edu.au/~rl/publications/recycling.html

Austin


Re: [Haskell-cafe] bytestring vs. uvector

2009-03-07 Thread Austin Seipp
Excerpts from Alexander Dunlap's message of Sun Mar 08 00:23:01 -0600 2009:
> For a while now, we have had Data.ByteString[.Lazy][.Char8] for our
> fast strings. Now we also have Data.Text, which does the same for
> Unicode. These seem to be the standard for dealing with lists of bytes
> and characters.
> 
> Now we also have the storablevector, uvector, and vector packages.
> These seem to be also useful for unpacked data, *including* Char and
> Word8 values.
> 
> What is the difference between bytestring and these new "fast array"
> libraries? Are the latter just generalizations of the former?
> 
> Thanks for any insight anyone can give on this.
> 
> Alex


Data.Text provides functions for Unicode over bytestrings, with several
encoding/decoding methods. So, I think that bytestring+text now solves
the general problem with the slow String type - we get various
international encodings, and fast, efficient packed strings.

(It's also worth mentioning utf8-string, which gives you UTF-8 over
bytestrings. text gives you more encodings and is probably still quite
efficient, however.)

But this is pretty much a separate effort from that of packages like
uvector and vector. To clarify, uvector and vector are likely to be
merged in the future, I think - vector is based on the idea of
'recycling arrays' so that array operations are still very efficient,
while uvector only has the tested stream fusion technique behind it.

Actually, I think the eventual plan is to merge the technology
behind both vector and uvector into the Data Parallel Haskell
project. Array recycling and stream fusion go into creating
extremely efficient sequential code, while the vectorisation pass
turns that into efficient multicore code at the same time.

In any case, I suppose that hypothetically if someone wanted to use a
package like uvector to create an efficient string type, they could,
but if they want that, why not just use bytestring? It's already
optimized, battle-tested, and in extremely wide use.

I think some library proliferation is good; the libraries mentioned
here really serve different purposes, and that's great, because put
together they all lead to nice, fast code with low conceptual
overhead (hopefully...). But I'm not going to begin comparing the
different array interfaces here, because that's been done many times
already, so you had best check the archives if you want the in-depth
story.

Austin


Re: [Haskell-cafe] bytestring vs. uvector

2009-03-07 Thread Bryan O'Sullivan
On Sat, Mar 7, 2009 at 10:23 PM, Alexander Dunlap <
alexander.dun...@gmail.com> wrote:

> Hi all,
>
> For a while now, we have had Data.ByteString[.Lazy][.Char8] for our
> fast strings. Now we also have Data.Text, which does the same for
> Unicode. These seem to be the standard for dealing with lists of bytes
> and characters.
>
> Now we also have the storablevector, uvector, and vector packages.
> These seem to be also useful for unpacked data, *including* Char and
> Word8 values.
>
> What is the difference between bytestring and these new "fast array"
> libraries? Are the latter just generalizations of the former?


There are quite a few overlaps and differences among them.

bytestring is mature and useful for low-level byte buffer manipulations, and
also for efficient I/O. This is in part because it uses pinned pointers that
can interoperate easily with foreign code. It used to have an early fusion
rewriting framework, but that was abandoned. So it will not fuse multiple
ByteString traversals into single loops. This library is widely used, and
also somewhat abused for text I/O.
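
For illustration, here is a minimal sketch of the strict Char8
interface (bytestring ships with GHC; each traversal below is a
separate loop, since bytestring no longer fuses them):

```haskell
import qualified Data.ByteString.Char8 as B
import Data.Char (toUpper)

main :: IO ()
main = do
  let bs = B.pack "hello, bytestring"
  print (B.length bs)            -- O(1): the length is stored, not counted
  B.putStrLn (B.map toUpper bs)  -- a loop over the packed buffer
```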

storablevector is not mature (I'm not even sure if it's actually used) and
is a derivative of an old version of the bytestring library, and so has
similar characteristics for interacting with foreign code. It contains some
old fusion code that is sketchy in nature and somewhat likely to be broken.
I'm not sure I would recommend using this library.

uvector is, if my memory serves me correctly, a fork of the vector library.
It uses modern stream fusion, but is under active development and is a
little scary. I'm a little unclear on the exact difference between uvector
and vector. Both use arrays that are not pinned, so they can't be readily
used with foreign code. If you want to use either library, understand that
you're embarking on a bracing adventure.

text is not mature, and is based on the same modern fusion framework as
uvector and vector. It uses unpinned arrays, but provides functions for
dealing with foreign code. It uses a denser encoding than uvector for text,
and provides text-oriented functions like splitting on word and line
boundaries. Although it's intended for use with Unicode text, it does not
yet provide proper Unicode-aware functions for things like case conversion.
It interacts with bytestring to perform conversion to and from standard
representations like UTF-8, and (via the text-icu package) ICU for others
(SJIS, KOI-8, etc). If you want to use this library, understand that you're
embarking on a bracing adventure.
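
A sketch of the bytestring interop, using the Data.Text /
Data.Text.Encoding API as it ships with GHC today (names may differ in
early versions of the library):

```haskell
import qualified Data.ByteString as B
import qualified Data.Text as T
import Data.Text.Encoding (decodeUtf8, encodeUtf8)

main :: IO ()
main = do
  let t     = T.pack "naïve café"
      bytes = encodeUtf8 t       -- Text -> UTF-8 ByteString
  print (T.length t)             -- counts characters, not bytes
  print (B.length bytes)         -- the UTF-8 form is longer: ï and é
                                 -- each take two bytes
  print (decodeUtf8 bytes == t)  -- round-trip through bytestring
```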


[Haskell-cafe] bytestring vs. uvector

2009-03-07 Thread Alexander Dunlap
Hi all,

For a while now, we have had Data.ByteString[.Lazy][.Char8] for our
fast strings. Now we also have Data.Text, which does the same for
Unicode. These seem to be the standard for dealing with lists of bytes
and characters.

Now we also have the storablevector, uvector, and vector packages.
These seem to be also useful for unpacked data, *including* Char and
Word8 values.

What is the difference between bytestring and these new "fast array"
libraries? Are the latter just generalizations of the former?

Thanks for any insight anyone can give on this.

Alex


Re: [Haskell-cafe] I want to write a compiler

2009-03-07 Thread wren ng thornton

Loup Vaillant wrote:

-> support algebraic data types and case expressions (unless I can get
away with encoding them as functions),


Which you always can:

data Foo = A a1...an | B b1...bn | ...

==>

type Foo = forall r.
(a1 -> ... -> an -> r) ->
(b1 -> ... -> bn -> r) -> ...
-> r

A a1...an = \fa _ ... -> fa a1...an

B b1...bn = \_ fb ... -> fb b1...bn

...


The only trick is that you need to have closures (in order to build the 
Foos) and you need to have first-class functions (in order to pass the 
destructors in). If you're familiar with the STG machine, this is what 
it does under the covers anyway. At the core level, all you need is 
some primitive to force evaluation.
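
As a compilable version of the sketch above ('Shape' is an invented
example type, not something from the thread):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Church encoding of a two-constructor type.
newtype Shape =
  Shape (forall r. (Double -> r) -> (Double -> Double -> r) -> r)

-- Each constructor is a closure that selects its own destructor:
circle :: Double -> Shape
circle r = Shape (\fCircle _fRect -> fCircle r)

rect :: Double -> Double -> Shape
rect w h = Shape (\_fCircle fRect -> fRect w h)

-- A "case expression" is just application of the two branches:
area :: Shape -> Double
area (Shape match) = match (\r -> pi * r * r) (\w h -> w * h)

main :: IO ()
main = mapM_ (print . area) [circle 1, rect 3 4]
```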


(Church Encoding)++

--
Live well,
~wren


Re: [Haskell-cafe] Purely Functional Data Structures

2009-03-07 Thread Jeremy Shaw
At Sun, 08 Mar 2009 02:28:43 +0100,
Günther Schmidt wrote:
> 
> Hi Jeremy,
> 
> I had used HAppS-IxSet before and was very happy with it, it offered 
> pretty much everything I needed. I switched (back) to SQL once I had hit 
> a bump in the road that I wasn't able to fix, a stack-overflow that 
> occurred once I ran the code against the largest sample data I had. It 
> occurred because I could not make the inserts into the set strict.
> 
> However since then my haskell skills have improved somewhat and right 
> now I'm giving it another go.

If you still run into the issue, and it can be best solved by adding a
strict version of insert (or something similar), definitely submit a
patch. Now that the project is alive again, the patch should be
accepted and applied in a matter of hours.

-- jeremy 


[Haskell-cafe] Re: cabal-install 0.6.2 does not bootstrap with ghc-6.10.1 debian distribution

2009-03-07 Thread Ahn, Ki Yung
Ahn, Ki Yung wrote:
> Dear Haskellers and especially who are working on cabal-install
> and debian packaging,
> 
> I sometimes clean up .ghc and .cabal in my home directory to start from
> scratch because of dependency loopholes (cabal-install does not have a
> remove option yet, so it's hard to fix when such a loophole happens).
> 
> Today, I had some time in the airport and decided to start from scratch
> again because of the dependency loophole with process 1.0.1.1 and
> haddock.  I downloaded the most recent version of cabal-install the
> version 0.6.2, and found out that the ./bootstrap.sh does not work. So,
> I had to bootstrap from version 0.6.2 and do "cabal update" and "cabal
> upgrade cabal-install" to upgrade to 0.6.2.

Sorry for my typo.  What I meant was: I was able to bootstrap from
version 0.6.0 and then upgrade to 0.6.2.

> I am not sure whether this is a cabal-install problem or debian
> distribution ghc-6.10.1 packaging problem, since I have not tried to test
> this with any other ghc-6.10.1 distribution.
> 
> If anyone who is not using the debian distribution of ghc-6.10.1 (e.g.,
> the generic linux binary ghc-6.10.1 or a source-compiled one) tries
> bootstrapping cabal-install 0.6.2 from scratch and finds the same
> problem, I think someone should make a ticket for cabal-install.
> 
> Thanks,
> 
> Ahn, Ki Yung



[Haskell-cafe] cabal-install 0.6.2 does not bootstrap with ghc-6.10.1 debian distribution

2009-03-07 Thread Ahn, Ki Yung
Dear Haskellers and especially who are working on cabal-install
and debian packaging,

I sometimes clean up .ghc and .cabal in my home directory to start from
scratch because of dependency loopholes (cabal-install does not have a
remove option yet, so it's hard to fix when such a loophole happens).

Today, I had some time in the airport and decided to start from scratch
again because of the dependency loophole with process 1.0.1.1 and
haddock.  I downloaded the most recent version of cabal-install the
version 0.6.2, and found out that the ./bootstrap.sh does not work. So,
I had to bootstrap from version 0.6.2 and do "cabal update" and "cabal
upgrade cabal-install" to upgrade to 0.6.2.

I am not sure whether this is a cabal-install problem or debian
distribution ghc-6.10.1 packaging problem, since I have not tried to test
this with any other ghc-6.10.1 distribution.

If anyone who is not using the debian distribution of ghc-6.10.1 (e.g.,
the generic linux binary ghc-6.10.1 or a source-compiled one) tries
bootstrapping cabal-install 0.6.2 from scratch and finds the same
problem, I think someone should make a ticket for cabal-install.

Thanks,

Ahn, Ki Yung



Re: [Haskell-cafe] I want to write a compiler

2009-03-07 Thread Austin Seipp
Hi,

(Please note this is coming from my own experience working with the LHC Haskell
compiler, as well as a compiler I'm currently working on in SML. I'm
not an authority, but as another greenhorn compiler hacker I thought I
might give some advice.)

Excerpts from Loup Vaillant's message of Sat Mar 07 17:32:48 -0600 2009:
> Ideally, I would very much like to compile to C.
> 
> The requirements are easily stated. My language must
> -> be lazy by default,
> -> support algebraic data types and case expressions (unless I can get
> away with encoding them as functions),
> -> have some kind of FFI with C (I suppose it implies some support for
> unboxed values).
> 
> There is also the first class functions thing, but lambda lifting is okay.

Unfortunately, after working on LHC, I can verifiably say (like all
the researchers who wrote the papers which I *didn't* read
beforehand) that for a lot of purposes, C is an unsuitable fit for a
compiler's target language. It works pretty well if you can make the
source language's execution model fit well enough, but if you can't,
you have a price to pay (in sanity and/or performance.)

(On that note, I am currently of the opinion that most of LHC's major
deficiencies, aside from a few parser bugs or some needed
optimizations, come from the fact that compiling to C is currently
our only option; because of it, we have no exception handling or
proper garbage collection at all. As well, the runtime system is a
little needlessly 'clever' (if small and understandable) so it can
deal with that.)

However, since your goal is *not* efficiency, you will be happy to
know that the issues with compiling to C (like garbage collection and
exception handling) can be worked around viably.

For garbage collection, please see:

"Accurate Garbage Collection in an Uncooperative Environment" -
http://citeseer.ist.psu.edu/538613.html

This strategy is currently used in Mercury as well as Ben L.'s DDC
language; on that note, I think if you spend some time looking through
the runtime/generated code of DDC, you can see exactly what the paper
is talking about, because it's actually a very simple strategy for
holding onto GC roots:

http://code.haskell.org/ddc/ddc-head/runtime/

However, it's reasons like this (C is really hard to compile to
effectively) that Simon & co. have spent so much time on the C--
project. C-- is one of the backend languages used in GHC, and it is
designed to be a 'uniform assembly language' for high level languages
to compile to.

You can find a lot of info on C-- here; I recommend the paper at the
bottom to start off:

http://research.microsoft.com/en-us/um/people/simonpj/papers/c--/

These papers should further illustrate the issues with compiling to C;
for a pedagogical excursion, these issues can all be (simply) worked
around for the most part like Henderson's paper illustrates, but
they're worth keeping in mind, too.

Hopefully those papers should help you concerning your compilation
model and the route you would like to take. I can't say much about
laziness, other than read Simon Peyton-Jones' actual full book (it's
an amazing read):

http://research.microsoft.com/en-us/um/people/simonpj/papers/slpj-book-1987/

That should help you concerning laziness/compilation etc..

As for the FFI, I don't really have any papers on 'how to implement an
FFI', but Matthias Blume's paper might be of interest:

"No-Longer-Foreign: Teaching an ML compiler to speak c "natively"" 

http://ttic.uchicago.edu/~blume/papers/nlffi-entcs.pdf

libffi may also be worth investigating:

http://sourceware.org/libffi/


> I have chosen the STG machine because I thought it would be easier to
> get an FFI and case exprs out of it. Maybe I am wrong, and template
> instantiation is really easier. There is also the BRISK machine, and
> GRIN. But the papers I read were less readable to me than the three
> mentioned.

How far into GRIN have you looked? It is one of the intermediate
languages for lhc/jhc, and for a low-level intermediate representation
it works quite well, I think. It is a strict, monadic intermediate
language - being strict, we must still be able to model closures
('suspended applications', or thunks), so we model closures/partial
applications in a sort of 'defunctionalized' way (making GRIN a sort
of 'defunctionalized intermediate language', really) by turning them
into something along the lines of an algebraic data type. A
good benefit of this is that you actually /don't/ have to write
eval/apply yourself, because the compiler is supposed to generate it
*for* you.

In fact, this is a core part of GRIN - the generation of eval by the
compiler is critical for many important optimizations to take
place. It also makes the runtime a bit simpler.

This comes at the price that GRIN is by design a whole-program
intermediate form; in order to pull off any of these sophisticated
transformations, everything must be in memory at once. But as we have
seen with LHC/JHC, it can make a *huge* difference.
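
The 'closures as an algebraic data type plus a generated apply' idea can
be miniaturized in Haskell. This is a toy defunctionalization with
invented names, not actual GRIN output:

```haskell
-- Each function (and partial application) in the source program becomes
-- a constructor; 'apply' dispatches on the constructor, playing the role
-- of the compiler-generated eval/apply code.
data Value = Num Int | Fun Closure

data Closure
  = PAdd       -- 'add' awaiting both arguments
  | PAdd1 Int  -- 'add' partially applied to its first argument

apply :: Closure -> Value -> Value
apply PAdd      (Num x) = Fun (PAdd1 x)   -- build a partial application
apply (PAdd1 x) (Num y) = Num (x + y)     -- saturated: run the body
apply _         _       = error "ill-typed application"

-- Fully apply the defunctionalized 'add' to two numbers.
addBoth :: Int -> Int -> Int
addBoth x y =
  case apply PAdd (Num x) of
    Fun c -> case apply c (Num y) of
      Num n -> n
      _     -> error "expected a number"
    _ -> error "expected a function"

main :: IO ()
main = print (addBoth 2 3)
```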

Re: [Haskell-cafe] Purely Functional Data Structures

2009-03-07 Thread Günther Schmidt

Hi Jeremy,

I had used HAppS-IxSet before and was very happy with it, it offered 
pretty much everything I needed. I switched (back) to SQL once I had hit 
a bump in the road that I wasn't able to fix, a stack-overflow that 
occurred once I ran the code against the largest sample data I had. It 
occurred because I could not make the inserts into the set strict.


However since then my haskell skills have improved somewhat and right 
now I'm giving it another go.


At that time it seemed that not many people were using IxSet so no-one 
was really able to help me with it.


I'm glad that even though the original project is discontinued someone 
else took up the torch.


Günther


Jeremy Shaw schrieb:

> At Sun, 08 Mar 2009 00:13:14 +0100,
> Günther Schmidt wrote:
> > In SQL I would have the data indexed by several
> > different columns, if I use maps I'd only have one key, so if I need to
> > lookup data in the map by a value that is not the key the lookups will
> > become quite expensive.
>
> happstack-ixset offers a data-type similar to Map except that you can
> have multiple keys. You can even have keys that are calculated from
> the data but don't actually appear in the data itself. For example,
> if your ixset just contains Strings, one of the keys could be the
> length of the String.
>
> happstack-ixset (and its dependencies) also offers compact
> serialization/deserialization of the ixset to disk, data migration
> options, and a smattering of other features that may or may not be
> useful to you.
>
> While happstack-ixset is built to work with happstack, it does not
> depend on the happstack http server or persistent store layer, so it
> should be useful even if you are not building an application server.
>
> - jeremy




Re: [Haskell-cafe] I want to write a compiler

2009-03-07 Thread Jeremy Shaw
Hello,

This book is pretty good IMO:

http://research.microsoft.com/en-us/um/people/simonpj/papers/slpj-book-1987/

It does not cover STG, but it does cover a ton of useful stuff and is
very well written.

I can say from experience that the spineless-tagless-gmachine paper
you referenced does contain all the information needed to implement an
STG->C compiler. But, you should expect to spend a lot of time
alternating between trying to implement the thing and rereading the
paper.

Also, you may find that it is easier to do STG->Assembly at first (via
harpy or something). With assembly it is easier to express exactly
what you want to happen. With C you have to try to figure out how to
trick the C compiler into doing your bidding.

Also, maybe it is more satisfying to output assembly, because then you
really feel like you are writing a compiler ;)

I am sure others will say that LLVM is a better option than C or
assembly. I know almost nothing about LLVM, so they may be right.

-- jeremy

At Sun, 8 Mar 2009 01:32:48 +0200,
Loup Vaillant wrote:
> 
> This is a homework question. I am mainly looking for guidance or
> pointers. Source code, even. (Not GHC's, though, it is too complicated
> for me right now. The AST of STG is fine, but the rest kinda scares
> me.)
> 
> Ideally, I would very much like to compile to C.
> 
> The requirements are easily stated. My language must
> -> be lazy by default,
> -> support algebraic data types and case expressions (unless I can get
> away with encoding them as functions),
> -> have some kind of FFI with C (I suppose it implies some support for
> unboxed values).
> 
> There is also the first class functions thing, but lambda lifting is okay.
> 
> I don't care about any kind of execution efficiency. I just want my
> compiler to be As Simple As Possible. Eventually, I intend to
> bootstrap. Then, I will start dreaming about doing better than the
> Mighty Simons. (I can dream, right?)
> 
> I figured I would start by the back-end. I began to write 15 pathetic
> lines of code to start an STG machine. Needless to say, I am stuck. I
> don't know where to start.
> 
> I am already familiar with some papers. In particular, [1] (on
> template instantiation), [2] and [3]. I once wrote a simple graph
> reducer using template instantiation (about 20 lines at most).
> 
> I have chosen the STG machine because I thought it would be easier to
> get an FFI and case exprs out of it. Maybe I am wrong, and template
> instantiation is really easier. There is also the BRISK machine, and
> GRIN. But the papers I read were less readable to me than the three
> mentioned.
> 
> So, if any of you can give me some pointers or guidance or advice about
> where to start, I would be very grateful.
> 
> Loup
> 
> 
> [1] http://www.cs.otago.ac.nz/cosc459/books/pjlester.pdf
> [2] 
> http://research.microsoft.com/~simonpj/Papers/spineless-tagless-gmachine.ps.gz
> [3] http://research.microsoft.com/~simonpj/Papers/eval-apply/index.htm


Re: [Haskell-cafe] Purely Functional Data Structures

2009-03-07 Thread Jeremy Shaw
At Sun, 08 Mar 2009 00:13:14 +0100,
Günther Schmidt wrote:
> In SQL I would have the data indexed by several 
> different columns, if I use maps I'd only have one key, so if I need to 
> lookup data in the map by a value that is not the key the lookups will 
> become quite expensive.

happstack-ixset offers a data-type similar to Map except that you can
have multiple keys. You can even have keys that are calculated from
the data but don't actually appear in the data itself. For example,
if your ixset just contains Strings, one of the keys could be the
length of the String.

happstack-ixset (and its dependencies) also offers compact
serialization/deserialization of the ixset to disk, data migration
options, and a smattering of other features that may or may not be
useful to you.
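
The "multiple keys" idea can be illustrated in miniature with plain
Data.Maps (this is NOT the ixset API - ixset maintains the indexes for
you - just a toy sketch of the concept, with invented names):

```haskell
import qualified Data.Map as M

-- A set of Strings indexed two ways: by the value itself, and by a
-- calculated key (the length) that never appears in the data.
data Indexed = Indexed
  { byValue  :: M.Map String ()
  , byLength :: M.Map Int [String]
  }

emptyIx :: Indexed
emptyIx = Indexed M.empty M.empty

-- Every insert updates both indexes.
insertIx :: String -> Indexed -> Indexed
insertIx s (Indexed v l) =
  Indexed (M.insert s () v)
          (M.insertWith (++) (length s) [s] l)

-- Lookup by the calculated key is a cheap Map lookup, not a scan.
lookupByLength :: Int -> Indexed -> [String]
lookupByLength n = M.findWithDefault [] n . byLength

main :: IO ()
main = print (lookupByLength 3 (insertIx "foo" (insertIx "quux" emptyIx)))
```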

While happstack-ixset is built to work with happstack, it does not
depend on the happstack http server or persistent store layer, so it
should be useful even if you are not building an application server.

- jeremy  


[Haskell-cafe] Re: I want to write a compiler

2009-03-07 Thread Achim Schneider
Rick R  wrote:

> http://en.wikibooks.org/wiki/Write_Yourself_a_Scheme_in_48_Hours
>
Or go for the Real Thing:
http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-35.html#%_sec_5.5

(although starting to read the Wizard book with the last section might
not be the easiest thing in the world)

-- 
(c) this sig last receiving data processing entity. Inspect headers
for copyright history. All rights reserved. Copying, hiring, renting,
performance and/or quoting of this signature prohibited.




Re: [Haskell-cafe] I want to write a compiler

2009-03-07 Thread Rick R
http://en.wikibooks.org/wiki/Write_Yourself_a_Scheme_in_48_Hours

Not sure if this is exactly what you want, but you could certainly fulfill
all of your requirements using this as a baseline. Instead of evaluating the
actual statements in your eval function, you could simply render them to C.

As far as FFI, if you are statically compiling to C, you can leave your
variables unboxed. This would allow you to call C functions directly.

Hope that helps.


On Sat, Mar 7, 2009 at 6:32 PM, Loup Vaillant wrote:

> This is a homework question. I am mainly looking for guidance or
> pointers. Source code, even. (Not GHC's, though, it is too complicated
> for me right now. The AST of STG is fine, but the rest kinda scares
> me.)
>
> Ideally, I would very much like to compile to C.
>
> The requirements are easily stated. My language must
> -> be lazy by default,
> -> support algebraic data types and case expressions (unless I can get
> away with encoding them as functions),
> -> have some kind of FFI with C (I suppose it implies some support for
> unboxed values).
>
> There is also the first class functions thing, but lambda lifting is okay.
>
> I don't care about any kind of execution efficiency. I just want my
> compiler to be As Simple As Possible. Eventually, I intend to
> bootstrap. Then, I will start dreaming about doing better than the
> Mighty Simons. (I can dream, right?)
>
> I figured I would start by the back-end. I began to write 15 pathetic
> lines of code to start an STG machine. Needless to say, I am stuck. I
> don't know where to start.
>
> I am already familiar with some papers. In particular, [1] (on
> template instantiation), [2] and [3]. I once wrote a simple graph
> reducer using template instantiation (about 20 lines at most).
>
> I have chosen the STG machine because I thought it would be easier to
> get an FFI and case exprs out of it. Maybe I am wrong, and template
> instantiation is really easier. There is also the BRISK machine, and
> GRIN. But the papers I read were less readable to me than the three
> mentioned.
>
> So, if any of you can give me some pointers or guidance or advice about
> where to start, I would be very grateful.
>
> Loup
>
>
> [1] http://www.cs.otago.ac.nz/cosc459/books/pjlester.pdf
> [2]
> http://research.microsoft.com/~simonpj/Papers/spineless-tagless-gmachine.ps.gz
> [3] 
> http://research.microsoft.com/~simonpj/Papers/eval-apply/index.htm
>



-- 
We can't solve problems by using the same kind of thinking we used when we
created them.
   - A. Einstein


[Haskell-cafe] I want to write a compiler

2009-03-07 Thread Loup Vaillant
This is a homework question. I am mainly looking for guidance or
pointers. Source code, even. (Not GHC's, though, it is too complicated
for me right now. The AST of STG is fine, but the rest kinda scares
me.)

Ideally, I would very much like to compile to C.

The requirements are easily stated. My language must
-> be lazy by default,
-> support algebraic data types and case expressions (unless I can get
away with encoding them as functions),
-> have some kind of FFI with C (I suppose it implies some support for
unboxed values).

There is also the first class functions thing, but lambda lifting is okay.

I don't care about any kind of execution efficiency. I just want my
compiler to be As Simple As Possible. Eventually, I intend to
bootstrap. Then, I will start dreaming about doing better than the
Mighty Simons. (I can dream, right?)

I figured I would start by the back-end. I began to write 15 pathetic
lines of code to start an STG machine. Needless to say, I am stuck. I
don't know where to start.

I am already familiar with some papers. In particular, [1] (on
template instantiation), [2] and [3]. I once wrote a simple graph
reducer using template instantiation (about 20 lines at most).

I have chosen the STG machine because I thought it would be easier to
get an FFI and case exprs out of it. Maybe I am wrong, and template
instantiation is really easier. There is also the BRISK machine, and
GRIN. But the papers I read were less readable to me than the three
mentioned above.
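For concreteness, here is one possible starting point: a toy STG-style AST in Haskell. All names and shapes here are illustrative guesses at a minimal design, not GHC's actual types:

```haskell
type Var = String
type Con = String

data Atom = AVar Var | ALit Int
  deriving Show

data Lambda = Lambda [Var] [Var] Expr   -- free variables, arguments, body
  deriving Show

data Alt
  = ConAlt Con [Var] Expr   -- match a constructor, binding its fields
  | LitAlt Int Expr         -- match a literal
  | DefaultAlt Expr         -- catch-all
  deriving Show

data Expr
  = Let [(Var, Lambda)] Expr   -- allocate closures (thunks when no arguments)
  | Case Expr [Alt]            -- the only place where evaluation happens
  | App Var [Atom]             -- apply a variable to atoms
  | ConApp Con [Atom]          -- saturated constructor application
  | Lit Int
  deriving Show

-- Example: let t = {} \ {} -> plus# 1 2 in case t of _ -> t
-- (with (+) treated as a primitive variable "plus#")
example :: Expr
example =
  Let [("t", Lambda [] [] (App "plus#" [ALit 1, ALit 2]))]
      (Case (App "t" []) [DefaultAlt (App "t" [])])
```

Writing an interpreter for such a tree (environment, heap of closures, case continuations) is a reasonable first milestone before emitting C.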

So, if any of you can give me some pointers or guidance or advice about
where to start, I would be very grateful.

Loup


[1] http://www.cs.otago.ac.nz/cosc459/books/pjlester.pdf
[2] 
http://research.microsoft.com/~simonpj/Papers/spineless-tagless-gmachine.ps.gz
[3] http://research.microsoft.com/~simonpj/Papers/eval-apply/index.htm


[Haskell-cafe] Haskell Weekly News: Issue 108 - March 7, 2009

2009-03-07 Thread Brent Yorgey
---
Haskell Weekly News
http://sequence.complete.org/hwn/20090307
Issue 108 - March 07, 2009
---

   Welcome to issue 108 of HWN, a newsletter covering developments in the
   [1]Haskell community.

   The ICFP programming contest will be held from 26-29th June! It's not
   too early to start thinking about putting a team together.

Announcements

   tar 0.3.0.0. Duncan Coutts [2]announced a major new release of the
   [3]tar package for handling ".tar" archive files. This release has a
   completely new and much improved API.

   storable 0.1 -- Storable type class for variable-sized data. Tomáš
   Janoušek [4]announced the first release of the [5]storable library,
   which fills the gap between Foreign.Storable and Data.Binary by adding
   support for marshalling (finite) values of variable-sized data types,
   like lists or trees, while preserving the performance and memory
   efficiency one expects from the Storable class. It also provides
   (monadic) syntactic sugar that takes care of alignment restrictions
   automatically and makes instance deriving easy.

   CFP: Submit a talk proposal to CUFP. Kathleen Fisher [6]requested talk
   proposals for [7]CUFP.

   The Industrial Haskell Group. Duncan Coutts [8]announced the creation
   of the [9]Industrial Haskell Group (IHG). The IHG is an organisation to
   support the needs of commercial users of the Haskell programming
   language. Currently, the main activity of the IHG is a collaborative
   development scheme, in which multiple companies fund work on the
   Haskell development platform to their mutual benefit. The scheme has
   started with three partners of the IHG, including Galois and Amgen.

   pandoc 1.2. John MacFarlane [10]announced the release of [11]pandoc
   version 1.2. The most significant new feature is support for literate
   Haskell; you can now use pandoc directly on literate Haskell source
   files to produce syntax-highlighted HTML output.

   A Haskell binding for the Augeas API. Jude [12]announced a [13]Haskell
   FFI binding for the Augeas configuration editing API.

   Haskell Logo Voting will start soon!. Eelco Lempsink [14]announced that
   [15]voting for the new Haskell logo will begin on March 16! Everyone
   subscribed to haskell-cafe will receive a ballot; if you are not
   subscribed but would like to vote, email Eelco with the subject
   "haskell logo voting ballot request" and include a short, unique
   message.

   Happstack 0.2 Released. Matthew Elder [16]announced the [17]release of
   [18]Happstack 0.2.

   Extensible and Modular Generics for the Masses: emgm-0.3. Sean Leather
   [19]announced the third major release of [20]Extensible and Modular
   Generics for the Masses (EMGM), a library for generic programming in
   Haskell using type classes and a sum-of-products view. Deriving is now
   greatly improved, and there are several new functions, including case,
   everywhere, and everywhere'.

   major speed improvement: regex-tdfa reaches 1.0.0. ChrisK proudly
   [21]announced the version 1.0.0 release of [22]regex-tdfa. This is
   not just a bug fix release; it is a serious improvement in the
   asymptotic running time of the library algorithms.

Discussion

   Definitions of purity and Lazy IO. Oleg began a [23]discussion on lazy
   IO.

   Left fold enumerator - a real pearl overlooked?. Günther Schmidt began
   a [24]discussion of left-fold enumerators and their current status
   within the community.

Jobs

   Looking for a co-founder for a startup using Haskell. Ed McCaffrey
   [25]is looking for a co-founder to work on a startup music project in
   Haskell. Email Ed for more information.

   Fully-funded doctoral studentships in dependently typed programming at
   Oxford and Strathclyde. Jeremy Gibbons [26]announced two fully-funded
   doctoral student positions in dependently-typed programming at
   [27]Oxford and [28]Strathclyde.

Blog noise

   [29]Haskell news from the [30]blogosphere.
 * Holumbus: [31]First release of Holumbus-MapReduce.
 * FP Lunch: [32]Breadth first labelling.
 * Sean Leather: [33]Experiments with EMGM: Emacs org files.
 * Galois, Inc: [34]Call for Proposals: CUFP 2009.
 * Matthew Elder: [35]Happstack 0.2 Released.
 * Yi: [36]Lazy and Incremental Parsing: the paper.
 * Xmonad: [37]xmonad as a multi-head sliding block puzzle.
 * Don Stewart (dons): [38]Playing with GHC's parallel runtime.
 * Bjorn Buckwalter: [39]Extended sessions with the Haskell Curl
   bindings.
 * Manuel M T Chakravarty: [40]Installing GtK2Hs on a Mac with the
   native GTK+ OS X Framework..
 * Alex Mason: [41]If you need speed, don't talk to main!.
 * Luke Palmer: [42]New game: SpaceBattle.
 * Galois, Inc: [43]Galois Tech Talk: Specializing Generators for
   High-Perfo

[Haskell-cafe] Re: a newbies confusion with repositories - darcs or git

2009-03-07 Thread Trent W. Buck
Eric Kow  writes:

> One of the darcs team members, Thorkil Naur, felt that in my enthusiasm I
> was not being sufficiently forthright about darcs's shortcomings.

As for me, I tend to start any review with a list of all the problems I
have with the technology in question, on the basis that it's a lot
harder to find informed criticism of a technology than it is to find
praise (informed or otherwise).  So I would write an article "Why you
shouldn't choose Darcs", and let readers decide whether the arguments
are adequate.



Re: [Haskell-cafe] Purely Functional Data Structures

2009-03-07 Thread Günther Schmidt

Hi Don,

damn, that was quick!

And thx, I'll look into that. Reading it in wasn't much of a
problem; I was able to use MS-ODBC for that, there's a driver for
ODBC files. The problem is more the type of data structure I'd be
reading it into. In SQL I would have the data indexed by several
different columns; if I use maps I'd have only one key, so if I need to
look up data in the map by a value that is not the key, the lookups will
become quite expensive.


Any suggestions, what do you do in these cases?

Günther


Don Stewart wrote:

gue.schmidt:
  

Hi,

is the above mentioned book still *the* authority on the subject?

I bought the book, read about 10 pages and then put it back on the  
shelf. Um.
In my app I have to deal with 4 csv files, each between 5 - 10 mb, and  
some static data.


I had put all that data into an Sqlite3 database and used SQL on it.  
But, as the requirements keep changing the SQL becomes a bit messy. I  
guess we've all had that experience.


So I'm wondering if I will find clues in this book how to do my querying  
and handling of moderately large data in a more haskellish way and be  
able to drop the SQL.



Use the fine libraries on http://hackage.haskell.org. 


E.g. bytestring-csv then load that into a finite map?

These days it is rare to have to roll your own new data structures...

-- don
  





Re: [Haskell-cafe] Purely Functional Data Structures

2009-03-07 Thread Don Stewart
gue.schmidt:
> Hi,
>
> is the above mentioned book still *the* authority on the subject?
>
> I bought the book, read about 10 pages and then put it back on the  
> shelf. Um.
> In my app I have to deal with 4 csv files, each between 5 - 10 mb, and  
> some static data.
> 
> I had put all that data into an Sqlite3 database and used SQL on it.  
> But, as the requirements keep changing the SQL becomes a bit messy. I  
> guess we've all had that experience.
>
> So I'm wondering if I will find clues in this book how to do my querying  
> and handling of moderately large data in a more haskellish way and be  
> able to drop the SQL.

Use the fine libraries on http://hackage.haskell.org. 

E.g. bytestring-csv then load that into a finite map?

These days it is rare to have to roll your own new data structures...

-- don


[Haskell-cafe] Purely Functional Data Structures

2009-03-07 Thread Günther Schmidt

Hi,

is the above mentioned book still *the* authority on the subject?

I bought the book, read about 10 pages and then put it back on the 
shelf. Um.
In my app I have to deal with 4 csv files, each between 5 - 10 mb, and 
some static data.


I had put all that data into an Sqlite3 database and used SQL on it. 
But, as the requirements keep changing the SQL becomes a bit messy. I 
guess we've all had that experience.


So I'm wondering if I will find clues in this book how to do my querying 
and handling of moderately large data in a more haskellish way and be 
able to drop the SQL.


Your suggestions appreciated.

Günther



Re: [Haskell-cafe] Naturality condition for inits

2009-03-07 Thread Daniel Fischer
On Saturday, 7 March 2009 at 23:18, R J wrote:
> Here's another Bird problem that's stymied me:
>
> The function "inits" computes the list of initial segments of a list; its
> type is inits :: [a] -> [[a]].  What is the appropriate naturality
> condition for "inits"?
>
> The only discussion in the text concerning naturality conditions concerns
> map, where the naturality conditions are stated in what seem to be
> quasi-commutativity laws over the composition operator, as follows:
>
> f . head  =  head . map f, where f is strict (i.e., f _|_ = _|_)
> map f . tail  =  tail . map f
>map f (xs ++ ys)=  map f xs ++ map f ys
>map f . reverse =  reverse . map f
>map f . concat  =  concat . map (map f)
>
> I believe that none of the following naturality conditions, extrapolated
> from those above, hold:
>
>a. head . inits =  inits [head]
>b. tail . inits =  inits . tail
>c. reverse . inits  =  inits . reverse
>d. concat . inits   =  inits . concat

How does inits interplay with (map f)?
Though inits . tail =/= tail . inits, there is an interesting relation 
between inits (tail xs) and inits xs. Which one?
When you also consider the function tails, there is an interesting relation 
involving inits and reverse, too.
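For readers who would rather check than derive, the hinted relations can be tested in GHCi. The statements below are one reading of the hints (a spoiler, so skip this if you want to work them out yourself), not Bird's text:

```haskell
import Data.List (inits, tails)

-- For non-empty xs: inits (tail xs) drops the first segment of inits xs
-- and trims the head element from each remaining segment.
check1 :: [Int] -> Bool
check1 xs = inits (tail xs) == map tail (tail (inits xs))

-- And reverse connects inits with tails:
check2 :: [Int] -> Bool
check2 xs = map reverse (inits xs) == reverse (tails (reverse xs))

main :: IO ()
main = print (all check1 [[1], [1,2], [1,2,3,4]] && all check2 [[], [1,2,3]])
```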

>
> In case the definition of "inits" is relevant, my definition is:
>
>
>
> inits  :: [a] -> [[a]]
>
> inits xs   =  [take n xs | n <- seglengths]
>
>   where
>
>   seglengths = [0..length xs]

Better define it recursively.

inits [] = [[]]
inits (x:xs) = []:map (x:) (inits xs)

>
> Thanks.



Re: [Haskell-cafe] Naturality condition for inits

2009-03-07 Thread Derek Elkins
On Sat, 2009-03-07 at 22:18 +, R J wrote:
> Here's another Bird problem that's stymied me:
> 
> The function "inits" computes the list of initial segments of a list;
> its type is inits :: [a] -> [[a]].  What is the appropriate naturality
> condition for "inits"?

A natural transformation between two Functors f and g is a
polymorphic function t :: (Functor f, Functor g) => f a -> g a.  The
naturality condition is the free theorem which states*:
for any function f :: A -> B, t . fmap f = fmap f . t
Note that fmap is being used in two different instances here.

For lists, fmap = map and so we have for any polymorphic function [a] ->
[a] using reverse as a representative,
map f . reverse = reverse . map f.

inits is a natural transformation between [] and [] . [] (where . is
type-level composition and not expressible in Haskell).  Functors
compose just by composing their fmaps, so fmap for [] . [] is simply
map . map, therefore the naturality condition for inits is the
following:
for any function f :: A -> B,
inits . map f = map (map f) . inits
which you can easily prove.

* Actually there are some restrictions relating to bottom.
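The stated condition is easy to check on a few cases (a spot check, not the proof):

```haskell
import Data.List (inits)

-- Naturality of inits: inits . map f = map (map f) . inits
prop :: (Int -> Int) -> [Int] -> Bool
prop f xs = inits (map f xs) == map (map f) (inits xs)

main :: IO ()
main = print (and [prop (* 2) xs | xs <- [[], [1], [1, 2, 3], [5, 4, 3, 2, 1]]])
```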



Re: [Haskell-cafe] Converting list comprehensions to combinatory style

2009-03-07 Thread Daniel Fischer
On Saturday, 7 March 2009 at 23:06, R J wrote:
> Can anyone help with this problem from Bird:
>
> a. Convert the following list comprehensions to combinatory style:
>
>i.   [(x, y) | x <- [1..n], odd x, y <- [1..n]]
>ii.  [(x, y) | x <- [1..n], y <- [1..n], odd x]
>
> b. Are they equal?
>
> c. Compare the costs of evaluating the two expressions.
>
> I gather that by combinatory style, he means the use of concat, map, and
> filter.  While Bird provides the following conversion rules, he doesn't
> prove them, justify them, or even provide an example using them:
>
>R1.  [ f x | x <- xs  ]  =  map f xs
>R2.  [   x | x <- xs, p x ]  =  filter p xs
>R3.  [ e   | Q, P ]  =  concat [[e | P] | Q]
>R4.  [ e   | x <- [d | Q], P ]  =  [e [x := d] | Q, P]
>
> Thanks.

You can take R1-R4 as definition of the list-comprehension syntax.

Then you can rewrite i. step by step:

[(x,y) | x <- [1 .. n], odd x, y <- [1 .. n]]
~> concat [[(x,y) | y <- [1 .. n]] | x <- [1 .. n], odd x]
~> concat [map (\y -> (x,y)) [1 .. n] | x <- [1 .. n], odd x]
~> concat (map (\x -> map (\y -> (x,y)) [1 .. n]) (filter odd [1 .. n]))

(okay, I cheated, the last step is actually a sequence of steps, transforming
[f x | x <- xs, p x] into map f (filter p xs)).
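The rewriting can be checked directly; for part (b), both comprehensions produce the same list (though ii. tests odd x once per pair rather than once per x, which answers part (c)). Here n = 6 is an arbitrary choice:

```haskell
n :: Int
n = 6

-- i. and ii. from the exercise, plus the combinatory form derived above.
comp1, comp2, combinatory :: [(Int, Int)]
comp1 = [(x, y) | x <- [1 .. n], odd x, y <- [1 .. n]]
comp2 = [(x, y) | x <- [1 .. n], y <- [1 .. n], odd x]
combinatory =
  concat (map (\x -> map (\y -> (x, y)) [1 .. n]) (filter odd [1 .. n]))

main :: IO ()
main = print (comp1 == combinatory && comp1 == comp2)
```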


Re: [Haskell-cafe] do you have to use fix with forkio?

2009-03-07 Thread Martijn van Steenbergen

Derek Elkins wrote:

If you are doing something else, use something else.  This makes it
clear that you -aren't- going to break out (non-exceptionally), i.e. the
control flow is more obvious in this code than in the other versions.


Oh yes, of course! I wasn't saying forever is bad; in fact I agree with 
you that it's the best solution in this case. I just wanted to note that 
forever isn't always a good substitute for "fix $ \loop ->", without 
implying that was what you were suggesting.
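To make the contrast concrete, here is a small sketch (the countdown helper is mine, purely illustrative):

```haskell
import Control.Monad (forever, when)
import Data.Function (fix)
import Data.IORef

-- Unconditional repetition: 'forever' promises we never fall out
-- of the loop. (Not called from main below, since it never returns.)
echo :: IO ()
echo = forever (getLine >>= putStrLn)

-- Conditional repetition: with 'fix $ \loop ->' the body itself
-- decides whether to re-enter, with no exceptions needed.
countDown :: IORef Int -> IO ()
countDown ref = fix $ \loop -> do
  n <- readIORef ref
  when (n > 0) $ do
    writeIORef ref (n - 1)
    loop

main :: IO ()
main = do
  ref <- newIORef 5
  countDown ref
  readIORef ref >>= print
```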


Martijn.


Re: [Haskell-cafe] do you have to use fix with forkio?

2009-03-07 Thread Derek Elkins
On Sat, 2009-03-07 at 23:12 +0100, Martijn van Steenbergen wrote:
> Derek Elkins wrote:
> > Both are poorish style.
> > 
> > reader <- forkIO $ forever $ do (nr', line) <- readChan chan; when (nr /= nr') $ 
> > hPutStrLn hdl line
> 
> This is fine assuming you always want to re-enter the loop. If you want 
> to loop conditionally (which is most often the case), forever isn't 
> going to work, unless you use exceptions.

If you are doing something else, use something else.  This makes it
clear that you -aren't- going to break out (non-exceptionally), i.e. the
control flow is more obvious in this code than in the other versions.
By your logic 'map' would be bad because not everything is a map, of
course, this is precisely why using map, when applicable, is good.



[Haskell-cafe] Naturality condition for inits

2009-03-07 Thread R J

Here's another Bird problem that's stymied me:

The function "inits" computes the list of initial segments of a list; its type 
is inits :: [a] -> [[a]].  What is the appropriate naturality condition for 
"inits"?

The only discussion in the text concerning naturality conditions concerns map, 
where the naturality conditions are stated in what seem to be 
quasi-commutativity laws over the composition operator, as follows:

   f . head =  head . map f, where f is strict (i.e., f _|_ = _|_)
   map f . tail =  tail . map f
   map f (xs ++ ys) =  map f xs ++ map f ys
   map f . reverse =  reverse . map f
   map f . concat  =  concat . map (map f)

I believe that none of the following naturality conditions, extrapolated from 
those above, hold:

   a. head . inits =  inits [head]
   b. tail . inits =  inits . tail
   c. reverse . inits  =  inits . reverse
   d. concat . inits   =  inits . concat

In case the definition of "inits" is relevant, my definition is:



inits  :: [a] -> [[a]]

inits xs   =  [take n xs | n <- seglengths]

  where

  seglengths = [0..length xs]

Thanks.



Re: [Haskell-cafe] do you have to use fix with forkio?

2009-03-07 Thread Martijn van Steenbergen

Derek Elkins wrote:

Both are poorish style.

reader <- forkIO $ forever $ do (nr', line) <- readChan chan; when (nr /= nr') $ 
hPutStrLn hdl line


This is fine assuming you always want to re-enter the loop. If you want 
to loop conditionally (which is most often the case), forever isn't 
going to work, unless you use exceptions.


Martijn.



[Haskell-cafe] Converting list comprehensions to combinatory style

2009-03-07 Thread R J

Can anyone help with this problem from Bird:

a. Convert the following list comprehensions to combinatory style:

   i.   [(x, y) | x <- [1..n], odd x, y <- [1..n]]
   ii.  [(x, y) | x <- [1..n], y <- [1..n], odd x]

b. Are they equal?

c. Compare the costs of evaluating the two expressions.

I gather that by combinatory style, he means the use of concat, map, and 
filter.  While Bird provides the following conversion rules, he doesn't prove 
them, justify them, or even provide an example using them:

   R1.  [ f x | x <- xs  ]  =  map f xs
   R2.  [   x | x <- xs, p x ]  =  filter p xs
   R3.  [ e   | Q, P ]  =  concat [[e | P] | Q]
   R4.  [ e   | x <- [d | Q], P ]  =  [e [x := d] | Q, P]

Thanks.




Re: [Haskell-cafe] Re: Ambiguous type with PolymorphicComponents

2009-03-07 Thread Daniel Fischer
On Saturday, 7 March 2009 at 21:48, Maurício wrote:
>  >> (...)
> >
> > When you have
> >
> > data Test = Test (forall w. (C1 w, C2 w, ..., Cn w) => w)
> >
> > and
> >
> > function (Test w) = classmethod w,
> >
> > there is no way to decide which instance to use, hence the type variable
> > is ambiguous.
> > (...)
>
> But, then, how can I reach the data inside a
> polymorphic component? Or, better, what can I
> do with it? If I say:
>
> function (Test w) = classmethod (w :: specificType)
>
> then I have to suppose that w is always of
> 'specificType', and this may not be true.

If

w :: forall a. (Class a) => a,

then w is (can be) of all specific types which are instances of Class.

Perhaps what you wanted was an existential type:

{-# LANGUAGE ExistentialQuantification #-}

data ETest = forall w. WidgetClass w => ETest w

?

Or a GADT:

data GTest where
GTest :: forall a. WidgetClass a => a -> GTest

?
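To see how the existential version is used, here is a self-contained sketch in which a made-up Describable class stands in for WidgetClass, so it runs without Gtk:

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- Stand-in for WidgetClass, so the example runs without Gtk.
class Describable w where
  describe :: w -> String

instance Describable Int  where describe n = "Int: " ++ show n
instance Describable Bool where describe b = "Bool: " ++ show b

-- Existential: each ETest holds *some* Describable, chosen at
-- construction time; the concrete type is hidden afterwards.
data ETest = forall w. Describable w => ETest w

-- Pattern matching recovers only the class interface, which is
-- exactly what a method like toWidget needs.
toDescription :: ETest -> String
toDescription (ETest w) = describe w

main :: IO ()
main = mapM_ (putStrLn . toDescription) [ETest (3 :: Int), ETest True]
```

The contrast with the polymorphic component: here the *producer* of ETest picks the type, whereas in the original Test the *consumer* would have to.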

>
> Thanks,
> Maurício
>

Cheers,
Daniel



[Haskell-cafe] Re: Ambiguous type with PolymorphicComponents

2009-03-07 Thread Maurício

>> (...)

When you have 


data Test = Test (forall w. (C1 w, C2 w, ..., Cn w) => w)

and

function (Test w) = classmethod w,

there is no way to decide which instance to use, hence the type variable is 
ambiguous.

(...)


But, then, how can I reach the data inside a
polymorphic component? Or, better, what can I
do with it? If I say:

function (Test w) = classmethod (w :: specificType)

then I have to suppose that w is always of
'specificType', and this may not be true.

Thanks,
Maurício



Re: [Haskell-cafe] serializing large data structures, stack overflow

2009-03-07 Thread Bulat Ziganshin
Hello friggin,

Saturday, March 7, 2009, 10:57:04 PM, you wrote:

> dec = B.decodeFile "C:/users/saftarn/desktop/bintest.txt" >>= \a ->
>    return $ (a :: M.Map (Int,Int) Int)

just a quick style hack:

dec = B.decodeFile "C:/users/saftarn/desktop/bintest.txt" :: IO (M.Map 
(Int,Int) Int)

-- 
Best regards,
 Bulat  mailto:bulat.zigans...@gmail.com



Re: [Haskell-cafe] Ambiguous type with PolymorphicComponents

2009-03-07 Thread Daniel Fischer
On Saturday, 7 March 2009 at 20:28, Maurício wrote:
> Hi,
>
> When reading this code in ghci, I get an
> ambiguous type at last line:
>
>
> {-# LANGUAGE PolymorphicComponents #-}
> {-# LANGUAGE RankNTypes #-}
> import Graphics.UI.Gtk
> data Test = Test (forall w. WidgetClass w => w)
> toAction (Test w) = toWidget w
>
>
> It's interesting that if I substitute 'Integral' for
> 'WidgetClass' and 'toInteger' for 'toWidget', I
> get no error messages, and I see no essential
> difference between the two classes and the two functions.
>
> Do you know what I am missing?

Type defaulting.

When you have 

data Test = Test (forall w. (C1 w, C2 w, ..., Cn w) => w)

and

function (Test w) = classmethod w,

there is no way to decide which instance to use, hence the type variable is 
ambiguous.
But if at least one of these classes is numeric and all of the classes are 
defined in the standard libraries, then (section 4.3.4 of the report, IIRC) 
the ambiguous type variable is defaultable. In the case of Integral and 
toInteger, defaulting kicks in and defaults the type to Integer (unless you 
specified other defaults):

{-# LANGUAGE PolymorphicComponents #-}
{-# LANGUAGE RankNTypes #-}

data Test = Test (forall w. (Show w, Integral w) => w)

toAction (Test w) = show w


$ ghci Polymorph -fwarn-type-defaults
GHCi, version 6.8.3: http://www.haskell.org/ghc/  :? for help
Loading package base ... linking ... done.
[1 of 1] Compiling Main ( Polymorph.hs, interpreted )

Polymorph.hs:6:25:
Warning: Defaulting the following constraint(s) to type `Integer'
 `Integral w' arising from a use of `w' at Polymorph.hs:6:25
In the first argument of `show', namely `w'
In the expression: show w
In the definition of `toAction': toAction (Test w) = show w
Ok, modules loaded: Main.
*Main>

If you really want the contents of Test to be polymorphic, you must 
instantiate it at some specific type before you use it:

toAction (Test w) = toWidget (w :: SomeSpecificWidget)

should work.

>
> Thanks,
> Maurício

Cheers,
Daniel



Re: [Haskell-cafe] serializing large data structures, stack overflow

2009-03-07 Thread Don Stewart

import Data.Binary and then write a variant of something like how
Maps are currently serialised:

instance (Ord k, Binary k, Binary e) => Binary (Map.Map k e) where
    put m = put (Map.size m) >> mapM_ put (Map.toAscList m)
    get   = liftM Map.fromDistinctAscList get

So you might want something that avoids flattening it to a list first

-- Don

frigginfriggins:
> can you link to a good example of writing your own because I couldn't find 
> one.
> 
> On Sat, Mar 7, 2009 at 8:57 PM, Don Stewart  wrote:
> 
> Increase the stack size, or use a different serialiser (they're only a
> half dozen lines to write), or different data structure?
> 
> -- Don
> 
> frigginfriggins:
> > I'm playing around with Netflix, implementing a simple KNN-algorithm, I
> will
> > later try SVD which seems to be the most successful approach.
> >
> > Using a database like Postgresqk is to slow so I want to serialize a
> > datastructure containing the ratings. I'm not sure about the
> > representation I will use just yet, if I should use multiple arrays or 
> an
> Map/
> > IntMap.
> >
> > However I tried Data.Binary and already for small sizes I get stack
> overflow
> > when deserializing.
> > The serializing works fine but when bringing it back it overflows.
> > How can I solve this? This is just 2MB, I will eventually need soemthing
> like
> > 2-500MB to store everything depending on what representatin I choose.
> >
> > module Serialize where
> > import qualified Data.Binary as B
> > import qualified Data.Binary.Put as P
> > import qualified Data.Map as M
> > import qualified Data.List as L
> >
> > genTest :: Int -> M.Map (Int,Int) Int
> > genTest n = let movies = take n $ repeat 1
> > grades = take n $ repeat 4 in
> > M.fromList $ ([1..n] `zip` movies) `zip` grades
> >
> > main = do
> >   let a = genTest 5
> >   B.encodeFile "C:/users/saftarn/desktop/bintest.txt" a
> >   print "Success"
> >
> > dec = B.decodeFile "C:/users/saftarn/desktop/bintest.txt" >>= \a ->
> >   return $ (a :: M.Map (Int,Int) Int)
> >
> >
> >
> >
> 
> 
> 
> 


Re: [Haskell-cafe] serializing large data structures, stack overflow

2009-03-07 Thread Don Stewart
Increase the stack size, or use a different serialiser (they're only a
half dozen lines to write), or different data structure?

-- Don

frigginfriggins:
> I'm playing around with Netflix, implementing a simple KNN-algorithm, I will
> later try SVD which seems to be the most successful approach.
> 
> Using a database like Postgresqk is to slow so I want to serialize a
> datastructure containing the ratings. I'm not sure about the
> representation I will use just yet, if I should use multiple arrays or an Map/
> IntMap.
> 
> However I tried Data.Binary and already for small sizes I get stack overflow
> when deserializing.
> The serializing works fine but when bringing it back it overflows.
> How can I solve this? This is just 2MB, I will eventually need soemthing like
> 2-500MB to store everything depending on what representatin I choose.
> 
> module Serialize where
> import qualified Data.Binary as B
> import qualified Data.Binary.Put as P
> import qualified Data.Map as M
> import qualified Data.List as L
> 
> genTest :: Int -> M.Map (Int,Int) Int
> genTest n = let movies = take n $ repeat 1
> grades = take n $ repeat 4 in
> M.fromList $ ([1..n] `zip` movies) `zip` grades
> 
> main = do
>   let a = genTest 5
>   B.encodeFile "C:/users/saftarn/desktop/bintest.txt" a
>   print "Success"
> 
> dec = B.decodeFile "C:/users/saftarn/desktop/bintest.txt" >>= \a ->
>   return $ (a :: M.Map (Int,Int) Int)
> 
> 
> 
> 




[Haskell-cafe] serializing large data structures, stack overflow

2009-03-07 Thread friggin friggin
I'm playing around with Netflix, implementing a simple KNN algorithm; I will
later try SVD, which seems to be the most successful approach.

Using a database like PostgreSQL is too slow, so I want to serialize a
data structure containing the ratings. I'm not sure about the
representation I will use just yet, or whether I should use multiple arrays or a
Map/IntMap.

However, I tried Data.Binary, and already for small sizes I get a stack
overflow when deserializing.
The serializing works fine, but bringing it back overflows.
How can I solve this? This is just 2MB; I will eventually need something
like 2-500MB to store everything, depending on what representation I choose.

module Serialize where
import qualified Data.Binary as B
import qualified Data.Binary.Put as P
import qualified Data.Map as M
import qualified Data.List as L

genTest :: Int -> M.Map (Int,Int) Int
genTest n = let movies = take n (repeat 1)
                grades = take n (repeat 4)
            in  M.fromList (([1..n] `zip` movies) `zip` grades)

main = do
  let a = genTest 5
  B.encodeFile "C:/users/saftarn/desktop/bintest.txt" a
  print "Success"

dec = B.decodeFile "C:/users/saftarn/desktop/bintest.txt" >>= \a ->
  return $ (a :: M.Map (Int,Int) Int)


[Haskell-cafe] Re: a newbies confusion with repositories - darcs or git

2009-03-07 Thread Eric Kow
Hi again,

I wanted to echo Iavor's comment that you shouldn't feel 'obliged' to
use darcs out of Haskell brand loyalty.

Try both systems out and see how they feel.  My enthusiasm is really
about what I find to be a great and very comfortable workflow.  You'll
have to see for yourself what suits you best.

More importantly...

On Sun, Mar 01, 2009 at 10:40:46 +, Eric Y. Kow wrote:
> Why I love darcs
> 

One of the darcs team members, Thorkil Naur, felt that in my enthusiasm I
was not being sufficiently forthright about darcs's shortcomings.

First, I would like to express my gratitude to Thorkil for calling me
out.

Second, I wanted to state unequivocally that darcs, no matter how much I
love it, is not perfect.

> It's not just that it's written in Haskell; it's that the whole thing
> is driven by something which is very clean, powerful and which
> delivers a concrete impact in the real world.

When I said this, I was focusing on one idea: namely, that most of the
darcs user interface is just a straightforward consequence of two
operations: patch inverse and patch commutation.  In my eyes, that was
very clean, the whole of darcs basically just boils down to these two
operations.

What I failed to add is that /defining/ how patches should commute is
not so beautiful.  The problem occurs when patches conflict; the
question of how patches should commute when they conflict is still not
completely well understood.  We have some working ideas: code for
darcs-1 and darcs-2 patch theories which seems to work reasonably well.
But to call it clean and elegant would definitely be a stretch.

Furthermore, darcs has bugs, some of which are fairly deep (where
conflicts or duplicate patches are involved).  Some of these bugs are
documented and some we have only recently discovered, and some may be
looming just around the corner.  So far, these bugs have not affected a
lot of repositories, and in those cases, it has been fairly
straightforward to recover from them.  But know that they are there.

> We now have a stable and more sustainable development team (I'm working on 20%
> darcs time at my job with the University of Brighton and have committed myself
> to spending no more than 8 hours a week on darcs to avoid burnout; a lot of 
> our
> jobs have been assigned to dedicated managers -- hello!) and are adopting some
> new practices (hacking sprints, 6-month time based release schedule) to keep
> development running more smoothly.

Another reason to be concerned about the darcs team is that since David
Roundy retired from darcs life, we do not have anybody who deeply
understands darcs, that is, all the way down to the inner workings of a
darcs repository (for example the manipulation of the pending patch) and
to its patch theory core.

We have folks like me who have a rudimentary understanding of patch
theory (i.e. who grasp the relationship between commutation, inversion
and merging and who also understand some of what darcs-1 conflicts are),
but what we lack are people who deeply understand the darcs-2 core (Ian
Lynagh's work on camp is actually quite similar to darcs 2, so in a
sense, he does understand it), especially as it's implemented.

This puts us in an awkward position, one of not being able to quickly
work through any fundamental bugs if they should arise.  We would have
to scamper up that learning curve fairly quickly.

But I'm still optimistic over the long term.  These things will come.
We've made a lot of progress building up our community infrastructure, 
and we are learning how to make better use of our time.

For what it's worth, a few of us on the team will be focusing on darcs
fundamentals during the sprint.  We need more testing, we need to look
at our deep bugs more closely and we need to learn our way through
darcs's guts.  And I think we can do it.

Thanks,

Eric

PS. We have started a fundraising effort to get folks to the darcs
hacking sprint.  More details on that shortly.
 
-- 
Eric Kow 
PGP Key ID: 08AC04F9


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Ambiguous type with PolymorphicComponents

2009-03-07 Thread Maurício

Hi,

When loading this code in ghci, I get an
ambiguous type error at the last line:


{-# LANGUAGE PolymorphicComponents #-}
{-# LANGUAGE RankNTypes #-}
import Graphics.UI.Gtk
data Test = Test (forall w. WidgetClass w => w)
toAction (Test w) = toWidget w


It's interesting that if I replace 'WidgetClass' with 'Integral' and
'toWidget' with 'toInteger', I get no error messages, yet I see no
essential difference between the two classes or the two functions.

Do you know what I am missing?
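
For reference, here is the full Integral variant (it needs no Gtk, and
it does load; a plausible reason is that numeric defaulting picks
Integer for the otherwise-ambiguous instantiation, and there is no
analogous defaulting for a class like WidgetClass):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Same shape as the Gtk snippet above, with Integral in place of
-- WidgetClass and toInteger in place of toWidget.  The polymorphic
-- field can be instantiated at any Integral type; that ambiguity is
-- resolved by numeric defaulting (to Integer), which is why this
-- version compiles.
data Test = Test (forall a. Integral a => a)

toAction :: Test -> Integer
toAction (Test w) = toInteger w

main :: IO ()
main = print (toAction (Test 42))
```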

Thanks,
Maurício



[Haskell-cafe] Re: Zippers

2009-03-07 Thread Heinrich Apfelmus
Cristiano Paris wrote:
> Heinrich Apfelmus wrote:
>> ...
>> Such self-reference is usually called "tying the knot", see also
>>
>>  http://www.haskell.org/haskellwiki/Tying_the_Knot
> 
> I didn't know. Would you call this "Tying the knot" as well?
> 
> http://yi-editor.blogspot.com/2008/12/prototypes-encoding-oo-style.html

Yes, or rather, I would call it "untying the knot". ;)


Regards,
apfelmus

--
http://apfelmus.nfshost.com



Re: [Haskell-cafe] Mystified by Cabal

2009-03-07 Thread Don Stewart
colin:
> I have just attempted Cabal-izing my program (splitting it into a
> library and main program as well), and I'm mystified by some problems
> I am having.
> 
> First, when I try to build the library I get:
> 
> [co...@susannah game-tree]$ runhaskell Setup build
> Preprocessing library game-tree-1.0.0.0...
> Building game-tree-1.0.0.0...
> 
> Data/Tree/Game/Negascout.hs:31:0: Unrecognised pragma
> [1 of 2] Compiling Data.Tree.Game.Tree ( Data/Tree/Game/Tree.hs, 
> dist/build/Data/Tree/Game/Tree.o )
> 
> Data/Tree/Game/Tree.hs:1:0:
> Failed to load interface for `Prelude':
>   it is a member of package base-3.0.3.0, which is hidden
> 

build-depends: base

You certainly depend on the base library, unless you're doing only
Haskell 98 work.


Re: [Haskell-cafe] FRP + physics / status of hpysics

2009-03-07 Thread Roman Cheplyaka
* Peter Verswyvelen  [2009-03-07 18:34:10+0100]
> On Fri, Mar 6, 2009 at 10:39 PM, Roman Cheplyaka  wrote:
> 
> > > The blog
> > > seems to be inactive since december 2008; has development ceased?
> >
> > Sort of. One reason is that DPH does not seem to be ready for hpysics
> > yet, another one is that I don't see any potential users around (read: I
> > just need a kick in the ass).
> >
> 
> Is it a performance issue?

Yes. As Roman Leshchinskiy explained, it's because parallel arrays are
not fused properly yet.
But it's quite trivial to modify Hpysics to use ordinary lists or
arrays, and I think that would give reasonable performance.

> > > Integrating hpysics with Grapefruit might be a good topic for the
> > Hackathon,
> > > trying to make a simple game (e.g. Pong or Breakout) without using
> > recursive
> > > signal functions, but with correct collision response and
> > better-than-Euler
> > > integration, all handled by the physics engine. Other FRP engines could
> > be
> > tried, but Grapefruit hacking is already a topic on the Hackathon, so it
> > > would combine efforts.
> >
> > Yes, I'm actively pondering Hpysics+Grapefruit (it's the primary reason
> > of my interest in Grapefruit). But first of all we need to get graphics
> > support into Grapefruit.
> >
> > Does your proposal re Hackathon indicate that you'd like to join?
> >
> 
> Yes, Thomas 'Bob' Davie and I already joined the Hackathon, but the wiki is
> not yet updated. I see you've joined too, cool.

Great! I'll have more free time after March 15, and we can arrange an
IRC meeting to discuss this.

-- 
Roman I. Cheplyaka :: http://ro-che.info/
"Don't let school get in the way of your education." - Mark Twain


Re: [Haskell-cafe] FRP + physics / status of hpysics

2009-03-07 Thread Felipe Lessa
2009/3/6 Peter Verswyvelen :
> Do alternatives exist? Maybe good wrappers (hopefully pure...)  around
> existing engines?

There's Hipmunk, but it is not pure and not that good ;). But if you
don't mess with the global variables (which you normally wouldn't touch
anyway), then you can wrap everything in a separate monad with
unsafePerformIO's, I guess.
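
As a generic sketch of that "separate monad" idea (nothing here is
Hipmunk's actual API; it only shows the wrapper shape, and it is only
safe under the no-shared-global-state assumption above):

```haskell
import System.IO.Unsafe (unsafePerformIO)

-- A newtype that hides IO behind an opaque "physics" monad.  Safe only
-- under the assumption that the wrapped actions touch nothing but the
-- engine's own internal state.
newtype Physics a = Physics (IO a)

instance Functor Physics where
  fmap f (Physics io) = Physics (fmap f io)

instance Applicative Physics where
  pure = Physics . pure
  Physics f <*> Physics x = Physics (f <*> x)

instance Monad Physics where
  Physics io >>= k = Physics (io >>= \a -> let Physics io' = k a in io')

-- Observe a whole simulation as a pure value; this is the
-- unsafePerformIO step, and it is only as safe as the assumption above.
runPhysics :: Physics a -> a
runPhysics (Physics io) = unsafePerformIO io
```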

--
Felipe.


Re: [Haskell-cafe] Mystified by Cabal

2009-03-07 Thread Colin Paul Adams
> "Robin" == Robin Green  writes:

Robin> The build-depends line needs to go in the Library section,
Robin> I think. It doesn't seem to be having any effect in its
Robin> current location. Likewise for ghc-options.

Thanks everyone - it's working now.
-- 
Colin Adams
Preston Lancashire


Re: [Haskell-cafe] Mystified by Cabal

2009-03-07 Thread Robin Green
On Sat, 07 Mar 2009 17:30:43 +
Colin Paul Adams  wrote:

> > "Svein" == Svein Ove Aas  writes:
> 
> >> Preprocessing library game-tree-1.0.0.0...  Building
> >> game-tree-1.0.0.0...
> >> 
> >> Data/Tree/Game/Negascout.hs:31:0: Unrecognised pragma [1 of 2]
> >> Compiling Data.Tree.Game.Tree ( Data/Tree/Game/Tree.hs,
> >> dist/build/Data/Tree/Game/Tree.o )
> >> 
> >> Data/Tree/Game/Tree.hs:1:0:    Failed to load interface for
> >> `Prelude':      it is a member of package base-3.0.3.0, which
> >> is hidden
> >> 
> Svein> What does your .cabal file look like?
> 
> name:game-tree
> version: 1.0.0.0
> cabal-version:   >= 1.6
> synopsis:Searching game trees with alpha-beta pruning
> description: A data type for game trees, as used in decision
> theory and game theory, along with standard algorithms for searching
> the tree using alpha-beta pruning. Can be used as the basis of an AI
> for two-player zero-sum  games, such as chess. category:
> Data license: GPL
> license-file:LICENSE
> author:  Colin Adams
> maintainer:  co...@colina.demon.co.uk
> copyright:   Copyright: 2009 Colin Adams
> build-type:  Simple
> build-depends:   base >= 4
> ghc-options: -Wall -fno-warn-unrecognised-pragmas
> 
> Library
>  exposed-modules: Data.Tree.Game.Tree, Data.Tree.Game.Negascout
> source-repository head
>  type: darcs
>  location: http://code.haskell.org/game-tree/

The build-depends line needs to go in the Library section, I think. It
doesn't seem to be having any effect in its current location. Likewise
for ghc-options.
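
Concretely, based on the .cabal posted earlier in the thread, moving
those fields into the stanza would look something like this (untested):

```cabal
Library
  exposed-modules: Data.Tree.Game.Tree, Data.Tree.Game.Negascout
  build-depends:   base >= 4
  ghc-options:     -Wall -fno-warn-unrecognised-pragmas
```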
-- 
Robin


Re[2]: [Haskell-cafe] Mystified by Cabal

2009-03-07 Thread Bulat Ziganshin
Hello Colin,

Saturday, March 7, 2009, 8:30:43 PM, you wrote:

> >> Data/Tree/Game/Tree.hs:1:0:    Failed to load interface for
> >> `Prelude':      it is a member of package base-3.0.3.0, which
> >> is hidden
> build-depends:   base >= 4

and which ghc version you are running? :)


-- 
Best regards,
 Bulat    mailto:bulat.zigans...@gmail.com



Re: [Haskell-cafe] FRP + physics / status of hpysics

2009-03-07 Thread Peter Verswyvelen
On Fri, Mar 6, 2009 at 10:39 PM, Roman Cheplyaka  wrote:

> > The blog
> > seems to be inactive since december 2008; has development ceased?
>
> Sort of. One reason is that DPH does not seem to be ready for hpysics
> yet, another one is that I don't see any potential users around (read: I
> just need a kick in the ass).
>

Is it a performance issue?


>
> > Integrating hpysics with Grapefruit might be a good topic for the
> Hackathon,
> > trying to make a simple game (e.g. Pong or Breakout) without using
> recursive
> > signal functions, but with correct collision response and
> better-than-Euler
> > integration, all handled by the physics engine. Other FRP engines could
> be
> > tried, but Grapefruit hacking is already a topic on the Hackathon, so it
> > would combine efforts.
>
> Yes, I'm actively pondering Hpysics+Grapefruit (it's the primary reason
> of my interest in Grapefruit). But first of all we need to get graphics
> support into Grapefruit.
>
> Does your proposal re Hackathon indicate that you'd like to join?
>

Yes, Thomas 'Bob' Davie and I already joined the Hackathon, but the wiki is
not yet updated. I see you've joined too, cool.


Re: [Haskell-cafe] Mystified by Cabal

2009-03-07 Thread Colin Paul Adams
> "Svein" == Svein Ove Aas  writes:

>> Preprocessing library game-tree-1.0.0.0...  Building
>> game-tree-1.0.0.0...
>> 
>> Data/Tree/Game/Negascout.hs:31:0: Unrecognised pragma [1 of 2]
>> Compiling Data.Tree.Game.Tree ( Data/Tree/Game/Tree.hs,
>> dist/build/Data/Tree/Game/Tree.o )
>> 
>> Data/Tree/Game/Tree.hs:1:0:    Failed to load interface for
>> `Prelude':      it is a member of package base-3.0.3.0, which
>> is hidden
>> 
Svein> What does your .cabal file look like?

name:game-tree
version: 1.0.0.0
cabal-version:   >= 1.6
synopsis:Searching game trees with alpha-beta pruning
description: A data type for game trees, as used in decision theory
             and game theory, along with standard algorithms for
             searching the tree using alpha-beta pruning. Can be used
             as the basis of an AI for two-player zero-sum games,
             such as chess.
category:Data
license: GPL
license-file:LICENSE
author:  Colin Adams
maintainer:  co...@colina.demon.co.uk
copyright:   Copyright: 2009 Colin Adams
build-type:  Simple
build-depends:   base >= 4
ghc-options: -Wall -fno-warn-unrecognised-pragmas

Library
 exposed-modules: Data.Tree.Game.Tree, Data.Tree.Game.Negascout
source-repository head
 type: darcs
 location: http://code.haskell.org/game-tree/
-- 
Colin Adams
Preston Lancashire


Re: [Haskell-cafe] Mystified by Cabal

2009-03-07 Thread Svein Ove Aas
On Sat, Mar 7, 2009 at 6:18 PM, Colin Paul Adams
 wrote:
> I have just attempted Cabal-izing my program (splitting it into a
> library and main program as well), and I'm mystified by some problems
> I am having.
>
> First, when I try to build the library I get:
>
> [co...@susannah game-tree]$ runhaskell Setup build
> Preprocessing library game-tree-1.0.0.0...
> Building game-tree-1.0.0.0...
>
> Data/Tree/Game/Negascout.hs:31:0: Unrecognised pragma
> [1 of 2] Compiling Data.Tree.Game.Tree ( Data/Tree/Game/Tree.hs, 
> dist/build/Data/Tree/Game/Tree.o )
>
> Data/Tree/Game/Tree.hs:1:0:
>    Failed to load interface for `Prelude':
>      it is a member of package base-3.0.3.0, which is hidden
>
What does your .cabal file look like?


>
> Perhaps Cabal should be named Caballa instead.
>
Nah, it's not /that/ bad. :P

You might want to install the mkcabal package and use it to generate
the initial .cabal file. Though I'm not sure what you might have
gotten wrong... well, show us the file first.

- Svein Ove


[Haskell-cafe] Mystified by Cabal

2009-03-07 Thread Colin Paul Adams
I have just attempted Cabal-izing my program (splitting it into a
library and main program as well), and I'm mystified by some problems
I am having.

First, when I try to build the library I get:

[co...@susannah game-tree]$ runhaskell Setup build
Preprocessing library game-tree-1.0.0.0...
Building game-tree-1.0.0.0...

Data/Tree/Game/Negascout.hs:31:0: Unrecognised pragma
[1 of 2] Compiling Data.Tree.Game.Tree ( Data/Tree/Game/Tree.hs, 
dist/build/Data/Tree/Game/Tree.o )

Data/Tree/Game/Tree.hs:1:0:
Failed to load interface for `Prelude':
  it is a member of package base-3.0.3.0, which is hidden

That mystifies me. Googling, it appears to be a common error in the
past, but none of the reasons apparently apply to my
case. Incidentally, Tree.hs imports nothing. It just looks like:

-- | Nodes in game trees
-- Copyright 2009 Colin Adams
--
-- This file is part of game-tree.
--
--  Game-tree is free software: you can redistribute it and/or modify
--  it under the terms of the GNU General Public License as published by
--  the Free Software Foundation, either version 3 of the License, or
--  (at your option) any later version.

--  Game-tree is distributed in the hope that it will be useful,
--  but WITHOUT ANY WARRANTY; without even the implied warranty of
--  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
--  GNU General Public License for more details.

--  You should have received a copy of the GNU General Public License
--  along with game-tree.  If not, see .

module Data.Tree.Game.Tree where

-- | Nodes in a game search tree
class Tree a where
    -- | Is this a game-terminating node (e.g. checkmate)?
    is_terminal :: a -> Bool
    -- | Heuristic value of node
    node_value :: a -> Int
    -- | Child nodes in the game tree (positions more deeply searched)
    children :: a -> [a]

So I couldn't build my program using runhaskell, as I couldn't install
the library. Instead, I tried ghc --make with the -i
option. It looks in the right place, but still doesn't recognise the
module:

Chasing modules from: *Chu-shogi.hs

Generator.hs:27:7:
Could not find module `Data.Tree.Game.Tree':
  locations searched:
Data/Tree/Game/Tree.hs
Data/Tree/Game/Tree.lhs
dirs=../game-tree/Data/Tree/Game/Tree.hs
dirs=../game-tree/Data/Tree/Game/Tree.lhs

What am I doing wrong?
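
From the "dirs=" lines in the error it looks as though the option may
have been given as "-idirs=../game-tree"; if so, the fix (an untested
guess) is to pass the directory directly after -i, with no space:

```shell
# -i takes the search directory immediately after it: no space, no "dirs="
ghc --make -i../game-tree Chu-shogi.hs
```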

Perhaps Cabal should be named Caballa instead.
-- 
Colin Adams
Preston Lancashire


[Haskell-cafe] Running HUnit tests when pushing to darcs on haskell.org

2009-03-07 Thread Colin Paul Adams
Hello cafe,

I've just followed the instructions at
http://www.haskell.org/haskellwiki/How_to_write_a_Haskell_program 
to create my first library project on haskell.org, using darcs.

I tried using the darcs setpref test command to execute:

runhaskell tests/*.hs

(which works locally)

but it fails because it can't find Test.HUnit.

When I looked back at the webpage, I see it specifically mentions
QuickCheck and not HUnit, so I guess I should not have been
surprised. But I would like to know if this is possible, and if so,
how to arrange that runhaskell can find Test.HUnit.
-- 
Colin Adams
Preston Lancashire


Re: [Haskell-cafe] Interesting problem from Bird (4.2.13)

2009-03-07 Thread Bas van Dijk
2009/3/4 R J :
> Could someone provide an elegant solution to Bird problem 4.2.13?
>
> Here are the problem and my inelegant solution:
>
> Problem
> ---
>
> Since concatenation seems such a basic operation on lists, we can try to
> construct a data type that captures
> concatenation as a primitive.
>
> For example,
>
> data CatList a  =  CatNil
>                 |  Wrap a
>                 |  Cat (CatList a) (CatList a)
>
> The intention is that CatNil represents [], Wrap x represents [x], and Cat x
> y represents
> x ++ y.
>
> However, since "++" is associative, the expressions "Cat xs (Cat ys zs)" and
> "Cat (Cat xs ys) zs" should be regarded as equal.
>
> Define appropriate instances of "Eq" and "Ord" for "CatList".
>
> Inelegant Solution
> --
>
> The following solution works:
>
> instance (Eq a) => Eq (CatList a) where
>     CatNil   ==  CatNil    =  True
>     CatNil   ==  Wrap z    =  False
>     CatNil   ==  Cat z w   =  z == CatNil && w == CatNil
>
>     Wrap x   ==  CatNil    =  False
>     Wrap x   ==  Wrap z    =  x == z
>     Wrap x   ==  Cat z w   =  (Wrap x == z && w == CatNil) ||
>                               (Wrap x == w && z == CatNil)
>
>     Cat x y  ==  CatNil    =  x == CatNil && y == CatNil
>     Cat x y  ==  Wrap z    =  (x == Wrap z && y == CatNil) ||
>                               (x == CatNil && y == Wrap z)
>     Cat x y  ==  Cat z w   =  unwrap (Cat x y) == unwrap (Cat z w)
>
> unwrap           :: CatList a -> [a]
> unwrap CatNil    =  []
> unwrap (Wrap x)  =  [x]
> unwrap (Cat x y) =  unwrap x ++ unwrap y
>
> instance (Eq a, Ord a) => Ord (CatList a) where
>     x < y = unwrap x < unwrap y
>
> This solution correctly recognizes the equality of the following, including
> nested lists(represented, for example, by Wrap (Wrap 1), which corresponds
> to [[1]]):
>
> Wrap 1 == Cat (Wrap 1) CatNil
> Cat (Wrap 1) (Cat (Wrap 2) (Wrap 3)) == Cat (Wrap 1) (Cat (Wrap 2) (Wrap 3))
> Wrap (Wrap 1) == Wrap (Cat (Wrap 1) CatNil)
>
> Although this solution works, it's a hack, because unwrap converts CatLists
> to lists.  The question clearly seeks a pure solution that does not rely on
> Haskell's built-in lists.
>
> What's the pure solution that uses cases and recursion on
> CatList, not Haskell's built-in lists, to capture the equality of nested
> CatLists?
>
>

Here's my solution.

I first "right factor" each catlist by converting a catlist into a
difference catlist[1] and then turning that into a catlist again by
applying it to Nil.

At this point the converted catlist always has a right-factored form,
'Cat a (Cat b (Cat c Nil))',
which also doesn't contain Nils except at the end.

That right factored catlist is easy to compare with 'eq'.

---

data CatList a = Nil
               | Wrap a
               | Cat (CatList a) (CatList a)
  deriving Show

type DiffCatList a = CatList a -> CatList a

diff :: CatList a -> DiffCatList a
diff (Cat xs ys) = diff xs . diff ys
diff Nil = id
diff w   = Cat w

rightFactor :: CatList a -> CatList a
rightFactor xs = diff xs Nil

instance Eq a => Eq (CatList a) where
  xs == ys = rightFactor xs `eq` rightFactor ys
    where
      Nil         `eq` Nil         = True
      Wrap x      `eq` Wrap y      = x == y
      Cat xs1 ys1 `eq` Cat xs2 ys2 = xs1 `eq` xs2 && ys1 `eq` ys2
      _           `eq` _           = False

---
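
For instance, the examples from the original post work out under this
instance (hand-checked against the definitions above, not
machine-checked):

```haskell
--   rightFactor (Wrap 1)            =  Cat (Wrap 1) Nil
--   rightFactor (Cat (Wrap 1) Nil)  =  Cat (Wrap 1) Nil
--
-- so both of these evaluate to True:
--   Wrap 1 == Cat (Wrap 1) Nil
--   Wrap (Wrap 1) == Wrap (Cat (Wrap 1) Nil)
```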

(Right now I'm wondering whether it's possible to fuse 'diff' and 'eq'
somehow, so that we don't have to turn the DiffCatList back into a
catlist...)

regards,

Bas

[1] http://haskell.org/haskellwiki/Difference_list