Re: [Haskell-cafe] HaskellWiki images disappeared

2013-07-20 Thread Thomas Schilling
Should be fixed now. The wiki was recently transferred to a new server and,
unfortunately, this got broken in the process.
On 18 Jul 2013 22:45, Henk-Jan van Tuyl hjgt...@chello.nl wrote:



 L.S.,

 It looks like the HaskellWiki images have disappeared; can anybody repair
 this? (See for example
 http://www.haskell.org/haskellwiki/Special:MostLinkedFiles)

 Regards,
 Henk-Jan van Tuyl


 --
 Folding@home
 What if you could share your unused computer power to help find a cure? In
 just 5 minutes you can join the world's biggest networked computer and get
 us closer sooner. Watch the video.
 http://folding.stanford.edu/


 http://Van.Tuyl.eu/
 http://members.chello.nl/hjgtuyl/tourdemonad.html
 Haskell programming
 --


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Two GHC-related GSoC Proposals

2013-05-31 Thread Thomas Schilling
[I'll be the mentor for this GSoC project.]

I used the MVar approach a while ago, as did Simon Marlow's
original solution.  Using MVars and threads for this should scale well
enough (thousands of modules) and be relatively straightforward.
Error/exception handling could be a bit tricky, but you could use (or
copy ideas from) the 'async' package to deal with that.
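As a rough illustration of the MVar-per-module idea (not GHC's actual driver; the module graph and names below are invented), each module's compile job blocks on the MVars of its dependencies, so work proceeds in dependency order with no explicit scheduler:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar
import qualified Data.Map as M

-- A toy dependency graph: each "module" lists the modules it imports.
deps :: M.Map String [String]
deps = M.fromList
  [ ("Main",  ["Utils", "Types"])
  , ("Utils", ["Types"])
  , ("Types", [])
  ]

main :: IO ()
main = do
  -- One MVar per module holds its (mock) compilation result.
  results <- M.fromList <$> mapM (\m -> (,) m <$> newEmptyMVar) (M.keys deps)
  mapM_ (forkIO . compile results) (M.keys deps)
  -- readMVar blocks until the root module's result is filled in.
  r <- readMVar (results M.! "Main")
  putStrLn r
  where
    compile results m = do
      -- Block until every dependency has finished compiling.
      mapM_ (\d -> readMVar (results M.! d)) (deps M.! m)
      putMVar (results M.! m) ("compiled " ++ m)
```

Error handling is where this gets tricky: if one thread dies, everything blocked on its MVar deadlocks, which is exactly the problem the 'async' package's linking/cancellation machinery addresses.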

 / Thomas

On 30 May 2013 18:51, Ryan Newton rrnew...@gmail.com wrote:
 What's the plan for what control / synchronization structures you'll use in
 part 2 of the plan to implement a parallel driver?

 Is the idea just to use an IO thread for each compile and block them on
 MVars when they encounter dependencies?  Or you can use a pool of worker
 threads and a work queue, and only add modules to the work queue when all
 their dependencies are met (limits memory use)... many options for executing
 a task DAG.  Fortunately the granularity is coarse.

   -Ryan



 On Sun, Apr 21, 2013 at 10:34 PM, Patrick Palka patr...@parcs.ath.cx
 wrote:

 Good points. I did not take into account whether proposal #2 may be worth
 it in light of -fllvm. I suppose that even if the LLVM codegen is able to
 perform similar optimizations, it would still be beneficial to implement
 proposal #2 as a core-to-core pass because the transformations it performs
 would expose new information to subsequent core-to-core passes. Also,
 Haskell has different overflow rules than C does (whose rules I assume
 LLVM's optimizations are modeled from): In Haskell, integer overflow is
 undefined for all integral types, whereas in C it's only undefined for
 signed integral types. This limits the number of optimizations a C-based
 optimizer can perform on unsigned arithmetic.

 I'm not sure how I would break up the parallel compilation proposal into
 multiple self-contained units of work. I can only think of two units: making
 GHC thread safe, and writing the new parallel compilation driver. Other
 incidental units may come up during development (e.g. parallel compilation
 might exacerbate #4012), but I still feel that three months of full time
 work is ample time to complete the project, especially with existing
 familiarity with the code base.

 Thanks for the feedback.


 On Sun, Apr 21, 2013 at 5:55 PM, Carter Schonwald
 carter.schonw...@gmail.com wrote:

 Hey Patrick,
 both are excellent ideas for work that would be really valuable for the
 community!
 (independent of whether or not they can be made into GSoC-sized chunks)

 ---
 I'm actually hoping to invest some time this summer investigating
 improving the numerics optimization story in ghc. This is in large part
 because I'm writing LOTs of numeric codes in haskell presently (hopefully on
 track to make some available to the community ).

 That said, it's not entirely obvious (at least to me) what a tractable,
 focused, GSoC-sized subset of the numerics optimization improvement would
 be, and it would have to be a subset that has real performance impact and
 doesn't benefit from e.g. using -fllvm rather than -fasm.
 -

 For helping pave the way to better parallel builds, what are some self
 contained units of work on ghc (or related work on cabal) that might be
 tractable over a summer? If you can put together a clear roadmap of work
 chunks that are tractable over the course of the summer, I'd favor choosing
 that work, especially if you can give a clear outline of the plan per chunk
 and how to evaluate the success of each unit of work.

 basically: while both are high value projects, helping improve the
 parallel build tooling (whether in performance or correctness or both!) has
 a more obvious scope of community impact, and if you can layout a clear plan
 of work that GHC folks agree with and seems doable, i'd favor that project
 :)

 hope this feedback helps you sort out project ideas

 cheers
 -Carter




 On Sun, Apr 21, 2013 at 12:20 PM, Patrick Palka patr...@parcs.ath.cx
 wrote:

 Hi,

 I'm interested in participating in the GSoC by improving GHC with one of
 these two features:

 1) Implement native support for compiling modules in parallel (see
 #910). This will involve making the compilation pipeline thread-safe,
 implementing the logic for building modules in parallel (with an emphasis on
 keeping compiler output deterministic), and lots of testing and
 benchmarking. Being able to seamlessly build modules in parallel will
 shorten the time it takes to recompile a project and will therefore improve
 the life of every GHC user.

 2) Improve existing constant folding, strength reduction and peephole
 optimizations on arithmetic and logical expressions, and optionally
 implement a core-to-core pass for optimizing nested comparisons (relevant
 tickets include #2132, #5615, and #4101). GHC currently performs some of these
 simplifications (via its BuiltinRule framework), but there is a lot of room
 for improvement. For instance, the core for this snippet is essentially
 identical to the Haskell source:

 foo :: 

Re: [Haskell-cafe] Parallel ghc --make

2013-05-15 Thread Thomas Schilling
To have a single-process ghc --make -j you first of all need internal
thread-safety:

GHC internally keeps a number of global caches that need to be made thread-safe:

  - table of interned strings (this is actually written in C and
accessed via FFI)
  - cache of interface files loaded, these are actually loaded lazily
using unsafeInterleaveIO magic (yuck)
  - cache of packages descriptions (I think)
  - the NameCache: a cache of string -> magic number.  This is used to
implement fast comparisons between symbols.  The magic numbers are
generated non-deterministically (more unsafeInterleaveIO) so you need
to keep this cache around.
  - HomeModules: These are the modules that have been compiled in this
--make run.

The NameCache is used when loading interface files and also by the Parser.

Making these things thread-safe basically involves updating these
caches via atomicModifyIORef instead of plain modifyIORef.  I made
those changes a few years ago, but at least one of them was rolled
back.  I forget the details, but I think it was one use of
unsafePerformIO that caused the issues.  unsafePerformIO needs to
traverse the stack to look for thunks that are potentially being
evaluated by multiple threads, and if you have a deep stack that can
be expensive.  SimonM has since added stack chunks, which should
reduce the overhead; it could be worthwhile to re-evaluate the patch.
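The atomicModifyIORef change can be sketched on a toy stand-in for the NameCache (the cache shape and names here are invented for illustration, not GHC's actual data structures). The point is that the whole read-modify-write step happens in a single atomic operation:

```haskell
import Data.IORef
import qualified Data.Map as M

-- A tiny stand-in for GHC's NameCache: a string-to-unique table,
-- plus the next unique to hand out.
type NameCache = IORef (M.Map String Int, Int)

-- Look up a name, allocating a fresh unique if it is new.  Because
-- lookup and insert happen inside one atomicModifyIORef', concurrent
-- callers can never both allocate a unique for the same name.
internName :: NameCache -> String -> IO Int
internName cache name = atomicModifyIORef' cache $ \(m, next) ->
  case M.lookup name m of
    Just u  -> ((m, next), u)
    Nothing -> ((M.insert name next m, next + 1), next)

main :: IO ()
main = do
  cache <- newIORef (M.empty, 0)
  a <- internName cache "foo"
  b <- internName cache "bar"
  c <- internName cache "foo"   -- hits the cache, same unique as 'a'
  print (a, b, c, a == c)
```

With plain modifyIORef, two threads could read the same old map, each insert "foo", and hand out different uniques for the same name, breaking the fast-comparison invariant.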


To have a multi-process ghc --make you don't need thread-safety.
However, without sharing the caches -- in particular the interface
file caches -- the time to read data from the disk may outweigh any
advantages from parallel execution.


Evan's approach of using a long-running worker process avoids issues
with reloading most of the caches for each module, but it probably
couldn't take advantage of the HomeModules cache.  It would be
interesting to see if that was the issue.  Then, it would be
interesting to know whether disk access or serialisation overhead is
the issue; if it's the former, some clever use of mmap could help.


HTH,
 / Thomas

On 13 May 2013 17:35, Evan Laforge qdun...@gmail.com wrote:
 I wrote a ghc-server that starts a persistent process for each cpu.
 Then a 'ghc' frontend wrapper sticks each job in a queue.  It seemed
 to be working, but timing tests didn't reveal any speed-up.  Then I
 got a faster computer and lost motivation.  I didn't investigate very
 deeply why it didn't speed up as I hoped.  It's possible the approach
 is still valid, but I made some mistake in the implementation.

 So I can stop writing this little blurb I put it on github:

 https://github.com/elaforge/ghc-server

 On Mon, May 13, 2013 at 8:40 PM, Niklas Hambüchen m...@nh2.me wrote:
 I know this has been talked about before and also a bit in the recent
 GSoC discussion.

 I would like to know what prevents ghc --make from working in parallel,
 who worked at that in the past, what their findings were and a general
 estimation of the difficulty of the problem.

 Afterwards, I would update
 http://hackage.haskell.org/trac/ghc/ticket/910 with a short summary of
 what the current situation is.

 Thanks to those who know more!




Re: [Haskell-cafe] ghc-heap-view now with recursive pretty-printing

2012-12-25 Thread Thomas Schilling
On 21 December 2012 11:16, Joachim Breitner m...@joachim-breitner.de wrote:
 Prelude :script /home/jojo/.cabal/share/ghc-heap-view-0.4.0.0/ghci
 Prelude let x = [1..10]
 Prelude x
 [1,2,3,4,5,6,7,8,9,10]
 Prelude :printHeap x
 _bh [S# 1,S# 2,S# 3,S# 4,S# 5,S# 6,S# 7,S# 8,S# 9,S# 10]

 Note that the tool shows us that the list is a list of S# constructors,
 and also that it is still hidden behind a blackhole. After running
 System.Mem.performGC, this would disappear.

Why do you call it a blackhole?  I assume you mean a thunk that has
been evaluated and updated with its value.  The commonly used term for
this is "indirection".  A blackhole is used to detect when a thunk's
value depends on itself (e.g., in "let x = id x in ..." the thunk for
x may get turned into a black hole).
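A minimal demonstration of that self-dependent case (a sketch; the exact behaviour depends on GHC's RTS, which detects a thread re-entering its own black hole and raises NonTermination, normally printed as <<loop>>):

```haskell
import Control.Exception (evaluate, try, NonTermination)

-- A thunk whose value depends on itself.  When the RTS enters it, the
-- thunk is overwritten with a black hole; re-entering that black hole
-- from the same thread is detected as a loop.
loopy :: Int
loopy = let x = id x in x

main :: IO ()
main = do
  r <- try (evaluate loopy) :: IO (Either NonTermination Int)
  case r of
    Left _  -> putStrLn "detected <<loop>>"
    Right n -> print n
```

Without the try, the program would simply terminate with the <<loop>> message instead of looping forever.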

It's a minor thing, but I think it's a good idea to stick to existing
terminology. Otherwise, it looks like a useful tool. Eventually, we
probably want an interactive graph where we can click a node to
evaluate it (or to show/hide child nodes).

--
Push the envelope. Watch it bend.



Re: [Haskell-cafe] How to correctly benchmark code with Criterion?

2012-10-18 Thread Thomas Schilling
Yes, Criterion always discards the time of the first evaluation.

On 18 October 2012 15:06, Janek S. fremenz...@poczta.onet.pl wrote:
 So the evaluation will be included in the benchmark, but if bench is
 doing enough trials it will be statistical noise.
 When I intentionally delayed my dataBuild function (using delayThread
 100) the estimated time of the benchmark was incorrect, but when I
 got the final results all runs were below 50ms, which means that the
 initial run that took 1 second was discarded. So it seems to me that
 the first evaluation is discarded. It would be good if someone could
 definitely confirm that.

 Janek




-- 
Push the envelope. Watch it bend.



Re: [Haskell-cafe] How to correctly benchmark code with Criterion?

2012-10-18 Thread Thomas Schilling
On 18 October 2012 13:15, Janek S. fremenz...@poczta.onet.pl wrote:
 Something like this might work, not sure what the canonical way is.
 (...)

 This is basically the same as the answer I was given on SO. My concerns
 about this solution are:
 - rnf requires its parameter to belong to the NFData type class. This
 is not the case for some data structures like Repa arrays.

For unboxed arrays of primitive types WHNF = NF.  That is, once the
array is constructed all its elements will be in WHNF.

 - evaluate only evaluates its argument to WHNF - is this enough? If I
 have a tuple containing two lists, won't this only evaluate the tuple
 constructor and leave the lists as thunks? This is actually the case
 in my code.

That is why you use rnf from the NFData type class. You use evaluate
to kick-start rnf, which then goes ahead and evaluates everything
(assuming the NFData instance has been defined correctly).
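Concretely, the evaluate/rnf combination looks like this (a minimal sketch; NFData and rnf come from the deepseq package, which ships with GHC):

```haskell
import Control.DeepSeq (rnf)
import Control.Exception (evaluate)

main :: IO ()
main = do
  let pair = (map (*2) [1..3 :: Int], "done")
  -- evaluate alone would only force the outermost (,) constructor
  -- (WHNF), leaving both components as thunks.  Wrapping the value in
  -- rnf makes forcing to WHNF evaluate the full structure, because
  -- rnf's result is () and computing it traverses everything.
  evaluate (rnf pair)
  print pair
```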


 As I said previously, it seems that Criterion somehow evaluates the data
 so that the time needed for its creation is not included in the
 benchmark. I modified my dataBuild function to look like this:

 dataBuild gen = unsafePerformIO $ do
     let x = (take 6 $ randoms gen, take 2048 $ randoms gen)
     delayThread 100
     return x

 When I ran the benchmark, criterion estimated the time needed to
 complete it to over 100 seconds (which means that delayThread worked and
 was used as a basis for estimation), but the benchmark was finished much
 faster and there was no difference in the final result compared to the
 normal dataBuild function. This suggests that once data was created and
 used for estimation, the dataBuild function was not used again. The main
 question is: is this observation correct? In this question on SO:
 http://stackoverflow.com/questions/6637968/how-to-use-criterion-to-measure-performance-of-haskell-programs
 one of the answers says that there is no automatic memoization, while it
 looks as if the values of dataBuild are in fact memoized. I have a
 feeling that I am misunderstanding something.

If you bind an expression to a variable and then reuse that variable,
the expression is only evaluated once. That is, in "let x = expr in
..." the expression is evaluated only once. However, if you have "f y
= let x = expr in ..." then the expression is evaluated once per
function call.
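This difference can be made visible with Debug.Trace (a sketch; compile without optimisations, since the full-laziness transformation at -O may float the inner thunk out of the function and re-introduce sharing):

```haskell
import Debug.Trace (trace)

-- A top-level binding (a CAF): the traced expression is evaluated at
-- most once, no matter how often the variable is used.
shared :: Int
shared = trace "evaluating shared" (sum [1..10])

-- Here the let lives inside the function body, so (at -O0) each call
-- builds and evaluates a fresh thunk.
perCall :: Int -> Int
perCall y = let x = trace "evaluating perCall" (sum [1..10]) in x + y

main :: IO ()
main = do
  print (shared + shared)   -- "evaluating shared" traced once (stderr)
  print (perCall 0)         -- "evaluating perCall" traced ...
  print (perCall 1)         -- ... once per call at -O0
```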




 I don't know if you have already read them,
 but Tibell's slides on High Performance Haskell are pretty good:

 http://www.slideshare.net/tibbe/highperformance-haskell

 There is a section at the end where he runs several tests using Criterion.
 I skimmed the slides and slide 59 seems to show that my concerns regarding 
 WHNF might be true.

It's usually safe if you benchmark a function. However, you most
likely want the result to be in normal form.  The nf combinator does
this for you. So, if your benchmark function has type f :: X ->
([Double], Double), your benchmark will be:

  bench "f" (nf f input)

The first run will evaluate the input (and discard the runtime) and
all subsequent runs will evaluate the result to normal form. For repa
you can use deepSeqArray [1] if your array is not unboxed:

  bench "f'" (whnf (deepSeqArray . f) input)

[1]: 
http://hackage.haskell.org/packages/archive/repa/3.2.2.2/doc/html/Data-Array-Repa.html#v:deepSeqArray
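Putting the pieces together, a minimal criterion benchmark along these lines might look as follows (a sketch assuming the criterion package is installed; f here is an invented stand-in for the real function under test):

```haskell
import Criterion.Main (defaultMain, bench, nf, whnf)
import Data.List (foldl')

-- A stand-in for the function under test, of type X -> ([Double], Double).
f :: Int -> ([Double], Double)
f n = (xs, foldl' (+) 0 xs)
  where xs = map fromIntegral [1..n]

main :: IO ()
main = defaultMain
  [ bench "f to normal form" (nf f 1000)    -- forces the full result
  , bench "f to WHNF only"   (whnf f 1000)  -- stops at the tuple constructor
  ]
```

The whnf variant measures almost nothing here, since building the outermost pair is cheap; nf forces both the list and the sum on every run.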



Re: [Haskell-cafe] Panic loading network on windows (GHC 7.6.1)

2012-10-06 Thread Thomas Schilling
Just to explain what's going on.  It looks like you are compiling a
module that uses template haskell, which in turn relies on GHCi bits.
In particular, GHCi has a custom linker for loading compiled code.
This linker is very fragile and tends to break whenever the platform
GCC/linker changes. Similar issues happen a lot on OS X, because Apple
tends to change their library formats on most major releases.

The only workaround I can think of is to avoid using GHCi or Template
Haskell, but I understand that's usually very tricky (especially if
one of the dependencies uses TH).

On 6 October 2012 09:57, Henk-Jan van Tuyl hjgt...@chello.nl wrote:
 On Fri, 05 Oct 2012 17:31:49 +0200, JP Moresmau jpmores...@gmail.com
 wrote:

 Hello, I've installed Cabal and cabal-install 1.16 (which required
 network) on a new GHC 7.6.1 install and everything went well, except
 now when building a package requiring network I get:

 Loading package network-2.4.0.1 ... ghc.exe: Unknown PEi386 section name
 `.idata
 $4' (while processing: c:/ghc/ghc-7.6.1/mingw/lib\libws2_32.a)
 ghc.exe: panic! (the 'impossible' happened)
   (GHC version 7.6.1 for i386-unknown-mingw32):
 loadArchive c:/ghc/ghc-7.6.1/mingw/lib\\libws2_32.a: failed


 It's a GHC bug and will be solved in GHC 7.6.2, according to:
   http://hackage.haskell.org/trac/ghc/ticket/7103

 Regards,
 Henk-Jan van Tuyl


 --
 http://Van.Tuyl.eu/
 http://members.chello.nl/hjgtuyl/tourdemonad.html
 Haskell programming
 --





-- 
Push the envelope. Watch it bend.



Re: [Haskell-cafe] object file cannot be loaded.

2012-10-06 Thread Thomas Schilling
Does `ghc-pkg check` report any issues?

On 6 October 2012 15:24, Magicloud Magiclouds
magicloud.magiclo...@gmail.com wrote:
 Hi,
   I am installing the postgres package (cannot remember the exact name
  right now). When compiling the template haskell part I got the
  following error message.
   I tried to clear all user-space packages. It did not help.

 Loading package text-0.11.2.3 ... linking ... ghc:
 /home/magicloud/.cabal/lib/text-0.11.2.3/ghc-7.6.1/HStext-0.11.2.3.o:
 unknown symbol 
 `bytestringzm0zi10zi0zi1_DataziByteStringziInternal_PS_con_info'
 ghc: unable to load package `text-0.11.2.3'
 --
 竹密岂妨流水过 (Dense bamboo does not hinder the flowing stream)
 山高哪阻野云飞 (High mountains cannot block the wild clouds' flight)

 And for G+, please use magiclouds#gmail.com.




-- 
Push the envelope. Watch it bend.



Re: [Haskell-cafe] Haskell Wiki News

2012-09-22 Thread Thomas Schilling
It's a wiki.  I went ahead and fixed it, this time.

To paraphrase Bryan O'Sullivan: whenever you think "why hasn't anyone done
...?" or "why doesn't somebody fix ...?", you should ask yourself "why don't
*I* do ...?" or "why don't *I* fix ...?"  That's how open source works.

(Not trying to be offensive, just pointing out that's how we should think
about open source. That's how we got Real World Haskell, that's how we
got Criterion, that's how we got the text package in its current
version.)

On 22 September 2012 10:40, Colin Adams colinpaulad...@gmail.com wrote:

 This doesn't seem to be up-to-date. It announces GHC 7.4 released (but
 7.6.1 was released a couple of weeks ago).





-- 
Push the envelope. Watch it bend.


Re: [Haskell-cafe] A first glimps on the {-# NOUPDATE #-} pragma

2012-08-30 Thread Thomas Schilling
On 30 August 2012 09:34, Joachim Breitner breit...@kit.edu wrote:

 but from a first glance it seems that you are not using that part of GHC
 in your project, right?


No, I don't think I can make use of your work directly. Lambdachine uses
GHC up until the CorePrep phase (the last phase before conversion to STG, I
think). I'll probably need a dynamic tagging mechanism to keep track of
what's potentially shared. Not sure what the overhead of that will be.

/ Thomas


Re: [Haskell-cafe] A first glimps on the {-# NOUPDATE #-} pragma

2012-08-29 Thread Thomas Schilling
On 29 August 2012 15:21, Joachim Breitner breit...@kit.edu wrote:

 Hi Facundo,

 Am Mittwoch, den 29.08.2012, 10:26 -0300 schrieb Facundo Domínguez:
   upd_noupd n =
   let l = myenum' 0 n
   in last l + length l
 
  This could be rewritten as
 
   upd_noupd n =
   let l n = myenum' 0 n
   in last (l n) + length (l n)
 
  Or a special form of let could be introduced to define locally-scoped
 macros:
 
   upd_noupd n =
   let# l = myenum' 0 n
   in last l + length l
 
  What's the strength of the {-# NOUPDATE #-} approach?

 it does not require refactoring of the code. There is not always a
 parameter handy that you can use to prevent sharing, and using () for
 that sometimes fails due to the full-lazyness-transformation.

 And a locally-scoped macros would not help in this case:

 test g n = g (myenum' 0 n)

 Now you still might want to prevent the long list to be stored, but here
 it cannot be done just by inlining or macro expansion.


Syntactically, I'd prefer something that looks more like a function.  With
a pragma it's difficult to see which expression it actually affects.  For
example, we already have the special functions "lazy" and "inline". Since
avoiding updates essentially turns lazy evaluation into call-by-name, you
could call it "cbn". Perhaps that suggests a different use case, though, so
"nonstrict", "unshared", or "nonlazy" might be more self-explanatory.

I'd also like to point out that avoiding updates can dramatically improve
the kind of speedups my tracing JIT can achieve. In particular, on some
benchmarks, it can optimise idiomatic Haskell code to the same level as
stream fusion. I can simulate that by avoiding *all* updates (i.e., using
call-by-name), but that would break some programs badly. (Technically,
that's all legal, but I'm pretty sure it breaks any tying-the-knot
tricks.)
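The refactoring discussed earlier in the thread (preventing sharing by adding a dummy parameter) can be sketched as follows (names are invented; behaviour shown is for -O0, since full laziness at -O may float the thunk back out and re-introduce sharing, as Joachim notes):

```haskell
-- The whole list is shared between 'last' and 'length', so it is
-- built once and stays live until both traversals finish.
upd :: Int -> Int
upd n = let l = [0..n] in last l + length l

-- Eta-expanding with a unit parameter makes 'l' a function, so (at
-- -O0) each call site rebuilds the list and it can be collected as
-- the traversal proceeds -- trading work for space.
noupd :: Int -> Int
noupd n = let l () = [0..n] in last (l ()) + length (l ())

main :: IO ()
main = print (upd 100000, noupd 100000)
```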
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-17 Thread Thomas Schilling
My thoughts on the matter got a little long, so I posted them here:

http://nominolo.blogspot.co.uk/2012/08/beyond-package-version-policies.html

On 17 August 2012 12:48, Heinrich Apfelmus apfel...@quantentunnel.dewrote:

 Brent Yorgey wrote:

 Yitzchak Gale wrote:

 For actively maintained packages, I think the
 problem is that package maintainers don't find
 out promptly that an upper bound needs to be
 bumped. One way to solve that would be a
 simple bot that notifies the package maintainer
 as soon as an upper bound becomes out-of-date.


 This already exists:

   http://packdeps.haskellers.com/


 Indeed. It even has RSS feeds, like this

  
  http://packdeps.haskellers.com/feed/reactive-banana

 Extremely useful!


 Best regards,
 Heinrich Apfelmus

 --
 http://apfelmus.nfshost.com







-- 
Push the envelope. Watch it bend.


Re: [Haskell-cafe] Haskell Platform - BSD License?

2012-07-31 Thread Thomas Schilling
You may concatenate the licenses of all the packages you are using. GHC
includes the LGPL libgmp. The license file for each package is mentioned in
the .cabal file.


Re: [Haskell-cafe] Current state of garbage collection in Haskell

2012-07-29 Thread Thomas Schilling
GHC does not provide any form of real-time guarantees (and support for
them is not planned).

That said, it's not as bad as it sounds:

 - Collecting the first (young) generation is fast and you can control
the size of that first generation via runtime system (RTS) options.

 - The older generation is collected rarely and can be collected in parallel.

 - You can explicitly invoke the GC via System.Mem.performGC

In a multi-threaded / multi-core program collecting the first
generation still requires stopping all application threads even though
only one thread (CPU) will perform GC (and having other threads help
out usually doesn't work out due to locality issues). This can be
particularly expensive if the OS decides to deschedule an OS thread,
as then the GHC RTS has to wait for the OS. You can avoid that
particular problem by properly configuring the OS (e.g., via the Linux
boot option isolcpus=... and taskset(8)). The GHC team has been working
on an independent *local* GC, but it's unlikely to make it into the main
branch at this time. It turned out to be very difficult to implement,
and the gains were not large enough. Building a fully-concurrent GC is
(AFAICT) even harder.

I don't know how long the pause times for your 500MB live heap would
be. Generally, you want your heap to be about twice the size of your
live data. Other than that, it depends heavily on the characteristics
of your heap objects. E.g., if it's mostly arrays of unboxed
non-pointer data, then it'll be very quick to collect (since the GC
doesn't have to do anything with the contents of these arrays). If
it's mostly many small objects with pointers to other objects, GC will
be very expensive and bound by the latency of your RAM. So, I suggest
you run some tests with realistic heaps.
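As a starting point for such tests, here is a sketch using the modern GHC.Stats API (GHC 8.2 and later, which postdates this thread); note that the RTS only collects statistics when the program is run with +RTS -T:

```haskell
import System.Mem (performGC)
import GHC.Stats (getRTSStats, getRTSStatsEnabled,
                  major_gcs, gc_elapsed_ns)

main :: IO ()
main = do
  -- Build up some live heap so the collection has work to do.
  let xs = [1..1000000] :: [Int]
  print (sum xs)
  performGC                   -- explicit major collection
  enabled <- getRTSStatsEnabled
  if enabled
    then do
      s <- getRTSStats
      putStrLn $ "major GCs: " ++ show (major_gcs s)
             ++ ", total GC time (ns): " ++ show (gc_elapsed_ns s)
    else putStrLn "run with +RTS -T to enable GC statistics"
```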

Regarding keeping up, Simon Marlow is the main person working on GHC's
GC (often collaborating with others) and he keeps a list of papers on
his homepage: http://research.microsoft.com/en-us/people/simonmar/

If you have further questions about GHC's GC, you can ask them on the
glasgow-haskell-us...@haskell.org mailing list (but please consult the
GHC user's guide section on RTS options first).

HTH

On 29 July 2012 08:52, C K Kashyap ckkash...@gmail.com wrote:
 Hi,
 I was looking at a video that talks about GC pauses. That got me curious
 about the current state of GC in Haskell - say ghc 7.4.1.
 Would it suffer from lengthy pauses when we talk about memory in the range
 of 500M +?
 What would be a good way to keep abreast with the progress on haskell GC?
 Regards,
 Kashyap





-- 
Push the envelope. Watch it bend.



Re: [Haskell-cafe] Logging pure code

2012-07-29 Thread Thomas Schilling
On 27 July 2012 14:52, Marco Túlio Gontijo e Silva
marcotmar...@gmail.com wrote:
 thread blocked indefinitely in an MVar operation

IIRC, that means that a thread is blocked on an MVar and the MVar is
only reachable by that thread.  You said you tried adding NOINLINE,
which is usually required for unsafePerformIO. Did you make sure to
recompile everything from scratch after doing so? Other than that, you
may ask on the glasgow-haskell-users mailing list as this is read more
frequently by the GHC team.

/ Thomas
-- 
Push the envelope. Watch it bend.



Re: [Haskell-cafe] Criterion setup/teardown functions?

2012-07-17 Thread Thomas Schilling
On 17 July 2012 20:45, tsuraan tsur...@gmail.com wrote:
 Is there anything in Criterion that allows for a benchmark to run some
 code before or after the thing that it's timing?  As an example, I'd
 like to time a bunch of database inserts, but beforehand I want to
 create the target table, and afterwards I'd like to delete it.  I
 don't really care to have the time spent on the create/delete recorded
 in the test run's timing, if that's possible to do.

See the second argument of defaultMainWith
http://hackage.haskell.org/packages/archive/criterion/0.6.0.0/doc/html/Criterion-Main.html#v:defaultMainWith.

The Criterion monad supports running arbitrary IO actions via liftIO.

/ Thomas



Re: [Haskell-cafe] Memory corruption issues when using newAlignedPinnedByteArray, GC kicking in?

2012-07-10 Thread Thomas Schilling
I think you should ask this question on the glasgow-haskell-users
mailing list: http://www.haskell.org/mailman/listinfo/glasgow-haskell-users

On 10 July 2012 18:20, Nicolas Trangez nico...@incubaid.com wrote:
 All,

 While working on my vector-simd library, I noticed somehow memory I'm
 using gets corrupted/overwritten. I reworked this into a test case, and
 would love to get some help on how to fix this.

 Previously I used some custom FFI calls to C to allocate aligned memory,
 which yields correct results, but this has a significant (+- 10x)
 performance impact on my benchmarks. Later on I discovered the
 newAlignedPinnedByteArray# function, and wrote some code using this.

 Here's what I did in the test case: I created an MVector instance, with
 the exact same implementation as vector's
 Data.Vector.Storable.Mutable.MVector instance, except for basicUnsafeNew
 where I pass one more argument to mallocVector [1].

 I also use 3 different versions of mallocVector (depending on
 compile-time flags):

 mallocVectorOrig [2]: This is the upstream version, discarding the
 integer argument I added.

 Then here's my first attempt, very similar to the implementation of
 mallocPlainForeignPtrBytes [3] at [4] using GHC.* libraries.

 Finally there's something similar at [5] which uses the 'primitive'
 library.

 The test case creates vectors of increasing size, then checks whether
 they contain the expected values. For the default implementation this
 works correctly. For both others it fails at some random size, and the
 values stored in the vector are not exactly what they should be.

 I don't understand what's going on here. I suspect I lack a reference
 (or something along those lines) so GC kicks in, or maybe the buffer
 gets relocated, whilst it shouldn't.

 Basically I'd need something like

  GHC.ForeignPtr.mallocPlainAlignedForeignPtrBytes :: Int -> Int -> IO
  (ForeignPtr a)

 Thanks,

 Nicolas

 [1] https://gist.github.com/3084806#LC37
 [2] https://gist.github.com/3084806#LC119
 [3]
 http://hackage.haskell.org/packages/archive/base/latest/doc/html/src/GHC-ForeignPtr.html
 [4] https://gist.github.com/3084806#LC100
 [5] https://gist.github.com/3084806#LC81







-- 
Push the envelope. Watch it bend.



Re: [Haskell-cafe] not enough fusion?

2012-06-27 Thread Thomas Schilling
It's described in Andy Gill's PhD thesis (which describes the
foldr/build fusion).
http://ittc.ku.edu/~andygill/paper.php?label=GillPhD96 Section 4.4
describes the basic ideas.  There aren't any further details, though.

Max's Strict Core paper also describes it a bit (Section 6):
http://www.cl.cam.ac.uk/~mb566/papers/tacc-hs09.pdf

On 27 June 2012 08:58, Dominic Steinitz domi...@steinitz.org wrote:
 Duncan Coutts duncan.coutts at googlemail.com writes:

 This could in principle be fixed with an arity raising transformation,

 Do you have a reference to arity raising transformations?

 Thanks, Dominic.





-- 
Push the envelope. Watch it bend.



Re: [Haskell-cafe] attoparsec double precision, quickCheck and aeson

2012-06-11 Thread Thomas Schilling
Bryan, do you remember what the issue is with C++ in this case?  I
thought adding a wrapper with extern "C" definitions should do the
trick for simpler libraries (as this one seems to be).  Is the
interaction with the memory allocator the issue?  Linker flags?

On 11 June 2012 06:38, Bryan O'Sullivan b...@serpentine.com wrote:
   On Wed, Jun 6, 2012 at 6:20 AM, Doug McIlroy d...@cs.dartmouth.edu
 wrote:

 Last I looked (admittedly quite a while ago), the state of
 the art was strtod in http://www.netlib.org/fp/dtoa.c.
 (Alas, dtoa.c achieves calculational perfection via a
 murmuration of #ifdefs.)


 That was indeed the state of the art for about three decades, until Florian
 Loitsch showed up in 2010 with an algorithm that is usually far
 faster: http://www.serpentine.com/blog/2011/06/29/here-be-dragons-advances-in-problems-you-didnt-even-know-you-had/

 Unfortunately, although I've written Haskell bindings to his library, said
 library is written in C++, and our FFI support for C++ libraries is
 negligible and buggy. As a result, that code is disabled by default.

 It's disheartening to hear that important Haskell code has
 needlessly fallen from perfection--perhaps even deliberately.


 Indeed (and yes, it's deliberate). If I had the time to spare, I'd attempt
 to fix the situation by porting Loitsch's algorithm to Haskell or C, but
 either one would be a lot of work - the library is 5,600 lines of tricky
 code.







Re: [Haskell-cafe] High memory usage with 1.4 Million records?

2012-06-08 Thread Thomas Schilling
On 8 June 2012 01:39, Andrew Myers asm...@gmail.com wrote:
 Hi Cafe,
 I'm working on inspecting some data that I'm trying to represent as records
 in Haskell and seeing about twice the memory footprint I was
 expecting.

That is to be expected in a garbage-collected language. If your
program requires X bytes of memory then allocators will usually
trigger garbage collection once the heap reaches a size of 2X bytes.
If it didn't do this then every allocation would require a GC.  You
can change this factor with +RTS -F option.  E.g., +RTS -F1.5 should
reduce this to only 50% overhead, but will trigger more frequent
garbage collections. To find the actual residency (live data), see the
output of +RTS -s.

There may still be room for improvement.  For example, you could try
turning on the compacting GC -- which trades GC performance for lower
memory usage.  You can enable it with +RTS -c.

The reason that -hc runs slowly is that it performs a GC every 1s (I
think).  You can change this using the -i option.  E.g.  -i60 only
examines the heap every 60s.  It will touch almost all your live data,
so it is an inherently RAM-speed bound operation.

HTH,
 / Thomas



Re: [Haskell-cafe] for = flip map

2012-03-29 Thread Thomas Schilling
On 29 March 2012 22:03, Sjoerd Visscher sjo...@w3future.com wrote:
 Some more bikeshedding:

 Perhaps ffor, as in

    ffor = flip fmap

 or perhaps

     infixr 0 $$
     ($$) = flip ($)

     xs $$ \x -> ...

I don't think it makes sense to add a whole new operator for that.
You can just use sections:

  ($ xs) \x -> ...

The reason you can't do this with * is the ordering of effects.

I have to admit, though, that the above isn't exactly readable.  The
non-operator version is somewhat more readable:

  (`map` xs) \x -> ...

I'd still prefer for or foreach.
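For concreteness, the whole proposal amounts to a couple of one-liners (names as suggested in this thread; none of these definitions are in base):

```haskell
-- `flip map` under the proposed name; note it clashes with
-- Data.Traversable.for, so a qualified import or another name
-- (e.g. foreach) would be needed in practice.
for :: [a] -> (a -> b) -> [b]
for = flip map

-- The Functor-general variant mentioned above.
ffor :: Functor f => f a -> (a -> b) -> f b
ffor = flip fmap

-- The operator variant: xs $$ \x -> ...
infixr 0 $$
($$) :: a -> (a -> b) -> b
($$) = flip ($)
```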


 (cf. **)

 In both cases they should go in Data.Functor

 Sjoerd

 On Mar 28, 2012, at 11:26 PM, e...@ezrakilty.net wrote:

 I would very much like to see a standard function for flip map along
 these lines. I think it would make a lot of code more readable.

 Like the OP, I use for in my own code. It's unfortunate that
 Data.Traversable takes the name with another type. Two options would be
 to (a) reuse the name in Data.List and force people to qualify as
 necessary, or (b) choose another name for flip map.

 Regarding other possible names: forall is a keyword and forAll is used
 by QuickCheck. One possibility would be foreach.

 Ezra

 On Wed, Mar 28, 2012, at 10:19 PM, Christopher Done wrote:
 On 28 March 2012 22:05, Matthew Steele mdste...@alum.mit.edu wrote:
 Doesn't for already exist, in Data.Traversable?   Except that for =
 flip traverse.

 Traverse doesn't fit the type of fmap, it demands an extra type
 constructor:

 traverse :: (Traversable t, Applicative f) => (a -> f b) -> t a -> f (t b)

 fmap :: Functor f => (a -> b) -> f a -> f b

 Note the (a -> f b) instead of (a -> b).

 E.g.

 fmap :: (a -> b) -> [a] -> [b]

 can't be expressed with traverse, you can only get this far:

 traverse :: (a -> [b]) -> [a] -> [[b]]

 Unless I'm missing something.




 --
 Sjoerd Visscher
 https://github.com/sjoerdvisscher/blog











Re: [Haskell-cafe] haskell-platform vs macports

2012-03-22 Thread Thomas Schilling
If you're not otherwise attached to MacPorts, you might want to check
out Homebrew [1].  Its integration with the rest of OS X is generally
smoother, and I haven't come across any missing packages yet.

[1]: http://mxcl.github.com/homebrew/

On 22 March 2012 16:34, Warren Harris warrensomeb...@gmail.com wrote:
 I assume many haskell users out there on macs are also users of
 macports, and I bet they've hit this same issue that I've hit numerous times.
 The problem is that there are 2 incompatible versions of libiconv -- one that 
 ships with the mac, and one that's built with macports and that many 
 macports-compiled libraries depend on.

 Work-arounds have been documented in numerous places (notably here: 
 http://blog.omega-prime.co.uk/?p=96), but if you are trying to link with a 
 macports-compiled libraries that depends on /opt/local/lib/libiconv.2.dylib, 
 your only alternative seems to be building ghc from source in order to avoid 
 the incompatibility. I hit this problem while trying to build a 
 foreign-function interface to libarchive.

 So I built ghc from scratch -- in fact I built the whole haskell-platform. 
 This was relatively painless, and fixed the libiconv problem. However, it 
 brings me to the real point of my message, which is that the version of 
 haskell-platform available via macports is way out of date (2009.2.0.2 
 http://www.macports.org/ports.php?by=namesubstr=haskell-platform).

 I'm wondering whether the haskell-platform team has considered maintaining a 
 macports version of the haskell-platform for us mac users in addition to the 
 binary install that's available now. This would avoid the incompatibilities 
 such as this nagging one with libiconv. Perhaps it's just a matter of 
 maintaining template versions of the port files?

 Warren

 Undefined symbols for architecture x86_64:
  _locale_charset, referenced from:
     _localeEncoding in libHSbase-4.3.1.0.a(PrelIOUtils.o)
  _iconv_close, referenced from:
     _hs_iconv_close in libHSbase-4.3.1.0.a(iconv.o)
    (maybe you meant: _hs_iconv_close)
  _iconv, referenced from:
     _hs_iconv in libHSbase-4.3.1.0.a(iconv.o)
    (maybe you meant: _hs_iconv, _hs_iconv_open , _hs_iconv_close )
  _iconv_open, referenced from:
     _hs_iconv_open in libHSbase-4.3.1.0.a(iconv.o)
    (maybe you meant: _hs_iconv_open)
 ld: symbol(s) not found for architecture x86_64
 collect2: ld returned 1 exit status
 cabal: Error: some packages failed to install:
 Libarchive-0.1 failed during the building phase. The exception was:
 ExitFailure 1







Re: [Haskell-cafe] Are there arithmetic composition of functions?

2012-03-19 Thread Thomas Schilling
I don't understand this discussion.  He explicitly said If you are
willing to depend on a recent version of base.  More precisely, he
meant GHC 7.4 which includes the latest version of base.  Yes, this is
incompatible with the Haskell2010 standard, but it did go through the
library submission process (unless I'm mistaken).

It is also easy to add nonsense instances for functions to make this
work with the Haskell2010 definition of the Num class.

   instance Eq (a -> b) where
     f == g = error "Cannot compare two functions (undecidable for
infinite domains)"
   instance Show (a -> b) where show _ = "function"

Yes, these instances are not very useful, but they let you get around
this unnecessary restriction of the Num class.  (I expect this to be
fixed in future versions of the Haskell standard.)
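To make the thread's subject concrete: once the Eq/Show obligations are out of the way (or on base >= 4.5, where they are gone), arithmetic composition of functions is just the pointwise lifting — a sketch:

```haskell
-- Pointwise arithmetic on functions: (f + g) x = f x + g x, and a
-- numeric literal denotes the constant function returning that value.
instance Num b => Num (a -> b) where
  f + g       = \x -> f x + g x
  f - g       = \x -> f x - g x
  f * g       = \x -> f x * g x
  negate f    = negate . f
  abs f       = abs . f
  signum f    = signum . f
  fromInteger = const . fromInteger
```

With this in scope, (sin + cos) 0 evaluates to 1.0, and numeric literals act as constant functions.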

On 20 March 2012 02:37, Richard O'Keefe o...@cs.otago.ac.nz wrote:

 On 20/03/2012, at 2:27 PM, Jerzy Karczmarczuk wrote:

 Richard O'Keefe:
      class (Eq a, Show a) => Num a
        where (+) (-) (*) negate abs signum fromInteger

 where functions are for good reason not members of Eq or Show.

 This is an old song, changed several times. I have no intention to discuss, 
 but please, Richard O'Keefe:
 WHICH GOOD REASONS??

 It is still there in the Haskell 2010 report.

 The UHC user manual at
 http://www.cs.uu.nl/groups/ST/Projects/ehc/ehc-user-doc.pdf
 lists differences between UHC and both Haskell 98 and
 Haskell 2010, but is completely silent about any change to
 the interface of class Num, and in fact compiling a test
 program that does 'instance Num Foo' where Foo is *not* an
 instance of Eq or Show gives me this response:

 [1/1] Compiling Haskell                  mynum                  (mynum.hs)
 EH analyses: Type checking
 mynum.hs:3-11:
  Predicates remain unproven:
    preds: UHC.Base.Eq mynum.Foo:


 This is with ehc-1.1.3, Revision 2422:2426M,
 the latest binary release, downloaded and installed today.
 The release date was the 31st of January this year.

 GHC 7.0.3 doesn't like it either.  I know ghc 7.4.1 is
 out, but I use the Haskell Platform, and the currently
 shipping version says plainly at
 http://hackage.haskell.org/platform/contents.html
 that it provides GHC 7.0.4.

 You may have no intention of discussing the issue,
 but it seems to *me* that this will not work in 2012
 Haskell compiler mostly conforming to Haskell 2010
 because Haskell 2010 says it shouldn't work is a pretty
 sound position to take.








Re: [Haskell-cafe] Using multiplate to get free variables from a syntax tree

2012-02-25 Thread Thomas Schilling
That will give you the wrong answer for an expression like:

  (let x = 1 in x + y) + x

Unless you do a renaming pass first, you will end up both with a bound
x and a free x.
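To pin down the intended semantics, here is the same computation by direct recursion over a minimal stand-in AST (the Expr/Decl definitions are assumptions mirroring the constructors used in this thread, not anyone's actual code):

```haskell
-- Assumed minimal AST, mirroring the Con/EVar/Add/Let/:= constructors
-- used elsewhere in this thread.
type Var = String
data Expr = Con Int | EVar Var | Add Expr Expr | Let Decl Expr
data Decl = Var := Expr

-- Free variables by direct recursion, threading the set of bound names.
freeVars :: [Var] -> Expr -> [Var]
freeVars _     (Con _)  = []
freeVars bound (EVar v)
  | v `elem` bound      = []
  | otherwise           = [v]
freeVars bound (Add a b) = freeVars bound a ++ freeVars bound b
freeVars bound (Let (v := rhs) body) =
  freeVars bound rhs ++ freeVars (v : bound) body
```

For the expression above this yields ["y","x"]: the x inside the let is bound, the one outside is free.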

On 25 February 2012 16:29, Sjoerd Visscher sjo...@w3future.com wrote:

 On Feb 24, 2012, at 10:09 PM, Stephen Tetley wrote:

 I'm not familiar with Multiplate either, but presumably you can
 descend into the decl - collect the bound vars, then descend into the
 body expr.

 Naturally you would need a monadic traversal
 rather than an applicative one...


 It turns out the traversal is still applicative. What we want to collect are 
 the free and the declared variables, given the bound variables. ('Let' will 
 turn the declared variables into bound variables.) So the type is [Var] ->
 ([Var], [Var]). Note that this is a Monoid, thanks to the instances for ((->)
 r), (,) and []. So we can use the code from preorderFold, but add an
 exception for the 'Let' case.

 freeVariablesPlate :: Plate (Constant ([Var] -> ([Var], [Var])))
 freeVariablesPlate = handleLet (varPlate `appendPlate` multiplate
 freeVariablesPlate)
  where
    varPlate = Plate {
      expr = \x -> Constant $ \bounded -> ([ v | EVar v <- [x], v `notElem`
 bounded], []),
      decl = \x -> Constant $ const ([], [ v | v := _ <- [x]])
    }
    handleLet plate = plate { expr = exprLet }
      where
        exprLet (Let d e) = Constant $ \bounded ->
          let
            (freeD, declD) = foldFor decl plate d bounded
            (freeE, _)     = foldFor expr plate e (declD ++ bounded)
          in
            (freeD ++ freeE, [])
        exprLet x = expr plate x

 freeVars :: Expr -> [Var]
 freeVars = fst . ($ []) . foldFor expr freeVariablesPlate

 freeVars $ Let ("x" := Con 42) (Add (EVar "x") (EVar "y"))
 ["y"]











Re: [Haskell-cafe] Using multiplate to get free variables from a syntax tree

2012-02-25 Thread Thomas Schilling
No that's correct.  I have to say the multiplate code is incredibly
hard to decipher.

On 25 February 2012 19:47, Sjoerd Visscher sjo...@w3future.com wrote:
 I don't understand what you mean.

 ($ []) . foldFor expr freeVariablesPlate $ Add (Let ("x" := Con 1) (Add
 (EVar "x") (EVar "y"))) (EVar "x")
 (["y","x"],[])

 I.e. free variables y and x, no bound variables. Is that not correct?

 Sjoerd

 On Feb 25, 2012, at 7:15 PM, Thomas Schilling wrote:

 That will give you the wrong answer for an expression like:

  (let x = 1 in x + y) + x

 Unless you do a renaming pass first, you will end up both with a bound
 x and a free x.

 On 25 February 2012 16:29, Sjoerd Visscher sjo...@w3future.com wrote:

 On Feb 24, 2012, at 10:09 PM, Stephen Tetley wrote:

 I'm not familiar with Multiplate either, but presumably you can
 descend into the decl - collect the bound vars, then descend into the
 body expr.

 Naturally you would need a monadic traversal
 rather than an applicative one...


 It turns out the traversal is still applicative. What we want to collect 
 are the free and the declared variables, given the bound variables. ('Let' 
 will turn the declared variables into bound variables.) So the type is 
  [Var] -> ([Var], [Var]). Note that this is a Monoid, thanks to the
  instances for ((->) r), (,) and []. So we can use the code from
 preorderFold, but add an exception for the 'Let' case.

  freeVariablesPlate :: Plate (Constant ([Var] -> ([Var], [Var])))
 freeVariablesPlate = handleLet (varPlate `appendPlate` multiplate 
 freeVariablesPlate)
  where
    varPlate = Plate {
       expr = \x -> Constant $ \bounded -> ([ v | EVar v <- [x], v `notElem`
  bounded], []),
       decl = \x -> Constant $ const ([], [ v | v := _ <- [x]])
    }
    handleLet plate = plate { expr = exprLet }
      where
         exprLet (Let d e) = Constant $ \bounded ->
          let
            (freeD, declD) = foldFor decl plate d bounded
            (freeE, _)     = foldFor expr plate e (declD ++ bounded)
          in
            (freeD ++ freeE, [])
        exprLet x = expr plate x

  freeVars :: Expr -> [Var]
 freeVars = fst . ($ []) . foldFor expr freeVariablesPlate

  freeVars $ Let ("x" := Con 42) (Add (EVar "x") (EVar "y"))
  ["y"]






















Re: [Haskell-cafe] Undocumented cost-centres (.\) using auto-all, and SCC pragma not being honored

2012-02-15 Thread Thomas Schilling
On 15 February 2012 16:17, Dan Maftei ninestrayc...@gmail.com wrote:

 1 When profiling my code with -auto-all, my .prof file names some 
 sub-expressions with a backslash. Cf. below. What are these?

      e_step
       e_step.ewords
       e_step.\
        e_step.\.\
         e_step.update_counts
       e_step.fwords

 My e_step function binds seven expressions inside a let, then uses them in 
 two ugly nested folds. (Yes, very hackish)  As you can see, three such 
 expressions are named explicitly (ewords, fwords, and update_counts). But 
 where are the rest? Further, perhaps the backslashes have something to do 
 with the lambda expressions I am using in the two nested folds?

Yup, those are anonymous functions.


 2. A related question: I tried using the SCC pragma instead of auto-all. I 
 added it to all seven expressions inside the let, and to the nested folds. 
 However, only two showed up in the .prof file! How come?

It would be helpful if you pasted the code.  I think SCC pragmas
around lambdas get ignored and you should put them inside.  (It may be
the other way around, though.)
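As an illustration (the function and cost-centre names here are made up, not from the original code), annotating the body of the lambda rather than the lambda itself keeps the cost centre visible:

```haskell
-- Illustrative only: "step" is an invented cost-centre name.  The SCC
-- pragma annotates the expression inside the lambda; without -prof the
-- pragma is accepted and silently ignored, so this compiles either way.
sumSquares :: [Int] -> Int
sumSquares = foldl (\acc x -> {-# SCC "step" #-} (acc + x * x)) 0
```

When compiled with -prof, time and allocation inside the fold's step should then be attributed to "step" rather than to an anonymous \ entry.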



 Thanks
 ninestraycats







Re: [Haskell-cafe] How do I get official feedback (ratings) on my GSoC proposal?

2012-02-13 Thread Thomas Schilling
It's usually the (potential) mentors who do the rating.  I know we did that
two years ago; can't remember last year, though.

On 13 February 2012 23:45, Greg Weber g...@gregweber.info wrote:

 http://hackage.haskell.org/trac/summer-of-code/report/1
 There is a column 'Priority'. And there are now several unrated proposals.

 On Mon, Feb 13, 2012 at 3:40 PM, Johan Tibell johan.tib...@gmail.com
 wrote:
  On Mon, Feb 13, 2012 at 3:20 PM, Greg Weber g...@gregweber.info wrote:
 
  Other than changing the status myself, how do I get a priority
  attached to my GSoC proposal?
 
 
  What priorities are you referring to?
 
  -- Johan
 







Re: [Haskell-cafe] ghc-api Static Semantics?

2012-01-26 Thread Thomas Schilling
On 26 January 2012 09:24, Christopher Brown cm...@st-andrews.ac.uk wrote:
 Hi Thomas,

 By static semantics I mean use and bind locations for every name in the
AST.

Right, that's what the renamer does in GHC.  The GHC AST is parameterised
over the type of identifiers used.  The three different identifier types
are:


   - RdrName: is the name as it occurred in source code. This is the output
   of the parser.
   - Name: is basically RdrName + unique ID, so you can distinguish two
   xs bound at different locations (this is what you want). This is the
   output of the renamer.
   - Id: is Name + Type information and consequently is the output of the
   type checker.

Diagram:

   String  --parser-->  HsModule RdrName  --renamer-->  HsModule Name
 --type-checker-->  HsBinds Id

Since you can't hook in-between renamer and type checker, it's perhaps more
accurately depicted as:

   String  --parser-->  HsModule RdrName  --renamer+type-checker-->
 (HsModule Name,  HsBinds Id)
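The parameterisation can be sketched with a toy model (these types are illustrative stand-ins, far simpler than GHC's real definitions):

```haskell
import Data.Maybe (fromMaybe)

-- Toy stand-ins for GHC's identifier types.
newtype RdrName = RdrName String      deriving (Eq, Show)
data    Name    = Name String Int     deriving (Eq, Show)  -- source name + unique

-- An AST parameterised over its identifier type, like GHC's HsSyn.
data HsExpr id
  = HsVar id
  | HsLam id (HsExpr id)
  | HsApp (HsExpr id) (HsExpr id)
  deriving (Eq, Show)

-- A miniature "renamer": give each binder a fresh unique and resolve
-- occurrences to the nearest enclosing binder; unbound names get -1.
rename :: HsExpr RdrName -> HsExpr Name
rename e0 = fst (go [] 0 e0)
  where
    go env u (HsVar (RdrName v)) =
      (HsVar (fromMaybe (Name v (-1)) (lookup v env)), u)
    go env u (HsLam (RdrName v) body) =
      let n = Name v u
          (body', u') = go ((v, n) : env) (u + 1) body
      in (HsLam n body', u')
    go env u (HsApp f x) =
      let (f', u')  = go env u  f
          (x', u'') = go env u' x
      in (HsApp f' x', u'')
```

After rename, two xs bound at different locations carry different uniques, which is exactly the distinction the Name stage buys you.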

The main reasons why it's tricky to use the GHC API are:


   1. You need to setup the environment of packages etc.  E.g., the renamer
   needs to look up imported modules to correctly resolve imported names (or
   give a error).
   2. The second is that the current API is not designed for external use.
As I mentioned, you cannot run renamer and typechecker independently,
   there are dozens of invariants, there are environments being updated by the
   various phases, etc.  For example, if you want to generate code it's
    probably best to either generate HsModule RdrName or perhaps the Template
   Haskell API (never tried that path).


 I'm steering towards haskell-src-exts right now as the sheer complexity
of the ghc-api is putting me off. I need something simple, as I can't be
spending all my time learning the ghc-api and hacking it together to do
what I want. It does look a bit of a mess. Just trying to do simple things
like parsing a file and showing its output proved to be much more
complicated than it really needed to be.

 We have decided to take it on. :)

Could you clarify that?  Are you doing everything in haskell-src-exts or
are you using the GHC API and translate the result into haskell-src-exts?
The former might be easier to implement, the latter could later be extended
to give you type info as well (without the need to implement a whole type
checker that most likely will bit rot compared to GHC sooner or later).

/ Thomas



Re: [Haskell-cafe] ghc-api Static Semantics?

2012-01-26 Thread Thomas Schilling
On 26 January 2012 16:33, JP Moresmau jpmores...@gmail.com wrote:

 Thomas, thank you for that explanation about the different type of
 identifiers in the different phases of analysis. I've never seen that
 information so clearly laid out before, can it be added to the wikis
 (in http://hackage.haskell.org/trac/ghc/wiki/Commentary/Compiler/API
 or http://www.haskell.org/haskellwiki/GHC/As_a_library maybe)? I think
 it would be helpful to all people that want to dive into the GHC API.


Will do.



 On a side note, I'm going to do something very similar in my
 BuildWrapper project (which is now the backend of the EclipseFP IDE
 plugins): instead of going back to the API every time the user
 requests to know the type of something in the AST, I'm thinking of
 sending the whole typed AST to the Java code. Maybe that's something
 Christopher could use. Both the BuildWrapper code and Thomas's scion
 code are available on GitHub, as they provide examples on how to use
 the GHC API.


I really don't think you want to do much work on the front-end as that will
just need to be duplicated for each front-end.  That was the whole point of
building Scion in the first place.  I understand, of course, that Scion is
not useful enough at this time.

Well, I currently don't have much time to work on Scion, but the plan is as
follows:

  - Scion becomes a multi-process architecture.  It has to be since it's
not safe to run multiple GHC sessions inside the same process.  Even if
that were possible, you wouldn't be able to, say, have a profiling compiler
and a release compiler in the same process due to how static flags work.
Separate processes have the additional advantage that you can kill them if
they use too much memory (e.g., because you can't unload loaded interfaces).

  - Scion will be based on Shake and GHC will mostly be used in one-shot
mode (i.e., not --make).  This makes it easier to handle preprocessed
files.  It also allows us to generate and update meta-information on
demand.  I.e., instead of parsing and typechecking a file and then caching
the result for the current file, Scion will simply generate meta
information whenever it (re-)compiles a source file and writes that meta
information to a file.  Querying or caching that meta information then is
completely orthogonal to generating it.  The most basic meta information
would be a type-annotated version of the compiled AST (possibly + warnings
and errors from the last time it was compiled).  Any other meta information
can then be generated from that.

 - The GHCi debugger probably needs to be treated specially.  There also
should be automatic detection of files that aren't supported by the
bytecode compiler (e.g., those using UnboxedTuples) and force compilation
to machine code for those.

 - The front-end protocol should be specified somewhere.  I'm thinking
about using protobuf specifications and then use ways to generate custom
formats from that (e.g., JSON, Lisp S-Expressions, XML?).  And if the
frontend supports protocol buffers, then it can use that and be fast.  That
also means that all serialisation code can be auto-generated.

I won't have time to work on this before the ICFP deadline (and only very
little afterwards), but Scion is not dead (just hibernating).



 JP


 On Thu, Jan 26, 2012 at 2:31 PM, Thomas Schilling
 nomin...@googlemail.com wrote:
 
 
  On 26 January 2012 09:24, Christopher Brown cm...@st-andrews.ac.uk
 wrote:
  Hi Thomas,
 
  By static semantics I mean use and bind locations for every name in the
  AST.
 
  Right, that's what the renamer does in GHC.  The GHC AST is parameterised
  over the type of identifiers used.  The three different identifier types
  are:
 
  RdrName: is the name as it occurred in source code. This is the output of
  the parser.
  Name: is basically RdrName + unique ID, so you can distinguish two xs
  bound at different locations (this is what you want). This is the output
 of
  the renamer.
  Id: is Name + Type information and consequently is the output of the type
  checker.
 
  Diagram:
 
  String  --parser-->  HsModule RdrName  --renamer-->  HsModule Name
    --type-checker-->  HsBinds Id
 
  Since you can't hook in-between renamer and type checker, it's perhaps
 more
  accurately depicted as:
 
  String  --parser-->  HsModule RdrName  --renamer+type-checker-->
   (HsModule Name,  HsBinds Id)
 
  The main reasons why it's tricky to use the GHC API are:
 
  You need to setup the environment of packages etc.  E.g., the renamer
 needs
  to look up imported modules to correctly resolve imported names (or give
 a
  error).
  The second is that the current API is not designed for external use.  As
 I
  mentioned, you cannot run renamer and typechecker independently, there
 are
  dozens of invariants, there are environments being updated by the various
  phases, etc.  For example, if you want to generate code it's probably
 best
  to either generate HsModure RdrName or perhaps the Template Haskell API
  (never

Re: [Haskell-cafe] ghc-api Static Semantics?

2012-01-25 Thread Thomas Schilling
I assume by static semantics you mean the renamed Haskell source code.
Due to template Haskell it (currently) is not possible to run the
renamer and type checker separately.  Note that the type checker
output is very different in shape from the renamed output.  The
renamed output mostly follows the original source definitions, but the
type checker output is basically only top-level definitions (no
instances, classes have become data types, etc.) so it can be a bit
tricky to map types back to the input terms.

Still, I think solution with the best trade-off between
maintainability and usability is to use the GHC API and annotate a
haskell-src-exts representation of the given input file.  The GHC AST
structures are volatile, and have lots of ugly invariants that need to
be maintained.  E.g., some fields are only defined after renaming and
others may no longer be defined after renaming.  If you look at
those fields when they are not defined, you get an error.

Let me know if you decide to take on this project.


On 24 January 2012 10:35, Christopher Brown cm...@st-andrews.ac.uk wrote:


 Have you looked at ghc-syb-utils, which gives a neat way to print an AST?

 http://hackage.haskell.org/packages/archive/ghc-syb-utils/0.2.1.0/doc/html/GHC-SYB-Utils.html


 Yes I found that yesterday!

 Chris.




 --
 JP Moresmau
 http://jpmoresmau.blogspot.com/








Re: [Haskell-cafe] Code generation and optimisation for compiling Haskell

2012-01-13 Thread Thomas Schilling
JHC itself is based upon Boquist's GRIN language described in his PhD
thesis: Code Optimization Techniques for Lazy Functional Languages
http://mirror.seize.it/papers/Code%20Optimization%20Techniques%20for%20Lazy%20Functional%20Languages.pdf

On 13 January 2012 01:50, Jason Dagit dag...@gmail.com wrote:
 On Tue, Jan 10, 2012 at 9:25 AM, Steve Horne
 sh006d3...@blueyonder.co.uk wrote:

 Also, what papers should I read? Am I on the right lines with the ones I've
 mentioned above?

 Thomas Schilling gave you a good response with papers so I will give
 you a different perspective on where to look.

 Most of the Haskell implementations were written by academics studying
 languages and compilers.  This is good but it also implies that the
 implementors are likely to share biases and assumptions.  I know of
 one Haskell compiler in particular that was written by someone who did
 not know Haskell when starting the project.  The compiler was
 developed to be different than GHC.  That person was John Meacham.  He
 created JHC, a work in progress, so you might want to study his
 compiler and implementation notes as they should provide a different
 perspective on how to tackle Haskell implementation and optimization.

 http://repetae.net/computer/jhc/

 I hope that helps,
 Jason






Re: [Haskell-cafe] Code generation and optimisation for compiling Haskell

2012-01-11 Thread Thomas Schilling
Based on your stated background, the best start would be the (longer)
paper on the Spineless Tagless G-machine [1].  It describes how graph
reduction is actually implemented efficiently.  Since then there have
been two major changes to this basic implementation: Use of eval/apply
(a different calling convention) [2] and constructor tagging [3]
(which drastically reduces the number of indirect branches from the
original STG approach).

In addition to this fairly low-level stuff, there are very powerful
optimisations performed at a higher level.  For a taste see stream
fusion [4].

If you're done with these, feel free to ask for more. :)

[1]: http://research.microsoft.com/apps/pubs/default.aspx?id=67083
[2]: http://research.microsoft.com/en-us/um/people/simonpj/papers/eval-apply/
[3]: 
http://research.microsoft.com/en-us/um/people/simonpj/papers/ptr-tag/index.htm
[4]: http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.104.7401

On 10 January 2012 17:25, Steve Horne sh006d3...@blueyonder.co.uk wrote:

 Although I'm far from being an expert Haskell programmer, I think I'm ready
 to look into some of the details of how it's compiled. I've a copy of Modern
 Compiler Design (Grune, Bal, Jacobs and Langendoen) - I first learned a lot
 of lexical and parsing stuff from it quite a few years ago. Today, I started
 looking through the functional languages section - I've read it before, but
 never absorbed much of it.

 Graph reduction, lambda lifing, etc - it seems pretty simple. Far too
 simple. It's hard to believe that decent performance is possible if all the
 work is done by a run-time graph reduction engine.

 Simon Peyton Jones has written a couple of books on implementing functional
 languages which are available for free download. At a glance, they seem to
 covers similar topics in much more detail. However, they're from 1987 and
 1992. Considering SPJs period of despair when he couldn't get practical
 performance for monadic I/O, these seem very dated.

 Some time ago, I made a note to look up the book Functional Programming and
 Parallel Graph Rewriting (I forget why) but again that's from the early
 90's. I've also got a note to look up Urban Boquists thesis.

 SPJ also has some papers on compilation -
 http://research.microsoft.com/en-us/um/people/simonpj/papers/papers.html#compiler
 - and the papers on optimisation by program transformation have caught my
 eye.

 Are there any current text-books that describe the techniques used by
 compilers like GHC to generate efficient code from functional languages?
 It's OK to assume some knowledge of basic compiler theory - the important
 stuff is code generation and optimisation techniques for lazy functional
 languages in particular.

 Also, what papers should I read? Am I on the right lines with the ones I've
 mentioned above?


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



-- 
Push the envelope. Watch it bend.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Type checker for haskell-src-exts (was: Typechecking Using GHC API)

2011-12-18 Thread Thomas Schilling
On 17 December 2011 05:39, Gregory Crosswhite gcrosswh...@gmail.com wrote:


 On Dec 17, 2011, at 9:58 AM, Thomas Schilling wrote:

 Well... I've gotten a little bit of a different perspective on this
 since working at a company with very high code quality standards (at
 least for new code).  There is practically no observable code review
 happening.  I'm sure Dimitrios and Simon PJ review most of each
 other's code every now and then, but overall there is very little code
 review happening (and no formal, recorded code review whatsoever).
 Cleaning up the GHC code base is a huge task -- it uses lots of dirty
 tricks (global variables, hash tables, unique generation is
 non-deterministic, ...) which often complicate efforts tremendously (I
 tried).  The lack of unit tests doesn't help (just rewriting code so
 that it can be tested would help quite a bit).


 So in other words, would it be helpful it we recruited GHC janitors?  That
 is, similar to how we have the Trac which gives people bug reports to pick
 out and work on, would it make sense to have a Trac or some other process
 which gives people chunks of code to clean up and/or make easier to test?

 (I am of course inspired in suggesting this by the Linux kernel janitors,
 though it doesn't look like the project has survived, and maybe that
 portends ill for trying to do the same for GHC...)


I'm not sure that would work too well.  GHC is a bit daunting to start with
(it gets better after a while) and just cleaning up after other people is
little fun.  I would be more interested in setting up a process that
enables a clean code base (and gradually cleans up existing shortcomings).
 Of course, I'd prefer to do rather than talk, so I'm not pushing this at
this time.  At the moment, I think we should:

 1. Find a plan to get rid of the existing bigger design issues, namely:
the use of global variables for static flags (may require extensive
refactorings throughout the codebase), the use of nondeterministic uniques
for symbols (may cost performance)

 2. Build up a unit test suite (may include QuickCheck properties).  The
idea is that if our code needs to be tested from within Haskell (and not
just from the command line) then that encourages a design that can be used
better as a library.  It may also catch some bugs earlier and make it
easier to change some things.  (Note: the high-level design of GHC is
indeed very modular, but the implementation isn't so much.)

 3. Set up a code review system.  Every commit should have to go through
code review -- even by core contributors.  Even experienced developers
don't produce perfect code all the time.  Currently, we have some
post-commit review.  A possible code review system for Git is Gerrit.

Of course, the GHC developers would need to get on board with this.  As I
said, I currently don't have the time to pursue this any further, but I'm
planning to apply this to my projects as much as possible.

 / Thomas
-- 
Push the envelope. Watch it bend.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [Haskell] Proposal to incorporate Haskell.org

2011-12-16 Thread Thomas Schilling
On 16 December 2011 11:10, Ganesh Sittampalam gan...@earth.li wrote:
 On 16/12/2011 10:59, Giovanni Tirloni wrote:
 On Fri, Dec 16, 2011 at 7:08 AM, Ganesh Sittampalam gan...@earth.li wrote:

       Q: If an umbrella non-profit organisation The Haskell Foundation was
          created, would haskell.org be able to join it?
       A: Yes. It's likely that in such a scenario, the Haskell Foundation
          would become the owner of the haskell.org domain name, with the
          cost divided between the members. The entity that is part of the
          SFC would be renamed community.haskell.org in order to avoid
          confusion.


 Would it be a too ambitious goal to create the Haskell Foundation at
 this moment?

 It would be a lot of administrative effort - managing accounts, tax
 filings, etc. While it would give us more control, I don't think the
 benefits would be very significant.

 So in my view for now it's best not to go it alone.

I agree.  If at some point we feel that having a Haskell Foundation
would be desirable (despite the additional overheads) there shouldn't
be anything stopping us from doing so.  Are there any drawbacks in
joining such an organisation?  How do they finance their overheads?
Would a donation to haskell.org include a fee to SPI?  I couldn't find
any information on their website.

/ Thomas

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [Haskell] Proposal to incorporate Haskell.org

2011-12-16 Thread Thomas Schilling
On 16 December 2011 13:36, Ganesh Sittampalam gan...@earth.li wrote:

 Would a donation to haskell.org include a fee to SPI?  I couldn't find
 any information on their website.

 Yes - 5% goes to SPI to cover their overheads. It's detailed in
 http://www.spi-inc.org/projects/associated-project-howto/ but not on
 their donations page at http://www.spi-inc.org/donations/.

 5% seems reasonable to me and in line with what similar donation
 aggregators charge, for example the Charities Aid Foundation in the UK
 charges 4%:
 https://www.cafonline.org/my-personal-giving/plan-your-giving/individual-charity-account.aspx

Yes, that sounds reasonable.  Credit card donations also incur an
overhead of 4.5% (and the 5% is deducted from the rest), so the total
overhead for a credit card donation would be 1 - 0.955 * 0.95 =
9.3%.  That's fairly high, but I don't see a way around it.

 In effect we've been getting the admin for free from Galois up till now,
 but it's been getting too troublesome for them.

Yes, I certainly understand that.  The other reason is tax-deductible
donations, which I assume isn't really possible with Galois handling
all the financials.

/ Thomas

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Type checker for haskell-src-exts (was: Typechecking Using GHC API)

2011-12-16 Thread Thomas Schilling
On 16 December 2011 17:44, Niklas Broberg niklas.brob...@gmail.com wrote:
 With all due respect, the sentiments you give voice to here are a large part
 of what drives me to do this project in the first place. Haskell is not GHC,
 and I think that the very dominant position of GHC many times leads to ill
 thought-through extensions. Since GHC has no competitor, often extensions
 (meaning behavior as denoted and delimited by some -X flag) are based on
 what is most convenient from the implementation point of view, rather than
 what would give the most consistent, logical and modular user experience
 (not to mention
 third-party-tool-implementor-trying-to-support-GHC-extensions experience).

I agree.  Various record proposals have been rejected because of the
"not easily implementable in GHC" constraint.  Of course, ease of
implementation (and maintenance) is a valid argument, but it has an
unusual weight if GHC is (in practice) the only implementation.  Other
extensions seem to just get added on (what feels like) a whim (e.g.,
RecordPuns).

 As such, I'm not primarily doing this project to get a development tool out,
 even if that certainly is a very neat thing. I'm just as much doing it to
 provide a Haskell (front-end) implementation that can serve as a better
 reference than GHC, one that very explicitly does *not* choose the
 convenient route to what constitutes an extension, and instead attempts to
 apply rigid consistency and modularity between extensions. Also, just like
 for haskell-src-exts I hope to build the type checker from the roots with
 the user interface as a core design goal, not as a tacked-on afterthought.

 Mind you, in no way do I intend any major criticism towards GHC or its
 implementors. GHC is a truly amazing piece of software, indeed it's probably
 my personal favorite piece of software of all times. That does not mean it
 comes free of quirks and shady corners though, and it is my hope that by
 doing what I do I can help shine a light on them.

Well... I've gotten a little bit of a different perspective on this
since working at a company with very high code quality standards (at
least for new code).  There is practically no observable code review
happening.  I'm sure Dimitrios and Simon PJ review most of each
other's code every now and then, but overall there is very little code
review happening (and no formal, recorded code review whatsoever).
Cleaning up the GHC code base is a huge task -- it uses lots of dirty
tricks (global variables, hash tables, unique generation is
non-deterministic, ...) which often complicate efforts tremendously (I
tried).  The lack of unit tests doesn't help (just rewriting code so
that it can be tested would help quite a bit).

Don't get me wrong, I certainly appreciate the work the GHC team is
doing, but GHC strikes a fine balance between industrial needs and
research needs.  I'm just wondering whether the balance is always
right.

 Answering your specific issues:

 1) Yes, it's a lot of work. Probably not half as much as you'd think though,
 see my previous mail about walking in the footsteps of giants. But beyond
 that, I think it's work that truly needs to be done, for the greater good of
 Haskell.

Right, OutsideIn(X) (the journal-paper description, not the ICFP'09
version) seems like the right way to go.  I wasn't aware of the other
paper (the original work on bidirectional type inference seemed very
unpredictable in terms of when type annotations are needed, so I'm
looking forward to seeing how this new paper handles things).

 2) Well, I think I've done a reasonably good job keeping haskell-src-exts up
 to date so far, even if the last year has been pretty bad in that regard
 (writing a PhD thesis will do that to you). I'll keep doing it for the type
 checker as well. But I am but one man, so if anyone else feels like joining
 the project then they are more than welcome to.

 Sort-of-3) Yes, both implementation and maintenance are likely going to be
 far more costly than the alternative to do a translation via the GHC API.
 I'm not interested in that alternative though.

Fair enough.  As I am interested in building reliable (and
maintainable) development tools, my priorities are obviously different.
For that purpose, using two different implementations can lead to
very confusing issues for the user (that's why I was asking about bug
compatibility).  Apart from the bus factor, there is also the
bitrotting issue due to GHC's high velocity.  For example, even though
HaRe does build again it doesn't support many commonly used GHC
extensions and it is difficult to add them into the existing code base
(which isn't pretty).

Anyway, good luck with your endeavours.

/ Thomas
-- 
Push the envelope. Watch it bend.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [Alternative] some/many narrative

2011-12-15 Thread Thomas Schilling
On 15 December 2011 06:29, Chris Wong chrisyco+haskell-c...@gmail.com wrote:

 class (Applicative f, Monoid f) => Alternative f where
    -- | Keep repeating the action (consuming its values) until it
    --   fails, and then return the values consumed.

I think this should be "collect" rather than "consume", and you can
omit the parentheses.  I also think that we should include the
original definition, which is more formally precise (although it could
do with some examples).

    --
    -- [Warning]: This is only defined for actions that eventually fail

Perhaps add the remark that we expect non-deterministic actions.

    -- after being performed repeatedly, such as parsing. For pure values such
    -- as 'Maybe', this will cause an infinite loop.
    some :: f a -> f [a]
    some v = ...

    -- | Similar to 'many', but if no values are consumed it returns
    --   'empty' instead of @f []@.
    --
    -- [Warning]: This is only defined for actions that eventually fail
    -- after being performed repeatedly, such as parsing. For pure values such
    -- as 'Maybe', this will cause an infinite loop.
    many :: f a -> f [a]
    many v = ...

 Warnings are repeated for emphasis :)
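
For reference, the mutually recursive default definitions being documented
look roughly like this (paraphrased from Control.Applicative; renamed here
to avoid clashing with the originals).  For Maybe they terminate only when
the action fails:

```haskell
import Control.Applicative

-- Paraphrased defaults: someA collects one or more results,
-- manyA zero or more.
someA, manyA :: Alternative f => f a -> f [a]
someA v = (:) <$> v <*> manyA v
manyA v = someA v <|> pure []

main :: IO ()
main = do
  -- A failing action stops the recursion immediately:
  print (manyA (Nothing :: Maybe Int))  -- Just []
  -- manyA (Just 1) would loop forever, since Just 1 never fails.
```

With a parser monad the same definitions terminate, because the wrapped
action eventually fails (e.g. at end of input).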

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



-- 
Push the envelope. Watch it bend.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Type checker for haskell-src-exts (was: Typechecking Using GHC API)

2011-12-15 Thread Thomas Schilling
What exactly are the hopes for such a type checker?  I can understand
it being interesting as a research project, but as a realistic tool
there are two huge issues:

 1. It's going to take a LOT of time to reach feature parity with
GHC's type checker.

 2. Assuming that can be done, how is it going to be maintained and
kept up to date with GHC?

If it is going to be used as a development tool, both of these are a
major requirement.  I haven't looked into the issues, but I'd expect
it would be more realistic (although definitely not trivial) to
translate from GHC's internal AST into an annotated haskell-src-exts
AST.

On 15 December 2011 16:33, Sean Leather leat...@cs.uu.nl wrote:
 On Thu, Dec 15, 2011 at 11:07, Niklas Broberg wrote:

 Envisioned: The function you ask for can definitely be written for
 haskell-src-exts, which I know you are currently using. I just need to
 complete my type checker for haskell-src-exts first. Which is not a small
 task, but one that has been started.


 That's good to know! I presume it's something like Haskell98 to start with?
 I'd be even more impressed (and possibly also concerned for your health) if
 you were going to tackle all of the extensions!

 I've been trying to find a student to write a haskell-src-exts type checker
 for me. It should use a particular kind of mechanism though, using
 constraints similar to [1]. Then, I want to adapt that to do
 transformations. What approach are you using? Maybe I can somehow steal your
 work... ;)

 Regards,
 Sean

 [1] http://www.staff.science.uu.nl/~heere112/phdthesis/

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Push the envelope. Watch it bend.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to get a file path to the program invoked?

2011-12-15 Thread Thomas Schilling
May I ask what the problem is you're trying to solve?

If you want to access datafiles in an installed program then Cabal can
help you with that.  See
http://www.haskell.org/cabal/users-guide/#accessing-data-files-from-package-code
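
The Cabal mechanism works through a module that Cabal generates per
package, Paths_<pkgname>.  A minimal sketch for a hypothetical package
named myapp (the package name and data file are made up for illustration):

```haskell
-- Requires a line like
--   data-files: templates/default.html
-- in myapp.cabal; Cabal then generates the Paths_myapp module.
import Paths_myapp (getDataFileName)

main :: IO ()
main = do
  -- Resolves the relative name to wherever the file was installed.
  path <- getDataFileName "templates/default.html"
  putStrLn path
```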

If you want to do more complicated things, maybe take a look at how
GHC does it.  For example, on OS X (and other Unix-based systems) the
ghc command is actually a script:

$ cat `which ghc`
#!/bin/sh
 exedir="/Library/Frameworks/GHC.framework/Versions/7.0.3-x86_64/usr/lib/ghc-7.0.3"
 exeprog="ghc-stage2"
 executablename="$exedir/$exeprog"
 datadir="/Library/Frameworks/GHC.framework/Versions/7.0.3-x86_64/usr/share"
 bindir="/Library/Frameworks/GHC.framework/Versions/7.0.3-x86_64/usr/bin"
 topdir="/Library/Frameworks/GHC.framework/Versions/7.0.3-x86_64/usr/lib/ghc-7.0.3"
 pgmgcc="/Developer/usr/bin/gcc"
 executablename="$exedir/ghc"
 exec "$executablename" -B"$topdir" -pgmc "$pgmgcc" -pgma "$pgmgcc"
 -pgml "$pgmgcc" -pgmP "$pgmgcc -E -undef -traditional" ${1+"$@"}
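
On GHC versions newer than those in this thread (base >= 4.6, shipped with
GHC 7.6), System.Environment.getExecutablePath returns the absolute path
of the running binary directly, which answers the `dirname $0` question; a
minimal sketch using takeDirectory from the filepath package:

```haskell
import System.Environment (getExecutablePath)  -- base >= 4.6
import System.FilePath (takeDirectory)

main :: IO ()
main = do
  exe <- getExecutablePath  -- absolute path to the running binary
  putStrLn (takeDirectory exe)
```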

/ Thomas

On 1 December 2011 16:12, dokondr doko...@gmail.com wrote:
 Hi,
 When my program starts it needs to know a complete path to the directory
 from which it was invoked.
 In terms of standard shell (sh) I need the Haskell function that will do
 equivalent to:

 #!/bin/sh
 path=$(dirname $0)

 How to get this path in Haskell?

 getProgName :: IO String
 defined System.Environment only returns a file name of the program without
 its full path.

 Thanks!

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Push the envelope. Watch it bend.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Splitting off many/some from Alternative

2011-12-13 Thread Thomas Schilling
On 12 December 2011 22:39, Antoine Latter aslat...@gmail.com wrote:
 But now they look as if they are of equal importance with the other
 class methods, which is not really true.

Maybe, but something like this is best fixed by improving the
documentation, not by shuffling things around and needlessly breaking
APIs.  I also agree that if an Alternative instance doesn't make sense
it should be removed.  The current documentation is indeed very terse.
In particular it needs a section on the pitfalls that users
are likely to run into (like infinite loops).

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell Summers of Code retrospective (updated for 2011)

2011-12-11 Thread Thomas Schilling
I would be interested in what the hold-up is with the two Cabal
projects.  Does the work need more clean-up or is it just stuck in the
Duncan-code-review pipeline?  If Duncan is indeed the bottleneck,
maybe we should look into ways of taking some of the work off Duncan.

On 11 December 2011 02:57, Gwern Branwen gwe...@gmail.com wrote:
 The Wheel turns, and months come and pass, leaving blog posts that
 fade into 404s; a wind rose in Mountain View, whispering of the coming
 Winter...

 Tonight I sat down and finally looked into the 2011 SoCs to see how
 they turned out and judge them according to my whimsically arbitrary
 and subjective standards:
 http://www.gwern.net/Haskell%20Summer%20of%20Code#results-1

 They turned out pretty much as I predicted - but then I *would* say
 that, wouldn't I?

 (Also submitted to /r/haskell for those who swing that way:
 http://www.reddit.com/r/haskell/comments/n82ln/summer_of_code_2011_retrospective/
 )

 --
 gwern
 http://www.gwern.net

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



-- 
Push the envelope. Watch it bend.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Superset of Haddock and Markdown

2011-11-21 Thread Thomas Schilling
On 21 November 2011 17:34, Brandon Allbery allber...@gmail.com wrote:
 Haddock carries the same license as GHC.

 More to the point, Haddock uses ghc internals these days; it's not just a
 matter of bundling, and the licenses *must* be compatible.

No.  If the haddock library is GPL-licensed, any program that links
against it (e.g., the haddock program) effectively becomes GPL-licensed
(roughly speaking).  However, neither ghc nor any of the core libraries
links against haddock, so that's not a problem.  You could run into
issues if you wanted to write a closed-source program that links
against the haddock library.

I generally don't like GPL-licensed libraries for that reason, but in
this case I think it would be fine.

 (Of course, the issue is that many tools start out as programs and
then turn into libraries -- pandoc is a good example.  At that point
GPL becomes an issue again.)

-- 
Push the envelope. Watch it bend.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] compiler construction

2011-11-03 Thread Thomas Schilling
Chalmers University in Gothenburg, Sweden has a master's programme
that includes a compiler construction course.  For the lectures from
last term, see:
http://www.cse.chalmers.se/edu/course/TDA282/lectures.html

When I took it in 2006 it was a very practical course -- your task was
to implement a basic compiler and the final grade was based on the
number of features you implemented.  You could choose to do it in
Haskell, Java or C++.

The master's course is Secure and Dependable Computer Systems and
takes 2 years. You could also become an exchange student at Chalmers
via Erasmus, but the course is only given once a year, so if you plan
to do a 6 month exchange then you have to time it right.

Of course, Chalmers is in Sweden and therefore in none of your
preferred countries.  However, all MSc courses are taught in English
and almost everyone in Sweden speaks English very well.  Learning
Swedish also isn't very difficult if you're (I assume) German.  If you
stick to it, you can be fluent in 6 months (I didn't, but I can read a
lot of Swedish).

As an alternative in the UK, you might consider Nottingham University.
 They too have a strong FP research group and their compiler
construction course seems to use Haskell as well:
http://www.cs.nott.ac.uk/~nhn/G53CMP/

Other Universities in your preferred countries you might want to check
out (though I don't know anything about their taught programs), they
are known to have FP researchers:

  - UNSW, Sydney
  - Oxford, England
  - Edinburgh Univ. or Heriot-Watt (though HW is more OCaml/SML)
  - Univ. of St. Andrews, Scotland
  - Univ. of Strathclyde, Glasgow, Scotland
  - Leicester, England (teaches Haskell to undergrads, not sure about
compiler constr.)
  - Imperial College, London
  - University College London (UCL)
  - (there're probably more...)

On 3 November 2011 17:02, Timo von Holtz timo.v.ho...@googlemail.com wrote:
 Hi,

 I study computer science in Kiel, Germany and I want to study abroad.
 Now I look for Universities, which offer compiler construction, since
 I need that course, preferably in the UK, Ireland, Australia or New
 Zealand.
 Ideally it would be in Haskell of course.

 Greetings
 Timo

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Push the envelope. Watch it bend.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Best bit LIST data structure

2011-10-09 Thread Thomas Schilling
On 9 October 2011 14:54, Joachim Breitner m...@joachim-breitner.de wrote:
 Hi,

 Am Freitag, den 07.10.2011, 10:52 -0400 schrieb Ryan Newton:
 What about just using the Data.Bits instance of Integer?  Well,
 presently, the setBit instance for very large integers creates a whole
 new integer, shifts, and xors:

 http://haskell.org/ghc/docs/latest/html/libraries/base/src/Data-Bits.html#setBit
 (I don't know if it's possible to do better.  From quick googling GMP
 seems to use an array of limbs rather than a chunked list, so maybe
 there's no way to treat large Integers as a list and update only the
 front...)

 interesting idea. Should this be considered a bug in ghc? (Not that it
 cannot represent the result, but that it crashes even out of ghci):

 $ ghci
 GHCi, version 7.0.4: http://www.haskell.org/ghc/  :? for help
 Loading package ghc-prim ... linking ... done.
 Loading package integer-gmp ... linking ... done.
 Loading package base ... linking ... done.
 Prelude> :m + Data.Bits
 Prelude Data.Bits> setBit 0 (2^63-1::Int)
 gmp: overflow in mpz type
 Abgebrochen

Yes, that's a bug.  GMP shouldn't call abort(); the overflow should be
turned into a Haskell exception instead.  It probably doesn't make much
of a difference in practice, but safe code should never crash GHC.
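
To make the copying cost discussed above concrete: every setBit on an
Integer returns a fresh number, so building a bit set by repeated setBit
copies the whole value at each step.  A small sketch:

```haskell
import Data.Bits (setBit, testBit)

main :: IO ()
main = do
  -- foldl setBit allocates a fresh Integer at every step, so this
  -- builds and discards 100 intermediate ~10000-bit numbers.
  let n = foldl setBit (0 :: Integer) [0, 100 .. 9999]
  print (testBit n 9900)  -- True
  print (testBit n 9901)  -- False
```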

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] SMP parallelism increasing GC time dramatically

2011-10-09 Thread Thomas Schilling
It would be really useful to see the threadscope output for this.
Apart from cache effects (which may well be significant at 12 cores),
the usual problems with parallel GHC are synchronisation.

When GHC wants to perform a parallel GC it needs to stop all Haskell
threads.  These are lightweight threads and are scheduled
cooperatively, i.e., there's no way to interrupt them from the outside
(except for the OS, but that doesn't help with GC).  Usually, a thread
is able to yield whenever it tries to do an allocation which is common
enough in normal Haskell.  However, your work contains lots of matrix
computation which likely don't do allocations in the inner loop or
call C to do their work, which isn't interruptible, either.  My guess
would be that (at least part of) the reason for your slowdown is that
the parallel GC spends a lot of time waiting for threads to stop.
This would be apparent in Threadscope.  (I may be wrong, because even
the single-threaded GC needs to stop all threads)

On 7 October 2011 18:21, Tom Thorne thomas.thorn...@gmail.com wrote:
 I have made a dummy program that seems to exhibit the same GC
 slowdown behavior, minus the segmentation faults. Compiling with -threaded
 and running with -N12 I get very bad performance (3x slower than -N1),
 running with -N12 -qg it runs approximately 3 times faster than -N1. I don't
 know if I should submit this as a bug or not? I'd certainly like to know why
 this is happening!
 import Numeric.LinearAlgebra
 import Numeric.GSL.Special.Gamma
 import Control.Parallel.Strategies
 import Control.Monad
 import Data.IORef
 import Data.Random
 import Data.Random.Source.PureMT
 import Debug.Trace
 --
 subsets s n = (subsets_stream s) !! n
 subsets_stream [] = [[]] : repeat []
 subsets_stream (x:xs) =
   let r = subsets_stream xs
       s = map (map (x:)) r
   in [[]] : zipWith (++) s (tail r)
 testfun :: Matrix Double -> Int -> [Int] -> Double
 testfun x k cs = lngamma (det (c+u))
   where
     (m,c) = meanCov xx
     m' = fromRows [m]
     u = (trans m') <> m'
     xx = fromColumns ( [(toColumns x)!!i] ++ [(toColumns x)!!j] ++ [(toColumns x)!!k] )
     i = cs !! 0
     j = cs !! 1

 test :: Matrix Double -> Int -> Double
 test x i = sum p
   where
     p = parMap (rdeepseq) (testfun x (i+1)) (subsets [0..i] 2)


 ranMatrix :: Int -> RVar (Matrix Double)
 ranMatrix n = do
   xs <- mapM (\_ -> mapM (\_ -> uniform 0 1.0) [1..n]) [1..n]
   return (fromLists xs)

 loop :: Int -> Double -> Int -> RVar Double
 loop n s i = traceShow i $ do
   x <- ranMatrix n
   let r = sum $ parMap (rdeepseq) (test x) [2..(n-2)]
   return (r+s)
 main = do
   let n = 100
   let iter = 5
   rng <- newPureMT
   rngr <- newIORef rng
   p <- runRVar (foldM (loop n) 0.0 [1..iter]) rngr
   print p
 I have also found that the segmentation faults in my code disappear if I
 switch from Control.Parallel to Control.Monad.Par, which is quite strange. I
 get slightly better performance with Control.Parallel when it completes
 without a seg. fault, and the frequency with which it does so seems to
 depend on the number of sparks that are being created.
 On Thu, Oct 6, 2011 at 1:56 PM, Tom Thorne thomas.thorn...@gmail.com
 wrote:

 I'm trying to narrow it down so that I can submit a meaningful bug report,
 and it seems to be something to do with switching off parallel GC using -qg,
 whilst also passing -Nx.
 Are there any known issues with this that people are aware of? At the
 moment I am using the latest haskell platform release on arch.
 I'd like to give 7.2 a try in case that fixes it, but I'm using rather a
 lot of libraries (hmatrix, fclabels, random fu) and I don't know how to
 install them for multiple ghc versions
 On Wed, Oct 5, 2011 at 10:43 PM, Johan Tibell johan.tib...@gmail.com
 wrote:

 On Wed, Oct 5, 2011 at 2:37 PM, Tom Thorne thomas.thorn...@gmail.com
 wrote:

 The only problem is that now I am getting random occasional segmentation
 faults that I was not been getting before, and once got a message saying:
 Main: schedule: re-entered unsafely
 Perhaps a 'foreign import unsafe' should be 'safe'?
 I think this may be something to do with creating a lot of sparks
 though, since this occurs whether I have the parallel GC on or not.

 Unless you (or some library you're using) is doing what the error message
 says then you should file a GHC bug here:

 http://hackage.haskell.org/trac/ghc/

 -- Johan




 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe





-- 
Push the envelope. Watch it bend.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Much faster complex monad stack based on CPS state

2011-09-28 Thread Thomas Schilling
Well, you can get something close with the help of IORefs, but I
forgot the details.  I believe this is the paper that explains it:

Value recursion in the continuation monad by Magnus Carlsson
http://www.carlssonia.org/ogi/mdo-callcc.pdf
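
The IORef trick alluded to here is essentially how fixIO ties a recursive
knot; the following is a minimal sketch of that construction (not
Carlsson's full ContT treatment), using unsafeInterleaveIO to delay
reading the reference:

```haskell
import Data.IORef (newIORef, readIORef, writeIORef)
import System.IO.Unsafe (unsafeInterleaveIO)

-- A fixIO-style knot: the result is fed back to the function as a
-- lazily read IORef, so 'f' may refer to its own eventual output.
fixIO' :: (a -> IO a) -> IO a
fixIO' f = do
  ref    <- newIORef (error "fixIO': result forced too early")
  ans    <- unsafeInterleaveIO (readIORef ref)
  result <- f ans
  writeIORef ref result
  return result

main :: IO ()
main = do
  xs <- fixIO' (\xs -> return (1 : take 4 xs))
  print xs  -- [1,1,1,1,1]
```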

On 28 September 2011 15:15, Bas van Dijk v.dijk@gmail.com wrote:
 On 28 September 2011 14:25, Ertugrul Soeylemez e...@ertes.de wrote:
 Bas van Dijk v.dijk@gmail.com wrote:

 Because of this reason I don't provide a MonadTransControl instance
 for ContT in monad-control[2].

 Is that even possible?

 I once tried and failed so I believe it's not possible.

 Bas

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Push the envelope. Watch it bend.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Improvements to Vim Haskell Syntax file - Is anyone the maintainer?

2011-09-10 Thread Thomas Schilling
How about moving/adding the repo to https://github.com/haskell? That
would be a nice canonical and easy to find location, IMHO.

On 9 September 2011 05:03, steffen steffen.sier...@googlemail.com wrote:
 Hi,
 check out this one:
 https://github.com/urso/dotrc/blob/master/vim/syntax/haskell.vim

 A (not up to date) version of it can also be found on vim.org:
 http://www.vim.org/scripts/script.php?script_id=3034

 It is not an official version, but already a modification of the original
 syntax file, plus some new features.
 Unfortunately I haven't had time to maintain and update the syntax file for
 a (long) while, but hopefully next week I will have some time to do
 some maintenance. I plan to incorporate outstanding pull requests, do some
 improvements and split off the script itself into a new project on github
 with test cases, readme and screen shots.
 If you've done any changes which may benefit the syntax file I would be glad
 about patches or pull requests on github.
 I'm using github at the moment, but am open for suggestions about other
 hosting ideas (e.g. bitbucket plus github mirror or hackage). As I've made
 some extensive changes I will continue maintaining the syntax file (unless
 someone else really wants to do it...), but I'd prefer it to be a
 haskell-community project so other people can join in easily and propose
 changes.
 - Steffen

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe





-- 
Push the envelope. Watch it bend.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Undefined symbol error coming from shared, dynamic library.

2011-09-08 Thread Thomas Schilling
stg_newByteArrayzh is defined in the runtime system.  Presumably you
need to link against the GHC runtime system.  If that doesn't help try
asking your question on the glasgow-haskell-us...@haskell.org mailing
list.

On 6 September 2011 16:52, David Banas dba...@banasfamily.net wrote:
 Hi all,

 I'm trying to build a shared, dynamic library for use with a C program.
 I'm getting an `undefined symbol' error, when I try to run that C
 program,
 and was hoping that the last line in the output, below, might mean
 something to someone.
 I include the entire output of a `make rebuild' command, below, hoping
 that, maybe, I just have my command line options a little wrong.

 Thanks!
 -db

 dbanas@dbanas-eeepc:~/prj/haskell/AMIParse/trunk$ make rebuild
 make clean
 make[1]: Entering directory `/home/dbanas/prj/haskell/AMIParse/trunk'
 rm -f *.hi *.o *.out ami_test *.so
 make[1]: Leaving directory `/home/dbanas/prj/haskell/AMIParse/trunk'
 make all
 make[1]: Entering directory `/home/dbanas/prj/haskell/AMIParse/trunk'
 gcc -I/usr/lib/ghc-6.12.3/include/ -g -fPIC   -c -o ami_test.o
 ami_test.c
 gcc -rdynamic -o ami_test ami_test.o -ldl
 ghc -c ApplicativeParsec.hs -cpp -package parsec-3.1.1 -package dsp
 -dynamic -fPIC
 ghc -c AMIParse.hs -cpp -package parsec-3.1.1 -package dsp -dynamic
 -fPIC
 ghc -c ExmplUsrModel.hs -cpp -package parsec-3.1.1 -package dsp -dynamic
 -fPIC
 ghc -c AMIModel.hs -cpp -package parsec-3.1.1 -package dsp -dynamic
 -fPIC
 gcc -I/usr/lib/ghc-6.12.3/include/ -g -fPIC   -c -o ami_model.o
 ami_model.c
 rm -f libami.so
 ghc -o libami.so -shared -dynamic -package parsec-3.1.1 -package dsp
 AMIParse.o AMIModel.o ami_model.o AMIModel_stub.o ApplicativeParsec.o
 ExmplUsrModel.o
 make[1]: Leaving directory `/home/dbanas/prj/haskell/AMIParse/trunk'
 dbanas@dbanas-eeepc:~/prj/haskell/AMIParse/trunk$ ./ami_test test.ami
 /usr/lib/ghc-6.12.3/ghc-prim-0.2.0.0/libHSghc-prim-0.2.0.0-ghc6.12.3.so:
 undefined symbol: stg_newByteArrayzh



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Push the envelope. Watch it bend.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Smarter do notation

2011-09-05 Thread Thomas Schilling
On 5 September 2011 13:41, Sebastian Fischer fisc...@nii.ac.jp wrote:

 Hi again,

 I think the following rules capture what Max's program does if applied
 after the usual desugaring of do-notation:

 a >>= \p -> return b
  -->
 (\p -> b) <$> a

 a >>= \p -> f <$> b -- 'free p' and 'free b' disjoint
  -->
 ((\p -> f) <$> a) <*> b


Will there also be an optimisation for some sort of simple patterns?  I.e.,
where we could rewrite this to:

  liftA2 (\pa pb -> f ...) a b

I think I remember someone saying that the one-at-a-time application of <*>
inhibits certain optimisations.



  a >>= \p -> f <$> b -- 'free p' and 'free f' disjoint
   -->
  f <$> (a >>= \p -> b)

  a >>= \p -> b <*> c -- 'free p' and 'free c' disjoint
   -->
  (a >>= \p -> b) <*> c

  a >>= \p -> b >>= \q -> c -- 'free p' and 'free b' disjoint
   -->
  liftA2 (,) a b >>= \(p,q) -> c

  a >>= \p -> b >> c -- 'free p' and 'free b' disjoint
   -->
  (a << b) >>= \p -> c


I find (a << b) confusing.  The intended semantics seem to be "effect a,
then effect b, return result of a".  That doesn't seem intuitive to me
because it contradicts the effect ordering of (=<<), which reverses the
effect ordering of (>>=).  We already have (*>) and (<*) for left-to-right
effect ordering and pointed result selection.  I understand that (>>) = (*>)
apart from the Monad constraint, but I would prefer not to have (<<) =
(<*).
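
A small IO sketch of the left-to-right effect ordering and "pointed" result
selection mentioned above (this just demonstrates the existing (<*) and (*>)
operators, nothing from the proposal itself):

```haskell
import Control.Applicative ((<*), (*>))

main :: IO ()
main = do
  r1 <- (putStr "a" >> return 1) <* (putStr "b" >> return (2 :: Int))
  print r1  -- both effects run left to right, the result comes from the left
  r2 <- (putStr "a" >> return 1) *> (putStr "b" >> return (2 :: Int))
  print r2  -- same effect order, the result comes from the right
```

Both lines print "ab" before the result, so the effects are ordered
left-to-right in either case; only the returned value differs.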




 The second and third rule overlap and should be applied in this order.
 'free' gives all free variables of a pattern 'p' or an expression
 'a','b','c', or 'f'.

 If return, (>>), and (<<) are defined in Applicative, I think the rules also
 achieve the minimal necessary class constraint for Thomas's examples that do
 not involve aliasing of return.

 Sebastian

 On Mon, Sep 5, 2011 at 5:37 PM, Sebastian Fischer fisc...@nii.ac.jpwrote:

 Hi Max,

 thanks for you proposal!

 Using the Applicative methods to optimise do desugaring is still
 possible, it's just not that easy to have that weaken the generated
 constraint from Monad to Applicative since only degenerate programs
 like this one won't use a Monad method:


 Is this still true, once Monad is a subclass of Applicative which defines
 return?

 I'd still somewhat prefer if return get's merged with the preceding
 statement so sometimes only a Functor constraint is generated but I think, I
 should adjust your desugaring then..

 Sebastian





-- 
Push the envelope. Watch it bend.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Smarter do notation

2011-09-05 Thread Thomas Schilling
On 5 September 2011 15:49, Sebastian Fischer fisc...@nii.ac.jp wrote:

 On Mon, Sep 5, 2011 at 10:19 PM, Thomas Schilling nomin...@googlemail.com 
 wrote:

 a >>= \p -> f <$> b -- 'free p' and 'free b' disjoint
  -->
 ((\p -> f) <$> a) <*> b

 Will there also be an optimisation for some sort of simple patterns?  I.e.,
 where we could rewrite this to:
   liftA2 (\pa pb -> f ...) a b
 I think I remember someone saying that the one-at-a-time application of <*>
 inhibits certain optimisations.

 liftA2 is defined via one-at-a-time application and cannot be redefined
 because it is not a method of Applicative.  Do you remember more details?

Good point.  I can't find the original post, so I don't know what
exactly the issue was (or maybe I'm misremembering).

--
Push the envelope. Watch it bend.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Smarter do notation

2011-09-04 Thread Thomas Schilling
I don't quite understand how this would work.  For example, would it work
for these examples?

  do x <- blah
     let foo = return
     foo (f x)  -- Using an alias of return/pure

  do x <- Just blah
     Just (f x)  -- another form of aliasing

  do x <- blah
     return (g x x)  -- could perhaps be turned into:
                     -- (\x -> g x x) <$> blah

  do x <- blah
     y <- return x
     return (f y)    -- = f <$> blah ?

  do x1 <- foo1    -- effect order must not be reversed
     x2 <- foo2
     return (f x2 x1)  -- note reversed order

  -- multiple uses of applicative
  do x1 <- foo1
     y <- return (f x1)
     x2 <- foo2
     y2 <- return (g y x2)
     return y2

So I guess it's possible to detect the pattern:

  do x1 <- foo1; ...; xN <- fooN; [res <-] return (f {x1..xN})

where {x1..xN} means x1..xN in some order and turn it into:

  do [res <-] (\x1..xN -> f {x1..xN}) <$> foo1 <*> ... <*> fooN

Open issues would be detection of the correct return-like thing.  This is
why using monad comprehensions would help somewhat, but not fully, because
it's still possible to put x <- return y in the generators part.  The
current desugaring of do-notation is very simple because it doesn't even
need to know about the monad laws.  They are used implicitly by the
optimiser (e.g., foo >>= \x -> return x is optimised to just foo after
inlining), but the desugarer doesn't need to know about them.
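
The equivalence of the two shapes can be checked concretely with Maybe (any
law-abiding Applicative/Monad works the same way):

```haskell
import Control.Applicative ((<$>), (<*>))

-- monadic form, with the result components in reversed binding order
monadic :: Maybe (Int, Int)
monadic = do
  x1 <- Just 1
  x2 <- Just 2
  return (x2, x1)

-- the corresponding applicative desugaring: effects stay in order,
-- the lambda reorders the results
applicative :: Maybe (Int, Int)
applicative = (\x1 x2 -> (x2, x1)) <$> Just 1 <*> Just 2

main :: IO ()
main = print (monadic == applicative)
```

For instances like parsers, the applicative form can be analysed statically,
which is the whole point of the proposed smarter desugaring.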


On 4 September 2011 03:34, Daniel Peebles pumpkin...@gmail.com wrote:

 Hi all,

 I was wondering what people thought of a smarter do notation. Currently,
 there's an almost trivial desugaring of do notation into (=), (), and
 fail (grr!) which seem to naturally imply Monads (although oddly enough,
 return is never used in the desugaring). The simplicity of the desugaring is
 nice, but in many cases people write monadic code that could easily have
 been Applicative.

 For example, if I write in a do block:

 x <- action1
 y <- action2
 z <- action3
 return (f x y z)

 that doesn't require any of the context-sensitivty that Monads give you,
 and could be processed a lot more efficiently by a clever Applicative
 instance (a parser, for instance). Furthermore, if return values are
 ignored, we could use the (<$), (<*), or (*>) operators, which could make the
 whole thing even more efficient in some instances.

 Of course, the fact that the return method is explicitly mentioned in my
 example suggests that unless we do some real voodoo, Applicative would have
 to be a superclass of Monad for this to make sense. But with the new default
 superclass instances people are talking about in GHC, that doesn't seem too
 unlikely in the near future.

 On the implementation side, it seems fairly straightforward to determine
 whether Applicative is enough for a given do block. Does anyone have any
 opinions on whether this would be a worthwhile change? The downsides seem to
 be a more complex desugaring pass (although still something most people
 could perform in their heads), and some instability with making small
 changes to the code in a do block. If you make a small change to use a
 variable before the return, you instantly jump from Applicative to Monad and
 might break types in your program. I'm not convinced that's necessary a bad
 thing, though.

 Any thoughts?

 Thanks,
 Dan

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Push the envelope. Watch it bend.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC API question

2011-08-29 Thread Thomas Schilling
OK, I guess I misunderstood you.  I don't know how SafeHaskell works,
so I don't know whether there might be some interaction.  I know that
profiling is a static flag which must be set when you initialise the
session and cannot be changed afterwards.  I assume you are doing
that.

I checked the source code for getHValue (in 7.0.4) and it calls
linkDependencies if the name is external (not 100 percent sure what
that means).  There is an interesting comment in linkDependencies,
though:

-- The interpreter and dynamic linker can only handle object code built
-- the normal way, i.e. no non-std ways like profiling or ticky-ticky.
-- So here we check the build tag: if we're building a non-standard way
-- then we need to find & link object files built the normal way.

This is what I was referring to in my previous mail.  Even though
you're compiling to machine code, you are using the in-memory linker
(i.e., the GHCi linker).  It seems like that this is a fundamental
limitation of the internal linker. You may be using it in a way that
doesn't trigger the sanity check and end up causing a panic.  I
suggest you pose this question on the glasgow-haskell-users mailing
list.

On 28 August 2011 17:57, Chris Smith cdsm...@gmail.com wrote:
 On Sun, 2011-08-28 at 17:47 +0100, Thomas Schilling wrote:
 I don't think you can link GHCi with binaries compiled in profiling
 mode.  You'll have to build an executable.

 Okay... sorry to be obtuse, but what exactly does this mean?  I'm not
 using GHCi at all: I *am* in an executable built with profiling info.

 I'm doing this:

        dflags - GHC.getSessionDynFlags
        let dflags' = dflags {
            GHC.ghcMode = GHC.CompManager,
            GHC.ghcLink = GHC.LinkInMemory,
            GHC.hscTarget = GHC.HscAsm,
            GHC.optLevel = 2,
            GHC.safeHaskell = GHC.Sf_Safe,
            GHC.packageFlags = [GHC.TrustPackage "gloss"],
            GHC.log_action = addErrorTo codeErrors
            }
        GHC.setSessionDynFlags dflags'
        target - GHC.guessTarget filename Nothing
        GHC.setTargets [target]
        r      - fmap GHC.succeeded (GHC.load GHC.LoadAllTargets)

 and then if r is true:

        mods - GHC.getModuleGraph
        let mainMod = GHC.ms_mod (head mods)
        Just mi - GHC.getModuleInfo mainMod
        let tyThings = GHC.modInfoTyThings mi
        let var = chooseTopLevel varname tyThings
        session - GHC.getSession
        v       - GHC.liftIO $ GHC.getHValue session (GHC.varName var)
        return (unsafeCoerce# v)

 Here, I know that chooseTopLevel is working, but the getHValue part only
 works without profiling.  So is this still hopeless, or do I just need
 to find the right additional flags to add to dflags'?

 --
 Chris Smith






-- 
Push the envelope. Watch it bend.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC API question

2011-08-28 Thread Thomas Schilling
I don't think you can link GHCi with binaries compiled in profiling
mode.  You'll have to build an executable.

On 28 August 2011 16:38, Chris Smith cdsm...@gmail.com wrote:
 Okay, I should have waited until morning to post this... so actually,
 things still work fine when I build without profiling.  However, when I
 build with profiling, I get the segfault.  I'm guessing either I need to
 set different dynamic flags with the profiling build to match the
 options of the compiler that built the executable... or perhaps it's
 still impossible to do what I'm looking for with profiling enabled.
 Does anyone know which is the case?

 --
 Chris


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Push the envelope. Watch it bend.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] For class Monoid; better names than mempty mappend might have been: mid (mident) mbinop

2011-07-25 Thread Thomas Schilling
On 25 July 2011 08:22, Paul R paul.r...@gmail.com wrote:
 Hi Café,

  Thomas I think (<>) is fairly uncontroversial because:
 Thomas (...)
 Thomas 2. It's abstract. i.e., no intended pronunciation

 How can that be an advantage ? A text flow with unnamed (or
 unpronounceable) symbols makes reading, understanding and remembering
 harder, don't you think ? I really think any operator or symbol should
 be intended (and even designed !) for pronunciation.

 Some references state that the monoid binary operation is often named
 dot or times in english. That does not mean the operator must be
 `dot`, `times`, (.) or (x) but at least the doc should provide
 a single, consistent and pronounceable name for it, whatever its
 spelling.

Well, in this case I think it can be beneficial because the
pronunciation depends on the underlying monoid.  E.g., sometimes it
would be "append" or "plus", other times "dot" or "times".  It can, of
course, be useful to also have a good name for the generic operator.
In this case I'd call it "diamond".

-- 
Push the envelope. Watch it bend.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] For class Monoid; better names than mempty mappend might have been: mid (mident) mbinop

2011-07-24 Thread Thomas Schilling
Yes, this has sort-of been agreed upon in a GHC ticket about a year
ago: http://hackage.haskell.org/trac/ghc/ticket/3339

I had a patch in Darcs, but then came the switch to Git.  I ported it
to Git, but didn't iron out all the issues.  That was quite a while
ago, so it's currently a bit bitrotten.  I don't think there's enough time
to get it into 7.2, but since that's largely an unstable release, it
wouldn't hurt to wait until 7.4.  I'll probably work on it at CamHac in a few
weeks.

On 24 July 2011 13:14, Maciej Wos maciej@gmail.com wrote:
 Personally, I have nothing against mempty (although I agree that mid makes
 more sense), but I don't like mappend at all. I wonder what happened to the
 idea of using  instead of mappend (that's what I always do). I think

 a <> b <> c

 looks so much better than

 a `mappend` b `mappend` c

 and it solves the name problem altogether.

 -- Maciej

 On Jul 24, 2011 3:42 AM, KC kc1...@gmail.com wrote:

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe





-- 
Push the envelope. Watch it bend.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] For class Monoid; better names than mempty mappend might have been: mid (mident) mbinop

2011-07-24 Thread Thomas Schilling
 So is the change taking effect?

  We were approaching consensus for the addition of:
  infixr 6 <>

  (<>) :: Monoid m => m -> m -> m
  (<>) = mappend
  and a matching change for (<+>) in the pretty package.

  It was also suggested to make (<>) a method of Monoid and insert the
  following default definitions:
  (<>) = mappend
  mappend = (<>)


  And how about the following:
  (<>) = mconcat
  mconcat = (<>)

 And mempty could be id.

The problems with this are:

  - mconcat is not a binary operator, so using <> would bring little
advantage.  You'd always have to use it in its bracketed form, for
example.

  - Changing mempty to id could break a lot of existing code (e.g. due
to typeclass ambiguity).  This is the main reason.  It's also not
clear to me that this is a better name (it's just shorter).  The
current naming of Monoid methods is modelled after list operations,
because list is a (the?) free monoid.  However, in many cases zero
or one would be an equally good name.

I think (<>) is fairly uncontroversial because:

  1. It's short
  2. It's abstract.  i.e., no intended pronunciation and it looks like
LaTeX's \diamond operator
  3. Changing it would be compatible with most existing libraries.

For this reason, I think a larger change would have to come with a
larger library re-organization.  Johan Tibell suggested something like
that a while ago: instead of lots of little cuts (backwards
incompatible changes), a working group of activists should redesign a
whole new (incompatible) alternative set of core libraries.
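
For reference, the proposed addition is tiny; here is a runnable sketch of it
(note that on recent GHCs the Prelude already exports a <> of its own, so the
sketch hides it to avoid a clash -- that detail is mine, not the proposal's):

```haskell
import Prelude hiding ((<>))   -- recent Preludes export their own <>
import Data.Monoid (mappend)

infixr 6 <>

-- the proposed synonym for mappend
(<>) :: Monoid m => m -> m -> m
(<>) = mappend

main :: IO ()
main = putStrLn ("Hello, " <> "wiki " <> "reader")
```

The right-associative fixity 6 matches list (++), which is the model the
Monoid method names follow.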



 On Sun, Jul 24, 2011 at 11:39 AM, Thomas Schilling
 nomin...@googlemail.com wrote:
 Yes, this has sort-of been agreed upon in a GHC ticket about a year
 ago: http://hackage.haskell.org/trac/ghc/ticket/3339

 I had a patch in Darcs, but then came the switch to Git.  I ported it
 to Git, but didn't iron out all the issues.  That was quite a while
 ago so it's currently a bit bitrotten.  I don't think it's enough time
 to get it into 7.2, but since that's largely an unstable release, it
 wouldn't hurt until 7.4.  I'll probably work on it at CamHac in a few
 weeks.



 --
 --
 Regards,
 KC




-- 
Push the envelope. Watch it bend.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Automatic Reference Counting

2011-07-02 Thread Thomas Schilling
Reference counting usually has much higher overheads than garbage
collection and is tricky to parallelise.  Its main advantage is quicker
release of memory.

I believe the main feature of ARC is that the user does not need to
manually keep reference counts up to date.  I heard from people using
CPython (which uses reference counting) as a library from C that it's
very easy to accidentally forget to update the reference count
correctly.  With ARC the compiler takes care of it, so there's less
opportunity for mistakes.  ARC also optimizes away redundant reference
updates within a function (Haskell functions are usually small, so I
don't know how well that would work).

The reason why reference counting is usually slower is:

  - Say you update a field f of object o from pointing to a to
pointing to b.  This entails three memory writes instead of one,
because you also need to update the reference counts of a and b.

  - You also need some extra space to store the reference counts.

  - You need to take special care with cascades to avoid
occasional long pauses.  E.g., if an object's reference count goes to
zero, the counts of all objects it points to must be decremented.
This might cause another object's reference count to go to
zero, etc.

  - You need a backup tracing collector to collect cycles as you mentioned.

There are many optimizations possible, e.g. delayed reference
counting, but AFAIK reference counting is in general slower than
tracing GC.  It does get used in situations where quicker resource
release and more predictable GC pauses are more important than
absolute performance (or peak throughput).
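
A toy Haskell sketch of the "three writes instead of one" point (illustrative
only: a real collector works on raw memory words, frees objects at zero, and
cascades, none of which this does):

```haskell
import Data.IORef

data Obj = Obj { refCount :: IORef Int, tag :: String }

incRef, decRef :: Obj -> IO ()
incRef o = modifyIORef' (refCount o) (+ 1)
decRef o = modifyIORef' (refCount o) (subtract 1)
-- a real implementation would free the object (and cascade) at zero

-- Updating a field from a to b costs three memory writes, not one:
setField :: IORef Obj -> Obj -> IO ()
setField field b = do
  a <- readIORef field
  incRef b            -- write 1: bump b's count
  decRef a            -- write 2: drop a's count
  writeIORef field b  -- write 3: the actual field update

main :: IO ()
main = do
  a <- Obj <$> newIORef 1 <*> pure "a"
  b <- Obj <$> newIORef 1 <*> pure "b"
  field <- newIORef a
  setField field b
  counts <- mapM (readIORef . refCount) [a, b]
  print counts
```

The two extra writes per pointer update are exactly what deferred reference
counting schemes try to amortise away.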


On Sat, Jul 2, 2011 at 16:51, Thomas Davie tom.da...@gmail.com wrote:
 Hi guys,

 Apple recently announced a new static analysis in Clang called ARC (Automatic 
 Reference Counting).  The idea is to provide what GC provides (zero memory 
 management code by the programmer), but not to incur the runtime penalty of 
 having to have the GC run.  It seems to be extremely effective in Objective-C 
 land.

 I was wondering if any of the compiler gurus out there could comment on the 
 applicability of this kind of analysis to Haskell.  Dropping the GC and hence 
 stopping it blocking all threads and killing parallel performance seems like 
 nirvana.

 The one major problem that's still left with ARC is that it does not solve 
 the retain cycle problem of reference counting.  Programmers must insert a 
 weak keyword to break cycles in their data graphs.  Could even this 
 analysis be automated statically?  Could we put up with a language extension 
 that added this annotation?

 I'd love to hear some comments from people more-experienced-than-I on this.

 Thanks

 Tom Davie
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Push the envelope. Watch it bend.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Toy implementation of the STG machine

2011-06-11 Thread Thomas Schilling
Does Bernie Pope's http://www.haskell.org/haskellwiki/Ministg work for you?

On 11 June 2011 21:19, Florian Weimer f...@deneb.enyo.de wrote:
 I'm looking for a simple implementation of the STG machine to do some
 experiments, preferably implemented in something with memory safety.
 Performance is totally secondary.  I'm also not interested in garbage
 collection details, but I do want to look at the contents of the
 various stacks.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Push the envelope. Watch it bend.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Unbelievable parallel speedup

2011-06-03 Thread Thomas Schilling
While I would guess that your superlinear speedup is due to the large
variance of your single-core case, it is indeed possible to have
superlinear speedup.

Say you have a problem set of size 32MB and an L2 cache of 8MB per
core.  If you run the same program on one CPU it won't fit into the
cache, so you'll have lots of cache misses and this will show in
overall performance.  If you run the same problem on 4 cores and
manage to evenly distribute the working set, then it will fit into the
local caches and you will have very few cache misses.  Because caches
are an order of magnitude faster than main memory, the parallel
program can be more than 4x faster.  To counteract this effect, you
can try to scale the problem with the number of cores (but it then has
to be a truly linear problem).

That said, the variance in your single-CPU case is difficult to
diagnose without knowing more about your program.  It could be due to
GC effects, it could be interaction with the OS scheduler, it could be
many other things.  On many operating systems, if you run a single
core program for a while, the OS scheduler may decide to move it to a
different core in order to spread out wear among the cores.  It's
possible that something like this is happening and, unfortunately,
some Linux systems hide this from the user.  Still, there could be many
other explanations.

On 3 June 2011 13:10, John D. Ramsdell ramsde...@gmail.com wrote:
 I've enjoyed reading Simon Marlow's new tutorial on parallel and
 concurrent programming, and learned some surprisingly basic tricks.  I
 didn't know about the '-s' runtime option for printing statistics.  I
 decided to compute speedups for a program I wrote just as Simon did,
 after running the program on an unloaded machine with four processors.
  When I did, I found the speedup on two processors was 2.4, on three
 it was 3.2, and on four it was 4.4!  Am I living in a dream world?

 I ran the test nine more times, and here is a table of the speedups.

 2.35975 3.42595 4.39351
 1.57458 2.18623 2.94045
 1.83232 2.77858 3.41629
 1.58011 2.37084 2.94913
 2.36678 3.63694 4.42066
 1.58199 2.29053 2.95165
 1.57656 2.34844 2.94683
 1.58143 2.3242  2.95098
 2.36703 3.36802 4.41918
 1.58341 2.30123 2.93933

 That last line looks pretty reasonable to me, and is what I expected.
 Let's look at a table of the elapse times.

 415.67  176.15  121.33  94.61
 277.52  176.25  126.94  94.38
 321.37  175.39  115.66  94.07
 277.72  175.76  117.14  94.17
 415.63  175.61  114.28  94.02
 277.75  175.57  121.26  94.10
 277.68  176.13  118.24  94.23
 277.51  175.48  119.40  94.04
 415.58  175.57  123.39  94.04
 277.62  175.33  120.64  94.45

 Notice that the elapsed times for two and four processors are pretty
 consistent, and the one for three processors is a little inconsistent,
 but the times for the single processor case are all over the map.  Can
 anyone explain all this variance?

 I have enclosed the raw output from the runs and the script that was
 run ten times to produce the output.

 John

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe





-- 
Push the envelope. Watch it bend.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANN: Leksah 0.10.0

2011-04-29 Thread Thomas Schilling
My guess is that you're doing all indexing work inside a single GHC
API session.  When loading external packages GHC caches all .hi files
in memory -- and never unloads them.   Therefore, if you have a large
package DB, that'll consume a lot of memory.  For similar reasons you
can also run into problems with redefined instances if you happen to
process two packages that define the same instances because they too
are cached and never flushed.

The workaround is to start multiple sessions and then combine the
resulting output.

I don't know how much of a problem the Haddock + TH issue is that
David mentioned.  In any case you should make sure that haddock can
see the installed packages so it doesn't need to compile any
dependencies for TH.

On 28 April 2011 09:04, jutaro j...@arcor.de wrote:
 Hi Daniel,

 that seemed to be a real odyssey. I will try to install the statistics
 package
 when I find time. Guess it is this one on hackage:
 http://hackage.haskell.org/package/statistics.
 Just some remarks:
 In case of problems with metadata it is helpful to stop the GUI and call
 leksah-server from the command line. (leksah-server -s collects metainfo for
 new packages).
 What happens then is that leksah-server calls GHC-API and Haddock as a
 library, which itself uses GHC-API.
 So it's a bit like running Haddock on a package, which may of course fail, but
 it is uncommon to have this kind of problem. (It happened once before
 with a type level library, which defined all integers between 1 and several
 thousands...).

 Jürgen

 PS: The server at leksah.org has reached its limit yesterday, the Windows
 installer alone was downloaded about 2000 times! But it should work now.

 --
 View this message in context: 
 http://haskell.1045720.n5.nabble.com/ANN-Leksah-0-10-0-tp4332741p4345891.html
 Sent from the Haskell - Haskell-Cafe mailing list archive at Nabble.com.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Push the envelope. Watch it bend.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell Platform 2011.x - planned release date?

2011-02-01 Thread Thomas Schilling
My guess is that they're waiting for the next (and final) stable
release of 7.0, which should happen in the next few weeks.

On 1 February 2011 08:27, Max Cantor mxcan...@gmail.com wrote:
 January has come and gone and HP 2011 has not come with it.  Is there an 
 updated timetable for the next version of the HP?  I'm not complaining or 
 upset or whining, just trying to plan.

 Great work so far, looking forward to HP 2011!

 mc
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Push the envelope. Watch it bend.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] OT: Monad co-tutorial: the Compilation Monad

2010-12-18 Thread Thomas Schilling
The haskell.org domain expired.  It's being worked on.

On 17 December 2010 12:45, Larry Evans cppljev...@suddenlink.net wrote:
 On 12/17/10 01:32, Max Bolingbroke wrote:
 [snip]
 I can't speak for your monad based approach, but you may be interested
 in Neil Mitchell's Haskell DSL for build systems, called Shake:
 http://community.haskell.org/~ndm/downloads/slides-shake_a_better_make-01_oct_2010.pdf

 WARNING: I clicked on that link in my thunderbird news reader
 and got a page which was something about registering domains.
 It was nothing about Neil's slides.

 I then tried directing my Firefox browser to:

  http://community.haskell.org/

 but got the same web page.

 Am I doing something wrong, or has community.haskell.org somehow been
 hijacked?

 -Larry



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Push the envelope. Watch it bend.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Behaviour of System.Directory.getModificationTime

2010-12-16 Thread Thomas Schilling
Yes, modification times are reported in seconds, so you'll have to
wait on average 0.5s for a file change to be visible via the
modification date.  Due to buffers and filesystem optimisations it
might even take longer.
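
A small way to observe this (assumes a writable current directory; with newer
versions of the directory package the reported granularity may be finer than a
second, in which case the one-second delay is more than enough):

```haskell
import Control.Concurrent (threadDelay)
import System.Directory (getModificationTime, removeFile)

main :: IO ()
main = do
  writeFile "mtime-demo.txt" "v1"
  t1 <- getModificationTime "mtime-demo.txt"
  threadDelay 1100000               -- wait just over one second
  writeFile "mtime-demo.txt" "v2"
  t2 <- getModificationTime "mtime-demo.txt"
  print (t2 > t1)                   -- the rewrite is now visible
  removeFile "mtime-demo.txt"
```

Without the delay, the two timestamps can compare equal even though the file
was rewritten, which is presumably what the failing test was seeing.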

On 16 December 2010 16:50, Arnaud Bailly arnaud.oq...@gmail.com wrote:
 actually, IRL the code works as expected. Might it be possible that
 the speed of test execution is greater than the granularity of the
 system's modification timestamp?

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Push the envelope. Watch it bend.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Rendering of hask in new wiki (MSIE6)

2010-12-15 Thread Thomas Schilling
Yes, the current syntax highlighting plugin (SyntaxHighlight_GeSHi) is
quite annoying.  It's the version that comes with Debian, which has the
advantage that it will be updated automatically.  However, the
surrounding <div class="inline-code"> is my attempt at hacking around
the fact that it doesn't even support inline markup.  Another issue is
the weird way of trimming white space only on the first line.

I'm now leaning towards writing a new custom plugin, but that means I
have to write (and test) PHP code.  Maybe I'll find some time over the
holidays.  If anyone else wants to propose another plugin, let me
know.

On 15 December 2010 14:01, Dimitry Golubovsky golubov...@gmail.com wrote:
 Hi,

 In MSIE6, hask tags are rendered like this (from the Monad_Transformers 
 page):

 transformers: provides the classes
 MonadTrans
 and
 MonadIO
 , as well as concrete monad transformers such as
 StateT

 ... etc.

 The Wiki source:

 [http://hackage.haskell.org/package/transformers transformers]:
 provides the classes haskMonadTrans/hask and haskMonadIO/hask,
 as well as concrete monad transformers such as haskStateT/hask.

 HTML (a small piece of it):

 provides the classes div class=inline-codediv dir=ltr
 style=text-align: left;div class=source-haskell
 style=font-family: monospace;MonadTrans/div/div/div

 Words MonadTrans, MonadIO, StateT etc are enclosed in hask tags.
 They show up in monospace, each starting a new line. Is this only
 MSIE6, or this is how it is supposed to render?

 Thanks.

 PS I am not attaching a screenshot to the mailing list; if anyone
 needs to see it I'll send via personal e-mail.

 --
 Dimitry Golubovsky

 Anywhere on the Web

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Push the envelope. Watch it bend.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [ANNOUNCE] haskell-mpi-1.0.0

2010-12-13 Thread Thomas Schilling
Could you please add your package to the wiki section at
http://haskell.org/haskellwiki/Applications_and_libraries/Concurrency_and_parallelism#MPI
?

On 9 December 2010 21:40, Dmitry Astapov dmi...@well-typed.com wrote:
 Dear Haskellers,
 We are pleased to announce the release of haskell-mpi-1.0.0, a suite of
 Haskell bindings to the C MPI library and convenience APIs on top of it.
 About MPI
 -
 MPI, the Message Passing Interface, is a popular communications protocol for
 distributed parallel computing (http://www.mpi-forum.org/).
 MPI applications consist of independent computing processes which share
 information by message passing. It supports both point-to-point and
 collective communication operators, and manages much of the mundane aspects
 of message delivery. There are several high-quality implementations of MPI
 available, all of which conform to the standard API specification (the
 latest version of which is 2.2). The MPI specification defines interfaces
 for C, C++ and Fortran, and bindings are available for many other
 programming languages.
 About Haskell-MPI
 -
 As the name suggests, Haskell-MPI provides a Haskell interface to MPI, and
 thus facilitates distributed parallel programming in Haskell. It is
 implemented on top of the C API via Haskell's foreign function interface.
 Haskell-MPI provides three different ways to access MPI's functionality:
    * A direct binding to the C interface (see
 Control.Parallel.MPI.Internal).
    * A convenient interface for sending arbitrary serializable Haskell data
 values as messages (see Control.Parallel.MPI.Simple).
    * A high-performance interface for working with (possibly mutable) arrays
 of storable Haskell data types (see Control.Parallel.MPI.Fast).
 We do not currently provide exhaustive coverage of all the functions and
 types defined by MPI 2.2, although we do provide bindings to the most
 commonly used parts. In future we plan to extend coverage based on the needs
 of projects which use the library.
 The package is available from
 http://hackage.haskell.org/package/haskell-mpi. Examples and a comprehensive
 test suite are included in the source distribution.
 Code was tested on 32- and 64-bit platforms, with MPICH2 and OpenMPI. The
 Fast API shows performance comparable to C, and the Simple API is generally
 2-7 times slower due to (de)serialization overhead and the need to issue
 additional MPI requests behind the curtains in some cases.
 Bernie Pope started this project as a rewrite of hMPI which was written by
 Michael Weber and Hal Daume III. He was later joined by Dmitry Astapov,
 working on the library as part of Well-Typed LLP's Parallel Haskell Project.
 Development is happening on GitHub, in git://github.com/bjpop/haskell-mpi.
 Please join in!
 Sincerely yours,
 Dmitry Astapov, Bernie Pope
 --
 Dmitry Astapov
 Well-Typed LLP, http://www.well-typed.com/

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe





-- 
Push the envelope. Watch it bend.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] The Monad Reader links are broken

2010-12-12 Thread Thomas Schilling
I don't have access to the wordpress site, but here's a quick way to fix the
links:

 - Replace links of the form: http://www.haskell.org*/sitewiki/images*
/8/85/TMR-Issue13.pdf

 - With: http://www.haskell.org*/wikiupload*/8/85/TMR-Issue13.pdf

/ Thomas

On 11 December 2010 23:28, Jason Dagit da...@codersbase.com wrote:
 Hello,
 I noticed today that the links in this article point to Haskell.org and
they
 are broken:
 http://themonadreader.wordpress.com/previous-issues/
 Maybe someone can fix this?
 Thanks!
 Jason



-- 
Push the envelope. Watch it bend.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] haskell2010 and state question.

2010-12-09 Thread Thomas Schilling
On 10 December 2010 01:40, Magicloud Magiclouds
magicloud.magiclo...@gmail.com wrote:
 On Wed, Dec 8, 2010 at 6:46 PM, Henk-Jan van Tuyl hjgt...@chello.nl wrote:
 On Wed, 08 Dec 2010 10:03:40 +0100, Magicloud Magiclouds
 magicloud.magiclo...@gmail.com wrote:

 Hi,
  Formerly, I had IORef and some state monad to do the task of keeping
 states.
  Now in haskell 2010, I cannot find anything about it. Do I have to
 use ghc base package for this function?

 These are not standard Haskell '98 or Haskell 2010. You can find IORef in
 the base package; the state monad is in both the mtl and the transformers
 package (mtl is deprecated). The state monad uses the multi-parameter type
 class extension and is therefore not in standard Haskell '98 or Haskell 2010
 code. IORef also uses non-standard code.

 Regards,
 Henk-Jan van Tuyl


 --
 http://Van.Tuyl.eu/
 http://members.chello.nl/hjgtuyl/tourdemonad.html
 --


 First to notice that. So, standard-wise, there is no way to do the state thing?

It's not hard to create your own state monad.  You are right, though,
that there is no standard library support for this in the Haskell2010
standard.
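For the record, a minimal state monad fits in a few lines of standard Haskell.  This is only a sketch (the real mtl/transformers versions add a transformer layer and type classes, and a modern GHC would additionally require Functor and Applicative instances for the Monad instance below):

```haskell
-- A minimal state monad in plain Haskell, no extensions required.
newtype State s a = State { runState :: s -> (a, s) }

instance Monad (State s) where
    return a = State (\s -> (a, s))
    m >>= k  = State (\s -> let (a, s') = runState m s
                            in  runState (k a) s')

get :: State s s
get = State (\s -> (s, s))

put :: s -> State s ()
put x = State (\_ -> ((), x))

-- Example: number the elements of a list, threading a counter.
labelFrom :: Int -> [a] -> ([(Int, a)], Int)
labelFrom n xs = runState (mapM step xs) n
  where step x = do i <- get
                    put (i + 1)
                    return (i, x)
-- labelFrom 1 "ab" == ([(1,'a'),(2,'b')], 3)
```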


-- 
Push the envelope. Watch it bend.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [Haskell] haskell.org migration complete

2010-12-05 Thread Thomas Schilling
On 5 December 2010 08:29, Henning Thielemann
schlepp...@henning-thielemann.de wrote:
 Thomas Schilling schrieb:
 I created http://www.haskell.org/haskellwiki/MigratingWikiContent to
 list known issues and workarounds.  Please feel free to extend that
 page where needed.

 I liked to add that the Category link on each Wiki page is missing.

It's at the bottom of each page.

 How about an alternate CSS layout for the old Wiki style?
 Btw. unfortunately also in Konqueror CSS selections are not preserved
 across sessions.

 I did not like to add these items while having to log in over unsecure
 HTTP connection. Formerly I could just replace http: by https: which
 is no longer possible.

As Mark mentioned, there's the MonoBook theme which you can set as
your default when you log in.  Of course, if you don't want to log in
that doesn't help.  I'll talk to Ian and see what we can do about SSL
support.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Most images broken on haskellwiki pages

2010-12-03 Thread Thomas Schilling
Should be fixed.  PDF previews are currently broken, but images should be fine.

2010/12/3 Eugene Kirpichov ekirpic...@gmail.com:
 Hello,

 Any news on this one?



 01.12.2010, в 11:53, Yitzchak Gale g...@sefer.org написал(а):

 Eugene Kirpichov wrote:
 I looked at a couple pages of mine...
 and looks
 like the vast majority of images are not displaying.

 This probably has to do with moving the wiki to the
 new server during the past few days. I forwarded your
 email to the admin team there for them to have a look.

 Regards,
 Yitz

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Push the envelope. Watch it bend.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Who's in charge of the new Haskell.org server?

2010-12-01 Thread Thomas Schilling
On 1 December 2010 18:55, Christopher Done chrisd...@googlemail.com wrote:
 If someone will allow me to send them an extension for WikiMedia I will
 write one. It shouldn't be complex at all, so code review will be trivial.
 Please let me know! I'm sure other people here are interested to know who to
 go to to get things like this done.
 It would be cool to write an extension to support an embedded Haskell prompt
 (e.g. tryhaskell.org and http://haskell.quplo.com/) inside the wiki. It's
 possible to extend the parser to support custom token types[1], so this
 could be a neat addition. For instance, the _Blow your mind_ article is full
 of examples, you can link to them with tryhaskell.org[3], but that's not as
 cool, why not make these runnable inside the page? :-) If tryhaskell.org
 ever goes down the script can just fall back to displaying code.

The recently-formed haskell.org[1] committee is responsible for the server.

We currently only use plugins distributed with debian by default +
very small modifications for backwards compatibility (the hask and
haskell tags).  The reasoning is that those will be updated
automatically when the server is upgraded.

I think adding new extensions would be possible if they are considered
useful enough.  Running Haskell code has some security implications,
so that'll require a good argument for why it's safe.

[1]: http://www.haskell.org/haskellwiki/Haskell.org_committee

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Conditional compilation for different versions of GHC?

2010-11-30 Thread Thomas Schilling
I think a nicer way to solve that issue is to use Cabal's MIN_VERSION macros.

 1. Add CPP to your extensions.  This will cause cabal to
auto-generate a file with MIN_VERSION_pkg macros for each pkg in
build-depends.

 2. GHC 6.12.* comes with template-haskell 2.4, so to test for that use:

#if MIN_VERSION_template_haskell(2,4,0)
  .. ghc-6.12.* code here.
#endif

This should make it more transparent to the user.
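For completeness, the matching .cabal fragment might look like this (the module name and version bounds here are illustrative, not taken from the original message):

```cabal
library
  exposed-modules:  MyModule        -- hypothetical module name
  build-depends:    base >= 4,
                    template-haskell >= 2.3
  extensions:       CPP
```

With CPP listed, Cabal generates the MIN_VERSION_template_haskell macro from whatever version of template-haskell it actually resolved in build-depends.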

On 27 November 2010 16:59, Jinjing Wang nfjinj...@gmail.com wrote:
 Dear list,

 From ghc 7.0.1 release notes:

 The Language.Haskell.TH.Quote.QuasiQuoter type has two new fields: quoteType 
 and quoteDec.

 Some of my code needs to be conditionally compiled to support both
 version 6 and 7, what is the recommended way to do it?

 ref:

 * 
 http://new-www.haskell.org/ghc/docs/7.0.1/html/users_guide/release-7-0-1.html

 --
 jinjing
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Push the envelope. Watch it bend.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haddock: patch to generate single-index page in addition to perl-letter indexes indices.

2010-10-24 Thread Thomas Schilling
For packages with many items in the index, these pages can get a bit
huge.  How about a permuted index like
http://www.lispworks.com/documentation/HyperSpec/Front/X_Symbol.htm?

E.g., for your use case, you would go to E and then the row with all
the End entries, which would contain all the names with End
anywhere in their name.

I don't know if page size can be a problem, but at least for mobile or
otherwise low-bandwidth devices this can be a nice alternative.

On 24 October 2010 04:41, Ryan Newton new...@mit.edu wrote:
 When I encounter a split-index (A-Z) page it can be quite frustrating if I
 don't know the first letter of what I'm searching for.  I want to use my
 browser find!  For example, tonight I wanted to look at all the functions
 that END in Window in the Chart package -- no luck:
   http://hackage.haskell.org/packages/archive/Chart/0.13.1/doc/html/doc-index.html
 Therefore I propose that even when generating the A-Z individual pages
 that there also be an All option for the single-page version.  Attached is
 a patch against haddock's HEAD (darcs get http://code.haskell.org/haddock/
 right?) that implements this behavior.  As an example, here is FGL's
 documentation built with the patched haddock:
   http://people.csail.mit.edu/newton/fgl_example_doc/doc-index.html
 The great thing about hackage being centralized is that if people are happy
 with this fix it can be widely deployed where it counts, and quickly!
 Cheers,
    -Ryan
 P.S. At the other end of the spectrum, when considering a central index for
 all of hackage (as in the below ticket) maybe it would be necessary to have
     more than 26 pages, i.e. Aa-Am | An-Az or whatever.
     http://hackage.haskell.org/trac/hackage/ticket/516#comment:6

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe





-- 
Push the envelope. Watch it bend.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] The Haskell theme

2010-10-22 Thread Thomas Schilling
I agree with Mark that we shouldn't try to over-constrain things.
However, basic startup-resources are perfectly fine.  It gives a good
default for people who don't really like (web-)design, and can serve
as a baseline for others.  I.e., encourage consistency but don't
enforce it.

On 12 October 2010 22:17, Christopher Done chrisd...@googlemail.com wrote:

 http://img840.imageshack.us/img840/3577/ideasv.png

I mostly like it, with the following remarks:

  - Only blue for a colour scheme is too cold.  This is why I used
this orange-red for the new wiki.  There are other ways to do this, of
course.  I'd just like to encourage people to not only use blueish
tones.

  - The wiki edit links in your sketch are very dark.  I prefer the
light grey as used in the new wiki, because it doesn't stand out that
much.


 To download the Inkscape SVG grab it here:
 http://chrisdone.com/designs/haskell-theme-1.svg

 I don't know what I was thinking here but it seemed like fun:
 http://img412.imageshack.us/img412/1827/rect5935.png

With a bit of tuning, this can look nice.


 I get bored really quickly, I was alright for 15 minutes and then I
 lost the will to point and click. Anyway, I was thinking rather than
 trying to come up with a brilliant all-encompassing design, we could
 agree on conventions:

 * colours
 * headings
 * spacing
 * links
 * the particular incarnation of the logo

 etc.

 I'm pretty happy to go with the colour theme of
 http://new-www.haskell.org/haskellwiki/Haskell

 Haskellers.com also makes me think we need an umbrella theme that
 encompasses all Haskell sites.

 Maybe we should do a theme poll like the logo poll. I don't know.
 Regardless, I think we do need to decide on something and stick with
 it. Should this discussion be taken to the web devel mailing list? Is
 that appropriate?
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Push the envelope. Watch it bend.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Are newtypes optimised and how much?

2010-10-20 Thread Thomas Schilling
Do we really want to treat every newtype wrappers as a form of 'id'?
For example:

newtype Nat = Nat Integer   -- must always be positive

A possible rule (doesn't actually typecheck, but you get the idea):

forall (x :: Nat). sqrt (x * x) = x

If we ignore newtyping we get an incorrect rewrite rule.  It depends
on the exact implementation of which 'id's would be recognised.
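A variant of that rule which does typecheck can be written against explicit helper functions (squareNat and sqrtNat below are hypothetical, introduced only for illustration, not part of any library):

```haskell
newtype Nat = Nat Integer   -- invariant: the Integer is non-negative
  deriving (Eq, Show)

squareNat :: Nat -> Nat
squareNat (Nat n) = Nat (n * n)

sqrtNat :: Nat -> Nat
sqrtNat (Nat n) = Nat (floor (sqrt (fromInteger n :: Double)))

-- Sound only because of the Nat invariant: the same rule on raw
-- Integers would be wrong for negative values, since sqrt ((-3)^2) == 3.
{-# RULES "sqrtNat/squareNat" forall x. sqrtNat (squareNat x) = x #-}
```

GHC applies RULES during optimisation; -ddump-rule-firings shows whether the rule actually fired.  If the compiler looked through the newtype wrappers here, it could mistakenly apply the rule at type Integer, which is exactly the concern above.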


On 20 October 2010 21:08, James Andrew Cook mo...@deepbondi.net wrote:
 On Oct 20, 2010, at 11:58 AM, Gregory Crosswhite gcr...@phys.washington.edu 
 wrote:

 On 10/20/10 4:09 AM, Simon Peyton-Jones wrote:
 No, this isn't optimised.  The trouble is that you write (map Foo xs), but 
 GHC doesn't know about 'map'.  We could add a special case for map, but 
 then you'd soon want (mapTree Foo my_tree).

 How about a special case for fmap?  That seems like it should handle a lot 
 of cases.


 Or even better, a special handling of 'id' in rules pragmas that would cause 
 any rule matching id to also match any newtype constructor or projection. Or 
 have the compiler automatically add rules that map all newtype wrappers and 
 unwrappers to unsafeCoerce and make sure that unsafeCoerce has rules for map, 
 fmap, (.), etc.

 --James___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Push the envelope. Watch it bend.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] An interesting paper on VM-friendly GC

2010-10-16 Thread Thomas Schilling
On 16 October 2010 10:35, Andrew Coppin andrewcop...@btinternet.com wrote:
  On 15/10/2010 11:50 PM, Gregory Crosswhite wrote:

  On 10/15/2010 03:15 PM, Andrew Coppin wrote:

 On the other hand, their implementation uses a modified Linux kernel, and
 no sane person is going to recompile their OS kernel with a custom patch
 just to run Haskell applications, so we can't do quite as well as they did.
 But still, and interesting read...

 Ah, but you are missing an important fact about the article:  it is not
 about improving garbage collection for Haskell, it is about improving
 collection for *Java*, which a language in heavy use on servers.  If this
 performance gain really is such a big win, then I bet that it would highly
 motivate people to make this extension as part of the standard Linux kernel,
 at which point we could use it in the Haskell garbage collector.

 Mmm, that's interesting. The paper talks about Jikes, but I have no idea
 what that is. So it's a Java implementation then?

Jikes is a virtual machine used for research; it actually has a decent
just-in-time compiler.  Its memory management toolkit (MMTk) also
makes it quite easy to experiment with new GC designs.

 Also, it's news to me that Java finds heavy use anywhere yet. (Then again,
 if they run Java server-side, how would you tell?)

Oh, it's *very* heavily used.  Many commercial products run on Java
both server and client.

 It seems to me that most operating systems are designed with the assumption
 that all the code being executed will be C or C++ with manual memory
 management. Ergo, however much memory the process has requested, it actually
 *needs* all of it. With GC, this assumption is violated. If you ask the GC
 nicely, it may well be able to release some memory back to you. It's just
 that the OS isn't designed to do this, so the GC has no idea whether it's
 starving the system of memory, or whether there's plenty spare.

 I know the GC engine in the GHC RTS just *never* releases memory back to the
 OS. (I imagine that's a common choice.) It means that if the amount of truly
 live data fluctuates up and down, you don't spend forever allocating and
 freeing memory from the OS. I think we could probably do better here.
 (There's an [ancient] feature request ticket for it somewhere on the
 Trac...) At a minimum, I'm not even sure how much notice the current GC
 takes of memory page boundaries and cache effects...

Actually that's been fixed in GHC 7.

 GC languages are not exactly rare, so maybe we'll see some OSes start adding
 new system calls to allow the OS to ask the application whether there's any
 memory it can cheaply hand back. We'll see...

I wouldn't be surprised if some OS kernels already have some
undocumented features to aid VM-friendly GC.  I think it's probably
going to have to be the other way around, though.  Not the OS should
ask for its memory back, but the application should ask for the page
access bits and then decide itself (as done in the paper).  I don't
know how that interacts with the VM paging strategy, though.
Microkernels such as L4 already support these things (e.g., L4 using
the UNMAP system call).  Xen and co. probably have something similar.


-- 
Push the envelope. Watch it bend.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: How to make cabal pass flags to happy?

2010-10-16 Thread Thomas Schilling
You probably want to customise Setup.lhs to use defaultMainWithHooks
and add your own custom suffix handler to the UserHooks, see:

http://hackage.haskell.org/packages/archive/Cabal/1.8.0.6/doc/html/Distribution-Simple.html#t:UserHooks

Take a look at PPSuffixHandler
(http://hackage.haskell.org/packages/archive/Cabal/1.8.0.6/doc/html/Distribution-Simple-PreProcess.html#t:PPSuffixHandler)
and see the source code for ppHappy to get started.

I'm CC'ing Duncan, in case he has a better idea.

On 13 October 2010 19:44, Niklas Broberg niklas.brob...@gmail.com wrote:
 On Fri, Oct 8, 2010 at 4:55 PM, Niklas Broberg niklas.brob...@gmail.com 
 wrote:
 Hi all,

 I want to do something I thought would be quite simple, but try as I
 might I can't find neither information nor examples on how to achieve
 it.

 What I want specifically is to have happy produce a GLR parser from my
 .ly file, and I want this to happen during 'cabal install'. Which in
 turn means I want cabal to pass the --glr flag to happy during
 setup. My best guess is that I might want to use 'ppHappy' [1], or
 something in the vicinity, but there's no documentation for the
 combinator and it's far from obvious how to pass arguments to it.

 ... help? :-)

 ... anyone? ... please? :-)

 Cheers,

 /Niklas
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Push the envelope. Watch it bend.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] IORef memory leak

2010-10-15 Thread Thomas Schilling
Correct, here's a video of Simon explaining the thunk blackholing
issue and its solution in GHC 7:

http://vimeo.com/15573590
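The read-back-and-seq workaround Evan describes in the quoted message can be bundled into a strict helper.  A sketch (later versions of base ship essentially this as atomicModifyIORef'):

```haskell
import Data.IORef (IORef, atomicModifyIORef)

-- Apply f atomically and force the result, so the IORef never holds a
-- growing chain of unevaluated thunks.  The tuple returns the new
-- value both as the stored value and as the result to be forced.
atomicModifyIORefStrict :: IORef a -> (a -> a) -> IO ()
atomicModifyIORefStrict ref f = do
    x <- atomicModifyIORef ref (\a -> let a' = f a in (a', a'))
    x `seq` return ()
```

Note that seq only forces to weak head normal form; for deeply nested structures you would additionally need something like deepseq.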

On 15 October 2010 21:31, Gregory Collins g...@gregorycollins.net wrote:
 Evan Laforge qdun...@gmail.com writes:

 The only workaround I could find is to immediately read the value back
 out and 'seq' on it, but it's ugly.

 Yep! C'est la vie unfortunately.

 The way atomicModifyIORef works is that the new value isn't actually
 evaluated at all; GHC just swaps the old value with a thunk which will
 do the modification when the value is demanded.

 It's done like that so that the atomic modification can be done with a
 compare-and-swap CPU instruction; a fully-fledged lock would have to be
 taken otherwise, because your function could do an unbounded amount of
 work. While that's happening, other mutator threads could be writing
 into your memory cell, having read the same old value you did, and then
 *splat*, the souffle is ruined.

 Once you're taking a lock, you've got yourself an MVar. This is why
 IORefs are generally (always?) faster than MVars under contention; the
 lighter-weight lock mechanism means mutator threads don't block, if the
 CAS fails atomicModifyIORef just tries again in a busy loop. (I think!)

 Of course, the mutator threads themselves then tend to bump into each
 other or do redundant work when it's time to evaluate the thunks (GHC
 tries to avoid this using thunk blackholing). Contention issues here
 have gotten radically better in recent versions of GHC I think.

 Forgive me if I've gotten anything wrong here, I think Simon Marlow
 might be the only person who *really* understands how all this stuff
 works. :)


 So two questions:

 writeIORef doesn't have this problem.  If I am just writing a simple
 value, is writeIORef atomic?  In other words, can I replace
 'atomicModifyIORef r (const (x, ())' with 'writeIORef r x'?

 Any reason to not do solution 1 above?

 Well if you're not inspecting or using the old value then it's safe to
 just blow it away, yes.

 Cheers,

 G
 --
 Gregory Collins g...@gregorycollins.net
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Push the envelope. Watch it bend.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] can Haskell do everyting as we want?

2010-08-05 Thread Thomas Schilling
On 4 August 2010 21:21, Jason Dagit da...@codersbase.com wrote:
 Is scion still being developed?  I have the impression it's dead now. Really
 a shame, I think it has a good solid design and just needs work/polish.

It is:  http://github.com/nominolo/scion/network

I changed the architecture to use separate processes (I tried to hold
it off, but in the end it was necessary). I hope that this will make a
few things easier.  The other issue is time of course.

Also JP has been adding some features in his Scion branch for
EclipseFP.  The plan is to merge that back eventually.

/ Thomas
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] hsb2hs preprocessor looking for a maintainer

2010-08-04 Thread Thomas Schilling
On 4 August 2010 12:05, Ivan Lazar Miljenovic ivan.miljeno...@gmail.com wrote:
 Max Bolingbroke batterseapo...@hotmail.com writes:

 On 4 August 2010 11:39, Ivan Lazar Miljenovic ivan.miljeno...@gmail.com 
 wrote:
 Joachim Breitner m...@joachim-breitner.de writes:
 the problem is that Template Haskell does not work on all architectures,
 so the Debian people prefer solutions that avoid TH if it is not
 needed.

 Yeah, we've just come across this problem in Gentoo when dealing with
 how Haddock behaves when there's TH to be documented in some
 architectures :s

 I didn't know this: is there a corresponding GHC ticket? I can't find
 one, but I could just have chosen the wrong keywords. It seems like
 the right thing to do would just be to make TH work properly rather
 than maintain a special-purpose preprocessor.

 Not that I know of; all I know is that Sergei (aka slyfox) disabled
 building documentation (for libraries that come with GHC) on some
 architectures where ghci isn't available because of this.  My
 understanding is that TH uses ghci (or something like it) to evaluate
 the expressions, and that when Haddock started understanding TH it also
 needed to run ghci to build documentation containing TH.

Correct.  It also needs to be able to generate machine code from the
currently compiled package and its dependencies.  This is because ghci
does not support some features, in particular unboxed tuples.
Therefore, to be on the safe side, Haddock has to create (unoptimised)
binary code as well.

I believe the main reason why ghci isn't available on all platforms is
the dynamic linker.  I don't think it would be easy (or even feasible)
to switch to something like 'ld', though.



 --
 Ivan Lazar Miljenovic
 ivan.miljeno...@gmail.com
 IvanMiljenovic.wordpress.com
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
If it looks like a duck, and quacks like a duck, we have at least to
consider the possibility that we have a small aquatic bird of the
family Anatidae on our hands.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Preview the new haddock look and take a short survey

2010-08-04 Thread Thomas Schilling
On 4 August 2010 10:11, Magnus Therning mag...@therning.org wrote:
 On Wed, Aug 4, 2010 at 06:00, Mark Lentczner ma...@glyphic.com wrote:
 The Haddock team has spent the last few months revamping the look of the 
 generated output. We're pretty close to done, but we'd like to get the 
 community's input before we put it in the main release.

 Please take a look, and then give us your feedback through a short survey

 Sample pages:  http://www.ozonehouse.com/mark/snap-xhtml/index.html

 I really like it, especially the synopsis tab on the right.  Brilliant!  The
 TOC is nice too!  The over-all impression is that it doesn't look as
 auto-generated as the old style.

 Frame version: http://www.ozonehouse.com/mark/snap-xhtml/frames.html

 Also very good looking.  Does the current stable version of Haddock really
 create a frame version?
 I've never seen one before...

Yes, I added it two years ago, but we never advertised it much,
because of a problem on Firefox.  You had to press the back button
twice, which was annoying.  I've just found a fix, though, so it
should work fine in the next release.  It already works fine in
Chrome.


 /M

 --
 Magnus Therning                        (OpenPGP: 0xAB4DFBA4)
 magnus@therning.org          Jabber: magnus@therning.org
 http://therning.org/magnus         identi.ca|twitter: magthe
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
If it looks like a duck, and quacks like a duck, we have at least to
consider the possibility that we have a small aquatic bird of the
family Anatidae on our hands.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Preview the new haddock look and take a short survey

2010-08-04 Thread Thomas Schilling
On 4 August 2010 15:44, aditya siram aditya.si...@gmail.com wrote:
 I really like the color scheme and the Javadoc looking frames.

 One suggestion I can make is to have the index show all the functions with
 type signatures without having to pick a letter. A lot of times I'll be
 looking for a function of a certain signature as opposed to a name. Indeed
 an index of type signatures would great! I remember wishing I had this when
 trying the understand the Parsec package.

 -deech

Wouldn't hoogle be better for this kind of use case?  The index can
become very large already.

More direct hoogle/hayoo integration (at least on Hackage) sounds like
a worthwhile goal, though.  Noted.


 On Wed, Aug 4, 2010 at 8:55 AM, Yitzchak Gale g...@sefer.org wrote:

 Mark Lentczner wrote:
  The Haddock team...
  Please take a look, and then give us your feedback

 Very very nice. I took the survey, but here are some comments
 I left out.

 I like the idea of the Snappy style the best, but there are two
 serious problems with it, at least in my browser (Safari):

 1. The black on dark blue of the Snap Packages title makes it
 nearly unreadable for me.
 2. The wide fonts stretch things out so far on my screen that the
 page becomes almost unusable.

 The other styles are fine, I would use them instead.

 Here is a comment I'll repeat from the survey because of its
 importance: Please add a collapse all button for the tree on
 the contents page. For me, that is perhaps the most urgent
 thing missing in all of Haddock. It would make that tree so
 much more usable.

 Thanks for the great work,
 Yitz
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe





-- 
If it looks like a duck, and quacks like a duck, we have at least to
consider the possibility that we have a small aquatic bird of the
family Anatidae on our hands.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Design for 2010.2.x series Haskell Platform site

2010-07-17 Thread Thomas Schilling
Haters gonna hate.

The new wiki will have a user preference to switch back to the default
monobook style.  You can always do that if you want.  It doesn't work
fully, yet, but that's on my ToDo list.

On 17 July 2010 11:53, Andrew Coppin andrewcop...@btinternet.com wrote:
 Thomas Schilling wrote:

 It would be great if the new design were compatible with the new wiki
 design ( http://lambda-haskell.galois.com/haskellwiki/ ).  It doesn't
 have to be *that* similar, just compatible.


 Hmm. That's really not very pretty... (Or maybe it's just that I dislike
 brown? I didn't like Ubuntu mainly because it's brown.)

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
If it looks like a duck, and quacks like a duck, we have at least to
consider the possibility that we have a small aquatic bird of the
family Anatidae on our hands.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Design for 2010.2.x series Haskell Platform site

2010-07-17 Thread Thomas Schilling
Webdesign for an open source project is pretty much doomed from the
beginning.  Design requires a few opinionated people rather than
democracy.  This design is the result of a haskell-cafe thread which
naturally involved a lot of bikeshedding.  It has its flaws, but it's
certainly better than the old design, and I know of no programming
language website that has a particularly great design, either.

Sure, there's always room for improvement.  Usability tests would be
nice, but they're also time consuming.  Fighting CSS to do what you
want it to and make it work on at least all modern browsers is
annoying and a huge time sink as well.  I put the search field on the
right (it's not very useful anyway), but otherwise I disagree with
your requested changes.  I would be willing to consider a different
background image if you send me one (I may play around with a few
myself).

The logo on the left is inspired by http://www.alistapart.com.  It
works quite well on pages that are not the home page.  The main
feature of the design is that it scales quite nicely to different
screen sizes (on recent enough browsers) -- try resizing your window.
Also note that the exact contents can be edited (and probably should
be).

  / Thomas

On 17 July 2010 13:31, Christopher Done chrisd...@googlemail.com wrote:
 On 17 July 2010 13:37, Andrew Coppin andrewcop...@btinternet.com wrote:
 Thomas Schilling wrote:
 Haters gonna hate.
 Well, I don't *hate* it. It just looks a little muddy, that's all. I tend to
 go for bright primary colours. But, as you say, each to their own...
 The actual layout isn't bad. A bit tall-and-thin, but otherwise OK.
 The new wiki will have a user preference to switch back to the default
 monobook style.  You can always do that if you want.  It doesn't work
 fully, yet, but that's on my ToDo list.
 Heh, well, maybe if we make half a dozen styles, there will be at least one
 that everyone is happy with. ;-)

 Hi Andy, thanks for the kind words. Whether we like the default theme
 or not right now, I still think it's important that the first thing a
 newbie sees makes a good impression. The fact that you can change the
 default theme to something else is irrelevant. Personally I agree it's
 a bit Ubuntu without the modernness, it's more Age of Empires/CIV,
 we-do-archeology-with-our-italics-serif-font (I find it a chore to
 read, can't imagine what people who aren't native to the Latin
 character would think), and the Haskell logo is oddly placed so that
 it looks more like an advertisement, search should always be on the
 right hand side, navigation should really be on the left, putting on
 the right is iffy. I do like the orange links. But also if we liked
 it, regardless, we should do user testing (checkout Don't Make Me
 Think, Rocket Surgery Made Easy).

 Sadly nobody has the time nor inclination to do proper web development
 and actually test designs and get feedback, so I suppose we're working
 with the time we've got. At least with theme support, we can write a
 load of themes, and then perhaps do a vote on what people think makes
 the best impression as a default. That seems most efficient and fair.
 I'll certainly make a couple.

 Hats off to Thomas for implementing a more friendly theme.
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
If it looks like a duck, and quacks like a duck, we have at least to
consider the possibility that we have a small aquatic bird of the
family Anatidae on our hands.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hot-Swap with Haskell

2010-07-16 Thread Thomas Schilling
What would be the semantics of hot-swapping?  For, example, somewhere
in memory you have a thunk of expression e.  Now the user wants to
upgrade e to e'.  Would you require all thunks to be modified?  A
similar problem occurs with stack frames.

You'd also have to make sure that e and e' have the same (or
compatible types).  e' most likely has different free variables than
e, how can you translate thunk one to the other?

Now assuming you don't try to do this, how would you handle the case
when something goes wrong?

On 16 July 2010 04:05, Andy Stewart lazycat.mana...@gmail.com wrote:
 Hi all,

 I'm researching how to build a hot-swappable Haskell program that can
 develop itself at runtime, like Emacs.

 Essentially, the Yi/Xmonad/dyre solution is the "replace the currently
 executing binary" technique:

   re-compile new code with new binary entry

   when re-compile success
      $ do
          save state before re-launch new entry
          replace current entry with new binary entry (executeFile)
          store state after re-launch new entry

 There are some problems with re-compile solution:

 1) You can't save *all* state with some FFI code; for example, with gtk2hs
 you can't save the state of a GTK+ widget. You will lose some state after
 re-launching the new entry.

 2) Sometimes re-execution is unacceptable; for example, if you were running
 some command in a terminal before you re-compiled, you would need to
 re-execute that command to restore state after the re-launch, and in that
 situation re-executing the command is unacceptable.

 I wonder whether there is a better way to hot-swap new code without a
 re-compile/reboot.

 Thanks,

  -- Andy

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
If it looks like a duck, and quacks like a duck, we have at least to
consider the possibility that we have a small aquatic bird of the
family Anatidae on our hands.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Design for 2010.2.x series Haskell Platform site

2010-07-16 Thread Thomas Schilling
It would be great if the new design were compatible with the new wiki
design ( http://lambda-haskell.galois.com/haskellwiki/ ).  It doesn't
have to be *that* similar, just compatible.

On 16 July 2010 19:37, Don Stewart d...@galois.com wrote:
 chrisdone:
 Hi Don,

 What's the ETA on getting the site wiki upgraded and to what version
 will it be? If we're looking at another couple of weeks I'll come up
 with a new wiki template this weekend to replace the current one.

    For haskell.org? Thomas Schilling and Ian Lynagh are working on that
    (CC'd).

 Regarding the Haskell Platform, maybe a summer theme is in order?
 Sunrise, here's a whole platform upgrade. Get it while it's hot, etc.

    That's a great idea! :-)

 Regarding the home page, I think we should involve more piccies of
 people active in the community at conferences and hackathons, etc.
 Seeing pictures of Haskellers is great. It tells everyone this
 language is busy and active, it motivates existing or budding
 Haskellers to contribute and get active, and it's easy to slap a
 picture up on the home page.

    http://cufp.org is a bit like that now.

 -- Don

 ___
 Haskell-platform mailing list
 haskell-platf...@projects.haskell.org
 http://projects.haskell.org/cgi-bin/mailman/listinfo/haskell-platform




-- 
If it looks like a duck, and quacks like a duck, we have at least to
consider the possibility that we have a small aquatic bird of the
family Anatidae on our hands.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Docs on the current and future constraint solver?

2010-07-14 Thread Thomas Schilling
The latest work is OutsideIn(X):
  http://www.haskell.org/haskellwiki/Simonpj/Talk:OutsideIn

This is quite a long paper.  It describes a framework for
constraint-based type inference and then instantiates it with a
constraint solver that supports type families, GADTs and type classes.

Constraint-based type inference divides type checking into two phases:
 constraint-generation and solving.  In practice the two may be
interleaved for efficiency reasons, of course.

As an example (does not type check):

type family C a
type instance C Int = Int
c :: a -> C a

f :: Int -> Bool
f = \n -> c n

In order to type check f we generate the *wanted* constraints (~
denotes equality)

(c -> C c) ~ (d -> e)     -- from the application c n
(d -> e) ~ (Int -> Bool)  -- from the type signature

From the type family declarations we additionally get the top-level axiom:

C Int ~ Int

The wanted constraints are now simplified, e.g.,

c ~ d,  C c ~ e -- from the first constraint
d ~ Int, e ~ Bool   -- from the second constraint
c ~ Int,  C c ~ Bool, d ~ Int, e ~ Bool  -- from the above constraints
C Int ~ Bool -- also

Now we get an error when combining this with the top-level axiom.

If the user specifies type class constraints in the type signature or
performs a GADT pattern match we additionally get *given* constraints.
 The general solver state thus takes the form:

 G => W

where G are the given constraints, W the wanted constraints, and
=> is implication.  The solver then reacts two constraints from G,
two constraints from W, or one from each, until no more
simplifications are possible.  To make this efficient, the solver also
regularly canonicalises constraints.  E.g., function symbols go to the
left and constructors to the right.  Further performance improvements
must come from clever indexing the current state to make the search
for applicable rules efficient.

This solver is currently being implemented in GHC (there's a branch on
darcs.h.o), but correctness comes first.  It'll probably take a while
until this new solver becomes efficient.

The paper does not talk about efficiency, but it lists all the rules
and many other details.
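To see the two outcomes concretely, here is a runnable variant of the example (my own sketch, not from the paper; the HasC class is introduced only so that c has an actual implementation):

```haskell
{-# LANGUAGE TypeFamilies #-}
module Main where

-- The type family from the example; C Int ~ Int is the only axiom.
type family C a
type instance C Int = Int

-- A class gives us an implementable stand-in for c :: a -> C a.
class HasC a where
  c :: a -> C a

instance HasC Int where
  c = id

-- f :: Int -> Bool ; f = \n -> c n  -- rejected: solving needs C Int ~ Bool
f :: Int -> Int                      -- accepted: the axiom gives C Int ~ Int
f = \n -> c n

main :: IO ()
main = print (f 42)
```

Uncommenting the rejected signature reproduces the C Int ~ Bool failure described above.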

/ Thomas

On 14 July 2010 18:39, Corey O'Connor coreyocon...@gmail.com wrote:
 I believe I have run headlong into issue #3064 in ghc
 (http://hackage.haskell.org/trac/ghc/ticket/3064). All I think I know
 is this:
 * this is a performance issue with the system used to solve type constraints.
 * the solver is undergoing an overhaul to resolve performance issues
 in addition to other issues.
 * An efficient constraint solver is difficult. NP-Complete in the general 
 case?

 Beyond that I'm at a loss. What can I read to understand the
 constraint satisfaction problem as it stands in GHC? Is there a paper
 on the implementation of the current solver? Anything on how the new
 solver will differ from the current?

 I think I located one page on the new solver:
 http://hackage.haskell.org/trac/ghc/wiki/TypeFunctionsSolving

 Cheers,
 Corey O'Connor
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
If it looks like a duck, and quacks like a duck, we have at least to
consider the possibility that we have a small aquatic bird of the
family Anatidae on our hands.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why Either = Left | Right instead of something like Result = Success | Failure

2010-05-27 Thread Thomas Schilling
It's indeed arbitrary.  Other common names are Inl and Inr (presumably
standing for inject left/right).  Some Haskell projects do indeed use
a more specific name.  The advantage of using the generic Left/Right
is reusability of library code.  The particular names of the datatype
and its constructors are completely arbitrary.  The use of Right for
Success is a handy pun -- the program returned the right answer.
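As a small illustration of that reusability (my own example, not from the thread; safeDiv is hypothetical), generic combinators such as either work unchanged whatever "failure" and "success" mean in your domain:

```haskell
module Main where

-- Hypothetical safeDiv: Left carries the failure, Right the (right) answer.
safeDiv :: Int -> Int -> Either String Int
safeDiv _ 0 = Left "division by zero"
safeDiv x y = Right (x `div` y)

main :: IO ()
main = do
  -- 'either' from the Prelude handles both constructors generically.
  putStrLn (either ("error: " ++) show (safeDiv 10 2))
  putStrLn (either ("error: " ++) show (safeDiv 10 0))
```

Had the constructors been named Failure/Success, every such library combinator would need a parallel version for the new type.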

HTH,

/ Thomas

On 27 May 2010 15:25, Ionut G. Stan ionut.g.s...@gmail.com wrote:
 Hi,

 I was just wondering if there's any particular reason for which the two
 constructors of the Either data type are named Left and Right. I'm thinking
 that something like Success | Failure or Right | Wrong would have been a
 little better.

 I've recently seen that Scala uses a similar convention for some error
 notifications so I'm starting to believe there's more background behind it
 than just an unfortunate naming.

 Thanks,
 --
 Ionuț G. Stan  |  http://igstan.ro
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Push the envelope.  Watch it bend.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Performance Issue

2010-05-22 Thread Thomas Schilling
On 22 May 2010 16:06, Daniel Fischer daniel.is.fisc...@web.de wrote:
 On Saturday 22 May 2010 16:48:27, Daniel Fischer wrote:
 The boxing is due to the use of (^).
 If you write x*x instead of x^2, it can use the primop *## and needn't
 box it.
 As a side effect, the original time leak probably wouldn't have occured
 with x*x instead of x^2 because one would've made it
    let x = newton a (n-1) in (x*x +a) / (2*x)
 instead of writing out newton a (n-1) thrice anyway, wouldn't one?


 Even if. With

  newton :: Double -> Int -> Double
 newton a 0 = a
 newton a n =
    (((newton a (n-1)) * (newton a (n-1)) ) + a)/(2*(newton a (n-1)))

 (and optimisations of course), GHC does share newton a (n-1).

 Lesson: Writing x^2 is a baad thing.

Interesting.  Clearly GHC needs a better partial evaluator! :)  (^) is
not inlined because it's recursive (or rather its worker is) and
there also is no SPECIALISE pragma for Double -> Integer -> Double.
Yes, it's Integer, not Int, because the literal 2 defaults to
Integer.

It doesn't seem to be possible to add SPECIALISE pragmas for non-local
functions.  If I copy over the definition of (^) no pragma is needed.
GHC creates a worker for Double# -> Integer -> Double# and that seems
to be sufficient to make CSE work.
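For reference, here is the x*x formulation of the quoted example as a complete program (a sketch; the scaling in main is only there to print a stable value):

```haskell
module Main where

-- Newton iteration for sqrt.  Writing x*x instead of x^2 lets GHC use the
-- unboxed *## primop, and the let-binding shares the recursive call, so the
-- function is linear in n rather than exponential.
newton :: Double -> Int -> Double
newton a 0 = a
newton a n = let x = newton a (n - 1) in (x * x + a) / (2 * x)

main :: IO ()
main = print (round (newton 2 30 * 1000000) :: Integer)  -- ~ sqrt 2 * 10^6
```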



-- 
Push the envelope.  Watch it bend.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] making the GHC Api not write to stderr

2010-05-21 Thread Thomas Schilling
You could try changing the log_action[1] member of the DynFlags.  A
while ago I turned most printed errors into some form of error
message, but I wouldn't be surprised if I missed some places.  All
output should go through log_action, though, so try changing that to
intercept any output.

[1]: 
http://haskell.org/ghc/docs/6.12-latest/html/libraries/ghc-6.12.2/DynFlags.html#v%3Alog_action

On 20 May 2010 19:05, Phyx loneti...@gmail.com wrote:
 I was wondering how to forcibly quiet down the API. I have a custom handler
 in place, but when I call the function on failure both my handler gets
 called and somehow errors get printed to stderr, which I
 really need to avoid.



 My current code looks like

 getModInfo :: Bool -> String -> String -> IO (ApiResults ModuleInfo)
 getModInfo qual file path = handleSourceError processErrors $
   runGhc (Just libdir) $ do
     dflags <- getSessionDynFlags
     setSessionDynFlags $ configureDynFlags dflags
     target <- guessTarget file Nothing
     addTarget target
     setSessionDynFlags $ dflags { importPaths = [path] }
     load LoadAllTargets
     graph <- depanal [] False
     let modifier = moduleName . ms_mod
         modName  = modifier $ head graph
         includes = includePaths dflags
         imports  = importPaths dflags
     dflags' <- Debug.trace (moduleNameString modName) getSessionDynFlags
     setSessionDynFlags $ dflags' { includePaths = path:includes
                                  , importPaths  = path:imports
                                  }
     parsed  <- parse modName
     checked <- typecheckModule parsed

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe





-- 
Push the envelope.  Watch it bend.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] executeFile failing on macosx

2010-05-16 Thread Thomas Schilling
Works fine on 10.6.3.  If you run with +RTS -N2, though, you'll get
forking not supported with +RTS -Nn greater than 1

The reason for this is that forking won't copy over the threads which
means that the Haskell IO manager stops working (you'd have to somehow
reinitialise the RTS while leaving heap and runtime stacks in tact --
very tricky).

I'm using http://hackage.haskell.org/package/process to run external
processes.  I haven't had any problems with it.
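As a sketch of that approach (assuming a POSIX echo on the PATH), the same /bin/echo invocation via the process package avoids forkProcess entirely:

```haskell
module Main where

import System.Process (readProcess)

-- Spawn the external program and capture its stdout.  Unlike
-- forkProcess + executeFile, this works fine with -threaded.
main :: IO ()
main = do
  out <- readProcess "echo" ["Ok"] ""
  putStr out
```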

On 17 May 2010 00:06, David Powell da...@drp.id.au wrote:

 On Mon, May 17, 2010 at 1:33 AM, Bulat Ziganshin bulat.zigans...@gmail.com
 wrote:

 Hello David,

 Sunday, May 16, 2010, 7:18:29 PM, you wrote:

  executeFile is failing for me on Mac OS X 10.5.8, with ghc 6.12.1
  when compiling with -threaded.  Compiling without -threaded, or
  running on linux is fine.
   forkProcess $ executeFile /bin/echo False [Ok] Nothing

 afair, forkProcess and -threaded shouldn't work together on any Unix.
 can you try forkIO or forkOS instead?


 Hi Bulat,

 Both, forkIO and forkOS fail in the same way for me with -threaded.  I
 believe this is because macosx requires the process to only have a single
 thread when doing an execv(), which I thought was the purpose of
 forkProcess?

 Cheers,

 -- David

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe





-- 
Push the envelope.  Watch it bend.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ghc api renamed source

2010-05-12 Thread Thomas Schilling
The difficulty is that renamer and type-checker are mutually recursive
because of Template Haskell.  I've been looking into this a while ago
and I have a basic idea how a better API could look like, but I
haven't sorted out the details.

One very hacky workaround -- and I'm not sure whether it can actually
work -- is to copy the file TcRnDriver from the GHC sources and modify
it to do what you need.  It wouldn't be an easy task, though.

On 12 May 2010 14:35, Phyx loneti...@gmail.com wrote:
 Hi all,



 I was wondering if it’s possible to get to the renamed source before
 actually typechecking,



 I currently have



     parsed  - parse modName

     checked - typecheckModule parsed

     let renamed = tm_renamed_source checked

     value   = tm_typechecked_source checked



 but if typechecking fails, I get no information at all (an error), while I
 should be able to still get the renamed source.

 Any suggestions?



 Cheers,

 Phyx

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe





-- 
Push the envelope.  Watch it bend.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC 6.12 on OS X 10.5

2010-05-08 Thread Thomas Schilling
Building from source alone didn't help, but building from source
together with the following extra lines to .cabal/config worked:

extra-lib-dirs: /usr/lib
extra-lib-dirs: /opt/local/lib

This is not an ideal solution because this means that any OS X library
will shadow the corresponding Macports library.  This could lead to
problems if you use a Macports library that has several dependencies
one of which is shadowed.

HTH

On 7 May 2010 22:42, Jason Dagit da...@codersbase.com wrote:


 On Mon, Dec 28, 2009 at 9:03 AM, Aaron Tomb at...@galois.com wrote:

 On Dec 22, 2009, at 9:36 PM, wren ng thornton wrote:

 Aaron Tomb wrote:

 I've come across the issue with iconv, as well.
 The problem seems to be that some versions of iconv define iconv_open
 and some related functions as macros (that then call libiconv_open, etc.),
 and some versions of iconv have exported functions for everything. In
 particular, the iconv bundled with OS X (1.11) defines iconv_open, but the
 iconv installed with MacPorts (1.13) does not. The binary package for GHC
 6.12.1 seems to have been compiled on a system without MacPorts, and
 therefore references iconv_open (etc.) from the Apple-distributed version 
 of
 the library.
 If you set up an installation of GHC 6.12 on OS X (I've only tried 10.5)
 with no references to /opt/local/lib, everything works fine. If you include
 /opt/local/lib in the extra-lib-dirs field of your .cabal/config file, it
 tries to link with the MacPorts version and fails with undefined 
 references.

 Is this a problem with *versions* of iconv, or with branches/forks? If
 it's versions, then it seems like migrating to >=1.13 would be good for
 everyone. If it's just branches, do you know whether this afflicts Fink
 users as well as MacPorts users, or should I be the guinea pig to test that?

 It's a problem with versions. I checked that the official iconv repository
 does indeed make the change between 1.11 and 1.13 that causes the problem.
 The issue is that OS X includes an old version. So migrating to >=1.13 means
 convincing Apple to upgrade what they include with the system. If we can
 accomplish that, I'd be thrilled. But it hasn't happened yet as of 10.6.

 Fink comes with 1.12. I'm not sure whether 1.12 is more like 1.11 or 1.13.

 Resurrecting an old thread.

 I would like to find out:
   1. Has there been any update on how to workaround this?
   2. Does building 6.12 from macports (or from source on a macports enabled
 system) fix the problem?

 It seems like more than just GHC is affected by this, but I'm having trouble
 digging up solid information on the web about how to workaround it.  Many
 sources (such as stackoverflow) point to this thread so posting a solution
 here would be a win.

 Thanks,
 Jason

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe





-- 
Push the envelope.  Watch it bend.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Is anyone using Haddock's support for frames?

2010-05-05 Thread Thomas Schilling
Ok, I think I should clarify.

I believe that the framed view with a long list of modules on the left
and the haddocks on the right is still useful.  What I don't mind
getting rid of is the third frame, which shows the contents of the
mini_* files.  I would have preferred to have something similar in the
regular Haddock output (instead of the synopsis or the contents): make
it less obtrusive by using a small font and making it float: right.
That will be possible with the new HTML backend, so I'm all for
removing the mini_* stuff.

The long list of modules should ideally be grouped by package, but
that can be added later.  I don't think it took longer than a few days
to implement, so it shouldn't be much of an issue to port it over to a
new backend.

On 5 May 2010 10:34, Tristan Allwood t...@zonetora.co.uk wrote:
 +1 to keep it until equivalent functionality is made mainline

 I've had tinyurl.com/haskelldoc aliased to the main frame page
 (http://www.haskell.org/ghc/docs/6.12.2/html/libraries/frames.html) and
 used it extensively on a daily basis for GHC libraries and GHC API
 browsing.  Navigating the current non-framed, disparate, separate
 documentation feels painful and slow.  I would note, though, that the
 frames pages aren't currently working on hackage:
 e.g. 
 http://hackage.haskell.org/packages/archive/text/0.7.1.0/doc/html/frames.html).

 BTW, I would point out the two best documentation systems I've seen in
 other languages (javadoc[1] and rubydoc[2]) are frame based and (IMO)
 very easy to navigate.

 [1] http://java.sun.com/javase/6/docs/api/
 [2] http://www.ruby-doc.org/core/

 Cheers,

 Tris


 On Tue, May 04, 2010 at 08:19:45PM +0200, David Waern wrote:
 Hi

 Since version 2.4.0 Haddock has generated HTML output that uses frames
 (index-frames.html) in addition to the normal output. We'd like to
 deprecate this feature unless there is a significant amount of users.
 The reason is two-fold:

   * We probably want to replace the frames with something more modern
 (like a sidebar on the same page) in the future

   * We are rewriting the HTML backend and it would be nice to avoid
 unnecessary work

 So if you're using this feature and want to keep it, please speak up!

 cc:ing cvs-ghc@ in case they have any users of the frames due to the
 size of the GHC code base. (This might have been the the original
 motivation for the feature).

 Thanks,
 David

 ___
 Cvs-ghc mailing list
 cvs-...@haskell.org
 http://www.haskell.org/mailman/listinfo/cvs-ghc

 ___
 Cvs-ghc mailing list
 cvs-...@haskell.org
 http://www.haskell.org/mailman/listinfo/cvs-ghc




-- 
Push the envelope.  Watch it bend.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Is anyone using Haddock's support for frames?

2010-05-04 Thread Thomas Schilling
I think it will no longer be needed once Haddock outputs table-less
layout code.  Frames caused problems with the back-button, so they
weren't really an improvement.  A simple CSS float:right + smaller
font on the div containing the index would be a lot better.

I think it would be best to keep the headings in the index, though.
Haddock's current long list of things is often not very helpful,
because the organisation by categories is gone (and the headings alone
are often not sufficient).


On 4 May 2010 19:19, David Waern david.wa...@gmail.com wrote:
 Hi

 Since version 2.4.0 Haddock has generated HTML output that uses frames
 (index-frames.html) in addition to the normal output. We'd like to
 deprecate this feature unless there is a significant amount of users.
 The reason is two-fold:

  * We probably want to replace the frames with something more modern
 (like a sidebar on the same page) in the future

  * We are rewriting the HTML backend and it would be nice to avoid
 unnecessary work

 So if you're using this feature and want to keep it, please speak up!

 cc:ing cvs-ghc@ in case they have any users of the frames due to the
 size of the GHC code base. (This might have been the the original
 motivation for the feature).

 Thanks,
 David




-- 
Push the envelope.  Watch it bend.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell (wiki) Logo license

2010-04-23 Thread Thomas Schilling
It was posted to the Wiki, IIUC.  Anything posted to the wiki is
implicitly licensed by the wiki license:
http://haskell.org/haskellwiki/HaskellWiki:Copyrights

On 23 April 2010 07:10, Jens Petersen peter...@haskell.org wrote:
 Hi,

 We use the Haskell Logo in Fedora OS installer for the Haskell packages group:
 http://git.fedorahosted.org/git/?p=comps-extras.git;a=tree

 I'm trying to get the logo updated to the nice new logo designed by
 Jeff Wheeler:
 http://haskell.org/sitewiki/images/a/a8/Haskell-logo-60.png

 To do that we need to update the license in
 http://git.fedorahosted.org/git/?p=comps-extras.git;a=blob;f=COPYING

 Can someone tell me what the correct license for the above image file is?
 Is it the wiki license or something else?

 Thank you, Jens
 --
 on behalf of Fedora Haskell SIG
 https://fedoraproject.org/wiki/Haskell_SIG
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Push the envelope.  Watch it bend.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC Api typechecking

2010-04-18 Thread Thomas Schilling
Looking at the code for GHC, it turns out that your use case is not
supported.  It is not allowed to have in-memory-only files.  If you
specify a buffer it will still try to find the module file on the
disk, but it will (or at least should) use the contents from the
specified string buffer.

I've been thinking about changing the Finder (the part that maps
module names to source files and .hi files) to use a notion of a
virtual file.  This way, the API client could define how and where
data is stored.

On 18 April 2010 11:01, Phyx loneti...@gmail.com wrote:
 Hi,

 I checked out how Hint is doing it, but unfortunately they're calling a 
 function in the GHC api's interactive part to typecheck a single statement, 
 much like :t in ghci,
 so I can't use it to typecheck whole modules.
 I've tried working around not being able to construct a TargetId but ran into 
 another wall.
 I can't find any way to do dependency analysis on the in-memory target, so the
 dependency graph would be empty, which is of course a big problem.

 Does anyone know if Leksah uses the GHC api for typechecking? And if it only 
 gives type errors after you save a file?

 The code I've been trying is

 typeCheckStringOnly :: String -> IO (ApiResults Bool)
 typeCheckStringOnly contents = handleSourceError processErrors $
  runGhc (Just libdir) $ do
    buffer <- liftIO $ stringToStringBuffer contents
    clock  <- liftIO getClockTime
    dflags <- getSessionDynFlags
    setSessionDynFlags dflags
    let srcLoc   = mkSrcLoc (mkFastString "internal:string") 1 1
        dynFlag  = defaultDynFlags
        state    = mkPState buffer srcLoc dynFlag
        parsed   = unP Parser.parseModule state
        pkgId    = stringToPackageId "internal"
        name     = mkModuleName "Unknown"
        mod'     = mkModule pkgId name
        location = ModLocation Nothing "" ""
        summary  = ModSummary mod' HsSrcFile location clock Nothing [] []
                              dynFlag Nothing
    (\a -> setSession $ a { hsc_mod_graph = [summary] }) =<< getSession
    case parsed of
       PFailed _ _        -> return $ ApiOk False
       POk newstate mdata -> do let module' = ParsedModule summary mdata
                                check <- typecheckModule module'
                                return $ ApiOk True

 this fails with a ghc panic

 : panic! (the 'impossible' happened)
  (GHC version 6.12.1 for i386-unknown-mingw32):
        no package state yet: call GHC.setSessionDynFlags

 Please report this as a GHC bug:  http://www.haskell.org/ghc/reportabug

 :(

 Cheers,
 Phyx

 -Original Message-
 From: Gwern Branwen [mailto:gwe...@gmail.com]
 Sent: Saturday, April 17, 2010 20:59
 To: Phyx
 Subject: Re: [Haskell-cafe] GHC Api typechecking

 On Sat, Apr 17, 2010 at 1:49 PM, Phyx loneti...@gmail.com wrote:
 Hi all, I was wondering if someone knows how to do the following:



 I’m looking to typecheck a string using the GHC Api, where I run into
 problems is that I need to construct a Target, but the TargetId only seem to
 reference physical files.



 Of course I can write the string to a file and typecheck that file, but I
 would like to do it all in memory and avoid IO if possible.



 Does anyone know if this is possible?



 For the record I’m trying to create the target as follows



 createTarget :: String -> IO Target
 createTarget content =
  do clock  <- getClockTime
     buffer <- stringToStringBuffer content
     return $ Target { targetId           = TargetModule (mkModuleName
 "string:internal") <-- problem
                     , targetAllowObjCode = True
                     , targetContents     = Just (buffer,clock)
                     }



 typeCheckStringOnly :: String -> IO (ApiResults Bool)
 typeCheckStringOnly contents = handleSourceError processErrors $
 runGhc (Just libdir) $ do
     dflags <- getSessionDynFlags
     setSessionDynFlags dflags
     target <- liftIO $ createTarget contents
     addTarget target
     load LoadAllTargets
     let modName = mkModuleName "string:internal" <-- problem again, don't know
 how to create the dependency graph then.
     graph <- depanal [modName] True
     (\a -> setSession $ a { hsc_mod_graph = graph }) =<< getSession
     value <- fmap typecheckedSource (typeCheck modName)
     return $ ApiOk True



 Cheers,

 Phyx

 Have you looked at how the Hint package does things?
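[Editor's note, for the archives: the usual trick is to keep a TargetFile id with a placeholder file name and supply the source via targetContents; when targetContents is set, GHC should read the buffer rather than the file, so no real file is needed. An untested sketch against the same API; "Unknown.hs" is only a placeholder:]

```haskell
-- Sketch: the file named in TargetFile need not exist; the buffer in
-- targetContents is what GHC should actually read.
createInMemoryTarget :: String -> IO Target
createInMemoryTarget content = do
  clock  <- getClockTime
  buffer <- stringToStringBuffer content
  return Target
    { targetId           = TargetFile "Unknown.hs" Nothing
    , targetAllowObjCode = False
    , targetContents     = Just (buffer, clock)
    }
```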

 --
 gwern

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Push the envelope.  Watch it bend.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ghc #defines

2010-04-14 Thread Thomas Schilling
__NHC__ is defined when the code is compiled with the nhc98 compiler
[1].  Similarly, Hugs, another Haskell implementation, defines __HUGS__.

[1]: http://www.haskell.org/nhc98/
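[Editor's note: a minimal illustration of branching on these predefined symbols. `__GLASGOW_HASKELL__`, `__NHC__` and `__HUGS__` are the symbols the respective implementations predefine; compiled with GHC this prints "GHC":]

```haskell
{-# LANGUAGE CPP #-}
module Main where

-- Each Haskell implementation predefines its own CPP symbol, so code
-- can branch on the compiler without any build-system help.
compilerName :: String
#if defined(__GLASGOW_HASKELL__)
compilerName = "GHC"
#elif defined(__NHC__)
compilerName = "nhc98"
#elif defined(__HUGS__)
compilerName = "Hugs"
#else
compilerName = "unknown"
#endif

main :: IO ()
main = putStrLn compilerName
```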

On 14 April 2010 11:36,  gladst...@gladstein.com wrote:
 I may need to put some #ifdef conditionalizations into some
 cross-platform code. I'm wondering where I can find the definitions of
 symbols like __NHC__. I tried googling but I'm swamped by uses, no
 definitions that I can see.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Push the envelope.  Watch it bend.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Difficulties installing LLVM bindings

2010-04-09 Thread Thomas Schilling
Bryan said a while ago that Manuel Chakravarty had some Mac related
patches for LLVM, don't know if they have been integrated yet.

On 9 April 2010 23:11, Max Bolingbroke batterseapo...@hotmail.com wrote:
 On 9 April 2010 18:38, Aran Donohue aran.dono...@gmail.com wrote:
 Hi Haskell-Cafe,
 I can't get the LLVM bindings for Haskell to install. Does anyone know what
 I might need to do? Has anyone seen this error before?
 Here's the problem: (Installing from latest darcs source)

 I just tried this on my Mac and got the same problem. The problem is
 described in config.log:

 
 configure:3659: g++ -o conftest -g -O2 -I/usr/local/include  -D_DEBUG
 -D_GNU_SOURCE -D__STDC_LIMIT_MACROS -D__STDC_CONSTANT_MACROS  -m32
 -L/usr/local/lib  -lpthread -lm    conftest.c -lLLVMCore
 -lLLVMSupport -lLLVMSystem >&5
 ld: warning: in /usr/local/lib/libLLVMCore.a, file is not of required
 architecture
 ld: warning: in /usr/local/lib/libLLVMSupport.a, file is not of
 required architecture
 ld: warning: in /usr/local/lib/libLLVMSystem.a, file is not of
 required architecture
 Undefined symbols:
  _LLVMModuleCreateWithName, referenced from:
      _main in cc5D6X0z.o
 

 So perhaps LLVM needs to be built universal or something?

 Cheers,
 Max
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Push the envelope.  Watch it bend.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Haskell.org re-design

2010-04-07 Thread Thomas Schilling
Yup, I have to agree.  The Ruby web site certainly is the best web
site for a programming language that I've come across, but it's
certainly not amazing.  I like the python documentation design, but
their home page is a bit dull.  Anyway, here's another variation, this
time with more colour:

http://i.imgur.com/Lj3xM.png

The image is about 80k (while the website alone is < 10k) so I hope
there won't be any bandwidth issues.  Regarding the particular
contents:

  (a) I won't post another version for every tiny wibble.  You know,
you can actually post text via email (yes, really!) so if anyone has
improvements for how the sections should look, post the suggested
alternative contents on this list.

  (b) A little redundancy is no problem at all.  I try to follow the
inverted pyramid model: put all the important information at the top,
and add more details below.  If that leads to a small amount of
duplication so be it.

  (c) As mentioned before, we don't want a perfect home page, we
simply want a better one.  Incremental improvements can be made later
on.

  (d) Who actually *can* update the homepage?  Ian, Ross, Malcolm, Simon M?

  (e) I don't have an iPhone, *Droid, or iPad, so I'd need some help
testing on any of those.

  (f) The design is not fixed width, and most sizes are specified in
terms of font size or percentages.  I merely added a max-width
restriction so that it still looks decent on maximised screens.  I
tried to remove it, but that just doesn't look good anymore.

On 7 April 2010 14:19, Daniel Fischer daniel.is.fisc...@web.de wrote:
 Am Mittwoch 07 April 2010 04:09:17 schrieb Gregory Crosswhite:
 While I think that (d) is a valid concern, it is also important not to
 let the perfect be the enemy of the good.  If we agree that the proposed
 web site layout is sufficiently better than the current one and is good
 enough aesthetically, then I think we should go ahead and switch to the
 new layout and *then* start thinking about how we could make it

 Good plan.

 *completely amazing* like the Ruby web site,

 www.ruby-lang.org ?

 Sure, that looks pretty good, but completely amazing?

 because if we demand
 completely amazing for our *first* try then I fear that all that will
 happen is that nothing will change because the bar will have been set
 too high.

 Cheers,
 Greg

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Push the envelope.  Watch it bend.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Haskell.org re-design

2010-04-07 Thread Thomas Schilling
http://i.imgur.com/kFqP3.png   Didn't know about CSS's rgba to
describe transparency.  Very useful.

On 7 April 2010 18:19, Gregory Crosswhite gcr...@phys.washington.edu wrote:
 Ooo, I really like this revision;  it is a major improvement in your design!  
 I particularly like the picture you chose for the top, and the new way that 
 you have laid out all of the boxes and made the bottom right box a different 
 shade so that it is easier to distinguish it as a different column.  Also, I 
 concur with your use of the inverted pyramid model, even if it comes at the 
 expense of a little redundancy.

 My only quibble is that I don't like the fact that the summary text at the 
 top has a font background color, so that there are in essence several boxes 
 around the text of different sizes and with space in between the lines.  I 
 recognize that the purpose of the font background was to help the text 
 contrast with the picture behind it, but it would be nicer if there were a 
 better solution, such as by putting a box around all of the text and then 
 filling that with color (so there aren't boxes of different sizes containing 
 the text and empty spaces between the lines), or by putting a translucent box 
 around the text so that we can still see the background but it's faded a bit 
 so that the text still shows up.

 Cheers,
 Greg

 On Apr 7, 2010, at 9:53 AM, Thomas Schilling wrote:

 Yup, I have to agree.  The Ruby web site certainly is the best web
 site for a programming language that I've come across, but it's
 certainly not amazing.  I like the python documentation design, but
 their home page is a bit dull.  Anyway, here's another variation, this
 time with more colour:

 http://i.imgur.com/Lj3xM.png

 The image is about 80k (while the website alone is < 10k) so I hope
 there won't be any bandwidth issues.  Regarding the particular
 contents:

  (a) I won't post another version for every tiny wibble.  You know,
 you can actually post text via email (yes, really!) so if anyone has
 improvements for how the sections should look, post the suggested
 alternative contents on this list.

  (b) A little redundancy is no problem at all.  I try to follow the
 inverted pyramid model: put all the important information at the top,
 and add more details below.  If that leads to a small amount of
 duplication so be it.

  (c) As mentioned before, we don't want a perfect home page, we
 simply want a better one.  Incremental improvements can be made later
 on.

  (d) Who actually *can* update the homepage?  Ian, Ross, Malcolm, Simon M?

  (e) I don't have an iPhone, *Droid, or iPad, so I'd need some help
 testing on any of those.

  (f) The design is not fixed width, and most sizes are specified in
 terms of font size or percentages.  I merely added a max-width
 restriction so that it still looks decent on maximised screens.  I
 tried to remove it, but that just doesn't look good anymore.

 On 7 April 2010 14:19, Daniel Fischer daniel.is.fisc...@web.de wrote:
 Am Mittwoch 07 April 2010 04:09:17 schrieb Gregory Crosswhite:
 While I think that (d) is a valid concern, it is also important not to
 let the perfect be the enemy of the good.  If we agree that the proposed
 web site layout is sufficiently better than the current one and is good
 enough aesthetically, then I think we should go ahead and switch to the
 new layout and *then* start thinking about how we could make it

 Good plan.

 *completely amazing* like the Ruby web site,

 www.ruby-lang.org ?

 Sure, that looks pretty good, but completely amazing?

 because if we demand
 completely amazing for our *first* try then I fear that all that will
 happen is that nothing will change because the bar will have been set
 too high.

 Cheers,
 Greg

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




 --
 Push the envelope.  Watch it bend.
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe





-- 
Push the envelope.  Watch it bend.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

