Re: [Haskell-cafe] automatically inserting type declarations

2009-04-07 Thread Brandon S. Allbery KF8NH

On 2009 Apr 6, at 23:06, FFT wrote:

I remember hearing about a Haskell mode for Vim, Emacs, Yi or
VisualHaskell that inserts type declarations automatically (it's
lazier to just check the type than to write it manually), but I can't
remember any details. What editor mode / IDE was it?


I think you're talking about Shim.  Sadly I've heard zero about it  
since its announcement, and at the time it was rather buggy.


--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] allb...@kf8nh.com
system administrator [openafs,heimdal,too many hats] allb...@ece.cmu.edu
electrical and computer engineering, carnegie mellon university    KF8NH






Re: [Haskell-cafe] Generating arbitrary function in QuickCheck

2009-04-07 Thread Janis Voigtlaender

Jason Dagit wrote:

On Mon, Apr 6, 2009 at 10:09 PM, Eugene Kirpichov ekirpic...@gmail.com wrote:


Since the argument to sortBy must impose a linear ordering on its
arguments, and any linear ordering may as well be generated by
assigning an integer to each element of type 'a', and your sorting
function is polymorphic, from the free theorem for the sorting
function we may deduce that it suffices to test your function on
integer lists with a casual comparison function (Data.Ord.compare),
and there is no need to generate a random comparison function.



Interesting.  How is this free theorem stated for the sorting
function?  Intuitively I understand that if the type is polymorphic,
then it seems reasonable to just pick one type and go with it.


You can try free theorems here:

http://linux.tcs.inf.tu-dresden.de/~voigt/ft/

For example, for

sort :: Ord a => [a] -> [a]

it generates the following:

forall t1,t2 in TYPES(Ord), f :: t1 -> t2, f respects Ord.
 forall x :: [t1]. map f (sort x) = sort (map f x)

where:

f respects Ord if f respects Eq and
  forall x :: t1.
   forall y :: t1. compare x y = compare (f x) (f y)
  forall x :: t1. forall y :: t1. (<) x y = (<) (f x) (f y)
  forall x :: t1. forall y :: t1. (<=) x y = (<=) (f x) (f y)
  forall x :: t1. forall y :: t1. (>) x y = (>) (f x) (f y)
  forall x :: t1. forall y :: t1. (>=) x y = (>=) (f x) (f y)

f respects Eq if
  forall x :: t1. forall y :: t1. (==) x y = (==) (f x) (f y)
  forall x :: t1. forall y :: t1. (/=) x y = (/=) (f x) (f y)

Assuming that all the comparison functions relate to each other in the
mathematically sensible way, the latter can be reduced to:

f respects Ord if
  forall x :: t1. forall y :: t1. (x <= y) = (f x <= f y)

For sortBy you would get a similar free theorem.

To see how the free theorem allows you to switch from an arbitrary type
to just integers, set t2=Int and simply use f to build an
order-preserving bijection between elements in the list x and a prefix
of [1,2,3,4,...].
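
To make that concrete, here is a minimal QuickCheck sketch of that
testing strategy (mySortBy is only a stand-in for whatever polymorphic
sorting function is actually under test):

import Data.List (sortBy)
import Test.QuickCheck (quickCheck)

-- Stand-in for the polymorphic sorting function under test.
mySortBy :: (a -> a -> Ordering) -> [a] -> [a]
mySortBy = sortBy

-- By the free-theorem argument above, comparing against a model
-- on Int lists with the standard 'compare' suffices.
prop_sortModel :: [Int] -> Bool
prop_sortModel xs = mySortBy compare xs == sortBy compare xs

main :: IO ()
main = quickCheck prop_sortModel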

Ciao, Janis.

--
Dr. Janis Voigtlaender
http://wwwtcs.inf.tu-dresden.de/~voigt/
mailto:vo...@tcs.inf.tu-dresden.de



Re: [Haskell-cafe] replicateM should be called mreplicate?

2009-04-07 Thread Thomas Davie


On 7 Apr 2009, at 07:37, David Menendez wrote:

On Mon, Apr 6, 2009 at 1:46 PM, Luke Palmer lrpal...@gmail.com  
wrote:
On Mon, Apr 6, 2009 at 11:42 AM, David Menendez d...@zednenem.com  
wrote:


Of course, this suggests that mfix should be fixM, so perhaps a better
distinction is that mplus and mfix need to be defined per-monad,
whereas filterM and replicateM are generic.


Don't you think that is an incidental distinction, not an essential
one?  It would be like naming our favorite operations mbind and joinM,
just because of the way we happened to write the monad class.


Fair enough. I only added that comment when I noticed that my
explanation for picking replicateM over mreplicate also applied to
mfix.

Looking through Control.Monad, I see that all the *M functions require
Monad, whereas the m* functions require MonadPlus (or MonadFix).


Actually, most of the *M functions only require Applicative – they're  
just written in a time when that wasn't in the libraries.
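
For example, a replicate combinator needs nothing beyond Applicative;
a sketch (the name replicateA is made up, it is not in the libraries):

import Control.Applicative (Applicative, pure, (<$>), (<*>))

-- replicateM with only an Applicative constraint.
replicateA :: Applicative f => Int -> f a -> f [a]
replicateA n x
  | n <= 0    = pure []
  | otherwise = (:) <$> x <*> replicateA (n - 1) x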



I wonder to what extent that pattern holds in other libraries?


I'm not sure how to generalise this pattern, but it's probably worth
noting that fmap is fmap, not mapF.  I can't see any pattern that it
fits into; really, I suspect it's a case of "what shall we name this?"
and not enough thought about consistent naming as the libraries evolved.


Bob


[Haskell-cafe] tail recursion

2009-04-07 Thread Daryoush Mehrtash
Is the call to go in the following code considered as tail recursion?

data DList a = DLNode (DList a) a (DList a)

mkDList :: [a] -> DList a

mkDList [] = error "must have at least one element"
mkDList xs = let (first,last) = go last xs first
             in  first

  where go :: DList a -> [a] -> DList a -> (DList a, DList a)
        go prev []     next = (next,prev)
        go prev (x:xs) next = let this = DLNode prev x rest
                                  (rest,last) = go this xs next
                              in  (this,last)


Daryoush


Re: [Haskell-cafe] automatically inserting type declarations

2009-04-07 Thread Claus Reinke

I remember hearing about a Haskell mode for Vim, Emacs, Yi or
VisualHaskell that inserts type declarations automatically (it's
lazier to just check the type than to write it manually), but I can't
remember any details. What editor mode / IDE was it?


As far as I know, my haskellmode plugins for Vim were the first
to do that, in their long-gone predecessor incarnation of hugs.vim.

But I'm pretty sure this feature was adopted by the Emacs folks
as soon as people started saying they liked it. That is, types for
top-level declarations - more precise types are on everyone's 
todo list, I think (by just doing what VisualHaskell used to do:
ask a dedicated GHC API client for details).

Take the identifier under cursor, run something like

   ghc -e ":t id" current_module

and either show the result, or insert it. These days, the plugins no
longer call out to GHC every time, instead they update an internal
map whenever you use ':make' in quickfix mode and get no errors
from GHC. Anyway, the plugins have recently moved here:

http://projects.haskell.org/haskellmode-vim


What do most people use with GHC on Linux? I'm more used to Vim than
to Emacs. Yi sounds like something I might like. Is it stable enough
to solve more problems than it would create? (I hate buggy and broken
stuff)


haskellmode-vim isn't free of problems (mostly to do with large
numbers of installed libraries vs naive scripts). The reason it exists
is to solve problems I don't want to have, so it tends to improve
whenever a problem bugs me enough, but whether the result works
for you, you have to try for yourself!-)

Claus



Re: [Haskell-cafe] Monad transformer to consume a list

2009-04-07 Thread Henning Thielemann


On Tue, 7 Apr 2009, Stephan Friedrichs wrote:


Henning Thielemann wrote:


is there a monad transformer to consume an input list? I've got external
events streaming into the monad that are consumed on demand and I'm
not sure if there's something better than a StateT.


I wondered that, too. I wondered whether there is something inverse to
Writer, and Reader is apparently not the answer. Now I think that
State is indeed the way to go to consume a list. Even better is StateT
List Maybe:

next :: StateT [a] Maybe a
next = StateT Data.List.HT.viewL   -- see utility-ht package



But a StateT provides the power to modify the list in other ways than
reading the first element (modify (x:)). Maybe ParsecT is closer to what
I'm looking for ;)


If you want to restrict the functionality of StateT, then wrap it in a 
newtype.
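
A minimal sketch of such a wrapper, assuming the utility-ht package for
viewL (the names Consumer and runConsumer are only illustrative):

{-# LANGUAGE GeneralizedNewtypeDeriving #-}
module Consumer (Consumer, next, runConsumer) where

import Control.Monad.State (StateT (..), evalStateT)
import qualified Data.List.HT as HT   -- utility-ht package

-- Only 'next' and 'runConsumer' are exported, so callers cannot
-- modify the remaining input in arbitrary ways.
newtype Consumer e a = Consumer (StateT [e] Maybe a)
  deriving (Functor, Applicative, Monad)

next :: Consumer e e
next = Consumer (StateT HT.viewL)

runConsumer :: Consumer e a -> [e] -> Maybe a
runConsumer (Consumer m) = evalStateT m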



[Haskell-cafe] Re: Configuring cabal dependencies at install-time

2009-04-07 Thread John Lato
 From: Jeff Heard jefferson.r.he...@gmail.com

 Is there a way to do something like autoconf and configure
 dependencies at install time?  Building buster, I keep adding
 dependencies and I'd like to keep that down to a minimum without the
 annoyance of littering Hackage with dozens of packages.  For instance,
 today I developed an HTTP behaviour and that of course requires
 network and http, which were previously not required.  I'm about to
 put together a haxr XML-RPC behaviour as well, and that of course
 would add that much more to the dependency list.  HaXml, haxr, and
 haxr-th most likely.

 so... any way to do that short of making a bunch of separate packages
 with one or two modules apiece?  Otherwise I'm thinking of breaking
 things up into buster, buster-ui, buster-network, buster-console, and
 buster-graphics to start and adding more as I continue along.


I'd be interested in hearing answers to this as well.  I'm not a fan
of configure-style compile-time conditional compilation, at least for
libraries.  It makes it much harder to specify dependencies.  With
this, if package Foo depends on buster and the HTTP behavior, it's no
longer enough to specify build-depends: buster because that will
only work if buster was configured properly on any given system.

I think that the proper solution is to break up libraries into
separate packages as Jeff suggests (buster, buster-ui, etc.), but then
the total packages on hackage would explode.  I don't feel great about
doing that with my own packages either; is it a problem?  If so, maybe
there could be just one extra package, e.g. buster and buster-extras.
Is there a better solution I'm missing?

John Lato


[Haskell-cafe] UPDATE: haskellmode for Vim now at projects.haskell.org (+screencast; -)

2009-04-07 Thread Claus Reinke

I have become aware that many Haskellers are not aware of the
Haskell mode plugins for Vim, in spite of the 100 downloads
per month I saw when I last checked. Since the plugins have just
completed their move to their new home at

http://projects.haskell.org/haskellmode-vim/

this seems to be a good opportunity to mention them again (now
with 100% more screencasts!-). They do much (though certainly
not all) of the stuff people often wish for here, such as showing or 
adding types, looking up source where available locally or looking 
up documentation where source isn't available. Mostly, they collect
my attempts to automate tasks when I have to do them often enough.

Here is a section of the quick reference (:help haskellmode-quickref):

   |:make|   load into GHCi, show errors (|quickfix| |:copen|)
   |_ct| create |tags| file 
   |_si| show info for id under cursor

   |_t|  show type for id under cursor
   |_T|  insert type declaration for id under cursor
   |balloon| show type for id under mouse pointer
   |_?|  browse Haddock entry for id under cursor
   |:IDoc| {identifier}  browse Haddock entry for unqualified {identifier}
   |:MDoc| {module}  browse Haddock entry for {module}
   |:FlagReference| {s}  browse Users Guide Flag Reference for section {s}
   |_.|  qualify unqualified id under cursor
   |_i|  add 'import module(identifier)' for id under cursor
   |_im| add 'import module' for id under cursor
   |_iq| add 'import qualified module(identifier)' for id 
under cursor
   |_iqm|add 'import qualified module' for id under cursor
   |_ie|  make imports explicit for import statement under cursor
   |_opt|add OPTIONS_GHC pragma
   |_lang|   add LANGUAGE pragma
   |i_CTRL-X_CTRL-O| insert-mode completion based on imported ids 
(|haskellmode-XO|)
   |i_CTRL-X_CTRL-U| insert-mode completion based on documented ids 
(|haskellmode-XU|)
   |i_CTRL-N|insert-mode completion based on imported sources
   |:GHCi|{command/expr} run GHCi command/expr in current module

For those who have never used these plugins, or haven't used Vim
at all, it has often been difficult to imagine what editing in Vim can
be like. The old quick tour of features available has now been updated
from screenshots to screencasts (my first venture into this area - please 
let me know whether that is useful or a waste of time!-), so you can 
watch them on Youtube before deciding to invest time into learning Vim.


For those who are using Vim, the only reason not to use my 
haskellmode plugins would be if you had your own (not uncommon
among Vim users;-), in which case I hope you make yours available 
as well (feel free to adopt features from my plugins, and let me know

if you have some useful features to contribute), here:

http://www.haskell.org/haskellwiki/Libraries_and_tools/Program_development#Vim

For those who have happily been using these plugins: in the process
of moving the site, I noticed that I hadn't updated the published
version for quite some time - apparently, no one had missed the
fixes, but in case you want to check, the relevant section of my update

log is appended below.

Happy Vimming!-)
Claus

- updates since last published version on old site
04/04/2009
 haskell_doc.vim: when narrowing choices by qualifier for _?, take
lookup index from un-narrowed list (else we could
end up in the docs for the wrong module)

02/04/2009
 ghc.vim: actually, we can try to make a reasonable guess at the parent
   type for constructors in _ie, from their type signature

01/04/2009
 ghc.vim: try a bit harder to escape " and ' in :GHCi
  eliminate duplicates in _ie and mark data constructor imports
as ???(Cons) - we can't reliably figure out the parent type
for the constructor:-(
  handle Prelude as special case in _ie (can't just comment out
import, need to import [qualified] Prelude [as X]() )

 haskell_doc.vim: fix keys (no namespace tags) and urls (modules) for :MDoc

31/03/2009
 all files: new home page at projects.haskell.org

28/03/2009
 haskell_doc.vim: in ProcessHaddockIndexes2, fix a case where making new entries
   could lose old ones (eg, zipWith's base package locations got
   lost when adding its vector package locations) 


07/12/2008
 haskell_doc.vim: since we're now reading from multiple haddock indices in
   DocIndex, we need to extend, not overwrite entries..

03/12/2008
 ghc.vim: do not reset b:ghc_static_options on every reload



Re: [Haskell-cafe] Re: Configuring cabal dependencies at install-time

2009-04-07 Thread Martijn van Steenbergen

John Lato wrote:

From: Jeff Heard jefferson.r.he...@gmail.com

Is there a way to do something like autoconf and configure
dependencies at install time?  Building buster, I keep adding
dependencies and I'd like to keep that down to a minimum without the
annoyance of littering Hackage with dozens of packages.  For instance,
today I developed an HTTP behaviour and that of course requires
network and http, which were previously not required.  I'm about to
put together a haxr XML-RPC behaviour as well, and that of course
would add that much more to the dependency list.  HaXml, haxr, and
haxr-th most likely.

so... any way to do that short of making a bunch of separate packages
with one or two modules apiece?  Otherwise I'm thinking of breaking
things up into buster, buster-ui, buster-network, buster-console, and
buster-graphics to start and adding more as I continue along.



I'd be interested in hearing answers to this as well.  I'm not a fan
of configure-style compile-time conditional compilation, at least for
libraries.  It makes it much harder to specify dependencies.  With
this, if package Foo depends on buster and the HTTP behavior, it's no
longer enough to specify build-depends: buster because that will
only work if buster was configured properly on any given system.

I think that the proper solution is to break up libraries into
separate packages as Jeff suggests (buster, buster-ui, etc.), but then
the total packages on hackage would explode.  I don't feel great about
doing that with my own packages either; is it a problem?  If so, maybe
there could be just one extra package, e.g. buster and buster-extras.
Is there a better solution I'm missing?


Cabal's flag system sounds like a nice solution for this, except I don't 
know if it's possible to add specific flags to your build dependencies, i.e.


build-depends: buster -fhttp
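
As far as I can tell the flag would have to be declared on the providing
side instead; a rough sketch of what that could look like in buster's
.cabal file (the module names here are made up):

flag http
  description: Build the HTTP behaviour
  default:     False

library
  build-depends:   base
  exposed-modules: App.Behaviours.Basic
  if flag(http)
    build-depends:   network, HTTP
    exposed-modules: App.Behaviours.HTTP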

Martijn.


Re: [Haskell-cafe] strange performance issue with takusen 0.8.3

2009-04-07 Thread Alistair Bayley
2009/4/6 Marko Schütz markoschu...@web.de:

 I have an application where some simple data extracted from some
 source files is inserted into a PostgreSQL database. The application
 uses Takusen and is compiled with GHC 6.8.3. Some (59 in the test
 data) of the selects take on average 460ms each for a total time for
 this sample run of 30s. I prepare one select statement at the
 beginning of the run into which I then bind the specific values for
 every one of the selects. It does not seem to make a difference
 whether I do this or whether I just use a new statement for every
 select.

 For comparison, I have collected the SQL statements in a file with
 PREPARE ... and DEALLOCATE for _every_ select and then run this file
 through psql. This takes 2s!

Hello Marko,

I'm finding it hard to see what the problem is here. Is it that your
query takes 460ms, and you need it to be quicker? Or is it something
else? It would help to have some example code. Can you make a test
case which reproduces the problem, that you could share?


 For comparison, I have collected the SQL statements in a file with
 PREPARE ... and DEALLOCATE for _every_ select and then run this file
 through psql. This takes 2s!

If all you are doing is preparing and deallocating - i.e. not
executing - then that will be very quick, because the queries are
never executed.

Alistair


Re: [Haskell-cafe] UPDATE: haskellmode for Vim now at projects.haskell.org (+screencast; -)

2009-04-07 Thread Matthijs Kooijman
Hi Claus,

 http://projects.haskell.org/haskellmode-vim/
The download link on this page seems to use \ instead of /, making it not
work.

For anyone eager to download it, just replace \ (or %5C) in your address bar
with / and it should work.

Gr.

Matthijs




Re: [Haskell-cafe] tail recursion

2009-04-07 Thread wren ng thornton

Daryoush Mehrtash wrote:

Is the call to go in the following code considered as tail recursion?

data DList a = DLNode (DList a) a (DList a)

mkDList :: [a] -> DList a

mkDList [] = error "must have at least one element"
mkDList xs = let (first,last) = go last xs first
             in  first

  where go :: DList a -> [a] -> DList a -> (DList a, DList a)
        go prev []     next = (next,prev)
        go prev (x:xs) next = let this = DLNode prev x rest
                                  (rest,last) = go this xs next
                              in  (this,last)



No. For @go _ (_:_) _@ the tail expression is @(this,last)@ and so the 
tail call is to @(,)@. Consider this general transformation[1]:


go prev [] next = (next,prev)
go prev (x:xs) next =
    case DLNode prev x rest of this ->
    case go this xs next    of (rest,last) ->
    (this,last)

Let binding is ignored when determining tail-callingness, and case 
evaluation only contributes in as far as allowing multiple tails.



[1] Which isn't laziness-preserving and so will break on your recursive 
let binding. It's a valid transformation for non-recursive let bindings, 
though, provided the appropriate strictness analysis.


--
Live well,
~wren


Re: [Haskell-cafe] Strange type error with associated type synonyms

2009-04-07 Thread Manuel M T Chakravarty

Matt Morrow:
On Mon, Apr 6, 2009 at 7:39 PM, Manuel M T Chakravarty c...@cse.unsw.edu.au 
 wrote:

Peter Berry:

3) we apply appl to x, so Memo d1 a = Memo d a. unify d = d1

But for some reason, step 3 fails.

Step 3 is invalid - cf, http://www.haskell.org/pipermail/haskell-cafe/2009-April/059196.html 
.


More generally, the signature of memo_fmap is ambiguous, and hence,  
correctly rejected.  We need to improve the error message, though.   
Here is a previous discussion of the subject:


 http://www.mail-archive.com/haskell-cafe@haskell.org/msg39673.html

Manuel

The thing that confuses me about this case is how, if the type sig  
on memo_fmap is omitted, ghci has no problem with it, and even gives  
it the type that it rejected:


Basically, type checking proceeds in one of two modes: inferring or  
checking.  The former is when no signature is given; the
latter, if there is a user-supplied signature.  GHC can infer  
ambiguous signatures, but it cannot check them.  This is of course  
very confusing and we need to fix this (by preventing GHC from  
inferring ambiguous signatures).  The issue is also discussed in the  
mailing list thread I cited in my previous reply.


Manuel

PS: I do realise that ambiguous signatures are the single most  
confusing issue concerning type families (at least judging from the  
amount of mailing list traffic generated).  We'll do our best to  
improve the situation before 6.12 comes out.


[Haskell-cafe] threadDelay granularity

2009-04-07 Thread Ulrik Rasmussen
Hello.

I am writing a simple game in Haskell as an exercise, and in the
rendering loop I want to cap the framerate to 60fps. I had planned to do
this with GHC.Conc.threadDelay, but looking at its documentation, I
discovered that it can only delay the thread in time spans that are
multiples of 20ms:

http://www.haskell.org/ghc/docs/6.4/html/libraries/base/Control.Concurrent.html

I need a much finer granularity than that, so I wondered if it is
possible to either get a higher resolution for threadDelay, or if there
is an alternative to threadDelay?
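
For reference, the kind of loop I have in mind looks roughly like this
(a sketch using the time package; threadDelay takes microseconds, the
problem is only its resolution):

import Control.Concurrent (threadDelay)
import Data.Time.Clock (diffUTCTime, getCurrentTime)

-- Render one frame, then sleep for whatever is left of the 1/60 s budget.
capAt60 :: IO () -> IO ()
capAt60 renderFrame = do
  t0 <- getCurrentTime
  renderFrame
  t1 <- getCurrentTime
  let spent  = realToFrac (diffUTCTime t1 t0) :: Double
      budget = 1 / 60
  threadDelay (max 0 (round ((budget - spent) * 1000000)))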

I noticed that the SDL library includes the function delay, which
indeed works with a resolution down to one millisecond. However, since
I'm using HOpenGL and GLUT, I think it would be a little overkill to
depend on SDL just for this :).


Thanks,

Ulrik Rasmussen


[Haskell-cafe] advice on space efficient data structure with efficient snoc operation

2009-04-07 Thread Manlio Perillo

Hi.

I'm still working on my Netflix Prize project.

For a function I wrote, I really need a data structure that is both 
space efficient (unboxed elements) and has an efficient snoc operation.


I have pasted a self contained module with the definition of the 
function I'm using:

http://hpaste.org/fastcgi/hpaste.fcgi/view?id=3453


The movie ratings are loaded from serialized data, and the result is 
serialized again, using the binary package:


transcodeIO :: IO ()
transcodeIO = do
  input <- L.hGetContents stdin
  let output = encodeZ $ transcode $ decodeZ input
  L.hPut stdout output

(here encodeZ and decodeZ are wrappers around Data.Binary.encode and 
Data.Binary.decode, with support to gzip compression/decompression)



This function (transcodeIO, not transcode) takes, on my
Core2 CPU T7200  @ 2.00GHz:

real30m8.794s
user29m30.659s
sys 0m10.313s

1068 Mb total memory in use


The problem here is with snocU, which requires a lot of copying.

I rewrote the transcode function so that the input data set is split in 
N parts:

http://hpaste.org/fastcgi/hpaste.fcgi/view?id=3453#a3456

The mapReduce function is the one defined in the Real World Haskell.


The new function takes (using only one thread):

real18m48.039s
user18m30.901s
sys 0m6.520s

1351 Mb total memory in use


The additional required memory is probably caused by unionsWith, which is
not strict.

The function takes less time, since array copying is optimized.
I still use snocU, but on small arrays.

GC time is very high: 54.4%


Unfortunately I can not test with more than one thread, since I get 
segmentation faults (probably a bug with uvector packages).


I also got two strange errors (but this may be just the result of the 
segmentation fault, I'm not able to reproduce them):


tpp.c:63: __pthread_tpp_change_priority: Assertion `new_prio == -1 ||
(new_prio >= __sched_fifo_min_prio && new_prio <=
__sched_fifo_max_prio)' failed.



internal error: removeThreadFromQueue: not found
(GHC version 6.8.2 for i386_unknown_linux)
Please report this as a GHC bug:  http://www.haskell.org/ghc/reportabug



Now the question is: what data structure should I use to optimize the 
transcode function?


IMHO there are two solutions:

1) Use a lazy array.
   Something like ByteString.Lazy, and what is available in
   storablevector package.

   Using this data structure, I can avoid the use of appendU.

2) Use an unboxed list.

   Something like:
  http://mdounin.ru/hg/nginx-vendor-current/file/tip/src/core/ngx_list.h

   That is: a linked list of unboxed arrays, but unlike the lazy array
   solution, a snoc operation avoid copying if there is space in the
   current array.

   I don't know if this is easy/efficient to implement in Haskell.
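
   A rough sketch of option 2 (I use Data.Vector.Unboxed from the vector
   package only as a placeholder for whatever unboxed array type is
   chosen; all names here are mine):

   import qualified Data.Vector.Unboxed as U

   -- Filled chunks plus a small write buffer: snoc only copies when
   -- the buffer reaches chunkSize.
   data ChunkList a = ChunkList
     { chunks  :: [U.Vector a]   -- filled chunks, newest first
     , buffer  :: [a]            -- pending elements, newest first
     , pending :: !Int
     }

   chunkSize :: Int
   chunkSize = 1024

   empty :: ChunkList a
   empty = ChunkList [] [] 0

   snoc :: U.Unbox a => ChunkList a -> a -> ChunkList a
   snoc (ChunkList cs buf n) x
     | n + 1 == chunkSize = ChunkList (U.fromList (reverse (x:buf)) : cs) [] 0
     | otherwise          = ChunkList cs (x:buf) (n + 1)

   toList :: U.Unbox a => ChunkList a -> [a]
   toList (ChunkList cs buf _) =
     concatMap U.toList (reverse cs) ++ reverse buf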


Any other suggestions?


Thanks  Manlio Perillo


Re: [Haskell-cafe] UPDATE: haskellmode for Vim now atprojects.haskell.org (+screencast; -)

2009-04-07 Thread Claus Reinke

http://projects.haskell.org/haskellmode-vim/

The download link on this page seems to use \ instead of /, making it not work.



For anyone eager to download it, just replace \ (or %5C) in your address bar
with / and it should work.


argh, thanks, now fixed (filename completion in Vim, :help i_CTRL-X_CTRL-F,
is useful for making sure one links to the correct file names, but gives
platform-specific paths, which I usually correct before publishing; this one
I missed).

Claus



Re: [Haskell-cafe] UPDATE: haskellmode for Vim now at projects.haskell.org (+screencast; -)

2009-04-07 Thread Matthijs Kooijman
Hi Claus,

I've installed the vimball, and it spit a few errors at me. In particular, it
couldn't find the haddock documentation directory. A quick look at
haskell_doc.vim shows that it should autodetect the directory. However, for
some reason my ghc-pkg command returns the doc directory twice:

  $ ghc-pkg field base haddock-html
  haddock-html: /usr/local/ghc-6.10.1/share/doc/ghc/libraries/base
  haddock-html: /usr/local/ghc-6.10.1/share/doc/ghc/libraries/base

The haskell_doc.vim contains the following line, which seems to deal with
multiple lines:

  let field = substitute(system(g:ghc_pkg . ' field base haddock-html'),'\n','','')

However, this simply concats the lines, which obviously makes a mess of the
output and makes the detection fail. I've made things work by throwing away
everything except for the first line, by replacing the above line with:

  let field = substitute(system(g:ghc_pkg . ' field base haddock-html'),'\n.*','','')

This solution works for me, though it might be better to iterate all lines and
try each of them in turn, for the case that ghc-pkg returns different paths? I
can't really think of a case why this would be needed, though.

Gr.

Matthijs




Re: [Haskell-cafe] Monad transformer to consume a list

2009-04-07 Thread Stephan Friedrichs
Henning Thielemann wrote:

 is there a monad transformer to consume an input list? I've got external
 events streaming into the monad that are consumed on demand and I'm
 not sure if there's something better than a StateT.
 
 I wondered that, too. I wondered whether there is something inverse to
 Writer, and Reader is apparently not the answer. Now I think that
 State is indeed the way to go to consume a list. Even better is StateT
 List Maybe:
 
 next :: StateT [a] Maybe a
 next = StateT Data.List.HT.viewL   -- see utility-ht package
 

But a StateT provides the power to modify the list in other ways than
reading the first element (modify (x:)). Maybe ParsecT is closer to what
I'm looking for ;)

-- 

Früher hieß es ja: Ich denke, also bin ich.
Heute weiß man: Es geht auch so.

 - Dieter Nuhr


[Haskell-cafe] Re: Parallel combinator, performance advice

2009-04-07 Thread ChrisK
You create one MVar for each task in order to ensure all the tasks are done.
This is pretty heavyweight.

You could create a single Control.Concurrent.QSemN to count the completed tasks,
starting with 0.

Each task is followed by signalQSemN with a value of 1.  (I would use 
finally).

As parallel_ launches the tasks it can count their number, then it would call
waitQSemN for that quantity to have finished.

-- 
Chris



Re: [Haskell-cafe] Strange type error with associated type synonyms

2009-04-07 Thread Matt Morrow
On Mon, Apr 6, 2009 at 7:39 PM, Manuel M T Chakravarty c...@cse.unsw.edu.au
 wrote:

 Peter Berry:

 3) we apply appl to x, so Memo d1 a = Memo d a. unify d = d1

 But for some reason, step 3 fails.


 Step 3 is invalid - cf, 
 http://www.haskell.org/pipermail/haskell-cafe/2009-April/059196.html.

 More generally, the signature of memo_fmap is ambiguous, and hence,
 correctly rejected.  We need to improve the error message, though.  Here is
 a previous discussion of the subject:

  http://www.mail-archive.com/haskell-cafe@haskell.org/msg39673.html

 Manuel


The thing that confuses me about this case is how, if the type sig on
memo_fmap is omitted, ghci has no problem with it, and even gives it the
type that it rejected:



{-# LANGUAGE TypeFamilies #-}

class Fun d where
  type Memo d :: * -> *
  abst :: (d -> a) -> Memo d a
  appl :: Memo d a -> (d -> a)

memo_fmap f x = abst (f . appl x)

-- [...@monire a]$ ghci -ignore-dot-ghci
-- GHCi, version 6.10.1: http://www.haskell.org/ghc/  :? for help
--
-- Prelude> :l ./Memo.hs
-- [1 of 1] Compiling Main ( Memo.hs, interpreted )
-- Ok, modules loaded: Main.
--
-- *Main> :t memo_fmap
-- memo_fmap :: (Fun d) => (a -> c) -> Memo d a -> Memo d c

-- copy/paste the :t sig

memo_fmap_sig :: (Fun d) => (a -> c) -> Memo d a -> Memo d c
memo_fmap_sig f x = abst (f . appl x)

-- and,

-- *Main> :r
-- [1 of 1] Compiling Main ( Memo.hs, interpreted )
--
-- Memo.hs:26:35:
-- Couldn't match expected type `Memo d'
--            against inferred type `Memo d1'
-- In the first argument of `appl', namely `x'
-- In the second argument of `(.)', namely `appl x'
-- In the first argument of `abst', namely `(f . appl x)'
-- Failed, modules loaded: none.



Matt


[Haskell-cafe] Parallel combinator, performance advice

2009-04-07 Thread Neil Mitchell
Hi,

I've written a parallel_ function, code attached. I'm looking for
criticism, suggestions etc on how to improve the performance and
fairness of this parallel construct. (If it turns out this construct
is already in a library somewhere, I'd be interested in that too!)

The problem I'm trying to solve is running system commands in
parallel. Importantly (unlike other Haskell parallel stuff) I'm not
expecting computationally heavy Haskell to be running in the threads,
and only want a maximum of n commands to fire at a time. The way I'm
trying to implement this is with a parallel_ function:

parallel_ :: [IO a] -> IO ()

The semantics are that after parallel_ returns each action will have
been executed exactly once. The implementation (attached) creates a
thread pool of numCapabilities-1 threads, each of which reads from a
task pool and attempts to do some useful work. I use an idempotent
function to ensure that all work is done at most once, and a sequence_
to ensure all work is done at least once.
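
A sketch of what such an idempotent wrapper can look like (the attached
Parallel.hs has the actual code):

import Control.Concurrent.MVar (newMVar, modifyMVar)

-- The returned action runs the original action at most once, no matter
-- how many threads run it; every later run is a no-op.
idempotent :: IO () -> IO (IO ())
idempotent act = do
  var <- newMVar act
  return $ do
    todo <- modifyMVar var (\a -> return (return (), a))
    todo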

Running a benchmark of issuing 1 million trivial tasks (create,
modify, read an IORef) the version without any parallelism is really
fast (< 0.1 sec), and the version with parallelism is slow (> 10 sec).
This could be entirely due to space leaks etc when queueing many
tasks.

I'm grateful for any thoughts people might have!

Thanks in advance,

Neil


Parallel.hs
Description: Binary data


Re: [Haskell-cafe] Parallel combinator, performance advice

2009-04-07 Thread Bulat Ziganshin
Hello Neil,

Tuesday, April 7, 2009, 2:25:12 PM, you wrote:

 The problem I'm trying to solve is running system commands in
 parallel.

"system commands" means execution of external commands or just system
calls inside Haskell?

 Running a benchmark of issuing 1 million trivial tasks (create,
 modify, read an IORef) the version without any parallelism is really
 fast (< 0.1 sec), and the version with parallelism is slow (> 10 sec).
 This could be entirely due to space leaks etc when queueing many
 tasks.

i think it's just because use of MVar/Chan is much slower than IORef
activity. once i checked that on 1GHz cpu and got 2 million withMVar-s
per second

i don't understood exactly what you need, but my first shot is
to create N threads executing commands from channel:

para xs = do
  done <- newEmptyMVar
  chan <- newChan
  writeList2Chan chan (map Just xs ++ [Nothing])

  replicateM_ numCapabilities $ do
    forkIO $ do
      forever $ do
        x <- readChan chan
        case x of
          Just cmd -> cmd
          Nothing  -> putMVar done ()
  takeMVar done
  


-- 
Best regards,
 Bulatmailto:bulat.zigans...@gmail.com



Re: [Haskell-cafe] threadDelay granularity

2009-04-07 Thread Peter Verswyvelen
I think this is an RTS option.
http://www.haskell.org/ghc/docs/latest/html/users_guide/using-concurrent.html



On Tue, Apr 7, 2009 at 1:41 PM, Ulrik Rasmussen hask...@utr.dk wrote:

 Hello.

 I am writing a simple game in Haskell as an exercise, and in the
 rendering loop I want to cap the framerate to 60fps. I had planned to do
 this with GHC.Conc.threadDelay, but looking at its documentation, I
 discovered that it can only delay the thread in time spans that are
 multiples of 20ms:


 http://www.haskell.org/ghc/docs/6.4/html/libraries/base/Control.Concurrent.html

 I need a much finer granularity than that, so I wondered if it is
 possible to either get a higher resolution for threadDelay, or if there
 is an alternative to threadDelay?

 I noticed that the SDL library includes the function delay, which
 indeed works with a resolution down to one millisecond. However, since
 I'm using HOpenGL and GLUT, I think it would be a little overkill to
 depend on SDL just for this :).


 Thanks,

 Ulrik Rasmussen


Re: [Haskell-cafe] Monad transformer to consume a list

2009-04-07 Thread Tom Schrijvers

Hello,

is there a monad transformer to consume an input list? I've got external
events streaming into the monad that are consumed on demand and I'm
not sure if there's something better than a StateT.


I wondered that, too. I wondered whether there is something inverse to 
Writer, and Reader is apparently not the answer. Now I think that State is 
indeed the way to go to consume a list. Even better is StateT List Maybe:


next :: StateT [a] Maybe a
next = StateT Data.List.HT.viewL   -- see utility-ht package


Or make the transformer a MonadPlus transformer and call mzero for the 
empty list?
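
Something like this, I mean (a sketch; it relies on mtl's MonadPlus
instance for StateT):

import Control.Monad (MonadPlus, mzero)
import Control.Monad.State (StateT, get, put)

-- Consume one element, failing via mzero on empty input.
next :: MonadPlus m => StateT [a] m a
next = do
  xs <- get
  case xs of
    []     -> mzero
    (y:ys) -> put ys >> return y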


Tom

--
Tom Schrijvers

Department of Computer Science
K.U. Leuven
Celestijnenlaan 200A
B-3001 Heverlee
Belgium

tel: +32 16 327544
e-mail: tom.schrijv...@cs.kuleuven.be
url: http://www.cs.kuleuven.be/~toms/


Re: [Haskell-cafe] System.Process.Posix

2009-04-07 Thread John Goerzen
Bulat Ziganshin wrote:
 Hello Cristiano,
 
 Sunday, April 5, 2009, 12:05:02 AM, you wrote:
 
 Is it me or the above package is not included in Hoogle?
 
 afair, Neil, being windows user, includes only packages available for
 his own system
 
 there was a large thread a few months ago and many peoples voted for
 excluding any OS-specific packages at all since this decreases
 portability of code developed by Hoogle users :)))
 
 

Urm, I realize that was half in jest, but no.  It just makes Hoogle less
useful.  If I need to fork, I need to fork, and no amount of
sugarcoating is going to get around that.

-- John



Re: [Haskell-cafe] Re: Configuring cabal dependencies at install-time

2009-04-07 Thread John Dorsey
John Lato wrote:
 I think that the proper solution is to break up libraries into
 separate packages as Jeff suggests (buster, buster-ui, etc.), but then
 the total packages on hackage would explode.  I don't feel great about

I thought about this a while back and came to the conclusion that the
package count should only grow by a small constant factor due to this,
and that's a lot better than dealing with hairy and problematic
dependencies.

It should usually be:

  libfoo
  libfoo-blarg
  libfoo-xyzzy
  etc.

and more rarely:

  libbar-with-xyzzy
  libbar-no-xyzzy
  etc.

each providing libbar.  Although I don't remember whether Cabal has
'provides'.  The latter case could explode exponentially for weird
packages that have several soft dependencies that can't be managed in
the plugin manner, but I can't see that being a real issue.

This looks manageable to me, but I'm no packaging guru.  I guess it's a
little harder for authors/maintainers of packages that look like leaves
in the dependency tree, which could be bad.  Am I missing something bad?

Regards,
John



Re: [Haskell-cafe] UPDATE: haskellmode for Vim now atprojects.haskell.org (+screencast; -)

2009-04-07 Thread Claus Reinke

Hi Matthijs,


I've installed the vimball, and it spit a few errors at me. In particular, it
couldn't find the haddock documentation directory. A quick look at
haskell_doc.vim shows that it should autodetect the directory. However, for
some reason my ghc-pkg command returns the doc directory twice:

 $ ghc-pkg field base haddock-html
 haddock-html: /usr/local/ghc-6.10.1/share/doc/ghc/libraries/base
 haddock-html: /usr/local/ghc-6.10.1/share/doc/ghc/libraries/base


Interesting. The reason for the double listing is that recent GHCs come
with two base packages (since the packages differ in content, having
both point to the same documentation location looks wrong to me, btw).


The haskell_doc.vim contains the following line, which seems to deal with
multiple lines:

  let field = substitute(system(g:ghc_pkg . ' field base haddock-html'),'\n','','')


This just used to remove the final '\n', in the days when multiple versions 
of base were still unthinkable. What I'm really after in that part of the script
is the location of the GHC docs, and the library index (the actual package
docs are processed later). Unfortunately, past GHC versions haven't been 
too helpful (#1226, #1878, #1572), hence all that guesswork in my scripts 
(for a while, there was a ghc --print-docdir, but that didn't quite work and 
disappeared quickly, nowadays, there is the nice ghc-paths package, but 
that doesn't give a concrete path for docdir, so I still need to find the http 
top dir for GHC).


I hadn't noticed this change, because (a) the scripts look in likely suspects 
for the docs location as well, and (b) the docs location can be configured
(bypassing all that guesswork) by setting 'g:haddock_docdir' before 
loading the scripts (:help g:haddock_docdir, :help haskellmode-settings-fine).


Using g:haddock_docdir to configure the right path for your installation 
is probably the least wrong thing to do for now, and requires no changes
to the scripts, but I'll have a look at how to improve the guesswork code 
for convenience, looking at the first match only or looking for the relevant 
directories in all matches..


Thanks for the report!
Claus



[Haskell-cafe] Re: Parallel combinator, performance advice

2009-04-07 Thread ChrisK
Neil Mitchell wrote:
 Sorry, accidentally sent before finishing the response! I also noticed
 you sent this directly to me, not to -cafe, was that intentional?

The mail/news gateway makes it look like that, but I also sent to the mailing 
list.

 You mean something like:

 parallel_ xs = do
    sem <- createSemaphore (length xs)
    enqueue [x >> signalSemaphore sem | x <- xs]
    waitOnSemaphore sem

 I thought of something like this, but then the thread that called
 parallel_ is blocked, which means if you fire off N threads you only
 get N-1 executing. If you have nested calls to parallel, then you end
 up with thread exhaustion. Is there a way to avoid that problem?

 Thanks

 Neil

Your parallel_ does not return until all operations are finished.

 parallel_ (x:xs) = do
     ys <- mapM idempotent xs
     mapM_ addParallel ys
     sequence_ $ x : reverse ys

By the way, there is no obvious reason to insert reverse there.

What I meant was something like:

 para [] = return ()
 para [x] = x
 para xs = do
   q <- newQSemN 0
   let wrap x = finally x (signalQSemN q 1)
       go [y]    n = wrap y >> waitQSemN q (succ n)
       go (y:ys) n = addParallel (wrap y) >> (go ys $! succ n)
   go xs 0

This is nearly identical to your code, and avoids creating an MVar for each
operation.  I use finally to ensure the count is correct, but if a worker
thread dies then bad things will happen.  You can replace finally with (>>) if
speed is important.

This is also lazy since the length of the list is not forced early.



Re: [Haskell-cafe] UPDATE: haskellmode for Vim now at projects.haskell.org (+screencast; -)

2009-04-07 Thread Matthijs Kooijman
Hi Claus,

I've found two more little bugs. The first is that version comparison is
incorrect. It now requires that all components are greater, so comparing
6.10.1 >= 6.8.2 returns false (since 1 < 2).

Also, there is a ghc-pkg field * haddock-html call, but here the * will be
expanded by the shell into the files in the current directory. To prevent
this, the * should be escaped.

Both of these are fixed in the attached patch.

I'm also looking at the Qualify() function, which allows you to select a
qualification using tab completion. However, when there is only a single
choice, its a bit silly to have to use tabcompletion. At the very least, the
value should be prefilled, but ideally the qualification should just happen.

Also, I think that a dropdown menu is also available in text mode vim (at
least with vim7), which would be nice for multiple choices (since you can see
all choices in one glance).

I'll have a look at these things as well, expect another patch :-)

Gr.

Matthijs




Re: [Haskell-cafe] System.Process.Posix

2009-04-07 Thread Jonathan Cast
On Tue, 2009-04-07 at 14:31 +0100, Neil Mitchell wrote:
 Hi
 
  Is it me or the above package is not included in Hoogle?
 
  afair, Neil, being windows user, includes only packages available for
  his own system
 
  there was a large thread a few months ago and many peoples voted for
  excluding any OS-specific packages at all since this decreases
  portability of code developed by Hoogle users :)))
 
  Urm, I realize that was half in jest, but no.  It just makes Hoogle less
  useful.  If I need to fork, I need to fork, and no amount of
  sugarcoating is going to get around that.
 
 I was implementing full package support last weekend. With any luck,
 I'll manage to push the changes tonight. If not, I'll push them as
 soon as I get back from holiday (a week or so)

Yay!

jcc




Re: [Haskell-cafe] Strange type error with associated type synonyms

2009-04-07 Thread Claus Reinke
Basically, type checking proceeds in one of two modes: inferring or  
checking.  The former is when no signature is given; the  
latter, if there is a user-supplied signature.  GHC can infer  
ambiguous signatures, but it cannot check them.  This is of course  
very confusing and we need to fix this (by preventing GHC from  
inferring ambiguous signatures).  The issue is also discussed in the  
mailing list thread I cited in my previous reply.


As the error message demonstrates, the inferred type is not
correctly represented - GHC doesn't really infer an ambiguous
type, it infers a type with a specific idea of what the type variable
should match. Representing that as an unconstrained forall-ed 
type variable just doesn't seem accurate (as the unconstrained
type variable won't match the internal type variable) and it is this
misrepresentation of the inferred type that leads to the ambiguity.

Here is a variation to make this point clearer:

{-# LANGUAGE NoMonomorphismRestriction #-}
{-# LANGUAGE TypeFamilies, ScopedTypeVariables #-}

class Fun d where
   type Memo d :: * -> *
   abst :: (d -> a) -> Memo d a
   appl :: Memo d a -> (d -> a)

f = abst . appl

-- f' :: forall d a. (Fun d) => Memo d a -> Memo d a
f' = abst . (id :: (d->a)->(d->a)) . appl

There is a perfectly valid type signature for f', as given in 
comment, but GHCi gives an incorrect one (the same as for f):


*Main> :browse Main
class Fun d where
  abst :: (d -> a) -> Memo d a
  appl :: Memo d a -> d -> a
f :: (Fun d) => Memo d a -> Memo d a
f' :: (Fun d) => Memo d a -> Memo d a

What I suspect is that GHCi does infer the correct type, with
constrained 'd', but prints it incorrectly (no forall to indicate the
use of scoped variable). Perhaps GHCi should always indicate
in its type output which type variables have been unified with 
type variables that no longer occur in the output type (here the 
local 'd'). 

If ScopedTypeVariables are enabled, that might be done via 
explicit forall, if the internal type variable occurs in the source file 
(as for f' here). Otherwise, one might use type equalities. 


In other words, I'd expect :browse output more like this:

f :: forall a d. (Fun d, d~_d) => Memo d a -> Memo d a
f' :: forall a d. (Fun d) => Memo d a -> Memo d a

making the first signature obviously ambiguous, and the
second signature simply valid.

Claus




Re: [Haskell-cafe] System.Process.Posix

2009-04-07 Thread Neil Mitchell
Hi

 Is it me or the above package is not included in Hoogle?

 afair, Neil, being windows user, includes only packages available for
 his own system

 there was a large thread a few months ago and many peoples voted for
 excluding any OS-specific packages at all since this decreases
 portability of code developed by Hoogle users :)))

 Urm, I realize that was half in jest, but no.  It just makes Hoogle less
 useful.  If I need to fork, I need to fork, and no amount of
 sugarcoating is going to get around that.

I was implementing full package support last weekend. With any luck,
I'll manage to push the changes tonight. If not, I'll push them as
soon as I get back from holiday (a week or so)

Thanks

Neil


Re: [Haskell-cafe] System.Process.Posix

2009-04-07 Thread Cristiano Paris
On Tue, Apr 7, 2009 at 3:31 PM, Neil Mitchell ndmitch...@gmail.com wrote:

 I was implementing full package support last weekend. With any luck,
 I'll manage to push the changes tonight. If not, I'll push them as
 soon as I get back from holiday (a week or so)

Thank you, Neil.

Cristiano


Re: [Haskell-cafe] UPDATE: haskellmode for Vim now atprojects.haskell.org (+screencast; -)

2009-04-07 Thread Claus Reinke

Matthijs,

thanks for the reports, and analyses and suggested patches are even
better! We should probably take this offlist at some point, though.


I've found two more little bugs. The first is that version comparison is
incorrect. It now requires that all components are greater, so comparing
6.10.1 = 6.8.2 returns false (since 1  2).


quite right! will fix.


Also, there is a ghc-pkg field * haddock-html call, but here the * will be
expanded by the shell into the files in the current directory. To prevent
this, the * should be escaped.


there is a comment in there suggesting that escaping was the wrong thing
to do on some configurations/platforms (obviously, the current code works
on my platform, so I depend on reports from users on other platforms), 
but I can't recall what the issue was at the moment. Will have to check,
and implement a platform-specific solution, if necessary.


I'm also looking at the Qualify() function, which allows you to select a
qualification using tab completion. However, when there is only a single
choice, it's a bit silly to have to use tab completion. At the very least, the
value should be prefilled, but ideally the qualification should just happen.


Yes, that has been requested before, at least as an option (see the file
haskellmode-files.txt for other open issues and change log). But it should
be done consistently for all menus (all functions, and both GUI and
terminal mode).


Also, I think that a dropdown menu is also available in text mode vim (at
least with vim7), which would be nice for multiple choices (since you can 
see all choices in one glance).


There is a note on using :emenu instead of the old home-brewn
haskellmode-files.txt menu code. I would generally want to clean 
up the script code (some of which is very old), and factor out the 
handling of menus into a single place before making these changes.



I'll have a look at these things as well, expect another patch :-)


Looking forward to it!-)
Claus



Re: [Haskell-cafe] Re: Configuring cabal dependencies at install-time

2009-04-07 Thread Edward Kmett
This has been a lot on my mind lately as my current library provides
additional functionality to data types from a wide array of other packages.
I face a version of Wadler's expression problem.

I provide a set of classes for injecting into monoids/seminearrings/etc. to
allow for quick reductions over different data structures. The problem is
that of course the interfaces are fairly general so whole swathes of types
(including every applicative functor!) qualifies for certain operations.

Perhaps the ultimate answer would be to push more of the instances down into
the source packages. I can do this with some of the monoid instances, but
convincing folks of the utility of the fact that their particular
applicative forms a right-seminearring when it contains a monoid is another
matter entirely.

The problem is there is no path to get there from here. Getting another
library to depend on mine, they have to pick up the brittle dependency set I
have now. Splitting my package into smaller packages fails because I need to
keep the instances for 3rd party data types packed with the class
definitions to avoid orphan instances and poor API design. So the option to
split things into the equivalent of 'buster-ui', 'buster-network' and so
forth largely fails on that design criterion. I can do that for new monoids,
rings and so forth that I define that purely layer on top of the base
functionality I provide, but not for ones that provide additional instances
for 3rd party data types.

I can keep adding libraries as dependencies like I am doing now, but that
means that my library continues to accrete content at an alarming rate and
more importantly every one introduces a greater possibility of build issues,
because I can only operate in an environment where every one of my
dependencies can install.

This further exacerbates the problem that no one would want to add all of my
pedantic instances because to do so they would have to inject a huge brittle
dependency into their package.

The only other alternative that I seem to have at this point in the cabal
packaging system is to create a series of flags for optional functionality.
This solves _my_ problem, in particular it lets me install on a much broader
base of environments, but now the order in which my package was installed
with respect to its dependencies matters. In particular, clients of the
library won't know if they have access to half of the instances, and so are
stuck limiting themselves to working either on a particular computer, or
using the intersection of the functionality I can provide.

Perhaps, what I would ideally like to have would be some kind of 'augments'
or 'codependencies' clause in the cabal file inside of flags and build
targets that indicates packages that should force my package to
reinstall after a package matching the version range inside the
codependencies clause is installed or at least prompt indicating that new
functionality would be available and what packages you should reinstall.

This would let me have my cake and eat it too. I could provide a wide array
of instances for different stock data types, and I could know that if
someone depends on both, say,  'monoids' and 'parsec 3' that the parsec
instances will be present and usable in my package.

Most importantly, it would allow me to fix my 'expression problem'. Others
could introduce dependencies on the easier to install library allowing me to
shrink the library and I would be able to install in more environments.

-Edward Kmett

On Tue, Apr 7, 2009 at 9:20 AM, John Dorsey hask...@colquitt.org wrote:

 John Lato wrote:
  I think that the proper solution is to break up libraries into
  separate packages as Jeff suggests (buster, buster-ui, etc.), but then
  the total packages on hackage would explode.  I don't feel great about

 I thought about this a while back and came to the conclusion that the
 package count should only grow by a small constant factor due to this,
 and that's a lot better than dealing with hairy and problematic
 dependencies.

 It should usually be:

  libfoo
  libfoo-blarg
  libfoo-xyzzy
  etc.

 and more rarely:

  libbar-with-xyzzy
  libbar-no-xyzzy
  etc.

 each providing libbar.  Although I don't remember whether Cabal has
 'provides'.  The latter case could explode exponentially for weird
 packages that have several soft dependencies that can't be managed in
 the plugin manner, but I can't see that being a real issue.

 This looks manageable to me, but I'm no packaging guru.  I guess it's a
 little harder for authors/maintainers of packages that look like leaves
 in the dependency tree, which could be bad.  Am I missing something bad?

 Regards,
 John



[Haskell-cafe] Re: Parallel combinator, performance advice

2009-04-07 Thread Neil Mitchell
Hi

  The problem I'm trying to solve is running system commands in
  parallel.

 system commands means execution of external commands or just system
 calls inside Haskell?

Calls to System.Cmd.system, i.e. running external console processes.
It's a make system I'm writing, so virtually all the time is spent in
calls to ghc etc.

To Bulat: I should have been clearer with the spec. The idea is that
multiple calls to parallel_ can execute, and a function executing
inside parallel_ can itself call parallel_. For this reason I need one
top-level thread pool, which requires unsafePerformIO. If I create a
thread pool every time, I end up with more threads than I want.

 Your parallel_ does not return until all operations are finished.

 parallel_ (x:xs) = do
     ys <- mapM idempotent xs
     mapM_ addParallel ys
     sequence_ $ x : reverse ys

 By the way, there is no obvious reason to insert reverse there.

There is a reason :-)

Imagine I do parallel_ [a,b,c]

That's roughly doing (if b' is idempotent b):

enqueue b'
enqueue c'
a
b'
c'

If while executing a the thread pool starts on b', then after I've
finished a, I end up with both threads waiting for b', and nothing
doing c'. If I do a reverse, then the thread pool and I are starting
at different ends, so if we lock then I know it's something important
to me that the thread pool started first. It's still not ideal, but it
happens less often.

 What I meant was something like:

 para [] = return ()
 para [x] = x
 para xs = do
   q <- newQSemN 0
   let wrap x = finally x (signalQSemN q 1)
       go [y] n = wrap y >> waitQSemN q (succ n)
       go (y:ys) n = addParallel (wrap y) >> (go ys $! succ n)
   go xs 0

 This is nearly identical to your code, and avoids creating the MVar for each
 operation.  I use finally to ensure the count is correct, but if a worker
 thread dies then bad things will happen.  You can replace finally with (>>) if
 speed is important.

Consider a thread pool with 2 threads and the call parallel_ [parallel_ [b,c],a]

You get the sequence:
enqueue (parallel_ [b,c])
a
wait on parallel_ [b,c]

While you are executing a, a thread pool starts:
enqueue b
c
wait for b

Now you have all the threads waiting, and no one dealing with the
thread pool. This results in deadlock.

I guess the nested calls to parallel_ bit is the part of the spec
that makes everything much harder!

Thanks

Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] threadDelay granularity

2009-04-07 Thread Ulrik Rasmussen
On Tue, Apr 07, 2009 at 04:01:01PM +0200, Peter Verswyvelen wrote:
 Are you on Windows? Because this might be because GHC is not using a high
 resolution timer for doing its scheduling, I don't know...

No, Ubuntu 8.10. That may very well be, I'm not that much into the
details of GHC. 

I'm thinking about importing a system dependent delay method through the
FFI to get more control over the delay interval, but that seems like
overkill too. Also, I don't know if that would introduce some nasty
problems, again, I'm not that into the inner workings of GHC.

For now I'll just allow the application to render as fast as possible,
the primary motivation for capping the framerate was to prevent my laptop
from getting so hot that I can't have it in my lap :).

(I see that I replied to you instead of to the list by accident. I'm
CC'ing the message to the list now)

/Ulrik

 On Tue, Apr 7, 2009 at 3:21 PM, Ulrik Rasmussen hask...@utr.dk wrote:
 
  On Tue, Apr 07, 2009 at 02:53:20PM +0200, Peter Verswyvelen wrote:
   I think this is an RTS option.
  
  http://www.haskell.org/ghc/docs/latest/html/users_guide/using-concurrent.html
 
   Ahh, I found it now. Seems like the `-C' option didn't work because it is
   still limited by the master tick interval. Using `+RTS -V0.001' helps.
  It is a bit inaccurate at that resolution though.
 
  
  
  
   On Tue, Apr 7, 2009 at 1:41 PM, Ulrik Rasmussen hask...@utr.dk wrote:
  
Hello.
   
I am writing a simple game in Haskell as an exercise, and in the
rendering loop I want to cap the framerate to 60fps. I had planned to
  do
this with GHC.Conc.threadDelay, but looking at its documentation, I
discovered that it can only delay the thread in time spans that are
multiples of 20ms:
   
   
   
  http://www.haskell.org/ghc/docs/6.4/html/libraries/base/Control.Concurrent.html
   
I need a much finer granularity than that, so I wondered if it is
possible to either get a higher resolution for threadDelay, or if there
is an alternative to threadDelay?
   
I noticed that the SDL library includes the function delay, which
indeed works with a resolution down to one millisecond. However, since
I'm using HOpenGL and GLUT, I think it would be a little overkill to
depend on SDL just for this :).
   
   
Thanks,
   
Ulrik Rasmussen
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
   
 
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] threadDelay granularity

2009-04-07 Thread Peter Verswyvelen
Do you want to cap the rendering framerate at 60FPS or the animation
framerate?
Because when you use OpenGL and GLFW, you can just

GLFW.swapInterval $= 1

to cap the rendering framerate at the refresh rate of your monitor or LCD
screen (usually 60Hz)


On Tue, Apr 7, 2009 at 1:41 PM, Ulrik Rasmussen hask...@utr.dk wrote:

 Hello.

 I am writing a simple game in Haskell as an exercise, and in the
 rendering loop I want to cap the framerate to 60fps. I had planned to do
 this with GHC.Conc.threadDelay, but looking at its documentation, I
 discovered that it can only delay the thread in time spans that are
 multiples of 20ms:


 http://www.haskell.org/ghc/docs/6.4/html/libraries/base/Control.Concurrent.html

 I need a much finer granularity than that, so I wondered if it is
 possible to either get a higher resolution for threadDelay, or if there
 is an alternative to threadDelay?

 I noticed that the SDL library includes the function delay, which
 indeed works with a resolution down to one millisecond. However, since
 I'm using HOpenGL and GLUT, I think it would be a little overkill to
 depend on SDL just for this :).


 Thanks,

 Ulrik Rasmussen
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] advice on space efficient data structure with efficient snoc operation

2009-04-07 Thread Edward Kmett
I'm in the process of adding a Data.Sequence.Unboxed to unboxed-containers.
I hope to have it in hackage today or tomorrow, should its performance work
out as well as Data.Set.Unboxed.

Be warned the API will likely be shifting on you for a little while, while I
figure out a better way to manage all of the instances.

-Edward Kmett
On Tue, Apr 7, 2009 at 7:49 AM, Manlio Perillo manlio_peri...@libero.itwrote:

 Hi.

 I'm still working on my Netflix Prize project.

 For a function I wrote, I really need a data structure that is both space
 efficient (unboxed elements) and with an efficient snoc operation.

 I have pasted a self contained module with the definition of the function
 I'm using:
 http://hpaste.org/fastcgi/hpaste.fcgi/view?id=3453


 The movie ratings are loaded from serialized data, and the result is
 serialized again, using the binary package:

 transcodeIO :: IO ()
 transcodeIO = do
  input <- L.hGetContents stdin
  let output = encodeZ $ transcode $ decodeZ input
  L.hPut stdout output

 (here encodeZ and decodeZ are wrappers around Data.Binary.encode and
 Data.Binary.decode, with support for gzip compression/decompression)
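
For concreteness, such wrappers could look roughly like this - a minimal
sketch assuming the binary and zlib packages, not Manlio's actual code:

  import qualified Data.ByteString.Lazy as L
  import Data.Binary (Binary, encode, decode)
  import Codec.Compression.GZip (compress, decompress)

  -- serialize, then gzip-compress
  encodeZ :: Binary a => a -> L.ByteString
  encodeZ = compress . encode

  -- gzip-decompress, then deserialize
  decodeZ :: Binary a => L.ByteString -> a
  decodeZ = decode . decompress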


 This function (transcodeIO, not transcode) takes, on my
 Core2 CPU T7200  @ 2.00GHz:

 real    30m8.794s
 user    29m30.659s
 sys     0m10.313s

 1068 Mb total memory in use


 The problem here is with snocU, that requires a lot of copying.

 I rewrote the transcode function so that the input data set is split in N
 parts:
 http://hpaste.org/fastcgi/hpaste.fcgi/view?id=3453#a3456

 The mapReduce function is the one defined in the Real World Haskell.


 The new function takes (using only one thread):

 real    18m48.039s
 user    18m30.901s
 sys     0m6.520s

 1351 Mb total memory in use


 The additional required memory is probably caused by unionsWith, that is
 not strict.
 The function takes less time, since array copying is optimized.
 I still use snocU, but on small arrays.

 GC time is very high: 54.4%


 Unfortunately I cannot test with more than one thread, since I get
 segmentation faults (probably a bug with uvector packages).

 I also got two strange errors (but this may be just the result of the
 segmentation fault, I'm not able to reproduce them):

 tpp.c:63: __pthread_tpp_change_priority: Assertion `new_prio == -1 ||
 (new_prio = __sched_fifo_min_prio  new_prio = __sched_fifo_max_prio)'
 failed.


 internal error: removeThreadFromQueue: not found
(GHC version 6.8.2 for i386_unknown_linux)
Please report this as a GHC bug:  http://www.haskell.org/ghc/reportabug



 Now the question is: what data structure should I use to optimize the
 transcode function?

 IMHO there are two solutions:

 1) Use a lazy array.
   Something like ByteString.Lazy, and what is available in
   storablevector package.

   Using this data structure, I can avoid the use of appendU.

 2) Use an unboxed list.

   Something like:
  http://mdounin.ru/hg/nginx-vendor-current/file/tip/src/core/ngx_list.h

   That is: a linked list of unboxed arrays, but unlike the lazy array
   solution, a snoc operation avoids copying if there is space in the
   current array.

   I don't know if this is easy/efficient to implement in Haskell.
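
    For concreteness, such a chunked list could look roughly like this (a
    sketch only; it assumes uvector's Data.Array.Vector exports and is
    untested on the real data set):

      import Data.Array.Vector (UArr, UA, emptyU, lengthU, snocU, appendU)

      -- a small, growing head chunk plus a reversed list of full chunks
      data ChunkSeq a = ChunkSeq
          { csHead   :: !(UArr a)    -- current chunk, grown by snocU
          , csChunks :: [UArr a]     -- earlier chunks, newest first
          }

      chunkSize :: Int
      chunkSize = 256

      emptyCS :: UA a => ChunkSeq a
      emptyCS = ChunkSeq emptyU []

      -- snoc copies only the small head chunk; full chunks are never copied again
      snocCS :: UA a => ChunkSeq a -> a -> ChunkSeq a
      snocCS (ChunkSeq h cs) x
          | lengthU h < chunkSize = ChunkSeq (h `snocU` x) cs
          | otherwise             = ChunkSeq (emptyU `snocU` x) (h : cs)

      -- flatten once at the end, when no more snocs are coming
      toUArr :: UA a => ChunkSeq a -> UArr a
      toUArr (ChunkSeq h cs) = foldr appendU emptyU (reverse (h : cs))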


 Any other suggestions?


 Thanks  Manlio Perillo
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


RE: [Haskell-cafe] Strange type error with associated type synonyms

2009-04-07 Thread Simon Peyton-Jones
| Here is a variation to make this point clearer:
|
| {-# LANGUAGE NoMonomorphismRestriction #-}
| {-# LANGUAGE TypeFamilies, ScopedTypeVariables #-}
|
| class Fun d where
| type Memo d :: * -> *
| abst :: (d -> a) -> Memo d a
| appl :: Memo d a -> (d -> a)
|
| f = abst . appl
|
| -- f' :: forall d a. (Fun d) => Memo d a -> Memo d a
| f' = abst . (id :: (d->a)->(d->a)) . appl
|
| There is a perfectly valid type signature for f', as given in
| comment, but GHCi gives an incorrect one (the same as for f):
|
| *Main> :browse Main
| class Fun d where
|   abst :: (d -> a) -> Memo d a
|   appl :: Memo d a -> d -> a
| f :: (Fun d) => Memo d a -> Memo d a
| f' :: (Fun d) => Memo d a -> Memo d a

I'm missing something here.  Those types are identical to the one given in your 
type signature for f' above, save that the forall is suppressed (because you 
are allowed to omit it, and GHC generally does when printing types).

I must be missing the point.

Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] CFP: JFP Special Issue on Generic Programming

2009-04-07 Thread Matthew Fluet (ICFP Publicity Chair)
 OPEN CALL FOR PAPERS

   JFP Special Issue on Generic Programming

   Deadline: 1 October 2009

  http://www.comlab.ox.ac.uk/ralf.hinze/JFP/cfp.html

Scope
-

Generic programming is about making programs more adaptable by making
them more general. Generic programs often embody non-traditional kinds
of polymorphism; ordinary programs are obtained from them by suitably
instantiating their parameters. In contrast to normal programs, the
parameters of a generic program are often quite rich in structure; for
example they may be other programs, types or type constructors,
classes, concepts, or even programming paradigms.

This special issue aims at documenting state-of-the-art research, new
developments and directions for future investigation in the broad
field of Generic Programming. It is an outgrowth of the series of
Workshops on Generic Programming, which started in 1998 and which
continues this year with an ICFP affiliated workshop in
Edinburgh. Participants of the workshops are invited to submit a
suitably revised and expanded version of their paper to the special
issue. The call for papers is, however, open. Other contributions are
equally welcome and are, indeed, encouraged. All submitted papers will
be subjected to the same quality criteria, meeting the standards of
the Journal of Functional Programming.

The special issue seeks original contributions on all aspects of
generic programming including but not limited to

o adaptive object-oriented programming,
o aspect-oriented programming,
o case studies,
o concepts (as in the STL/C++ sense),
o component-based programming,
o datatype-generic programming,
o generic programming with dependent types,
o meta-programming,
o polytypic programming, and
o programming with modules.

Submission details
--

Manuscripts should be unpublished works and not submitted elsewhere.
Revised versions of papers published in conference or workshop
proceedings that have not appeared in archival journals are eligible
for submission.

Deadline for submission:   1 October 2009
Notification of acceptance or rejection:  15 January 2010
Revised version due:  15 March   2010

For submission details, please consult
http://www.comlab.ox.ac.uk/ralf.hinze/JFP/cfp.html
or see the Journal's web page
http://journals.cambridge.org/jfp

Guest Editor


Ralf Hinze
University of Oxford
Computing Laboratory
Wolfson Building, Parks Road, Oxford OX1 3QD, UK.
Telephone: +44 (1865) 610700
Fax: +44 (1865) 283531
Email: ralf.hi...@comlab.ox.ac.uk
WWW: http://www.comlab.ox.ac.uk/ralf.hinze/

---
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re[2]: Parallel combinator, performance advice

2009-04-07 Thread Bulat Ziganshin
Hello Neil,

Tuesday, April 7, 2009, 6:13:29 PM, you wrote:

 Calls to System.Cmd.system, i.e. running external console processes.
 It's a make system I'm writing, so virtually all the time is spent in
 calls to ghc etc.

 To Bulat: I should have been clearer with the spec. The idea is that
 multiple calls to paralell_ can execute, and a function executing
 inside parallel_ can itself call parallel_. For this reason I need one
 top-level thread pool, which requires unsafePerformIO. If I create a
 thread pool every time, I end up with more threads than I want.

this is smth new to solve

i propose to use concept similar to Capability of GHC RTS:

we have one Capability provided by thread calling para and N-1
Capabilities provided by your thread pool. all that we need is to
reuse current thread Capability as part of pool!

para xs = do
  sem <- newQSem
  for xs $ \x -> do
    writeChan chan (x `finally` signalQSem sem)
  tid <- forkIO (executing commands from chan...)
  waitQSem sem
  killThread tid

instead of killThread we really should send pseudo-job (like my
Nothing value) that will led to self-killing of job that gets this
signal

this solution still may lead to a bit more or less than N threads
executed at the same time. your turn!
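
Spelled out a bit more concretely (a filled-in sketch of the pseudocode
above, not code from the thread; chan is the assumed global job queue, and
it keeps the killThread shortcut that is revised just below):

  import Control.Concurrent
  import Control.Exception (finally)
  import Control.Monad (forM_, forever)

  para :: Chan (IO ()) -> [IO ()] -> IO ()
  para chan xs = do
      -- relies on the old QSem accepting a negative initial count;
      -- it becomes free only after every job has signalled
      sem <- newQSem (1 - length xs)
      forM_ xs $ \x ->
          writeChan chan (x `finally` signalQSem sem)
      -- lend one extra worker (the caller's Capability) to the pool
      tid <- forkIO $ forever $ do
          job <- readChan chan
          job
      waitQSem sem
      killThread tid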




-- 
Best regards,
 Bulatmailto:bulat.zigans...@gmail.com

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Strange type error with associated type synonyms

2009-04-07 Thread Peter Berry
On 07/04/2009, Ryan Ingram ryani.s...@gmail.com wrote:
 On Mon, Apr 6, 2009 at 2:36 PM, Peter Berry pwbe...@gmail.com wrote:
 As I understand it, the type checker's thought process should be along
 these lines:

 1) the type signature dictates that x has type Memo d a.
 2) appl has type Memo d1 a -> d1 -> a for some d1.
 3) we apply appl to x, so Memo d1 a = Memo d a. unify d = d1

 This isn't true, though, and for similar reasons why you can't declare
 a generic instance Fun d => Functor (Memo d).  Type synonyms are not
 injective; you can have two instances that point at the same type:

Doh! I'm too used to interpreting words like Memo with an initial
capital as constructors, which are injective, when it's really a
function, which need not be.

 You can use a data family instead, and then you get the property you
 want; if you make Memo a data family, then Memo d1 = Memo d2 does
 indeed give you d1 = d2.

I'm now using Data.MemoTrie, which indeed uses data families, instead
of a home-brewed solution, and now GHC accepts the type signature. In
fact, it already has a Functor instance, so funmap is redundant.
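
For comparison, a data-family version of the class (a sketch of what data
families buy you, not the Data.MemoTrie code) makes Memo injective, so an
fmap over Memo gets an unambiguous signature:

  {-# LANGUAGE TypeFamilies #-}

  class Fun d where
      data Memo d :: * -> *
      abst :: (d -> a) -> Memo d a
      appl :: Memo d a -> (d -> a)

  instance Fun Bool where
      data Memo Bool a = MemoBool a a
      abst f = MemoBool (f False) (f True)
      appl (MemoBool x _) False = x
      appl (MemoBool _ y) True  = y

  -- now d is determined by the argument type, so this checks:
  memoFmap :: Fun d => (a -> b) -> Memo d a -> Memo d b
  memoFmap f = abst . (f .) . appl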

-- 
Peter Berry pwbe...@gmail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Strange type error with associated type synonyms

2009-04-07 Thread Claus Reinke

| Here is a variation to make this point clearer:
|
| {-# LANGUAGE NoMonomorphismRestriction #-}
| {-# LANGUAGE TypeFamilies, ScopedTypeVariables #-}
|
| class Fun d where
| type Memo d :: * -> *
| abst :: (d -> a) -> Memo d a
| appl :: Memo d a -> (d -> a)
|
| f = abst . appl
|
| -- f' :: forall d a. (Fun d) => Memo d a -> Memo d a
| f' = abst . (id :: (d->a)->(d->a)) . appl
|
| There is a perfectly valid type signature for f', as given in
| comment, but GHCi gives an incorrect one (the same as for f):
|
| *Main> :browse Main
| class Fun d where
|   abst :: (d -> a) -> Memo d a
|   appl :: Memo d a -> d -> a
| f :: (Fun d) => Memo d a -> Memo d a
| f' :: (Fun d) => Memo d a -> Memo d a


I'm missing something here.  Those types are identical to the one given
in your type signature for f' above, save that the forall is suppressed
(because you are allowed to omit it, and GHC generally does when
printing types).


Not with ScopedTypeVariables, though, where explicit foralls have
been given an additional significance. Uncommenting the f' signature works, while dropping the 
'forall d a' from it fails with

the usual match failure due to ambiguity: Couldn't match expected
type `Memo d1' against inferred type `Memo d'.


I must be missing the point.


The point was in the part you didn't quote:

|In other words, I'd expect :browse output more like this:
|
|f :: forall a d. (Fun d, d~_d) => Memo d a -> Memo d a
|f' :: forall a d. (Fun d) => Memo d a -> Memo d a
|
|making the first signature obviously ambiguous, and the
|second signature simply valid.

Again, the validity of the second signature depends on
ScopedTypeVariables - without that, both f and f' should
get a signature similar to the first one (or some other notation
that implies that 'd' isn't freely quantified, but must match a
non-accessible '_d').

Claus


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re[3]: Parallel combinator, performance advice

2009-04-07 Thread Bulat Ziganshin
Hello Bulat,

Tuesday, April 7, 2009, 6:50:14 PM, you wrote:

    tid <- forkIO (executing commands from chan...)
   waitQSem sem
   killThread tid

 instead of killThread we really should send pseudo-job (like my
 Nothing value) that will led to self-killing of job that gets this
 signal

 this solution still may lead to a bit more or less than N threads
 executed at the same time. your turn!

solved! every job should go together with Bool flag `killItself`.
last job should have this flag set to True. thread will execute job
and kill itself if this flag is True. so we get strong guarantees that
there are exactly N threads in the system:

para xs = do
  sem <- newQSem
  for (init xs) $ \x -> do
    writeChan chan (x `finally` signalQSem sem, False)
  writeChan chan (last xs `finally` signalQSem sem, True)
  --
  tid <- forkIO $ do
    let cycle = do
          (x,flag) <- readChan chan
          x
          unless flag cycle
    cycle
  --
  waitQSem sem


btw, this problem looks a great contribution into Haskell way
book of exercises

-- 
Best regards,
 Bulatmailto:bulat.zigans...@gmail.com

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Parallel combinator, performance advice

2009-04-07 Thread ChrisK
Neil Mitchell wrote:
 
 I guess the nested calls to parallel_ bit is the part of the spec
 that makes everything much harder!
 
 Thanks
 
 Neil

Yes.  Much more annoying.

But the problem here is generic.  To avoid it you must never allow all threads to
block at once.

The parallel_ function is such a job, so you solved this with the 'idempotent'
trick.  Your solution works by blocking all but 1 thread.

1a) Some worker thread 1 executes parallel_ with some jobs
1b) These get submitted the work queue 'chan'
1c) worker thread 1 starts on those same jobs, ignoring the queue
1d) worker thread 1 reaches the job being processed by thread 2
1e) worker thread 1 blocks until the job is finished in modifyMVar

2a) Worker thread 2 grabs a job posted by thread 1, that calls parallel_
2b) This batch of jobs gets submitted to the work queue 'chan'
2c) worker thread 2 starts on those same jobs, ignoring the queue
2d) worker thread 2 reaches the job being processed by thread 3
2e) worker thread 2 blocks until the job is finished in modifyMVar

3...4...5...

And now only 1 thread is still working, and it has to work in series.

I think I can fix this...

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Parallel combinator, performance advice

2009-04-07 Thread Sebastian Sylvan
This is a random idea, that's probably not going to work, but I don't have a
way of testing it so I'll just post it!
How about using unsafeInterleaveIO to get a lazy suspension of the result of
each action, and then using par to spark off each of them? If that works you
can reuse the existing task-parallel system of GHC to do the heavy lifting
for you, instead of having to write your own.

On Tue, Apr 7, 2009 at 11:25 AM, Neil Mitchell ndmitch...@gmail.com wrote:

 Hi,

 I've written a parallel_ function, code attached. I'm looking for
 criticism, suggestions etc on how to improve the performance and
 fairness of this parallel construct. (If it turns out this construct
 is already in a library somewhere, I'd be interested in that too!)

 The problem I'm trying to solve is running system commands in
 parallel. Importantly (unlike other Haskell parallel stuff) I'm not
 expecting computationally heavy Haskell to be running in the threads,
 and only want a maximum of n commands to fire at a time. The way I'm
 trying to implement this is with a parallel_ function:

 parallel_ :: [IO a] -> IO ()

 The semantics are that after parallel_ returns each action will have
 been executed exactly once. The implementation (attached) creates a
 thread pool of numCapabilities-1 threads, each of which reads from a
 task pool and attempts to do some useful work. I use an idempotent
 function to ensure that all work is done at most once, and a sequence_
 to ensure all work is done at least once.
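
A minimal sketch of what such an idempotent wrapper might look like (a
guess at the idea, not the attached code): it returns an action that runs
its argument the first time it is executed and does nothing afterwards,
blocking a concurrent second runner until the first has finished.

  import Control.Concurrent.MVar

  idempotent :: IO () -> IO (IO ())
  idempotent act = do
      var <- newMVar (Just act)
      return $ modifyMVar_ var $ \st -> case st of
          Just a  -> a >> return Nothing   -- first run does the work
          Nothing -> return Nothing        -- later runs: already done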

 Running a benchmark of issuing 1 million trivial tasks (create,
 modify, read an IO ref) the version without any parallelism is really
 fast (< 0.1 sec), and the version with parallelism is slow (> 10 sec).
 This could be entirely due to space leaks etc when queueing many
 tasks.

 I'm grateful for any thoughts people might have!

 Thanks in advance,

 Neil

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Sebastian Sylvan
+44(0)7857-300802
UIN: 44640862
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] advice on space efficient data structure with efficient snoc operation

2009-04-07 Thread Manlio Perillo

Edward Kmett ha scritto:
I'm in the process of adding a Data.Sequence.Unboxed to 
unboxed-containers. I hope to have it in hackage today or tomorrow, 
should its performance work out as well as Data.Set.Unboxed.
 


Looking at the data definition of Data.Sequence I suspect that it is not 
really space efficient.


Please note that I have 480189 arrays, where each array has, on average, 
209 (Word16 :*: Word8) elements.


Using a [(Word32, UArr (Word16 :*: Word8))] takes about 800 MB (but it's 
hard to measure exact memory usage).



 [...]


Thanks  Manlio
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Strange type error with associated type synonyms

2009-04-07 Thread Peter Berry
On Mon, Apr 6, 2009 at 7:39 PM, Manuel M T Chakravarty
c...@cse.unsw.edu.au wrote:
 Peter Berry:

 3) we apply appl to x, so Memo d1 a = Memo d a. unify d = d1

 But for some reason, step 3 fails.

 Step 3 is invalid - cf,
 http://www.haskell.org/pipermail/haskell-cafe/2009-April/059196.html.

  More generally, the signature of memo_fmap is ambiguous, and hence,
  correctly rejected.  We need to improve the error message, though.
  Here is a previous discussion of the subject:

   http://www.mail-archive.com/haskell-cafe@haskell.org/msg39673.html

Aha! Very informative, thanks.

On 07/04/2009, Manuel M T Chakravarty c...@cse.unsw.edu.au wrote:
 Matt Morrow:
 The thing that confuses me about this case is how, if the type sig
 on memo_fmap is omitted, ghci has no problem with it, and even gives
 it the type that it rejected:

 Basically, type checking proceeds in one of two modes: inferring or
  checking.  The former is when no signature is given; the
 latter, if there is a user-supplied signature.  GHC can infer
 ambiguous signatures, but it cannot check them.  This is of course
 very confusing and we need to fix this (by preventing GHC from
 inferring ambiguous signatures).  The issue is also discussed in the
 mailing list thread I cited in my previous reply.

I see. So GHC is wrong to accept memo_fmap?

-- 
Peter Berry pwbe...@gmail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re[2]: Parallel combinator, performance advice

2009-04-07 Thread Bulat Ziganshin
Hello Neil,

Tuesday, April 7, 2009, 6:13:29 PM, you wrote:

 Consider a thread pool with 2 threads and the call parallel_ [parallel_ 
 [b,c],a]

 You get the sequence:
 enqueue (parallel_ [b,c])
 a
 wait on parallel_ [b,c]

 While you are executing a, a thread pool starts:
 enqueue b
 c
 wait for b

 Now you have all the threads waiting, and no one dealing with the
 thread pool. This results in deadlock.

i think the only way to solve this problem is to create one more
thread each time. let's see: on every call to para you need to alloc
one thread to wait for jobs completion. so on each nested call to para
you have minus one worker thread. finally you will eat them all!

so you need to make fork: one thread should serve jobs and another one
wait for completion of this jobs bucket. and with killItself flag you
will finish superfluous thread JIT


-- 
Best regards,
 Bulatmailto:bulat.zigans...@gmail.com

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] threadDelay granularity

2009-04-07 Thread Ulrik Rasmussen
On Tue, Apr 07, 2009 at 04:34:22PM +0200, Peter Verswyvelen wrote:
 Do you want to cap the rendering framerate at 60FPS or the animation
 framerate?
 Because when you use OpenGL and GLFW, you can just
 
 GLFW.swapInterval $= 1
 
 to cap the rendering framerate at the refresh rate of your monitor or LCD
 screen (usually 60Hz)

I just want to cap the rendering framerate. The game logic is running in
other threads, and sends rendering information via a Chan to the
renderer. 

I'm using GLUT, and have never heard of GLFW. However, that seems to be
a better tool to get the job done. I'll check it out, thanks :).

/Ulrik

 
 
 On Tue, Apr 7, 2009 at 1:41 PM, Ulrik Rasmussen hask...@utr.dk wrote:
 
  Hello.
 
  I am writing a simple game in Haskell as an exercise, and in the
  rendering loop I want to cap the framerate to 60fps. I had planned to do
  this with GHC.Conc.threadDelay, but looking at its documentation, I
  discovered that it can only delay the thread in time spans that are
  multiples of 20ms:
 
 
  http://www.haskell.org/ghc/docs/6.4/html/libraries/base/Control.Concurrent.html
 
  I need a much finer granularity than that, so I wondered if it is
  possible to either get a higher resolution for threadDelay, or if there
  is an alternative to threadDelay?
 
  I noticed that the SDL library includes the function delay, which
  indeed works with a resolution down to one millisecond. However, since
  I'm using HOpenGL and GLUT, I think it would be a little overkill to
  depend on SDL just for this :).
 
 
  Thanks,
 
  Ulrik Rasmussen
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Re[2]: Parallel combinator, performance advice

2009-04-07 Thread Neil Mitchell
Hi

Sebastian:

 How about using unsafeInterleaveIO to get a lazy suspension of the result of 
 each action,
  and then using par to spark off each of them? If that works you can reuse 
 the existing
 task-parallel system of GHC to do the heavily lifting for you, instead of 
 having to write your
 own.

par is likely to spark all the computations, and then switch between
them - which will mean I've got more than N things running in
parallel.

 i think the only way to solve this problem is to create one more
 thread each time. let's see: on every call to para you need to alloc
 one thread to wait for jobs completion. so on each nested call to para
 you have minus one worker thread. finally you will eat them all!

 so you need to make fork: one thread should serve jobs and another one
 wait for completion of this jobs bucket. and with killItself flag you
 will finish superfluous thread JIT

You are right, your previous solution was running at N-1 threads if
the order was a little unlucky. I've attached a new version which I
think gives you N threads always executing at full potential. It's
basically your idea from the last post, with the main logic being:

parallel_ (x1:xs) = do
    sem <- newQSem $ 1 - length xs
    forM_ xs $ \x ->
        writeChan queue (x >> signalQSem sem, False)
    x1
    addWorker
    waitQSem sem
    writeChan queue (signalQSem sem, True)
    waitQSem sem

Where the second flag being True = kill, as you suggested. I think
I've got the semaphore logic right - anyone want to see if I missed
something?

With this new version running 100 items takes ~1 second, instead
of ~10 seconds before, so an order of magnitude improvement, and
greater fairness. Very nice, thanks for all the help!

Thanks

Neil


Parallel3.hs
Description: Binary data
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re[2]: Parallel combinator, performance advice

2009-04-07 Thread Bulat Ziganshin
Hello Neil,

Tuesday, April 7, 2009, 6:13:29 PM, you wrote:

 Calls to System.Cmd.system, i.e. running external console processes.
 It's a make system I'm writing, so virtually all the time is spent in
 calls to ghc etc.

btw, if all that you need is to limit amount of simultaneous
System.Cmd.system calls, you may go from opposite side: wrap this call
into semaphore:

sem = unsafePerformIO$ newQSem numCapabilities

mysystem = bracket_ (waitQSem sem) (signalQSem sem) . system

and implement para as simple thread population:

para = mapM_ forkIO
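
Fleshed out into a compiling sketch (an elaboration of the idea, with
numCapabilities taken from GHC.Conc and system from System.Cmd):

  import Control.Concurrent
  import Control.Exception (bracket_)
  import GHC.Conc (numCapabilities)
  import System.Cmd (system)
  import System.Exit (ExitCode)
  import System.IO.Unsafe (unsafePerformIO)

  {-# NOINLINE sem #-}
  sem :: QSem
  sem = unsafePerformIO $ newQSem numCapabilities

  -- at most numCapabilities external commands run at any one time
  mysystem :: String -> IO ExitCode
  mysystem = bracket_ (waitQSem sem) (signalQSem sem) . system

  -- cheap Haskell threads; the semaphore, not the thread count,
  -- limits the real concurrency
  para :: [String] -> IO ()
  para = mapM_ (\cmd -> forkIO (mysystem cmd >> return ()))

Note that, unlike parallel_, this para returns immediately instead of
waiting for the commands to finish.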


-- 
Best regards,
 Bulatmailto:bulat.zigans...@gmail.com

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Re[2]: Parallel combinator, performance advice

2009-04-07 Thread Neil Mitchell
Hi Bulat,

 btw, if all that you need is to limit amount of simultaneous
 System.Cmd.system calls, you may go from opposite side: wrap this call
 into semaphore:

 sem = unsafePerformIO$ newQSem numCapabilities

 mysystem = bracket_ (waitQSem sem) (signalQSem sem) . system

 and implement para as simple thread population:

 para = mapM_ forkIO

My main motivation is to limit the number of simultaneous system calls, but it's
also useful from a user point of view if the system is doing a handful
of things at a time - it makes it easier to track what's going on.

I might try that tomorrow and see if it makes a difference to the
performance. While the majority of computation is in system calls,
quite a few of the threads open files etc, and having them all run in
parallel would end up with way too many open handles etc.

Thanks

Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re[4]: Parallel combinator, performance advice

2009-04-07 Thread Bulat Ziganshin
Hello Neil,

Tuesday, April 7, 2009, 7:33:25 PM, you wrote:
 How about using unsafeInterleaveIO to get a lazy suspension of the result of 
 each action,
  and then using par to spark off each of them? If that works you can reuse 
 the existing
 task-parallel system of GHC to do the heavily lifting for you, instead of 
 having to write your
 own.

 par is likely to spark all the computations, and then switch between
 them - which will mean I've got more than N things running in
 parallel.

par/GHC RTS limits amount of Haskell threads running simultaneously.
with a system call marked as safe, Capability will be freed while we
execute external program so nothing will be limited except for amount
of tasks *starting* (as opposed to running) simultaneously :)))


-- 
Best regards,
 Bulatmailto:bulat.zigans...@gmail.com

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re[4]: Parallel combinator, performance advice

2009-04-07 Thread Bulat Ziganshin
Hello Neil,

Tuesday, April 7, 2009, 7:33:25 PM, you wrote:

 parallel_ (x1:xs) = do
     sem <- newQSem $ 1 - length xs
     forM_ xs $ \x ->
         writeChan queue (x >> signalQSem sem, False)
     x1
     addWorker
     waitQSem sem
     writeChan queue (signalQSem sem, True)
     waitQSem sem

 Where the second flag being True = kill, as you suggested. I think
 I've got the semaphore logic right - anyone want to see if I missed
 something?

Neil, executing x1 directly in parallel_ is an incorrect idea. You should
have N worker threads, not N-1 threads plus one job executed in the main
thread. Imagine that you have 1000 jobs and N=4: what you will get here
is 3 threads each executing 333 jobs, and 1 job executed by the main
thread.

So you still need to detach one more worker job and finish it just
before we are ready to finish waiting for the QSem and continue in the main
thread, which is the sole reason why we need the killItself flag. In this code
snippet the flag is completely useless, btw.


-- 
Best regards,
 Bulatmailto:bulat.zigans...@gmail.com

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re[4]: Parallel combinator, performance advice

2009-04-07 Thread Bulat Ziganshin
Hello Neil,

Tuesday, April 7, 2009, 7:47:17 PM, you wrote:

 para = mapM_ forkIO

 I might try that tomorrow and see if it makes a difference to the
 performance. While the majority of computation is in system calls,
 quite a few of the threads open files etc, and having them all run in
 parallel would end up with way too many open handles etc.

if you have too many threads, you may replace forkIO with one more
QSem-enabled call:

semIO = unsafePerformIO$ newQSem 100

myForkIO = bracket_ (waitQSem semIO) (signalQSem semIO) . forkIO

this limit may be much higher than for System.Cmd.system


or you may go further and replace it with a thread pool approach. the
main problem behind this is raw calls to forkIO, since these increase the
number of threads capable of calling System.Cmd.system without any
control from us



-- 
Best regards,
 Bulatmailto:bulat.zigans...@gmail.com

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] threadDelay granularity

2009-04-07 Thread Duane Johnson
The Hipmunk 2D physics engine comes with a playground app which  
includes the following function:



-- | Advances the time.
advanceTime :: IORef State -> Double -> KeyButtonState -> IO Double
advanceTime stateVar oldTime slowKey = do
  newTime <- get time

  -- Advance simulation
  let slower = if slowKey == Press then slowdown else 1
      mult   = frameSteps / (framePeriod * slower)
      framesPassed   = truncate $ mult * (newTime - oldTime)
      simulNewTime   = oldTime + toEnum framesPassed / mult
  advanceSimulTime stateVar $ min maxSteps framesPassed

  -- Correlate with reality
  newTime' <- get time
  let diff = newTime' - simulNewTime
      sleepTime = ((framePeriod * slower) - diff) / slower
  when (sleepTime > 0) $ sleep sleepTime
  return simulNewTime



I think the get time is provided by GLFW.

-- Duane Johnson


On Apr 7, 2009, at 9:25 AM, Ulrik Rasmussen wrote:


On Tue, Apr 07, 2009 at 04:34:22PM +0200, Peter Verswyvelen wrote:

Do you want to cap the rendering framerate at 60FPS or the animation
framerate?
Because when you use OpenGL and GLFW, you can just

GLFW.swapInterval $= 1

to cap the rendering framerate at the refresh rate of your monitor  
or LCD

screen (usually 60Hz)


I just want to cap the rendering framerate. The game logic is  
running in

other threads, and sends rendering information via a Chan to the
renderer.

I'm using GLUT, and have never heard of GLFW. However, that seems to  
be

a better tool to get the job done. I'll check it out, thanks :).

/Ulrik




On Tue, Apr 7, 2009 at 1:41 PM, Ulrik Rasmussen hask...@utr.dk  
wrote:



Hello.

I am writing a simple game in Haskell as an exercise, and in the
rendering loop I want to cap the framerate to 60fps. I had planned  
to do

this with GHC.Conc.threadDelay, but looking at its documentation, I
discovered that it can only delay the thread in time spans that are
multiples of 20ms:


http://www.haskell.org/ghc/docs/6.4/html/libraries/base/Control.Concurrent.html

I need a much finer granularity than that, so I wondered if it is
possible to either get a higher resolution for threadDelay, or if  
there

is an alternative to threadDelay?

I noticed that the SDL library includes the function delay, which
indeed works with a resolution down to one millisecond. However,  
since

I'm using HOpenGL and GLUT, I think it would be a little overkill to
depend on SDL just for this :).


Thanks,

Ulrik Rasmussen
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re[5]: Parallel combinator, performance advice

2009-04-07 Thread Bulat Ziganshin
Hello Bulat,

Tuesday, April 7, 2009, 7:50:08 PM, you wrote:

 parallel_ (x1:xs) = do
     sem <- newQSem $ 1 - length xs
     forM_ xs $ \x ->
         writeChan queue (x >> signalQSem sem, False)
     x1
     addWorker
     waitQSem sem
     writeChan queue (signalQSem sem, True)
     waitQSem sem

 Neil, executing x1 directly in parallel_ is incorrect idea.

forget this. but it is still a bit suboptimal: after everything has
finished, we schedule one more empty job and wait until some worker
thread picks it up. it goes into the Chan behind all the jobs that were
scheduled while ours were executing, so in effect we do no internal
activity at all while we have all N external programs running

instead, my solution packed this flag together with the last job, so once
the last job is finished we return from parallel_ immediately and
other internal activity may go on

-- 
Best regards,
 Bulatmailto:bulat.zigans...@gmail.com

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Re[5]: Parallel combinator, performance advice

2009-04-07 Thread Neil Mitchell
Hi

 par is likely to spark all the computations, and then switch between
 them - which will mean I've got more than N things running in
 parallel.

| par/GHC RTS limits amount of Haskell threads running simultaneously.
| with a system call marked as safe, Capability will be freed while we
| execute external program so nothing will be limited except for amount
| of tasks *starting* (as opposed to running) simultaneously :)))

Yeah, I misspoke - I want to avoid starting N things.

 parallel_ (x1:xs) = do
     sem <- newQSem $ 1 - length xs
     forM_ xs $ \x ->
         writeChan queue (x >> signalQSem sem, False)
     x1
     addWorker
     waitQSem sem
     writeChan queue (signalQSem sem, True)
     waitQSem sem

 Neil, executing x1 directly in parallel_ is incorrect idea.

It's a very slight optimisation, as it saves us queueing and
dequeueing x1, since we know the worker we're about to spawn on the
line below will grab x1 immediately.

 forget this. but it is still a bit suboptimal: after everything has
 finished, we schedule one more empty job and wait until some worker
 thread picks it up. it goes into the Chan behind all the jobs that were
 scheduled while ours were executing, so in effect we do no internal
 activity at all while we have all N external programs running

 instead, my solution packed this flag together with the last job, so once
 the last job is finished we return from parallel_ immediately and
 other internal activity may go on

There is no guarantee that the last job finishes last. If the first
job takes longer than the last job we'll be one thread short while
waiting on the first job. It's a shame, since removing that additional
writeChan isn't particularly useful.

Thanks

Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] threadDelay granularity

2009-04-07 Thread Peter Verswyvelen
yes, get time comes from GLFW; that is, get comes from OpenGL, time from
GLFW.
I think the time provided by GLFW has a very high resolution but is not very
accurate in the long run, which is not a real problem for games I guess

On Tue, Apr 7, 2009 at 6:09 PM, Duane Johnson duane.john...@gmail.comwrote:

 The Hipmunk 2D physics engine comes with a playground app which includes
 the following function:

  -- | Advances the time.
 advanceTime :: IORef State -> Double -> KeyButtonState -> IO Double
 advanceTime stateVar oldTime slowKey = do
  newTime <- get time

  -- Advance simulation
  let slower = if slowKey == Press then slowdown else 1
      mult   = frameSteps / (framePeriod * slower)
      framesPassed   = truncate $ mult * (newTime - oldTime)
      simulNewTime   = oldTime + toEnum framesPassed / mult
  advanceSimulTime stateVar $ min maxSteps framesPassed

  -- Correlate with reality
  newTime' <- get time
  let diff = newTime' - simulNewTime
      sleepTime = ((framePeriod * slower) - diff) / slower
  when (sleepTime > 0) $ sleep sleepTime
  return simulNewTime



 I think the get time is provided by GLFW.

 -- Duane Johnson



 On Apr 7, 2009, at 9:25 AM, Ulrik Rasmussen wrote:

  On Tue, Apr 07, 2009 at 04:34:22PM +0200, Peter Verswyvelen wrote:

 Do you want to cap the rendering framerate at 60FPS or the animation
 framerate?
 Because when you use OpenGL and GLFW, you can just

 GLFW.swapInterval $= 1

 to cap the rendering framerate at the refresh rate of your monitor or LCD
 screen (usually 60Hz)


 I just want to cap the rendering framerate. The game logic is running in
 other threads, and sends rendering information via a Chan to the
 renderer.

 I'm using GLUT, and have never heard of GLFW. However, that seems to be
 a better tool to get the job done. I'll check it out, thanks :).

 /Ulrik



 On Tue, Apr 7, 2009 at 1:41 PM, Ulrik Rasmussen hask...@utr.dk wrote:

  Hello.

 I am writing a simple game in Haskell as an exercise, and in the
 rendering loop I want to cap the framerate to 60fps. I had planned to do
 this with GHC.Conc.threadDelay, but looking at its documentation, I
 discovered that it can only delay the thread in time spans that are
 multiples of 20ms:



 http://www.haskell.org/ghc/docs/6.4/html/libraries/base/Control.Concurrent.html

 I need a much finer granularity than that, so I wondered if it is
 possible to either get a higher resolution for threadDelay, or if there
 is an alternative to threadDelay?

 I noticed that the SDL library includes the function delay, which
 indeed works with a resolution down to one millisecond. However, since
 I'm using HOpenGL and GLUT, I think it would be a little overkill to
 depend on SDL just for this :).


 Thanks,

 Ulrik Rasmussen
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

  ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re[6]: Parallel combinator, performance advice

2009-04-07 Thread Bulat Ziganshin
Hello Bulat,

Tuesday, April 7, 2009, 8:10:43 PM, you wrote:

 parallel_ (x1:xs) = do
     sem <- newQSem $ 1 - length xs
     forM_ xs $ \x ->
         writeChan queue (x >> signalQSem sem, False)
     x1
     addWorker
     waitQSem sem
     writeChan queue (signalQSem sem, True)
     waitQSem sem

 Neil, executing x1 directly in parallel_ is incorrect idea.

 forget this. but it still a bit suboptimal...

i think i realized why you use this scheme. my solution may lead to
N-1 worker threads in the system if the last job is too small - after its
execution we finish one thread and have just N-1 working threads until
parallel_ is finished

but the problem i mentioned in the previous letter may also take place,
although it looks less important. we may solve both problems by
allowing the worker thread to actively select its death time: it should
die only at the moment when the *last* job in the bucket has finished - this
guarantees us exactly N worker threads at any time. so:

parallel_ (x1:xs) = do
    sem <- newQSem $ - length xs
    jobsLast <- newMVar (length xs)
    addWorker
    forM_ (x1:xs) $ \x -> do
        writeChan queue $ do
            x
            signalQSem sem
            modifyMVar jobsLast $ \jobs -> do
                return (jobs-1, jobs==0)
    --
    waitQSem sem


and modify last 3 lines of addWorker:

addWorker :: IO ()
addWorker = do
    forkIO $ f `E.catch` \(e :: SomeException) ->
        throwTo mainThread $ ErrorCall "Control.Concurrent.Parallel: parallel thread died."
    return ()
    where
        f :: IO ()
        f = do
            act <- readChan queue
            kill <- act
            unless kill f



-- 
Best regards,
 Bulatmailto:bulat.zigans...@gmail.com

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Configuring cabal dependencies at install-time

2009-04-07 Thread John Dorsey
Edward,

Thanks for straightening me out; I see the problems better now.  In
particular I was missing:

1)  Orphaned types (and related design issues) get in the way of
splitting the package.

2)  Viral dependencies work in two directions, since upstream packages
must pick up your deps to include instances of your classes.

I'm thinking out loud, so bear with me.

 The problem is there is no path to get there from here. Getting another
 library to depend on mine, they have to pick up the brittle dependency set I
 have now. Splitting my package into smaller packages fails because I need to
 keep the instances for 3rd party data types packed with the class
 definitions to avoid orphan instances and poor API design. So the option to

Some class instances can go in three places:

a)  The source package for the type, which then picks up your deps.  Bad.

b)  Your package, which then has a gazillion deps.  Bad.

c)  Your sub-packages, in which case they're orphaned.  Bad.

I have to wonder whether (c) isn't the least of evils.  Playing the
advocate:

-  Orphaned instances are bad because of the risk of multiple instances.
That risk should be low in this case; if anyone else wanted an instance
of, say, a Prelude ADT for your library's class, their obvious option is
to use your sub-package.

-  If you accept the above, then orphaning the instance in a sub-package
that's associated with either the type's or the class's home is morally
better than providing an instance in an unaffiliated third package.

-  Orphaning in sub-packages as a stopgap could make it much easier to
get your class (and the instance) added to those upstream packages where
it makes sense to do so.

This clearly doesn't solve all parts of the problem.  You may have other
design concerns that make sub-packages undesirable.  Even with instance
definitions removed you may still have enough dependencies to deter
integration.  The problem probably extends beyond just class instances.

 The only other alternative that I seem to have at this point in the cabal
 packaging system is to create a series of flags for optional functionality.

This sounds like a rat hole of a different nature.  You lose the ability
to tell if an API is supported based on whether the package that implements
it is installed.  An installed and working package can cease to function
after (possibly automatic) reinstallation when other packages become
available.  Complicated new functionality is required in Cabal.

Regards,
John

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Configuring cabal dependencies at install-time

2009-04-07 Thread Simon Michael
I'm facing this problem too, with hledger. It has optional happstack and 
vty interfaces which add to the difficulty and platform-specificity of 
installation. Currently I publish all in one package with a cabal flag 
for each interface, with happstack off and vty on by default. vty isn't 
available on windows, but I understand that cabal is smart enough to 
flip the flags until it finds a combination that is installable, so I 
hoped it would just turn off vty for windows users. It didn't, though.


An alternative is to publish separate packages, debian style: 
libhledger, hledger, hledger-vty, hledger-happs etc. These are more 
discoverable and easier to document for users. It does seem hackage 
would be less fun to browse if it fills up with all these variants. But 
maybe it's simpler.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monad transformer to consume a list

2009-04-07 Thread Stephan Friedrichs
My solution is this transformer:



newtype ConsumerT c m a
    = ConsumerT { runConsumerT :: [c] -> m (a, [c]) }

instance (Monad m) => Monad (ConsumerT c m) where
    return x = ConsumerT $ \cs -> return (x, cs)
    m >>= f  = ConsumerT $ \cs -> do
        ~(x, cs') <- runConsumerT m cs
        runConsumerT (f x) cs'
    fail msg = ConsumerT $ const (fail msg)

consume :: (Monad m) => ConsumerT c m (Maybe c)
consume = ConsumerT $ \css -> case css of
    []     -> return (Nothing, [])
    (c:cs) -> return (Just c, cs)

consumeAll :: (Monad m) => ConsumerT c m [c]
consumeAll = ConsumerT $ \cs -> return (cs, [])
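
A small usage example of the above (added for illustration; it just shows
the shape of runConsumerT):

  example :: IO ()
  example = do
      (result, leftovers) <- runConsumerT consumer [1 .. 5 :: Int]
      print result      -- (Just 1,Just 2,[3,4,5])
      print leftovers   -- []
    where
      consumer = do
          a <- consume
          b <- consume
          rest <- consumeAll
          return (a, b, rest)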


-- 

It used to be said: I think, therefore I am.
Today we know: it also works without that.

 - Dieter Nuhr
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monad transformer to consume a list

2009-04-07 Thread Henning Thielemann


On Tue, 7 Apr 2009, Stephan Friedrichs wrote:


My solution is this transformer:


newtype ConsumerT c m a
    = ConsumerT { runConsumerT :: [c] -> m (a, [c]) }

instance (Monad m) => Monad (ConsumerT c m) where
    return x = ConsumerT $ \cs -> return (x, cs)
    m >>= f  = ConsumerT $ \cs -> do
        ~(x, cs') <- runConsumerT m cs
        runConsumerT (f x) cs'
    fail msg = ConsumerT $ const (fail msg)


But this is precisely the StateT, wrapped in a newtype and with restricted 
operations on it. You could as well define


newtype ConsumerT c m a =
    ConsumerT { runConsumerT :: StateT [c] m a }

instance (Monad m) => Monad (ConsumerT c m) where
    return x = ConsumerT $ return x
    m >>= f  = ConsumerT $ runConsumerT . f =<< runConsumerT m
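
Under that formulation the primitives from the previous message become thin
wrappers over the state operations - a sketch, assuming mtl's StateT:

  import Control.Monad.State (StateT, get, put)

  consume :: Monad m => ConsumerT c m (Maybe c)
  consume = ConsumerT $ do
      css <- get
      case css of
          []     -> return Nothing
          (c:cs) -> put cs >> return (Just c)

  consumeAll :: Monad m => ConsumerT c m [c]
  consumeAll = ConsumerT $ do
      cs <- get
      put []
      return cs
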
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Configuring cabal dependencies at install-time

2009-04-07 Thread John Lato
The problem of type class instances for third-party types is exactly
how I ran into this.  Currently I don't know of a good solution, where
good means that it meets these criteria:

1.  Does not introduce orphan instances
2.  Allows for instances to be provided based upon the user's
installed libraries
3.  Allows for a separation of core package dependencies and
dependencies that are only
 included to provide instances
4.  Has sane dependency requirements (within the current Cabal framework)

This seems harder than the problem Jeff has with buster, because the
separate packages of buster-http, buster-ui, etc. make sense both
organizationally and as an implementation issue; it's more a question
of the politeness of putting that collection on hackage.  For type
class instances, this isn't an option unless one provides orphan
instances.

I like Edward's suggestion of an augments flag.  As I envision it,
package Foo would provide something like a phantom instance of a
type from package Bar, where the instance is not actually available
until the matching library Bar is installed, at which point the
compiler would compile the instance (or at least flag Foo for
recompilation) and make it available.  I have no idea how much work
this would take, or where one would go about starting to implement it,
though.

John

On Tue, Apr 7, 2009 at 3:10 PM, Edward Kmett ekm...@gmail.com wrote:
 This has been a lot on my mind lately as my current library provides
 additional functionality to data types from a wide array of other packages.
 I face a version of Wadler's expression problem.

 I provide a set of classes for injecting into monoids/seminearrings/etc. to
 allow for quick reductions over different data structures. The problem is
 that of course the interfaces are fairly general so whole swathes of types
 (including every applicative functor!) qualifies for certain operations.

 Perhaps the ultimate answer would be to push more of the instances down into
 the source packages. I can do this with some of the monoid instances, but
 convincing folks of the utility of the fact that their particular
 applicative forms a right-seminearring when it contains a monoid is another
 matter entirely.

 The problem is there is no path to get there from here. Getting another
 library to depend on mine, they have to pick up the brittle dependency set I
 have now. Splitting my package into smaller packages fails because I need to
 keep the instances for 3rd party data types packed with the class
 definitions to avoid orphan instances and poor API design. So the option to
 split things into the equivalent of 'buster-ui', 'buster-network' and so
 forth largely fails on that design criterion. I can do that for new monoids,
 rings and so forth that I define that purely layer on top of the base
 functionality I provide, but not for ones that provide additional instances
 for 3rd party data types.

 I can keep adding libraries as dependencies like I am doing now, but that
 means that my library continues to accrete content at an alarming rate and
 more importantly every one introduces a greater possibility of build issues,
 because I can only operate in an environment where every one of my
 dependencies can install.

 This further exacerbates the problem that no one would want to add all of my
 pedantic instances because to do so they would have to inject a huge brittle
 dependency into their package.

 The only other alternative that I seem to have at this point in the cabal
 packaging system is to create a series of flags for optional functionality.
 This solves _my_ problem, in particular it lets me install on a much broader
 base of environments, but now the order in which my package was installed
 with respect to its dependencies matters. In particular, clients of the
 library won't know if they have access to half of the instances, and so are
 stuck limiting themselves to working either on a particular computer, or
 using the intersection of the functionality I can provide.

 Perhaps, what I would ideally like to have would be some kind of 'augments'
 or 'codependencies' clause in the cabal file inside of flags and build
 targets that indicates packages that should force my package to
 reinstall after a package matching the version range inside the
 codependencies clause is installed, or at least prompt indicating that new
 functionality would be available and what packages you should reinstall.

 This would let me have my cake and eat it too. I could provide a wide array
 of instances for different stock data types, and I could know that if
 someone depends on both, say,  'monoids' and 'parsec 3' that the parsec
 instances will be present and usable in my package.

 Most importantly, it would allow me to fix my 'expression problem'. Others
 could introduce dependencies on the easier to install library allowing me to
 shrink the library and I would be able to install in more environments.

 -Edward Kmett

 On Tue, Apr 7, 2009 at 9:20 

Re: [Haskell-cafe] How to cut a file effciently?

2009-04-07 Thread Luke Palmer
split n [] = []
split n xs = take n xs : split n (drop n xs)

main = do
    text <- readFile "source"
    mapM_ (\(n,dat) -> writeFile ("dest" ++ show n) (unlines dat)) .
        zip [0..] . split 10000 . lines $ text

Modulo brainos... but you get the idea.  This is lazy (because readFile is).
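
If plain Strings turn out to be too slow for a million-line file, the same
shape works with lazy ByteStrings (a variant sketch, not Luke's code):

  import qualified Data.ByteString.Lazy.Char8 as L

  chunks :: Int -> [a] -> [[a]]
  chunks _ [] = []
  chunks n xs = take n xs : chunks n (drop n xs)

  main :: IO ()
  main = do
      text <- L.readFile "source"
      mapM_ (\(n, dat) -> L.writeFile ("dest" ++ show n) (L.unlines dat))
            (zip [0 :: Int ..] (chunks 10000 (L.lines text)))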

Luke

On Tue, Apr 7, 2009 at 11:20 PM, Magicloud Magiclouds 
magicloud.magiclo...@gmail.com wrote:

 Hi,
  Let us say I have a text file of a million lines, and I want to cut
 it into smaller (10K lines) ones.
  How to do this? I have tried a few ways, none I think is lazy (I
 mean not reading the file all at the start).
 --
 A dense bamboo grove does not hinder the flowing water;
 a high mountain cannot block the wild clouds.
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe