Re: Proposed ghc-pkg and cabal feature - right list?
I'm replying to Simon M. and myself, as I should have sent my first reply to the ghc users list, I guess.

On 14.03.2010, at 10:54, Axel Simon wrote: Hi Dan, I reply to libraries, I think that's the right list for Cabal.

On 13.03.2010, at 21:39, Dan Knapp wrote: There doesn't seem to be a mailing list for Cabal itself, so I'm posting here. I came up with an idea for a small feature that I believe would make a useful addition to ghc-pkg and Cabal. I'm willing to implement it myself, but I have had some previous experiences with other projects where I did some work and then the maintainers said sorry, not interested, so I want to gauge interest before I start. Who should I talk to? The feature itself is this: arbitrary key/value pairs in Cabal package files and the installed-package database. The use case is for an application supporting plugins to discover installed plugins compatible with it, interrogating these fields through the GHC API.

For Gtk2Hs I would like to have a similar feature. Gtk2Hs is a wrapper for Gtk+. It evolves on its own (for which the package has a version number) but it may wrap different versions of Gtk+. I think people using Gtk2Hs need to be able to conditionally compile certain code fragments, depending on which Gtk+ version Gtk2Hs wraps. However, I had something simpler in mind than providing arbitrary key/value pairs: I would like to export certain Cabal flags into the package, which could be as easy as specifying:

  Flag gtk_2_2
    Description: Build objects for Gtk+ version 2.2.
    Exported: True

  Flag gtk_2_4
    Description: Build objects for Gtk+ version 2.4.
    Exported: True

  Flag gtk_2_6
    Description: Build objects for Gtk+ version 2.6.
    Exported: True

where 'Exported' would mean that this flag should be added to the package database. A package would then be gtk-0.10.4{gtk_2_2,gtk_2_4} if the first two flags were set by Cabal. A package could then depend on gtk-0.10.4 in which certain flags are set.
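As a purely hypothetical sketch of what exported flags could enable for users, assuming Cabal both recorded the flags in the package database and passed them to CPP: user code could then select between Gtk+ APIs at compile time. The module, function names, and the `gtk_2_4` macro here are the illustrative flag names from above, not anything that exists today.

```haskell
{-# LANGUAGE CPP #-}
-- Hypothetical: assumes this module is compiled with -Dgtk_2_4
-- because the exported flag 'gtk_2_4' was set when Gtk2Hs was built.
module ChooseFileDialog (chooseFile) where

#ifdef gtk_2_4
-- Gtk+ >= 2.4: the newer GtkFileChooser API would be available.
chooseFile :: IO FilePath
chooseFile = putStrLn "using GtkFileChooser" >> return "/tmp/example"
#else
-- Older Gtk+: fall back to the GtkFileSelection API.
chooseFile :: IO FilePath
chooseFile = putStrLn "using GtkFileSelection" >> return "/tmp/example"
#endif
```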
Moreover, I would then want cabal to compile a user's package with -Dgtk_2_2 -Dgtk_2_4 so the user can use CPP to conditionally compile code. Your suggestion of using arbitrary key/value pairs is more general and needs more thought. You would have to extend Cabal quite a bit, whereas my proposal is more lightweight in that it can build on top of Cabal's Boolean flags. May I ask:
- could you express your package properties using Boolean flags (which are set by Cabal automatically)?
- if not, could you not express your plug-in concept using several packages?
Cheers, Axel.

For example, my content-management system FruitTart could enumerate the list of installed packages, looking for packages which export a field fruit-tart-plugin-interface-version with a numeric value matching the interface version it's expecting.

I'm not quite sure I understand the use case here. Are you saying you want to write something within Cabal? Or do you want to use the Cabal API to find out if a certain package is available? If you're talking about dynamic plug-ins then I assume it must be the latter. Besides the technical difficulty of loading a GHC package dynamically (which I don't know anything about), what prevents you from looking for a package that contains just the plug-in?

On 15.03.2010, at 16:38, Simon Marlow wrote: My first thought was hmm, there must be another way to do that, but I can't think of one, or at least a good one. Perhaps having arbitrary key-value pairs in the package database would be a good thing. It would help us to avoid breaking things when we need to change the format, for one thing. We could start using key/values for new fields rather than adding them to InstalledPackageInfo. However, then we have a strange situation where some fields get distinguished status in InstalledPackageInfo. Of course, for some of those fields we have richer types (e.g. License), so it makes sense.
So for me, I can't see any serious objections to doing this, but I'd also ask on the cabal-de...@haskell.org list (in particular we should hear what Duncan Coutts thinks).

Before implementing anything like general key/value pairs, I would like to see the exact usage of these fields. So Dan wants to query these dynamically using an API. I'm much more interested in having CPP macros defined so I can compile code conditionally. For this purpose, the key/value pairs are not necessarily suitable, since Dan might want to define a pair that does not create a valid -Dkey=value instruction for CPP. Cheers, Axel.

___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: Strange GHC/STM behaviour
On 15/03/2010 22:03, Simon Marlow wrote: On 15/03/10 16:02, Michael Lesniak wrote: Hello Simon, with 6.12.1.20100313 the behaviour is worse: even when using $! in the appropriate lines (see [2] in my original message) the program hangs quite often. Hence, 6.12.1 works better in this (special?) case. Ok, I'll look into it, thanks for the report.

I reproduced the deadlock, and it looks like a new one: a lock order reversal between Schedule.c:checkBlackHoles() and RtsAPI.c:rts_unlock(). It turns out I've already fixed it in the HEAD as a side effect of some other improvements, so I'm going to try to bring those into the 6.12.2 branch. Cheers, Simon
Re: Strange GHC/STM behaviour
Hello Simon, I reproduced the deadlock, and it looks like a new one: a lock order reversal between Schedule.c:checkBlackHoles() and RtsAPI.c:rts_unlock(). It turns out I've already fixed it in the HEAD as a side effect of some other improvements, so I'm going to try to bring those into the 6.12.2 branch. Great! Thanks! Cheers, Michael -- Dipl.-Inf. Michael C. Lesniak University of Kassel Programming Languages / Methodologies Research Group Department of Computer Science and Electrical Engineering Wilhelmshöher Allee 73 34121 Kassel Phone: +49-(0)561-804-6269
Re: Feedback request: priority queues in containers
On March 16, 2010 09:29:06 Louis Wasserman wrote: I'd like to request some more feedback on the proposed http://hackage.haskell.org/trac/ghc/ticket/3909 implementation for priority queues in containers. Mostly, I feel like adding a new module to containers should be contentious, and there hasn't been as much griping or contention as I expected. The silence is feeling kind of eerie!

Not sure if this is an appropriate question at all as I haven't looked at the code, but would it be possible to put any primary functionality into a class? I'm thinking something along the lines of how the vector code works. This allows you to use all the higher-order functions (i.e., those implemented using the primary functions) on a different underlying implementation. I've found this particularly useful in wrapping Perl data types. For the Perl array, all I had to do was write a class instance for the vector module, and I have all these higher-order functions I could use from existing code. It would be very nice to have had something similar for the hash tables. Even to just provide a native Haskell immutable look into the data so Haskell code can extract the components it needs with standard functions. Cheers! -Tyson

PS: I'm still working on the wrapping, so I might change my mind as to how useful this really is, but thought I should throw it out there.
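For what it's worth, the kind of unifying class Tyson describes might look something like the following sketch, in the spirit of the vector package's Vector class. All names here (`PriorityQueue`, its methods) are illustrative, not part of the actual proposal:

```haskell
-- A minimal unifying class; any implementation providing these three
-- primitives gets the derived higher-order functions for free.
class PriorityQueue q where
  empty   :: Ord a => q a
  insert  :: Ord a => a -> q a -> q a
  minView :: Ord a => q a -> Maybe (a, q a)

-- Example of a derived operation that works for any instance:
toAscList :: (PriorityQueue q, Ord a) => q a -> [a]
toAscList q = case minView q of
  Nothing      -> []
  Just (x, q') -> x : toAscList q'
```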
Re: Feedback request: priority queues in containers
I'm not willing to do this sort of typeclass wrapper thing, primarily because nothing else in containers does -- even though we might have a Mapping type class that handles both IntMap and Map, we don't. I'm inclined to let that design choice stand, as far as containers is concerned. It would make perfect sense to write a new package with such a type class, offering instances for the containers priority queue implementations, but I prefer to stick with the style that containers already seems to use -- that is, exporting separate modules without a unifying type class, but with nearly-identical method signatures. Louis Wasserman wasserman.lo...@gmail.com http://profiles.google.com/wasserman.louis

On Tue, Mar 16, 2010 at 11:10 AM, Tyson Whitehead twhiteh...@gmail.com wrote: On March 16, 2010 09:29:06 Louis Wasserman wrote: I'd like to request some more feedback on the proposed http://hackage.haskell.org/trac/ghc/ticket/3909 implementation for priority queues in containers. Mostly, I feel like adding a new module to containers should be contentious, and there hasn't been as much griping or contention as I expected. The silence is feeling kind of eerie! Not sure if this is an appropriate question at all as I haven't looked at the code, but would it be possible to put any primary functionality into a class? I'm thinking something along the lines of how the vector code works. This allows you to use all the higher-order functions (i.e., those implemented using the primary functions) on a different underlying implementation. I've found this particularly useful in wrapping Perl data types. For the Perl array, all I had to do was write a class instance for the vector module, and I have all these higher-order functions I could use from existing code. It would be very nice to have had something similar for the hash tables. Even to just provide a native Haskell immutable look into the data so Haskell code can extract the components it needs with standard functions.
Cheers! -Tyson PS: I'm still working on the wrapping, so I might change my mind as to how useful this really is, but thought I should throw it out there.
Re: Feedback request: priority queues in containers
Hi, I second that choice. There have been some attempts at using type classes, notably the Edison library, but it never got widespread. So I would follow the current containers design. Milan

I'm not willing to do this sort of typeclass wrapper thing, primarily because nothing else in containers does -- even though we might have a Mapping type class that handles both IntMap and Map, we don't. I'm inclined to let that design choice stand, as far as containers is concerned. It would make perfect sense to write a new package with such a type class, offering instances for the containers priority queue implementations, but I prefer to stick with the style that containers already seems to use -- that is, exporting separate modules without a unifying type class, but with nearly-identical method signatures. Louis Wasserman wasserman.lo...@gmail.com http://profiles.google.com/wasserman.louis

On Tue, Mar 16, 2010 at 11:10 AM, Tyson Whitehead twhiteh...@gmail.com wrote: On March 16, 2010 09:29:06 Louis Wasserman wrote: I'd like to request some more feedback on the proposed http://hackage.haskell.org/trac/ghc/ticket/3909 implementation for priority queues in containers. Mostly, I feel like adding a new module to containers should be contentious, and there hasn't been as much griping or contention as I expected. The silence is feeling kind of eerie! Not sure if this is an appropriate question at all as I haven't looked at the code, but would it be possible to put any primary functionality into a class? I'm thinking something along the lines of how the vector code works. This allows you to use all the higher-order functions (i.e., those implemented using the primary functions) on a different underlying implementation. I've found this particularly useful in wrapping Perl data types. For the Perl array, all I had to do was write a class instance for the vector module, and I have all these higher-order functions I could use from existing code.
It would be very nice to have had something similar for the hash tables. Even to just provide a native Haskell immutable look into the data so Haskell code can extract the components it needs with standard functions. Cheers! -Tyson PS: I'm still working on the wrapping, so I might change my mind as to how useful this really is, but thought I should throw it out there.
Re: Feedback request: priority queues in containers
Specifically, we use a binomial heap, which offers:
* O(log n) worst-case union
* O(log n) worst-case extract (this in particular was a key objective of ours)
* amortized O(1), worst-case O(log n) insertion
(In a persistent context, the amortized performance bound does not necessarily hold.)

Why not use Brodal and Okasaki's Optimal Purely Functional Priority Queues? They offer worst case:
* O(1) union, findMin, and insert
* O(lg n) deleteMin
http://www.eecs.usma.edu/webs/people/okasaki/jfp96/index.html ftp://ftp.daimi.au.dk/pub/BRICS/Reports/RS/96/37/BRICS-RS-96-37.pdf
Re: Feedback request: priority queues in containers
Hi, I think this is my first post to this list although I've been reading it for a long time. So, please, be patient with me and my post.

On 03/16/2010 02:29 PM, Louis Wasserman wrote: * We offer Functor, Foldable, and Traversable instances that do not respect key ordering. All are linear time, but Functor and Traversable in particular assume the function is monotonic. The Foldable instance is a good way to access the elements of the priority queue in an unordered fashion. (We also export mapMonotonic and traverseMonotonic, and encourage the use of those functions instead of the Functor or Traversable instances.)

So, it is possible to break the invariants of your priority queue by using fmap with a non-monotonic function, right? I see that it might be useful to have such instances, sometimes. As it is not possible to hide instances once they are defined, I'd propose not to offer those instances by default. Instead you could provide implementations of all the instance functions necessary to define these instances yourself. Or one could have a newtype wrapper around the priority queue for which instances of Functor, Foldable, and Traversable are defined. This would allow activating the nice instances on demand but staying safe by default. Best regards, Jean-Marie
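To make the hazard concrete, here is a toy queue (a sorted list standing in for the real binomial heap, purely for illustration; none of these names are from the proposed API) showing how a non-monotonic map silently breaks the ordering invariant:

```haskell
import Data.List (insert)

newtype PQ a = PQ [a]           -- invariant: the list is ascending

push :: Ord a => a -> PQ a -> PQ a
push x (PQ xs) = PQ (insert x xs)

top :: PQ a -> Maybe a          -- supposed to return the minimum
top (PQ xs) = case xs of { [] -> Nothing; (x:_) -> Just x }

-- An fmap-like map that does NOT re-establish the invariant:
unsafeMap :: (a -> b) -> PQ a -> PQ b
unsafeMap f (PQ xs) = PQ (map f xs)

-- unsafeMap negate (push 1 (push 2 (PQ []))) yields PQ [-1,-2]:
-- top now reports -1 even though the true minimum is -2.
-- A monotonic function such as (+10) keeps the list sorted, so it is safe.
```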
Re: Feedback request: priority queues in containers
Louis Wasserman wrote: I'm not willing to do this sort of typeclass wrapper thing, primarily because nothing else in containers does -- even though we might have a Mapping type class that handles both IntMap and Map, we don't. I'm inclined to let that design choice stand, as far as containers is concerned. It would make perfect sense to write a new package with such a type class and offering instances for the containers priority queue implementations, but I prefer to stick with the style that containers already seems to use -- that is, exporting separate modules without a unifying type class, but with nearly-identical method signatures.

Just an aside (and shameless plug ;-): Since the signatures overlap so much, it is in fact easy to wrap these modules into instances for many possible different type classes that one might consider using for containers --- I have a tool that mechanises this instance generation, available at: http://sqrl.mcmaster.ca/~kahl/Haskell/ModuleTools/ More about this in the forthcoming TFP 2009 proceedings paper:

@InCollection{Kahl-2009_TFP,
  author   = {Wolfram Kahl},
  title    = {Haskell Module Tools for Liberating Type Class Design},
  crossref = {TFP2009},
  pages    = {129--144},
  chapter  = {9},
  abstract = {Design of Haskell type class hierarchies for complex purposes, including for standard containers, is a non-trivial exercise, and evolution of such designs is additionally hampered by the large overhead of connecting to existing implementations.
We systematically discuss this overhead, and propose a tool solution, implemented using the GHC API, to automate its generation.}
}

@Book{TFP2009,
  title     = {Trends in Functional Programming, {TFP 2009}},
  booktitle = {Trends in Functional Programming, {TFP 2009}},
  year      = 2010,
  editor    = {Zolt\'an Horv{\'a}th and Vikt{\'o}ria Zs{\'o}k and Peter Achten and Pieter Koopman},
  address   = {UK},
  publisher = {Intellect},
  note      = {(In press)}
}

Wolfram
Re: Feedback request: priority queues in containers
So, it is possible to break the invariants of your priority queue by using fmap with a non-monotonic function, right? I see that it might be useful to have such instances, sometimes. As it is not possible to hide instances once they are defined, I'd propose not to offer those instances by default. Instead you could provide implementations of all the instance functions necessary to define these instances yourself. Or one could have a newtype wrapper around the priority queue for which instances of Functor, Foldable, and Traversable are defined. This would allow activating the nice instances on demand but staying safe by default.

Hm. I suggest:
- Functor and Traversable should be modified as you suggest; that is, we continue exporting mapMonotonic and traverseMonotonic, but don't export Functor or Traversable instances.
- I'm still going to insist that we retain Foldable. The most important reason is that we don't lose any invariants as a result of a fold, and the second reason is that reexporting functions named foldr and foldl would be awkward.
Making this change now. Louis Wasserman wasserman.lo...@gmail.com http://profiles.google.com/wasserman.louis
Re: Feedback request: priority queues in containers
Why not use Brodal and Okasaki's Optimal Purely Functional Priority Queues? They offer worst case: * O(1) union, findMin, and insert * O(lg n) deleteMin

The primary reason that we don't use this implementation is, quoting from the paper you yourself linked to: "Our data structure is reasonably efficient in practice; however, there are several competing data structures that, although not asymptotically optimal, are somewhat faster than ours in practice. Hence, our work is primarily of theoretical interest." The current implementation is considerably faster than Okasaki's implementation in practice, based on our benchmarks. Furthermore, the asymptotics are really pretty good, and the constant factors appear to be relatively low. I wrote a pretty efficient skew binomial heap implementation -- the first step of Okasaki's approach -- and found it was already slower than the optimized binomial heap. I'm not sure whether or not I uploaded that benchmark, though. I'll do that at some point today, just to keep everyone happy. Louis Wasserman wasserman.lo...@gmail.com http://profiles.google.com/wasserman.louis
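For readers wanting to reproduce this kind of comparison: a heapsort benchmark needs nothing more than repeated insert followed by repeated minimum extraction. The following self-contained pairing-heap sketch (purely illustrative, not the implementation under discussion) shows the shape of such a test:

```haskell
-- A minimal pairing heap; illustrative only, not the containers patch.
data Heap a = Empty | Node a [Heap a]

merge :: Ord a => Heap a -> Heap a -> Heap a
merge Empty h = h
merge h Empty = h
merge h1@(Node x hs1) h2@(Node y hs2)
  | x <= y    = Node x (h2 : hs1)   -- smaller root becomes the new root
  | otherwise = Node y (h1 : hs2)

insert :: Ord a => a -> Heap a -> Heap a
insert x = merge (Node x [])

mergePairs :: Ord a => [Heap a] -> Heap a
mergePairs []         = Empty
mergePairs [h]        = h
mergePairs (h1:h2:hs) = merge (merge h1 h2) (mergePairs hs)

-- Heapsort: build the heap, then extract minima until empty.
heapsort :: Ord a => [a] -> [a]
heapsort = go . foldr insert Empty
  where
    go Empty       = []
    go (Node x hs) = x : go (mergePairs hs)

-- heapsort [3,1,2] == [1,2,3]
```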
Re: Feedback request: priority queues in containers
I'm not sure about some of that. Imperative queues sometimes offer O(1) decreaseKey and O(lg n) delete by storing pointers to elements in a priority queue. The usual purely functional PQs can only offer O(n) decreaseKey and O(n) delete. Purely functional PSQs can offer O(lg n) decreaseKey and O(lg n) delete. Minimum spanning tree is a common application for PQs that makes good use of decreaseKey.

That example did occur to me, but I feel okay about following the examples of Java and the C++ STL, which offer priority queue implementations, but those implementations don't support decreaseKey. You might be able to convince me that we should offer PSQueues in addition to PQueues, but I don't think they should be merged, for the following reason. Using the PSQueue package, which is essentially a copy of Ralf Hinze's original implementation, yields the following benchmark for heapsorting 25000 Ints:

  Binomial: 0.000  3.240  2.180   4.000   8.001
  PSQ:      8.001 13.241  2.882  12.001  24.002

I'm really not okay with that kind of performance loss for added functionality that not everyone needs. Louis Wasserman wasserman.lo...@gmail.com http://profiles.google.com/wasserman.louis

On Tue, Mar 16, 2010 at 1:58 PM, Louis Wasserman wasserman.lo...@gmail.com wrote: Why not use Brodal and Okasaki's Optimal Purely Functional Priority Queues? They offer worst case: * O(1) union, findMin, and insert * O(lg n) deleteMin The primary reason that we don't use this implementation is, quoting from the paper you yourself linked to: "Our data structure is reasonably efficient in practice; however, there are several competing data structures that, although not asymptotically optimal, are somewhat faster than ours in practice. Hence, our work is primarily of theoretical interest." The current implementation is considerably faster than Okasaki's implementation in practice, based on our benchmarks. Furthermore, the asymptotics are really pretty good, and the constant factors appear to be relatively low.
I wrote a pretty efficient skew binomial heap implementation -- the first step of Okasaki's approach -- and found it was already slower than the optimized binomial heap. I'm not sure whether or not I uploaded that benchmark, though. I'll do that at some point today, just to keep everyone happy. Louis Wasserman wasserman.lo...@gmail.com http://profiles.google.com/wasserman.louis
Re: Feedback request: priority queues in containers
I wrote a pretty efficient skew binomial heap implementation -- the first step of Okasaki's approach -- and found it was already slower than the optimized binomial heap. I'm not sure whether or not I uploaded that benchmark, though. I'll do that at some point today, just to keep everyone happy.

The skew binomial heaps you implemented should only be asymptotically faster than the usual binomial heaps in one special case: comparing a binomial heap in a state that would cause an \Omega(lg n) time cascade on insert to the worst-case O(1) insert of skew binomial heaps. I think it would also be worth comparing binomial heap merge against Brodal-Okasaki heap merge. Finally, if speed is the ultimate goal, I suspect the clever nested type approach to guaranteeing binomial tree shape slows things down a bit. For reference, the type in the latest patch is:

  data BinomForest rk a = Nil
                        | Skip !(BinomForest (Succ rk) a)
                        | Cons {-# UNPACK #-} !(BinomTree rk a) !(BinomForest (Succ rk) a)

  data BinomTree rk a = BinomTree a (rk a)

  data Succ rk a = Succ {-# UNPACK #-} !(BinomTree rk a) (rk a)

  data Zero a = Zero

I suspect the following might be faster:

  data BinomForest2 a = Empty | NonEmpty a [BinomTree2 a]

  data BinomTree2 a = BinomTree2 a [BinomTree2 a]

This eliminates the Skip constructor, which contributes only to the nested type guarantee.
Re: static_wrapper imports in the FFI
Hello, after I pulled and made the darcs patch again, now it is 116K, which is still too big to send to the list. If I gzip it, then it becomes 44K, which is almost the right size. For reference, if I use darcs diff to compute the changes, the resulting patch is 2.8K. Let me know how to proceed. -Iavor On Mon, Mar 15, 2010 at 5:57 PM, Ian Lynagh ig...@earth.li wrote: On Mon, Mar 15, 2010 at 09:48:02AM -0700, Iavor Diatchki wrote: I have a darcs patch implementing the feature but I could not send it to the list because for some reason the patch is 150K long (!!! my changes are only ~50 lines) and the list has a 40K limit. I've just tagged the repo, so if you pull then you should get a much smaller patch file generated. Thanks Ian
RE: static_wrapper imports in the FFI
Try darcs optimize --reorder in your repo, or pull your patch into a freshly got repo. (Annoying internal detail of darcs - it probably won't use the tag unless it's clean, which requires that any patches not included in the tag should be after it in the repo order) -Original Message- From: glasgow-haskell-users-boun...@haskell.org [mailto:glasgow-haskell-users-boun...@haskell.org] On Behalf Of Iavor Diatchki Sent: 16 March 2010 20:41 To: Iavor Diatchki; GHC Users Mailing List Subject: Re: static_wrapper imports in the FFI Hello, after I pulled and made the darcs patch again, now it is 116K, which is still too big to send to the list. If I gzip it, then it becomes 44K, which is almost the right size. For reference, if I use darcs diff to compute the changes, the resulting patch is 2.8K. Let me know how to proceed. -Iavor On Mon, Mar 15, 2010 at 5:57 PM, Ian Lynagh ig...@earth.li wrote: On Mon, Mar 15, 2010 at 09:48:02AM -0700, Iavor Diatchki wrote: I have a darcs patch implementing the feature but I could not send it to the list because for some reason the patch is 150K long (!!! my changes are only ~50 lines) and the list has a 40K limit. I've just tagged the repo, so if you pull then you should get a much smaller patch file generated. Thanks Ian
Re: static_wrapper imports in the FFI
On 15/03/10 23:54, Iavor Diatchki wrote: Hi, I think that one may view a static_wrapper import as a pair of an export like the one you wrote and an import of the address of the exported function (this is useful because usually it is convenient to install the handler on the Haskell side). Perhaps that's a better way to explain its meaning, rather than the long-winded example that I wrote. Thanks! In practice, the foreign export generates slightly less direct code, because the C code follows 2 indirections: first we enter the closure for cbind, and then _that_ code enters the closure for the actual Haskell function. Perhaps with sufficient optimization on the C side this can be avoided, although I don't think that it happens at the moment. In terms of notation, I like the directness of the static_wrapper declaration (although not so much the static_wrapper name!) because it avoids duplication, thus reducing clutter and potential errors.

It seems hard to justify adding this to GHC, since it's really just syntactic convenience for a particular case. Traditionally syntactic sugar for FFI declarations has been implemented in the preprocessing tools: c2hs and hsc2hs, whereas the FFI extension itself is purposefully minimal. So at the moment we're slightly inclined not to put it in - but feel free to make a compelling case. Note that as an extension in its own right it would need its own flag, documentation etc. Cheers, Simon -Iavor

On Mon, Mar 15, 2010 at 3:56 PM, Tyson Whitehead twhiteh...@gmail.com wrote: On March 15, 2010 12:48:02 Iavor Diatchki wrote: I have implemented a small extension to the FFI which allows for static_wrapper imports. These are a variation on wrapper imports that do not use run-time code generation. This is important in security-sensitive contexts because it avoids executable data.
While static_wrapper imports are less general than wrapper imports, they can be used to install Haskell handlers in many C libraries where callbacks have an extra user-data parameter (e.g., GTK signal handlers). Hi Iavor, Would not the following also do what you want:

  foreign export ccall "haskellCIntCInt"
    cbind :: CInt -> StablePtr (CInt -> IO CInt) -> IO CInt

  cbind :: a -> StablePtr (a -> IO b) -> IO b
  cbind x f = deRefStablePtr f >>= (\f_ -> f_ x)

On the C side you would then have something like

  register_callback(haskellCIntCInt, <wrapped haskell closure>)

where <wrapped haskell closure> would be a stable pointer of type StablePtr (CInt -> IO CInt) generated on the Haskell side via newStablePtr <haskell closure> and passed to the C code. Cheers! -Tyson
Haskell Platform can't use OpenGL on OpenBSD.
Hello. I have successfully compiled ghc-6.10.4 on OpenBSD on i386. I have downloaded the Haskell Platform tarball, but ./configure fails: [ ... ] checking GL/gl.h usability... yes checking GL/gl.h presence... yes checking for GL/gl.h... yes checking for library containing glEnd... no configure: error: The OpenGL C library is required

Looking into GL/gl.h, I see that the function glEnd is there: GLAPI void GLAPIENTRY glEnd( void ); My system's GL/gl.h seems to be v7.4: * Mesa 3-D graphics library * Version: 7.4

My config.log states the following, regarding OpenGL: [ ...lots of passed tests... ] configure:4247: checking GL/gl.h usability configure:4264: gcc -c -g -O2 -I/usr/X11R6/include -I/usr/local/include conftest.c >&5 configure:4271: $? = 0 configure:4285: result: yes configure:4289: checking GL/gl.h presence configure:4304: gcc -E -I/usr/X11R6/include -I/usr/local/include conftest.c configure:4311: $? = 0 configure:4325: result: yes configure:4353: checking for GL/gl.h configure:4360: result: yes configure:4373: checking for library containing glEnd configure:4414: gcc -o conftest -g -O2 -I/usr/X11R6/include -I/usr/local/include conftest.c -lz >&5 /tmp//ccanY01U.o(.text+0x12): In function `main': /home/thanos/third/haskell/platform-6.10.4/haskell-platform-2009.2.0.2/conftest.c:30: undefined reference to `glEnd' collect2: ld returned 1 exit status configure:4421: $? = 1 configure: failed program was: | /* confdefs.h. */ | #define PACKAGE_NAME haskell-platform | #define PACKAGE_TARNAME haskell-platform | #define PACKAGE_VERSION 2009.2.0.2 | #define PACKAGE_STRING haskell-platform 2009.2.0.2 | #define PACKAGE_BUGREPORT | #define STDC_HEADERS 1 | #define HAVE_SYS_TYPES_H 1 | #define HAVE_SYS_STAT_H 1 | #define HAVE_STDLIB_H 1 | #define HAVE_STRING_H 1 | #define HAVE_MEMORY_H 1 | #define HAVE_STRINGS_H 1 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | #define HAVE_UNISTD_H 1 | #define HAVE_LIBZ 1 | /* end confdefs.h.
*/ | | /* Override any GCC internal prototype to avoid an error. | Use char because int might match the return type of a GCC | builtin and then its argument prototype would still apply. */ | #ifdef __cplusplus | extern "C" | #endif | char glEnd (); | int | main () | { | return glEnd (); | ; | return 0; | }

Now, I may not be a C programmer, but is this even supposed to work? Testing this program is weird: conftest.c:5: error: conflicting types for `glEnd' /usr/X11R6/include/GL/gl.h:957: error: previous declaration of `glEnd' Which was expected, since glEnd is defined to return void in GL/gl.h, so I tried to delete that function declaration from this .c file, and I got: conftest.c: In function `main': conftest.c:8: error: void value not ignored as it ought to be Ok, this was expected, so I tried to delete the return and I got this final error: /tmp//ccq1Rmj0.o(.text+0x19): In function `main': : undefined reference to `glEnd' collect2: ld returned 1 exit status

Please let me know if you have any ideas on how to proceed with the installation of Haskell Platform. Thanks in advance. -- Thanos Tsouanas http://mpla.math.uoa.gr/~thanos/
Re: static_wrapper imports in the FFI
On 03/16/10 19:38, Iavor Diatchki wrote: The fact that, at present, all GHC-compiled programs require an executable data segment is a fairly significant problem. ... I thought of implementing the feature as a preprocessor but it did not seem very easy to do because we need to parse Haskell types. Something doesn't add up for me here. If *all* GHC-compiled programs require an executable data segment, then how could a preprocessor possibly remedy this problem? (and likewise, how does the static_wrapper patch do so...) -Isaac
Re: static_wrapper imports in the FFI
Hi, Optionally disabling executable heap blocks would be a separate patch. As far as I know, the only reason that the heap is executable is to support the adjustor thunks used to implement wrapper imports. The static_wrapper patch provides a way to install Haskell callbacks in many C libraries without the need for adjustor thunks. -Iavor On Tue, Mar 16, 2010 at 4:47 PM, Isaac Dupree m...@isaac.cedarswampstudios.org wrote: On 03/16/10 19:38, Iavor Diatchki wrote: The fact that, at present, all GHC-compiled programs require an executable data segment is a fairly significant problem. ... I thought of implementing the feature as a preprocessor but it did not seem very easy to do because we need to parse Haskell types. Something doesn't add up for me here. If *all* GHC-compiled programs require an executable data segment, then how could a preprocessor possibly remedy this problem? (and likewise, how does the static_wrapper patch do so...) -Isaac
Re: Feedback request: priority queues in containers
I suspect the following might be faster:

data BinomForest2 a = Empty | NonEmpty a [BinomTree2 a]
data BinomTree2 a = BinomTree2 a [BinomTree2 a]

This eliminates the Skip constructor, which contributes only to the nested type guarantee.

Ehehehe. This is something I'm pretty proud of inventing, because your implementation is actually significantly *slower*. The reason is essentially that I can do a lot more constructor unpacking when I have access to that much type information about the structure of the tree at each level. You didn't quite implement it correctly, because among other things, we *need* to track tree rank, at least for the roots, if it's not encoded in the type. Here are your data types, with everything unpacked as much as possible:

-- equivalent to [(Int, BinHeap a)], but unrolled;
-- we only need to track ranks in the roots
data BinomQueue a
  = Nil
  | Cons {-# UNPACK #-} !Int {-# UNPACK #-} !(BinHeap a) (BinomQueue a)

-- equivalent to [BinHeap a], but unrolled
data BinomSubtree a
  = Nil'
  | Cons' {-# UNPACK #-} !(BinHeap a) (BinomSubtree a)

data BinHeap a = Bin a (BinomSubtree a)

I tested, and this implementation actually performs better if the spine is maintained lazily, so we'll test that version. The full implementation (that is, the basic methods: insert, extract, fromList, toAscList) of your approach was attached to the ticket here: http://hackage.haskell.org/trac/ghc/attachment/ticket/3909/QuickBinom.hs. Feel free to ghc-core it, or tinker with the implementation, but I've done a fair bit of tinkering, and my results haven't changed. Running a benchpress test on heapsorting 25000 Ints, calling performGC after every run of each method. SBinomial stands for sparse binomial heap, which is your approach.
With -O2, +RTS -H128m:
              min     mean +/- sd     median     max
Binomial:    0.000   3.440    2.204    4.000    8.001
SBinomial:  24.001  28.562    5.600   28.001   48.003
(ratio: 8.3x slower average)

With -O2:
              min     mean +/- sd     median     max
Binomial:    4.000   8.801    2.606    8.001   24.002
SBinomial:  32.002  41.763    8.007   42.003   64.004
(ratio: 4.7x slower average)

Without optimization, with +RTS -H128m:
              min     mean +/- sd     median     max
Binomial:    4.001  10.041    3.140    8.001   20.002
SBinomial:  64.004  76.805    8.790   76.005  100.006
(ratio: 7.6x slower average)

Without optimization:
              min     mean +/- sd     median     max
Binomial:   12.000  19.761    5.328   16.001   40.002
SBinomial:  72.004  90.126   11.906   88.006  120.008
(ratio: 4.6x slower average)

These times are measured in milliseconds.

Conclusion: my implementation is *massively faster*, by a factor of at least 4.5x. (I spent the last half hour trying to optimize SBinomial -- it really is that big a difference, and it's not going to change.) Here's why I think that's the case: even though we might have the Skip constructor, how many Skip constructors are there, total? On average, half the forest is made up of Skips, so there are 1/2 log n Skip values there. But the thing is that the sparse binomial tree representation can't do anywhere near as much unpacking; in particular, the set of children of each node is not a single-constructor type. That's an O(n) increase in allocation, all for an O(log n) shortening of the spine. That's why it's a bad plan. Most of the work is being done in allocations of nodes in the *trees,* rather than along the spine among the roots. In this area, the crazy, type-system-exploiting approach actually does much less allocation, because it can do a lot more constructor unpacking.
Let's test this hypothesis. My type-magical implementation, -O2, +RTS -sstderr (no -H128m this time):

  3,050,272,052 bytes allocated in the heap
    240,340,552 bytes copied during GC
      1,087,992 bytes maximum residency (201 sample(s))
         53,136 bytes maximum slop
              3 MB total memory in use (0 MB lost due to fragmentation)

  Generation 0:  5703 collections,  0 parallel,  0.48s,  0.53s elapsed
  Generation 1:   201 collections,  0 parallel,  0.22s,  0.24s elapsed

  INIT  time    0.00s  ( 0.00s elapsed)
  MUT   time    4.52s  ( 4.74s elapsed)
  GC    time    0.70s  ( 0.77s elapsed)
  EXIT  time    0.00s  ( 0.00s elapsed)
  Total time    5.23s  ( 5.51s elapsed)

  %GC time      13.5%  (13.9% elapsed)
  Alloc rate    674,200,547 bytes per MUT second
  Productivity  86.5% of total user, 82.2% of total elapsed

The sparse binomial forest representation, same options:

  5,612,965,672 bytes allocated in the heap
    563,671,500 bytes copied during GC
      1,967,576 bytes maximum residency (202 sample(s))
        107,212 bytes maximum slop
              5 MB total memory in use (1 MB lost due to fragmentation)

  Generation 0: 10602 collections,  0 parallel,  1.60s,  1.67s elapsed
  Generation 1:   202 collections,  0 parallel,  0.28s,  0.30s elapsed

  INIT  time    0.00s  ( 0.00s elapsed)
  MUT   time
Re: Haskell Platform can't use OpenGL on OpenBSD.
Solved! In case anyone has similar problems, here's what I did. Somehow my LDFLAGS were unset. With LDFLAGS='-L/usr/X11R6/lib -L/usr/local/lib' it passed that test, but still failed in a similar fashion at the next one. In short, I had to install the GLUT library, which is not part of the system (see graphics/freeglut and graphics/glut). Also, here are the env vars I had exported at the time of my ./configure'ing:

CFLAGS='-I/usr/local/include -I/usr/X11R6/include -I/usr/include'
CPPFLAGS='-I/usr/X11R6/include -I/usr/local/include -I/usr/include'
LDFLAGS='-L/usr/X11R6/lib -L/usr/local/lib -L/usr/lib'
LD_LIBRARY_PATH=/usr/local/include:/usr/X11R6/include
LIBS=-lm
AUTOCONF_VERSION=2.62
AUTOMAKE_VERSION=1.9

Hope this helps. Cheers.

On Wed, Mar 17, 2010 at 1:29 AM, Thanos Tsouanas tha...@sians.org wrote:

Hello. I have successfully compiled ghc-6.10.4 on OpenBSD on i386. I have downloaded the Haskell Platform tarball, but ./configure fails:

[ ... ]
checking GL/gl.h usability... yes
checking GL/gl.h presence... yes
checking for GL/gl.h... yes
checking for library containing glEnd... no
configure: error: The OpenGL C library is required

Looking into GL/gl.h, I see that the function glEnd is there:

GLAPI void GLAPIENTRY glEnd( void );

My system's GL/gl.h seems to be v7.4:

 * Mesa 3-D graphics library
 * Version: 7.4

My config.log states the following, regarding OpenGL:

[ ...lots of passed tests... ]
configure:4247: checking GL/gl.h usability
configure:4264: gcc -c -g -O2 -I/usr/X11R6/include -I/usr/local/include conftest.c >&5
configure:4271: $? = 0
configure:4285: result: yes
configure:4289: checking GL/gl.h presence
configure:4304: gcc -E -I/usr/X11R6/include -I/usr/local/include conftest.c
configure:4311: $? = 0
configure:4325: result: yes
configure:4353: checking for GL/gl.h
configure:4360: result: yes
configure:4373: checking for library containing glEnd
configure:4414: gcc -o conftest -g -O2 -I/usr/X11R6/include -I/usr/local/include conftest.c -lz >&5
/tmp//ccanY01U.o(.text+0x12): In function `main':
/home/thanos/third/haskell/platform-6.10.4/haskell-platform-2009.2.0.2/conftest.c:30: undefined reference to `glEnd'
collect2: ld returned 1 exit status
configure:4421: $? = 1
configure: failed program was:
| /* confdefs.h. */
| #define PACKAGE_NAME haskell-platform
| #define PACKAGE_TARNAME haskell-platform
| #define PACKAGE_VERSION 2009.2.0.2
| #define PACKAGE_STRING haskell-platform 2009.2.0.2
| #define PACKAGE_BUGREPORT
| #define STDC_HEADERS 1
| #define HAVE_SYS_TYPES_H 1
| #define HAVE_SYS_STAT_H 1
| #define HAVE_STDLIB_H 1
| #define HAVE_STRING_H 1
| #define HAVE_MEMORY_H 1
| #define HAVE_STRINGS_H 1
| #define HAVE_INTTYPES_H 1
| #define HAVE_STDINT_H 1
| #define HAVE_UNISTD_H 1
| #define HAVE_LIBZ 1
| /* end confdefs.h. */
|
| /* Override any GCC internal prototype to avoid an error.
|    Use char because int might match the return type of a GCC
|    builtin and then its argument prototype would still apply. */
| #ifdef __cplusplus
| extern "C"
| #endif
| char glEnd ();
| int
| main ()
| {
| return glEnd ();
|   ;
|   return 0;
| }

Now, I may not be a C programmer, but is this even supposed to work? Testing this program is weird:

conftest.c:5: error: conflicting types for `glEnd'
/usr/X11R6/include/GL/gl.h:957: error: previous declaration of `glEnd'

That was expected, since glEnd is declared to return void in GL/gl.h, so I tried to delete that function declaration from this .c file, and I got:

conftest.c: In function `main':
conftest.c:8: error: void value not ignored as it ought to be

Ok, this was also expected, so I tried to delete the return, and I got this final error:

/tmp//ccq1Rmj0.o(.text+0x19): In function `main':
: undefined reference to `glEnd'
collect2: ld returned 1 exit status

Please let me know if you have any ideas on how to proceed with the installation of Haskell Platform. Thanks in advance.
-- Thanos Tsouanas http://mpla.math.uoa.gr/~thanos/