[Haskell-cafe] Wrappers, API's for Web Apps - Google Summer of Code
Hi, I am interested in participating in this year's Google Summer of Code. One of my proposals is to write and extend existing Haskell wrappers and APIs for web services. Some of the popular web services that I use are Google Maps, Flickr, digg, reddit, Facebook and Twitter. Quick Google searches reveal that great Haskell tools already exist for Twitter, while there is an ongoing project for a Facebook API by Jeremy Shaw. I couldn't find any Haskell projects for the other tools. I find this project interesting primarily because I use these tools and would like to have Haskell interfaces to them. Along with tools like Happstack, these interfaces would extend the power of Haskell for developing Web 2.0 apps and mashups. I would like to enquire about the community's opinions and interest in such a project. I would also appreciate suggestions pertaining to other web services. I have created a ticket at http://hackage.haskell.org/trac/summer-of-code/ticket/1578 Thanks, Hrushikesh Tilak ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Problem with prepose.lhs and ghc6.10.1
../haskell/prepose.lhs:707:0: Parse error in pattern, which is pointing at:

    normalize a :: M s a = M (mod a (modulus (undefined :: s)))

The code indeed used lexically scoped type variables -- which GHC at that time implemented differently. Incidentally, on the above line, M s a is the type annotation on the result of (normalize a) rather than on the argument 'a'. Therefore, putting parentheses like (a :: M s a) is wrong. Since the implementation of local type variables in GHC has changed over the years, it's best to get rid of them. Here is the same line in the Hugs version of the code:

    normalize :: (Modular s a, Integral a) => a -> M s a
    normalize a = r a
      where r :: (Modular s a, Integral a) => a -> s -> M s a
            r a s = M (mod a (modulus s))

Of course the signature for normalize can be omitted. Hugs did not support lexically scoped type variables then (and probably doesn't now). Today I would write this definition more simply:

    normalize :: (Modular s a, Integral a) => a -> M s a
    normalize a = r
      where r = M (mod a (modulus (tcar2 r)))
            tcar2 :: f s x -> s
            tcar2 = undefined

(the function tcar2 is used in a few other places in the Hugs code, for a similar purpose). ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
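[For comparison, a minimal sketch of the scoped-type-variables version as a modern GHC accepts it. The Modular class, the M type, and the Seven instance below are stand-ins reconstructed from the fragment above, not the paper's actual definitions; the point is only that `forall` brings `s` into scope for the (undefined :: s) annotation.]

```haskell
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE MultiParamTypeClasses #-}

-- Minimal stand-ins for the types in the paper's code (assumptions,
-- reconstructed only far enough to make the definition compile).
newtype M s a = M a deriving Show

class Modular s a where
  modulus :: s -> a

-- A toy modulus carrier: 'Seven' represents working mod 7.
data Seven = Seven
instance Modular Seven Int where
  modulus _ = 7

-- The explicit 'forall' brings 's' into scope in the body, so the
-- (undefined :: s) annotation refers to the signature's variable.
normalize :: forall s a. (Modular s a, Integral a) => a -> M s a
normalize a = M (mod a (modulus (undefined :: s)))

main :: IO ()
main = print (normalize 10 :: M Seven Int)  -- mod 10 7 = 3
```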
[Haskell-cafe] ANN: hledger 0.4 released
Dear all, I have released hledger 0.4 on hackage. There is also a new website at http://hledger.org , with screenshots (textual!), a demo (will it survive!?), and docs (not too many!) Release notes are at http://hledger.org/NEWS , and bravely pasted below. In case you forgot: hledger is a text-mode double-entry accounting tool. It reads a plain text journal file describing your transactions and generates precise activity and balance reports. I use it every day to track money and time, and as the basis for client invoices and tax returns. hledger is a partial clone, in Haskell, of John Wiegley's excellent ledger. I wrote it because I did not want to hack on C++ and because Haskell seemed a good fit. Please cabal update and cabal install hledger and give it a whirl. Install with the -f happs flag to enable the new happstack-based web interface. I am sm on the #ledger channel on freenode, and I welcome your feedback, especially if you notice a problem. Best - Simon

2009/04/03 hledger 0.4 released

Changes:
* new web command serves reports in a web browser (install with -f happs to build this)
* make the vty-based curses ui a cabal build option, which will be ignored on MS Windows
* drop the --options-anywhere flag, that is now the default
* patterns now use not: and desc: prefixes instead of ^ and ^^
* patterns are now case-insensitive, like ledger
* !include directives are now relative to the including file (Tim Docker)
* Y2009 default year directives are now supported, allowing m/d dates in ledger
* individual transactions now have a cleared status
* unbalanced entries now cause a proper warning
* balance report now passes all ledger compatibility tests
* balance report now shows subtotals by default, like ledger 3
* balance report shows the final zero total when -E is used
* balance report hides the final total when --no-total is used
* --depth affects print and register reports (aggregating with a reporting interval, filtering otherwise)
* register report sorts transactions by date
* register report shows zero-amount transactions when -E is used
* provide more convenient timelog querying when invoked as hours
* multi-day timelog sessions are split at midnight
* unterminated timelog sessions are now counted. Accurate time reports at last!
* the test command gives better --verbose output
* --version gives more detailed version numbers including patchlevel for dev builds
* new make targets include: ghci, haddocktest, doctest, unittest, view-api-docs
* a doctest-style framework for functional/shell tests has been added
* performance has decreased slightly:

                                 || hledger-0.3 | hledger-0.4 | ledger-0.3
  ===============================++=============+=============+===========
  -f sample.ledger balance       ||        0.02 |        0.01 |       0.07
  -f sample1000.ledger balance   ||        1.02 |        1.39 |       0.53
  -f sample1.ledger balance      ||       12.72 |       14.97 |       4.63

Contributors:
* Simon Michael
* Tim Docker
* HAppS, happstack and testpack developers

Stats:
* Known errors: 0
* Commits: 132
* Committers: 2
* Tests: 56
* Non-test code lines: 2600
* Days since release: 75

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Problem with prepose.lhs and ghc6.10.1
Hugs did not support lexically scoped type variables then (and probably doesn't support now). I may be misremembering, but I think Hugs had them first;-) http://cvs.haskell.org/Hugs/pages/hugsman/exts.html#sect7.3.3 It is just that Hugs and GHC interpret the language extension differently (as usual), so it doesn't quite support the same code in both. That is made worse by differences in other language features relevant to using this extension. Not to mention that all of that keeps evolving over time. Claus ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Program using 500MB RAM to process 5MB file
lu...@die.net.au writes: I'm relatively new to haskell so as one does, I am rewriting an existing program in haskell to help learn the language. However, it eats up all my RAM whenever I run the program. This typically happens to me when I parse large files and am either a) using a parser that is too strict (like Luke says) or b) using a parser that is too lazy - or rather, one that parses into a lazy data structure which is then populated with unevaluated thunks holding onto the input data. Also, you probably want memory profiling (+RTS -h and friends), not time profiling (+RTS -p). -k -- If I haven't seen further, it is by standing in the footprints of giants ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
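[The "too lazy" failure mode above can be seen without any parser at all; a hypothetical minimal illustration with plain folds: foldl accumulates a chain of unevaluated thunks proportional to the input, while foldl' forces each partial result -- the same fix needed for a too-lazy parse result.]

```haskell
import Data.List (foldl')

-- foldl would build (((0+1)+2)+...) as one giant unevaluated thunk,
-- holding onto the whole input; foldl' forces each partial sum, so
-- memory use stays constant regardless of input size.
sumStrict :: [Int] -> Int
sumStrict = foldl' (+) 0

main :: IO ()
main = print (sumStrict [1 .. 100000])
```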
Re: [Haskell-cafe] Re: Reverting to any old version using Darcs
Thus the uploaded sdist was missing one of the source files, and consequently failed to build. I have a pre-release make target where I test everything I can think of. I think it prevents the above, am I right ? Not unless you run 'make check' in a separate pristine copy of the repo. The problem occurs when your local development repo contains some essential files that have not been checked into the VCS. Your 'make check' will work fine for you, but not for other people. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Program using 500MB RAM to process 5MB file
(2) You are parsing strictly, meaning you have to read the whole input file before anything can be output. This is likely the main performance problem. I'm guessing you are using parsec. Try switching to polyparse if you want to try out lazy parser combinators instead. (module Text.ParserCombinators.Poly.Lazy) http://www.cs.york.ac.uk/fp/polyparse/ There are other incremental parsers out there too, although they may be more complicated because they solve a larger problem: http://yi-editor.blogspot.com/2008/11/incremental-parsing-in-yi.html I also suspect that manyTill is a really bad choice, since it doesn't give you anything until the end token. This is not an also - it is the same problem of too much strictness. Polyparse's combinator 'manyFinally' is the corresponding combinator that works lazily if you want it to. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
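[The strict-vs-lazy distinction can be illustrated without pulling in polyparse; this is a generic sketch, not the polyparse API. Plain lazy list functions already stream: below, results come out before the (unbounded) input is anywhere near consumed, which is exactly what a strict parser -- or manyTill waiting for its end token -- cannot do.]

```haskell
-- A lazy "parser" over lines: each record is read on demand, so even an
-- infinite input yields output immediately. A strict parser would have
-- to see the end of input (or the end token) before producing anything.
parseRecords :: String -> [Int]
parseRecords = map read . lines

main :: IO ()
main = print (take 3 (parseRecords (unlines (map show [1 ..]))))
```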
Re: [Haskell-cafe] Wishful thinking: a text editor that expands function applications into function definitions
One word says more than a thousand pictures: Vim http://www.vim.org/. (well, okay, I'm sure Emacs will do just as well, and some of the more recent IDEs seem to be catching up;-) plus plugins, of course!-)

- unfolding definitions: if you really want that, it is in the domain of program transformation systems and refactorers (HaRe, the Haskell refactorer, has been mentioned - it worked on Haskell'98 sources, plugging into Vim or Emacs; it would really be great to have funding for porting that to a modern GHC/Cabal-based environment, but if you're happy with Haskell'98, and have all the sources, the old HaRe should still do the job once you get it to build with recent GHCs/libraries)

- looking up definitions: that is supported in various ways in Vim/Emacs and the like - I'll talk about some Vim examples, as that is what I use.

  - tag files (generated by running tools like 'ghc -e :ctags', hasktags, .. over the sources) are a simple database linking identifiers to definition sites. Based on these, one can jump from identifiers to definitions (keeping a stack of locations, so one can go back easily), or open split windows on the definition sites. See the Moving through programs section in Vim's help, also online at: http://vimdoc.sourceforge.net/htmldoc/usr_29.html .

  - the haskellmode plugins for Vim support documentation lookup (opening the haddocks for the identifier under the cursor in a browser), and the documentation provides source links, if the docs themselves aren't sufficient. Useful for all those sourceless package installations.

  - the haskellmode plugins also support type tooltips (or, if you don't like tooltips, or are working in a terminal without a gui, type signatures can be displayed in the status line, or added to the source code). This is currently based on GHCi's :browse!, though, so you can only get the types of toplevel definitions that way. One of the insert-mode completions also displays types.

  - if you explain Haskell's import syntax to Vim, you can also search in (local) imported files, using Vim's standard keyword search, for instance ([I).

The haskellmode plugins for Vim are currently in the process of moving to http://projects.haskell.org/haskellmode-vim/ . Which made me notice that I hadn't updated the publicly available version in quite some time (announcement to follow when that process has settled down somewhat). Claus ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] how to upgrade?
Is there a nice way of upgrading GHC? I mean, does cabal-upgrade know to install exactly the packages that I had with the previous GHC version? - J.W. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Program using 500MB RAM to process 5MB file
On Fri, Apr 03, 2009 at 10:22:07AM +0200, Ketil Malde wrote: lu...@die.net.au writes: I'm relatively new to haskell so as one does, I am rewriting an existing program in haskell to help learn the language. However, it eats up all my RAM whenever I run the program. This typically happens to me when I parse large files and either am a) using a parser that is too strict (like Luke says) or b) using a parser that is too lazy - or rather, it parses into a lazy data structure which is then populated with unevaluated thunks holding onto the input data. Thanks for all the help, everyone. I've decided to dump Parsec, as the file structure is simple enough to implement using basic list manipulation (which is, I've read, one of Haskell's strong points), and it has turned out to be much simpler code. I think I was reading the write your own scheme tutorial when I started writing that code, so naturally started using Parsec. As a side note, I was reading the I/O section of RWH last night and came across the lazy vs. strict I/O part, however it didn't occur to me that Parsec was strict. Anyway, thanks for all the help and suggestions. -- Lucas Hazel lu...@die.net.au ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Re: Reverting to any old version using Darcs
Regarding these files that people forget to check in: doesn't every project have a well-defined directory structure? Shouldn't the prefs/boring file use this fact to encapsulate the rules of file inclusion and exclusion? Isn't it safer to check in too many files (by accident) than to forget one? Shouldn't this behavior be the default? To me version control also means: if it works on my machine, it should work on all other people's machines after they are in sync. Of course in reality people can also have different environment variables, different versions of operating systems, different hardware, etc., so this idea certainly is utopia (however, the version control system VESTA (http://sourcefrog.net/weblog/software/vc/vesta/index.html) tried to version everything; they even considered versioning the operating system :-) On Fri, Apr 3, 2009 at 11:23 AM, Malcolm Wallace malcolm.wall...@cs.york.ac.uk wrote: Thus the uploaded sdist was missing one of the source files, and consequently failed to build. I have a pre-release make target where I test everything I can think of. I think it prevents the above, am I right ? Not unless you run 'make check' in a separate pristine copy of the repo. The problem occurs when your local development repo contains some essential files that have not been checked into the VCS. Your 'make check' will work fine for you, but not for other people. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] List and description of language extensions
On Fri, Apr 3, 2009 at 7:16 AM, Brandon S. Allbery KF8NH allb...@ece.cmu.edu wrote: On 2009 Apr 3, at 0:00, Michael Snoyman wrote: It's been multiple times now that I've been confounded by something in Haskell which was then solved by a language extension (first FunctionalDependencies, most recently ScopedTypeVariables). I'm wondering if there is a list anywhere of all the language extensions supported by GHC and a brief description of them. I looked around, but couldn't find one. If there isn't one, would others be willing to fill one in on the Haskell wiki? I'll do what I can, but I obviously have a very limited knowledge of the extensions available. http://www.haskell.org/ghc/docs/latest/html/users_guide/ghc-language-features.html Thanks, that's what I was looking for. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Program using 500MB RAM to process 5MB file
On Fri, Apr 03, 2009 at 10:27:08PM +1100, lu...@die.net.au wrote: On Fri, Apr 03, 2009 at 10:22:07AM +0200, Ketil Malde wrote: lu...@die.net.au writes: I'm relatively new to haskell so as one does, I am rewriting an existing program in haskell to help learn the language. However, it eats up all my RAM whenever I run the program. Thanks for all the help everyone. I've decided to dump Parsec, as the file structure is simple enough to implement using basic list manipulation (which is, I've read, one of haskell's strong points) and has turned out to be much simpler code. Using lazy I/O has reduced run time by 75% and RAM consumption to 3MB. Thank you :) -- Lucas Hazel lu...@die.net.au ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Re: Reverting to any old version using Darcs
Peter Verswyvelen bugf...@gmail.com writes: Regarding these files that people forget to checkin. Doesn't every project have a well define directory structure? Shouldn't the prefs/boring file use this fact to encapsulate the rules of file inclusion and exclusion? Isn't it safer to checkin too many files (by accident) than forgetting one? Shouldn't this behavior be the default? IMO: No. My development directories tend to litter up with files containing test input data, output data, profiling data, and all kinds of junk. Better to occasionally forget a file and get an error message in the mail when somebody else tries to use it (i.e. non-strictly) than have my darcs repository and Hackage sdists working but littered with junk. (My current testing involves 118GB of input data, you sure you want to see that on Hackage? :-) -k -- If I haven't seen further, it is by standing in the footprints of giants ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] List and description of language extensions
It's been multiple times now that I've been confounded by something in Haskell which was then solved by a language extension (first FunctionalDependencies, most recently ScopedTypeVariables). I'm wondering if there is a list anywhere of all the language extensions supported by GHC and a brief description of them. I looked around, but couldn't find one. If there isn't one, would others be willing to fill one in on the Haskell wiki? I'll do what I can, but I obviously have a very limited knowledge of the extensions available. http://www.haskell.org/ghc/docs/latest/html/users_guide/ghc-language-features.html Thanks, that's what I was looking for. Also: $ ghc --supported-languages or $ ghc --supported-languages | sort There's no description, but it's useful for a quick look-up and/or a copy-and-paste. Sean ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Re: Reverting to any old version using Darcs
Okay. I always put these in the boring file. Matter of taste I guess. On Fri, Apr 3, 2009 at 3:18 PM, Ketil Malde ke...@malde.org wrote: Peter Verswyvelen bugf...@gmail.com writes: Regarding these files that people forget to checkin. Doesn't every project have a well define directory structure? Shouldn't the prefs/boring file use this fact to encapsulate the rules of file inclusion and exclusion? Isn't it safer to checkin too many files (by accident) than forgetting one? Shouldn't this behavior be the default? IMO: No. My development directories tend to litter up with files containing test input data, output data, profiling data, and all kinds of junk. Better to occasionally forget a file and get an error message in the mail when somebody else tries to use it (i.e. non-strictly) than have my darcs repository and Hackage sdists working but littered with junk. (My current testing involves 118GB of input data, you sure you want to see that on Hackage? :-) -k -- If I haven't seen further, it is by standing in the footprints of giants ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] ANNOUNCE: fad 1.0 -- Forward Automatic Differentiation library
Very nice to have! FYI - there is at least one more quantification-based automatic differentiation implementation in Hackage: http://comonad.com/haskell/monoids/dist/doc/html/monoids/Data-Ring-Module-AutomaticDifferentiation.html My implementation is/was focused upon use with monoids and other more-limited-than-Num classes and only included the equivalent of your 'lift' and 'diffUU' operations, however. -Edward Kmett On Thu, Apr 2, 2009 at 10:28 PM, Bjorn Buckwalter bjorn.buckwal...@gmail.com wrote: I'm pleased to announce the initial release of the Haskell fad library, developed by Barak A. Pearlmutter and Jeffrey Mark Siskind. Fad provides Forward Automatic Differentiation (AD) for functions polymorphic over instances of 'Num'. There have been many Haskell implementations of forward AD, with varying levels of completeness, published in papers and blog posts[1], but alarmingly few of these have made it into Hackage -- to date Conal Elliott's vector-space[2] package is the only one I am aware of. Fad is an attempt to make as comprehensive and usable a forward AD package as is possible in Haskell. However, correctness is given priority over ease of use, and this is in my opinion the defining quality of fad. Specifically, Fad leverages Haskell's expressive type system to tackle the problem of _perturbation confusion_, brought to light in Pearlmutter and Siskind's 2005 paper Perturbation Confusion and Referential Transparency[3]. Fad prevents perturbation confusion by employing type-level branding as proposed by myself in a 2007 post to haskell-cafe[4]. To the best of our knowledge all other forward AD implementations in Haskell are susceptible to perturbation confusion. As this library has been in the works for quite some time it is worth noting that it hasn't benefited from Conal's ground-breaking work[5] in the area. Once we wrap our heads around his beautiful constructs perhaps we'll be able to borrow some tricks from him.
As mentioned already, fad was developed primarily by Barak A. Pearlmutter and Jeffrey Mark Siskind. My own contribution has been providing Haskell infrastructure support and wrapping up loose ends in order to get the library into a releasable state. Many thanks to Barak and Jeffrey for permitting me to release fad under the BSD license. Fad resides on GitHub[6] and hackage[7] and is only a cabal install fad away! What follows is Fad's README, refer to the haddocks for detailed documentation. Thanks, Bjorn Buckwalter [1] http://www.haskell.org/haskellwiki/Functional_differentiation [2] http://www.haskell.org/haskellwiki/Vector-space [3]: http://www.bcl.hamilton.ie/~qobi/nesting/papers/ifl2005.pdf [4]: http://thread.gmane.org/gmane.comp.lang.haskell.cafe/22308/ [5]: http://conal.net/papers/beautiful-differentiation/ [6] http://github.com/bjornbm/fad/ [7] http://hackage.haskell.org/cgi-bin/hackage-scripts/package/fad Copyright : 2008-2009, Barak A. Pearlmutter and Jeffrey Mark Siskind License: BSD3 Maintainer : bjorn.buckwal...@gmail.com Stability : experimental Portability: GHC only? Forward Automatic Differentiation via overloading to perform nonstandard interpretation that replaces original numeric type with corresponding generalized dual number type. Each invocation of the differentiation function introduces a distinct perturbation, which requires a distinct dual number type. In order to prevent these from being confused, tagging, called branding in the Haskell community, is used. This seems to prevent perturbation confusion, although it would be nice to have an actual proof of this. The technique does require adding invocations of lift at appropriate places when nesting is present. 
For more information on perturbation confusion and the solution employed in this library see:

  http://www.bcl.hamilton.ie/~barak/papers/ifl2005.pdf
  http://thread.gmane.org/gmane.comp.lang.haskell.cafe/22308/

Installation
============

To install:

    cabal install

Or:

    runhaskell Setup.lhs configure
    runhaskell Setup.lhs build
    runhaskell Setup.lhs install

Examples
========

Define an example function 'f':

    import Numeric.FAD
    f x = 6 - 5 * x + x ^ 2   -- Our example function

Basic usage of the differentiation operator:

    y   = f 2               -- f(2) = 0
    y'  = diff f 2          -- First derivative f'(2) = -1
    y'' = diff (diff f) 2   -- Second derivative f''(2) = 2

List of derivatives:

    ys = take 3 $ diffs f 2   -- [0, -1, 2]

Example optimization method; find a zero using Newton's method:

    y_newton1 = zeroNewton f 0    -- converges to first zero at 2.0.
    y_newton2 = zeroNewton f 10   -- converges to second zero at 3.0.

Credits
=======

Authors: Copyright 2008, Barak A. Pearlmutter ba...@cs.nuim.ie Jeffrey Mark Siskind q...@purdue.edu

Work started as stripped-down version of higher-order tower code
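[Not part of the announcement: as a rough sketch of what forward mode does under the hood, the example function above can be differentiated with a hand-rolled dual-number type. This illustrates the technique only -- fad's actual types add the branding discussed above, and diffD below is a stand-in, not fad's diff.]

```haskell
-- Minimal forward-mode AD sketch (an illustration, not fad's API):
-- a dual number carries a value together with its derivative.
data Dual = Dual Double Double deriving Show

instance Num Dual where
  Dual x x' + Dual y y' = Dual (x + y) (x' + y')
  Dual x x' - Dual y y' = Dual (x - y) (x' - y')
  Dual x x' * Dual y y' = Dual (x * y) (x' * y + x * y')  -- product rule
  fromInteger n         = Dual (fromInteger n) 0          -- constants: derivative 0
  abs (Dual x x')       = Dual (abs x) (x' * signum x)
  signum (Dual x _)     = Dual (signum x) 0

-- Differentiate by seeding the input with derivative 1.
diffD :: (Dual -> Dual) -> Double -> Double
diffD g x = let Dual _ d = g (Dual x 1) in d

f :: Num a => a -> a
f x = 6 - 5 * x + x ^ 2   -- the announcement's example function

main :: IO ()
main = print (diffD f 2)  -- f'(x) = 2x - 5, so this prints -1.0
```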
Re: [Haskell-cafe] ANNOUNCE: fad 1.0 -- Forward Automatic Differentiation library
A somewhat tricky concern is that the extra functionality in question depends on a bunch of primitive definitions that lie below this in the package, and the AD engine is used by a layer on top. So moving it out would introduce a circular dependency back into the package, or require me to stratify it into two packages. When I looked into partitioning the package for another reason, I found that I couldn't do so without introducing some orphan instances, so it'll probably be a tricky bit of surgery to split out. That said, it's probably still worth doing. I also agree that I should be somewhat more pedantic about the name. =) -Edward Kmett On Fri, Apr 3, 2009 at 10:49 AM, Barak A. Pearlmutter ba...@cs.nuim.ie wrote: I feel silly, did not even notice that! Thanks for the pointer. Would be sensible to merge the functionalities; will try to import functionality in Data.Ring.Module.AutomaticDifferentiation currently missing from Numeric.FAD. (One pedantic note: should really be named Data.Ring.Module.AutomaticDifferentiation.Forward, since it is doing forward-mode accumulation automatic differentiation; reverse is an adjoint kettle of fish.) -- Barak A. Pearlmutter Hamilton Institute Dept Comp Sci, NUI Maynooth, Co. Kildare, Ireland http://www.bcl.hamilton.ie/~barak/ ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Announcement: Beta of Leksah IDE available
Jurgen... I have one more question, or rather request... I'm running under Ubuntu, and I get inconsistencies with packages that I build and install via Leksah not showing up when I configure other packages that depend on them. Then I notice that you're using runhaskell Setup.lhs ... to configure, build and install. I wonder if you could change all that from runhaskell Setup.lhs to cabal wherever you run it? That would make things a lot more consistent overall, and probably jive better with the way most people install packages. -- Jeff On Thu, Apr 2, 2009 at 8:27 AM, jutaro j...@arcor.de wrote: Hi Simon, you quite nicely describe what leksah is doing already. Try to find the source code for all installed packages by locating cabal files, parse the module sources via the Ghc API (actually not so much the API), using info from cabal files for this (which is a dark art). It extracts comments and locations. It's quite an ad hoc solution. On my machine it's 97% successful, but it's a notorious support theme, because it depends so much on the environment. Jürgen Simon Marlow-7 wrote: David Waern wrote: 2009/4/2 Duncan Coutts duncan.cou...@worc.ox.ac.uk: On Wed, 2009-04-01 at 22:13 +0200, David Waern wrote: 2009/4/1 jutaro j...@arcor.de: I guess you mean the dialog which should help leksah to find sources for installed packages. It needs this so you can go to all the definitions in the base packages ... This is very handy if it works. Look to the manual for details. Maybe could add support to Cabal for installing sources? Should be very useful to have in general. http://hackage.haskell.org/trac/hackage/ticket/364 Jutaru, perhaps a nice Hackathon project? :-) I think there's some design work to do there. See the discussion on the GHC ticket: http://hackage.haskell.org/trac/ghc/ticket/2630. In short: just keeping the source code around isn't enough.
You need some metadata in order to make sense of the source code - for example, you can't feed the source code to the GHC API without knowing which additional flags need to be passed, and those come from the .cabal file. Also you probably want to stash the results of the 'cabal configure' step so that you can get a view of the source code that is consistent with the version(s?) you compiled. We need to think about backwards and forwards-compatibility of whatever metadata format is used. And then you'll need Cabal APIs to extract the metadata. So we need to think about what APIs make sense, and the best way to do that is to think about what tool(s) you want to write and use that to drive the API design. Perhaps all this is going a bit too far. Maybe we want to just stash the source code and accept that there are some things that you just can't do with it. However, I imagine that pretty soon people will want to feed the source code into the GHC API, and at that point we have to tackle the build metadata issues. Cheers, Simon ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe -- View this message in context: http://www.nabble.com/Announcement%3A-Beta-of-Leksah-IDE-available-tp22816032p22846713.html Sent from the Haskell - Haskell-Cafe mailing list archive at Nabble.com. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Announcement: Beta of Leksah IDE available
Hello Jeff, I'm not so sure if I understand what you mean (and I'm off for vacations in a few minutes). So let's find out later. But you may try to add --user to your config flags in menu: Packages/Edit Flags. Jürgen Jeff Heard wrote: Jurgen... I have one more question, or rather request... I'm running under Ubuntu, and I get inconsistencies with packages that I build and install via Leksah not showing up when I configure other packages that depend on them. Then I notice that you're using runhaskell Setup.lhs ... to configure, build and install. I wonder if you could change all that from runhaskell Setup.lhs to cabal wherever you run it? That would make things a lot more consistent overall, and probably jive better with the way most people install packages. -- Jeff On Thu, Apr 2, 2009 at 8:27 AM, jutaro j...@arcor.de wrote: Hi Simon, you quite nicely describe what leksah is doing already. Try to find the source code for all installed packages by locating cabal files, parse the module sources via the Ghc API (actually not so much the API), using info from cabal files for this (which is a dark art). It extracts comments and locations. It's quite an ad hoc solution. On my machine it's 97% successful, but it's a notorious support theme, because it depends so much on the environment. Jürgen Simon Marlow-7 wrote: David Waern wrote: 2009/4/2 Duncan Coutts duncan.cou...@worc.ox.ac.uk: On Wed, 2009-04-01 at 22:13 +0200, David Waern wrote: 2009/4/1 jutaro j...@arcor.de: I guess you mean the dialog which should help leksah to find sources for installed packages. It needs this so you can go to all the definitions in the base packages ... This is very handy if it works. Look to the manual for details. Maybe could add support to Cabal for installing sources? Should be very useful to have in general. http://hackage.haskell.org/trac/hackage/ticket/364 Jutaru, perhaps a nice Hackathon project? :-) I think there's some design work to do there.
See the discussion on the GHC ticket: http://hackage.haskell.org/trac/ghc/ticket/2630. In short: just keeping the source code around isn't enough. You need some metadata in order to make sense of the source code - for example, you can't feed the source code to the GHC API without knowing which additional flags need to be passed, and those come from the .cabal file. Also you probably want to stash the results of the 'cabal configure' step so that you can get a view of the source code that is consistent with the version(s?) you compiled. We need to think about about backwards and forwards-compatibility of whatever metadata format is used. And then you'll need Cabal APIs to extract the metadata. So we need to think about what APIs make sense, and the best way to do that is to think about what tool(s) you want to write and use that to drive the API design. Perhaps all this is going a bit too far. Maybe we want to just stash the source code and accept that there are some things that you just can't do with it. However, I imagine that pretty soon people will want to feed the source code into the GHC API, and at that point we have to tackle the build metadata issues. Cheers, Simon ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe -- View this message in context: http://www.nabble.com/Announcement%3A-Beta-of-Leksah-IDE-available-tp22816032p22846713.html Sent from the Haskell - Haskell-Cafe mailing list archive at Nabble.com. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe -- View this message in context: http://www.nabble.com/Announcement%3A-Beta-of-Leksah-IDE-available-tp22816032p22871892.html Sent from the Haskell - Haskell-Cafe mailing list archive at Nabble.com. 
___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Monad transformer, liftIO
Hello list, maybe I'm just stupid, I'm trying to do something like this: import Control.Monad import Control.Monad.Trans import Control.Monad.List foobar = do a - [1,2,3] b - [4,5,6] liftIO $ putStrLn $ (show a) ++ ++ (show b) return (a+b) main = do sums - foobar print sums But this apparently doesn't work... I'm total clueless how to achieve the correct solution. Maybe my mental image on the monad transformer thing is totally wrong? Michael ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Monad transformer, liftIO
You haven't really said what happens when you try this, but I would bet that things would be clarified greatly if you put type signatures on your two definitions. On Fri, Apr 3, 2009 at 12:49 PM, Michael Roth mr...@nessie.de wrote: Hello list, maybe I'm just stupid, I'm trying to do something like this: import Control.Monad import Control.Monad.Trans import Control.Monad.List foobar = do a - [1,2,3] b - [4,5,6] liftIO $ putStrLn $ (show a) ++ ++ (show b) return (a+b) main = do sums - foobar print sums But this apparently doesn't work... I'm total clueless how to achieve the correct solution. Maybe my mental image on the monad transformer thing is totally wrong? Michael ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Monad transformer, liftIO
On Fri, Apr 3, 2009 at 11:49 AM, Michael Roth mr...@nessie.de wrote: Hello list, maybe I'm just stupid, I'm trying to do something like this: import Control.Monad import Control.Monad.Trans import Control.Monad.List foobar = do a - [1,2,3] b - [4,5,6] liftIO $ putStrLn $ (show a) ++ ++ (show b) return (a+b) main = do sums - foobar print sums But this apparently doesn't work... I'm total clueless how to achieve the correct solution. Maybe my mental image on the monad transformer thing is totally wrong? Okay, so I think what you want is import Control.Monad import Control.Monad.Trans import Control.Monad.List foobar :: ListT IO Int foobar = do a - msum . map return $ [1,2,3] b - msum . map return $ [4,5,6] liftIO $ putStrLn $ (show a) ++ ++ (show b) return (a+b) main = do sums - runListT foobar print sums There were a couple of things going on here: first, that you tried to use literal list syntax in do notation which I believe only works in the actual [] monad. Second, you didn't have the runListT acting on the foobar, which is how you go from a ListT IO Int to a IO [Int]. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] ANNOUNCE hgettext-0.1.5 - GetText based internationalization of Haskell programs
Hello, I have extended my previous version of the library to support distribution and installation of PO files. Source tarball - http://hackage.haskell.org/cgi-bin/hackage-scripts/package/hgettext Also I described how to use this feature to distribute haskell packages in my blog entry http://progandprog.blogspot.com/2009/04/configure-and-install-internationalized.html Same description was added to the Haskell Wiki - http://www.haskell.org/haskellwiki/Internationalization_of_Haskell_programs Complete example, which uses internationalization capabilities is here: http://hgettext.googlecode.com/files/hello-0.1.3.tar.gz --- Best regards, Vasyl Pasternak ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] A fair number of updates to Buster since the other day - 0.99.5
http://vis.renci.org/jeff/2009/04/03/major-updates-to-buster/ Added several new widgets and several new behaviours related to file reading and writing, exceptions, and the system/program environment. -- Jeff ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Possible floating point bug in GHC?
For days I'm fighting against a weird bug. My Haskell code calls into a C function residing in a DLL (I'm on Windows, the DLL is generated using Visual Studio). This C function computes a floating point expression. However, the floating point result is incorrect. I think I found the source of the problem: the C code expects that all the Intel's x86's floating point register tag bits are set to 1, but it seems the Haskell code does not preserve that. Since the x86 has all kinds of floating point weirdnesshttp://www.informit.com/articles/article.aspx?p=770362 - it is both a stack based and register based system - so it is crucially important that generated code plays nice. For example, when using MMX one must always emit an EMMS instructionhttp://msdn.microsoft.com/en-us/library/590b9ks9(VS.80).aspxto clear these tag bits. If I manually clear these tags bits, my code works fine. Is this something other people encountered as well? I'm trying to make a very simple test case to reproduce the behavior... I'm not sure if this is a visual C compiler bug, GHC bug, or something I'm doing wrong... Is it possible to annotate a foreign imported C function to tell the Haskell code generator the functioin is using floating point registers somehow? ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Monad transformer, liftIO
Creighton Hogg schrieb: Okay, so I think what you want is [...] Yes. Your solution works. Thank you. But: a - msum . map return $ [1,2,3] Why Do I need this msum . map return thing? The map return part is somewhat clear. But not entirely. Which type of monad is created here? The msum-part ist totally confusing me: First we create a list with some monads (?) and then msum them? What is going on there? first, that you tried to use literal list syntax in do notation which I believe only works in the actual [] monad. Is the x - xs in the list monad a form of syntactic sugar? Second, you didn't have the runListT acting on the foobar, which is how you go from a ListT IO Int to a IO [Int]. Ah, yes. This point I understood now. Thank you again. Michael ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Announcement: Beta of Leksah IDE available
What's the chance things like hsc2hs and c2hs will ever be supported? :) I'm aware this is a horribly difficult task (or I think it is). Perhaps it would be possible to find the .hsc and .chs files and run the corresponding processor over them and extract data/types/functions from the corresponding .hs files? I tried running leksah on one of my projects which uses a lot of FFI without much success. /jve 2009/3/31 Jürgen Nicklisch-Franken j...@arcor.de I'm proud to announce release 0.4.4 of Leksah, the Haskell IDE written in Haskell. Leksahs current features include: * On the fly error reporting with location of compilation errors * Completion * Import helper for constructing the import statements * Module browser with navigation to definition * Search for identifiers with information about types and comments * Project management support based on Cabal with a visual editor * Haskell customised editor with source candy * Configuration with session support, keymaps and flexible panes For further information: leksah.org Please don't compare what we have reached to IDE's like VisualStudio, Eclipse or NetBeans. I started Leksah June 1997 and work on it in my spare time for fun. I started the project for various reasons. One was to contribute to make Haskell successful in industry, because I suffer from the use of inappropriate programming languages like C, C++, C# or Java in my daily job. Another was to contribute to open source, which I'm using privately almost exclusively. The first alpha version of Leksah was published February 2008. Since the beginning of this year Hamish Mackenzie joined the project and merged his Funa project with Leksah, which gave a real boost. I thank the people who have encouraged and helped me with their comments, enthusiasm and support. I learned as well that the IDE issue is a controversial theme in the community. 
I learned that IDEs are big evil nasty things, that if you need an IDE, something is wrong with your language, that it is scientifically proved, that real cool hackers will always use Emacs or vi. Most stupid I found the recurring comment: Every few years there is someone who starts a Haskell IDE project and then gives up after a few years.. That will be true for Leksah as well, if it will not be accepted and supported by the community. The current state of Leksah is a proof of concept, that an IDE for Haskell is not a difficult thing to do if the community supports it and that it will in my view be of great help and will contribute tremendously to spread Haskell. So I please the members of the community to pause for a moment and try out Leksah with a benevolent attitude. Jürgen Nicklisch-Franken ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe -- /jve ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Possible floating point bug in GHC?
Interesting. This could be the cause of a weird floating point bug that has been showing up in the ghc testsuite recently, specifically affecting MacOS/Intel (but not MacOS/ppc). http://darcs.haskell.org/testsuite/tests/ghc-regress/lib/Numeric/num009.hs That test compares the result of the builtin floating point ops with the same ops imported via FFI. The should not be different, but on Intel they sometimes are. Regards, Malcolm On 3 Apr 2009, at 18:58, Peter Verswyvelen wrote: For days I'm fighting against a weird bug. My Haskell code calls into a C function residing in a DLL (I'm on Windows, the DLL is generated using Visual Studio). This C function computes a floating point expression. However, the floating point result is incorrect. I think I found the source of the problem: the C code expects that all the Intel's x86's floating point register tag bits are set to 1, but it seems the Haskell code does not preserve that. Since the x86 has all kinds of floating point weirdness - it is both a stack based and register based system - so it is crucially important that generated code plays nice. For example, when using MMX one must always emit an EMMS instruction to clear these tag bits. If I manually clear these tags bits, my code works fine. Is this something other people encountered as well? I'm trying to make a very simple test case to reproduce the behavior... I'm not sure if this is a visual C compiler bug, GHC bug, or something I'm doing wrong... Is it possible to annotate a foreign imported C function to tell the Haskell code generator the functioin is using floating point registers somehow? ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Problem with prepose.lhs and ghc6.10.1
With the changes to ScopedTypeVariables in GHC you can't pick up the type from the return type of your function directly, so you'll need either a combinator to do the work or to pass the type in question in as an argument to a helper function. normalize :: (Modular s a, Integral a) = a - (M s a) normalize = normalize' undefined where normalize' :: (Modular s a, Integral a) = s - a - (M s a) normalize' s a = M (a `mod` modulus s) There is an implementation of the reflection code with minor modifications to work with the modern version of GHC's ScopedTypeVariables in hackage as 'reflection' and a minimalist implementation of modular arithmetic from the same paper (if not yet including the residue number system based optimizations) available in 'monoids' as Data.Ring.ModularArithmetic. Both have only been fleshed out as far as I've needed them for other purposes, but should be usable. -Edward Kmett On Thu, Apr 2, 2009 at 11:59 AM, Henry Laxen nadine.and.he...@pobox.comwrote: Dear Group, I'm trying to read the paper: Functional Pearl: Implicit Configurations at http://www.cs.rutgers.edu/~ccshan/prepose/ and when running the code in prepose.lhs I get: ../haskell/prepose.lhs:707:0: Parse error in pattern which is pointing at: normalize a :: M s a = M (mod a (modulus (undefined :: s))) The paper says it uses lexically scoped type variables. I tried reading about them at: http://www.haskell.org/ghc/docs/latest/html/users_guide/other-type-extensions.html#scoped-type-variables so I added -XScopedTypeVariables to my OPTIONS but I still get the same error message. I would really like to play with the code in the paper, but I'm stuck at this point. Any pointers would be appreciated. Best wishes, Henry Laxen ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Possible floating point bug in GHC?
Well this situation can indeed not occur on PowerPCs since these CPUs just have floating point registers, not some weird dual stack sometimes / registers sometimes architecture. But in my case the bug is consistent, not from time to time. So I'll try to reduce this to a small reproducible test case, maybe including the assembly generated by the VC++ compiler. On Fri, Apr 3, 2009 at 9:02 PM, Malcolm Wallace malcolm.wall...@cs.york.ac.uk wrote: Interesting. This could be the cause of a weird floating point bug that has been showing up in the ghc testsuite recently, specifically affecting MacOS/Intel (but not MacOS/ppc). http://darcs.haskell.org/testsuite/tests/ghc-regress/lib/Numeric/num009.hs That test compares the result of the builtin floating point ops with the same ops imported via FFI. The should not be different, but on Intel they sometimes are. Regards, Malcolm On 3 Apr 2009, at 18:58, Peter Verswyvelen wrote: For days I'm fighting against a weird bug. My Haskell code calls into a C function residing in a DLL (I'm on Windows, the DLL is generated using Visual Studio). This C function computes a floating point expression. However, the floating point result is incorrect. I think I found the source of the problem: the C code expects that all the Intel's x86's floating point register tag bits are set to 1, but it seems the Haskell code does not preserve that. Since the x86 has all kinds of floating point weirdness - it is both a stack based and register based system - so it is crucially important that generated code plays nice. For example, when using MMX one must always emit an EMMS instruction to clear these tag bits. If I manually clear these tags bits, my code works fine. Is this something other people encountered as well? I'm trying to make a very simple test case to reproduce the behavior... I'm not sure if this is a visual C compiler bug, GHC bug, or something I'm doing wrong... 
Is it possible to annotate a foreign imported C function to tell the Haskell code generator the functioin is using floating point registers somehow? ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Possible floating point bug in GHC?
What floating point model is your DLL compiled with? There are a variety of different options here with regards to optimizations, and I don't know about the specific assembly that each option produces, but I know there are options like Strict, Fast, or Precise, and maybe when you do something like that it makes different assumptions about the caller. Although that doesn't say anything about whose fault it is, but at least it might be helpful to know if changing the floating point model causes the bug to go away. On Fri, Apr 3, 2009 at 2:31 PM, Peter Verswyvelen bugf...@gmail.com wrote: Well this situation can indeed not occur on PowerPCs since these CPUs just have floating point registers, not some weird dual stack sometimes / registers sometimes architecture. But in my case the bug is consistent, not from time to time. So I'll try to reduce this to a small reproducible test case, maybe including the assembly generated by the VC++ compiler. On Fri, Apr 3, 2009 at 9:02 PM, Malcolm Wallace malcolm.wall...@cs.york.ac.uk wrote: Interesting. This could be the cause of a weird floating point bug that has been showing up in the ghc testsuite recently, specifically affecting MacOS/Intel (but not MacOS/ppc). http://darcs.haskell.org/testsuite/tests/ghc-regress/lib/Numeric/num009.hs That test compares the result of the builtin floating point ops with the same ops imported via FFI. The should not be different, but on Intel they sometimes are. Regards, Malcolm On 3 Apr 2009, at 18:58, Peter Verswyvelen wrote: For days I'm fighting against a weird bug. My Haskell code calls into a C function residing in a DLL (I'm on Windows, the DLL is generated using Visual Studio). This C function computes a floating point expression. However, the floating point result is incorrect. I think I found the source of the problem: the C code expects that all the Intel's x86's floating point register tag bits are set to 1, but it seems the Haskell code does not preserve that. 
Since the x86 has all kinds of floating point weirdness - it is both a stack based and register based system - so it is crucially important that generated code plays nice. For example, when using MMX one must always emit an EMMS instruction to clear these tag bits. If I manually clear these tags bits, my code works fine. Is this something other people encountered as well? I'm trying to make a very simple test case to reproduce the behavior... I'm not sure if this is a visual C compiler bug, GHC bug, or something I'm doing wrong... Is it possible to annotate a foreign imported C function to tell the Haskell code generator the functioin is using floating point registers somehow? ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Possible floating point bug in GHC?
I tried both precise and fast, but that did not help. Compiling to SSE2 fixed it, since that does not use a floating point stack I guess. I'm preparing a repro test case, but it is tricky since removing code tends to change the optimizations and then the bug does not occur. Does anybody know what the calling convention for floating points is for cdecl on x86? The documentation says that the result is returned in st(0), but it says nothing about the floating point tags. I assume that every function expects the FP stack to be empty, potentially containing just argument values. But GHC calls the C function with some FP registers reserved on the stack... On Fri, Apr 3, 2009 at 9:54 PM, Zachary Turner divisorthe...@gmail.comwrote: What floating point model is your DLL compiled with? There are a variety of different options here with regards to optimizations, and I don't know about the specific assembly that each option produces, but I know there are options like Strict, Fast, or Precise, and maybe when you do something like that it makes different assumptions about the caller. Although that doesn't say anything about whose fault it is, but at least it might be helpful to know if changing the floating point model causes the bug to go away. On Fri, Apr 3, 2009 at 2:31 PM, Peter Verswyvelen bugf...@gmail.comwrote: Well this situation can indeed not occur on PowerPCs since these CPUs just have floating point registers, not some weird dual stack sometimes / registers sometimes architecture. But in my case the bug is consistent, not from time to time. So I'll try to reduce this to a small reproducible test case, maybe including the assembly generated by the VC++ compiler. On Fri, Apr 3, 2009 at 9:02 PM, Malcolm Wallace malcolm.wall...@cs.york.ac.uk wrote: Interesting. This could be the cause of a weird floating point bug that has been showing up in the ghc testsuite recently, specifically affecting MacOS/Intel (but not MacOS/ppc). 
http://darcs.haskell.org/testsuite/tests/ghc-regress/lib/Numeric/num009.hs That test compares the result of the builtin floating point ops with the same ops imported via FFI. The should not be different, but on Intel they sometimes are. Regards, Malcolm On 3 Apr 2009, at 18:58, Peter Verswyvelen wrote: For days I'm fighting against a weird bug. My Haskell code calls into a C function residing in a DLL (I'm on Windows, the DLL is generated using Visual Studio). This C function computes a floating point expression. However, the floating point result is incorrect. I think I found the source of the problem: the C code expects that all the Intel's x86's floating point register tag bits are set to 1, but it seems the Haskell code does not preserve that. Since the x86 has all kinds of floating point weirdness - it is both a stack based and register based system - so it is crucially important that generated code plays nice. For example, when using MMX one must always emit an EMMS instruction to clear these tag bits. If I manually clear these tags bits, my code works fine. Is this something other people encountered as well? I'm trying to make a very simple test case to reproduce the behavior... I'm not sure if this is a visual C compiler bug, GHC bug, or something I'm doing wrong... Is it possible to annotate a foreign imported C function to tell the Haskell code generator the functioin is using floating point registers somehow? ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Possible floating point bug in GHC?
On Fri, Apr 03, 2009 at 10:10:17PM +0200, Peter Verswyvelen wrote: I tried both precise and fast, but that did not help. Compiling to SSE2 fixed it, since that does not use a floating point stack I guess. You didn't say what version of GHC you are using, but it sounds like this might already be fixed in 6.10.2 by: Tue Nov 11 12:56:19 GMT 2008 Simon Marlow marlo...@gmail.com * Fix to i386_insert_ffrees (#2724, #1944) The i386 native code generator has to arrange that the FPU stack is clear on exit from any function that uses the FPU. Unfortunately it was getting this wrong (and has been ever since this code was written, I think): it was looking for basic blocks that used the FPU and adding the code to clear the FPU stack on any non-local exit from the block. In fact it should be doing this on a whole-function basis, rather than individual basic blocks. Thanks Ian ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Possible floating point bug in GHC?
Ouch, what a waste of time on my side :-( This bugfix is not mentioned in the notable bug fixes herehttp://haskell.org/ghc/docs/6.10.2/html/users_guide/release-6-10-2.html Since this is such a severe bug, I would recommend listing it :) Anyway, I have a very small repro test case now. Will certainly test this with GHC 6.10.2. On Fri, Apr 3, 2009 at 10:35 PM, Ian Lynagh ig...@earth.li wrote: On Fri, Apr 03, 2009 at 10:10:17PM +0200, Peter Verswyvelen wrote: I tried both precise and fast, but that did not help. Compiling to SSE2 fixed it, since that does not use a floating point stack I guess. You didn't say what version of GHC you are using, but it sounds like this might already be fixed in 6.10.2 by: Tue Nov 11 12:56:19 GMT 2008 Simon Marlow marlo...@gmail.com * Fix to i386_insert_ffrees (#2724, #1944) The i386 native code generator has to arrange that the FPU stack is clear on exit from any function that uses the FPU. Unfortunately it was getting this wrong (and has been ever since this code was written, I think): it was looking for basic blocks that used the FPU and adding the code to clear the FPU stack on any non-local exit from the block. In fact it should be doing this on a whole-function basis, rather than individual basic blocks. Thanks Ian ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Possible floating point bug in GHC?
Okay, I can confirm the bug is fixed. It's insane this bug did not cause any more problems. Every call into every C function that uses floating point could have been affected (OpenGL, BLAS, etc) On Fri, Apr 3, 2009 at 10:47 PM, Peter Verswyvelen bugf...@gmail.comwrote: Ouch, what a waste of time on my side :-( This bugfix is not mentioned in the notable bug fixes herehttp://haskell.org/ghc/docs/6.10.2/html/users_guide/release-6-10-2.html Since this is such a severe bug, I would recommend listing it :) Anyway, I have a very small repro test case now. Will certainly test this with GHC 6.10.2. On Fri, Apr 3, 2009 at 10:35 PM, Ian Lynagh ig...@earth.li wrote: On Fri, Apr 03, 2009 at 10:10:17PM +0200, Peter Verswyvelen wrote: I tried both precise and fast, but that did not help. Compiling to SSE2 fixed it, since that does not use a floating point stack I guess. You didn't say what version of GHC you are using, but it sounds like this might already be fixed in 6.10.2 by: Tue Nov 11 12:56:19 GMT 2008 Simon Marlow marlo...@gmail.com * Fix to i386_insert_ffrees (#2724, #1944) The i386 native code generator has to arrange that the FPU stack is clear on exit from any function that uses the FPU. Unfortunately it was getting this wrong (and has been ever since this code was written, I think): it was looking for basic blocks that used the FPU and adding the code to clear the FPU stack on any non-local exit from the block. In fact it should be doing this on a whole-function basis, rather than individual basic blocks. Thanks Ian ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] ANN: logfloat 0.12.0.1
-- logfloat 0.12.0.1 This package provides a type for storing numbers in the log-domain, primarily useful for preventing underflow when multiplying many probabilities as in HMMs and other probabilistic models. The package also provides modules for dealing with floating numbers correctly. -- Now using FFI We now use the FFI to gain access to C's accurate log1p (and expm1) functions. This greatly increases the range of resolution, especially for very small LogFloat values. These are currently exposed from Data.Number.LogFloat, though they may move to a different module in future versions. On GHC 6.10 the use of -fvia-C had to be disabled because it conflicts with the FFI (version 0.12.0.0 still used it, which is fine on GHC 6.8). I'm still investigating the use of -fasm and getting proper benchmarking numbers. Contact me if you notice significant performance degradation. Using the FFI complicates the build process for Hugs; details are noted in the INSTALL file. It may also complicate building on Windows (due to ccall vs stdcall), though I'm not familiar with Windows FFI and don't have a machine to test on. The calling convention is unsafe in order to avoid overhead. However, in the past this has been noted to cause issues with multithreaded applications since it locks all RTS threads instead of just the calling thread. If you're using logfloat in a multithreaded application and notice a slowdown, or if you're more familiar with these details than I, tell me so I can fix things. If you have any difficulties with the FFI, let me know. As an interim solution the FFI can be disabled by turning off the useFFI Cabal flag during configure, which will compile the package to use the naive log1p implementation from earlier versions. -- Other changes since 0.11.0 * (0.11.1) Felipe Lessa added an instance for IArray UArray LogFloat. On GHC we use newtype deriving; On Hugs we fall back to unsafeCoerce to distribute the newtype over IArray UArray Double. 
* (0.11.2 Darcs) Moved the log/exp fusion rules from Data.Number.LogFloat into Data.Number.Transfinite where our log function is defined. * Added a Storable LogFloat instance for GHC. No implementation is available yet for Hugs. * Removed orphaned toRational/fromRational fusion rules, which were obviated by the introduction of the Data.Number.RealToFrac module in 0.11.0. * Changed the Real LogFloat instance to throw errors when trying to convert transfinite values into Rational. -- Compatibility / Portability The package is compatible with Hugs (September 2006) and GHC (6.8, 6.10). For anyone still using GHC 6.6, the code may still work if you replace LANGUAGE pragma by equivalent OPTIONS_GHC pragma. The package is not compatible with nhc98 and Yhc because Data.Number.RealToFrac uses MPTCs. The other modules should be compatible. -- Links Homepage: http://code.haskell.org/~wren/ Hackage: http://hackage.haskell.org/cgi-bin/hackage-scripts/package/logfloat Darcs: http://code.haskell.org/~wren/logfloat/ Haddock (Darcs version): http://code.haskell.org/~wren/logfloat/dist/doc/html/logfloat/ -- Live well, ~wren ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] ANN: logfloat 0.12.0.1
wren ng thornton wrote: Using the FFI complicates the build process for Hugs; details are noted in the INSTALL file. It may also complicate building on Windows (due to ccall vs stdcall), though I'm not familiar with Windows FFI and don't have a machine to test on. On XP with GHC 6.10.1 it installed cleanly and easily via cabal-install (and a test program comparing results of (log . (1+)) v. log1p) showed that it worked properly). ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Optional EOF in Parsec.
I'm writing a parser with Parsec. In the input language, elements of a sequence are separated by commas: [1, 2, 3] However, instead of a comma, you can also use an EOL: [1, 2 3] Anywhere else, EOL is considered ignorable whitespace. So it's not as simple as just making EOL a token and looking for (comma | eol). I've implemented this functionality in a hand-written parser (basically a hack that keeps track of whether the last read token was preceded by an EOL, without making EOL itself a token). Does anybody have ideas about how to do this with Parsec? ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Generating functions for games
So some time ago I saw mentioned the game of Zendo https://secure.wikimedia.org/wikipedia/en/wiki/Zendo_(game) as a good game for programmers to play (and not just by Okasaki). The basic idea of Zendo is that another player is creating arrangements of little colored plastic shapes and you have to guess what rule they satisfy. I thought it'd be fun to play, but not knowing anyone who has it, I figured a Haskell version would be best. 3D graphics and that sort of geometry is a bit complex, though. Better to start with a simplified version to get the fundamentals right. Why not sequences of numbers? For example: [2, 4, 6] could satisfy quite a few rules - the rule could be all evens, or it could be ascending evens, or it could be incrementing by 2, or it could just be ascending period. Now, being in the position of the player who created the rule is no good. A good guesser is basically AI, which is a bit far afield. But it seems reasonable to have the program create a rule and provide examples. Have a few basic rules, some ways to combine them (perhaps a QuickCheck generator), and bob's your uncle. So I set off creating a dataype. The user could type in their guessed rules and then read could be used to compare. I got up to something like 'data Function = Not | Add' and began writing a 'translate' function, along the lines of 'translate Not = not\ntranslate Add = (+)', at which point I realized that translate was returning different types and this is a Bad Thing in Haskell. After trying out a few approaches, I decided the basic idea was flawed and sought help. Someone in #haskell suggested GADTs, which I've never used. Before I plunge into the abyss, I was wondering: does anyone know of any existing examples of this sort of thing or alternative approachs? I'd much rather crib than create. :) -- gwern signature.asc Description: OpenPGP digital signature ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Generating functions for games
2009/4/4 gwe...@gmail.com:
 [full message quoted above snipped]

A simple example for this using GADTs might be:

    data Function a where
      Not :: Function (Bool -> Bool)
      Add :: Function (Int -> Int -> Int)
      ...
    -- BTW, type signatures are mandatory if you're using GADTs
    translate :: Function a -> a
    translate Not = not
    translate Add = (+)
    ...

The important part here is the parameter to Function, and how it's made more specific in the constructors: even though you're returning values of different types from translate, the result type is always the one the argument's type is parametrised over.

Not to be all RTFM, but I found the example in the GHC docs quite helpful as well: http://www.haskell.org/ghc/docs/latest/html/users_guide/data-type-extensions.html#gadt
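To make the connection back to the original number-sequence game concrete, here is a hypothetical, self-contained sketch (the names Rule, Both and check are mine, not from the thread) of the same GADT trick applied to rules over lists of numbers: each constructor fixes the type parameter, so check can return a differently-typed value per constructor while remaining well-typed, and rules can be combined.

```haskell
{-# LANGUAGE GADTs #-}
module Main where

-- Hypothetical rule language for the number-sequence variant of Zendo.
-- Each constructor pins down the type parameter of Rule, which is what
-- lets 'check' below return different types per constructor.
data Rule a where
  AllEven   :: Rule ([Int] -> Bool)   -- every element is even
  Ascending :: Rule ([Int] -> Bool)   -- strictly increasing
  Both      :: Rule ([Int] -> Bool)   -- conjunction of two rules
            -> Rule ([Int] -> Bool)
            -> Rule ([Int] -> Bool)

-- The analogue of gwern's 'translate': the result type 'a' is
-- determined by the constructor matched on the left-hand side.
check :: Rule a -> a
check AllEven    = all even
check Ascending  = \xs -> and (zipWith (<) xs (drop 1 xs))
check (Both r s) = \xs -> check r xs && check s xs
```

For example, check (Both AllEven Ascending) [2, 4, 6] evaluates to True, while check Ascending [3, 2, 1] is False.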
Re: [Haskell-cafe] Optional EOF in Parsec.
On Fri, Apr 3, 2009 at 8:17 PM, Kannan Goundan kan...@cakoose.com wrote:
 I'm writing a parser with Parsec. In the input language, elements of a
 sequence are separated by commas:

   [1, 2, 3]

 However, instead of a comma, you can also use an EOL:

   [1,
    2
    3]

 Anywhere else, EOL is considered ignorable whitespace. So it's not as
 simple as just making EOL a token and looking for (comma | eol).

Untested, but hopefully enough so you get an idea of where to start:

    -- End-of-line parser. Consumes the newline, if present.
    eol :: Parser ()
    eol = eof <|> (char '\n' >> return ())

    -- List-element separator.
    listSep :: Parser ()
    listSep = eol <|> (char ',' >> spaces)

    -- List parser. The list may be empty - denoted by []
    myListOf :: Parser a -> Parser [a]
    myListOf p = char '[' >> sepBy p listSep >>= \vals -> char ']' >> return vals

This would probably be better off with a custom version of the 'spaces' parser that didn't parse newlines.

Antoine
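The closing suggestion above can be sketched out as follows. This is a hypothetical, self-contained version (the name hspaces is mine, not from the thread) in which the whitespace skipper eats spaces and tabs but NOT newlines, so that EOL stays visible to the parser as a list separator:

```haskell
module Main where

import Text.ParserCombinators.Parsec

-- Skip horizontal whitespace only: spaces and tabs, but no newlines.
hspaces :: Parser ()
hspaces = skipMany (oneOf " \t")

-- End of line, or end of input.
eol :: Parser ()
eol = (char '\n' >> return ()) <|> eof

-- Elements are separated by a newline or by a comma;
-- either may be followed by more horizontal whitespace.
listSep :: Parser ()
listSep = (eol <|> (char ',' >> return ())) >> hspaces

-- A bracketed list of p's, e.g. "[1, 2\n3]".
myListOf :: Parser a -> Parser [a]
myListOf p = do
  _    <- char '['
  hspaces
  vals <- p `sepBy` listSep
  _    <- char ']'
  return vals
```

With this, parse (myListOf (many1 digit)) "" "[1, 2\n3]" yields Right ["1","2","3"]: the comma and the newline both act as separators, while the space after the comma is consumed by hspaces. Note that, as with the original sketch, a trailing separator before the closing bracket is not accepted.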