Re[2]: [Haskell-cafe] packages and QuickCheck

2008-09-10 Thread Bulat Ziganshin
Hello Johannes,

Wednesday, September 10, 2008, 9:39:15 AM, you wrote:

 Has there ever been a discussion of typed, user-definable,
 user-processable source code annotations for Haskell?

afair it was on haskell-prime list

btw, Template Haskell may be used for this purpose (although not in a
portable way, of course)



-- 
Best regards,
 Bulat  mailto:[EMAIL PROTECTED]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] trying to understand monad transformers....

2008-09-10 Thread wren ng thornton

wren ng thornton wrote:

Daryoush Mehrtash wrote:

The MaybeT transformer is defined as:

newtype MaybeT m a = MaybeT {runMaybeT :: m (Maybe a)}




Question:  What does runMaybeT x mean?


As for "what does it do", I think everyone else has handled that pretty 
well. As for "what does it mean", it may help to think categorically.


For every monad |m| we have another monad |MaybeT m|. If we ignore some 
details, we can think of the transformed monad as |Maybe| composed with 
|m|, sic: |Maybe . m|. With this perspective, runMaybeT is inverting 
|MaybeT m| into |m . Maybe| by pushing the Maybe down over/through the 
monad m. Hence we can envision the function as:


 | runMaybeT   :: (MaybeT m) a -> (m . Maybe) a
 | runMaybeT NothingT   = return Nothing
 | runMaybeT (JustT ma) = fmap Just ma



Erh, whoops, I said that backwards. |MaybeT m| is like |m . Maybe| and 
runMaybeT breaks apart the implicit composition into an explicit one. 
The MaybeT monad just gives the additional possibility of a Nothing 
value at *each* object in |m a|.


To see why this is different than the above, consider where |m| is a 
list or Logic. |MaybeT []| says that each element of the list 
independently could be Nothing, whereas |ListT Maybe| says that we have 
either Nothing or Just xs. The pseudocode above gives the latter case 
since it assumes a top level Nothing means Nothing everywhere, which is 
wrong for |MaybeT m|.
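
To make the corrected picture concrete, here is a minimal sketch of the usual 
Monad instance for this MaybeT; it mirrors the definition found in the common 
MaybeT libraries (the Functor/Applicative instances are only there so the 
sketch compiles on its own). Note how a Nothing can show up at *each* 
individual bind, which is exactly the difference described above:

  import Control.Monad (ap, liftM)

  newtype MaybeT m a = MaybeT { runMaybeT :: m (Maybe a) }

  instance Monad m => Functor (MaybeT m) where
    fmap = liftM

  instance Monad m => Applicative (MaybeT m) where
    pure  = MaybeT . return . Just
    (<*>) = ap

  instance Monad m => Monad (MaybeT m) where
    return  = pure
    m >>= k = MaybeT $ do
      mb <- runMaybeT m            -- run the inner m-computation first
      case mb of
        Nothing -> return Nothing  -- Nothing at *this* step short-circuits
        Just x  -> runMaybeT (k x) -- otherwise continue inside the same m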


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell and Java

2008-09-10 Thread Andy Smith
2008/9/9 Maurício [EMAIL PROTECTED]:
 I use Haskell, and my friends at
 work use Java. Do you think it
 could be a good idea to use Haskell
 with Java, so I could understand
 and cooperate with them? Is there a
 a Haskell to Java compiler that's
 already ready to use?

Besides the other approaches people have suggested, maybe you could
use a combination of the Haskell FFI and JNI (Java Native Interface)
or JNA (Java Native Access). I have no idea how practical this would
be, and it would only work on platforms where you can compile Haskell
to native code rather than on any JVM, but if that's OK it might be
worth exploring.

If you want to call Haskell functions from Java, it seems to me it
should be possible to write FFI export declarations for the Haskell
functions you want to use from Java, and write a C wrapper to package
the exported functions for JNI as if they were native C functions.
With JNA you might be able to avoid writing a C wrapper, although I
know even less about JNA than I do about JNI. It should also be
possible to go the other way if you want to call Java code from
Haskell, at least with JNI but possibly not with JNA.
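
To make the Haskell side of that concrete, here is a minimal foreign-export 
sketch. The module and function names are made up, the JNI/JNA glue on the 
C/Java side is omitted, and the caller still has to initialise the Haskell 
runtime (hs_init from HsFFI.h) before using the exported symbol:

  {-# LANGUAGE ForeignFunctionInterface #-}
  module HsExports where

  import Foreign.C.Types (CInt)

  -- an ordinary Haskell function we want to expose
  triple :: CInt -> CInt
  triple n = 3 * n

  -- make it callable from C (and hence from a JNI stub) as "hs_triple"
  foreign export ccall "hs_triple" triple :: CInt -> CInt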

Andy
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] packages and QuickCheck

2008-09-10 Thread Johannes Waldmann



Has there ever been a discussion of typed, user-definable,
user-processable source code annotations for Haskell?


afair it was on haskell-prime list


http://hackage.haskell.org/trac/haskell-prime/ticket/88

if you can call that a discussion :-)



signature.asc
Description: OpenPGP digital signature
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell and Java

2008-09-10 Thread Janis Voigtlaender

Bulat Ziganshin wrote:

Is there a
a Haskell to Java compiler that's
already ready to use?



CAL



Just in case this answer was a bit cryptic for the original poster...

What Bulat means is the following:

http://labs.businessobjects.com/cal/

--
Dr. Janis Voigtlaender
http://wwwtcs.inf.tu-dresden.de/~voigt/
mailto:[EMAIL PROTECTED]



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hackage policy question

2008-09-10 Thread Wolfgang Jeltsch
On Wednesday, 10 September 2008 at 00:59, Duncan Coutts wrote:
 […]

 The .tar.gz packages are pristine and must not change, however
 the .cabal file that is kept in the hackage index could change and that
 information will be reflected both in the hackage website and just as
 importantly for tools like cabal-install.

I don’t think this is a good idea.  The package description, list of exposed 
modules, etc. belong to the software, in my opinion.  So changing it means 
changing the software, which should be reflected by a version number increase.

Why can’t package maintainers double-check that they got everything in the 
Cabal file right before the package is uploaded to Hackage?  And if one has 
forgotten a URL or something like that, it is still possible to release a new 
version where just the 4th component of the version number was changed which 
means precisely that the interface didn’t change.

 […]

 The difference here is that those two would be in the same format,
 the .cabal file inside the tarball that never changes and the one in the
 index which may do. This is also the objection that some people have,
 that it would be confusing to have the two versions, especially that
 unpacking the tarball gives the old unmodified version.

Yes, it *is* confusing.

 […]

 I hope to implement this feature in the new hackage server implementation
 that I'll be announcing shortly.

May I kindly ask you to *not* implement this feature?

 […]

Best wishes,
Wolfgang
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] packages and QuickCheck

2008-09-10 Thread Wolfgang Jeltsch
On Tuesday, 9 September 2008 at 15:46, Sean Leather wrote:
 […]

 Testing non-exported functionality without exporting the test interface
 seems difficult in general. Is there a way to hide part of a module
 interface with Cabal? Then, you could have a 'test' function exported from
 each module for testing but hidden for release.

You can have modules not included in the Exposed-Modules section which contain 
the actual implementation and also export some internals used for testing.  
Then you can let the exposed modules import these internal modules and 
re-export the parts intended for the public.
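
As a rough sketch of that layout (module and function names are made up; each 
module lives in its own file, with Foo listed under Exposed-Modules and 
Foo.Internal under Other-Modules in the .cabal file):

  -- Foo/Internal.hs: the real implementation, hidden from library users
  module Foo.Internal (frobnicate, step) where

  -- internal helper, exported here only so the test suite can reach it
  step :: Int -> Int
  step = (* 2)

  frobnicate :: Int -> Int
  frobnicate = step . (+ 1)

  -- Foo.hs: the exposed module, re-exporting only the public part
  module Foo (frobnicate) where

  import Foo.Internal (frobnicate)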

 […]

Best wishes,
Wolfgang
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] packages and QuickCheck

2008-09-10 Thread Wolfgang Jeltsch
On Tuesday, 9 September 2008 at 16:05, Conal Elliott wrote:
 […]

   My current leaning is to split a package foo into packages foo
   and foo-test 
 
  What benefit does this provide?

 It keeps the library and its dependencies small.

Do you publish foo-test on Hackage?  If yes, then the HackageDB gets cluttered 
with test packages.  If not, there is no easy way for users to run your 
tests.

 […]

Best wishes,
Wolfgang
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hackage policy question

2008-09-10 Thread Ross Paterson
On Tue, Sep 09, 2008 at 10:59:17PM +, Duncan Coutts wrote:
 The .tar.gz packages are pristine and must not change, however
 the .cabal file that is kept in the hackage index could change and that
 information will be reflected both in the hackage website and just as
 importantly for tools like cabal-install. So not only could the
 maintainer fix urls or whatever but also adjust dependencies in the
 light of test results. Consider the analogy to pristine tarballs and
 debian or gentoo meta-data files. The former never changes for a
 particular version, but the meta-data can be adjusted as the
 distributors see fit.

So if Debian or Gentoo etc repackage one of these packages in their
distributions, what is the pristine tarball that they use?
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Can you do everything without shared-memory concurrency?

2008-09-10 Thread Maarten Hazewinkel

Hi Bruce,

Some comments from an 11 year Java professional and occasional Haskell  
hobbyist.



On 9 Sep 2008, at 20:30, Bruce Eckel wrote:

So this is the kind of problem I keep running into. There will seem  
to be consensus that you can do everything with isolated processes  
message passing (and note here that I include Actors in this  
scenario even if their mechanism is more complex). And then someone  
will pipe up and say "well, of course, you have to have threads" and  
the argument is usually for efficiency.


One important distinction to make, which can make a lot of difference  
in performance, is that shared memory itself is not a problem. It's  
when multiple threads/processes can update a single shared area that  
you get into trouble. A single updating thread is OK as long as other  
threads don't depend on instant propagation of the update or on an  
update being visible to all other threads at the exact same time.




I make two observations here which I'd like comments on:

1) What good is more efficiency if the majority of programmers can  
never get it right? My position: if a programmer has to explicitly  
synchronize anywhere in the program, they'll get it wrong. This of  
course is a point of contention; I've met a number of people who say  
"well, I know you don't believe it, but *I* can write successful  
threaded programs". I used to think that, too. But now I think it's  
just a learning phase, and you aren't a reliable thread programmer  
until you say "it's impossible to get right" (yes, a conundrum).


In general I agree. I'm (in all modesty) the best multi-thread  
programmer I've ever met, and even if you were to get it right, the  
next requirements change tends to hit your house of cards with a large  
bucket of water.
And never mind trying to explain the design to other developers. I  
currently maintain a critical multi-threaded component (inherited from  
another developer who left), and my comment on the design is "I cannot  
even properly explain it to myself, let alone someone else". Which is  
why I have a new design based on java.util.concurrent queues on the  
table.


2) What if you have lots of processors? Does that change the picture  
any? That is, if you use isolated processes with message passing and  
you have as many processors as you want, do you still think you need  
shared-memory threading?


In such a setup I think you usually don't have directly shared memory  
at the hardware level, so the processors themselves have to use  
message passing to access shared data structures. Which IMHO means  
that you might as well design your software that way too.



A comment on the issue of serialization -- note that any time you  
need to protect shared memory, you use some form of serialization.  
Even optimistic methods guarantee serialization, even if it happens  
after the memory is corrupted, by backing up to the uncorrupted  
state. The effect is the same; only one thread can access the shared  
state at a time.


And a further note on sharing memory via a transactional resource (be  
it STM, a database or a single controlling thread).
This situation always introduces the possibility that your update  
fails, and a lot of client code is not designed to deal with that. The  
most common pattern I see in database access code is to log the  
exception and continue as if nothing happened. The proper error  
handling only gets added in after a major screwup happens in production,  
and then usually only in the particular part of the code  
where it went wrong this time.



Kind regards,

Maarten Hazewinkel
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hackage policy question

2008-09-10 Thread Ketil Malde
Duncan Coutts [EMAIL PROTECTED] writes:

 So we should think about how to make it less confusing. Perhaps like
 distributors use an extra revision number we should do the same. I had
 hoped that would not be necessary but that's probably not realistic.

If we go this route, it'd be nice to have a distinguishable syntax,
like Gentoo's .rN, so that we don't have to remember whether it's
digit four or five that signifies administrative changes.

-k
-- 
If I haven't seen further, it is by standing in the footprints of giants
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] packages and QuickCheck

2008-09-10 Thread Conal Elliott
If I do foo and foo-test, then I would probably place foo-test on Hackage.
Alternatively, just give foo a pointer to the location of the foo-test darcs
repo.  But then it might not be easy for users to keep the versions
in sync.

On Wed, Sep 10, 2008 at 10:24 AM, Wolfgang Jeltsch 
[EMAIL PROTECTED] wrote:

 On Tuesday, 9 September 2008 at 16:05, Conal Elliott wrote:
  […]

My current leaning is to split a package foo into packages foo
and foo-test
  
   What benefit does this provide?
 
  It keeps the library and its dependencies small.

 Do you publish foo-test on Hackage?  If yes, then the HackageDB gets
 cluttered
 with test packages.  If not, there is no easy way for users to run your
 tests.

  […]

 Best wishes,
 Wolfgang
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hackage policy question

2008-09-10 Thread Duncan Coutts
On Wed, 2008-09-10 at 10:26 +0100, Ross Paterson wrote:
 On Tue, Sep 09, 2008 at 10:59:17PM +, Duncan Coutts wrote:
  The .tar.gz packages are pristine and must not change, however
  the .cabal file that is kept in the hackage index could change and that
  information will be reflected both in the hackage website and just as
  importantly for tools like cabal-install. So not only could the
  maintainer fix urls or whatever but also adjust dependencies in the
  light of test results. Consider the analogy to pristine tarballs and
  debian or gentoo meta-data files. The former never changes for a
  particular version, but the meta-data can be adjusted as the
  distributors see fit.
 
 So if Debian or Gentoo etc repackage one of these packages in their
 distributions, what is the pristine tarball that they use?

They use the one and only pristine tarball. As for what meta-data they
choose, that's up to them; they have the choice of using the
original .cabal file in the .tar.gz or taking advantage of the updated
version.

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hackage policy question

2008-09-10 Thread Wolfgang Jeltsch
On Wednesday, 10 September 2008 at 11:47, you wrote:
 On Wed, 2008-09-10 at 10:11 +0200, Wolfgang Jeltsch wrote:
  On Wednesday, 10 September 2008 at 00:59, Duncan Coutts wrote:
   […]
  
   The .tar.gz packages are pristine and must not change, however
   the .cabal file that is kept in the hackage index could change and that
   information will be reflected both in the hackage website and just as
   importantly for tools like cabal-install.
 
  I don’t think this is a good idea.  The package description, list of
  exposed modules, etc. belongs to the software, in my opinion.  So
  changing it means changing the software which should be reflected by a
  version number increase.

 Remember that all the distributors are doing this all the time and it
 doesn't cause (many) problems. They change the meta-data from what the
 upstream authors give and they add patches etc etc.

 Granted, they do use a revision number to track this normally.

And this is an important difference, in my opinion.  Changing meta-data is 
probably okay if a revision number is changed.

 […]

 The most radical thing is tightening or relaxing the version constraints on
 dependencies. For example if your package depends on foo >= 1.0 && < 1.1 but
 then foo-1.1 comes out and although it does contain api changes and had the
 potential to break your package (which is why you put the conservative upper
 bound) it turns out that you were not using the bits of the api that changed
 and it does in fact work with 1.1 so you can adjust the dependencies to foo
 >= 1.0 && < 1.2.

But what happens if you need the loose dependencies?  cabal-install checks the 
updated Cabal file and might see that you’ve already installed the correct 
dependencies.  It downloads and unpacks the package and then Cabal uses the 
Cabal file included in the package and complains about unmet dependencies.  
Or shall there be a mechanism to overwrite the package’s Cabal file with the 
updated Cabal file?

 […]

 So we should think about how to make it less confusing. Perhaps like
 distributors use an extra revision number we should do the same.

Yes, maybe this is the way to go.

 […]

Best wishes,
Wolfgang
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Can you do everything without shared-memory concurrency?

2008-09-10 Thread Sebastian Sylvan
2008/9/9 Bruce Eckel [EMAIL PROTECTED]

 So this is the kind of problem I keep running into. There will seem to be
 consensus that you can do everything with isolated processes message passing
 (and note here that I include Actors in this scenario even if their
 mechanism is more complex). And then someone will pipe up and say well, of
 course, you have to have threads and the argument is usually for
 efficiency.
 I make two observations here which I'd like comments on:

 1) What good is more efficiency if the majority of programmers can never
 get it right? My position: if a programmer has to explicitly synchronize
 anywhere in the program, they'll get it wrong. This of course is a point of
 contention; I've met a number of people who say well, I know you don't
 believe it, but *I* can write successful threaded programs. I used to think
 that, too. But now I think it's just a learning phase, and you aren't a
 reliable thread programmer until you say it's impossible to get right
 (yes, a conundrum).



I don't see why this needs to be a religious either-or issue. As I said,
*when* isolated threads map well to your problem, they are more attractive
than shared-memory solutions (for correctness reasons), but preferring
isolated threads does not mean you should ignore the reality that they do
not fit every scenario well. There's no single superior
concurrency/parallelism paradigm (at least not yet), so the best we can do
for general purpose languages is to recognize the relative
strengths/weaknesses of each and provide all of them.



 2) What if you have lots of processors? Does that change the picture any?
 That is, if you use isolated processes with message passing and you have as
 many processors as you want, do you still think you need shared-memory
 threading?


Not really. There are still situations where you have large pools of
*potential* data with no way of figuring out ahead of time what pieces
you'll need to modify. So for explicit synchronisation, e.g. using isolated
threads to own the data, or with locks, you'll need to be conservative and
lock the whole world, which means you might as well run everything
sequentially. Note here that implementing this scenario using isolated
threads with message passing effectively boils down to simulating locks and
shared memory - so if you're using shared memory and locks anyway, why not
have native (efficient) support for them?

As I said earlier, though, I believe the best way to synchronize shared
memory is currently STM, not using manual locks (simulated with threads or
otherwise).



 A comment on the issue of serialization -- note that any time you need to
 protect shared memory, you use some form of serialization. Even optimistic
 methods guarantee serialization, even if it happens after the memory is
 corrupted, by backing up to the uncorrupted state. The effect is the same;
 only one thread can access the shared state at a time.



Yes, the difference is that with isolated threads, or with manual locking,
the programmer has to somehow figure out ahead of time which pieces to lock, or
write manual transaction protocols with rollbacks etc. The ideal case is
that you have a runtime (possibly with hardware support) to let you off the
hook and automatically do a very fine-grained locking with optimistic
concurrency.

Isolated threads and locks are on the same side of this argument - they both
require the user to partition the data up ahead of time and decide how to
serialize operations on the data (which is not always possible statically,
leading to very very complicated code, or very low concurrency).

-- 
Sebastian Sylvan
+44(0)7857-300802
UIN: 44640862
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Haskell and Java

2008-09-10 Thread Mauricio

At this time it's not really a question
of better implementation, but of cooperation.
I know Haskell, they know Java, and it
would be nice if we could share code and
work. The idea of the API, or maybe D-Bus,
seems OK. It just would be easier if we
could join everything in a single piece,
but it is no big deal.

Maurício

Daryoush Mehrtash wrote:
Why do you want to mix Haskell and Java in one VM?  If there is 
functionality within your code that is better implemented in Haskell, 
then why not make that into a service (run it as Haskell) with some API 
that Java code can use?


Daryoush

On Tue, Sep 9, 2008 at 2:36 AM, Maurício [EMAIL PROTECTED] 
mailto:[EMAIL PROTECTED] wrote:


Hi,

I use Haskell, and my friends at
work use Java. Do you think it
could be a good idea to use Haskell
with Java, so I could understand
and cooperate with them? Is there a
a Haskell to Java compiler that's
already ready to use?

Thanks,
Maurício

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org mailto:Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe




--
Daryoush

Weblog: http://perlustration.blogspot.com/




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Field names

2008-09-10 Thread Mauricio



(...)
* Since a data constructor can be an infix operator (either spelled with 
backticks or a symbolic name beginning with ':' ) we can also write our 
patterns with infix notation.

(...)


(Slightly off-topic?)

Do you have any reference for that use of infix
constructors whose names start with ':'? That's
interesting, and I didn't know about it.

Thanks,
Maurício

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell and Java

2008-09-10 Thread Marc Weber
There used to be.
http://www.cs.rit.edu/~bja8464/lambdavm/
(Last darcs change log entry:
Sun Oct 21 03:05:20 CEST 2007  Brian Alliet [EMAIL PROTECTED]
  * fix build for hsjava change
)
However it didn't compile for me, or I had some other problems.
The last bits of the Java backend were removed 4 months ago.
Someone has just said: "Fine, so they are finally just gone after a long
while of just lying around" (or something similar).

Eclipsefp itself is nice, but it's lacking all the functionality you'd
expect from an Eclipse plugin today, except compiling.
(So no outline, jump to definitions etc. :( (AFAIK))

So if you want to revive a Haskell-to-JVM project you'll need to dive
into some internals and spend quite a lot of time on it, I'd guess.

Sincerely
Marc Weber
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Field names

2008-09-10 Thread Bulat Ziganshin
Hello Mauricio,

Wednesday, September 10, 2008, 4:07:41 PM, you wrote:

 Do you have any reference for that use of infixing
 constructors by start their name with ':'? That's
 interesting, and I didn't know about it.

really? ;)

sum (x:xs) = x + sum xs
sum [] = 0


-- 
Best regards,
 Bulat  mailto:[EMAIL PROTECTED]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


User defined annotations for Haskell (Was: [Haskell-cafe] packages and QuickCheck)

2008-09-10 Thread Max Bolingbroke
2008/9/10 Johannes Waldmann [EMAIL PROTECTED]:

 Has there ever been a discussion of typed, user-definable,
 user-processable source code annotations for Haskell?

 afair it was on haskell-prime list

 http://hackage.haskell.org/trac/haskell-prime/ticket/88

 if you can call that a discussion :-)

I had to implement a kind of annotation system for GHC as part of my
Summer of Code project for compiler plugins. You can see a discussion
at http://hackage.haskell.org/trac/ghc/wiki/Plugins/Annotations,
though I was not aware of that Haskell' ticket.

The system as I have implemented it is quite basic: the absolute minimum
I needed for plugins. The syntax is essentially:

{-# ANN foo "I am an annotation!" #-}

Where "I am an annotation!" may be replaced with any Typeable value,
and is subject to staging requirements analogous to Template Haskell
(e.g. you may not refer to things in the module being compiled other
than by Template Haskell Name).

As well as values you can annotate types and modules, and annotations
are accessible via the GHC API. See that wiki page for full details.
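
As a small sketch of what that can look like in source code (using the syntax 
above; a String payload sidesteps the staging restriction because nothing 
defined in the module itself is referenced, and the exact details may still 
change):

  module Annotated where

  -- annotate a value binding with a String (String is Typeable)
  {-# ANN checkInvariant ("critical invariant" :: String) #-}
  checkInvariant :: Int -> Bool
  checkInvariant n = n >= 0

  -- modules can carry annotations as well
  {-# ANN module ("reviewed 2008-09-10" :: String) #-}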

This system has not yet been merged into GHC (or even code reviewed by
GHC HQ), so if/when it becomes available to end users the design may
have changed drastically.

Cheers,
Max
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Can you do everything without shared-memory concurrency?

2008-09-10 Thread David Roundy
2008/9/9 Jed Brown [EMAIL PROTECTED]:
 On Tue 2008-09-09 12:30, Bruce Eckel wrote:
 So this is the kind of problem I keep running into. There will seem to be
 consensus that you can do everything with isolated processes message passing
 (and note here that I include Actors in this scenario even if their mechanism
 is more complex). And then someone will pipe up and say well, of course, you
 have to have threads and the argument is usually for efficiency.

 Some pipe up and say ``you can't do global shared memory because it's
 inefficient''.  Ensuring cache coherency with many processors operating
 on shared memory is a nightmare and inevitably leads to poor
 performance.  Perhaps some optimizations could be done if the programs
 were guaranteed to have no mutable state, but that's not realistic.
 Almost all high performance machines (think top500) are distributed
 memory with very few cores per node.  Parallel programs are normally
 written using MPI for communication and they can achieve nearly linear
 scaling to 10^5 processors BlueGene/L for scientific problems with
 strong global coupling.

I should point out, however, that in my experience MPI programming
involves deadlocks and synchronization handling that are at least as
nasty as any I've run into doing shared-memory threading.  This isn't
an issue, of course, as long as you're letting lapack do all the
message passing, but once you've got to deal with message passing
between nodes, you've got possible bugs that are strikingly similar to
the sorts of nasty bugs present in shared memory threaded code using
locks.

David
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] message passing style in Monad

2008-09-10 Thread jinjing
I found that as I can do

  xs.map(+1).sort

by redefining . to be

  a . f = f a
  infixl 9 .

I can also do

  readFile "readme.markdown" . lines . length

by making

  a . b = a .liftM b
  infixl 9 .

Kinda annoying, but the option is there.

- jinjing
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: message passing style in Monad

2008-09-10 Thread jinjing
sorry about the confusion, too many drinks.
I used the redefined . in the definition of .; it should really just be

  flip liftM
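
Spelled out, and using a fresh operator name rather than shadowing the 
Prelude's composition operator, the working version would be something like 
this sketch (not the poster's exact code):

  import Control.Monad (liftM)

  -- left-to-right application of a pure function to a monadic value
  (|>) :: Monad m => m a -> (a -> b) -> m b
  (|>) = flip liftM
  infixl 9 |>

  -- read a file and count its lines, written as a left-to-right pipeline
  countLines :: FilePath -> IO Int
  countLines path = readFile path |> lines |> length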

On Wed, Sep 10, 2008 at 9:14 PM, jinjing [EMAIL PROTECTED] wrote:
 I found that as I can do

  xs.map(+1).sort

 by redefine . to be

  a . f = f a
  infixl 9 .

 I can also do

  readFile readme.markdown . lines . length

 by making

  a . b = a .liftM b
  infixl 9 .

 Kinda annoying, but the option is there.

 - jinjing

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Can you do everything without shared-memory concurrency?

2008-09-10 Thread Jed Brown
On Wed 2008-09-10 09:05, David Roundy wrote:
 2008/9/9 Jed Brown [EMAIL PROTECTED]:
  On Tue 2008-09-09 12:30, Bruce Eckel wrote:
  So this is the kind of problem I keep running into. There will seem to be
  consensus that you can do everything with isolated processes message 
  passing
  (and note here that I include Actors in this scenario even if their 
  mechanism
  is more complex). And then someone will pipe up and say well, of course, 
  you
  have to have threads and the argument is usually for efficiency.
 
  Some pipe up and say ``you can't do global shared memory because it's
  inefficient''.  Ensuring cache coherency with many processors operating
  on shared memory is a nightmare and inevitably leads to poor
  performance.  Perhaps some optimizations could be done if the programs
  were guaranteed to have no mutable state, but that's not realistic.
  Almost all high performance machines (think top500) are distributed
  memory with very few cores per node.  Parallel programs are normally
  written using MPI for communication and they can achieve nearly linear
  scaling to 10^5 processors BlueGene/L for scientific problems with
  strong global coupling.
 
 I should point out, however, that in my experience MPI programming
 involves deadlocks and synchronization handling that are at least as
 nasty as any I've run into doing shared-memory threading.

Absolutely, avoiding deadlock is the first priority (before error
handling).  If you use the non-blocking interface, you have to be very
conscious of whether a buffer is being used or the call has completed.
Regardless, the API requires the programmer to maintain a very clear
distinction between locally owned and remote memory.

 This isn't an issue, of course, as long as you're letting lapack do
 all the message passing, but once you've got to deal with message
 passing between nodes, you've got bugs possible that are strikingly
 similar to the sorts of nasty bugs present in shared memory threaded
 code using locks.

Lapack per se does not do message passing.  I assume you mean whatever
parallel library you are working with, for instance, PETSc.  Having the
right abstractions goes a long way.

I'm happy to trade the issues with shared mutable state for distributed
synchronization issues, but that is likely due to its suitability for
the problems I'm interested in.  If the data model maps cleanly to
distributed memory, I think it is easier than coarse-grained shared
parallelism.  (OpenMP is fine-grained; there is little or no shared
mutable state and it is very easy.)

Jed


pgpGjPCgsQQbZ.pgp
Description: PGP signature
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Haskell stacktrace

2008-09-10 Thread Michał Pałka
I was going to suggest using the -xc option of the GHC runtime (if you
are using GHC), but it seems that it doesn't always give meaningful
results as indicated here:
http://osdir.com/ml/lang.haskell.glasgow.bugs/2006-09/msg8.html
and here:
http://www.haskell.org/pipermail/glasgow-haskell-users/2006-November/011549.html

You might want to try it anyway. It's documented in the GHC manual:
http://haskell.org/ghc/docs/latest/html/users_guide/runtime-control.html

Other than that, there are also Haskell debuggers like Hat, but I
haven't used them myself, so I can't really tell if they could help here.

Best,
Michał

On Tue, 2008-09-09 at 22:35 +0200, Pieter Laeremans wrote:
 Woops , I hit the  send button to early.
 
 
 The java approach to locate the error would be
 
 
 try {   ... }catch(Exception e ){ 
 // log error 
 throw new RuntimeException(e);
 }
 
 
 ...
 
 
 What 's the best equivalent haskell approach ?
 
 
 thanks in advance,
 
 
 Pieter
 
 
 On Tue, Sep 9, 2008 at 10:30 PM, Pieter Laeremans
 [EMAIL PROTECTED] wrote:
 Hello,
 
 
 I've written a cgi script in haskell, it crashes sometimes
  with the error message "Prelude.tail: empty list"
 
 
 In Java we would use this approach to log the erro 
 
 
 try {
 
 
 } catch (Exception e) {
 
 
 
 
 }
 
 
 
 -- 
 Pieter Laeremans [EMAIL PROTECTED]
 
 The future is here. It's just not evenly distributed yet. W.
 Gibson
 
 
 
 
 -- 
 Pieter Laeremans [EMAIL PROTECTED]
 
 The future is here. It's just not evenly distributed yet. W. Gibson
 
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Can you do everything without shared-memory concurrency?

2008-09-10 Thread David Roundy
On Wed, Sep 10, 2008 at 03:30:50PM +0200, Jed Brown wrote:
 On Wed 2008-09-10 09:05, David Roundy wrote:
  2008/9/9 Jed Brown [EMAIL PROTECTED]:
   On Tue 2008-09-09 12:30, Bruce Eckel wrote:
   So this is the kind of problem I keep running into. There will seem to be
   consensus that you can do everything with isolated processes message 
   passing
   (and note here that I include Actors in this scenario even if their 
   mechanism
   is more complex). And then someone will pipe up and say well, of 
   course, you
   have to have threads and the argument is usually for efficiency.
  
   Some pipe up and say ``you can't do global shared memory because it's
   inefficient''.  Ensuring cache coherency with many processors operating
   on shared memory is a nightmare and inevitably leads to poor
   performance.  Perhaps some optimizations could be done if the programs
   were guaranteed to have no mutable state, but that's not realistic.
   Almost all high performance machines (think top500) are distributed
   memory with very few cores per node.  Parallel programs are normally
   written using MPI for communication and they can achieve nearly linear
   scaling to 10^5 processors BlueGene/L for scientific problems with
   strong global coupling.
 
  I should point out, however, that in my experience MPI programming
  involves deadlocks and synchronization handling that are at least as
  nasty as any I've run into doing shared-memory threading.

 Absolutely, avoiding deadlock is the first priority (before error
 handling).  If you use the non-blocking interface, you have to be very
 conscious of whether a buffer is being used or the call has completed.
 Regardless, the API requires the programmer to maintain a very clear
 distinction between locally owned and remote memory.

Even with the blocking interface, you had subtle bugs that I found
pretty tricky to deal with.  e.g. the reduce functions in lam3 (or was
it lam4) at one point didn't actually manage to result in the same
values on all nodes (with differences caused by roundoff error), which
led to rare deadlocks, when it so happened that two nodes disagreed as
to when a loop was completed.  Perhaps someone made the mistake of
assuming that addition was associative, or maybe it was something
triggered by the non-IEEE floating point we were using.  But in any
case, it was pretty nasty.  And it was precisely the kind of bug that
won't show up except when you're doing something like MPI where you
are pretty much forced to assume that the same (pure!) computation has
the same effect on each node.

  This isn't an issue, of course, as long as you're letting lapack do
  all the message passing, but once you've got to deal with message
  passing between nodes, you've got bugs possible that are strikingly
  similar to the sorts of nasty bugs present in shared memory threaded
  code using locks.

 Lapack per-se does not do message passing.  I assume you mean whatever
 parallel library you are working with, for instance, PETSc.  Having the
 right abstractions goes a long way.

Right, I meant to say scalapack.  If you've got nice simple
abstractions (which isn't always possible), it doesn't matter if
you're using message passing or shared-memory threading.

 I'm happy to trade the issues with shared mutable state for distributed
 synchronization issues, but that is likely due to it's suitability for
 the problems I'm interested in.  If the data model maps cleanly to
 distributed memory, I think it is easier than coarse-grained shared
 parallelism.  (OpenMP is fine-grained; there is little or no shared
 mutable state and it is very easy.)

Indeed, data-parallel programming is nice and it's easy, but I'm not
sure that it maps well to most problems.  We're fortunate that it does
map well to most scientific problems, but as normal programmers are
thinking about parallelizing their code, I don't think data-parallel
is the paradigm that we need to lead them towards.

David
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hackage policy question

2008-09-10 Thread Brandon S. Allbery KF8NH

On 2008 Sep 10, at 6:48, Wolfgang Jeltsch wrote:

On Wednesday, 10 September 2008 at 11:47, you wrote:

So we should think about how to make it less confusing. Perhaps like
distributors use an extra revision number we should do the same.


Yes, maybe this is the way to go.



Everyone who manages packages runs into this, and all of them use  
revision numbers like this.  (.rN for gentoo was already mentioned;  
BSD ports and MacPorts use _, RPM uses -.  Depot collections at CMU  
also use -.)


And while we're on that topic, most of them also have an epoch which  
overrides the version number.  If for some reason an updated package  
*doesn't* change the version, or goes backwards (because of a major  
bug leading to backing off the new release), you increase the epoch so  
dependent packages don't get confused when it's re-released.  If we're  
considering modifying hackage's versioning, we should probably decide  
if we want/need this now instead of having to add it in later when  
something major goes *boom*.


--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] [EMAIL PROTECTED]
system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED]
electrical and computer engineering, carnegie mellon universityKF8NH


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Field names

2008-09-10 Thread Brandon S. Allbery KF8NH

On 2008 Sep 10, at 8:53, Bulat Ziganshin wrote:

Wednesday, September 10, 2008, 4:07:41 PM, you wrote:

Do you have any reference for that use of infixing
constructors by start their name with ':'? That's
interesting, and I didn't know about it.


really? ;)

sum (x:xs) = x + sum xs
sum [] = 0



I think that only counts as the origin of the idea; aren't :-prefixed  
infix constructors a GHC-ism?


http://www.haskell.org/ghc/docs/latest/html/users_guide/data-type-extensions.html#infix-tycons

--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] [EMAIL PROTECTED]
system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED]
electrical and computer engineering, carnegie mellon universityKF8NH


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Field names

2008-09-10 Thread Jonathan Cast
On Wed, 2008-09-10 at 11:54 -0400, Brandon S. Allbery KF8NH wrote:
 On 2008 Sep 10, at 8:53, Bulat Ziganshin wrote:
  Wednesday, September 10, 2008, 4:07:41 PM, you wrote:
  Do you have any reference for that use of infixing
  constructors by start their name with ':'? That's
  interesting, and I didn't know about it.
 
  really? ;)
 
  sum (x:xs) = x + sum xs
  sum [] = 0
 
 
 I think that only counts as the origin of the idea; isn't :-prefixed  
 infix constructors a ghc-ism?
 
 http://www.haskell.org/ghc/docs/latest/html/users_guide/data-type-extensions.html#infix-tycons

That link is for type constructors, not data constructors; for data
constructors, go to 

http://haskell.org/onlinereport/lexemes.html

and search for `Operator symbols'.  (The Haskell 98 Report seems to not
have internal anchor tags for hot-linking, unfortunately).
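
For a concrete example of such a constructor operator (plain Haskell 98, 
nothing GHC-specific):

  -- a cons-like list whose constructor is the operator :>
  data Stream a = End | a :> Stream a
    deriving Show

  infixr 5 :>

  -- pattern matching on the operator constructor works as usual
  headOr :: a -> Stream a -> a
  headOr d End      = d
  headOr _ (x :> _) = x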

jcc


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Haskell and Java

2008-09-10 Thread Daryoush Mehrtash
As people have suggested on this list, in order to write a Haskell program
you need to develop a mathematical model, which requires some serious up-front
thinking.  Writing Java code, on the other hand, is more about coding
and then re-factoring.  "Thinking" is discouraged (agile), as the design
is more about how you organize your objects for the elusive reuse and
future requirements than anything else.  The approach makes sense if you
consider design as an exercise in object organization, as you would have a
better idea of the object organization (aka design) as you plow through the
code.

On the other hand, if you have a mathematical idea, then a language like Java
doesn't give you the abstraction tools necessary to implement it as well as
a language like Haskell does.  But if you don't have a model, then Java's
approach may be more natural (as is evident from its popularity).

It seems to me that the two can work side by side if you model your
application in a Service Oriented Architecture.  I think the boundaries of
the services should be thought of as languages rather than APIs (function
calls).  Two different examples that come to mind are SQL and Google
Chart.  A Java programmer doesn't care about the SQL server implementation,
but depends on its query language to create the tables, populate them,
and issue rather complicated queries on them.  Google Chart is
interesting in that it provides a language in a URL to implement a service
that has traditionally been considered a library.  I would think that if you
can define such services in your application, then you can define a language
and mathematical model around it to implement the service in Haskell.

Daryoush

On Wed, Sep 10, 2008 at 5:01 AM, Mauricio [EMAIL PROTECTED] wrote:

 At this time It's not really a question
 of better implementation, but cooperation.
 I know Haskell, they know Java, and it
 would be nice if we could share code and
 work. The idea of the api, or maybe dbus,
 seems OK. It just would be easier if we
 could join everything in a single piece,
 but it is no big deal.

 Maurício

 Daryoush Mehrtash wrote:

 Why do you want to mix haskall and Java in one VM?  If there are
 functionality within your code that is better implemented in haskell, then
 why not  make that into a service (run it as haskell) with some api that
 Java code can use.

 Daryoush

  On Tue, Sep 9, 2008 at 2:36 AM, Maurício [EMAIL PROTECTED] mailto:
 [EMAIL PROTECTED] wrote:

Hi,

I use Haskell, and my friends at
work use Java. Do you think it
could be a good idea to use Haskell
with Java, so I could understand
and cooperate with them? Is there a
a Haskell to Java compiler that's
already ready to use?

Thanks,
Maurício

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org mailto:Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe




 --
 Daryoush

 Weblog: http://perlustration.blogspot.com/


 

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Daryoush

Weblog: http://perlustration.blogspot.com/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hackage policy question

2008-09-10 Thread Duncan Coutts
On Wed, 2008-09-10 at 12:09 +0200, Ketil Malde wrote:
 Duncan Coutts [EMAIL PROTECTED] writes:
 
  So we should think about how to make it less confusing. Perhaps like
  distributors use an extra revision number we should do the same. I had
  hoped that would not be necessary but that's probably not realistic.
 
 If we go this route, it'd be nice to have a distinguishable syntax,
 like Gentoo's .rN, so that we don't have to remember whether it's
 digit four or five that signifies administrative changes.

Yes, it must be distinguished because we cannot constrain the upstream
maintainers' version policy. We do not know and should not care how many
version places they use.

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Can you do everything without shared-memory concurrency?

2008-09-10 Thread Ryan Ingram
On Wed, Sep 10, 2008 at 2:55 AM, Maarten Hazewinkel
[EMAIL PROTECTED] wrote:
 And a further note on sharing memory via a transactional resource (be it
 STM, a database or a single controlling thread).
 This situation always introduces the possibility that your update fails, and
 a lot of client code is not designed to deal with that. The most common
 pattern I see in database access code is to log the exception and continue
 as if nothing happened. The proper error handling only gets added in after a
 major screwup in production happens, and the usually only the the particular
 part of the code where it went wrong this time.

This seems to be a bit too much F.U.D. for STM.  As long as you avoid
unsafeIOToSTM (which you really should; that function is far more evil
than unsafePerformIO), the only failure case for current Haskell STM
is starvation; some thread will always be making progress and you do
not have to explicitly handle failure.

This is absolutely guaranteed by the semantics of STM: no effects are
visible from a retrying transaction--it just runs again from the
start.  You don't have to write proper error handling code for
transactional updates failing.

The only reason why this isn't possible in most database code is that
access to the database is happening concurrently with side effects to
the local program state based on what is read from the database.
Removing the ability to have side effects at that point is what makes
STM great.  It's easy to make side effects happen on commit, though,
just return an IO action that you execute after the atomically block:

 atomicallyWithCommitAction :: STM (IO a) -> IO a
 atomicallyWithCommitAction stm = join (atomically stm)

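As a usage sketch of that combinator (the helper names are made up): the 
transaction updates a TVar and returns the IO action to run only once the 
transaction has actually committed, so a retried transaction can never log 
twice:

  import Control.Concurrent.STM
  import Control.Monad (join)

  atomicallyWithCommitAction :: STM (IO a) -> IO a
  atomicallyWithCommitAction stm = join (atomically stm)

  -- debit an account; the log message is printed only after the
  -- transaction commits
  debitAndLog :: TVar Int -> Int -> IO ()
  debitAndLog account amount = atomicallyWithCommitAction $ do
    balance <- readTVar account
    writeTVar account (balance - amount)
    return (putStrLn ("withdrew " ++ show amount))
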
  -- ryan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell stacktrace

2008-09-10 Thread Paul Johnson

Pieter Laeremans wrote:

Hello,

I've written a cgi script in haskell, it crashes sometimes with the 
error message "Prelude.tail: empty list"

Yup, been there, done that.

First, look for all the uses of tail in your program and think hard 
about all of them.  Wrap them in assert or trace functions to see if 
there are any clues to be had.


Then take a look at the GHC debugger documentation.  Debugging lazy 
languages (or indeed any language with closures) is difficult because 
execution order isn't related to where the faulty data came from.  So if 
I say somewhere


  x = tail ys

and then later on

  print x

the print will trigger the error, but the faulty data ys may no 
longer be in scope, so you can't see it.
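
One cheap way to do the wrapping suggested above is a variant of tail that is 
told its call site, so the eventual error message points somewhere useful (the 
function name here is made up):

  -- like Prelude.tail, but the error message names the call site
  tailAt :: String -> [a] -> [a]
  tailAt _    (_:xs) = xs
  tailAt site []     = error ("tail of empty list at " ++ site)

  -- usage: replace  tail ys  with  tailAt "someDescriptiveName" ys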

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [Fwd: profiling in haskell]

2008-09-10 Thread Vlad Skvortsov

Tim Chevalier wrote:

2008/9/8 Vlad Skvortsov [EMAIL PROTECTED]:
  

Posting to cafe since I got just one reply on [EMAIL PROTECTED]  I was advised 
to
include more SCC annotations, but that didn't help. The 'serialize' function
is still reported to consume about 32% of running time, 29% inherited.
However, functions called from it only account for about 3% of time.




If serialize calls standard library functions, this is probably
because the profiling libraries weren't built with -auto-all -- so the
profiling report won't tell you how much time standard library
functions consume.
  


Hmm, that's a good point! I didn't think about it. Though how do I make 
GHC link in profiling versions of standard libraries? My own libraries 
are built with profiling support and I run Setup.hs with 
--enable-library-profiling and --enable-executable-profiling options.



You can rebuild the libraries with -auto-all, but probably much easier
would be to add SCC annotations to each call site. For example, you
could annotate your locally defined dumpWith function like so:

dumpWith f = {-# SCC "foldWithKey" #-} Data.Map.foldWithKey f []
docToStr k (Doc { docName=n, docVectorLength=vl }) =
    (:) ("d " ++ show k ++ " " ++ n ++ " " ++ (show vl))
  


Here is what my current version of the function looks like:

serialize :: Database -> [[String]]
serialize db =
  {-# SCC "XXXCons" #-}
  [
    [dbFormatTag],
    ({-# SCC "dwDoc" #-} dumpWith docToStr dmap),
    ({-# SCC "dwTerm" #-} dumpWith termToStr tmap)
  ]
  where
    (dmap, tmap) = {-# SCC "XXX" #-} db

    dumpWith f = {-# SCC "dumpWith" #-} Data.Map.foldWithKey f []

    docToStr :: DocId -> Doc -> [String] -> [String]
    docToStr k (Doc { docName=n, docVectorLength=vl }) =
      {-# SCC "docToStr" #-}
        ((:) ("d " ++ show k ++ " " ++ n ++ " " ++ (show vl)))

    termToStr t il =
      {-# SCC "termToStr" #-}
        ((:) ("t " ++ t ++ " " ++ (foldl ilItemToStr "" il)))

    ilItemToStr acc (docid, weight) =
      {-# SCC "ilItemToStr" #-}
        (show docid ++ ":" ++ show weight ++ " " ++ acc)



...and still I don't see these cost centres taking a lot of time (they 
add up to about 3%, as I said before).



Then your profiling report will tell you how much time/memory that
particular call to foldWithKey uses.

By the way, using foldl rather than foldl' or foldr is almost always a
performance bug


Data.Map.foldWithKey is implemented with foldr [1]; however, I'm not sure 
I get how foldr is superior to foldl here (foldl' I understand). 
Could you shed some light on that for me, please?


Thanks!

[1]: 
http://www.haskell.org/ghc/docs/latest/html/libraries/containers/src/Data-Map.html


--
Vlad Skvortsov, [EMAIL PROTECTED], http://vss.73rus.com

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [Fwd: profiling in haskell]

2008-09-10 Thread Tim Chevalier
On Wed, Sep 10, 2008 at 12:31 PM, Vlad Skvortsov [EMAIL PROTECTED] wrote:
 Hmm, that's a good point! I didn't think about it. Though how do I make GHC
 link in profiling versions of standard libraries? My own libraries are built
 with profiling support and I run Setup.hs with --enable-library-profiling
 and --enable-executable-profiling options.

When you build your own code with -prof, GHC automatically links in
profiling versions of the standard libraries. However, its profiling
libraries were not built with -auto-all (the reason is that adding
cost centres interferes with optimization). To build the libraries
with -auto-all, you would need to build GHC from sources, which is not
for the faint of heart. However, the results of doing that aren't
usually very enlightening anyway -- for example, foldr might be called
from many different places, but you might only care about a single
call site (and then you can annotate that call site).

Just from looking, I would guess this is the culprit:

    termToStr t il =
      {-# SCC "termToStr" #-} ((:) ("t " ++ t ++ " " ++
        (foldl ilItemToStr "" il)))


If you want to be really sure, you can rewrite this as:

 termToStr t il =
     {-# SCC "termToStr" #-} ((:) ("t " ++ t ++ " " ++
       ({-# SCC "termToStr_foldl" #-} foldl ilItemToStr "" il)))

and that will give you a cost centre measuring the specific cost of
the invocation of foldl.

 Data.Map.foldWith key is implemented with foldr[1], however I'm not sure I'm
 getting how foldr is superior to foldl here (foldl' I understand). Could you
 shed some light on that for me please?


I meant the call to foldl in termToStr. There's a good explanation of this at:
http://en.wikibooks.org/wiki/Haskell/Performance_Introduction
(look for foldl).
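
The short version, as a sketch: foldl piles up unevaluated thunks for the 
whole list before doing any work, whereas foldl' (from Data.List) forces each 
intermediate accumulator as it goes:

  import Data.List (foldl')

  -- builds the chain (((0 + x1) + x2) + ...) lazily; on a long list the
  -- unevaluated chain itself can blow the stack before any (+) runs
  sumLazy :: [Int] -> Int
  sumLazy = foldl (+) 0

  -- the same fold, but each partial sum is evaluated immediately
  sumStrict :: [Int] -> Int
  sumStrict = foldl' (+) 0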

Cheers,
Tim


-- 
Tim Chevalier * http://cs.pdx.edu/~tjc * Often in error, never in doubt
Just enough: Obama/Biden '08.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] packages and QuickCheck

2008-09-10 Thread John Goerzen
Sean Leather wrote:
 
 How do folks like to package up QuickCheck tests for their
 libraries?  In the main library?  As a separate repo  package? 
 Same repo  separate package?  Keeping tests with the tested code
 allows testing of non-exported functionality, but can add quite a
 lot of clutter.
 
 
 I have QuickCheck properties plus HUnit tests, but I think the question
 is the same. For me, it's in the same repository and shipped with the
 package source. I think that if you ship source (even via Hackage), you
 should also ship tests. So, if somebody wants to modify the source, they
 can run the tests. And making it convenient to test is very important,
 so I have cabal test (or runhaskell Setup.hs test without
 cabal-install) configured to run the tests. I don't think tests should
 (in general) be part of the user-visible API, so I have them external to
 the module hierarchy.

Do you have a quick-and-easy recipe you could post for making this stuff
work well?  In particular, it would be helpful to have it not install
the test program as well.

I'm not as fluent in the intricacies of Cabal as I ought to be, I'm afraid.

-- John
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] packages and QuickCheck

2008-09-10 Thread John Goerzen
Ketil Malde wrote:
 Conal Elliott [EMAIL PROTECTED] writes:
 
 Thanks a bunch for these tips.  I haven't used the flags feature of cabal
 before, and i don't seem to be able to get it right. 
 
 Another option might be to have the test command build via 'ghc
 --make' instead of Cabal - this way, you can avoid mentioning testing
 libraries altogether in the cabal file.

This is the approach I have often taken in the past, but it imposes
somewhat of a maintenance burden because you must then make sure that
the ghc command-line flags are kept in sync with what you're doing in
the cabal file -- -X options, packages used, etc.  This becomes even
more difficult if your cabal file is doing any sort of dynamic
configuration.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] packages and QuickCheck

2008-09-10 Thread John Goerzen
Sean Leather wrote:
 
 My tests are making use of a nice console test runner I wrote that
 supports both HUnit and QuickCheck (and is extensible to other test
 providers by the user):
 http://hackage.haskell.org/cgi-bin/hackage-scripts/package/test-framework.
 
 
 The description looks great! I might have to try it out.
 
 I used HUnit with QuickCheck 2, so that I could run QC properties as
 HUnit tests. QC2 has the added ability (over QC1) to run a property and
 return a Bool instead of just exiting with an error, and that works
 nicely within HUnit. Does test-framework do something else to support QC
 running side-by-side with HUnit?

See:

http://software.complete.org/static/missingh/doc/MissingH/Test-HUnit-Utils.html

Also some examples at

http://git.complete.org/offlineimap?a=tree;f=testsrc;h=217ee16404384ba2ae3ad01bcdb5696fe495bbdf;hb=refs/heads/v7

see runtests.hs and TestInfrastructure.hs
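The core trick is small; roughly this (a sketch assuming QuickCheck 2's
quickCheckResult/isSuccess and HUnit's assertBool, not the literal
MissingH code):

  import Test.HUnit (Test(..), assertBool, runTestTT)
  import Test.QuickCheck (Testable, quickCheckResult)
  import Test.QuickCheck.Test (isSuccess)

  -- Wrap a QuickCheck property as an HUnit test, so one runner can
  -- drive HUnit assertions and QC properties side by side.
  qcTest :: Testable prop => String -> prop -> Test
  qcTest name prop = TestLabel name . TestCase $ do
    result <- quickCheckResult prop
    assertBool (name ++ " failed") (isSuccess result)

  main :: IO ()
  main = runTestTT (TestList [qcTest "reverse.reverse == id" prop]) >> return ()
    where prop xs = reverse (reverse xs) == (xs :: [Int])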

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Can you do everything without shared-memory concurrency?

2008-09-10 Thread Simon Marlow

Bruce Eckel wrote:
So this is the kind of problem I keep running into. There will seem to 
be consensus that you can do everything with isolated processes and message 
passing (and note here that I include Actors in this scenario even if 
their mechanism is more complex). And then someone will pipe up and say 
well, of course, you have to have threads and the argument is usually 
for efficiency.


I make two observations here which I'd like comments on:

1) What good is more efficiency if the majority of programmers can never 
get it right? My position: if a programmer has to explicitly synchronize 
anywhere in the program, they'll get it wrong. This of course is a point 
of contention; I've met a number of people who say "well, I know you 
don't believe it, but *I* can write successful threaded programs." I 
used to think that, too. But now I think it's just a learning phase, and 
you aren't a reliable thread programmer until you say "it's impossible 
to get right" (yes, a conundrum).


(welcome Bruce!)

Let's back up a bit.  If the goal is just to make something go faster, 
then threads are definitely not the first tool the programmer should be 
looking at, and neither is message passing or STM.  The reason is that 
threads and mutable state inherently introduce non-determinism, and when 
you're just trying to make something go faster non-determinism is almost 
certainly unnecessary (there are problems where non-determinism helps, 
but usually not).


In Haskell, for example, we have par/seq and Strategies which are 
completely deterministic, don't require threads or mutable state, and are 
trivial to use correctly.  Now, getting good speedup is still far from 
trivial, but that's something we're actively working on.  Still, people 
are often able to get a speedup just by using a parMap or something. 
Soon we'll have Data Parallel Haskell too, which also targets the need 
for deterministic parallelism.
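As a tiny illustration of that deterministic style, here is the usual
sketch with par/pseq from Control.Parallel (Strategies' parMap is the
same idea packaged up):

  import Control.Parallel (par, pseq)

  -- Sparking one branch with `par` can only change how fast the answer
  -- arrives, never what the answer is: there is no shared mutable
  -- state to get wrong.
  nfib :: Int -> Integer
  nfib n
    | n < 2     = 1
    | otherwise = x `par` (y `pseq` x + y)
    where
      x = nfib (n - 1)
      y = nfib (n - 2)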


We make a clean distinction between Concurrency and Parallelism. 
Concurrency is a _programming paradigm_, wherein threads are used 
typically for dealing with multiple asynchronous events from the 
environment, or for structuring your program as a collection of 
interacting agents.  Parallelism, on the other hand, is just about 
making your programs go faster.  You shouldn't need threads to do 
parallelism, because there are no asynchronous stimuli to respond to. 
It just so happens that it's possible to run a concurrent program in 
parallel on a multiprocessor, but that's just a bonus.  I guess the main 
point I'm making is that to make your program go faster, you shouldn't 
have to make it concurrent.  Concurrent programs are hard to get right, 
parallel programs needn't be.


Cheers,
Simon

2) What if you have lots of processors? Does that change the picture 
any? That is, if you use isolated processes with message passing and you 
have as many processors as you want, do you still think you need 
shared-memory threading?


A comment on the issue of serialization -- note that any time you need 
to protect shared memory, you use some form of serialization. Even 
optimistic methods guarantee serialization, even if it happens after the 
memory is corrupted, by backing up to the uncorrupted state. The effect 
is the same; only one thread can access the shared state at a time.


On Tue, Sep 9, 2008 at 4:03 AM, Sebastian Sylvan 
[EMAIL PROTECTED] wrote:




On Mon, Sep 8, 2008 at 8:33 PM, Bruce Eckel [EMAIL PROTECTED] wrote:

As some of you on this list may know, I have struggled to understand
concurrency, on and off for many years, but primarily in the C++ and
Java domains. As time has passed and experience has stacked up,
I have
become more convinced that while the world runs in parallel, we
think
sequentially and so shared-memory concurrency is impossible for
programmers to get right -- not only are we unable to think in
such a
way to solve the problem, the unnatural domain-cutting that
happens in
shared-memory concurrency always trips you up, especially when the
scale increases.

I think that the inclusion of threads and locks in Java was just a
knee-jerk response to solving the concurrency problem. Indeed, there
were subtle threading bugs in the system until Java 5. I personally
find the Actor model to be most attractive when talking about
threading and objects, but I don't yet know where the limitations of
Actors are.

However, I keep running across comments where people claim they
must
have shared memory concurrency. It's very hard for me to tell
whether
this is just because the person knows threads or if there is
truth to
it. 

 
For correctness, maybe not, for efficiency, yes definitely!
 
Imagine a program where you 

Re: [Haskell-cafe] Can you do everything without shared-memory concurrency?

2008-09-10 Thread Maarten Hazewinkel


On 10 Sep 2008, at 20:28, Ryan Ingram wrote:


On Wed, Sep 10, 2008 at 2:55 AM, Maarten Hazewinkel
[EMAIL PROTECTED] wrote:

[on transaction failures in databases and STM]


This seems to be a bit too much F.U.D. for STM.  As long as you avoid
unsafeIOToSTM (which you really should; that function is far more evil
than unsafePerformIO), the only failure case for current Haskell STM
is starvation; some thread will always be making progress and you do
not have to explicitly handle failure.

This is absolutely guaranteed by the semantics of STM: no effects are
visible from a retrying transaction--it just runs again from the
start.  You don't have to write proper error handling code for
transactional updates failing.
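For concreteness, a small sketch of such a transaction (using the stm
package's TVars and retry; the account type is just made up for the
example):

  import Control.Concurrent.STM
    (STM, TVar, atomically, readTVar, writeTVar, retry)

  type Account = TVar Integer

  -- If the balance is too low the transaction retries: none of its
  -- effects are visible, and it re-runs when the TVar changes.
  withdraw :: Account -> Integer -> STM ()
  withdraw acc amount = do
    balance <- readTVar acc
    if balance < amount
      then retry
      else writeTVar acc (balance - amount)

  transfer :: Account -> Account -> Integer -> IO ()
  transfer from to amount = atomically $ do
    withdraw from amount
    balance <- readTVar to
    writeTVar to (balance + amount)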


Thanks for the clarification Ryan.

As a hobbyist I haven't actually used STM, so I was grouping it
with databases as the only transactional system I am directly familiar
with.

I suppose I could have guessed that the Haskell community would come
up with something that's a class better than a normal shared database.


Regards,

Maarten Hazewinkel
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Haskell as a scripting language.

2008-09-10 Thread David F. Place
Hi, All.

I needed to make a batch of edits to some input files after a big change
in my program.  Normally, one would choose some scripting language, but
I can't bear to program in them.   The nasty thing about using Haskell
is that giving regexes as string constants sometimes requires two levels
of quoting.  For instance, (mkRegex "\\\\") matches "\\".
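For example (a sketch using Text.Regex's matchRegex and subRegex):

  import Text.Regex (mkRegex, matchRegex, subRegex)

  -- The regex that matches one literal backslash is the two characters
  -- \\ , which as a Haskell string literal becomes "\\\\".
  hasBackslash :: String -> Bool
  hasBackslash s = matchRegex (mkRegex "\\\\") s /= Nothing

  -- The same double escaping when rewriting: backslashes to slashes.
  toSlashes :: String -> String
  toSlashes s = subRegex (mkRegex "\\\\") s "/"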

To get around that, I put the regexes in the head of a literate program
and let the program gobble itself up.  Works great!  I think I'll always
turn to Haskell for my scripting needs now.  

I put the file in the Haskell Pastebin, if you would like to see it.

http://hpaste.org/10249

Cheers, David




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell as a scripting language.

2008-09-10 Thread Olex P
Hi guys,

Any ideas how to integrate Haskell into other software as scripting engine?
Similarly to Python in Blender or GIMP or to JavaScript in the products from
Adobe. Which possibilities we have?

Cheers,
Alex.

On Wed, Sep 10, 2008 at 10:20 PM, David F. Place [EMAIL PROTECTED] wrote:

 Hi, All.

 I needed to make a batch of edits to some input files after a big change
 in my program.  Normally, one would choose some scripting language, but
 I can't bear to program in them.   The nasty thing about using Haskell
 is that giving regexes as string constants sometimes requires two levels
 of quoting.  For instance, (mkRegex "\\\\") matches "\\".

 To get around that, I put the regexes in the head of a literate program
 and let the program gobble itself up.  Works great!  I think I'll always
 turn to Haskell for my scripting needs now.

 I put the file in the Haskell Pastebin, if you would like to see it.

 http://hpaste.org/10249

 Cheers, David




 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell and Java

2008-09-10 Thread Brian Alliet
On Wed, Sep 10, 2008 at 02:12:00PM +0200, Marc Weber wrote:
 There used to be.
 http://www.cs.rit.edu/~bja8464/lambdavm/
 (Last darcs change log entry:
 Sun Oct 21 03:05:20 CEST 2007  Brian Alliet [EMAIL PROTECTED]
   * fix build for hsjava change
 )

Sorry about this. I hit a critical mass of darcs conflicts (I look
forward to giving git a try) around the same time I got really busy at
work. I've been meaning to get back into it (and update it to GHC HEAD)
but I haven't received sufficient nagging yet. Please yell if you're
interested in LambdaVM. At the very least I should be able to help get
whatever is in darcs compiling.

-Brian
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell as a scripting language.

2008-09-10 Thread John Goerzen
On Wed, Sep 10, 2008 at 05:20:54PM -0400, David F. Place wrote:
 Hi, All.
 
 I needed to make a batch of edits to some input files after a big change
 in my program.  Normally, one would choose some scripting language, but
 I can't bear to program in them.   The nasty thing about using Haskell
 is that giving regexes as string constants sometimes requires two levels
 of quoting.  For instance, (mkRegex "\\\\") matches "\\".

I've been tempted to write a preprocessor that would accept
Python-like strings, such as r'foo'  (raw, with no backslash
interpolation).  And while we're at it, transform things like

  "Hi there, ${name}!"

into

  "Hi there, " ++ name ++ "!"

A dumb preprocessor should not be all that hard to write, I should
think.

Oh, and here docs would also be lovely.
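The interpolation half is only a few lines even done at runtime -- a
sketch (the name interpolate and its treatment of unknown names are
made up):

  -- Expand ${name} references from an association list; names that
  -- aren't found are left untouched.
  interpolate :: [(String, String)] -> String -> String
  interpolate env = go
    where
      go ('$':'{':rest) =
        case break (== '}') rest of
          (name, '}':rest') ->
            maybe ("${" ++ name ++ "}") id (lookup name env) ++ go rest'
          _                 -> "${" ++ rest
      go (c:cs) = c : go cs
      go []     = []

  -- interpolate [("name", "world")] "Hi there, ${name}!"
  --   == "Hi there, world!"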

 To get around that, I put the regexes in the head of a literate program
 and let the program gobble itself up.  Works great!  I think I'll always
 turn to Haskell for my scripting needs now.  

Whoa, that is sneaky and clever.  But it will fail the minute you try
to run this on a compiled program, because then getProgName will give
you the binary executable.

-- John

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell as a scripting language.

2008-09-10 Thread Jason Dagit
2008/9/10 Olex P [EMAIL PROTECTED]:
 Hi guys,

 Any ideas how to integrate Haskell into other software as scripting engine?
 Similarly to Python in Blender or GIMP or to JavaScript in the products from
 Adobe. Which possibilities we have?

This is also very interesting to me.  At my day job we use embedded
python and COM access as the extension languages to a very large
application we produce.  I would love to augment the embedded
python with an option for haskell scripts.  I doubt I'd get the
go-ahead with such a change, but it's fun to dream about and propose
regardless.

Jason
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell as a scripting language.

2008-09-10 Thread David F. Place
On Wed, 2008-09-10 at 16:57 -0500, John Goerzen wrote:
 Whoa, that is sneaky and clever.  But it will fail the minute you try
 to run this on a compiled program, because then getProgName will give
 you the binary executable.

So, I won't do that.  In addition to getProgName getting the binary,
the compiler would have thrown away the comments.  The point is that
this is a convenient way to write simple scripts.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hackage policy question

2008-09-10 Thread Duncan Coutts
On Wed, 2008-09-10 at 11:49 -0400, Brandon S. Allbery KF8NH wrote:
 On 2008 Sep 10, at 6:48, Wolfgang Jeltsch wrote:
  On Wednesday, 10 September 2008 at 11:47, you wrote:
  So we should think about how to make it less confusing. Perhaps like
  distributors use an extra revision number we should do the same.
 
  Yes, maybe this is the way to go.
 
 
 Everyone who manages packages runs into this, and all of them use  
 revision numbers like this.  (.rN for gentoo was already mentioned;  
 BSD ports and MacPorts use _, RPM uses -.  Depot collections at CMU  
 also use -.)
 
 And while we're on that topic, most of them also have an epoch which  
 overrides the version number.  If for some reason an updated package  
 *doesn't* change the version, or goes backwards (because of a major  
 bug leading to backing off the new release), you increase the epoch so  
 dependent packages don't get confused when it's re-released.  If we're  
 considering modifying hackage's versioning, we should probably decide  
 if we want/need this now instead of having to add it in later when  
 something major goes *boom*.

We've thought about this and we think we do not need epoch numbers since
we're in the lucky position of doing the upstream versioning.

http://hackage.haskell.org/trac/hackage/ticket/135

Hmm, I think the discussion on that ticket must have been in an email
thread in cabal-devel.

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Heads Up: code.haskell.org is upgrading to darcs 2

2008-09-10 Thread Duncan Coutts
This email is for darcs users in general and in particular for people
who host a project on code.haskell.org.

What we are doing
=

We are upgrading /usr/bin/darcs to version 2 on the machine that hosts
code.haskell.org.

That means it will be used by everyone who uses ssh to push or pull from
a darcs repository on code.haskell.org. Pulling via http is completely
unaffected.

Do I have to do anything?
=

No. You do not have to change anything.

darcs v2 can talk to darcs v1 clients perfectly well so you do not need
to upgrade your local version of darcs.

You do not need to change the format of your local or server-side
repositories. Darcs v2 works fine with darcs v1 format repositories.

Should I change anything?
=

We recommend that you upgrade your local darcs to version 2.

This will allow you to take advantage of substantially faster push/pull
operations over ssh (darcs 2 makes far fewer ssh connections).

As noted above however it is not necessary for you to upgrade or for you
to synchronise your upgrading with anyone else who uses the same
repository.

It is also possible to use a darcs v1.5 hashed format locally
and continue to use darcs v1 format on the server side. This has some
performance and reliability advantages. Again this is something you can
decide yourself without coordinating with other users of the repository.

If your development style means that you bump into the infamous merging
problems with darcs v1 format then you may consider converting your
repository to the darcs v2 format. Remember that using darcs 2 with
darcs v1 format repositories does not eliminate the merging issue. To
banish the merging problems you have to use the darcs v2 format. This is
however a more substantial change and has to be synchronised between
users of the repository because it involves converting the server side
repository to v2 format and all users getting the repository again. So
if merging is an issue for you then you should discuss it with other
users of your repository.

Is this change safe?
=

Yes. 

Darcs 2 has a substantial test suite. It has unit tests covering the
core patch operations and also over 100 scripts doing functional
testing, including network tests. The darcs build-bots run all these
tests on all the popular platforms.

Additionally, the code in darcs 2 for handling the v1 format is mostly
the same as in darcs 1, so there are no big risks in using darcs 2 while
continuing to use darcs v1 format repositories. The main change in the
v1 format code is bug fixes, instrumentation code for
debugging/introspection and more cunning types in the patch handling
code that enforces some of the patch invariants.

Furthermore, we have done real world tests using copies of all the 153
repositories on code.haskell.org which comes to around 2GB.

We wanted to verify a couple things:
 1. that using darcs 1 on the client and darcs 2 on the server works
fine to pull and push patches. This corresponds to a project
where all the users are still using darcs 1.
 2. that using a mixture of darcs 1 and darcs 2 clients works when
pushing and pulling patches between the clients via the server.
This corresponds to a project where some users have upgraded but
others have not yet.

We tested with darcs 1.0.9 and 2.0.2 in three combinations:
  client darcs 1, server darcs 1
  client darcs 1, server darcs 2
  client darcs 2, server darcs 2

The test consisted of obliterating a significant number of patches from
each repository and pushing them back. In a separate experiment each
repository was converted to darcs v2 format which worked without problem
in every case.

What if I have problems?
=

Contact [EMAIL PROTECTED] We have made backups of all the
repositories in case there are any critical problems.


Thanks to the darcs hackers Eric Kow and Jason Dagit for doing all the
hard testing work.

Duncan, Ian and Malcolm
(part of the community server admin team)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hackage policy question

2008-09-10 Thread Brandon S. Allbery KF8NH

On 2008 Sep 10, at 17:51, Duncan Coutts wrote:
dependent packages don't get confused when it's re-released.  If we're
considering modifying hackage's versioning, we should probably decide
if we want/need this now instead of having to add it in later when
something major goes *boom*.


We've thought about this and we think we do not need epoch numbers since
we're in the lucky position of doing the upstream versioning.


Are we?  I think the package author has final say if a package needs  
to be backed off, and any packages released between the rollback and  
the next release with dependencies on the backed-off package will be  
problematic, no matter how draconian hackage's version checking is.   
(This is a different situation from datecode versions as in the trac  
ticket.)


--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] [EMAIL PROTECTED]
system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED]
electrical and computer engineering, carnegie mellon universityKF8NH


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hackage policy question

2008-09-10 Thread Duncan Coutts
On Wed, 2008-09-10 at 18:35 -0400, Brandon S. Allbery KF8NH wrote:
 On 2008 Sep 10, at 17:51, Duncan Coutts wrote:
  dependent packages don't get confused when it's re-released.  If  
  we're
  considering modifying hackage's versioning, we should probably decide
  if we want/need this now instead of having to add it in later when
  something major goes *boom*.
 
  We've thought about this and we think we do not need epoch numbers  
  since
  we're in the lucky position of doing the upstream versioning.
 
 Are we?  I think the package author has final say if a package needs  
 to be backed off, and any packages released between the rollback and  
 the next release with dependencies on the backed-off package will be  
 problematic, no matter how draconian hackage's version checking is.   
 (This is a different situation from datecode versions as in the trac  
 ticket.)

I'm not quite sure I follow. Certainly it's the author/maintainer who
decides the version number. It's up to them to pick it, but they know
the ordering of version numbers.

As I understand it, epochs were mainly introduced to cope with
un-cooperative upstream maintainers whereas here maintainers already
have to specify a version number in the Cabal/Hackage scheme and there's
no way for them to pretend or unilaterally declare that 3 < 2 or any
other such silliness.

To account for having experimental versions available at the same time
as stable versions we're planning to let maintainers add a
suggested/soft version constraint. Is that related to what you mean by
backing off and rollback?

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hackage policy question

2008-09-10 Thread Brandon S. Allbery KF8NH

On 2008 Sep 10, at 18:43, Duncan Coutts wrote:

On Wed, 2008-09-10 at 18:35 -0400, Brandon S. Allbery KF8NH wrote:

On 2008 Sep 10, at 17:51, Duncan Coutts wrote:

dependent packages don't get confused when it's re-released.  If we're
considering modifying hackage's versioning, we should probably decide
if we want/need this now instead of having to add it in later when
something major goes *boom*.


We've thought about this and we think we do not need epoch numbers since
we're in the lucky position of doing the upstream versioning.


Are we?  I think the package author has final say if a package needs
to be backed off, and any packages released between the rollback and
the next release with dependencies on the backed-off package will be
problematic, no matter how draconian hackage's version checking is.
(This is a different situation from datecode versions as in the trac
ticket.)


I'm not quite sure I follow. Certainly it's the author/maintainer who
decides the version number. It's up to them to pick it, but they know
the ordering of version numbers.

As I understand it, epochs were mainly introduced to cope with
un-cooperative upstream maintainers whereas here maintainers already
have to specify a version number in the Cabal/Hackage scheme and there's


That is one use.  The far more common use, at least in FreeBSD ports,  
is when a version of a port has to be backed off; if any subsequently  
released packages depend on the backed-off version, things get nasty  
when the port is re-updated later.  This may not involve the port  
author; it could be an unexpected interaction with an updated  
dependency under certain circumstances.


Backed off, in the FreeBSD context, means an older version of the  
port is restored from CVS; in the context of Hackage it means removing  
the broken version and making the previous version the current version.


Basically, you *do* have control over this... but it's possible for  
the effects to propagate rather widely before such an interaction is  
discovered, resulting in needing to revise either many packages to  
reflect the corrected dependency... or updating one epoch.


--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] [EMAIL PROTECTED]
system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED]
electrical and computer engineering, carnegie mellon universityKF8NH

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hackage policy question

2008-09-10 Thread Duncan Coutts
On Wed, 2008-09-10 at 18:53 -0400, Brandon S. Allbery KF8NH wrote:

  As I understand it, epochs were mainly introduced to cope with
  un-cooperative upstream maintainers whereas here maintainers already
  have to specify a version number in the Cabal/Hackage scheme and  
  there's
 
 That is one use.  The far more common use, at least in FreeBSD ports,  
 is when a version of a port has to be backed off; if any subsequently  
 released packages depend on the backed-off version, things get nasty  
 when the port is re-updated later.  This may not involve the port  
 author; it could be an unexpected interaction with an updated  
 dependency under certain circumstances.
 
 Backed off, in the FreebSD context, means an older version of the  
 port is restored from CVS; in the context of Hackage it means removing  
 the broken version and making the previous version the current version.

Ok, so we never remove packages. So that's ok. You could fix the above
problem in two ways: adjust the suggested version constraint (which is
of course still vapourware) or upload a new version.

(Well, we sometimes remove packages that were uploaded without the
consent of the author. But that does not apply here.)

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: STM and FFI

2008-09-10 Thread ChrisK

There are some examples of adding IO actions to commit and rollback events at

http://www.haskell.org/haskellwiki/New_monads/MonadAdvSTM

Disclaimer: I wrote this instance of the code, but have not used it much.

Cheers,
  Chris

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hackage policy question

2008-09-10 Thread Ross Paterson
On Wed, Sep 10, 2008 at 10:07:54AM +, Duncan Coutts wrote:
 On Wed, 2008-09-10 at 10:26 +0100, Ross Paterson wrote:
  So if Debian or Gentoo etc repackage one of these packages in their
  distributions, what is the pristine tarball that they use?
 
 They use the one and only pristine tarball. As for what meta-data they
 choose, that's up to them, they have the choice of using the
 original .cabal file in the .tar.gz or taking advantage of the updated
 version.

Since the modified version may contain essential fixes, they'll almost
certainly want that.  And they'll want to keep their mods separate, so
the updated .cabal file becomes another part of the upstream source, with
secondary versioning.  To which they add a third level of versioning for
their distro packaging.  And all this versioning is real; it's keeping
track of significant changes at each stage.

And then there's the psychological effect.  Make it easier to clean up
broken releases afterwards, and you'll have more of them.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Field names

2008-09-10 Thread Mauricio

 Do you have any reference for that use of infixing
 constructors by starting their name with ':'?
 (...)

 (...) for data constructors, go to

 http://haskell.org/onlinereport/lexemes.html

 and search for `Operator symbols'. (...)

Here it is:

  “Operator symbols are formed from one or more
  symbol characters, as defined above, and are
  lexically distinguished into two namespaces
  (Section 1.4):

* An operator symbol starting with a colon is
  a constructor.(...)”

Cool! What is the syntax for using that in 'data'?
Is it something like “data X = Y | Int :° Double”?

Maurício

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Haskell and Java

2008-09-10 Thread Mauricio

There used to be.
http://www.cs.rit.edu/~bja8464/lambdavm/
(Last darcs change log entry:
Sun Oct 21 03:05:20 CEST 2007  Brian Alliet [EMAIL PROTECTED]
  * fix build for hsjava change
)


Sorry about this. I hit a critical mass of darcs conflicts (I look
forward to giving git a try) around the same time I got really busy at
work. I've been meaning to get back into it (and update it to GHC HEAD)
but I haven't received sufficient nagging yet. Please yell if you're
interested in LambdaVM. At the very least I should be able to help get
whatever is in darcs compiling.


Would it allow Haskell to also call Java code,
besides running in JVM?

I think I'm not enough to nag you alone. However,
anybody who allows a Haskell programmer to avoid
using other languages can be sure to have brought a
lot of happiness to this sad world. If you can do it,
I don't know how you can resist :)

Best,
Maurício

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Haskell and Java

2008-09-10 Thread wren ng thornton

Mauricio wrote:

 Sorry about this. I hit a critical mass of darcs conflicts (I look
 forward to giving git a try) around the same time I got really busy at
 work. I've been meaning to get back into it (and update it to GHC HEAD)
 but I haven't received sufficient nagging yet. Please yell if you're
 interested in LambdaVM. At the very least I should be able to help get
 whatever is in darcs compiling.

Would it allow Haskell to also call Java code,
besides running in JVM?

I think I'm not enough to nag you alone.



If you're looking for more people to nag you... I'm working on a 
compiler for a new declarative language. Right now we're using Haskell 
for a proof-of-concept interpreter, though one of the near-term goals 
for the compiler itself is a Java FFI/backend. Since much of the 
language is in the runtime engine, it'd be a shame to have to rewrite it 
all in Java or deal with the horror of JNI.


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Field names

2008-09-10 Thread Richard A. O'Keefe


On 11 Sep 2008, at 3:54 am, Brandon S. Allbery KF8NH wrote:
I think that only counts as the origin of the idea; aren't :-prefixed  
infix constructors a ghc-ism?


Haskell 98 report, page 10:
An operator symbol starting with a colon is a constructor.

(I seem to have four copies of the report on my Mac...)




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Field names

2008-09-10 Thread Jonathan Cast
On Wed, 2008-09-10 at 21:32 -0300, Mauricio wrote:
  Do you have any reference for that use of infixing
  constructors by starting their name with ':'?
   (...)
 
   (...) for data constructors, go to
  
   http://haskell.org/onlinereport/lexemes.html
  
   and search for `Operator symbols'. (...)
 
 Here it is:
 
“Operator symbols are formed from one or more
symbol characters, as defined above, and are
lexically distinguished into two namespaces
(Section 1.4):
 
  * An operator symbol starting with a colon is
a constructor.(...)”
 
 Cool! What is the syntax for using that in 'data'?
 Is it something like “data X = Y | Int :° Double”?

Right.  So then

(:°) :: Int -> Double -> X
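
Spelled out as a toy example, with a pattern match on the infix
constructor (and you can give it a fixity with e.g. infixr 5 :°):

  data X = Y | Int :° Double

  scale :: X -> Double
  scale Y        = 0
  scale (n :° d) = fromIntegral n * d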

jcc


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe