RE: GHC targetting Java?

2005-11-23 Thread Simon Peyton-Jones


| On a similar note, how about the .NET support?  Can it work on a
| platform where the only .NET support is Mono?

This comes up regularly. Here's my message from January:
http://www.haskell.org/pipermail/glasgow-haskell-users/2005-January/007594.html

Simon

===
It'd make a lot of sense to give GHC a .NET back end, and it's a
question that comes up regularly.  The reason that we haven't done it
here, at GHC HQ, is because it's a more substantial undertaking than
might at first appear (see below).  Furthermore, it'd permanently add a
complete new back-end platform for us to maintain.  Given our rather
limited development effort (= Simon and me), we have so far not bitten
the bullet, and we have no immediate plans to do so.

It'd be a good, well-defined project for someone else to tackle, and
there is some good groundwork already done:

* Sigbjorn Finne did a simple interop implementation that allows a
  Haskell program to be compiled to native code (as now) but to call
  .NET programs via a variant of the FFI.  I don't think this work is
  in active use, and I'd be surprised if it worked out of the box, but
  it could probably be revived with modest effort.

* Andre Santos and his colleagues at UFPE in Brazil are working on a
  .NET back end that generates CLR IL, though I don't know where they
  are up to.

* [This para wasn't in the original message]
  Nigel Perry and Oliver Hunt have a Haskell.NET prototype that works
  using GHC to compile to Core, and then compiling Core to .NET.  I'm
  not sure what stage it is at.

* GHC.Net would be extra attractive if there were Visual Studio
  integration for GHC.  Substantial progress on this was made in
  2004 by Simon Marlow, Krasimir Angelov, and Andre Santos and
  colleagues.

There may be others that I don't know of.  If anyone wants to join in
this effort, do contact the above folk.  And please keep us informed!

Simon


Here's a summary of why it's a non-trivial thing to do:

* The first thing is to generate native CLR Intermediate Language
  (IL). That's not really hard.  It requires thinking about
  representations for thunks and functions, and the result may not be
  particularly efficient, but it can surely be done.  An open question
  is about whether to generate verifiable IL or not.  The trouble here
  is that Haskell's type system is more expressive than the CLR's in
  some ways, notably the use of higher-kinded type variables.  So, to
  generate verifiable IL one is bound to need some run-time casts, and
  it's not clear how to minimise these.
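
  For example (an illustration added here, not part of the original
  message): a type that abstracts over a type constructor uses a
  higher-kinded type variable, and CLR generics, which quantify only
  over ground types, have no direct counterpart for it:

    -- 'f' has kind * -> *; CLR generics cannot express this, so a
    -- verifiable encoding needs run-time casts at uses of 'f Int'.
    data Wrap f = Wrap (f Int)

    wrapMaybe :: Wrap Maybe
    wrapMaybe = Wrap (Just 3)

    wrapList :: Wrap []
    wrapList = Wrap [1, 2, 3]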

At first blush this is *all* you need do.  But it isn't!

* Next, you need to think about how to inter-operate with .NET
  libraries.  You don't really want to write a foreign import ... for
  each and every function you import.  You'd like GHC to read the CLR
  meta-data directly.  But there are lots of tricky issues here; see
  the paper that Mark Shields and I wrote, "Object-Oriented Style
  Overloading for Haskell".
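
  To see the per-function boilerplate involved (an illustrative sketch
  using the standard C FFI; a .NET binding would need a comparable
  declaration for every imported method):

    {-# LANGUAGE ForeignFunctionInterface #-}
    import Foreign.C.Types (CDouble)

    -- One declaration per imported function; reading the CLR
    -- meta-data directly would generate all of this automatically.
    foreign import ccall unsafe "math.h sin" c_sin :: CDouble -> CDouble
    foreign import ccall unsafe "math.h cos" c_cos :: CDouble -> CDouble

    main :: IO ()
    main = print (c_sin 0.5, c_cos 0.5)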

* Now you need to figure out how to implement GHC's primitive
  operations:
    - the I/O monad
    - arbitrary precision arithmetic
    - concurrency
    - exceptions
    - finalisers
    - stable pointers
    - software transactional memory

  Not all of these are necessary, of course, but many are used in the
  libraries.  The CLR supports many of them (e.g. concurrency) but
  with a very different cost model.

* Last, you have to figure out what to do for the libraries.  GHC has
  a pretty large library, and you either have to implement the primops
  on which the library is based (see previous point), or re-implement
  it.  For example, GHC's implementation of I/O uses mutable state,
  concurrency, and more besides. For each module, you need to decide
  either to re-implement it using .NET primitives, or to implement the
  stuff the module is based on.
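
  As a concrete illustration (a simplified sketch, not GHC's actual
  definition): a Handle already couples file I/O to mutable state and
  concurrency, so a port must provide both:

    import Control.Concurrent.MVar

    -- Simplified stand-in for GHC's Handle: the MVar provides
    -- thread-safe mutable state around the descriptor and buffer.
    data HandleState = HandleState
      { hsFd     :: Int     -- underlying file descriptor
      , hsBuffer :: String  -- stand-in for the real byte buffer
      }

    newtype Handle = Handle (MVar HandleState)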

These challenges are mostly broad rather than deep.  But to get a
production quality implementation that runs a substantial majority of
Haskell programs out of the box requires a decent stab at all of them.
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: adding to GHC/win32 Handle operations support of Unicode filenames and files larger than 4 GB

2005-11-23 Thread Simon Marlow
This sounds like a good idea to me.

As far as possible, we should keep the platform-dependence restricted to
the implementation of one module (System.Posix.Internals will do, even
though this isn't really POSIX any more).  So System.Posix.Internals
exports the CFilePath/CFileOffset types, and the foreign functions that
operate on them.

Alternatively (and perhaps this is better), we could hide the difference
even further, and provide functions like

  rmDir :: FilePath -> IO CInt

in System.Posix.Internals.  Similarly for functions that operate on
COff, they would take/return Integer (e.g. we already have
System.Posix.fdFileSize).
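
A minimal sketch of that alternative (assuming the CFilePath machinery
from Bulat's proposal quoted below; errno checking elided):

  -- Hides the platform difference entirely inside the module.
  rmDir :: FilePath -> IO CInt
  rmDir path = withCFilePath path c_rmdir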

As regards whether to use feature tests or just #ifdef mingw32_HOST_OS,
in general feature tests are the right thing, but sometimes it doesn't
buy you very much when there is (and always will be) only one platform
that has some particular quirk.  Writing a bunch of autoconf code that
would, if we're lucky, handle properly the case when some future version
of Windows removes the quirk, is not a good use of developer time.
Furthermore, Windows hardly ever changes APIs; they just add new ones.
So I don't see occasional use of #ifdef mingw32_HOST_OS as a big deal.
It's more important to organise the codebase and make sure all the
#ifdefs are behind suitable abstractions.

Cheers,
Simon

On 21 November 2005 12:01, Bulat Ziganshin wrote:

 Simon, what do you say about the following plan?
 
 GHC/win32 currently doesn't support operations on files with Unicode
 filenames, nor can it tell/seek to positions larger than 4 GB. This is
 because the Unix-compatible functions open/fstat/tell/... supported in
 Mingw32 work only with char[] for filenames and off_t (which is 32
 bits) for file sizes/positions.
 
 Half a year ago I discussed with Simon Marlow how support for Unicode
 names and large files could be added to GHC. I have now implemented my
 own library for such files, and have an idea of how this can be
 incorporated into GHC with minimal effort:
 
 GHC currently uses the CString type to represent C-land filenames and
 the COff type to represent C-land file sizes/positions. We need to
 systematically change these usages to CFilePath and CFileOffset,
 respectively, defined as follows:
 
 #ifdef mingw32_HOST_OS
 type CFilePath = LPCTSTR
 type CFileOffset = Int64
 withCFilePath = withTString
 peekCFilePath = peekTString
 #else
 type CFilePath = CString
 type CFileOffset = COff
 withCFilePath = withCString
 peekCFilePath = peekCString
 #endif
 
 and of course change the uses of withCString/peekCString, where applied
 to filenames, to withCFilePath/peekCFilePath (this will touch the
 modules System.Posix.Internals, System.Directory and GHC.Handle)
 
 The last change needed is to conditionally define all the c_* functions
 in System.Posix.Internals whose types refer to filenames or offsets:
 
 #ifdef mingw32_HOST_OS
 foreign import ccall unsafe "HsBase.h _wrmdir"
    c_rmdir :: CFilePath -> IO CInt
 #else
 foreign import ccall unsafe "HsBase.h rmdir"
    c_rmdir :: CFilePath -> IO CInt
 #endif
 
 (note that the actual C function used is _wrmdir on Windows and rmdir
 on Unix). Of course, all such functions defined in HsBase.h also need
 to be defined conditionally, like:
 
 #ifdef mingw32_HOST_OS
 INLINE time_t __hscore_st_mtime ( struct _stati64* st ) { return st->st_mtime; }
 #else
 INLINE time_t __hscore_st_mtime ( struct stat* st ) { return st->st_mtime; }
 #endif
 
 That's all! Of course, this will break compatibility with current
 programs which directly use these c_* functions (c_open, c_lseek,
 c_stat and so on). This may be an issue for some libs. Does anyone
 really use these functions? Of course, we could go another, fully
 backward-compatible way, by adding some f_* functions and changing
 the high-level modules to work with those functions.

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Patch: Add support for using --mk-dll with --make

2005-11-23 Thread Simon Marlow
On 21 November 2005 22:03, Esa Ilari Vuokko wrote:

 Attached is a small, simple patch that allows using --mk-dll with
 --make.  The behaviour before the patch was to link an .exe instead of
 a .dll, as batchmode simply called staticLink; now doMkDLL is called
 instead.
 
 * Add support for using --mk-dll with --make
 
 I changed the documentation a bit, and moved --mk-dll from 4.17.3
 Alternative modes of operation into 4.17.21 Linking options.  This
 documentation change isn't tested, as I don't have the necessary tools
 installed on my machine.  But it *seems* harmless (and, I think, more
 intuitive).  Due to the lack of tools, and because there's little to
 say, I didn't touch 11.5 Building and using Win32 DLLs.
 
 The patch should apply cleanly on CVS HEAD with patch -p0 under
 fptools.  Any comments are welcome.

Committed, thanks!

Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Problems with 6.x compilers

2005-11-23 Thread Simon Marlow
On 22 November 2005 17:18, Michael Marte wrote:

 I am having some annoying problems with the 6.x compilers:
 
 6.4 and 6.4.1: When repeating a build (with ghc --make) all modules
 are rebuilt even if nothing has changed. With earlier compilers,
 only linking takes place in this setting. Can I change this
 behaviour? I cannot develop this way.

That should not happen; if it does, there is a bug.  Can you tell us how
to repeat the behaviour you're seeing?  We aren't seeing any unnecessary
recompilation here.

 6.2 and up: The object file is named after the module, not after the
 source file as in 5.04. As there are several main modules in my
 project, each in its own source file, all I get is a big mess. Any
 suggestions?

The object file is named after the source file, unless you use -odir.
The behaviour is clearly specified in the user's guide:

http://www.haskell.org/ghc/docs/latest/html/users_guide/separate-compilation.html#output-files

The behaviour went through several revisions before we arrived at this
one, and I'm not keen to change it.

You can have several Main modules in the same directory if you don't use
-odir.  If you do use -odir, then you need to use a different value for
-odir for each different program.  Alternatively use -main-is, as Bulat
suggested (actually this is better, IMO).

Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: GHC targetting Java?

2005-11-23 Thread Simon Marlow
On 23 November 2005 09:32, Simon Peyton-Jones wrote:

 On a similar note, how about the .NET support?  Can it work on a
 platform where the only .NET support is Mono?
 
 This comes up regularly. Here's my message from January:

http://www.haskell.org/pipermail/glasgow-haskell-users/2005-January/007594.html

Added to the FAQ.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Problems with 6.x compilers

2005-11-23 Thread kahl
  
   6.4 and 6.4.1: When repeating a build (with ghc --make) all modules
   are rebuilt even if nothing has changed. With earlier compilers,
   only linking takes place in this setting. Can I change this
   behaviour? I cannot develop this way.
  
  That should not happen; if it does, there is a bug.  Can you tell us
  how to repeat the behaviour you're seeing?  We aren't seeing any
  unnecessary recompilation here.

I think I had that effect exactly when using   -ddump-minimal-imports


Wolfram
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Problems with 6.x compilers

2005-11-23 Thread Michael Marte
Simon,

On Wed, 23 Nov 2005, Simon Marlow wrote:

 On 22 November 2005 17:18, Michael Marte wrote:
 
  I am having some annoying problems with the 6.x compilers:
  
  6.4 and 6.4.1: When repeating a build (with ghc --make) all modules
  are rebuilt even if nothing has changed. With earlier compilers,
  only linking takes place in this setting. Can I change this
  behaviour? I cannot develop this way.
 
 That should not happen; if it does, there is a bug.  Can you tell us
 how to repeat the behaviour you're seeing?  We aren't seeing any
 unnecessary recompilation here.

Like Wolfram Kahl, I am using -ddump-minimal-imports.
I cannot check whether the use of this option causes the problem, because
I have already gone back to 6.2 to continue porting my project from GHC 5.

  6.2 and up: The object file is named after the module, not after the
  source file as in 5.04. As there are several main modules in my
  project, each in its own source file, all I get is a big mess. Any
  suggestions?
 
 The object file is named after the source file, unless you use -odir.
 The behaviour is clearly specified in the user's guide:
 
 http://www.haskell.org/ghc/docs/latest/html/users_guide/separate-compilation.html#output-files
 
 The behaviour went through several revisions before we arrived at this
 one, and I'm not keen to change it.
 
 You can have several Main modules in the same directory if you don't use
 -odir.  If you do use -odir, then you need to use a different value for
 -odir for each different program.  Alternatively use -main-is, as Bulat
 suggested (actually this is better, IMO).

I followed the suggestion of Bulat: I renamed the modules and used 
-main-is.

Thanks everybody!
Michael
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Checking for WHNF (a horrible naughty thing)

2005-11-23 Thread Jan-Willem Maessen
I would like to do a horrible naughty thing (which I promise never to  
expose to the world).  I would like to tell whether a term is in  
WHNF, without forcing evaluation of that term.  Something like:


isWHNF :: a -> Bool

Is there a way of doing this?  I can fake it with an IORef and much  
unsafeness, but I'm wondering if there's a safe-but-ugly way of doing  
the test in GHC.


If you're curious, I'm trying to compact exactly the evaluated spine  
of a list without changing the list's laziness in any way.  It  
remains to be seen whether this is even vaguely a good idea. :-)


-Jan-Willem Maessen

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


problem using ghc-6.5.20051122-i386-unknown-linux.tar.bz2

2005-11-23 Thread Konovalov, Vadim
I downloaded ghc-6.5.20051122-i386-unknown-linux.tar.bz2 and tried to use
it (as a bootstrap for building Haskell at first, but building Pugs fails
in the same way).

It installed fine.

My attempts to compile something with it result in the following error:

ghc-6.5.20051122: could not execute: gcc-3.4.3
ghc: 16599408 bytes, 3 GCs, 123124/123124 avg/max bytes residency (1
samples), 15M in use, 0.00 INIT (0.00 elapsed), 0.11 MUT (0.21 elapsed),
0.02 GC (0.03 elapsed) :ghc

Indeed, the gcc I have is version 3.3.4, which does not match the expected
gcc-3.4.3.

How can I avoid this problem?
Why is there such a precise GCC version requirement?

Best regards,
Vadim.
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Checking for WHNF (a horrible naughty thing)

2005-11-23 Thread Sigbjorn Finne

The appended snippet might help..

--sigbjorn

-- whnf.hs
import Foreign.StablePtr
import System.IO.Unsafe

isWHNF :: a -> Bool
isWHNF a = unsafePerformIO $ do
  stl <- newStablePtr a
  rc  <- isWhnf stl
  freeStablePtr stl
  return (rc /= 0)

foreign import ccall safe "isWhnf" isWhnf :: StablePtr a -> IO Int

/* whnf.c */
#include "Rts.h"
int
isWhnf(StgStablePtr st)
{
   StgClosure* c = (StgClosure*)(stable_ptr_table[(StgWord)st].addr);
   return !(closure_THUNK(c));
}
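
(One possible use, returning to the original question; a sketch only:
it walks just the already-evaluated prefix of a list, forcing nothing
new:)

evaluatedSpine :: [a] -> [a]
evaluatedSpine xs
  | isWHNF xs = case xs of      -- xs is already in WHNF, so this
      []     -> []              -- case forces nothing new
      (y:ys) -> y : evaluatedSpine ys
  | otherwise = []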

- Original Message - 
From: Jan-Willem Maessen [EMAIL PROTECTED]

To: glasgow-haskell-users glasgow-haskell-users@haskell.org
Sent: Wednesday, November 23, 2005 08:10
Subject: Checking for WHNF (a horrible naughty thing)


I would like to do a horrible naughty thing (which I promise never to  
expose to the world).  I would like to tell whether a term is in  
WHNF, without forcing evaluation of that term.  Something like:


isWHNF :: a -> Bool

Is there a way of doing this?  I can fake it with an IORef and much  
unsafeness, but I'm wondering if there's a safe-but-ugly way of doing  
the test in GHC.


If you're curious, I'm trying to compact exactly the evaluated spine  
of a list without changing the list's laziness in any way.  It  
remains to be seen whether this is even vaguely a good idea. :-)


-Jan-Willem Maessen

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re[2]: Patch for Word64

2005-11-23 Thread Bulat Ziganshin
Hello Simon,

Wednesday, November 23, 2005, 2:28:26 PM, you wrote:

SM   int64ToWord64# = unsafeCoerce#
SM   word64ToInt64# = unsafeCoerce#

SM this should reduce the cost of the conversions to zero, which is a
SM simpler way to fix the performance bug (if it works).

SM If you confirm that this also improves performance, I'll commit it.  If
SM not, I'll commit your patch instead.

Please don't forget about:

  int32ToWord32#
  word32ToInt32#
  intToInt32#
  int32ToInt#
  wordToWord32#
  word32ToWord#
  int2Word#
  word2Int#



-- 
Best regards,
 Bulat  mailto:[EMAIL PROTECTED]



___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re[2]: adding to GHC/win32 Handle operations support of Unicode filenames and files larger than 4 GB

2005-11-23 Thread Bulat Ziganshin
Hello Simon,

Wednesday, November 23, 2005, 2:22:02 PM, you wrote:

SM This sounds like a good idea to me.

SM As far as possible, we should keep the platform-dependence restricted to
SM the implementation of one module (System.Posix.Internals will do, even
SM though this isn't really POSIX any more).  So System.Posix.Internals
SM exports the CFilePath/CFileOffset types, and the foreign functions that
SM operate on them.

SM Alternatively (and perhaps this is better), we could hide the difference
SM even further, and provide functions like

SM   rmDir :: FilePath -> IO CInt

SM in System.Posix.Internals.  Similarly for functions that operate on
SM COff, they would take/return Integer (e.g. we already have
SM System.Posix.fdFileSize).

Well... but not quite well :)  Let's consider the function c_open as a
more informative example. Between the functions c_open and openFile
there are several levels of translation:

1) convert C types to Haskell types
2) check for errno and raise exception on error
3) convert interfaces (translate IOMode to CMode in this example)
4) convert file descriptors to Handles

Your suggestion is to build a middle-level library whose functions lie
between steps 1 and 2 in this scheme:

c_open :: CFilePath -> CInt -> CMode -> IO CInt
  1) convert C types to Haskell types
open :: String -> Int -> CMode -> IO Int
  2) check for errno
  3) convert interfaces
  4) convert file descriptors to Handles

This has one obvious benefit: these functions will look very much like
their C counterparts. But on the other hand, the resulting functions
will belong neither to the C world nor to the Haskell world: they will
use Haskell types but C-specific error signalling.

Moreover, adding such middle-level functions will not make the
implementation simpler: all the differences between platforms are
already covered by the definitions of
CFilePath/CFileOffset/withCFilePath/peekCFilePath.


Instead, I propose to place these middle-level functions after stage 2
or even 3 in this scheme, so that they will be fully in the Haskell
world, only working with file descriptors instead of Handles. For
example:

lseek :: Integral int => FD -> SeekMode -> int -> IO ()
lseek fd mode offset = do
  let whence :: CInt
      whence = case mode of
                 AbsoluteSeek -> sEEK_SET
                 RelativeSeek -> sEEK_CUR
                 SeekFromEnd  -> sEEK_END
  throwErrnoIfMinus1Retry_ "lseek" $
    c_lseek (fromIntegral fd) (fromIntegral offset) whence


Benefits:

1) The current GHC.Handle code is monolithic: it performs all 4 steps
of translation in one function. This change will simplify the module
and concentrate it on solving the single most complex task:
implementing operations on Handles via operations on FDs.

2) The part of the GHC.Handle code that is not really GHC-specific
will move to the standard hierarchical libraries, where it becomes
available to other Haskell implementations.

3) Alternative Handle implementations can use these middle-level
functions rather than reinvent the wheel. Just as an example: in
http://haskell.org/~simonmar/new-io.tar.gz the openFile code is mostly
copied from the existing GHC.Handle.

4) We will get a full-fledged FD library on GHC, Hugs and NHC for free.

5) If this FD library has a Handle-like interface, it can be used as a
poor man's drop-in replacement for the Handle library in situations
where we don't need its buffering and other advanced features.


So, as a first step, I propose to move the middle-level code from
GHC.Handle to Posix.Internals, join the FD type definitions, replace
CString with CFilePath where appropriate, and so on, and only after
this make the Windows-specific changes. I can do it all. What do you
say?


 That's all! Of course, this will break compatibility with current
 programs which directly use these c_* functions (c_open, c_lseek,
 c_stat and so on). This may be an issue for some libs. Does anyone
 really use these functions? Of course, we could go another, fully
 backward-compatible way, by adding some f_* functions and changing
 the high-level modules to work with those functions.

If my changes are committed only to the GHC 6.6 (HEAD) branch, the
changed types of the c_* functions will not be a big problem: some
interfaces change between major releases anyway. But I have now
realized that Posix.Internals is part of the libraries common to
several Haskell compilers. Can such changes break them?

Moreover, I plan to move throwErrnoIfMinus1RetryOnBlock to
Foreign.C.Error, and sEEK_CUR/sEEK_SET/sEEK_END to Posix.Internals.
Can that be done?


SM As regards whether to use feature tests or just #ifdef mingw32_HOST_OS,
SM in general feature tests are the right thing, but sometimes it doesn't
SM buy you very much when there is (and always will be) only one platform
SM that has some particular quirk.  Writing a bunch of autoconf code that
SM would, if we're lucky, handle properly the case when some future version
SM of Windows removes the quirk, is not a good use of developer time.
SM Furthermore, Windows hardly 

GHC 6.6

2005-11-23 Thread Jim Apple

Help build the anticipation:

http://haskell.org/hawiki/GHC_206_2e6

Present text:

GHC 6.6:
Will be out before May 2006.
Included:
 * Parallel GHC
 * Associated types with class
Maybe:
 * Impredicativity
 * GADT & Typeclass interaction
 * Data types as kinds
No:

Jim

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Patch for Word64

2005-11-23 Thread Matt

Bulat Ziganshin wrote:

Hello Simon,

Wednesday, November 23, 2005, 2:28:26 PM, you wrote:


  int64ToWord64# = unsafeCoerce#
  word64ToInt64# = unsafeCoerce#



this should reduce the cost of the conversions to zero, which is a
simpler way to fix the performance bug (if it works).



If you confirm that this also improves performance, I'll commit it.
If not, I'll commit your patch instead.


please don't forget about:

 int32ToWord32#
 word32ToInt32#
 intToInt32#
 int32ToInt#
 wordToWord32#
 word32ToWord#
 int2Word#
 word2Int#


These are already free, as I understand. When I compare Int, Word, Int#, 
Word#, Int32 (same thing as Int), and Word32 (same thing as Word), they have 
identical performance. It is Int64/Word64/Int64#/Word64# that are abysmal.


I will continue as time allows and make the suggested changes, but I thought 
that the original patch also makes sense. Compare these definitions from the 
original Word.hs file:

   (W# x#)   + (W# y#)   = W# (x# `plusWord#` y#)
   (W8# x#)  + (W8# y#)  = W8# (narrow8Word# (x# `plusWord#` y#))
   (W16# x#) + (W16# y#) = W16# (narrow16Word# (x# `plusWord#` y#))
   (W32# x#) + (W32# y#) = W32# (narrow32Word# (x# `plusWord#` y#))
   (W64# x#) + (W64# y#) = W64# (int64ToWord64# (word64ToInt64# x# `plusInt64#` word64ToInt64# y#))


The definitions are those used on a 32-bit architecture. So, when it is 
cheap the plusWord# function is used, but when it is relatively expensive 
then extra conversions are added. Wouldn't it make sense to avoid the 
conversions in all cases, regardless of whether they are free or not? If so, 
then I will keep the old changes in the next patch I submit. If not, then I 
will discard them.
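
For what it's worth, the conversion-free W64# case would read as below
(hypothetical: it assumes a plusWord64# primop, which 32-bit GHC does
not provide; the conversions above exist precisely because only the
Int64# arithmetic is primitive there):

   (W64# x#) + (W64# y#) = W64# (x# `plusWord64#` y#)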


-Matt or [ricebowl, wearkilts, soysauce] on #haskell on Freenode 


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users