Sven Panne <[EMAIL PROTECTED]> wrote,
> 1. Mapping Haskell types to C types
> ===========================================================================
> Writing C code which gets called via Haskell's FFI is complicated by the
> fact that one needs to know the C types corresponding to Haskell's
> types. To solve this problem, every Haskell implementation should come with
> a header file FFI.h, which describes this mapping and the corresponding
> limits.
I would propose the name `hsffi.h'. This is C, so uppercase
filenames are for wimps only ;-) Furthermore, as Qrczak
wrote, the name should somehow make the connection to
Haskell explicit, to avoid clashes.
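To make this concrete, a minimal sketch of what such a header
could look like follows; the type names are those from Sven's
proposal, while the concrete typedefs and the limit macros
(names like HS_INT_MIN are just made up here) are of course
implementation- and platform-dependent:

    /* hsffi.h -- minimal sketch only; every Haskell system picks
     * the actual typedefs and limits for its platform.           */
    #ifndef HSFFI_H
    #define HSFFI_H

    #include <limits.h>

    typedef char    HsChar;      /* but see the Unicode point below */
    typedef int     HsInt;
    typedef float   HsFloat;
    typedef double  HsDouble;
    typedef void   *HsAddr;
    typedef void   *HsStablePtr;

    #define HS_INT_MIN  INT_MIN
    #define HS_INT_MAX  INT_MAX

    #endif /* HSFFI_H */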
[..]
> Remarks:
> ========
> 1) Although the Haskell 98 report states that Char should be a Unicode
> character, a plain char is used here. No implementation uses Unicode so
> far, and char is what one wants most of the time, anyway.
As already pointed out by others, I guess, this one will
bite us one day. So maybe we should say that `HsChar'
corresponds to `wchar_t' and use `CChar' on the Haskell side
whenever we want to guarantee that we use 8-bit characters.
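In terms of the header, this alternative would amount to
something like the following sketch; `CChar' itself needs no
Hs-typedef, as it simply corresponds to plain char:

    #include <wchar.h>

    /* Alternative mapping: Haskell's Char is a wide character;
     * 8-bit characters are reached via CChar on the Haskell
     * side, which corresponds to plain char.                    */
    typedef wchar_t HsChar;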
> 3) Stable pointers are passed as addresses by the FFI, but this is only
> because a void* is used as a generic container in most APIs, not because
> they are real addresses. To make this special case clear, a separate C
> type is used here. Foreign objects are a different matter: They are
> passed as real addresses, so HsAddr is appropriate for them.
Yes, but it is not enough to say that `HsStablePtr' is
probably a `void *'. We have to guarantee it (and I guess we
also have to guarantee it for `HsAddr'). If a system doesn't
implement exactly this mapping, all bindings to C code using
C-style polymorphism (see my favourite `glist.h' example from
my previous email on this topic) will break anyway.
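Just to illustrate what breaks otherwise: the GLib-style
snippet below (function name made up, header name as proposed
above) only works if `HsStablePtr' really is a `void *':

    #include <glib.h>
    #include "hsffi.h"

    /* GLib's GList gets its C-style polymorphism by storing the
     * payload as a gpointer, i.e. a void *.  Stashing a stable
     * pointer to a Haskell value in such a list therefore relies
     * on HsStablePtr being exactly a void *.                     */
    GList *cons_stable_ptr(HsStablePtr sp, GList *xs)
    {
        return g_list_prepend(xs, (gpointer) sp);
    }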
Fergus Henderson <[EMAIL PROTECTED]> wrote,
> What should happen with Int64 and Word64 if the C
> implementation does not support any 64-bit integer type?
> Or should conformance to the Haskell FFI specification
> require that the C implementation support a 64-bit integer
> type?
IMHO, that's not a problem for two reasons:
* If somebody writes a combined Haskell/C package making use
of 64-bit ints on the C side, the C code will just not
compile on a C compiler that doesn't support 64-bit ints.
* We can use

      typedef struct {
          long int          first32bit;
          unsigned long int second32bit;
      } HsInt64;

  on the C side (see the sketch below).
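Just to show that such an emulated representation is workable
on the C side, here is a rough sketch of 64-bit addition on it
(function name made up; it assumes `first32bit' is the high,
sign-carrying half, `second32bit' the low half restricted to
32 bits, and the usual two's-complement wrap-around):

    #include "hsffi.h"  /* assuming the typedef above ends up there */

    HsInt64 hs_int64_add(HsInt64 a, HsInt64 b)
    {
        HsInt64 r;
        unsigned long low =
            (a.second32bit + b.second32bit) & 0xFFFFFFFFUL;

        r.second32bit = low;
        r.first32bit  = a.first32bit + b.first32bit
                        + (low < a.second32bit ? 1 : 0);  /* carry */
        return r;
    }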
Sven Panne <[EMAIL PROTECTED]> wrote,
> 2. Mapping C types to Haskell types
> ===========================================================================
[..]
> Remarks:
> ========
> 1) Manuel Chakravarty's C2HSConfig uses slightly different names, but the
> ones above are more similar to the names used in some common C headers.
Ok, if the others are commonly used, I am happy to change my code.
> 2) ANSI C defines long double and some compilers have long longs, but these
> special types are probably not worth the portability trouble.
`long double' is a bother. `GLib', for example, has it in
its range of supported types. I don't know what we can do
about it now, but I think Haskell should include a third
floating-point type in the long run.
> 4) The following standard types are so ubiquitous that they should probably
> be included, too: (any further types wanted?)
>
> Haskell type | C type
> -------------+---------------
> CSize | size_t
> CSSize | ssize_t
> COff | off_t
> CPtrdiff | ptrdiff_t
Including those is surely a good idea, but as Fergus said,
it would then be good to include as many of the ANSI C types
as possible. Which ones can we include without running the
risk that some C system implements them as structs?
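A typical example of why they are needed: the prototypes below
are the kind of interface a Haskell binding has to match
exactly, and without CSize, CSSize and COff there is no
portable way to describe these argument and result types:

    #include <stddef.h>     /* size_t, ptrdiff_t      */
    #include <sys/types.h>  /* ssize_t, off_t (POSIX) */

    extern void   *memcpy(void *dst, const void *src, size_t n);
    extern ssize_t pread(int fd, void *buf, size_t nbytes,
                         off_t offset);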
Generally, I think, we should make one more point clear: a
fully FFI-compliant Haskell system on an architecture/OS
combination arch-os has to provide representations for all
basic C types of the "standard" C compiler of that arch-os.
This is already implicit in the provision of the types
`CChar' to `CDouble' (and CSize & friends), but we may want
to state it explicitly. Otherwise we cannot guarantee that
there is a portable way to bind to every C function (as long
as it doesn't use structs as arguments or return values and
doesn't use variable-length argument lists).
> > Furthermore it's generally a bad idea to write code that relies on
> > byte ordering. I don't see any reason for the Haskell FFI to
> > encourage this.
>
> It's not a matter of encouragement, but a matter of necessity: There
> are a lot of binary file formats out in the real world, and you really
> need to know if some swapping is needed or not. But I agree that this
> could go into module BinaryIO (discussed in a more or less private
> thread).
Yes, that's maybe a good idea.
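For what it's worth, a sketch of the kind of helper such a
BinaryIO module would hide from the user (name made up):
reading a 32-bit big-endian quantity independently of the
host's byte order.

    /* Read a 32-bit big-endian value from a buffer; works
     * regardless of the byte order of the host.               */
    unsigned long get_be32(const unsigned char *p)
    {
        return ((unsigned long) p[0] << 24)
             | ((unsigned long) p[1] << 16)
             | ((unsigned long) p[2] <<  8)
             |  (unsigned long) p[3];
    }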
Cheers,
Manuel
PS: My favourite statement in this thread so far is from Sven:
C compromises almost everything.
That's worth becoming a .signature, I think :-)