Re: compiling on solaris 9

2007-09-17 Thread skaller
On Mon, 2007-09-17 at 13:03 +0100, Simon Marlow wrote:
 skaller wrote:

  1. Measure the size (and alignment, while at it) of all the
  integer types. (trial execute and run).
 
 We already do this.  Incidentally, the GHC RTS does provide a full 
 complement of explicitly-sized types: Stg{Int,Word}{8,16,32,64}, which are 
 correct on all the platforms we currently support.  The header that defines 
 them currently makes some assumptions (e.g. that char is 8 bits), but that's 
 only because we're a bit lazy; 

All good programmers are lazy -- the whole point of computers
is to automate things so we can be :)

The algorithm I presented (crudely) goes further. 
What you're doing is enough for a language that doesn't have to 
bind to C and/or C++.

Note that doesn't mean you can't generate C/C++, it means you don't
need to bind to existing libraries.

If you did, you'd need to do more work, as described. For example,
if there is a C lib requiring an

int*

argument .. there's no way to tell which one of your types,
if any, is aliased to it. Of course .. this only matters in
glue logic, i.e. when you're actually writing C/C++ code,
not if you're generating assembler/machine code and calling
the library directly using the platform ABI based on registers.

When writing glue logic the end user can fix this problem with
platform specific casts, but platform *independent* casts would
be better (e.g. you might cast an StgInt32 pointer to long* on
Win64 .. but that would make a gory mess if the glue logic
were compiled on Linux, where long is 64 bits).
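
To make that concrete, here is a hedged sketch (the library name and
entry point below are invented for illustration, not a real API):

#include <stdint.h>
extern void lib_takes_long_ptr(long *p);  /* assumed C library entry */

void glue(int32_t *p)
{
    /* OK on Win64 (LLP64: long is 32 bits); broken on Linux/amd64
       (LP64: long is 64 bits), where int32_t* and long* disagree */
    lib_takes_long_ptr((long *)p);
}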

In Felix, this is critical, because the compiler actually generates
C++ which has to run on all platforms. I actually preferred the
same model you're using: (u)int{8,16,32,64} .. but it just didn't fly
because of the almost complete dependence on source-code-based
binding (Felix is roughly a well-typed, souped-up macro processor .. :)

BTW: .. any thoughts on 128 bit integers? I mean, registers are
now 64 bits on desktops. There really should be a double precision
integer type..


-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net


Re: compiling on solaris 9

2007-09-15 Thread skaller
On Sat, 2007-09-15 at 14:42 +0200, Sven Panne wrote:
 On Saturday 15 September 2007 13:58, skaller wrote:
  The RIGHT way to do this is rather messy .. but there is only
  one right way to do it. [...]
 
 IMHO things are not that black or white: I think we have a tradeoff between 
 almost 100% fool-proofness (which is basically the approach you describe) and 
 the ability to do cross-compilation (use available headers and perhaps 
 fail/guess if nothing sensible can be found). What is right has to be 
 decided on a case-by-case basis.

Of course.

 But of course you are totally right in one respect: OS-based tests when used 
 in a context like our example are silly and should be replaced by 
 feature-based tests (be it "Do we have foo.h?" or "What is the result of 
 compiling/running blah.c?").

I think the key point is that if you need, say,

int32_t or intptr_t

you must not use them directly, but instead use

my_int32_t or my_intptr_t

size_t is required by the ISO C89, C99 and C++ standards, so it can be
used directly. This is annoying, but if you use these types in interfaces,
defining your own is necessary to avoid conflicts. In implementations
there's more freedom .. but there's no point not using the 'my_' versions.

In effect, this removes the 'optional' status of the symbols
and creates an in-house de-facto standard you can rely on.
Pity ISO didn't do that originally .. 
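
For instance, a configure step might emit something like this (a
sketch only: the 'my_' names are whatever your project picks, and the
right-hand sides are whatever the probes measured on that platform):

/* generated by configure for this platform (illustrative values) */
typedef int           my_int32_t;   /* probe: sizeof(int) == 4 */
typedef unsigned long my_uintptr_t; /* probe: sizeof(long) == sizeof(void*) */

Nothing downstream then depends on the optional ISO names existing
at all.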


-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net


Re: compiling on solaris 9

2007-09-15 Thread skaller
On Sat, 2007-09-15 at 15:42 +0200, Sven Panne wrote:
 On Saturday 15 September 2007 13:58, skaller wrote:
  [...]
  1. Measure the size (and alignment, while at it) of all the
  integer types. (trial execute and run).
  [...]
  4. For the ones provided, AND size_t, ptrdiff_t, check
  their size (and signedness). (trial execution)
 
 Small additional note: One can determine the size and alignment *without* 
 always resorting to trial execution. Autoconf does this by an 
 ingenious "evaluation by compilation" technique in case of cross-compilation, 
 see the implementation of AC_CHECK_SIZEOF and AC_CHECK_ALIGNOF.

Hmm .. I guess that's possible with some kind of type checking
hack? .. (I don't use autoconf but thanks for that info .. I will
have a look, it would be cool to get better cross-compilation
support).
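
For the record, the trick is roughly this (a sketch of the idea, not
autoconf's actual macro text): make compilation itself fail unless a
guessed size is right, then search over the guesses.

/* compiles if and only if sizeof(long) == 8: a negative array
   size is a compile-time error, so no trial execution is needed */
int check_sizeof_long[ sizeof(long) == 8 ? 1 : -1 ];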

  [...]
  6. Test what the typedefs found are actually aliased to
  using C++ (there is no other portable way to do this).
  (Trial execution)
 
 To be honest, I don't understand that point...

The point is that on most systems knowing the size of
say 'intptr_t' does NOT tell you which integer type it
is aliased to if there are two of them, eg:

// gcc, amd64, Linux:
sizeof(intptr_t)  == 8
sizeof(long)      == 8
sizeof(long long) == 8
sizeof(int)       == 4

so intptr_t is aliased to long or long long .. which one?
Short of examining the header file source code, the only
way to find out is in C++ with overloading:

void me(long)      { printf("long"); }
void me(long long) { printf("long long"); }
me( (intptr_t)0 );

Unfortunately EVERY system I know of has at least one more integer
type available than distinct integer sizes: on Win64, int and long have
the same size, and of course long long must be used for the
integer that is the size of a pointer.

This is what bit the Ocaml library .. if you need such an animal
it MUST be called 'my_intptr_t', since no name is standard
other than one you define yourself.

Yeah, this is a REAL pain .. but look at any 'professional' 
C library (OpenGL, GMP, etc etc) and they all do it.

BTW: in C, it usually doesn't matter which of two integer types
of the same size you use .. but in C++ it does, precisely because
it affects overloading. But note that C99 has 'overloading' of a kind,
so consider writing C++-safe C when possible: I think it is worth
ensuring 'my_intptr_t' aliases the same type as 'intptr_t' if the
latter is defined.
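
A complete, runnable version of that probe might look like this (a
sketch; which line prints depends on the platform):

#include <stdint.h>
#include <stdio.h>

void me(int)       { printf("int\n"); }
void me(long)      { printf("long\n"); }
void me(long long) { printf("long long\n"); }

int main() {
    me((intptr_t)0);  /* overload resolution names the alias */
    return 0;
}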


-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net


Re: 64-bit windows version?

2007-06-25 Thread skaller
On Mon, 2007-06-25 at 11:43 -0400, Peter Tanski wrote:

 It would be much better to have a single build system.  I would  
 gladly replace the whole thing for three reasons:

 (1) it is a source of many build bugs and it makes them much more  
 difficult to track down; and,
 (2) it seems to be a serious hurdle for anyone who wants to build and  
 hack on GHC--this is true for most other compiler systems that use  
 autoconf and Make; and,
 (3) if GHC is ever going to have cross-compilation abilities itself,  
 the current build system must go, while cross-compiling GHC with the  
 current system requires access to the actual host-system hardware.
 The reasons I don't are:
 (1) time (parallel to money);
 (2) I wouldn't undertake such an effort unless we were all pretty  
 sure what you want to change the build system to;
 (3) an inevitable side-effect of the move would be loss of old (or  
 little-used) build settings, such as GranSim, and a change to the  
 build system would propagate to parallel projects; and,
 (4) it is a huge project: both the compiler and libraries must change  
 and the change must integrate with the Cabal system.

I am thinking of starting a new project (possibly on sourceforge)
to implement a new build system. I think Erick Tryzelaar might
also be interested. The rule would be: it isn't just for GHC.
So any interested people would have to thrash out what to
implement it in, and the overall requirements and design ideas.

My basic idea is that it should be generic and package based,
that is, it does NOT include special purpose tools as might
be required to build, say, Haskell programs: these are
represented by 'plugin' components.

A rough model of this: think Debian package manager, but
for source code not binaries.


-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net


Re: 64-bit windows version?

2007-06-25 Thread skaller
On Mon, 2007-06-25 at 13:35 -0400, Peter Tanski wrote:

  Maybe some gcc-mimicking cl wrapper tailored specifically for the GHC  
  building system could help? One more layer of indirection, but  
  could leave the ghc driver relatively intact.
 
 That's a good idea!  Do you know if or how the mingw-gcc is able to  
 do that?  Does mingw-gcc wrap link.exe?  

There's more to portable building than the build system.
For example, for C code, you need a system of macros to support

void MYLIB_EXTERN f();

where MYLIB_EXTERN can be empty, or say __declspec(dllexport)
on Windows when building a DLL, and __declspec(dllimport)
when using it. This is *mandatory*.
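
The usual shape of the thing is something like this (a generic sketch
with illustrative macro names: BUILDING_MYLIB and MYLIB_DLL are
whatever switches your build system defines):

#if defined(_WIN32) && defined(MYLIB_DLL)
#  ifdef BUILDING_MYLIB
#    define MYLIB_EXTERN __declspec(dllexport)
#  else
#    define MYLIB_EXTERN __declspec(dllimport)
#  endif
#else
#  define MYLIB_EXTERN   /* empty: static linkage or non-Windows */
#endif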

The build system controls the command line switches that
turn on the "We're building a DLL" flag. A distinct macro is needed
for every DLL.

In Felix, there is another switch which tells the source
if the code is being built for static linkage or not:
some macros change when you're linking symbols statically
compared to using dlsym().. it's messy: the build system
manages that too. 
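
A sketch of that kind of macro (illustrative only, not the actual
Felix macros):

#include <dlfcn.h>  /* for dlsym() on Unix */

#ifdef MYLIB_STATIC_LINK
#  define MYLIB_SYM(lib, name) ((void *)&name)     /* bound at link time */
#else
#  define MYLIB_SYM(lib, name) dlsym(lib, #name)   /* looked up at run time */
#endif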

Building Ocaml, you have a choice of native or bytecode,
and there are some differences. Probably many such things
for each and every language and variation of just about
anything .. eg OSX supports two kinds of dynamic libraries.

The point is that a 'Unix' oriented build script probably
can't be adapted: Unix is different to Windows. The best
way to adapt to Windows is to use Cygwin.. if you want
a Windows native system, you have to build in the Windows 
way and make Windows choices. A silly example of that
is that (at least in the past) Unix lets you link at
link time against a shared library, whereas Windows
requires you to link against a static thunk ..
so building a shared library produces TWO outputs
on Windows.

OTOH, Unix has this woeful habit of naming shared libraries
like libxxx.so.1.2 which really makes a complete mess
of build systems.

What I'm saying is you just can't wrap Windows tools
inside a Unix build script.

You have to write an abstract script, and implement
the abstractions for each platform.

-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net


Re: 64-bit windows version?

2007-06-24 Thread skaller
On Sun, 2007-06-24 at 13:10 -0400, Matthew Danish wrote:

 Unfold MinGW, MSYS, and MSYS Developer Tool Kit.

Hmm .. well I'm not sure if this is still correct, but MinGW,
being a Windows program, has a 255-character limit on the command line
.. which makes it useless for building anything complex.

Ocaml had this problem: the MinGW version of Ocaml has to be
built using Cygwin, with gcc using the -mno-cygwin option.

The thing is .. Cygwin, despite being rather large,
is much easier to install than MSYS because of the 
package manager.

Also there are many emulations of Unix programs for Windows,
located at:

http://gnuwin32.sourceforge.net/

which are also fairly easy to install and work from
the CMD.EXE command prompt.

The problem with building on Windows is that many scripts
assume bash, and it just doesn't work 'right' outside
a well configured Unix environment. Cygwin does this quite
well .. MSYS etc doesn't.

I'm not intending to knock MSYS .. but I wouldn't rely on
it for building complex projects 'transparently' .. Cygwin
has enough problems doing that, and it's quite a sophisticated
environment.

The thing is .. Windows *.bat files, though a bit clumsy,
work better than trying to get a bash emulation .. but really,
designed-to-be-portable code written in Python, Perl, Scheme,
or even Haskell is better .. because it eliminates uncertainty
and gives you full control of how build actions are implemented
on each platform.

-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net


Re: 64-bit windows version?

2007-06-22 Thread skaller
On Fri, 2007-06-22 at 12:03 +0100, Simon Marlow wrote:

 
 Ok, you clearly have looked at a lot more build systems than I have.  So you 
 think there's a shift from autoconf-style "figure out the configuration by 
 running tests" to having a database of configuration settings for various 
 platforms?  I'm surprised - I thought conventional wisdom was that you should 
 write your build system to be as independent as possible from the name of the 
 build platform, so that the system is less sensitive to changes in its 
 environment, and easier to port.  I can see how wiring-in the parameters can 
 make the system more concrete, transparent and predictable, and perhaps that 
 makes it easier to manage.  It's hard to predict whether this would improve 
 our situation without actually doing it, though - it all comes down to the 
 details.

This misses the point. The 'suck it and see' idea fails totally for
cross-compilation. It's a special case.

The right way to do things is to separate the steps:

(a) make a configuration
(b) select a configuration

logically. This is particularly important for developers who are using
the same code base to build for multiple 'platforms' on the 
same machine.

With the above design you can have your cake and eat it too .. :)

That's the easy part .. the HARD part is: every 'system' comes with
optional 'add-on' facilities. These add-ons may need configuration
data. Often, you want to add the 'add-on' after the system is built.
So integrating the configuration data is an issue.

The Felix build system allows add-on packages to have their own
configuration model. It happens to be executed on the fly,
and typically does your usual 'suck it and see' testing
(eg .. where are the SDL headers? Hmm ..)

This design is wrong of course ;(

-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net


Re: 64-bit windows version?

2007-06-22 Thread skaller
On Fri, 2007-06-22 at 14:45 +0100, Simon Marlow wrote:
 skaller wrote:

  This misses the point. The 'suck it and see' idea fails totally for
  cross-compilation. It's a special case.
  
  The right way to do things is to separate the steps:
  
  (a) make a configuration
  (b) select a configuration
  
  logically.
 
 Hmm, I don't see how the approach fails totally for cross-compilation.  You 
 simply have to create the configuration on the target machine, which is 
 exactly what we do when we cross-compile GHC.  Admittedly the process is a 
 bit ad-hoc, but it works.

But that consists of:

(a) make a configuration (on the target machine)
(b) select that configuration (on the host machine)

which is actually the model I suggest. To be more precise, the
idea is a 'database' of configurations, and building by selecting
one from that database as the parameter to the build process.

The database would perhaps consist of 

(a) definitions for common architectures
(b) personalised definitions

You would need a tool to copy and edit existing definitions
(eg .. a text editor) and a tool to 'autogenerate' prototype
definitions (autoconf for example).

What I meant failed utterly was simply building the sole
configuration by inspection of the properties of the
host machine (the one you will actually build on).

That does work if

(a) the auto-detect build scripts are smart, and
(b) the host and target machines are the same

BTW: Felix has a 4 platform build model:

* build
* host
* target
* run

The build machine is the one you build on. Example: Debian
autobuilder.

The host is the one you intend to translate Felix code to
C++ code on, typically your workstation. In Windows environment
this might be Cygwin.

The target is the one you actually compile the C++ code on.
In Windows environment, this might be WIN32 native (MSVC++).

The run machine is where you actually execute the code.

The 'extra' step here is because Felix is a two stage compiler.
Some code has to be built twice: for example the GLR parser
generator Elkhound runs on the host machine to generate
C++, and it uses a library. The same library is required
at run time, but has to be recompiled for the target.
E.g.: Elkhound built on Cygwin to translate the grammar to C++,
and Elkhound built with MSVC++ for the run time automaton.

I'm not sure GHC as such needs a cross-cross compilation model,
but bootstrapping a cross compiler version almost certainly does.


-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net


Re: 64-bit windows version?

2007-06-21 Thread skaller
On Thu, 2007-06-21 at 14:40 -0400, Peter Tanski wrote:
 On Jun 21, 2007, at 11:48 AM, Simon Marlow wrote:

  So you'd hard-wire a bunch of things based on the platform name?   
  That sounds like entirely the wrong approach to me. 

FYI: there is a rather nice set of platform data in the
ACE package.


-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net


Re: 64-bit windows version? (Haskell is a scripting language too!)

2007-06-21 Thread skaller
On Fri, 2007-06-22 at 02:06 +0100, Brian Hulley wrote:
 skaller wrote:

  (a) Pick a portable scripting language which is readily available
  on all platforms. I chose Python. Perl would also do.
 If I had time to look into improving the GHC build system I'd definitely 
 use Haskell as the scripting language. 

Two difficulties. The first, obviously, is that this will
only work for building Haskell when you already have Haskell,
so no good for the initial bootstrap of a first port.

The second is simply that dynamic typing is generally
better for build systems, because it allows code to
'self-adapt'. To do this with a statically typed language
you would need to generate text, compile it, and run it,
which is not possible because by specification you don't
have a compiler yet. Even at the point you have bootstrapped
far enough that you do have one .. it's still very messy
to get programlets to communicate using shells in a portable
way.. in some sense that's the problem you're trying to solve!

An alternative is to implement the build system in, say,
Scheme, and then write a Scheme interpreter in Haskell.
Scheme can self-adapt internally because its compiler
is built-in.

This approach removes the dependency on external vendors,
and you might even make the initial bootstrap builder
simple enough you could use a drop-in replacement,
eg Guile (GNU scheme) on unix systems.


-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net


Re: 64-bit windows version?

2007-06-20 Thread skaller
On Wed, 2007-06-20 at 08:49 +0100, Simon Marlow wrote:

 I don't think we'll be able to drop the mingw route either, mainly because 
 while the MS tools are free to download, they're not properly free, and we 
 want to retain the ability to have a completely free distribution with no 
 dependencies.

I'm not sure I understand this. MS tools are free to download
by anyone, but not redistributable. The binaries needed by
programs *built* by those tools are not only free to download,
they're free to redistribute, and they're less encumbered than
almost all so-called 'free software' products.

Don't forget -- Windows isn't a free operating system.
You're juggling some possible problem with a single source
vendor withdrawing supply (possible) against open source
products which are late to market (definite :)

64 bit MinGW .. will already be years out of date when
it turns up, since MS is focusing on the .NET platform.
MSVC++ tools already support CLR, assemblies and .NET:
even if Mingw supported that .. you'd still need Mono
(does it work, really?) for a 'free' platform .. but .NET
is redistributable and available on most modern Windows
platforms already ..

I doubt the Open Source community is as reliable a supplier
for the Windows market as Microsoft. It's really a boutique 
market. Cygwin was a major platform in the past, for running
Unix software on Windows.

But now we're talking about a Windows *native* version of GHC,
there's no Unix in it. I see no real reason not to build
for the native toolchain .. and plenty of reasons not
to bother with others.

Hmm .. can't MS be coaxed into supplying some support to the
developers? After all, Haskell IS a major lazily evaluated
statically typed functional programming language. Why wouldn't
MS be interested  in bringing GHC on board? They have an
Ocaml (called F#) now..

-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net


Re: 64-bit windows version?

2007-06-20 Thread skaller
On Wed, 2007-06-20 at 14:42 +0100, Simon Marlow wrote:

 "The binaries needed by programs built by these tools..." -- you're 
 referring to the C runtime DLLs?  Why does that matter?
 
 Note I said "with no dependencies" above.  A Windows native port of GHC 
 would require you to go to MS and download the assembler and linker 
 separately - we couldn't automate that, there are click-through licenses 
 and stuff.

So what? Felix requires:

(a) C/C++ compiler
(b) Python
(c) Ocaml

you have to download and install these tools on ANY platform,
including Ubuntu Linux. gcc isn't installed on a basic system.
True, with Debian, this can be automated, so you only have
to click on the main package.

I need THREE external tools. Is this a pain? YES!
[On Windows, that is .. it's a breeze on Ubuntu .. :]

Is it too much effort to ask, for someone to use a major
advanced programming language like Haskell? 

Don't forget .. Mingw has to be installed too .. and in fact
that is much harder. I tried to install MSYS and gave up.

 MS pays for Ian Lynagh, who works full time on GHC as a contractor.  MS puts 
 roughly as much money into GHC as it does into F#, FWIW.

I'm happy to hear that!

Now let me turn the argument around. Mingw is a minor bit player.
The MS toolchain is the main toolchain to support. MS-compiled C++
can't be used with Mingw, for example (the MS and gcc C++ ABIs
are incompatible).

GHC needs to target *professional windows programmers*.
They're going to have VS installed already. Haskell is far
too important a language (IMHO) not to have an entry in
the commercial programming arena.

Commercial programming is in a bad way! It NEEDS stuff like
Haskell available.

BTW: I don't really like Windows .. but I want to see Haskell
succeed. Trying to do Haskell on Windows without the MSVC++ toolchain
is like trying to work on Linux without binutils... :)


-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net


Re: 64-bit windows version?

2007-06-20 Thread skaller
[The build needs to be split] into 'phases' where each phase
can be executed on a distinct machine (with a shared
file system). These phases 'split' all the packaged components
into time slices, and the build process builds the packages
in dependency order for each slice in turn.

This is useful on Windows: eg use Cygwin to build some of the
tools, but use them in the native Windows shell to build
the next stage.

Since GHC is bootstrapped it will be even messier.. :)

 There is also some gcc-specific code in the RTS (inline assembler,  
 use of extern inline, etc.) 

MSVC++ for Vista and beyond does not allow any inline assembler.
This is probably because it wants to target x86, x86_64 and
also ia64 (as well as CLR) .. and doesn't have gcc's interesting
way of managing register usage.

 I don't know of any completely free 64-bit compilers for Windows.   

I don't know of any completely free 64 bit compilers for Linux either.
gcc is GPL, which is anything but free of encumbrance .. 
  
-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net


Re: 64-bit windows version?

2007-06-19 Thread skaller
On Tue, 2007-06-19 at 12:23 +0100, Simon Marlow wrote:
 Bulat Ziganshin wrote:
  Hello glasgow-haskell-users,
  
  do you plan to implement a 64-bit windows GHC version?
 
 The main thing standing in the way of this is the lack of a 64-bit port of 
 mingw.  

Why do you need mingw? What's wrong with MSVC++?

-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net


Re: Re[2]: 64-bit windows version?

2007-06-19 Thread skaller
On Wed, 2007-06-20 at 07:34 +0400, Bulat Ziganshin wrote:
 Hello skaller,
 
 Tuesday, June 19, 2007, 8:15:19 PM, you wrote:
   do you plan to implement a 64-bit windows GHC version?
 
  Why do you need mingw? What's wrong with MSVC++?
 
 really! Simon, how about unregisterised build?
 
 skaller, is *free* 64-bit msvc (or any other windows c++ compiler) available?

Visual Studio Express is a Visual Studio 2005 IDE system for XP
with some features disabled; the disabled features include
interactive debugging, help, online news/blog stuff, etc:
advanced IDE features.

AFAIK the compilers are intact and can be used on the command line
as well as from the IDE. I think you do need to download
the Platform SDK separately though. AFAIK on the x86_64 platform,
VS is a 32 bit program, as are the compilers, but they can
generate 64 bit code (the 32 and 64 bit compilers are separate
executables and run in distinct environments .. )

One thing to watch though: inline assembler is gone in the new
MSVC++ compilers (due to multi-arch support, I guess).
However a standalone assembler works in 64 bit; the 64 bit Ocaml
port for Windows uses it.

-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net


Re: Locating shared libraries

2007-06-15 Thread skaller
On Fri, 2007-06-15 at 17:28 +0100, Ian Lynagh wrote:
 Hi John,
 
 On Thu, Jun 14, 2007 at 09:17:16AM +1000, skaller wrote:
  
  The rules for package managers like Debian are that a component
  library split into /usr/include, /usr/lib MUST have a man page,
 
 I'm not sure where you're getting that from? Debian policy:
 http://www.debian.org/doc/debian-policy/ch-docs.html#s12.1
 says
 "Each [...] function *should* have an associated manual page"
 (my emphasis), and doesn't say anything about where the function is in
 the file system.
 
 (it would be great if someone were to tweak haddock to generate suitable
 manpages, incidentally).
 
  and it MUST be a separately usable C/C++-callable component.
 
 Have you got a reference for that? I can't see it in a quick skim of
 http://www.debian.org/doc/debian-policy/ch-sharedlibs.html
 and the FHS.

No, it's the concept, not the letter of the Debian Law.

When you install a shared library, it is installed as a publicly
accessible component to be shared.

That implies the interface is both documented and callable.

A library with undocumented symbols which are private to one
application such as GHC, or even a set of applications generated
by GHC, has no business in a directory intended for everyone to
be able to use it .. especially if the interface and ABI change
due to internal restructuring.

One way to measure this is: if you removed GHC and applications,
and there are (necessarily) no users of the remaining library
package .. the library package shouldn't be in the global public
place (/usr/lib+include etc).

In that case, the lib is an intrinsic part of GHC, and should be
in a GHC master package (sub)directory.

The problem with Debian is that the FHS standard doesn't really account
for packages like programming languages. If you look, you see
for example that Python lives in /usr/lib/python and Ocaml lives
in /usr/lib/ocaml.

That's an abuse forced by a grossly inadequate directory model.
Ocaml and Python aren't libraries.

Solaris does this more correctly IMHO: packages live in a vendor
specific place, part of the /opt directory tree. 

Here 'opt' means 'optional', that is, not part of the core
operating system and utilities required to maintain it.

Anyhow, systems like GHC, Ocaml, Python, Felix, etc, just do NOT
fit into the C model: bin/ lib/ include/ with or
without any versioning. They have bytecode, configuration,
and many other kinds of 'object' and 'development source'
and 'library' files, some of which may happen to be 'shared
libraries' but that's irrelevant. All those things need to be
managed in a (compiler-)system specific way.

The 'specific' way for C and Unix OS tools is the Debian FHS ...
it shouldn't be used for anything else.

Note that the Ocaml team is currently grappling with a second nasty
problem: Ocaml 3.10 uses a source incompatible pre-processor (camlp4).
So now, not only will a non-Debian installed Ocaml library or bytecode
executable be broken by an upgrade until the user recompiles from 
source (the usual situation for Ocaml) but now all sources using
camlp4 macros are broken until the end user edits them.

So the team is almost forced to support installation of 
non-conflicting separate versions now, just to allow migration.

Once you start versioning stuff .. splitting files into
subdirectories like bin/ lib/ include/ is a nightmare.
Note that the 'include' part of that utterly fails for
C code anyhow.. so even the basic model is flawed for the very
kind of code it was designed to support.

The bottom line is that the Debian FHS is archaic and work should be
done to eliminate all user level packages from it: only core
OS packages should be allowed in /usr.

IMHO the best workaround for this problem is to use a thunk/
driver script in /usr/bin and put all the real stuff in a single
version-specific install directory whose location is
entirely irrelevant. This is more or less what gcc does, and the
gcc model works very well I think. Multiple gcc versions, including
cross compilers, can be installed and 'just work' with all the
right bits glued together.


-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net


Re: Locating shared libraries

2007-06-15 Thread skaller
On Fri, 2007-06-15 at 19:40 -0500, Spencer Janssen wrote:
 On Sat, 16 Jun 2007 08:21:50 +1000
 skaller [EMAIL PROTECTED] wrote:
  One way to measure this is: if you removed GHC and applications,
  and there are (necessarily) no users of the remaining library
  package .. the library package shouldn't be in the global public
  place (/usr/lib+include etc).
 
 As I understand it, the entire point of this effort (shared libraries
 in GHC) is to allow dynamically linked Haskell executables.  In this
 case, applications outside the GHC toolchain will in fact depend on
 these shared objects.  As a concrete case, a binary darcs package could
 be a user of libghc66-base.so and libghc66-mtl.so -- with no
 dependencies on the GHC compiler package itself.
 
 Does this pass your litmus test?

Yes, it passes the separability test. My darcs wouldn't run otherwise!
And versioning the library filename as above is a good idea too.

Felix adds _dynamic for shared libs and _static for static link
archives to ensure the Linux linker doesn't get confused.

However, the libs still aren't fully public if the interfaces
are only private details of the GHC tool chain. Hmmm.

-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net


Re: Locating shared libraries

2007-06-13 Thread skaller
On Wed, 2007-06-13 at 15:24 -0700, Stefan O'Rear wrote:
 On Wed, Jun 13, 2007 at 11:33:36AM -0700, Bryan O'Sullivan wrote:

 Or better yet, put them directly in one of the LD_LIBRARY_PATH dirs.
 $PREFIX/lib/ghc-$VERSION is a relic of the static library system and
 IMO shouldn't be duplicated.

Hmm.. well what Felix does is:

(a) The bulk of the system lives in $PREFIX/lib/felix$VERSION

(b) Libraries are called *.flx or *.h or *.a or *.so and 
have NO version encoding

(c) Executables are compiled and run using a bash script

The script is the only program installed in /usr/bin

This script selects the install to use, defaulting to the
most recent.

You can override the install directory on the script command line
or with an environment variable.

The bash script uses LD_LIBRARY_PATH, etc, and command line switches
to the executables, to ensure a coherent 'assembly' of components is
used together.

If you opt for static linkage, the resulting executable is of course
independent of the version.

All the standalone binaries require non-defaulted switches to 
select where components other than load-time shared libs live.

This system supports multiple installations and also multiple
cross compilation.

IMHO: the $PREFIX/lib/ghc-$VERSION style isn't a relic. It is the
current ridiculous versioning of shared libraries which is
a relic of an even older faulty design, one which unfortunately
Debian and other package installers copied.

In particular, package components should always live in
the same directory tree, related by package/versions,
and NEVER get split up into lib/bin/include etc.

In fact that is a relic of archaic dog slow Unix directory searching,
and was a performance hack. It was never the correct design to split
packages into directories by function, and the Unix directory tree
is only capable of supporting C programs .. and it isn't really
suitable for C either.

The rules for package managers like Debian are that a component
library split into /usr/include, /usr/lib MUST have a man page,
and it MUST be a separately usable C/C++-callable component.

If the only user of these libraries is the Haskell or other
system, and the interfaces aren't documented, then the
libraries must NOT be placed in /usr/lib. In other words,
unless there is a distinct separable Debian package for the
library it must NOT go in /usr/lib.

The way most modern languages work, each version has a number
of utterly incompatible components. When you use Felix, Ocaml,
Mlton, or Haskell version xx.yy you can't use any other version
of any of the libraries .. and probably can't use any user libraries
either. It's likely the ABI (application binary interface) changes
with each version, the set of library entry points changes in
some detail, or whatever .. if not, why is a new version being
released??

This doesn't happen (usually) with C/C++, but it does sometimes:
the C++ ABI changed for Linux recently and it broke
Debian for months and months while everything got upgraded.

So in my view, the only thing you might consider sharing
between (advanced language of your choice) versions is the source
code of software written in the standardised language -- but never,
NEVER the support libraries, run time, or compilers.

FYI: this is a particularly nasty problem for Ocaml, since the
ABI changes with every patch. The debian-ocaml team has to
rebuild and upload binaries to the package server every time
the compiler is changed .. and end users have to recompile
every program they wrote (for bytecode .. for native
code there's no dynamic loading anyhow), and every library
(for both bytecode and native code).

This isn't necessary with Felix because it *defines* execution
in terms of source code, not in terms of binaries, which are
regarded as mere cached values, and managed automatically
(i.e. rebuilt whenever there's a version mismatch).

So for something like Haskell I'd recommend the system be
split in two parts:

(a) the 'system' which provides a distinct installation
for every version all in a single directory tree

(b) libraries written in standardised Haskell 98  or whatever
are separate packages of source code, and are separately
maintained and installed

Any caching of partial compilations of the standard source
libraries should be automatic. Option (b) is rather hard
to organise without redesigning your tool chain to work
entirely in terms of source code .. but I recommend that
anyhow -- Haskell semantics are defined in terms of sources,
and the 'average' user shouldn't need to know about anything else:
it's a basic principle of abstraction.


-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net


Re: dynamic linking with stdc++ (was: Re: ghci dynamic linking)

2007-05-09 Thread skaller
On Wed, 2007-05-09 at 23:44 +0100, Frederik Eaton wrote:

 I think the problem is that there is a /usr/lib/libstdc++.so.5 and a
 /usr/lib/libstdc++.so.6 but no /usr/lib/libstdc++.so; when I created
 the latter, linking to the libstdc++.so.6 link, I was able to use ghci
 with my package. I wish I knew why /usr/lib/libstdc++.so is missing,
 but it is missing on 4 out of 4 of the computers I just now checked so
 I think it is normal for it to be missing, and the problem probably
 lies with ghci?

The two libstdc++ libraries are utterly incompatible:
they use different Application Binary Interfaces (ABIs).
The ABI changed starting around gcc 3.4, I think.

Examining a binary on my amd64 Ubuntu/Linux box:

[EMAIL PROTECTED]:/work/felix/svn/felix/felix/trunk$ ldd bin/flx_run
libflx_pthread_dynamic.so => not found
libflx_dynamic.so => not found
libflx_gc_dynamic.so => not found
libdl.so.2 => /lib/libdl.so.2 (0x2b7ba1854000)
libpthread.so.0 => /lib/libpthread.so.0 (0x2b7ba1a59000)
libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x2b7ba1c74000)
libm.so.6 => /lib/libm.so.6 (0x2b7ba1f78000)
libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x2b7ba21fb000)
libc.so.6 => /lib/libc.so.6 (0x2b7ba2409000)
/lib64/ld-linux-x86-64.so.2 (0x2b7ba1637000)

you can see:

* my application libs are not found (LD_LIBRARY_PATH not set)
* compiler libraries use *.so.integer links
* core libs use *.so.integer links
* the dynamic loader is hard coded

[EMAIL PROTECTED]:/work/felix/svn/felix/felix/trunk$ ls /usr/lib/libstd*
/usr/lib/libstdc++.so.5  /usr/lib/libstdc++.so.6
/usr/lib/libstdc++.so.5.0.7  /usr/lib/libstdc++.so.6.0.8

[EMAIL PROTECTED]:/work/felix/svn/felix/felix/trunk$ ls -l /usr/lib/libdl*
-rw-r--r-- 1 root root 13162 2007-04-04 21:06 /usr/lib/libdl.a
lrwxrwxrwx 1 root root    15 2007-04-20 03:20 /usr/lib/libdl.so -> /lib/libdl.so.2

Note here the '/usr/lib/libdl.so' links to '/lib/libdl.so.2':
it's a core lib, not a compiler lib. Finally see:

[EMAIL PROTECTED]:/work/felix/svn/felix/felix/trunk$ ls -l /usr/lib/gcc/x86_64-linux-gnu/4.1.2/libstdc++*

/lib/gcc/x86_64-linux-gnu/4.1.2/libstdc++.so -> ../../../libstdc++.so.6

so the link 'libstdc++.so' does exist .. but it is in a compiler
specific location.

-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net


Re: Cost of Overloading vs. HOFs

2007-05-04 Thread skaller
On Fri, 2007-05-04 at 17:06 -0700, Conal Elliott wrote:
 Cool.  You know which types to consider because jhc is a whole-program
 compiler?
 
 Given the whole program, why not monomorphize, and inline away all of
 the dictionaries?

That's what Felix does: typeclasses, no dictionaries. The
whole program analyser resolves typeclasses, inlines
whilst resolving typeclasses, monomorphises while resolving
typeclasses, and inlines again, resolving typeclasses.

Indirect dispatch can't be eliminated if the
method is wrapped in a polymorphic closure.
Monomorphisation alone can't eliminate second order
polymorphism. [But Felix doesn't support that anyhow]
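
A loose C++ analogy of the difference (a sketch, not Felix or GHC
internals): a template call is monomorphised, so the concrete function
is visible and inlinable, whereas a call through a function pointer --
morally a one-entry dictionary -- remains an indirect dispatch.

template <class T> T twice(T x) { return x + x; }  // monomorphised per T

int twice_dict(int x, int (*plus)(int, int)) {     // 'dictionary' passing
    return plus(x, x);                             // indirect call survives
}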

-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net


Re: GHC Manuals online

2007-04-24 Thread skaller
On Tue, 2007-04-24 at 20:25 +0100, Neil Mitchell wrote:
 Hi
 
  why so complicated? try googling for "ghc user guide", and the very
  first entry, for me, is the latest at haskell.org. in fact, google toolbar
  suggests this phrase after just typing "ghc".
 
 What I was actually googling for is "ghc haskell language pragma", and
 then you get loads of results from manuals before GHC had a language
 pragma. I don't mean searching to find the documentation, I mean
 searching to find content within the documentation.

This just shows you how screwed up Google currently is.

I recommend Wikipedia search first .. human maintained links
are often better. And if not, complain and someone may fix it.


-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net


Re: rules

2007-03-30 Thread skaller
On Fri, 2007-03-30 at 13:04 -0700, Tim Chevalier wrote:
 On 3/30/07, skaller [EMAIL PROTECTED] wrote:
  I'm curious when and how GHC applies rewrite rules,
  and how expensive it is.
 
 
 Have you seen the "Playing by the Rules" paper?
 http://research.microsoft.com/~simonpj/Papers/rules.htm
 
 If you still have questions after reading it, you can always ask here.

Thanks for that reference! 

FYI, the authors comment that the possibility exists for
feeding reduction rules into a theorem prover but it has
not been explored.

This IS indeed being done right now by Felix. Felix
supports axioms, lemmas, and reduction rules. Lemmas
are theorems which are simple enough to be proven with
an automatic theorem prover.

A couple of days ago my first test case lemma was
proven by Ergo and Simplify. (roughly I proved
that given x + y = y + x and x + 0 = x, that
0 + x = x .. not exactly rocket science :)
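
For flavour, here is that lemma in a modern proof assistant (a Lean 4
sketch, not the Why notation Felix actually emits):

example {A : Type} (add : A → A → A) (zero : A)
    (comm : ∀ x y, add x y = add y x)
    (idr  : ∀ x, add x zero = x) :
    ∀ x, add zero x = x := by
  intro x
  -- rewrite add zero x to add x zero, then collapse with the identity
  rw [comm, idr]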

Felix emits all the
axioms and lemmas in Why notation, which allows
a large class of theorem provers to have a go at
proving them, since Why translates them.

Reductions aren't handled yet because doing this raised
a simple syntactic problem: I have no way to distinguish
whether a reduction is an axiom or a lemma.

I would note also that default definitions of typeclass
functions are also axioms.

What would be REALLY cool is to prove that typeclass
instances conform to the specified semantics. That,
however, is much harder, especially in Felix (which
is an imperative language ;(

See also:

http://why.lri.fr

IMHO: connecting a programming language to a theorem prover
is probably mandatory for optimisation.  Obvious goals
other than reduction rules are the elimination of pre-condition
checks for partial functions. However, doing this is hard
because theorem provers can be slow, so they generally have
to be run offline. Thus some way of caching the results
for a subsequent recompilation is required.

-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net


Re: GHC 6.4.1 and Win32 DLLs: Bug in shutdownHaskell?

2006-10-24 Thread skaller
On Tue, 2006-10-24 at 13:16 -0700, SevenThunders wrote:

 Here is the promised simple example.  This example will cause an exception
 when the DLL is unloaded, but it doesn't seem to cause the run time
 exception that a more complicated example might cause.  This could be
 because of the very small amount of memory it actually uses.

I know nothing of Matlab or Haskell linkage .. so do you know 
which C library they're linked against? It has to be the same one, 
and it must be dynamic linkage. (The Felix build was inconsistent
at one point, and we got similar random behaviour.)

-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net



Re: Floating point problems

2006-08-30 Thread skaller
On Wed, 2006-08-30 at 14:58 -0400, David Roundy wrote:
 It's
 sad, but we're stuck with it, as I'm not aware of any compiler that is
 capable of generating IEEE arithmetic.

Gcc man page:

-ffast-math
    Sets -fno-math-errno, -funsafe-math-optimizations,
    -fno-trapping-math, -ffinite-math-only, -fno-rounding-math,
    -fno-signaling-nans and -fcx-limited-range.

    This option causes the preprocessor macro __FAST_MATH__ to be
    defined.

    This option should never be turned on by any -O option since it can
    result in incorrect output for programs which depend on an exact
    implementation of IEEE or ISO rules/specifications for math
    functions.

Exact portable calculations are the default: you have to
explicitly enable non-IEEE/ISO C conformance with a switch
to get better performance.
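
A source file that cares can even check this at compile time (a small
sketch using the macro mentioned in the man page; #warning is a gcc
extension):

#ifdef __FAST_MATH__
#warning "-ffast-math is on: IEEE/ISO-exact math is disabled"
#endif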

-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net



Re: Floating point problems

2006-08-30 Thread skaller
On Wed, 2006-08-30 at 22:29 +0100, Jamie Brandon wrote:
 If the object of the course is to teach you (more about) C,
 that might not go down too well :-)
 
 It's on computer aided research in maths.

[]

 You can always define an infix version of
 == (maybe ~=~ or something) that is a bit sloppier in its comparisons,
 of course.

Congratulations! You have just discovered Constructivist Mathematics :)


-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net



Re: 6.4.3 and threaded RTS problems

2006-08-24 Thread skaller
On Thu, 2006-08-24 at 08:56 +0100, Simon Marlow wrote:
 Hi Folks,
 
 Roman Leshchinskiy and I looked into the 6.4.3 crashes on Sparc/Solaris 
 yesterday.  I think we may have found the problem, and it seems likely that 
 this is the same problem affecting 6.4.3 on MacOS X.  The threaded RTS is 
 assuming in a couple of places that pthread_cond_wait() doesn't spuriously 
 wake up, which is apparently the case on Linux but not true in general.

I don't believe it is the case on Linux either. I had the same
problem: on Linux it appeared to work .. but Posix allows
spurious wakeups, and our OSX/Solaris/Windows code failed.

This isn't just cond_wait: most system calls can return
spuriously if an OS signal goes off. Somewhere I read a good article
explaining why this is necessary for good performance.
A signal to a condition variable should be regarded as
a hint to recheck the condition.
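
The standard defence is to wait in a loop that re-tests the predicate
(a generic sketch, not the GHC RTS code):

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int ready = 0;

void wait_until_ready(void)
{
    pthread_mutex_lock(&lock);
    while (!ready)                      /* 'while', never 'if' */
        pthread_cond_wait(&cond, &lock);
    pthread_mutex_unlock(&lock);
}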

-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net



Re: Data.Generics vs. Template Haskell

2006-08-24 Thread skaller
On Fri, 2006-08-25 at 00:53 +0400, Bulat Ziganshin wrote:
 Hello Vyacheslav,
 
 Thursday, August 24, 2006, 11:51:46 PM, you wrote:
 
  I am trying to figure out where these two libraries stand in relation
  to each other and which one is preferred to do generic programming in
  Haskell. I understand that their goals often intersect but couldn't
  find any good comparisons. Could someone point me in the right
  direction?
 
 search for generics on hswiki, you should find a lot of papers. in
 particular, there is a new paper that compares many different
 approaches to generic programming. in particular, TH is not a g.p.
 tool, it's just a universal Haskell code generator which can be used
 to solve particular tasks in this area. but to solve the general problem
 of defining a traversal function which has some general case and a number
 of type-specific cases, TH is not very appropriate

AFAIK this problem is now fully solved by Barry Jay:
polyadic traversal drops out of the pattern calculus.

-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net



Re: Re[4]: Replacement for GMP: Update

2006-08-12 Thread skaller
On Sat, 2006-08-12 at 10:58 +0400, Bulat Ziganshin wrote:
 Hello skaller,
 
 Saturday, August 12, 2006, 7:06:15 AM, you wrote:
 
  My point here is that actually this is a disastrous optimisation
  in a  multi-processing environment, because in general, the
  assignment of a pointer means the store isn't write once.
 
 :)  all the variables rewritten are local to the function. _without_
 tail call optimization, we create a new stack frame for each recursive
 call. _with_ optimization, we just update vars in the first stack
 frame created, because we know that these vars will not be reused after
 return from the call

Yes, but this defeats the use of the kind of collector I attempted
to describe. 

The problem will occur if the 'stack' is aged: in that case
the sweep can miss the mutation and reap a live object.

-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net



Re: Re[2]: Replacement for GMP: Update

2006-08-11 Thread skaller
On Fri, 2006-08-11 at 09:34 +0400, Bulat Ziganshin wrote:

 why do you say that ForeignPtr is slow? afaik, malloc/free is slow, but
 starting from ghc 6.6 the speed of _using_ ForeignPtr is the same as for Ptr

Er, malloc/free is not slow .. it's very fast. I implemented
an arena based allocator (one which just increments a pointer)
and it was slower than malloc .. ok, so my implementation was
naive, but still, that does make malloc pretty good.

After all, on the average call, where an object of that
size is already free, it is a single array lookup; we have:

(a) fetch pointer (one read)
(b) fetch next (one read)
(c) store next as current (one write)

It is very hard to beat that. Indeed, the whole cost
of this technique is probably in the mutex
based locking that wraps it on a multi-processor.
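
In code, that fast path is just this (an illustrative free-list
sketch, not any real malloc):

struct Block { struct Block *next; };

static struct Block *free_list[32];          /* per-size-class heads */

void *alloc(unsigned size_class)
{
    struct Block *b = free_list[size_class]; /* (a) fetch pointer   */
    if (!b) return 0;                        /* slow path: refill   */
    free_list[size_class] = b->next;         /* (b) fetch next,     */
    return b;                                /* (c) stored as head  */
}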

A purely functional system -- one which does NOT convert
self tail calls into jumps and reuse storage -- can
perhaps be faster, since each thread can have its own
local arena to allocate from (without need of any write
barrier) .. however it isn't clear to me what the trade
off is between cache misses and parallelism.

Doesn't Haskell do that optimisation?

It is of course hard to test this today without a
fairly expensive box with enough CPUs on the one bus.

-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net



Re: Re[2]: Replacement for GMP: Update

2006-08-11 Thread skaller
but will actually be slower on a multi-processor because
you'd be forced to use an inferior garbage collection
scheme.

Felix uses an inferior scheme because it uses the C/C++
object model by specification. I have to stop ALL threads
to collect. This is REALLY BAD: if you have N processors,
N-1 of them can wait for a long time for the last one
to stop! [Felix cannot signal the threads to stop. It has
to wait until they reach a yield point]

 (Maybe that is what he meant when some time ago he  
 wrote John Meacham and said that they (the GHC researchers)  
 considered compiling via C a dead end.) 

Yes. I tend to agree. C is really very dead, it lacks really
basic serial primitives, and it uses a stack model: it isn't
really any good even for C :)

Unfortunately, hardware design tends to supply stuff that
runs C code fast ;(

 The curious thing about GHC-Haskell is that through the prism of Cmm, 
 which enforces such things as immutable variables and recursion right at 
 the machine level, Haskell is less a language of translation to sequential 
 machine code and more a description of a computational model.  If you 
 still think I am wrong about this, 

I don't. I think you're right. A purely functional storage model
seems to scale better to multiple threads.

-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net



Re: fork and ghc

2006-08-09 Thread skaller
On Wed, 2006-08-09 at 16:26 -0700, John Meacham wrote:

 on another topic, I ran across this old paper online which gives an
 exceedingly clever method of implementing a functional language compiler
 doing real garbage collection and efficient tail calls in portable C
  http://home.pipeline.com/~hbaker1/CheneyMTA.html
 It is very interesting, I think I might implement it as my 'fast to
 compile' back end for jhc if I can adapt it to a lazy language.

Not even marginally portable ;(

-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net



Re: 6.6 plans and status

2006-08-07 Thread skaller
On Mon, 2006-08-07 at 17:07 +0100, Chris Kuklewicz wrote:
 Would a new and expanded Regex package (Text.Regex.Lazy) be something that 
 could be included in the 6.6.0 libraries?  What is the best practice for 
 getting it included?
 
 It still supports a wrapped Posix regex backend, but also includes a PCRE 
 wrapper and pure Haskell backends, and works efficiently on both String and 
 ByteString.
 
 It runs, has an increasing amount of Haddock documentation, some HUnit tests, 
 and some new QuickCheck tests are almost done.

Wouldn't it be nice to use Ville Laurikari's TRE
package instead of PCRE?

[It is also Posix compliant and a drop-in replacement for
GNU regex .. as well as supporting nice extensions]

-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net



Re: Replacement for GMP as Bignum: ARPREC? Haskell?; OS-X and OpenSSL

2006-07-30 Thread skaller
On Sun, 2006-07-30 at 19:03 +0100, Duncan Coutts wrote:
 On Sun, 2006-07-30 at 17:33 +0100, Brian Hulley wrote:

 I think part of the issue is that static linking is very convenient and
 dynamic linking in this case would lead to some tricky administrative
 problems.
 
 Suppose for a moment that GHC did dynamically link gmp.dll, or indeed
 HSbase.dll. Where exactly would these files go?

 Now as far as I understand Windows dll technology this requires either
 that:
  1. gmp.dll be copied into the directory with Foo.exe
  2. gmp.dll exist in the windows or system directories
  3. gmp.dll exist somewhere on the windows search %PATH%
 
 None of these are attractive or scalable solutions.

Nor is static linkage ;) 

FWIW: Windows has obsoleted your solutions above. 
It now uses a thing called 'assemblies'.

This involves shipping all the dependent dll's with an application
or library, and embedding a 'manifest' in each executable object.
On installation, the dll's are copied into a cache if necessary,
and the dynamic loader knows how to find them there. Versions
of a library are kept distinct.

In particular, note this applies to the C library (MSVCR80.DLL),
which is no longer considered a system library on XP,
XP64 in particular.

Exactly how the MinGW version of gcc will cope with all this
I do not know. I personally couldn't figure out how to make
it all work ;(

 On Unix this isn't a problem because it's possible to embed the dynamic
 library search path into an executable (and indeed into a dynamic
 library) using the -rpath linker directive.

Which has problems of its own -- and is strongly discouraged
by systems like Debian. Don't even think about it, rpath is evil.

-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Re[2]: FFI: number of worker threads?

2006-06-21 Thread skaller
On Wed, 2006-06-21 at 17:55 +0100, Duncan Coutts wrote:

 So I'd suggest the best approach is to keep the existing multiplexing
 non-blocking IO system and start to take advantage of more scalable IO
 APIs on the platforms we really care about (either select/poll
 replacements or AIO).

FYI: Felix provides a thread for socket I/O. Only one is required.
Uses epoll/kqueue/io completion ports/select, depending on 
OS support.
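
A minimal sketch of the idea (my own, assuming the Linux epoll
back-end; the dispatch step is just a printf placeholder here):

#include <stdio.h>
#include <sys/epoll.h>

void io_thread(void) {
    int ep = epoll_create(16);   /* the size argument is only a hint */
    /* elsewhere, each new socket fd gets registered once:
       struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };
       epoll_ctl(ep, EPOLL_CTL_ADD, fd, &ev);                        */
    struct epoll_event evs[64];
    for (;;) {
        int n = epoll_wait(ep, evs, 64, -1);  /* sleep, no busy wait */
        for (int i = 0; i < n; i++) {
            /* wake whichever lightweight thread is blocked on this fd */
            printf("fd %d is ready\n", evs[i].data.fd);
        }
    }
}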

Similarly, there is one thread to handle timer events.
We're working on a similar arrangement for asynchronous file
I/O which many OS provide support for (Windows, Linux at least).

There are only a limited number of devices you can connect
to a computer. You can't need more than a handful of threads,
unless the OS design is entirely lame .. in which case you'd
not be trying to run high performance heavily loaded applications
on that system in the first place.

Our biggest headache is non-reentrant APIs
such as OpenGL, which are only re-entrant on a per-process basis;
this doesn't play well with pre-emptive threading, and it's
not good for cooperative threading either. The only real solution
here is to run a server thread and a thread-safe abstraction layer,
which cooperate to do context switches when necessary.
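
A minimal sketch of the pattern (my own assumptions, not Felix's
actual code): one thread owns the non-reentrant API; everyone else
submits commands through a mutex-protected queue and never touches
the API directly.

#include <pthread.h>
#include <stdlib.h>

typedef struct cmd {
    void (*run)(void *);   /* the work, e.g. a batch of GL calls */
    void *arg;
    struct cmd *next;
} cmd;

static cmd *head, *tail;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t nonempty = PTHREAD_COND_INITIALIZER;

/* any thread: enqueue work for the server thread */
void submit(void (*run)(void *), void *arg) {
    cmd *c = malloc(sizeof *c);
    c->run = run; c->arg = arg; c->next = NULL;
    pthread_mutex_lock(&lock);
    if (tail) tail->next = c; else head = c;
    tail = c;
    pthread_cond_signal(&nonempty);
    pthread_mutex_unlock(&lock);
}

/* the one thread that owns the API runs this loop forever */
void *server(void *unused) {
    (void)unused;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (!head) pthread_cond_wait(&nonempty, &lock);
        cmd *c = head;
        head = c->next;
        if (!head) tail = NULL;
        pthread_mutex_unlock(&lock);
        c->run(c->arg);    /* the only place the API is ever called */
        free(c);
    }
    return NULL;
}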

-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: WordPtr,IntPtr,IntMax,WordMax

2006-05-11 Thread skaller
On Fri, 2006-05-12 at 00:34 +0100, Ben Rudiak-Gould wrote:
 Simon Marlow wrote:
  I suppose you might argue that extra precision is always good.
 
 Well... I'm having a hard time thinking of a situation where it isn't. 

Wastes space in the cache hierarchy, slowing down the program
and limiting the maximum problem size that can be handled.

For SIMD registers it's far worse .. it halves the number of
parallel computations you can do.
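
(A hypothetical illustration using gcc's vector extensions: a 128-bit
register holds four 32-bit ints but only two 64-bit ones, so the same
instruction does half as much work at the wider type.)

typedef int v4si __attribute__ ((vector_size (16)));       /* 4 lanes */
typedef long long v2di __attribute__ ((vector_size (16))); /* 2 lanes */

v4si add4(v4si a, v4si b) { return a + b; }  /* one op, four adds */
v2di add2(v2di a, v2di b) { return a + b; }  /* one op, two adds  */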

-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Silly IO problem

2005-11-03 Thread skaller
On Thu, 2005-11-03 at 09:49 +0100, Mirko Rahn wrote:
  It prints n, rather than tak(3n,2n,n).
 
 ??? No:
 
 *Main :set args 0
 *Main main
 0.0
 *Main :set args 1
 *Main main
 2.0
 *Main :set args 2
 *Main main
 3.0
 *Main :set args 3
 *Main main
 6.0
 
 The same with some versions of ghc.

Ah .. it seems to be a bug in gcc 4.0

The Glorious Glasgow Haskell Compilation System, version 6.4
[running on Ubuntu, which uses the Debian package]

gcc (GCC) 4.0.2 20050808 (prerelease) (Ubuntu 4.0.1-4ubuntu9)
x86_64

Summary: gcc does not work with GHC 6.4 on x86_64 with
optimisation higher than -O1

Perhaps the Evil Mangler is not working, or possibly
the build isn't even registerised.

--
[EMAIL PROTECTED]:/work/felix/flx$ ghc -O3 -fvia-C -optc -O3 -o
speed/exes/ghc/takfp speed/src/haskell/takfp.hs

[EMAIL PROTECTED]:/work/felix/flx/speed/exes/ghc$ ./takfp 8
8.0

[EMAIL PROTECTED]:/work/felix/flx$ ghc -fvia-C -o speed/exes/ghc/takfp
speed/src/haskell/takfp.hs
[EMAIL PROTECTED]:/work/felix/flx$ speed/exes/ghc/takfp 8
9.0
[EMAIL PROTECTED]:/work/felix/flx$ speed/exes/ghc/takfp 3
6.0

[EMAIL PROTECTED]:/work/felix/flx$ ghc -O3 -fvia-C -optc -O2 -o
speed/exes/ghc/takfp speed/src/haskell/takfp.hs
[EMAIL PROTECTED]:/work/felix/flx$ speed/exes/ghc/takfp 3 
3.0

[EMAIL PROTECTED]:/work/felix/flx$ ghc -O3 -fvia-C -optc -O1 -o
speed/exes/ghc/takfp speed/src/haskell/takfp.hs
[EMAIL PROTECTED]:/work/felix/flx$ speed/exes/ghc/takfp 3
6.0


-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: jhc vs ghc and the surprising result involving ghc generatedassembly.

2005-11-02 Thread skaller
On Wed, 2005-11-02 at 14:59 +0100, Florian Weimer wrote:


 Is it correct that you use indirect gotos across functions?  Such
 gotos aren't supported by GCC and work only by accident.

Even direct gotos aren't universally supported. Some info
in Fergus Henderson's paper may be of interest

http://felix.sourceforge.net/papers/mercury_to_c.ps


-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: jhc vs ghc and the surprising result involving ghc generatedassembly.

2005-11-02 Thread skaller
On Wed, 2005-11-02 at 18:05 +0100, Florian Weimer wrote:
  Is it correct that you use indirect gotos across functions?  Such
  gotos aren't supported by GCC and work only by accident.
 
  Even direct gotos aren't universally supported. Some info
  in Fergus Henderson's paper may be of interest
 
  http://felix.sourceforge.net/papers/mercury_to_c.ps
 
 This paper seems to be from 1995 or so:
 
 %DVIPSSource:  TeX output 1995.11.29:1656
 
 (Why is it so uncommon to put the publication date on the first page?)
 
 GCC's IL has changed significantly since then; it's not clear if it
 still applies.

I am using some of it in Felix; the part I am using seems
to work fine on all platforms tested: various versions
of g++ under Linux, OSX, Cygwin, and MinGW, possibly more.

The config script checks that assembler labels are supported;
if they are, the indirect jumps 'just work'.  Of course
the config would have to be built by hand for cross
compilation ;(

However my system obeys a constraint: the runtime conspires
to ensure the function containing the target label is
entered before the jump is done. The address is calculated
by the caller though. So I don't run into any problems
loading the right data section pointer. I suspect Haskell
cannot do that, since it would defeat the intended optimisation.

[More precisely, in Felix the technique is used to implement
non-local gotos, and which can only occur in procedures,
not in functions]
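
For reference, the documented part of the extension looks like this
(a minimal sketch of my own, not Felix's generated code): && takes a
label's address, and goto * jumps to one. Within a single function
this is supported gcc behaviour; the cross-function variant discussed
above is the part that only works by accident.

#include <stdio.h>

int interp(const int *code) {
    static void *ops[] = { &&op_halt, &&op_inc, &&op_dec };
    int acc = 0;
    goto *ops[*code];
op_inc:  acc++; code++; goto *ops[*code];
op_dec:  acc--; code++; goto *ops[*code];
op_halt: return acc;
}

int main(void) {
    int prog[] = { 1, 1, 1, 2, 0 };  /* inc, inc, inc, dec, halt */
    printf("%d\n", interp(prog));    /* prints 2 */
    return 0;
}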

-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: jhc vs ghc and the surprising result involving ghc generated assembly.

2005-11-02 Thread skaller
On Wed, 2005-11-02 at 19:47 +0100, Florian Weimer wrote:
  It seems that the goto-based version leads to different static branch
  prediction results, which happen to be favorable. 
 
  It has nothing to do with branch prediction. I know 
  it is determined ENTIRELY by stack use.
 
 In both cases, The C compiler emits code which doesn't use the stack.

huh? how can a recursive call not use the stack??

-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Silly IO problem

2005-11-02 Thread skaller
This code doesn't work:

import System(getArgs)

main = do n <- getArgs >>= readIO . head
          putStrLn (show (tak (3*n) (2*n) n))

tak :: Float -> Float -> Float -> Float
tak x y z | y >= x    = z
          | otherwise = tak (tak (x-1) y z) (tak (y-1) z x) (tak (z-1) x y)
--
It prints n, rather than tak(3n,2n,n). Can someone give me
the right encoding please?

-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: jhc vs ghc and the surprising result involving ghc generated assembly.

2005-11-01 Thread skaller
On Tue, 2005-11-01 at 17:30 +0100, Florian Weimer wrote:

  use C control constructs rather than gotos.
 
 With GCC version 4, this will have no effect because the gimplifier
 converts everything to goto-style anyway.

Felix generates C with gotos. The result is FASTER
than native C using gcc 4.0 on x86_64.

http://felix.sourceforge.net/current/speed/en_flx_perf_0005.html

C code:

int Ack(int M, int N) {
  if (M==0) return N +1;
  else if(N==0) return Ack(M-1,1);
  else return Ack(M-1, Ack(M,N-1));
}

It is known gcc is correctly optimising tail calls here.

Felix:

fun ack(x:int,y:int):int =
  if x == 0 then y + 1
  elif y == 0 then ack(x-1, 1)
  else ack(x-1, ack(x, y-1))
  endif
;


Felix generated C(++) code -- compiled with the same options:

int FLX_REGPARM _i1860_f1301_ack( 
int _i1864_v1303_x, int _i1865_v1304_y)
{
  _us2 _i1867_v1799_ack_mv_74;
  _us2 _i1868_v1821_ack_mv_84;
start_1828:;
  _i1867_v1799_ack_mv_74 = _i1864_v1303_x==0 ;
  if(!(_i1867_v1799_ack_mv_74==1))goto _1797;
  return _i1865_v1304_y+1 ;
_1797:;
  _i1868_v1821_ack_mv_84 = _i1865_v1304_y==0 ;
  if(!(_i1868_v1821_ack_mv_84==1))goto _1798;
  _i1865_v1304_y = 1;
  _i1864_v1303_x = _i1864_v1303_x-1 ;
  goto start_1828;
_1798:;
  _i1865_v1304_y = _i1860_f1301_ack(_i1864_v1303_x, _i1865_v1304_y-1 );
  _i1864_v1303_x = _i1864_v1303_x-1 ;
  goto start_1828;
}

[The FLX_REGPARM says __attribute__((regparm(3))) when
gcc is the compiler for i386 .. but it has no effect
on x86_64]

AFAICS gcc 4.x generates much better code if you just use
gotos everywhere instead of C control structures.

I have no real idea why the Felix generated C is faster.
Two guesses:

(a) the two 'mv' variables declared at the top are optimised
away, so the Felix version is only using 3 words of stack.

(b) the parallel assignment in tail calls optimisation
is saving one word on the stack (evaluating y before x
saves a temporary across the non-tail recursion).

but I don't really know.

-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: jhc vs ghc and the surprising result involving ghc generated assembly.

2005-11-01 Thread skaller
On Tue, 2005-11-01 at 19:03 +0100, Florian Weimer wrote:

  Felix generates C with gotos. The result is FASTER
  than native C using gcc 4.0 on x86_64.
 
 Coincidence. 8-)

Possibly :)

  Felix generated C(++) code -- compiled with same options:
 
  int FLX_REGPARM _i1860_f1301_ack( 
  int _i1864_v1303_x, int _i1865_v1304_y)
  {
_us2 _i1867_v1799_ack_mv_74;
_us2 _i1868_v1821_ack_mv_84;
 
 _us2 is unsigned, correct?

No, actually it means 'unit sum', in this case the
sum of two units (hence us2). Felix has a special representation
for unit sums .. it uses an int. And of course, 

1 + 1 = 2

is just another name for 'bool'. These two variables are
'mv's or 'match variables' -- the arguments of the two matches
which are generated by the 'if then elif else endif' sugar.
Match expressions are stuck into variables so they can be
destructured by pattern matching (even though in this case
there is none :)

 BTW, you shouldn't generate identifiers with leading underscores
 because they are reserved for the implementation.

I AM the implementation :)

Generated identifiers start with underscores,
so they don't clash with arbitrary C code.

  I have no real idea why the Felix generated C is faster.
  Two guesses:
 
  (a) the two 'mv' variables declared at the top are optimised
  away, so the Felix version is only using 3 words of stack.
 
  (b) the parallel assigment in tail calls optimisation
  is saving one word on the stack (evaluating y before x
  saves a temporary across the non-tail recursion).
 
 Both variants do not use the stack.

This code evaluates y first, then x:

  _i1865_v1304_y = _i1860_f1301_ack(_i1864_v1303_x, _i1865_v1304_y-1 );
  _i1864_v1303_x = _i1864_v1303_x-1 ;

If you change the order you get this:

  {
    int tmp = _i1864_v1303_x;
    _i1864_v1303_x = _i1864_v1303_x-1 ;
    _i1865_v1304_y = _i1860_f1301_ack(tmp, _i1865_v1304_y-1 );
  }

which as written uses one extra word on the stack for 'tmp'
up until the closing } .. which means an extra word during
the recursive call. 

 It seems that the goto-based version leads to different static branch
 prediction results, which happen to be favorable. 

It has nothing to do with branch prediction. I know 
it is determined ENTIRELY by stack use. I rewrote the
assembler version many ways to try to find out how
to improve my compiler.. nothing made the slightest
difference except stack usage.

The performance relativities of various generated codes
depend only on the difference in stack pointers at the same place 
in the code, as a function of recursion depth.

This is obvious when you think about it: the CPU operations
(add, subtract, branch, etc) are orders of magnitude faster
than memory accesses -- including cache accesses -- and the
calculations in ackermann are trivial (add 1, subtract 1.. :)

Accessing the cache costs. Accessing RAM costs even more.
And I even drove this function so hard on my old i386
I filled all of RAM and pushed it into disk paging.

hehe .. just before publishing the latest Felix I had a look
at the performance graph and .. shock horror .. gcc and ocaml
were creaming it. I had accidentally disabled the optimisation
which replaced the creation of a C++ class object with
a C function. The latter not only has no 'this' pointer,
it also takes arguments 'one by one' instead of a
reference to a tuple which is unpacked in the apply
method of the class .. that's a lot of extra words
on the stack. The graph you see, of course, has that
little problem fixed :)

Ackermann eats memory .. and doesn't do any other serious
work. That's the point of this function as a microbenchmark:
everything depends on efficient stack usage (and nothing
else).

I actually know how to optimise this function MUCH more:
all the way down to ONE word of stack. The first optimisation
eliminates x by unrolling -- x starts at 3 and is never incremented
(in the Shootout test, initial x is fixed at 3).

That eliminates one word. The second optimisation is secret <g>
but also eliminates one word, at the cost of one register.
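
(A hypothetical sketch of the first optimisation, not the Felix
optimiser's actual output: specialise Ack for each fixed value of x,
so the x argument -- one word per frame -- disappears entirely.)

static int ack0(int n) { return n + 1; }
static int ack1(int n) { return n == 0 ? ack0(1) : ack0(ack1(n - 1)); }
static int ack2(int n) { return n == 0 ? ack1(1) : ack1(ack2(n - 1)); }
static int ack3(int n) { return n == 0 ? ack2(1) : ack2(ack3(n - 1)); }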

I guess the bottom line is: CPU performance is hard to predict,
and what optimisations gcc will do are also hard to predict.

-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: mixing x86_64 and i386

2005-08-08 Thread skaller
On Sun, 2005-08-07 at 02:15 -0700, John Meacham wrote:
 Just thought I'd share a little hack...
 
 linux has a command 'setarch' which changes the current architecture so
 you can do things like build i386 rpms on a x86_64 system...

[EMAIL PROTECTED]:~$ man setarch
No manual entry for setarch
[EMAIL PROTECTED]:~$ setarch
bash: setarch: command not found
[EMAIL PROTECTED]:~$

-- 
John Skaller skaller at users dot sourceforge dot net



signature.asc
Description: This is a digitally signed message part
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Problem building GHC

2005-07-13 Thread John Skaller
On Wed, 2005-07-13 at 10:18 +0300, Dinko Tenev wrote:
 On 7/13/05, John Skaller [EMAIL PROTECTED] wrote:
  I think you need more RAM. I had to
  buy a new computer to fix this problem.
 
 Oh my, that hurts...quite an expensive piece of software, GHC ;)  But
 this really explains a lot, because two identical builds of mine
 happened to die on different files - it never occurred to me GCC could
 run low on memory though...

Try running 'top' in one window whilst compiling ..:)

 How about more swap, provided I would let it run overnight?

On my box, running RH9, I had a heap of swap .. that was
part of the problem -- the thing went into a paging frenzy.

I think that version of Linux had very bad code in it too:
I had the same problem opening too many browser windows.

If you do 'random access' on virtual memory, your box
will be sure to die -- you're loading a whole page
when all you may want is one word :)

If you run in single user mode .. you might do better
to turn the swap OFF :)

  Gcc is a badly written piece of software.
  What do you expect from a compiler that tries to do
  sophisticated  optimisations .. written in C??
 
 Ah, this sounds a bit more encouraging (really :)  Wouldn't disabling
 GCC optimisations be an easier way then?

I cannot say for GHC -- my compiler Felix relies on 
the C++ compiler to do some of the work.

Using GHC's native code generator might avoid the
problem (but I'm no expert on GHC ;)

-- 
John Skaller skaller at users dot sourceforge dot net
Download Felix: http://felix.sf.net


signature.asc
Description: This is a digitally signed message part
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Problem building GHC

2005-07-12 Thread John Skaller
On Tue, 2005-07-12 at 22:59 +0300, Dinko Tenev wrote:
 Hi,
 
 I am trying to build from the current GHC source tree on a Debian
 sarge box, and the build dies trying to compile
 /home/shinobi/build/fptools/ghc/parser/Parser.hs, with the following
 output:

I have had this problem regularly...

 It seems that, apart from whatever else might be wrong, there is a
 problem with GCC.  

Yes...

 I guess I'll have to switch to version 2.95.2 as
 suggested by the guide (I am currently using 3.3.5,) 

I think you need more RAM. I had to
buy a new computer to fix this problem.

WORKAROUND: Reboot your computer and try again,
in single user mode if necessary.

Gcc is a badly written piece of software.
What do you expect from a compiler that tries to do
sophisticated  optimisations .. written in C??

It uses HUGE amounts of memory, and some of the
algorithms it uses are quadratic or worse.
I regularly had to *hardware reset* my 
old 700MHz PIII to kill compiles.

My new box has 1G Ram .. that seems to have
fixed it .. but the C compiles are still
VERY slow (Compared with Ocaml for example,
which is at least 100 times faster).

Just for example, most versions use a linked
list for symbol lookup instead of a hashtable ..
O(n) instead of O(1).

It is woefully inadequate at compiling large functions,
which language translators sometimes generate.
Most language translators would love to generate
a single huge function, and compilation via C
requires all sorts of hacks to get around the
fact that the most commonly used C compiler
on Unix platforms, gcc, is incapable of handling it.

There is an interesting paper by Fergus Henderson et al
on the Felix website about the gcc-specific hackery
used to allow Mercury to work (Mercury is the 
premier logic programming language) .. Felix uses
some of the techniques discussed:

http://felix.sourceforge.net/papers/mercury_to_c.ps


-- 
John Skaller skaller at users dot sourceforge dot net
Download Felix: http://felix.sf.net


signature.asc
Description: This is a digitally signed message part
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Performance on amd64

2005-07-05 Thread John Skaller
On Tue, 2005-07-05 at 12:39 +0100, Simon Marlow wrote:
 On 05 July 2005 10:38, John Skaller wrote:
 
  Can someone comment on the Debian package for Ubuntu Hoary
  providing ghc-6.2.2 with binary for amd64?
 
 You're probably running an unregisterised build, which is going to be
 generating code at least a factor of 2 slower than a registerised build.
 You can get an up to date snapshot of 6.4.1 for Linux/x86_64 here:
 
 http://www.haskell.org/ghc/dist/stable/dist/ghc-6.4.1.20050704-x86_64-unknown-linux.tar.bz2


Thanks, downloading it now.. will try. What exactly is 
a 'registered' build?

 This build is registerised, but doesn't have the native code generator.

Which would generate the best code?

 I hope you're not going to conclude *anything* based on the performance
 of ackermann and tak! :-)

Ackermann is a good test of the optimisation of a recursive
function, which primarily requires the smallest possible
stack frame. Of course it is only one function; more need
to be tested.

In fact, this one test has been very good for helping me get
the Felix optimiser to work well -- the raw code generated
without optimisation creates a heap closure for every function,
including ones synthesised for conditionals and matches, etc.
If I remember rightly, it took 2 hours to calculate Ack(3,6),
and I needed a friend to use a big PPC to get the result
in two hours for Ack(3,7).

So ... you could say the Felix optimiser has improved a bit... :)

If you would like to suggest other tests I'd be quite interested.
At the moment I'm using code from the Alioth Shootout, 
simply because I can -- saves writing things in languages
I don't know (which includes Haskell unfortunately).

-- 
John Skaller skaller at users dot sourceforge dot net
Download Felix: http://felix.sf.net


signature.asc
Description: This is a digitally signed message part
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Performance on amd64

2005-07-05 Thread John Skaller
On Tue, 2005-07-05 at 12:39 +0100, Simon Marlow wrote:

 http://www.haskell.org/ghc/dist/stable/dist/ghc-6.4.1.20050704-x86_64-unknown-linux.tar.bz2
 
 This build is registerised, but doesn't have the native code generator.

BTW: I get this:

for i in `(cd share; find . -type f )`; do \
   /usr/bin/install -c -m 644
share/$i /usr/local/share/ghc-6.4.1.20050704/$i; \
done
./mkdirhier /usr/local/share/ghc-6.4.1.20050704/html
if test -d share/html ; then cp -r
share/html/* /usr/local/share/ghc-6.4.1.20050704/html ; fi
for i in share/*.ps; do \
cp $i /usr/local/share/ghc-6.4.1.20050704 ; \
done
cp: cannot stat `share/*.ps': No such file or directory
make: *** [install-docs] Error 1

It's right: in 'share' there is just one directory, called 'html' ..

This code checks that 'html' exists .. but doesn't check for
any *.ps files:

install-docs : install-dirs-docs
if test -d share/html ; then $(CP) -r share/html/* $(htmldir) ; fi
for i in share/*.ps; do \
$(CP) $$i $(psdir) ; \
done

Otherwise it worked fine .. results next post ..

-- 
John Skaller skaller at users dot sourceforge dot net
Download Felix: http://felix.sf.net


signature.asc
Description: This is a digitally signed message part
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Performance on amd64

2005-07-05 Thread John Skaller
On Tue, 2005-07-05 at 17:08 +0100, Simon Marlow wrote:

  Thanks, downloading it now.. will try. What exactly is
  a 'registered' build?
 
 An unregisterised build generates plain C which is compiled with a C
 compiler.  The term registerised refers to a set of optimisations
 which require post-processing the assembly generated by the C compiler
 using a Perl script (affectionately known as the Evil Mangler).  In
 particular, registerised code does real tail-calls and uses real machine
 registers to store the Haskell stack and heap pointers.

Ah! So 'register' refers to machine registers .. not
some certification by some kind of authority, which is
what I guessed .. ?

 Sure, it's good to look at these small benchmarks to improve aspects of
 our compilers, but we should never claim that results on microbenchmarks
 are in any way an indicator of performance on programs that people
 actually write.

One can also argue that 'programmer performance' is important,
not just machine performance.

 The shootout has lots of good benchmarks, for sure.  

I'm not so sure ;(

 Don't restrict
 yourself to the small programs, though.

Of course, larger, more complex programs may give interesting
performance results, but have one significant drawback:
a lot more work is required to write them.

 It's still hard to get a big picture from the results - there are too
 many variables. I believe many of the Haskell programs in the suite can
 go several times faster with the right tweaks, and using the right
 libraries (such as a decent PackedString library).

Maybe I'm asking the wrong question. Can you think of a computation
which you believe Haskell would be the best at?

.. and while you're at it: a computation GHC does
NOT handle well -- IMHO these are actually most useful
to compiler writers.

-- 
John Skaller skaller at users dot sourceforge dot net
Download Felix: http://felix.sf.net


signature.asc
Description: This is a digitally signed message part
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users