Makefile tricks

2002-03-20 Thread S.D.Mechveliani

Dear  ghc-5.02.2-i386-unknown-linux,

(RedHat-6.1 Linux machine, libc-2.1.2),

Could you, please, explain the following effect:

   Main.hs:   main = putStr "hello\n"

   Makefile: 
   ---------
   obj:
           ghc -c -O Main.hs  +RTS -M10m -RTS

   -- Running 
   $ tcsh
   $ make obj
   ghc -c -O Main.hs  +RTS -M10m -RTS
   ghc: input file doesn't exist: +RTS
   ghc: unrecognised option: -M10m
   ghc: unrecognised option: -RTS

Now, type this at the command line:

ghc -c -O Main.hs  +RTS -M10m -RTS

And it works.
This is so for the  lstcsh  shell too.

   make -v
  GNU Make version 3.77, by Richard Stallman and Roland McGrath.

And it works correctly under Debian Linux,
  GNU Make version 3.79.1, by Richard Stallman and Roland McGrath.
  Built for i586-pc-linux-gnu

I do not know whether it is essential, but this RedHat machine 
is a cluster: for any job started, the system itself chooses an 
appropriate machine to run the job on.
But I was told that  tcsh  switches off this clustering.

Also I am not sure that this relates to GHC.
I tried setting
   obj:
           ls -la

and it works.
Maybe it is a bug in the new  ssh  version?

-
Serge Mechveliani
[EMAIL PROTECTED]


___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users



sorry for false alarm

2002-03-20 Thread S.D.Mechveliani

I am sorry. It was a false alarm.

I wrote recently about Makefile faults when applying ghc.

There is also an old  ghc  on this machine, in the system area.
I set an  alias  to the new  ghc  (in my user directory),
but `make' does not know about the alias and applies the old  ghc,
which does not understand some options, like +RTS.

This is not a problem.

-
Serge Mechveliani
[EMAIL PROTECTED]





warning: `RLIM_INFINITY' redefined

2002-03-20 Thread S.D.Mechveliani

Dear  ghc-5.02.2-i386-unknown-linux,

When compiling with  -O,  you sometimes report things like

  In file included from /usr/include/sys/resource.h:25,
  from
  /share/nfs/users/internat/mechvel/ghc/5.02.2/inst/lib/ghc-5.02.2/
 include/HsStd.h:65,
 from /tmp/ghc29631.hc:2:
  /usr/include/bits/resource.h:109: warning: `RLIM_INFINITY' redefined
  /usr/include/asm/resource.h:26: warning: 
  this is the location of the previous definition
 
Is this harmless?

-
Serge Mechveliani
[EMAIL PROTECTED]



installing 5.02.2

2002-03-19 Thread S.D.Mechveliani

Dear GHC, 

I am going to install  ghc-5.02.2  

on certain  Linux  machine to which I have  ssh  access and 
permission to install  ghc-5.02.2  in my user directory
(not a system user).

The situation is as follows.
It is  i386-unknown  (Pentium Pro II ...),  with plenty of memory and disk.
I am not sure, but it looks like it is under  RedHat-6.2,  glibc-2.1.2.
It has  ghc-4.04  installed in the system area.
But it failed to link the  Main.o  of the program   main = putStr "hello\n"
Probably,  Happy  is not installed.

Also I applied a certain  ghc --make M  
on my Debian Linux machine with  ghc-5.02.2,  
scp-ed the  a.out.zip  file to the target machine and unzipped it.
But it cannot run there: some library is not found, or something of
that sort.

Could the GHC installation experts guide me, please?
What is easier in this situation: to install from source or from binary?
How do I install Happy for the first time?
How do I find out, in a regular way, the Linux system and 
libc version?

Thank you for the advice,

-
Serge Mechveliani
[EMAIL PROTECTED]










Linux version, libraries

2002-03-19 Thread S.D.Mechveliani

To my 

 How to find out in a regular way the info about Linux system and
 libc version?

Simon Marlow writes

 Try 'uname -a' and 'ls -l /lib/libc*'.


uname -a
Linux   ...machine... 
  2.2.12 #6 SMP mar oct 5 16:44:59 CEST 1999 i686 unknown

ls -l /lib/libc*

-rwxr-xr-x 1 root root 4118299 Sep 20 1999 /lib/libc-2.1.2.so*
lrwxrwxrwx 1 root root      13 Nov  5 1999 /lib/libc.so.6 -> libc-2.1.2.so*
lrwxrwxrwx 1 ...             1999 /lib/libcom_err.so.2 -> libcom_err.so.2.0*
-rwxr-xr-x 1 root root    7889 Oct 26 1999 /lib/libcom_err.so.2.0*
-rwxr-xr-x 1 root root   64595 Sep 20 1999 /lib/libcrypt-2.1.2.so*
lrwxrwxrwx 1 ..              1999 /lib/libcrypt.so.1 -> libcrypt-2.1.2.so*


But how to find out whether it is  RedHat  (and which version)?

On GMP library:

 find /usr/lib  -name '*gmp*'

/usr/lib/libgmp.so.2.0.2
/usr/lib/libgmp.so.2
find: /usr/lib/netscape/de/nethelp: Permission ...
/usr/lib/perl5/5.00503/i386-linux/linux/igmp.ph
/usr/lib/perl5/5.00503/i386-linux/netinet/igmp.ph

?

-
Serge Mechveliani
[EMAIL PROTECTED]





scope in ghci. Reply

2002-02-14 Thread S.D.Mechveliani

Martin Norb [EMAIL PROTECTED] writes


 I just noticed this behavior with ghci
 
 After loading a module with

 :l Module

 you can't use the Prelude functions unqualified, you just get things
 like
 
 interactive:1: Variable not in scope: `show'
 
 I am pretty sure that this worked some days ago, and I was using the
 same version then.

 I feel totally confused. Has this happened to anyone else?


Even when the Prelude is visible, the library items still are not
(List.sort, Ratio ...). Maybe this is natural; I do not know.

But I believe it is possible to control the scope.
For example, create a module  Universe.hs  which re-exports the
Prelude, the libraries and the user programs.
Then,  :l Universe  should bring all these items into scope.
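A sketch of such a module (the file name Universe.hs and the chosen modules are only an illustration; the modern names Data.List and Data.Ratio stand in for the List and Ratio libraries of the time, and the module header is left in a comment so the sketch also runs standalone):

```haskell
-- Sketch of a hypothetical Universe.hs.  Its header would read
--
--   module Universe ( module Prelude, module Data.List,
--                     module Data.Ratio ) where
--
-- so that  :l Universe  brings the Prelude, the libraries and (added
-- likewise) the user modules into scope at the ghci prompt.
import Data.List  (sort)
import Data.Ratio (Ratio, (%), numerator)

-- with such a module loaded, prompt expressions work unqualified:
example :: ([Int], Integer)
example = (sort [3, 1, 2], numerator (1 % 2 :: Ratio Integer))
```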

-
Serge Mechveliani
[EMAIL PROTECTED]








notes on ghc-5.02.2

2002-01-24 Thread S.D.Mechveliani

Dear GHC,

Here are some comments on  ghc-5.02.2.
I have tested it in the following way.

1. Installed  binary = ghc-5.02.2-i386-linux-unknown.
2. Compiled  ghc-5.02.2-source  with  binary  for  linux-i386,  
   under empty  mk/build.mk,  removed binary installation.
3. Compiled a large Haskell project DoCon under -O
   (making package, -fglasgow-exts, overlappings ...) 
4. Run DoCon test.

It looks all right.
* It spends memory in a better way when making a project and 
  loading interpreted modules.
* The bug with value visibility in  ghci  in
  (let f = ...;  let g = h f ...)   looks fixed.


Some questions
--

* The up-arrow key does not work in  ghci, 
  though it worked in the binary installation.
  Probably some library was not found. How to fix this?

  I have stored the  config.log  of  `./configure ...'
  and the  makeall.log  of  `make all'.
  `make boot' (before `make all') did nothing and issued a 
  message recommending to run something in some subdirectory,
  so I skipped this command entirely. Is that a mistake?

* The directory  /share  was always a neighbour of  /bin and /lib
  in the installation directory. Why is it absent now?
  Maybe the up-arrow bug is related to this?

* Where can I find the ghc manual in .ps form,
  like the  set.ps  provided by the binary installation? 

* The interpreted code looks 2-3 times larger than the compiled one.
  The DoCon library has size                       11 Mb,  
  the .o files of the compiled test take less than  2 Mb,
  and the test loading   ghci -package docon T_ +RTS -Mxx -RTS

  needs  xx = 12  for the compiled full test  T_,  and
  xx = 31  for the interpreted  T_.
  If the DoCon library is outside the heap reserved by -M  (is it?),
  then we have that the interpreted code is  2.6  times larger.
  Is this due to large interaces kept in the heap?

Please, advise,

-
Serge Mechveliani
[EMAIL PROTECTED]




notes on ghc-5.02.2. Reply

2002-01-24 Thread S.D.Mechveliani

To my notes on  ghc-5.02.2  Simon Marlow  writes
 

 * up-arrow key does not work in  ghci,
   and it worked in binary installation.
   Probably, some library was not found. How to fix this?

 Do you have the readline-devel RPM installed?  This is needed to compile
 GHC with readline support.

I run ghc here on two machines, 
and it appears now that both are under  Debian Linux. 
Some people here say that this RPM does not exist for Debian.
Maybe GHC can take care of Debian?

Would it help now, after `make all' and `make install',
to set LD_LIBRARY_PATH or such, or to make symbolic links?


 * The interpreted code looks 2-3 times larger than compiled one.
   DoCon-library has size11 Mb, 
   .o files of compiled test take less than   2 Mb,
   test loading   ghci -package docon T_ +RTS -Mxx -RTS

   needs  xx = 12  for the compiled full test  T_,
   31  for interpreted  T_.
   If  DoCon-library  is out of the heap reserved by -M  (is it?),
   then, we have that interpreted code is  2.6  times larger.
   Is this due to large interaces kept in heap?

 Compiled code isn't loaded into the heap, so it isn't included in the
 size you give to -M. 

 What are interaces?


A typo: module interfaces. 
In any case, I could be confusing things, not understanding these 
implementation details;
I only observe that the interpreted code looks 2-3 times larger.
Maybe it is natural, maybe not; I do not know.

Another question: am I on  glasgow-haskell-users.. ?
I am responding to Simon's reply to my letter to 
glasgow-haskell-users.. sent a couple of hours ago, 
and I have not received that letter of mine so far.

-
Serge Mechveliani
[EMAIL PROTECTED]














ignore, please

2002-01-24 Thread S.D.Mechveliani

this is mail test, ignore, please




scope in ghci

2001-10-05 Thread S.D.Mechveliani

Dear GHC,

Consider the following question concerning scopes in 
ghc-interpreted code. 

A large application called  D  consists of 70 modules; 
it re-exports all its public items via the module  DExport.
It also has demonstration examples  Dm1.hs ... Dm9.hs.
Each example imports 4-5 modules from  D
and lists precisely the imported items, like this:
    import List (sort) 
    import Foo1 (f, g, Dt(..)) 

There also exists the total  Dm.hs,  which re-exports
Dm1 ... Dm9  and adds the total `test' function.

All this compiles successfully (Dm* may skip compilation).

Then, under   ghci -package D  Dm1
   ...
   Dm1> sort $ dm1 [1%2,1%3]        -- contrived example
is impossible.
Instead, it needs
   Dm1> List.sort Prelude.$ dm1 [1 Ratio.% 2, 1 Ratio.% 3]

Also the user is expected to command   :m DExport
and to work with modules other than  Dm*.

I try to improve this as follows. Add

  (...module Prelude, module List, module Ratio, module Random...)

  import Prelude; import List; import Ratio; import Random;

to  DExport.hs
and replace the import part in  Dm1.hs ... Dm9.hs
with
   import DExport

Formally, everything works. But I have doubts.
DExport imports and exports so many things that it looks natural 
to add the whole standard Haskell library to it.
But the imports of  Dm1.hs ... Dm9.hs  are different.
I wonder what the difference is between
     M.hs:  ...import Foo (f,g)
and  N.hs:  ...import DExport 

First, suppose  M.hs  contains name clashes with  DExport  
and wants to avoid qualified names.
Second, when compiling  M.hs  the name search is smaller
(would it compile essentially faster?). 
Third, how will GHC react to    :load Dm
?
Incidentally, would it not load 9 copies of DExport ?
In short: is it all right to set   import DExport
in all the above  Dm* ?

Thanks to everyone for possible advice.

-
Serge Mechveliani
[EMAIL PROTECTED]










what is :a

2001-10-05 Thread S.D.Mechveliani


Could someone tell me, please, what is the real difference between
:load  and  :add  in  ghci-5.02 ?

Thank you in advance for explanation.

-
Serge Mechveliani
[EMAIL PROTECTED]




scope in ghci

2001-10-05 Thread S.D.Mechveliani

To my 

 [..]
     Dm1> sort $ dm1 [1%2,1%3]        -- contrived example
 is impossible.
 Instead, it needs
     Dm1> List.sort Prelude.$ dm1 [1 Ratio.% 2, 1 Ratio.% 3]

Marcin 'Qrczak' Kowalczyk [EMAIL PROTECTED] writes

 This happens when the module Dm1 is compiled. It's because a compiled
 module doesn't provide the information which names were in scope in
 its source.
 [..]


In this particular experiment  Dm1 ... Dm9  were *interpreted*, 
and the rest of the  D  project was compiled and loaded as object 
code from the library via a package.
But the case of a compiled  Dm1  is sufficient too to present this
small question.

-
Serge Mechveliani
[EMAIL PROTECTED]









5.04 ?

2001-09-29 Thread S.D.Mechveliani

Dear GHC,

I have tested  ghc-5.02  
on various examples of computer algebra, with -O too.

All the tests are all right,

except that it shows the bug in memory management (which Simon
Marlow wrote of), visible in subtly chosen cases. 

The impression is that 5.02 saves much memory and gains performance 
in situations involving garbage collection (the most practicable case).

But, due to the above bug, will  5.04  appear soon as a stable 
release?
For I need to mention the newest stable GHC release in the DoCon 
program documentation.

Regards,

-
Serge Mechveliani
[EMAIL PROTECTED]




GC options. Reply

2001-08-07 Thread S.D.Mechveliani

Hello,

here are my votes on Simon Marlow's questions.


 Issue 1: should the maximum heap size be unbounded by default?
 Currently the maximum heap size is bounded at 64M. 
 [..]
   1. remove the default limit altogether
   2. raise the default limit
   3. no change

Set the default to  30M.


 Issue 2: Should -M be renamed to -H, and -H renamed to something else?
 The argument for this change is that GHC's -M option is closer to the
 traditional meaning of -H.

Yes.

 Issue 3: (suggestion from Julian S.) Perhaps there should be two options
 to specify optimise for memory use or optimise for performance,
 which have the effect of setting the defaults for various GC options to
 appropriate values. 
 [..]

All right. 
But let the meaning of  -O  remain as it was.

-
Serge Mechveliani
[EMAIL PROTECTED]




extra_ghc_opts

2001-06-18 Thread S.D.Mechveliani

To my 

 The lack of  extra_command_line_opts  is not important,
 but I wonder, why make things worse?
 [..]
 I thought the idea of  -package  was to collect in one place all
 the project-specific settings. For example, for my project  docon,
 I need
doconOpts = -fglasgow-exts -fallow-this-and-this ...


Simon Marlow [EMAIL PROTECTED] writes

 I'm in two minds about this.  At the moment, using a -package option
 does two well defined things:

  - it brings a set of modules into scope
  - it links in the object code for those modules

 According to the Principle of Least Astonishment, we should force the
 user to add any extra options himself, because if '-package wibble'
 automatically adds a number of language-feature options then it's just
 too magical and non-obvious. 

 On the other hand, I presume you *need* these options in order to be
 able to use Docon, so adding them automatically would seem reasonable.

 Opinions, anyone?


Whether to add the language-feature options (or many other options)
automatically is up to the designer of the package.
The designer is always free to set  []  in the package.
And you were going to disable this choice (concerning packages).

-
Serge Mechveliani
[EMAIL PROTECTED]





ghci, -O

2001-05-02 Thread S.D.Mechveliani

I fear there is a certain misunderstanding about  -O  usage with
ghc-5.00 interactive.

Simon Marlow  and  User's Guide (Section 4.4)  say that  

 -O does not work with  ghci.
 For technical reasons, the bytecode compiler doesn't interact 
 well with one of the optimization passes
 [..]

What is a bytecode compiler? 
The one that prepares a lightly compiled code for the interpreter?

What I meant is something like this: 
to compile   ghc -c -O Foo.hs
in the batch mode and then run   ghci :
   ...
   Prelude> :load Foo 
   ...
   Foo> sm [1..000]

I tried this with a certain small function  Foo.sm,  and it works,
and runs better than when compiled with -Onot.
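For concreteness, a stand-in for the experiment (the real body of Foo.sm is not shown in the letter; this sm is only an assumed example of such a small function):

```haskell
-- Hypothetical stand-in for the small function Foo.sm in the session
-- above.  Compiled with   ghc -c -O Foo.hs,   a subsequent  :load Foo
-- in ghci picks up the optimised object code instead of interpreting
-- the source.
sm :: [Integer] -> Integer
sm = sum
```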

Now, as I see that  ghci  can load and run code made with  -O,
I wonder what the User's Guide means by saying 
"-O does not work with GHCi". Maybe   ghci -O
could be meaningful?

-
Serge Mechveliani
[EMAIL PROTECTED]













memory distribution in ghci

2001-04-28 Thread S.D.Mechveliani

To my request

 [..]
 can  ghci  provide a command measuring the part of heap taken
 by interpreted programs,
 by interfaces?
 (or, at least, their sum)

Simon Marlow  [EMAIL PROTECTED] writes


 Ah, I see now.  You'd like to know how much memory your program is
 using, compared to GHCi itself?  Hmm.  Unfortunately I can't see a good
 way to do this I'm afraid.  

In    ghci ... +RTS -Mxxxm -RTS
xxx  sets the volume bound for  
(1) `real' data  +  
(2) the interpreted program  + 
(3) the interfaces of pre-compiled loaded .o modules.

Please confirm whether I understand this correctly.
And many users would like to measure (1) within the  ghci  loop.

A statistic like   Prelude> List.sort [1..10]
                   [1,2,3,4,5,6,7,8,9,10]
                   (0.53 secs, 7109100 bytes)

includes the number of allocations, which cannot help to detect (1)
(can it?).
This is a user's wish for a future  ghci  version.

I do not know whether it is hard to develop.
If it is hard, personally, I would not suffer much.
Because, on getting the heap exhausted, one can run  ghci  once more
with a larger  xxx,  and perform the same computation to detect
approximately which part of  xxx  is taken by the `real' data.
But this looks awkward.

Another wish: to document the  ghci  memory management
-----------------------------------------------------
which we are discussing now.
Then GHC could simply give a reference to the manual section.


 However, you can always compile the program
 and use profiling...

Suppose User demonstrates a Haskell application  A  to the Listener. 
Naturally, most of  A  constitutes its `library'  L,  and  L  was 
compiled to .o files
(desirably, gathered into the  libL.a  archive).

ghci  is invoked and loads the _interpreted_ demonstration code  D,
much smaller than  L,  but importing items from  L.
Naturally, the Listener asks something like 
"and how much memory is needed to compute this matrix?".

In many other systems, an interactive command, say `:size',
immediately gives some interpreter memory map and the sizes taken
by `code' and `data'.
And you suggest waiting 2 hours to re-compile _everything_ with 
profiling. The Listener would not wait.


 For, with -O and much in-lining, the interfaces may be very large.

 You really don't want to use -O with GHCi (in fact, at the moment you
 can't).

This may be serious. For example,  putStr $ show (sum1 [1..n])

will need constant memory or O(n) memory depending on the  -O
possibility.
The docs on ghc-5.00 say that it joins the best of the two worlds:
interpreted code calling the fast code compiled earlier. 

Generally, I like only scientifically designed `interpreters' and
have always advised not to spend effort on .o compilation.
But as nobody agrees with me, and the o-compilation fiesta continues, 
why spend the effort in vain?

-
Serge Mechveliani
[EMAIL PROTECTED]








memory distribution

2001-04-28 Thread S.D.Mechveliani

Here is another expression of what I wrote recently on memory 
distribution.
Probably it suffices to have a garbage collection command.
For example,  Hugs  can do

  Prelude> :gc
  Garbage collection recovered 240372 cells
  ...
It is clear then how much the user has left for further computation.
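For what it is worth, GHC does let a program force a collection itself (shown here with the System.Mem API of later GHC versions; whether ghc-5.02 already exported performGC this way is an assumption):

```haskell
import System.Mem (performGC)

-- run a computation, then force a collection (roughly what Hugs' :gc
-- does), so that afterwards the heap residency reflects only live data
sumThenCollect :: Int -> IO Int
sumThenCollect n = do
  let s = sum [1 .. n]
  s `seq` performGC          -- evaluate first, then collect the garbage
  return s
```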

-
Serge Mechveliani
[EMAIL PROTECTED]




memory distribution in ghci

2001-04-27 Thread S.D.Mechveliani

It looks like there remains a question on memory distribution in 
   ghc-5.00 -interpreted :

can  ghci  provide a command measuring the part of the heap taken
by interpreted programs, 
and by interfaces?
(or, at least, their sum)

For, with -O and much in-lining, the interfaces may be very large. 
Also compare it to the _batch_ mode: commanding

   ghc -o run Foo.o
   ./run +RTS -MXXm -RTS

the user is sure that XX Mb is only for the `real' computation data.

-
Serge Mechveliani
[EMAIL PROTECTED]









:l XX libFoo.a

2001-04-26 Thread S.D.Mechveliani

The  :load  command of the  ghc-5.00  interpreter 
first searches for the needed compiled modules (*.o) and loads them
when it finds them.
But how does one make it search for them in the object code library  
xxx/libFoo.a  ?  
For it is better to keep the fixed *.o files in a library.
And `:load' would work in the above situation similarly to  
   ghc -o XX.o ...
I tried   :l XX libFoo.a,
but it does not work.

-
Serge Mechveliani
[EMAIL PROTECTED]





memory in --make

2001-04-25 Thread S.D.Mechveliani

Hello,

It occurs that   ghc-5.00 --make  
spends memory in a particular way.
Making my project via a Makefile (invoking the ghc driver many times)
succeeds within  -M29m,
while   ghc --make ... -Mxxx  Mk.hs
needs more than  50Mb,
probably because it keeps much intermediate information between 
compiling the modules.
Still, I managed to `make' it with  -M29m  by issuing the latter
command two more times, after each  insufficient-heap  break.
ghc --make   still looks faster and easier to arrange:
each time it compiles only the remaining modules.
Maybe something can be done to avoid these heap-exhausted breaks?
For, seeing that some modules remain to compile and the heap is
exhausted, ghc could save the intermediate information to disk, 
giving room for the compilation of the next module.
Also it could restart the driver itself - with an appropriate message
- ?

-
Serge Mechveliani
[EMAIL PROTECTED] 






:! ghc -c vs :c

2001-04-25 Thread S.D.Mechveliani

The ghc interactive  ghci  compiles a module by the command
 
   :! ghc -c 

Would it make sense to provide a command   :compile   (:c),

with a meaningful name and a format similar to the other commands,
like, say,  :load
?

-
Serge Mechveliani
[EMAIL PROTECTED]




-make

2001-04-24 Thread S.D.Mechveliani

Hello,

I have two questions on  ghc-5.00  --make.

The User's Guide says in Section 4.5 that it is simpler to use 
--make  than the Makefile technique. 

1. I try   ghc -make Main.hs
   
with  Main.hs  containing   main = putStr "hello\n"

(in ghc-5.00, binary, i386-unknown, Linux),  
and it reports
   ghc-5.00: unrecognised flag: -make
- ?  


2. How to simplify (eliminate?) the Makefile.
---------------------------------------------

My project has many .hs files in the directories  
   ./source/
   ./source/auxil/
   ./source/pol/factor/
   ...
`make ...' compiles them, putting the  .hi, .o  files into  ./export/,
then applies `ar' to make the  .a  library.
To do all this, the Makefile includes rules like 

  .hs.o:
          $(HC) -c $< $(HCFlags)
 
and applies the compiler $(HC) with the flags

  ...  -syslib data  -recomp ...
  -i$(E) -odir $(E)  -ohi $(E)/$(notdir $*).hi  $($*_flags)
  ...
Now, this Makefile does not work under  ghc-5.00,  because the second
compilation cannot find the  .hi  file of the first compilation:

  ..ghc -c source/auxil/Prelude_.hs -fglasgow-exts ...
   -recomp -iexport -odir export  -ohi export/Prelude_.hi  
   +RTS -G3 -F1.5 -M29m -RTS -Onot
  ...
  ..ghc -c source/parse/Iparse_.hs -fglasgow-exts ...
   -recomp -iexport -odir export  -ohi export/Iparse_.hi 
   +RTS -G3 -F1.5 -M29m -RTS -Onot

  source/parse/Iparse_.hs:20:
    failed to load interface for `Prelude_':
    Bad interface file: export/Iparse_.hi
    does not exist

Also, what can  --make  do to replace some large part of the Makefile?
For 
(a) Makefile has a rather odd language; it is not good to force the
    functional programmer to look into it,
(b) one has to think of Makefile versions for different operating 
    systems.


Thank you in advance for the help.

-
Serge Mechveliani
[EMAIL PROTECTED]











no problem with .sgml

2001-04-20 Thread S.D.Mechveliani

I wrote recently

| I want to install GHC-5.00 (from binaries, on Linux).
| But the installation guide, as most documentation, is in the  .sgml
| format. I can read  ASCII, .dvi, .ps, .tex, html  texts.
| But what to do with  .sgml,  please?


Forget this question, please.

Because I unpacked the binaries and found there the needed docs on 
ghc in the needed formats.

-
Serge Mechveliani
[EMAIL PROTECTED]






two questions on 4-08-notes

2000-07-07 Thread S.D.Mechveliani

Dear GHC,

Sorry for the ignorance, just two questions on  4-08-notes.sgml.

 Result type signatures now work.
 
 [..]

 Constant folding is now done by Rules

What do these two mean, and where are they explained?
Maybe you can give an example?
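On the second item: "Rules" refers to GHC's rewrite-rule mechanism, and "constant folding done by Rules" means the built-in arithmetic simplifications are now expressed as such rules internally. A user-level sketch of the mechanism (the classic map/map rule from the GHC documentation, not GHC's actual constant-folding rules, which are built in):

```haskell
-- A user-visible example of the RULES mechanism: the pragma tells the
-- optimiser it may rewrite two list traversals into one.  GHC's
-- constant folding (e.g. 2 + 3 ==> 5) is now implemented with rules
-- of the same kind.  Rules fire only under -O; the program's meaning
-- is unchanged either way.
{-# RULES
"map/map"  forall f g xs.  map f (map g xs) = map (f . g) xs
  #-}

doubledThenInc :: [Int] -> [Int]
doubledThenInc = map (+ 1) . map (* 2)
```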

--
Sergey Mechveliani
[EMAIL PROTECTED]




profiling example

2000-06-19 Thread S.D.Mechveliani

Dear GHC  (pre-4.07-i386-unknown-linux),

I am going to use the profiling to measure the parts of certain 
large program. To understand things, I start with simple example:

  main =
let  lgft :: Integer -> Int
 lgft n = _scc_ "length n!" (length $ show $ product [2..n])

 ls = map lgft [2..500] :: [Int]
 s  = _scc_ "sum" (sum [1..22000] :: Integer)
in
putStr $ shows (s,ls) "\n"
  -

  ghc -c -O -prof T.hs;  ghc -prof -o run T.o;   ./run +RTS -p -RTS
gives  run.prof:
---
total time  = 3.60 secs   (180 ticks @ 20 ms)
total alloc = 139,166,388 bytes  (excludes profiling overheads)

COST CENTRE  MODULE      %time %alloc

length n!    Main         99.4   99.3
GC           GC           24.4    0.0

                                           individual     inherited
COST CENTRE  MODULE      entries  %time %alloc    %time %alloc

MAIN         MAIN            0      0.0    0.0    100.0  100.0
 CAF         PrelHandle      3      0.0    0.0      0.0    0.0
 CAF         Main            3      0.0    0.0    100.0  100.0
  length n!  Main          499     99.4   99.3     99.4   99.3
  sum        Main            1      0.6    0.7      0.6    0.7
-----

What does  CAF  mean?

The last two lines of the second table show that  0.994  of the time and  
0.993  of the allocations were spent by the centre "length n!".
This looks natural.

But what does the first table mean? Why is "sum" skipped there?
Does it show that the Garbage Collection has taken  0.244  of the 
total time?
GC can be caused by many values. To find out which part of GC is 
caused by a given item  f,  we have to look at the  %alloc  part for
f  in the second table.

Could you please tell me whether I understand these things correctly?

Thank you.

--
Sergey Mechveliani
[EMAIL PROTECTED] 














pre-4.07 test

2000-06-15 Thread S.D.Mechveliani


I tested  ghc-pre-4.07  on my large algebraic program. It works.

--
Sergey Mechveliani
[EMAIL PROTECTED]




++ efficiency. Reply

2000-05-05 Thread S.D.Mechveliani


Mike Jones [EMAIL PROTECTED] writes on  ++ efficiency

 how do I approximate its efficiency compared to adding one
 element at a time?

In a "regular" situation,   xs ++ ys  
needs a number of "steps" proportional to  (length xs).
This is according to the simplest Haskell program you could suggest 
for (++) - guess, what is this program?
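For the record, the "simplest Haskell program" the riddle points at is presumably the textbook definition (named append here only to avoid clashing with the Prelude's (++)):

```haskell
-- The simplest definition of list append: the cost of  xs ++ ys  is
-- proportional to  length xs,  because only the spine of xs is
-- rebuilt; ys is shared unchanged as the tail of the result.
append :: [a] -> [a] -> [a]
append []       ys = ys
append (x : xs) ys = x : append xs ys
```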

 Does 1:2:3:4:5:6:7:8 execute faster than [1, 2, 3, 4] ++ [5, 6, 7, 8], 
 where the first case is executed recursively in a function?

Such literal expressions the compiler is likely to evaluate at 
compile time. They would both cost zero at run-time.

But if, at run-time,  ys = [5,6,7,8]  is "ready", then
   [1,2,3,4] ++ ys
costs 4 "steps",  while  1:2:3:4:5:6:7:8  costs 8 steps.
"Lazy" evaluation makes the truth of these considerations 
depend highly on the concrete situation in the program.

Also, if some designer implements Haskell on some different 
internal model for the lists (God save!), then one could make (++) 
regularly cost 1 step ...

Generally, beware of the (++) danger in some situations.   

--
Sergey Mechveliani
[EMAIL PROTECTED]







compiling large constant expressions

2000-04-18 Thread S.D.Mechveliani

I complained about the slow compilation of 40 functions like
  e1 = 
[(-3,(1,0,1),[3,0,0,0]), (-2,(0,1,0),[3,0,0,0]), (1,(0,0,3),[]),
 (1,(0,0,2),[2,0,0,0]) ...
]

I do not know now; probably, I have to withdraw the complaint ...


Simon Marlow [EMAIL PROTECTED]  writes on 17 Apr 2000 

 You compile the following program for several minutes, with  -Onot.
 But is it really a hard business to parse them 40 expressions below, 
 of which  e1  is the largest one?
 I need this as just reading of the input data for other program.


 I tried to reconstruct your example [..]
 [..]
   21M in use, 0.01 INIT (0.00 elapsed), 4.79 MUT (5.22 elapsed),
 4.96 GC (5.03 elapsed) :ghc

 Ok, not exactly fast, but not on the order of several minutes either.  And
 you can halve the total time taken to compile the module by adding -fasm-x86
 (with a recent CVS version of GHC).

 So what were you doing differently?


These  200-400 sec  were with

   ghc-4.04 -c  -optCrts-G3 -optCrts-F1.5 -optCrts-M29m


And  ghc-4.06  gives  65 sec  (on Linux-Intel-486, 166 MHz).

You see, I am still using 4.04, because 4.06 could not make the whole 
rest of the project under -O.
(Rather a "hungry" situation, though: no reliable version visible. 
ghc-4.04 treats overlapping instances wrongly. I hope for STG-Hugs, 
but it is not ready.)

The difference from your experiment may be the memory size.
I do not want to give it more than  29Mb.
Also, there might be no difference at all, if your machine is 10 times
faster.
And maybe I was mistaken about the size of the  e(i).  The largest one is
e33 =
 [(3,(2,1,2),[]), (2,(2,1,1),[2,0,0,0]), (2,(2,1,0),[2,0,0,1]),
  (2,(2,0,1),[2,0,1,0]), (2,(2,0,0),[2,0,1,1]), (9,(1,2,1),[]),
  (4,(1,2,0),[2,0,0,0]), (1,(1,1,2),[1,0,0,0]), (4,(1,1,0),[2,0,1,0]),
  (1,(1,1,0),[1,0,0,2]), (1,(1,0,2),[1,0,1,0]), (3,(0,3,0),[]),
  (1,(0,2,1),[1,0,0,0]), (1,(0,2,0),[1,0,0,1]), (3,(0,1,3),[0,0,0,0]),
  (1,(0,1,2),[1,1,0,0]), (3,(0,1,2),[0,0,0,1]), (1,(0,1,1),[1,1,0,1]),
  (1,(0,1,1),[1,0,1,0]), (2,(0,1,0),[1,0,1,1]), (3,(0,0,3),[0,0,1,0]),
  (1,(0,0,2),[1,1,1,0]), (3,(0,0,2),[0,0,1,1])
 ]

All right, let us forget about  ghc-4.04,  and maybe the cost is 
appropriate for these e1-e40.
On this occasion, the  e(i)  are appended to this letter.

Thanks to  Marc Van Dongen;  he noticed that reading from a
file may help.
Indeed,  readFile  gets the  e(i)  into one string  str.
Then, one can apply    read str  :: ...
repeatedly, each time parsing the value for  
     e(i) :: [(Int,(Int,Int,Int),[Int])]
and preserving the rest of  str.
This may turn out faster than compiling the  e(i).
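A sketch of that reading scheme (the type Row and the function name are illustrative only; the standard `reads` does the parse-one-value-and-keep-the-rest step):

```haskell
-- Peel the e(i) off one string, one value at a time, instead of
-- compiling them into the program.  `reads` skips leading whitespace,
-- parses one [Row], and returns the unconsumed remainder of the string.
type Row = (Int, (Int, Int, Int), [Int])

readRows :: Int -> String -> [[Row]]
readRows 0 _   = []
readRows n str =
  case reads str of
    [(rows, rest)] -> rows : readRows (n - 1) rest
    _              -> error "readRows: parse error"
```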

Thanks to all for the consultation.

--
Sergey Mechveliani
[EMAIL PROTECTED]




yFields = [e1, e2, e3, e4, e5, e6, e7, e8, e9, e10,
   e11,e12,e13,e14,e15,e16,e17,e18,e19,e20,
   e21,e22,e23,e24,e25,e26,e27,e28,e29,e30,
   e31,e32,e33,e34,e35,e36,e37,e38,e39,e40 
  ]
  :: [ [(Int,(Int,Int,Int),[Int])] ]
e1 = 
 [(-3,(1,0,1),[3,0,0,0]), (-2,(0,1,0),[3,0,0,0]), (1,(0,0,3),[]),
  (1,(0,0,2),[2,0,0,0])
 ]
e2 =
 [(3,(2,0,0),[3,0,0,0]), (3,(1,0,2),[]), (1,(1,0,1),[2,0,0,0]),
  (3,(0,1,1),[]), (1,(0,1,0),[2,0,0,0]), (2,(0,0,2),[1,0,0,0])
 ]
e3 = 
 [(3,(2,0,1),[]), (2,(2,0,0),[2,0,0,0]), (3,(1,1,0),[]),
  (1,(1,0,1),[1,0,0,0]), (3,(0,0,2),[0,0,0,0])
 ]
e4 = 
 [(1,(3,0,0),[]), (1,(2,0,0),[1,0,0,0]), (3,(1,0,1),[0,0,0,0]),
  (1,(0,1,0),[0,0,0,0])
 ]
e5 = 
 [(3,(2,0,1),[3,0,0,0]), (1,(2,0,0),[3,0,0,1]), (2,(1,1,0),[3,0,0,0]),
  (1,(1,0,3),[]), (1,(1,0,2),[2,0,0,0]), (2,(1,0,1),[3,1,0,0]),
  (2,(0,1,0),[3,1,0,0]), (1,(0,0,2),[2,1,0,0])
 ]
e6 = 
 [(3,(1,1,1),[3,0,0,0]), (1,(1,1,0),[3,0,0,1]), (3,(1,0,1),[3,0,1,0]),
  (2,(0,2,0),[3,0,0,0]), (1,(0,1,3),[]), (1,(0,1,2),[2,0,0,0]),
  (1,(0,1,1),[3,1,0,0]), (3,(0,1,0),[3,0,1,0]), (1,(0,0,2),[2,0,1,0])
 ]
e7 = 
 [(3,(1,0,2),[3,0,0,0]), (4,(1,0,1),[3,0,0,1]), (5,(0,1,1),[3,0,0,0]),
  (3,(0,1,0),[3,0,0,1]), (1,(0,0,4),[]), (1,(0,0,3),[2,0,0,0]), 
  (1,(0,0,2),[3,1,0,0]), (1,(0,0,2),[2,0,0,1])
 ]
e8 = 
 [(3,(3,0,0),[3,0,0,0]), (3,(2,0,2),[]), (1,(2,0,1),[2,0,0,0]),
  (3,(2,0,0),[3,1,0,0]), (1,(2,0,0),[2,0,0,1]), (3,(1,1,1),[]), 
  (1,(1,1,0),[2,0,0,0]), (2,(1,0,2),[1,0,0,0]), (1,(0,1,0),[2,1,0,0]),
  (2,(0,0,2),[1,1,0,0])
 ]
e9 = 
 [(3,(2,1,0),[3,0,0,0]), (3,(2,0,0),[3,0,1,0]), (3,(1,1,2),[]),
  (1,(1,1,1),[2,0,0,0]), (1,(1,1,0),[2,0,0,1]), (1,(1,0,1),[2,0,1,0]),
  (3,(0,2,1),[]), (1,(0,2,0),[2,0,0,0]), (2,(0,1,2),[1,0,0,0]), 
  (1,(0,1,1),[2,1,0,0]), (2,(0,1,0),[2,0,1,0]), (2,(0,0,2),[1,0,1,0])
 ]
e10 = 
 [(3,(2,0,1),[3,0,0,0]), (3,(2,0,0),[3,0,0,1]), (6,(1,1,0),[3,0,0,0]),
  (3,(1,0,3),[]), (1,(1,0,2),[2,0,0,0]), (2,(1,0,1),[2,0,0,1]),
  (6,(0,1,2),[]), (2,(0,1,1),[2,0,0,0]), (2,(0,1,0),[2,0,0,1]),
  (2,(0,0,3),[1,0,0,0]), (1,(0,0,2),[2,1,0,0]), (2,(0,0,2),[1,0,0,1])
 ]
e11 =
  [(3,(3,0,1),[]), (2,(3,0,0),[2,0,0,0]), (3,(2,1,0),[]), 
   

key for fast compilation

2000-04-17 Thread S.D.Mechveliani

Dear GHC,

You compile the following program for several minutes, with  -Onot.
But is it really a hard business to parse these 40 expressions below,
of which  e1  is the largest one?
I need this just as reading the input data for another program.

Only please, do not write the compiler in C !

What I ask for is, maybe, an easy compilation flag, easier than -Onot:
just parse minimally, do not check or build expensive things.
A flag like  -CompileEasyLike_Hugs.

If this works in  STG-Hugs
(and if STG-Hugs works reliably), then this would suffice.

--
Sergey Mechveliani
[EMAIL PROTECTED]




yFields = [e1, e2, e3, e4, e5, e6, e7, e8, e9, e10,
   e11,e12,e13,e14,e15,e16,e17,e18,e19,e20,
   e21,e22,e23,e24,e25,e26,e27,e28,e29,e30,
   e31,e32,e33,e34,e35,e36,e37,e38,e39,e40
  ]   :: [ [(Int,(Int,Int,Int),[Int])] ]

e1 =  -- the largest of all  e_i
 [(3,(2,0,2),[3,0,0,0]), (4,(2,0,1),[3,0,0,1]), (1,(2,0,0),[3,0,0,2]),
  (5,(1,1,1),[3,0,0,0]), (3,(1,1,0),[3,0,0,1]), (1,(1,0,4),[]),
  (1,(1,0,3),[2,0,0,0]), (2,(1,0,2),[3,1,0,0]), (1,(1,0,2),[2,0,0,1]),
  (3,(1,0,1),[3,1,0,1]), (5,(0,1,1),[3,1,0,0]), (3,(0,1,0),[3,1,0,1]),
  (1,(0,0,3),[2,1,0,0]), (1,(0,0,2),[3,2,0,0]), (1,(0,0,2),[2,1,0,1])
 ]
e2 = ...
...






Re: key for fast compilation

2000-04-17 Thread S.D.Mechveliani

Hi, Marc,

Yes, I needed to read them from a file. But then - to parse them.
The parsing program would be simple. But why should I write it if
GHC has a parser inside the compiler? I simply compile the thing
with  -Onot ...




measuring part of program

2000-04-13 Thread S.D.Mechveliani

Dear GHC users,

I am puzzled by the following effect in  ghc-4.04, 4.06.
In a certain large program
  f x = let  XX...
             p = ...
             YY...
        in   r p
p  is found relatively cheaply from  x,
r  is found expensively via  x, p.
Now, without having profiling, I want to know what part of the time
cost goes to evaluating  p.
For this, I replace  `r p'  with  error $ show p.

And it takes 1000 times more than computing  p  by a separate
program, without the `YY...' part.
The problem is that, with all this, it is too hard to measure the
parts of the computation; one needs to rewrite the part of the
program as a completely new program.

What do you think of this?

--
Sergey Mechveliani
[EMAIL PROTECTED]








preparing executable for other platform

2000-03-13 Thread S.D.Mechveliani

Dear GHC,

Could you please explain how one can prepare an application
executable for various platforms?
I want to prepare the executable on a  linux-386-unknown  machine
to run on another platform, say, Digital UNIX V4.0E ...

I have  ghc-4.04, 4.06  installed on  linux-i386.
Certain large application program  p201  and small example program  
Main.hs  are both written in Haskell.
p201  is compiled to the object library  libP201.a  and to 
*.hi files.
 make Main.o;  make run
compiles and links the thing using   *.hi  and  libP201.a,
producing the executable file `run'.

1. Having  Main.o, *.hi, libP201.a,  can one say
   "make `run' for such and such platform from the given object files
    of Linux"?
2. Is it possible: "convert Main.o and all other object files
   to such and such platform"?
3. Is it possible:
   "compile the whole library from .hs files, preparing .o, .hi
    files for such and such platform",
   "link the new  Main.o  ... for such and such platform"?

I thought (3) should be possible at least ...

By the way, do the .hi files depend on the platform?

--
Sergey Mechveliani
[EMAIL PROTECTED]







bug in overlaps

2000-02-05 Thread S.D.Mechveliani

Dear GHC,

I wonder what follows from the fix you undertake with respect
to the investigation by  Jeffrey R. Lewis  [EMAIL PROTECTED]
on 04 Feb 2000:

 class C a  where c :: a
 class C a => D a  where d :: a

 instance C Int where c = 17
 instance D Int where d = 13

 instance C a => C [a] where c = [c]
 instance ({- C [a], -} D a) => D [a] where d = c

 instance C [Int] where c = [37]

 main = print (d :: [Int])

 d' :: D a => [a]
 d' = c
 d' :: [Int]

After the suggested GHC fix, am I (the user) supposed to write

    instance (C [a], D a) => D [a] where d = c

in the above example?
My naive idea of the subject is as follows.
The user program for the instance in the above example has to be

    instance (D a) => D [a] where d = c

- natural and the simplest one.
And  d :: [Int]  has to evaluate to  [37].

Because the compiler can recall `C [a]' and restore the  C [Int]
instance automatically, choosing the most specific definition.

  d' :: D a => [a]
  d' = c
  d' :: [Int]

also should yield  [37].
Why does the polymorphic value   d' :: D a => [a]
occur more difficult for the compiler than the instance for  d?
Seeing  d' :: [Int],  the compiler has again to choose the most
specific definition for  C [Int]  - because  d'  is defined via the
operation of the class C.

Am I missing something?

--
As to the latest  Hugs,  I have indeed noticed that it forces me to
write the instances like the one above:
 instance (C [a], D a) => D [a] where d = c
I disliked this.
And even this does not help to compile the whole project correctly
with Hugs. It still gives the wrong values for the overlaps - the
ones I do not expect - while  ghc-4.04  yielded the correct results.
And now I do not understand why   d' :: D a => [a]
is difficult for  ghc-4.04.
And quite strangely, this did not show up in practice.

Please, advise.

--
Sergey Mechveliani
[EMAIL PROTECTED]










mkdependHS usage

2000-02-03 Thread S.D.Mechveliani

Please, who could tell me how to generate the *dependencies* for the
project shown by the Makefile below?

The source files are in the directories shown below.
The .hi, .o  files go to the directory  $(E)/.

I tried   cd directoryOf_Makefile;  ghc -M;
          mkdependHS;
          make depend
and they always produce an empty list.
Respond privately, please.
Thank you in advance.

--
Sergey Mechveliani
[EMAIL PROTECTED]




-- Makefile  

HC   = $(ghcRoot)/ghc
RANLIB   = ar -s
E= export
RM   = rm -f
language = -fglasgow-exts  -optC-fallow-overlapping-instances \
   -optC-fallow-undecidable-instances
HCFlags  = $(language)  -fvia-C  -O -O2-for-C \
   ...
a  =auxil/
d  =demotest/
f  =factor/
p  =parse/
r  =residue/
s  =source/
sy =symmfunc/

$(s)$(d)T__flags =  -Onot

# in this succession the compilation proceeds: -

dcFiles = $(a)Prelude_   $(p)Iparse_   $(p)OpTab_\
  ...
dmFiles = $(d)T_permut   $(d)T_reseuc  $(d)T_primes  \
  ...
doconFiles = $(dcFiles:%= source/%)
demoFiles  = $(dmFiles:%= source/%)
doconSrcs  = $(doconFiles:%= %.hs)
doconObjs  = $(doconFiles:%= %.o)
demoSrcs   = $(demoFiles:%= %.hs)
demoObjs   = $(demoFiles:%= %.o)

.SUFFIXES : .hi .hs .o

.o.hi:
	@:

.hs.o:
	$(HC) -c $< $(HCFlags)

objs:   $(doconObjs) $(demoObjs)

docon:  $(doconObjs)
	$(RM)  $(E)/libDocon.a
	ar -q  $(E)/libDocon.a  `ls $(E)/*.o`
	$(RANLIB)  $(E)/libDocon.a

depend:
	$(HC) -M $(doconSrcs)   # ** ???















ghc-4.06 for libc-2.0

2000-01-31 Thread S.D.Mechveliani

I have got into a stupidly stuck situation.
A stable recent GHC version is needed, and  ghc-4.06  is going to be
it. It should also be buildable from sources: I want to see that it
passes this test.
ghc-4.04  cannot compile  ghc-4.06  due to a certain bug.
Hence, it seems, I have to install the  ghc-4.06  binary first
(for Linux - Debian - libc-2.0 - i386-unknown).
But where can one get the binary for  libc-2.0?
haskell.org  shows GHC only for libc-2.1.

Our administrator says the time for installing libc-2.1 will not
come soon.

--
Sergey Mechveliani
[EMAIL PROTECTED]



downloading

2000-01-26 Thread S.D.Mechveliani

Dr. Mark E. Hall  [EMAIL PROTECTED]  writes

  via the  netscape  program.
  After 1-2 hours it gets about 50% of the file and then
  somewhat breaks,

 I had a similar problem downloading the ghc-4.04p1 source code distribution.
 [..]
 I solved the problem by using the GNU  wget program
 [..]


Yes, thank you.   wget  does the job.


--
Sergey Mechveliani
[EMAIL PROTECTED]



join interpreter and compiler

2000-01-24 Thread S.D.Mechveliani

To someone's

| So will the features of Hugs eventually be supported by all 
| platforms and integrated into a future version of Haskell or will I have
| to keep seperate versions of my code?


Simon Peyton-Jones  [EMAIL PROTECTED]  writes

 [..]  However, the GHC team and the
 Hugs team are making a conscious effort to align our implementations,
 so that at least where they support the same feature they do so 
 in the same way. 
 [..]


Please, implementors, provide GHC with an interpreter and an
interactive shell, like in Hugs.
Users avoid computer algebra programs in which they cannot enter
simple expressions into a dialogue and get the answers there.
I do not program such an interface myself because it should be rather
generic; its language has to be again Haskell. Hence, one has to
integrate the interpreter with the GHC compiler.

--
Sergey Mechveliani
[EMAIL PROTECTED]



tail-recurse `partition' ?

2000-01-17 Thread S.D.Mechveliani

To my 

| It looks like List.partition  
| is a stack eater in GHC-4.04.
| And this can be improved easily. 
| Probably, there are many other stack or heap eaters.
| What the implementors think of this?

Simon Peyton-Jones  [EMAIL PROTECTED]  writes

 As various people have said, one can turn non-tail-recursive functions
 into tail-recursive ones, using an accumulating parameter.
 
 But this isn't obviously a win.  Stack usage is decreased, but heap
 usage is increased by the same amount.  It's not clear to me that
 the former is better than the latter (GHC, at least, grows its stack
 dynamically, and Hugs will too once we've completed the GHC/Hugs
 integration).
 
 I'm not sure how partition could be made much more efficient
 without also making it stricter.

I tried

pt p = prt [] []
   where  prt xs ys []     = (reverse xs, reverse ys)
          prt xs ys (z:zs) = if  p z  then  prt (z:xs) ys zs
                             else           prt xs (z:ys) zs

main = let  n       = 123000 :: Int
            (ys,zs) = pt (> 0) [1..n]
       in
       putStr $ shows (last ys) "\n"
--
and found that it has
  (time, minHeap, minStack) = (1.2 sec, 4700k, 1k)
against                       (0.9,     4300,  1000)
of the GHC  partition.
All right; probably,  pt  does not win.


 In general, though, we have not devoted a great deal of effort
 to performance-tuning the GHC prelude.  If anyone finds a definition
 of a prelude function that out-performs the one that comes with
 GHC, please send it and we'll plug it in.


Well,  minimum(By), maximum(By)  
should run in constant heap+stack,
if programmed without  foldl.  For example,

  minBy, maxBy :: Comparison a -> [a] -> a
   -- example:  compare        -> [2,1,3,1] -> 1
   --           (flip compare) -> [2,1,3,1] -> 3
  minBy _  [] = error "minBy comparison []\n"
  minBy cp xs = m xs
    where  m [x]      = x
           m (x:y:xs) = case  cp x y  of  GT -> m (y:xs)
                                          _  -> m (x:xs)

Here the type of the first argument differs from  List.minimumBy,
but this can be improved easily.

Minor question:  minimumBy  could take the first argument
    p  asTypeOf  min
or  p  asTypeOf  compare  -- as in  sortBy.
One could question which is more usable.


--
Sergey Mechveliani
[EMAIL PROTECTED]








tail-recurse `partition' ?

2000-01-16 Thread S.D.Mechveliani

It looks like  List.partition
is a stack eater in GHC-4.04,
and this can be improved easily.
Probably, there are many other stack or heap eaters.
What do the implementors think of this?

--
Sergey Mechveliani
[EMAIL PROTECTED]



where to import `trace' from

1999-12-05 Thread S.D.Mechveliani

Please, what is the GHC-standard way to import the `trace' function
from the  ghc-4.04  library?

Someone pointed this out long ago, but I cannot recall it now.
.../user_guide/librarie*   probably has to specify this.
But why does it not?

Thank you in advance for the help.

--
Sergey Mechveliani
[EMAIL PROTECTED]



redirecting .hi

1999-07-29 Thread S.D.Mechveliani

I asked earlier how to replace in Makefile many lines of kind
  source/Categs_flags  = -ohi $(E)/Categs.hi
  source/auxil/Set__flags  = -ohi $(E)/Set_.hi
  ...
(processed by the   $($*_flags)  compilation key)
with something short.

Thanks to  Marc Van Dongen, Simon Marlow, Sigbjorn Finne 
for the advices.
Sigbjorn Finne  writes

 If you're using GNU make I'd simply add

 SRC_HC_OPTS += -ohi $(E)/$*.hi

I found that it does almost what is needed. 
Only it has to be corrected to   -ohi $(E)/$(notdir $*).hi

Thanks.

--
Sergey Mechveliani
[EMAIL PROTECTED]

















understanding optimizations

1999-07-16 Thread S.D.Mechveliani

I am trying to understand optimizations in  ghc.

Consider the lexicographic list comparison.
To my mind, the most natural program is

  lexListComp :: (a -> b -> Ordering) -> [a] -> [b] -> Ordering
  lexListComp cp = lcp
    where
    lcp [] [] = EQ
    lcp [] _  = LT
    lcp _  [] = GT
    lcp (x:xs) (y:ys) = case  cp x y  of  EQ -> lcp xs ys
                                          v  -> v
  {-# SPECIALIZE lexListComp :: (Z->Z->Ordering)->[Z]->[Z]->Ordering
   #-}  -- popular case

  type Z = Int

I expected this to perform for [Z] as fast as the special function
for [Z]:
  cpZ :: [Z] -> [Z] -> Ordering
  cpZ [] [] = EQ
  cpZ [] _  = LT
  cpZ _  [] = GT
  cpZ (x:xs) (y:ys) = case  compare x y  of  EQ -> cpZ xs ys
                                             v  -> v

Why does  cp = lexListComp compare  (for [Z])  turn out essentially
slower than  cpZ?
Substituting this equation for  cp,  the compiler probably has to
obtain a copy of the function  cpZ  - ?

Below, the comparison is given by   maxBy c $ sortBy c $ sublists xss
with  c = cpZ,
      c = cp = lexListComp compare  :: [Z] -> [Z] -> Ordering

import List (sortBy)

Z, lexListComp, cpZ   as above

cp = lexListComp compare :: [Z] -> [Z] -> Ordering


main = let  xss = sublists [1,9,2,8,3,7,4,6,5,1,9,2,8,3,7] ::[[Z]]
            ys  = maxBy cpZ $ sortBy cpZ xss
            ys' = maxBy cp  $ sortBy cp  xss
       in
       putStr $ shows ys' "\n"   -- switch between  ys  and  ys'

maxBy :: (a -> a -> Ordering) -> [a] -> a
maxBy cp xs = m xs
    where  m [x]      = x
           m (x:y:xs) = case  cp x y  of  LT -> m (y:xs)
                                          _  -> m (x:xs)
sublists :: [a] -> [[a]]
sublists []     = [[]]
sublists (x:xs) = case  sublists xs  of  ls -> (map (x:) ls) ++ ls



ghc-4.04~June-20  -c -O Main.hs; ghc -o run Main.o;   time ./run

gives the result  [9,9,8,7]  and the timing (on our machine)

 cpZ            3.51 sec
 cp - spec.     4.13
 cp - no spec.  4.13

(comparing on faster machines might need adding a couple of elements 
to  xss).

The measured ratio here is more than  4.13/3.51,  because the program
consists not only of the list comparisons.

Further, I tried  {-# INLINE lexListComp #-},  {-# INLINE cp #-}
- out of curiosity.
And this slows down  cp  to  6 sec  - ?


--
Sergey Mechveliani
[EMAIL PROTECTED]








GHC state

1999-06-14 Thread S.D.Mechveliani

Dear  GHC,

could you explain the state of the  ghc  project?

ghc-4.03-Win32  was announced, with an unexpected (for me) set of
features. Some people talk of  ghc-4.03  for Linux ...

Are the *sources* available for  ghc-4.03,  to compile for Unix,
Linux-i386?
Maybe  "GHC/Hugs"  would do what I need?
I mean the "real" features, like
  * fast small Integer,
  * relaxed restrictions on overlaps.


--
Sergey Mechveliani
[EMAIL PROTECTED]



space allocation in ghc-4.02

1999-06-07 Thread S.D.Mechveliani

f n = case  [1..n]
      of
      xs -> let  x = head xs  in  (last xs) + x  :: Integer

when compiled with   ghc-4.02 -c -Onot
has a strange effect.
Increasing  n  2 times,  from  9000  to  18000,
requires 30 times more space:
     ./run +RTS -M33k   -K4
to   ./run +RTS -M1300k -K4

One could expect a 2-3 times increase at worst.
However, from 18000 to 36000, the space grows only 2.1 times.

Is this all due to the default size of the space block to allocate?
Is the runtime system not too wise in its space managing?


--
Sergey Mechveliani
[EMAIL PROTECTED]



`sum' in ghc-4.02

1999-05-27 Thread S.D.Mechveliani

To my

 ghc-4.02  treats  sum  strangely.
 In the below program,  sum xs  needs small, constant size stack,
 sm xs  needs stack proportional to  length xs.
 And  sm  is the implementation of  sum  shown in  src/.../PrelList.lhs
 I apply  ghc -c -O.
 [..]


Simon Peyton-Jones [EMAIL PROTECTED]  replies


 foldl looks like this:
 
 foldl k z [] = z
 foldl k z (x:xs) = foldl k (z `k` x) xs
 
 To work in constant space, foldl needs to evaluate its second argument
 before the call, rather than building a thunk that is later forced
 (which in turn builds stack).  But in general, foldl is not strict in z.
 [..]
 Alternatively, you can write
 
 foldl_strict k z [] = z

 [..]



Thank you. I see.

But a puzzle remains:  ghc-4.02-i386-unknown-linux

performs the Prelude `sum' as if it was NOT compiled from the sources
of the distribution.
The source -  src/.../PrelList.lhs  - says
  sum = foldl (+) 0
  {-# SPECIALISE sum :: [Int] -> Int #-}

Testing `sum' in  ghc-4.02-i386-unknown-linux
shows that it needs a *small stack* for  sum [1..n]  :: Int.

But when the user defines  `sm'  exactly as `sum' above, then
sm [1..n] :: Int   needs a large stack!

I am sorry if I am confusing something, but could somebody repeat
this experiment and explain it?
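For reference, the strict variant Simon mentions can be completed into
a runnable sketch (foldl_strict is the name from his message; smStrict
is a hypothetical example): forcing the accumulator with seq before
each recursive call keeps a sum-like fold in constant stack.

```haskell
-- Strict left fold: the accumulator is forced before recursing,
-- so no long chain of (+) thunks is built up.
foldl_strict :: (a -> b -> a) -> a -> [b] -> a
foldl_strict _ z []     = z
foldl_strict k z (x:xs) = let z' = k z x
                          in  z' `seq` foldl_strict k z' xs

smStrict :: [Int] -> Int
smStrict = foldl_strict (+) 0
```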


--
Sergey Mechveliani
[EMAIL PROTECTED]



Int/Integer cost

1999-04-30 Thread S.D.Mechveliani

What is the implementors' recent position concerning the Int vs
Integer cost?
Hugs-98-March99  and  ghc-4.02 (-O -O2-for-C)  show a ratio of about 4.

So, my program introduces  type PPInt,  to be switched Int/Integer,
to represent the integers in the polynomial exponents.
This causes a certain complication:  toPPInt, fromPPInt  also have to
be switched and, well, sometimes set in the program.

If the ratio changes to, say, 2, then I would rather replace PPInt
with Integer.

And the comparison has to be considered only for an Int version that
checks overflow. Probably, this reduces the ratio - but by how much?
Thank you in advance for the comments.

--
Sergey Mechveliani
[EMAIL PROTECTED]



inlining control

1999-03-24 Thread S.D.Mechveliani

Dear ghc,

I tried to find out how the *inlining level* (eagerness) option
changes the compilation. Is there such a parameter?
First, applying   ghc-4.02 -c -O
to
-
module T where

f1 :: Int -> Int
f1 x  =  (f2 x) + 2

f2 x = (f3 x) + 2
f3 x = (f4 x) + 2
f4 x = (f5 x) + 2
f5 x = (f6 x) + 2
f6 x = (f7 x) + 2
f7 x = (f8 x) + 2
f8 x = (f9 x) + 2
f9 x = if  x < 0  then -x  else x
-


I could not find out which of the  fi  are inlined.  T.hi  is too
complex. How can one see easily what really has been inlined?

My idea was that, as I generally like to compose simple function
applications, it is probably worth increasing the inlining
threshold, and seeing ...

--
Sergey Mechveliani
[EMAIL PROTECTED]




Re: Q: Efficiency of Array Implementation

1999-02-19 Thread S.D.Mechveliani

Concerning the 'rapid access' you found in the docs: it is hard to
believe this access is as fast as in a C array - I mean changing X[i]
in array X. Because when this change is done fast, it is a side
effect, and it breaks the functional semantics.
I always thought that this is the main cost the functional world pays
for the nice functional principles: one has to think of how to avoid
indices in the critical parts of the algorithm.
Well, I may be mistaken ...
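The point can be illustrated with Haskell's standard immutable arrays
(module Array in Haskell 98, Data.Array today): the update operator
(//) builds a whole new array and leaves the old one intact, which is
exactly why a cheap in-place change of X[i] cannot be offered purely.

```haskell
import Data.Array

a0, a1 :: Array Int Int
a0 = listArray (0, 4) [10, 20, 30, 40, 50]
a1 = a0 // [(2, 99)]   -- a fresh array; a0 is left untouched
```

Here  a0 ! 2  is still 30, while  a1 ! 2  is 99.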



`minimum' implementation

1999-02-15 Thread S.D.Mechveliani

Dear  ghc,

it was said long ago (Simon P. Jones agreed) that the prelude
functions
  foldl, minimum, maximum, sum, product

need a more careful implementation - on the subject of un-needed
laziness.
Still,  ghc-4.02  computes  minimum [1..61000]  to a stack overflow.

Please, could you change this to
    minimum []       = error ...
    minimum [x]      = x
    minimum (x:y:xs) = if  x < y  then  minimum (x:xs)
                       else             minimum (y:xs)

Similarly - with  minimumBy, maximum(By), sum, product.

For now, in my program, I am forced to re-define these functions and
hide the GHC ones. This looks like an annoying complication of the
simplest things.

It was also said that it is worth changing the  foldl  implementation
to
   foldl k z xs =  f z xs   where  f z []     = z
                                   f z (x:xs) = f (k z x) xs

What do you think of all this?
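A tail-recursive variant of the same idea (a sketch, not the exact
wording proposed above): carry the running minimum as an accumulator
and force it with seq at each step, so the traversal runs in constant
stack and heap.

```haskell
minimum' :: Ord a => [a] -> a
minimum' []     = error "minimum': empty list"
minimum' (x:xs) = go x xs
  where
    go m []     = m                       -- accumulator holds the answer
    go m (y:ys) = let m' = if y < m then y else m
                  in  m' `seq` go m' ys   -- force before recursing
```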


--
Sergey Mechveliani
[EMAIL PROTECTED]



`sort' implementation

1999-02-15 Thread S.D.Mechveliani

In addition to my previous note on the implementation of  minimum
and the like, could you also revise the  List.sort  implementation?

For example, merge sorting is 100 times cheaper on
                                                sort [1..6000]
than what  ghc-4.02  shows.
And again, I was forced to hide the GHC `sort'.
Merge sorting costs  O( n*log(n) ),  so it is good in any case.
Why not implement it?
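For concreteness, the kind of merge sort meant here can be sketched as
a standard bottom-up version (an illustration only, not the author's or
GHC's actual code); its cost stays O(n*log n) even on already-sorted
input such as sort [1..6000].

```haskell
mergeSort :: Ord a => [a] -> [a]
mergeSort = mergeAll . map (:[])          -- start from singleton runs
  where
    mergeAll []   = []
    mergeAll [xs] = xs
    mergeAll xss  = mergeAll (mergePairs xss)

    -- merge adjacent runs pairwise, halving the number of runs
    mergePairs (xs:ys:rest) = merge xs ys : mergePairs rest
    mergePairs xss          = xss

    merge xs []         = xs
    merge [] ys         = ys
    merge (x:xs) (y:ys)
      | x <= y    = x : merge xs (y:ys)   -- <= keeps the sort stable
      | otherwise = y : merge (x:xs) ys
```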

--
Sergey Mechveliani
[EMAIL PROTECTED]



sortings

1999-02-15 Thread S.D.Mechveliani

To my suggestion of the  merge sort  for the  sort  implementation

Simon Marlow [EMAIL PROTECTED]  writes


 GHC's sort implementation is a well-optimised quicksort plundered originally
 from the hbc library I believe.  In your example above you mention testing
 it on an already sorted list, which is the worst case example for this
 sorting algorithm, since it picks the first element of the list as the
 paritition.  Try some tests on random lists, I think you'll find GHC's sort
 does rather better.

 However, we're open to suggestions (backed up by numbers, of course) for a
 better algorithm.


I really think merge sort is "better".
Because in practice, the worst case weighs more; it is more than
merely one of the random cases.
But I admit, the thing is arguable and depends somewhat on taste.
What I wanted to say now is that it surprises me that most people
think of algorithm cost preferably as the average case, not the
worst case.


--
Sergey Mechveliani
[EMAIL PROTECTED]



specialization with =

1999-02-15 Thread S.D.Mechveliani

Simon Marlow writes

 Thanks for the report, but the bug is in the docs - it should say
"SPECIALISE pragmas with a '= g' part are not implemented yet".  This kind
 of specialisation used to work in 0.xx, but since the rewrite we haven't got
 around to implementing it in the new specialiser.


SPECIALIZE with `='  is very desirable.
Otherwise, how can one denote with the same name a function together
with its particularly efficient specialization?
If I had known of this `=' possibility, I would have raised the noise
two years earlier.
Second question: is it hard to implement specializations like

  f :: C a => a -> ...              -- exported function
  f x = ...

  f_spec :: (C a, D a) => a -> ...  -- hidden function

  {-# SPECIALIZE f :: (C a, D a) => a -> ... = f_spec #-}

And the next step might be making this pragma standard ...
This would bring a bit of happiness to the Haskell world.

--
Sergey Mechveliani
[EMAIL PROTECTED]





-O, INLINE. Thanks.

1998-12-17 Thread S.D.Mechveliani

I thank  Simon Marlow [EMAIL PROTECTED]  for the help on my
request
 I need an advice on  -O, INLINE, NOINLINE  usage for ghc-4.01 ...

Also, about my   `And it looks like  ghc-4.01 -c -O  ignores NOINLINE'

- that turned out to be my mistake. I am sorry. I compiled with
ghc -c -O

  f = 'b'

prefixed with  {-# NOINLINE f #-}  and not prefixed,
and had not noticed the difference in the  .hi  files in the lines

1 f :: PrelBase.Char {-## __A 0 __C __u ...


So,  ghc  is all right.
 
--
Sergey Mechveliani
[EMAIL PROTECTED]



Eval

1998-12-15 Thread S.D.Mechveliani

Ch. A. Herrmann  [EMAIL PROTECTED]  writes

I have a problem with the Eval class. Maybe its just that a compiler
flag or import declaration is missing. I compiled with
   ghc-4.01/bin/ghc -c -fvia-C -syslib exts

and got the error message:
   No instance for `Eval Int'
 [...]


It was said that Haskell-98 and later will be free of  Eval  entirely
(kind of: everything is in Eval).
Do I remember right?
Probably, some trace of Eval is still visible in ghc-4.01 ...

--
Sergey Mechveliani
[EMAIL PROTECTED]



+RTS -K in ghc-4

1998-10-20 Thread S.D.Mechveliani

Running programs compiled with  ghc-4  (several examples), I have
noticed that it claims to use thousands of times less space than
ghc-3.02.  Typically,

ghc-3.02 with  +RTS -H100k -K9k   runs as fast as
ghc-4    with  +RTS -K4

According to  4-00-notes.vsgml,  the latter  -K4  means that the task
is performed within  4 bytes  of  heap+stack space.

This might happen, maybe, for  sum [1..1000],  but for real
examples it is somehow suspicious.
Could anybody tell what  -K4  means in   time ./run +RTS -K4
under ghc-4?
The whole test was to see how a small space slows down the
performance. And it appears it does not matter at all :-)


--
Sergey Mechveliani
[EMAIL PROTECTED]




instance .. => L a a

1998-10-19 Thread S.D.Mechveliani

ghc-4  does not allow what  ghc-3.02  did:


  class ... => MulSemigroup a  where  mul :: a -> a -> a
 ...
  class (AddGroup a, MulSemigroup a) => Ring a  where ...

  class (Ring r, AddGroup a) => LeftModule r a  where
                                           cMul :: r -> a -> a

  instance Ring a => LeftModule a a  where  cMul = mul


ghc-4  says
  Illegal instance declaration for `LeftModule a a'
   (There must be at least one non-type-variable in the instance head)
 
If Haskell-2 is going to be this way, then it is somehow against the
needs of algebra.

ghc  developers, please, advise.


--
Sergey Mechveliani
[EMAIL PROTECTED]



overlapping instances

1998-06-04 Thread S.D.Mechveliani

Dear ghc developers,

Could you please tell me:

1. What are the  ghc  plans concerning overlapping instances?
   For it looks like  ghc-3.02  does not support them.

2. What is the recent ghc developers' vision of overlapping instances
   - when compared with  Choices 3a, 3b  from the Jones & Jones
   paper "Type classes: an exploration of the design space"?


--
Sergey Mechveliani
[EMAIL PROTECTED]




overlapping instances & dup. defs

1998-05-20 Thread S.D.Mechveliani

It is said the next  ghc  is going to support the included
overlapping instances.
At this point, I have a question to the Haskell experts and
implementors concerning duplicate definitions.
For example, program the determinant of a matrix:
-
class Num a => Field a  where  divide :: a -> a -> a

data Matrix a = Mt [[a]]

det :: Num a => Matrix a -> a
det (Mt rs)  =  expandByRow rs
    where
    expandByRow rs =  ...something with +,-,*

det :: Field a => Matrix a -> a
det   (Mt rs)  =  gaussMethod rs
      where
      gaussMethod rs = ...something with +,-,*,divide
--
For the case of   det m,   m :: Field a => Matrix a,
the compiler has to select the second definition of  det:  under
this stronger condition the better algorithm, gaussMethod, is
possible.
Otherwise, the compiler has to choose the first definition - at
least it yields the correct result.
At this, Haskell reports an error: duplicate definitions for  det.

Providing two functions, say,  det, det_f,  for the user to choose
is an ugly way out: the mathematical map is the same, so why call it
differently? In other words, the values in the language are not
polymorphic enough.
So, try to help this by using  ghc with  -fallow-overlapping-instances
and by making `det' a class method:
--
class WithDet a  where  det :: Matrix a -> a

instance Num a   => WithDet a  where  det (Mt rs) = expandByRow ...
instance Field a => WithDet a  where  ...gaussMethod ...
--

will    ghc -c -fglasgow-exts -fallow-overlapping-instances
allow this?
If it will, then could the compiler resolve the duplicate  det
definitions in the first program in a "similar" way - and thus
provide the corresponding language extension?


--
Sergey Mechveliani
[EMAIL PROTECTED]
http://www.botik.ru/~mechvel




asking for application access help

1998-05-14 Thread S.D.Mechveliani

Dear all,

There exists a computer algebra program DoCon written in Haskell,
distributed freely with the sources, at
   ftp.botik.ru:/pub/local/Mechveliani/docon/

and some users say the main archive (about 600 Kbytes) is very hard
to transfer.

Is it possible that somebody kind and powerful could provide
mirroring of this directory?  I mean, for free, no money.
Or maybe simply keep a copy of the file(s) in some place
that we can reference publicly.

I address to those who might be interested in Haskell application 
distribution.

??


Sergey Mechveliani
[EMAIL PROTECTED]
http://www.botik.ru/~mechvel









ghc-3.00 language. 2-nd formulation

1998-01-28 Thread S.D.Mechveliani

S.P.Jones wrote recently about  ghc-3.00  supporting classes with 
multiple parameters and about  -fglasgow-ext...  flag to apply.
And ghc-3.00 announcement says it implements Haskell-1.4.

Since (i am sorry) my question on this subject was badly formulated, 
i try to say it in other words:

My impression was that Haskell-1.4 does NOT allow multiple class
parameters (if it does, then ghc-2.10 does not support full
Haskell-1.4).

So, how should one name the language that  ghc-3.00  compiles?
Probably, it is  Haskell-1.4-with-Glasgow-extensions?



Sergey Mechveliani
[EMAIL PROTECTED]



cyclic module dependencies

1998-01-27 Thread S.D.Mechveliani

Hello, all,

Earlier, I asked how to compile modules
with cyclic dependencies when the .hi files are removed.

Thanks to  Sigbjorn Finne [EMAIL PROTECTED]  for the guidence.

Also  Johannes Waldmann  wrote

 sorry, this is a no-answer as well, but: who needs them?
 in the few cases where i had a cyclic dependency, and i wanted to 
 avoid it (because hugs wouldn't handle it), i re-thought the 
 placement of identifiers and the programm looked generally better
 afterwards. Are there typical cases where cyclic module dependencies
 are the "natural" solution?


By cyclic module dependencies I mean only the situation when, for
example, module M imports item X from module N, and N imports
some Y from M.
Such mutually recursive modules are as natural as mutually recursive
functions in a source program: f calls g, g calls f.
And I like  ghc  because it supports such module systems.
You say  hugs  does not allow them. This means, probably, that  hugs
does not support full Haskell-1.4.

Example.
In my experience, there are too many items related to Polynomial.
It is not good to put them all in one module: too much compilation
expense, especially when you are changing the script now and then
- a small part is changed, and everything is re-compiled. Better to
split it into several modules:
  
  module Pol where
  import PolAux ( polGCD ... )
  ...
  data Polynomial a i = Pol [Monomial a i] ...

  -- several data declarations

  instance ... => Num (Polynomial a i)      where ...
  instance ... => WithGCD (Polynomial a i)  where
     gcD f g = polGCD ...
     ...
  -- many other instance declarations for Polynomial

  module PolAux where
  import Pol ( Polynomial(..) ... )
  ...
  polGCD f g =  ...  -- large implementation

`Pol' contains many instances whose implementation parts are small;
they refer to functions from the module PolAux, like polGCD.
PolAux (usually there are many such modules) contains the real
implementation script.
But PolAux has to say  `import Pol ...'  because  polGCD  cannot
compute the gcd of polynomials without knowing what a polynomial is;
and the latter is defined by  `data Polynomial ...'.  Moreover, polGCD
uses expressions like  f+g :: Polynomial a i,  which again need the
instance  Num (Polynomial ...)  imported from  Pol.hs.  And so on.

Is a natural solution possible here which does not require recursive
modules?
In my practice, the modules are much more recursive (the question is
whether this is really necessary). So  ghc  was led into temptation
by the modules

module DInteger ( module DInt, rootInt )
where
import DInt
import Int_  ( factor )
...
rootInt = ...
-- many instances for Integer

module DInt
where
import DInteger
import Int_ ( factor )
...
-- many instances for Int

module Int_ ( factor ... )
where
import DInteger
...
factor ... = ...
-- some real implementation code for the operations with Int, Integer


Here DInteger imports all the items from DInt and re-exports them.
And DInt imports all the DInteger items! Why so? Because

(a) DInteger and DInt are too large to be put together.
(b) DInt and Int_ are hidden: the user has to know only the one
    module DInteger and import everything from it.
(c) DInt uses one or two instances for Integer declared in DInteger.
    But the language does not allow importing only those. To import
    some instances for a type D we have to import D itself
    - and, automatically, all its instances. Besides, in our case,
    D = Integer  (!).  So we have to set  `import DInteger'.
Right?

At all this,  ghc  gets a little surprised and issues many warnings
about `weirdness in declaration...'.  But it produces quite workable
code. Good.

Only, why do the ghc implementors keep saying that mutually recursive
modules are a headache?
The bold, naive question arises:
if ghc compiles the mutually recursive functions f and g without
warning and without asking for a compilation order, why can it not
do the same "similarly" for the modules M and N?
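For the record, later GHC releases answer this with hand-written boot
interfaces that break the cycle. A minimal sketch, assuming the
`.hs-boot' / {-# SOURCE #-} import mechanism and reusing the module
names from the Polynomial example above (not a complete program, just
the file layout):

```haskell
-- Pol.hs-boot: a small, hand-written "boot" interface for Pol
module Pol where
data Polynomial a i

-- PolAux.hs: imports only the boot interface, breaking the cycle
module PolAux where
import {-# SOURCE #-} Pol ( Polynomial )
polGCD :: Polynomial a i -> Polynomial a i -> Polynomial a i
polGCD = error "large implementation goes here"

-- Pol.hs: may now import PolAux normally
module Pol where
import PolAux ( polGCD )
data Polynomial a i = Pol [(a, i)]
```

The boot file plays the role of the old .hi-boot interface: it gives
the compiler just enough of Pol to compile PolAux first.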



Sergey Mechveliani  [EMAIL PROTECTED]



macros in ghc-0.29

1997-03-11 Thread S.D.Mechveliani

Hello,


Please, who knows in what way inlining, or maybe macros,
are available in  ghc-0.29 ?

Say, suppose the text

accum t xs f =
  let
    (t',h) = case  g f  of  []    -> (t,h)
                            forms -> foldl p (t,zero) forms
  in
    (t',h:hs)

is better substituted in before compiling, because it
* repeats several times,
* contains values that are undefined inside it,
* and would need a complex type signature if written as a function.


Does the C preprocessor fit?
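It may: ghc can run cpp over the source (the -cpp option, or the CPP
pragma in later versions), and a function-like macro is expanded
textually, so it can mention names that are only in scope at the use
site. SQUARE below is a toy stand-in for the fragment above, invented
here:

```haskell
{-# LANGUAGE CPP #-}

-- A function-like CPP macro, expanded before type checking, so no
-- type signature for it is ever needed.
#define SQUARE(x) ((x) * (x))

main :: IO ()
main = print (SQUARE(7))
```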


Thank you.

Sergey Mechveliani[EMAIL PROTECTED]







specializing FiniteMap

1997-02-07 Thread S.D.Mechveliani

Hello,


I had to use a binary table in a certain situation,
and  FiniteMap  from the  ghc  library turned out to be good.

Only there was the special case of the key being a small list of Int-s:

 key =  [i(1)..i(l)],   i, l <- [0..30]   :: [Int]

Probably, a much more efficient function should exist for this case.

I tried to store these keys so that all key tails with the same head
go into one sublist (level); here we need no binary trees and no
re-balancing. But profiling does not show any sensible gain.
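The level-per-head layout just described is a trie. A minimal sketch
over Data.Map (the modern descendant of FiniteMap); the names Trie,
insertT and lookupT are invented here:

```haskell
import qualified Data.Map.Strict as Map

-- A trie keyed on [Int]: all key tails sharing a head live under one
-- branch, so there is no re-balancing of a single big ordered tree.
data Trie a = Trie (Maybe a) (Map.Map Int (Trie a))

emptyT :: Trie a
emptyT = Trie Nothing Map.empty

insertT :: [Int] -> a -> Trie a -> Trie a
insertT []     v (Trie _  m) = Trie (Just v) m
insertT (k:ks) v (Trie mv m) =
  Trie mv (Map.alter (Just . insertT ks v . maybe emptyT id) k m)

lookupT :: [Int] -> Trie a -> Maybe a
lookupT []     (Trie mv _) = mv
lookupT (k:ks) (Trie _  m) = Map.lookup k m >>= lookupT ks
```

Whether this beats a balanced tree on keys this short is exactly the
empirical question the profiling above left open.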

Also, the script in the  ghc-0.29  library says something about
*specializing* to various cases.

Does this mean that the user can re-compile this module, setting the
necessary specialization instructions for some preprocessor (?), to
obtain specially efficient FiniteMap code for the  [Int]  case?
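In later GHCs the per-type copies are requested with a SPECIALIZE
pragma rather than a separate preprocessor. A hedged sketch; findKey
is a stand-in invented here, not FiniteMap's real interface:

```haskell
-- An overloaded lookup; the pragma asks the compiler to also emit a
-- monomorphic copy for [Int] keys, avoiding dictionary passing there.
findKey :: Ord k => k -> [(k, v)] -> Maybe v
findKey _ []               = Nothing
findKey k ((k', v) : rest)
  | k == k'   = Just v
  | otherwise = findKey k rest
{-# SPECIALIZE findKey :: [Int] -> [([Int], String)] -> Maybe String #-}

main :: IO ()
main = print (findKey [1, 2 :: Int] [([1,2], "x"), ([3], "y")])
```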

And maybe someone would like to get the script (2-3 pages, along with
the comments) and give advice?
Address:  [EMAIL PROTECTED]


Thank you.

Sergey Mechveliani.