Makefile tricks
Dear ghc-5.02.2-i386-unknown-linux (RedHat-6.1 Linux machine, libc-2.1.2),

Can you please explain the following effect?

Main.hs:
  main = putStr "hello\n"

Makefile:
  obj:
  	ghc -c -O Main.hs +RTS -M10m -RTS

Running under tcsh:
  $ make obj
  ghc -c -O Main.hs +RTS -M10m -RTS
  ghc: input file doesn't exist: +RTS
  ghc: unrecognised option: -M10m
  ghc: unrecognised option: -RTS

Now, type the same command at the command line:
  ghc -c -O Main.hs +RTS -M10m -RTS
and it works. This is so for the lstcsh shell too.

  $ make -v
  GNU Make version 3.77, by Richard Stallman and Roland McGrath.

And it works correctly under Debian Linux, GNU Make version 3.79.1, by Richard Stallman and Roland McGrath, built for i586-pc-linux-gnu.

I do not know whether it is essential, but this RedHat machine is a cluster: for any job started, the system itself chooses an appropriate machine to run the job on. But I was told that tcsh switches off this clustering. Also I am not sure that this refers to GHC: I tried setting
  obj:
  	ls -la
and it works. Maybe it is a bug in the new ssh version?

- Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-users mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
sorry for false alarm
I am sorry; it was a false alarm. I wrote recently about Makefile faults when applying ghc. There is also an old ghc on this machine in the system area. I set an alias to the new ghc (in my user directory), but `make' does not know about aliases and applies the old ghc, which does not understand some options, like +RTS. So this is not a problem. - Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-users mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
warning: `RLIM_INFINITY' redefined
Dear ghc-5.02.2-i386-unknown-linux,

When compiling with -O, you sometimes report things like

  In file included from /usr/include/sys/resource.h:25,
      from /share/nfs/users/internat/mechvel/ghc/5.02.2/inst/lib/ghc-5.02.2/include/HsStd.h:65,
      from /tmp/ghc29631.hc:2:
  /usr/include/bits/resource.h:109: warning: `RLIM_INFINITY' redefined
  /usr/include/asm/resource.h:26: warning: this is the location of the previous definition

Is this harmless?
- Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-users mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
installing 5.02.2
Dear GHC,

I am going to install ghc-5.02.2 on a certain Linux machine to which I have ssh access and permission to install ghc-5.02.2 in my user directory (not as a system user). The situation is as follows. It has i386-unknown (Pentium Pro II ...), plenty of memory and disk. I am not sure, but it looks like it runs RedHat-6.2, glibc-2.1.2. It has ghc-4.04 installed in the system area. But it failed to link Main.o of the program

  main = putStr "hello\n"

Probably Happy is not installed. Also I applied a certain ghc --make M on my Debian Linux machine with ghc-5.02.2, scp-ed the a.out.zip file to the target machine and unzipped it. And it cannot run there: some library not found, something of this sort.

Could the GHC installation experts guide me, please? What is easier in this situation, to install from source or from binary? How does one install Happy for the first time? How does one find out in a regular way the info about the Linux system and libc version?

Thank you for the advice,
- Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-users mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Linux version, libraries
To my

  How does one find out in a regular way the info about the Linux system and libc version?

Simon Marlow writes:

  Try 'uname -a' and 'ls -l /lib/libc*'.

  $ uname -a
  Linux ...machine... 2.2.12 #6 SMP mar oct 5 16:44:59 CEST 1999 i686 unknown

  $ ls -l /lib/libc*
  -rwxr-xr-x 1 root root 4118299 Sep 20 1999 /lib/libc-2.1.2.so*
  lrwxrwxrwx 1 root root      13 Nov  5 1999 /lib/libc.so.6 -> libc-2.1.2.so*
  lrwxrwxrwx 1 ...             1999 /lib/libcom_err.so.2 -> libcom_err.so.2.0*
  -rwxr-xr-x 1 root root    7889 Oct 26 1999 /lib/libcom_err.so.2.0*
  -rwxr-xr-x 1 root root   64595 Sep 20 1999 /lib/libcrypt-2.1.2.so*
  lrwxrwxrwx 1 ..              1999 /lib/libcrypt.so.1 -> libcrypt-2.1.2.so*

But how does one find out whether it is RedHat (and which version)?

On the GMP library:

  $ find /usr/lib -name '*gmp*'
  /usr/lib/libgmp.so.2.0.2
  /usr/lib/libgmp.so.2
  find: /usr/lib/netscape/de/nethelp: Permission ...
  /usr/lib/perl5/5.00503/i386-linux/linux/igmp.ph
  /usr/lib/perl5/5.00503/i386-linux/netinet/igmp.ph

?
- Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-users mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
scope in ghci. Reply
Martin Norb [EMAIL PROTECTED] writes:

  I just noticed this behavior with ghci: after loading a module with :l Module you can't use the Prelude functions unqualified, you just get things like
    interactive:1: Variable not in scope: `show'
  I am pretty sure that this worked some days ago, and I was using the same version then. I feel totally confused. Has this happened to anyone else?

Even if the Prelude is visible, the library items still are not (List.sort, Ratio ...). Maybe this is natural, I do not know. But I believe it is possible to control the scope. For example, create a module Universe.hs which re-exports the Prelude, the libraries and the user programs. Then :l Universe should bring all these items into scope.
- Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-users mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
notes on ghc-5.02.2
Dear GHC,

Here are some comments on ghc-5.02.2. I have tested it in the following way.

1. Installed the binary = ghc-5.02.2-i386-linux-unknown.
2. Compiled the ghc-5.02.2 source with this binary for linux-i386, under an empty mk/build.mk, then removed the binary installation.
3. Compiled a large Haskell project, DoCon, under -O (making a package, -fglasgow-exts, overlappings ...).
4. Ran the DoCon test.

It looks all right.

* It spends the memory in a better way when making a project and loading interpreted modules.
* The bug with value visibility in ghci in (let f = ...; let g = h f ...) looks fixed.

Some questions --

* The up-arrow key does not work in ghci, though it worked in the binary installation. Probably, some library was not found. How to fix this? I have stored config.log of `./configure ...' and makeall.log of `make all'. `make boot' (before `make all') did nothing and issued a message recommending to run something in some subdirectory, and I skipped this command altogether. Am I mistaken?
* The directory /share always neighboured /bin and /lib in the installation directory. Why is it absent now? Maybe the up-arrow bug is related to this?
* Where does one find the ghc manual in .ps form, like the set.ps provided by the binary installation?
* The interpreted code looks 2-3 times larger than the compiled one. The DoCon library has size 11 Mb, the .o files of the compiled test take less than 2 Mb; the test loading
    ghci -package docon T_ +RTS -Mxx -RTS
  needs xx = 12 for the compiled full test T_, and 31 for the interpreted T_. If the DoCon library is outside the heap reserved by -M (is it?), then we have that the interpreted code is 2.6 times larger. Is this due to large interaces kept in the heap?

Please, advise,
- Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-users mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
notes on ghc-5.02.2. Reply
To my notes on ghc-5.02.2, Simon Marlow writes:

  * The up-arrow key does not work in ghci, though it worked in the binary installation. Probably, some library was not found. How to fix this?

  Do you have the readline-devel RPM installed? This is needed to compile GHC with readline support.

I run ghc here on two machines. And it appears now that both are under Debian Linux. Some people here say that this RPM does not go under Debian. Maybe GHC can take care of Debian? Would it help now, after `make all' and `make install', to set LD_LIBRARY_PATH or such, or to make symbolic links?

  * The interpreted code looks 2-3 times larger than the compiled one. The DoCon library has size 11 Mb, the .o files of the compiled test take less than 2 Mb; the test loading ghci -package docon T_ +RTS -Mxx -RTS needs xx = 12 for the compiled full test T_, and 31 for the interpreted T_. If the DoCon library is outside the heap reserved by -M (is it?), then we have that the interpreted code is 2.6 times larger. Is this due to large interaces kept in the heap?

  Compiled code isn't loaded into the heap, so it isn't included in the size you give to -M. What are interaces?

A typo: module interfaces. In any case, I may be confusing things, not understanding these implementation details; I only observe that the interpreted code looks 2-3 times larger. Maybe it is natural, maybe not, I do not know.

Another question: am I on glasgow-haskell-users.. ? I am responding to Simon's reply to my letter to glasgow-haskell-users.. sent a couple of hours ago, and I have not received that letter of mine so far.
- Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-users mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
ignore, please
this is mail test, ignore, please ___ Glasgow-haskell-users mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
reading from file
Who would tell me, please, what is the simplest way to read a string from a file? Namely, what does one put in place of `...' in the program

  main = putStr (...)

to obtain, instead of `...', a string contained in the file foo.txt?

Thank you in advance for the help. - Serge Mechveliani [EMAIL PROTECTED] ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
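One standard answer to the question above is readFile. A minimal sketch; the writeFile line is only setup so the example runs standalone (the real program would just read an existing foo.txt):

```haskell
-- The answer: readFile.  main first creates a sample foo.txt so the
-- example is self-contained, then reads it back and prints it.
main :: IO ()
main = do writeFile "foo.txt" "hello from foo.txt\n"  -- setup only
          s <- readFile "foo.txt"                     -- the actual answer
          putStr s
```

readFile reads lazily, so the contents are fetched as putStr demands them; getContents is the analogous function for standard input.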
duplicate loading?
Dear ghc-5.02.2 (linux-i386-unknown, binary snapshot for libc.2.1),

Unlike ghc-5.00, you cannot load my project:

  --- making the package, the project, preparing ...export/HSdocon.o libHSdocon.a ...
  ghci -package docon T_
  ...
  Loading package concurrent ... linking ... done.
  Loading package posix ... linking ... done.
  Loading package util ... linking ... done.
  Loading package data ... linking ... done.
  Loading package docon ...
  GHCi runtime linker: warning: looks like you're trying to load the same object file twice:
     /home/mechvel/docon/2.03/docon/source/export/HSdocon.o
  GHCi will continue, but a duplicate-symbol error may shortly follow.
  GHCi runtime linker: fatal error: I found a duplicate definition for symbol
     UPolzu_pFromVec_closure
  whilst processing object file
     /home/mechvel/docon/2.03/docon/source/export/HSdocon.o
  This could be caused by:
     * Loading two different object files which export the same symbol
     * Specifying the same object file twice on the GHCi command line
     * An incorrect `package.conf' entry, causing some object to be loaded twice.
  GHCi cannot safely continue in this situation. Exiting now. Sorry.
  ---

Does this mean that the project has an error which is detected by 5.02.2 and was not detected by 5.00? If you need the source, tell me the address to send the archive to personally.
- Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-bugs mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs
differentiation. Reply
Zhe Fu [EMAIL PROTECTED] writes:

  Are there any built-in functions in Haskell to implement the differential operation and the partial differential operation? Or can anyone give me some advice about how to implement them with Haskell? Thanks.

There are no differential operators in the Haskell-98 standard library, nor even polynomial arithmetic. Very probably there exist application programs treating differential operators. Try to search on http://www.haskell.org

If you are going to implement differential operators, try first to implement, for example, polynomial multiplication. After this, differentiation will not look hard. But it depends on what one wants to differentiate; one has to formulate first what kind of expressions are supposed to be differentiated.
- Serge Mechveliani [EMAIL PROTECTED] ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
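As a hedged illustration of that suggestion (my own sketch, not from any library): represent dense polynomials as coefficient lists, implement multiplication first, and the formal derivative then falls out in two lines.

```haskell
-- Dense polynomials as coefficient lists [a0, a1, a2, ...]
-- representing a0 + a1*x + a2*x^2 + ...
type Poly = [Rational]

-- Multiplication: distribute the head coefficient over bs, shift the
-- rest by one degree (prepending 0), and add the two results.
pmul :: Poly -> Poly -> Poly
pmul []     _  = []
pmul (a:as) bs = addP (map (a *) bs) (0 : pmul as bs)
  where
  addP xs     []     = xs
  addP []     ys     = ys
  addP (x:xs) (y:ys) = (x + y) : addP xs ys

-- Formal derivative: d/dx (a_n * x^n) = n * a_n * x^(n-1).
pdiff :: Poly -> Poly
pdiff []     = []
pdiff (_:as) = zipWith (*) (map fromInteger [1 ..]) as
```

For instance, pmul [1,1] [1,1] gives [1,2,1], i.e. (1+x)^2 = 1 + 2x + x^2, and pdiff [5,3,2] gives [3,4], i.e. d/dx (5 + 3x + 2x^2) = 3 + 4x.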
x^y. Reply
Toralf Wittner [EMAIL PROTECTED] writes:

  [..] data PowerNum = INT Integer | DBL Double deriving (Eq, Show, Read) [..]
  Now it basically works. However, wouldn't it have been easier to write something like this:
    powPos :: Integer -> Integer -> Integer
    [..]
    powNeg :: Integer -> Integer -> Double
    [..]
    | y < 0 = 1 / fromInteger x * powNeg x (y + 1)
  Initially I wanted something as terse as the Python version; now I either have to write two functions or I need to define a type. Is there not an easier way to come as close as possible to the Python version? Thanks anyway - learnt a lot!

For this particular task, the most natural solution is, probably,

  pow :: Fractional a => a -> Integer -> a
  pow a n = if n >= 0 then a^n else (1/a)^(- n)

(^) is standard; pow only adds the facility of a negative n. Then you need each time to convert the argument to an appropriate Fractional type:

  pow (fromInteger 2 :: Ratio.Rational) 2     -- 4 % 1
  pow (fromInteger 2 :: Ratio.Rational) (-2)  -- 1 % 4
  pow (2 :: Double) (-2)                      -- 0.25
  pow (2 :: Integer) (-2)                     -- ... No instance for (Fractional Integer)

If you replace the standard (and not lucky) Fractional with some class Foo (with multiplication mul and division div), make Integer an instance of Foo (where div may fail for some data), and program pow :: Foo a => a -> Integer -> a via mul, div, then it would work like this:

  pow (2 :: Integer) 2      -- 4
  pow (2 :: Integer) (-2)   -- Error: cannot invert 2 :: Integer
  pow (2 :: Rational) (-2)  -- 1%4

Another way is to try to struggle with overlapping instances by defining something like this:

  class Pow a b where pow :: a -> Integer -> b
  instance Num a => Pow a a where pow = (^)
  -- Fractional ?
  instance Num a => Pow Integer a where pow = ?

If this succeeds, there will also be no need for new type constructors.
- Serge Mechveliani [EMAIL PROTECTED] ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
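The pow suggested in the reply can be loaded as-is; a self-contained version with two of the example calls (Rational here is the Prelude type, so no Ratio import is needed):

```haskell
-- (^) only accepts a non-negative exponent, so for negative n we
-- invert the base first, exactly as in the reply above.
pow :: Fractional a => a -> Integer -> a
pow a n = if n >= 0 then a ^ n else (1 / a) ^ negate n

main :: IO ()
main = do print (pow (2 :: Rational) (-2))  -- prints 1 % 4
          print (pow (2 :: Double)   (-2))  -- prints 0.25
```

Exact Rational arithmetic gives 1 % 4, while the same call at Double gives the inexact 0.25; the Integer case fails at compile time with "No instance for (Fractional Integer)", as the letter says.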
x^y. Reply
Toralf Wittner [EMAIL PROTECTED] writes:

  [..] power x y [..]
    | y > 0 = x * power x (y-1)
    | y < 0 = 1 / fromInteger x * power x (y+1)
  One recognizes that the function returns either an integer value if y >= 0 or a float value if y < 0. Therefore I can't write a signature like pow :: Integer -> Integer -> Integer, nor can I do pow :: Integer -> Integer -> Double. [..] How then would I write this function in Haskell (concerning types)?

The type Rational fits the case n < 0 too, and it includes Integer. But if you still need Integer | Double, you can, for example, introduce a new type for the disjoint union of the above two, and then compute like this:

  pow (Intg 2) 2     -- Intg 4
  pow (Intg 2) (-2)  -- D 0.25
  pow (D 2.0) (-2)   -- D 0.25

This is achieved by

  data PowerDom = Intg Integer | D Double  deriving (Eq, Show)

  pow :: PowerDom -> Integer -> PowerDom
  pow x n = p x n
    where
    p (Intg m) n = if n >= 0 then Intg $ powerInteger m n
                   else           D $ powerDouble (fromInteger m :: Double) n
    p (D d)    n = D $ powerDouble d n

  powerInteger m n = m^n :: Integer

  powerDouble :: Double -> Integer -> Double
  powerDouble d n = ...  -- usual way for float

- something like this.
- Serge Mechveliani [EMAIL PROTECTED] ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
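A hedged completion of the sketch above: the elided powerDouble can simply be the standard (^^), which already handles negative exponents for Fractional types, so the whole thing fits in a few lines.

```haskell
-- Disjoint union of the two result types, as in the reply.
data PowerDom = Intg Integer | D Double  deriving (Eq, Show)

pow :: PowerDom -> Integer -> PowerDom
pow (Intg m) n
  | n >= 0    = Intg (m ^ n)            -- stays integral
  | otherwise = D (fromInteger m ^^ n)  -- negative exponent: fall into Double
pow (D d) n   = D (d ^^ n)              -- (^^) accepts negative exponents
```

With this, pow (Intg 2) 2 gives Intg 4, while pow (Intg 2) (-2) and pow (D 2.0) (-2) both give D 0.25, matching the three examples in the letter.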
FiniteMap for standard
Long ago I suggested including FiniteMap in the Haskell-98 standard library. Is it still too late to do this, or maybe the situation has changed? - Serge Mechveliani [EMAIL PROTECTED] ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
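For context on what is being proposed: FiniteMap was GHC's balanced-tree dictionary library, and essentially the same interface later became Data.Map in the standard libraries. A small sketch of the core operations, using the modern module name (the original FiniteMap names were, as far as I recall, listToFM, addToFM, lookupFM):

```haskell
import qualified Data.Map as Map

-- Build a map from a list, extend it, and query it.
phone :: Map.Map String Int
phone = Map.insert "carol" 3 (Map.fromList [("alice", 1), ("bob", 2)])

main :: IO ()
main = print (Map.lookup "carol" phone)  -- prints Just 3
```

Lookup returns Maybe, so a missing key gives Nothing rather than an error; this total interface is a large part of why such a type belongs in a standard library.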
gcd definition
Simon Peyton-Jones [EMAIL PROTECTED] writes:

  [..] If someone could write a sentence or two to explain why gcd 0 0 = 0 (ideally, brief ones I can put in the report by way of explanation), I think that might help those of us who have not followed the details of the discussion.

Here it is.

  gcd n m :: Integer = if n == 0 && m == 0 then 0
                       else  greatest integer that divides both n and m

It is set so according to the classic definition (by Pythagoras, Euclid ...):

  GCD x y = such g that divides both x and y and is a multiple of any g' that divides both x and y.

In particular, GCD 0 0 :: Integer = 0 and can be nothing else. Because 0 divides 0 and divides nothing else, and everything divides 0 (z*0 = 0).

Other comments --

* People say D. Knuth writes gcd 0 0 = 0 in his book. And he is a known mathematically reliable person.
* Example: gcd 12 18 :: Integer = 6 or -6, because 6 divides 12 and 18, and any other such number divides 6.
* The Haskell report says greatest integer for the domain = Integer, and thus chooses the sign `+' for the result. This choice is not a mistake and helps to write shorter programs.
* All the above agrees with the modern generic theory of ideals (started in the XIX century by Kummer, Gauss, Lagrange) for any commutative domain R having the properties (a /= 0, b /= 0 ==> a*b /= 0) and existence of unique factorization into primes. According to it,

    gcd x y = the least by inclusion ideal of the kind (g) that includes the ideal (x, y),

  where Ideal (a,b..c) =def= {x*a + y*b +..+ z*c | x,y..z from R} -- a subset of R. `includes' (as sets) is a partial order on ideals, and it is true that (1) a divides b <==> (a) includes (b), (2) (a) includes (a). Specializing this definition to Integer, we obtain the source formula.

- Serge Mechveliani [EMAIL PROTECTED] ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
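The specification above can be checked against Euclid's algorithm directly; a sketch (gcd' to avoid clashing with the Prelude's gcd), normalizing the result to be non-negative as the Report does:

```haskell
-- Euclid's algorithm.  The base case makes gcd' 0 0 = 0 fall out
-- with no special case, since gcd' n 0 = |n| for every n.
gcd' :: Integer -> Integer -> Integer
gcd' n 0 = abs n
gcd' n m = gcd' m (n `rem` m)
```

So gcd' 12 18 = 6 and gcd' 0 0 = 0, exactly as the ideal-theoretic definition demands.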
gcd 0 0 = 0
People write on gcd 0 0:

Alan Bawden [EMAIL PROTECTED]:

  If you take the point of view that gcd is actually an operation on ideals, then gcd(0, 0) is 0. I.e. define gcd(x, y) to be the smallest z >= 0 such that {m*x + n*y | m, n in Z} = {n*z | n in Z}. This is probably the most natural and general definition of gcd, and is, in fact, what many mathematicians would expect. Also, it is nice to be able to know that gcd(x, 0) = x for -all- x.

Frank Seaton Taylor [EMAIL PROTECTED] writes:

T Let me chime in in favor of gcd(0,0) = 0 rather than undefined.
T
T On Tuesday, December 11, 2001, at 01:07, Alan Bawden wrote:
T   Also, it is nice to be able to know that gcd(x, 0) = x for -all- x.
T
T Alan is in good company:
T 1. Steele in Common Lisp: The Language says the same thing.
T 2. Knuth in The Art of Computer Programming defines gcd(0,0)=0.

On classic algebra. All right, let us try to investigate this. In such cases one should consult mathematics, not programming. So, let us ignore Common Lisp. And Knuth is all right, I think.

Further, the definition

  gcd(x, y) to be the smallest z >= 0 such that {m*x + n*y | m, n in Z} = {n*z | n in Z}

is not natural. In particular, how does it generalize to gcd X Y for polynomials in X, Y with rational coefficients? It is 1. And how would one obtain 1 = f*X + g*Y?

According to classic algebra (Kummer, I guess), gcd x y in a (such and such) domain R is the least by inclusion principal ideal (g) that includes the ideal (x, y). Here (a1..an) = the set of all combinations y1*a1 + .. + yn*an, yi from R. According to this definition, gcd (X) (Y) = (1) in polynomials, all right, and gcd 0 0 :: Integer is (0), because (0) is the least by inclusion principal ideal containing (0). And this also contradicts the part greatest integer that divides ... in the recent Haskell specification.
Probably, the best specification would be

  gcd n m :: Integer = if n == 0 && m == 0 then 0
                       else  greatest integer that divides both n and m

- Serge Mechveliani [EMAIL PROTECTED] ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
gcd 0 0
People write about the Report definition of gcd x y as of the greatest integer that divides x and y, and mention gcd 0 0 = 0. But 2 also divides 0, because 2*0 = 0. Does the Report specify that gcd 0 0 is not defined? By the way: probably, it is better to specify this. - Serge Mechveliani [EMAIL PROTECTED] ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
strong typing?
People write:

Fergus Henderson:
H [...]
H The whole idea of letting you omit method definitions for methods with
H no default and having calls to such methods be run-time errors is IMHO
H exceedingly odd in a supposedly strongly typed language, and IMHO ought
H to be reconsidered in the next major revision of Haskell.

[EMAIL PROTECTED]:
F I agree too, but being able to omit method definitions is
F sometimes useful -- would it be possible to make calls to
F those methods a /static/ error? I suspect this would be hard
F to do.

I am not a specialist and may mistake and confuse things, but I wonder whether the notion of a strongly typed language is so scientifically important. The same goes for the `compile-time' / `run-time' separation. There is no scientific reason why all computations with types and type resolution should precede all computations with non-types. Very often the types need to behave like ordinary data. Would it be reasonable to avoid, as far as possible, the restriction of strong typing in the language specification?

The essence of compiling is to spend once an effort to reorganize some piece of data P into P' in order to reuse P' many times and win performance. For a program PP introducing P1 ... P100, why should all compilations P1 -> P1' ... P100 -> P100' precede all usages of P1' ... P100'?
- Serge Mechveliani [EMAIL PROTECTED] ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
scope in ghci
Dear GHC,

Consider the following question concerning scopes in ghc-interpreted code. A large application called D consists of 70 modules; it re-exports all its public items via a module DExport. It also has demonstration examples Dm1.hs ... Dm9.hs. Each example imports 4-5 modules from D and lists precisely the imported items, like this:

  import List (sort)
  import Foo1 (f, g, Dt(..))

There also exists the total Dm.hs which re-exports Dm1 ... Dm9 and adds the total `test' function. All this compiles successfully (Dm* may skip compilation). Then,

  ghci -package D Dm1
  ...
  Dm1> sort $ dm1 [1%2, 1%3]    -- contrived example

is impossible. Instead, it needs

  Dm1> List.sort Prelude.$ dm1 [1 Ratio.% 2, 1 Ratio.% 3]

Also the user is expected to command :m DExport and work with modules other than Dm*.

I try to improve this as follows. Add

  (... module Prelude, module List, module Ratio, module Random ...)
  import Prelude; import List; import Ratio; import Random;

to DExport.hs and replace the import part in Dm1.hs ... Dm9.hs with

  import DExport

Formally, everything works. But I have doubts. DExport imports and exports so many things that it looks natural to add the whole standard Haskell library to it. But the imports of Dm1.hs ... Dm9.hs are different. I wonder what the difference is between

  M.hs: ... import Foo (f,g)      and      N.hs: ... import DExport

First, suppose M.hs contains name clashes with DExport and wants to avoid qualified names. Second, when compiling M.hs the name search is smaller (would it compile essentially faster?). Third, how will GHC react to :load Dm? By occasion, would it not load 9 copies of DExport?

Shortly: is it all right to set import DExport in all the above Dm*?
Thanks to everyone for possible advice.
- Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-users mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
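The DExport arrangement described above, sketched concretely (only the standard re-exports are shown; the real DExport would list the 70 project modules as well; List, Ratio, Random are the Haskell-98 module names used in the letter, called Data.List, Data.Ratio, System.Random in modern GHC):

```haskell
-- DExport.hs: one module that re-exports the Prelude and the standard
-- library modules the demos use, so that each Dm*.hs only needs
-- `import DExport' to have sort, (%), ($) etc. in unqualified scope.
module DExport
  ( module Prelude
  , module List
  , module Ratio
  , module Random
  ) where

import Prelude
import List
import Ratio
import Random
```

On the duplication worry: a module is loaded only once per program or ghci session, however many importers it has, so :load Dm brings in a single copy of DExport, not nine.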
what is :a
Who would tell me, please, what is the real difference between :load and :add in ghci-5.02? Thank you in advance for the explanation. - Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-users mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
scope in ghci
To my

  [..] Dm1> sort $ dm1 [1%2, 1%3]    -- contrived example
  is impossible. Instead, it needs
  Dm1> List.sort Prelude.$ dm1 [1 Ratio.% 2, 1 Ratio.% 3]

Marcin 'Qrczak' Kowalczyk [EMAIL PROTECTED] writes:

  This happens when the module Dm1 is compiled. It's because a compiled module doesn't provide the information which names were in scope in its source. [..]

In this particular experiment Dm1 ... Dm9 were *interpreted*, and the rest of the D project was compiled and loaded as object code from a library via a package. But the case of a compiled Dm1 is sufficient too to present this small question.
- Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-users mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
small bug report for 5.02
Dear ghc-5.02-linux-i386-unknown,

Here is quite a simple bug report.

  main = putStr $ shows ((head xs) + (last xs)) "\n"
         where xs = [1 .. 99000]

  ghc -c Main.hs
  ghc -o run Main.o
  ./run +RTS -M100k -RTS
  99001
  ./run +RTS -M300k -RTS
  Heap exhausted;
  Current maximum heap size is 299008 bytes;

Maybe my earlier complex bug report will become unnecessary after this small one is fixed.
- Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-bugs mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs
(no subject)
An addition to my last letter: +RTS -c100 -RTS does not change the behaviour of the program in the report. - Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-bugs mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs
5.04 ?
Dear GHC,

I have tested ghc-5.02 on various examples of computer algebra, with -O too. All tests are all right, except that it shows a bug in memory management (the one Simon Marlow wrote about), visible in subtly chosen cases. The impression is that 5.02 saves much memory and gains performance in situations involving garbage collection (the most practical case). But due to the above bug: will 5.04 appear soon as a stable release? For I need to mention some newest stable GHC release in the DoCon program documentation.

Regards, - Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-users mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
ghc -O .. `Segmentation fault'
Dear ghc-5.02-linux-i386-unknown,

You cannot compile a certain large project with -O. `Segmentation fault' appears after compiling half of it, no matter how much +RTS -M.. -RTS is set. The project uses multiparameter, overlapping and undecidable instances, and strict fields. And Simon Marlow already has another bug archive on `Segmentation fault'. I wonder whether these bugs are independent and whether the latter archive is needed before the first bug is fixed.

Please, advise, - Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-bugs mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs
-O segmentation fault
I wrote recently about a Segmentation fault bug in -O compiling of the project. And it appears that it runs into it only when the compiler is given appropriate memory options:

  make ... space=M45m project
  ...
then it says `memory exhausted'. Repeat:
  make ... space=M45m project
It compiles another group of modules ... `memory exhausted'.
  make ... space=M60m project
  ...
  Segmentation fault.

Greater -M numbers (set from the very start) lead to a successful `make'.
- Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-bugs mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs
strange effects with package
To my two `bug' reports:

1:
  You cannot load my project of 2 small modules, using a package:
    --- making the package, the project, preparing ...export/HSdocon.o libHSdocon.a ...
    ghci -package docon B
    ...
    Loading package docon ... linking ... done.
    Compiling B ( B.hs, interpreted )
    ghc-5.02: panic! (the `impossible' happened, GHC version 5.02):
      ByteCodeLink.lookupCE A.a{-r41-}

2:
  ghci -package pk Foo loads the interpreted code of Foo.hs if Foo.hs resides in the current directory and the *.o modules are gathered in a library in another directory referred to by pk.

Simon Marlow [EMAIL PROTECTED] responds:

  1: [..] your library files don't contain any objects! [..] ar -qc [..] $(wildcard $(e)/\*.o)) [..] $(e)/*.o

  2: Are you saying that the package 'pk' also contains a module called 'Foo'? If that's the case, then it is really an error for there to also be a Foo.hs on the path, and it is likely to lead to trouble, as GHC will get confused about which one you mean. [..]

A small archive is enclosed here that shows both remaining strange effects: uudecode, unzip -- several files; read readme.txt to install.

1. The Makefile changes to

     ...
     obj:
     	ghc -odir $(e) -hidir $(e) -package-name docon --make A

     docon: obj
     	rm -f $(e)/libHSdocon.a $(e)/HSdocon.o
     	ar -qc $(e)/libHSdocon.a $(wildcard $(e)/*.o)
     	$(RANLIB) $(e)/libHSdocon.a
     	ld -r -x --whole-archive $(e)/libHSdocon.a -o $(e)/HSdocon.o

   And it works this way: make init; make obj; make docon. `obj' looks unnecessary here. But if we bring the command ghc -odir ... --make A inside the docon target, then `make docon' does not notice the appearing file $(e)/A.o and expands $(wildcard $(e)/*.o) to emptiness. I do not understand why.

2. All right, let us, so far, `make' the thing with

     make init
     make obj
     make docon

   Then we have

     $root/A.hs  Makefile  readme.txt
          /d/B.hs
          /export/A.o  A.hi  HSdocon.o  libHSdocon.a

   Module A exports a = True, module B imports A and exports b = a; A.o is in the package, B is not.

     cd $root
     ghci -package docon
     ...
     Loading package docon ... linking ... done.
     Prelude> A.a
     True                                    -- hence, A is visible
     Prelude> :m A
     `A' is not currently loaded             -- BUG ?
     Prelude> :l A
     Source file changed or recompilation check turned off
     Compiling A ( A.hs, interpreted )       -- BUG ?
     ...
     A>

   And in $root/d it works:

     cd $root/d
     ghci -package docon
     ...
     Prelude> :l A
     module `A' is a package module
     Prelude> :m A
     A> a
     True
     A> :l B
     ... ( B.hs, interpreted ) ...
     B> b                                    -- B contains b = A.a
     True
     B>

- Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-bugs mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs
adding archive
Sorry, here I add the archive forgotten in my previous letter.

  [uuencoded attachment: b.zip, size 1772 -- binary contents omitted]

___ Glasgow-haskell-bugs mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs
-M works strangely
Dear ghc-5.02-linux-i386-unknown,

I have doubts about the work of your -M option. I command (on a 32M RAM machine):

  ghci +RTS -M16m -RTS
  ...
  Prelude> sum [1..333000]
  GHC's heap exhausted;
  while trying to allocate 0 bytes in a 19996672-byte heap;
  use the `-Hsize' option to increase the total heap size.

The top command measures the memory on another linux terminal. And it shows 92% usage of memory at some moment, then the disk starts to work hard, and only then does `heap exhausted' appear. Setting -M8m prevents such expansion.

By the way, something was announced about changes of meaning for -H, -M ... But I have not noticed the change. Was it for the default values?
- Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-bugs mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs
segmentation fault in ghc-5.02
Dear ghc-5.02-linux-i386-unknown, Probably you manage the memory a bit wrongly. Consider the following situation.

  ghci -package docon T_ +RTS -MXXm -RTS

loads a large package for DoCon-2.04 from the .o, .a libraries, and then several interpreted test modules headed by T_.hs; all this takes about 50Mb. Then the T_ test log computes and prints everything correctly, until the very end. Then, if XX is about 50-60, it reports

  Segmentation fault

or sometimes

  ghc-5.02: fatal error: evacuate: strange closure type 48856

Increasing XX to 80 avoids this. If you want an archive, tell me some personal address to send it to for debugging. - Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-bugs mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs
small bug archive enclosed
Dear ghc-5.02-linux-i386-unknown, You cannot load my project of 2 small modules using a package:

  ghci -package docon B
  ...
  Loading package data ... linking ... done.
  Loading package docon ... linking ... done.
  Compiling B                ( B.hs, interpreted )
  ghc-5.02: panic! (the `impossible' happened, GHC version 5.02):
          ByteCodeLink.lookupCE A.a{-r41-}

The project is enclosed here. Apply uudecode and unzip -- it contains readme.txt, Makefile, A.hs, d/B.hs -- and see readme.txt. - Serge Mechveliani [EMAIL PROTECTED]

[uuencoded archive b.zip (1600 bytes) omitted]

___ Glasgow-haskell-bugs mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs
extra_ghc_opts package field in 5.02
Dear ghc-5.02, Does the extra_ghc_opts package field work in 5.02? So far, I cannot make it work. - Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-bugs mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs
Joy language. Reply
S. Alexander Jacobson [EMAIL PROTECTED] writes

  I just found out about a functional programming language called Joy
  (see http://www.latrobe.edu.au/philosophy/phimvt/joy.html). Joy
  differs from Haskell in that it has no variables. Instead, all
  functions are postfix, taking a stack as their argument and returning
  a stack as a result. [..] tree-rewriting rules go away), faster
  development, easier optimization (search and replace lists of
  functions) and even simple meta-programming. Here is a quick example
  program to give a flavor for how it works.

      [1 2 3 4] [dup *] map  ==  [1 2 3 4] [square] map

  is the same as Haskell's

      map (\x -> x*x) [1,2,3,4]  ==  map square [1,2,3,4]

  [..]

They say that the FP language and the internal language of Miranda (S, K combinators) also avoid variables. What is the difference from Joy? I thought it may be a good strategy for a compiler to get free of variables, but it is hard for a programmer to write programs avoiding variables. Is Joy programming a joy? - Serge Mechveliani [EMAIL PROTECTED] ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
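For comparison, the same variable-free style can be imitated inside Haskell itself; the `(<*>)` below is the function Applicative of present-day base libraries, used here purely as my illustration, not anything from Joy:

```haskell
-- Joy:  [1 2 3 4] [dup *] map
-- For functions,  f <*> g = \x -> f x (g x),  so  (*) <*> id
-- duplicates its argument much like Joy's `dup' feeding `*'.
squareAll :: [Integer] -> [Integer]
squareAll = map ((*) <*> id)
```

So squareAll [1,2,3,4] gives [1,4,9,16], with no lambda-bound variable in sight.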
series. Reply
Luis Pablo Michelena [EMAIL PROTECTED] writes on the subject `series'

  where to find a haskell program that calculates the number e, that is
  the list of infinite digits? [..] what i am looking for is something
  like the Eratosthenes sieve, that prints every prime number until it
  runs out of memory ...

In what way may the Eratosthenes sieve for prime numbers relate to finding approximations of the number e? As to finding the infinite list of digits for

  e = lim (1 + 1/n)^n,   n -> infinity,

here a program is suggested for finding (eAppr n) :: Rational such that |e - (eAppr n)| < 1/2^(n-3) :

  import Ratio

  eAppr :: Integer -> Rational
  eAppr n = appr 0 (1%1) (0%1)
    where
    --  member = 1/(k!),   res = sum [1/i! | i <- [0..k]]
    appr k member res =
      if k == n then res
      else appr (k+1) (member / fromInteger (k+1)) (res + member)

Several decimal digits can be obtained, then, like this:

  fromRational (eAppr 6)  :: Double    -- 2.717
  fromRational (eAppr 20) :: Double    -- 2.718281828459045

Explanation. According to Calculus, we have e = lim (eAppr n), n -> infinity, where eAppr n = sum [1/k! | k <- [0..n]], and |e - (eAppr n)| < 3/n! <= 1/2^(n-3). Therefore eAppr (n+3) differs from e by less than 1/2^n. I believe this fact will help us to find the first k true digits of e for any given k. If people do not give another good solution and if you ask me to complete this task, then I'll try to do it. Regards, - Serge Mechveliani [EMAIL PROTECTED] ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
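On completing that task: under the stated error bound, digits can be peeled off the rational approximation directly. A sketch only - the helper name eDigits and the margin n = d+4 are my choices, not from the post, and since the partial sums underestimate e, a long run of 0s right after the cut could in principle still disturb the last digit:

```haskell
import Data.Ratio

-- eAppr as in the post (Data.Ratio is the modern name of module Ratio)
eAppr :: Integer -> Rational
eAppr n = appr 0 (1 % 1) (0 % 1)
  where
    appr k member res =
      if k == n then res
      else appr (k + 1) (member / fromInteger (k + 1)) (res + member)

-- first d decimal digits of e after the point: eAppr (d+4) keeps the
-- approximation error well below 10^(-d)
eDigits :: Int -> [Integer]
eDigits d = take d (go (eAppr (fromIntegral d + 4) - 2))
  where
    go r = let r' = 10 * r
               q  = floor r' :: Integer   -- next digit
           in  q : go (r' - fromInteger q)
```

For example, eDigits 10 gives [7,1,8,2,8,1,8,2,8,4], matching e = 2.7182818284...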
GC options. Reply
Hello, here are my votes on Simon Marlow's questions.

  Issue 1: should the maximum heap size be unbounded by default?
  Currently the maximum heap size is bounded at 64M. [..]
    1. remove the default limit altogether
    2. raise the default limit
    3. no change

Set the default to 30M.

  Issue 2: Should -M be renamed to -H, and -H renamed to something
  else? The argument for this change is that GHC's -M option is closer
  to the traditional meaning of -H.

Yes.

  Issue 3: (suggestion from Julian S.) Perhaps there should be two
  options to specify `optimise for memory use' or `optimise for
  performance', which have the effect of setting the defaults for
  various GC options to appropriate values. [..]

All right. But let the meaning of -O remain as it was. - Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-users mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
scope change
ghci-5.00.2 (binary distribution for linux, i386-unknown) changes the scope unexpectedly. It looks like this:

  ghci Foo
  Foo> f 1
  Foo> 0.1 * 0.2 :: Double
  2.0004e-2
  Foo> :l Foo
  ...
  Foo> 0.1 * 0.2 :: Double
  ... unknown value `*'
  Foo> :m Prelude
  ...
  Prelude> 0.1 * 0.2 :: Double
  2.0004e-2

But the behavior changes, and I cannot always reproduce the effect. - Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-bugs mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs
-hidir
Simon Marlow said the -hidir option was added, but ghc-5.00.2 does not seem to recognize it. - Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-bugs mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs
visibility in interactive `let'
Hello, I have the impression that the interactive `let' in ghc-5.00.2 (binary distribution for linux i386-unknown) loses visibility of some instances. Having installed my ghc package docon, with its libraries, I command

  ghci $doconOpts
  ...
  Loading package docon ... linking ... done.
  Prelude> :m DExport
  DExport> let {q1 = 1:/1 :: Fraction Z;  dQ = upField q1 eFM;
                (_, rR) = baseRing q1 dQ }
           in  subringChar rR
  Just 0

- and this is correct. Changing the `let' part to

  DExport> let q1 = 1:/1 :: Fraction Z
  DExport> let dQ = upField q1 eFM

yields the report

  no file:0: No instance for `Field (Fraction Z)'
             arising from use of `upField' at no file:0
             in the definition of function `dQ': upField q1 eFM

$doconOpts = -package docon plus language-extension and other options. DExport exports everything of the docon application, all the needed instances too. type Z = Integer; :/, Fraction, upField come from DExport; eFM = emptyFM is of FiniteMap, linked by docon. As I sent the DoCon project earlier to Simon Marlow, you can reproduce this example. - Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-bugs mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs
loading .hs instead of .o
In addition to the report on ghc-5.00.1 (2) (binary, linux, i386-unknown) loading interpreted code instead of compiled code, there appears another, similar effect: if Foo.o is in the current directory, then `ghci Foo' sometimes loads the interpreted code, while

  ghci
  ...
  Prelude> :l Foo

loads Foo.o. This is visible in the project I sent to Simon Marlow, with Foo.hs = .../demotest/T_permut.hs (it needs -package docon). - Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-bugs mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs
paths in ghci-5.00.1
Hello, ghc-5.00.1 --interactive (binary distribution, linux i386-unknown) finds or does not find the value allPermuts depending on the directory where ghci is run:

  ghci ... -package docon
  ...
  Prelude> Permut.allPermuts $ reverse [1..3]

The project is as follows. d/source/ contains the sources, Permut.hs among them. Everything was compiled by

  ghc $(HCFlags) -package-name docon --make DExport

and all *.o, *.hi were moved to d/e/, where the .o and .a libraries are created and referred to by the package docon. Now, when ghci is run in d/source/, it does not find allPermuts; in other directories, it does. Is this a bug? - Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-bugs mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs
.o not found
To my

  ghc-5.00.1 --interactive (binary, linux, i386-unknown) sometimes
  forgets about an existing Foo.o module to load, and loads the
  interpreted code instead. A strange effect. Compiled and loaded is a
  single module Foo.hs in the current directory. In such cases I
  compile Foo.hs once more, rerun ghci, and then it finds Foo.o. How to
  trace this down if the effect is not always reproducible?

Simon Marlow [EMAIL PROTECTED] writes

  I found one bug recently which could cause this, if you have
  recursive modules and use .hi-boot files. If this isn't the case, you
  can get more info using the -ddump-hi-diffs option, which should give
  some clue as to why GHC thinks the module needs to be recompiled.

My case is the second one. And GHC does not report a recompilation need. It simply may load interpreted code instead of the .o. I added -ddump-hi-diffs (to the ghc -c and ghci lines), but I cannot reproduce the effect. Any simplest Foo.hs would do. - Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-bugs mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs
extra_ghc_opts
To my

  The lack of extra_command_line_opts is not important, but I wonder,
  why make things worse? [..] I thought the idea of -package was to
  collect in one place all the project-specific settings. For example,
  for my project docon, I need
    doconOpts = -fglasgow-exts -fallow-this-and-this ...

Simon Marlow [EMAIL PROTECTED] writes

  I'm in two minds about this. At the moment, using a -package option
  does two well defined things:
    - it brings a set of modules into scope
    - it links in the object code for those modules
  According to the Principle of Least Astonishment, we should force the
  user to add any extra options himself, because if '-package wibble'
  automatically adds a number of language-feature options then it's
  just too magical and non-obvious. On the other hand, I presume you
  *need* these options in order to be able to use Docon, so adding them
  automatically would seem reasonable. Opinions, anyone?

Whether to add the language-feature options, or many other options, automatically - this is up to the designer of the package. The designer is always free to set [] in the package. And you were going to disable this choice (concerning packages). - Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-users mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
ghc_extra_opts in package
To my

  I tried (file p.txt)
    Package {name = "foo", import_dirs = [], [..] package_deps = [],
             extra_ghc_opts = ["-v"], extra_cc_opts = [],
             extra_ld_opts = [] }
  [..]
    ghc -c -package foo Main.hs
  expecting that it would add -v automatically after -c. But it does
  not. [..]

Simon Marlow writes

  That does indeed look like a bug: we don't pay any attention to the
  extra_ghc_opts at all. In fact, I'd like to remove that line from the
  package spec altogether, since the package spec isn't supposed to be
  ghc-specific at all. So unless you have a particularly good use for
  it, could I remove extra_ghc_opts?

Why is it ghc-specific? Call it extra_command_line_opts, and it will not look ghc-specific. The lack of extra_command_line_opts is not important, but I wonder, why make things worse? One handicraft teacher said: make it as good as possible from the start, for it will come out bad by itself. I thought the idea of -package was to collect in one place all the project-specific settings. For example, for my project docon, I need

  doconOpts = -fglasgow-exts -fallow-this-and-this ...

Now, the user commands

  ghc -c $doconOpts -package docon Foo.hs

And with extra_command_line_opts, this would be

  ghc -c -package docon Foo.hs

The user has only to set the path variable and command `make docon' to install the project. The package is created automatically. Then, the user has only to remember to add -package docon to the command line. One does not have to remember adding $doconOpts, does not have to think about the compilation error reports when one forgets to type $doconOpts. Neither does one have to care for the initiation of $doconOpts, because it was given in the Makefile. Am I missing something? - Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-bugs mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs
extra_ghc_opts in package
Hello, the ghc-5.00.1 User's Guide says

  extra_ghc_opts   Extra arguments to be added to the GHC command line
                   when this package is being used.

I tried (file p.txt)

  Package {name = "foo",
           import_dirs = [], source_dirs = [], library_dirs = [],
           hs_libraries = [], extra_libraries = [], include_dirs = [],
           c_includes = [], package_deps = [],
           extra_ghc_opts = ["-v"], extra_cc_opts = [],
           extra_ld_opts = [] }

  ghc-pkg -a p.txt

  main = putStr "hello\n"        (file Main.hs)

  ghc -c -package foo Main.hs

expecting that it would add -v automatically after -c. But it does not (ghc-5.00.1 binary for Linux i386-unknown). Could you advise, please? - Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-bugs mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs
:l with package lib
In ghc-5.00.1 linux-i386-unknown

  cd ~/t
  ghci -package docon
  ...
  Prelude> :l B

loads the interpreted code. But earlier there were applied

  make init
  make docon

which build a package in ~/t/export/ with the HSdocon.o, libHSdocon.a libraries. Should ghci load the .o code? Also, changing to another directory gives the error

  ...
  Prelude> :l B
  module `B' is a package module

To find the example, apply uudecode and unzip to the small project enclosed below, with the directory structure

  ~/t/export/
     /source/Makefile, *.hs
     /source/auxil/

- Serge Mechveliani [EMAIL PROTECTED]

[uuencoded archive t.zip (1590 bytes) omitted]

___ Glasgow-haskell-bugs mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs
package for ghci
ghci (5.00.1) rejects my packages (ghc compiled from source for Linux, i386-unknown). Let us take some simplest project foo, consisting of a single file /t/A.hs containing

  module A where
  a = True

Suppose the user is going to work in the directory /u/, to refer to the library kept in /t/export/, and to command

  cd /u
  ghci -package foo
  ...
  Prelude> :l A
  ...
  A> a

Please, how to compile the project creating a package foo for this? --- I try starting with

  ghc-pkg -a foo p.txt
  ...

Then

  cd /t
  ghc -package-name foo --make A

does not differ from ghc --make A in any visible effect: A.hi, A.o appear as usual. Then, after creating /t/export/libHSfoo.a manually and converting it by

  ld -r --whole-archive libHSfoo.a -o libHSfoo.o

(without -r, it reports an error of a non-found reference PrelBase...), it still does not work:

  ghci -package foo

yields

  Loading package foo ... can't find .o or .so/.DLL for: HSfoo
  (libHSfoo.so: cannot open shared object file: No such file

Please, can you reproduce this whole business of making, packaging and interpreting, and show me how it should be built? What should a package for this look like? Is it created manually or by ghc -package-name ...? For I have the impression that the ghc-5.00 User's Guide cannot help in this situation. Thank you in advance for the explanation. - Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-bugs mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs
sum1, -O
Please, what is the matter here with ghc-5.00.1 (compiled from source for Linux i386-unknown)?

  ----------------------------------------------------
  sum1 :: (Num a) => [a] -> a
  sum1 []     = error "sum1 []\n"
  sum1 (x:xs) = sm xs x  where  sm []     s = s
                                sm (x:xs) s = sm xs (x+s)

  main = putStr ""
  ----------------------------------------------------
  ghc -c -O Main.hs
  ghci +RTS -M70m -RTS
  ...
  Main> sum1 [1 .. 1999000]
  GHC's heap exhausted; ...
  ----------------------------------------------------

Now, if we change it to

  ...
  main = putStr $ shows (sum1 [1 .. 1999000]) "\n"

and recompile, then it yields

  ghci +RTS -M70m -RTS
  ...
  Main> main
  1998001499500

- Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-bugs mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs
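The difference is consistent with strictness analysis: with -O, GHC makes the accumulator of sm strict, while without it each step builds a (+) thunk that only show eventually forces. A variant that stays in constant space either way - the seq placement is my addition, not part of the original report:

```haskell
-- sum1 from the report, with the accumulator forced at each step
sum1 :: Num a => [a] -> a
sum1 []     = error "sum1 []\n"
sum1 (x:xs) = sm xs x
  where
    sm []     s = s
    sm (y:ys) s = let s' = y + s
                  in  s' `seq` sm ys s'   -- force each partial sum
```

With this, sum1 [1 .. 1999000] yields 1998001499500 (the result printed above) without depending on the optimiser.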
ghc-5.00.1: can't find `demo/D_permut.o' to unload
Dear ghc-5.00.1 (built from source for Linux i386-unknown), here is something like a bug.

  set balOpts = -fglasgow-exts -fallow-overlapping-instances
                -fallow-undecidable-instances
                -fno-warn-overlapping-patterns -fwarn-unused-binds
                -fwarn-unused-matches -fwarn-unused-imports

  ghc --make Mk.hs $balOpts
  ...
  ghci $balOpts
  ...
  Prelude> :load demo/D_permut
  Ok, modules loaded: Main, Permut, BPrelude, BPair, BInteger, BList,
                      Prelude_, OpTab, Z_, IParse.
  Main> main
  True
  Main> :l
  can't find module `'
  Main> :l demo/D_linAlg
  unloadObj: can't find `demo/D_permut.o' to unload
  ghc-5.00.1: panic! (the `impossible' happened, GHC version 5.00.1):
          unloadObj: failed

The modules in the current directory were compiled earlier, and demo/D_permut, demo/D_linAlg are to be interpreted; their .o files need not appear. - Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-bugs mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs
:load un-existing-module
Inserting `:load un-existing-module' between other :load commands in ghci-5.00.1 leads to a crash. - Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-bugs mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs
BAL paper available
Paper announcement -- The file http://www.botik.ru/pub/local/Mechveliani/basAlgPropos/haskellInCA1.ps.zip contains further expanded explanations on the BAL (Basic Algebra Library) project (the previous variant was haskellInCA.ps.zip). My real intention in this whole line of business was always not just to propose a standard, but rather to discuss and to find what may be an appropriate way to program mathematics in Haskell. The matter was always in parametric domains ... Whoever has tried to program real CA in Haskell would agree that such a problem exists. - Serge Mechveliani [EMAIL PROTECTED] ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
permutations. Reply
Andy Fugard [EMAIL PROTECTED] writes Hi all, I'm currently teaching myself a little Haskell. This morning I coded the following, the main function of which, permutate, returns all the permutations of a list. (Well it seems to at least!) [..] The BAL library implements several operations with permutations. In particular, it can list permutations by applying nextPermutation. You may look into the source and demonstration example: http://www.botik.ru/pub/local/Mechveliani/basAlgPropos/ - Serge Mechveliani [EMAIL PROTECTED] ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
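For readers without BAL at hand, the usual interleaving construction lists all permutations in a few lines; this is a textbook sketch, not BAL's nextPermutation:

```haskell
-- all permutations of a list: insert the head into every position of
-- each permutation of the tail
perms :: [a] -> [[a]]
perms []     = [[]]
perms (x:xs) = concatMap (insertions x) (perms xs)
  where
    insertions y []       = [[y]]
    insertions y l@(z:zs) = (y:l) : map (z:) (insertions y zs)
```

For a list of length n this produces all n! permutations; e.g. perms [1,2] is [[1,2],[2,1]].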
CA proposal by D.Thurston
I was told that there exists an algebraic library proposal for Haskell (version 0.02), dated February (2001?), by Dylan Thurston. Could someone please tell me where to download this document from? For I could not find it from http://haskell.org. Thanks in advance for the help. - Serge Mechveliani [EMAIL PROTECTED] ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
standard library. Reply
Ch. A. Herrmann [EMAIL PROTECTED] writes

  [..] Anyway, an algebraic library is important: it is nice that
  Haskell has the rational numbers but recently, it appeared useful for
  me also to have the algebraic numbers, e.g., to evaluate expressions
  containing roots exactly. The problem is that I'm not an expert in
  this stuff and thus, I'd be very glad if such things are added by an
  expert. On the other hand, I'd like to add things like a linear
  equation solver for a non-invertible system which may help to
  convince people that Haskell provides more features than other
  programming languages do.

The BAL library http://www.botik.ru/pub/local/Mechveliani/basAlgPropos/ provides such a linear solver, as well as operations with roots. For example, the root of x^5 - x + 1 can be handled (in many respects) in BAL simply as a residue of polynomials modulo this equation - a data like (Rse ... (x^5-x+1)). But BAL is not a standard library. And there is another point:

  [..] for a non-invertible system which may help to convince people
  that Haskell provides more features than other programming languages
  do.

In any case, we have to distinguish between a standard library and an application. A standard library should be small. I think, for Haskell, it should be something like what you mention now. But, for example, the true algebraic number theory algorithms are too complex; they are for the writers of non-standard applications. And if a language is good, there should come many special applications (non-standard ones). Haskell's www page does reveal some. Regards, - Serge Mechveliani [EMAIL PROTECTED] ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
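To make the linear-solver remark concrete: exact solving over Rational fits in a few lines of elimination. A sketch only - the name solveLin is mine, it handles square systems, and unlike the BAL solver mentioned above it simply answers Nothing for a non-invertible system instead of describing its solution set:

```haskell
-- Gaussian elimination over Rational on a square system  A x = b.
-- Returns Just the solution for invertible A, Nothing otherwise.
solveLin :: [[Rational]] -> [Rational] -> Maybe [Rational]
solveLin a b = go (zipWith (\row r -> row ++ [r]) a b)  -- augment A with b
  where
    go [[c, r]] = if c /= 0 then Just [r / c] else Nothing
    go rows =
      case break (\row -> head row /= 0) rows of
        (_, [])      -> Nothing                         -- no pivot: singular
        (zs, p : ps) ->
          let elim row = let f = head row / head p      -- clear column 0
                         in  zipWith (\pc rc -> rc - f * pc)
                                     (tail p) (tail row)
          in  do xs <- go (map elim (zs ++ ps))         -- reduced system
                 let x0 = (last p - sum (zipWith (*) (init (tail p)) xs))
                          / head p                      -- back-substitute
                 return (x0 : xs)
```

For instance, solveLin [[2,1],[1,3]] [5,10] gives Just [1,3], exactly, with no floating-point rounding.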
space usage in ghc-5.00 --interactive
Something is wrong in the space usage of GHC-5.00 --interactive. Is the interpreted code larger than the code compiled to object? I try a certain small project having the import dependency

  [diagram: Set_ at the top imports Categs, Categs_, Common_; below
   them come IParse_ and Prelude_; the edges are oriented downwards]

- Set_ imports Categs ..., and some items are also imported from FiniteMap, List ... After setting

  opts = -syslib data
         -fglasgow-exts -fallow-overlapping-instances
         -fallow-undecidable-instances -fno-warn-overlapping-patterns
         -fwarn-unused-binds -fwarn-unused-matches -fwarn-unused-imports

and running

  ghc --make $opts Set_.hs

there appear .o and .hi files of total size less than 800 Kb. Then

  ghci $opts +RTS -M7m -RTS Set_

loads the project successfully. Why does -M6m not suffice? Further, after

  rm *.hi *.o
  ghci $opts +RTS -M24m -RTS Set_

24 Mb is not enough to load the interpreted code. Something is wrong here. For size(*.o + *.hi) < 1 Mb. Maybe ghci loads several copies of Prelude_? I am sending the archive to Julian Seward [EMAIL PROTECTED] only, in order not to fill the e-mail list with a large letter. - Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-bugs mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs
ghc-5.00 --make -O2
ghc-5.00 --make -O2 (binaries improved by J.Seward for Linux, i386-unknown) cannot `make' a certain small project. After setting

  opts = -syslib data  -O2 -fvia-C -O2-for-C  +RTS -M50m -RTS
         -fglasgow-exts -fallow-overlapping-instances
         -fallow-undecidable-instances -fno-warn-overlapping-patterns
         -fwarn-unused-binds -fwarn-unused-matches -fwarn-unused-imports

and running

  ghc --make $opts DExport

it reports

  ...
  DPrelude.hs:14: Warning: Module `Char_' is imported, but nothing
    from it is used (except perhaps to re-export instances visible in
    `Char_')
  Compiling SetGroup         ( SetGroup.hs, SetGroup.o )
  ...
  Compiling Vec_             ( Vec_.hs, Vec_.o )
  GHC's heap exhausted
  ...

Then I repeat the command, because this was the way to `make' it to the end:

  ghc --make $opts DExport
  ghc-5.00: chasing modules from: DExport
  Skipping  Prelude_         ( Prelude_.hs, Prelude_.o )
  Skipping  Categs_          ( Categs_.hs, Categs_.o )
  Skipping  Common_          ( Common_.hs, Common_.o )
  ./Common_.hi:73: tcLookup: `Common_.a' is not in scope
  When checking kinds in `Common_.a'
  When checking the transformation rule SC:poly_minPartial_stCK1

I am sending the source archive to J.Seward [EMAIL PROTECTED] and S.Marlow [EMAIL PROTECTED], in order not to fill the list with an archive. - Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-bugs mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs
ghci, -O
I fear there is a certain misunderstanding about -O usage with ghc-5.00 interactive. Simon Marlow and the User's Guide (Section 4.4) say that -O does not work with ghci:

  For technical reasons, the bytecode compiler doesn't interact well
  with one of the optimization passes [..]

What is a bytecode compiler? The one that prepares a lightly compiled code for the interpreter? What I meant is something like this: compile

  ghc -c -O Foo.hs

in the batch mode, and then run

  ghci
  ...
  Prelude> :load Foo
  ...
  Foo> sm [1..000]

I tried this with a certain small function Foo.sm, and it works, and runs better than when compiled with -Onot. Now, as I see that ghci can load and run the code made by -O, I wonder what the User's Guide means by saying -O does not work with GHCi. Maybe `ghci -O' could be meaningful? - Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-users mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
ghc --make -O, ghci
To my

  Me> For, with -O and much in-lining, the interfaces may be very
  Me> large.

Simon Marlow [EMAIL PROTECTED] replies

  Ma> You really don't want to use -O with GHCi (in fact, at the moment
  Ma> you can't).
  Me> This may be serious. For example, putStr $ show (sum1 [1..n])
  Me> will need a constant memory or O(n) memory depending on the -O
  Me> possibility.
  Ma> True. But compiling with optimisation takes 2-3 times longer than
  Ma> interpreting - if you're prepared to wait that long, you might as
  Ma> well compile instead. Compiling will give more of a performance
  Ma> boost in general (about 10x) than turning on -O (perhaps 2x).

?? I do not understand. I meant -O only for `ghc --make -O' (does -O make sense in another situation?), to compile the whole application library _once_. The .o, .hi files are prepared once (why are you saying `to wait that long'?). Then, ghci is run thousands of times; it uses the ready, optimized library code and loads smaller examples to interpret. (I never tried -O to compile for ghci, because so far we are considering the bugs made by ghci using the code produced with -Onot). Typical example: the product of concrete matrices, matrMul M N, is called for several M, N - in interpreted mode. But matrMul refers to the application library code compiled (once) with -O. Is this possible? Please document these questions in the manual. - Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-bugs mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs
memory distribution in ghci
To my request

  [..] can ghci provide a command measuring the part of heap taken by
  interpreted programs, by interfaces? (or, at least, their sum)

Simon Marlow [EMAIL PROTECTED] writes

  Ah, I see now. You'd like to know how much memory your program is
  using, compared to GHCi itself? Hmm. Unfortunately I can't see a good
  way to do this I'm afraid.

In

  ghci ... +RTS -Mxxxm -RTS

xxx sets the volume bound for (1) the `real' data + (2) the interpreted program + (3) the interfaces of pre-compiled loaded .o modules. Please confirm whether I understand this correctly. And many users would like to measure (1) within the ghci loop. Statistics like

  Prelude> List.sort [1..10]
  [1,2,3,4,5,6,7,8,9,10]
  (0.53 secs, 7109100 bytes)

include the number of allocations, which cannot help to detect (1) (can it?). This is a user's wish for a future ghci version. I do not know whether it is hard to develop. If it is hard, personally, I would not suffer much. Because, getting heap exhausted, one can run ghci once more with a larger xxx, and by performing the same computation detect approximately which part of xxx is taken by the `real' data. But this looks awkward. Another wish: to document the ghci memory management - which we are discussing now. Because then GHC could simply give a reference to the manual section.

  However, you can always compile the program and use profiling...

Suppose User demonstrates a Haskell application A to the Listener. Naturally, most of A constitutes its `library' L, and L was compiled to .o files (desirably gathered in the libL.a archive). ghci invokes and loads the _interpreted_ demonstration code D, much smaller than L, but importing items from L. Naturally, Listener asks something like "and how much memory is needed to compute this matrix?". In many other systems, an interactive command, say `:size', immediately gives some interpreter memory map and the sizes taken by `code' and `data'. And you suggest waiting 2 hours to re-compile _everything_ with profiling. The Listener would not wait.
For, with -O and much inlining, the interfaces may be very large.

  You really don't want to use -O with GHCi (in fact, at the moment you can't).

This may be serious. For example, putStr $ show (sum1 [1..n]) will need constant memory or O(n) memory depending on the -O possibility. The docs on ghc-5.00 say that it joins the best of the two worlds: interpreted code calling the fast code compiled earlier. Generally, I like only scientifically designed `interpreters' and always advised not to spend effort on .o compilation. But as nobody agrees with me, and the o-compilation fiesta continues, why spend the effort in vain? - Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-users mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
memory distribution
Here is another expression of what I wrote recently on the memory distribution. Probably, it suffices to have a garbage-collection command. For example, Hugs can do

  Prelude> :gc
  Garbage collection recovered 240372 cells
  ...

It is clear then how much the user has for further computation. - Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-users mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
memory distribution in ghci
It looks like there remains a question on memory distribution in ghc-5.00-interpreted: can ghci provide a command measuring the part of the heap taken by interpreted programs and by interfaces? (or, at least, their sum) For, with -O and much inlining, the interfaces may be very large. Also compare this to the _batch_ mode: commanding

  ghc -o run Foo.o
  ./run +RTS -MXXm -RTS

the user is sure that XX Mb is only for the `real' computation data. - Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-users mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
want to send archive with bugs
May I send to someone on the GHC team an archive with a couple of bugs in ghc-5.00? I mean, not to copy the archive to the whole list ... and to debug. - Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-bugs mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs
:l XX libFoo.a
The :load command of the ghc-5.00 interpreter first searches for the needed compiled modules (*.o) and loads them when it finds them. But how can one make it search for them in an object-code library xxx/libFoo.a? For it is better to keep the fixed *.o files in a library. And `:load' would then work in this situation similarly to ghc -o XX.o ... I tried :l XX libFoo.a, but it does not work. - Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-users mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
memory in --make
Hello, It occurs that ghc-5.00 --make spends memory in a particular way. Making my project via a Makefile (invoking the ghc driver many times) succeeds within -M29m, while

  ghc --make ... -Mxxx Mk.hs

needs more than 50Mb, probably because it keeps much intermediate information between compiling the modules. Still, I managed to `make' it with -M29m by issuing the latter command two more times after the insufficient-heap break. ghc --make still looks faster and better to arrange: each time it compiles only the remaining modules. Maybe something can be done to avoid these heap-exhausted breaks? For, seeing that some modules remain to compile and the heap is exhausted, ghc could save the intermediate information to disk, giving room for the next module compilation. It could also restart the driver itself - with an appropriate message - ? - Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-users mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
:! ghc -c vs :c
The ghc interactive system ghci compiles a module by the command :! ghc -c Would it make sense to provide a command :compile (:c), having a meaningful name and a format similar to other commands, like, say, :load? - Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-users mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
-make
Hello, I have two questions on ghc-5.00 -make. The User's Guide says in Section 4.5 that it is simpler to use -make than the Makefile technique.

1. I try  ghc -make Main.hs  with Main.hs containing  main = putStr "hello\n"  (in ghc-5.00, binary, i386-unknown, Linux), and it reports

  ghc-5.00: unrecognised flag: -make

- ?

2. How to simplify (eliminate?) the Makefile. My project has many .hs files in the directories ./source/ ./source/auxil/ ./source/pol/factor/ ... `make ...' compiles them, putting the .hi, .o files into ./export/, then applies `ar' to make the .a library. To do all this, the Makefile includes rules like

  .hs.o:
          $(HC) -c $< $(HCFlags)

and applies the compiler $(HC) with the flags ... -syslib data -recomp ... -i$(E) -odir $(E) -ohi $(E)/$(notdir $*).hi $($*_flags) ... Now, this Makefile does not work under ghc-5.00, because the second compilation cannot find the .hi file produced by the first compilation:

  ..ghc -c source/auxil/Prelude_.hs -fglasgow-exts ... -recomp -iexport -odir export -ohi export/Prelude_.hi +RTS -G3 -F1.5 -M29m -RTS -Onot ...
  ..ghc -c source/parse/Iparse_.hs -fglasgow-exts ... -recomp -iexport -odir export -ohi export/Iparse_.hi +RTS -G3 -F1.5 -M29m -RTS -Onot
  source/parse/Iparse_.hs:20: failed to load interface for `Prelude_': Bad interface file: export/Iparse_.hi does not exist

Also, what can -make do to replace some large part of the Makefile? For (a) Makefile has a rather odd language; it is not good to force the functional programmer to look into it, (b) one has to think of Makefile versions for different operating systems. Thank you in advance for the help. - Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-users mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
implicit-parameters paper
Simon P. Jones mentions some paper on implicit parameters in his recent letter on Implicit parameters and monomorphism. Please, where can I find this paper? - Serge Mechveliani [EMAIL PROTECTED] ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
`Illegal instruction'
I have just about installed the ghc-5.00 binary on Linux, i386-unknown. But I have libreadline.so.4.1 instead of libreadline.so.3, so I made a symbolic link from ..so.3 to ..so.4.1. Then

  import Ratio
  main = putStr (shows (0.9 :: Rational) "\n")

works, and ... 0.9 :: Double ... yields an `Illegal instruction' report when run. Could you tell me, please, 1. does this library replacement do any harm? 2. why `Illegal instruction'? - Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-bugs mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs
no problem with .sgml
I wrote recently | I want to install GHC-5.00 (from binaries, on Linux). | But the installation guide, as most documentation, is in the .sgml | format. I can read ASCII, .dvi, .ps, .tex, html texts. | But what to do with .sgml, please? Forget this question, please: I unpacked the binaries and found there the needed docs on ghc in the needed formats. - Serge Mechveliani [EMAIL PROTECTED] ___ Glasgow-haskell-users mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
question of decimal pointed literals
Can we solve and close the problem of the meaning of decimal-pointed literals? To my

| People wrote about toRational (0.9) == 9%10 = False
| ...
| Probably, the source of the `bug' is a language agreement that the
| input is in decimal representation (`0.9') and its meaning is a
| floating approximation in _binary_ representation.
| For example, 1%5 yields a finite mantissa in decimal representation
| and an infinite (periodic) mantissa for the binary representation of
| toBinary (1%5) = 1B % 1001B (a typo: this should be 101B).
| Therefore applying toRational to any finite float approximation
| (Double, or other) of 1%5 does not return 1%5.

Lennart Augustsson [EMAIL PROTECTED] writes

  What are you talking about? Input in decimal representation is stored
  as a Rational number. There is absolutely no loss of precision.

and in his next letter cites the Report: "The floating point literal f is equivalent to fromRational (n Ratio.% d), where fromRational is a method in class Fractional and Ratio.% constructs a rational from two integers, as defined in the Ratio library. The integers n and d are chosen so that n/d = f." By saying "input is in decimal representation (`0.9')" I meant that the number `0.9' is written by the programmer keeping in mind a decimal representation. Further, according to the citation, 0.9 --> fromRational (9%10), 0.2 --> fromRational (1%5) are stored as values of type Fractional a => a.

(1) What is this `a' in our example of toRational (fromRational (0.9)) == 9%10 ?
(2) Why does Haskell not report an ambiguity error? For `a' may be Rational, Double, Float - just anything in Fractional.

If we take Lennart's assertion, then it should be `a' = Rational. Then, in particular, toRational (0.d) == d%10 = True for any decimal literal 0.d, and * any other behavior is a bug. * Is this really so? Now, Hugs-98-Feb-2000, probably, decides that `a' = Double.
Because it returns False for 0.9. And in this case these values are _not_ stored as rational numbers, but are converted to Double, with the precision loss caused, as I wrote initially, by truncating the division result, like 1001B % 1010B. Could someone explain, please, whether I understand the whole question correctly, and answer questions (1), (2)? And is `False' a bug? Thanks in advance for the comments. - Serge Mechveliani [EMAIL PROTECTED] ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
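[For concreteness, here is a small runnable check of the two readings of the literal; this is only an illustration of the behavior in question, not a verdict on questions (1) and (2). Data.Ratio is today's module name for the Report's Ratio library.]

```haskell
import Data.Ratio ((%))

main :: IO ()
main = do
  -- If the defaulted type `a' is Double, the literal becomes a binary
  -- approximation, and toRational recovers that approximation, not 9%10:
  print (toRational (0.9 :: Double) == 9 % 10)
  -- If `a' is Rational, the literal is stored exactly as 9%10:
  print (toRational (0.9 :: Rational) == 9 % 10)
```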
Re: toRational (0.9). Reply
sorry, I did not know this. ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
toRational (0.9)
Jerzy Karczmarczuk [EMAIL PROTECTED] writes

| Lennart Augustsson wrote:
|   "S.D.Mechveliani" wrote: ...
|   ... Probably, the source of a `bug' is a language agreement that the
|   input is in decimal representation (`0.9') and its meaning is a
|   floating approximation in _binary_ representation.
|   What are you talking about? Input in decimal representation is stored
|   as a Rational number. There is absolutely no loss of precision.
| No need for whatareyoutalkingabout preamble.
| Input in decimal representation *in general* is stored as the
| implementors wish.
| [..]

The matter is what the _language standard_ says. If it says that `0.9' in a user program means precisely 9%10, then Lennart is right. As I never used such things as `0.9', I shall take, so far, what the language experts say. - Serge Mechveliani [EMAIL PROTECTED] ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
(fromRational (1%5)) :: Double
Lennart Augustsson [EMAIL PROTECTED] writes

  "S.D.Mechveliani" wrote:
    the matter is what the _language standard_ says. If it says that `0.9'
    in a user program means precisely 9%10, then Lennart is right.
  I quote the report: "The floating point literal f is equivalent to
  fromRational (n Ratio.% d), where fromRational is a method in class
  Fractional and Ratio.% constructs a rational from two integers, as
  defined in the Ratio library. The integers n and d are chosen so that
  n/d = f." I think the way Haskell handles numeric literals is pretty
  nice and it's important to understand what happens if you use them. :)

I confess, I do not understand from the previous letters in what way, for example, Hugs finds

  Prelude> (fromRational (1%5)) :: Double
  0.2

I never dealt with Floats in Haskell, and I thought that a value of Double has its mantissa in binary representation. So the interpreter has to convert the decimal 5 to the binary 101B, evaluate 1B / 101B, obtaining an infinite sequence of binary digits, and take the first m of them, as required for Double. Printing the result should then yield something like 0.1999... Thank you in advance for the explanation. - Serge Mechveliani [EMAIL PROTECTED] ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
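[A hedged illustration of what is going on, checkable with a modern GHC: the stored Double is indeed only a binary approximation of 1/5, but show does not print the full decimal expansion of that approximation; it prints a short decimal string denoting the same Double, hence 0.2.]

```haskell
import Data.Ratio ((%), denominator)

main :: IO ()
main = do
  let x = fromRational (1 % 5) :: Double
  print x                    -- show picks a short decimal form
  -- The exact rational value actually stored is a dyadic fraction,
  -- not 1%5, so the precision loss described above is real:
  print (toRational x == 1 % 5)
  print (denominator (toRational x))   -- a power of 2, not of 5
```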
`Covertible' class. Reply.
Dylan Thurston [EMAIL PROTECTED] writes

  In thinking about various issues with the numeric classes, I came up
  with the following question: Is there a problem with having a class
  'Convertible' as follows?  class Convertible a b where convert :: a -> b
  [..]

The basic algebra library BAL http://www.botik.ru/pub/local/Mechveliani/basAlgPropos/bal-pre-0.01/ suggests

  class Cast a b where  cast :: a -> b -> a

If s is an element of a certain domain, then one can use the construction  cast s x  to convert various data x to the corresponding canonical values in the domain defined by s. For example, if s <- Z[x] is a polynomial over Integer, then the expressions  cast s 2,  cast s (2,3)  give the constant 2, considered as a polynomial, and a one-term polynomial equal to 2*x^3. Also BAL overloads +, * ... by hiding and re-exporting the standard Prelude, and it implements the algebraic categories Group, Ring ... - Serge Mechveliani [EMAIL PROTECTED] ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
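[A minimal self-contained sketch of this Cast interface, with toy dense polynomials standing in for Z[x]; the Pol type and both instances are illustrative, not BAL's actual code.]

```haskell
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

class Cast a b where
  cast :: a -> b -> a   -- cast s x: convert x into the domain of sample s

-- Toy dense polynomials over Integer, coefficients listed from degree 0 up:
newtype Pol = Pol [Integer] deriving (Eq, Show)

-- An Integer constant becomes a constant polynomial:
instance Cast Pol Integer where
  cast _ n = Pol [n]

-- A (coefficient, exponent) pair becomes a one-term polynomial,
-- e.g. cast s (2,3) represents 2*x^3:
instance Cast Pol (Integer, Int) where
  cast _ (c, e) = Pol (replicate e 0 ++ [c])

main :: IO ()
main = do
  let s = Pol [0, 1]    -- the sample: the polynomial x
  print (cast s (2 :: Integer))
  print (cast s ((2, 3) :: (Integer, Int)))
```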
finding primes
Hello, I wrote recently on generating prime numbers [..] The DoCon program written in Haskell (again, no arrays) gives [..]

  take 5 $ filter isPrime [(10^9 :: Integer) ..]
  --> [1000000007,1000000009,1000000021,1000000033,1000000087]

This takes 0.05 sec. [..] But it is better to refer to the BAL library for Haskell (basAlgPropos) http://www.botik.ru/pub/local/Mechveliani/basAlgPropos/ It is easier to use. -- Sergey Mechveliani [EMAIL PROTECTED] ___ Haskell-Cafe mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell-cafe
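[The isPrime used above is DoCon's; as a stand-in, a naive trial-division version, far slower than DoCon's but enough to reproduce the computation, might look like this.]

```haskell
-- Naive trial division; a sketch only. DoCon's actual isPrime is
-- certainly implemented differently (and much faster).
isPrime :: Integer -> Bool
isPrime n
  | n < 2            = False
  | n `elem` [2, 3]  = True
  | even n           = False
  | otherwise        = all (\d -> n `mod` d /= 0)
                           (takeWhile (\d -> d * d <= n) [3, 5 ..])

main :: IO ()
main = print (take 5 (filter isPrime [10 ^ 9 ..]))
```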
group theory. Reply
Hi, all, To Eric Allen Wohlstadter's ([EMAIL PROTECTED]) :

: Are there any Haskell libraries or programs related to group theory? I am
: taking a class and it seems like Haskell would be a good programming
: language for exploring/reasoning about group theory. What I had in mind
: was perhaps you could have a function which takes a list (set) and a
: function with two arguments (binary operator) and checks to see whether or
: not it is a group. I think it might be a fun exercise to write myself but
: I'd like to see if it's already been done or what you guys think about it.

Marc van Dongen [EMAIL PROTECTED] writes

  I think Sergey Mechveliani's docon (algebraic DOmain CONstructor) has
  facilities for that. Have a look at:
  http://www.cs.bell-labs.com/who/wadler/realworld/docon.html

Sorry, DoCon (http://www.botik.ru/pub/local/Mechveliani/docon/2.01/) really supports Commutative Rings but provides almost nothing for Group theory. For example, for the domain (Integer,Integer) it would set automatically (IsGroup,Yes) for the additive semigroup and (IsGroup,No) for the multiplicative semigroup. For the additive case, it would also set the group generator list [(1,0),(0,1)]. In both cases, it would also set cardinality = Infinity. Similar attributes are formed for the constructors Permutation, Vector, Matrix, Polynomial, Fraction, ResidueRing. And that is all. It does not provide so far any real algorithmic support for Group theory, except some operations on permutations. But one may develop the program by adding the needed algorithms and introducing new attributes. As to the function Eric describes, which takes a set with a binary operator and checks whether it is a group: I never programmed this. It looks like some exercise in algorithms. There are also books on combinatorial group theory; maybe they say something about efficient procedures for this. Regards, -- Sergey Mechveliani [EMAIL PROTECTED] ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
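[The exercise Eric describes, checking whether a finite set with a binary operation forms a group, can be sketched by brute force; the names below are my own, nothing from DoCon.]

```haskell
-- Brute-force group check for a finite carrier: closure, associativity,
-- and an identity with an inverse for every element. Cubic in the set
-- size, fine for small examples.
isGroup :: Eq a => [a] -> (a -> a -> a) -> Bool
isGroup xs op =
       and [ op x y `elem` xs | x <- xs, y <- xs ]
    && and [ op (op x y) z == op x (op y z) | x <- xs, y <- xs, z <- xs ]
    && any goodIdentity xs
  where
    goodIdentity e =
         all (\x -> op e x == x && op x e == x) xs
      && all (\x -> any (\y -> op x y == e && op y x == e) xs) xs

main :: IO ()
main = do
  print (isGroup [0 .. 3] (\x y -> (x + y) `mod` 4))  -- Z/4 under addition
  print (isGroup [0 .. 3] (\x y -> (x - y) `mod` 4))  -- not associative
```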
info for basAlgPropos users
Information for the users of basAlgPropos, DoCon: Starting from today, I shall be out of the business for about a month. Sergey Mechveliani [EMAIL PROTECTED]
DoCon-2.01 announcement
ANNOUNCE. The computer algebra program DoCon-2.01 is available at http://www.botik.ru/pub/local/Mechveliani/docon/2.01/ It is a slight development of DoCon-2. It works under GHC-4.08. With respect to basAlgPropos, DoCon has more complex architecture and is designed mainly for the mathematicians. -- Sergey Mechveliani [EMAIL PROTECTED]
basAlgPropos. Reply.
Andrew U. Frank [EMAIL PROTECTED] writes on 24 Aug 2000 on basAlgPropos

  i am interested in an improvement of the numeric classes and hope the
  proposal by Mechveliani can contribute to this.

basAlgPropos contains such a suggestion.

  [..] i would suggest that the authors separate the proposal into two
  parts - the piece which identifies the parts which must be changed in the
  prelude and a library of routines, which demonstrate that on the proposed
  base functionality in the prelude an algebraic package can be built. [..]
  It seems to me that all what is needed to change in the Prelude are the
  classes defining numbers (Num, Rational, Fractional...); everything else
  can go in a support library? is this correct?

Yes, probably. The proposal contains (0) common-purpose items (data PropValue, sortE, ...) (1) algebraic classes (Set, ..., EuclideanRing, Field) (2) algebraic data structures (Pair, Vector, Matrix, Permutation, Fraction, UPol, Residue) (3) instances for (2) and related items, in particular, the algorithmic support for the solution of linear systems and such. I think the Proposal would suggest for (1) and some part of (0) to become part of the proper Prelude. And (2), (3) have to move to the `support' modules - to be imported on demand. Similarly as Num is in the Haskell-98 Prelude, and Ratio, `sort' are in the `support' modules Ratio, List (if I understand the request correctly). And all the old Prelude remains, except the items like Num, Integral, Fractional, which are replaced by (1). -- Sergey Mechveliani [EMAIL PROTECTED]
basAlgPropos, dependent types
Dear haskellers, I suggest two points for your attention. New basAlgPropos materials reside in

  http://www.botik.ru/pub/local/Mechveliani/basAlgPropos
    /haskellInCA.ps.zip
    /bal-pre-0.01/

Read the paper first. The implementation works, at least so the demos show. But it is about 30% documented; the sources need to be brought into a better shape. I do not know when this project will be completed. I was not disposed to show it, but one of the users asked for the implementation. So, OK, let it exist in an incomplete form.

Attempt with dependent types. This may present an alternative to the sample-argument approach used in basAlgPropos. I experimented with Cayenne for several days and remain in doubt. I understand little of Cayenne. Still, here are my newbie impressions. (Inst) Cayenne lacks half of the Haskell instance possibilities. (DTG) Sometimes dependent types do a good job for basAlgPropos. (DTB) In other cases, the record type expressions become too complex; maybe the language should be able to specialize some record types automatically. Could the Cayennists advise? -- Sergey Mechveliani [EMAIL PROTECTED]

--- (Inst) --
Example. In Haskell, if a type T matches an instance of Eq, then it is clear what x == y means for x :: T. And the match is determined according to the data constructors used to build x. The map x --> T --> instance match works automatically. In Cayenne, the programmer has to bring in from somewhere the record eq :: Eq T and set eq.(==) x y The map x --> eq is never automatic: the place "from somewhere" is never related to the surrounding context of the variable x. This leads to more complex and less safe programs. But could Cayenne accept a domain membership declaration? In addition to x :: T, to introduce the denotation x <- dom ( ::: ?), where dom is a collection (a record, a finite map ...) that may contain several instance records.
Example:

  let dom = struct {eq = eq_Int; enum = enum_Int; num = num_Int}
      x,y <- dom
  in  (x==y) && (x+y == x*y)

The meaning of this example is that x,y belong to the mathematical domain dom. dom is represented as a container of the instances (records) of Eq, Enum, Num. The instances can be extracted by their names. Now, as x+y is in the scope of the x <- dom declaration, it is clear from what universe to get the instance records for (==) --> Eq, (+) --> Num, and so on. And the compositions of the field selection `.' can be rebuilt automatically.

(DTG) - Sometimes dependent types do a good job for basAlgPropos. For example, I defined the residue domain Int/(m) modulo m as a record type with the field `base' containing the value parameter m :: Int. And the compiler rejects at compile time the expressions like num.(+) r1 r2, r1 :: R1 = Int/(m1), r2 :: R2 = Int/(m2) - unless the user program helps it to prove that the types (residue domains) R1 and R2 are the same, that is, m1 == m2. The easiest way to help such a proof is to write the same variable m for m1, m2, or maybe m1 = ... m2 = m1 - but this is not always the best. Anyway, it is good that the compiler needs here to be sure of the type equality.

(DTB) - A more advanced example leads to complications.

  type Attributes = Pair Bool Bool
  type OrdSet a = sig {(==)       :: a -> a -> Bool;
                       (<)        :: a -> a -> Bool;
                       attributes :: Attributes;
                       finList    :: List a
                      }

OrdSet exports equality and ordering on a set, finite listing, and a pair of attributes. The first attribute shows whether (<) is transitive. The second shows whether the set is finite. finList lists all the elements of a set, and it makes sense only for a finite set.

  maxOfSet :: (a :: #) |-> OrdSet a -> a

has to find the maximum of a given set. But the compiler has to reject the program if maxOfSet is applied to a set with wrong attributes.
For example, maxOfSet _ OrdSet_Int is to be rejected by the compiler, maxOfSet _ OrdSet_Bool --> True, maxOfSet _ (OrdSet_Pair OrdSet_Bool OrdSet_Bool) --> (True,True). The first is because the record OrdSet_Int :: OrdSet Int contains the attributes (True,False). The transitiveness of (<) and the infiniteness are postulated with the word `concrete' when the record is built. In OrdSet_Bool, (True,True) are postulated. For Pair Bool Bool, OrdSet_Pair computes
materials on dependent types
There was earlier some discussion on the Basic algebra proposal (basAlgPropos), in particular about the sample-argument approach (SA). This SA is somewhat opposed by the Haskell community. On the other hand, it may occur that dependent types and Cayenne provide some alternative approach. But I cannot reach its materials: http://www.cs.chalmers.se refuses the request. I wonder why? -- Sergey Mechveliani [EMAIL PROTECTED]
what Enum should denote
Marcin 'Qrczak' Kowalczyk [EMAIL PROTECTED] writes

  I would use a simpler interface:
    arithSequence :: Num a => a{-start-} -> a{-step-} -> [a]
    take 300 (arithSequence (-3.0) 0.02)
    arithSequence (-3.0) 0.02
    takeWhile (< 3.01) (arithSequence (-3.0) 0.02)

Yes, this is better. Generally, the idea was that domains like T = Float hardly ever need Enum, because fromTo and succ look counter-intuitive there. Probably, what the users wanted was arithmetic sequences inside T. And the latter are achieved independently of Enum.
---
But what is Enum for? Maybe it is worth considering the following.

1. Finitely-enumerated domain `a' - the doubtless Enum instance. There fit

  toEnum     :: Integer -> Maybe a
  fromEnum   :: a -> Integer
  succ, pred :: a -> Maybe a

For example:

  toEnum 0 :: Maybe Bool  =  Just True
  toEnum 1 :: Maybe Bool  =  Just False
  toEnum 2 :: Maybe Bool  =  Nothing
  fromEnum False  =  1
  succ True       =  Just False
  succ False      =  Nothing

And we put that the numeration domain is [0..h] :: [Integer]; toEnum, fromEnum should be reciprocally inverse bijections a <-> [0..h]. toEnum n returns Nothing for n outside [0..h]. This reflects that toEnum is a partial map. These maps induce an ordering independent of Ord; Ord is not a superclass of Enum; succ, pred are defined according to fromEnum.

2. Countably-enumerated domain `a'

  toEnum, toEnum'     :: Integer -> a
  fromEnum, fromEnum' :: a -> Integer
  succ, pred, pred'   :: a -> a

toEnum, fromEnum should be reciprocally inverse bijections a <-> [0..]; toEnum n breaks with error for n < 0; pred (toEnum 0) breaks with error. toEnum', fromEnum' should be reciprocally inverse bijections a <-> Integer. I doubt how to eliminate these '-s. For Integer looks better when mapped to Integer rather than to [0..], and NonNegativeInteger looks better with [0..] ... What domains should be standardly countably-enumerated? For example, any  data C a = C a deriving (Enum)  has to induce standardly Enum from a to C a.
Should Float be standardly enumerated? I suggest it should not. Because Float is ideologically close to RealNumber, and there does not exist a bijection Integer <-> RealNumber: there are too few integers to enumerate the reals. Another reason is as follows, for Rational. Should Rational be standardly enumerated? I suggest it should not. Rational can be enumerated bijectively by [0..] :: [Integer]. But any such enumeration would be too "deliberate". There is only one definition of `compare' on Rational that agrees with the arithmetic. Therefore Rational is very naturally a *standard* instance of Ord. With enumeration, the situation is different. It brings another ordering to Rational, one which does not agree with the arithmetic; succ, pred are considered in the sense of this Enum ordering. There are too many such enumerations. For each particular task exploiting some enumeration, the user has to choose their own enumeration of Rational to make the solution efficient. Usually, changing the enumeration changes the cost of the solution many times over. Therefore it is better to define a user instance rather than a standard one. Similarly, (a,b) and [a] should not have standard Enum instances. It remains, maybe, to unify the operation types for the finitely-enumerated and countably-enumerated domains. -- Sergey Mechveliani [EMAIL PROTECTED]
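[Point 1 above can be sketched as an ordinary Haskell class; the class and method names here are mine (suffixed to avoid clashing with the Prelude), not part of any proposal text, and the Bool instance uses the numbering 0 -> True, 1 -> False from the example.]

```haskell
-- A sketch of the interface proposed for finitely-enumerated domains.
class FinEnum a where
  toEnumF      :: Integer -> Maybe a   -- partial: Nothing outside [0..h]
  fromEnumF    :: a -> Integer
  succF, predF :: a -> Maybe a

-- Bool with the numeration 0 <-> True, 1 <-> False, as in the example:
instance FinEnum Bool where
  toEnumF 0 = Just True
  toEnumF 1 = Just False
  toEnumF _ = Nothing
  fromEnumF True  = 0
  fromEnumF False = 1
  succF True  = Just False
  succF False = Nothing
  predF False = Just True
  predF True  = Nothing

main :: IO ()
main = do
  print (toEnumF 2 :: Maybe Bool)   -- out of the numeration domain
  print (succF True)
  print (fromEnumF False)
```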
Enum Float ?
About expressions like [1.0 .. 9.0], people write

Lennart Augustsson:
  By definition, if you follow the standard you can't be wrong. :) But the
  standard can be wrong. Perhaps this is a typo in the report?

George Russell [EMAIL PROTECTED]:
  [..] The standard is kaput. It gets even worse if you try to make sense
  of the definitions of succ and pred as applied to floating-point numbers.
  My suggestion: get rid of Enum on floating-point numbers. Maybe it'll
  make floating point loops a little lengthier to code, but at least it
  will be clear what exactly is being coded.

I agree with George about Enum Float. Enum is kind of counter-intuitive for Float and such. Think of what succ ("successor") is for a Real or Rational number: just "the next real number", "the next rational number". Kind of nonsense. We know that Float is not exactly a Real number, that Ord is not a superclass of Enum. Still, Enum.succ does not look good for the Float-like domains. We can imagine "the next real number", but only for some non-standard ordering that does not correlate well with the other operations. Therefore such an ordering hardly fits into the language standard. If some user needs it, one may introduce a user class. Expressions like

  let x = ... :: (context a) => a
      d = ... :: (context a) => a    -- the "step"
  in  [x + (fromInteger n)*d | n <- [l .. (h :: Integer)]]

can be coded as

  arithmSequence :: Num a => a -> a -> Integer -> Maybe Integer -> [a]
                                                       --Ring --RingLike
For example,

  arithmSequence (-3.0) 0.02 0 (Just 300)  =  [-3.0, -2.98, ..., 3.0]
  arithmSequence (-3.0) 0.02 0 Nothing     =  [-3.0, -2.98, ...]

-- Sergey Mechveliani [EMAIL PROTECTED]
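[The proposed arithmSequence can be sketched directly. The signature and behavior are as I read the examples above; the result type [a] and the exact indexing are my assumptions.]

```haskell
-- x + n*d for n = l, l+1, ..., h; Nothing as the upper index gives an
-- infinite sequence.
arithmSequence :: Num a => a -> a -> Integer -> Maybe Integer -> [a]
arithmSequence x d l mbH = map term ns
  where
    term n = x + fromInteger n * d
    ns     = case mbH of Just h  -> [l .. h]
                         Nothing -> [l ..]

main :: IO ()
main = do
  print (arithmSequence 0 3 0 (Just 4) :: [Integer])
  print (take 4 (arithmSequence (-3.0) 0.02 0 Nothing) :: [Double])
```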
two questions on 4-08-notes
Dear GHC, Sorry for the ignorance - just two questions on 4-08-notes.sgml.

  Result type signatures now work. [..]
  Constant folding is now done by Rules

What do these two mean, or where are they explained? Maybe you can give an example? -- Sergey Mechveliani [EMAIL PROTECTED]
-fwarn-unused-imports message
Hello, ghc-pre-4.07 -fwarn-unused-imports says to my ` import M () '

  .../M.hs:42: Warning: Module `M' is imported, but nothing from it is used

Should it add "except, maybe, instances"? Because removing this import may cause an error report about the lack of some instances. -- Sergey Mechveliani [EMAIL PROTECTED]
estimation debugging with profiler
People, I want to comment on my last letter on the need for profiling. The matter is that it helps to find errors in theoretical estimations - to find the estimation "bugs". Analyzing a complex program, the designer may assign cost estimations to its parts. For example,

  cost(A) < a1*n^3*m^7 + a2*n^4
  cost(B) < b*2^k
  ...

First, one usually knows a1, a2, b very approximately, and is also not so sure whether it is, say, n^3 or maybe n^3.5. Finally, the program being complex, some important parts may be forgotten. In my example, I had forgotten that a certain A20 may cost somewhat more than A. Similarly as examples usually show a bug in a program, the example timings are likely to show a bug in the estimations. In a lazy language, it occurs that with the profiler the programmer finds the estimation bug many times more easily than without it. And what is important, these estimations do not depend on the implementation, though the profiler may behave somewhat differently. -- Sergey Mechveliani [EMAIL PROTECTED]
profiling is important
Dear Haskell implementors, Please always try to provide working profiling, for it appears indeed very helpful. I had made many experiments with a certain large program, and spent many days on this. And it was very hard to find out the practically most expensive parts of the program. Finally, I gave up and waited for some admissible profiler to come. Now, as it was applied, one day of experiments revealed a couple of main expense centres in the program that were not expected earlier. I realized then that these parts may indeed be the most expensive. Then I tested this independently, and it confirmed that the profiler was right. -- Sergey Mechveliani [EMAIL PROTECTED]
profiling example
Dear GHC (pre-4.07-i386-unknown-linux), I am going to use profiling to measure the parts of a certain large program. To understand things, I start with a simple example:

  main = let lgft :: Integer -> Int
             lgft n = _scc_ "length n!" (length $ show $ product [2..n])
             ls = map lgft [2..500] :: [Int]
             s  = _scc_ "sum" (sum [1..22000] :: Integer)
         in  putStr $ shows (s,ls) "\n"
-
  ghc -c -O -prof T.hs;  ghc -o run -prof T.o;  ./run +RTS -pT

gives run.prof:
---
  total time  =        3.60 secs   (180 ticks @ 20 ms)
  total alloc = 139,166,388 bytes  (excludes profiling overheads)

  COST CENTRE  MODULE   %time  %alloc
  length n!    Main      99.4    99.3
  GC           GC        24.4     0.0

                                            individual      inherited
  COST CENTRE  MODULE      entries   %time %alloc    %time %alloc
  MAIN         MAIN              0     0.0    0.0    100.0  100.0
   CAF         PrelHandle        3     0.0    0.0      0.0    0.0
   CAF         Main              3     0.0    0.0    100.0  100.0
    length n!  Main            499    99.4   99.3     99.4   99.3
    sum        Main              1     0.6    0.7      0.6    0.7
-
What does CAF mean? The last two lines of the second table show that 0.994 of the time and 0.993 of the allocations were spent by the centre "length n!". This looks natural. But what does the first table mean? Why is "sum" skipped there? Does it show that the garbage collection has taken 0.244 of the total time? GC can be caused by many values. To find out which part of GC is caused by a given item f, we have to look at the %alloc part for f in the second table. Could you please tell me whether I understand these things correctly? Thank you. -- Sergey Mechveliani [EMAIL PROTECTED]
pre-4.07 test
I tested ghc-pre-4.07 on my large algebraic program. It works. -- Sergey Mechveliani [EMAIL PROTECTED]
pre-4.07 inherits bug from 4.06
ghc-pre-4.07-2613-i386-unknown-linux.tar.gz inherits this bug from ghc-4.06:

  main = let xs = [((1,[0,0,0,0,2,0,0,0,0,2]),(-5,[0,0,0,0,1,0,3,3,0,1])),
                   ((1,[0,1,0,0,0,1,0,0,0,1]),(-5,[0,1,0,0,0,1,0,0,0,1])),
                   ((1,[0,1,0,0,1,0,0,0,0,2]),(-3,[0,1,0,0,1,0,0,-2,4,0])),
                   ((1,[0,1,0,0,1,0,1,0,0,1]),(1,[0,1,0,0,1,-2,3,0,2,-1])),
                   ((1,[0,1,0,0,1,0,2,0,1,0]),(3,[0,1,0,1,0,5,1,3,-3,2])),
                   ((1,[0,2,0,0,0,0,0,0,0,2]),(-5,[0,1,0,3,0,0,0,3,0,1])),
                   ((1,[0,2,0,0,0,0,1,0,0,1]),(-3,[0,0,4,-2,0,0,1,0,0,1])),
                   ((1,[0,2,0,0,0,0,2,0,0,0]),(-5,[0,1,0,3,3,0,1,0,0,0])),
                   ((1,[1,0,0,0,0,1,0,0,0,1]),(1,[1,0,0,0,1,0,0,-1,1,1])),
                   ((1,[1,0,0,0,1,0,0,0,0,1]),(1,[1,0,0,0,1,-1,1,0,1,0])),
                   ((1,[1,0,0,0,1,0,1,0,1,0]),(-5,[1,0,0,0,0,2,0,1,-1,1])),
                   ((1,[1,0,0,1,1,0,0,0,1,0]),(-5,[1,0,0,1,0,2,-1,1,-1,1])),
                   ((1,[1,0,1,0,0,0,0,0,0,1]),(1,[1,1,0,0,0,0,0,-1,1,1])),
                   ((1,[1,0,1,0,0,0,1,0,0,0]),(1,[1,1,0,0,-1,1,1,0,0,0])),
                   ((1,[1,1,0,0,0,0,0,0,0,1]),(1,[1,1,-1,1,0,0,0,0,1,0]))
                  ] :: [((Int,[Int]), (Int,[Int]))]
         in  putStr $ shows xs "\n"

It hangs after  ghc -c Main.hs; ghc -o run Main.o; ./run  -- Sergey Mechveliani [EMAIL PROTECTED]
fixing a typo in last letter
Sorry, I have to fix a typo in my last letter. There I wrote:

  There appeared a nice manner for the public discussion introduced
  recently by M.Kowalsky and encouraged by L.Augustsson [..]

I intended to write Kowalczyk instead of Kowalsky.

-- Sergey Mechveliani [EMAIL PROTECTED]
mode argument
To my suggestion on the mode argument for some standard functions

  quotRem ...
  divRem  ...

And we can hardly invent a mode type better than Char, because any
specially introduced mode types bring long names:

  quotRem 'n' x (-3)

looks better than the pair  quotRem, divMod,  and

  quotRem QuotRemSuchAndSuch x (-3)

looks worse than  quotRem, divMod.

D.Tweed [EMAIL PROTECTED] writes on 31 May 2000

T [..]
T (2) Having an enumerated type is better because it will cut down on
T `accidental namespace collisions' if I cut and paste something which
T expects mode character 'n' to mean one thing into a function call where
T it means something completely different.

I knew of the namespace collision effect. But one has to choose between
the bad and the worse. And in any case, there remain too many ways to
err. We may also paste

  True :: Bool   instead of   False      (the type design does not help),
  x / 0          instead of   x / 2,
  take n xs      instead of   take (-n) xs.

We may also forget to write a program and try to use it.

Generally, concerning the mode argument: it looks like the Haskellites
avoid it. I am just trying to understand what is going on. Why do people
not complain about the mode argument outside of Haskell:

  `zip -r ...'   `unzip -t xxx.zip'   `tar xvfz xxx.tar.gz'  ?

One may paste -x instead of -t, with unpleasant effect. Also, in any
application, each respectable function has about a dozen sensible mode
values. And it is definitely not good to have several functions in the
interface instead of several mode values for one function.

T To be fair, there are two minor cons that haven't been mentioned:
T [..]
T (2) These enumerated types presumably have to be added to explicit
T import lists, even if you're only using the most common type. Much
T existing script breakage!

It was said that Haskell-2 does not care for the old code. And I mean
only Haskell-2.
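To make the Char-mode style being argued for concrete, here is a minimal sketch. The function name `quotRemMode` and the meaning of the mode characters are invented for illustration; they are not part of any Prelude or proposal text.

```haskell
-- 't': truncating division (the Prelude's quotRem);
-- 'f': flooring division  (the Prelude's divMod).
quotRemMode :: Integral a => Char -> a -> a -> (a, a)
quotRemMode 't' x y = quotRem x y
quotRemMode 'f' x y = divMod  x y
quotRemMode m   _ _ = error ("quotRemMode: unknown mode " ++ [m])
```

A wrong mode character is only caught at run time, which is exactly the objection the enumerated-type side raises.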
Patrik Jansson [EMAIL PROTECTED] writes on 31 May 2000

J On the contrary - you only need one character N instead of three 'n' if
J you use a datatype with two constructors N and (whatever you like for
J the other case).
J But the length of the argument is not that interesting here - you can
J have long names for the constructors of the "mode" datatype and still
J use short local abbreviations.

'n' :: Char does not hold a name in the constructor name space. And `N'
would hardly do: there may be many other constructors wanting short
names. Suppose there are about 10 functions to provide with the mode.
It will look like this:

  quotRem :: ... => Mode_quotRem -> a -> a -> (a,a)
  sortBy  :: Mode_sort -> Comparison a -> [a] -> [a]
  ...
  data Mode_quotRem = QuotRem_minRem | QuotRem_positive | QuotRem_other
                                deriving (Eq,Ord,Enum,Show,Read)
                                                        -- contrived
  data Mode_sort = Sort_quick | Sort_merge | Sort_insert | Sort_other
                                deriving (Eq,Ord,Enum,Show,Read)
  ...

Again, `Positive' would not do; it should be something like
QuotRem_Positive, and so on. For example, my program contains many
expressions like

  f x y b = myRem 'c' ((myRem '_' x b)*(myRem '_' y b)) b

Now it has to be, maybe,

  remP = rem QuotRem_positive
  remO = rem QuotRem_other
  f x y b = remP ((remO x b)*(remO y b)) b

Maybe Char is better?

-- Sergey Mechveliani [EMAIL PROTECTED]
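For contrast with the Char sketch, here is the datatype-mode style with the long constructor names the post complains about. The `Mode_quotRem` type follows the post's own declaration; the implementation, including the reading of `QuotRem_minRem` as "remainder of minimal absolute value", is my assumption.

```haskell
data Mode_quotRem = QuotRem_minRem | QuotRem_positive | QuotRem_other
                    deriving (Eq, Ord, Enum, Show, Read)

-- QuotRem_other:    the Prelude's truncating quotRem.
-- QuotRem_positive: remainder always in [0, |divisor|).
-- QuotRem_minRem:   remainder of minimal absolute value (assumed meaning).
-- In every mode the invariant  x == q*y + r  holds.
quotRemBy :: Integral a => Mode_quotRem -> a -> a -> (a, a)
quotRemBy QuotRem_other    x y = quotRem x y
quotRemBy QuotRem_positive x y =
  let (q, r) = divMod x y              -- r has the sign of y
  in  if r < 0 then (q + 1, r - y) else (q, r)
quotRemBy QuotRem_minRem   x y =
  let (q, r) = quotRemBy QuotRem_positive x y
  in  if 2 * r > abs y then (q + signum y, r - abs y) else (q, r)
```

The mode is checked at compile time, at the price of the `QuotRem_` prefixes, which is precisely the trade-off under discussion.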
mode in functions
Fergus Henderson [EMAIL PROTECTED] writes on 1 Jun 2000

F > 'n' :: Char does not hold a name in the constructor name space.
F
F Yes, but it is also far from self-explanatory. With a constructor name,
F in a suitable environment you could just click on the constructor name
F and the environment would immediately pop up the declaration and
F documentation for that constructor and the type that it belongs to. [..]
F
F > [..]
F > data Mode_sort = Sort_quick | Sort_merge | Sort_insert | Sort_other
F >                               deriving (Eq,Ord,Enum,Show,Read)
F > ...
F > Again, `Positive' would not do, it should be something like
F > QuotRem_Positive, and so on.
F
F This is a problem with Haskell, IMHO. Mercury allows overloading of
F constructor names, so in Mercury you could use just `Positive' rather
F than `QuotRem_Positive'. The type checker will resolve the ambiguity
F whenever possible.

This is my wish for Haskell-2: overloaded constructor names. In many
cases, these name overlaps can be resolved.

F > f x y b = myRem 'c' ((myRem '_' x b)*(myRem '_' y b)) b
F >
F > Now, it has to be, maybe,
F >   remP = rem QuotRem_positive
F >   remO = rem QuotRem_other
F >   f x y b = remP ((remO x b)*(remO y b)) b
F >
F > Maybe, Char is better?
F
F No, IMHO Char would definitely not be better. In this case, I think
F separate functions would be best, a single function with a properly
F typed mode argument second best, and a single function with a `Char'
F mode argument worst.

About the type constructor for the mode, I half-agree. But about
separate functions - no. If you require the separate functions
sort_merge, sort_insert, sort_quick, do you also require tar_x, tar_xv,
tar_v instead of  tar mode ?  As to quotRem - divRem, it has three extra
friends to export.

-- Sergey Mechveliani [EMAIL PROTECTED]
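The "separate functions vs. one function with a typed mode" alternative can be sketched side by side. All names here (`SortMode`, `sortWith`, `qsort`, `isort`) are invented for illustration, and the algorithms are deliberately naive.

```haskell
-- Style 1: one exported function per algorithm.
qsort :: Ord a => [a] -> [a]
qsort []     = []
qsort (p:xs) = qsort [x | x <- xs, x < p] ++ [p] ++ qsort [x | x <- xs, x >= p]

isort :: Ord a => [a] -> [a]
isort = foldr ins []
  where
    ins x []                 = [x]
    ins x (y:ys) | x <= y    = x : y : ys
                 | otherwise = y : ins x ys

-- Style 2: a single function with a typed mode argument.
data SortMode = Quick | Insertion deriving (Eq, Show)

sortWith :: Ord a => SortMode -> [a] -> [a]
sortWith Quick     = qsort
sortWith Insertion = isort
```

Style 2 exports one name plus a mode type; style 1 exports one name per algorithm, which is the interface-size argument made above.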
mode for functions
D. Tweed [EMAIL PROTECTED] writes on 1 Jun 2000

T > We also may paste
T >   True :: Bool  instead of  False   (the type design does not help),
T >   x / 0         instead of  x / 2,
T >   take n xs     instead of  take (-n) xs.
T > We also may forget to write a program and try to use it.
T
T Do you really believe the general principle that `if something is
T inherently error prone then something is only worth doing if it solves
T all the errors completely, not if it only makes things slightly less
T error prone?' That would suggest that strong typing, haskell style
T modules and numerous other innovations are mistakes made in the
T language design process.

I meant: if 3/4 of the errors still remain after the perfect type
design, then maybe the place of this type design is not so important.

T > data Mode_quotRem = QuotRem_minRem | QuotRem_positive | QuotRem_other
T >                               deriving (Eq,Ord,Enum,Show,Read)
T >                                                       -- contrived
T > data Mode_sort = Sort_quick | Sort_merge | Sort_insert | Sort_other
T >                               deriving (Eq,Ord,Enum,Show,Read)
T > ...
T > Again, `Positive' would not do, it should be something like
T > QuotRem_Positive, and so on.
T
T The only collision here if you remove the QuotRem_ Sort_ prefixes comes
T from the _other, which seems like excess generality to me: do you
T really want an option to use a QuotRem function in a mode about which
T you know nothing?

By _other I meant _Default. For example, Sort_other means "sort as you
(the library) can".

T If Fergus' suggestion about allowing overloaded data constructors for
T H-2 is allowed then I've no problem with multiple instances of Positive.

With the overloaded data constructors, the special data types for the
Mode may make sense.

T [..] The problem with chars is if for example `p' means positive
T (i.e. >= 0) in one function and strictly-positive (i.e. > 0) in
T another [..]

The manual on a Mode is part of the manual on a function. See a
function, and it shows what the mode values mean. Like everywhere in the
world. The mode value names themselves are somewhat senseless; they have
sense only inside the manual on the function.
-- Sergey Mechveliani [EMAIL PROTECTED]
mode in functions
Ketil Malde [EMAIL PROTECTED] writes

K I could accept "mode flags" if the algorithm is extremely similar,
K e.g. passing a comparator function to a sort is a kind of mode flag
K (think ordered/reversed) which I think is perfectly acceptable.
K Having flags indicating the algorithm to use  (sort Merge (s:ss))  is
K IMHO silly.

Not at all. Silly it is to call differently the functions that compute
the same map. Also silly is to have  quotRem, divMod  instead of
quotRem mode.

---
An observation on the literature. There appeared a nice manner for the
public discussion, introduced recently by M.Kowalsky and encouraged by
L.Augustsson: using elegantly the words "silly", "ugly", "crazy",
"blathering", and so on. The Haskell place is in bloom, enjoying a
concentration of culture.

-- Sergey Mechveliani [EMAIL PROTECTED]
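Ketil's "comparator as mode flag" point can be illustrated with `sortBy` (imported here from today's Data.List; at the time the module was plain List). The names `ascending` and `descending` are my own shorthands, not library functions.

```haskell
import Data.List (sortBy)

-- The comparison function itself plays the role of the mode:
-- ordinary vs. reversed order, with no extra flag type at all.
ascending, descending :: Ord a => a -> a -> Ordering
ascending  = compare
descending = flip compare
```

So `sortBy descending xs` sorts in reverse without any `Char` or enumerated mode; the "mode" is just a first-class function.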
RE: re-preluding
Thanks a lot. As to `SYNTAX', all right, let it be two months later.
It makes sense when it is in the *standard*. If it is only a GHC
feature, it is 50% good.
mode for standard functions
More examples of standard functions needing the *mode*: among

  quotRem, quot, rem, divMod, div, mod,

the latter triplet is unnecessary if the first has the mode argument.
And we can hardly invent a mode type better than Char, because any
specially introduced mode types bring long names:

  quotRem 'n' x (-3)

looks better than the pair  quotRem, divMod,  and

  quotRem QuotRemSuchAndSuch x (-3)

looks worse than  quotRem, divMod.

Only do not say the mode will slow anything down. This is mostly a
trivial matter of the compiler in-lining

  rem 'p'  -->  some_internal_function,
  rem 'n'  -->  other_internal_function.

If the compiler does not inline it, then the additional step does not
matter.

-- Sergey Mechveliani [EMAIL PROTECTED]
re-preluding
Hello, all,

It is again on re-preluding. MyPrelude aims to hide

  Ord(..), Bounded(..), Num(..), Integral(..), Fractional(..),
  subtract, fromIntegral, even, odd, gcd, lcm, (^), (^^)

of Prelude and to re-define

  Ord, (+), (*), (^), even, odd, gcd, lcm.

The attempt of

  module MyPrelude (...) where
  import qualified Prelude
  ...

leads to MyPrelude having to write  f (x Prelude.: xs) = ...  instead of
f (x:xs) = ...

So, I set

  import qualified Prelude
  import Prelude hiding (Ord(..), Bounded(..), Num(..), Integral(..),
                         Fractional(..), subtract, fromIntegral,
                         even, odd, gcd, lcm, (^), (^^)
                        )
  ...,

and the user modules have to copy this "import" part. It looks like it
helps. But there remains the problem of numerals:
---
The compiler replaces  (23, -23)  with

  (Num.fromInteger 23, Num.negate 23),

while what is needed is

  (A.fromInteger 23, A.negate 23).

Could the language make  `23'  -->  `fromInteger 23',  so that
fromInteger is the operation of any *visible* class? Standardly, Num is
visible. In my case, MyPrelude.A is visible instead.
?
If the Simon P. Jones's suggestion

  import Prelude ()
  import {-# SYNTAX #-} MyPrelude

will work, then OK. Probably, someone has to decide what to put into the
language and to implement this ...

-- Sergey Mechveliani [EMAIL PROTECTED]
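The hiding pattern described above can be shown in a self-contained sketch that shadows a single Prelude name while keeping the qualified original available. The redefinition here (a `gcd` that normalises signs and is explicitly total on (0,0)) is purely illustrative, not MyPrelude's actual code.

```haskell
-- Shadow Prelude's gcd, but keep the rest of Prelude unqualified.
import Prelude hiding (gcd)
import qualified Prelude

-- Illustrative redefinition: delegate to Prelude.gcd on normalised
-- arguments, with an explicit clause for gcd 0 0 (which Haskell 98's
-- Prelude.gcd left undefined).
gcd :: Integer -> Integer -> Integer
gcd 0 0 = 0
gcd x y = Prelude.gcd (abs x) (abs y)
```

User modules that want the redefined name must repeat the same pair of import lines, which is exactly the inconvenience the post complains about.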
`span' vs `partition'. Reply
Michael Marte [EMAIL PROTECTED] writes on 30 May 2000

M Standard module List defines:
M [..]
M   groupBy :: (a -> a -> Bool) -> [a] -> [[a]]
M   groupBy eq []     = []
M   groupBy eq (x:xs) = (x:ys) : groupBy eq zs
M                                where (ys,zs) = span (eq x) xs
M
M In my programs, I often use the following function to compute
M equivalence classes:
M   eqClasses :: Eq a => [a] -> [[a]]
M   eqClasses = eqClassesBy (==)
M [..]
M There is a single difference: the use of span vs. partition, i.e.
M grouping and computing equivalence classes are very similar. Indeed,
M if a set is represented by a sorted list, grouping can be used to
M compute its equivalence classes efficiently.

I think some standard functions need an additional argument for the
Mode. This is for the economy of the function names: fewer interface
names. This is a common practice of options in programming. For example,

  partition :: Char -> (a -> Bool) -> [a] -> ([a],[a])
  groupBy   :: Char -> (a -> a -> Bool) -> [a] -> [[a]]
  sort(By)  :: Char -> ...
  find      :: Integer -> ...

`partition' could behave as `span' under a certain mode; so `span' is
removed. groupBy could split into the equivalence classes under the
appropriate mode. sort(By) could apply different algorithms:
quickSort, mergeSort, ...  depending on the mode  'q', 'm', ...

In many situations, the Mode helps the efficiency: it points out whether
the list is ordered, or whether it contains repetitions (by
equivalence). A joke: use  Mode :: String  to obtain a unique program
for each type.

Generally, the functions to unify are those that have common philosophy
and tradition, not only a common type.

-- Sergey Mechveliani [EMAIL PROTECTED]
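The span/partition relationship Marte describes can be made concrete: replacing `span` by `partition` in the body of `groupBy` yields an equivalence-class function. This `eqClassesBy` is reconstructed from his description, not copied from his code.

```haskell
import Data.List (partition)

-- Like groupBy, but with partition instead of span: each class collects
-- ALL remaining elements equivalent to the representative x, not just
-- the adjacent run.  On a sorted list the two functions agree.
eqClassesBy :: (a -> a -> Bool) -> [a] -> [[a]]
eqClassesBy _  []     = []
eqClassesBy eq (x:xs) = (x:ys) : eqClassesBy eq zs
  where (ys, zs) = partition (eq x) xs
```

For example, `eqClassesBy (==) [1,2,1,3,2]` collects the two 1s together even though they are not adjacent, which plain `groupBy` would not.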
gcd :: [a] -> a
On my proposal for

  minBy, gcd ... :: [a] -> a

and the remark on  +, ...  being the exceptions, Matt Harden
[EMAIL PROTECTED] writes on 19 May 2000

H > (+), (*) ... are different. Because they have the classical tradition
H > to be applied as binary infix operations. And gcd, min, max, lcm
H > have not this "infix" tradition.
H
H Yes, but the "infix tradition" is not the only reason we have these
H operations. We have them because they are useful. They are the
H simplest versions of the operations. If I want to sum up all the
H elements of a binary tree, for example, I would use (+), not sum,
H because I always want to add two numbers and I don't want the overhead
H of intermediate lists. The same is true if I want the minimum element
H of the tree.

When processing this tree, it would be natural to write in each node
m + b  and  min [m,b].  The former is "necessary" due to the
infix-binary tradition. The latter uses [,] because it is good to have
one function min for a list and for the two elements.

H I admit that, if the haskell compiler/interpreter inlines the list
H versions, and implements list deforestation, then the list versions
H can be just as efficient as the curried versions, but we can't always
H be assured of this.

If it does not inline, then it estimates the expense as small. You can
set an experiment, for example, with Hugs. The economy of names is more
important.

-- Sergey Mechveliani [EMAIL PROTECTED]
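Harden's binary-tree example can be made concrete. The `Tree` type and `foldTree` below are invented for illustration: folding the binary operation directly over the tree, with no intermediate list.

```haskell
-- A binary tree with values at the leaves (illustrative type).
data Tree a = Leaf a | Node (Tree a) (Tree a)

-- Fold a binary operation over the tree:  foldTree (+)  sums it,
-- foldTree min  finds its minimum, neither building an intermediate
-- list the way  sum (toList t)  or  minimum (toList t)  would.
foldTree :: (a -> a -> a) -> Tree a -> a
foldTree _ (Leaf x)   = x
foldTree f (Node l r) = f (foldTree f l) (foldTree f r)
```

This is the sense in which the curried binary operations are "the simplest versions": they combine with any container's fold directly.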