<[EMAIL PROTECTED]> wrote:
> We may be able to address the non-null lexical environment if it is
> deemed important. But the remaining issues if any appear inaccessible
> as the author states he is not publishing the test code. Why not?
I'm not making the code public, since ITA is using these puzzles as a
sort of job application filter. It seems a bit impolite to start
publishing solutions to them. I'll be happy to email a copy to you if
you want (Paul already has one).

Apparently I should've added bigger disclaimers to the blog posting.
This really was NOT intended as a grand Common Lisp benchmark. It was
just intended to show that writing an efficient Lisp program that's
portable doesn't necessarily result in a Lisp program that's portably
efficient. Often the reasons for it being slow are non-obvious, like
the lexenv issue that slowed GCL by a factor of 15 on this code.

Also, Paul's message might've been a bit misleading. I'm not really
complaining about performance (I'm already happy with the performance
on one Lisp), I'm not really a GCL user (I just have recent versions
of most Lisps installed for other reasons), and this really is in no
way an important program (just a way to spend an evening doing
something more fun than writing the thesis). So I'm probably about the
worst possible "customer" :-)

With that out of the way, and for what little it's worth, these are
the performance-related issues I've run into during my so far brief
acquaintance with GCL:

* The non-null lexenv problem. (LET ((...)) (DEFUN ...)) might be
  slightly eccentric CL style, but that's how I like to write
  variables that aren't really global. In my experience toplevel
  MACROLETs that expand to DEFUNs are much less eccentric, and the
  same performance issue applies to them too. As far as I can see, no
  other CL implementation suffers from this. (There's a minimal toy
  sketch of both patterns at the end of this mail.)

* You already noticed my other post about SXHASH on fixnums, so I
  won't go into much detail there. IMHO some amount of mixing should
  be done for all hashing. If you use a hash that's too good, you lose
  a tiny fraction of performance over the whole system. If you use a
  bad hash, some other bozo like me is going to come along and have
  his linear algorithm transformed into a quadratic one.

* Speaking of hashes, GCL probably doesn't optimally compile the hash
  function in the blog post. It could probably be fixed by sprinkling
  some magic (THE FIXNUM ...) pixie dust into the code. Unfortunately
  I think that would cause undefined behaviour, and thus be against
  the self-imposed restriction of portability. SBCL compiles the
  function efficiently (it could be rewritten to be more efficient and
  remain portable, but then it would be even more unlikely to be
  efficient on other Lisps). (There's a made-up illustration of the
  declaration issue at the very end of this mail.)

* I said "probably doesn't optimally compile". As far as I can tell,
  GCL gives no information about when it's forced to use generic
  operations instead of something efficient. NIL. SBCL and CMUCL are
  very chatty about this. Some people are annoyed by the notes; I
  think they're great. Either way it means that when porting code to
  SBCL it's easy to see where to add those crucial few declarations to
  speed the code up, while with GCL I find it nearly impossible.
  DISASSEMBLE seems to always return NIL, and thus can't be used for
  checking whether all the generic ops got optimized out.

> To this end, perhaps sbcl is a better reference system than cmucl --
> I've never really understood why there are two different projects
> here.

Why do both ECL and GCL exist? Likewise SBCL and CMUCL have different
goals and codebases that are beyond reconciliation.

-- 
Juho Snellman
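P.S. For concreteness, here's a minimal sketch of the lexenv issue.
This is hypothetical toy code, not the actual puzzle solution, but it
shows the two shapes I mean: a DEFUN wrapped in a toplevel LET, and a
toplevel MACROLET that expands into a DEFUN.

  ;; The DEFUN is compiled in a non-null lexical environment because
  ;; of the surrounding LET; this is the case that gets slow on GCL.
  (let ((cache (make-hash-table)))
    (defun cached-square (n)
      (or (gethash n cache)
          (setf (gethash n cache) (* n n)))))

  ;; A toplevel MACROLET expanding to a DEFUN hits the same problem,
  ;; even though nothing is actually closed over.
  (macrolet ((defadder (name delta)
               `(defun ,name (x) (+ x ,delta))))
    (defadder add-three 3))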

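P.P.S. To illustrate the (THE FIXNUM ...) pixie dust: this is not the
hash function from the blog post, just a made-up sketch of the kind of
change involved, and of why it's not portable in principle (if an
intermediate result ever overflows a fixnum, the THE declaration makes
the behaviour undefined).

  ;; Portable version: the compiler has to assume the arithmetic can
  ;; overflow into bignums, so GCL falls back to generic operations.
  (defun hash-pair (x y)
    (declare (fixnum x y))
    (mod (+ (* 31 x) y) 65521))

  ;; Declared version: THE FIXNUM promises the intermediate results
  ;; fit in a fixnum, so fixnum arithmetic can be open-coded. Faster
  ;; where the promise holds, undefined behaviour where it doesn't.
  (defun hash-pair/unsafe (x y)
    (declare (fixnum x y))
    (mod (the fixnum (+ (the fixnum (* 31 x)) y)) 65521))

On SBCL the compiler notes (or DISASSEMBLE) make it easy to check
whether the generic ops in the first version got optimized away; on
GCL DISASSEMBLE just returns NIL for me, so I can't tell.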