Re[4]: [Haskell-cafe] Understanding allocation behavior
Hello David,

Saturday, April 8, 2006, 9:58:56 PM, you wrote:

> bitsTable :: Array Word Int
> bitsTable = array (0,255) $ [(i, bitcount i) | i <- [0..255]]

bitsTable :: UArray Word Int
bitsTable = listArray (0,255) $ map bitcount [0..255]

UArray is much faster than Array but can be used only for simple
datatypes (integral, floating point, Char, Bool). More info at
http://haskell.org/haskellwiki/Arrays

btw: you can use UArray a Bool to represent sets of ANY type `a`
belonging to class Ix. It uses only one bit per element in the current
implementation.

-- 
Best regards,
 Bulat                            mailto:[EMAIL PROTECTED]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
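The UArray a Bool set idea mentioned above can be sketched as follows. This is an illustrative example, not code from the thread; the `vowels` set and the 'a'..'z' index range are assumptions chosen for the demo.

```haskell
import Data.Array.Unboxed

-- A set of Chars represented as one flag per Ix index, following the
-- "UArray a Bool as a set" remark above. Any Ix type works as the index.
vowels :: UArray Char Bool
vowels = listArray ('a','z') [c `elem` "aeiou" | c <- ['a'..'z']]

-- Membership test is a plain array lookup.
isVowel :: Char -> Bool
isVowel c = vowels ! c
```

Lookup is O(1), and since the array is unboxed the whole set costs only a few machine words, which is the point of preferring UArray over Array here.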
Re: [Haskell-cafe] Code Review: Sudoku solver
I just ran a simple metric for the dancing-links solver. The only real
metric I could use was the number of coverOthers calls, which counts the
number of selections made (there is no distinction between certainty and
guessing). So the minimum # of selections is 81 (of which 17 are the
free hints, and 64 were real moves). Each selection beyond 81 involved
backtracking.

Of the 36628 puzzles, there were 21395 puzzles that were solved with 81
selections, and therefore no backtracking. So 58% of them were (some
with luck) solved instantly. This low percentage is why this is not a
"by logic" solver.

puzzles  selections each             selections   excess/above 81
  21395  minimal 81    (17+64*1)        1732995         0
   9841  up to 145     (17+64*2)        1043838    246717 (== 1043838 - 9841*81)
   3192  up to 273     (17+64*4)         611275    352723
   1215  up to 529     (17+64*8)         453302    354887
    576  up to 1041    (17+64*16)        415496    368840
    248  up to 2065    (17+64*32)        354028    333940
    105  up to 4113    (17+64*64)        300909    292404
     42  up to 8209    (17+64*128)       248645    245243
     10  up to 16401   (17+64*256)       120724    119914
      3  up to 32785   (17+64*512)        62319     62076
      1  used 32875 (puzzle # 10995)      32875     32794
 ------                                 -------   -------
  36628                                 5376406   2409538

I think the important thing about the list is that the work in each
group is double the one before it, but the count of puzzles is always
less than half the one before it. Thus the total selections decreases,
so the hard puzzles are time consuming but not catastrophic. Anyway,
94% of the puzzles were solved with no more than 17+64*4 selections.

And only 50.7% of the selections beyond the 17 hints required
backtracking. (Computed from 2409538 / (5376406 - 36628 * 17).) That's
an average of 65.8 bad selections per puzzle that requires 64 good
selections to solve. This seems like an excellent benchmark for
comparing brute force methods.

A final point about dancing links is that this is a generic data
structure and algorithm for solving any binary cover problem (such as
arranging pentominoes). It is not specialized to knowing anything about
Sudoku.
The initialization tells it there are 324 constraints and this is a
comprehensive set of moves, limited only by the initial hints. And it
runs only 50% slower than bit-twiddling specialized logic.

The funny thing is that Sudoku is too small for this method, by which I
mean that the constant cost of initializing the data structure that is
used can become comparable to running the algorithm. I had to profile my
code while rewriting it and optimize the initialization code to have a
much smaller overhead than Knuth's code. Most non-Sudoku problems that
one needs to brute force with this will run much longer than any setup
costs.

Finally, here were the worst 30 puzzles:

selections  puzzle#
     32875    10995
     23577      412
     20707    27267
     18035    26632
     14350    21765
     14247    33677
     13966    10992
     13660     7250
     13453    28608
     12243    19106
     12181    10993
      9300    18177
      8686    24590
      8638    12445
      8161    19387
      8157     7921
      7916    35782
      7900    33131
      7850    36162
      7663    21951
      7427    31110
      7321    16608
      7268    33554
      7200     1259
      7108     9750
      6805    28607
      6781    21919
      6716    14969
      6432    26234
      6361    15697

I don't notice any overlap with the list of hard puzzles to solve with
the other programs.

-- 
Chris
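The summary figures quoted in this post (the 50.7% and the 65.8 average) follow mechanically from the per-group totals; a quick Haskell sanity check, with the (puzzles, total selections) pairs copied from the table:

```haskell
-- (puzzle count, total selections) per group, from the table above.
groups :: [(Int, Int)]
groups = [(21395,1732995),(9841,1043838),(3192,611275),(1215,453302)
         ,(576,415496),(248,354028),(105,300909),(42,248645)
         ,(10,120724),(3,62319),(1,32875)]

puzzles, selections, excess :: Int
puzzles    = sum (map fst groups)       -- total puzzles
selections = sum (map snd groups)       -- total selections
excess     = selections - 81 * puzzles  -- selections above the 81 minimum

-- Fraction of non-hint selections that required backtracking (~50.7%).
badFraction :: Double
badFraction = fromIntegral excess / fromIntegral (selections - 17 * puzzles)
```

Evaluating these reproduces the totals 36628, 5376406 and 2409538, and badFraction comes out at about 0.507.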
[Haskell-cafe] Chapter 4 of Hitchhikers Guide to the Haskell is out
Hello,

After a great amount of procrastination I finally put online the next
installment of my Haskell tutorial, which can be found at
http://www.haskell.org/haskellwiki/Hitchhikers_Guide_to_the_Haskell

Many thanks to all who commented on my efforts, spell-checked the text
and urged me to go forward. As always, I'm interested in any feedback.

-- 
Dmitry Astapov //ADEpt GPG KeyID/fprint: F5D7639D/CA36 E6C4 815D 434D
0498 2B08 7867 4860 F5D7 639D
Re[4]: [Haskell-cafe] Understanding allocation behavior
Hello Robert,

Sunday, April 9, 2006, 2:54:58 AM, you wrote:

> findMinIndex :: Word -> Int
> findMaxIndex :: Word -> Int

on the other side, these procedures can use the same divide-to-bytes
technique as `size`:

findMinIndex 0 = undefined
findMinIndex n = case (n .&. 255) of
                   0 -> 8 + findMinIndex (n `shiftR` 8)
                   b -> minIndexInByte ! b

something like this should work

-- 
Best regards,
 Bulat                            mailto:[EMAIL PROTECTED]
Re: Re[4]: [Haskell-cafe] Understanding allocation behavior
Hi Bulat,

On Apr 9, 2006, at 6:31 AM, Bulat Ziganshin wrote:

> on the other side, these procedures can use the same divide-to-bytes
> technique as `size`:
>
> findMinIndex 0 = undefined
> findMinIndex n = case (n .&. 255) of
>                    0 -> 8 + findMinIndex (n `shiftR` 8)
>                    b -> minIndexInByte ! b
>
> something like this should work

In an email to me, Jean-Philippe Bernardy expressed a concern that a
large table could needlessly fill the data cache. He proposed checking
4 bits at a time and using a small table of 16 elements. Not
surprisingly, it isn't as fast. I wonder what you think of this. Also,
I wonder what would be a good test to demonstrate this possible
interaction with the cache.

Cheers,
David

ps. Thanks for the tip about UArray.

David F. Place
mailto:[EMAIL PROTECTED]
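The 4-bits-at-a-time variant attributed to Jean-Philippe above could look something like this. This is a sketch of the idea as described, not his actual code; `minIndexInNibble` and the exact structure are assumptions.

```haskell
import Data.Array.Unboxed
import Data.Bits

-- Index of the lowest set bit, scanning 4 bits at a time against a
-- 16-entry table (the cache-friendly alternative to a 256-entry one).
minIndexInNibble :: UArray Word Int
minIndexInNibble = listArray (0,15)
    [ if n == 0 then 0 else head [i | i <- [0..3], testBit n i]
    | n <- [0..15 :: Int] ]

findMinIndex :: Word -> Int
findMinIndex 0 = undefined              -- no set bit to find
findMinIndex n = case n .&. 15 of
    0 -> 4 + findMinIndex (n `shiftR` 4)  -- low nibble empty, recurse
    b -> minIndexInNibble ! b             -- answer is in the low nibble
```

The table is 16 elements instead of 256, at the cost of up to twice as many iterations per word, which matches the observed "smaller table but not as fast" trade-off.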
Re: [Haskell-cafe] Code Review: Sudoku solver
On Saturday, 8 April 2006 10:21, Chris Kuklewicz wrote:
> Daniel Fischer wrote:
> > But, lo and behold, I also tried how plain Array fared in comparison
> > to DiffArray and ... reduced the running time to under ten minutes
> > (a little above for the list version), 5% GC time without -AxM,
> > 1.2% with -A8M. And I thought DiffArrays were supposed to be fast!
>
> No. DiffArrays are faster for the usual imperative single threaded
> usage pattern. The haddock documentation explains:

Well, it's single threaded for 31,309 of the puzzles and hasn't much
branching for most of the others. I hoped that when guessing was
necessary, a copy was made to be used for the other branches, but
apparently I couldn't persuade the compiler to do that.

> Diff arrays have an immutable interface, but rely on internal updates
> in place to provide fast functional update operator //. When the //
> operator is applied to a diff array, its contents are physically
> updated in place. The old array silently changes its representation
> without changing the visible behavior: it stores a link to the new
> current array along with the difference to be applied to get the old
> contents.
>
> So if a diff array is used in a single-threaded style, i.e. after //
> application the old version is no longer used, a ! i takes O(1) time
> and a // d takes O(length d). Accessing elements of older versions
> gradually becomes slower. Updating an array which is not current makes
> a physical copy. The resulting array is unlinked from the old family.
> So you can obtain a version which is guaranteed to be current and thus
> have fast element access by a // [].
>
> I assume the usage in a Sudoku solver involves a non-trivial amount of
> back-tracking. So as the solver backs up and goes forward again it
> ends up being much more work than having used a plain Array.

I tried to keep backtracking to a minimum, and in most cases that
minimum is reached.
> And as was pointed out by someone else on this list: to be thread safe
> the DiffArray uses MVars (with locking) instead of IORefs. But I
> expect the main problem is that a DiffArray is simply not the right
> mutable data structure for the job.

Should I try an MArray? And what's more promising, IOArray or STArray?

> I have had the flu this week, so I did not finish cleaning up my port
> of Knuth's mutable dancing links based Sudoku solver.

So did I, hope you're well again.

> But it uses a much more lightweight way to alter a mutable data
> structure both going forward and backwards while backtracking. And I
> can use STRefs to build it, instead of MVars.

Cheers,
Daniel

-- 
In My Egotistical Opinion, most people's C programs should be indented
six feet downward and covered with dirt. -- Blair P. Houghton
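On the IOArray-vs-STArray question raised above: STArray (or unboxed STUArray) keeps the solver pure from the outside via runST, so it is usually the more natural fit unless the code already lives in IO. A minimal sketch of the pattern, with a made-up 9x9 grid encoding (0 = empty) purely for illustration:

```haskell
import Control.Monad.ST
import Data.Array.ST
import Data.Array.Unboxed

-- Take a pure grid, mutate a private copy in ST, freeze and return it.
-- The update written here is a placeholder, not real solver logic.
solveStep :: UArray (Int,Int) Int -> UArray (Int,Int) Int
solveStep grid = runSTUArray $ do
    g <- thaw grid            -- O(size) mutable copy; the original is untouched
    writeArray g (0,0) 5      -- in-place updates are O(1) from here on
    return g
```

For backtracking, one copy per guess point (as sketched here with `thaw`) gives each branch its own array, which is exactly the behavior the DiffArray was hoped to provide automatically.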
Re[6]: [Haskell-cafe] Understanding allocation behavior
Hello David,

Sunday, April 9, 2006, 5:47:03 PM, you wrote:

> In an email to me, Jean-Philippe Bernardy expressed a concern that a
> large table could needlessly fill the data cache. He proposed checking
> 4 bits at a time and using a small table of 16 elements. Not
> surprisingly, it isn't as fast. I wonder what you think of this. Also,
> I wonder what would be a good test to demonstrate this possible
> interaction with the cache.

it depends on the actual task, CPU and other programs running at the
same time :) i can just say that it will be better to use an array of
Word8 (with `fromIntegral` for conversion) and use `unsafeAt` for
indexing

-- 
Best regards,
 Bulat                            mailto:[EMAIL PROTECTED]
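The Word8-plus-`unsafeAt` suggestion above, applied to the bit-counting table from earlier in the thread, might look like this. A sketch only; the table is populated with `popCount` just for brevity, and `unsafeAt` of course trades away bounds checking for speed.

```haskell
import Data.Array.Base (unsafeAt)
import Data.Array.Unboxed
import Data.Bits
import Data.Word

-- 256-entry table stored as Word8: 256 bytes total, easy on the cache.
bitsTable :: UArray Int Word8
bitsTable = listArray (0,255)
    [fromIntegral (popCount n) | n <- [0..255 :: Int]]

-- Bounds-check-free lookup; fromIntegral converts index in, result out.
bitcount8 :: Word8 -> Int
bitcount8 w = fromIntegral (bitsTable `unsafeAt` fromIntegral w)
```

`unsafeAt` is safe here only because a Word8 index can never exceed 255; with any wider index type the bounds check would be earning its keep.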
Re: [Haskell-cafe] Re: Justification for Ord inheriting from Eq?
On 4/7/06, Jared Updike [EMAIL PROTECTED] wrote:
> > given an Ord instance (for a type T) a corresponding Eq instance can
> > be given by:
> >
> > instance Eq T where
> >     a == b = compare a b == EQ
>
> where did this second `==` come from? (I guess if Ordering derives
> Eq :-)

I think another poster essentially already said this, but the second ==
comes from the Eq instance for type Ordering, which is in the Prelude.
So this we can actually rely on.

Steve
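The pattern under discussion, made concrete for a small hypothetical type T (the type and its ordering are invented for the demo). The second (==) on the right-hand side is Ordering's, which comes from the Prelude:

```haskell
data T = A | B | C

-- Eq defined entirely in terms of compare, as in the quoted snippet.
instance Eq T where
    a == b = compare a b == EQ   -- this (==) is Ordering's, from the Prelude

-- A hand-written Ord instance; no circularity, since compare never
-- calls T's (==).
instance Ord T where
    compare A A = EQ
    compare A _ = LT
    compare _ A = GT
    compare B B = EQ
    compare B _ = LT
    compare _ B = GT
    compare C C = EQ
```

This typechecks and runs because Ordering's Eq instance is independent of T's, so the definition is well-founded even though Ord is a subclass of Eq.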
[Haskell-cafe] Re: lambda evaluator in GADT
> The introduction to GADT usually starts with a little expression
> evaluator. So I gave it a try, but there are some troubles.

Actually, the generalization is not necessarily trivial at all,
depending on what you need to do with your ASTs.

> data E a where
>     Lit :: a -> E a
>     App :: E (a -> b) -> E a -> E b
>     Lam :: Var a -> E b -> E (a -> b)
>     Val :: Var a -> E a
>
> data Var a = Var String

You're using first-order abstract syntax. Each GADT branch corresponds
to one of the typing rules of your language, and when you introduce
variables, your typing rules end up needing an extra environment (which
maps each var to its type), which you also need to add here: E env a.
Building up `env' is left as an exercise.

An alternative is to use higher-order abstract syntax, which corresponds
to using hypothetical judgments in your typing rules instead of an extra
environment. I.e. something like:

data E a where
    Lit :: a -> E a
    App :: E (a -> b) -> E a -> E b
    Lam :: (E a -> E b) -> E (a -> b)

eval :: E a -> a
eval (Lit x)   = x
eval (App f x) = eval f (eval x)
eval (Lam f)   = \x -> eval (f (Lit x))

Stefan
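The higher-order evaluator above can be exercised on a small term; here it is as a self-contained module (the `example` term, (\x -> x + 1) applied to 41, is mine):

```haskell
{-# LANGUAGE GADTs #-}

-- The HOAS expression type from the post: Lam embeds a Haskell
-- function over expressions instead of carrying a named variable.
data E a where
  Lit :: a -> E a
  App :: E (a -> b) -> E a -> E b
  Lam :: (E a -> E b) -> E (a -> b)

eval :: E a -> a
eval (Lit x)   = x
eval (App f x) = eval f (eval x)
eval (Lam f)   = \x -> eval (f (Lit x))  -- wrap the argument back up as a Lit

-- (\x -> x + 1) applied to 41.
example :: Int
example = eval (App (Lam (\x -> App (Lit (+1)) x)) (Lit 41))
```

Note how binding is handled entirely by Haskell's own lambda: no Var constructor, no environment, and the GADT return-type indices make `eval` type-preserving for free.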