I was just watching Google Desktop -- it's supposed to index in the background, using up 75% of my CPUs (3 out of 4) -- and I wanted to try something CPU-intensive to see if it would back off (it doesn't).
I decided factoring a large int might do the trick, so I popped into my Cygwin/win32 window and started trying numbers with factor.

I found that the version on Cygwin/win32 would take a maximum of 20 digits: 12345678901234567890 worked, but 123456789012345678901 failed with an "is too large" error. It looks like it handles up to 2**64-1. I thought a machine with a 64-bit word size might give more, so I tried it on a Linux box running x86-64 -- but was surprised to find it limited to 2**63-1!

Should the limit on a 32-bit platform (same type of processor) be the same as on one running in 64 bits? And would it completely kill performance to use an arbitrary-precision library, like the one 'bc' appears to use?

It might be an interesting way to rate processor speeds -- I note that 12345678901234567891 (substituting a 1 for the trailing 0 in the 20-digit number, on Cygwin) takes a "while" -- but I can't compare it on my 64-bit machine, because there the number is "too big".

I'm guessing the Cygwin version uses a 64-bit unsigned type, whereas the version on the 64-bit machine uses a signed one? Not sure why there'd be a difference... except that the 64-bit version is v6.9, while the Cygwin version is v6.10. Was that changed between those two releases?

Curious,
-linda
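P.S. To make my guess concrete, here's a little C test program. It assumes -- and this is only my assumption, I haven't looked at factor's source -- that one build parses its argument into a 64-bit unsigned type (say, with strtoumax()) while the other uses a signed 64-bit type; that alone would explain a 20-digit ceiling on one machine and a 19-digit ceiling on the other:

    #include <stdio.h>
    #include <errno.h>
    #include <inttypes.h>

    int main (void)
    {
      /* The two ceilings I'm seeing: */
      printf ("2**64-1 = %" PRIu64 "  (20 digits -- the Cygwin limit?)\n",
              (uint64_t) UINT64_MAX);
      printf ("2**63-1 = %" PRId64 "  (19 digits -- the x86-64 limit?)\n",
              (int64_t) INT64_MAX);

      /* A 21-digit number overflows even an unsigned 64-bit int;
         strtoumax() reports that by setting errno to ERANGE.  */
      const char *s = "123456789012345678901";
      errno = 0;
      uintmax_t n = strtoumax (s, NULL, 10);
      if (errno == ERANGE)
        printf ("%s overflows uintmax_t (clamped to %" PRIuMAX ")\n", s, n);
      return 0;
    }

It prints 18446744073709551615 (20 digits) and 9223372036854775807 (19 digits), which match the two limits I'm seeing.

And on the arbitrary-precision question: I don't know what library a bignum factor would actually use, but just as a sketch of what it could look like, here is naive trial division on top of GMP (treat the whole thing as hypothetical -- it's not how the current factor works):

    #include <stdio.h>
    #include <gmp.h>

    /* Print the prime factorization of argv[1] by trial division. */
    int main (int argc, char **argv)
    {
      mpz_t n, d, sq;
      if (argc != 2 || mpz_init_set_str (n, argv[1], 10) != 0)
        return 1;
      mpz_init_set_ui (d, 2);
      mpz_init (sq);

      mpz_mul (sq, d, d);
      while (mpz_cmp (sq, n) <= 0)   /* only need divisors up to sqrt(n) */
        {
          if (mpz_divisible_p (n, d))
            {
              gmp_printf ("%Zd ", d);   /* found a factor: print it...   */
              mpz_divexact (n, n, d);   /* ...and divide it out          */
            }
          else
            mpz_add_ui (d, d, 1);       /* otherwise try the next d      */
          mpz_mul (sq, d, d);
        }
      if (mpz_cmp_ui (n, 1) > 0)
        gmp_printf ("%Zd", n);          /* whatever is left is prime     */
      printf ("\n");
      return 0;
    }

Building that with something like "gcc trial.c -lgmp" and timing it against factor on the same 19-digit input would at least put a number on what the bignum arithmetic costs.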