Folks, this is really a Friday topic, but I'm stunned by what I just found.
I was fiddling around with the BigInteger class, testing Fermat's little
theorem <https://en.wikipedia.org/wiki/Fermat%27s_little_theorem> and
Carmichael numbers <https://en.wikipedia.org/wiki/Carmichael_number>, and
in the process knocked up a bit of high-school-level code to factor numbers
(or tell me they're prime). I was wondering why it was factoring 16-digit
numbers in about a second, so I fed it bigger and bigger numbers to see how
far I could go. Here's some sample output with timings:

20000000000000003 = prime [~3 seconds]
1000000000000000003 = prime [~22 seconds]
9000000000000000041 = prime [~72 seconds]
9000000000000000042 = 2 ⋅ 3 ⋅ 29 ⋅ 89 ⋅ 5443 ⋅ 106773854329 [~71 seconds]

So my 4-year-old PC, using the standard Int64 and BigInteger classes, not
optimised, inside LINQPad, is factoring 19-digit numbers in about a minute.
I'm amazed! Where is the speed coming from? BigInteger is an immutable
struct and presumably carefully written, but I thought it would really
choke at these sizes. I stopped around n = 2^63.
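
Back-of-envelope: √(9×10^18) ≈ 3×10^9, and the 6k±1 wheel tests roughly one
in three integers, so a 19-digit prime costs on the order of 10^9 trial
divisions. At ~70 seconds that's around 10^7 BigInteger mod operations per
second, which I suppose is plausible while the operands still fit in a
couple of machine words.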

*Greg K*

// Naive trial-division factoring (needs System.Numerics for BigInteger).
// Yields the prime factors of n in ascending order.
IEnumerable<BigInteger> Factor(BigInteger n)
{
  foreach (long div in Divseq())
  {
    if (n == 1)
    {
      Console.WriteLine("break done");   // fully factored
      break;
    }
    // Exact bound check: div² > n, done in BigInteger so there is no
    // double-rounding risk near 2^63; it also tightens as n shrinks.
    if ((BigInteger)div * div > n)
    {
      Console.WriteLine($"break {div}^2 > {n}");
      yield return n;                    // whatever is left must be prime
      break;
    }
    while (n % div == 0)                 // divide out each repeated factor
    {
      n /= div;
      yield return div;
    }
  }
}
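
// To reproduce the timings: a minimal driver along these lines works in
// LINQPad's "C# Program" mode (the number and the stopwatch formatting
// are just for illustration).
void Main()
{
  var n = BigInteger.Parse("9000000000000000042");
  var sw = System.Diagnostics.Stopwatch.StartNew();
  var factors = Factor(n).ToList();      // force full enumeration
  sw.Stop();
  Console.WriteLine($"{n} = {string.Join(" ⋅ ", factors)} [{sw.Elapsed.TotalSeconds:F0} seconds]");
}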

// Endless sequence of trial divisors: 2, 3, then 6k ± 1 (5, 7, 11, 13, ...).
// Every prime > 3 has the form 6k ± 1, so this wheel skips multiples of 2 and 3.
IEnumerable<long> Divseq()
{
  yield return 2;
  yield return 3;
  for (long i = 6; ; i += 6)
  {
    yield return i - 1;
    yield return i + 1;
  }
}
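
// Aside — the Fermat-test fiddling that started all this boils down to
// BigInteger.ModPow. A minimal sketch (the method name and the choice of
// base are mine, just for illustration): for prime p and gcd(a, p) = 1,
// a^(p-1) ≡ 1 (mod p). Carmichael numbers (561, 1105, 1729, ...) pass
// this for every base coprime to them despite being composite.
bool FermatProbablyPrime(BigInteger n, BigInteger a) =>
  BigInteger.ModPow(a, n - 1, n) == 1;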
