On Sun, 7 Aug 2022 at 08:37, <[email protected]> wrote:
>
> If the numbers are represented as prime factors to begin with, finding the 
> gcd is easier. Then, calculating the sum requires multiplying out the 
> numerators, which should be easier than factoring the decimal representation. 
> But that's just my relatively naïve observation.

The prime factorisation of a sum of two integers does not necessarily
have any relationship to the prime factorisations of the summands.
Consider 8 + 5 = 13, which in terms of prime factorisations looks like:

   {2, 2, 2} + {5} = {13}

How can you compute the prime factorisation of the result (13) without
multiplying out the other factorisations?
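
As a tiny sketch with SymPy's factorint, just to make the point
concrete:

    from sympy import factorint

    a, b = 8, 5
    print(factorint(a), factorint(b))   # {2: 3} {5: 1}
    # The only general way to get the factorisation of the sum is to
    # evaluate the sum as an integer and factorise it from scratch:
    print(factorint(a + b))             # {13: 1}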

If you never need to add two integers and only need integer
multiplication, exact division and gcd, then it could be possible to do
something more efficient with the prime factorisations. If you have to
compute the result of an addition explicitly and then factorise it,
this is going to be very inefficient as soon as the numbers get large.

If you have some reason to think that your factorisations will always
share lots of prime factors then maybe it is possible to implement
addition more efficiently without needing to factorise large integers.
In general, though, you should avoid any approach that requires
factorising large integers since that is a famously hard problem (gcd
is much easier).

> The power series I am dealing with was invented in a paper that is about 150 
> years old and the author calculated some of the coefficients up to a power of 
> 9 and used the prime-factor representation for the denominator and the 
> decimal representation for the numerator.  One of the larger coefficients has 
> a value of 2114557858/(2**12*3**6*5*3). I don't know how much paper he needed 
> to calculate that one. Most subsequent papers calculate to a power of 5.

For a computer those integers are not really large. Below you talk
about running out of RAM, which implies that the numbers must be a lot
bigger than this.

> My software can calculate to a power of 23 and then reaches the limits of my 
> PC. Looking at the resource monitor tells me that I can probably benefit a 
> lot from more RAM, but I haven't done that yet.
>
> Based on your observations, I think the best way forward is the simple 
> solution (more RAM), since we haven't received a response indicating that 
> another solution is easy.

If the numbers are really so large that you are running out of RAM,
then getting more RAM probably won't help that much. I expect that
even if you do get another term in the series, the next term would
fill your new, larger RAM.

If the prime factorisation approach can work, then it would
potentially deliver a much bigger improvement. One way to implement
the prime factorisation idea in SymPy would be to use a symbol for
each prime factor. Assuming you only need multiplication, division and
cancellation of fractions, that would work fine: SymPy's Mul already
represents that naturally (although I'd probably use monomials from
the polys module for this).
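
For example (a minimal sketch; the symbol names are just invented for
illustration):

    from sympy import symbols

    # one symbol per prime factor
    p2, p3, p5 = symbols('p2 p3 p5', positive=True)

    a = p2**12 * p3**6 * p5**3
    b = p2**10 * p3**4 * p5

    print(a / b)   # p2**2*p3**2*p5**2 -- common powers cancel automatically
    print(a * b)   # p2**22*p3**10*p5**4 -- exponents just add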

--
Oscar
