@Ralf: I see. I can't seem to reproduce the *big* difference anymore. When 
I posted it, I tried it with the s1 and s2 evaluations exchanged
to rule out CPU scheduling issues (boost kicking in after the first one) and 
other problems, so who knows what happened.
In practice I don't use approximate, but handle the underlying streams 
myself
because I want to dynamically evaluate up to a certain point (error). This 
was just an example to play around with, and I found it strange
that there is such a big difference. But Waldek figured out why there is 
*some* difference:

@Waldek: Thanks, that is another "d'oh" moment. The second way clearly 
starts with a more complicated expression.
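To double-check that, I tried rebuilding the rationalized form directly on the series variable (a throwaway sketch; s2b is just a local name, and I'd expect it to behave like s2 rather than s1):

x := taylor('x);
s2b := ((cos(x) + x^2)*sqrt(x - 1) + x^8)/sqrt(x - 1);
approximate(s2b,450);

If that is right, the slowdown comes from the shape of the expression, not from how the series is driven.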

I've noticed before that if I do a ratDenom() of my expression to eliminate 
square roots in the denominator, the series
expansions are *significantly* faster. Given that improvement, I was 
playing around to see if there are other things that
could be tuned. See the dramatic improvement below:

s3 := taylor(ratDenom(y^2 + cos(y) + y^8/ sqrt(y-1)),y=0);

approximate(s1,450);
                           Time: 0.01 (IN) + 5.72 (EV) + 0.01 (OT) = 5.74 sec

approximate(s2,450);
                                 Time: 0 (IN) + 6.04 (EV) + 0 (OT) = 6.04 sec

approximate(s3,450);
                                 Time: 0 (IN) + 0.87 (EV) + 0 (OT) = 0.87 sec
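For anyone who wants to see what ratDenom actually does here, it can be checked directly (a throwaway sketch; e is just a local name):

e := y^2 + cos(y) + y^8/sqrt(y-1);
ratDenom e

The result has a square-root-free denominator, which seems to be the form the series machinery handles most efficiently.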

On Thursday, March 18, 2021 at 5:17:29 PM UTC-4 Waldek Hebisch wrote:

> On Thu, Mar 18, 2021 at 01:12:03PM -0700, Tobias Neumann wrote:
> > I am currently trying to see if I can accelerate my code around series 
> > expansions. I really like that series are lazy in FriCAS, but I wonder 
> > what impact multiple operations on the series (or the underlying 
> > stream), like integration etc. have. I would assume that up to memory 
> > requirements that "save" or delay all the performed operations, there 
> > should be no 
> > performance penalty compared to a non-lazy evaluation.
> > 
> > Anyway, while testing I came up with this example, where some expression
> > is series expanded in two different ways. 
> > 
> > )clear completely
> > )set mes tim on
> > 
> > x := taylor('x);
> > s1 := x^2 + cos(x) + x^8/ sqrt(x-1);
> > 
> > s2 := taylor(y^2 + cos(y) + y^8/ sqrt(y-1),y=0);
> > 
> > approximate(s1,450);
> > approximate(s2,450);
> > 
> > approximate(s1,450);
> > approximate(s2,450);
> > 
> > 
> > When running the code (see below) the first method of evaluation is 
> > three times faster than the second. (A second evaluation runs in the 
> > same time for both cases, probably because the results are cached, but 
> > that's not relevant for me).
> > 
> > In my case I start with an Expression(Integer), so I (obviously) can 
> > only use the second version from the ExpressionToUnivariatePowerSeries 
> > package. But my idea is that maybe I can improve the performance if I 
> > understand why the first method is so much faster.
> > 
> > I would have expected that the ExpressionToUnivariatePowerSeries package
> > does something like "subst(expression, x=series('x))", symbolically 
> > speaking, but then it should be just as fast.
>
> Well, we have:
>
> (1) -> y^2 + cos(y) + y^8/ sqrt(y-1)
>
>                     2  +-----+    8
>           (cos(y) + y )\|y - 1  + y
>    (1)  --------------------------
>                   +-----+
>                  \|y - 1
>                                    Type: Expression(Integer)
>
>
> Replacing y in this by the series for x, I get a time comparable to
> expanding the expression.
>
> > In principle I see two ways to do the series expansions: Compute the 
> > expansion as a whole for the whole expression (for example by taking 
> > derivatives for a Taylor series). Or do it for each function separately, 
> > and use the Cauchy product to multiply the individual series together, 
> > or add them, etc. For the first series expansion example this is 
> > clearly done,
>
> No, FriCAS uses derivatives only as a fallback, when it knows no
> better method. For elementary functions, "standard" special
> functions and a few other things FriCAS has much faster methods.
>
> > but maybe not for ExpressionToUnivariatePowerSeries and this could 
> > be the difference?
>
> I think it is the order of operations that matters. s1 uses a good
> formula; the formula used to compute s2 is subject to automatic
> transformations, and this makes it less efficient.
>
> -- 
> Waldek Hebisch
>
