I should generalise the idea to make Taylor parametric, allowing arbitrary 
types as coefficients (well, almost: they'd have to support the usual 
mathematical functions). Since the coefficients could themselves be of type 
Taylor, I'd get multidimensional expansions, too. I'd have to think of a 
mechanism for the type to recognise "its own" independent variable, so that 
a partial derivative would either be handled by it or delegated to the 
coefficients. That shouldn't be too hard, though.
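
The delegation mechanism could look something like this minimal Python sketch 
(all names hypothetical, not code from this thread): each series carries the 
name of "its own" independent variable, and a partial derivative with respect 
to any other variable is passed down to the coefficients, so nesting series 
inside series yields a multidimensional expansion.

```python
# Hypothetical sketch: a series type parametric in its coefficients.
# Each series knows the name of "its own" independent variable; a partial
# derivative w.r.t. any other variable is delegated to the coefficients.

class Series:
    def __init__(self, var, coeffs):
        self.var = var        # name of this series' independent variable
        self.coeffs = coeffs  # coefficients: plain numbers or nested Series

    def __rmul__(self, scalar):
        # scalar * Series, applied coefficient-wise (recurses into nesting)
        return Series(self.var, [scalar * c for c in self.coeffs])

    def diff(self, var):
        """Partial derivative with respect to the variable named `var`."""
        if var == self.var:
            # d/dx sum(c_k x^k) = sum(k c_k x^(k-1))
            return Series(self.var,
                          [k * c for k, c in enumerate(self.coeffs)][1:])
        # not our variable: delegate to coefficients that are series themselves
        return Series(self.var,
                      [c.diff(var) if isinstance(c, Series) else 0
                       for c in self.coeffs])
```

For example, f(x, y) = (1 + 2x) + (3 + 4x)y becomes a Series in y whose 
coefficients are Series in x; f.diff('y') is handled at the top level, while 
f.diff('x') is delegated into each coefficient.
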
Unfortunately, the simple trick of handling an ODE by transforming it into a 
Volterra-type integral equation doesn't work with partial differential 
equations, so I wouldn't need multidimensional expansions very much.
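
For the ODE case, the trick can be sketched in a few lines (a hypothetical 
Python sketch, not the thread's Julia code): rewrite y' = f(y), y(0) = y0 as 
the fixed-point equation y = y0 + integral(f(y)) and iterate it on a 
truncated power series. Since integration raises the order by one, each pass 
fixes one more coefficient.

```python
# Hypothetical sketch: solve y' = y, y(0) = 1 (solution exp(x)) by iterating
# the equivalent Volterra integral equation  y = 1 + integral(y)  on a
# truncated power series. Coefficients are kept in a plain list.

N = 10  # truncation order

def integral(c):
    # term-by-term integration: x^k -> x^(k+1)/(k+1); constant term is 0
    return [0.0] + [ck / (k + 1) for k, ck in enumerate(c[:N])]

y = [0.0] * (N + 1)
for _ in range(N + 1):  # each pass fixes one more coefficient
    y = integral(y)
    y[0] += 1.0         # add the initial condition y(0) = 1

# y[k] now approximates 1/k!, the Taylor coefficients of exp(x)
```

The same pattern is what makes a definition like y<|y0+integral(z) 
terminate on series: each coefficient of the integral only needs 
lower-order coefficients of the integrand.
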

Am Samstag, 21. Juni 2014 05:38:19 UTC+2 schrieb Luis Benet:
>
> Thanks for the explanations. I really really like the trick of the 
> dictionary and the way you define 
> the functions, so everything works without fixing constants beforehand. 
> This could make 
> life much simpler for our many-variable implementation... 
>
> Best, Luis 
>
>
> On Jun 20, 2014, at 4:28 PM, gentlebeldin <[email protected]> wrote: 
>
> > Hi, 
> > 
> > I'm glad you like the approach. The simplicity comes from the math 
> involved, but it's nice that Julia makes it work, allowing an almost 
> declarative style. 
> > Sure, I could use any constant in show; I decided 4 is enough. One can 
> always add more functions, pretty-printing, whatever. 
> > Something like x[i] in Julia is just special syntax for getindex(x,i), 
> so by defining this function for the type Taylor, I can use the syntax as 
> if it were a vector. 
> > I used a dictionary as a cache because I had used a Map in my Java port, 
> and since Java 8, those come with the computeIfAbsent method built in. I 
> could try an implementation with a vector instead; the necessary changes 
> would be minimal. But I don't think performance is a big problem here 
> (unless I want to use it to simulate the evolution of the solar system, 
> say). 
> > The lines 
> >   
> >  y<|y0+integral(z) 
> >  z<|z0-integral(sin(y)) 
> > 
> > 
> > are the equations of motion; they define y and z much as 
> > r<|exp(t[0])+integral(t'*r) 
> > 
> > defines the function exp(t). It's almost declarative: in Scala, I could 
> write 
> > def z=z0-integral(sin(y)) 
> > 
> > while in Julia, I need the hack with <| to make it recursive. 
> > 
> > 
> > Am Freitag, 20. Juni 2014 19:10:25 UTC+2 schrieb Luis Benet: 
> > Hi, 
> > 
> > I took a look at your code last night. It is very nice because of its 
> simplicity, almost magical! 
> > 
> > There are a few points which are not yet clear to me. I guess that "by 
> default" you calculate 
> > at least 4 terms of the expansion; this is "imposed" by your 
> redefinition of show. Am I right? 
> > Is there an easy way to change this default, say to 10? 
> > 
> > Is the trick of a dictionary *efficient*? I mean, comparing 
> performance and memory 
> > against a vector? If I understand the idea, you store the coefficients 
> of the expansion 
> > in the dictionary, with the key being the order of the coefficient. 
> Along the same lines, why 
> > does `x[0]` work, yielding `x.cm[0]`? Sorry for the very basic Julia 
> question... 
> > 
> > It is not yet clear to me how you succeed in integrating the 
> equations of motion, but it 
> > certainly is some sort of Taylor method. The difference in the change 
> of energy comes 
> > from the fact that you use a 25th-order expansion, I use a 28th, and you 
> fix the step-size 
> > while I calculate it from the last two terms in the expansion (which I 
> impose to be smaller 
> > than 1e-20). 
> > 
> > Best, Luis 
> > 
> > On Jun 19, 2014, at 4:39 PM, gentlebeldin <[email protected]> 
> wrote: 
> > 
> > > Thanks! My energy error plot looks a bit different, cf. attachment. 
> > > 
> > > Am Donnerstag, 19. Juni 2014 22:19:01 UTC+2 schrieb Luis Benet: 
> > > I have uploaded it as a gist:  
> http://nbviewer.ipython.org/gist/lbenet/616fa81f3c12c9cfcf97 
> > > 
> > > 
> > > On Jun 19, 2014, at 3:07 PM, gentlebeldin <[email protected]> 
> wrote: 
> > > 
> > > > I suspected that's what you meant: in theory, energy is conserved, 
> and looking at the real figures would tell us how good the integration is. 
> Well, it stays between 0.9999995000000417 and 0.9999995000000405, with a 
> very slight downward tendency. So I think we could call that "digital 
> friction plus quantum fluctuations". ;-) 
> > > > I didn't know PowerSeries.jl. Just had a look: it's based on similar 
> ideas, but the implementation is different. 
> > > > I can't have a look at your file PendulumIntegration.ipynb, 
> unfortunately, because I don't have IPython/IJulia installed. Maybe I 
> should give it a try, but not tonight. 
> > > 
> > > <energy_error.png> 
> > 
>
>
