Thanks for pointing me to JuliaDiff. I stumbled on that page some time ago, 
but I couldn't find anything suitable for my needs.

By the way, if anyone is interested, I achieved a 3x speedup by fitting the 
whole algorithm into one function (by manually inlining fddcoeffs into 
fdd).  This goes somewhat against the "Separate kernel functions" advice [1] 
from the "Performance Tips" section of the docs, but this way the general 
algorithm is "only" 3-4x slower than the hand-crafted function.

using FiniteDifferenceDerivatives
x=linspace(0,1,1000)
f=sin(x)
df=copy(x)
@elapsed for i = 1:1000; fdd!(df,1,3,x,f); end  # general function => ~0.15 s
@elapsed for i = 1:1000; fdd13!(df,x,f); end  # hand crafted => ~0.05 s
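
For anyone curious what a general k-point routine like this does under the 
hood, here is a rough Python transcription of the classic Fornberg recursion 
for finite-difference weights on an arbitrarily spaced stencil (a sketch of 
the textbook algorithm, which may or may not be exactly what fddcoeffs 
implements internally):

```python
# Fornberg's recursion: weights c_j such that sum_j c_j * f(x_j)
# approximates the m-th derivative of f at the point z, for an
# arbitrarily spaced stencil x.  Pure-Python illustrative sketch.
def fd_weights(z, x, m):
    n = len(x)
    # c[j][k]: weight of node j in the k-th derivative formula
    c = [[0.0] * (m + 1) for _ in range(n)]
    c1, c4 = 1.0, x[0] - z
    c[0][0] = 1.0
    for i in range(1, n):
        mn = min(i, m)
        c2, c5, c4 = 1.0, c4, x[i] - z
        for j in range(i):
            c3 = x[i] - x[j]
            c2 *= c3
            if j == i - 1:
                for k in range(mn, 0, -1):
                    c[i][k] = c1 * (k * c[i-1][k-1] - c5 * c[i-1][k]) / c2
                c[i][0] = -c1 * c5 * c[i-1][0] / c2
            for k in range(mn, 0, -1):
                c[j][k] = (c4 * c[j][k] - k * c[j][k-1]) / c3
            c[j][0] = c4 * c[j][0] / c3
        c1 = c2
    return [c[j][m] for j in range(n)]

# A 3-point first derivative at the middle of a uniform mesh recovers
# the familiar central-difference weights:
print(fd_weights(1.0, [0.0, 1.0, 2.0], 1))  # ≈ [-0.5, 0.0, 0.5]
```

The same call works unchanged on a nonuniform stencil (e.g. nodes 0, 1, 3), 
which is exactly the fixed-mesh situation discussed below.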

I will make a pull request to add it to METADATA after I add tests.

[1] 
http://julia.readthedocs.org/en/latest/manual/performance-tips/#separate-kernel-functions
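
On the coefficient-caching idea raised in the quoted discussion below: for 
workloads where the mesh does repeat between calls, memoizing the stencil 
weights on the local mesh points is cheap to sketch. Here is a small Python 
illustration, with a closed-form nonuniform 3-point first-derivative formula 
standing in for a general fddcoeffs-style routine (all names here are made 
up for illustration):

```python
# Sketch: cache stencil weights keyed by the local mesh points.
_weights_cache = {}

def three_point_weights(x0, x1, x2):
    # Weights (w0, w1, w2) so that w0*f(x0) + w1*f(x1) + w2*f(x2)
    # approximates f'(x1) on a nonuniform 3-point stencil.
    h1, h2 = x1 - x0, x2 - x1
    return (-h2 / (h1 * (h1 + h2)),
            (h2 - h1) / (h1 * h2),
            h1 / (h2 * (h1 + h2)))

def cached_three_point_weights(x0, x1, x2):
    key = (x0, x1, x2)          # the mesh slice itself is the cache key
    w = _weights_cache.get(key)
    if w is None:
        w = _weights_cache[key] = three_point_weights(x0, x1, x2)
    return w

# On a uniform mesh with spacing h this reduces to the central
# difference [-1/(2h), 0, 1/(2h)]:
print(cached_three_point_weights(0.0, 0.5, 1.0))  # → (-1.0, 0.0, 1.0)
```

As noted in the thread, this only pays off when meshes repeat; with a mesh 
that changes on every call, the cache just adds lookup overhead.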


On Thursday, October 16, 2014 6:52:04 PM UTC+2, Miles Lubin wrote:
>
> Hi Paweł,
>
> Thanks for the clarification, I agree that this code is likely too 
> specialized for Calculus.jl. I'm not very familiar with PDE solution 
> techniques, but you may want to take a look at the various JuliaDiff (
> http://www.juliadiff.org/) projects. It's possible that by using one of 
> the truncated Taylor series approaches, you could compute exact derivatives 
> without the need for finite differences. I'm also happy to add a link to 
> FDD.jl when it's published in METADATA.
>
> Miles
>
> On Thursday, October 16, 2014 7:41:34 AM UTC-4, Paweł Biernat wrote:
>>
>> Hi Miles,
>>
>> FiniteDifferenceDerivatives.jl contains methods that generalize those 
>> used in Calculus.jl (general k-point methods vs fixed 2-point methods).  
>>
>> But at the same time, I don't think Calculus.jl would benefit from 
>> general k-point methods, because the use cases of the two packages differ 
>> greatly.  FDD.jl is useful mostly when you have to work with a function 
>> given at fixed mesh points (for example, when solving PDEs with the method 
>> of lines), where the only way to increase the precision of the derivative 
>> is to increase the order of the method.  In Calculus.jl, on the other 
>> hand, you can evaluate the function to be differentiated wherever you 
>> want, so there is no fixed mesh and you can just use a lower-order method 
>> with a smaller step size to compensate for the low order.
>>
>> Also, Calculus.jl uses (and implements) the d-dimensional finite difference 
>> schemes necessary to compute gradients, while in FDD.jl I only implemented 
>> the 1-dimensional case.  So far I have not looked at general d-dimensional, 
>> n-point, kth-derivative methods, and I don't even know whether they exist.
>>
>> In summary, it would be possible to use FDD.jl in Calculus.jl, although 
>> it would probably be overkill, and with the current implementation of 
>> FDD.jl it would decrease performance.
>>
>> Best,
>> Paweł
>>
>> On Wednesday, October 15, 2014 10:06:40 PM UTC+2, Miles Lubin wrote:
>>>
>>> Hi Paweł,
>>>
>>> How does your approach compare with the implementation of finite 
>>> differences in Calculus.jl? It would be great if you could contribute any 
>>> new techniques there.
>>>
>>> Miles
>>>
>>> On Wednesday, October 15, 2014 6:48:29 AM UTC-4, Paweł Biernat wrote:
>>>>
>>>> Hi,
>>>> I have been experimenting with various finite difference algorithms for 
>>>> computing derivatives and I found and implemented a general algorithm for 
>>>> computing arbitrarily high derivative to arbitrarily high order [1].  So 
>>>> for anyone interested in playing with it I wrapped it up in a small 
>>>> package.
>>>>
>>>> I would also like to ask for your advice on how to improve the 
>>>> performance of the function fdd.  So far, for a first derivative on a 
>>>> three-point stencil (on a non-uniformly spaced mesh), it takes ten times 
>>>> as long as a hand-crafted implementation of the same function.  For 
>>>> comparison you can call
>>>>
>>>> using FiniteDifferenceDerivatives
>>>> x=linspace(0,1,1000)
>>>> f=sin(x)
>>>> df=copy(x)
>>>> @elapsed for i = 1:1000; fdd(1,3,x,f); end  # general function => ~0.5 s
>>>> @elapsed for i = 1:1000; fdd13(x,f); end  # hand crafted => ~0.05 s
>>>>
>>>> Obviously, most of the time is spent inside fddcoeffs.  One way I could 
>>>> increase the performance is to cache the results of fddcoeffs (its result 
>>>> depends only on the mesh).  In my use case, however, the mesh changes 
>>>> often, so fddcoeffs has to be called every time fdd is called.  I also 
>>>> tried replacing x[i1:i1+order-1] with view(x,i1:i1+order-1) in the call 
>>>> to fddcoeffs (with view from ArrayViews), but it didn't result in much of 
>>>> an improvement.  I also played with swapping the indexing order of c in 
>>>> fddcoeffs, but to no avail (maybe because the matrix c is typically 
>>>> small, 3x3 or similar).  Maybe I should try inlining fddcoeffs into the 
>>>> main body of fdd?  I did this in an earlier version, and it actually 
>>>> resulted in an improvement, but it looked somewhat less elegant.  Any 
>>>> advice (or pull requests) would be welcome.
>>>>
>>>> [1] https://github.com/pwl/FiniteDifferenceDerivatives.jl
>>>>
>>>> Cheers,
>>>> Paweł
>>>>
>>>
