Hi! I understand the performance concerns and I'm working on some trade-offs. But…
1 - Non-blind Richardson-Lucy deconvolution by gradient descent with Total Variation regularization (probably not the algorithm used in GIMP, since it's relatively recent) gives very good results in 25 iterations, i.e. 16 s on my 2 Mpx test picture with a Python script (on an i7 Ivy Bridge laptop).

2 - Myopic deconvolution (the sort of "blind" deconvolution where you give a good-enough initial guess of the blur profile) converges faster than the fully blind one.

3 - The most computation-demanding operation is the convolution product (2 FFT convolutions per non-blind iteration, 4 per blind iteration). The good news is that we don't need to compute them on the whole picture (doing so is actually harmful when you have a large bokeh area): you can/should mask the area of interest and do the computations only on it. That saves a lot of time and gives better results in some cases.

4 - It should be possible to deconvolve the RAW picture first, cache it, then apply the further edits on the cached picture (similar to the HDR workflow).

5 - Piccure uses a myopic deconvolution (from what I have understood) and seems to offer a rather decent time/quality ratio. ImageJ also has a similar open-source plugin (http://imagej.net/Parallel_Iterative_Deconvolution) whose code could be of interest.

Thanks for your interest!

*Aurélien PIERRE*
aurelienpierre.com <http://aurelienpierre.com>

------------------------------------------------------------------------
On 2017-10-11 at 14:59, Heiko Bauke wrote:
> Hi,
>
> On 11.10.2017 at 19:11, Martin Marmsoler wrote:
>> Gimp use python as scripting language. It might be easier to port for
>> Gimp?
>
> by the way: there is a Richardson Lucy sharpening filter in G'MIC.
> (As far as I understand this is a non-blind deconvolution algorithm.)
>
>
> Heiko
___________________________________________________________________________
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org
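P.S. For anyone who wants to experiment, here is a minimal sketch of the non-blind Richardson-Lucy update with a Total Variation prior described in point 1, using scipy's fftconvolve. The regularization weight, iteration count, and initialization below are illustrative assumptions, not the exact values from my script:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy_tv(image, psf, iterations=25, lam=0.002):
    """Non-blind Richardson-Lucy deconvolution with a multiplicative
    Total Variation regularization term (Dey et al. style update).
    `lam` (the TV weight) is an illustrative assumption."""
    estimate = image.copy()          # start from the blurred image itself
    psf_mirror = psf[::-1, ::-1]     # adjoint of the convolution
    eps = 1e-12                      # avoid division by zero
    for _ in range(iterations):
        # forward model: blur the current estimate (1st FFT convolution)
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / (blurred + eps)
        # back-project the error ratio (2nd FFT convolution)
        correction = fftconvolve(ratio, psf_mirror, mode="same")
        # TV term: divergence of the normalized gradient of the estimate
        gy, gx = np.gradient(estimate)
        norm = np.sqrt(gx**2 + gy**2) + eps
        div = np.gradient(gy / norm, axis=0) + np.gradient(gx / norm, axis=1)
        # multiplicative RL update, damped by the TV prior
        estimate = estimate * correction / (1.0 - lam * div + eps)
    return np.clip(estimate, 0.0, 1.0)
```

Masking the area of interest (point 3) then amounts to slicing out the region, running this on the crop, and pasting the result back, so the two FFT convolutions per iteration only pay for the masked pixels.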