Hi

Probably a silly idea, but in case this deconvolution is really, really
long, wouldn't it be possible to run it separately and save the result as
a DNG file? That way it would be done just once, with no need to
recalculate everything each time you want to edit the photo. But maybe
it's impossible to keep the raw data this way; I don't know much about
what is happening behind the scenes ;)

In any case, I'm curious to see the result; it looks promising, and like
a potential life-saver on some shots.

Have a nice day, guys!

    François

On Fri, Oct 13, 2017 at 4:30 AM, Aurélien PIERRE <rese...@aurelienpierre.com> wrote:

> Hi Tobias and Johannes!
>
> @Tobias:
>
> The standard deconvolution has to be computed on the whole picture, since
> the stack of iterations represents the approximated numeric solution of the
> equation we try to solve with an implicit method; there is no way to
> just store a bunch of parameters for a later recomputation, because we never
> actually know the exact equation (it's like solving a partial differential
> equation). Hence the iterations. As far as I know… (maybe Heiko Bauke's
> second opinion on this matter could be of use).
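>
> (For illustration only: a minimal NumPy/SciPy sketch of the non-blind
> R-L iteration, not the prototype's actual code; note the 2 FFT
> convolutions per iteration.)
>
>     import numpy as np
>     from scipy.signal import fftconvolve
>
>     def rl_iterate(observed, psf, n_iter=50):
>         # Richardson-Lucy: scale the estimate by the ratio of the
>         # observed image to the current estimate re-blurred by the PSF.
>         estimate = observed.copy()
>         psf_adj = psf[::-1, ::-1]              # flipped PSF = adjoint
>         for _ in range(n_iter):
>             blurred = fftconvolve(estimate, psf, mode='same')
>             ratio = observed / (blurred + 1e-12)   # avoid divide-by-zero
>             estimate *= fftconvolve(ratio, psf_adj, mode='same')
>         return estimate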
>
> The blind/myopic deconvolution performs an additional refinement of the
> blur kernel (hence the 2 additional convolution products), and this part
> doesn't need to be computed on the whole image. The size of the data could
> be set by the user or limited in the software based on the available
> computing power. This is an additional constraint passed to the solver, so
> there is always a trade-off anyway (too much data means an over-constrained
> problem, too little means a lack of information). However, here, if the blur
> kernel ends up accurate enough, it *could* be possible in theory to just
> solve the equation in one iteration, thus store the blur kernel and re-apply
> it anytime. I still have to play with the algorithm more to understand it
> fully.
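>
> (A rough sketch of the kernel-refinement half, in the same NumPy/SciPy
> notation as the non-blind sketch above; the image update is unchanged,
> and these are the 2 additional convolutions per blind iteration. Real
> implementations add regularization on top of this.)
>
>     def refine_kernel(observed, estimate, psf):
>         # The kernel update mirrors the image update, with the roles
>         # of the image and the kernel swapped.
>         blurred = fftconvolve(estimate, psf, mode='same')
>         ratio = observed / (blurred + 1e-12)
>         corr = fftconvolve(ratio, estimate[::-1, ::-1], mode='same')
>         # keep only the central window matching the (odd-sized) kernel
>         cy, cx = corr.shape[0] // 2, corr.shape[1] // 2
>         ky, kx = psf.shape[0] // 2, psf.shape[1] // 2
>         psf = psf * corr[cy - ky:cy + ky + 1, cx - kx:cx + kx + 1]
>         return psf / psf.sum()             # re-normalize to unit energy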
>
> I think what needs to be understood here by everyone (I don't know
> everyone's background) is that R-L deconvolution is not just a regular
> filter applied to the data, but an equation solved numerically, with an
> accuracy roughly related to the number of iterations. There is no
> explicit transfer function known.
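>
> Written out (this is the standard R-L update, in LaTeX notation):
>
>     u^{(k+1)} = u^{(k)} \cdot \left[ \left( \frac{d}{u^{(k)} \otimes h} \right) \otimes h^\star \right]
>
> where d is the observed image, h the blur kernel and h^\star its
> mirrored adjoint: each iterate feeds the next non-linearly, so no
> closed-form transfer function can be extracted.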
>
> @Johannes:
>
> The non-blind deconvolution runs on 3 threads (1 per channel) in 16 s, but
> it could be tiled (you just need to pad the tiles accordingly with
> low-frequency content to avoid ringing and weird border effects).
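>
> (To illustrate the padding idea: a hypothetical tiling helper reusing
> the rl_iterate sketch above, assuming NumPy. The overlap must be at
> least the kernel radius, and reflected borders keep the seams free of
> ringing.)
>
>     def process_tiled(image, psf, tile=512):
>         pad = max(psf.shape) // 2          # overlap >= kernel radius
>         out = np.empty_like(image)
>         for y in range(0, image.shape[0], tile):
>             for x in range(0, image.shape[1], tile):
>                 y0 = max(y - pad, 0)
>                 y1 = min(y + tile + pad, image.shape[0])
>                 x0 = max(x - pad, 0)
>                 x1 = min(x + tile + pad, image.shape[1])
>                 padded = np.pad(image[y0:y1, x0:x1], pad, mode='reflect')
>                 res = rl_iterate(padded, psf)[pad:-pad, pad:-pad]
>                 # write back only the valid interior of the tile
>                 out[y:y + tile, x:x + tile] = \
>                     res[y - y0:y - y0 + tile, x - x0:x - x0 + tile]
>         return out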
>
> 4) That depends on the order of modules. If you want to do it in the
> current sharpen module as an option, it'll come pretty much last. If
> you do it early, dt will transparently cache the output for you and do
> the other computations in darkroom mode on top. That said, I doubt you
> want to implement this for raw/bayer/xtrans images or run it before
> denoising.
>
> I believe it should be applied right after denoising, since this is
> low-level signal processing. Also, Total Variation and Wiener-filter
> denoising methods should be added to the modules for better results with
> the deconvolution (Total Variation is literally a gradient computation plus
> 3 lines of code similar to the Unsharp Mask equation; I'm not familiar
> with Wiener filters, although they come up often in the literature as an RL
> pre-processor).
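>
> (For the curious: a minimal, unoptimized NumPy sketch of one explicit
> Total Variation descent step, purely to illustrate the "3 lines" claim;
> an actual module would want a proper solver.)
>
>     def tv_denoise_step(u, weight=0.1):
>         # TV gradient flow: add the divergence of the normalized gradient
>         gy, gx = np.gradient(u)
>         norm = np.sqrt(gx**2 + gy**2 + 1e-8)
>         div = (np.gradient(gy / norm, axis=0)
>                + np.gradient(gx / norm, axis=1))
>         return u + weight * div            # one explicit descent step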
>
> With warm regards ;-)
>
> *Aurélien PIERRE*
> aurelienpierre.com
> ------------------------------
>
> Le 2017-10-12 à 03:58, Tobias Ellinghaus a écrit :
>
> Am Donnerstag, 12. Oktober 2017, 00:23:42 CEST schrieb Aurélien PIERRE:
>
> Hi!
>
> I understand the performance concerns and I'm working on some trade-offs.
> But…
>
> [...]
>
>
> 3 - The most computation-demanding operation is the convolution product (2
> FFT convolutions per non-blind iteration, 4 per blind iteration). The good
> news is we don't need to compute them on the whole picture (it's
> actually bad when you have a large bokeh area): you can/should mask
> the area of interest and do the computations only on it. It saves a lot
> of time and gives better results in some cases.
>
> How big is the data this step computes? If it's just a few values that are
> then used to process the whole image, and they don't change once they've
> been computed, then we could easily have a button in the module that does
> the heavy lifting once for the image and stores the result in its params,
> similar to what "color mapping" does.
> Provided the rest of the computations that just use the values computed here
> are fast, we don't have to worry too much about how long a one-time
> operation takes.
>
> [...]
>
>
> Thanks for your interest!
>
> *Aurélien PIERRE*
> aurelienpierre.com <http://aurelienpierre.com>
>
> Tobias
>
> [...]
>
>
>

___________________________________________________________________________
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org
