[darktable-dev] Re: denoise profile non local means: neighborhood parameter

2018-06-17 Thread rawfiner
On Sunday, June 17, 2018, Aurélien Pierre wrote:

>
>
> On 13/06/2018 at 17:31, rawfiner wrote:
>
>
>
> On Wednesday, June 13, 2018, Aurélien Pierre wrote:
>
>>
>>
>>> On Thu, Jun 14, 2018 at 12:23 AM, Aurélien Pierre wrote:
>>> > Hi,
>>> >
>>> > The problem with a 2-pass denoising method involving 2 different
>>> > algorithms, the latter applied where the former failed, could be
>>> > that the grain structure (the shape of the noise) would differ
>>> > across the picture, which is very unpleasing.
>>
>>
>> I agree that the grain structure could be different, but my feeling
>> (which may be wrong) is that it would still be better than no further
>> processing at all, which leaves some pixels unprocessed (they could
>> form grain structures that are far from uniform if we are not lucky).
>> If you think it is only due to a change of algorithm, I guess we could
>> apply non-local means again on the pixels where the first pass failed,
>> but with different parameters, to be quite confident that the second
>> pass will work.
>>
>> That sounds better to me… but practice will have the last word.
>>
>
> Ok :-)
>
>>
>>
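The second-pass idea discussed above can be sketched as follows. This is a hypothetical illustration in Python/NumPy, not darktable code: the `denoise` callable, the `strength` parameter, and the residual-based "failure" criterion are all my own assumptions.

```python
import numpy as np

def two_pass_denoise(image, denoise, residual_threshold=0.05):
    """Run a denoiser, flag pixels it barely changed (a crude
    'failure' criterion), and re-run a stronger pass only there."""
    first = denoise(image, strength=1.0)
    # Pixels where the first pass changed almost nothing are treated
    # as 'failed' (e.g. no similar patches were found).
    failed = np.abs(first - image) < residual_threshold
    # Second pass with different (stronger) parameters, applied only
    # to the failed pixels, so the rest keeps the first-pass result.
    second = denoise(image, strength=2.0)
    return np.where(failed, second, first)

# Toy stand-in for a real denoiser, just to exercise the logic:
blur = lambda img, strength: img / (1.0 + strength)
```

In darktable this would wrap the non-local means pass; the binary `failed` mask could also be feathered to soften grain transitions between the two passes.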
>>> >
>>> > I thought maybe we could instead create some sort of total
>>> > variation threshold on other denoising modules:
>>> >
>>> > - compute the total variation of each channel of each pixel as the
>>> >   divergence divided by the L1 norm of the gradient - we then
>>> >   obtain a "heatmap" of the gradients over the picture (contours
>>> >   and noise)
>>> > - let the user define a total variation threshold and form a mask
>>> >   where the weights above the threshold are the total variation
>>> >   and the weights below the threshold are zeros (sort of a
>>> >   high-pass filter, actually)
>>> > - apply the bilateral filter according to this mask.
>>> >
>>> > This way, if the user wants to stack several denoising modules, he
>>> > could protect the already-cleaned areas from further denoising.
>>> >
>>> > What do you think?
>>
>>
>> That sounds interesting.
>> This would maybe allow us to keep some small variations/details that
>> are not due to noise or not disturbing, while denoising the other
>> parts.
>> Also, it may be computationally interesting (it depends on the
>> complexity of the total variation computation, which I don't know),
>> as it could reduce the number of pixels to process.
>> I guess the user could also use something like that the other way
>> around: to protect highly detailed zones and apply the denoising only
>> on fairly smooth zones, in order to be able to use stronger denoising
>> on zones that are supposed to be background blur.
>>
>>
>> The noise is high frequency, so the TV (total variation) threshold
>> will have to be high-pass only. The hypothesis behind the TV
>> thresholding is that noisy pixels should have abnormally high
>> gradients compared to true details, so you isolate them this way.
>> Selecting noise in low-frequency areas would additionally require
>> something like a guided filter, which I believe is what is used in
>> the dehaze module. The complexity of the TV computation depends on
>> the order of accuracy you expect.
>>
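As a rough illustration of the proposal above, the TV "heatmap" and the high-pass mask could be sketched like this. It is a NumPy sketch under the thread's own definition (divergence divided by the L1 norm of the gradient); the function and variable names are mine, not darktable's.

```python
import numpy as np

def tv_heatmap_and_mask(channel, threshold):
    """Per-pixel total variation as |divergence| / L1-norm(gradient),
    then a high-pass mask keeping only values above the threshold."""
    gy, gx = np.gradient(channel)                # first derivatives
    # Divergence of the gradient field (i.e. the Laplacian).
    div = np.gradient(gy, axis=0) + np.gradient(gx, axis=1)
    l1 = np.abs(gx) + np.abs(gy) + 1e-15         # avoid division by 0
    tv = np.abs(div) / l1                        # the "heatmap"
    # Weights above the threshold keep their TV value, others are 0.
    mask = np.where(tv >= threshold, tv, 0.0)
    return tv, mask
```

The resulting mask would then decide where (and how strongly) the bilateral filter is applied.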
>> A classic approximation of the gradient uses a convolution product
>> with Sobel or Prewitt operators (3×3 arrays, very efficient, fairly
>> accurate for edges, probably less accurate for punctual noise). I
>> have developed optimized methods myself using 2, 4, and 8
>> neighbouring pixels that give higher-order accuracy, given the
>> sparsity of the data, at the expense of computing cost:
>> https://github.com/aurelienpierre/Image-Cases-Studies/blob/947fd8d5c2e4c3384c80c1045d86f8cf89ddcc7e/lib/deconvolution.pyx#L342
>> (ignore the variable ut in the code; only u is relevant for us here).
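For reference, the classic Sobel approximation mentioned above can be written with plain array slicing. This is a generic textbook sketch, not taken from the linked code.

```python
import numpy as np

def sobel_gradient(u):
    """3x3 Sobel approximation of the gradient of a 2D array.
    Borders are handled by edge replication."""
    p = np.pad(u, 1, mode="edge")
    # Horizontal kernel [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]:
    # weighted right column minus weighted left column.
    gx = (p[:-2, 2:] + 2 * p[1:-1, 2:] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[1:-1, :-2] - p[2:, :-2])
    # Vertical kernel (transpose): bottom row minus top row.
    gy = (p[2:, :-2] + 2 * p[2:, 1:-1] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[:-2, 1:-1] - p[:-2, 2:])
    return gx, gy
```

On a unit ramp along x, the interior response is 8 (the kernel's weight sum of 4 times the 2-pixel spacing), which is why Sobel outputs are usually normalized before use.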
>>
> Great, thanks for the explanations.
> Looking at the code of the 8 neighbouring pixels, I wonder if it
> would make sense to compute something like that on raw data,
> considering only neighbouring pixels of the same color?
>
>
> The RAW data are even more sparse, so the gradient can't be computed
> this way. One would have to tweak Taylor's theorem to find an
> expression of the gradient for sparse data. And that would be
> different for Bayer and X-Trans patterns. It's a bit of a conundrum.
>

Ok, thank you for these explanations


>
> Also, when talking about the mask formed from the heat map, do you
> mean that the "heat" would give each pixel a weight to use between
> input and output? (i.e. a mask that is not only ones and zeros, but
> that controls how much of the input and output is used for each pixel)
> If so, I think it is a good idea to explore!
>
> Yes, exactly: think of it as an opacity mask where you remap the
> user-input TV threshold and the lower values to 0, the maximum
> magnitude of TV to 1, and all the values in between accordingly.
>
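That remapping can be sketched in a few lines. The names are hypothetical, and a linear remap is assumed, which the message does not specify beyond "accordingly".

```python
import numpy as np

def tv_to_opacity(tv, threshold):
    """Remap a TV heatmap to an opacity mask: values at or below the
    user threshold -> 0, the maximum TV -> 1, linear in between."""
    top = tv.max()
    if top <= threshold:            # nothing exceeds the threshold
        return np.zeros_like(tv)
    return np.clip((tv - threshold) / (top - threshold), 0.0, 1.0)
```

The final image would then blend as `opacity * denoised + (1 - opacity) * original`, exactly like a module's opacity mask.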

Ok, that is really cool! It seems a good idea to try to use that!

rawfiner



Re: [darktable-dev] Re: denoise profile non local means: neighborhood parameter

2018-06-17 Thread Aurélien Pierre



[darktable-dev] Re: denoise profile non local means: neighborhood parameter

2018-06-17 Thread rawfiner
Here are some of the RAW files I use to test the changes I make to
denoising modules (including the one I used as an example at the
beginning of this conversation):
https://drive.google.com/open?id=11LxZWpZbS66m7vFdcoIHNTiG20JnwlJT
The reference-jpg folder contains the JPGs produced by the camera for
these raws (except for 2 of the RAWs, for which I don't have the
reference JPG).
I also use several other RAW files for testing, but unfortunately I
cannot upload them, as either they were not made by me or they are
photos of people.

These are really noisy pictures, as I would like to be able to easily
process such pictures in darktable and to reach levels of quality
similar to or better than the cameras'.
I hope it will help.

If you have noisy photos you would like to share too, I'd like to have
them, as my database of noisy pictures is a little biased (the majority
of photos in my little "noisy database" are from my own cameras, a
Lumix FZ1000 and a Fuji XT20, and I'd like to have more photos from
other cameras).

Thanks!

rawfiner


