[darktable-dev] Re: denoise profile non local means: neighborhood parameter

2018-10-16 Thread rawfiner
Hello
Here is a little update on what I have done recently on denoising.
My work on a "new" raw denoise module is still ongoing, but it takes a lot
of time (as expected), as I have to try various things before getting
correct ones. So no news on this side.

Yet, I found a quicker and easier way to improve darktable's denoising
capabilities.
In fact, it does not even change the algorithm!

The idea is to give an equalizer-like GUI to all wavelet-based modules, and
to allow the user to change the strength for the red, green, and blue
channels, as these channels usually suffer from different levels of noise
(especially after demosaic, where the red and blue channels have coarser
noise because errors propagate during demosaicing).
This way, the user can reduce coarse-grain noise while keeping fine-grain
noise if they want, or whatever fits their needs.
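
To make this concrete, here is a rough sketch of the principle (my own
illustration using PyWavelets; darktable's modules use their own wavelet
decomposition and threshold coefficients against a noise profile rather
than plainly scaling them, so the names and parameters here are assumptions):

import numpy as np
import pywt

def denoise_channel(chan, strengths, wavelet="db2"):
    # one strength in [0, 1] per decomposition level, coarsest scale first
    coeffs = pywt.wavedec2(chan, wavelet, level=len(strengths))
    for lvl, s in enumerate(strengths, start=1):
        # attenuate this scale's detail coefficients (H, V, D)
        coeffs[lvl] = tuple(c * s for c in coeffs[lvl])
    return pywt.waverec2(coeffs, wavelet)

def denoise_rgb(img, strengths_per_channel):
    return np.dstack([denoise_channel(img[..., k], strengths_per_channel[k])
                      for k in range(3)])

# damp coarse scales harder on red and blue, which carry the coarser noise
out = denoise_rgb(np.random.rand(64, 64, 3),
                  [[0.4, 0.7, 0.9], [0.8, 0.9, 1.0], [0.4, 0.7, 0.9]])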

I have implemented this idea for both the denoiseprofile and rawdenoise
modules, and I have just opened two pull requests for them:
https://github.com/darktable-org/darktable/pull/1752
https://github.com/darktable-org/darktable/pull/1753

Using these updated GUIs, I personally found that the existing algorithm
had plenty of hidden power (especially for high-ISO images, where
coarse-grain noise is more prominent)!
I hope you will enjoy this as much as I do.

rawfiner


On Sun, 22 Jul 2018 at 20:50, rawfiner wrote:

> Thank you Aurélien, that is a great answer.
> I think I will try to incorporate this in the weight computation of non
> local means to use only "non noisy" pixels in the computations of the
> weights, in addition to trying to use this as a (parametric?) mask.
>
> rawfiner
>
>
On Saturday, 21 July 2018, Aurélien Pierre wrote:
>
>> The TV is the norm (L1, L2, or something else) of the gradient along the
>> dimensions. Here, we have TV = || du/dx ; du/dy||. The discretized gradient
>> of a function u along a direction x is a simple forward or backward finite
>> difference such as du/dx = [u(i) - u(i-1)] / [x(i) - x(i-1)] (backward) or
>> du/dx = [u(i+1) - u(i)] / [x(i+1) - x(i)] (forward).
>>
>> For contiguous pixels on the main directions, the distance between 2 pixels
>> is x(i) - x(i-1) = 1 (I don't divide explicitly by 1 in the code though);
>> on diagonals it's sqrt(2) (a result of Pythagoras' theorem). Hence the
>> division by sqrt(2).
>>
>> Now, imagine a 2D problem where we have an inconsistent pixel in a smooth
>> sub-area of a picture with 0 all around:
>>
>> [0 ; 0 ; 0]
>> [0 ; 1 ; 0]
>> [0 ; 0 ; 0]
>>
>> That is the matrix of a 2D Dirac delta function (impulse). Computing the
>> TV L1 in forward difference leads to :
>>
>> ([0.0 ; 0.5 ; 0.0]
>>  [0.5 ; 1.0 ; 0.0]
>>  [0.0 ; 0.0 ; 0.0])*2
>>
>> Doing the same backwards leads to :
>>
>> ([0.0 ; 0.0 ; 0.0]
>>  [0.0 ; 1.0 ; 0.5]
>>  [0.0 ; 0.5 ; 0.0])*2
>>
>> So what happens is that, in both cases, the immediate neighbours of the noisy
>> pixel are detected as somewhat noisy as well because of the first-order
>> discretization, but they are not noise. That's a limit of the discrete
>> computation. Also, the derivative of a Dirac delta function is supposed to
>> be an even function; obviously that property is broken here. If you compute
>> the L2 norm of these arrays, you get 1.22. A delta function should have a
>> L2 norm = 1. Actually, the best approximation of the TV of the delta
>> function would be the original delta function itself.
>>
>> If we average both TV norms, we get :
>>
>> ([0.00 ; 0.25 ; 0.00]
>>   [0.25 ; 1.00 ; 0.25]
>>   [0.00 ; 0.25 ; 0.00])*4
>>
>> So, now, we have an error on more neighbours, but smaller in magnitude
>> and the TV map is now even. Also, the L2 norm of the array is now 1.12,
>> which is closer to 1. So we have a better approximation of the delta
>> derivative.
>>
>> With that in mind, on the 8 neighbours variant, we also compute the TV L1
>> norms (average of backward and forward) on diagonals, meaning :
>>
>> ([0.25 ; 0.00 ; 0.25]
>>   [0.00 ; 1.00 ; 0.00]
>>   [0.25 ; 0.00 ; 0.25])*4/sqrt(2)
>>
>> And… you are right, there is a problem of normalization because we should
>> divide by 4*(1 + 1/sqrt(2)) instead of 4. Then, our TV L1 map will be :
>>
>> [0.1036 ; 0.1464 ; 0.1036]
>> [0.1464 ; 1. ; 0.1464]
>> [0.1036 ; 0.1464 ; 0.1036]
>>
>> That's an even better approximation to the Dirac delta. Now, the L2 norm
>> is 1.06. And now that I see it, that could lead to a separable kernel to
>> compute the TV L1 with two 1D convolutions…
>>
>> I didn't plan on going full math here, but, here we are…
>>
>> I will correct my code soon.
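
A quick numerical check of those maps (a standalone numpy sketch for
illustration, not darktable code); note the combined centre value is
2*(1 + 1/sqrt(2)), half of the 4*(1 + 1/sqrt(2)) above, because forward
and backward schemes are averaged here rather than summed:

import numpy as np

u = np.zeros((3, 3))
u[1, 1] = 1.0  # the 2D Dirac delta above

def shift(a, di, dj):
    # circular shift; harmless here since all borders of u are zero
    return np.roll(np.roll(a, di, axis=0), dj, axis=1)

# TV L1 with forward, backward, and averaged differences on the main axes
tv_fwd = np.abs(shift(u, -1, 0) - u) + np.abs(shift(u, 0, -1) - u)
tv_bwd = np.abs(u - shift(u, 1, 0)) + np.abs(u - shift(u, 0, 1))
tv_axes = 0.5 * (tv_fwd + tv_bwd)
# diagonal contributions, divided by the sqrt(2) pixel distance
tv_diag = 0.5 * (np.abs(shift(u, -1, -1) - u) + np.abs(u - shift(u, 1, 1))
               + np.abs(shift(u, -1, 1) - u) + np.abs(u - shift(u, 1, -1))) / np.sqrt(2)
tv_8 = tv_axes + tv_diag

for tv in (tv_fwd, tv_axes, tv_8):
    tv = tv / tv.max()                          # normalize the centre to 1
    print(np.round(tv, 4), np.linalg.norm(tv))  # L2 norms: ~1.22, ~1.12, ~1.06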
>>
>> On 16/07/2018 at 01:51, rawfiner wrote:
>>
>> I went through Aurélien's study again
>> I wonder why the result of TV is divided by 4 (in the case of 8 neighbors,
>> "out[i, j, k] /= 4.")
>>
>> I guess it is kind of a normalisation.
>> But as we divided the differences along diagonals by sqrt(2), the maximum
>> achievable (supposing the values of the image are in [0,1], thus taking a
>> difference of 1 along each 

[darktable-dev] Re: denoise profile non local means: neighborhood parameter

2018-07-22 Thread rawfiner
Thank you Aurélien, that is a great answer.
I think I will try to incorporate this in the weight computation of non
local means to use only "non noisy" pixels in the computations of the
weights, in addition to trying to use this as a (parametric?) mask.
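
A loose sketch of that idea (my own illustration, not existing darktable
code): modulate the usual NLM patch weight with the noise mask value at the
candidate patch, so pixels flagged as noisy contribute less to the average:

import numpy as np

def nlm_weight(patch_p, patch_q, mask_q, h):
    # classic NLM kernel on the squared patch distance...
    d = np.mean((patch_p - patch_q) ** 2)
    w = np.exp(-d / (h * h))
    # ...attenuated by the mask at q's centre (mask_q in [0, 1], 1 = surely noise)
    return w * (1.0 - mask_q)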

rawfiner


On Saturday, 21 July 2018, Aurélien Pierre wrote:

> The TV is the norm (L1, L2, or something else) of the gradient along the
> dimensions. Here, we have TV = || du/dx ; du/dy||. The discretized gradient
> of a function u along a direction x is a simple forward or backward finite
> difference such as du/dx = [u(i) - u(i-1)] / [x(i) - x(i-1)] (backward) or
> du/dx = [u(i+1) - u(i)] / [x(i+1) - x(i)] (forward).
>
> For contiguous pixels on the main directions, the distance between 2 pixels is
> x(i) - x(i-1) = 1 (I don't divide explicitly by 1 in the code though); on
> diagonals it's sqrt(2) (a result of Pythagoras' theorem). Hence the
> division by sqrt(2).
>
> Now, imagine a 2D problem where we have an inconsistent pixel in a smooth
> sub-area of a picture with 0 all around:
>
> [0 ; 0 ; 0]
> [0 ; 1 ; 0]
> [0 ; 0 ; 0]
>
> That is the matrix of a 2D Dirac delta function (impulse). Computing the
> TV L1 in forward difference leads to :
>
> ([0.0 ; 0.5 ; 0.0]
>  [0.5 ; 1.0 ; 0.0]
>  [0.0 ; 0.0 ; 0.0])*2
>
> Doing the same backwards leads to :
>
> ([0.0 ; 0.0 ; 0.0]
>  [0.0 ; 1.0 ; 0.5]
>  [0.0 ; 0.5 ; 0.0])*2
>
> So what happens is that, in both cases, the immediate neighbours of the noisy
> pixel are detected as somewhat noisy as well because of the first-order
> discretization, but they are not noise. That's a limit of the discrete
> computation. Also, the derivative of a Dirac delta function is supposed to
> be an even function; obviously that property is broken here. If you compute
> the L2 norm of these arrays, you get 1.22. A delta function should have a
> L2 norm = 1. Actually, the best approximation of the TV of the delta
> function would be the original delta function itself.
>
> If we average both TV norms, we get :
>
> ([0.00 ; 0.25 ; 0.00]
>   [0.25 ; 1.00 ; 0.25]
>   [0.00 ; 0.25 ; 0.00])*4
>
> So, now, we have an error on more neighbours, but smaller in magnitude and
> the TV map is now even. Also, the L2 norm of the array is now 1.12, which
> is closer to 1. So we have a better approximation of the delta derivative.
>
> With that in mind, on the 8 neighbours variant, we also compute the TV L1
> norms (average of backward and forward) on diagonals, meaning :
>
> ([0.25 ; 0.00 ; 0.25]
>   [0.00 ; 1.00 ; 0.00]
>   [0.25 ; 0.00 ; 0.25])*4/sqrt(2)
>
> And… you are right, there is a problem of normalization because we should
> divide by 4*(1 + 1/sqrt(2)) instead of 4. Then, our TV L1 map will be :
>
> [0.1036 ; 0.1464 ; 0.1036]
> [0.1464 ; 1. ; 0.1464]
> [0.1036 ; 0.1464 ; 0.1036]
>
> That's an even better approximation to the Dirac delta. Now, the L2 norm
> is 1.06. And now that I see it, that could lead to a separable kernel to
> compute the TV L1 with two 1D convolutions…
>
> I didn't plan on going full math here, but, here we are…
>
> I will correct my code soon.
>
> On 16/07/2018 at 01:51, rawfiner wrote:
>
> I went through Aurélien's study again
> I wonder why the result of TV is divided by 4 (in the case of 8 neighbors,
> "out[i, j, k] /= 4.")
>
> I guess it is kind of a normalisation.
> But as we divided the differences along diagonals by sqrt(2), the maximum
> achievable (supposing the values of the image are in [0,1], thus taking a
> difference of 1 along each direction) are:
> sqrt(1 + 1) + sqrt(1 + 1) + sqrt(1/2+1/2) + sqrt(1/2+1/2) = 2*sqrt(2) + 2
> in case of L2 norm
> 2 + 2 + 2*1/sqrt(2) + 2*1/sqrt(2) = 4 + 2*sqrt(2) in case of L1 norm
>
> So why this 4, and not 4.83 or 6.83 for the L2 and L1 norms respectively?
> Or is it just a division by the number of directions? (If so, why are the
> diagonal differences divided by sqrt(2)?)
>
> Thanks!
>
> rawfiner
>
>
> 2018-07-02 21:34 GMT+02:00 rawfiner :
>
> Thank you for all these explanations!
> Seems promising to me.
>
> Cheers,
>
> rawfiner
>
> 2018-07-01 21:26 GMT+02:00 Aurélien Pierre :
>
> You're welcome ;-)
>
> That's true : the multiplication is equivalent to an "AND" operation, the
> resulting mask has non-zero values where both the TV AND Laplacian masks
> have non-zero values, which - from my tests - is where the real noise is.
>
> That is because TV alone is too sensitive : when the image is noisy, it
> works fine, but whenever the image is clean or barely noisy, it detects
> edges as well, giving false positives for noise detection.
>
> The TV × Laplacian is a safety jacket that allows the TV to work as
> expected on noisy images (see the example) but will protect sharp edges on
> clean images (on the example, the mask barely grabs a few pixels in the
> in-focus area).
>
> I have found that the only way we could overcome the oversensitivity of
> the TV alone is by setting a window (like a band-pass filter) instead of a
> threshold (high-pass filter) because, 

Re: [darktable-dev] Re: denoise profile non local means: neighborhood parameter

2018-07-15 Thread rawfiner
I went through Aurélien's study again
I wonder why the result of TV is divided by 4 (in the case of 8 neighbors,
"out[i, j, k] /= 4.")

I guess it is kind of a normalisation.
But as we divided the differences along diagonals by sqrt(2), the maximum
achievable (supposing the values of the image are in [0,1], thus taking a
difference of 1 along each direction) are:
sqrt(1 + 1) + sqrt(1 + 1) + sqrt(1/2+1/2) + sqrt(1/2+1/2) = 2*sqrt(2) + 2
in case of L2 norm
2 + 2 + 2*1/sqrt(2) + 2*1/sqrt(2) = 4 + 2*sqrt(2) in case of L1 norm

So why this 4, and not 4.83 or 6.83 for the L2 and L1 norms respectively?
Or is it just a division by the number of directions? (If so, why are the
diagonal differences divided by sqrt(2)?)

Thanks!

rawfiner


2018-07-02 21:34 GMT+02:00 rawfiner :

> Thank you for all these explanations!
> Seems promising to me.
>
> Cheers,
>
> rawfiner
>
> 2018-07-01 21:26 GMT+02:00 Aurélien Pierre :
>
>> You're welcome ;-)
>>
>> That's true : the multiplication is equivalent to an "AND" operation, the
>> resulting mask has non-zero values where both the TV AND Laplacian masks
>> have non-zero values, which - from my tests - is where the real noise is.
>>
>> That is because TV alone is too sensitive : when the image is noisy, it
>> works fine, but whenever the image is clean or barely noisy, it detects
>> edges as well, giving false positives for noise detection.
>>
>> The TV × Laplacian is a safety jacket that allows the TV to work as
>> expected on noisy images (see the example) but will protect sharp edges on
>> clean images (on the example, the mask barely grabs a few pixels in the
>> in-focus area).
>>
>> I have found that the only way we could overcome the oversensitivity of
>> the TV alone is by setting a window (like a band-pass filter) instead of a
>> threshold (high-pass filter) because, in a noisy picture, depending on the
>> noise level, the TV values of noisy and edgy pixels are very close. From an
>> end-user perspective, this is tricky.
>>
>> Using TV × Laplacian, given that the noise stats should not vary much for
>> a given sensor at a given ISO, allows us to confidently set a simple threshold
>> as a factor of the standard deviation. It gives more reproducibility and
>> allows building presets/styles for a given camera/ISO. Assuming Gaussian
>> noise, if you set your threshold factor to X (which means "unmask
>> everything above the mean (TV × Laplacian) + X standard deviations"), you
>> know beforehand how many high-frequency pixels will be affected, no matter
>> what :
>>
>>    - X = -1 => 84 %,
>>    - X = 0 => 50 %,
>>    - X = 1 => 16 %,
>>    - X = 2 => 2.5 %,
>>    - X = 3 => 0.15 %
>>    - …
>>
>> On 01/07/2018 at 14:13, rawfiner wrote:
>>
>> Thank you for this study Aurélien
>>
>> As far as I understand, TV and Laplacians are complementary as they
>> detect noise in different regions of the image (noise in sharp edges for
>> Laplacian, noise elsewhere for TV).
>> Though, I do not understand why you multiply the TV and Laplacian results
>> to get the mask.
>> Multiplying them would result in a mask containing non-zero values only
>> for pixels that are detected as noise both by TV and Laplacian.
>> Is there a particular reason for multiplying (or did I misunderstand
>> something?), or could we take the maximum value among TV and Laplacian for
>> each pixel instead?
>>
>> Thanks again
>>
>> Cheers,
>> rawfiner
>>
>>
>> 2018-07-01 3:45 GMT+02:00 Aurélien Pierre :
>>
>>> Hi,
>>>
>>> I have done experiments on that matter and took the opportunity to
>>> correct/test further my code.
>>>
>>> So here are my attempts to code a noise mask and a sharpness mask with
>>> total variation and Laplacian norms :
>>> https://github.com/aurelienpierre/Image-Cases-Studies/blob/master/notebooks/Total%20Variation%20masking.ipynb
>>>
>>> Performance benchmarks are at the end.
>>>
>>> Cheers,
>>>
>>> Aurélien.
>>>
>>> On 17/06/2018 at 15:03, rawfiner wrote:
>>>
>>>
>>>
>>> On Sunday, 17 June 2018, Aurélien Pierre wrote:
>>>


>>>> On 13/06/2018 at 17:31, rawfiner wrote:



>>>> On Wednesday, 13 June 2018, Aurélien Pierre wrote:

>
>
>> On Thu, Jun 14, 2018 at 12:23 AM, Aurélien Pierre
>>  wrote:
>> > Hi,
>> >
>> > The problem of a 2-pass denoising method involving two different
>> > algorithms, the latter applied where the former failed, could be that
>> > the grain structure (the shape of the noise) would be different across
>> > the picture, thus very unpleasing.
>
>
> I agree that the grain structure could be different. Indeed, the grain
> could be different, but my feeling (that may be wrong) is that it would
> still be better than just no further processing, which leaves some pixels
> unprocessed (they could form grain structures far from uniform if we are
> not lucky).
> If you think it is only due to a change of algorithm, I guess we could
> apply non local means again on 

Re: [darktable-dev] Re: denoise profile non local means: neighborhood parameter

2018-07-02 Thread rawfiner
Thank you for all these explanations!
Seems promising to me.

Cheers,

rawfiner

2018-07-01 21:26 GMT+02:00 Aurélien Pierre :

> You're welcome ;-)
>
> That's true : the multiplication is equivalent to an "AND" operation, the
> resulting mask has non-zero values where both the TV AND Laplacian masks
> have non-zero values, which - from my tests - is where the real noise is.
>
> That is because TV alone is too sensitive : when the image is noisy, it
> works fine, but whenever the image is clean or barely noisy, it detects
> edges as well, giving false positives for noise detection.
>
> The TV × Laplacian is a safety jacket that allows the TV to work as
> expected on noisy images (see the example) but will protect sharp edges on
> clean images (on the example, the mask barely grabs a few pixels in the
> in-focus area).
>
> I have found that the only way we could overcome the oversensitivity of
> the TV alone is by setting a window (like a band-pass filter) instead of a
> threshold (high-pass filter) because, in a noisy picture, depending on the
> noise level, the TV values of noisy and edgy pixels are very close. From an
> end-user perspective, this is tricky.
>
> Using TV × Laplacian, given that the noise stats should not vary much for
> a given sensor at a given ISO, allows us to confidently set a simple threshold
> as a factor of the standard deviation. It gives more reproducibility and
> allows building presets/styles for a given camera/ISO. Assuming Gaussian
> noise, if you set your threshold factor to X (which means "unmask
> everything above the mean (TV × Laplacian) + X standard deviations"), you
> know beforehand how many high-frequency pixels will be affected, no matter
> what :
>
>    - X = -1 => 84 %,
>    - X = 0 => 50 %,
>    - X = 1 => 16 %,
>    - X = 2 => 2.5 %,
>    - X = 3 => 0.15 %
>    - …
>
> On 01/07/2018 at 14:13, rawfiner wrote:
>
> Thank you for this study Aurélien
>
> As far as I understand, TV and Laplacians are complementary as they detect
> noise in different regions of the image (noise in sharp edges for Laplacian,
> noise elsewhere for TV).
> Though, I do not understand why you multiply the TV and Laplacian results
> to get the mask.
> Multiplying them would result in a mask containing non-zero values only
> for pixels that are detected as noise both by TV and Laplacian.
> Is there a particular reason for multiplying (or did I misunderstand
> something?), or could we take the maximum value among TV and Laplacian for
> each pixel instead?
>
> Thanks again
>
> Cheers,
> rawfiner
>
>
> 2018-07-01 3:45 GMT+02:00 Aurélien Pierre :
>
>> Hi,
>>
>> I have done experiments on that matter and took the opportunity to
>> correct/test further my code.
>>
>> So here are my attempts to code a noise mask and a sharpness mask with
>> total variation and Laplacian norms :
>> https://github.com/aurelienpierre/Image-Cases-Studies/blob/master/notebooks/Total%20Variation%20masking.ipynb
>>
>> Performance benchmarks are at the end.
>>
>> Cheers,
>>
>> Aurélien.
>>
>> On 17/06/2018 at 15:03, rawfiner wrote:
>>
>>
>>
>> On Sunday, 17 June 2018, Aurélien Pierre wrote:
>>
>>>
>>>
>>> On 13/06/2018 at 17:31, rawfiner wrote:
>>>
>>>
>>>
>>> On Wednesday, 13 June 2018, Aurélien Pierre wrote:
>>>


> On Thu, Jun 14, 2018 at 12:23 AM, Aurélien Pierre
>  wrote:
> > Hi,
> >
> > The problem of a 2-pass denoising method involving two different
> > algorithms, the latter applied where the former failed, could be that
> > the grain structure (the shape of the noise) would be different across
> > the picture, thus very unpleasing.


 I agree that the grain structure could be different. Indeed, the grain
>>>> could be different, but my feeling (that may be wrong) is that it would
>>>> still be better than just no further processing, which leaves some pixels
 unprocessed (they could form grain structures far from uniform if we are
 not lucky).
 If you think it is only due to a change of algorithm, I guess we could
 apply non local means again on pixels where a first pass failed, but with
 different parameters to be quite confident that the second pass will work.

 That sounds better to me… but practice will have the last word.

>>>
>>> Ok :-)
>>>


> >
> > I thought maybe we could instead create some sort of total variation
> > threshold on other denoising modules :
> >
> > compute the total variation of each channel of each pixel as the
> divergence
> > divided by the L1 norm of the gradient - we then obtain a "heatmap"
> of the
> > gradients over the picture (contours and noise)
> > let the user define a total variation threshold and form a mask
> where the
> > weights above the threshold are the total variation and the weights
> below
> > the threshold are zeros (sort of a highpass filter actually)
> > apply the bilateral filter according to this 

Re: [darktable-dev] Re: denoise profile non local means: neighborhood parameter

2018-07-01 Thread Aurélien Pierre
You're welcome ;-)

That's true : the multiplication is equivalent to an "AND" operation,
the resulting mask has non-zero values where both the TV AND Laplacian
masks have non-zero values, which - from my tests - is where the real noise is.

That is because TV alone is too sensitive : when the image is noisy, it
works fine, but whenever the image is clean or barely noisy, it detects
edges as well, giving false positives for noise detection.

The TV × Laplacian is a safety jacket that allows the TV to work as
expected on noisy images (see the example) but will protect sharp edges
on clean images (on the example, the mask barely grabs a few pixels in
the in-focus area).

I have found that the only way we could overcome the oversensitivity of
the TV alone is by setting a window (like a band-pass filter) instead of
a threshold (high-pass filter) because, in a noisy picture, depending on
the noise level, the TV values of noisy and edgy pixels are very close.
From an end-user perspective, this is tricky.

Using TV × Laplacian, given that the noise stats should not vary much
for a given sensor at a given ISO, allows us to confidently set a simple
threshold as a factor of the standard deviation. It gives more
reproducibility and allows building presets/styles for a given camera/ISO.
Assuming Gaussian noise, if you set your threshold factor to X (which
means "unmask everything above the mean (TV × Laplacian) + X standard
deviations"), you know beforehand how many high-frequency pixels will be
affected, no matter what :

  * X = -1 => 84 %,
  * X = 0 => 50 %,
  * X = 1 => 16 %,
  * X = 2 => 2.5 %,
  * X = 3 => 0.15 %
  * …
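
As an illustration of that bookkeeping (a sketch under the Gaussian
assumption; tv and lap are hypothetical arrays here, this is not code
from the notebook):

import numpy as np
from scipy.stats import norm

def noise_mask(tv, lap, X):
    tv_lap = tv * lap  # the multiplication acts as the AND described above
    t = tv_lap.mean() + X * tv_lap.std()
    return (tv_lap > t).astype(np.float32)

# expected unmasked fraction for Gaussian stats: the survival function
for X in (-1, 0, 1, 2, 3):
    print(X, norm.sf(X))  # ~0.84, 0.50, 0.16, 0.023, 0.0013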

On 01/07/2018 at 14:13, rawfiner wrote:
> Thank you for this study Aurélien
>
> As far as I understand, TV and Laplacians are complementary as they
> detect noise in different regions of the image (noise in sharp edges
> for Laplacian, noise elsewhere for TV).
> Though, I do not understand why you multiply the TV and Laplacian
> results to get the mask.
> Multiplying them would result in a mask containing non-zero values
> only for pixels that are detected as noise both by TV and Laplacian.
> Is there a particular reason for multiplying (or did I misunderstand
> something?), or could we take the maximum value among TV and Laplacian
> for each pixel instead?
>
> Thanks again
>
> Cheers,
> rawfiner
>
>
> 2018-07-01 3:45 GMT+02:00 Aurélien Pierre :
>
> Hi,
>
> I have done experiments on that matter and took the opportunity to
> correct/test further my code.
>
> So here are my attempts to code a noise mask and a sharpness mask
> with total variation and Laplacian norms :
> https://github.com/aurelienpierre/Image-Cases-Studies/blob/master/notebooks/Total%20Variation%20masking.ipynb
>
> Performance benchmarks are at the end.
>
> Cheers,
>
> Aurélien.
>
>
> On 17/06/2018 at 15:03, rawfiner wrote:
>>
>>
>> On Sunday, 17 June 2018, Aurélien Pierre wrote:
>>
>>
>>
>> On 13/06/2018 at 17:31, rawfiner wrote:
>>>
>>>
>>> On Wednesday, 13 June 2018, Aurélien Pierre wrote:
>>>
>>>

>>>> On Thu, Jun 14, 2018 at 12:23 AM, Aurélien Pierre wrote:
 > Hi,
 >
>>>> > The problem of a 2-pass denoising method involving two
>>>> > different algorithms, the latter applied where the former
>>>> > failed, could be that the grain structure (the shape of the
>>>> > noise) would be different across the picture, thus very unpleasing.


 I agree that the grain structure could be different.
 Indeed, the grain could be different, but my feeling
>>>> (that may be wrong) is that it would still be better
>>>> than just no further processing, which leaves some
 pixels unprocessed (they could form grain structures
 far from uniform if we are not lucky).
 If you think it is only due to a change of algorithm, I
 guess we could apply non local means again on pixels
 where a first pass failed, but with different
 parameters to be quite confident that the second pass
 will work.
>>> That sounds better to me… but practice will have the
>>> last word.
>>>
>>>
>>> Ok :-) 
>>>
  

 >
 > I thought maybe we could instead create some sort
 of total variation
 > threshold 

Re: [darktable-dev] Re: denoise profile non local means: neighborhood parameter

2018-07-01 Thread rawfiner
Thank you for this study Aurélien

As far as I understand, TV and Laplacians are complementary as they detect
noise in different regions of the image (noise in sharp edges for Laplacian,
noise elsewhere for TV).
Though, I do not understand why you multiply the TV and Laplacian results
to get the mask.
Multiplying them would result in a mask containing non-zero values only for
pixels that are detected as noise both by TV and Laplacian.
Is there a particular reason for multiplying (or did I misunderstand
something?), or could we take the maximum value among TV and Laplacian for
each pixel instead?

Thanks again

Cheers,
rawfiner


2018-07-01 3:45 GMT+02:00 Aurélien Pierre :

> Hi,
>
> I have done experiments on that matter and took the opportunity to
> correct/test further my code.
>
> So here are my attempts to code a noise mask and a sharpness mask with
> total variation and Laplacian norms :
> https://github.com/aurelienpierre/Image-Cases-Studies/blob/master/notebooks/Total%20Variation%20masking.ipynb
>
> Performance benchmarks are at the end.
>
> Cheers,
>
> Aurélien.
>
> On 17/06/2018 at 15:03, rawfiner wrote:
>
>
>
> On Sunday, 17 June 2018, Aurélien Pierre wrote:
>
>>
>>
>> On 13/06/2018 at 17:31, rawfiner wrote:
>>
>>
>>
>> On Wednesday, 13 June 2018, Aurélien Pierre wrote:
>>
>>>
>>>
 On Thu, Jun 14, 2018 at 12:23 AM, Aurélien Pierre
  wrote:
 > Hi,
 >
>>>> > The problem of a 2-pass denoising method involving two different
>>>> > algorithms, the latter applied where the former failed, could be that
>>>> > the grain structure (the shape of the noise) would be different across
>>>> > the picture, thus very unpleasing.
>>>
>>>
>>> I agree that the grain structure could be different. Indeed, the grain
>>> could be different, but my feeling (that may be wrong) is that it would
>>> still be better than just no further processing, which leaves some pixels
>>> unprocessed (they could form grain structures far from uniform if we are
>>> not lucky).
>>> If you think it is only due to a change of algorithm, I guess we could
>>> apply non local means again on pixels where a first pass failed, but with
>>> different parameters to be quite confident that the second pass will work.
>>>
>>> That sounds better to me… but practice will have the last word.
>>>
>>
>> Ok :-)
>>
>>>
>>>
 >
 > I thought maybe we could instead create some sort of total variation
 > threshold on other denoising modules :
 >
 > compute the total variation of each channel of each pixel as the
 divergence
 > divided by the L1 norm of the gradient - we then obtain a "heatmap"
 of the
 > gradients over the picture (contours and noise)
 > let the user define a total variation threshold and form a mask where
 the
 > weights above the threshold are the total variation and the weights
 below
 > the threshold are zeros (sort of a highpass filter actually)
 > apply the bilateral filter according to this mask.
 >
 > This way, if the user wants to stack several denoising modules, he
 could
 > protect the already-cleaned areas from further denoising.
 >
 > What do you think ?
>>>
>>>
>>> That sounds interesting.
>>> This would maybe allow keeping some small variations/details that are
>>> not due to noise or not disturbing, while denoising the other parts.
>>> Also, it may be computationally interesting (depends on the complexity
>>> of the total variation computation, I don't know it), as it could reduce
>>> the number of pixels to process.
>>> I guess the user could also use something like that the other way around:
>>> to protect highly detailed zones and apply denoising only on smoother
>>> zones, in order to be able to use stronger denoising on zones that are
>>> supposed to be background blur.
>>>
>>>
>>> The noise is high frequency, so the TV (total variation) threshold will
>>> have to be high-pass only. The hypothesis behind the TV thresholding is
>>> that noisy pixels should have abnormally higher gradients than true details,
>>> so you isolate them this way. Selecting noise in low-frequency areas would
>>> additionally require something like a guided filter, which I believe is what
>>> is used in the dehaze module. The complexity of the TV computation depends
>>> on the order of accuracy you expect.
>>>
>>> A classic approximation of the gradient is using a convolution product
>>> with Sobel or Prewitt operators (3×3 arrays, very efficient, fairly
>>> accurate for edges, probably less accurate for punctual noise). I have
>>> developed optimized methods myself using 2, 4, and 8 neighbouring pixels
>>> that give higher-order accuracy, given the sparsity of the data, at the
>>> expense of computing cost :
>>> https://github.com/aurelienpierre/Image-Cases-Studies/blob/947fd8d5c2e4c3384c80c1045d86f8cf89ddcc7e/lib/deconvolution.pyx#L342
>>> (ignore the variable ut in the
>>> code, only u is relevant for us here).
>>>
>> Great, 

Re: [darktable-dev] Re: denoise profile non local means: neighborhood parameter

2018-06-30 Thread Aurélien Pierre
Hi,

I have done experiments on that matter and took the opportunity to
correct/test further my code.

So here are my attempts to code a noise mask and a sharpness mask with
total variation and Laplacian norms :
https://github.com/aurelienpierre/Image-Cases-Studies/blob/master/notebooks/Total%20Variation%20masking.ipynb

Performance benchmarks are at the end.

Cheers,

Aurélien.


On 17/06/2018 at 15:03, rawfiner wrote:
>
>
> On Sunday, 17 June 2018, Aurélien Pierre wrote:
>
>
>
>> On 13/06/2018 at 17:31, rawfiner wrote:
>>
>>
>>> On Wednesday, 13 June 2018, Aurélien Pierre wrote:
>>
>>
>>>
>>>> On Thu, Jun 14, 2018 at 12:23 AM, Aurélien Pierre wrote:
>>> > Hi,
>>> >
>>>> > The problem of a 2-pass denoising method involving two
>>>> > different algorithms, the latter applied where the former
>>>> > failed, could be that the grain structure (the shape of the
>>>> > noise) would be different across the picture, thus very unpleasing.
>>>
>>>
>>> I agree that the grain structure could be different. Indeed,
>>> the grain could be different, but my feeling (that may be
>>> wrong) is that it would be still better than just no further
>>> processing, that leaves some pixels unprocessed (they could
>>> form grain structures far from uniform if we are not lucky).
>>> If you think it is only due to a change of algorithm, I
>>> guess we could apply non local means again on pixels where a
>>> first pass failed, but with different parameters to be quite
>>> confident that the second pass will work.
>> That sounds better to me… but practice will have the last word.
>>
>>
>> Ok :-) 
>>
>>>  
>>>
>>> >
>>> > I thought maybe we could instead create some sort of
>>> total variation
>>> > threshold on other denoising modules :
>>> >
>>> > compute the total variation of each channel of each
>>> pixel as the divergence
>>> > divided by the L1 norm of the gradient - we then
>>> obtain a "heatmap" of the
>>> > gradients over the picture (contours and noise)
>>> > let the user define a total variation threshold and
>>> form a mask where the
>>> > weights above the threshold are the total variation
>>> and the weights below
>>> > the threshold are zeros (sort of a highpass filter
>>> actually)
>>> > apply the bilateral filter according to this mask.
>>> >
>>> > This way, if the user wants to stack several denoising
>>> modules, he could
>>> > protect the already-cleaned areas from further denoising.
>>> >
>>> > What do you think ?
>>>
>>>
>>> That sounds interesting.
>>> This would maybe allow keeping some small variations/details
>>> that are not due to noise or not disturbing, while denoising
>>> the other parts.
>>> Also, it may be computationally interesting (depends on the
>>> complexity of the total variation computation, I don't know
>>> it), as it could reduce the number of pixels to process.
>>> I guess the user could also use something like that the
>>> other way around: to protect highly detailed zones and apply
>>> denoising only on smoother zones, in order to be able
>>> to use stronger denoising on zones that are supposed to be
>>> background blur.
>>
>> The noise is high frequency, so the TV (total variation)
>> threshold will have to be high-pass only. The hypothesis
>> behind the TV thresholding is that noisy pixels should have
>> abnormally higher gradients than true details, so you isolate
>> them this way. Selecting noise in low-frequency areas
>> would additionally require something like a guided filter,
>> which I believe is what is used in the dehaze module. The
>> complexity of the TV computation depends on the order of
>> accuracy you expect.
>>
>> A classic approximation of the gradient is using a
>> convolution product with Sobel or Prewitt operators (3×3
>> arrays, very efficient, fairly accurate for edges, probably
>> less accurate for punctual noise). I have developed optimized
>> methods myself using 2, 4, and 8 neighbouring pixels that
>> give higher-order accuracy, given the sparsity of the data,
>> at the expense of computing cost :
>> https://github.com/aurelienpierre/Image-Cases-Studies/blob/947fd8d5c2e4c3384c80c1045d86f8cf89ddcc7e/lib/deconvolution.pyx#L342

[darktable-dev] Re: denoise profile non local means: neighborhood parameter

2018-06-17 Thread rawfiner
On Sunday, 17 June 2018, Aurélien Pierre wrote:

>
>
> On 13/06/2018 at 17:31, rawfiner wrote:
>
>
>
> On Wednesday, 13 June 2018, Aurélien Pierre wrote:
>
>>
>>
>>> On Thu, Jun 14, 2018 at 12:23 AM, Aurélien Pierre
>>>  wrote:
>>> > Hi,
>>> >
>>> > The problem of a 2-pass denoising method involving two different
>>> > algorithms, the latter applied where the former failed, could be that
>>> > the grain structure (the shape of the noise) would be different across
>>> > the picture, thus very unpleasing.
>>
>>
>> I agree that the grain structure could be different. Indeed, the grain
>> could be different, but my feeling (that may be wrong) is that it would
>> still be better than just no further processing, which leaves some pixels
>> unprocessed (they could form grain structures far from uniform if we are
>> not lucky).
>> If you think it is only due to a change of algorithm, I guess we could
>> apply non local means again on pixels where a first pass failed, but with
>> different parameters to be quite confident that the second pass will work.
>>
>> That sounds better to me… but practice will have the last word.
>>
>
> Ok :-)
>
>>
>>
>>> >
>>> > I thought maybe we could instead create some sort of total variation
>>> > threshold on other denoising modules :
>>> >
>>> > compute the total variation of each channel of each pixel as the
>>> divergence
>>> > divided by the L1 norm of the gradient - we then obtain a "heatmap" of
>>> the
>>> > gradients over the picture (contours and noise)
>>> > let the user define a total variation threshold and form a mask where
>>> the
>>> > weights above the threshold are the total variation and the weights
>>> below
>>> > the threshold are zeros (sort of a highpass filter actually)
>>> > apply the bilateral filter according to this mask.
>>> >
>>> > This way, if the user wants to stack several denoising modules, he
>>> could
>>> > protect the already-cleaned areas from further denoising.
>>> >
>>> > What do you think ?
>>
>>
>> That sounds interesting.
>> This would maybe allow keeping some small variations/details that are not
>> due to noise or not disturbing, while denoising the other parts.
>> Also, it may be computationally interesting (depends on the complexity of
>> the total variation computation, I don't know it), as it could reduce the
>> number of pixels to process.
>> I guess the user could also use something like that the other way around: to
>> protect highly detailed zones and apply denoising only on smoother zones,
>> in order to be able to use stronger denoising on zones that are supposed
>> to be background blur.
>>
>>
>> The noise is high frequency, so the TV (total variation) threshold will
>> have to be high-pass only. The hypothesis behind the TV thresholding is
>> that noisy pixels should have abnormally higher gradients than true details,
>> so you isolate them this way. Selecting noise in low-frequency areas would
>> additionally require something like a guided filter, which I believe is what
>> is used in the dehaze module. The complexity of the TV computation depends
>> on the order of accuracy you expect.
>>
>> A classic approximation of the gradient is using a convolution product
>> with Sobel or Prewitt operators (3×3 arrays, very efficient, fairly
>> accurate for edges, probably less accurate for punctual noise). I have
>> developed optimized methods myself using 2, 4, and 8 neighbouring pixels
>> that give higher-order accuracy, given the sparsity of the data, at the
>> expense of computing cost :
>> https://github.com/aurelienpierre/Image-Cases-Studies/blob/947fd8d5c2e4c3384c80c1045d86f8cf89ddcc7e/lib/deconvolution.pyx#L342
>> (ignore the variable ut in the
>> code, only u is relevant for us here).
>>
> Great, thanks for the explanations.
> Looking at the code of the 8 neighbouring pixels, I wonder if it would
> make sense to compute something like that on raw data, considering only
> neighbouring pixels of the same color?
>
>
> the RAW data are even more sparse, so the gradient can't be computed this
> way. One would have to tweak Taylor's theorem to find an expression of
> the gradient for sparse data. And that would be different for Bayer and X-Trans
> patterns. It's a bit of a conundrum.
>

Ok, thank you for these explanations


>
> Also, when talking about the mask formed from the heat map, do you mean
> that the "heat" would give for each pixel a weight to use between input and
> output? (i.e. a mask that is not only ones and zeros, but that controls how
> much input and output are used for each pixel)
> If so, I think it is a good idea to explore!
>
> yes, exactly, think of it as an opacity mask where you remap the
> user-input TV threshold and the lower values to 0, the max magnitude of TV
> to 1, and all the values in between accordingly.
>

Ok that is really cool! It seems a good idea to try to use that!
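
For reference, a minimal sketch of that remap as I understand it (my own
illustration; tv is a hypothetical array of per-pixel TV magnitudes):

import numpy as np

def tv_opacity_mask(tv, threshold):
    # threshold and below -> 0, max TV magnitude -> 1, linear in between
    mask = (tv - threshold) / max(tv.max() - threshold, 1e-9)
    return np.clip(mask, 0.0, 1.0)

# the blend would then be: out = mask * denoised + (1 - mask) * original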

rawfiner


>
>
> rawfiner
>
>>
>>
>>
>>> >
>>> > Aurélien.
>>> >
>>> >
>>> > Le 13/06/2018 à 03:16, rawfiner a 

Re: [darktable-dev] Re: denoise profile non local means: neighborhood parameter

2018-06-17 Thread Aurélien Pierre


On 13/06/2018 at 17:31, rawfiner wrote:
>
>
> On Wednesday, 13 June 2018, Aurélien Pierre wrote:
>
>>
>> On Thu, Jun 14, 2018 at 12:23 AM, Aurélien Pierre wrote:
>> > Hi,
>> >
>> > The problem of a 2-pass denoising method involving two
>> > different algorithms, the latter applied where the former
>> > failed, could be that the grain structure (the shape of the
>> > noise) would be different across the picture, thus very unpleasing.
>>
>>
>> I agree that the grain structure could be different. Indeed, the
>> grain could be different, but my feeling (that may be wrong) is
>> that it would still be better than just no further processing,
>> which leaves some pixels unprocessed (they could form grain
>> structures far from uniform if we are not lucky).
>> If you think it is only due to a change of algorithm, I guess we
>> could apply non local means again on pixels where a first pass
>> failed, but with different parameters to be quite confident that
>> the second pass will work.
> That sounds better to me… but practice will have the last word.
>
>
> Ok :-) 
>
>>  
>>
>> >
>> > I thought maybe we could instead create some sort of total
>> variation
>> > threshold on other denoising modules :
>> >
>> > compute the total variation of each channel of each pixel
>> as the divergence
>> > divided by the L1 norm of the gradient - we then obtain a
>> "heatmap" of the
>> > gradients over the picture (contours and noise)
>> > let the user define a total variation threshold and form a
>> mask where the
>> > weights above the threshold are the total variation and the
>> weights below
>> > the threshold are zeros (sort of a highpass filter actually)
>> > apply the bilateral filter according to this mask.
>> >
>> > This way, if the user wants to stack several denoising
>> modules, he could
>> > protect the already-cleaned areas from further denoising.
>> >
>> > What do you think ?
>>
>>
>> That sounds interesting.
>> This would maybe allow keeping some small variations/details that
>> are not due to noise or not disturbing, while denoising the other
>> parts.
>> Also, it may be computationally interesting (depends on the
>> complexity of the total variation computation, I don't know it),
>> as it could reduce the number of pixels to process.
>> I guess the user could also use something like that the other
>> way around: to protect highly detailed zones and apply denoising
>> only on smoother zones, in order to be able to use stronger
>> denoising on zones that are supposed to be background blur.
>
> The noise is high frequency, so the TV (total variation) threshold
> will have to be high-pass only. The hypothesis behind the TV
> thresholding is that noisy pixels should have abnormally higher
> gradients than true details, so you isolate them this way.
> Selecting noise in low-frequency areas would additionally require
> something like a guided filter, which I believe is what is used in
> the dehaze module. The complexity of the TV computation depends on
> the order of accuracy you expect.
>
> A classic approximation of the gradient is using a convolution
> product with Sobel or Prewitt operators (3×3 arrays, very
> efficient, fairly accurate for edges, probably less accurate for
> punctual noise). I have developed optimized methods myself using
> 2, 4, and 8 neighbouring pixels that give higher-order accuracy,
> given the sparsity of the data, at the expense of computing cost :
> https://github.com/aurelienpierre/Image-Cases-Studies/blob/947fd8d5c2e4c3384c80c1045d86f8cf89ddcc7e/lib/deconvolution.pyx#L342
> (ignore the variable ut in the code, only u is relevant for us here).
>
> Great, thanks for the explanations.
> Looking at the code of the 8 neighbouring pixels, I wonder if it would
> make sense to compute something like that on raw data, considering only
> neighbouring pixels of the same color?

The RAW data are even more sparse, so the gradient can't be computed
this way. One would have to tweak Taylor's theorem to find an
expression of the gradient for sparse data. And that would be different for
Bayer and X-Trans patterns. It's a bit of a conundrum.
>
> Also, when talking about the mask formed from the heat map, do you
> mean that the "heat" would give for each pixel a weight to use between
> input and output? (i.e. a mask that is not only ones and zeros, but
> that controls 

[darktable-dev] Re: denoise profile non local means: neighborhood parameter

2018-06-17 Thread rawfiner
Here are some of the RAW files I use to test the changes I make to
denoising modules (including the one I used as an exemple in the beginning
of this conversation):
https://drive.google.com/open?id=11LxZWpZbS66m7vFdcoIHNTiG20JnwlJT
The reference-jpg folder contains the JPGs produced by the camera for these
raws (except for 2 of the RAWs for which I don't have the reference JPG).
I also use several other RAW files to test, but unfortunately I cannot
upload them, as either they were not made by me, or they are photos of
people.

These are really noisy pictures, as I would like to be able to easily
process such pictures in darktable and to reach levels of quality similar
to or better than the cameras'.
Hope it will help.

If you have noisy photos you would like to share too, I'd like to have them
as my database of noisy pictures is a little biased (the majority of photos in
my little "noisy database" are from my own cameras, a Lumix FZ1000 and a Fuji
XT20 and I'd like to have more photos from other cameras)

Thanks!

rawfiner



2018-06-13 23:31 GMT+02:00 rawfiner :

>
>
> On Wednesday, 13 June 2018, Aurélien Pierre wrote:
>
>>
>>
>>> On Thu, Jun 14, 2018 at 12:23 AM, Aurélien Pierre
>>>  wrote:
>>> > Hi,
>>> >
>>> > The problem of a 2-pass denoising method involving two different
>>> > algorithms, the latter applied where the former failed, could be that
>>> > the grain structure (the shape of the noise) would be different across
>>> > the picture, thus very unpleasing.
>>
>>
>> I agree that the grain structure could be different. Indeed, the grain
>> could be different, but my feeling (that may be wrong) is that it would
>> still be better than just no further processing, which leaves some pixels
>> unprocessed (they could form grain structures far from uniform if we are
>> not lucky).
>> If you think it is only due to a change of algorithm, I guess we could
>> apply non local means again on pixels where a first pass failed, but with
>> different parameters to be quite confident that the second pass will work.
>>
>> That sounds better to me… but practice will have the last word.
>>
>
> Ok :-)
>
>>
>>
>>> >
>>> > I thought maybe we could instead create some sort of total variation
>>> > threshold on other denoising modules :
>>> >
>>> > compute the total variation of each channel of each pixel as the
>>> divergence
>>> > divided by the L1 norm of the gradient - we then obtain a "heatmap" of
>>> the
>>> > gradients over the picture (contours and noise)
>>> > let the user define a total variation threshold and form a mask where
>>> the
>>> > weights above the threshold are the total variation and the weights
>>> below
>>> > the threshold are zeros (sort of a highpass filter actually)
>>> > apply the bilateral filter according to this mask.
>>> >
>>> > This way, if the user wants to stack several denoising modules, he
>>> could
>>> > protect the already-cleaned areas from further denoising.
>>> >
>>> > What do you think ?
>>
>>
>> That sounds interesting.
>> This would maybe allow keeping some small variations/details that are not
>> due to noise or not disturbing, while denoising the other parts.
>> Also, it may be computationally interesting (depends on the complexity of
>> the total variation computation, I don't know it), as it could reduce the
>> number of pixels to process.
>> I guess the user could also use something like that the other way around: to
>> protect highly detailed zones and apply denoising only on smoother zones,
>> in order to be able to use stronger denoising on zones that are supposed
>> to be background blur.
>>
>>
>> The noise is high frequency, so the TV (total variation) threshold will
>> have to be high-pass only. The hypothesis behind the TV thresholding is
>> that noisy pixels should have abnormally higher gradients than true details,
>> so you isolate them this way. Selecting noise in low-frequency areas would
>> additionally require something like a guided filter, which I believe is what
>> is used in the dehaze module. The complexity of the TV computation depends
>> on the order of accuracy you expect.
>>
>> A classic approximation of the gradient is using a convolution product
>> with Sobel or Prewitt operators (3×3 arrays, very efficient, fairly
>> accurate for edges, probably less accurate for punctual noise). I have
>> developed optimized methods myself using 2, 4, and 8 neighbouring pixels
>> that give higher-order accuracy, given the sparsity of the data, at the
>> expense of computing cost :
>> https://github.com/aurelienpierre/Image-Cases-Studies/blob/947fd8d5c2e4c3384c80c1045d86f8cf89ddcc7e/lib/deconvolution.pyx#L342
>> (ignore the variable ut in the
>> code, only u is relevant for us here).
>>
>> Great, thanks for the explanations.
> Looking at the code of the 8 neighbouring pixels, I wonder if it would
> make sense to compute something like that on raw data, considering only
> neighbouring pixels of the same color?
>
> Also, when talking about the mask formed from the 

[darktable-dev] Re: denoise profile non local means: neighborhood parameter

2018-06-13 Thread rawfiner
On Wednesday, 13 June 2018, Aurélien Pierre wrote:

>
>
>> On Thu, Jun 14, 2018 at 12:23 AM, Aurélien Pierre
>>  wrote:
>> > Hi,
>> >
>> > The problem of a 2-pass denoising method involving two different
>> > algorithms, the latter applied where the former failed, could be that the
>> > grain structure (the shape of the noise) would be different across the
>> > picture, thus very unpleasing.
>
>
> I agree that the grain structure could be different. Indeed, the grain
> could be different, but my feeling (that may be wrong) is that it would
> still be better than just no further processing, which leaves some pixels
> unprocessed (they could form grain structures far from uniform if we are
> not lucky).
> If you think it is only due to a change of algorithm, I guess we could
> apply non local means again on pixels where a first pass failed, but with
> different parameters to be quite confident that the second pass will work.
>
> That sounds better to me… but practice will have the last word.
>

Ok :-)

>
>
>> >
>> > I thought maybe we could instead create some sort of total variation
>> > threshold on other denoising modules :
>> >
>> > compute the total variation of each channel of each pixel as the
>> divergence
>> > divided by the L1 norm of the gradient - we then obtain a "heatmap" of
>> the
>> > gradients over the picture (contours and noise)
>> > let the user define a total variation threshold and form a mask where
>> the
>> > weights above the threshold are the total variation and the weights
>> below
>> > the threshold are zeros (sort of a highpass filter actually)
>> > apply the bilateral filter according to this mask.
>> >
>> > This way, if the user wants to stack several denoising modules, he could
>> > protect the already-cleaned areas from further denoising.
>> >
>> > What do you think ?
>
>
> That sounds interesting.
> This would maybe allow keeping some small variations/details that are not
> due to noise or not disturbing, while denoising the other parts.
> Also, it may be computationally interesting (depends on the complexity of
> the total variation computation, I don't know it), as it could reduce the
> number of pixels to process.
> I guess the user could also use something like that the other way around: to
> protect highly detailed zones and apply denoising only on smoother zones,
> in order to be able to use stronger denoising on zones that are supposed
> to be background blur.
>
>
> The noise is high frequency, so the TV (total variation) threshold will
> have to be high-pass only. The hypothesis behind the TV thresholding is
> that noisy pixels should have abnormally higher gradients than true details,
> so you isolate them this way. Selecting noise in low-frequency areas would
> additionally require something like a guided filter, which I believe is what
> is used in the dehaze module. The complexity of the TV computation depends
> on the order of accuracy you expect.
>
> A classic approximation of the gradient is using a convolution product
> with Sobel or Prewitt operators (3×3 arrays, very efficient, fairly
> accurate for edges, probably less accurate for punctual noise). I have
> developed optimized methods myself using 2, 4, and 8 neighbouring pixels
> that give higher-order accuracy, given the sparsity of the data, at the
> expense of computing cost :
> https://github.com/aurelienpierre/Image-Cases-Studies/blob/947fd8d5c2e4c3384c80c1045d86f8cf89ddcc7e/lib/deconvolution.pyx#L342
> (ignore the variable ut in the code, only u is relevant for us
> here).
>
> Great, thanks for the explanations.
Looking at the code of the 8 neighbouring pixels, I wonder if it would make
sense to compute something like that on raw data, considering only
neighbouring pixels of the same color?
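
To make the question concrete, here is a hedged sketch of what I mean
(assuming a Bayer mosaic, where same-color neighbours along the main axes
sit 2 pixels apart; just an idea, not worked-out code):

import numpy as np

def same_color_tv(raw):
    tv = np.zeros_like(raw)
    # forward differences between same-color sites (Bayer period = 2,
    # so the pixel distance to divide by is 2)
    tv[:, :-2] += np.abs(raw[:, 2:] - raw[:, :-2]) / 2.0
    tv[:-2, :] += np.abs(raw[2:, :] - raw[:-2, :]) / 2.0
    return tv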

Also, when talking about the mask formed from the heat map, do you mean
that the "heat" would give for each pixel a weight to use between input and
output? (i.e. a mask that is not only ones and zeros, but that controls how
much input and output are used for each pixel)
If so, I think it is a good idea to explore!

rawfiner

>
>
>
>> >
>> > Aurélien.
>> >
>> >
>> > On 13/06/2018 at 03:16, rawfiner wrote:
>> >
>> > Hi,
>> >
>> > I don't have the feeling that increasing K is the best way to improve
>> > noise reduction anymore.
>> > I will upload the raw next week (if I don't forget to), as I am not at
>> home
>> > this week.
>> > My feeling is that doing non local means on raw data gives a much bigger
>> > improvement than that.
>> > I still have to work on it yet.
>> > I am currently testing some raw downsizing ideas to allow fast
>> > execution of the algorithm.
>> >
>> > Apart from that, I also think that to improve noise reduction such as the
>> > denoise profile in nlm mode and the denoise non local means, we could do
>> > a 2-pass algorithm, with non local means applied first, and then a
>> > bilateral filter (or median filter or something else) applied only on
>> > pixels where non
>> > local 

Re: [darktable-dev] Re: denoise profile non local means: neighborhood parameter

2018-06-13 Thread Aurélien Pierre

On 13/06/2018 at 14:48, rawfiner wrote:
>
> On Wednesday, 13 June 2018, johannes hanika wrote:
>
> hi,
>
> that doesn't sound like a bad idea at all. for what it's worth, in
> practice the nlmeans doesn't let any grain at all through due to the
> piecewise constant prior that it's based on. well, only in regions
> where it finds enough other patches that is. in the current
> implementation with a radius of 7 that is not always the case.
>
>
> That's precisely the type of grain that I thought to try to tackle
> with a 2-pass approach.
> When the image is very noisy, it is quite frequent to have pixels
> without enough other patches.
> It sometimes forces me to raise the strength sliders, resulting in an
> overly smoothed image.
> The idea is to give the user the choice of how to handle these pixels,
> either by leaving them as they are, or by using another denoising
> algorithm so that they integrate better with their surroundings.
> Anyway, I guess I may try that and come back after some results to
> discuss whether it's worth it or not ;-)
>  
>
>
> also, i usually use some blending to add the input buffer back on top
> of the output. this essentially leaves the grain alone but tones it
> down.
>
>
> I do the same ;-)
Me too
>  
>
>
> cheers,
>  jo
>
>
> On Thu, Jun 14, 2018 at 12:23 AM, Aurélien Pierre wrote:
> > Hi,
> >
> > The problem of a 2-pass denoising method involving two different
> > algorithms, the latter applied where the former failed, could be
> > that the grain structure (the shape of the noise) would be different
> > across the picture, thus very unpleasing.
>
>
> I agree that the grain structure could be different. Indeed, the grain
> could be different, but my feeling (that may be wrong) is that it
> would still be better than just no further processing, which leaves
> some pixels unprocessed (they could form grain structures far from
> uniform if we are not lucky).
> If you think it is only due to a change of algorithm, I guess we could
> apply non local means again on pixels where a first pass failed, but
> with different parameters to be quite confident that the second pass
> will work.
That sounds better to me… but practice will have the last word.
>  
>
> >
> > I thought maybe we could instead create some sort of total variation
> > threshold on other denoising modules :
> >
> > compute the total variation of each channel of each pixel as the
> divergence
> > divided by the L1 norm of the gradient - we then obtain a
> "heatmap" of the
> > gradients over the picture (contours and noise)
> > let the user define a total variation threshold and form a mask
> where the
> > weights above the threshold are the total variation and the
> weights below
> > the threshold are zeros (sort of a highpass filter actually)
> > apply the bilateral filter according to this mask.
> >
> > This way, if the user wants to stack several denoising modules,
> he could
> > protect the already-cleaned areas from further denoising.
> >
> > What do you think ?
>
>
> That sounds interesting.
> This would maybe allow keeping some small variations/details that are
> not due to noise or not disturbing, while denoising the other parts.
> Also, it may be computationally interesting (depends on the complexity
> of the total variation computation, I don't know it), as it could
> reduce the number of pixels to process.
> I guess the user could also use something like that the other way around:
> to protect highly detailed zones and apply denoising only on smoother
> zones, in order to be able to use stronger denoising on zones
> that are supposed to be background blur.

The noise is high frequency, so the TV (total variation) threshold will
have to be high-pass only. The hypothesis behind the TV thresholding is
that noisy pixels should have abnormally high gradients compared to true
details, so you can isolate them this way. Selecting noise in
low-frequency areas would additionally require something like a guided
filter, which I believe is what the dehaze module uses. The complexity
of the TV computation depends on the order of accuracy you expect.

A classic approximation of the gradient is a convolution product with
Sobel or Prewitt operators (3×3 arrays, very efficient, fairly accurate
for edges, probably less accurate for punctual noise). I have myself
developed optimized methods using 2, 4, and 8 neighbouring pixels that
give higher-order accuracy, given the sparsity of the data, at the
expense of computing cost:
https://github.com/aurelienpierre/Image-Cases-Studies/blob/947fd8d5c2e4c3384c80c1045d86f8cf89ddcc7e/lib/deconvolution.pyx#L342
(ignore the variable ut in the code, only u is relevant for us here).
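
As an illustration of the thresholding idea (a sketch of my own in
Python with NumPy/SciPy, not code from darktable nor from the
repository above), using plain Sobel gradients and the L1 norm of the
gradient as the "heatmap"; a divergence-based variant would only change
the tv line:

    import numpy as np
    from scipy.ndimage import sobel

    def tv_highpass_mask(img, threshold):
        # img: float array of shape (height, width, channels).
        # Pixels whose total variation exceeds the threshold keep
        # their TV value; all others are set to zero (the "highpass"
        # selection of suspected noisy pixels).
        mask = np.zeros_like(img)
        for c in range(img.shape[-1]):
            gx = sobel(img[..., c], axis=1)  # du/dx
            gy = sobel(img[..., c], axis=0)  # du/dy
            tv = np.abs(gx) + np.abs(gy)     # L1 norm of the gradient
            mask[..., c] = np.where(tv > threshold, tv, 0.0)
        return mask

Inverting the test (tv <= threshold) would give the reverse usage
mentioned above: protecting detailed zones and denoising only the
smooth ones.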

>
> rawfiner
>
>  
>
> >
> > Aurélien.
> >
> >
> > Le 

[darktable-dev] Re: denoise profile non local means: neighborhood parameter

2018-06-13 Thread rawfiner
On Wednesday, June 13, 2018, johannes hanika wrote:

> hi,
>
> that doesn't sound like a bad idea at all. for what it's worth, in
> practice the nlmeans doesn't let any grain at all through due to the
> piecewise constant prior that it's based on. well, only in regions
> where it finds enough other patches that is. in the current
> implementation with a radius of 7 that is not always the case.


That's precisely the type of grain that I thought of tackling with a
second pass.
When the image is very noisy, it is quite frequent to have pixels without
enough other patches.
It sometimes forces me to raise the strength sliders, resulting in an
overly smoothed image.
The idea is to give the user the choice of how to handle these pixels:
either leave them as they are, or use another denoising algorithm so that
they integrate better with their surroundings.
Anyway, I guess I may try that and come back with some results to discuss
whether it's worth it or not ;-)


>
> also, i usually use some blending to add the input buffer back on top
> of the output. this essentially leaves the grain alone but tones it
> down.


I do the same ;-)


>
> cheers,
>  jo
>
>
> On Thu, Jun 14, 2018 at 12:23 AM, Aurélien Pierre
>  wrote:
> > Hi,
> >
> > The problem of a 2-pass denoising method involving 2 different
> > algorithms, the latter applied where the former failed, could be that
> > the grain structure (the shape of the noise) would differ across the
> > picture, which is very unpleasing.


I agree that the grain structure could be different, but my feeling
(which may be wrong) is that it would still be better than no further
processing, which leaves some pixels unprocessed (they could form grain
structures far from uniform if we are not lucky).
If the mismatch is only due to a change of algorithm, I guess we could
apply non local means again on pixels where the first pass failed, but
with different parameters, to be quite confident that the second pass
will work.
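
To make the two-pass idea concrete, here is a rough Python sketch (the
nlm() helper, its signature, and the parameter values are hypothetical;
this is not darktable's implementation). It assumes the first pass also
returns the per-pixel sum of patch weights, so that the second pass
only replaces pixels where that sum is close to zero:

    import numpy as np

    # nlm(img, h, radius) is assumed to return (denoised, weight_sum),
    # where weight_sum is the per-pixel sum of patch weights.
    def two_pass_denoise(img, nlm, weight_floor=1e-3):
        first, wsum = nlm(img, h=0.5, radius=7)
        # Pixels with a near-zero weight sum found no suitable patches.
        failed = wsum < weight_floor
        # Second pass with different parameters (larger search radius,
        # stronger smoothing), used only where the first pass failed.
        second, _ = nlm(img, h=1.0, radius=15)
        return np.where(failed[..., None], second, first)

The same structure would work with a bilateral or median filter as the
second pass instead of non local means with different parameters.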


> >
> > I thought maybe we could instead create some sort of total variation
> > threshold on other denoising modules:
> >
> > - compute the total variation of each channel of each pixel as the
> >   divergence divided by the L1 norm of the gradient - we then obtain
> >   a "heatmap" of the gradients over the picture (contours and noise)
> > - let the user define a total variation threshold and form a mask
> >   where the weights above the threshold are the total variation and
> >   the weights below the threshold are zeros (sort of a highpass
> >   filter actually)
> > - apply the bilateral filter according to this mask.
> >
> > This way, if the user wants to stack several denoising modules, he
> > could protect the already-cleaned areas from further denoising.
> >
> > What do you think?


That sounds interesting.
This would maybe allow keeping some small variations/details that are not
due to noise, or not disturbing, while denoising the other parts.
Also, it may be computationally interesting (depending on the complexity
of the total variation computation, which I don't know), as it could
reduce the number of pixels to process.
I guess the user could also use something like that the other way around:
to protect highly detailed zones and apply the denoising only on quite
smooth zones, in order to be able to use stronger denoising on zones that
are supposed to be background blur.

rawfiner



> >
> > Aurélien.
> >
> >
> > On 13/06/2018 at 03:16, rawfiner wrote:
> >
> > Hi,
> >
> > I don't have the feeling that increasing K is the best way to improve
> > noise reduction anymore.
> > I will upload the raw next week (if I don't forget to), as I am not
> > at home this week.
> > My feeling is that doing non local means on raw data gives a much
> > bigger improvement than that.
> > I still have to work on it, though.
> > I am currently testing some raw downsizing ideas to allow a fast
> > execution of the algorithm.
> >
> > Apart from that, I also think that to improve noise reduction such as
> > denoise profile in nlm mode and denoise non local means, we could do
> > a 2-pass algorithm, with non local means applied first, and then a
> > bilateral filter (or median filter or something else) applied only on
> > pixels where non local means failed to find suitable patches (i.e.
> > pixels where the sum of weights was close to 0).
> > The user would have a slider to adjust this setting.
> > I think that it would make it easier to have a "uniform" output (i.e.
> > an output where noise has been reduced quite uniformly).
> > I have not tested this idea yet.
> >
> > Cheers,
> > rawfiner
> >
> > On Monday, June 11, 2018, johannes hanika wrote:
> >>
> >> hi,
> >>
> >> i was playing with noise reduction presets again and tried the large
> >> neighbourhood search window. on my shots i could very rarely spot a
> >> difference at all increasing K above 7, and even less so going above
> >> 10. the image you posted earlier did show quite a 

[darktable-dev] Re: denoise profile non local means: neighborhood parameter

2018-01-26 Thread rawfiner
Oh, OK, sorry for that...
rawfiner

On Friday, January 26, 2018, Terry Duell wrote:

> On Sat, 27 Jan 2018 05:34:24 +1100, rawfiner wrote:
>
>> Thank you for your answer. I perfectly agree with the fact that the
>> GUI should not become overcomplicated.
>>
>
> ...and neither should large attachments (9 MB) be sent directly to a
> mailing list.
> Please use a link to large files instead of attaching them; not
> everyone wants or needs to download them.
>
> Cheers,
> --
> Regards,
> Terry Duell
