On Wed, Dec 21, 2016 at 11:27 AM, J. Liles <malnour...@gmail.com> wrote:

>
>
> On Wed, Dec 21, 2016 at 11:20 AM, J. Liles <malnour...@gmail.com> wrote:
>
>>
>>
>> On Wed, Dec 21, 2016 at 11:17 AM, Ingo Liebhardt <ingo.liebha...@ziggo.nl> wrote:
>>
>>> Ah, and by the way, @J Liles: could you please explain in a bit more detail
>>> what you mean by 'textile-like artifact'? I'd like to investigate that one a
>>> bit more in depth.
>>> Thx
>>>
>>>
>>> On 21.12.2016 at 20:10, Ingo Liebhardt <ingo.liebha...@ziggo.nl> wrote:
>>>
>>> Hi all,
>>>
>>> Thanks a lot for the feedback, and no worries if it takes you a while to
>>> test it.
>>> As you see, I’m also progressing rather slowly on my side…
>>>
>>> It’s still a proof of concept, and I have quite a few items on my to-do
>>> list, most notably:
>>> - the literature mentions training the filters on reference images, and I’m
>>> slowly working on this, hoping that it will further increase image quality.
>>> So far, the filters are designed using the window design method (see the
>>> short sketch after this list).
>>> - trying to find out where the hue shifts come from - I already noticed
>>> them, too.
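>>>
>>> (To make the first item concrete: the window design method truncates an
>>> ideal, infinitely long low-pass impulse response and tapers it with a
>>> window. A minimal 1-D sketch with a Hamming window follows; the tap count
>>> and cutoff are placeholder values, and the filters in the branch are 2-D,
>>> so this only shows the shape of the idea.)
>>>
>>> #include <math.h>
>>> #include <stdio.h>
>>>
>>> #ifndef M_PI
>>> #define M_PI 3.14159265358979323846
>>> #endif
>>>
>>> /* windowed-sinc low-pass: taps should be odd, cutoff in (0, 0.5) cycles/px */
>>> static void windowed_sinc(float *h, int taps, float cutoff)
>>> {
>>>   const int m = taps - 1;
>>>   float sum = 0.f;
>>>   for(int n = 0; n < taps; n++)
>>>   {
>>>     const float x = n - m / 2.0f;
>>>     /* ideal low-pass impulse response, with the x == 0 limit handled */
>>>     const float sinc = (x == 0.f)
>>>       ? 2.f * cutoff
>>>       : sinf(2.f * (float)M_PI * cutoff * x) / ((float)M_PI * x);
>>>     /* Hamming window tapers the truncated sinc to reduce ripple */
>>>     const float w = 0.54f - 0.46f * cosf(2.f * (float)M_PI * n / m);
>>>     h[n] = sinc * w;
>>>     sum += h[n];
>>>   }
>>>   for(int n = 0; n < taps; n++) h[n] /= sum; /* unity DC gain */
>>> }
>>>
>>> int main(void)
>>> {
>>>   float h[21];
>>>   windowed_sinc(h, 21, 0.125f); /* placeholder length and cutoff */
>>>   for(int n = 0; n < 21; n++) printf("h[%d] = %f\n", n, h[n]);
>>>   return 0;
>>> }
>>>
>>> The training-based approach would replace such fixed coefficients with ones
>>> fitted to reference images, which is what I am working on.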
>>>
>>> Other things like performance improvements will be for later…
>>>
>>> I’ll let you know as soon as I make progress on the filters.
>>>
>>> Cheers,
>>> Ingo
>>>
>>>
>>>
>>> On 21.12.2016 at 01:14, J. Liles <malnour...@gmail.com> wrote:
>>>
>>>
>>>
>>> On Mon, Dec 19, 2016 at 7:40 PM, J. Liles <malnour...@gmail.com> wrote:
>>>
>>>>
>>>>
>>>> On Mon, Dec 12, 2016 at 10:40 AM, Ingo Liebhardt <ingo.liebha...@ziggo.nl> wrote:
>>>>
>>>>> Hi all,
>>>>>
>>>>> Maybe you still remember that I tried an alternative approach to
>>>>> X-Trans demosaicking (using guided filtering) in March / April this year…
>>>>> In the end, I was not satisfied, and I gave up on that approach. The
>>>>> problems were comparable to those of the Markesteijn algorithm, and the
>>>>> improvements were marginal.
>>>>>
>>>>> After giving up on that approach, I was again browsing conference
>>>>> papers trying to get some inspiration.
>>>>> I came across the work of E. Dubois, which looked promising.
>>>>> It is indeed promising: not so much when applied alone, but very much so
>>>>> when combined with a gradient-based approach like Markesteijn.
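>>>>>
>>>>> Very roughly, the idea there is that in the CFA signal the chroma is
>>>>> modulated onto spatial carrier frequencies, so it can be recovered by
>>>>> demodulating (multiplying by the carrier) and low-pass filtering. Here is
>>>>> a toy 1-D sketch of just that idea, with a made-up signal and a simple
>>>>> binomial low-pass; the real X-Trans carriers and filters are more involved:
>>>>>
>>>>> #include <stdio.h>
>>>>>
>>>>> #define N 32
>>>>>
>>>>> int main(void)
>>>>> {
>>>>>   /* binomial low-pass [1 4 6 4 1]/16: it has a zero at the carrier frequency */
>>>>>   const float w[5] = { 1.f/16, 4.f/16, 6.f/16, 4.f/16, 1.f/16 };
>>>>>   float f[N];
>>>>>
>>>>>   /* toy "CFA" signal: smooth luma plus smooth chroma riding on a carrier */
>>>>>   for(int n = 0; n < N; n++)
>>>>>   {
>>>>>     const float L = 0.5f + 0.002f * n;
>>>>>     const float C = 0.1f + 0.001f * n;
>>>>>     f[n] = L + C * ((n & 1) ? -1.f : 1.f);
>>>>>   }
>>>>>
>>>>>   /* demodulate (multiply by the carrier), then low-pass to estimate chroma */
>>>>>   for(int n = 2; n < N - 2; n++)
>>>>>   {
>>>>>     float c_est = 0.f;
>>>>>     for(int k = -2; k <= 2; k++)
>>>>>     {
>>>>>       const int m = n + k;
>>>>>       c_est += w[k + 2] * f[m] * ((m & 1) ? -1.f : 1.f);
>>>>>     }
>>>>>     printf("n=%2d  true=%.4f  estimated=%.4f\n", n, 0.1f + 0.001f * n, c_est);
>>>>>   }
>>>>>   return 0;
>>>>> }
>>>>>
>>>>> In 2-D there are several such carriers, and a gradient-based approach like
>>>>> Markesteijn can help decide which of them to trust locally.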
>>>>>
>>>>> I like Jo’s xtrans fringes profile a lot, but the colors get somewhat
>>>>> muted, overall.
>>>>>
>>>>> In contrast to my first approach, this one finally seems to give
>>>>> reasonable results.
>>>>> I managed to get good output for the redline bug #10333.
>>>>> You can have a look here: dropbox link
>>>>> <https://www.dropbox.com/sh/un1y11uimbqxjjk/AAD3L-Rs9-ztwyBIm4rnCzK-a?dl=0>
>>>>>
>>>>> This is the output just with demosaic + base curve, nothing else.
>>>>>
>>>>> If you want to try some nasty X-Trans images yourself, I made a little
>>>>> proof of concept.
>>>>> It comes in the form of a fork of darktable, which you can find here:
>>>>> https://github.com/ILiebhardt/darktable.git
>>>>> To try it, just compile, deactivate OpenCL (there is only C code thus far),
>>>>> and choose '1 pass Markesteijn' as the demosaicking method (it doesn't work
>>>>> for 3-pass, and wouldn't really yield an advantage there, either).
>>>>>
>>>>> Have fun trying, and let me know if you think this one is worth pursuing
>>>>> further (it is only a quick hack so far, and the correlation filters used
>>>>> are a slow, naive O(m n p q) implementation).
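>>>>>
>>>>> For illustration (this is not the code in the branch, just the shape of a
>>>>> naive direct correlation and why it costs O(m n p q) for an m x n image
>>>>> and a p x q kernel):
>>>>>
>>>>> /* naive 2-D correlation with border clamping -- one multiply-add per
>>>>>  * image pixel per kernel tap, hence O(m*n*p*q) */
>>>>> static void correlate2d(const float *img, int m, int n,
>>>>>                         const float *ker, int p, int q, float *out)
>>>>> {
>>>>>   for(int y = 0; y < m; y++)        /* m image rows  */
>>>>>     for(int x = 0; x < n; x++)      /* n image cols  */
>>>>>     {
>>>>>       float acc = 0.f;
>>>>>       for(int i = 0; i < p; i++)    /* p kernel rows */
>>>>>         for(int j = 0; j < q; j++)  /* q kernel cols */
>>>>>         {
>>>>>           int yy = y + i - p / 2;
>>>>>           int xx = x + j - q / 2;
>>>>>           if(yy < 0) yy = 0; else if(yy >= m) yy = m - 1;
>>>>>           if(xx < 0) xx = 0; else if(xx >= n) xx = n - 1;
>>>>>           acc += img[yy * n + xx] * ker[i * q + j];
>>>>>         }
>>>>>       out[y * n + x] = acc;
>>>>>     }
>>>>> }
>>>>>
>>>>> Making the filters separable, or going through an FFT, would be the obvious
>>>>> way to speed this up later.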
>>>>>
>>>>> If you’d like to read some basics concerning the idea, I made a
>>>>> mini-blog here: http://xtransdemosaicking.blogspot.nl
>>>>>
>>>>> Cheers,
>>>>> Ingo
>>>>>
>>>>>
>>>>> P.S.: concerning my previous approach, J Liles spotted single-pixel
>>>>> artifacts. I found out that these are not related to the demosaicking as
>>>>> such. X-Trans II and X-Trans III have hybrid AF, and the pixels used for
>>>>> phase detection show higher noise. These are always green pixels that are
>>>>> part of a group of four greens; never a red or blue, and never a solitary
>>>>> green. But solving this would be a whole different project...
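>>>>>
>>>>> (If anyone wants to experiment: one conceivable pre-demosaic fix, purely
>>>>> hypothetical and not in the branch, would be to take a mask of the affected
>>>>> green sites and replace each one with the median of the ordinary green
>>>>> samples around it, roughly like this:)
>>>>>
>>>>> #include <stdlib.h>
>>>>>
>>>>> static int cmp_float(const void *a, const void *b)
>>>>> {
>>>>>   const float fa = *(const float *)a, fb = *(const float *)b;
>>>>>   return (fa > fb) - (fa < fb);
>>>>> }
>>>>>
>>>>> /* raw: mosaiced data; pdaf_mask: 1 where a phase-detection green sits;
>>>>>  * is_green: 1 where the CFA has a green photosite (both hypothetical inputs) */
>>>>> static void suppress_pdaf_noise(float *raw, const unsigned char *pdaf_mask,
>>>>>                                 const unsigned char *is_green,
>>>>>                                 int width, int height)
>>>>> {
>>>>>   for(int y = 0; y < height; y++)
>>>>>     for(int x = 0; x < width; x++)
>>>>>     {
>>>>>       if(!pdaf_mask[y * width + x]) continue;
>>>>>       float neigh[49];
>>>>>       int count = 0;
>>>>>       /* collect ordinary (non-PDAF) green neighbours in a 7x7 window */
>>>>>       for(int dy = -3; dy <= 3; dy++)
>>>>>         for(int dx = -3; dx <= 3; dx++)
>>>>>         {
>>>>>           const int yy = y + dy, xx = x + dx;
>>>>>           if(yy < 0 || yy >= height || xx < 0 || xx >= width) continue;
>>>>>           const int idx = yy * width + xx;
>>>>>           if(is_green[idx] && !pdaf_mask[idx]) neigh[count++] = raw[idx];
>>>>>         }
>>>>>       if(count == 0) continue;
>>>>>       qsort(neigh, count, sizeof(float), cmp_float);
>>>>>       raw[y * width + x] = neigh[count / 2]; /* replace with the median */
>>>>>     }
>>>>> }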
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>> Ingo,
>>>>
>>>> Great to hear you're still working on this!
>>>>
>>>> I haven't reviewed the code of the algorithm, but I did give it a try
>>>> on a few images.
>>>>
>>>> Here's one in particular (lots of sharpening added to make the
>>>> differences more obvious):
>>>>
>>>> http://www.nevermindhim.com/liebhardt-test
>>>>
>>>> Direct image links:
>>>>
>>>> http://www.nevermindhim.com/files/liebhardt-test/6acffe60-09c5-11e6-93d7-178612e3e7eb_E1_VNG.png
>>>> http://www.nevermindhim.com/files/liebhardt-test/6acffe60-09c5-11e6-93d7-178612e3e7eb_E1_Markesteijn.png
>>>> http://www.nevermindhim.com/files/liebhardt-test/6acffe60-09c5-11e6-93d7-178612e3e7eb_E1_Liebhardt.png
>>>>
>>>>
>>>> My first impressions are:
>>>>
>>>> 1) It's slow (obviously you know this).
>>>> 2) It introduces a hue shift.
>>>> 3) It does a better job of controlling color noise than VNG or
>>>> Markesteijn.
>>>> 4) Artifacts are similar in structure to Markesteijn's (maze-like).
>>>> 5) There is an additional textile-like artifact that Markesteijn
>>>> doesn't exhibit.
>>>> 6) It overshoots when interpolating across gradients, but not as much as
>>>> VNG does.
>>>>
>>>> If you can get rid of the textile effect and the color cast, and speed it
>>>> up, this looks like it would be an improvement over Markesteijn (with no
>>>> color smoothing/noise reduction). It's already looking more "film-like".
>>>>
>>>>
>>>>
>>> Replying to myself here...
>>>
>>> Added another set of images to:
>>>
>>> http://www.nevermindhim.com/liebhardt-test
>>>
>>> (TEST IMAGE 2)
>>>
>>> This time correcting for the hue shift (with auto white balance).
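>>>
>>> (For anyone unfamiliar with the term: in its crudest form, an automatic
>>> white balance amounts to a gray-world correction, i.e. scaling red and blue
>>> so their channel means match green. A toy sketch of that idea only, not
>>> darktable's code:)
>>>
>>> /* gray-world style balance on an interleaved RGB buffer */
>>> static void gray_world_wb(float *rgb, int npixels)
>>> {
>>>   double sum[3] = { 0.0, 0.0, 0.0 };
>>>   for(int i = 0; i < npixels; i++)
>>>     for(int c = 0; c < 3; c++)
>>>       sum[c] += rgb[3 * i + c];
>>>   if(sum[0] <= 0.0 || sum[2] <= 0.0) return;
>>>   const double gain_r = sum[1] / sum[0]; /* bring the red mean to the green mean */
>>>   const double gain_b = sum[1] / sum[2]; /* bring the blue mean to the green mean */
>>>   for(int i = 0; i < npixels; i++)
>>>   {
>>>     rgb[3 * i + 0] *= (float)gain_r;
>>>     rgb[3 * i + 2] *= (float)gain_b;
>>>   }
>>> }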
>>>
>>> I wanted to illustrate how it deals with a high ISO (12800) image, with
>>> and without noise reduction and sharpening.
>>>
>>> As you can see, the result is a definite improvement, especially the
>>> noise-reduced version. There may be a slight loss of sharpness, but for me
>>> it's worth it to get rid of those crusty false colors.
>>>
>>> However, whether even this is better than the SOOC JPEG (NR -4,
>>> Sharpness 0) is debatable. It seems like the maximum NR in darktable
>>> is required to produce a result similar to the minimum NR in camera...
>>>
>>>
>>>
>>>
>>>
>>>
>> In TEST IMAGE 1, look at the blue TV screen behind the subject's head.
>> You can see a textile/grid-type effect that isn't really there. This
>> effect doesn't appear with VNG or Markesteijn. It looks like your
>> weightings might be off, causing the X-Trans pattern to show through when
>> interpolating solid colors.
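>>
>> If it helps to narrow this down, one quick check (a hypothetical diagnostic,
>> not existing code) would be to average the demosaicked green channel over a
>> flat patch separately for each of the 36 positions of the 6x6 X-Trans repeat.
>> If the interpolation were unbiased, the 36 means would agree; a periodic
>> spread between them is exactly what shows up as that textile/grid look.
>>
>> #include <stdio.h>
>>
>> /* print per-CFA-position green means over a flat patch of the RGB output */
>> static void xtrans_showthrough(const float *rgb, int width,
>>                                int x0, int y0, int w, int h)
>> {
>>   double sum[6][6] = {{0}};
>>   int cnt[6][6] = {{0}};
>>   for(int y = y0; y < y0 + h; y++)
>>     for(int x = x0; x < x0 + w; x++)
>>     {
>>       sum[y % 6][x % 6] += rgb[3 * (y * width + x) + 1]; /* green channel */
>>       cnt[y % 6][x % 6]++;
>>     }
>>   for(int j = 0; j < 6; j++)
>>   {
>>     for(int i = 0; i < 6; i++)
>>       printf("%8.5f ", cnt[j][i] ? sum[j][i] / cnt[j][i] : 0.0);
>>     printf("\n");
>>   }
>> }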
>>
>>
> Just to add to this, other points of interest in this image for finding
> artifacts are the saturated purple lights in the upper right, and the edges
> of the TV screen and the subject's hair. Of particular interest is the
> serial number on the dollar bill. It should be dark green (the same color
> as the stamp/seal above). Too-aggressive chroma denoising may make it turn
> light gray/green like the rest of the bill.
>
>

Continuing my habit of replying to myself, I was curious how your algorithm
would handle the dreaded X-Trans II/III "purple flare/grid artifact"
problem (which, AFAICT, remains unsolved everywhere). The result
is interesting. Your algorithm completely removes the purple color cast of
the flare, resulting in, IMHO, a much more pleasing appearance. However,
the grid aspect remains:

http://www.nevermindhim.com/files/liebhardt-test/a75767d6-cc7c-11e6-95bd-739c86278d6a_E1_Liebhardt.png

___________________________________________________________________________
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org
