Re: [darktable-dev] different meanings of "feather" in darktable

2021-01-06 Thread Aurélien Pierre
I would go for softening in drawn masks and feathering in the 
variance-thresholded edge detection of the guided filter.


What happens in drawn masks happens only outside the main outline, while
the guided filter is much less predictable in its behaviour (it could be
in, could be out, could cross the line too).


Cheers,

A.

On 05/01/2021 at 19:12, David Vincent-Jones wrote:


So .. a question: is feathering applied to both sides of the base
outline, or does it only affect the 'smoothing' area outside of the base?



On 2021-01-05 8:13 a.m., DougC wrote:

"Feathering amount" might be better than "feather size".





On Tue, 05 Jan 2021 11:04:32 -0500, Mica Semrick wrote:


"Feather size" is still awkward. It could be changed to "feathering" as well.

Re: [darktable-dev] Feature request: Change modules order in UI

2020-12-12 Thread Aurélien Pierre
darktable modules are not merely GUI widgets, they are pixel filters, 
or adjustment layers if you come from Adobe.


Do you go into Photoshop or GIMP and move layers on the stack because you
prefer to have them sorted in alphabetical order?


On Sat, 12 Dec 2020 at 11:07, Wiktor Nowak wrote:


I personally don't need this, but I also don't see a problem with adding
an option to reorder modules only in the UI without affecting the
pipeline. If some users would find this helpful, then why not?

On 12.12.2020 at 10:57, Martin Straeten wrote:
> This doesn’t make sense, since the order represents the order in which
the modules are applied in the pixel pipe. The output of the lower
module is the input of the upper. This is important to understand
what’s happening in the modules.
> You’re able to change the displayed order by ctrl+shift+drag, but
this also changes the order of the module instance in the pipe.
> If it’s not relevant to take care of the module order in the
pixel pipe, then you don’t really need darktable - and you
certainly don’t belong to the target group for which darktable was
developed.
>
>> On 12.12.2020 at 10:20, Wiktor Nowak wrote:
>>
>> Hello
>>
>> Under one of my Polish darktable tutorial videos, someone asked about
>> an option to change the order of modules in the UI, to order them
>> according to the order of actions preferred by the user. I find this
>> quite simple to develop and potentially useful functionality.
>>
>> best regards
>> Wiktor.
>>

--
Regards,
Hubert Kowalski


Re: [darktable-dev] Meaning of one of the strings

2020-11-15 Thread Aurélien Pierre

Hi,

the deprecated modules have alternatives. These alternatives solve the 
original design mistakes of the deprecated ones. (Full story: fixing 
the originals would break compatibility of old edits, so we have to 
start fresh.)


Cheers,

Aurélien.

On 16/11/2020 at 00:43, Jeronimo Pellegrini wrote:

Hello;

This string was recently added to darktable:

"following modules are deprecated because they have internal design mistakes"
"that can't be solved and alternatives that solve them.\n"
"they will be removed for new edits in next release."

What exactly does the second line mean, specifically the
part after the "and"?

I suppose it would be "and there are alternatives"?

Thank you!
J.

Re: [darktable-dev] idea to consider

2020-08-30 Thread Aurélien Pierre
1. that still doesn't give you the jpeg cooking recipe, which is more
complicated than building an ad-hoc LUT or a tonecurve if local filters
are applied (and there are),

2. what is it with people editing jpegs? That's nonsensical! Not the
same workflow, not the same maths, not the same filters, not the same
pipeline, not the same software. A raw is an output-agnostic, linearly
encoded master picture that you can still salvage from the beginning; a
jpeg is already non-linearly cooked for display, assuming dim viewing
conditions (as per the sRGB standard) and firmware blackboxes.

People need to stop dealing with image processing as if everyone was
right and correctness didn't matter. It's not a silly magic game of
pixel values; there are assumptions to assert under the hood.
Sometimes I wish image processing could kill people, as civil
engineering or medicine do, so people would start taking it seriously
and check the theory before doing shit carelessly. That kind of
silly workflow will blow up in your face 50% of the time, because there is
zero reliability in handling pre-baked jpegs, with all the firmware
discrepancies, in a software designed to unroll image operations on raw
files. Then I will let you deal with users who don't understand why the
workflow is so unpredictable.

Indulging users' bad habits is not a solution, especially since we
don't sell anything/whore ourselves out. Let's be rigorous about pixel
operations and do things properly. Want to edit jpegs? Use bloody
Photoshop and the likes. They are good at doing shit, don't care about
color consistency, don't care about light emissions, don't even do
associated alpha occlusion properly. Yet people love them because
marketing expenses make up for dev mediocrity and overall stupidity.

On 30/08/2020 at 17:05, Jason Polak wrote:
> Another possibility is shooting Raw+JPEG. With this option you can still
> tune the JPEG engine in the camera somewhat with most models, and if you
> need to edit you can go to the Raw file. Also, you can make some minor
> edits directly on a JPEG file. Also, for people who are already working
> in JPEG, why would they suddenly switch to Raw format just to batch
> process Raw files to get what the camera would give them anyway?
>
> On 30/08/2020 08.22, Chris Shelton wrote:
>> Some photographers work in jpg only, and I would not question their
>> creative ability while maybe questioning the technical choice,
>> although I suppose they do avoid the mistake of overworking and
>> gilding the lily.
>>
>> I thought that the appeal of darktable could be expanded to include
>> people like this by having a facility whereby raw images are batch
>> processed to create a set of jpg images, in a similar way to the way
>> jpg images are created by in-camera processing, maybe in the light
>> table module; there would still be the option to work on a raw image as
>> normal.

Re: [darktable-dev] idea to consider

2020-08-30 Thread Aurélien Pierre
Hi,

Camera firmware that converts raw to jpeg internally relies on proprietary
and undisclosed algorithms. Emulating these is not merely a game of
tuning input parameters: we would need to reverse-engineer what algorithms
they use, in what color space, and in which order they are applied, and
repeat that for every picture profile × camera model × camera brand.

Not to mention, manufacturers use their internal sensor knowledge to
fine-tune their algorithms, and we are in no position to compete, for
simple lack of access to this knowledge.

darktable already ships with basic "basecurves" which emulate the
overall tone/color intent of each brand using only its standard image
profile, but they are limited, often inaccurate, and no quality control
is performed on them.

If people want to quickly get jpegs from their raws, any camera is
provided with some proprietary piece of software that has much more
educated guesses built-in than darktable, and knows about the camera
firmware, such that you could get your camera jpeg back from your
computer in no time.

So I would say this: darktable is not a firmware emulator. We don't
have the resources to be one; we already have a half-baked emulation
method, for what it's worth, and then what use would it be to mimic
something that your camera already does better? Designing a consistent
raw workflow is enough work as it is; trying to take into account all
the firmware discrepancies is simply madness in our context.

Cheers,

Aurélien.

On 30/08/2020 at 14:22, Chris Shelton wrote:
>
> Some photographers work in jpg only, and I would not question their
> creative ability while maybe questioning the technical choice,
> although I suppose they do avoid the mistake of overworking and
> gilding the lily.
>
> I thought that the appeal of darktable could be expanded to include
> people like this by having a facility whereby raw images are batch
> processed to create a set of jpg images, in a similar way to the way
> jpg images are created by in-camera processing, maybe in the light
> table module; there would still be the option to work on a raw image
> as normal.
>
> I'm a bit overworked myself at the moment and struggle with RSI, as you
> may deduce from the voice recognition I use. I did do C and C++ in
> the 90s, though it would take a while to get up to speed; maybe one day.

Re: [darktable-dev] filmic v.4 and exposure compensation bias in git master

2020-06-11 Thread Aurélien Pierre
I need to presume things in order to provide consistent and reasonable
workflows from start to end. You get the scene-linear one, based on
linear editing with exposure and filmic; the display-linear one,
based on legacy modules and module order (and base curve); or,
alternatively, no workflow at all, where you set everything yourself.

I guess the question could be reformulated as: why can't the user shoot
himself in the foot faster?

Pick a consistent workflow from the start, and abide by it. There is no
point in trying to interleave unrelated things, unless you want to make
it worse for everyone.

There are technical reasons at play here that require thinking of the
pipeline as a whole, and not just as a stack of individual modules
allowing local shortcuts to spare 2 clicks at the expense of degrading
color for all the upcoming modules and image processing steps in the pipe.

Global exposure needs to be fixed before the input profile for people who
use LUT ICC profiles or other LUT-based modules, and to help gamut
mapping and chromatic adaptation. Fixing it later is a nasty hack that
may work for people as long as they do lightweight editing with few
demands, but it is not safe in general.

Technique is not a democracy: it is dictated by the tools you use,
and if you don't adapt to them, they will not adapt to you. Workflows
are not a matter of opinion or personal preference, but a way to cope
with technical requirements in the least painful way. Having more
choices to shoot yourself in the foot is only having the illusion of choice.

Cheers,

Aurélien.

On 11/06/2020 at 20:54, Alexander Rabtchevich wrote:
> You presume a user chooses one of the approaches and uses it full time.
> In that case the exposure is added automatically. And what if a user
> mostly uses the basecurve approach and from time to time wants to make a
> scene-referred variant? He needs to turn off the basecurve, turn on the
> filmic module (that is logical) and add a +1EV exposure shift. The first
> two actions can be done easily via a custom style, but if one adds
> exposure to the style, the previous exposure value will be replaced by
> the value from the style (1.0). And the image will require adjusting the
> exposure again.
>
> Why can't the module itself apply a +1EV signal boost at its input, just
> as the +1EV exposure does in the pipeline before filmic? Is there a
> difference in the results?
>
> With respect,
> Alexander Rabtchevich
>
> Aurélien Pierre wrote:
>>
>> The required +1 EV exposure boost is now applied automatically in
>> exposure module for new edits if you choose the scene-linear workflow
>> in preferences -> processing -> auto-apply pixel workflow default.
>>
>> Filmic v4 imposes 18% scene to be remapped to 18% display no matter
>> what, resulting in preservation of overall brightness through the
>> process. Previous Filmic v3 remapped 9% scene to 18% display to better
>> match OOC JPEG, which creates problems with the curve (contrast
>> behaviour degrades if grey is not more or less centered on the graph).
>>
>> Current v4 behaviour is to slide the image dynamic range along the
>> luminance axis with exposure module, to anchor scene grey to display
>> grey. Once this is done, filmic compresses the bounds of the dynamic
>> range by rolling *around* the grey value, used as a reference, and
>> therefore preserving the overall brightness of the image. This makes
>> the transform more predictable and removes the scene grey param in the GUI,
>> which has confused too many users.
>>
>> See : https://www.youtube.com/watch?v=leZVK2s68QA
>>
>> Cheers,
>>
>> Aurélien.
>>
>> On 10/06/2020 at 08:19, David Vincent-Jones wrote:
>>>
>>> The change that I see in v. 4.0 is that all of my images appear 1
>>> stop underexposed as compared to the previous version. Earlier I did
>>> not need to adjust exposure at all, now I must boost the exposure on
>>> all images.
>>>
>>> David
>>>
>>> On 2020-06-09 11:46 a.m., Alexander Rabtchevich wrote:
>>>> Hm, as I understand, if I enable filmic plugin v 4.0 manually, I
>>>> need to
>>>> add +1.0EV manually too? That wasn't so with 3.0.
>>>>
>>>>  With respect,
>>>>  Alexander Rabtchevich
>>>>
>>>>
>>>>
>>>> Aurélien Pierre wrote:
>>>>>
>>>>> Hi,
>>>>>
>>>>> there is a +1 EV added by default in exposure if you enable the
>>>>> scene-referred workflow. I don't understand what you mean. The
>>>>> exposure bias is read in image EXIF, if the bias is 0 EV, then the
>>>>> setting is 0 EV too.

Re: [darktable-dev] filmic v.4 and exposure compensation bias in git master

2020-06-11 Thread Aurélien Pierre
The required +1 EV exposure boost is now applied automatically in the
exposure module for new edits, if you choose the scene-linear workflow in
preferences -> processing -> auto-apply pixel workflow default.

Filmic v4 imposes 18% scene to be remapped to 18% display no matter
what, resulting in preservation of overall brightness through the
process. Previous Filmic v3 remapped 9% scene to 18% display to better
match OOC JPEG, which creates problems with the curve (contrast
behaviour degrades if grey is not more or less centered on the graph).

Current v4 behaviour is to slide the image dynamic range along the
luminance axis with exposure module, to anchor scene grey to display
grey. Once this is done, filmic compresses the bounds of the dynamic
range by rolling *around* the grey value, used as a reference, and
therefore preserving the overall brightness of the image. This makes the
transform more predictable and removes the scene grey param from the GUI, which
has confused too many users.
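
To make the grey anchoring concrete, here is a minimal numeric sketch in C
(not darktable's actual code, just the arithmetic described above): exposure
is a plain linear gain of 2^EV, and the EV needed to anchor a measured scene
grey onto 18% follows directly.

    #include <math.h>

    /* exposure is a linear gain: +1 EV doubles every pixel value */
    static inline float apply_exposure(const float pixel, const float ev)
    {
      return pixel * powf(2.0f, ev);
    }

    /* EV needed so that the measured scene grey lands on 0.18, which
     * filmic v4 then maps straight to 18% display grey */
    static inline float ev_for_grey(const float measured_grey)
    {
      return log2f(0.18f / measured_grey);
    }

For instance, a raw metered with middle grey at 9% needs
log2(0.18 / 0.09) = +1 EV, which is exactly the boost applied by the
scene-linear preset.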

See: https://www.youtube.com/watch?v=leZVK2s68QA

Cheers,

Aurélien.

On 10/06/2020 at 08:19, David Vincent-Jones wrote:
>
> The change that I see in v. 4.0 is that all of my images appear 1 stop
> underexposed as compared to the previous version. Earlier I did not
> need to adjust exposure at all, now I must boost the exposure on all
> images.
>
> David
>
> On 2020-06-09 11:46 a.m., Alexander Rabtchevich wrote:
>> Hm, as I understand, if I enable filmic plugin v 4.0 manually, I need to
>> add +1.0EV manually too? That wasn't so with 3.0.
>>
>>  With respect,
>>  Alexander Rabtchevich
>>
>>
>>
>> Aurélien Pierre wrote:
>>>
>>> Hi,
>>>
>>> there is a +1 EV added by default in exposure if you enable the
>>> scene-referred workflow. I don't understand what you mean. The
>>> exposure bias is read in image EXIF, if the bias is 0 EV, then the
>>> setting is 0 EV too.
>>>
>>> Cheers,
>>>
>>> Aurélien.
>>>
>>> On 09/06/2020 at 19:46, Alexander Rabtchevich wrote:
>>>> Hello
>>>> Exposure compensation bias for my Sony A99 is 0EV and images are too
>>>> dark. They require adding +1EV to match filmic v.3
>>>>
>>>> Current git master
>>>>
>>>> With respect,
>>>> Alexander Rabtchevich

Re: [darktable-dev] filmic v.4 and exposure compensation bias in git master

2020-06-09 Thread Aurélien Pierre
Hi,

there is a +1 EV added by default in exposure if you enable the
scene-referred workflow. I don't understand what you mean. The exposure
bias is read from the image EXIF; if the bias is 0 EV, then the setting
is 0 EV too.

Cheers,

Aurélien.

On 09/06/2020 at 19:46, Alexander Rabtchevich wrote:
> Hello
> Exposure compensation bias for my Sony A99 is 0EV and images are too
> dark. They require adding +1EV to match filmic v.3
>
> Current git master
>
> With respect,
> Alexander Rabtchevich

Re: [darktable-dev] problem compiling darktable 3.0.2

2020-06-02 Thread Aurélien Pierre
Hi,

this looks like your compiler is too old to support the schedule(simd:static)
clause. What compiler and version are you using?
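
For reference, the failing construct looks like the sketch below (simplified
from the dt_simd_memcpy error quoted further down, not the actual source).
schedule(simd:static) is an OpenMP 4.5 modifier, so a compiler that only
knows older OpenMP rejects it with exactly this kind of "invalid schedule
kind" error:

    #include <stddef.h>

    /* minimal reproduction: an OpenMP 4.5 simd schedule; older compilers
     * only accept schedule(static) */
    void simd_copy(const float *const in, float *const out, const size_t n)
    {
    #ifdef _OPENMP
    #pragma omp parallel for simd schedule(simd:static) aligned(in, out : 64)
    #endif
      for(size_t k = 0; k < n; k++) out[k] = in[k];
    }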


Aurélien.


On 02/06/2020 at 09:06, Peter Johnston wrote:
> Hello,
>
> I am having a problem building darktable 3.0.2. I seem to be able to
> complete the cmake procedure OK, but then I get the following errors
> using make:
>
> [ 15%] Building C object
> src/CMakeFiles/lib_darktable.dir/common/collection.c.o
> [ 15%] Building C object
> src/CMakeFiles/lib_darktable.dir/common/color_picker.c.o
> In file included from
> /home/peter/fromnet/darktable-3.0.2/src/common/color_picker.c:24:0:
> /home/peter/fromnet/darktable-3.0.2/src/develop/imageop_math.h: In
> function ‘dt_simd_memcpy’:
> /home/peter/fromnet/darktable-3.0.2/src/develop/imageop_math.h:166:10:
> error: invalid schedule kind before ‘simd’
>  schedule(simd:static) aligned(in, out:64)
>           ^
> src/CMakeFiles/lib_darktable.dir/build.make:268: recipe for target
> 'src/CMakeFiles/lib_darktable.dir/common/color_picker.c.o' failed
>
> Unfortunately, I am new to darktable and this kind of platform
> creation. I am using Ubuntu 18.04.
>
> Any advice would be much appreciated. If I can supply any further
> information, please let me know.
>
> Thank you,
>
> Peter.
>
> Associate Professor Peter Johnston (FAustMS, FIMA)
> School of Environment and Science
> *Griffith University *| Nathan | QLD 4111 | Technology (N44) Room 3.19
> T +61 7 373 57748| F +61 7 373 57656 Email p.johns...@griffith.edu.au
> 
>
> ___
> darktable developer mailing list to unsubscribe send a mail to
> darktable-dev+unsubscr...@lists.darktable.org

___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org

Re: [darktable-dev] HSL and RGB scene-referred space

2020-04-16 Thread Aurélien Pierre
Hello,

 1. you go from RGB to XYZ using the ICC profile of the RGB space you
are using, which gives you the coeffs of the 3×3 matrix RGB -> XYZ.
Call that matrix M. Do a matrix dot product [XYZ] = [M] * [RGB]
 2. you go from XYZ to Yxy using x = X / (X + Y + Z) and y = Y / (X + Y + Z)
 3. you subtract from the x and y values those of the white point you are
using (usually D50): x_D50 = 0.34567 and y_D50 = 0.35850, for the
2° CIE 1931 observer. So
x = X / (X + Y + Z) - x_D50 and y = Y / (X + Y + Z) - y_D50,
 4. then, hue = arctan2(y, x), saturation = sqrt(x^2 + y^2)
 5. since it's only to produce a boolean mask, you don't need to bother
about the back transform. But in case you need it:
 1. x = saturation * cos(hue) + x_D50, y = saturation * sin(hue) + y_D50
 2. X = x * Y / y, Y = Y, Z = (1 - x - y) * Y / y
 3. and then, again, matrix dot product [RGB] = [M^-1] * [XYZ] where
[M^-1] is the inverse of the matrix M formed with the ICC
profile coefficients.
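
Putting the forward steps together, a minimal C sketch (assuming the 3×3
matrix M has already been read from the ICC profile; the division-by-zero
guard is my addition for pure black pixels):

    #include <math.h>

    static const float x_D50 = 0.34567f, y_D50 = 0.35850f;

    static void rgb_to_hue_sat(const float rgb[3], const float M[3][3],
                               float *hue, float *sat)
    {
      /* step 1: RGB -> XYZ through the profile matrix */
      float XYZ[3] = { 0.0f, 0.0f, 0.0f };
      for(int i = 0; i < 3; i++)
        for(int j = 0; j < 3; j++)
          XYZ[i] += M[i][j] * rgb[j];

      /* steps 2-3: XYZ -> xy chromaticity, recentred on the white point */
      const float sum = fmaxf(XYZ[0] + XYZ[1] + XYZ[2], 1e-15f);
      const float x = XYZ[0] / sum - x_D50;
      const float y = XYZ[1] / sum - y_D50;

      /* step 4: polar coordinates around the white point */
      *hue = atan2f(y, x);
      *sat = sqrtf(x * x + y * y);
    }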

Cheers,

Aurélien.

On 11/04/2020 at 18:23, Harold le Clément wrote:
> Hello,
>
> Thanks a lot for the feedback.
>
> Agree, the problem would be only when trying to use the saturation for
> parametric masking (as the color picker HSL mode is only used for
> that).
>
> Perhaps a cylindrical HSY could be used, but it depends how the
> saturation is computed (most of the information I could find on the
> Internet suppose that the RGB values are bounded).
>
> Anyway, in the mean time I could just disregard the saturation selector.
>
> Thanks,
>
> Harold

Re: [darktable-dev] Moving left panel modules in the darkroom

2020-01-22 Thread Aurélien Pierre
Hi,

image-processing modules *are not moved* up and down *in the GUI* but
*in the pixel pipe*, so that changes the actual order of application of
the corresponding filters. You are in for a lot of trouble if you handle
that as a workspace widget-reordering feature.

Reordering the (non image-processing) modules as workspace management
might raise 2 concerns:

  * a need for a global workspace/widgets layout editor (Qt apps do that
a lot, GTK is not super keen),
  * possible confusion among users, because left-panel reordering will
mean workspace reordering, but in the right panel it will mean pipe
reordering. 2 different behaviours for seemingly identical
graphical widgets in the same app is not good UX design.

Cheers,

Aurélien.

On 22/01/2020 at 14:33, Florian W wrote:
> Hi guys, after a few months here is the usability nerd again :)
>
> I wondered if there was any reason (other than historical I mean) for
> the ordering of the left panel modules in the darkroom.
>
> I can see good reasons to put the snapshot and history modules at the
> top of the panel, however I'm a bit puzzled to see the mask manager at
> the bottom and the duplicate manager at a higher position.
>
> Different people have different workflows and want to organize their
> workspace differently according to it.
>
> Wouldn't it be great if the left panel modules could be moved up and
> down like we can do in the right panel now?
>
> I mean, I usually use the mask manager way more often than the
> duplicate manager (which I basically don't use, as there's Ctrl+D to do
> the job).
>
> It's been a while since I've been into the DT code base, but I shall
> have a look to do that.
>
> Cheers and keep up the good work 
>
> Florian
>

Re: [darktable-dev] Filmic RGB

2020-01-03 Thread Aurélien Pierre
Hi Bruce,

the abscissa are log scaled, the ordinates are display-gamma scaled.

Happy New Year!

Aurélien.

On 03/01/2020 at 20:08, Bruce Williams wrote:
> Aurélien,
> Just seeking some clarification. Is the read-only graph in filmic RGB
> linear or log?
> Thanks in advance.
> Cheers,
> Bruce Williams.
>

Re: [darktable-dev] HDR and SDR

2019-12-25 Thread Aurélien Pierre
Hi,

… and that's where the image processing pipeline mixed with people's
misunderstandings backfires.

What's HDR here? The scene or the display?

The image only encodes a scene-referred light emission between 0 and 1.
That scene-referred light emission has infinite dynamic range, with a
more limited region of interest for human vision. The encoded recording
by the camera does not have an infinite DR, but still has a dynamic range
much wider than any display. In this encoding, each bit can only encode
1 EV of dynamic range. So, 8 bits => 8 EV, and bit depth => dynamic range.
Using a tone curve/gamma/OETF only redistributes the bits more evenly
over that dynamic range.

The point you make is valid: an SDR file can be an HDR scene baked
into an SDR image adapted for SDR display. Yet we don't care: an 8-bit
file has a contrast of 255:1, a 10-bit one has a contrast of 1023:1, no
matter the scene that it actually encodes. It would appear that any bit
depth > 8 bits is to be assumed HDR until proved otherwise.

So the issue seems more general to me than mapping "tonecurve" to
"dynamic range". The classical imaging pipeline expects 100% RGB (255 in
8 bits) to encode 100 Cd/m² (display white point), and (18%
RGB)^(1/gamma) to encode middle grey. We can say a file is HDR if 100%
RGB encodes more than 100 Cd/m², in which case the colour management
needs to apply a highlight roll-off to tonemap the image for SDR
displays. From what I understand, the PQ and HLG tone curves are just
fancy maths tricks to optimize the distribution of bits over the dynamic
range at high bit depths to reduce posterization. They don't tonemap
anything in themselves. So these transfer functions can't be assumed to
be linked to HDR files all the time; they are only encodings (same as
the sRGB gamma, but more refined).

Say your 16-bit file (contrast of 65535:1) encodes SDR (65535 = 100
Cd/m²): to display it on a regular monitor (contrast of 255:1), you
simply need to rescale every pixel linearly (× 255 / 65535). If it
encodes HDR (65535 = 200, 400, 1000… Cd/m²), you will probably rescale
linearly below middle grey, and roll off progressively to compress
highlights into the [117; 255] display-referred range.
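
As an illustration only (following the numbers above; the shoulder curve is
an arbitrary example, not a standard transfer function — and display gamma
encoding is omitted, which is why middle grey sits near code 46 here instead
of 117):

    #include <math.h>

    /* SDR: pure linear rescale from 16-bit to 8-bit codes */
    static unsigned char sdr_16_to_8(const unsigned short v)
    {
      return (unsigned char)lroundf(v * 255.0f / 65535.0f);
    }

    /* HDR: linear below middle grey, then roll off the highlights.
     * white_scale = encoded white / 100 Cd/m², e.g. 2.0f for 200 Cd/m². */
    static unsigned char hdr_16_to_8(const unsigned short v,
                                     const float white_scale)
    {
      const float grey = 0.18f;
      const float lin = (v / 65535.0f) * white_scale;
      float out;
      if(lin <= grey)
        out = lin;                          /* untouched below middle grey */
      else
      {
        const float t = (lin - grey) / (white_scale - grey);   /* 0..1 */
        out = grey + (1.0f - grey) * 2.0f * t / (1.0f + t);    /* shoulder */
      }
      return (unsigned char)lroundf(fminf(out, 1.0f) * 255.0f);
    }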

My point here is we need to be absolutely clear about *what* is encoded
(scene-referred dynamic range), *for what* it is encoded
(display-referred dynamic range), and *how* it is encoded (OETF + bit
depth).

Bit depth and OETF are stored in the ICC metadata of images, but the
scene-referred dynamic range should be deduced from the white point
luminance tag if provided (if white > 100 Cd/m² then HDR; else SDR),
and I don't know for the display-referred dynamic range (whether the file
has been saved with a tonemapping already, or the colour management needs
to roll off highlights on the fly before sending to the GPU memory buffer).

Merry Christmas,

Aurélien.

On 18/12/2019 at 07:59, Wiktor Nowak wrote:
> Why think of bit depth in terms of dynamic range? Dynamic range is the
> range between the brightest and darkest parts of an image with no clipping.
> An HDR image could be 16, 10, 8 or whatever bit depth or format, with a
> base curve applied or not, I think...
>
> On 17.12.2019 at 14:24, Andreas Schneider wrote:
>> On Tuesday, 17 December 2019 14:12:48 CET Sturm Flut wrote:
>>> Hi Andreas,
>>>
>>> On 16.12.19 20:13, Andreas Schneider wrote:
 sRGB -> SDR
 AdobeRGB -> SDR

 PQ Rec2020 -> HDR
 HLG Rec2020 -> HDR

 Does that make sense? I could look into that next.
>>> Where do RAW files fit into this definition? They have no color space.
>>>
>>> A 16 Bit AdobeRGB out-of-camera TIFF file might have more dynamic range
>>> than 10 Bit Rec.2020 data. Color space alone might not be a sufficient
>>> measure.
>> If you have a better idea, I'm open to suggestions!
>>
>> All TIFF files are currently set to SDR ...
>>
>>
>>  Andreas
>>

Re: Fwd: [darktable-dev] Feature freeze for 3.0

2019-12-02 Thread Aurélien Pierre
Hello Isabelle and William and everyone,

is this arrangement still on? I have started emptying my brain here (in
English):
https://www.youtube.com/playlist?list=PL4EYo8VotTsiZLr3BqGeBRj-qYGO63bIv

Thanks,

Aurélien.

PS: Yes, I'm French :-)

On 18/08/2019 at 11:17, Isabelle Hurbain-Palatin wrote:
> Hi all,
> If there's an easy way to split the work and to share it, I'd be more
> than happy to help - it's more than time I start contributing to the
> awesomeness that is Darktable, and I do have a fair amount of
> experience writing doc (although English is not my native language, so
> English proofreading would be required). Videos explaining what's new
> and how it works would be tremendously helpful! (this is mostly how I
> work when I write our software documentation these days at work...).
> I'm assuming individual module documentation at least can be delegated
> individually.
> @Aurélien - considering your name, I'm assuming you're a native French
> speaker - if the work is divided beforehand and it's easier for you,
> you could provide info in French to me as well.
> Best regards,
> Isabelle
>
> On Sun, Aug 18, 2019 at 11:06 AM Aurélien Pierre wrote:
>
> Hi,
>
> for the last 2.6 version, it took me a full-time week to write the doc
> for 3 modules, trying to accurately describe how they worked in an
> accessible way, in my 3rd language. Instead, would it be helpful if
> I recorded videos showing what's new and how it works (possibly in
> a disorganized/non-linear order), and let you put the information
> in shape for a documentation? Not that I want to give you the bad
> job, but I might be more useful on the debugging team, as the
> release approaches. I'm still happy to do proof-reading.
>
> Cheers,
>
> Aurélien.
>
> Le 18/08/2019 à 06:34, Bruce Williams a écrit :
>> Thanks, I'll check it out!
>>
>> Cheers,
>> Bruce Williams.
>>
>> -- Forwarded message -
>> From: Ariel Kanterewicz
>> Date: Sun., 18 Aug. 2019, 09:06
>> Subject: Re: [darktable-dev] Feature freeze for 3.0
>> To: darktable-dev@lists.darktable.org
>>
>>
>> @Bruce: if you're willing to pay for it, there's a *really* nice
>> git course on pluralsight, it's the best intro to the topic I've
>> found
>> yet: https://app.pluralsight.com/library/courses/how-git-works/
>>
>> I'm excited to see the feature list in the new release,
>> especially the Culling mode :-D
>>
>> On Sat, 17 Aug 2019 at 22:38, Bruce Williams wrote:
>>
>> I will certainly do what I can.
>> As yet, I have not delved into the world of github, pull
>> requests, merges, and so forth.
>> Nor compiling!
>> But when I get home from holiday (another 8 days), that is on
>> my list of things to get up to speed on. 
>>
>> Cheers,
>> Bruce Williams.
>>
>> -- Forwarded message -
>> From: Ulrich Pegelow
>> Date: Sun., 18 Aug. 2019, 01:31
>> Subject: Re: [darktable-dev] Feature freeze for 3.0
>> To: darktable-dev@lists.darktable.org
>>
>>
>> Hi,
>>
>> frankly speaking I am still using the 2.6 branch and have no
>> experience
>> of any of the 2.7 features at all. I would be happy if
>> someone closer to
>> the recent development (Bruce?) could take over the manual.
>>
>> Best wishes
>>
>> Ulrich
>>
>> On 13.08.19 at 17:42, Pascal Obry wrote:
>> >
>> > Hi Matthieu,
>> >
>> > I fully agree here. I'm not the maintainer of the
>> documentation so I
>> > cannot speak for Ulrich.
>> >
>> > That being said, I can certainly ping developers to start a
>> draft of
>> > the code they have introduced in dt 3.0.
>> >
>> > I have noted that Bruce will be happy to help doing the
>> preparation of
>> > the documentation from draft to final version. That's
>> really nice,
>> > thanks Bruce for stepping in!
>> >
>> >

[darktable-dev] Filmic (RGB or not) explained

2019-11-30 Thread Aurélien Pierre
Hi,

This is my video on filmic RGB, explaining how it works and how to use it
in real-life cases:
https://www.youtube.com/watch?v=zbPj_TqTF88 I hope it
resolves most of the misunderstandings and trial-and-error sessions that
always end up in frustration.

A shorter addendum is on the way, to explain specifically to users of
darktable 2.6 what changed between 2.6 and 3.0.

Cheers,

Aurélien.



Re: [darktable-dev] filmic module is not present

2019-11-30 Thread Aurélien Pierre
Filmic RGB works exactly the same as the old one; the changes are merely
cosmetic.

What does not work?

On 30/11/2019 at 03:46, David Vincent-Jones wrote:
>
> It is unfortunate that the new 'filmic rgb' appears to act quite
> differently from the original 'filmic' and I have now lost the ability
> to re-edit/adjust a large portion of my recent work.
>
> David
>
> On 2019-11-29 3:40 p.m., Lorenzo Fontanella wrote:
>>
>> Hello, let me point out that in build 3.0 rc1, the "filmic" module is
>> not present in the modules list. Using old styles created with
>> version 2.6 that prevent the use of it, it appears active and
>> modifies words. The module was present in version 3.0rc0 and
>> disappeared in the 3.0rc1 version 
>>  
>> Lorenzo Fontanella
>>
>>

Re: [darktable-dev] tone equalizer, filmic rgb → icon background

2019-11-19 Thread Aurélien Pierre
I did look at that.

The colour pickers in levels and tone curve are GTK buttons. As such,
they get hovering effects and stuff from CSS. The colour pickers next to
sliders are bauhaus widgets, and their CSS support is very basic at this
point (size and colour only). This is indeed a Cairo + GTK business:
wiring their state to user input, then their state to CSS rules.

On 15/11/2019 at 21:33, Aurélien Pierre wrote:
>
> Will have a look in the next few days.
>
> On 15/11/2019 at 21:10, Nicolas Auffray wrote:
>> Timur, see here for the issue posted:
>> https://github.com/darktable-org/darktable/issues/3445
>>
>> I'm now waiting for someone with GTK/Cairo skills.
>>
>>
>> On 15/11/2019 at 20:55, Nicolas Auffray wrote:
>>> I just looked at that: the color pickers with a hover background are
>>> indeed GTK button items, while the color pickers in filmic rgb,
>>> exposure, color balance and tone equalizer (or some other modules) are
>>> not button widgets but seem included in the bauhaus-slider widget.
>>> And these ones don't have a specific CSS tag, so I can't change that
>>> with CSS as it is without adding other glitches. So it probably needs
>>> a GTK code change to make all color pickers here have the same
>>> behavior. I will post an issue on Github.
>>>
>>>
>>> On 15/11/2019 at 20:12, Nicolas Auffray wrote:
>>>> Oops, sorry, I just saw that I was not looking at the same color
>>>> pickers as you. We recently updated the color pickers in the levels
>>>> module, and I don't know why, when answering you, I was thinking of
>>>> that levels module (so the black, grey and white pickers I was
>>>> pointing at) and not the rgb curve one...
>>>>
>>>> I've just edited an image and opened rgb curve (not levels...), and
>>>> I now understand what you point out. Indeed, you're right, these
>>>> pickers are the same as the filmic, exposure or tones ones and don't
>>>> have the same behavior. Again, sorry for my last answers, not on
>>>> purpose...
>>>>
>>>> I will see if I can fix that with CSS this week-end. Thanks for
>>>> pointing that out.
>>>>
>>>> Nicolas
>>>>
>>>>
>>>> On 15/11/2019 at 19:41, Timur Irikovich Davletshin wrote:
>>>>> Nic, I'm just saying that this behavior looks a bit confusing from
>>>>> user
>>>>> standpoint.
>>>>>
>>>>> I'm sorry for not posting to github. MS decided that I cannot use
>>>>> it —
>>>>> https://help.github.com/en/github/site-policy/github-and-trade-controls
>>>>>
>>>>>   and I'm too lazy to use VPN for that purpose.
>>>>>
>>>>> Timur.
>>>>>
>>>>> P.S. there is no need to reply both to personal email and mailing
>>>>> list.
>>>>>
>>>>> On Fri, 2019-11-15 at 19:32 +0100, Nicolas Auffray wrote:
>>>>>> I have never say they are not pickers, just they are not same one...
>>>>>> Just see how they renders and where they are on modules UI. Size is
>>>>>> not
>>>>>> the same, and also color. just see that they are visibly
>>>>>> different...
>>>>>> But remember that it's an open source software, so feel free to
>>>>>> change
>>>>>> that the way you want if you don't like that, for you or you could
>>>>>> even
>>>>>> propose a PR on Github... Hmm
>>>>>>
>>>>>> Color pickers on curve modules are here to apply black, grey and
>>>>>> white
>>>>>> parts of curve (so they are colored in... black, grey, white). If
>>>>>> hover,
>>>>>> that could only be on background. Other color pickers are not
>>>>>> related
>>>>>> to
>>>>>> black, grey and white. So if we had a hover effect, it would be
>>>>>> better
>>>>>> to do that just on the picker (not the background but the picker
>>>>>> himself). It's a possibility, but for me it's good as it is actually
>>>>>> (and nobody else seems to have problem with that...). That's my view
>>>>>> here !
>>>>>>
>>>>>>
>>>>>> On 15/11/2019 at 18:40, Timur Irikovich Davletshin wrote:
>>>>>>> If they are not pickers why use same icon? And if all other icons
>>>>>>> have

Re: [darktable-dev] basic adjustments

2019-11-18 Thread Aurélien Pierre
Don't, it's fine.

But serious bug reports like crashes, regressions, Mac/Win/Nux and
CPU/GPU inconsistencies do pile up faster than fixes, so all the
cosmetic talk about the proper way to write grey is kind of hilarious. I
wonder sometimes if users realize we are just a handful of people working
on darktable on a day-to-day basis, and we are already swamped.

On 18/11/2019 at 17:55, Julian Rickards wrote:
> My apologies
>
> On Mon, Nov 18, 2019 at 10:58 AM Aurélien Pierre wrote:
>
> Forget about segfaults and crashes, our first priority is now to
> make every native English speaker feel respected in his difference
> and identity, offering fully differentiated support of every
> English dialect : en_GB, en_AU, en_US, en_ZA, en_CA, en_NZ, en_FJ…
>
> Meanwhile, people speaking languages for which darktable is
> partially or even not translated can die screaming, we are too
> busy fixing gr(e|a)ys. Come back later (or preferably never).
>
> Next top-priority project : translating dt in signs language. The
> chief-translator position is open, send your resumes now.
>
> Obligatory mention :
>
> "We are a dynamic team of motivated people trying to change the
> world one step at a time. Join us to expand your horizons in an
> exciting opportunity to face new challenges and give a new start
> to your carreer."
>
>  On 18/11/2019 at 16:30, Richard Hobday wrote:
>> -- Tongue firmly in cheek --
>> en-GB or en-US?
>> That is the real question.
>> They are two very different languages; at present darktable seems to
>> assume they are the same!
>>
>> R.
>>
>> On 18/11/2019 14:36, Julian Rickards wrote:
>>> grAy is the American spelling and there's nothing wrong with that
>>> (despite being a Canadian and using British spelling, LoL) but
>>> Timur is
>>> correct, there should be consistency and, in addition to
>>> sticking to the
>>> same spelling of gray/grey, I think that the English
>>> version of the
>>> documentation (and GUI) should be either all British or all
>>> American
>>> English spelling, not just American for some and British for
>>> others.
>>>
>>> On Mon, Nov 18, 2019 at 8:29 AM Timur Irikovich Davletshin wrote:
>>>
>>>     Actually I agree. Darktable lacks terminology unification
>>> across
>>>     modules. E.g. AFAIR there are around 6 different names for
>>> 18% gray. In
>>>     some places it is "grey" but in others it is "gray".
>>>
>>>     Timur.
>>>
>>>     On Mon, 2019-11-18 at 14:03 +0100, Moritz Moeller wrote:
>>>  > On 15.11.19 12:02, parafin wrote:
>>>  > > I think these numbers don't have units, so why do
>>> expect them to
>>>  > > mean
>>>  > > the same thing in different modules, even if we ignore
>>> the pipe
>>>  > > order?
>>>  >
>>>  > Because that's the most basic requirement of usability.
>>> That things
>>>  > named the same way act the same way and mean the same
>>> thing across a
>>>  > single app – at the very least.
>>>  >
>>>  > > It's not promised anywhere in the documentation as far
>>> as I can
>>>  > > see.
>>>  >
>>>  > 
>>>  > I went through the Photoshop & Lightroom docs and I can't
>>> find any
>>>  > promise in the entirety of them that slider ranges of
>>> things sharing
>>>  > a
>>>  > name will match in range & effect.
>>>  > Yet they do.
>>>  > Go figure ...
>>>  > 
>>>  >
>>>  > Beers,
>>>  >
>>>  > .mm
>>>  >

Re: [darktable-dev] basic adjustments

2019-11-18 Thread Aurélien Pierre
Forget about segfaults and crashes, our first priority is now to make
every native English speaker feel respected in his difference and
identity, offering fully differentiated support of every English dialect:
en_GB, en_AU, en_US, en_ZA, en_CA, en_NZ, en_FJ…

Meanwhile, people speaking languages for which darktable is partially or
even not translated can die screaming, we are too busy fixing gr(e|a)ys.
Come back later (or preferably never).

Next top-priority project: translating dt into sign language. The
chief-translator position is open, send your resumes now.

Obligatory mention:

"We are a dynamic team of motivated people trying to change the world
one step at a time. Join us to expand your horizons in an exciting
opportunity to face new challenges and give a new start to your career."

On 18/11/2019 at 16:30, Richard Hobday wrote:
> -- Tongue firmly in cheek --
> en-GB or en-US?
> That is the real question.
> They are two very different languages; at present darktable seems to
> assume they are the same!
>
> R.
>
> On 18/11/2019 14:36, Julian Rickards wrote:
>> grAy is the American spelling and there's nothing wrong with that
>> (despite being a Canadian and using British spelling, LoL) but Timur is
>> correct, there should be consistency and, in addition to sticking to the
>> same spelling of gray/grey, I think that the English version of the
>> documentation (and GUI) should be either all British or all American
>> English spelling, not just American for some and British for others.
>>
>> On Mon, Nov 18, 2019 at 8:29 AM Timur Irikovich Davletshin wrote:
>>
>>     Actually I agree. Darktable lacks terminology unification across
>>     modules. E.g. AFAIR there are around 6 different names for 18%
>> gray. In
>>     some places it is "grey" but in others it is "gray".
>>
>>     Timur.
>>
>>     On Mon, 2019-11-18 at 14:03 +0100, Moritz Moeller wrote:
>>  > On 15.11.19 12:02, parafin wrote:
>>  > > I think these numbers don't have units, so why do expect
>> them to
>>  > > mean
>>  > > the same thing in different modules, even if we ignore the pipe
>>  > > order?
>>  >
>>  > Because that's the most basic requirement of usability. That
>> things
>>  > named the same way act the same way and mean the same thing
>> across a
>>  > single app – at the very least.
>>  >
>>  > > It's not promised anywhere in the documentation as far as I can
>>  > > see.
>>  >
>>  > 
>>  > I went through the Photoshop & Lightroom docs and I can't find
>> any
>>  > promise in the entirety of them that slider ranges of things
>> sharing
>>  > a
>>  > name will match in range & effect.
>>  > Yet they do.
>>  > Go figure ...
>>  > 
>>  >
>>  > Beers,
>>  >
>>  > .mm
>>  >
>
> -- 
> http://lukecarville.jalbum.net
> https://facebook.com/rlc.hobday

Re: [darktable-dev] basic adjustments

2019-11-18 Thread Aurélien Pierre
darktable also lacks a board of investors, a CTO, a head of UX, a
project management, a panel of test users, a famous photographer as a
brand ambassador, many tracking features to collect statistics on usage
"to improve the performance", an annual coding sprint in the Bahamas and
a couple of off-shore accounts to pay for the whores and whisky during
that sprint.

Until then, darktable is made by piling up code randomly produced by
random people, 90% of them being non-native English speakers, some
writing grey, some writing gray, and contrast still means "pushing
whites further away from blacks" with no implication on the actual
algorithm or colour space used to do so.

By the way, changing string variables in C code does not actually
require coding knowledge (just how to use grep and git - /which might be
worse than actually coding/) and only needs a proper design backed with
some argumentation (same-named settings having the same effect is a very
good argument; now, how do we actually fix the software in a UI-consistent
yet technically accurate way?). I'm waiting for the pull requests…

On 18/11/2019 at 14:29, Timur Irikovich Davletshin wrote:
> Actually I agree. Darktable lacks terminology unification across
> modules. E.g. AFAIR there are around 6 different names for 18% gray. In
> some places it is "grey" but in others it is "gray".
>
> Timur.
>
> On Mon, 2019-11-18 at 14:03 +0100, Moritz Moeller wrote:
>> On 15.11.19 12:02, parafin wrote:
>>> I think these numbers don't have units, so why do you expect them to
>>> mean
>>> the same thing in different modules, even if we ignore the pipe
>>> order?
>> Because that's the most basic requirement of usability. That things 
>> named the same way act the same way and mean the same thing across a 
>> single app – at the very least.
>>
>>> It's not promised anywhere in the documentation as far as I can
>>> see.
>> 
>> I went through the Photoshop & Lightroom docs and I can't find any 
>> promise in the entirety of them that slider ranges of things sharing
>> a 
>> name will match in range & effect.
>> Yet they do.
>> Go figure ...
>> 
>>
>> Beers,
>>
>> .mm

Re: [darktable-dev] dt 3.0rc1 → gamma module

2019-11-17 Thread Aurélien Pierre
Hi,

the gamma module does not do what it says anymore. It's a leftover of dt
0.x, from before the output colour profile was added.

It has always been there, but didn't appear in the history before. It
still doesn't appear in the UI. I don't remember why we made every module
appear in the history now (except the mask manager, whose function I
don't understand), but here it is.

The gamma module is now only converting the floating-point output of the
pipe (image or masks) to 8-bit unsigned integers, to be sent to the
display. As such, it is not used when exporting to a file.
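
In other words, something like this hypothetical sketch (not darktable's
actual implementation): clamp the float buffer to [0, 1] and quantize to
8-bit codes for the display:

    #include <math.h>
    #include <stddef.h>
    #include <stdint.h>

    static void float_to_display(const float *const in, uint8_t *const out,
                                 const size_t n)
    {
      for(size_t k = 0; k < n; k++)
      {
        /* clamp the unbounded float pipe output, then quantize */
        const float clamped = fminf(fmaxf(in[k], 0.0f), 1.0f);
        out[k] = (uint8_t)lroundf(clamped * 255.0f);
      }
    }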

Aurélien.

On 17/11/2019 at 14:02, Andreas Schneider wrote:
> On Sunday, 17 November 2019 10:30:00 CET Timur Irikovich Davletshin wrote:
>> Hi dear developers!
>>
>> Short question. What is the gamma module which has appeared in the
>> history stack in dt 3.0 builds? There is no such module in darktable,
>> nor can it be disabled in the history stack. Is it something that has
>> got into the history stack by mistake, or some hidden darkroom module?
> See, e.g.
>
> https://www.cambridgeincolour.com/tutorials/gamma-correction.htm
>
>
>   Andreas
>
>


Re: [darktable-dev] tone equalizer, filmic rgb → icon background

2019-11-15 Thread Aurélien Pierre
Will have a look in the next few days.

On 15/11/2019 at 21:10, Nicolas Auffray wrote:
> Timur, see here for the issue posted:
> https://github.com/darktable-org/darktable/issues/3445
>
> I'm now waiting for someone with GTK/Cairo skills.
>
>
> On 15/11/2019 at 20:55, Nicolas Auffray wrote:
>> I just looked at that: the color pickers with a hover background are
>> indeed GTK button items, while the color pickers in filmic rgb,
>> exposure, color balance and tone equalizer (or some other modules) are
>> not button widgets but seem included in the bauhaus-slider widget.
>> And these ones don't have a specific CSS tag, so I can't change that
>> with CSS as it is without adding other glitches. So it probably needs
>> a GTK code change to make all color pickers here have the same
>> behavior. I will post an issue on Github.
>>
>>
>> On 15/11/2019 at 20:12, Nicolas Auffray wrote:
>>> Oops, sorry, I just saw that I was not looking at the same color
>>> pickers as you. We recently updated the color pickers in the levels
>>> module, and I don't know why, when answering you, I was thinking of
>>> that levels module (so the black, grey and white pickers I was
>>> pointing at) and not the rgb curve one...
>>>
>>> I've just edited an image and opened rgb curve (not levels...), and I
>>> now understand what you point out. Indeed, you're right, these pickers
>>> are the same as the filmic, exposure or tones ones and don't have the
>>> same behavior. Again, sorry for my last answers, not on purpose...
>>>
>>> I will see if I can fix that with CSS this week-end. Thanks for
>>> pointing that out.
>>>
>>> Nicolas
>>>
>>>
>>> On 15/11/2019 at 19:41, Timur Irikovich Davletshin wrote:
 Nic, I'm just saying that this behavior looks a bit confusing from
 user
 standpoint.

 I'm sorry for not posting to github. MS decided that I cannot use it —
 https://help.github.com/en/github/site-policy/github-and-trade-controls

   and I'm too lazy to use VPN for that purpose.

 Timur.

 P.S. there is no need to reply both to personal email and mailing
 list.

 On Fri, 2019-11-15 at 19:32 +0100, Nicolas Auffray wrote:
> I have never said they are not pickers, just that they are not the same
> ones... Just see how they render and where they are in the modules' UI.
> The size is not the same, and neither is the color; just see that they
> are visibly different... But remember that it's open source software,
> so feel free to change that the way you want if you don't like it, or
> you could even propose a PR on Github... Hmm
>
> Color pickers on curve modules are here to apply the black, grey and
> white parts of the curve (so they are colored in... black, grey,
> white). If hovered, that could only be on the background. Other color
> pickers are not related to black, grey and white. So if we had a hover
> effect, it would be better to do that just on the picker (not the
> background but the picker itself). It's a possibility, but for me it's
> good as it is (and nobody else seems to have a problem with that...).
> That's my view here!
>
>
> On 15/11/2019 at 18:40, Timur Irikovich Davletshin wrote:
>> If they are not pickers, why use the same icon? And if all other icons
>> have a hover action, why not these? Hmm...
>>
>> Timur.
>>
>> On Fri, 2019-11-15 at 18:25 +0100, Nicolas Auffray wrote:
>>> Hi Timur,
>>>
>>> It would really be better to post issues on Github (to let all
>>> know
>>> and
>>> follow that). Anyway, some answers on what I see :
>>>
>>> 1. Yes, this icon doesn't seems to have CSS editing possibility
>>> (maybe I
>>> make a mistake but on what I see with GTK inspector...). So
>>> posting
>>> on
>>> Github to let Aurélien see that (as it is his module work) would
>>> be
>>> better.
>>>
>>> 2 and 3. That's not issues. All color pickers like these ones
>>> (see
>>> exposure module for example) are displayed the same way. Color
>>> picker
>>> on
>>> rgb curve are not same color picker (colored one, black, grey and
>>> white
>>> and not on same place like others color picker ; so having a
>>> different
>>> way is not a problem as they are not totally the same). Not
>>> needed
>>> to
>>> have them have the exact same behavior.
>>>
>>> Nicolas
>>>
>>>
>>> On 15/11/2019 at 18:09, Timur Irikovich Davletshin wrote:
 Hi developers!

 1. tone equalizer module → display exposure mask icon has no
 hover
 background lightening. Compare with display mask icon in drawn
 mask
 blending.

 2. Same problem in tone equalizer module → masking tab → picker
 tool
 next to mask exposure compensation and mask contrast
 compensation.

 3. filmic rgb module → scene tab → 

Re: [darktable-dev] Q: Store GPS location in RAW instead of XMP?

2019-11-15 Thread Aurélien Pierre
Hi,

we never touch the raw files to avoid any risk of data corruption. There
exist precedents of exif edits permanently damaging raw files on saving,
possibly in DigiKam (you would have to ask Roman Lebedev about that, I
don't remember the specifics).

Aurélien.

On 15/11/2019 at 21:26, Christian wrote:
> When a picture is geo-located (in the map view) the position is
> stored in the XMP but not in the RAW file.
>
> Is there a reason for this behavior?
>
> Is the only way to get the RAW updated an external
> script which reads the xmp and calls exiftool
> to set the gps tags in the raw file?
>
> christian
>

Re: [darktable-dev] Information about the Pixel pipe...

2019-11-12 Thread Aurélien Pierre
Here: https://youtu.be/3FFU-Ltvm0I

On 12/11/2019 at 14:46, Julian Rickards wrote:
> This is great Aurélien, thanks
>
> On Mon, Nov 11, 2019 at 4:35 PM Aurélien Pierre wrote:
>
> Every module advertises its input, output and working colour
> spaces to the pixelpipe and conversion are done on-the-fly when
> needed. Don't bother about it.
>
> Yet, you still need to ensure you do the right pixel operation at
> the right time. Details :
> 
> https://github.com/darktable-org/darktable/pull/3075#issuecomment-538410131
> and
> https://github.com/darktable-org/darktable/pull/2905#issue-313652085.
>
> Upcoming video in preparation about that.
>
> Good luck,
>
> Aurélien.
>
> Le 11/11/2019 à 22:14, Coding Dave a écrit :
>> What would help I think is if each module tells the color space
>> it expects and the one it returns. In my limited knowledge and
>> understanding I think there must be a module that gets color
>> space A as input and returns color spacd B as output. But I might
>> be wrong.
>>
>> Cheers
>>
>> On Mon., 11. Nov. 2019, 22:08, jys wrote:
>>
>>
>>
>> On Mon, Nov 11, 2019, at 12:43, Keresztes Barna wrote:
>> > Hi,
>> > Can somebody summarize for me the stages of the pixel pipe in
>> > darktable, and the color space in each stage (if the stages have
>> > specific color spaces)?
>>
>> There's an early basic overview of the theory behind it here:
>> https://github.com/darktable-org/darktable/pull/2905
>>
>> -- 
>> jys

Re: [darktable-dev] Information about the Pixel pipe...

2019-11-11 Thread Aurélien Pierre
Every module advertises its input, output and working colour spaces to
the pixelpipe, and conversions are done on the fly when needed. Don't
worry about it.

Yet, you still need to ensure you do the right pixel operation at the
right time. Details :
https://github.com/darktable-org/darktable/pull/3075#issuecomment-538410131
and https://github.com/darktable-org/darktable/pull/2905#issue-313652085.
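
To make the idea concrete, here is a minimal sketch in C. The names and
signatures are made up for illustration and are NOT darktable's actual
API; it only shows the principle that each module declares the colour
spaces it expects and produces, and the pipe converts between stages only
when they disagree :

#include <stddef.h>

/* hypothetical types -- not darktable's actual API */
typedef enum { CS_RAW, CS_LINEAR_RGB, CS_LAB } cs_t;

typedef struct
{
  cs_t in_cs, out_cs;                     /* advertised colour spaces */
  void (*process)(float *buf, size_t n);  /* the pixel operation */
} module_t;

/* hypothetical conversion routine, e.g. linear RGB <-> Lab */
extern void convert(float *buf, size_t n, cs_t from, cs_t to);

void run_pipe(module_t *m, const int count, float *buf, const size_t n)
{
  cs_t current = CS_RAW;
  for(int i = 0; i < count; i++)
  {
    if(m[i].in_cs != current)   /* convert on the fly, only when needed */
      convert(buf, n, current, m[i].in_cs);
    m[i].process(buf, n);
    current = m[i].out_cs;
  }
}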

Upcoming video in preparation about that.

Good luck,

Aurélien.

Le 11/11/2019 à 22:14, Coding Dave a écrit :
> What would help I think is if each module tells the color space it
> expects and the one it returns. In my limited knowledge and
> understanding I think there must be a module that gets color space A
> as input and returns color space B as output. But I might be wrong.
>
> Cheers
>
> jys mailto:junkyardspar...@yepmail.net>>
> schrieb am Mo., 11. Nov. 2019, 22:08:
>
>
>
> On Mon, Nov 11, 2019, at 12:43, Keresztes Barna wrote:
> > Hi,
> > Can somebody summarize me the stages of the pixel pipe in
> darktable, and
> > the color space in each stage (if the stages have specific color
> spaces).
>
> There's an early basic overview of the theory behind it here:
> https://github.com/darktable-org/darktable/pull/2905
>
> -- 
> jys
> 
> ___
> darktable developer mailing list
> to unsubscribe send a mail to
> darktable-dev+unsubscr...@lists.darktable.org
> 
>
>
> ___
> darktable developer mailing list to unsubscribe send a mail to
> darktable-dev+unsubscr...@lists.darktable.org

___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org

Re: [darktable-dev] Increased memory requirements?

2019-11-09 Thread Aurélien Pierre
Hi,

atrous (contrast equalizer) is quite heavy on hardware and needs a lot
of memory to store the intermediate wavelet scales. I remember having
the same issue on my previous laptop (GPU Nvidia 635M, 2 GB too) with
certain updates of the driver.

Maybe you could try to decrease the memory headroom
(opencl_memory_headroom=150) or disable the caching of module outputs
(opencl_synch_cache=false).
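
For reference, and assuming a default install, both keys live in the
darktablerc file (under ~/.config/darktable/ on Linux); edit it with
darktable closed :

opencl_memory_headroom=150
opencl_synch_cache=false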

Good luck,

Aurélien.

Le 09/11/2019 à 19:07, KOVÁCS István a écrit :
> Hi,
>
> I think it's a user's question, but quite technical and with the dev
> version, so I'm posting it here.
> I've been using darktable since the 1.x days, maybe longer. I have an
> ancient Core2 Duo machine with just 4 GB of RAM and an NVidia GTX 650
> with 2 GB onboard.
> For that reason, I have used the instructions for 32-bit systems, even
> though I run 64-bit Linux
> (https://darktable.gitlab.io/doc/en/special_topics_chapter.html#darktable_and_memory_32bit).
> Namely:
> worker_threads=1
> cache_memory=268435456
> host_memory_limit=500
> opencl_memory_headroom=600
> opencl_memory_requirement=600
> singlebuffer_limit=8
>
> However, no matter what I do (e.g. increasing/decreasing opencl memory
> headroom and requirement), I keep getting '[opencl_atrous] couldn't
> enqueue kernel! -4' with the version compiled from master. On the UI,
> I get: "tiling failed for module 'atrous'. output may be garbled". At
> the console:
> 10.892248 [opencl memory] device 0: 0 bytes (0.0 MB) in use
> 10.893898 [default_process_tiling_cl_ptp] use tiling on module
> 'atrous' for image with full size 5668 x 2160
> 10.893911 [default_process_tiling_cl_ptp] (3 x 1) tiles with max
> dimensions 3836 x 2160 and overlap 512
> 10.893922 [default_process_tiling_cl_ptp] tile (0, 0) with 3836 x 2160
> at origin [0, 0]
> 10.893938 [opencl memory] device 0: 132572160 bytes (126.4 MB) in use
> 10.893951 [opencl memory] device 0: 265144320 bytes (252.9 MB) in use
> 10.955908 [opencl memory] device 0: 265144420 bytes (252.9 MB) in use
> 10.955952 [opencl memory] device 0: 397716580 bytes (379.3 MB) in use
> 10.955965 [opencl memory] device 0: 530288740 bytes (505.7 MB) in use
> 10.955976 [opencl memory] device 0: 662860900 bytes (632.2 MB) in use
> 10.955989 [opencl memory] device 0: 795433060 bytes (758.6 MB) in use
> 10.956001 [opencl memory] device 0: 928005220 bytes (885.0 MB) in use
> 10.956013 [opencl memory] device 0: 1060577380 bytes (1011.4 MB) in use
> 10.956024 [opencl memory] device 0: 1193149540 bytes (1137.9 MB) in use
> 10.956036 [opencl memory] device 0: 1325721700 bytes (1264.3 MB) in use
> 10.956047 [opencl memory] device 0: 1458293860 bytes (1390.7 MB) in use
> 10.972430 [opencl memory] device 0: 1458293760 bytes (1390.7 MB) in use
> 11.046771 [opencl memory] device 0: 1325721600 bytes (1264.3 MB) in use
> 11.047131 [opencl memory] device 0: 1193149440 bytes (1137.9 MB) in use
> 11.047477 [opencl memory] device 0: 1060577280 bytes (1011.4 MB) in use
> 11.047821 [opencl memory] device 0: 928005120 bytes (885.0 MB) in use
> 11.047842 [opencl memory] device 0: 795432960 bytes (758.6 MB) in use
> 11.047860 [opencl memory] device 0: 662860800 bytes (632.2 MB) in use
> 11.047876 [opencl memory] device 0: 530288640 bytes (505.7 MB) in use
> 11.047893 [opencl memory] device 0: 397716480 bytes (379.3 MB) in use
> 11.047910 [opencl memory] device 0: 265144320 bytes (252.9 MB) in use
> 11.047926 [opencl_atrous] couldn't enqueue kernel! -4
> 11.047941 [opencl memory] device 0: 132572160 bytes (126.4 MB) in use
> 11.048264 [opencl memory] device 0: 0 bytes (0.0 MB) in use
> 11.048599 [default_process_tiling_opencl_ptp] couldn't run
> process_cl() for module 'atrous' in tiling mode: 0
> 11.048618 [opencl_pixelpipe] could not run module 'atrous' on gpu.
> falling back to cpu path
> [mipmap_cache] thumbs fill 0.31/256.00 MB (0.12%)
> [mipmap_cache] float fill 0/2 slots (0.00%)
> [mipmap_cache] full  fill 1/2 slots (50.00%)
> [mipmap_cache] level | near match | miss | stand-in | fetches | total rq
> [mipmap_cache] thumb |  13.64% |  13.64% |   -nan%  |   0.00% | 100.00%
> [mipmap_cache] float |   -nan% |   -nan% |   -nan%  |   0.00% |   0.00%
> [mipmap_cache] full  |   -nan% |   -nan% |   -nan%  | 100.00% |   0.00%
>
>
> [mipmap_cache] thumbs fill 0.31/256.00 MB (0.12%)
> [mipmap_cache] float fill 0/2 slots (0.00%)
> [mipmap_cache] full  fill 1/2 slots (50.00%)
> [mipmap_cache] level | near match | miss | stand-in | fetches | total rq
> [mipmap_cache] thumb |  13.04% |  13.04% |   -nan%  |   0.00% | 100.00%
> [mipmap_cache] float |   -nan% |   -nan% |   -nan%  |   0.00% |   0.00%
> [mipmap_cache] full  |   -nan% |   -nan% |   -nan%  | 100.00% |   0.00%
>
>
> 58.728739 [dev_pixelpipe] took 47.965 secs (70.883 CPU) processed
> `contrast equalizer' on CPU with tiling, blended on CPU [export]
> 58.734511 [opencl memory] device 0: 195886080 bytes (186.8 MB) in use
> 58.845848 [opencl memory] device 0: 391772160 bytes (373.6 MB) in use
> 58.864929 [opencl memory] device 0: 195886080 

Re: [darktable-dev] darktable 3.0.0rc0 released

2019-11-08 Thread Aurélien Pierre
Hi,

filmic RGB works in a native RGB space, while the previous filmic worked
in a native Lab space and did the RGB conversion internally.

The difference is in the blending and masking options : old filmic had
(non-linear) Lab blending parameters, while filmic RGB has (linear) RGB
blending parameters. Also, the old filmic used a fixed ProPhoto RGB
space, while the new uses the user-defined RGB working space.

I put "RGB" in the name because many users asked me which modules work
in RGB and which work in Lab, and there is no way to differentiate them
in the UI.

Cheers,

Aurélien.

Le 06/11/2019 à 19:33, François Tissandier a écrit :
> Well, look at Sony RX100. Sony keeps producing and selling several
> generations at the same time, right ? 
>
>     François
>
> Le mer. 6 nov. 2019 à 18:56, Julian Rickards
> mailto:julian.ricka...@gmail.com>> a écrit :
>
> Not quite sure how to interpret "entertaining". I was approaching
> this from a different point of view, one that we take here at my
> workplace. When a new model of a camera is released (OMD E-M5 Mk
> III for example), the manufacturer generally doesn't continue to
> produce any of the previous versions (the original or Mk 1 model
> or the Mk II model).
>
> Denoise is offered in 3 forms (not #1, #2 and #3) because they
> work differently, more differently than simply adding a drop down
> within one copy of this module and for that reason, the names of
> the denoise modules do not end in 1, 2 or 3 but with a descriptive
> word or two. The Tone Curve now offers RGB, Lab and XYZ as drop
> downs because they are similar enough to bundle together but if
> filmic2 operates much differently than filmic, this might be an
> argument to keep both "versions".
>
> However, if the intent of the developers is to have users use
> filmic2 instead of filmic, don't give them the option, just offer
> filmic2 as the only filmic.
>
> Another consideration might be: if it is unlikely that a user
> might want to use both filmic and filmic2, then again, don't give
> them the option.
>
> Just my 2¢, that's all.
>
> On Wed, Nov 6, 2019 at 12:19 PM Moritz Moeller
> mailto:virtualr...@gmail.com>> wrote:
>
> On 6.11.19 14:35, Julian Rickards wrote:
> >  From my perspective, filmic2 isn't appropriate. I work in the
> > publication services section of an Ontario (Canada)
> government ministry
> > and when we assign "2" to a publication, "1" is gone and no
> longer
> > available.
>
> That's the most entertaining reasoning for naming something
> I've ever
> read on a software mailing list. :D
>
> > If the original filmic is gone from this version, then I
> would just name
> > the new one filmic. If the original filmic will also be
> available, just
> > like the various denoise modules, this new filmic should
> have an
> > additional descriptor such as filmic rgb.
>
> How about naming the old module 'filmic (deprecated)' in the
> UI and the
> new one just 'filmic'.
> Behind the scenes (XMP etc.) they can be called 'filmic' and
> 'filmic2'.
>
>
> Beers,
>
> .mm
>
>
> 
> ___
> darktable developer mailing list
> to unsubscribe send a mail to
> darktable-dev+unsubscr...@lists.darktable.org
> 
>
>
> 
> ___
> darktable developer mailing list to unsubscribe send a mail to
> darktable-dev+unsubscr...@lists.darktable.org
> 
>
>
> ___
> darktable developer mailing list to unsubscribe send a mail to
> darktable-dev+unsubscr...@lists.darktable.org

___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org

Re: [darktable-dev] darktable 3.0.0rc0 released

2019-11-05 Thread Aurélien Pierre
Hi,

I have a PR waiting with a possible fix :
https://github.com/darktable-org/darktable/pull/3068

I have tested dt 3.0rc0 on Mac OS (temporary access to a MacBook) and
fixed the oversizing by setting the dpi to 80 too. But I did reproduce
it. In the same way, the fonts in tone-equalizer are oversized.

But I have no access to a development environment either on Mac or Win,
so it's really tiring to remote-debug what's going on based on guesses.

Setting font size to 1em should tell GTK to use the default OS font
size, no matter its value. Seems to work on Win and Linux, but not on
Mac for some reason. Maybe Mac returns a font size that is already
scaled by the DPI factor, so it gets applied twice.

Anyway, it would be nice if we had more debugging workforce on these
godforsaken OS.

Aurélien.

Le 05/11/2019 à 20:23, Nicolas Auffray a écrit :
> Hi Timur,
>
> The real size of 1em is 13.33px on my OS (with the default OS font at
> size 10). If I increase my OS font to size 11, the size of 1em becomes
> 14.66px. 1em = 1.33 × the font point size, so that is just the normal
> behaviour.
>
>
> Le 05/11/2019 à 19:57, Timur Irikovich Davletshin a écrit :
>> Hi, Nicolas!
>>
>> Don't take offense, I'm not yelling :)
>>
>> But can you tell me, what is real size of 1em (used in your theme too)
>> in GTK/Linux and GTK/MacOS?
>>
>> Thanks for your work on themes, btw.
>>
>> Timur.
>>
>> On Tue, 2019-11-05 at 19:49 +0100, Nicolas Auffray wrote:
>>> Many people I know use darktable, even on Linux correctly. And I'm
>>> the
>>> author of grey theme so work a lot on the UI and made many tests on
>>> my
>>> system. I never had issue you relate, even on my previous 27" full
>>> HD
>>> screen, my Macbook Pro retina 13" or my new 24" WQHD screen. So that
>>> would say it's about a specific config (probably darktable default
>>> theme
>>> as it's OS font size related). So with no more details (screen
>>> resolution, theme used, default OS font (font used and size)
>>> especially
>>> if you use darktable theme), that would be difficult to help and
>>> find
>>> why you have what you describe.
>>>
>>> It's easy to yell, not so easy to find why some people have specific
>>> issues and not other, especially if few details are given. And about
>>> UI
>>> issues, a capture screen is always a good idea. And probably Github
>>> issues part is a better way (with the whole details I post here) to
>>> continue. For example, I answered on this related issue (maybe the
>>> author
>>> is one of you) :
>>> https://github.com/darktable-org/darktable/issues/3295
>>>
>>>
>>> Le 05/11/2019 à 19:39, Timur Irikovich Davletshin a écrit :
 Actually I have same issue on Linux too. But I was tired telling
 Aurelien about font size issues and modified theme to match my
 needs.

 On Tue, 2019-11-05 at 19:21 +0100, Moritz Moeller wrote:
> On 5.11.19 19:18, Nicolas Auffray wrote:
>> Anyway, that's not ok as the new UI is now size adjustable (by
>> using
>> CSS, not have anymore hardcoded things and use emphasis sizes
>> on
>> most
>> parts to adjust correctly). It seems to be a MacOS specific
>> issue...
> It's not an issue for me but for a first time user ... they'd
> probably
> close the app and never open it again. :)
>
> I couldn't even resize the main window to fit my screen. It would
> lock
> at a width about 20% larger.
>
> .mm
> ___
> darktable developer mailing list
> to unsubscribe send a mail to
> darktable-dev+unsubscr...@lists.darktable.org
>
 ___
 
 darktable developer mailing list
 to unsubscribe send a mail to
 darktable-dev+unsubscr...@lists.darktable.org

>> ___
>>
>> darktable developer mailing list
>> to unsubscribe send a mail to
>> darktable-dev+unsubscr...@lists.darktable.org
>>
>
> ___
>
> darktable developer mailing list
> to unsubscribe send a mail to
> darktable-dev+unsubscr...@lists.darktable.org
>

___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org

Re: [darktable-dev] filmic RGB presets

2019-10-22 Thread Aurélien Pierre
Hi,

presets are just that : presets. It's hard to have them truly universal.
If you like other presets better, just save them.

Filmic RGB uses a different spline interpolation method, which behaves
closer to the densitometric curves of actual film emulsions. But the
previous presets don't make sense any more.

All the current presets put the middle-grey… in the middle of the
dynamic range. You will find that this is how the interpolation behaves
best, and it only makes sense if you think about it. If you find your
image is blown-out (is it really though, or did you just lose local
contrast in the highlights… which is the goal ?), you can use the dynamic
range scaling setting to quickly and symmetrically give a safety margin
to the black and white points.

Cheers,

Aurélien.

Le 16/10/2019 à 10:34, KOVÁCS István a écrit :
> Hi,
>
> I'm on darktable 1a11ea71 compiled using build.sh; OS is Kubuntu 19.04.
>
> I started with a reasonably exposed landscape, and tried applying the
> 'outdoors' setting as a starting point (that, and 'bright outdoors',
> often worked well for me in 'classic' filmic in darktable 2.6.2). WIth
> the version built from master, I get a rather blown-out picture, with
> the grey point set to 2.31%. Auto-tuning sets it to around 5%, and
> that results in a much more pleasant rendering (my preferred value,
> set manually, is 6%). In 2.6.2, the outdoors preset used 9%; the one
> most similar to master's 'outdoors' setting was 'HDR' of v2.6.2
> (2.25%).
> I've uploaded the NEF and screenshots with the different settings here:
> https://drive.google.com/open?id=1Abif5j70a0aaVTanGBd7O0JyMntOcRlg
>
> I started my 'master' version with separate config/cache/library, so I
> don't think I have any automatically applied settings other than those
> shipped out of the box. 'Classic' filmic is no longer available as a
> module, although I can open old history stacks and see a '/!\ filmic'
> step in them; editing is not possible.
>
> A slightly confusing entry in the history: there is a step called
> 'gamma', but it's not tied to any module. I guess it's internal
> processing, something the users can't do anything about.
>
> Thanks,
> Kofa
> ___
> darktable developer mailing list
> to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org
>

___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org

Re: [darktable-dev] Lut 3D module

2019-10-13 Thread Aurélien Pierre
If your LUT is built against sRGB, linear should be set to 0.04045 and
gamma to 0.416667. The values you mention are very subtle, which could
indicate 2 things :

  * it is actually a linear to linear LUT, but you still need to
increase the lightness for this particular image,
  * the LUT has been ill-made against non-normalized data.
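
For the record, those two numbers come straight from the sRGB transfer
function :

V = 12.92 * L                     if L <= 0.0031308
V = 1.055 * L^(1/2.4) - 0.055     otherwise

so gamma = 1/2.4 ≈ 0.416667, and the linear toe ends at 12.92 * 0.0031308
≈ 0.04045 on the encoded side.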

I would recommend avoiding the LUTs made by "nice guys trying to help"
that you may find over the internet. Profiling colour transforms needs an
evenly spaced sample of colour patches over the whole gamut, properly
normalized in white balance and luminance range, under standard
lighting, and will probably need to be optimized locally for skin tones
or sky/earth tones (but hardly for both) to be homogeneous enough. It
requires some gear, serious skills and a fair amount of educated
double-checking of the results to be done properly, and your average
opensource fellow probably doesn't have the least idea of the complexity
of the task he butchered over a Saturday afternoon.

Good luck !

Aurélien.

Le 13/10/2019 à 22:00, Christian a écrit :
> Hello,
> for me works:
>
> lin=0, gamma=0.90
> or
> lin=0.4, gamma=0.46
>
> (With basecurve turned OFF)
>
> Christian
>
>
> Am 12.10.2019 um 22:29 schrieb Aurélien Pierre:
>>  2. if it expects gamma encoding, enable the "unbreak color profile"
>>     module (mode set to "gamma", linear set to 0, gamma set to
>>     0.45), move it in the pipe right before the LUT 3D module (Ctrl
>>     + Shift + drag and drop), then enable the LUT 3D module.
>>  3. in case of a doubt, most LUT you will find on the internet are built
>>     against rec. 709 gamma 2.2 encoding (1/2.2 ≈ 0.45, so set up unbreak
>>     colour profile gamma to 1/gamma).
>
> ___
>
> darktable developer mailing list
> to unsubscribe send a mail to
> darktable-dev+unsubscr...@lists.darktable.org
>

___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org

Re: [darktable-dev] Vibrance slider in the Basic adjustments tool

2019-10-11 Thread Aurélien Pierre
Le 11/10/2019 à 13:31, Keresztes Barna a écrit :
> Thanks for the feedback, Aurélien!
>
> 1. I used this formula for the
> vibrance: http://redqueengraphics.com/category/color-adjustments/ (it
> can also be computed together with saturation). 
> I changed it to your proposed formula, to be consistent with other
> software. It's also cleaner to have the two formulas in one line

Actually, I made a mistake. The formula I gave you will produce hue shifts.

If you think about it, saturation means "how rich the colour is", which
translates in math to "how far the RGB components are from their
corresponding luminance", so adjusting the saturation means rescaling
the difference ({ RGB } - Y) around Y (that's what
Y+saturation*(pixel-Y) does).

The idea of vibrance is to do the same, but with a penalty for colours
that are already rich. Power-like functions are great at doing that (for
powers < 1) : you get a large correction close to 0, and small
correction close to 1. However, doing so, you need to ensure the 3
components get the same ratio of correction, otherwise hue will shift
(and the formula I gave you will likely make hues shift to blue).

*{ RGB } = Y + saturation * ( Y * ({ RGB } / Y)^vibrance - Y )* should
behave as expected, and you can factorize it as *Y * ( 1 + saturation *
(({ RGB } / Y)^vibrance - 1) )*. Basically, you scale by Y before
applying the power. It's the same logic as the fulcrum contrast. You
need to be sure that Y is non-zero though (but if Y == 0 at this point
of the pipe, something is seriously screwed before).
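
A minimal sketch of that corrected formula in C, assuming a linear RGB
pixel and its luminance Y are already available (illustration only, not
the actual implementation) :

#include <math.h>

/* sketch only -- not darktable's code */
static void vibrance_px(float rgb[3], const float Y,
                        const float saturation, const float vibrance)
{
  /* { RGB } = Y * (1 + saturation * (({ RGB } / Y)^vibrance - 1)) */
  if(Y <= 0.0f) return; /* guard : Y must be non-zero, see above */
  for(int c = 0; c < 3; c++)
    rgb[c] = Y * (1.0f + saturation * (powf(rgb[c] / Y, vibrance) - 1.0f));
}

In a vectorized OpenCL/SSE path you would drop the early return and
compute the power unconditionally, per the remark about branch-free loops
further down.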

>
> Normally this adds a minimal computational complexity (a power
> operation for every pixel), so it shouldn't have an impact on
> processing speed. I can optimize this a bit (if only the saturation is
> changed, but not the vibrance), but I don't think it would have any
> measurable speed gain. 
> Instead of:
>     if(saturation OR vibrance) pixel=Y+saturation*(pixel-Y)^vibrance
> I could use:
>     if(saturation)
>         if(vibrance) pixel=Y+saturation*(pixel-Y)^vibrance
>         else pixel=Y+saturation*(pixel-Y)
Actually, this kind of trick is good for non-vectorized code (pure C
code path), but it will slow down computations for vectorized paths
(OpenCL and SSE2). For vector computations, the code sequence should be
independent of the pixel values, so vectors can be processed linearly;
otherwise the value checks become branching points. Branch-free loops are
better for OpenCL and SSE : just compute the native_powr no matter what.
>
> 2. Yep, I imagine that Basic adjustments is difficult to program (as
> it's basically several filters in one tool) and it brakes the one
> tool-one function philosophy, but IMHO it's a great help from a
> usability point of view. I missed it a lot when switched from
> AftershotPro to darktable.
>
> On Thu, Oct 10, 2019 at 10:06 PM Aurélien Pierre
> mailto:rese...@aurelienpierre.com>> wrote:
>
> Hi,
>
> 1. Vibrance is supposed to be *{ RGB } = Y + saturation * ( { RGB
> } - Y )^vibrance*, where Y is the luminance channel from XYZ
> space, and { RGB } a pixel vector. This way, you perform both
> operations at once. Also, using (R + G + B) / 3 instead of Y is
> generally not a good idea since it has no perceptual background (R
> = G = B values do not equal visual grey in every RGB space, RGB
> is just a random vector space until a proper profile is applied). 
>
> 2. The basic adjustments module is a double-edged sword : yes,
> it's nice to have a unified *UI* to quickly set up pictures, but
> it's not just a UI, it's also an image operation that gathers at
> once several operations that need to be done at different places
> of the pixel pipe to get proper results :
>
>   * exposure is a scene-referred operation that needs to be done
> preferably before the input colour profile, since input matrices
> are optimized approximations for mid-tones and skin-tones, in
> order to put the RGB values in this sweet spot before applying
> the profile
>   * saturation needs to happen after the input matrix, when the Y
> channel is defined (actually, input matrices only profile the
> conversion from camera RGB to XYZ),
>   * contrast needs to happen after any frequency filter.
>
> TL;DR the model/view/controller architecture got violated here.
>
> Cheers,
>
> Aurélien.
>
> Le 10/10/2019 à 21:21, Keresztes Barna a écrit :
>> On Thu, Oct 10, 2019 at 8:58 PM parafin > <mailto:para...@paraf.in>> wrote:
>>
>> Is it any different from velvia iop in darktable?
>>
>>
>> Yes, it affects colors less than Velvia. It doesn't intend to
>> emulate film. 

Re: [darktable-dev] Vibrance slider in the Basic adjustments tool

2019-10-10 Thread Aurélien Pierre
Hi,

1. Vibrance is supposed to be *{ RGB } = Y + saturation * ( { RGB } - Y
)^vibrance*, where Y is the luminance channel from XYZ space, and { RGB
} a pixel vector. This way, you perform both operations at once. Also,
using (R + G + B) / 3 instead of Y is generally not a good idea since it
has no perceptual background (R = G = B values do not equal visual
grey in every RGB space, RGB is just a random vector space until a
proper profile is applied). 

2. The basic adjustments module is a double-edged sword : yes, it's nice
to have a unified *UI* to quickly set up pictures, but it's not just a
UI, it's also an image operation that gathers at once several operations
that need to be done at different places of the pixel pipe to get proper
results :

  * exposure is a scene-referred operation that needs to be done
preferably before the input colour profile, since input matrices are
optimized approximations for mid-tones and skin-tones, in order to
put the RGB values in this sweet spot before applying the profile
  * saturation needs to happen after the input matrix, when the Y
channel is defined (actually, input matrices only profile the
conversion from camera RGB to XYZ),
  * contrast needs to happen after any frequency filter.

TL;DR the model/view/controller architecture got violated here.

Cheers,

Aurélien.

Le 10/10/2019 à 21:21, Keresztes Barna a écrit :
> On Thu, Oct 10, 2019 at 8:58 PM parafin  > wrote:
>
> Is it any different from velvia iop in darktable?
>
>
> Yes, it affects colors less than Velvia. It doesn't intend to emulate
> film. 
> The results of this slider are very similar to the standalone Vibrance
> tool.
> There are some important differences:
> - The formula I use is different. It is run earlier in the pipeline,
> in the RGB color space (Vibrance tool runs on Lab)
> - This slider allows negative vibrance settings (so you can fade the
> colors without totally desaturating them)
> - it's a part of the Basic Adjustments tool, it can be used easily to
> set the color "amplitude" in the image together with the saturation
> slider.
> - It has a larger amplitude, so the results are more visible
> (sometimes the original Vibrance tool is barely perceptible).
>
> ___
> darktable developer mailing list to unsubscribe send a mail to
> darktable-dev+unsubscr...@lists.darktable.org

___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org

Re: [darktable-dev] Automatically select the most focused photo in a burst of photos

2019-10-06 Thread Aurélien Pierre
Ok, let's go for a "sharpness score" + focus peaking feature. That way,
users will have a way to check which area is in focus, and compare
images between them with a score.

It won't make it into the next release, but maybe into the first minor
update.

Le 06/10/2019 à 17:05, Robert Krawitz a écrit :
> On Sun, 6 Oct 2019 16:40:37 +0200, =?UTF-8?Q?Aur=c3=a9lien_Pierre?= wrote:
>> argh. Tales of over-engineering…
> I don't really disagree with you, just want to point out that getting
> it anywhere near correct (i. e. without a huge number of false
> positives and false negatives) is a difficult problem.
>
>> Just overlay the euclidean norm of the 2D laplacian on top of the
>> pictures (some cameras call that focus-peaking), and let the
>> photographer eyeball them. That will do for subjects at large aperture,
>> when the subject is supposed to pop out of the background. For small
>> apertures, the L2 norm will do a fair job. And it's a Saturday afternoon
>> job, hence a very realistic project given our current resources.
> That's fair, I just think that this kind of algorithm will likely
> select a lot of photos that are badly out of focus (because the focus
> locked on a much more expansive background) and miss ones where it's
> the relatively small subject that's in focus.
>
>> What you ask for is AI, it's a big project for a specialist, and it's
>> almost certain we will never make it work reliably. The drawback of AIs,
>> even when they work, is they fail inconsistently and need to be
>> double-checked anyway.
>>
>> So, better give users meaningful scopes and let them take their
>> responsibility, rather than rely on witchcraft that works only in
>> Nvidia's papers on carefully curated samples.
> Or maybe just implement focus peaking, as you say, but with a UI
> similar to the camera's UI (flashing regions that are in best focus).
> Then it's up to the user to select the best photos based on their
> knowledge of the desired subject.
>
>> Le 06/10/2019 à 16:18, Robert Krawitz a écrit :
>>> On Sun, 6 Oct 2019 15:02:39 +0200, =?UTF-8?Q?Aur=c3=a9lien_Pierre?= wrote:
 That can be easily done by computing the L2 norm of the laplacian of the
 pictures, or the L2 norm of the first level of wavelets decomposition
 (which is used in the focus preview), and taking the maximum.

 As usual, it will be more work to wire the UI to the functionality than
 writing the core image processing.
>>> Consider the case where the AF locks onto the background.  This will
>>> likely result in a very large fraction of the image being in focus,
>>> but this will be exactly the wrong photo to select.
>>>
>>> Perhaps center-weighting, luminosity-weighting (if an assumption is
>>> made that the desired subject is usually brighter than the background,
>>> but not extremely light), skin tone recognition (with all of the
>>> attendant problems of what constitutes "skin tone"), and face
>>> recognition would have to feed into it.

___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org

Re: [darktable-dev] Automatically select the most focused photo in a burst of photos

2019-10-06 Thread Aurélien Pierre
argh. Tales of over-engineering…

Just overlay the euclidean norm of the 2D laplacian on top of the
pictures (some cameras call that focus-peaking), and let the
photographer eyeball them. That will do for subjects at large aperture,
when the subject is supposed to pop out of the background. For small
apertures, the L2 norm will do a fair job. And it's a Saturday afternoon
job, hence a very realistic project given our current resources.

What you ask for is AI; it's a big project for a specialist, and it's
almost certain we will never make it work reliably. The drawback of AIs,
even when they work, is they fail inconsistently and need to be
double-checked anyway.

So, better give users meaningful scopes and let them take their
responsibility, rather than rely on witchcraft that works only in
Nvidia's papers on carefully curated samples.

Le 06/10/2019 à 16:18, Robert Krawitz a écrit :
> On Sun, 6 Oct 2019 15:02:39 +0200, =?UTF-8?Q?Aur=c3=a9lien_Pierre?= wrote:
>> That can be easily done by computing the L2 norm of the laplacian of the
>> pictures, or the L2 norm of the first level of wavelets decomposition
>> (which is used in the focus preview), and taking the maximum.
>>
>> As usual, it will be more work to wire the UI to the functionality than
>> writing the core image processing.
> Consider the case where the AF locks onto the background.  This will
> likely result in a very large fraction of the image being in focus,
> but this will be exactly the wrong photo to select.
>
> Perhaps center-weighting, luminosity-weighting (if an assumption is
> made that the desired subject is usually brighter than the background,
> but not extremely light), skin tone recognition (with all of the
> attendant problems of what constitutes "skin tone"), and face
> recognition would have to feed into it.
>
>> Le 06/10/2019 à 14:14, Germano Massullo a écrit :
>>> Il giorno dom 6 ott 2019 alle ore 13:32 Moritz Mœller
>>>  ha scritto:
 Define 'most focused'.
 I give you an example to understand this request better. [...]
>>> Yes, you are right, but in your case the couple is the main thing that
>>> is moving in the picture. For my use case, imagine I am taking photos
>>> of people who are giving a talk. Some photos of the burst may be
>>> blurred because I moved the camera while shooting, whereas some other
>>> shots of the same burst could have less blur because my hands
>>> were not moving during the exposure time, so the photo will have less
>>> blur.
>>> It would be great if an algorithm could detect the best shots

___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org

Re: [darktable-dev] GCC version, optimisation options, split-loops

2019-10-06 Thread Aurélien Pierre
Hi,

We support GCC 8 and 9. GCC 6 is quite old already.

The commit you refer to affects only CLang.

Cheers,

Aurélien.

Le 06/10/2019 à 15:26, Marco Tedaldi a écrit :
> Hi Everyone
> After a long time away from this list (but still regularly working
> with git master) I'm back here...
>
>
> I've just tried to compile dt master again and it failed on me...
> The reason is that my GCC doesn't recognize the option split-loops
>
> Error:
> /home/marco/build/darktable/src/iop/toneequal.c:1312:1: error:
> unrecognized command line option ‘-fsplit-loops’
>
> My GCC-Version
> marco@schwipschwap:~/build/darktable$ gcc --version
> gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516
>
> Interestingly,the last time I compiled dt it worked. It was: dt
> 2.7.0+1709~g580bf49da
>
>
> As a workaround for me I've just removed the option "split-loops" from
> the following files:
> src/common/fast_guided_filter.h
> src/common/luminance_mask.h
> src/iop/choleski.h
> src/iop/filmicrgb.c
> src/iop/toneequal.c
>
>
> So my question is: what version of gcc is required to compile it?
>
> could it be that commit 50742fa02bdf511e62f3bbe10b11c61c2036e4e5 
> https://github.com/darktable-org/darktable/commit/50742fa02bdf511e62f3bbe10b11c61c2036e4e5#diff-b93b6846a64705e34a1eb02a9d620317
>
> made my version of gcc stumble?
>
> best regards
>
> Marco
>
> ___
> darktable developer mailing list to unsubscribe send a mail to
> darktable-dev+unsubscr...@lists.darktable.org

___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org

Re: [darktable-dev] Automatically select the most focused photo in a burst of photos

2019-10-06 Thread Aurélien Pierre
That can be easily done by computing the L2 norm of the laplacian of the
pictures, or the L2 norm of the first level of the wavelet decomposition
(which is used in the focus preview), and taking the maximum.

As usual, it will be more work to wire the UI to the functionality than
writing the core image processing.
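
As a minimal sketch (not darktable code), assuming a single-channel
luminance buffer, the laplacian variant could look like this in C :

#include <math.h>

/* illustration only : 5-point stencil approximation of the 2D laplacian,
   reduced to a single sharpness score per image */
float sharpness_score(const float *lum, const int width, const int height)
{
  double sum = 0.0;
  for(int y = 1; y < height - 1; y++)
    for(int x = 1; x < width - 1; x++)
    {
      const float lap = lum[y * width + x - 1] + lum[y * width + x + 1]
                      + lum[(y - 1) * width + x] + lum[(y + 1) * width + x]
                      - 4.f * lum[y * width + x];
      sum += (double)lap * lap;
    }
  return (float)sqrt(sum); /* L2 norm : higher score = sharper picture */
}

Scores are only comparable between images of the same size and exposure
normalization, which is fine within a burst.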

Le 06/10/2019 à 14:14, Germano Massullo a écrit :
> Il giorno dom 6 ott 2019 alle ore 13:32 Moritz Mœller
>  ha scritto:
>> Define 'most focused'.
>> I give you an example to understand this request better. [...]
>
> Yes, you are right, but in your case the couple is the main thing that
> is moving in the picture. For my use case, imagine I am taking photos
> of people who are giving a talk. Some photos of the burst may be
> blurred because I moved the camera while shooting, whereas some other
> shots of the same burst could have less blur because my hands
> were not moving during the exposure time, so the photo will have less
> blur.
> It would be great if an algorithm could detect the best shots
> ___
> darktable developer mailing list
> to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org
>

___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org

Re: [darktable-dev] Darktable for ARM

2019-10-06 Thread Aurélien Pierre
Hi,

Image processing filters on raw pictures (even more now, with 24-52 Mpx
images) are really demanding on computational power (especially the
nicest ones, unsurprisingly), and darktable uses 32 bits floating point
arithmetic to perform them, in order to avoid most numerical issues you
would get with integer processing.

Although they can theoretically be performed on any 32-bit CPU, these
operations are optimized at a relatively low level only for x86_64
architectures and will most probably run too slowly to be practically
usable on ARM and 32-bit architectures. Optimizing for performance is
already a challenge (and a burden) to support all the existing x86_64
generations of SIMD instructions (desktop users being the core market of
photographers), plus GPU offloading through OpenCL, plus ensuring
consistent behaviour between GPU and CPU code paths and between
different vendors (Intel / Nvidia / AMD) OpenCL.

For this reason, there is no active support of 32-bit platforms in
darktable, especially since most Linux distributions have dropped 32-bit
kernels, so it might or might not compile/work, but don't expect
bug fixes for that at this point (unless someone steps out to do it).

For ARM CPU, anyway, you might want to get rid of all the GTK UI +
third-party libs bloat in darktable and start fresh with an
embedded/lightweight approach, instead of force-fitting a GTK desktop
app into something that will never be fluid enough to be practically
usable for a photographer, outside of a geek
playground/proof-of-concept. Any denoising module, or even local
contrast enhancement, will put your ARM on its knees, either 64 or 32
bits, and even if it's enough to shoot a YouTube video to prove
opensource zealots you did it and FOSS rocks, it's unrealistic for a
daily use by today's standards.

Cheers,

Aurélien.

Le 06/10/2019 à 12:43, Holger Klemm a écrit :
> Hello, 
> I installed Raspbian Buster on a Raspberry Pi 3B + and tried to compile 
> Darktable 2.6.2.
>
> cmake is aborted with the error message not supported platform.
> Is this a bug or is it due to the 32bit operating system?
>
> Cheers
> Holger
>
> Am Samstag, 5. Oktober 2019, 13:41:46 CEST schrieben Sie:
>> On Saturday, 5 October 2019 08:59:57 CEST Holger Klemm wrote:
>>> Hello,
>> Hi,
>>
>>> is an ARM version planned for Darktable 3.0.0?
>>> The current Raspberry Pi 4, Rock Pi 4 and NanoPi M4 are available with 4GB
>>> of RAM and should be powerful enough to handle small tasks.
>>> In particular, with the camera control then new applications would arise.
>>>
>>> I would be very happy about an ARM version for Rasbian / Armbian.
>> there is one for openSUSE
>>
>> http://download.opensuse.org/repositories/graphics:/darktable/
>> openSUSE_Tumbleweed_ARM/
>>
>> and Fedora has also aarch64
>>
>> https://koji.fedoraproject.org/koji/buildinfo?buildID=1322464
>>
>>
>> Cheers,
>>
>>  Andreas
>
>
>
> ___
> darktable developer mailing list
> to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org
>

___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org

Re: Fwd: [darktable-dev] Feature freeze for 3.0

2019-08-18 Thread Aurélien Pierre
Hi,

for the last 2.6 version, it took me a full-time week to write the doc
for 3 modules, trying to accurately describe how they worked in an
accessible way, in my 3rd language. Instead, would it be helpful if I
recorded videos showing what's new and how it works (possibly in a
disorganized/non-linear order), and let you put the information into
shape for the documentation ? Not that I want to hand you the thankless
job, but I might be more useful on the debugging team as the release
approaches. I'm still happy to do proof-reading.

Cheers,

Aurélien.

Le 18/08/2019 à 06:34, Bruce Williams a écrit :
> Thanks, I'll check it out!
>
> Cheers,
> Bruce Williams.
>
> -- Forwarded message -
> From: *Ariel Kanterewicz* mailto:ar...@aknt.com.ar>>
> Date: Sun., 18 Aug. 2019, 09:06
> Subject: Re: [darktable-dev] Feature freeze for 3.0
> To: darktable  >
>
>
> @Bruce: if you're willing to pay for it, there's a *really* nice git
> course on pluralsight, it's the best intro to the topic I've found
> yet: https://app.pluralsight.com/library/courses/how-git-works/
>
> I'm excited to see the feature list in the new release, especially the
> Culling mode :-D
>
> On Sat, 17 Aug 2019 at 22:38, Bruce Williams  > wrote:
>
> I will certainly do what I can.
> As yet, I have not delved into the world of github, pull requests,
> merges, and so forth.
> Nor compiling!
> But when I get home from holiday (another 8 days), that is on my
> list of things to get up to speed on. 
>
> Cheers,
> Bruce Williams.
>
> -- Forwarded message -
> From: *Ulrich Pegelow*  >
> Date: Sun., 18 Aug. 2019, 01:31
> Subject: Re: [darktable-dev] Feature freeze for 3.0
> To:  >
>
>
> Hi,
>
> frankly speaking I am still using the 2.6 branch and have no
> experience
> of any of the 2.7 features at all. I would be happy if someone
> closer to
> the recent development (Bruce?) could take over the manual.
>
> Best wishes
>
> Ulrich
>
> Am 13.08.19 um 17:42 schrieb Pascal Obry:
> >
> > Hi Matthieu,
> >
> > I fully agree here. I'm not the maintainer of the documentation so I
> > cannot speak for Ulrich.
> >
> > That being said, I can certainly ping developers to start a draft of
> > the code they have introduced in dt 3.0.
> >
> > I have noted that Bruce will be happy to help doing the
> preparation of
> > the documentation from draft to final version. That's really nice,
> > thanks Bruce for stepping in!
> >
> > So let's try to achieve that, we'll see how it works.
> >
> > Cheers,
> >
>
> 
> ___
> darktable developer mailing list
> to unsubscribe send a mail to
> darktable-dev+unsubscr...@lists.darktable.org
> 
>
>
> 
> ___
> darktable developer mailing list to unsubscribe send a mail to
> darktable-dev+unsubscr...@lists.darktable.org
> 
>
>
>
> -- 
> Ariel Kanterewicz
> http://aknt.com.ar
>
> ___
> darktable developer mailing list to unsubscribe send a mail to
> darktable-dev+unsubscr...@lists.darktable.org
> 
>
> ___
> darktable developer mailing list to unsubscribe send a mail to
> darktable-dev+unsubscr...@lists.darktable.org

___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org

Re: [darktable-dev] Darktable current master fails to build on Fedora 30

2019-07-25 Thread Aurélien Pierre
Hi,

someone needs to add preserve_colors to the dt_omp_firstprivate(…) clause
on the next line, to please GCC 9.
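
For illustration, this is the kind of change meant here; the real clause
list in basecurve.c is longer, and dt_omp_firstprivate is a darktable
macro wrapping OpenMP's firstprivate() :

#pragma omp parallel for default(none) \
  dt_omp_firstprivate(mul, preserve_colors, work_profile /* , ... */)

With default(none), GCC 9 requires every variable used inside the loop,
preserve_colors included, to be listed explicitly in a data-sharing
clause.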

Cheers,

Aurélien.

Le 25/07/2019 à 11:48, Terry Duell a écrit :
> Hello,
> Attempting to do an rpmbuild of current master (e31d89cbff35b) on
> Fedora 30, fails with the following message...
>
> /home/terry/rpmbuild/BUILD/darktable-2.7.0/src/iop/basecurve.c:982:29:
> error: 'preserve_colors' not specified in enclosing 'parallel'
>   982 | const float lum = mul * dt_rgb_norm(inp, preserve_colors,
> work_profile);
>   |
> ^~~
> /home/terry/rpmbuild/BUILD/darktable-2.7.0/src/iop/basecurve.c:971:9:
> error: enclosing 'parallel'
>   971 | #pragma omp parallel for
> default(none)    \
>   | ^~~
> make[2]: *** [src/iop/CMakeFiles/basecurve.dir/build.make:71:
> src/iop/CMakeFiles/basecurve.dir/introspection_basecurve.c.o] Error 1
> make[2]: Leaving directory
> '/home/terry/rpmbuild/BUILD/darktable-2.7.0/build'
> make[1]: *** [CMakeFiles/Makefile2:5656:
> src/iop/CMakeFiles/basecurve.dir/all] Error 2
> make[1]: Leaving directory
> '/home/terry/rpmbuild/BUILD/darktable-2.7.0/build'
> make: *** [Makefile:155: all] Error 2
> error: Bad exit status from /var/tmp/rpm-tmp.RDfswD (%build)
>
> Hope this helps,
>
> Cheers,

___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org

Re: [darktable-dev] implementation question: remove all unused modules

2019-06-13 Thread Aurélien Pierre
General means every module should behave the same, which avoids the
burden of having to manage them locally and separately, and of forgetting
later that some of them need special care because *there is no dev
documentation to write that in* (that's what happened with the lighttable
button).

Now, some modules need to be always enabled because a raw photograph is
not an image without them. What remains to decide is the scope of these
modules : are we only speaking of basic "codecs" modules (demosaicing,
input & output colour profiles), or are we integrating less basic ones
although very much needed ?

The fact that highlights reconstruction and white balance have been
moved from the "always on" to the "off upon user request but still on by
default" bin with no further handling/checking is a mistake, and possibly
bad design until someone is able to tell me what the use case for having
them disabled is. Because, without them, your clipped highlights are
magenta, your input color profile is useless and your whole image is
possibly green or worse, so why would anyone want that ? Infrared
photographers ?

These 2 modules need to be synchronized with the default behaviour
handled by the SQL queries, instead of having the SQL queries patch
every possible combination of "user on/off" + "default on/off" +
"good/bad assumptions if no data". Because the list of modules is ever
growing, that will soon be a nightmare (already is).

Otherwise, having an "enabled by default if not actively disabled by
user" IOP state means all disabled modules will need to stay in the
history no matter what, because it's a fallacy to think that the people
maintaining modules will think of updating well-hidden
sub-functionalities like that when they rework the pixelpipe.
Especially in the absence of documentation.

Le 13/06/2019 à 14:56, Julian Rickards a écrit :
> Not to throw gas on the fire but it seems to me that some of this is
> already done.
>
> In Lighttable, when you wish to copy the history from one image to
> another, there are some modules missing (input and output) and also,
> the "off" modules aren't included in the possible modules to copy over.
>
> On Thu, Jun 13, 2019 at 5:30 AM  <mailto:dt-l...@stefan-klinger.de>> wrote:
>
> Aurélien Pierre (2019-Jun-13, excerpt):
> > There is no image if the white balance has been disabled, just
> random
> > bits for debugging purposes. The input color profile expects
> > D50-balanced input, and this one is always on. Same with the
> highlight
> > clipping module, most software have it built in the input
> profile and
> > don't expose it. Disable these 2 and you mess up your whole
> module stack.
> >
> > So, again, what is the purpose of disabling these 2 modules,
> except for
> > debugging, and why can they be disabled at all but are still
> > auto-applied at opening ?
>
> It might well be a crap idea to do this, but it's possible, so someone
> will do it.
>
> I see two possible outcomes to the current situation:
>
>   1. Some modules must not be disabled.
>
>     Then the possibility to disable them is a bug that must be
>     resolved.
>
>     This will "break" all existing XMPs that disable one of these
>     modules.
>
>   2. All modules can be disabled.
>
>     Then they all must be reported to the history stack, and
>     compression must not modify their topmost recorded state.
>
>     This will "break" all existing XMPs that lack a mentioning of
>     these modules in their stack.
>
> Either way, it's a messy situation that needs to be resolved.
>
> > The stack of exceptions handling in the SQL instructions smells bad.
>
> I agree.  My first implementation was just like that.
>
> > Things should stay general.
>
> General would mean: No exception for "white balance" and others.
> Later modules should simply refuse to work if no appropriate input is
> available (message: “need white-balanced image”).  To implement this,
> something like the 'type' of an image could be propagated upwards
> along the stack of IOPs (is this really not happening right now?).
> Just as an oversimplified example, "white balance" might require 'RAW'
> and provide 'D50-balanced', while "color profile" would require
> 'D50-balanced'.  This would allow to implement other modules providing
> 'D50-balanced' without technical debt hindering their adoption later
> on.
>
> Regarding the length of this thread, I assume that it will take quite
> some time to reach consensus

Re: [darktable-dev] implementation question: remove all unused modules

2019-06-13 Thread Aurélien Pierre
There is no image if the white balance has been disabled, just random
bits for debugging purposes. The input color profile expects
D50-balanced input, and this one is always on. Same with the highlight
clipping module : most software has it built into the input profile and
doesn't expose it. Disable these 2 and you mess up your whole module
stack.

So, again, what is the purpose of disabling these 2 modules, except for
debugging, and why can they be disabled at all but are still
auto-applied at opening ?

The stack of exception handling in the SQL instructions smells bad.
Things should stay general.

Le 13/06/2019 à 08:36, para...@paraf.in a écrit :
>
>
> On 12 Jun 2019, at 20:21, Aurélien Pierre  <mailto:rese...@aurelienpierre.com>> wrote:
>
>> Hi,
>>
>> I'm the author of the lighttable's compress history button. White
>> balance and highlight reconstruction should never be turned off, so
>> what would be the purpose of having them disabled in the first place ?
>>
>
> Yet they can be disabled, some other modules can’t.
>
>> Then, we can discuss what the history compression should
>> do/avoid/prevent.
>>
>
> History compression shouldn’t modify the image.
>
>> Aurélien.
>>
>> Le 12/06/2019 à 11:40, dt-l...@stefan-klinger.de a écrit :
>>> thokster (2019-Jun-11, excerpt):
>>>> Hi,
>>>>
>>>> Am 11.06.19 um 15:32 schrieb dt-l...@stefan-klinger.de:
>>>>
>>>>> This option is not about saving disk space, but rather about cleaning
>>>>> up.
>>>> Is the "compress history" button in lighttable view doing anything else?
>>> Woha.  I have been talking about "compress history" in darkroom, not
>>> lighttable.
>>>
>>> They seem not to use the same code internally.
>>>
>>> I have to admit that I have not thought about the "compress history"
>>> button in lighttable.  That one already seems to remove switched-off
>>> modules in current master [1].  And it seems to do this incorrectly
>>> wrt. parafin's email:
>>>
>>> * In darkroom, disable "white balance" and/or "highlight
>>>   reconstruction".
>>>
>>> * Go to lighttable, select that image, and apply "history stack →
>>>   compress history"
>>>
>>> * When you open the image again, "white balance" and "highlight
>>>   reconstruction" will be enabled again.
>>>
>>> So if compressing should not change the image, then my current
>>> implementation [2] is even more correct, although it only is
>>> applicable from darkroom, not lighttable.
>>>
>>> Cheers
>>> Stefan
>>>
>>> 
>>> [1] 2.7.0+1443~g9bfbb225e
>>> [2] 
>>> https://github.com/darktable-org/darktable/compare/master...s5k6:compressHistory
>>>
>>>
>>> --
>>> http://stefan-klinger.deo/X
>>> I prefer receiving plain text messages, not exceeding 32kB. /\/
>>>   \
>>> ___
>>> darktable developer mailing list
>>> to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org
>>>
>>
>> ___
>> darktable developer mailing list to unsubscribe send a mail to
>> darktable-dev+unsubscr...@lists.darktable.org
>> <mailto:darktable-dev+unsubscr...@lists.darktable.org>
>
> ___
> darktable developer mailing list to unsubscribe send a mail to
> darktable-dev+unsubscr...@lists.darktable.org

___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org

Re: [darktable-dev] implementation question: remove all unused modules

2019-06-12 Thread Aurélien Pierre
Hi,

I'm the author of the lighttable's compress history button. White
balance and highlight reconstruction should never be turned off, so what
would be the purpose of having them disabled in the first place ? Then,
we can discuss what the history compression should do/avoid/prevent.

Aurélien.

Le 12/06/2019 à 11:40, dt-l...@stefan-klinger.de a écrit :
> thokster (2019-Jun-11, excerpt):
>> Hi,
>>
>> Am 11.06.19 um 15:32 schrieb dt-l...@stefan-klinger.de:
>>
>>> This option is not about saving disk space, but rather about cleaning
>>> up.
>> Is the "compress history" button in lighttable view doing anything else?
> Woha.  I have been talking about "compress history" in darkroom, not
> lighttable.
>
> They seem not to use the same code internally.
>
> I have to admit that I have not thought about the "compress history"
> button in lighttable.  That one already seems to remove switched-off
> modules in current master [1].  And it seems to do this incorrectly
> wrt. parafin's email:
>
> * In darkroom, disable "white balance" and/or "highlight
>   reconstruction".
>
> * Go to lighttable, select that image, and apply "history stack →
>   compress history"
>
> * When you open the image again, "white balance" and "highlight
>   reconstruction" will be enabled again.
>
> So if compressing should not change the image, then my current
> implementation [2] is even more correct, although it only is
> applicable from darkroom, not lighttable.
>
> Cheers
> Stefan
>
> 
> [1] 2.7.0+1443~g9bfbb225e
> [2] 
> https://github.com/darktable-org/darktable/compare/master...s5k6:compressHistory
>
>
> --
> http://stefan-klinger.deo/X
> I prefer receiving plain text messages, not exceeding 32kB. /\/
>   \
> ___
> darktable developer mailing list
> to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org
>

___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org

Re: [darktable-dev] Magnification of modules in Darkroom

2019-06-03 Thread Aurélien Pierre
The point of collapsing the histogram is to spare some room. Adding a
GUI control to do it is equivalent to using more space to save some
space. How does that solve anything ?

Remember keyboard shortcuts are user-configurable.

Le 31/05/2019 à 04:26, Patrick Shanahan a écrit :
> * David Vincent-Jones  [05-30-19 19:50]:
>> Remembering all of these multi-key shortcuts is a bit of a pain
>> (particularly for us old-timers) would it be practical to incorporate a
>> further option in the current set or provide an on/off icon on a 'histogram'
>> header line.
>>
>> David
>>
>> On 2019-05-30 10:14 a.m., William Ferguson wrote:
>>> 2.7 already has histogram collapsing, CTRL+SHIFT+H
>>>
>>>
>>> On Thu, May 30, 2019 at 12:54 PM Florian W >> > wrote:
>>>
>>> @julian
>>>
>>> Collapsing the histogram is also part of my bucket of features I
>>> would like to implement soon enough.
>>> I see that I'm not the only one often bothered by the space it takes,
>>> even on a 24 inches display.
>>>
>>> I think it could be associated with a key shortcut as one still
>>> often have to look at it to control what he's doing.
>>>
>>> And at some other times I would like to be able to enlarge it as
>>> Giuseppe suggests for any module.
>>>
>>> I keep in mind your inputs regarding this when I'll be able to
>>> take some time on it.
>>>
>>> Le jeu. 30 mai 2019 15:18, Julian Rickards
>>> mailto:julian.ricka...@gmail.com>> a
>>> écrit :
>>>
>>> I use an older laptop with limited screen real estate and some
>>> modules are quite large (height is the issue on my laptop) and
>>> any means of improving this would be appreciated. I don't know
>>> if it would be part of the consideration but sometimes
>>> (recognize that I'm not an expert at dt or photography) I'd
>>> like to collapse the histogram to have more room. In my
>>> experience, the histogram is not always necessary for all
>>> steps in the editing process so the ability to collapse or
>>> open it as needed would help with screen real estate.
>>>
>>> On Thu, May 30, 2019 at 4:51 AM Florian W
>>> mailto:flo.wern...@gmail.com>> wrote:
>>>
>>> Hi Giuseppe,
>>>
>>> I also found myself thinking the same in some cases (for
>>> the histogram, for modules involving curves and nodes, for
>>> the fine tuning of a parametric mask...).
>>>
>>> I added this to my bucket list of features I would like to
>>> implement; I see I'm not the only one who misses it.
>>>
>>> Le mer. 29 mai 2019 18:08, Maurizio Paglia
>>> mailto:mpagl...@gmail.com>> a écrit :
>>>
>>> Ciao Giuseppe,
>>> I have the same difficulty.
>>> Please try to look the new dt GUI design because I
>>> think it could solve this problem.
>>> You can test it compiling dt 2.7 from source.
>>>
>>> Thank you,
>>> Maurizio
>>>
>>> Il giorno mer 29 mag 2019 alle ore 17:30
>>> giuseppe.in...@gmail.com
>>> 
>>> >> > ha scritto:
>>>
>>> Hello,
>>> I think it could be useful to have a feature that
>>> allows you to view a development module in a
>>> separate window with larger dimensions than the
>>> one currently planned. There are in fact some
>>> modules, such as the color zone (there may be
>>> other modules), with which it is not easy to
>>> interact precisely because of the limited size of
>>> the window on which to operate. Perhaps a button
>>> on the module's header is sufficient to display
>>> the module in a larger window.
>>>
>>> What do you think about this?
>>>
>>> Regards,
>>> Giuseppe.
>>>
>>> 

Re: [darktable-dev] separate RGB in Tone Curve

2019-06-03 Thread Aurélien Pierre
It's coming in v 2.8 next Christmas.

Le 03/06/2019 à 14:32, Wiktor Nowak a écrit :

> When it's possible to have the tone curve separated into L, A, B channels,
> why isn't it possible with RGB? I'm actually more used to thinking in RGB
> terms, so it would be convenient to have the option, even though it's
> possible to achieve the same results in Lab.
>
> Aside from that, R, G, B channels in the parametric mask would also be
> convenient, I think.

Re: [darktable-dev] DT bad on skin tones?

2019-05-28 Thread Aurélien Pierre
A raw "picture" is just a text file filled with RGB code values. These
code values are not data and need to be decoded, by the software but
also by the user, to be turned into data (meaning : colors). The
software can do some of the low-level decoding, but at the end, only the
user knows if the 80 % luminance in his picture is supposed to encode
white or grey, and do the proper exposure compensation accordingly.

The original approach of darktable was to give users an image that
looked already good, meaning all the decoding was done in software, and
the high-level part was done using assumptions (based on camera
manufacturers' default look). The result is : open the darkroom, what you
see is (almost) what you got on the camera display. Problem 1 : begin
tweaking one thing or another, and it will blow up in your face. Problem
2 : having a truly all-purpose default look is an impossible quest.

I personally think darktable should default to a neutral look (dark and
ugly), but not all the dev team agrees. I guess having a very neutral
picture (only demosaiced) as a default starting point in the darkroom
would freak out most users. (Just see how many don't understand why dt's
preview doesn't look the same as the camera's one…). So, as of now, the
neutral default is an option in parameters (in darktable 2.7/git master).

For some reason, there is also far less education in color science
amongst photographers than in the cinema industry. So, while your
average VFX artist or videographer can build his own pixelpipe using
nodal editors and gets the XYZ/RGB/Lab/LUT/spectral stuff, your average
photographer struggles to understand that a pixelpipe has an order that
matters and you don't just fiddle around modules depending how you feel
today. It's not simply a matter of upgrading the doc to use the soft
(which not many people read anyway), there are theoretical things to
understand as well, and people need to do their homework (not on
Youtube). Retouching is not bounded to any kind of software, you can do
it with film and paper, and learning how to use the soft won't teach
people how to actually retouch (i.e. what needs to be done on the picture).

Anyway, there will be discussions in a few days at the Libre Graphics
Meeting about the pixelpipe, so we will see what can be done to fix
that. Dealing with compatibility and legacy while improving is never simple.

On 28/05/2019 at 11:16, François Tissandier wrote:
> But since apparently the people in charge of Darktable know about this :
>  "which was a design mistake in the first place because you are
> applying a matrice profile expecting scene-linear input on
> perceptually-encoded data"
>
> are there plans to fix the design mistake ? 
>
> I know that you are offering a solution, but if you take the example
> of a newcomer : she or he hears about Darktable, visits the website,
> downloads and installs. Maybe reads the documentation if that's
> someone trying to put things in the right order, like you with the
> modules ;) But then, even in the documentation, there is nothing
> stating "BASE CURVES ARE EVIL, CRAP, GARBAGE, NO-GO, DON'T TOUCH, BIO
> HAZARD, KEEP AWAY, HUN HUN, SURVIVORS WILL BE SHOT AGAIN." People will
> just try different modules and even use base curve, since it's on by
> default. If it's really broken by design, I hope that version 2.8 will
> turn it off by default. Or at least explain in the documentation. If
> you explain it here Aurélien, it's great for people actively following
> the project, but for most users, I'm not too sure if it can have an
> effect. Those people are not necessarily "asking for trouble", since
> we can't even really know about it without reading the mailing list,
> or I missed something in the darktable communication somewhere ? But
> I'm always trying to put myself in the shoes of the completely average
> user, instead of my own case. 
>
> Maybe you have taken all this into account and the default settings as
> well as the documentation of 2.8 will be better, and you can ignore
> this email. The image of Darktable is important, to gain new users. If
> by default, without digging in the mailing list, you can't know what
> settings are wrong, you will have a bad first experience with DT.  
>
> Just my two cents, it would be a pity to lose users because of this. 
>
>     François
>
> On Tue, May 28, 2019 at 11:02 AM Aurélien Pierre wrote:
>
> Sure, there are people who want to fight the theory of signal
> processing to complain about the consequences, and people who do
> the right thing at the right time in the pixelpipe. Surprisingly,
> the latter get better results faster.
>
> Filmic is simple to use if you understand what exposure means in
> the scene-referred space and in the display-referred space. […]

Re: [darktable-dev] DT bad on skin tones?

2019-05-28 Thread Aurélien Pierre
Hi !

That falls back to the difference between Lightroom and Capture One
colors… If the in-camera software has been tweaked for warmer skins, you
won't get the same result with an all-purpose matrix alone.

In color balance, push the highlights slider towards magenta (opposite
of green) with a small saturation, that should do the trick. Maybe do
the same in mid-tones, and push the shadows to green (to soften the shift).

A.

On 28/05/2019 at 10:52, Christian wrote:
> Hello,
>
> :-)
>
> So I did a quick re-test on the X-T3 RAF.
>
> basecurve vs tonecurve (basecurve off) vs filmic (basecurve off).
>
> So, filmic is better but I could not completely get rid
> of the green tint.
>
> Chris
>
>
>
> On 28.05.2019 at 09:33, Aurélien Pierre wrote:
>> For the last time :
>>
>>     *BASE CURVES ARE EVIL, CRAP, GARBAGE, NO-GO, DON'T TOUCH, BIO
>>     HAZARD, KEEP AWAY, HUN HUN, SURVIVORS WILL BE SHOT AGAIN.*
>>

Re: [darktable-dev] DT bad on skin tones?

2019-05-28 Thread Aurélien Pierre
Sure, there are people who want to fight the theory of signal processing
to complain about the consequences, and people who do the right thing at
the right time in the pixelpipe. Surprisingly, the latter get better
results faster.

Filmic is simple to use if you understand what exposure means in the
scene-referred space and in the display-referred space. It just remaps
scene-referred exposure (in ]0 ; + inf[) to display-referred one (in
[0.0 ; 1.0]), by letting users control what is considered white, black
and grey in the picture, and to what values they should be mapped on the
display. But it does so at the end of the pipe, leaving your pixelpipe
linear before, to preserve the consistency of every physically-bounded
filter :

  * blur == optical diffusion -> needs linearity,
  * denoising == gradient smoothing -> needs linearity,
  * color profile == linear algebra -> needs linearity,
  * etc.

Base curves do exactly the same (look at the shape of the curves… S
curves with raised mid-tones), but too early in the pipe, and with
pre-baked curves done by reverse-engineering the raw->JPEG conversion of
camera manufacturers. Thus, you cross the non-linearity wall too early
in the pipe, and get a one-size-fits-all look. No matter how you put it,
call it different retouching approaches or masochism, it's wrong. I
don't care about opinions or styles, this is signal processing, not
democracy.

Why ? Because photons live on a linear scale of energy, and good-looking
filters do nothing but simulate numerically on RGB channels what
would/should have happened on photons in real life. So, blurring,
sharpening, masking and denoising need scene-linear RGB code values.
Whereas every tone/base-curve (including filmic) is raising the
mid-tones and adding more contrast (S curve) to reproduce our
logarithmic scale of *perceived* energy. You go from scene-linear to
perceptual space with a logarithm, so you rescale the gradients of
energy (EVs are a log scale, perceptually uniform).
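
As a toy illustration of that remapping, here is a minimal sketch of the
principle only (not filmic's actual spline; the grey value and the
dynamic-range bounds below are example numbers a user would set) :

    #include <math.h>

    /* Express each pixel as an exposure (EV) relative to middle grey,
     * then rescale the user-chosen dynamic range to the display range
     * [0 ; 1]. The S curve for perceived contrast comes after this. */
    float scene_to_display(float lin, float grey,
                           float black_ev, float white_ev)
    {
      const float ev = log2f(fmaxf(lin, 1e-9f) / grey);  /* scene EV */
      const float out = (ev - black_ev) / (white_ev - black_ev);
      return fminf(fmaxf(out, 0.f), 1.f);                /* clip */
    }

    /* e.g. scene_to_display(0.18f, 0.18f, -7.f, 4.f) maps middle grey
     * (0 EV) to 7/11 ~= 0.64 of the display range. */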

Once you have crossed that wall of non-linearity, you can kiss your
color accuracy good-bye if you try to apply physically-bounded filters
in a perceptual space. That's exactly what people see with the "wrong
profiles" inconsistencies in this thread. The profiles are not faulty
here, as proven by running dcraw with the same matrices. But the input
profiles are applied *after* the base curve in the pipe, which was a
design mistake in the first place because you are applying a matrix
profile expecting scene-linear input on perceptually-encoded data.

As of now, I will stop answering messages from people complaining about
color artifacts while using base curves. They are asking for trouble,
they get it. I have offered an alternative, if people don't want to
listen, it's not my problem.

On 28/05/2019 at 10:04, François Tissandier wrote:
> The base curve can still be used with the standard preset instead of the
> camera one; colours are quite fine then. I was doing that before the
> arrival of filmic. So the base curve can be kept. And indeed it's good
> to have the choice. 
>
> On Tue, 28 May 2019 at 10:00, Florian W wrote:
>
>     Not everyone has the same approach to digital development (e.g.
>     film-like response vs. more creative curve editing, with its
>     disadvantages), and one of the strong advantages of darktable is
> allowing all these use cases. Starting a war about this won't get
> us anywhere in the issue at hand here.
>
>
> On Tue, 28 May 2019 at 09:33, Aurélien Pierre wrote:
>
> For the last time :
>
> *BASE CURVES ARE EVIL, CRAP, GARBAGE, NO-GO, DON'T TOUCH,
> BIO HAZARD, KEEP AWAY, HUN HUN, SURVIVORS WILL BE SHOT AGAIN.*
>
> I wouldn't have taken 2 months of my life to develop filmic if
> base curves had worked as expected. Base curves are a broken
> design and will always destroy colors. I have repeated that
> multiple times in the past years, it would be great if people
> started to listen.
>
> In darktable 2.8, there will be a global preference to have
> the base curves disabled by default because they really harm,
> especially for the newest HDR cameras. Until then, the first
> thing you need to do while opening a raw picture is to disable
> that god-forsaken module manually.
>
> Thanks for confirming it has nothing to do with matrices
> though. That means everything works as expected.
>
> Aurélien.
>
On 28/05/2019 at 09:00, Florian Hühn wrote:
>>
>>
>> If RawTherapee is really using the same matrices, it
>> would be interesting to find out what's being done
>> differently (or additionally)…

Re: [darktable-dev] DT bad on skin tones?

2019-05-28 Thread Aurélien Pierre
For the last time :

*BASE CURVES ARE EVIL, CRAP, GARBAGE, NO-GO, DON'T TOUCH, BIO
HAZARD, KEEP AWAY, HUN HUN, SURVIVORS WILL BE SHOT AGAIN.*

I wouldn't have taken 2 months of my life to develop filmic if base
curves had worked as expected. Base curves are a broken design and will
always destroy colors. I have repeated that multiple times in the past
years, it would be great if people started to listen.

In darktable 2.8, there will be a global preference to have the base
curves disabled by default because they really harm, especially for the
newest HDR cameras. Until then, the first thing you need to do while
opening a raw picture is to disable that god-forsaken module manually.

Thanks for confirming it has nothing to do with matrices though. That
means everything works as expected.

Aurélien.

On 28/05/2019 at 09:00, Florian Hühn wrote:
>
>
> If RawTherapee is really using the same matrices, it would be
> interesting to find out what's being done differently (or
> additionally)...
>
> RawTherapee uses dcraw for import. I took the A7RIII testchart raw
> and ran it through 'dcraw -v -w -o 1 -T DSC00157.ARW', then imported
> the .ARW and the TIFF created by dcraw into darktable. The TIFF looks
> more natural to me. Especially the skin color of the guy on the right
> looks somehow a bit yellowish / ill in the .ARW but more natural in
> the TIFF from dcraw.
> BUT: When importing the TIFF no base curve is applied. When I disable
> base curve on the .ARW and instead use levels and tone curve manually
> I can get a look that is closer to the TIFF (i.e. the dcraw variant).
> Maybe it comes down to different default settings in darktable
> importing vs. dcraw. At some point I'd like to double-check that the
> matrix calculations done by DT are indeed carried out as intended, but
> so far I didn't find a way to artificially create a raw-file for this
> purpose.

Re: [darktable-dev] DT bad on skin tones?

2019-05-26 Thread Aurélien Pierre
darktable handles all the matrices the same way. It's simple linear
algebra: if it works for some matrices, it will work for all matrices.
But if the coefficients of the matrix are wrong, nothing magical will
happen.
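
For reference, applying such a profile is a single 3×3 matrix product per
pixel. A minimal sketch (the coefficients below are made-up placeholders,
not any real camera's) :

    /* Sketch of an input profile: one 3x3 matrix multiplication per
     * pixel. The coefficients are placeholders, not a real camera's. */
    static const float cam_to_xyz[3][3] = {
      { 0.7f, 0.2f, 0.1f },
      { 0.3f, 0.6f, 0.1f },
      { 0.0f, 0.1f, 0.9f },
    };

    static void camera_rgb_to_xyz(const float in[3], float out[3])
    {
      for(int i = 0; i < 3; i++)
        out[i] = cam_to_xyz[i][0] * in[0]
               + cam_to_xyz[i][1] * in[1]
               + cam_to_xyz[i][2] * in[2];
    }

A wrong coefficient shifts every pixel in the same deterministic way,
hence "nothing magical will happen".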

On 25/05/2019 at 21:11, Christian wrote:
> Hi,
> I re-did the test with the Fuji X-E2 version of the test-chart shoot
> and the skin colors are much better.
>
> Maybe this helps in identifying the problem.
>
> Link to RAW file:
>
> https://www.dpreview.com/reviews/image-comparison/download-image?s3Key=6641b9cbeec04777b85655fad61e4b4d.raf
>
>
> Chris
>
> On 24.05.2019 at 22:21, Florian Hühn wrote:
>>  > So the input color profile (camera RGB -> XYZ transformation matrice)
>> is wrong in darktable.
>
>> Maybe it is not the color matrix itself that is wrong (source code
>> says it was extracted from Adobe DNG Converter and those matrices are
>> usually very good), but handling the color matrix in DT is somehow off.

Re: [darktable-dev] DT bad on skin tones?

2019-05-24 Thread Aurélien Pierre
So the input color profile (camera RGB -> XYZ transformation matrix) is
wrong in darktable. Can you export an ICC input profile from whatever
software the good picture comes from ?

Aurélien.

On 24/05/2019 at 20:40, William Ferguson wrote:
> Perhaps a better check for color accuracy is checking the color
> checker included in the raw?
>
>
> Regards,
>
> Bill
>
> On Fri, May 24, 2019 at 2:01 PM Moritz Moeller wrote:
>
> On 24.5.19 19:26, David Vincent-Jones wrote:
> > Raw is RAW  without processing. It is up to you to
> manipulate the
> > raw data to achieve the look that you want. [...]
>
> OP clearly said they had no modules that shift color turned on.
> The DT processed image clearly has green tint in the shadows.
>
> For giggles, I opened the image in RawTherapee and Capture One and
> neither shows any green tint.
>
> So the conclusion seems to be that something in DT needs fixing.
>
>
> Beers,
>
> .mm
>
>
> P.S.:
> I am always flabbergasted about the arrogance/passive aggressiveness
> regularly displayed by some people on this list.
>
> Particular when users report bugs or shortcomings of DT.
>
> DT is not perfect. No software is. Get over it.

Re: [darktable-dev] meaning of RGB values

2019-05-22 Thread Aurélien Pierre
Hi,

that's the problem I got with filmic. Once the wall of the non-linearity
is crossed, there is no coming back, so all you can do is move your
module to the right place in the pipe.

Basically every filter dealing with physical phenomena (diffusion,
refraction) should be put in the linear part of the pipe, early. I
strongly think the signal-processing part of the pipe should be kept
fully separate from the artistic retouching, which is not the case now
in dt, with non-local denoising performed in Lab space late in the pipe,
or base curves applied early.

Be aware that RGB spaces are always linear; the gamma thing in RGB
spaces is only an encoding trick to avoid quantization artifacts when
dealing with (rounded) integers in files, and color management systems
always revert/decode it before converting from RGB to other RGB spaces
(the conversion usually happens in floating point precision).
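
As an illustration, this is the standard per-channel sRGB decoding a CMS
applies before any RGB -> RGB conversion (a sketch, using the published
sRGB constants) :

    #include <math.h>

    /* Decode an sRGB-encoded value to linear light. The "gamma" is just
     * this reversible per-channel encoding. Standard sRGB constants. */
    float srgb_to_linear(float v)
    {
      return (v <= 0.04045f) ? v / 12.92f
                             : powf((v + 0.055f) / 1.055f, 2.4f);
    }

    /* The inverse, used only when re-encoding for files or display: */
    float linear_to_srgb(float v)
    {
      return (v <= 0.0031308f) ? v * 12.92f
                               : 1.055f * powf(v, 1.f / 2.4f) - 0.055f;
    }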

Camera RGB values are supposed to be more or less linear, as they are
three colorimeter readings (filter light with coloured glass, convert photons
to electrons, count them). Camera profiles are usually simple 3×3
matrices converting from camera RGB to XYZ (linear vector base change).
You can also use 3D tonecurves profiles (one tonecurve per RGB channel)
or LUTs (dt now has Lab and RGB LUTs modules), but doing calibration
this way can badly backfire if your calibration shot is not 100 % clean
(evenly lit, no parasite color cast from walls reflection, white light
with a full daylight spectrum).

The camera-independent RGB representation is the best-case scenario that
happens after the input profile module if the camera profile is accurate
and applied on correct data. In practice, I suspect the camera RGB space
is the cleanest to perform physically-bounded transformations, except if
you need some estimation of the luminance (which is a linear combination
of RGB components, and needs a proper white balance correction plus a
profiling), because it is the closest to photons you can get.

So, in practice, you need to open each IOP source file, analyse what can
break your things and what messes up colors, and find the right
trade-off for your module. In practice, since signal-processing and
artistic transforms are mixed up, you will probably end up issuing
warnings in the doc to prevent users from using some ill-placed modules
if they want to use yours, and get a shit-load of emails from users
complaining your module doesn't work while they have done everything you
advised not to do.

Good luck !

Aurélien.

On 23/05/2019 at 01:02, Heiko Bauke wrote:
> Hi,
>
> On 23.05.19 at 00:32, Moritz Mœller wrote:
>> each module that relies on a certain color
>> space must take into consideration everything in the pipe before (and
>> possible undo it, which is not always really possible) to push stuff
>> back
>> into linear, if necessary.
>
> I completely agree.
>
> My question is how can one "take into consideration everything in the
> pipe before" in practice?  My mental picture of the pixel pipe was
> that it starts from a highly non-linear camera-dependent RGB profile
> and reaches somewhere along the pixel pipeline a camera independent
> color representation before switching to LAB space.  Once, RGB color
> representation is linear or has some specific (known) gamma encoding
> switching to linear RGB becomes trivial.  However, I am not sure if my
> picture is right.
>
>
> Heiko
>


Re: [darktable-dev] sharpen module

2019-04-18 Thread Aurélien Pierre
Hi,

please don't do a deconvolution in the Lab color model. Natural blur is
a phenomenon happening to photons and described by a convolution
product. If you want to revert it, you need to use a physically
meaningful color space, e.g. a linear one as close as possible to the
spectral space.

That would be camera RGB, linearly encoded (be sure the gamma is
reverted if the input file is a JPEG/TIFF/PNG file), before it is messed
up with non-linear transfer functions in the pipe (from Parseval's
theorem of energy conservation in convolutions).
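
To make that concrete, here is a minimal non-blind Richardson-Lucy
sketch. It is an illustration of the principle only, not darktable code
and not the regularized algorithm linked below; it assumes the PSF is
known and symmetric (a Gaussian is) and that the input is scene-linear :

    #include <stdlib.h>

    /* Clamped correlation of one channel with a (2r+1)x(2r+1) kernel. */
    static float conv_at(const float *img, int w, int h, int x, int y,
                         const float *k, int r)
    {
      float sum = 0.f;
      for(int j = -r; j <= r; j++)
        for(int i = -r; i <= r; i++)
        {
          int xx = x + i, yy = y + j;
          xx = xx < 0 ? 0 : (xx >= w ? w - 1 : xx);  /* clamp borders */
          yy = yy < 0 ? 0 : (yy >= h ? h - 1 : yy);
          sum += img[yy * w + xx] * k[(j + r) * (2 * r + 1) + (i + r)];
        }
      return sum;
    }

    /* estimate must be initialized by the caller (e.g. a copy of
     * observed); kernel is the known, symmetric PSF. */
    void richardson_lucy(const float *observed, float *estimate,
                         int w, int h, const float *kernel, int radius,
                         int iterations)
    {
      float *blurred = malloc(sizeof(float) * w * h);
      float *ratio   = malloc(sizeof(float) * w * h);
      for(int t = 0; t < iterations; t++)
      {
        /* 1. re-blur the current estimate with the PSF */
        for(int y = 0; y < h; y++)
          for(int x = 0; x < w; x++)
            blurred[y * w + x] = conv_at(estimate, w, h, x, y, kernel, radius);
        /* 2. compare with the observed (blurry) image */
        for(int p = 0; p < w * h; p++)
          ratio[p] = observed[p] / (blurred[p] + 1e-6f);
        /* 3. back-propagate the correction and update the estimate */
        for(int y = 0; y < h; y++)
          for(int x = 0; x < w; x++)
            estimate[y * w + x] *= conv_at(ratio, w, h, x, y, kernel, radius);
      }
      free(blurred);
      free(ratio);
    }

Each iteration re-blurs the estimate, compares it to the observed image
and corrects, which only makes physical sense while the RGB values are
still proportional to photon counts, hence the linearity requirement.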

The sharpening module happens too late in the pipe (after tone curves)
and works in Lab, which is nonsensical (Lab == perceptual model built
upon human vision : that means nothing in optics). I would squeeze it
right after the denoising modules, but I'm not sure yet if it's better
to have it before or after the lens correction.

I have worked for 2 years on regularized blind deconvolutions, you can
use my code and remove the kernel update if you want :

  * 
https://github.com/aurelienpierre/Image-Cases-Studies/blob/master/deconvolve.py
  * 
https://github.com/aurelienpierre/Image-Cases-Studies/blob/master/lib/deconvolution.pyx

Also, Edgardo began to translate my Python code into a dt module 1.5
years ago (although the algo has changed). The major limitation was the
performance :
https://github.com/edgardoh/darktable/blob/rlucy/src/iop/rlucy_deblur.c

Anyway, I have delayed this work because I wasn't ready to code it in C,
but I would be glad to help.

Good luck,

Aurélien.

On 17/04/2019 at 20:16, Heiko Bauke wrote:
> Hi,
>
> I just started to implement a simple (non-blind) deconvolution filter
> as an additional sharpening method.  I plan to add this as an extra
> mode for the sharpening module.
>
> Looking at the code of the current unsharp mask method I realized that
> the module uses its own implementation of a gaussian blur filter.  Is
> there any reason not to employ the one in src/common/gaussian.[ch]? 
> Are there any concerns against simplifying the sharpening module's
> implementation and just calling the functions from
> src/common/gaussian.[ch]?
>
> The deconvolution method also needs to apply gaussian blur kernels and
> I would prefer to use the ones in src/common/gaussian.[ch] only.
>
>
> Heiko
>
>


Re: [darktable-dev] Grey theme

2019-02-25 Thread Aurélien Pierre
Hi,

there are lots of visual artifacts that can result from
background/foreground contrast and the surround lightness adaptation.

Please note that :

  * darktable UI is currently under clean-up to wire every color and
font-size to the CSS stylesheet :
https://github.com/darktable-org/darktable/pull/2037
  * there is a new option in master to switch between themes (hence CSS
sheets). Thanks to Pascal !
  * a grey theme will be added once everything is settled, so you might
not need to bother about CSS anymore.

The general recommendation is to use middle grey (50 % Lab) or light
grey (70 % Lab) backgrounds during editing to minimize the contrast
between image and surround, in order to avoid tricking your eye.
Basically, dark surrounds make images look brighter than they are, but
less contrasted at the same time, which may result in under-exposing
them and crushing their blacks (expect a bad surprise when you print),
but also affects color saturation perception (they look more saturated).
Ignoring that, most photo editors choose to have a dark background, for
marketing reasons : images look prettier (more saturated) out of the box
(so the software should be better, right ?), and interfaces look sexier
(me likes sexy).
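
For theming, note that "50 % Lab" is about 18.4 % linear luminance, which
sRGB-encodes to roughly #777777 (and 70 % Lab to roughly #ABABAB). A
sketch of the conversion, using the standard CIE constants :

    #include <math.h>

    /* CIE L* -> relative linear luminance Y (white = 1.0). */
    float lab_L_to_Y(float L)
    {
      return (L > 8.f) ? powf((L + 16.f) / 116.f, 3.f)
                       : L / 903.3f;
    }

    /* lab_L_to_Y(50.f) ~= 0.184 -> sRGB-encoded ~0.466 -> about #777777
     * lab_L_to_Y(70.f) ~= 0.408 -> sRGB-encoded ~0.671 -> about #ABABAB */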

More details :

 1. https://xritephoto.com/documents/literature/en/StandardViewingNTK_EN.pdf
 2. Chapters 6-7-8 of

http://last.hit.bme.hu/download/firtha/video/Colorimetry/Fairchild_M._Color_appearance_models__2005.pdf
and more specifically, sections 6.1 (p.111) and 6.9 (p.126)

The brightness of your screen should be such that a fully white screen
matches the brightness of a white paper sheet sitting next to your
computer and lit by surround light. Be careful at night, because
domestic lights have a color temperature between 5000 K (high-end
daylight balanced LED lights) and 3000 K (tungsten light bulbs), so they
won't match the D65 illuminant (temperature and light spectrum) on which
your screen should be set and can be very misleading.

Also, don't lose too much time figuring out what your pictures will look
like on other people's displays, because it's a lost cause.

Good luck,

Aurélien.

On 19-02-24 at 07:16, KOVÁCS István wrote:
> Hi,
>
> I've posted this on the users' list a week ago, but have not received
> any reply.
>
> Could you please provide a recommendation for general-purpose
> processing? I mainly post online (on Smugmug, which uses a dark
> background, but the primary surfaces are our family blog and Facebook,
> which are both on a light background). I mostly edit at night, in a
> quite dark room, with a 'warm-white' LED desk lamp illuminating the
> white wall behind the monitor.
>
> Thanks in advance,
> Kofa
>
> -- Forwarded message -
> From: KOVÁCS István
> Date: Sun, 17 Feb 2019 22:38
> Subject: Grey theme
> To: Darktable Users List
>
>
> Hi,
>
> A question has come up on Facebook
> (https://www.facebook.com/groups/darktable/permalink/1200810193417890/)
> about theming. I answered with a link from the announcement of 2.6.0
> (https://www.darktable.org/2018/12/darktable-26/), which provides the
> CSS for a white and a grey theme, adding the following:
> "[regarding the white theme] Note that with such setup, images will
> look darker, hence the aspect of the GUI may push the user to
> over-expose the images. A white background is interesting for people
> working on images meant to be displayed on white background, though.
> To avoid being influenced towards over- or under-exposing pictures, a
> grey theme like the following is much more advisable" - and then the
> article provides a grey theme.
>
> So, returning to the question: is this a recommendation for
> print-oriented people (whom a white theme may push to overexpose
> images)? The text mentions underexposure, too, which I'd link to a
> very dark UI.
>
> If it's for everyone, and it is "much more advisable" (than the
> default black or a custom white theme), why is it not the default with
> 2.6?
>
> Thanks in advance,
> Kofa

[darktable-dev] [admin] Issue deleted by accident on Redmine

2019-02-25 Thread Aurélien Pierre
Hi,

I have accidentally deleted a bug report on Redmine, by Martin Straeten
(IIRC). Is there any way to restore it ?

My apologies, I thought the "delete" option applied to the (spam)
comment, not to the whole issue. It's not clear when there is only one
comment on the issue.

Aurélien.



[darktable-dev] Color management in HDR setups

2019-02-04 Thread Aurélien Pierre
Hi everyone,

I was advised to watch this presentation regarding color transform for
HDR displays from the Society of Motion Picture Engineers :
https://www.youtube.com/watch?v=Z0DS7cnAYPk

It applies to HDR image processing as well and sums up pretty much all
my work around the filmic module and its chroma preservation mode (20:00).
Roughly 75 % of darktable modules are impacted by the color concepts
described in the video, so I think it could be a good watch for everyone
involved in dt's dev.

Incidentally, it explains why naive tone curves are inherently bad (7:30).

Cheers,

Aurélien.



Re: [darktable-dev] GSoC Organizations Application open from tomorrow for 3 weeks

2019-01-15 Thread Aurélien Pierre
On a side note, I would suggest reaching out to developers from other
projects. While developing filmic, I worked with Troy Sobotka (Blender),
now Andrea Ferrero (Photoflow) is working with us on implementing the
same feature for his soft, so we share results and algos. Ingo Weyrich
(Rawtherapee) has helped me a few times on low-level SIMD stuff and
offered his help a few times more. There are bunches of nice guys around
there who will be happy to help when you need technical guidance. You
need darktable's developers to guide you on how to wire the I/O inside
dt's guts, but what's going on in-between is pretty much the same for
each soft. Since everything is GNU/GPL nobody cares about competition so
we can share experience with no after-thoughts.

On 19-01-15 at 05:05, johannes hanika wrote:
> heya,
>
> in general the gsoc concept is a bit heavy on our personal resources.
> i cannot promise to do mentoring.
>
> i think your project ideas are great though. some of them could be
> very large scale projects (full featured timelapse), some more sunday
> afternoon (black frame). the list might require some sorting.
>
> i would really like to see lens center moving and hdr alignment (multi
> scale patch match as in google phones), both of which require some low
> level math and careful implementation.
>
> sooner or later the colour checker lut repository will have to be
> installed, too. that might be a cross-project effort, i know
> rawtherapee also has great interest in collecting chart images for
> profiling. i hope we can talk about this at LGM .
>
> fwiw i'm no fan of automatic rating/selection/face detection. feels
> like an algorithmic social media bubble. also, henrik did have an
> implementation of similar image search and he removed it after it has
> been there for quite a while and was apparently left unused.
>
> i like aurelien's idea about exif/iptc/dam. and yes, i very much agree
> that c++ is not my favourite langauge. that said it's perfectly
> possible to interface into it, that's just a matter of doing it. the
> metadata handling would need a good concept to get it right and not
> make a mess of it. we'd probably want configurable stuff to be shown
> as overlay and in the metadata tab, as well as a generic way of
> handling pretty much arbitrary tags in the sqlite database. i'm not
> sure how a generic user interface could be much better than the
> straight sqlite3 command line binary. i think houz spent some thought
> on it in the past.
>
> cheers,
>   jo
>
> On Tue, Jan 15, 2019 at 12:08 AM Oskar Maier  wrote:
>> Hello,
>>
>> out of personal interest in working on Darktable this summer I send this 
>> quick reminder: Organizations can apply for the Google Summer of Code (GSoC) 
>> 2019 from tomorrow until 6th of February (so more than 3 weeks). You can 
>> read more about it here: https://summerofcode.withgoogle.com/
>> If someone is willing to do the mentoring but is not interested in writing 
>> the application I offer my help with that.
>>
>> Some of my project ideas in case you need inspiration:
>> - a Timelapse module with keyframing
>> - dark frame subtraction
>> - sort and select images based on geotag
>> - correct exif time with offset
>> - automatic grouping of similar pictures + suggesting which one is best
>> - automatic rating
>> - implement lens mount adapters from lensfun project
>> - Lens correction - move image center feature (recently discussed on the 
>> mailing list)
>> - align-image-stack for HDR generation (I saw there already is a branch)
>> - online database for real and JPEG color LUTs for camera models (according 
>> to https://pixls.us/articles/profiling-a-camera-with-darktable-chart/)
>> - importing a picture as new layer in another one (similar to the watermark 
>> hack) for basic composing using the existing masking framework
>>
>>
>> Thanks for considering this and have a nice week,
>> Oskar

Re: [darktable-dev] GSoC Organizations Application open from tomorrow for 3 weeks

2019-01-14 Thread Aurélien Pierre
Hi,

there is a less sexy project that would unlock a lot of others,
regarding the rewriting of the metadata handling. Currently, EXIFs are
handled by a C++ class (exiv2) manipulated inside C wrapper functions
(instead of private methods), which actually mixes the drawbacks of both
functional and OO paradigms with none of the benefits.

(Maybe) Because of this, adding options to exif export is impossible,
and the ability to remove selected metadata from exported files is among
the most demanded "legitimate & realistic" features (and has been for 8
years).

Basically :

 1. professional photographers working with news agencies need to be
able to write more IPTC/XMP metadata in the exported pictures,
because that's how these agencies handle credits and copyrights,
 2. photographers with GPS-enabled cameras need to remove GPS tags from
the exported pictures before posting on the internet, for privacy
 3. everyone may want to remove the maker & lens from their files,
before posting pictures on the internet, to avoid tempting burglars,
 4. exif time correction/shift indeed would be nice, provided it affects
only darktable's database, and not the actual raw file (ask Roman
Lebedev about the reason, but Digikam and Gimp have had raw files
corruption issues with direct EXIF writing through exiv2)
 5. selecting the option "write XMP sidecars on disk" in dt duplicates
the database history in a sidecar file next to the raw (as it should),
however it also adds the XMP history in the exported image file,
which can be bandwidth consuming. So it would be nice to have an
option to drop it at export time.
 6. optionally, having JpegOptim and OptiPNG embedded to perform
non-destructive compression at exportation time would be great.

My feeling is the Digital Asset Management would need a little love in
dt, and metadata is a good way to begin, I think.

And now, about a tutor… I will let Johannes and Tobias answer that.

Cheers and thanks !

Aurélien.

On 19-01-14 at 06:08, Oskar Maier wrote:
> Hello,
>
> out of personal interest in working on Darktable this summer I send
> this quick reminder: Organizations can apply for the Google Summer of
> Code (GSoC) 2019 from tomorrow until 6th of February (so more than 3
> weeks). You can read more about it here:
> https://summerofcode.withgoogle.com/
> If someone is willing to do the mentoring but is not interested in
> writing the application I offer my help with that.
>
> Some of my project ideas in case you need inspiration:
> - a Timelapse module with keyframing
> - dark frame subtraction
> - sort and select images based on geotag
> - correct exif time with offset
> - automatic grouping of similar pictures + suggesting which one is best
> - automatic rating
> - implement lens mount adapters from lensfun project
> - Lens correction - move image center feature (recently discussed on
> the mailing list)
> - align-image-stack for HDR generation (I saw there already is a branch)
> - online database for real and JPEG color LUTs for camera models
> (according to
> https://pixls.us/articles/profiling-a-camera-with-darktable-chart/)
> - importing a picture as new layer in another one (similar to the
> watermark hack) for basic composing using the existing masking framework
>
>
> Thanks for considering this and have a nice week,
> Oskar

[darktable-dev] How to report a bug

2018-12-10 Thread Aurélien Pierre
Hi everyone,

I have noticed several people have reported bugs in darktable 2.6 rc0
and rc1 by replying to other messages on the mailing list, sometimes
unrelated. This makes it difficult to sort the bugs, assign them to the
right developer and track their fixing state.

Please use https://redmine.darktable.org/projects/darktable/issues to
report problems, and check first if a similar bug is not already open.
Then, to give us the best chances to understand what's going on:

 1. explain what you were doing when the crash/issue happened,
 2. explain what you expected,
 3. explain what you got instead (behaviour, crash),
 4. provide a description of your system : distribution, OS & kernel
version, OpenCL in use or not (if so: which graphics driver ?).

Backtraces are welcome, but most of the time are not enough. So, don't
just dump your backtraces with no explanation please.

Also, if you could try to reproduce your bugs with a clean fresh
installation (backup and reset darktablerc, data.db and library.db in
your config directory), that would help to separate bugs related to
version migration from those inherent to darktable 2.6.

Have a nice pre-release,

Aurélien.



Re: [darktable-dev] 2.6.0rc1 filmic - exposure interaction

2018-12-08 Thread Aurélien Pierre
Hi,

exposure comes before filmic in the pipe. That means every time you
change the exposure, you invalidate filmic's settings (black, grey and
white levels). You should always follow this order:

 1. disable base curves,
 2. adjust the white balance,
 3. fix the exposure (avoid clipping in blacks and in whites, use the
color-picker),
 4. set filmic,
 5. perform additional color corrections (the enhanced color balance is
designed to be filmic's complement),
 6. everything else comes after (but, at high ISO, maybe denoise before
anything else if you want to use filmic's black detection).

Inside filmic, you should set the sliders in the order they are
presented in the UI and use the color-pickers to avoid guesses (if you
set the exposure right on the verge of clipping, the white exposure in
filmic shall always be 2.45 EV). That way, you avoid circular editings
and accomplish 80 % of the job in 4 modules (2 if your camera did her
job properly). Once you have it figured out, even the tricky
backlighting cases take no more than 45 s to edit, provided your
subjects/details are roughly in the center of the histogram (recovering
dark subjects *and* bright backgrounds as well might require additional
masking jobs).

Of course, if you fly around modules in whatever order, I guarantee
nothing. I'm a lazy fool who once took wedding jobs, rest assured I did
not design tools to just work, I designed them to be efficient and based
on years of editing pain/experience. That's why, a few months ago, I
wanted to have the modules re-ordered in the UI, as much as possible, in
the same order they are applied in the pipe.

The history behaviour you observe will happen with any combination of
modules : as soon as you switch from one module to another, a new
history entry is created. The fact that you needed this sort of back and
forth editing means your workflow was wrong in the first place and
should generally alert you. You can read my article to understand how to
use filmic :
https://eng.aurelienpierre.com/2018/11/30/filmic-darktable-and-the-quest-of-the-hdr-tone-mapping/

I have got feedback from several users now, who tried to squeeze filmic
into their old flawed workflow : it will never work. Don't enable filmic
if you are using base curves, global tone-mapping, or using the exposure
module to get the mid-tones right (meaning letting the highlights clip).
Bad software design and forgiving low-dynamic range pictures have given
users an acquired taste for self-harming practices. Time to re-evaluate
and work in a clever way to work less.

Regards,

Aurélien.

On 18-12-08 at 14:30, William Ferguson wrote:
> Open an image in darkroom.  Activate filmic.  Click the eye dropper
> for middle gray luminance.  Move the mouse to the histogram and adjust
> the exposure.  Each adjustment results in two entries in the history
> stack, one from exposure and one from filmic. The exposure does
> adjust, but the history stack grows quickly.
>
> I also tried starting filmic, calculating the middle gray luminance,
> then adjusting exposure in the exposure module.  That didn't exhibit
> the history stack issue.  After adjusting exposure with the exposure
> module, adjusting exposure in the histogram didn't result in the
> filmic/exposure history stack pairs.
>
> ubuntu 18.04
> dt 2.6.0rc1
>
> Bill

Re: [darktable-dev] what happens with linux color management ?

2018-11-24 Thread Aurélien Pierre
On 24/11/2018 at 04:51, David Houlder wrote:
> On 24/11/18 3:22 am, Aurélien Pierre wrote:
>>
>> Hi everyone,
>>
>> my darktable is installed on Ubuntu Budgie (fork of Gnome 3), but it
>> was the same when I used Gnome Shell.  I have a custom screen ICC
>> profile installed in gnome-color-manager, and loaded in darktable
>> through colord.
>>
>> When I change the ICC profile on Gnome with darktable open, the
>> colors of the darkroom preview change too (no matter if darktable
>> uses the system display profile or one built-in profile, like Adobe
>> RGB). That is the contrast and white point of the picture, plus the
>> color of the UI.
>>
>> So that means that the OS is stacking another color transformation on
>> top of darktable's one.
>>
> Is it possible that you're just seeing the effects of the gamma ramps
> changing when they're loaded from the VCGT of the profile that you
> switch to?
>
When changing between D65 and D55 profiles, the colors take an amber
shift, so it's not just a problem of VCGT.
>
> If you can change the profile without loading the VCGT (or restore the
> gamma ramps manually after the change) and the supposedly
> non-colour-managed parts of the UI change, then I'd say that there's
> some kind of double-correction happening, otherwise it's probably
> working as intended.
>
>
I don't know if messing directly with the VCGT is even possible.
>>
>> From this article
>> <http://libregraphicsworld.org/blog/entry/richard-hughes-on-color-management-in-linux-and-gnome>
>> (2011), I get that gnome expects apps to take care of themselves :
>>
>> One of the things I tried to deliberately ignore designing colord
>> was actually flipping pixel values. Colord is a very high level
>> daemon that can say to apps like Krita or GIMP “Use this profile
>> for this device” rather than them supplying a 140Mb memory buffer
>> and an operation list. This means we can do the conversion on the
>> CPU using lcms2 for some programs and using the GPU using a
>> shader in things that use 3D. By not trying to wrap lcms we can
>> let the application do the pixel conversion in the right layer in
>> the right way.
>>
>> Of course, the downside of this is that you have to patch
>> applications to actually do the right thing. We can make this
>> easier by doing some framework code for Clutter and Cairo, but
>> sooner or later the application has to know about color
>> management in one form or another. This is going to be my main
>> focus for GNOME 3.4; now we have all the framework installed and
>> working and we can say to application authors “It's already
>> installed, so don't worry about the additional dependency, just
>> commit my patch and it'll just work”.
>>
>> But gnome-color-manager has no documentation, and even the Gnome
>> color dev documentation is pretty useless (a lot of "how to", no
>> "what's going on", but they found time to design a cheesy
>> kindergarten theme).
>>
>> Looking at the GDK pixbuf doc, they don't have tags to explicitly say
>> "hey that's already color-corrected so bug off". The Wikipedia entry
>> of Linux color management
>> <https://en.wikipedia.org/wiki/Linux_color_management> is as helpful
>> and factual as a marketing director motivational speech (let's
>> increase the leverage of color management by ensuring the quality of
>> good devices, with a pro-active method to supervise critical elements
>> in a proficient way — sure !).
>>
>> As of now, I have seen no block diagram to describe the full color
>> pipe in Linux, nor any way to ensure the quality of the transform.
>>
>> From the info I have gathered, the pipe I have put together is as follow:
>>
>> || darktable pipe -> LCMS/(Internal cmatrix color correction + TRC)
>> -> Cairo surface -> GDK pixbuff -> || -> Mutter compositor -> (OS
>> color correction ? TRC ?) -> Xorg -> Nvidia/Intel GPU driver ->
>> (Color correction ? VCGT ?) -> || -> HDMI DAC (gamma 2.2) -> Screen
>>
>> So my question is : does anyone have any idea of what's going on with
>> color on Linux, or are we stacking ICC on top of shit just to pretend
>> it's color-managed magically, somehow ?
>>
>> Thanks,
>>
>> Aurélien.
>>
>>
>
>
> -- 
> David Houlder
> +61 2 6248 7463
> da...@davidhoulder.com
> https://davidhoulder.com
>

[darktable-dev] what happens with linux color management ?

2018-11-23 Thread Aurélien Pierre
Hi everyone,

my darktable is installed on Ubuntu Budgie (fork of Gnome 3), but it was
the same when I used Gnome Shell.  I have a custom screen ICC profile
installed in gnome-color-manager, and loaded in darktable through colord.

When I change the ICC profile on Gnome with darktable open, the colors
of the darkroom preview change too (no matter if darktable uses the
system display profile or one built-in profile, like Adobe RGB). That is
the contrast and white point of the picture, plus the color of the UI.

So that means that the OS is stacking another color transformation on
top of darktable's one.

From this article
<http://libregraphicsworld.org/blog/entry/richard-hughes-on-color-management-in-linux-and-gnome>
(2011), I get that gnome expects apps to take care of themselves :

One of the things I tried to deliberately ignore designing colord
was actually flipping pixel values. Colord is a very high level
daemon that can say to apps like Krita or GIMP “Use this profile for
this device” rather than them supplying a 140Mb memory buffer and an
operation list. This means we can do the conversion on the CPU using
lcms2 for some programs and using the GPU using a shader in things
that use 3D. By not trying to wrap lcms we can let the application
do the pixel conversion in the right layer in the right way.

Of course, the downside of this is that you have to patch
applications to actually do the right thing. We can make this easier
by doing some framework code for Clutter and Cairo, but sooner or
later the application has to know about color management in one form
or another. This is going to be my main focus for GNOME 3.4; now we
have all the framework installed and working and we can say to
application authors “It's already installed, so don't worry about
the additional dependency, just commit my patch and it'll just work”.

But gnome-color-manager has no documentation, and even the Gnome color
dev documentation is pretty useless (a lot of "how to", no "what's going
on", but they found time to design a cheesy kindergarten theme).

Looking at the GDK pixbuf doc, they don't have tags to explicitly say "hey
that's already color-corrected so bug off". The Wikipedia entry of Linux
color management <https://en.wikipedia.org/wiki/Linux_color_management>
is as helpful and factual as a marketing director motivational speech
(let's increase the leverage of color management by ensuring the quality
of good devices, with a pro-active method to supervise critical elements
in a proficient way — sure !).

As of now, I have seen no block diagram to describe the full color pipe
in Linux, nor any way to ensure the quality of the transform.

From the info I have gathered, the pipe I have put together is as follows:

|| darktable pipe -> LCMS/(Internal cmatrix color correction + TRC) ->
Cairo surface -> GDK pixbuff -> || -> Mutter compositor -> (OS color
correction ? TRC ?) -> Xorg -> Nvidia/Intel GPU driver -> (Color
correction ? VCGT ?) -> || -> HDMI DAC (gamma 2.2) -> Screen

So my question is : does anyone have any idea of what's going on with
color on Linux, or are we stacking ICC on top of shit just to pretend
it's color-managed magically, somehow ?

Thanks,

Aurélien.



[darktable-dev] Merge Github Issues and Redmine tracker ?

2018-11-04 Thread Aurélien Pierre
Hi,

having the Redmine bug tracker and the Github PRs apart makes it
unnecessarily complicated to follow/reference bugs and PR/fixes, or to
ping people.

Wouldn't it be better to switch to the Github issue tracker, wiki and such ?

Cheers,

Aurélien.



Re: [darktable-dev] scaling produces corrupted image

2018-11-04 Thread Aurélien Pierre
My bad. I'm on that. Sorry.

On 04/11/2018 at 14:12, Ulrich Pegelow wrote:
> Same here. With and without OpenCL.
>
> On 04.11.18 at 19:58, Alexander Rabtchevich wrote:
>> Current git produces a corrupted image if the export settings require
>> downsampling. Mint 19 x64 MATE. There is enough memory.
>>
>> With respect,
>> Alexander Rabtchevich

Re: [darktable-dev] What is the gamma.c IOP ?

2018-11-04 Thread Aurélien Pierre
hello,

I have partially solved the issue with the wrong readings of the color
picker : there is actually a function
dt_dev_pixelpipe_process_no_gamma() in src/dev/pixelpipe_hb.c that
temporarily disables the gamma IOP to get the reading. I have modified
it to disable the colorout IOP instead. That fixes the problem when
LittleCMS is not used.

I think the color management needs a clean-up. Looking at the code in
pixelpipe_hb.c, a lot of bad things happen :

  * the histogram and color picker are computed after the display
profile is applied. So, in a multi-screen setup, you don't get the
same histogram depending on the screen you are using. So much for
consistency and reproducibility…
  * the Lab values of the color picker are computed by reverting the
display profile (which means creating a new inverse virtual profile,
and applying it each time the pixelpipe is run). So much for
performance…

I think these parts would be better moved in the colorout IOP, to take
real Lab readings before any RGB conversion and to use the (proper)
output color profile instead of the display one to perform RGB conversions.

Also, continuing with the display color profile… The color management
pipe in dt seems to be as follows :

camera RGB/CFA -> [ input profile ] -> Lab -> [ output profile ] ->
display RGB -> [ OS color management / display ICC profile ] ->
video output.

When darktable is set to use the system display profile, aren't we
correcting the picture twice for the display (once in dt, once before the
GPU output) ? I'm kind of lost here.

Moreover, the internal display correction applies the gamma tonecurve of
the display profile, so the actual 50 % Lab grey is always displayed at
75 % on the screen, which shouldn't happen. It doesn't make sense to
have linear REC 709 and 2020 profiles built-in, and never ever be able
to see actual linear data.

I'm currently adding an option in colorout to bypass the gamma and
enable a linear workflow. It works with the internal dt color
management, but LittleCMS doesn't seem to allow removing the TRC and
keeping only the RGB primaries. Any help welcome.

Thanks,

Aurélien.

On 03/11/2018 at 09:15, johannes hanika wrote:
> Hey,
>
> One more reason for gamma.c: legacy. In 2009 when the pipeline was the
> one of ufraw, this was actually used. So in case someone opens 10yo
> shots now they depend on the iop.
>
> Re: colorpicker. that's a known issue, we don't run multiple pipelines
> so the colorpicker reads whatever there is at the end.
>
> No clue about your logspace issue. Maybe you picked in lab after
> display profile transform? that'll have gamma and cbrt().
>
> Cheers
> Jo
>
> On Sat, 3 Nov 2018, 10:05 Andreas Schneider wrote:
>
> On Saturday, 3 November 2018 09:59:23 CET Andreas Schneider wrote:
> > On Friday, 2 November 2018 21:10:33 CET Aurélien Pierre wrote:
> > > Hi Jo !
> > >
> > > Thanks for your answer.
> > >
> > > So this doesn't explain why the global colorpicker readings change
> > > depending on the display ICC profile, nor why using a log encoding
> > > between -2.5 and 2.5 EV, the grey gets remapped to L = 77
> instead of 50.
> > >
> > > I believe there is a gamma correction where it doesn't belong.
> Where is
> > > the code for the colorpicker ?
> >
> > Aurélien,
> >
> > you broke image export with the latest change:
> >
> > See:
> >
> > https://xor.cryptomilk.org/darktable/broken.jpg
>
> Oh, I get the same if I open it in the darkroom!
>
> -- 
> Andreas Schneider                 a...@cryptomilk.org
> GPG-ID:     8DFF53E18F2ABC8D8F3C92237EE0FC4DCC014E3D
>
>

[darktable-dev] What is the gamma.c IOP ?

2018-11-02 Thread Aurélien Pierre
Hi Jo !

Thanks for your answer.

So this doesn't explain why the global colorpicker readings change
depending on the display ICC profile, nor why using a log encoding
between -2.5 and 2.5 EV, the grey gets remapped to L = 77 instead of 50.

I believe there is a gamma correction where it doesn't belong. Where is
the code for the colorpicker ?

Regards,

Aurélien.

On 02/11/2018 at 06:17, johannes hanika wrote:
> heya,
>
> it used to do the gamma in the olden days. now it just does what you
> observed: convert to uint8 for display and any marshalling of data at
> the end of the pipeline in gui mode can take place here. it's
> certainly not applying gamma. i suppose we could delete these old
> gamma lut table lines of code and just round directly in process().
>
> cheers,
>  jo
> On Thu, Nov 1, 2018 at 3:52 PM Aurélien Pierre
>  wrote:
>> Hi,
>>
>> there is in dt an hidden IOP named gamma.c that is the last one in the 
>> pixelpipe. Looking at the code, it is useful to display the masks and 
>> convert the pixels to uint8 values.
>>
>> However, it does a gamma conversion too, with a linear and gamma parameters. 
>> First of all, this conversion seems damaging in a calibrated and linear 
>> workflow because it (seems to) prevents the color picker to get accurate 
>> pixel values readings.
>>
>> Then, I don't see where the gamma and linear parameters are grabbed. Could 
>> anyone tell me what it does and how ? Having an arbitrary gamma conversion 
>> there could be redundant with the display gamma and is a serious matter for 
>> color management.
>>
>> Thanks,
>>
>> Aurélien.
>>
>>

___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org

[darktable-dev] What is the gamma.c IOP ?

2018-10-31 Thread Aurélien Pierre
Hi,

there is in dt a hidden IOP named gamma.c that is the last one in the
pixelpipe. Looking at the code, it is useful to display the masks and
convert the pixels to uint8 values.
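That final conversion is conceptually just a clamp and a round. A
sketch of the idea (illustrative only, not the actual gamma.c source;
assumes float input normalized to [0, 1]):

    static inline uint8_t float_to_uint8(float v)
    {
      v = v < 0.0f ? 0.0f : (v > 1.0f ? 1.0f : v); /* clamp to [0, 1] */
      return (uint8_t)(v * 255.0f + 0.5f);         /* round to nearest */
    }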

However, it also does a gamma conversion, with linear and gamma
parameters. First of all, this conversion seems damaging in a calibrated
and linear workflow because it seems to prevent the color picker from
getting accurate pixel value readings.

Then, I don't see where the gamma and linear parameters are grabbed.
Could anyone tell me what it does and how ? Having an arbitrary gamma
conversion there could be redundant with the display gamma and is a
serious matter for color management.

Thanks,

Aurélien.


___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org

Re: [darktable-dev] OpenCL issues

2018-10-29 Thread Aurélien Pierre
Hallo Heiko !

Very pleased to hear that (not the bug part, though).

Did you try with smaller pictures? Usually, darktable falls back to CPU
when there are not enough resources available on the GPU.

You can try :

 1. the command *nvidia-smi* to see how the GPU RAM is used (if there is
not enough vRAM available, you will see an OpenCL error code -4).
 2. setting opencl_async_pixelpipe=true in darktablerc
 3. setting opencl_mandatory_timeout > 200 in darktablerc (both shown below)
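For instance, items 2 and 3 boil down to two lines in darktablerc (path
assumed to be ~/.config/darktable/darktablerc, and 400 is just an
example value for the timeout):

    # ~/.config/darktable/darktablerc
    opencl_async_pixelpipe=true
    opencl_mandatory_timeout=400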

Also, I have discovered this week that Gnome 3.28.2 with Xorg has
serious memory leak issues that can affect OpenCL performance. After
several hours of uptime, Xorg consumes up to 1GB RAM/vRAM on Ubuntu
18.04, leaving OpenCL without enough space.

I hope this helps,

Aurélien.

On 29/10/2018 at 17:23, Heiko Bauke wrote:
> Hi,
>
> I own a laptop with a low-end OpenCL capable graphics card.  Usually I
> explicitly turn OpenCL support off for darktable.
>
> Today I enabled OpenCL support in darktable for some testing purposes.
> Starting darktable (current git master with some extensions not
> related to OpenCL) with the options '-d opencl -d perf' yields
>
>> 0.062332 [opencl_init] opencl related configuration options:
>> 0.062345 [opencl_init] 0.062359 [opencl_init] opencl: 1
>> 0.062361 [opencl_init] opencl_library: ''
>> 0.062364 [opencl_init] opencl_memory_requirement: 200
>> 0.062367 [opencl_init] opencl_memory_headroom: 0
>> 0.062372 [opencl_init] opencl_device_priority: ''
>> 0.062377 [opencl_init] opencl_mandatory_timeout: 0
>> 0.062382 [opencl_init] opencl_size_roundup: 16
>> 0.062386 [opencl_init] opencl_async_pixelpipe: 0
>> 0.062389 [opencl_init] opencl_synch_cache: 0
>> 0.062392 [opencl_init] opencl_number_event_handles: 0
>> 0.062396 [opencl_init] opencl_micro_nap: 0
>> 0.062399 [opencl_init] opencl_use_pinned_memory: 0
>> 0.062402 [opencl_init] opencl_use_cpu_devices: 0
>> 0.062404 [opencl_init] opencl_avoid_atomics: 0
>> 0.062407 [opencl_init] 0.062615 [opencl_init] found opencl runtime
>> library 'libOpenCL'
>> 0.062652 [opencl_init] opencl library 'libOpenCL' found on your
>> system and loaded
>> 0.078292 [opencl_init] found 1 platform
>> 0.078310 [opencl_init] found 1 device
>> 0.078526 [opencl_init] device 0 `GeForce GT 730M' has sm_20 support.
>> 0.078613 [opencl_init] device 0 `GeForce GT 730M' supports image
>> sizes of 16384 x 16384
>> 0.078619 [opencl_init] device 0 `GeForce GT 730M' allows GPU memory
>> allocations of up to 501MB
>> [opencl_init] device 0: GeForce GT 730M 
>> GLOBAL_MEM_SIZE:  2004MB
>>  MAX_WORK_GROUP_SIZE:  1024
>>  MAX_WORK_ITEM_DIMENSIONS: 3
>>  MAX_WORK_ITEM_SIZES:  [ 1024 1024 64 ]
>>  DRIVER_VERSION:   390.77
>>  DEVICE_VERSION:   OpenCL 1.2 CUDA
>> 0.157247 [opencl_init] options for OpenCL compiler:
>> -cl-fast-relaxed-math  -DNVIDIA_SM_20=1 -DNVIDIA=1
>> -I"/usr/local/darktable_guided/share/darktable/kernels"
>
> [...]
>
>> 0.177151 [opencl_init] compiling program `heal.cl' ..
>> 0.177158 [opencl_fopen_stat] could not open file
>> `/usr/local/darktable_guided/share/darktable/kernels/heal.cl'!
>> 0.177163 [opencl_init] kernel loading time: 0.0198 0.177170
>> [opencl_init] OpenCL successfully initialized.
>> 0.177173 [opencl_init] here are the internal numbers and names of
>> OpenCL devices available to darktable:
>> 0.177176 [opencl_init]    0    'GeForce GT 730M'
>> 0.177180 [opencl_init] FINALLY: opencl is AVAILABLE on this system.
>> 0.177183 [opencl_init] initial status of opencl enabled flag is ON.
>> 0.177205 [opencl_create_kernel] successfully loaded kernel
>> `blendop_mask_Lab' (0) for device 0
>> 0.177213 [opencl_create_kernel] successfully loaded kernel
>> `blendop_mask_RAW' (1) for device 0
>> 0.177222 [opencl_create_kernel] successfully loaded kernel
>> `blendop_mask_rgb' (2) for device 0
>
> [...]
>
> Neglecting the fact that the kernel heal.cl cannot be loaded
> everything looks fine to me.  Nevertheless, all modules utilize CPUs
> only but not my GPU, including 'denoise (profiled)'.
>
>> 38.392474 [dev_pixelpipe] took 0.405 secs (1.365 CPU) processed
>> `denoise (profiled)' on CPU, blended on CPU [full]
>
> How can I enable GPU processing?
>
> In particular, I need to enable blending on GPU.  Currently I am
> working on automatic mask refinement based on a guided filter, see
> http://kaiminghe.com/publications/eccv10guidedfilter.pdf For this
> purpose I have extended the function dt_develop_blend_process and now
> I also have to ajust dt_develop_blend_process_cl.  But currently, the
> latter function is never called.  Any hint?
>
>
> Heiko
>
>
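Since the guided filter comes up here: its self-guided form (guide ==
input), which is the relevant case for refining a mask by its own
edges, is compact. A rough sketch, not Heiko's actual code, assuming a
helper box_mean(src, dst, width, height, radius) that computes a
normalized box average, plus <stdlib.h> and <string.h>:

    void guided_filter_sketch(const float *I, float *q, int w, int h, int r, float eps)
    {
      const size_t n = (size_t)w * h;
      float *m_I  = malloc(n * sizeof(float)); /* box mean of I     */
      float *m_II = malloc(n * sizeof(float)); /* box mean of I * I */
      float *a = malloc(n * sizeof(float));
      float *b = malloc(n * sizeof(float));
      float *tmp = malloc(n * sizeof(float));

      for(size_t k = 0; k < n; k++) tmp[k] = I[k] * I[k];
      box_mean(I, m_I, w, h, r);
      box_mean(tmp, m_II, w, h, r);
      for(size_t k = 0; k < n; k++)
      {
        const float var = m_II[k] - m_I[k] * m_I[k];
        a[k] = var / (var + eps);        /* ~1 near edges, ~0 in flat areas */
        b[k] = (1.0f - a[k]) * m_I[k];
      }
      /* smooth the coefficients, then blend */
      box_mean(a, tmp, w, h, r); memcpy(a, tmp, n * sizeof(float));
      box_mean(b, tmp, w, h, r); memcpy(b, tmp, n * sizeof(float));
      for(size_t k = 0; k < n; k++) q[k] = a[k] * I[k] + b[k];

      free(m_I); free(m_II); free(a); free(b); free(tmp);
    }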

___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org

Re: [darktable-dev] Darkroom UI refactoring

2018-10-25 Thread Aurélien Pierre

On 26/10/2018 at 00:49, Jason Polak wrote:
> Dear Aurelien,
>
> It's clear that you put a lot of thought into this and I am eager to try
> it. It is very helpful to see the GUI screenshots, and based on those I
> do have a few comments/questions:
>
> 1) Don't you think that the equalizer/local contrast module are more
> similar to the sharpening module rather than the tone curve/fill light
> module? Especially with the equalizer, part of it performs a very
> similar effect to sharpening. I understand though that the algorithms
> behind them might be different.

There is still a certain number of arbitrary choices in this order, I
won't deny it. Local contrast, at a very local scale, is similar to a
sharpening, but it's a perceptual sharpening (you fool the eye), not an
optical sharpening (you don't restore blurry edges). Even with a tone
curve, if carefully adjusted, you can increase the feeling of sharpness,
as a side-effect. But local contrast stays contrast, and as equalizer
and local contrast can affect the global contrast dramatically (the same
way as shadows-highlights), I put them with tones. Sharpen is a sort of
high-pass filter so its effect will always be more selective.

Besides, local contrast and equalizer are best used before high-pass and
sharpen (retouch from more global to more specific).

> 2) In your description of the correction tab, you say that after leaving
> this tab, the image should look clean and dull. That makes sense -
> though I am wondering how this works considering the automatic
> application of the base curve? If the base curve is applied upon image
> opening, the tones of the image look already pretty manipulated compared
> to the dull-looking image without the base curve. In other words, it
> seems as though having the base curve applied can already push the tones
> in the image pretty far, leaving little room for colour balance
> adjustments later on in the tone-modules tab.

I believe the basecurve is a mistake and should be deprecated. First and
foremost, it's applied before the input color profile, so you have to
disable it if you work with an enhanced/custom matrix, otherwise your
profile is useless (never add contrast before the input color profile,
do a gamma or log correction but don't darken blacks while you lighten
mid-tones). The basecurve was intended, at first, to emulate in-camera
JPEG color-rendition with filmic curves
<http://filmicworlds.com/blog/filmic-tonemapping-with-piecewise-power-curves/>.
Turns out the quality of the user-provided curves varies from
brand to brand, it creates out-of-gamut colors and over-saturation (on
Nikon, the reds get boosted like crazy - people all look like
alcoholics), and the devs stopped adding new curves a few years ago.
The new module for that purpose is the colorchecker/LUT, which can
"easily" be used by anyone to create custom LUTs from color charts and
in-camera JPGs and occurs later in the pixelpipe, where it is safe.

The base curve can still be used to (carefully) tonemap HDRs from a
single exposure. But do yourself a favor, buy a colorchart
<http://www.colorreference.de/targets/index.html> (30 €), make your own
camera profile
<https://encrypted.pcode.nl/blog/2010/06/28/darktable-camera-color-profiling/>,
and never ever use the default base curves.

> I am just wondering how having a default base curve fits in with your
> editing paradigm?
>
> Sincerely,
> Jason

Thanks for your input,

Aurélien.

> On 2018-10-25 08:28 PM, Aurélien Pierre wrote:
>> Hi everyone !
>>
>> To follow up on that matter, I have done a pull request doing what I
>> discussed here : https://github.com/darktable-org/darktable/pull/1745
>>
>> You will find screenshots showing the changes, a sum-up of the benefits
>> and a poll to vote for/against the change and give your feedback. After
>> that, I suppose the core devs will decide what they want to do.
>>
>> I know it's still not the flexible UI some of you asked, the problem is
>> we don't have the workforce for it. darktable 2.6 is supposed to be
>> released in 2 months, so now is not the time for ground-breaking
>> changes. This is intended to make things more logical (or less bad)
>> using realistic means. I changed 30 lines of code, so I'm pretty sure it
>> won't break anything.
>>
>> Cheers,
>>
>> Aurélien.

___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org

Re: [darktable-dev] How to write on dt.org blog ?

2018-10-19 Thread Aurélien Pierre
Hi Pierre !

The modules will be documented, that's not even open for discussion. But
the doc is written module by module. These 2 modules show their full
potential in a specific workflow that involves 4 modules, used the right
way at the right time. That's why I want to write a full tutorial.

Regards,

Aurélien.

On 19/10/2018 at 14:23, openhab.doc wrote:
> Hi Aurélien,
>
> I assume you mean pull requests #1759 and #1757.
> I agree with Andreas that the improvement has to be documented in the 
> darktable user manual. You will find the documentation for color balance 
> module here:
> https://github.com/darktable-org/darktable/blob/master/doc/usermanual/
> darkroom/modules/color/colorbalance.xml
>
> You will also find a guideline for the user manual here:
> https://github.com/darktable-org/darktable/blob/master/doc/usermanual/
> README
>
> If you have any questions or need someone for proofreading, do not 
> hesitate to ask.
>
> Pierre
>
> On Thursday, 18 October 2018 23:46:50 CEST, Aurélien Pierre wrote:
>> Hi everyone,
>>
>> My 2 pull requests have been merged into master, I'm very happy about
>> it. These tools are very powerful but certainly not intuitive and need
>> a proper tutorial.
>>
>> Could I get access to darktable.org blog so I can write a full tutorial in
>> English ?
>>
>> Thanks,
>>
>> Aurélien.
>>
>>

___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org

[darktable-dev] How to write on dt.org blog ?

2018-10-18 Thread Aurélien Pierre
Hi everyone,

My 2 pull requests have been merged into master, I'm very happy about
it. These tools are very powerful but certainly not intuitive and need
a proper tutorial.

Could I get access to darktable.org blog so I can write a full tutorial in
English ?

Thanks,

Aurélien.


___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org

Re: [darktable-dev] Darkroom UI refactoring

2018-10-09 Thread Aurélien Pierre
On 10/10/2018 at 00:14, William Ferguson wrote:
> On Tue, Oct 9, 2018 at 7:18 PM Aurélien Pierre
> <rese...@aurelienpierre.com> wrote:
>
> What I call "signal-processing" here are all the module intended
> to clean the data and turn an (always) damaged picture into what
> it is supposed to look like in theory. That is :
>
>  1. reconstructing missing parts (clipped highlights)
>  2. recovering the dynamic range (tonemapping)
>  3. reconstructing the damaged parts (denoising)
>  4. reverting the optical artefacts (vignette, CA, distorsion),
>  5. reverting the color inaccuracies (white balance and ICC color
> correction).
>
> You think you can waltz around modules and do the retouch in the
> order you like. Well, you can, but that is asking for trouble.
>
> Take 2 examples :
>
> 1. Open a noisy image, turn on the laplacian local contrast, save
> a snapshot, then enable a heavy denoising, and compare the 2
> outputs : in some case, the local contrast output will look
> harsher with denoising. That means you should fix the noise before
> setting the local contrast.
>
> I tried this with an ISO 6400 (max normal ISO) and an ISO 12800
> (extended ISO) and didn't see this.  I use a profiled denoise triplet
> of color, lightness, and average plus hot pixels, demosaic, and
> lowpass.  The only difference I see is in the lowpass, since it occurs
> after local contrast in the pixel pipe.  All the other modules I use
> for denoising occur before local contrast in the pixel pipe so it
> doesn't affect the denoise.  How are you doing heavy denoise?  Perhaps
> the heavy denoise is producing artifacts that the local contrast is
> exaggerating.
This effect is not systematic, but it's annoying when it happens.
>
> 2. On a portrait photo done with a camera for which you have an
> enhanced matrix (basecurve = OFF), tweak the exposure until you
> get a nice contrast (Lmax = 100, Lmin = 0). Then, in the color
> balance, tweak the gamma factor to get the L_average on the face =
> 50. Save the snapshot. Now, disable the color balance, tweak the
> exposure again to get a dull image (fix Lmax = 96, Lmin = 18).
> Then, in the color balance, tweak the gain factor to get Lmax =
> 100, the lift factor to get Lmin = 0 and the gamma factor to get
> L_average on the face = 50. Which skin tones look the more natural
> and which has less out-of-gamut issues ? (spoiler alert : #2)
>
> I tried this also, and it did work.  In addition, I learned how to use
> the color balance module in a way that I hadn't thought of.  Thanks. 
> Mostly I shoot sports, so this isn't something that I would use. 
> However, I do shoot some portraits so it's a nice trick to have up my
> sleeve.
This works for every kind of photography, when you want color accuracy.
I took portrait as an example because we are more sensitive to skin tone
shifts, so the example is more dramatic.
>
> Nobody will think of crushing the contrast first in the exposure
> module, then bring it up later in the pixelpipe, in order to get
> better colors, until he has seen the math going on inside… In
> fact, the autoexposure tool even lures you into doing the opposite.
>
> Because darktable is modular by nature, modules are fully
> independant and don't share data, but that leads to a fair amount
> of inconsistency. You can tweak the contrast and lightness in 8
> different modules (exposure, contrast/saturation/lightness, tone
> curve, base curve, zone system, color balance, unbreak input
> profile, levels) and people may think they are equivalent, but
> they are not. I believe this inconsistency should be adressed from
> the UI.
>
> In order to get the color correction right, you need to input a
> well-behaved, dull-looking, image. So I want to put all the
> modules doing that (before the color correction) in a #1 tab, and
> say to the user : at the end of this tab, your image should look
> dull (not beautiful). If this is done, proceed to tab #2 and never
> go back.
>
> Between tabs #1 and #2 -> correct the colors on (now) valid data.
>
> In the tab #2, I want to put all the cleaning modules that can
> affect all upcoming local contrast ones, and say to the user : at
> the end of this tab, your image should look clean. If this is
> done, proceed to tab #3 and never go back.
>
> After tab #2, the image should be clean so do whatever you want
> and enjoy your artistic life. And more importantly, if tabs/steps
> #1 and #2 are done properly, you can copy/paste the editing done
> in the following tabs

Re: [darktable-dev] Darkroom UI refactoring

2018-10-09 Thread Aurélien Pierre
What I call "signal-processing" here are all the modules intended to
clean the data and turn an (always) damaged picture into what it is
supposed to look like in theory. That is :

 1. reconstructing missing parts (clipped highlights)
 2. recovering the dynamic range (tonemapping)
 3. reconstructing the damaged parts (denoising)
 4. reverting the optical artefacts (vignette, CA, distorsion),
 5. reverting the color inaccuracies (white balance and ICC color
correction).

You think you can waltz around modules and do the retouch in the order
you like. Well, you can, but that is asking for trouble.

Take 2 examples :

1. Open a noisy image, turn on the laplacian local contrast, save a
snapshot, then enable a heavy denoising, and compare the 2 outputs : in
some case, the local contrast output will look harsher with denoising.
That means you should fix the noise before setting the local contrast.

2. On a portrait photo done with a camera for which you have an enhanced
matrix (basecurve = OFF), tweak the exposure until you get a nice
contrast (Lmax = 100, Lmin = 0). Then, in the color balance, tweak the
gamma factor to get the L_average on the face = 50. Save the snapshot.
Now, disable the color balance, tweak the exposure again to get a dull
image (fix Lmax = 96, Lmin = 18). Then, in the color balance, tweak the
gain factor to get Lmax = 100, the lift factor to get Lmin = 0 and the
gamma factor to get L_average on the face = 50. Which skin tones look
more natural and which have fewer out-of-gamut issues? (spoiler alert:
#2)
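For reference, the transfer function behind that lift/gamma/gain
experiment is, in one common formulation (variants exist; this is a
sketch of the idea, not dt's exact colorbalance code, and it needs
<math.h>):

    static inline float lgg(float in, float lift, float gamma, float gain)
    {
      /* lift raises the blacks, gain scales the whites */
      const float v = gain * (in + lift * (1.0f - in));
      /* gamma bends the mid-tones */
      return powf(v > 0.0f ? v : 0.0f, 1.0f / gamma);
    }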

Nobody will think of crushing the contrast first in the exposure module,
then bringing it up later in the pixelpipe, in order to get better colors,
until he has seen the math going on inside… In fact, the autoexposure
tool even lures you into doing the opposite.

Because darktable is modular by nature, modules are fully independent
and don't share data, but that leads to a fair amount of inconsistency.
You can tweak the contrast and lightness in 8 different modules
(exposure, contrast/saturation/lightness, tone curve, base curve, zone
system, color balance, unbreak input profile, levels) and people may
think they are equivalent, but they are not. I believe this
inconsistency should be addressed from the UI.

In order to get the color correction right, you need to input a
well-behaved, dull-looking, image. So I want to put all the modules
doing that (before the color correction) in a #1 tab, and say to the
user : at the end of this tab, your image should look dull (not
beautiful). If this is done, proceed to tab #2 and never go back.

Between tabs #1 and #2 -> correct the colors on (now) valid data.

In the tab #2, I want to put all the cleaning modules that can affect
all upcoming local contrast ones, and say to the user : at the end of
this tab, your image should look clean. If this is done, proceed to tab
#3 and never go back.

After tab #2, the image should be clean so do whatever you want and
enjoy your artistic life. And more importantly, if tabs/steps #1 and #2
are done properly, you can copy/paste the editing done in the following
tabs from one picture to the other (almost) without any adjustment. So
you gain in reproducibility.

If you want your daily-needed modules close to you, you will still have
the favorite tab. Currently, I bet they are mostly redundant with the
basic modules for most of us.

As for the high-pass and sharpen modules, the maths inside are exactly
the same: they are both unsharp masking. The sharpen module just has a
hard-coded overlay blending mode whereas the high-pass lets you choose.
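In formula form, both boil down to the classic definition:

    /* unsharp masking, the common core of both modules:
     * the high-pass residue is scaled and added back to the input */
    static inline float unsharp(float in, float blurred, float amount)
    {
      return in + amount * (in - blurred);
    }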

What do you think ?

Aurélien.

On 09/10/2018 at 15:11, Jason Polak wrote:
>>   * in/out color profiles are stored in the color tabs, whereas they are
>> "basic" in the sense they are needed from technical requirements and
>> always on,
> Yes they are needed, but I wouldn't want them cluttering up the 'basic'
> group. If they have to be modified, it's likely to be not very often and
> then they can be found in the color group because they have to do with
> color (at least, not coming from an image processing background, it
> makes sense for them to be there).
>
>>   * signal-processing modules are mixed with creative ones
> Do you mean highpass and lowpass? If so, I don't think it's really
> strange. The highpass and lowpass modules create a sort of more advanced
> mask that could be used for sharpening, but they could also be used for
> more creative effects by using different blend modes. Regular sharpening
> and equalizer seem like more basic corrective modules (sharpening is
> typically because of anti-aliasing filters or softer lenses, equalizer
> for microcontrast-to-macrocontrast adjustments.
>
> The creative group consists of modules that provide a little more
> unusual effects that you might not really need for most shots but can
> often radically alter the look of them, or else things like framing and
> 

Re: [darktable-dev] Darkroom UI refactoring

2018-10-09 Thread Aurélien Pierre
But even if we keep the actual disposition, don't you think it's weird
that :

  * in/out color profiles are stored in the color tabs, whereas they are
"basic" in the sense they are needed from technical requirements and
always on,
  * signal-processing modules are mixed with creative ones
  * you get sharpen in enhancements and high-pass in effects, but they
do exactly the same thing
  * same for local-contrast and wavelet equalizer
  * same for the crop/flip and the perspective correction.

I mean, even with my own workflow set apart, I just think it would make
sense to separate technical and creative modules completely. And by
technical, I mean everything related to image reconstruction and
normalization (things you would do in Matlab). Especially since the
technical modules mostly come early in the pipe, it would make sense to
have them grouped explicitly so that you set them first. Because late
modules like levels and tonecurves depend on the exposure and
tonemapping set earlier in the pipe, so if you begin with the curves and
end with the tonemapping, you will have to redo the curves. Not funny.

Same with unbreak color profile and contrast/lightness/saturation. In
both cases, they perform a gamma correction, but the former does it in
RGB before the input color profile, while the later does it in Lab much
later in the pipe, and they are both specialized on different issues :
don't try to get creative with the unbreak color profile, it should be
used to put the average lightness at 50 % L(ab), nothing else.

Let the UI reflect that, because I have used dt for 7 years, and it's
only now that I look at the code that I understand what's going on. To
be efficient, there are things you need to fix first, for example before
the ICC input profile is applied, and in RGB. A regular user doesn't
stand a chance to figure out what comes first when the unbreak color
profile module is hidden between creative color modules.

And that's just an example. If you don't follow at least roughly the
pixelpipe order, you will end up passing 2-5 times on each module,
whereas it could be done in 2 passes in a reproducible way, fixing
first, tweaking after.

What I would like is to draw a path between critical milestones, that is
between the technical requirements that should be met :

 1. having your tonal range mapped between 16 % and 96 % L(ab) before
you apply any input profile (sketched after this list), otherwise
you mess up the saturation and shift the colors
 2. linearize the tones (beat up the contrast) early and add contrast
late in the pipe,
 3. having the contrast fixed in RGB as much as possible to avoid
de/over-saturation, 
 4. fix the color after you fix the contrast and lightness, especially
if you work in Lab modules
 5. normalize the histogram just at the end
 6. …
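Milestone 1, concretely, is just a linear remap applied ahead of the
input profile. A sketch with the bounds above (illustrative only, not a
dt module; black and white are the measured extremes of the image):

    static inline float premap(float v, float black, float white)
    {
      /* remap [black, white] linearly to [0.16, 0.96] */
      return 0.16f + (v - black) * (0.96f - 0.16f) / (white - black);
    }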

The main problem I have with the current disposition is low-level stuff
comes last in the UI.


On 08/10/2018 at 22:07, Jason Polak wrote:
> I've been thinking a little more about this idea, and while some modules
> might be better moved to other tabs (or a new set of tabs) like perhaps
> 'color reconstruction', the current setup still seems to make more sense
> in some ways too. For example:
>
> 1. I prefer the idea of the 'effects' tab (like watermark and framing)
> to be separate from other things like perspective correction or spot
> removal. Those things that are sometimes useful or interesting like
> split toning seem to be genuinely different than correcting spots.
>
> 2. I don't think the base tab should have tonemap in it. It is a more
> advanced tone module that belongs more in a separate tone group.
>
> One of the arguments made in this thread is that there have been many
> usability requests in the past, and therefore that there is a huge
> demand to improve the software. These requests only imply that there is
> a subset of people who have trouble using darktable. Changing darktable
> to be more aligned with their style might make darktable easier to use
> for them, but in turn it might make it more cumbersome to use for others.
>
> Now, I'm not saying that these ideas are bad, but because people who
> want darktable to change are going to be naturally more vocal than
> people who like it as it is, we need to be careful that we don't make it
> worse for many users by changing the UI for a few.
>
> For example, Aurelien's modifications are designed with a certain
> workflow in mind. But for a different type of workflow, the new design
> might actually be worse. For example, I like the idea of having the
> modules grouped by what they do, not what order they come in during the
> workflow. That's just the way my mind works. I suspect that some people
> will prefer a more chronological ordering, and some people will prefer
> an ordering based on their similarities in algorithm. darktable isn't
> exactly like that now because of the basic tab, but aside from the basic
> tab, it mostly is like that.
>
> Another thing we have to be 

Re: [darktable-dev] Darkroom UI refactoring

2018-10-08 Thread Aurélien Pierre
Thanks Andreas !

So I tried the modifications I have suggested and I agree that the base
tab would be too crowded.

What I did was to split the technical modules in 2 tabs :

  * the I/O and dynamic range ones in the base tab (color profile,
tonemap, exposure, crop etc.)
  * the de-* (denoising, defringing) and color reconstruction in the
correction tab, that is now just after the base one
  * everything else as planned.

The branch is here :
https://github.com/aurelienpierre/darktable/tree/UI-refactor.

Now we have about the same number of modules in each tab.

> I think defringe and haze removal are more saturation modules (one
> decreases and the other increases saturation)

No, they are truly signal-processing algorithms based on optics
equations. It's a bad idea to put them among creative modules, in my
opinion. The core idea of my project is to separate modules you have to
set following rules (signal normalization and restoration) and the ones
you are free to use as you wish.

On 08/10/2018 at 02:53, Andreas Schneider wrote:
> On Monday, 8 October 2018 03:06:34 CEST Aurélien Pierre wrote:
>> Hi everyone !
> Hi Aurélien,
>  
>> I would like to propose a lifting for the UI in the darkroom.
> I like the idea. However the crop and rotate module should be in the 'optics 
> handling' subcategory.
>
> Also the corrections group is too big, we need to split it or reduce it. As 
> not everyone has all modules enabled, reducing it a bit might be enough.
>
> I think defringe and haze removal are more saturation modules (one decreases 
> and the other increases saturation) and maybe create a category for noise 
> handling.
>
>
> Best regards,
>
>
>   Andreas
>


___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org

Re: [darktable-dev] Darkroom UI refactoring

2018-10-07 Thread Aurélien Pierre
On 08/10/2018 at 00:42, Jochen Keil wrote:

> Hi,
>
> On Mon, Oct 8, 2018 at 5:39 AM Aurélien Pierre
>  wrote:
>> The real question here is : could you get past the change and benefit from 
>> it ?
>>
>> I'm biased here, since I developed repetitive strain injury in the wrist at 
>> the early age of 23. So I'm basically trying to improve the efficiency of 
>> the workflow by decreasing as much as possible the number of user 
>> interactions on each picture, especially the mouse interactions.
>>
>> If it's only for cropping, it can be fixed. At the end, I think it really 
>> depends on how many hours you spend each week on darktable. Because editing 
>> a whole wedding is definitely not the same as editing a bunch of holidays 
>> pictures, so I guess every user will have a different sensibility to 
>> workflow matters and the occasionnal users will mostly care about the 
>> overhead of the refactoring (having to learn things again) while the regular 
>> users will see it as a long-term investment.
> So, how about custom tabs, that can be named freely and where users
> can add and arrange modules to their liking?
Because :

 1. things have already been decided for everyone, hence the
inconsistencies we have now,
 2. moving modules between tabs is one line to edit in each IOP
file (see the sketch below), implementing a whole configurable layout
is another (GTK) game. I'm trying to stay realistic here.
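The one line in question is the module's group declaration, roughly
like this in each iop/*.c (a sketch from memory, the exact function and
constant names may differ between versions):

    /* in iop/<module>.c: the tab a module lands in is just the group
     * it declares */
    int groups()
    {
      return IOP_GROUP_TONE; /* return another IOP_GROUP_* to move it */
    }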

There are dozens of things inside dt that should be user-edited,
beginning with the color theme of the UI. But given the limited
resources we have, I'm trying to solve simple problems in a simple way,
not trying to build spaceships. GTK is not Qt.

>
> The existing arrangement could be shipped as a preset, and other
> presets could be added easily.
>
> Make it configurable instead of trying to figure out what's right for
> everyone (hint: won't happen)
>
> Cheers,
>
>   Jochen
>
>
>> On 07/10/2018 at 23:02, Jason Polak wrote:
>>
>> Hi!
>>
>> I can certainly see the logic of your idea. I definitely prefer the
>> current setup, if only because that's what I started with. I think the
>> only way to see if this is a good idea is to poll users because I am
>> sure there are some that would like your way and some that prefer the
>> current way.
>>
>> I do have a specific criticism about your approach, though. I think
>> cropping should come early in the editing process. I care much more
>> about adjusting the general exposure and crop (composition) before I
>> could even think about lens correction or noise reduction. This is
>> doubly so because I take a multi-pass view on editing. I first do some
>> basic edits of exposure, cropping, and tone curve adjustments to the
>> shots I think are half-decent, and then promote the best ones to the
>> next star level. Only with the highest star rating do I even consider
>> spending time on noise reduction and lens correction as there is not
>> much point on noise reduction in the bad images.
>>
>> Personally, I have found after a couple months it's easy to remember
>> where all the modules are and changing it would only make it worse for me.
>>
>> Jason
>>
>> On 2018-10-07 09:06 PM, Aurélien Pierre wrote:
>>
>> Hi everyone !
>>
>> I would like to propose a lifting for the UI in the darkroom.
>>
>> *Problem**
>> *
>>
>> Currently, the modules are separated in 5 tabs :
>>
>>   * base
>>   * tones
>>   * colors
>>   * enhancements
>>   * effects
>>
>> But :
>>
>>   * some modules in the color group affect the tones as well (color
>> zones, color balance)
>>   * some modules in the tone group affect the colors as well (tone
>> curves)
>>   * what is a "basic" module is rather arbitrary (basic == low-level
>> signal processing | traditionnal all-purpose features | simple
>> general settings ?)
>>   * some modules do basically the same thing (local contrast &
>> equalizer, sharpen & high-pass filter, tonecurve & basecurve)
>> and yet you find them in different tabs
>>
>> *Workflow**
>> *
>>
>> Over 7-8 years using dt, I have converged (and advocated) to the
>> following systematic workflow :
>>
>> /Step 1 : clean and neutralize the picture/
>>
>>  1. normalize the white balance
>>  2. normalize the exposure to fit the histogram
>>  3. normalize the contrast and tonemap
>>  4. clean the noise
>>

Re: [darktable-dev] Darkroom UI refactoring

2018-10-07 Thread Aurélien Pierre
The real question here is : could you get past the change and benefit
from it ?

I'm biased here, since I developed repetitive strain injury in the wrist
at the early age of 23. So I'm basically trying to improve the
efficiency of the workflow by decreasing as much as possible the number
of user interactions on each picture, especially the mouse interactions.

If it's only for cropping, it can be fixed. At the end, I think it
really depends on how many hours you spend each week on darktable.
Because editing a whole wedding is definitely not the same as editing a
bunch of holiday pictures, so I guess every user will have a different
sensitivity to workflow matters and the occasional users will mostly
care about the overhead of the refactoring (having to learn things
again) while the regular users will see it as a long-term investment.


On 07/10/2018 at 23:02, Jason Polak wrote:
> Hi!
>
> I can certainly see the logic of your idea. I definitely prefer the
> current setup, if only because that's what I started with. I think the
> only way to see if this is a good idea is to poll users because I am
> sure there are some that would like your way and some that prefer the
> current way.
>
> I do have a specific criticism about your approach, though. I think
> cropping should come early in the editing process. I care much more
> about adjusting the general exposure and crop (composition) before I
> could even think about lens correction or noise reduction. This is
> doubly so because I take a multi-pass view on editing. I first do some
> basic edits of exposure, cropping, and tone curve adjustments to the
> shots I think are half-decent, and then promote the best ones to the
> next star level. Only with the highest star rating do I even consider
> spending time on noise reduction and lens correction as there is not
> much point on noise reduction in the bad images.
>
> Personally, I have found after a couple months it's easy to remember
> where all the modules are and changing it would only make it worse for me.
>
> Jason
>
> On 2018-10-07 09:06 PM, Aurélien Pierre wrote:
>> Hi everyone !
>>
>> I would like to propose a lifting for the UI in the darkroom.
>>
>> *Problem**
>> *
>>
>> Currently, the modules are separated in 5 tabs :
>>
>>   * base
>>   * tones
>>   * colors
>>   * enhancements
>>   * effects
>>
>> But :
>>
>>   * some modules in the color group affect the tones as well (color
>> zones, color balance)
>>   * some modules in the tone group affect the colors as well (tone
>> curves)
>>   * what is a "basic" module is rather arbitrary (basic == low-level
>> signal processing | traditionnal all-purpose features | simple
>> general settings ?)
>>   * some modules do basically the same thing (local contrast &
>> equalizer, sharpen & high-pass filter, tonecurve & basecurve)
>> and yet you find them in different tabs
>>
>> *Workflow**
>> *
>>
>> Over 7-8 years using dt, I have converged (and advocated) to the
>> following systematic workflow :
>>
>> /Step 1 : clean and neutralize the picture/
>>
>>  1. normalize the white balance
>>  2. normalize the exposure to fit the histogram
>>  3. normalize the contrast and tonemap
>>  4. clean the noise
>>  5. correct the lens
>>  6. recover the saturated highlights
>>  7. apply a color profile and LUT
>>
>> At the end of this step, the image should look as close as possible
>> to the reality. This step is only aimed at correcting the input
>> signal to revert the flaws of the sensor technology
>>
>> /Step 2 : tone the picture/
>>
>>  1. adjust the local and global contrast to be visually pleasing and
>> fit the photographer's intentions
>>  2. adjust the lightness
>>
>> This step is the first "artistic" step and is more efficient if the
>> image has been cleaned before. But this uses the colorbalance to fit
>> the gamma.
>>
>> /Step 3 : grade the picture/
>>
>>  1. adjust the hue to set the atmosphere
>>  2. adjust the saturation to get natural colors
>>  3. remap some colors to get better skin or sky tones
>>
>> This step is exactly what is done in video post-production.
>>
>> /Step 4 : enhance the picture/
>>
>>  1. crop
>>  2. fix the rotation and the perspective
>>  3. fix the sharpness (sharpening, high-pass)
>>  4. correct the skin, spots,

[darktable-dev] Darkroom UI refactoring

2018-10-07 Thread Aurélien Pierre
Hi everyone !

I would like to propose a lifting for the UI in the darkroom.

*Problem**
*

Currently, the modules are separated in 5 tabs :

  * base
  * tones
  * colors
  * enhancements
  * effects

But :

  * some modules in the color group affect the tones as well (color
zones, color balance)
  * some modules in the tone group affect the colors as well (tone
curves)
  * what is a "basic" module is rather arbitrary (basic == low-level
signal processing | traditionnal all-purpose features | simple
general settings ?)
  * some modules do basically the same thing (local contrast &
equalizer, sharpen & high-pass filter, tonecurve & basecurve)
and yet you find them in different tabs

*Workflow**
*

Over 7-8 years using dt, I have converged on (and advocated) the
following systematic workflow:

/Step 1 : clean and neutralize the picture/

 1. normalize the white balance
 2. normalize the exposure to fit the histogram
 3. normalize the contrast and tonemap
 4. clean the noise
 5. correct the lens
 6. recover the saturated highlights
 7. apply a color profile and LUT

At the end of this step, the image should look as close as possible
to the reality. This step is only aimed at correcting the input
signal to revert the flaws of the sensor technology

/Step 2 : tone the picture/

 1. adjust the local and global contrast to be visually pleasing and
fit the photographer's intentions
 2. adjust the lightness

This step is the first "artistic" step and is more efficient if the
image has been cleaned before. But this uses the colorbalance to fit
the gamma.

/Step 3 : grade the picture/

 1. adjust the hue to set the atmosphere
 2. adjust the saturation to get natural colors
 3. remap some colors to get better skin or sky tones

This step is exactly what is done in video post-production.

/Step 4 : enhance the picture/

 1. crop
 2. fix the rotation and the perspective
 3. fix the sharpness (sharpening, high-pass)
 4. correct the skin, spots, stains, sensor dust, etc. (spots and
retouch)
 5. correct the shapes (liquify)
 6. add filters (vignette, frame, watermark).

This step is more or less what you would do in pixels editors (Gimp,
Photoshop).

*Proposal*

I would like to refactor the UI in 4 tabs :

 1. *correction :* for all the signal-processing and purely technical
modules (mostly, the first in the pixelpipe, working in
camera-relative RGB) :
  * *sensor patterns handling :*
  o scalepixels
  o rotatepixels
  o demosaic
  o flip
  o rawprepare
  * *color correction handling :*
  o invert
  o temperature
  o colorout
  o colorin
  o colorchecker
  * *dynamic range handling:*
  o exposure
  o clipping
  o colorreconstruction
  o shadhi
  o highlights
  o profile_gamma
  o tonemap
  o graduatednd
  o dither
  * *optics handling :*
  o defringe
  o hazeremoval
  o lens
  o cacorrect
  * *noise handling :*
  o bilateral
  o nlmeans
  o denoiseprofile
  o rawdenoise
  o hotpixels
 2. *tones**: *for creative modules affecting lightness and contrast
  * *global contrast :*
  o tonecurves
  o basecurves
  o colisa
  o levels
  * *tone-mapping :*
  o zonesystem
  o global tonemap
  o relight
  * *local contrast :*
  o atrous
  o clahe
  o equalizer (legacy)
 3. *colors :* for creative modules affecting lightness and contrast
  * *RGB :*
  o colorbalance
  o channelmixer
  * *HSL :*
  o colorzones
  o splittoning
  * *Lab* :
  o colorcontrast
  o colorcorrection
  * *color-mapping :*
  o colormapping
  o colortransfer
  o lowlight
  o colorize
  * *saturation* :
  o vibrance
  o velvia
  o monochrome
 4. *enhancements :* for creative filters and pixel alteration modules
  * *sharpness* :
  o sharpen
  o highpass
  * *softness* :
  o bloom
  o lowpass
  * *inpainting* :
  o spots
  o retouch
  * *structure deformation :*
  o crop and rotate (what's its IOP name ?)
  o liquify
  o ashift
  * *creative* :
  o watermark
  o borders
  o grain
  o vignette

*Benefits*

I think that would draw a path, mostly one-directional, to follow during
edits : every tab is a step, you go into the next tab only when you are
finished with the previous one. It would result in less clicking and
browsing and more guidance for new users. It would draw less 

Re: [darktable-dev] Color balance wheels

2018-09-26 Thread Aurélien Pierre
Hi everyone,

The colorbalance is done and I have added HSL sliders to make it really
quick to set (altough now, the UI is quite crowded). It's the closest I
can go to the color wheels, given that I don't know much about cairo.

The new mode works in ProPhoto RGB, but I still have a problem with the
colorspace conversion in the SSE2 version: converting back and forth
produces a slight magenta cast. I'm still scratching my head over why,
since the pure C and OpenCL versions work flawlessly with the same
algorithm.

If you want to test or review :
https://github.com/darktable-org/darktable/pull/1734

If someone wants to finish the color wheels, you just have to map the
radius to what is passed to the current saturation slider, and the angle
to what is passed to the current hue slider. The maths are ready and
working.
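In code terms the mapping is trivial. A sketch, assuming the wheel
cursor is already converted to polar coordinates with the angle in
radians:

    const float saturation = radius;                /* feeds the saturation slider */
    const float hue = angle / (2.0f * (float)M_PI); /* feeds the hue slider, in [0, 1) */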

Have a nice day,

Aurélien.


On 21/09/2018 at 20:32, Aurélien Pierre wrote:
>
> Hi,
>
> I'm to add the ASC CDL mode (https://en.wikipedia.org/wiki/ASC_CDL) in
> RecRGB 2020 into the colorbalance module.
>
> I have seen the prototype of the code for color wheels, from Tobias,
> and the comment :
>
> // these are not in a state to be useful. but they look nice. too
> bad i couldn't map the enhanced mode with
> // negative values to the wheels :(
>
> What exactly didn't work ? If the problem is merely mathematical, I
> can work on it. I'm just bad at GUIs and GTK stuff.
>
> On another note, is it mandatory to make the adjustement in sRGB ?
> (why not a larger RGB space ?)
>
> Cheers,
>
> Aurélien.
>
>


___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org

Re: [darktable-dev] Color balance wheels

2018-09-21 Thread Aurélien Pierre
It's not even a matter of display, because we are deep inside the
pixelpipe here. It's rather a matter of available range to perform the
intermediate computations and adjustments. You want to be able to do
them in a smooth way and have a lot of control. The point being I have
never had good results for color-grading with this module, it's always
too harsh. I don't know if it is because of the color space of the
lift/gamma/gain transfer function.


On 21/09/2018 at 21:11, Per Inge Oestmoen wrote:
> Aurélien Pierre wrote:
>
>> On another note, is it mandatory to make the adjustement in sRGB ?
>> (why not a larger RGB space ?)
>
> In my opinion, any dependence on sRGB is undesirable - and sRGB should
> be avoided in the workflow until the files are presented through a
> media which cannot show a larger color space. In a not too distant
> future, all monitors and projectors will be able to show a color space
> larger than Adobe RGB.
>
> Per Inge Oestmoen, Norway


___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org

Re: [darktable-dev] How do you feel about code bounties ?

2018-09-19 Thread Aurélien Pierre
Hi everyone,

just following up on that matter with the results of the poll.

Over the 105 people who responded :

  * 94 % are ready to fund the project, some are very enthusiastic,
  * 57 % would do it on a feature-bounty basis,
  * 45 % would do it on a recurring basis,
  * 8 % would prefer to fund the devs directly on a private basis,
  * we could secure a total yearly amount of nearly 11 000 € (eleven
thousand), that is an average of 104 ± 483 €/year/user and a max
of 5000 €/year/user,
  * we could secure an average total bounty amount of 1527 €/feature,
that is an average of 14 ± 19 €/feature/user and a max of 100
€/feature/user.

That is only for the French-speaking community (France, Switzerland,
Belgium). I feel most are happy to turn the amount of their former
Lightroom subscription into a sponsorship of something forever open and
non-binding.

What are the news on the senior devs end ?

Have a good day,

Aurélien.


On 11/09/2018 at 05:30, johannes hanika wrote:
> hi aurelien!
>
> any help is always appreciated. you should by now have dev access on
> redmine (houz is our admin, he granted you access).
>
> your numbers sound encouraging. 660EUR for a bounty that shouldn't
> take you more than your sunday afternoon to fix sounds like a good
> deal. now we'll also have to work on the conditions of our daytime
> jobs to free up time. this should be possible.
>
> cheers,
>  jo
> On Mon, Sep 10, 2018 at 7:51 PM Aurélien Pierre
>  wrote:
>> Hi Jo !
>>
>> I launched a poll on the French community blog (darktable.fr) to see if
>> funding of some sort would get support from users. The poll has been up for
>> 10 hours, we have 42 answers so far, and the good news is:
>>
>> 95 % of users are ready to put money into the project,
>> 2/3 would go for a feature-based bounty funding, 1/3 would go for a
>> recurring crowdfunding (like Patreon/Liberapay),
>> the average contribution per bounty per funder would be 16 ± 15 € (average
>> ± 1 std) summing up to 660 €/bounty in total,
>> the yearly total participation per funder would be 62 ± 49 € summing up to
>> 2545 €/year in total (for now)
>>
>> So my question is : if we were to secure a recurring monthly revenue to pay 
>> for seniors/core devs time, would they be able to clear some hours each 
>> month to review code contributions or mentor new devs ? Platforms like 
>> Liberapay allow easily (as far as I know) to split teams earnings between 
>> the team members.
>>
>> Also, if you trusted me enough to give me admin rights on redmine, I would 
>> be happy to merge duplicates or close the solved issues to make some room.
>>
>> Have a good day !
>>
>> Aurélien.
>>
>>
>> On 10/09/2018 at 08:55, johannes hanika wrote:
>>
>> hi,
>>
>> thanks for putting this list together! yes we're terrible dealing with
>> these things. time seems to be the limiting factor here. the problem
>> with recruiting new people to go in and fix these things is that you
>> need someone to review the changes. which pretty much amounts to the
>> bottle neck for our github pull requests or google summer of code and
>> the like. now funding a full time employee for a secured amount of
>> time is a completely different story than fixing a bug here and there.
>>
>> i agree that the procedure needs to be changed though. somehow the
>> core issue seems to be senior dev time to me.
>>
>> cheers,
>>  jo
>>
>> On Sat, Sep 8, 2018 at 7:24 AM Aurélien Pierre
>>  wrote:
>>
>> Hi everyone,
>>
>> looking at the Redmine feature requests, it seems that a lot of legitimate 
>> requests are left idle, and some have been so for several years, generating 
>> duplicates. Most of these features are cosmetic User Interface improvements 
>> or making variables writable, such as :
>>
>> delete some EXIF+GPS meta-data in exported files (for privacy)
>> set the export file resolution from paper size and printing DPI, set the DPI 
>> value right in EXIF,
>> add/edit unique names/titles for modules instances, also within styles,
>> decompose the darkroom history by module AND module controls (decrease the 
>> history granularity),
>> add more JPEG exportation options (progressive, optimized, subsampling),
>> apply conditional styles automatically (the same way as the presets of the 
>> modules),
>> make styles hierarchical (to clean-up the list),
>> allow drawn mask editing (size, feathering, opacity) from the masks list and 
>> keyboard input values,
>> lock position and size of drawn masks for safe panning/zooming,
>> EXIF and IP

Re: [darktable-dev] Code reviews requested : Profile gamma correction

2018-09-18 Thread Aurélien Pierre
Thanks Jo !

So now I'm working on the OpenCL port. I have this function in the IOP:

#ifdef HAVE_OPENCL
int process_cl(struct dt_iop_module_t *self, dt_dev_pixelpipe_iop_t *piece, cl_mem dev_in, cl_mem dev_out,
               const dt_iop_roi_t *const roi_in, const dt_iop_roi_t *const roi_out)
{
  dt_iop_profilegamma_data_t *d = (dt_iop_profilegamma_data_t *)piece->data;
  dt_iop_profilegamma_global_data_t *gd = (dt_iop_profilegamma_global_data_t *)self->data;
  cl_int err = -999;
  const int devid = piece->pipe->devid;
  const int width = roi_in->width;
  const int height = roi_in->height;
  const float noise = powf(2., d->noise_level);
  const float grey = d->grey_point / 100.;

  size_t sizes[3] = { ROUNDUPWD(width), ROUNDUPHT(height), 1 };
  dt_opencl_set_kernel_arg(devid, gd->kernel_profilegamma_log, 0, sizeof(cl_mem), (void *)&dev_in);
  dt_opencl_set_kernel_arg(devid, gd->kernel_profilegamma_log, 1, sizeof(cl_mem), (void *)&dev_out);
  dt_opencl_set_kernel_arg(devid, gd->kernel_profilegamma_log, 2, sizeof(int), (void *)&width);
  dt_opencl_set_kernel_arg(devid, gd->kernel_profilegamma_log, 3, sizeof(int), (void *)&height);
  dt_opencl_set_kernel_arg(devid, gd->kernel_profilegamma_log, 4, sizeof(float), (void *)&(d->camera_factor));
  dt_opencl_set_kernel_arg(devid, gd->kernel_profilegamma_log, 5, sizeof(float), (void *)&(d->dynamic_range));
  dt_opencl_set_kernel_arg(devid, gd->kernel_profilegamma_log, 6, sizeof(float), (void *)&noise);
  dt_opencl_set_kernel_arg(devid, gd->kernel_profilegamma_log, 7, sizeof(float), (void *)&(d->shadows_range));
  dt_opencl_set_kernel_arg(devid, gd->kernel_profilegamma_log, 8, sizeof(float), (void *)&grey);

  err = dt_opencl_enqueue_kernel_2d(devid, gd->kernel_profilegamma_log, sizes);
  if(err != CL_SUCCESS) goto error;
  return TRUE;
error:
  dt_print(DT_DEBUG_OPENCL, "[opencl_profilegamma_log] couldn't enqueue kernel! %d\n", err);
  return FALSE;
}
#endif

and this one in basic.cl :

kernel void profilegamma_log(read_only image2d_t in, write_only image2d_t out,
    unsigned int width, unsigned int height,
    float factor, float range, float noise_level, float shadows_range, float grey)
{
  const unsigned int x = get_global_id(0);
  const unsigned int y = get_global_id(1);
  /* note: x spans the width dimension and y the height, so this check looks inverted */
  if(x >= height || y >= width) return;
  float4 i = read_imagef(in, sampleri, (int2)(x, y));
  const float4 min_val = noise_level / (noise_level + grey);
  const float4 Logmin_val = log2(min_val);
  i.xyz = (log2(factor * (i.xyz + noise_level) / (grey + noise_level)) - shadows_range) / range;
  write_imagef(out, (int2)(x, y), i);
}

While trying to run it, I get :

1,376046 [opencl_create_kernel] successfully loaded kernel
`profilegamma_log' (165) for device 0 …

7,995011 [opencl_events_flush] could not get event info for '[Read Image
(from device to host)]': -

7,995018 [opencl_events_flush] could not get event info for
'profilegamma_log': -

7,995021 [opencl_pixelpipe] could not run module 'profile_gamma' on gpu.
falling back to cpu path

7,995208 [opencl_pixelpipe (b)] late opencl error detected while copying
back to cpu buffer: -5

Any clue about what could be happening ?

Thanks,

Aurélien.


On 18/09/2018 at 03:26, johannes hanika wrote:
> heya,
>
> [..]
>> Actually, I even wonder if OpenCL is relevant for this as it's a linear 
>> operation performed on only one pixel at the time over the flatten array. I 
>> wouldn't be surprised if the OpenCL version were slower on some systems than 
>> a good SSE2 version.
> maybe on some systems. the thing with opencl is that you need to copy
> the buffer to the gpu and back at the end. if you have one module that
> interrupts the pipeline, you'll need to copy more (get your input
> buffer back to the cpu, process, copy back to gpu). this slows down
> the whole process significantly, even if the module would run at same
> speed on both devices.
>
>
>> Considering the code itself, my only remarks are for this line:
>>   for(size_t k = 1; k < (size_t)ch * roi_out->width * roi_out->height; 
>> k++)
>> First, is there a reason why you are using a size_t type? int or unsigned 
>> would be fine I think, and you wouldn't need a cast.
> you definitely want 64 bits for the counter if you go width*height
> (times channel count here, too). size_t happens to be unsigned 64-bit
> int on many systems. using stdint.h you could use uint64_t to be even
> clearer and maybe more portable. note that you could have used a
> nested loop for y and for x together with an openmp annotation
> "collapse(2)" to get similar results.
>
> and yes, please start at 0 :)
>
> cheers,
>  jo
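For the record, the nested-loop form suggested above would look roughly
like this (a sketch under the same ch/roi_out assumptions, not the
actual module code):

    #ifdef _OPENMP
    #pragma omp parallel for collapse(2) schedule(static)
    #endif
      for(int j = 0; j < roi_out->height; j++)
        for(int i = 0; i < roi_out->width; i++)
        {
          const size_t k = ((size_t)j * roi_out->width + i) * ch;
          /* process the pixel at out[k + 0 .. k + 2] here */
        }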


___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org

Re: [darktable-dev] Code reviews requested : Profile gamma correction

2018-09-12 Thread Aurélien Pierre
On 12/09/2018 at 12:59, rawfiner wrote:

> Hi Aurélien
>
> First, thank you for showing me this interesting video.
> I just compiled your branch.
>
> My first question is, is it possible to find shift power slope values
> that reproduce the result we had before with linear and gamma?
> If yes, I think you should compute the new parameter values from the
> old ones.
> You can take a look at function "legacy_param" in denoiseprofile.c to
> see an example.
> If that is not possible, we could imagine to have a "mode" selector in
> the GUI to switch between "linear and gamma" and "shift power slope".
It is not and I will re-integrate the legacy code as soon as I fix some
problems. I will definitely add the legacy_param().
>
> Considering opencl, I cannot help you here as I have never coded in
> opencl and I do not have a GPU.
> Yet, even without opencl, code seems already quite fast.
Actually, I even wonder if OpenCL is relevant for this as it's a linear
operation performed on only one pixel at a time over the flattened
array. I wouldn't be surprised if the OpenCL version were slower on some
systems than a good SSE2 version.
>
> Considering the code itself, my only remarks are for this line:
>   for(size_t k = 1; k < (size_t)ch * roi_out->width *
> roi_out->height; k++)
> First, is there a reason why you are using a size_t type? int or
> unsigned would be fine I think, and you wouldn't need a cast.
I took that line from the exposure.c IOP. I think size_t is an explicit
array index type so the compiler can do cleverer things with it than
just a regular int.
> Second, in C, array indexes start at 0, so the red value of the pixel
> at the top left corner is not processed by your loop (you can see it
> on exported image)
> So I guess you want the for loop to be:
>  for(unsigned k = 0; k < ch * roi_out->width * roi_out->height; k++)
That's a copy/paste error from another loop I removed since then.
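For the record, here is a self-contained sketch of the two loop shapes
discussed in this thread ; process() is a hypothetical stand-in for the
per-sample operation, and the buffers are assumed interleaved float with
ch channels :

#include <stddef.h>

/* hypothetical stand-in for the per-sample operation */
static inline float process(float v) { return v; }

/* flat form : size_t keeps the index 64-bit, and it starts at 0 */
void apply_flat(const float *in, float *out, int width, int height, int ch)
{
  for(size_t k = 0; k < (size_t)ch * width * height; k++)
    out[k] = process(in[k]);
}

/* nested form with the OpenMP annotation suggested above */
void apply_nested(const float *in, float *out, int width, int height, int ch)
{
#ifdef _OPENMP
#pragma omp parallel for collapse(2) schedule(static)
#endif
  for(int j = 0; j < height; j++)
    for(int i = 0; i < width; i++)
    {
      const size_t k = ((size_t)j * width + i) * ch;
      for(int c = 0; c < ch; c++) out[k + c] = process(in[k + c]);
    }
}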
>
> I know that C is hard to learn, so congratulations Aurélien! :-)
Thanks ! I have posted the first results :
https://discuss.pixls.us/t/solving-dynamic-range-problems-in-a-linear-way/9006

>
> rawfiner
>
>
> Le mer. 12 sept. 2018 à 14:46, Aurélien Pierre
> mailto:rese...@aurelienpierre.com>> a écrit :
>
> Hi everyone,
>
> when working with color profiles, the main historic issue was the
> non-linearity of the sensors/films. Now, it is rather that the
> color profile is performed on a chart having 6-7 EV of dynamic
> range while modern cameras have 12-15 EV. Simple gamma corrections
> (invented for CRT screens) don't work anymore, and video editors
> have invented a new standard able to remap the dynamic range and
> to fix the mid-tones at once :
> 
> https://www.youtube.com/watch?v=kVKnhJN-BrQ&index=7&list=PLa1F2ddGya_9XER0wnFS6Mgnp3T-hgSZO
>
> I have embedded the formula used in Blender
> (https://en.wikipedia.org/wiki/ASC_CDL) into the profile
> correction module of dt (using the same parameters for each RGB
> channel). The result looks more natural than the current version,
> without gamut or saturation issues in the highlights. It also
> speeds up the workflow, since all that is needed is this module to
> adjust the dynamic range, then a tone curve in auto RGB mode
> shaped as a stiff S to bring back the contrast. The result is much
> better than with the tonemapping modules, with less color fixes.
>
> I'm a newbie at C and it's the first time I've achieved something
> inside dt, so I could use some reviews on my code and also some
> help on the OpenCL part (the kernel does not load, I don't know
> why) : https://github.com/aurelienpierre/darktable/tree/color-grading
>
> Thanks a lot !
>
> Aurélien.
>
>
> 
> ___
> darktable developer mailing list to unsubscribe send a mail to
> darktable-dev+unsubscr...@lists.darktable.org
> <mailto:darktable-dev%2bunsubscr...@lists.darktable.org>
>
>
> ___
> darktable developer mailing list to unsubscribe send a mail to
> darktable-dev+unsubscr...@lists.darktable.org


___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org

[darktable-dev] Code reviews requested : Profile gamma correction

2018-09-12 Thread Aurélien Pierre
Hi everyone,

when working with color profiles, the main historic issue was the
non-linearity of the sensors/films. Now, it is rather that the color
profile is performed on a chart having 6-7 EV of dynamic range while
modern cameras have 12-15 EV. Simple gamma corrections (invented for CRT
screens) don't work anymore, and video editors have invented a new
standard able to remap the dynamic range and to fix the mid-tones at
once :
https://www.youtube.com/watch?v=kVKnhJN-BrQ&index=7&list=PLa1F2ddGya_9XER0wnFS6Mgnp3T-hgSZO

I have embedded the formula used in Blender
(https://en.wikipedia.org/wiki/ASC_CDL) into the profile correction
module of dt (using the same parameters for each RGB channel). The
result looks more natural than the current version, without gamut or
saturation issues in the highlights. It also speeds up the workflow,
since all that is needed is this module to adjust the dynamic range, then a
tone curve in auto RGB mode shaped as a stiff S to bring back the
contrast. The result is much better than with the tonemapping modules,
with less color fixes.
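For reference, a minimal sketch of the ASC CDL transfer function linked
above, applied with the same parameters on each RGB channel as described
(the [0, 1] clamping bounds are an assumption) :

#include <math.h>

/* ASC CDL : out = clamp(in * slope + offset) ^ power, per channel */
static inline float cdl(float in, float slope, float offset, float power)
{
  float v = in * slope + offset;
  v = v < 0.0f ? 0.0f : (v > 1.0f ? 1.0f : v); /* clamp to [0, 1] */
  return powf(v, power);
}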

I'm a newbie at C and it's the first time I've achieved something inside dt,
so I could use some reviews on my code and also some help on the OpenCL
part (the kernel does not load, I don't know why) :
https://github.com/aurelienpierre/darktable/tree/color-grading

Thanks a lot !

Aurélien.


___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org

[darktable-dev] Add more tags on Redmine

2018-09-11 Thread Aurélien Pierre
Hi everyone !

Doing some cleanup on Redmine, I see bugs and feature requests are
sorted by categories, which are the place where they occur (lighttable,
darkroom, Lua, masks).

I would like to propose an additional taxonomy, for example as tags, to
narrow the field of expertise they require :

  * GUI design, for everything related to workflow and ergonomics
optimization (mostly GTK programming)
  * data management, for everything related to EXIF/IPTC metadata, tags,
collections and database management (mostly text handling, exiftools
and SQLite)
  * I/O handling, for everything related to file decoding, encoding,
printing, exporting, sharing (mostly library handling and Rawspeed
programming)
  * image processing, for everything related to core algorithms and
modules (mostly maths)
  * performance, for everything related to low-level optimizations to
make algorithms run faster (mostly SSE4, OpenCL and fundamental
computer science)
  * compatibility, for everything related to making the above stack work
on various devices, OSes and architectures (mostly wiring OS specifics)
  * documentation and translation, for everything related to redaction
and pedagogy (text, video, tutorials, etc.)

This is in the spirit of encouraging external contributions and
collaboration, in order to make the required skill sets clearer. I hope
that, looking at this taxonomy, everyone can figure out where to put
their skills.

What do you think ?

Have a good day,

Aurélien.


___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org

Re: [darktable-dev] How do you feel about code bounties ?

2018-09-10 Thread Aurélien Pierre
Hi Jo !

I launched a poll on the French community blog (darktable.fr) to see if
funding of some sort would get support from users. The poll has been
up for 10 hours, we have 42 answers so far, and the good news is :

  * 95 % of users are ready to put money into the project,
  * 2/3 would go for feature-based bounty funding, 1/3 would go for a
recurring crowdfunding (like Patreon/Liberapay),
  * the average contribution per bounty per funder would be 16 ± 15 €
(average ± 1 std), summing up to 660 €/bounty in total,
  * the yearly total participation per funder would be 62 ± 49 €,
summing up to 2545 €/year in total (for now)

So my question is : if we were to secure a recurring monthly revenue to
pay for seniors/core devs time, would they be able to clear some hours
each month to review code contributions or mentor new devs ? Platforms
like Liberapay allow easily (as far as I know) to split teams earnings
between the team members.

Also, if you trusted me enough to give me admin rights on redmine, I
would be happy to merge duplicates or close the solved issues to make
some room.

Have a good day !

Aurélien.


Le 10/09/2018 à 08:55, johannes hanika a écrit :
> hi,
>
> thanks for putting this list together! yes we're terrible dealing with
> these things. time seems to be the limiting factor here. the problem
> with recruiting new people to go in and fix these things is that you
> need someone to review the changes. which pretty much amounts to the
> bottle neck for our github pull requests or google summer of code and
> the like. now funding a full time employee for a secured amount of
> time is a completely different story than fixing a bug here and there.
>
> i agree that the procedure needs to be changed though. somehow the
> core issue seems to be senior dev time to me.
>
> cheers,
>  jo
>
> On Sat, Sep 8, 2018 at 7:24 AM Aurélien Pierre
>  wrote:
>> Hi everyone,
>>
>> looking at the Redmine feature requests, it seems that a lot of legitimate 
>> requests are left idle, and some have been so for several years, generating 
>> duplicates. Most of these features are cosmetic User Interface improvements 
>> or making variables writable, such as :
>>
>> delete some EXIF+GPS meta-data in exported files (for privacy)
>> set the export file resolution from paper size and printing DPI, set the DPI 
>> value right in EXIF,
>> add/edit unique names/titles for modules instances, also within styles,
>> decompose the darkroom history by module AND module controls (decrease the 
>> history granularity),
>> add more JPEG exportation options (progressive, optimized, subsampling),
>> apply conditional styles automatically (the same way as the presets of the 
>> modules),
>> make styles hierarchical (to clean-up the list),
>> allow drawn mask edition (size, feathering, opacity) from the masks list and 
>> keyboard input values,
>> lock position and size of drawn masks for safe panning/zooming,
>> EXIF and IPTC management/edition requests (date, time, names of lenses 
>> without processor, scans with no/wrong metadata),
>> create arbitrary collections/catalogs of images (ex : family, perso, 
>> assignments),
>> implement ESC and RETURN shortcuts in every dialog to cancel and validate,
>> implement a coarse/fine tuning option to increment/decrement values with the 
>> mouse wheel
>> lots of small GTK glitches with scroll bars, lighttable selections and 
>> hovers,
>> link exported pictures paths to original RAW files,
>> allow to set the UI main color and create user-friendly theme/template 
>> (without editing CSS),
>> etc.
>>
>> Some are more algorithmically challenging :
>>
>> make the RGB gains independent in the wavelets/non-local means denoising module,
>> rotate/flip the sampling patch in the spot removal module and in the freshly 
>> merged retouch module,
>> add a color correction on A and B channels to fix the desaturation happening 
>> in the local contrast module (laplacian) with heavy settings,
>> display the locked AF point on previews
>> detecting duplicates and similar pictures in database
>> etc.
>> plus all the Windows portability issues.
>>
>> And there are still #TODOs in the source code.
>>
>> Most of these changes are for sure not the most challenging and don't make 
>> for the sexiest coding party, so I have no trouble imagining how 
>> unappealing they can be to hobbyist developers, but they are nonetheless 
>> useful and game-changing for professionals who are bound to efficiency 
>> constraints.
>>
>> I find quite remarkable the dramatic improvements that software such as 
>> Blender have

Re: [darktable-dev] How do you feel about code bounties ?

2018-09-08 Thread Aurélien Pierre
Do the (hard)core devs need more help reviewing merge requests ? How
could we solve that issue ?


Le 08/09/2018 à 05:02, Andreas Schneider a écrit :
> On Saturday, 8 September 2018 07:24:51 CEST Aurélien Pierre wrote:
>> Hi everyone,
> Hi Aurélien,
>
>> Shouldn't we merge Github issues and Redmine bugs/FR, and promote
>> bountysource ?
> It doesn't really make sense to start with bounties while github merge
> requests are bit rotting. Even small changes which are easy to review ...
>
>
>   Andreas
>
>
> ___
> darktable developer mailing list
> to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org
>


___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org

[darktable-dev] How do you feel about code bounties ?

2018-09-07 Thread Aurélien Pierre
Hi everyone,

looking at the Redmine feature requests, it seems that a lot of
legitimate requests are left idle, and some have been so for several
years, generating duplicates. Most of these features are cosmetic User
Interface improvements or making variables writable, such as :

  * delete some EXIF+GPS meta-data in exported files (for privacy),
  * set the export file resolution from paper size and printing DPI, set
the DPI value right in the EXIF,
  * add/edit unique names/titles for module instances, also within styles,
  * decompose the darkroom history by module AND module controls
(decrease the history granularity),
  * add more JPEG export options (progressive, optimized, subsampling),
  * apply conditional styles automatically (the same way as the presets
of the modules),
  * make styles hierarchical (to clean up the list),
  * allow drawn mask edition (size, feathering, opacity) from the masks
list and keyboard input values,
  * lock position and size of drawn masks for safe panning/zooming,
  * EXIF and IPTC management/edition requests (date, time, names of
lenses without processor, scans with no/wrong metadata),
  * create arbitrary collections/catalogs of images (e.g. family,
personal, assignments),
  * implement ESC and RETURN shortcuts in every dialog to cancel and
validate,
  * implement a coarse/fine tuning option to increment/decrement values
with the mouse wheel,
  * lots of small GTK glitches with scroll bars, lighttable selections
and hovers,
  * link exported picture paths to original RAW files,
  * allow setting the UI main color and creating user-friendly
themes/templates (without editing CSS),
  * etc.

Some are more algorithmically challenging :

  * make the RGB gains independent in the wavelets/non-local means
denoising module,
  * rotate/flip the sampling patch in the spot removal module and in the
freshly merged retouch module,
  * add a color correction on the A and B channels to fix the
desaturation happening in the local contrast module (laplacian) with
heavy settings,
  * display the locked AF point on previews,
  * detect duplicates and similar pictures in the database,
  * etc.
  * plus all the Windows portability issues.

And there are still #TODOs in the source code.

Most of these changes are for sure not the most challenging and don't
make for the sexiest coding party, so I have no trouble imagining how
unappealing they can be to hobbyist developers, but they are nonetheless
useful and game-changing for professionals who are bound to efficiency
constraints.

I find quite remarkable the dramatic improvements that software such as
Blender has known in the past decade, and though I get why dt developers
aren't thrilled by the admin overhead involved in a similar foundation
to pay full-time developers, I think the above requests will stay idle
for some more time if we don't go next-level. That would be a shame
considering the core is stable and sane, and what is needed is mainly
cosmetic.

As more and more professional photographers adopt dt in their job and
are asking for more efficiency-driven features, I know that some would
be happy to fund developers to smooth all the sharp edges listed above.
For now, the 

[darktable-dev] Reverse mouse wheel scrolling direction

2018-09-07 Thread Aurélien Pierre
Hi,

I'm using a 3D (6 axes) mouse in dt on Ubuntu to zoom and set the size
of the drawn masks. On my system, I'm not using the natural scrolling,
meaning that the scrolling moves the view, not the content.

In dt, I have to scroll to the bottom to increase the masks size (and
scroll to the top to decrease), which is quite disturbing. But on
modules sliders, scrolling to top increases the value as expected.

Is there any way to reverse the wheel scrolling behaviour for masks
only, in order to make it consistent with the sliders behaviour ?

Thanks,

Aurélien.


___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org

Re: [darktable-dev] More exporting threads

2018-09-03 Thread Aurélien Pierre
Hi,

I don't think threaded jobs would be of much use here, since the image
processing is already parallelized at the array level. The OpenCL
interface should already use all the available cores of your GPUs on one
single image at a time. What you suggest would, I think, degrade the
overall performance by increasing I/O overhead.

Aurélien.


Le 03/09/2018 à 14:11, Lutz Labusch a écrit :
> Hi all,
>
> i want to say thank you for darktable, which is a pretty nice program,
> first.
>
> As a owner of two (n)  powerful GPU devices it will be very
> interesting to have the possibility to have two (n) exporting threads
> for larger jobs. Is there somewhere an hidden option for this?
>
> Thanks and best regards
>
>
> ___
> darktable developer mailing list to unsubscribe send a mail to
> darktable-dev+unsubscr...@lists.darktable.org


___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org

Re: [darktable-dev] Idea: grouping effects together

2018-07-29 Thread Aurélien Pierre
Also, a thing I have thought about for a while would be the ability to
give names/titles to modules and especially their instances. Checking
online, it's been open for 2 years :
https://redmine.darktable.org/issues/10935


Le 29/07/2018 à 10:32, Marek Jędrzejewski a écrit :
> I think it would be useful if there was an option to make a group with
> effect instances to allow for setting mixing for them together
>
> ___
>
> darktable developer mailing list
> to unsubscribe send a mail to
> darktable-dev+unsubscr...@lists.darktable.org
>

___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org



Re: [darktable-dev] Re: denoise profile non local means: neighborhood parameter

2018-07-01 Thread Aurélien Pierre
You're welcome ;-)

That's true : the multiplication is equivalent to an "AND" operation,
the resulting mask has non-zero values where both the TV AND Laplacian
masks have non-zero values, which - from my tests - is where the real
noise is.

That is because TV alone is too sensitive : when the image is noisy, it
works fine, but whenever the image is clean or barely noisy, it detects
edges as well, thus false positives in the case of noise detection.

The TV × Laplacian is a safety jacket that allows the TV to work as
expected on noisy images (see the example) but will protect sharp edges
on clean images (on the example, the mask barely grabs a few pixels in
the in-focus area).

I have found that the only way we could overcome the oversensitivity of
the TV alone is by setting a window (like a band-pass filter) instead of
a threshold (high-pass filter) because, in a noisy picture, depending on
the noise level, the TV values of noisy and edgy pixels are very close.
From an end-user perspective, this is tricky.

Using TV × Laplacian, given that the noise stats should not vary much
for a given sensor at a given ISO, allows us to confidently set a simple
threshold as a factor of the standard deviation. It gives more
reproducibility and allows us to build presets/styles for a given
camera/ISO. Assuming gaussian noise, if you set your threshold factor to
X (which means "unmask everything above the mean (TV × Laplacian) + X
standard deviations"), you know beforehand how many high-frequency
pixels will be affected, no matter what :

  * X = -1 => 84 %,
  * X = 0 => 50 %,
  * X = 1 => 16 %,
  * X = 2 => 2.5 %,
  * X = 3 => 0.15 %,
  * …
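As an illustration, a minimal sketch of that thresholding, assuming the
TV × Laplacian map is already computed and roughly gaussian (buffer and
function names are hypothetical) :

#include <math.h>
#include <stddef.h>

/* unmask everything above mean + X * stddev of the TV × Laplacian map ;
 * below the threshold the mask is zero, above it keeps the map value */
void threshold_mask(const float *tv_lap, float *mask, size_t n, float X)
{
  double sum = 0.0, sum2 = 0.0;
  for(size_t k = 0; k < n; k++)
  {
    sum += tv_lap[k];
    sum2 += (double)tv_lap[k] * tv_lap[k];
  }
  const double mean = sum / n;
  const double sigma = sqrt(sum2 / n - mean * mean);
  const double threshold = mean + X * sigma;
  for(size_t k = 0; k < n; k++)
    mask[k] = (tv_lap[k] > threshold) ? tv_lap[k] : 0.0f;
}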

Le 01/07/2018 à 14:13, rawfiner a écrit :
> Thank you for this study Aurélien
>
> As far as I understand, TV and Laplacians are complementary as they
> detect noise in different regions of the image (noise in sharp edge
> for Laplacian, noise elsewhere for TV).
> Though, I do not understand why you multiply the TV and Laplacian
> results to get the mask.
> Multiplying them would result in a mask containing non-zero values
> only for pixels that are detected as noise both by TV and Laplacian.
> Is there a particular reason for multiplying (or did I misunderstood
> something?), or could we take the maximum value among TV and Laplacian
> for each pixel instead?
>
> Thanks again
>
> Cheers,
> rawfiner
>
>
> 2018-07-01 3:45 GMT+02:00 Aurélien Pierre  <mailto:rese...@aurelienpierre.com>>:
>
> Hi,
>
> I have done experiments on that matter and took the opportunity to
> correct/test further my code.
>
> So here are my attempts to code a noise mask and a sharpness mask
> with total variation and laplacian norms :
> 
> https://github.com/aurelienpierre/Image-Cases-Studies/blob/master/notebooks/Total%20Variation%20masking.ipynb
> 
> <https://github.com/aurelienpierre/Image-Cases-Studies/blob/master/notebooks/Total%20Variation%20masking.ipynb>
>
> Performance benchmarks are at the end.
>
> Cheers,
>
> Aurélien.
>
>
> Le 17/06/2018 à 15:03, rawfiner a écrit :
>>
>>
>> Le dimanche 17 juin 2018, Aurélien Pierre
>> mailto:rese...@aurelienpierre.com>>
>> a écrit :
>>
>>
>>
>> Le 13/06/2018 à 17:31, rawfiner a écrit :
>>>
>>>
>>> Le mercredi 13 juin 2018, Aurélien Pierre
>>> >> <mailto:rese...@aurelienpierre.com>> a écrit :
>>>
>>>
>>>>
>>>> On Thu, Jun 14, 2018 at 12:23 AM, Aurélien Pierre
>>>> >>> <mailto:rese...@aurelienpierre.com>> wrote:
>>>> > Hi,
>>>> >
>>>> > The problem of a 2-passes denoising method
>>>> involving 2 differents
>>>> > algorithms, the later applied where the former
>>>> failed, could be the grain
>>>> > structure (the shape of the noise) would be
>>>> different along the picture,
>>>> > thus very unpleasing.
>>>>
>>>>
>>>> I agree that the grain structure could be different.
>>>> Indeed, the grain could be different, but my feeling
>>>> (that may be wrong) is that it would be still better
>>>> than just no further processing, that leaves some
>>>> pixels unprocessed (they could form grain structures
>>>> far from uniform if we are not lucky).
>>>> If you think it is only due to a change of algorithm, I guess we
>>>> could apply non local means again on pixels where a first pass
>>>> failed, but with different parameters to be quite confident that
>>>> the second pass will work.

Re: [darktable-dev] Re: denoise profile non local means: neighborhood parameter

2018-06-30 Thread Aurélien Pierre
Hi,

I have done experiments on that matter and took the opportunity to
correct/test further my code.

So here are my attempts to code a noise mask and a sharpness mask with
total variation and laplacian norms :
https://github.com/aurelienpierre/Image-Cases-Studies/blob/master/notebooks/Total%20Variation%20masking.ipynb

Performance benchmarks are at the end.
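In pseudo-C, the core of the mask computation looks like this (a sketch
of the idea, not a copy of the notebook ; the 5-point Laplacian stencil
and the product with a precomputed TV map are the parts described in
this thread, everything else is illustrative) :

#include <math.h>
#include <stddef.h>

/* noise mask = |laplacian| × total variation, non-zero only where
 * both maps respond ; u is a single channel, borders are skipped */
void noise_mask(const float *u, const float *tv, float *mask,
                int width, int height)
{
  for(int y = 1; y < height - 1; y++)
    for(int x = 1; x < width - 1; x++)
    {
      const size_t k = (size_t)y * width + x;
      const float lap = u[k - width] + u[k + width]
                        + u[k - 1] + u[k + 1] - 4.0f * u[k];
      mask[k] = fabsf(lap) * tv[k];
    }
}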

Cheers,

Aurélien.


Le 17/06/2018 à 15:03, rawfiner a écrit :
>
>
> Le dimanche 17 juin 2018, Aurélien Pierre  <mailto:rese...@aurelienpierre.com>> a écrit :
>
>
>
> Le 13/06/2018 à 17:31, rawfiner a écrit :
>>
>>
>> Le mercredi 13 juin 2018, Aurélien Pierre
>> mailto:rese...@aurelienpierre.com>>
>> a écrit :
>>
>>
>>>
>>> On Thu, Jun 14, 2018 at 12:23 AM, Aurélien Pierre
>>> >> <mailto:rese...@aurelienpierre.com>> wrote:
>>> > Hi,
>>> >
>>> > The problem of a 2-passes denoising method involving 2
>>> differents
>>> > algorithms, the later applied where the former failed,
>>> could be the grain
>>> > structure (the shape of the noise) would be different
>>> along the picture,
>>> > thus very unpleasing.
>>>
>>>
>>> I agree that the grain structure could be different. Indeed,
>>> the grain could be different, but my feeling (that may be
>>> wrong) is that it would be still better than just no further
>>> processing, that leaves some pixels unprocessed (they could
>>> form grain structures far from uniform if we are not lucky).
>>> If you think it is only due to a change of algorithm, I
>>> guess we could apply non local means again on pixels where a
>>> first pass failed, but with different parameters to be quite
>>> confident that the second pass will work.
>> That sounds better to me… but practice will have the last word.
>>
>>
>> Ok :-) 
>>
>>>  
>>>
>>> >
>>> > I thought maybe we could instead create some sort of
>>> total variation
>>> > threshold on other denoising modules :
>>> >
>>> > compute the total variation of each channel of each
>>> pixel as the divergence
>>> > divided by the L1 norm of the gradient - we then
>>> obtain a "heatmap" of the
>>> > gradients over the picture (contours and noise)
>>> > let the user define a total variation threshold and
>>> form a mask where the
>>> > weights above the threshold are the total variation
>>> and the weights below
>>> > the threshold are zeros (sort of a highpass filter
>>> actually)
>>> > apply the bilateral filter according to this mask.
>>> >
>>> > This way, if the user wants to stack several denoising
>>> modules, he could
>>> > protect the already-cleaned areas from further denoising.
>>> >
>>> > What do you think ?
>>>
>>>
>>> That sounds interesting.
>>> This would maybe allow to keep some small variations/details
>>> that are not due to noise or not disturbing, while denoising
>>> the other parts.
>>> Also, it may be computationally interesting (depends on the
>>> complexity of the total variation computation, I don't know
>>> it), as it could reduce the number of pixels to process.
>>> I guess the user could use something like that also the
>>> other way?: to protect high detailed zones and apply the
>>> denoising on quite smoothed zones only, in order to be able
>>> to use stronger denoising on zones that are supposed to be
>>> background blur.
>>
>> The noise is high frequency, so the TV (total variation)
>> threshold will have to be high pass only. The hypothesis
>> behind the TV thresholding is noisy pixels should have
>> abnormally higher gradients than true details, so you isolate
>> them this way. Selecting noise in low frequencies areas would require
>> in addition something like a guided filter, which I believe is what
>> is used in the dehaze module.

Re: [darktable-dev] Re: denoise profile non local means: neighborhood parameter

2018-06-17 Thread Aurélien Pierre


Le 13/06/2018 à 17:31, rawfiner a écrit :
>
>
> Le mercredi 13 juin 2018, Aurélien Pierre  <mailto:rese...@aurelienpierre.com>> a écrit :
>
>
>>
>>     On Thu, Jun 14, 2018 at 12:23 AM, Aurélien Pierre
>> > <mailto:rese...@aurelienpierre.com>> wrote:
>> > Hi,
>> >
>> > The problem of a 2-passes denoising method involving 2
>> differents
>> > algorithms, the later applied where the former failed,
>> could be the grain
>> > structure (the shape of the noise) would be different along
>> the picture,
>> > thus very unpleasing.
>>
>>
>> I agree that the grain structure could be different. Indeed, the
>> grain could be different, but my feeling (that may be wrong) is
>> that it would be still better than just no further processing,
>> that leaves some pixels unprocessed (they could form grain
>> structures far from uniform if we are not lucky).
>> If you think it is only due to a change of algorithm, I guess we
>> could apply non local means again on pixels where a first pass
>> failed, but with different parameters to be quite confident that
>> the second pass will work.
> That sounds better to me… but practice will have the last word.
>
>
> Ok :-) 
>
>>  
>>
>> >
>> > I thought maybe we could instead create some sort of total
>> variation
>> > threshold on other denoising modules :
>> >
>> > compute the total variation of each channel of each pixel
>> as the divergence
>> > divided by the L1 norm of the gradient - we then obtain a
>> "heatmap" of the
>> > gradients over the picture (contours and noise)
>> > let the user define a total variation threshold and form a
>> mask where the
>> > weights above the threshold are the total variation and the
>> weights below
>> > the threshold are zeros (sort of a highpass filter actually)
>> > apply the bilateral filter according to this mask.
>> >
>> > This way, if the user wants to stack several denoising
>> modules, he could
>> > protect the already-cleaned areas from further denoising.
>> >
>> > What do you think ?
>>
>>
>> That sounds interesting.
>> This would maybe allow to keep some small variations/details that
>> are not due to noise or not disturbing, while denoising the other
>> parts.
>> Also, it may be computationally interesting (depends on the
>> complexity of the total variation computation, I don't know it),
>> as it could reduce the number of pixels to process.
>> I guess the user could use something like that also the other
>> way?: to protect high detailed zones and apply the denoising on
>> quite smoothed zones only, in order to be able to use stronger
>> denoising on zones that are supposed to be background blur.
>
> The noise is high frequency, so the TV (total variation) threshold
> will have to be high pass only. The hypothesis behind the TV
> thresholding is noisy pixels should have abnormally higher
> gradients than true details, so you isolate them this way. 
> Selecting noise in low frequencies areas would require in addition
> something like a guided filter, which I believe is what is used in
> the dehaze module. The complexity of the TV computation depends on
> the order of accuracy you expect.
>
> A classic approximation of the gradient is using a convolution
> product with Sobel or Prewitt operators (3×3 arrays, very
> efficient, fairly accurate for edges, probably less accurate for
> punctual noise). I have developped myself optimized methods using
> 2, 4, and 8 neighbouring pixels that give higher order accuracy,
> given the sparsity of the data, at the expense of computing cost :
> 
> https://github.com/aurelienpierre/Image-Cases-Studies/blob/947fd8d5c2e4c3384c80c1045d86f8cf89ddcc7e/lib/deconvolution.pyx#L342
> 
> <https://github.com/aurelienpierre/Image-Cases-Studies/blob/947fd8d5c2e4c3384c80c1045d86f8cf89ddcc7e/lib/deconvolution.pyx#L342>
> (ignore the variable ut in the code, only u is relevant for us here).
>
> Great, thanks for the explanations.
> Looking at the code of the 8 neighbouring pixels, I wonder if it would
> make sense to compute something li

Re: [darktable-dev] Re: denoise profile non local means: neighborhood parameter

2018-06-13 Thread Aurélien Pierre

Le 13/06/2018 à 14:48, rawfiner a écrit :
>
> Le mercredi 13 juin 2018, johannes hanika  <mailto:hana...@gmail.com>> a écrit :
>
> hi,
>
> that doesn't sound like a bad idea at all. for what it's worth, in
> practice the nlmeans doesn't let any grain at all through due to the
> piecewise constant prior that it's based on. well, only in regions
> where it finds enough other patches that is. in the current
> implementation with a radius of 7 that is not always the case.
>
>
> That's precisely the type of grain that I thought to try to tackle
> with a 2 pass.
> When the image is very noisy, it is quite frequent to have pixels
> without enough other patches.
> It sometimes forces me to raise the strength sliders, resulting in an
> overly smoothed image.
> The idea is to give the user the choice of how to handle these pixels,
> either by leaving them like this, either by using another denoising
> algorithm so that they integrate better with their surroundings.
> Anyway, I guess I may try that and come back after some results to
> discuss if it's worth it or not ;-)
>  
>
>
> also, i usually use some blending to add the input buffer back on top
> of the output. this essentially leaves the grain alone but tones it
> down.
>
>
> I do the same ;-)
Me too
>  
>
>
> cheers,
>  jo
>
>
> On Thu, Jun 14, 2018 at 12:23 AM, Aurélien Pierre
> mailto:rese...@aurelienpierre.com>>
> wrote:
> > Hi,
> >
> > The problem of a 2-passes denoising method involving 2 differents
> > algorithms, the later applied where the former failed, could be
> the grain
> > structure (the shape of the noise) would be different along the
> picture,
> > thus very unpleasing.
>
>
> I agree that the grain structure could be different. Indeed, the grain
> could be different, but my feeling (that may be wrong) is that it
> would be still better than just no further processing, that leaves
> some pixels unprocessed (they could form grain structures far from
> uniform if we are not lucky).
> If you think it is only due to a change of algorithm, I guess we could
> apply non local means again on pixels where a first pass failed, but
> with different parameters to be quite confident that the second pass
> will work.
That sounds better to me… but practice will have the last word.
>  
>
> >
> > I thought maybe we could instead create some sort of total variation
> > threshold on other denoising modules :
> >
> > compute the total variation of each channel of each pixel as the
> divergence
> > divided by the L1 norm of the gradient - we then obtain a
> "heatmap" of the
> > gradients over the picture (contours and noise)
> > let the user define a total variation threshold and form a mask
> where the
> > weights above the threshold are the total variation and the
> weights below
> > the threshold are zeros (sort of a highpass filter actually)
> > apply the bilateral filter according to this mask.
> >
> > This way, if the user wants to stack several denoising modules,
> he could
> > protect the already-cleaned areas from further denoising.
> >
> > What do you think ?
>
>
> That sounds interesting.
> This would maybe allow to keep some small variations/details that are
> not due to noise or not disturbing, while denoising the other parts.
> Also, it may be computationally interesting (depends on the complexity
> of the total variation computation, I don't know it), as it could
> reduce the number of pixels to process.
> I guess the user could use something like that also the other way?: to
> protect high detailed zones and apply the denoising on quite smoothed
> zones only, in order to be able to use stronger denoising on zones
> that are supposed to be background blur.

The noise is high frequency, so the TV (total variation) threshold will
have to be high-pass only. The hypothesis behind the TV thresholding is
that noisy pixels should have abnormally higher gradients than true
details, so you isolate them this way. Selecting noise in low-frequency
areas would require in addition something like a guided filter, which I
believe is what is used in the dehaze module. The complexity of the TV
computation depends on the order of accuracy you expect.

A classic approximation of the gradient is using a convolution product
with Sobel or Prewitt operators (3×3 arrays, very efficient, fairly
accurate for edges, probably less accurate for punctual noise). I have
developed myself optimized methods using 2, 4, and 8 neighbouring
pixels that give higher order accuracy, given the sparsity of the data,
at the expense of computing cost :
https://github.com/aurelienpierre/Image-Cases-Studies/blob/947fd8d5c2e4c3384c80c1045d86f8cf89ddcc7e/lib/deconvolution.pyx#L342
(ignore the variable ut in the code, only u is relevant for us here).
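For illustration, a plain Sobel-based gradient magnitude could look like
this in C (a sketch, valid for interior pixels only ; border handling is
left out) :

#include <math.h>
#include <stddef.h>

/* Sobel gradient magnitude of a single-channel image u at (x, y),
 * for 1 <= x < width - 1 and 1 <= y < height - 1 */
float gradient_magnitude(const float *u, int width, int x, int y)
{
  const float *r0 = u + (size_t)(y - 1) * width + x;
  const float *r1 = u + (size_t)(y)     * width + x;
  const float *r2 = u + (size_t)(y + 1) * width + x;
  const float gx = (r0[1] - r0[-1]) + 2.0f * (r1[1] - r1[-1]) + (r2[1] - r2[-1]);
  const float gy = (r2[-1] + 2.0f * r2[0] + r2[1]) - (r0[-1] + 2.0f * r0[0] + r0[1]);
  return sqrtf(gx * gx + gy * gy); /* L2 norm ; |gx| + |gy| gives the L1 variant */
}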

Re: [darktable-dev] denoise profile non local means: neighborhood parameter

2018-06-13 Thread Aurélien Pierre
Hi,

The problem of a 2-pass denoising method involving 2 different
algorithms, the latter applied where the former failed, could be that
the grain structure (the shape of the noise) would be different across
the picture, thus very unpleasing.

I thought maybe we could instead create some sort of total variation
threshold on other denoising modules :

 1. compute the total variation of each channel of each pixel as the
divergence divided by the L1 norm of the gradient - we then obtain a
"heatmap" of the gradients over the picture (contours and noise)
 2. let the user define a total variation threshold and form a mask
where the weights above the threshold are the total variation and
the weights below the threshold are zeros (sort of a highpass filter
actually)
 3. apply the bilateral filter according to this mask.
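A minimal sketch of steps 2 and 3 (the TV heatmap from step 1 is assumed
given ; using the clamped mask as a blending weight between the input
and the bilateral output is an assumption, not settled design) :

#include <math.h>
#include <stddef.h>

/* blend the bilateral output back according to the thresholded TV mask */
void blend_by_tv(const float *in, const float *bilateral, float *out,
                 const float *tv, size_t n, float threshold)
{
  for(size_t k = 0; k < n; k++)
  {
    const float w = (tv[k] > threshold) ? fminf(tv[k], 1.0f) : 0.0f;
    out[k] = w * bilateral[k] + (1.0f - w) * in[k];
  }
}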

This way, if the user wants to stack several denoising modules, he could
protect the already-cleaned areas from further denoising.

What do you think ?

Aurélien.


Le 13/06/2018 à 03:16, rawfiner a écrit :
> Hi,
>
> I don't have the feeling that increasing K is the best way to improve
> noise reduction anymore.
> I will upload the raw next week (if I don't forget to), as I am not at
> home this week.
> My feeling is that doing non local means on raw data gives much bigger
> improvement than that.
> I still have to work on it yet.
> I am currently testing some raw downsizing ideas to allow a fast
> execution of the algorithm.
>
> Apart of that, I also think that to improve noise reduction such as
> the denoise profile in nlm mode and the denoise non local means, we
> could do a 2 passes algorithm, with non local means applied first, and
> then a bilateral filter (or median filter or something else) applied
> only on pixels where non local means failed to find suitable patches
> (i.e. pixels where the sum of weights was close to 0).
> The user would have a slider to adjust this setting.
> I think that it would make easier to have a "uniform" output (i.e. an
> output where noise has been reduced quite uniformly)
> I have not tested this idea yet.
>
> Cheers,
> rawfiner
>
> Le lundi 11 juin 2018, johannes hanika  > a écrit :
>
> hi,
>
> i was playing with noise reduction presets again and tried the large
> neighbourhood search window. on my shots i could very rarely spot a
> difference at all increasing K above 7, and even less so going above
> 10. the image you posted earlier did show quite a substantial
> improvement however. i was wondering whether you'd be able to share
> the image so i can evaluate on it? maybe i just haven't found the
> right test image yet, or maybe it's camera dependent?
>
> (and yes, automatic and adaptive would be better but if we can ship a
> simple slider that can improve matters, maybe we should)
>
> cheers,
>  jo
>
>
>
> On Mon, Jan 29, 2018 at 2:05 AM, rawfiner  > wrote:
> > Hi
> >
> > Yes, the patch size is set to 1 from the GUI, so it is not a
> bilateral
> > filter, and I guess it corresponds to a patch window size of 3x3
> in the
> > code.
> > The runtime difference is near the expected quadratic slowdown:
> > 1,460 secs (8,379 CPU) for 7 and 12,794 secs (85,972 CPU) for
> 25, which
> > means about 10.26x slowdown
> >
> > If you want to make your mind on it, I have pushed a branch here
> that
> > integrates the K parameter in the GUI:
> > https://github.com/rawfiner/darktable.git
> 
> > The branch is denoise-profile-GUI-K
> >
> > I think that it may be worth to see if an automated approach for
> the choice
> > of K may work, in order not to integrate the parameter in the GUI.
> > I may try to implement the approach of Kervann and Boulanger
> (the reference
> > from the darktable blog post) to see how it performs.
> >
> > cheers,
> > rawfiner
> >
> >
> > 2018-01-27 13:50 GMT+01:00 johannes hanika  >:
> >>
> >> heya,
> >>
> >> thanks for the reference! interesting interpretation how the
> blotches
> >> form. not sure i'm entirely convinced by that argument.
> >> your image does look convincing though. let me get this right.. you
> >> ran with radius 1 which means patch window size 3x3? not 1x1 which
> >> would be a bilateral filter effectively?
> >>
> >> also what was the run time difference? is it near the expected
> >> quadratic slowdown from 7 (i.e. 15x15) to 25 (51x51) so about
> 11.56x
> >> slower with the large window size? (test with darktable -d perf)
> >>
> >> since nlmeans isn't the fastest thing, even with this coalesced
> way of
> >> implementing it, we should certainly keep an eye on this.
> >>
> >> that being said if we can often times get much better 

[darktable-dev] Another denoising method for darktable

2018-01-07 Thread Aurélien PIERRE
Hi,

I would like to propose a new denoising method for dt : the Total
Variation. It plays in the same category as the bilateral denoising :

  * it smooths surfaces while retaining edges
  * it's very efficient at recovering chroma noise and noise in bokeh
areas without affecting the in-focus areas too much
  * but it could be more computationally efficient.

A researcher has published an article
<http://www.ipol.im/pub/art/2016/141/> comparing several sub-methods to
do so (with various regularizations) and released his C++ code
<http://www.ipol.im/pub/art/2016/141/DMSC_TVdenoising.tgz> under the
GPL. Regular people can test the algorithm online
<http://demo.ipol.im/demo/141/> with their own pictures. The rest of the
details (and more candies) are on his page :
https://joandurangrimalt.wordpress.com/software/

The background of this method is to minimize the energy of the picture,
hence the noise, defined as the integral of the gradients over the
picture. I have successfully achieved a faster gradient estimation using
a 2D convolution with separable filters (same way as the Sobel operator)
with this research <https://cdn.intechopen.com/pdfs-wm/39346.pdf>.
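In equation form, the energy in question is the classic total-variation
(ROF-type) functional that the linked article builds on, with f the
noisy image, u the denoised one and λ the fidelity weight :

\min_u \int_\Omega \lVert \nabla u \rVert \, dx
       + \frac{\lambda}{2} \int_\Omega (u - f)^2 \, dx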

I don't have time now to work on integrating this myself, as I'm already
busy working on the blind deconvolution and struggling with C, but I
believe it would be a great add-on to offer a more efficient alternative
to the bilateral filter, and all the maths and C libs are already there,
so most of the work would be UI.

Anybody willing to work on that ?

Thanks !

PS : there is also a non-local total-variation regularization available,
able to both denoise and reconstruct details by inpainting, but it's
just maths and no GPL code for now :

  * https://hal.archives-ouvertes.fr/hal-01342111/
  * 
https://joandurangrimalt.wordpress.com/research/novel-tv-based-regularization/

-- 

*Aurélien PIERRE*
aurelienpierre.com <http://aurelienpierre.com>



___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org



Re: [darktable-dev] New module: mosaic

2018-01-02 Thread Aurélien PIERRE
Am Dienstag, 2. Januar 2018, 19:29:54 CET schrieb Aurélien PIERRE:
>> because I'm able to deconvolve it :-P
> I'd like to challenge that claim. Please deconvolve this image:
>
> https://houz.org/tmp/img_0001.pfm
Never challenge the power of my algorithms :
https://photo.aurelienpierre.com/wp-content/uploads/sites/3/2018/01/img_0001.jpg
>> *Aurélien PIERRE*
>> aurelienpierre.com <http://aurelienpierre.com>
>>
>> 
>>
>> Le 2018-01-02 à 12:44, Pascal Obry a écrit :
>>> Le mardi 02 janvier 2018 à 12:38 -0200, Jefferson Ferreira a écrit :
>>>> I finish my first module called "mosaic". This module creates a
>>>> mosaic over the image to hide some confidential information (use
>>>> masks to restrict the area).
>>> Why using a gaussian mask over confidential information isn't enough?
>>> I'm not sure to understand the purpose of this module.
> Tobias
>
> [...]


___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org

Re: [darktable-dev] New module: mosaic

2018-01-02 Thread Aurélien PIERRE
because I'm able to deconvolve it :-P

*Aurélien PIERRE*
aurelienpierre.com <http://aurelienpierre.com>


Le 2018-01-02 à 12:44, Pascal Obry a écrit :
> Le mardi 02 janvier 2018 à 12:38 -0200, Jefferson Ferreira a écrit :
>> I finish my first module called "mosaic". This module creates a
>> mosaic over the image to hide some confidential information (use
>> masks to restrict the area).
> Why using a gaussian mask over confidential information isn't enough?
> I'm not sure to understand the purpose of this module.
>  
>> - link to download the module
>> https://www.dropbox.com/s/i9sn3lrek67m56p/mosaic.c?dl=0
>>
>> - copy this file to /src/iop
>>
>> - in /src/iop/CMakeLists.txt, add the red line below:
>>
>> add_iop(finalscale "finalscale.c")
>> add_iop(globaltonemap "globaltonemap.c")
>> add_iop(bilat "bilat.c")
>> add_iop(denoiseprofile "denoiseprofile.c")
>> add_iop(defringe "defringe.c")
>> add_iop(ashift "ashift.c")
>> add_iop(hazeremoval "hazeremoval.c")
>> add_iop(mosaic "mosaic.c")
>>
>> - recompile and installing the darktable
> Sounds wrong to distribute a new module. Can you do a commit and a
> pull-request? Using some random code download on same random server is
> certainly not something I would do anyway.
>
> Regards,
>


___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org

[darktable-dev] Disable tone-curve generation in darktable-chart

2018-01-02 Thread Aurélien PIERRE
Happy New Year and thanks to the devs for the 2.4 candy !

I began using darktable-chart with a Spyder ColorChecker. By the way,
the option "always use LittleCMS" has to be off when you output the .pfm
LAB file to the tool, otherwise it doesn't open the image.

The dtstyle produced includes a tone curve and coefficients for the LUT.
In my case, the tone curve is almost linear, with small (inconsistent)
bumps. These bumps appear to come from numerical errors and don't make
sense, so I disabled the tone curve, but now some luma values of the LUT
patches are a little off the theoretical target. I had to adjust them
individually, using the absolute target color, which is not what you
want to do on 48 patches.

I wonder if there is any way to prevent darktable-chart from generating
tone curves and force it to only adjust the LUT ?

Thank you !

-- 

*Aurélien PIERRE*
aurelienpierre.com <http://aurelienpierre.com>


☎ Canada : +1 514 756 2207




___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org



[darktable-dev] Local contrast with laplacian filter washes highlights tones

2017-11-30 Thread Aurélien PIERRE
Hi,

I have been using the local contrast module with the laplacian filter on
the git/master version for a few months now, and I noticed that it's
impossible to push highlights farther than 150 % without having the
matching tones washed/faded. I understand it's applied on the L channel,
and thus colors remain unaffected, but this sometimes looks weird on
tanned skin tones, which end up whiter/more desaturated than they should be.

Would it be possible to add an option to push the color contrast
accordingly in the affected zones as well ? Sort of the same as the
highlights/shadows module ?

Thanks,

-- 

*Aurélien PIERRE*
aurelienpierre.com <http://aurelienpierre.com>






