Re: [darktable-user] optimization of opencl performance for NVIDIA GTX .... : settings in file darktablerc

2020-11-20 Thread Šarūnas
On 11/20/20 2:57 PM, Marc Cabuy wrote:
> Indeed Patrick,
> 
> the default in the current version is: 
> opencl_synch_cache=active module
> 
> The option I would advise to test for response time comfort during finetuning 
> in a 2nd pass is:
> opencl_synch_cache=true.
> 
> That is if it does not compromise too hard overall response time. Which is ok 
> on my system/configuration.

Whatever the optimizations, it would be nice if it were mentioned how they
differ from the darktable defaults.

-- 
Šarūnas Burdulis
math.dartmouth.edu/~sarunas

· https://useplaintext.email ·





Re: [darktable-user] Need help with darktable 3 scene-referred modules

2020-11-20 Thread Terry Pinfold
Hi,
  I would like to share some of the notes I have put together for using
Filmic. Filmic V4 is very easy to use; avoid making it unnecessarily
complicated. I rarely use highlight reconstruction, as bleached
highlights should be avoided in the camera in the first place. I love
Filmic V4 and what it does to my images.  I have watched Aurelien Pierre's
videos and they are very informative but math-heavy, so I have tried to
avoid the math here.

Filmic V4 Guide

https://discuss.pixls.us/t/darktables-filmic-faq/20138

*What is filmic?*

If you open a raw photograph, simply demosaic it, and turn off any base
curve or filmic correction, the image will lack contrast, brightness and
saturation. Your memory of the scene is the result of your brain
interpreting and constructing an enhanced image in your mind. Your colour
memory and perception are usually more saturated and punchier than the real
scene.

So, we need to convert your sensor’s readings to match your colour memory.
All imaging software does this already, whether it shows it to you or not,
by using some sort of tone curve, base curve or LUT when it converts the
raw image file for display on your screen. This is called tone mapping. The
process usually brightens the mid-tones and shadows, darkens highlights,
while ensuring a smooth tonal transition from black to white and retaining
as much detail of the scene as possible.

The base curve maps camera RGB so that blacks are given 0% RGB values and
whites are given 100%. Base curves come in different presets that try to
match the JPEG images from your camera brand. Filmic, on the other hand,
does not try to emulate your in-camera JPEG. Instead, filmic uses a
strategy similar to analog film (which is close in its logic to human
vision) and puts the emphasis on harmonious tonal transitions, at the
expense of detail in the highlights (which will be compressed, as human
vision would do too).
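
To make the difference concrete, here is a small, purely illustrative Python
sketch. It is not darktable's actual base curve or filmic math; the middle-grey
value and the mapping formula are assumptions chosen only to show the idea that
a display-referred curve simply clips anything above 100%, while a filmic-like
mapping works in log space around middle grey and compresses highlights
smoothly instead.

# Illustrative only: not darktable's actual base curve or filmic spline.
import math

def clip_curve(x):
    # Display-referred thinking: anything above 1.0 (i.e. 100%) is clipped.
    return min(max(x, 0.0), 1.0)

def filmic_like(x, white_ev=4.0, black_ev=-4.0):
    # Filmic-like thinking: express the pixel in EV relative to middle grey
    # and map the whole [black_ev, white_ev] range smoothly onto [0, 1].
    grey = 0.1845                            # assumed scene-referred middle grey
    ev = math.log2(max(x, 1e-9) / grey)      # exposure relative to grey, in EV
    t = (ev - black_ev) / (white_ev - black_ev)
    return min(max(t, 0.0), 1.0)

for x in (0.02, 0.1845, 0.5, 1.0, 2.0, 4.0):   # scene-referred, 1.0 = "100%"
    print(f"{x:6.3f}  clipped: {clip_curve(x):.2f}  filmic-like: {filmic_like(x):.2f}")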

If you want skies with lots of local contrast, you may consider adding the
local contrast module, and perhaps darkening the whole sky with a masked
exposure module or the tone equalizer, in order to relax filmic's tone
mapping bounds a bit.

*Is filmic difficult to use?*

Typical imaging workflows date from the 1980s, when computers dealt with
images encoded as 8 bits with 256 code values, between 0 and 255.  That
approach assigned “black” the value 0 and “white” the value 255.

Now, darktable and other software use 32-bit floating point numbers
internally. These can represent values between virtually -infinity and
+infinity. Rather than assigning values between 0 and 255, they now express
these values as 0-100%, but they can actually extend beyond this range. This
new approach lets us work with operations that preserve the physical
properties of the light emissions encoded by your RGB values.
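
As a toy illustration of what this buys you (plain Python, made-up numbers):
after a +2 EV exposure boost, an 8-bit display-referred pipeline has already
clipped the result at 100% and the detail is gone, whereas the floating-point,
scene-referred pipeline still knows the real value and can let filmic decide
later how to compress it.

# Toy example: a +2 EV exposure boost (a factor of 4 in linear light)
# applied to a pixel that was already fairly bright.
pixel = 0.70                    # scene-referred value, 1.0 = 100% "white"
boosted = pixel * 2 ** 2        # +2 EV

# 8-bit, display-referred thinking: hard clip at 100%.
clipped = min(round(boosted * 255), 255) / 255

print(boosted)   # 2.8   -> the float pipeline still knows the real value
print(clipped)   # 1.0   -> in 8 bits the highlight detail is gone for good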

Filmic redefines the workflow and the pipeline by bringing in new assumptions
and concepts inherited from physics and actual light, following the trend
introduced by the cinema industry, because these concepts scale to whatever
dynamic range you get from a modern camera's sensor. Filmic is conceptually
difficult, but it can be quite simple if you forget about the theory and
focus on the results.


Editing an image using the filmic RGB module

   1. Optionally, activate the under/over exposure indicator, which shows
   highlight clipping in red and shadow clipping in blue. It can be
   deactivated and reactivated at any time.
   2. Optionally, activate the ISO 12646 color assessment conditions, which
   place a white border around the image to assist with exposure and
   contrast adjustments. I find this helpful when using filmic.
   3. Check the *exposure module* and adjust it as required if the image is
   too bright or dark.
   4. Go to the *scene tab* in the *filmic module* and check whether moving
   the white or black sliders improves the image (a short sketch of what
   these values mean follows this list).
  - To maximise contrast, move the *white relative exposure* slider to
  the left, stopping just before the highlights clip, as photos should
  rarely contain pure white.
  - To maximise contrast, move the *black relative exposure* slider to
  the right, but avoid excessive clipping of the shadows. Some clipping
  of the darkest shadows may be desirable.
  - If the black and white sliders cannot be adjusted enough, the
  *dynamic range scaling* slider may have to be decreased to stretch the
  histogram or increased to compress it.
 i. The dynamic range scaling can be decreased by moving the slider
 to the left. This stretches the histogram and can be desirable for
 images taken in very low-contrast lighting, such as hazy days.
 ii. The dynamic range scaling can be increased to compress the
 histogram. This can be beneficial in high-dynamic-range lighting
 situations with bright highlights.
Re: [darktable-user] optimization of opencl performance for NVIDIA GTX .... : settings in file darktablerc

2020-11-20 Thread Marc Cabuy
Indeed Patrick,

the default in the current version is: 
opencl_synch_cache=active module

The option I would advise testing, for response-time comfort during fine-tuning 
in a 2nd pass, is:
opencl_synch_cache=true.

That is, if it does not compromise overall response time too much, which is 
the case on my system/configuration.

Kind regards,

Marc.


> On 20 Nov 2020 at 19:58, Patrick Shanahan  wrote the 
> following:
> 
> * Marc Cabuy  [11-20-20 12:18]:
>> Gian,
>> 
>> 
>> 
>> Good to hear that.  And yes I expect that that XEON processor will surely 
>> help (*).
>> 
>> 
>> 
>> GPU performance can be further boosted by optimization of the opencl 
>> settings in file darktablerc.  You can read about that in the darktable 
>> manual.  
>> 
>> 
>> 
>> But possibly adequate parameters for a nvidia gtx 1650 can be found in this 
>> thread: 
>> 
>> https://discuss.pixls.us/t/darktable-opencl-performance-with-nvidia-gtx1650-super/19965/17
>>  as follows
>> 
>> opencl_async_pixelpipe=true
>> 
>> opencl_avoid_atomics=false
>> 
>> opencl_mandatory_timeout=200
>> 
>> opencl_memory_headroom=1024
>> 
>> opencl_memory_requirement=2048
>> 
>> opencl_micro_nap=10
>> 
>> opencl_number_event_handles=1000
>> 
>> opencl_scheduling_profile=very fast GPU
>> 
>> opencl_size_roundup=16
>> 
>> opencl_synch_cache=false
>> 
>> opencl_use_cpu_devices=false
>> 
>> opencl_use_pinned_memory=false
>> 
>> 
>> 
>> I gained like 3 to 6 times better interactive response time for pictures 
>> with about 25 to 30 modules activated, by tuning these parameters on my GPU 
>> close to these values.
>> 
>> 
>> 
>> Further I also have put opencl_synch_cache=true.  If that doesn’t penalize 
>> general response time too much, it gives the advantage of better response 
>> when reviewing/finetuning module parameters further in a 2nd pass even with 
>> “denoise profiled” activated. That’s typical for my workflow.
>> 
>> 
>> 
>> Regards,
>> 
>> Marc.
>> 
>> 
>> 
>> (*) But I am not an expert, since I use dt only since 6 months on an 
>> off-the-shelve overconfigured pseudo-gaming laptop from the MSI Creator 
>> series that I bought after my dual core I7 Acer laptop died.
>> 
>> 
>> 
>> 
>> 
>> Van: GianLuca Sarto [mailto:glsa...@tiscali.it] 
>> Verzonden: vrijdag 20 november 2020 13:30
>> Aan: darktable-user@lists.darktable.org
>> Onderwerp: Re: [darktable-user] Tone equaliser and response time during 
>> zooming/panning on 4K display (dt 3.2.1-Windows 10)
>> 
>> 
>> 
>> Marc,
>> 
>> 
>> 
>> thank you very much for your warning!
>> 
>> 
>> 
>> As you see, the first step is done, and I can see a huge improvement in 
>> browsing through the lighttable with the GTX 1650.
>> 
>> 
>> 
>> I was wondering too if it is worth to go to 4K, I wouldn't like to go back 
>> to square one, performance wise.
>> 
>> 
>> 
>> Maybe 2560x1440 is the wise compromise, and I save a lot of money: I do not 
>> really feel constraint at 1920x1200, I was suffering more for the slow 
>> paging.
>> 
>> 
>> 
>> In the next few days I should receive a second hand Xeon E3, that might 
>> boost further the overall performance.
>> 
>> 
>> 
>> -Gian
>> 
>> 
>> 
>> 
>> 
>> On 20/11/20 12:53, Marc Cabuy wrote:
>> 
>> Gian-Luca,
>> 
>> 
>> 
>> Just a little warning about purchasing a 4K display.  You indeed will need a 
>> good GPU. But still there is (or are) module(s) that are fully cpu-bound. 
>> That’s the case with tone equaliser. Interactive response when zooming and 
>> panning in the darkroom may suffer from poor response time on a 4K display 
>> (3840x2160) when you have a tone equaliser module activated in the module 
>> stack.  That’s my experience with dt 3.2.1 for Windows on my 16mp pictures. 
>> That issue is not existing that much on my WQHD display (2560x1440) or 
>> almost not on a HD display  (1920x1080). 
>> 
>> 
>> 
>> I repeat: it’s experience with dt 3.2.1 for Windows. I am not aware on how 
>> it behaves on Linux.
>> 
>> 
>> 
>> 
>> 
>> Marc 
>> 
>> 
>> 
>> 
>> 
>> Van: GianLuca Sarto  
>> Verzonden: zondag 15 november 2020 21:30
>> Aan: darktable-user@lists.darktable.org 
>>  
>> Onderwerp: Re: [darktable-user] Ubuntu 20.04, OpenCL, DT3.2.1, Radeon HD5400
>> 
>> 
>> 
>> thanks, Šarūnas,
>> 
>> 
>> 
>> I have two Darktable systems, based on Lenovo TS140, one with AMD, the 
>> 
>> other with Nvidia Quadro 400.
>> 
>> Neither of the two manage OpenCL..
>> 
>> 
>> 
>> 0.039390 [opencl_init] found opencl runtime library 'libOpenCL'
>> 
>> 0.039412 [opencl_init] opencl library 'libOpenCL' found on your system 
>> 
>> and loaded
>> 
>> 0.042560 [opencl_init] found 1 platform
>> 
>> 0.042582 [opencl_init] found 1 device
>> 
>> 0.042727 [opencl_init] device 0 `Quadro 400' has sm_20 support.
>> 
>> 0.042778 [opencl_init] discarding device 0 `Quadro 400' due to 
>> 
>> insufficient global memory (511MB).
>> 
>> 0.042784 [opencl_init] no suitable devices found.
>> 
>> 0.042788 [opencl_init] FINALLY:

Re: [darktable-user] optimization of opencl performance for NVIDIA GTX .... : settings in file darktablerc

2020-11-20 Thread Patrick Shanahan
* Marc Cabuy  [11-20-20 12:18]:
> Gian,
> 
>  
> 
> Good to hear that.  And yes I expect that that XEON processor will surely 
> help (*).
> 
>  
> 
> GPU performance can be further boosted by optimization of the opencl settings 
> in file darktablerc.  You can read about that in the darktable manual.  
> 
>  
> 
> But possibly adequate parameters for a nvidia gtx 1650 can be found in this 
> thread: 
> 
> https://discuss.pixls.us/t/darktable-opencl-performance-with-nvidia-gtx1650-super/19965/17
>  as follows
> 
> opencl_async_pixelpipe=true
> 
> opencl_avoid_atomics=false
> 
> opencl_mandatory_timeout=200
> 
> opencl_memory_headroom=1024
> 
> opencl_memory_requirement=2048
> 
> opencl_micro_nap=10
> 
> opencl_number_event_handles=1000
> 
> opencl_scheduling_profile=very fast GPU
> 
> opencl_size_roundup=16
> 
> opencl_synch_cache=false
> 
> opencl_use_cpu_devices=false
> 
> opencl_use_pinned_memory=false
> 
>  
> 
> I gained like 3 to 6 times better interactive response time for pictures with 
> about 25 to 30 modules activated, by tuning these parameters on my GPU close 
> to these values.
> 
>  
> 
> Further I also have put opencl_synch_cache=true.  If that doesn’t penalize 
> general response time too much, it gives the advantage of better response 
> when reviewing/finetuning module parameters further in a 2nd pass even with 
> “denoise profiled” activated. That’s typical for my workflow.
> 
>  
> 
> Regards,
> 
> Marc.
> 
>  
> 
> (*) But I am not an expert, since I use dt only since 6 months on an 
> off-the-shelve overconfigured pseudo-gaming laptop from the MSI Creator 
> series that I bought after my dual core I7 Acer laptop died.
> 
>  
> 
>  
> 
> Van: GianLuca Sarto [mailto:glsa...@tiscali.it] 
> Verzonden: vrijdag 20 november 2020 13:30
> Aan: darktable-user@lists.darktable.org
> Onderwerp: Re: [darktable-user] Tone equaliser and response time during 
> zooming/panning on 4K display (dt 3.2.1-Windows 10)
> 
>  
> 
> Marc,
> 
>  
> 
> thank you very much for your warning!
> 
>  
> 
> As you see, the first step is done, and I can see a huge improvement in 
> browsing through the lighttable with the GTX 1650.
> 
>  
> 
> I was wondering too if it is worth to go to 4K, I wouldn't like to go back to 
> square one, performance wise.
> 
>  
> 
> Maybe 2560x1440 is the wise compromise, and I save a lot of money: I do not 
> really feel constraint at 1920x1200, I was suffering more for the slow paging.
> 
>  
> 
> In the next few days I should receive a second hand Xeon E3, that might boost 
> further the overall performance.
> 
>  
> 
> -Gian
> 
>  
> 
>  
> 
> On 20/11/20 12:53, Marc Cabuy wrote:
> 
> Gian-Luca,
> 
>  
> 
> Just a little warning about purchasing a 4K display.  You indeed will need a 
> good GPU. But still there is (or are) module(s) that are fully cpu-bound. 
> That’s the case with tone equaliser. Interactive response when zooming and 
> panning in the darkroom may suffer from poor response time on a 4K display 
> (3840x2160) when you have a tone equaliser module activated in the module 
> stack.  That’s my experience with dt 3.2.1 for Windows on my 16mp pictures. 
> That issue is not existing that much on my WQHD display (2560x1440) or almost 
> not on a HD display  (1920x1080). 
> 
>  
> 
> I repeat: it’s experience with dt 3.2.1 for Windows. I am not aware on how it 
> behaves on Linux.
> 
>  
> 
>  
> 
> Marc 
> 
>  
> 
>  
> 
> Van: GianLuca Sarto  
> Verzonden: zondag 15 november 2020 21:30
> Aan: darktable-user@lists.darktable.org 
>  
> Onderwerp: Re: [darktable-user] Ubuntu 20.04, OpenCL, DT3.2.1, Radeon HD5400
> 
>  
> 
> thanks, Šarūnas,
> 
>  
> 
> I have two Darktable systems, based on Lenovo TS140, one with AMD, the 
> 
> other with Nvidia Quadro 400.
> 
> Neither of the two manage OpenCL..
> 
>  
> 
> 0.039390 [opencl_init] found opencl runtime library 'libOpenCL'
> 
> 0.039412 [opencl_init] opencl library 'libOpenCL' found on your system 
> 
> and loaded
> 
> 0.042560 [opencl_init] found 1 platform
> 
> 0.042582 [opencl_init] found 1 device
> 
> 0.042727 [opencl_init] device 0 `Quadro 400' has sm_20 support.
> 
> 0.042778 [opencl_init] discarding device 0 `Quadro 400' due to 
> 
> insufficient global memory (511MB).
> 
> 0.042784 [opencl_init] no suitable devices found.
> 
> 0.042788 [opencl_init] FINALLY: opencl is NOT AVAILABLE on this system.
> 
>  
> 
> I would like to upgrade one of the two systems to a 4K display, so was 
> 
> already decided to purchase a new video card.
> 
>  
> 
> Is there a tested solution that works out of the box, or a list of DT 
> 
> OpenCL compliant cards for Linux?
> 
>  
> 
>  
> 
>  
> 
> On 15/11/20 15:34, Šarūnas wrote:
> 
> > On 11/14/20 3:41 PM, GianLuca Sarto wrote:
> 
> >> Hello All,
> 
> >> 
> 
> >> DT 3.2.1 here, running on Ubuntu 20.04, Radeon HD5400.
> 
> >> 
> 
> >> OpenCL should be working, however DT complains "could not get platforms"
> 
> >> (see be

Re: [darktable-user] DT3.2.1, NVidia, Linux kernel 5.8 vs 5.9 & OpenCL

2020-11-20 Thread Hervé Sainct
Here,

Lenovo Thinkpad P53, GPU Nvidia Quadro T2000

Darktable 3.2.1-3 (installed, not snap)

on Debian Sid

with the recent NVidia drivers, 450.80.02,

and kernel 5.8 (OpenCL works) or kernel 5.9 (OpenCL doesn't work).

The difference with/without OpenCL is visible as soon as a raw image
reaches a few dozen MB, even for simple corrections.



On 20/11/2020 at 11:31, GianLuca Sarto wrote:
> to the benefit of all readers, I would like to confirm that OpenCL
> works in this environment:
>
>   * Ubuntu 20.04LTS
>   * Lenovo TS140
>   * Palix Nvidia GeForce GTX1650, 4GB
>   * Darktable 3.2.1 (installed via apt, NOT snap!)
>
>
> On 15/11/20 23:07, Šarūnas wrote:
>> On 11/15/20 3:28 PM, GianLuca Sarto wrote:
>>> thanks, Šarūnas,
>>>
>>> I have two Darktable systems, based on Lenovo TS140, one with AMD, the
>>> other with Nvidia Quadro 400.
>>> Neither of the two manage OpenCL..
>>>
>>> 0.039390 [opencl_init] found opencl runtime library 'libOpenCL'
>>> 0.039412 [opencl_init] opencl library 'libOpenCL' found on your system
>>> and loaded
>>> 0.042560 [opencl_init] found 1 platform
>>> 0.042582 [opencl_init] found 1 device
>>> 0.042727 [opencl_init] device 0 `Quadro 400' has sm_20 support.
>>> 0.042778 [opencl_init] discarding device 0 `Quadro 400' due to
>>> insufficient global memory (511MB).
>> Too small video memory for useful OpenCL.
>>
>>> I would like to upgrade one of the two systems to a 4K display, so was
>>> already decided to purchase a new video card.
>>>
>>> Is there a tested solution that works out of the box, or a list of DT
>>> OpenCL compliant cards for Linux?
>> I don't know whether such a list exists[1]. The more memory and more
>> parallel processing units (“GPU cores”) a video card has, the more
>> useful it will be for OpenCL processing. The useful minimum these days
>> might be 2GB, but I would look for 4GB and more.
>>
>> “Out of the box” would probably only happen if you buy a computer from a
>> company that sells them configured with Linux. Whether you install AMD
>> or Nvidia GPU, there will be additional steps.
>>
>> In case of AMD, Linux kernel already supports AMD cards with the open
>> source ‘amdgpu’ module, so that part will be “out of the box”. OpenCL
>> support will have to come from either 1) open source ROCm or 2)
>> proprietary AMDGPU-PRO.
>>
>> In case of Nvidia, Linux kernel's ‘nouveau’ module will need to be
>> replaced with the proprietary ‘nv’ one from Nvidia, plus OpenCL part
>> from the same Nvidia. Ubuntu has them in standard repositories. One can
>> also use Nvidia repositories for perhaps slightly newer software.
>>
>> Intel appears to have a completely open source system, but usable GPUs
>> are still to come.
>>
>
>
> 

-- 
herve.sai...@laposte.net





[darktable-user] optimization of opencl performance for NVIDIA GTX .... : settings in file darktablerc

2020-11-20 Thread Marc Cabuy
Gian,

 

Good to hear that.  And yes, I expect that the Xeon processor will surely help 
(*).

 

GPU performance can be further boosted by optimizing the opencl settings 
in the file darktablerc.  You can read about that in the darktable manual.

 

But possibly adequate parameters for an Nvidia GTX 1650 can be found in this 
thread: 

https://discuss.pixls.us/t/darktable-opencl-performance-with-nvidia-gtx1650-super/19965/17
as follows:

opencl_async_pixelpipe=true

opencl_avoid_atomics=false

opencl_mandatory_timeout=200

opencl_memory_headroom=1024

opencl_memory_requirement=2048

opencl_micro_nap=10

opencl_number_event_handles=1000

opencl_scheduling_profile=very fast GPU

opencl_size_roundup=16

opencl_synch_cache=false

opencl_use_cpu_devices=false

opencl_use_pinned_memory=false
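
(If you want to see how your current darktablerc compares with these
suggestions before changing anything, a small Python sketch along these lines
can help. It only reads and prints values; the file path below is the usual
Linux default and is an assumption, so adjust it for your system. In any case,
back darktablerc up and edit it only while darktable is not running, since
darktable typically rewrites the file on exit.)

# Sketch: compare the opencl_* keys in your darktablerc against the values
# suggested in this thread. Read-only; path and file format assumptions below.
from pathlib import Path

suggested = {
    "opencl_async_pixelpipe": "true",
    "opencl_avoid_atomics": "false",
    "opencl_mandatory_timeout": "200",
    "opencl_memory_headroom": "1024",
    "opencl_memory_requirement": "2048",
    "opencl_micro_nap": "10",
    "opencl_number_event_handles": "1000",
    "opencl_scheduling_profile": "very fast GPU",
    "opencl_size_roundup": "16",
    "opencl_synch_cache": "false",
    "opencl_use_cpu_devices": "false",
    "opencl_use_pinned_memory": "false",
}

rc = Path.home() / ".config" / "darktable" / "darktablerc"  # Linux default, adjust as needed
current = dict(
    line.split("=", 1)
    for line in rc.read_text().splitlines()
    if "=" in line
)

for key, value in suggested.items():
    print(f"{key}: current={current.get(key, '<not set>')!r}  suggested={value!r}")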

 

I gained roughly 3 to 6 times better interactive response time for pictures with 
about 25 to 30 modules activated by tuning these parameters on my GPU to values 
close to these.

 

Further, I have also set opencl_synch_cache=true.  If that doesn’t penalize 
general response time too much, it gives the advantage of better response when 
reviewing/fine-tuning module parameters in a 2nd pass, even with “denoise 
(profiled)” activated. That’s typical for my workflow.

 

Regards,

Marc.

 

(*) But I am not an expert, since I have only been using dt for 6 months, on an 
off-the-shelf, over-configured pseudo-gaming laptop from the MSI Creator series 
that I bought after my dual-core i7 Acer laptop died.

 

 

From: GianLuca Sarto [mailto:glsa...@tiscali.it] 
Sent: Friday 20 November 2020 13:30
To: darktable-user@lists.darktable.org
Subject: Re: [darktable-user] Tone equaliser and response time during 
zooming/panning on 4K display (dt 3.2.1-Windows 10)

 

Marc,

 

thank you very much for your warning!

 

As you see, the first step is done, and I can see a huge improvement in 
browsing through the lighttable with the GTX 1650.

 

I was wondering too if it is worth to go to 4K, I wouldn't like to go back to 
square one, performance wise.

 

Maybe 2560x1440 is the wise compromise, and I save a lot of money: I do not 
really feel constraint at 1920x1200, I was suffering more for the slow paging.

 

In the next few days I should receive a second hand Xeon E3, that might boost 
further the overall performance.

 

-Gian

 

 

On 20/11/20 12:53, Marc Cabuy wrote:

Gian-Luca,

 

Just a little warning about purchasing a 4K display.  You indeed will need a 
good GPU. But still there is (or are) module(s) that are fully cpu-bound. 
That’s the case with tone equaliser. Interactive response when zooming and 
panning in the darkroom may suffer from poor response time on a 4K display 
(3840x2160) when you have a tone equaliser module activated in the module 
stack.  That’s my experience with dt 3.2.1 for Windows on my 16mp pictures. 
That issue is not existing that much on my WQHD display (2560x1440) or almost 
not on a HD display  (1920x1080). 

 

I repeat: it’s experience with dt 3.2.1 for Windows. I am not aware on how it 
behaves on Linux.

 

 

Marc 

 

 

Van: GianLuca Sarto  
Verzonden: zondag 15 november 2020 21:30
Aan: darktable-user@lists.darktable.org 
 
Onderwerp: Re: [darktable-user] Ubuntu 20.04, OpenCL, DT3.2.1, Radeon HD5400

 

thanks, Šarūnas,

 

I have two Darktable systems, based on Lenovo TS140, one with AMD, the 

other with Nvidia Quadro 400.

Neither of the two manage OpenCL..

 

0.039390 [opencl_init] found opencl runtime library 'libOpenCL'

0.039412 [opencl_init] opencl library 'libOpenCL' found on your system 

and loaded

0.042560 [opencl_init] found 1 platform

0.042582 [opencl_init] found 1 device

0.042727 [opencl_init] device 0 `Quadro 400' has sm_20 support.

0.042778 [opencl_init] discarding device 0 `Quadro 400' due to 

insufficient global memory (511MB).

0.042784 [opencl_init] no suitable devices found.

0.042788 [opencl_init] FINALLY: opencl is NOT AVAILABLE on this system.

 

I would like to upgrade one of the two systems to a 4K display, so was 

already decided to purchase a new video card.

 

Is there a tested solution that works out of the box, or a list of DT 

OpenCL compliant cards for Linux?

 

 

 

On 15/11/20 15:34, Šarūnas wrote:

> On 11/14/20 3:41 PM, GianLuca Sarto wrote:

>> Hello All,

>> 

>> DT 3.2.1 here, running on Ubuntu 20.04, Radeon HD5400.

>> 

>> OpenCL should be working, however DT complains "could not get platforms"

>> (see below).

> Support for Radeon HD5xxx may have ended with fglrx (Ubuntu 16.04?).

> 

> You may check what is currently supported,

> by proprietary AMDGPU-PRO:

> https://www.amd.com/en/support

> 

> by open source ROCm:

> https://github.com/RadeonOpenCompute/ROCm#Hardware-and-Software-Support

> 

 




Re: [darktable-user] Tone equaliser and response time during zooming/panning on 4K display (dt 3.2.1-Windows 10)

2020-11-20 Thread GianLuca Sarto

Marc,

thank you very much for your warning!

As you see, the first step is done, and I can see a huge improvement in 
browsing through the lighttable with the GTX 1650.


I was also wondering whether it is worth going to 4K; I wouldn't like to go 
back to square one, performance-wise.


Maybe 2560x1440 is the wise compromise, and I would save a lot of money: I do 
not really feel constrained at 1920x1200; I was suffering more from the 
slow paging.


In the next few days I should receive a second-hand Xeon E3, which might 
further boost the overall performance.


-Gian


On 20/11/20 12:53, Marc Cabuy wrote:


Gian-Luca,

Just a little warning about purchasing a 4K display.  You indeed will 
need a good GPU. But still there is (or are) module(s) that are fully 
cpu-bound. That’s the case with tone equaliser. Interactive response 
when zooming and panning in the darkroom may suffer from poor response 
time on a 4K display (3840x2160) when you have a tone equaliser module 
activated in the module stack.  That’s my experience with dt 3.2.1 for 
Windows on my 16mp pictures. That issue is not existing that much on 
my WQHD display (2560x1440) or almost not on a HD display  (1920x1080).


I repeat: it’s experience with dt 3.2.1 for Windows. I am not aware on 
how it behaves on Linux.


Marc

*Van: *GianLuca Sarto 
*Verzonden: *zondag 15 november 2020 21:30
*Aan: *darktable-user@lists.darktable.org 

*Onderwerp: *Re: [darktable-user] Ubuntu 20.04, OpenCL, DT3.2.1, 
Radeon HD5400


thanks, Šarūnas,

I have two Darktable systems, based on Lenovo TS140, one with AMD, the

other with Nvidia Quadro 400.

Neither of the two manage OpenCL..

0.039390 [opencl_init] found opencl runtime library 'libOpenCL'

0.039412 [opencl_init] opencl library 'libOpenCL' found on your system

and loaded

0.042560 [opencl_init] found 1 platform

0.042582 [opencl_init] found 1 device

0.042727 [opencl_init] device 0 `Quadro 400' has sm_20 support.

0.042778 [opencl_init] discarding device 0 `Quadro 400' due to

insufficient global memory (511MB).

0.042784 [opencl_init] no suitable devices found.

0.042788 [opencl_init] FINALLY: opencl is NOT AVAILABLE on this system.

I would like to upgrade one of the two systems to a 4K display, so was

already decided to purchase a new video card.

Is there a tested solution that works out of the box, or a list of DT

OpenCL compliant cards for Linux?

On 15/11/20 15:34, Šarūnas wrote:

> On 11/14/20 3:41 PM, GianLuca Sarto wrote:

>> Hello All,

>>

>> DT 3.2.1 here, running on Ubuntu 20.04, Radeon HD5400.

>>

>> OpenCL should be working, however DT complains "could not get 
platforms"


>> (see below).

> Support for Radeon HD5xxx may have ended with fglrx (Ubuntu 16.04?).

>

> You may check what is currently supported,

> by proprietary AMDGPU-PRO:

> https://www.amd.com/en/support

>

> by open source ROCm:

> https://github.com/RadeonOpenCompute/ROCm#Hardware-and-Software-Support

>




[darktable-user] Tone equaliser and response time during zooming/panning on 4K display (dt 3.2.1-Windows 10)

2020-11-20 Thread Marc Cabuy
Gian-Luca,

Just a little warning about purchasing a 4K display.  You indeed will need a 
good GPU. But still there is (or are) module(s) that are fully cpu-bound. 
That’s the case with tone equaliser. Interactive response when zooming and 
panning in the darkroom may suffer from poor response time on a 4K display 
(3840x2160) when you have a tone equaliser module activated in the module 
stack.  That’s my experience with dt 3.2.1 for Windows on my 16mp pictures. 
That issue is not existing that much on my WQHD display (2560x1440) or almost 
not on a HD display (1920x1080).

I repeat: it’s experience with dt 3.2.1 for Windows. I am not aware on how it 
behaves on Linux.

Marc

From: GianLuca Sarto
Sent: Sunday 15 November 2020 21:30
To: darktable-user@lists.darktable.org
Subject: Re: [darktable-user] Ubuntu 20.04, OpenCL, DT3.2.1, Radeon HD5400

thanks, Šarūnas,

I have two Darktable systems, based on Lenovo TS140, one with AMD, the 
other with Nvidia Quadro 400.
Neither of the two manage OpenCL..

0.039390 [opencl_init] found opencl runtime library 'libOpenCL'
0.039412 [opencl_init] opencl library 'libOpenCL' found on your system and loaded
0.042560 [opencl_init] found 1 platform
0.042582 [opencl_init] found 1 device
0.042727 [opencl_init] device 0 `Quadro 400' has sm_20 support.
0.042778 [opencl_init] discarding device 0 `Quadro 400' due to insufficient global memory (511MB).
0.042784 [opencl_init] no suitable devices found.
0.042788 [opencl_init] FINALLY: opencl is NOT AVAILABLE on this system.

I would like to upgrade one of the two systems to a 4K display, so was 
already decided to purchase a new video card.

Is there a tested solution that works out of the box, or a list of DT 
OpenCL compliant cards for Linux?

On 15/11/20 15:34, Šarūnas wrote:
> On 11/14/20 3:41 PM, GianLuca Sarto wrote:
>> Hello All,
>> 
>> DT 3.2.1 here, running on Ubuntu 20.04, Radeon HD5400.
>> 
>> OpenCL should be working, however DT complains "could not get platforms"
>> (see below).
> Support for Radeon HD5xxx may have ended with fglrx (Ubuntu 16.04?).
> 
> You may check what is currently supported,
> by proprietary AMDGPU-PRO:
> https://www.amd.com/en/support
> 
> by open source ROCm:
> https://github.com/RadeonOpenCompute/ROCm#Hardware-and-Software-Support






Re: [darktable-user] Ubuntu 20.04, OpenCL, DT3.2.1, Radeon HD5400

2020-11-20 Thread GianLuca Sarto
For the benefit of all readers, I would like to confirm that OpenCL works 
in this environment:


 * Ubuntu 20.04LTS
 * Lenovo TS140
 * Palix Nvidia GeForce GTX1650, 4GB
 * Darktable 3.2.1 (installed via apt, NOT snap!)


On 15/11/20 23:07, Šarūnas wrote:

On 11/15/20 3:28 PM, GianLuca Sarto wrote:

thanks, Šarūnas,

I have two Darktable systems, based on Lenovo TS140, one with AMD, the
other with Nvidia Quadro 400.
Neither of the two manage OpenCL..

0.039390 [opencl_init] found opencl runtime library 'libOpenCL'
0.039412 [opencl_init] opencl library 'libOpenCL' found on your system
and loaded
0.042560 [opencl_init] found 1 platform
0.042582 [opencl_init] found 1 device
0.042727 [opencl_init] device 0 `Quadro 400' has sm_20 support.
0.042778 [opencl_init] discarding device 0 `Quadro 400' due to
insufficient global memory (511MB).

Too small video memory for useful OpenCL.


I would like to upgrade one of the two systems to a 4K display, so was
already decided to purchase a new video card.

Is there a tested solution that works out of the box, or a list of DT
OpenCL compliant cards for Linux?

I don't know whether such a list exists[1]. The more memory and more
parallel processing units (“GPU cores”) a video card has, the more
useful it will be for OpenCL processing. The useful minimum these days
might be 2GB, but I would look for 4GB and more.

“Out of the box” would probably only happen if you buy a computer from a
company that sells them configured with Linux. Whether you install AMD
or Nvidia GPU, there will be additional steps.

In case of AMD, Linux kernel already supports AMD cards with the open
source ‘amdgpu’ module, so that part will be “out of the box”. OpenCL
support will have to come from either 1) open source ROCm or 2)
proprietary AMDGPU-PRO.

In case of Nvidia, Linux kernel's ‘nouveau’ module will need to be
replaced with the proprietary ‘nv’ one from Nvidia, plus OpenCL part
from the same Nvidia. Ubuntu has them in standard repositories. One can
also use Nvidia repositories for perhaps slightly newer software.

Intel appears to have a completely open source system, but usable GPUs
are still to come.






[darktable-user] Need help with darktable 3 scene-referred modules

2020-11-20 Thread ternaryd
Hi,

Sorry for such a long message, but I think
it's really a tough topic, which is so
interlinked that it can't be split into pieces.

For quite a while I have been trying to get
started with the new modules in darktable 3,
but without success. The promise of gracefully
dealing with high dynamic range scenes, on the
other hand, is really seductive.

I've watched Aurelien Pierre's videos several
times and searched the net for answers, but
only got the impression that most people
aren't doing much better than I am. Everything
seems to indicate that this can't be learned
without a considerable amount of
understanding, even though these modules are
less artistic than technical, mathematical
issues. This, in spite of some obvious efforts
to make them 'easy to use' (which actually
complicates matters, as it makes it harder to
understand the inner workings, or to match
them to concepts about color science I've
already learned in a different context).

I think I have understood that exposure,
filmic and tone equalizer are there to map the
scene's dynamic range into a range of
intensities that fits into the dynamic range
the internal color space is able to work with.
At the beginning I thought that 'linear RGB'
meant some sort of radiometric
proportionality, but reading some Wikipedia
articles about ACES tells me that this should
all be in the photometric domain. But it is
sort of a mixture, at least in my head, as the
exposure module comes very early in the pixel
pipe and is declared to work with linear RGB,
but uses EV units, which are a logarithmic
concept. I'm using a self-made ICC camera
profile, which I always thought of as dealing
with photometric quantities. On the other
hand, there are many articles pointing out
that a 'gamma transform' is a legacy of
cathode ray tube monitors, which today isn't
really needed anymore and, if used, will be
undone before lighting up an LED on a modern
monitor. So what is the color space the
exposure module works in actually linear to?
Filmic's S-shaped curve is terribly
reminiscent of gamma correction. Aren't
photometric quantities already adjusted for
the behaviour of the human eye and brain?
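
For what it's worth, the EV/linear point can
be stated very compactly: the EV number is
logarithmic, but what an exposure change does
to the linear pixel data is a plain
multiplication by 2^EV. A minimal illustration
in Python (not darktable code, made-up values):

# +1 EV doubles every linear RGB value; -1 EV halves it. In log2 (EV)
# space this multiplication is a pure shift, not a compression.
def apply_exposure(rgb, ev):
    gain = 2.0 ** ev
    return [round(c * gain, 4) for c in rgb]

pixel = [0.10, 0.18, 0.25]            # linear, scene-referred RGB
print(apply_exposure(pixel, +1.0))    # [0.2, 0.36, 0.5]
print(apply_exposure(pixel, -2.0))    # [0.025, 0.045, 0.0625]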

Being a technical/mathematical issue, it must
be possible to state a goal which should be
achieved with each of these three modules.

I believe I have understood that the black
level correction in exposure must be low
enough to avoid zero or negative RGB values
later, in a more logarithmic domain. So the
goal might be just to accept the setting which
can be found by clicking on the pipette icon
near 'clipping threshold', no matter how
washed out the image might look.
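
A two-line illustration of the failure mode
described here (made-up numbers): if the black
level subtracted is larger than the darkest
pixel values, those pixels go negative, and a
later logarithmic stage has nothing sensible
to do with them.

import math

black_level = 0.02                  # hypothetical black level correction
shadows = [0.010, 0.018, 0.030]     # hypothetical darkest pixel values

corrected = [v - black_level for v in shadows]
print(corrected)                    # [-0.01, -0.002, 0.01] -> negatives appear

# A later log-based stage chokes on them:
for v in corrected:
    print(math.log2(v) if v > 0 else "undefined (<= 0)")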

If that lowest level of intensities is fixed,
any change of exposure in the same module needs
to perform some sort of uniform compression or
dilation. But the drawings in one of Aurelien
Pierre's videos seem to suggest a more linear
shift of the scene's dynamic range with regard
to the dynamic range of the internal working
color space. Where did I miss the turn?

At one point, Mr. Aurelien Pierre states that
one can either try to fit the highest
intensities into the histogram, or try to set
the mid tones right, even if this makes the
highest intensities appear overexposed; it
wouldn't matter. At another point he says that
it's the second option which should be chosen,
which may be in sync with the fact that in the
later versions of filmic's scene tab, setting
the middle grey is only optional, as that
should already have been done in the exposure
module. For that, I should (mentally) adjust
the exposure module such that a rectangle
within the image, enclosing an area which in
the scene has about the same brightness as my
grey card, yields something close to 18.42%.
If this is right, I wonder why the exposure
module doesn't have a function similar to the
fulcrum pipette in color balance. So, what is
the goal I have to look for, such that the
following modules allow for a successful
adjustment? If the exposure module is rather
off with regard to some unknown rules, there
is no chance to get the other modules right.
I've found out that much.
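
The arithmetic behind that mental adjustment
is at least short: if a patch that should
match the grey card reads some linear value g,
the exposure correction that brings it to the
quoted target is log2(target/g) EV. The 18.42%
figure is taken from the paragraph above;
whether darktable's exact constant differs
slightly does not change the arithmetic.
Illustrative Python, made-up reading:

import math

target_grey = 0.1842   # the ~18.42% value quoted above
measured = 0.06        # hypothetical linear reading of the grey-card patch

ev_correction = math.log2(target_grey / measured)
print(f"raise exposure by about {ev_correction:+.2f} EV")   # ~ +1.62 EV

# Check: applying that correction brings the patch to the target.
print(round(measured * 2 ** ev_correction, 4))              # 0.1842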

I've found even fewer clues for the final goal
in the filmic rgb module. I think I did
understand that it is the shape of the S-curve
which marks the transition from a linear to a
more logarithmic distribution of the
intensities. Beyond that, I'm really lost.
Where should I start? The module opens with
the scene tab, but the look tab already has
some settings that directly affect the effect
of the white and black relative exposures.
Both tabs together have 7 parameters. Just
trying to guess a combination of them that
'looks good to my eye' is really difficult. As
a matter of fact, watching the videos,
Aurelien Pierre also switches back and forth
many times, the only difference being that he
knows exactly what he's doing and succeeds,
and I don't. So, where do I start, and what
should I look out for at each point?

Also the tone equalizer seems to be much easier
when Aureli