I confirm that the color zones module easily produces artifacts. This
happens especially with changes to the lightness curve. And obviously with
"process mode = strong" you may get artifacts almost immediately.
As far as I can see, 2.6 has not been much different. In the old version
everything
On 23.12.19 at 16:37, Аl Воgnеr wrote:
7,393968 [dev_process_export] pixel pipeline processing took 0,608 secs
(0,915 CPU)
This is blazingly fast.
What is interesting is:
Spitzlicht-Rekonstruktion' on GPU, blended on GPU [export]
With bench.srw it was calculated on the CPU. So the use of the
On 23.12.19 at 10:22, Аl Воgnеr wrote:
I tried "opencl_memory_headroom=100"
10,485934 [dev_process_export] pixel pipeline processing took 3,299
secs (8,980 CPU)
So there is no big difference.
Assuming that the history you provided is a typical one, there is
only one direction where
Also note that in your case there are a few modules which are
processed on the CPU:
6,983092 [dev_pixelpipe] took 0,096 secs (0,997 CPU) processed
`Spitzlicht-Rekonstruktion' on CPU, blended on CPU [export]
7,069714 [dev_pixelpipe] took 0,087 secs (1,365 CPU) processed
`Entrastern' on CPU,
On 22.12.19 at 23:10, Аl Воgnеr wrote:
You might need to adjust parameter opencl_memory_headroom in
darktablerc.
Thanks for reminding me to do this. Could you please tell me the exact
name of the variable(s) to change?
As previously written: opencl_memory_headroom
$ nvidia-smi
Sun Dec 22
On 22.12.19 at 19:32, Jochen Keil wrote:
However, both denoise modules, which used to run perfectly on the GPU,
now fall back to the CPU and add around 14s each!
According to the log there's a tiling issue, but I don't understand why.
It's still the same 6GB GPU that used to work with 2.6.x.
On 20.12.19 at 09:37, Jochen Keil wrote:
*Now* I'm really looking forward to Christmas
There is one caveat. The main performance improvement in this context
has been the implementation of guided filter on GPU. The guided filter
has huge memory requirements. In your example it needs about
On 17.12.19 at 07:30, Jochen Keil wrote:
However, I usually
make broad use of parametric masks with feathering.
I would be very surprised if feathering was the bottleneck. I have a
1060 myself and feathering adds like <100ms per module on export (7k x
4.5k image). That's with 3.0.rc2.
On 17.12.19 at 00:26, Holger Wünsche wrote:
The most expensive modules are the exposure 1+2 and tone curve 3. These
are the three modules with masks. When removing them the time is down to
6s.
Drawn mask rendering is really slow and does not benefit from the GPU.
There is currently work
On 17.12.19 at 00:38, Michael Rasmussen wrote:
The reason for such a long time is that the GPU ran out of
memory and the image had to be processed by the CPU.
Then you should see error messages when running with '-d opencl', and
that would mean something is really wrong.
Is there any
Your crash report shows a critical fault in Lua. Maybe an incompatible
update of the Lua libraries on your system? Or maybe some change to your
luarc file (in darktable's config directory)?
On 20.11.19 at 18:06, Bernhard wrote:
Hi,
I'd been working happily with dt 2.4.4 on Mint 18 for the
Error number -6 means CL_OUT_OF_HOST_MEMORY.
Your driver tried to allocate memory on the computer (not the graphics
card) and failed.
Happened to me once with NVIDIA and a system with 8GB RAM. There were a
lot of processes running in the background. After terminating them it
worked.
Not
You don't give details on your HW setup. However, from past experience
this error code looks like an NVIDIA-specific one. It tends to be
generated when the program is not able to access the required device
special files:
crw-rw+ 1 root video 195, 0 17. Aug 08:19 /dev/nvidia0
crw-rw+
(to me). It is a module that, strangely enough, I have found fairly
useful without fully understanding it.
On 2019-04-24 9:19 a.m., Ulrich Pegelow wrote:
The module enhances color contrast, i.e. the two sliders enhance
separation between positive and negative a and b values, respectively.
In this sense it works similarly to tonal value contrast, which acts on
lighter and darker gray tones.
The overall effect depends on your image. Example: if
Good question :)
darktable (mostly) uses an unbounded color workflow, which means that L
values outside their normal definition range, i.e. above 100, are
handled like any other values.
The threshold parameter tells the module above which L value a pixel
should be subject to color
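A minimal sketch of that thresholding idea (hypothetical Python for illustration, not darktable's actual code; the function and parameter names are made up):

```python
# Hypothetical sketch: gate an effect by an L threshold, treating
# out-of-range L values (> 100) like any others, matching darktable's
# unbounded color workflow.

def apply_above_threshold(pixels, threshold, effect):
    """Apply `effect` only to pixels whose Lab L value exceeds `threshold`.

    `pixels` is a list of (L, a, b) tuples; L is deliberately NOT
    clipped to [0, 100].
    """
    out = []
    for L, a, b in pixels:
        if L > threshold:
            out.append(effect((L, a, b)))
        else:
            out.append((L, a, b))
    return out

# Example: desaturate only very bright (possibly out-of-range) pixels.
desaturate = lambda p: (p[0], 0.0, 0.0)
result = apply_above_threshold([(50, 10, -5), (120, 30, 20)], 100, desaturate)
# The L=120 pixel is processed even though it lies outside [0, 100].
```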
On 15.01.19 at 15:33, Christian Stromberg wrote:
Hi,
thanks for the hints! It took me some time to install all the missing
dependencies. Because it's the first time I am compiling a whole
program, I wasn't familiar with the specific library names for Linux
Mint. Looking them up and installing
Hi,
you will need to compile from source. It's best to follow the procedure
described as "git version" further down on
https://www.darktable.org/install/. This will give you darktable in its
most recent development version.
In order to test the PR I suggest:
cd $HOME/darktable
git checkout -b
On 12.01.19 at 17:42, johannes hanika wrote:
thanks for this! it works fine on my machine.
the conversion is done in lcms2, right? they do support unbounded
transforms now. if we're not clipping to [0,1] the colour space should
not matter at all. i'm not sure i understand the difference
There is now a PR in github that makes darktable read LAB Tiff files
(PR1996). Please give it a try.
FYI, the Tiff file gets internally converted to sRGB when opened. sRGB
represents a much smaller gamut than Lab. Please check whether this causes
problems such as much too low color dynamics. A larger
On 09.01.19 at 15:43, Michael Below wrote:
Hi,
I remember similar issues some years ago, iirc only imagemagick was able to
process similar tiff images correctly. I think most other programs were using a
certain library (gtk?) that caused these issues.
Good luck!
Any knowledge if
On 27.12.18 at 14:10, FK wrote:
Hi Ulrich,
thanks for your advice! Is this a quick fix, meaning the problem pops up
again with the next update/upgrade, or is this the best way to go?
If this is a known bug - can I help in any way to get it solved?
Please post the lines starting with
Hi,
there have been sporadic reports of this issue before. As a quick fix
you can go into ~/.config/darktable/darktablerc and manually delete all
lines starting with "plugins/lighttable/collect/".
When you start darktable again you once again see all your images but
now you should be able
On 09.12.18 at 08:59, Stéphane Gourichon wrote:
My guess is a memory corruption error in some part of darktable. Of
course it would be nice if somebody could trace this. (I'm willing to,
but currently not using darktable much, and quite busy with other projects.)
Well, no. The error message
I don't see anything specifically broken here. Let's take for example
the snippet from the quote below.
Time spent in OpenCL kernels including memory I/O is 0.055 seconds.
Total time in pixelpipe is 0.123 seconds. There is a bit of overhead
here but that's not dramatic. You may try to set
Two observations:
1) the total time that darktable reports per pixelpipe lies in the range
of 0.15s. That's not particularly fast, given your very undemanding
history stack, but it's not extremely slow either. What puzzles me
is your observation that the time spent in pixelpipe does not
On 02.12.18 at 15:52, Volker Lenhardt wrote:
Activated OpenCL
Next image:
1118,913519 [dev_process_image] pixel pipeline processing took 0,434
secs (1,555 CPU)
Profiled denoise:
1219,319104 [dev_process_image] pixel pipeline processing took 18,669
secs (72,306 CPU)
There's practically no
Looks like OpenCL is properly loaded. To better analyse the slow
response time of your system run with 'darktable -d opencl -d perf'.
After each processing step you get profiling output. For the start only
look at lines like
17,607291 [dev_process_image] pixel pipeline processing took 0,212
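For convenience, the pixelpipe timings can also be pulled out of such a log programmatically. This is an illustrative helper (not part of darktable), assuming the '-d perf' line format shown above, including the locale-dependent decimal commas:

```python
import re

# Extract the pixelpipe timing from '-d perf' log lines such as:
#   17,607291 [dev_process_image] pixel pipeline processing took 0,212 secs
# The decimal comma comes from a German locale, so normalize it first.

PERF_RE = re.compile(
    r"\[dev_process_(?:image|export)\] pixel pipeline processing took "
    r"([\d,.]+)\s*secs?"
)

def pipeline_seconds(line):
    """Return the pixelpipe time in seconds, or None if the line doesn't match."""
    m = PERF_RE.search(line)
    return float(m.group(1).replace(",", ".")) if m else None

line = "17,607291 [dev_process_image] pixel pipeline processing took 0,212 secs"
print(pipeline_seconds(line))  # 0.212
```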
On 01.12.18 at 20:32, Volker Lenhardt wrote:
This is one more riddle. I had tried that and have repeated it just now
from the terminal. I get "-bash: darktable: command not found". It seems
the best I can do is to reinstall darktable. What do you think?
You probably need to give the full
You are probably aware that you can find out more about OpenCL problems
by starting darktable from a shell with option 'darktable -d opencl'.
Likewise you may try 'darktable -d perf' for more info on what (module)
makes darktable feel sluggish.
Ulrich
On 01.12.18 at 19:38, Volker
Maybe a problem with your installation of lensfun on which darktable's
lens correction relies. I would try to update the lensfun database by
running lensfun-update-data.
Ulrich
On 20.11.18 at 20:59, kneops wrote:
I can't. The dropdown only shows Nikon, and then only this lens.
I'm using the
Looks like your camera has not been detected in the first place (see the
empty field above the lens name). Please try to select the camera manually.
Best wishes
Ulrich
On 20.11.18 at 20:40, kneops wrote:
I'm puzzled why this lens correction is not working.
The lens as noted in the lens
On 25.05.2018 at 17:48, Matthias Bodenbinder wrote:
On 25.05.2018 at 14:35, Peter McD wrote:
Is there a rule of thumb for headroom settings?
Very good question.
Specifically because I do not see a performance difference between
opencl_memory_headroom=1000 and opencl_memory_headroom=400.
Ulrich Pegelow:
I have put all relevant files on dropbox:
https://www.dropbox.com/sh/yarghqgncirjd0w/AADUNyFGaGUpyBTDil_qaxsUa?dl=0
There you will find a script test-CR2.sh which repeatedly calls darktable-cli
with the big 5Ds file. And it creates a log file for this. I have attached 2 of
those log
On 25.05.2018 at 06:21, Matthias Bodenbinder wrote:
But for me the issue is more: why does it work with opencl_memory_headroom=400
for the first couple of cycles and then I get the issue? Then I increase the
value to opencl_memory_headroom=500, which fixes it for a while before it
happens
On 24.05.2018 at 18:15, Matthias Bodenbinder wrote:
On 24.05.2018 at 17:43, Matthias Bodenbinder wrote:
37,787315 [opencl_summary_statistics] device 'GeForce GTX 1050 Ti' (0): peak
memory usage 3807805440 bytes
37,787326 [opencl_summary_statistics] device 'GeForce GTX 1050 Ti' (0): 499 out
On 24.05.2018 at 14:10, Matthias Bodenbinder wrote:
Can it be that the dt GUI is not releasing all the GPU memory, and then the
command line fails? And the worst thing is that the system is not recovering
from that. Even with bench.SRW, which is a lot smaller, it is not working
anymore. The GPU
On 14.05.2018 at 02:07, David Vincent-Jones wrote:
I use this module extensively and find it very useful.
If I have any thoughts regarding change, it would be the ability to use a
variable-sized brush eraser rather than the current picker. A brush
would allow large areas to be 'swept' more
On 18.04.2018 at 21:40, frieder wrote:
On Wednesday, 18.04.2018 at 18:13 +0200, Ulrich Pegelow wrote:
No, that's not what I would want to see (which version of darktable
are you using?). The final lines of the output should look something like:
It is darktable 2.2.1 on Debian Linux
'GeForce GTX 1060 6GB' (0):
164 out of 164 events were successful and 0 events lost
Did you combine '-d opencl' with '-d memory'?
On 18.04.2018 at 14:07, frieder wrote:
On Wednesday, 18.04.2018 at 06:51 +0200, Ulrich Pegelow wrote:
This is what I did, and this is the output of the last lines
You may also run darktable with -d opencl -d memory, do some typical
stuff and then close darktable, at which point it will tell you the peak
usage of GPU memory.
Peak usage will depend strongly on your usage scheme. It will be high
during export of large images and reasonably low and
On 07.03.2018 at 17:10, Matthieu Moy wrote:
Now, what "subtract" should do with colors is debatable, especially in Lab.
On the L channel, something like max(in - out, 0) would be rather natural.
On the a and b channels, I'm not sure what to do. Just doing in-out would
mean the final image would
You obviously have a point here; the current implementation does not do
what the word "subtract" implies.
Before considering a fix: any idea how the "subtract" operator should
act in the Lab color space?
Ulrich
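The L-channel behavior Matthieu proposes could be sketched like this (an illustration of the proposal, not darktable's current implementation; the handling of a and b is exactly the open question and is shown as a plain difference only for illustration):

```python
# Sketch of the proposed "subtract" blend operator in Lab:
# on L, clamp at black with max(in - out, 0); on a and b the right
# behavior is undecided, so a plain difference is used here.

def subtract_lab(pixel_in, pixel_out):
    L_in, a_in, b_in = pixel_in
    L_out, a_out, b_out = pixel_out
    L = max(L_in - L_out, 0.0)  # lightness cannot go below black
    a = a_in - a_out            # open question: plain difference shown
    b = b_in - b_out
    return (L, a, b)
```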
On 05.03.2018 at 22:46, Matthieu Moy wrote:
Additionally you may want to play with opencl_number_event_handles.
Start with an extreme setting of zero. If this solves your issues you
can then try different values in between.
Ulrich
On 05.03.2018 at 11:37, KOVÁCS István wrote:
The only memory-transfer related setting that I
What are your opencl related settings in darktablerc (i.e. all config
parameters of the form opencl_*) ?
Ulrich
On 26.02.2018 at 23:28, Bernhard wrote:
I was told yesterday that this card doesn't have that sensor.
Updating to newer driver shows N/A instead:
~ $ nvidia-smi Sun Feb 25
that's really not an issue any longer.
Ulrich
On 20.02.2018 at 20:46, Peter Mc Donough wrote:
On 20.02.2018 at 20:09, Ulrich Pegelow wrote:
That's an out-of-resources problem on your graphics card. Try to
increase darktable's config variable opencl_memory_headroom (in file
darktablerc) to some
That's an out-of-resources problem on your graphics card. Try to
increase darktable's config variable opencl_memory_headroom (in file
darktablerc) to something like 400.
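Conceptually, the headroom reserves part of the card's memory for the driver and other applications, and darktable restricts itself to the rest. A simplified sketch of that idea (the actual accounting in darktable is more involved; the function name is made up):

```python
# Simplified sketch of the idea behind opencl_memory_headroom:
# reserve `headroom_mb` of GPU memory for the driver and other
# applications, and let darktable use what remains.

def usable_gpu_memory_mb(total_mb, headroom_mb=400):
    return max(total_mb - headroom_mb, 0)

# A 6 GB card with the suggested headroom of 400 MB:
print(usable_gpu_memory_mb(6144))  # 5744
```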
Please also make sure that no other application uses substantial amounts
of GPU memory. You can use program nvidia-smi to
There is a feature as you suggested in the upcoming 2.4.0. When holding
down the CTRL key while entering one of the parametric mask sliders you
see the mask preview as long as you stay in the slider. On leaving, the
view goes back to normal (with a 1 second delay).
Likewise if you press the
On 22.09.2017 at 16:12, Howard Helsinger wrote:
yes, as suggested, I'm running two displays at 1920.
problems seem to derive from the demosaic module
Well, that's not fully correct. The problem derives from the fact that
there is less GPU memory available than darktable thinks it has. The
On 21.09.2017 at 22:00, Howard Helsinger wrote:
However, I don't quite understand why.
I attach the output of $ darktable -d opencl. It says my GeForce
GTS 450 "allows GPU memory allocations of up to 239 MB".
I think I don't understand the numbers.
I'll try to explain, and there are
On 07.08.2017 at 04:03, David Vincent-Jones wrote:
I have just moved my openSUSE version from 42.2 to 42.3 and now dt does
not appear to recognize the installed OpenCL.
[opencl_init] could not find opencl runtime library 'libOpenCL'
[opencl_init] could not find opencl runtime library
Hi David,
see PR1441 (https://github.com/darktable-org/darktable/pull/1441) and
redmine ticket #11497 to which it refers.
Inconsistent output has been a long-standing issue with a few modules
where temporarily the center view would be incorrectly processed. PR1441
tries to prevent those
On 27.03.2017 at 19:58, Pascal Obry wrote:
01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Venus
XTX [Radeon HD 8890M / R9 M275X/M375X] (rev 83) (prog-if 00 [VGA controller])
Subsystem: Dell Venus XTX [Radeon HD 8890M / R9 M275X/M375X]
Flags: bus
Hi,
so far there is no such option. What we have are blend modes that limit
a module's effect to only Lab lightness (= L channel) or Lab color, but
there is no option to apply the effect to individual a and b channels.
But please bear in mind that darktable is fundamentally different from PS.
So
These are offered in modules that work in the RGB color space.
The grayscale value g is calculated as 0.3*R + 0.59*G + 0.11*B - that's
a simple and rather common way of weighting the individual channels to
get a grey image.
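That weighting can be written down directly (a trivial sketch; channels assumed in [0, 1]):

```python
# The channel weighting quoted above: g = 0.3*R + 0.59*G + 0.11*B

def grayscale(r, g, b):
    """Weighted grayscale value; channels in [0, 1]."""
    return 0.3 * r + 0.59 * g + 0.11 * b

gray = grayscale(0.5, 0.5, 0.5)  # equal channels map to the same gray level
```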
OTOH the L value is calculated from an RGB -> HSL color space
On 06.01.2017 at 22:20, darkta...@911networks.com wrote:
I just switched from 2.0.7 to 2.2.1 (archlinux) and I have some very significant
slowdowns.
Things that would take only a second or two now take 4, 5, 6 seconds.
[ ... a lot of output that tells us OpenCL is working ... ]
Anything I
On 22.12.2016 at 22:22, Lorenzo Bossi wrote:
Thank you for the quick reply. The color clipping works well for b
Just to understand the problem better: is it related to the fact that
the blue LEDs emit some light to which camera sensors are too sensitive?
Mostly yes. darktable uses the Lab color
No Acros, but the other ones on a Fuji X-T1. The basis has been the
darktable-chart output with some manual tweaking, referring to a set of
images.
Ulrich
On 30.11.2016 at 19:54, darkta...@911networks.com wrote:
DT 2.0.7.
Has anybody created Fuji styles that mostly match the Fujifilm
emulations,
On 30.11.2016 at 07:47, Mark Heieis wrote:
On 2016-11-29 21:50, Ulrich Pegelow wrote:
On 30.11.2016 at 00:23, Mark Heieis wrote:
some darktable-cltest output:
[opencl_init] found opencl runtime library 'libOpenCL.so.1'
[opencl_init] opencl library 'libOpenCL.so.1' found on your system
On 30.11.2016 at 00:23, Mark Heieis wrote:
some darktable-cltest output:
[opencl_init] found opencl runtime library 'libOpenCL.so.1'
[opencl_init] opencl library 'libOpenCL.so.1' found on your system and
loaded
[opencl_init] found 1 platform
[opencl_init] found 1 device
Your system does in
On 27.11.2016 at 16:25, Rico Heil wrote:
It should be grayed out already.
It's not on my machine:
At least I am not able to see any difference.
Actually not really visible. See the attached file for how it looks here.
The difference is not very big, though. That's a style issue. Not even
On 27.11.2016 at 16:04, Rico Heil wrote:
On 27.11.2016 at 15:38, Christian Kanzian wrote:
On Sunday, 27 November 2016 at 15:32:24, Rico Heil wrote:
This discussion made me check the OpenCL parameters in my current
darktable installation.
"activate opencl support" is checked and I cannot
On 27.11.2016 at 11:00, Rico Heil wrote:
On 26.11.2016 at 18:11, Niccolò Belli wrote:
You will probably get better performance by saving some bucks on the CPU
and buying a very fast GPU for OpenCL acceleration. Something like the
RADEON RX 480 would be an optimal solution because of FOSS
Hi,
short update from my side. It looks like I have found a way to restore the
original OpenCL performance of NVIDIA devices with recent driver versions.
Currently we have some other issues with the OpenCL codepath in master
which prevents me from working there. If this gets sorted out soon, I
On 15.09.2016 at 09:31, Tobias Ellinghaus wrote:
With a speed difference like that, couldn't we run a small benchmark at init
time (we already compare the speed to the CPU) and set the flag accordingly at
runtime?
Probably we should.
Ulrich
On 15.09.2016 at 06:00, Jack Bowling wrote:
On 09/14/2016 09:56 AM, Ulrich Pegelow wrote:
Well, there obviously is an issue with OpenCL and NVIDIA. However, a
quick check reveals that this is not related to 2.0.6 versus 2.0.5.
In fact it seems that NVIDIA did some changes to their drivers
Any backtrace?
On 14.09.2016 at 19:12, Colin Adams wrote:
It causes darktable 2.0.5 (Fedora) to crash. Switching back to false
cures the problem. So please don't change.
On Wed, 14 Sep 2016 at 17:56 Ulrich Pegelow <ulrich.pege...@tongareva.de>
Well, there obviously is an issue with OpenCL and NVIDIA. However, a
quick check reveals that this is not related to 2.0.6 versus 2.0.5.
In fact it seems that NVIDIA did some changes to their drivers in the
way they handle memory transfers over the PCIe interface.
There is a quick fix for
On 26.05.2016 at 21:58, Peter Mc Donough wrote:
Is there any information available somewhere on which OpenGL
implementations/Linux kernels presently support darktable?
AMD/Nvidia/Intel.
We have good experience with AMD and Nvidia. The proprietary drivers of
both vendors run well with a