Joao S. O. Bueno Calligaris wrote:
- Initially using X memory
- duplicate image - now using X * 2 memory.
That shouldn't be true. The vast majority of the memory
used by the duplicate is tile data which GIMP simply
copy-on-write shares between the original and the duplicate.
(Unless COW stuff got
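The tile-sharing idea can be sketched as a toy model (Python, purely illustrative; GIMP's real tile manager is written in C, and the class and method names below are invented for the sketch):

```python
# Toy model of copy-on-write tile sharing. Purely illustrative:
# GIMP's actual tile manager is C code and considerably more involved.

class Tile:
    """A block of pixel data plus a share count."""
    def __init__(self, data):
        self.data = data
        self.refs = 1

class Image:
    def __init__(self, tiles):
        self.tiles = tiles

    @classmethod
    def new(cls, n_tiles, tile_size=64):
        return cls([Tile([0] * tile_size) for _ in range(n_tiles)])

    def duplicate(self):
        # Share the Tile objects: the duplicate costs a list of
        # references, not a second copy of the pixel data.
        for tile in self.tiles:
            tile.refs += 1
        return Image(list(self.tiles))

    def write_pixel(self, tile_index, offset, value):
        tile = self.tiles[tile_index]
        if tile.refs > 1:            # shared: copy before writing
            tile.refs -= 1
            tile = Tile(list(tile.data))
            self.tiles[tile_index] = tile
        tile.data[offset] = value

img = Image.new(4)
dup = img.duplicate()                # no pixel data copied yet
dup.write_pixel(0, 0, 255)           # copies exactly one tile
```

Duplicating costs only a list of tile references; pixel data is copied lazily, tile by tile, the first time a shared tile is written to.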
Hi,
Gerhard Gauling [EMAIL PROTECTED] writes:
I agree with you, but wouldn't it be nice to some day have
something similar to the file automation dialog in Photoshop?
There one can choose, for example, contact sheet or batch processing.
In batch processing one can select an
Hi,
Daniel Egger [EMAIL PROTECTED] writes:
Hm, there's still no idea floating around how to benchmark.
There are very clear ideas on how to do it. Someone just needs to sit
down and write the script (or dig out script-fu-bench.scm which is
what we used to use a long time ago).
I'd rather
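A minimal timing harness along those lines could look like this (a generic Python sketch, not the historical script-fu-bench.scm; `benchmark` and the toy workload are invented names):

```python
import time

def benchmark(operation, runs=5):
    """Run `operation` several times and return (best, mean) wall-clock
    seconds. The best of several runs is usually the most stable number
    for comparing two builds."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        operation()
        times.append(time.perf_counter() - start)
    return min(times), sum(times) / len(times)

# Toy workload standing in for a GIMP operation such as a gradient fill.
best, mean = benchmark(lambda: sum(x * x for x in range(100_000)))
```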
Hi,
we released a new snapshot (0.7) of the user manual, which is mainly a bugfix
snapshot. You can download this snapshot from ftp.gimp.org:
ftp://ftp.gimp.org/pub/gimp/help/testing/gimp-help-2-0.7.tar.gz
There is also a browsable, up-to-date CVS version online at:
Hi,
[EMAIL PROTECTED] (2005-02-20 at 0203.45 -0200):
Let X be a Huge Amount of Memory (tm) taken by said image, and Y be A
Couple Kilobytes (tm) used by the scaled-down version.
Your process:
- Initially using X memory
- duplicate image - now using X * 2 memory.
I assume there is Copy On Write,
Sven Neumann wrote:
Gerhard Gauling [EMAIL PROTECTED] writes:
In batch processing one can select an action (i.e. gimp-script), a
folder for loading the files for processing and one for saving them,
how the naming is to be managed, and so on. [...]
I think this is
On 20.02.2005, at 14:09, Sven Neumann wrote:
There are very clear ideas on how to do it.
Hm, must have missed that...
Someone just needs to sit down and write the script (or dig out
script-fu-bench.scm which is what we used to use a long time ago).
You'd still do me a favor if you would try
Sven Neumann wrote:
Daniel Egger [EMAIL PROTECTED] writes:
Hm, there's still no idea floating around how to benchmark.
There are very clear ideas on how to do it. Someone just needs to sit
down and write the script (or dig out script-fu-bench.scm which is
what we used to use a long time
Hija,
while building GIMP for the threading test I noticed that
builds where the source dir is not the object directory are
slightly broken; in particular the intlfoo magic in the tips
directory doesn't take the SRCDIR into account and thus fails
badly, and the following (generated) files are
That is great.
I did not know that GIMP could reuse tiles from one image to
another. Actually, I did not think this was done even from one layer
to another - that explains why adding new layers to large images goes
so smoothly.
I am more than happy.
How does this memory usage behave
Hi,
Daniel Egger [EMAIL PROTECTED] writes:
while building GIMP for the threading test I noticed that
builds where the source dir is not the object directory are
slightly broken; in particular the intlfoo magic in the tips
directory doesn't take the SRCDIR into account and thus fails
badly
On 20.02.2005, at 14:09, Sven Neumann wrote:
You'd still do me a favor if you would try current CVS and told me
whether it feels faster or not.
It's slower, measurably and reproducibly slower.
As a benchmark I used a gradient fill in a 3000x3000px (68.8M)
image. I get consistently times of 8s for
Hi,
Daniel Egger [EMAIL PROTECTED] writes:
It's slower, measurably and reproducibly slower.
As a benchmark I used a gradient fill in a 3000x3000px (68.8M)
image. I get consistently times of 8s for 1 thread and between
9.2s and 9.6s for 2 threads. With a running application, after a
restart
On 20.02.2005, at 19:29, Sven Neumann wrote:
Anyone here who can fix that with some Makefile-fu?
Why do you think those files would be looked for in $(srcdir)?
Because the compiler complained that it couldn't find the
files in $(srcdir), so what I did was copy them over
one by one from the
On 20.02.2005, at 21:55, Sven Neumann wrote:
As a benchmark I used a gradient fill in a 3000x3000px (68.8M)
image. I get consistently times of 8s for 1 thread and between
9.2s and 9.6s for 2 threads. With a running application, after a
restart -- doesn't matter.
What is strange though, is that it
Hi,
Daniel Egger [EMAIL PROTECTED] writes:
What is strange though, is that it only seems to use one CPU
for both threads; maybe a stupid gthread implementation?
Since gthread is just a very thin wrapper around pthreads, that would
mean that it's a stupid pthread implementation. To me this
On Sun, Feb 20, 2005 at 10:55:18PM +0100, Sven Neumann [EMAIL PROTECTED]
wrote:
mean that it's a stupid pthread implementation. To me this looks like
the kernel believes that it would be better to keep the threads local
than to move one to the other CPU.
Linux will not keep two threads
On 20.02.2005, at 22:55, Sven Neumann wrote:
Since gthread is just a very thin wrapper around pthreads, that would
mean that it's a stupid pthread implementation. To me this looks like
the kernel believes that it would be better to keep the threads local
than to move one to the other CPU. I wonder
On 20.02.2005, at 23:47, [EMAIL PROTECTED] ( Marc) (A.) (Lehmann ) wrote:
Linux will not keep two threads running on a single CPU if both are
ready and nothing else is running, regardless of locality etc., as the
kernel lacks the tools to effectively decide whether threads should
stay on a CPU or
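One way to test the placement hypothesis from user space is to restrict the process's CPU mask and compare timings (a Linux-only sketch; `os.sched_setaffinity` does not exist on other platforms, so the invented `pin_to_cpus` helper degrades gracefully):

```python
import os

def pin_to_cpus(cpus):
    """Restrict the current process to the given CPU set. Returns the
    resulting affinity mask, or None where the platform offers no
    affinity control (os.sched_setaffinity is Linux-only)."""
    if not hasattr(os, "sched_setaffinity"):
        return None
    os.sched_setaffinity(0, set(cpus))
    return os.sched_getaffinity(0)

# Allowing all CPUs is a harmless no-op here; pin_to_cpus({0}) before a
# benchmark run would force single-CPU behaviour for comparison.
mask = pin_to_cpus(range(os.cpu_count()))
```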
Daniel Egger wrote:
I can force it to use both CPUs now, but even with
200% utilization it is 2s slower to run this stupid
microbenchmark than on 1 CPU without threads.
Just a vague guess, but the multiprocessor GIMP pixel
work scheduler might* farm alternating tiles to alternating
CPUs. These are
It sounds like the granularity of parallelism is too fine. That is, each
task is too short and the overhead of task dispatching (your task queue
processing, the kernel's thread context switching, any IPC required, etc.) is
longer than the duration of a single task.
I hit the same problem a decade
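The granularity argument can be made concrete with a back-of-the-envelope cost model (illustrative numbers only; `parallel_time` is an invented helper, not anything from GIMP):

```python
def parallel_time(n_tasks, task_s, overhead_s, workers):
    """Idealized cost model: every dispatched task pays a fixed
    scheduling/IPC overhead, and the work divides evenly across
    the workers."""
    return n_tasks * (task_s + overhead_s) / workers

# The same 0.5 s of total work, split differently (made-up numbers):
serial = parallel_time(10_000, task_s=50e-6, overhead_s=0,      workers=1)
fine   = parallel_time(10_000, task_s=50e-6, overhead_s=200e-6, workers=2)
coarse = parallel_time(10,     task_s=50e-3, overhead_s=200e-6, workers=2)
```

With 50 µs tasks and 200 µs of dispatch overhead per task, two workers come out slower than one; cutting the same work into 10 coarse tasks nearly halves the time instead.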
Hi,
David Bonnell [EMAIL PROTECTED] writes:
It sounds like the granularity of parallelism is too fine. That is,
each task is too short and the overhead of task dispatching (your
task queue processing, the kernel's thread context switching, any IPC
required, etc.) is longer than the duration
Thanks for the clarification.
If each thread obtains an (exclusive) lock on the pixel region then the
tasks will effectively be serialized and overall execution time will
increase compared to a non-threaded implementation due to the threading
overheads. (Queue manipulation, thread creation and
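The serialization effect is easy to demonstrate: if every worker takes the same exclusive lock for its whole task, at most one of them is ever making progress (a Python sketch with invented names, the lock standing in for a per-pixel-region lock):

```python
import threading

region_lock = threading.Lock()   # stands in for an exclusive lock
                                 # on the whole pixel region
meta_lock = threading.Lock()     # protects the bookkeeping counters
active = 0
max_active = 0

def task():
    global active, max_active
    with region_lock:            # every worker takes the same lock
        with meta_lock:
            active += 1
            max_active = max(max_active, active)
        for _ in range(10_000):  # simulate work on the region
            pass
        with meta_lock:
            active -= 1

threads = [threading.Thread(target=task) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# max_active stays at 1: the workers run strictly one after another,
# so the extra threads contribute only creation and switching overhead.
```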