Have a look at the Lumiera mailing list, especially their current
threads on scheduling. Links to these messages can be found at
http://lists.lumiera.org/pipermail/lumiera/2009-June/thread.html
It seems to me that they're dealing with
related issues, although with a different set of constraints.
2009/6/16 Øyvind Kolås :
> Locking mutexes for such accesses, perhaps even over batches of tiles
> when reading/writing a rectangle of pixels, I expect to be detrimental
> to performance only if the parallelization is implemented so that the
> threads will be blocking each other's access. This shoul
On Tue, Jun 16, 2009 at 11:38 AM, wrote:
> 2009/6/16 Øyvind Kolås :
>> On Tue, Jun 16, 2009 at 11:11 AM, wrote:
> Another thing worth mentioning is that caches on every node don't
> scale well to concurrent evaluation of the graph, since the evaluators
> would need to synchronize usage of the caches all the time, preventing
> nice scaling of performance as you use more CPU cores/CPUs.
2009/6/16 Øyvind Kolås :
> On Tue, Jun 16, 2009 at 11:11 AM, wrote:
Another thing worth mentioning is that caches on every node don't
scale well to concurrent evaluation of the graph, since the evaluators
would need to synchronize usage of the caches all the time, preventing
nice scaling of performance as you use more CPU cores/CPUs.
On Tue, Jun 16, 2009 at 11:11 AM, wrote:
>>> Another thing worth mentioning is that caches on every node don't
>>> scale well to concurrent evaluation of the graph, since the evaluators
>>> would need to synchronize usage of the caches all the time, preventing
>>> nice scaling of performance as you use more CPU cores/CPUs.
>> Another thing worth mentioning is that caches on every node don't
>> scale well to concurrent evaluation of the graph, since the evaluators
>> would need to synchronize usage of the caches all the time, preventing
>> nice scaling of performance as you use more CPU cores/CPUs.
>
> In most instan
On Tue, Jun 16, 2009 at 6:31 AM, Martin Nordholts wrote:
> One caching strategy that I think we will come pretty far with is to
> handle caching in a wrapper function to GeglOperation::process(). That
> is, the GeglEvalVisitor would not call GeglOperation::process() directly
> but instead something
On Tue, Jun 16, 2009 at 8:58 AM, johannes hanika wrote:
> this is handled well because changing the last operation in the graph
> will need the output of the previous one, thus incrementing the ``more
> recently used'' value of this one, preventing the important previous
> cache line from being swa
hi,
i'm new to the list and was following your caching discussion with some
interest, as i was just implementing something similar for an
open-source interactive photo-development software and was thinking
about using GEGL instead.
for this applicati
Øyvind Kolås wrote:
> On Mon, Jun 15, 2009 at 8:12 PM, Patrik Östman wrote:
>
>> node 1 to node 2. Are there any significant changes to cache
>> handling between 0.0.20 and 0.0.22, or is there a
>> setting that must be turned on to get 'per node caches'
>> functionality?
>>
>
> The ca
On Mon, Jun 15, 2009 at 8:12 PM, Patrik Östman wrote:
> node 1 to node 2. Are there any significant changes to cache
> handling between 0.0.20 and 0.0.22, or is there a
> setting that must be turned on to get 'per node caches'
> functionality?
The caching policies and mechanisms in GEGL are
Hi.
I repost my previous message because of bad formatting.
Hope this will be better...
I have some questions regarding the cache strategy in gegl.
I have tested using both 0.0.20 and 0.0.22 and I find them
different in the way nodes are cached.
GEGL 0.0.20:
Creating the graph below:
1
|
2
|
3
|
4
Hi. I have some questions regarding the cache strategy in gegl. I have
tested using both 0.0.20 and 0.0.22 and I find them different in the way
nodes are cached. GEGL 0.0.20: Creating the graph below:
1
|
2
|
3
|
4
Rendering node 4 and after that rendering node 2. When rendering node 2 the
output i