On Aug 4, 2007, at 13:22, Vincent Hennebert wrote:

> Manuel Mall wrote:
> <snip/>
>> I have been following this discussion with very little attempt to
>> understand the intricate technical details of concurrent maps etc., but
>> I am wondering why we don't apply the KISS principle here?

Oh, but it is Simple and Stupid. :-)
Much simpler and much more stupid than our layout engine or hyphenator or property resolution code or...

The initial version I posted earlier was in fact so stupid it still contained a huge design flaw. Lucky for me, nobody actually took a look at the code... Either that, or I should assume that a whole lot of people looked at it, but no one understood it. But that doesn't make it 'not simple'!

>> IIRC the original problem was that the FOP memory footprint for
>> rendering large documents was causing issues. One set of culprits that
>> were identified were the properties. Given that a FOP rendering run is
>> single-threaded, i.e. there are no threads created within FOP, why
>> don't we start with a property cache per run?

We have already started (a couple of months ago)... with static final caches. An idea that seems to do the job nicely, although there are some reservations about the performance penalty, because the cache needs to be synchronized in that case.
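For readers who haven't followed the earlier thread, that statically shared, synchronized cache works roughly like the following sketch (class and method names are made up for illustration; this is not FOP's actual code):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of a statically shared, synchronized property
// cache; the names here are hypothetical, not FOP's actual classes.
public final class PropertyCache {

    // An immutable property value: equal values can safely share one instance.
    public static final class Property {
        private final String value;
        public Property(String value) { this.value = value; }
        public String getValue() { return value; }
        public boolean equals(Object o) {
            return o instanceof Property && ((Property) o).value.equals(value);
        }
        public int hashCode() { return value.hashCode(); }
    }

    private static final Map<Property, Property> CACHE =
            new HashMap<Property, Property>();

    private PropertyCache() { }

    // Returns the canonical instance equal to p, storing p on first use.
    // The whole method is synchronized; that lock is the performance
    // concern, since every thread parsing properties contends for it.
    public static synchronized Property fetch(Property p) {
        Property cached = CACHE.get(p);
        if (cached == null) {
            CACHE.put(p, p);
            cached = p;
        }
        return cached;
    }
}
```

Two `fetch` calls with equal values return the same instance, so only one copy of each distinct property value stays in memory, whichever rendering run created it.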

OTOH, Richard also raised the question, and to be honest, I don't have a clue for the moment:
What are realistic numbers to measure that penalty with?

I tried spawning 10 threads, and the penalty was already significant, but... that is supposing that all 10 threads would need to access the /same/ cache at the exact /same/ instant.

If each thread spends less than 10% of its time parsing properties, the chances of that happening become very small.
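The experiment can be pictured as follows; this is only a rough sketch of that kind of measurement, with made-up names and numbers, not the benchmark actually run:

```java
import java.util.HashMap;
import java.util.Map;

// Rough sketch of the kind of experiment described above: ten threads
// hammering one synchronized cache simultaneously. All names and
// numbers are illustrative; this is not the benchmark actually used.
public final class CacheContentionDemo {

    private static final int THREADS = 10;
    private static final int LOOKUPS_PER_THREAD = 50000;
    private static final int DISTINCT_KEYS = 1000;

    private static final Map<Integer, Integer> CACHE = new HashMap<Integer, Integer>();

    // Insert-or-fetch guarded by one global lock, as in the static cache.
    static synchronized Integer fetch(Integer key) {
        Integer cached = CACHE.get(key);
        if (cached == null) {
            CACHE.put(key, key);
            cached = key;
        }
        return cached;
    }

    static synchronized int cacheSize() {
        return CACHE.size();
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] pool = new Thread[THREADS];
        long start = System.nanoTime();
        for (int t = 0; t < THREADS; t++) {
            pool[t] = new Thread(new Runnable() {
                public void run() {
                    // All threads fight over the same small key set,
                    // which is the worst case for lock contention.
                    for (int i = 0; i < LOOKUPS_PER_THREAD; i++) {
                        fetch(Integer.valueOf(i % DISTINCT_KEYS));
                    }
                }
            });
            pool[t].start();
        }
        for (int t = 0; t < THREADS; t++) {
            pool[t].join();
        }
        long elapsedMs = (System.nanoTime() - start) / 1000000L;
        System.out.println(THREADS + " threads, " + cacheSize()
                + " distinct entries cached in " + elapsedMs + " ms");
    }
}
```

Running the same loop single-threaded, or against an unsynchronized per-thread map, gives the baseline to compare the elapsed time against.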

>> No threading issues, no performance issues, and large gains in memory footprint reduction.

Indeed, but I think that implementing that would be much more difficult than the current solution, and much less in keeping with the beloved KISS principle than a Simple and Stupid home-made Hashtable... :-)

>> That will even benefit the memory footprint of concurrent FOP runs.
>> Admittedly not to the extent that a globally shared cache would, but it
>> would be much simpler and we can use the standard Java collection
>> classes to implement it.

> I’m afraid I must agree with Manuel here, although I’m not very familiar
> with that whole area and I may well also be missing something.
>
> It seems to me that the main problem of FOP is that it isn’t able to
> render big documents, and that properties only play a part in that
> problem. It might be more useful to try and optimize the whole rendering
> process, from which everyone will benefit: those running FOP on a server
> as well as all others. That’s not the same kind of effort, but it’s just
> as important IMHO.
>
> A cache per rendering run would do the trick, wouldn’t it? Coupled with
> a flyweight factory for those properties with a small number of possible
> values, which themselves could be shared among the different threads.

Yes, but see my comments above: to implement this, we would be introducing yet another set of classes, making the property resolution code even more complicated than it already is. It only /seems/ simpler...
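For concreteness, the per-run cache plus flyweight factory being proposed might look something like this sketch (all names are hypothetical, not existing FOP classes):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the per-run idea: each rendering run owns an
// unsynchronized cache (a run is single-threaded), while properties
// with only a handful of possible values are flyweights shared by all
// runs. None of these names are FOP's actual classes.
public final class PerRunPropertyCache {

    // Immutable property value: safe to share within and across runs.
    public static final class Property {
        private final String value;
        public Property(String value) { this.value = value; }
        public String getValue() { return value; }
        public boolean equals(Object o) {
            return o instanceof Property && ((Property) o).value.equals(value);
        }
        public int hashCode() { return value.hashCode(); }
    }

    // Flyweights for an enumerated property, shared among all threads.
    public static final Property KEEP_AUTO = new Property("auto");
    public static final Property KEEP_ALWAYS = new Property("always");

    // Per-run state: a plain HashMap, no locking needed.
    private final Map<Property, Property> cache =
            new HashMap<Property, Property>();

    // Returns the canonical instance for this run, storing p on first use.
    public Property fetch(Property p) {
        Property cached = cache.get(p);
        if (cached == null) {
            cache.put(p, p);
            cached = p;
        }
        return cached;
    }
}
```

The per-run `HashMap` needs no synchronization precisely because a rendering run is single-threaded; only the immutable flyweight constants ever cross thread boundaries.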

> Also, maybe it’s worth keeping in mind that, while that’s not currently
> the case, we eventually want to make the rendering process
> multi-threaded. Although the two issues might actually not interfere.

Well, if anything, the big PRO for a concurrent, thread-safe cache should be precisely the prospect of FOP multithreading *internally*. In that case, the current solution (possibly, in time, again backed by a standard 1.5 ConcurrentHashMap) at least offers something which a rendering-run-local cache would lack.
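Sketched with hypothetical names, the 1.5-based variant could rely on ConcurrentHashMap's atomic putIfAbsent instead of a globally synchronized method:

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch of backing the shared cache with a JDK 1.5 ConcurrentHashMap,
// as suggested above; names are illustrative, not FOP's actual code.
public final class ConcurrentPropertyCache {

    // Immutable property value, safe to publish across threads.
    public static final class Property {
        private final String value;
        public Property(String value) { this.value = value; }
        public String getValue() { return value; }
        public boolean equals(Object o) {
            return o instanceof Property && ((Property) o).value.equals(value);
        }
        public int hashCode() { return value.hashCode(); }
    }

    private static final ConcurrentHashMap<Property, Property> CACHE =
            new ConcurrentHashMap<Property, Property>();

    private ConcurrentPropertyCache() { }

    // putIfAbsent atomically inserts p unless an equal key is already
    // mapped, returning the previous value or null. Either way we get
    // one canonical instance without a single global lock.
    public static Property fetch(Property p) {
        Property previous = CACHE.putIfAbsent(p, p);
        return previous != null ? previous : p;
    }
}
```

Because `ConcurrentHashMap` stripes its locking internally, concurrent readers and writers mostly proceed in parallel, which is what makes this attractive if FOP itself ever becomes multi-threaded.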


Cheers

Andreas
