Hi,
I do not have the T5 source checked out / buildable - so that's not a
very easy task for me. I will see if I find time for that.
Our app is not particularly lightweight - more of a J2EE-monster-style app -
650 entities, over 2500 Spring beans.
The pages we are initializing in Tapestry are
You don't need to build the Tapestry source.
Create a custom PageSource implementation by copying / pasting / tweaking
PageSourceImpl (look up the source on GitHub) and removing the SoftReference
usage. Then use Tapestry IoC to override the built-in PageSource service with
your custom impl.
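The override step could look roughly like the sketch below - a framework-configuration fragment, assuming a Tapestry 5.3/5.4-style package layout; `HardCachingPageSourceImpl` is the hypothetical tweaked copy of PageSourceImpl mentioned above, not an existing class:

```java
// AppModule.java - sketch only; HardCachingPageSourceImpl is a hypothetical
// copy of PageSourceImpl with the SoftReference wrapping removed.
import org.apache.tapestry5.internal.services.PageSource;
import org.apache.tapestry5.ioc.MappedConfiguration;
import org.apache.tapestry5.ioc.ServiceBinder;
import org.apache.tapestry5.ioc.annotations.Contribute;
import org.apache.tapestry5.ioc.annotations.Local;
import org.apache.tapestry5.ioc.services.ServiceOverride;

public class AppModule
{
    public static void bind(ServiceBinder binder)
    {
        // Bind the tweaked copy under its own id so it can be injected below.
        binder.bind(PageSource.class, HardCachingPageSourceImpl.class)
              .withId("HardCachingPageSource");
    }

    @Contribute(ServiceOverride.class)
    public static void overridePageSource(MappedConfiguration<Class, Object> configuration,
                                          @Local PageSource hardCachingPageSource)
    {
        // Replace the built-in PageSource with the hard-caching variant.
        configuration.add(PageSource.class, hardCachingPageSource);
    }
}
```

This is not standalone-compilable without the Tapestry jars; the `ServiceOverride` contribution is the standard way to swap out a built-in service.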
Hi,
I now found time to write up a short report on this topic.
I summarized my results in the following PDF file:
http://www.schmelzer.cc/Downloads/Files/Tapestry-Memory-Performance.pdf
The main issue is that you can bring a Tapestry-based system
into a situation where it gets slower
Actually Robert, I'd love it if you could patch/override T5 core just
enough to disable SoftReferences and re-run your test. The results may
surprise you. I could almost guarantee you'd see the same performance
pattern for any modern JPA 2.x application. At 1.2 GB, it doesn't look like
your test
I'm feeling that Robert is making a very good case here. I could imagine a
page-level annotation to either enable or disable eviction of a page
instance after a period of time ... but that can come later. I do think
that hard-caching of pages will lead to more predictable response
performance.
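Such a page-level annotation does not exist in Tapestry; as a purely hypothetical illustration, it might be no more than a runtime-retained marker that the page pool could check via reflection (`NeverEvict`, `CheckoutPage`, and `NewsPage` below are all invented names):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class PageEvictionAnnotationDemo
{
    // Hypothetical annotation (not part of Tapestry): opt a page class
    // out of time-based eviction, pinning it in the page cache.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    public @interface NeverEvict
    {
    }

    @NeverEvict
    static class CheckoutPage {}

    static class NewsPage {}

    public static void main(String[] args)
    {
        // The cache implementation would consult the annotation like this:
        System.out.println("CheckoutPage pinned: "
                + CheckoutPage.class.isAnnotationPresent(NeverEvict.class));
        System.out.println("NewsPage pinned: "
                + NewsPage.class.isAnnotationPresent(NeverEvict.class));
    }
}
```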
A configurable cache might be OK, but what Robert is showing is a highly
typical performance-degradation pattern for any sufficiently large Java
application. Tapestry's page cache is hardly the only place where soft
references are used. When your memory budget is too small, most system
engineers
By removing the SoftReference in PageSourceImpl, you would get an
OutOfMemoryError directly when you reach the memory limit, and the GC would
not try to fix this by throwing away PageImpl instances.
So you would fail earlier, on your test env. Otherwise things would come
up during
Sorry, I was imprecise - my example should have referred to the
EntityManagerFactory (SessionFactoryImpl in Hibernate). You would not
expect it to throw away its cached configuration under memory pressure.
I do not expect that from Tapestry either.
I cannot make our results public because
On Thu, Mar 19, 2015 at 12:24 AM, Robert Schmelzer rob...@schmelzer.cc
wrote:
Sorry, I was imprecise - my example should have referred to the
EntityManagerFactory (SessionFactoryImpl in Hibernate). You would not
expect it to throw away its cached configuration under memory pressure. I
do
On Wed, Mar 18, 2015 at 12:44 AM, Robert Schmelzer rob...@schmelzer.cc
wrote:
I do not agree with you on that point. Tapestry is designed to cache the
page. When you do not have enough memory to hold your pages cached,
basically the system does not work as designed, so you should fail early.
A time-based or LRU algorithm is not really a good thing here: even when I
use a page just once a day, I do not want to have it initialized on the fly.
You might run into problems holding your SLA.
In my opinion, Tapestry is designed to cache the pages. If it cannot do
so - it must throw an error
I do not agree with you on that point. Tapestry is designed to cache
the page. When you do not have enough memory to hold your pages cached,
basically the system does not work as designed, so you should fail early.
Otherwise you possibly defer the problem to production use. Fail early
means you
On Wed, 18 Mar 2015 04:44:10 -0300, Robert Schmelzer rob...@schmelzer.cc
wrote:
I do not agree with you on that point. Tapestry is designed to cache
the page. When you do not have enough memory to hold your pages cached,
basically the system does not work as designed, so you should fail
Hello,
I recently came across the implementation of PageSourceImpl where
PageImpl instances are softly referenced in the pageCache:
private final Map<CachedPageKey, SoftReference<Page>> pageCache =
CollectionFactory.newConcurrentMap();
This implementation caused trouble when you bring
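The behavioral difference between that soft-referenced map and a plain hard-referenced one can be shown in isolation with plain JDK classes (the `Page` class and cache keys below are stand-ins, not Tapestry's; `SoftReference.clear()` simulates what the GC does under memory pressure):

```java
import java.lang.ref.SoftReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SoftCacheDemo
{
    // Stand-in for Tapestry's PageImpl.
    static final class Page
    {
        final String name;
        Page(String name) { this.name = name; }
    }

    public static void main(String[] args)
    {
        // Soft-referenced cache, as in PageSourceImpl:
        // entries may vanish whenever the GC needs memory.
        Map<String, SoftReference<Page>> softCache = new ConcurrentHashMap<>();
        softCache.put("Index", new SoftReference<>(new Page("Index")));

        // Simulate the GC clearing the referent under memory pressure.
        softCache.get("Index").clear();

        // The key is still present, but the page is gone
        // and would have to be re-assembled on the next request.
        System.out.println("soft cache hit: " + (softCache.get("Index").get() != null));

        // Hard-referenced cache: the page survives until explicitly removed,
        // at the cost of an OutOfMemoryError if the heap is too small.
        Map<String, Page> hardCache = new ConcurrentHashMap<>();
        hardCache.put("Index", new Page("Index"));
        System.out.println("hard cache hit: " + (hardCache.get("Index") != null));
    }
}
```

This is exactly the trade-off debated in this thread: the soft variant degrades silently (cache misses and re-assembly), the hard variant fails loudly.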
In my opinion, soft referencing page objects is a highly appropriate usage
here. If there's pressure on the available memory, it makes sense to trade
performance for memory instead of exiting with an OOM. This is a simple
condition to detect and should be visible with any reasonable monitoring
tool. If
Possibly we need something more advanced: our own reference type that can
react to memory pressure by discarding pages that haven't been used in a
configurable amount of time.
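One minimal sketch of such a time-based eviction scheme, using only JDK classes (the class and method names are invented for illustration; a real version would hook `evictIdle()` to a memory-pressure signal or a periodic timer rather than being called by hand):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: a hard-referencing cache whose entries record their
// last access time, so an eviction pass can discard only idle entries.
public class TimedEvictionCache<K, V>
{
    private static final class Entry<V>
    {
        final V value;
        volatile long lastAccess;
        Entry(V value, long now) { this.value = value; this.lastAccess = now; }
    }

    private final Map<K, Entry<V>> map = new ConcurrentHashMap<>();
    private final long maxIdleMillis;

    public TimedEvictionCache(long maxIdleMillis) { this.maxIdleMillis = maxIdleMillis; }

    public void put(K key, V value)
    {
        map.put(key, new Entry<>(value, System.currentTimeMillis()));
    }

    public V get(K key)
    {
        Entry<V> e = map.get(key);
        if (e == null) return null;
        e.lastAccess = System.currentTimeMillis(); // refresh on every hit
        return e.value;
    }

    // Called on memory pressure (or a timer): drop entries idle longer
    // than maxIdleMillis; recently used pages stay hard-referenced.
    public void evictIdle()
    {
        long cutoff = System.currentTimeMillis() - maxIdleMillis;
        map.entrySet().removeIf(en -> en.getValue().lastAccess < cutoff);
    }

    public static void main(String[] args) throws InterruptedException
    {
        TimedEvictionCache<String, String> cache = new TimedEvictionCache<>(200);
        cache.put("Index", "page-Index");
        cache.put("About", "page-About");
        Thread.sleep(400);
        cache.get("Index");   // refresh Index's last-access time
        cache.evictIdle();    // About has been idle > 200 ms and is discarded
        System.out.println("Index cached: " + (cache.get("Index") != null));
        System.out.println("About cached: " + (cache.get("About") != null));
    }
}
```

Unlike SoftReferences, this keeps eviction deterministic and tunable, which addresses the SLA concern raised earlier in the thread.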
Or perhaps we could just assume that any page that has been used once needs
to be used in the future and get rid of the