Hi,

I'd like to react to a couple of your points, bearing in mind that I'm not 
claiming to be "right", just putting it out there for discussion.
See my comments below.

Date: Wed, 1 Nov 2006 09:29:58 -0800
From: "Igor Vaynberg" <[EMAIL PROTECTED]>
Subject: Re: [Wicket-user] shades, and caching
To: wicket-user@lists.sourceforge.net

On 11/1/06, Geoff hendrey <[EMAIL PROTECTED]> wrote:
>> Is the Hibernate L2 cache a distributed cache?


>in hibernate it is a pluggable implementation. by default it uses ehcache
>which as of 1.2 has clustering support afaik. but i hope it doesnt replicate
>entities over the cluster and just replicates the evict calls instead.

The problem with a non-replicated cache is that it doesn't work well for your 
use case -- serializing state across members of a cluster. What will happen is 
that your state gets deserialized on some machine, then a so-called "seppuku" 
eviction occurs cluster-wide, and the fresh results get reloaded *from the 
database* onto that single machine where your session is now live, while all the 
other nodes drop the relevant stuff from cache. I certainly believe you are 
seeing a benefit from your cache, but I suspect that most of the benefit is not 
in session replication or failover. You will definitely see benefits if you are 
doing N+1 loading, but why do N+1 loading?
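
To be concrete about what I mean by N+1 loading, here's a rough plain-JDBC 
sketch (table and column names are made up, and this is not Shades- or 
Hibernate-specific). The first method does one query for the listing plus one 
query per row shown; the second gets the same data in a single round trip.

import java.sql.*;
import java.util.*;

public class NPlusOneSketch {

    // The N+1 pattern: one query for the listing, then one more query per row shown.
    static void renderListingNPlusOne(Connection con) throws SQLException {
        List<Long> ids = new ArrayList<Long>();
        Statement st = con.createStatement();
        ResultSet rs = st.executeQuery("SELECT id FROM contact");   // the "1"
        while (rs.next()) {
            ids.add(rs.getLong(1));
        }
        rs.close();
        st.close();

        PreparedStatement ps =
            con.prepareStatement("SELECT number FROM phone WHERE contact_id = ?");
        for (long id : ids) {                                       // the "N"
            ps.setLong(1, id);
            ResultSet r = ps.executeQuery();
            while (r.next()) { /* render r.getString("number") */ }
            r.close();
        }
        ps.close();
    }

    // The alternative: one joined query, one round trip, same data.
    static void renderListingJoined(Connection con) throws SQLException {
        Statement st = con.createStatement();
        ResultSet rs = st.executeQuery(
            "SELECT c.id, c.name, p.number " +
            "FROM contact c LEFT JOIN phone p ON p.contact_id = c.id");
        while (rs.next()) { /* render name and number together */ }
        rs.close();
        st.close();
    }
}

For a 10-item phonebook listing that's 11 round trips versus 1.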


>the cache doesnt live in httpsession so that is ok. httpsession is the
>important part because it is expensive to replicate for clustering so you
>want to keep it as small as possible. of course expensive is a relative
>term.

I see this perspective. It's always a "time vs. space" tradeoff: you want to 
incur the cost of hitting the database in order to minimize the amount of 
state you have to squirt between machines. However, I'd like to point out that 
unless you are using a distributed, hard cache, this architecture is just going 
to batter the database and turn it into a bottleneck for some applications, and 
it might not be a good idea to assume the system has a distributed hard 
cache.

I think Shades is optimized enough that serializing the 
DatabaseSession only roughly doubles the space you would 
need to serialize the POJOs alone. It seems to me that if your POJOs have a 
dozen or so primitive fields, simply detaching the serializable POJO and 
hanging onto it for re-attachment yields a better performing system than hitting 
the database once for each POJO being displayed on the page when you attach. I 
think the determining factor is the size of the POJOs themselves. My rule of 
thumb would be: "if the POJOs have a few dozen primitive fields, just serialize 
them out along with the session."
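
To illustrate the tradeoff, here's a sketch of the two approaches. The class 
names are made up for illustration -- this is not the Shades or Wicket API.

import java.io.Serializable;

// A POJO with a handful of primitive fields, as loaded by Shades.
class Contact implements Serializable {
    long id;
    String name;
    String number;
}

// Whatever you use to (re)load a Contact by primary key.
interface ContactDao {
    Contact loadById(long id);
}

// Variant 1: keep only the primary key. The POJO is never replicated,
// but it must be reloaded from the database on every attach, on every node.
class IdOnlyModel implements Serializable {
    private final long contactId;
    private transient Contact contact;

    IdOnlyModel(long contactId) { this.contactId = contactId; }

    Contact getObject(ContactDao dao) {
        if (contact == null) {
            contact = dao.loadById(contactId);   // one DB hit per attach
        }
        return contact;
    }

    void detach() { contact = null; }            // drop it before serialization
}

// Variant 2: keep the detached POJO itself. A few dozen primitive fields
// serialize cheaply, and no DB hit is needed when the session wakes up elsewhere.
class SerializedPojoModel implements Serializable {
    private final Contact contact;

    SerializedPojoModel(Contact contact) { this.contact = contact; }

    Contact getObject() { return contact; }
}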


>hope that example explains it. notice i didnt make cache transient to really
>drive the point home. so when serialized the model only keeps the long id,
>the other information is redundant and would be a waste to serialized and
>replicate because

>(a) it can be recreated

Yes, but it is very expensive unless you use a distributed cache, in which case 
they are not really "recreated", and then you still have the volatility 
issue (b).

>(b) db data is volatile, so it needs to be reloaded next request anyways.

>>I want to point out, that Shades is extemely efficient with regards to
>> memory. First off, in Shades, POJO's loaded by Shades *never* have live
>>references to other POJO's. It's a radical design choice, but one I feel is
>> justified, for both performance, and streamlining. In Shades, if you want to
>> access a non-primitive field of a POJO, you must retrieve it explciitly by
>> query. So when you serialize out a POJO, that was loaded by Shades, you
>> never have to worry that it might have dragged an entire object-web out of
>> the database with it.


>yes, but you still dont want to serialize the pojo for the two reasons i
>outlined in the above paragraph.

Is (a) a hard-and-fast rule? I tend to think it depends on how big a footprint 
the POJO has. And data volatility depends on the application. Volatility can be 
addressed through optimistic concurrency, which Shades uses. Also, I'm going to 
add "read-only" queries to Shades, in which case there will be close to zero 
space overhead associated with each POJO beyond the size of the POJO itself, as 
long as you don't plan on updating the data.

Considering the phonebook example, the query that puts the 10 items on the 
page is a read-only query. When you go to the detail page, there is only one 
record at a time that can be edited. So you solve volatility and state size in 
one fell swoop by using read-only queries for the "listing" pages, and 
regular queries with optimistic concurrency for the "edit" pages.
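
For what it's worth, here's roughly what I mean by the version-column flavor of 
optimistic concurrency, as a plain-JDBC sketch (made-up table and column names, 
not Shades' actual API):

import java.sql.*;

public class OptimisticUpdateSketch {

    // Returns true if the update won; false means someone else changed the row
    // since we read it, and the edit page should reload and retry or report it.
    static boolean updateNumber(Connection con, long id, long expectedVersion,
                                String newNumber) throws SQLException {
        PreparedStatement ps = con.prepareStatement(
            "UPDATE contact SET number = ?, version = version + 1 " +
            "WHERE id = ? AND version = ?");
        try {
            ps.setString(1, newNumber);
            ps.setLong(2, id);
            ps.setLong(3, expectedVersion);
            return ps.executeUpdate() == 1;   // 0 rows == concurrent modification
        } finally {
            ps.close();
        }
    }
}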

This is a great discussion -- I think I will go add read-only queries to 
Shades! :-)






 