Thank you, James. That certainly helps me understand why...

On Sun, Mar 13, 2011 at 10:01 AM, [email protected] <[email protected]> wrote:
> On Fri, Mar 11, 2011 at 6:51 AM, Roy Mathew <[email protected]> wrote:
> > Hi Gustavo,
> >
> > Thanks for your reply. My assumption was that the goal of caching was
> > to avoid making the same query more than once in a given transaction.
> > However, my (perhaps flawed) analysis after a bit of digging suggests
> > that this is not the case: the caching model used in the Storm
> > container is more about managing in-memory objects that correspond to
> > table rows, and doing the right thing with their state when dirty, etc.
> >
> > Our application uses a SQL container model. I turned on logging on
> > the Postgres backend and saw that doing something like this inside
> > a transaction:
> >
> >     obj1 = C['key1']
> >     obj1 = C['key1']  # and again
> >
> > (that is, invoking __getitem__ twice) causes the same SQL query to run
> > twice. Please help me confirm that I understand this correctly. Thanks!
>
> Storm's object cache works off the table's primary key. It doesn't
> have any knowledge of any alternative keys for the table, so queries
> that rely on those keys won't benefit from the cache.
>
> As a general rule, calls to Store.get() (and code that calls it, such
> as References to the primary key of a table) may avoid a query if
> there is a cache hit, while calls to Store.find() will always issue a
> query.
>
> James.

--
Roy.
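The rule James describes can be sketched with a toy model. Note this is a minimal illustration, not Storm's actual implementation: `ToyStore` and its methods are made up, standing in for an identity map keyed by primary key (like `Store.get()`) versus a query-issuing lookup by an alternative key (like `Store.find()` or a container `__getitem__` built on it).

```python
# Toy model (NOT Storm itself) of the caching rule described above:
# primary-key lookups can hit an in-memory identity map, while
# find()-style queries by an alternative key always go to the database.
class ToyStore:
    def __init__(self):
        self._cache = {}        # identity map keyed by primary key
        self.queries_issued = 0

    def _run_sql(self, sql):
        # Stand-in for issuing real SQL against the backend.
        self.queries_issued += 1
        return {"pk": 1, "alt_key": "key1"}

    def get(self, pk):
        # Like Store.get(): may avoid a query on a cache hit.
        if pk in self._cache:
            return self._cache[pk]
        row = self._run_sql("SELECT ... WHERE pk = %r" % pk)
        self._cache[pk] = row
        return row

    def find(self, alt_key):
        # The cache knows nothing about alternative keys, so this
        # always issues a query, even for a repeated lookup.
        return self._run_sql("SELECT ... WHERE alt_key = %r" % alt_key)

store = ToyStore()
store.find("key1")
store.find("key1")   # same lookup twice -> two queries, as Roy observed
assert store.queries_issued == 2
store.get(1)
store.get(1)         # second get is a cache hit -> no new query
assert store.queries_issued == 3
```

This mirrors why `C['key1']` hit Postgres twice in Roy's log: the container lookup goes through an alternative key, so the primary-key cache never comes into play.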
--
storm mailing list
[email protected]
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/storm
