The focus of the plugin was mainly on caching model instances; it just 
happened that digging into the dataset gave us some additional flexibility. 
The default behavior only caches queries where limit = 1. All other 
queries have to be cached explicitly, so the expectation is that 
developers know what they're doing and are aware of invalidation issues. 
(And yes, there are plenty of cases where those can happen.)
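The limit = 1 rule can be sketched in plain Ruby. This is a hypothetical illustration, not the plugin's actual code: `QueryCache` and its `fetch` method are made up here to show the decision the plugin is described as making, namely serving from cache only for single-row queries.

```ruby
# Hypothetical sketch: serve from cache only when the query fetches a
# single row (limit == 1); everything else goes straight to the database.
class QueryCache
  def initialize
    @store = {}
  end

  def fetch(sql, limit:)
    return yield unless limit == 1   # non-limit-1 queries are never cached
    @store[sql] ||= yield            # limit-1 queries are cached by SQL key
  end
end

cache = QueryCache.new
hits = 0
lookup = -> { hits += 1; { id: 1 } } # stand-in for a real database round trip

cache.fetch("SELECT * FROM users WHERE id = 1 LIMIT 1", limit: 1, &lookup)
cache.fetch("SELECT * FROM users WHERE id = 1 LIMIT 1", limit: 1, &lookup)
puts hits  # => 1, the second call is served from the cache

cache.fetch("SELECT * FROM users", limit: nil, &lookup)
cache.fetch("SELECT * FROM users", limit: nil, &lookup)
puts hits  # => 3, the unrestricted query hits the database both times
```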

> That makes sense, thanks. It seems like this approach may have cache
> invalidation issues, though:
>
>   user = User.where(:id=>[1]).all
>   User[:email => user.email].destroy
>   User.where(:id=>[1]).all # what's the result?

The result here is going to be [] because .all wouldn't cache. However:

user = User.cached.where(:id=>[1]).all
User[:email => user.first.email].destroy
User.where(:id=>[1]).all

Would end up returning [<deleted_user_instance>]. This is also why caching 
EVERYTHING is not on by default. Haha. 
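The stale read above can be sketched in plain Ruby. This is a made-up model of the situation, not the plugin's implementation: the assumption it illustrates is that an explicitly cached dataset result is keyed by that dataset's SQL, and a destroy issued through a different dataset never touches that key.

```ruby
# Hypothetical sketch: an explicitly cached dataset result goes stale
# because the destroy happens through a *different* dataset and never
# invalidates the entry keyed by the first dataset's SQL.
cache = {}
db = { 1 => { id: 1, email: "a@example.com" } }

# User.cached.where(:id=>[1]).all -- result stored under its SQL key
sql = "SELECT * FROM users WHERE id IN (1)"
cache[sql] = db.values.select { |r| r[:id] == 1 }

# User[:email => ...].destroy -- the row is gone, but this dataset only
# knows its own SQL, so the entry under `sql` above is left untouched
db.delete(1)

# Re-running the cached query now returns the deleted row
stale = cache[sql]
puts stale.length  # => 1, a deleted user instance
```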

> Or:
>
>   User[:email => '[email protected]'].update(:email => '[email protected]')
>   User[:email => '[email protected]'] # what's the result?

This one should actually work just fine: Dataset#update deletes the cache 
entries related to the dataset. The cache is wiped before the actual 
update is executed, so the result would be nil.
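That invalidate-before-write order can be sketched with plain Ruby hashes. This is an illustration of the described behavior under that assumption, not the plugin's code: delete the cache entry first, then mutate the row, so the next lookup misses the cache and re-queries a table that no longer matches.

```ruby
# Hypothetical sketch of why the update case is safe: the cache entry is
# deleted *before* the UPDATE runs, so the follow-up lookup misses the
# cache and falls through to the (now changed) table.
key = "SELECT * FROM users WHERE email = 'old' LIMIT 1"
cache = { key => { id: 1, email: "old" } }
db = { 1 => { id: 1, email: "old" } }

# Dataset#update as described above: invalidate first, then mutate
cache.delete(key)
db[1][:email] = "new"

# The cached lookup misses and re-queries; no row matches the old email
result = cache.fetch(key) { db.values.find { |r| r[:email] == "old" } }
puts result.inspect  # => nil
```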

-- 
You received this message because you are subscribed to the Google Groups 
"sequel-talk" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/sequel-talk.
For more options, visit https://groups.google.com/groups/opt_out.
