>
> In general, the dataset that retrieves the rows is decoupled from the 
> model instance that is created.  What happens is the dataset gets the rows 
> as hashes (Dataset#fetch_rows), and then for model datasets, just calls the 
> dataset's row_proc with that hash (inside Dataset#each).  One way to handle 
> this is to override the dataset's row_proc and Dataset#each so that 
> Dataset#each passes both the values hash and self to the row_proc, and the 
> row_proc can then assign the dataset to an instance variable of the created 
> model instance (or store it some other way).
>

I actually did something very similar to that. I was originally messing 
with the row_proc, but it turned out the simplest thing to do was just to 
override Dataset#each, have it check whether the object responds to the 
setter method I'm using for the source dataset, and if it does, assign the 
dataset there.
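In plain Ruby (no Sequel required), the mechanics look roughly like this. The names FakeDataset, FakeModel, and source_dataset= are illustrative stand-ins, not the actual plugin API:

```ruby
# Sketch of the approach: the dataset's each builds each object via
# row_proc, then assigns itself to any object that exposes a setter.
class FakeModel
  attr_accessor :source_dataset
  attr_reader :values

  def initialize(values)
    @values = values
  end
end

class FakeDataset
  def initialize(rows)
    @rows = rows
  end

  # Stand-in for Sequel's row_proc: turns a row hash into a model instance.
  def row_proc
    ->(hash) { FakeModel.new(hash) }
  end

  # Overridden each: after row_proc builds the object, attach self when the
  # object responds to the source-dataset setter.
  def each
    @rows.each do |hash|
      obj = row_proc.call(hash)
      obj.source_dataset = self if obj.respond_to?(:source_dataset=)
      yield obj
    end
  end
end
```

The respond_to? check means the same each works for plain-hash datasets, which simply never get the assignment.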

> I'm curious as to why you'd want to use the dataset later.  What's the end 
> goal?
>

It's kind of a long story. I inherited part of a project that was using 
this plugin: https://github.com/rosylilly/sequel-cacheable 

I wanted more flexibility, and to fix a number of common issues we kept 
running into with that plugin, while maintaining compatibility with 
existing code. What ended up happening was this:

https://github.com/binarypaladin/sequel-query-cache

Instead of making the caching model-centric, I made it dataset-centric. In 
fact, the caching (both reading and writing) happens in Dataset#fetch_rows. 
Effectively, when you want a dataset cached, a hash of that dataset's SQL 
is used as the key and the row values are stored in either memcached or 
Redis. The dataset approach was taken because there were times we wanted 
specific queries cached that had nothing to do with models at all, and it 
was a more generic way to do it anyway.
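To make that concrete, here's a rough sketch of the fetch_rows-level caching. A plain Hash stands in for memcached/Redis, and the names CachedFetch, FakeDB, and cache_store are made up for the example, not the plugin's actual API:

```ruby
require 'digest'

# Hypothetical sketch of dataset-centric caching in fetch_rows: the cache
# key is a digest of the query SQL, the value is the array of row hashes.
module CachedFetch
  def cache_store
    @cache_store ||= {}
  end

  # On a cache hit, replay the stored rows; on a miss, run the real query
  # (super) and store the rows under the SQL digest.
  def fetch_rows(sql, &block)
    key = Digest::SHA1.hexdigest(sql)
    rows = cache_store[key] ||= begin
      collected = []
      super(sql) { |row| collected << row }
      collected
    end
    rows.each(&block)
  end
end

# A fake database that counts how many times a query actually runs, so the
# caching behavior is observable.
class FakeDB
  attr_reader :query_count

  def initialize
    @query_count = 0
  end

  def fetch_rows(_sql)
    @query_count += 1
    yield({ id: 1, email: 'user@example.com' })
  end
end

class CachedDB < FakeDB
  include CachedFetch
end
```

Because everything is keyed off the SQL, it works for arbitrary queries, model-backed or not.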

So, the reason for the dataset...

The caches are keyed by a hash of the dataset and set and/or retrieved 
during Dataset#fetch_rows. This means the key for the dataset used by 
User[email: '[email protected]'] is different from the key for the dataset used 
by User[1]. If the model instance returned by a dataset is updated in some 
way, it automatically updates the cache for user_instance.this (which is 
almost always identical to User[1]). A common case where we regularly make 
use of those caches is a model being retrieved either by a foreign key or 
by a unique field like an email address. Knowing the source dataset lets 
the model instance clear and/or recache the values related to that dataset 
too.
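A toy example of why that matters, assuming SQL-digest keys as above. The SQL strings and email address are made up for illustration; the plugin's real key format may differ:

```ruby
require 'digest'

# The same row fetched through two different datasets produces two
# different cache keys, because the keys are digests of the SQL.
key_by_email = Digest::SHA1.hexdigest(
  "SELECT * FROM users WHERE (email = 'user@example.com') LIMIT 1"
)
key_by_pk = Digest::SHA1.hexdigest(
  "SELECT * FROM users WHERE (id = 1) LIMIT 1"
)

row = { id: 1, email: 'user@example.com' }
cache = { key_by_email => [row], key_by_pk => [row] }

# On update, an instance that only knows its primary key can expire the
# key_by_pk entry (via instance.this), but only an instance that also
# remembers its source dataset can expire or recache key_by_email as well.
[key_by_pk, key_by_email].each { |k| cache.delete(k) }
```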

I hope that makes sense.

-- 
You received this message because you are subscribed to the Google Groups 
"sequel-talk" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/sequel-talk.
For more options, visit https://groups.google.com/groups/opt_out.
