Hi!

On Feb 14, 2008, at 11:39 PM, Dustin Sallings wrote:

On Feb 13, 2008, at 23:02, Brian Aker wrote:

As to order, memcached has to feed you keys back in the order you sent them, or you have to keep a map of key to opaque ID. That map is expensive. Dormando is saying that the end user should keep this, but you believe it belongs in the library.

I just don't understand why you think you need a map. An array of char* is perfectly sufficient.

To me, creating that array is fairly expensive. It means a malloc, which can turn into a system call (or a series of system calls).

Likewise, order doesn't matter. They do come back in the same order, but it's not guaranteed. We could guarantee it, I suppose. I just don't see the gain.

Ok, I see your point with this.

We could conditionally place the key in front of that, set the key length and adjust the header appropriately. It's for the sake of solving a problem that I still don't think exists.
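
(For reference, a rough sketch of the binary response header being discussed; the field names and widths here are one reading of the draft, not shipping code. The opaque is echoed back verbatim, and key_length would only be nonzero if the key were placed in front of the body as suggested above.)

#include <stdint.h>

/* Sketch of a draft binary-protocol response header (fields shown in
 * wire order; a real implementation would handle network byte order). */
struct binary_response_header {
  uint8_t  magic;             /* response magic byte */
  uint8_t  opcode;            /* which command this answers */
  uint16_t key_length;        /* nonzero only if the key is echoed in the body */
  uint8_t  extras_length;
  uint8_t  data_type;
  uint16_t status;
  uint32_t total_body_length; /* extras + key + value */
  uint32_t opaque;            /* echoed verbatim from the request */
  uint64_t cas;
};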

        In your API, you have this:

memcached_return memcached_mget(memcached_st *ptr, char **keys, size_t *key_length, unsigned int number_of_keys)

``char **keys'' is all the mapping you need for O(1) opaque -> key lookups. If you start your opaque at 0, you don't even have to do subtraction. The key for a given response is:

        keys[opaque]
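
(Concretely, if the client stamps request i's opaque with i, the reverse lookup is a plain array index. A minimal sketch, where struct response is a hypothetical stand-in for a parsed binary response:)

#include <stdint.h>

struct response {      /* hypothetical stand-in for a parsed binary response */
  uint32_t opaque;     /* echoed back from the request */
  /* ... value, flags, etc. ... */
};

/* Map a response back to the key that produced it: keys[opaque]. */
static const char *key_for_response(const struct response *rsp,
                                    char **keys,
                                    unsigned int number_of_keys)
{
  if (rsp->opaque >= number_of_keys)
    return NULL;       /* an opaque we never sent; ignore it */
  return keys[rsp->opaque];
}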

memcached_fetch() then has to be called to fetch the keys (or the execute method can be used). So either the user has to keep the keys around for my usage... which means mapping opaques back to key pointers...


Right, but I am going to have to malloc the keys and lengths. I have no idea whether the user still has that structure around during a fetch. There is no requirement that the caller's arrays stay in scope after memcached_mget() returns. A user can call it and lazily grab values as they want them.


Similarly, you can figure out which keys you received and which you didn't with memcmp on a bitmap. Which ones you didn't receive are the keys corresponding to the 0's in that bitmap. Using the keys in the response limits your flexibility around this.
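
(A minimal sketch of the bitmap bookkeeping being described; the names here are illustrative only, not proposed API:)

#include <stdint.h>

/* Illustrative only: one bit per key, set when that key's response arrives. */
#define BITMAP_WORDS(n) (((n) + 63) / 64)  /* uint64_t words needed for n keys */

static void mark_received(uint64_t *bitmap, uint32_t opaque)
{
  bitmap[opaque / 64] |= (uint64_t)1 << (opaque % 64);
}

static int was_received(const uint64_t *bitmap, uint32_t opaque)
{
  return (int)((bitmap[opaque / 64] >> (opaque % 64)) & 1);
}

/* After the multi-get completes, keys[i] was a miss whenever
 * was_received(bitmap, i) == 0. */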


Now that is a good reason... but right now users just do this on their own in the upper layer.

I am starting to see your point... but it seems like a lot of effort for not a lot of gain. What I would need is evidence that this actually makes a difference in performance. The two mallocs (well... perhaps one if I screw around with a single memory block) are expensive. Plus it adds memory bloat to the driver (something I have been trying to keep down).
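
(Roughly what the single-memory-block variant could look like; the names are illustrative, not the actual driver code:)

#include <stdlib.h>
#include <string.h>

/* Copy the caller's keys and lengths into one allocation so the driver
 * still has them if the caller's arrays go out of scope before fetch. */
struct key_map {
  unsigned int number_of_keys;
  char **keys;          /* pointer array, stored inside this block */
  size_t *key_lengths;  /* length array, stored inside this block */
};

static struct key_map *key_map_create(char **keys, size_t *key_length,
                                      unsigned int number_of_keys)
{
  size_t total_key_bytes = 0;
  unsigned int i;

  for (i = 0; i < number_of_keys; i++)
    total_key_bytes += key_length[i];

  /* one malloc: struct + pointer array + length array + key bytes */
  struct key_map *map = malloc(sizeof(struct key_map)
                               + number_of_keys * (sizeof(char *) + sizeof(size_t))
                               + total_key_bytes);
  if (map == NULL)
    return NULL;

  map->number_of_keys = number_of_keys;
  map->keys = (char **)(map + 1);
  map->key_lengths = (size_t *)(map->keys + number_of_keys);

  char *data = (char *)(map->key_lengths + number_of_keys);
  for (i = 0; i < number_of_keys; i++) {
    memcpy(data, keys[i], key_length[i]);
    map->keys[i] = data;
    map->key_lengths[i] = key_length[i];
    data += key_length[i];
  }

  return map;
}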

So I'll commit to creating a branch for this, putting in the memory allocations, and seeing how the performance works out.

Thanks for arguing with me over this.

Cheers,
        -Brian

--
_______________________________________________________
Brian "Krow" Aker, brian at tangent.org
Seattle, Washington
http://krow.net/                     <-- Me
http://tangent.org/                <-- Software
http://exploitseattle.com/    <-- Fun
_______________________________________________________
You can't grep a dead tree.

