On Monday, January 23, 2012 2:16:34 PM UTC-8, Wendy Cheng wrote:

> 1. any plan to move (merge) the engine-pu back to main line (memcached)
> source?

That's the intention. We're trying to work this out now.
> 2. a newbie question .. using the default engine as a reference,
> within "do_store_item()", I don't seem to be able to find the exact
> logic where the data gets filled into "item"? Intuitively, I could
> imagine the item data pointer may have been passed to the network layer
> as part of the network receive buffer (?). So the data could have been
> filled into the right place before "do_store_item()" is actually
> invoked? Could someone give a short description of how this piece of
> logic actually works (i.e. how "item_store" is implemented)?

The core server does this. Perhaps this would be a good "flow" documentation bug to file, since it could use some pictures and such. Basically, it works like this:

1) The server asks the engine to allocate a specific amount of storage for the specified key.
2) The server does magic(*) to fill in that buffer with the value for the key.
3) The server asks the engine to store the item.

#1 is allocate(). Basically, in a naïve engine, that's malloc().

#2 is a tiny abstraction above setting the current buffer for network reads on that file descriptor. The core manages reading the data and putting it into that buffer.

#3 is where you actually link it in. Again, in your naïve engine, you'd lock your hash table, link the new item in, and unlock it. The value has already been transmitted, so you have the entire key and value at this point.

The reason you separate #1 and #3 is so that #2 can take as long as it wants without having any performance impact on #3. The amount of data you have to copy at that point never changes, because you've already received it all and stored it in a dedicated location.

The core could do #1 on its own, but this gives the engine the opportunity to either return an OOM error or at least start performing evictions *before* the server ever starts reading values.

I think implementing the most primitive engine possible (malloc and a single giant linked list) would be a useful exercise here.