Hi!
On Oct 2, 2007, at 10:59 AM, Dustin Sallings wrote:
>> 2) The protocol always sends an ACK of some sort.
> The interface provided to my client doesn't require the caller to
> wait for ACKs. You tend to want to do that for get requests, but
> you may not care in the case of deletes or sets.
I was thinking about doing exactly this. I pulled the source for the
perl client this morning to figure out how it works, after I got some
feedback that the Ruby client is currently faster than mine. The perl
code sets non-blocking, but then loops on the read to handle the data
(since the server always responds with an answer, as confirmed by
Jamie McCarthy), which somewhat defeats the purpose of non-blocking
unless you open up multiple sockets.
> That is to say, you generally don't want to not know when
> something is over (in the case of quiet gets in the binary
> protocol, you'll want a noop or a regular get at the end), but you
> can't really send a quiet get and then wait just in case something
> starts arriving. Instead, just stream requests out and stream
> responses in. Line them up, and
Yes, I already do this for gets currently. The nature of the protocol
makes this very natural.
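For gets it comes out looking something like this (a much simplified
sketch, not my actual client code; it assumes an already connected
socket and leaves out most of the error handling):

/* Sketch: stream one multi-key get out, stream the VALUE responses
 * back in.  'fd' is a connected, blocking socket to a memcached
 * server.  Error handling and buffer limits are mostly ignored.     */
#include <string.h>
#include <unistd.h>

static void multi_get(int fd, const char **keys, int nkeys)
{
    char req[1024] = "get";
    for (int i = 0; i < nkeys; i++) {
        strcat(req, " ");
        strcat(req, keys[i]);
    }
    strcat(req, "\r\n");
    write(fd, req, strlen(req));          /* one request for all keys */

    char buf[8192];
    size_t used = 0;
    ssize_t n;
    /* Responses arrive in order: VALUE <key> <flags> <bytes>\r\n<data>\r\n
     * repeated, then END\r\n.  Just keep reading until END shows up.  */
    while ((n = read(fd, buf + used, sizeof(buf) - used - 1)) > 0) {
        used += (size_t)n;
        buf[used] = '\0';
        if (strstr(buf, "END\r\n") != NULL)
            break;
    }
    /* parse the VALUE blocks out of buf here */
}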
> You don't have to at all. A set is issued, and the state of the
> op is changed to waiting_for_response or something and it's added
> to an input queue. Then you start sending the next operation from
> your output queue. If a server starts sending stuff back to you,
> it's for whatever's on the top of your input queue (in the binary
> protocol, you can double-check this).
I was thinking to myself this morning that I was hoping the binary
protocol would be smart enough to do this.
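If I sketch out the queueing Dustin describes, I picture something
roughly like this (the names are made up for illustration, not taken
from either of our clients):

/* Issue a set, mark it waiting_for_response, queue it, and keep
 * sending the next operation; whatever the server sends back belongs
 * to the op at the head of the input queue.                          */
#include <stddef.h>

enum op_state { OP_QUEUED, OP_WAITING_FOR_RESPONSE, OP_COMPLETE };

struct op {
    enum op_state state;
    char command[512];      /* raw request bytes for this op          */
    struct op *next;
};

struct conn {
    int fd;
    struct op *input_head;  /* ops already sent, awaiting responses   */
    struct op *input_tail;
};

static void issue(struct conn *c, struct op *o)
{
    /* hand the request bytes to a (non-blocking) writer here...      */
    o->state = OP_WAITING_FOR_RESPONSE;
    o->next = NULL;
    if (c->input_tail)
        c->input_tail->next = o;
    else
        c->input_head = o;
    c->input_tail = o;
    /* ...and immediately go send the next op from the output queue.  */
}

static void on_response(struct conn *c /*, parsed response */)
{
    /* Whatever arrived is for the op at the head of the input queue. */
    struct op *o = c->input_head;
    c->input_head = o->next;
    if (c->input_head == NULL)
        c->input_tail = NULL;
    o->state = OP_COMPLETE;
    /* notify whoever was (optionally) waiting on this op             */
}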
So I see what you are doing: you have created an API which doesn't
make a user wait for an answer on a set/get unless they want to. It
is easy enough to add that sort of call, but I am left wondering what
users expect. Giving them both options is fine. The perl driver makes
me think that users do expect a "this is how it is going to be", and
from looking at it, they expect a send/ack.
At the moment, when I have a question, the Perl driver is mostly what
I am looking at. I've got a copy of the Ruby one too, but from
looking at its code, it appears to be blocking as well.
>> On a different related note, I've noticed another issue with
>> "set". When I send a "set foo 0 0 20\r\n", I have to just send
>> that message. I can't just drop the "set" and the data to be
>> stored into the same socket. If I do that, then the server removes
>> whatever portion of the key was contained in the "set". Maybe
>> this is my bug (though I can demonstrate it), but that seems like
>> a waste. AKA, if the server is doing a read() for the set and
>> tossing out the rest of the packet, then it's purposely causing
>> two round trips for the same data.
> By "socket", do you mean "packet"? My client pipelines
> requests in such a way that multiple gets, sets, deletes, etc. can
> easily get stuffed into the same packet.
Interesting. Do you have a test case of sending the SET and the value
from within the same write()/send() call?
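For reference, this is roughly what I mean by a single write() (a
rough test case, assuming a memcached server on localhost:11211):

/* Rough test case: the "set" line and the data block pushed out in
 * one write().  Expects a memcached server on localhost:11211.       */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sin;
    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_port = htons(11211);
    sin.sin_addr.s_addr = inet_addr("127.0.0.1");
    if (connect(fd, (struct sockaddr *)&sin, sizeof(sin)) != 0) {
        perror("connect");
        return 1;
    }

    /* command line + 20 bytes of data + trailing \r\n, one buffer    */
    const char msg[] =
        "set foo 0 0 20\r\n"
        "01234567890123456789\r\n";
    write(fd, msg, sizeof(msg) - 1);

    char reply[128];
    ssize_t n = read(fd, reply, sizeof(reply) - 1);
    if (n > 0) {
        reply[n] = '\0';
        printf("%s", reply);   /* expect STORED\r\n                   */
    }
    close(fd);
    return 0;
}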
> We can create a qset, but the semantics would need to be carefully
> considered. qget just keeps its errors silent and only returns
> positive results. Should a qset do the opposite, or should it
> never return anything at all?
Off the top of my head, what we would want is a way to send data, and
then asynchronously receive back what committed plus an "end of
transaction" statement. From that we can deduce what was what.
Thanks Dustin!
Cheers,
-Brian
--
_______________________________________________________
Brian "Krow" Aker, brian at tangent.org
Seattle, Washington
http://krow.net/ <-- Me
http://tangent.org/ <-- Software
http://exploitseattle.com/ <-- Fun
_______________________________________________________
You can't grep a dead tree.