On 19.01.2014 21:58, Armin Ronacher wrote:
Hi,

I'm currently wrapping all of redis in a fairly high-level library similar to the Python binding. It's currently living here:
  https://github.com/mitsuhiko/redis-rs

Cool, another redis library :). This is mine: [1]


In general I did not encounter many problems with that, but there are some open questions about how pipelining and connection pooling should work.

This is roughly how the library works:

  extern mod redis;

  fn main() {
    let client = redis::Client::open("redis://127.0.0.1/").unwrap();
    let mut con = client.get_connection().unwrap();
    println!("Got value: {}", con.get("my_key").unwrap_or("<no value>"));
  }

Pipelining:

I currently have no idea how to implement this. The API I had in mind was this:

  let mut con = client.get_connection().unwrap();
  let mut counter;
  let mut data;
  con.pipeline()
    .incr("counter").tap(|value| { counter = value; })
    .get("data_key").tap(|value| { data = value; })
    .execute();

The general idea is pretty simple: whereas a regular redis connection immediately returns the results, the pipeline buffers them up and runs the tap'ed callbacks to hand the data back. Unfortunately I have no idea how this can currently be implemented. There are two issues with it: first, I don't fancy implementing all methods twice (once for the connection and once for the pipeline); secondly, the .tap() method needs to change its signature depending on the return value of the most recent operation.

I think if you add something like "Postpone(&mut Connection)" to the Value type it could work. The tap method would only be defined for the Value type and would fail if the value is not Postpone.
Something like this:

enum Value<'a> {
  Nil,
  Int(i64),
  Data(~[u8]),
  Error(~str),
  Status(~str),
  // pipeline mode: the reply is not available yet, so the connection is
  // handed back and tap() can register a callback for it
  Postpone(&'a mut Connection)
}

impl<'a> Value<'a> {
  // tap() only makes sense on a postponed (pipelined) reply
  fn tap(self, callback: |Value|) -> &'a mut Connection {
    match self {
      Postpone(conn) => { conn.add_callback(callback); conn }
      _ => fail!("tap() called outside of a pipeline")
    }
  }
}

Of course incr() etc. will only return Postpone if the connection is in pipeline mode; otherwise it will execute normally and return the redis value.
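
A rough sketch of that branch (the pipelined flag and the buffer_command()/send_command() helpers are made up here, just to show the shape):

impl Connection {
  fn incr<'a>(&'a mut self, key: &str) -> Value<'a> {
    if self.pipelined {
      // pipeline mode: queue the command; the reply is delivered later
      // through the callback registered via tap()
      self.buffer_command("INCR", key);
      Postpone(self)
    } else {
      // normal mode: send the command right away and return the reply
      self.send_command("INCR", key)
    }
  }
}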

Lastly, because the pipeline borrows the connection as mutable, the code currently would need to be placed in a separate scope; otherwise the con object becomes unusable after the pipeline call.
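
I.e. something like this (same hypothetical pipeline API as above, just illustrating the scoping):

  let mut con = client.get_connection().unwrap();
  let mut counter;
  let mut data;
  {
    // the pipeline holds a mutable borrow of `con` for this whole block
    con.pipeline()
      .incr("counter").tap(|value| { counter = value; })
      .get("data_key").tap(|value| { data = value; })
      .execute();
  }
  // the borrow ends with the scope, so `con` can be used again here
  con.get("another_key");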

Connections:

I don't know what the best way to deal with connections is. Right now I have a client object which tries to connect to redis and does the address resolution. The actual connection however is provided by a get_connection() function on it which will connect and return a connection object. This way two tasks can have a connection each. I was thinking of extending this with a connection pool but I'm not sure how to do this properly since I don't want that the client needs to be mutable to get a connection. That would make it much harder to use with multiple tasks.
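
In other words, the current shape is roughly this (the field, error type and stub bodies below are only indicative, not the actual redis-rs definitions):

// stand-ins for the real types, only to show the ownership structure
struct Connection { addr: ~str }
struct ConnectError;

struct Client {
  addr: ~str,  // resolved once when the client is opened
}

impl Client {
  // &self on purpose: several tasks can hold the client and each one
  // opens its own connection from it; a pool would have to hide its
  // mutable state behind this immutable interface as well
  fn get_connection(&self) -> Result<Connection, ConnectError> {
    // a real implementation would open the TCP connection here
    Ok(Connection { addr: self.addr.clone() })
  }
}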

Hm, in my rust-redis library, I just connect in redis::Client::new(). That's pretty simple. What's the problem if each task just calls Client::new() instead of get_connection()? If address resolution is your problem, I'd solve it differently.

Regards,

  Michael

[1]: https://github.com/mneumann/rust-redis
