We've had some luck with bulk reads of known keys by grouping them by replica
and doing SELECT ... WHERE key IN (...). It's not compatible with all data
models, but it works well where we can get away with it.
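The grouping described above can be sketched roughly as follows. This is a minimal illustration, not our actual driver code: `replica_for` and `execute` are hypothetical stand-ins for the token-aware placement lookup and query execution a real driver would provide.

```python
from collections import defaultdict

def replica_for(key, replicas):
    # Hypothetical placement function: in a real driver this would come from
    # token-aware metadata (hashing the partition key onto the ring).
    return replicas[hash(key) % len(replicas)]

def batched_read(keys, replicas, execute):
    # Group the requested keys by the replica that owns them.
    groups = defaultdict(list)
    for key in keys:
        groups[replica_for(key, replicas)].append(key)

    # Issue one IN(...) query per replica instead of one query per key,
    # so each physical host is contacted at most once.
    results = {}
    for replica, group in groups.items():
        placeholders = ", ".join(["%s"] * len(group))
        cql = f"SELECT key, value FROM t WHERE key IN ({placeholders})"
        for key, value in execute(replica, cql, group):
            results[key] = value
    return results
```

With N keys spread over R replicas this turns N point queries into at most R queries, which is where the savings over per-key coordinator fan-out come from.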
As a more general-purpose construct it makes sense to me. In our driver
layer we have abstracted batches to support read batches (under which the
above method is applied), even though Cassandra doesn't support them
first-class.
On Tue, Oct 18, 2016, 5:00 PM Dikang Gu <dikan...@gmail.com> wrote:
> Hi there,
> We have a couple of use cases that do fan-out reads: a single read request
> from the client contains multiple keys that live on different physical
> hosts. (I know this isn't the recommended way to access C*.)
> Right now the coordinator issues separate read commands even when they go
> to the same physical host, which I think causes a lot of overhead.
> I'm wondering whether it would be valuable to provide a new read command,
> so that the coordinator can batch the reads destined for one data node,
> send them in a single message, and have the data node return the results
> for all keys that belong to it?
> Any similar ideas before?