As far as I understand, Alex was trying to avoid the scenario where a user
has to bring a 1 TB dataset to each node of a 50-node cluster and then
discard 49/50 of the loaded data. To me this seems like a very good catch.
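For anyone skimming the thread, the pattern in question is roughly the one
below. This is only an illustrative sketch of the usual CacheStore.loadCache
flow; FullScanStore, Person and the readEntireDataset()/readOne()/writeOne()/
deleteOne() helpers are made-up names, not anything from our code base.

import java.util.Map;
import javax.cache.Cache;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.lang.IgniteBiInClosure;

// Illustrative store whose loadCache() scans the whole external dataset.
public class FullScanStore extends CacheStoreAdapter<Long, Person> {
    @Override public void loadCache(IgniteBiInClosure<Long, Person> clo, Object... args) {
        // cache.loadCache(null) broadcasts this to every node, so a 50-node
        // cluster reads the 1 TB dataset 50 times...
        for (Map.Entry<Long, Person> e : readEntireDataset()) // hypothetical source
            clo.apply(e.getKey(), e.getValue());
        // ...and each node then keeps only the ~1/50 of entries mapped to it,
        // dropping the rest.
    }

    @Override public Person load(Long key) { return readOne(key); }   // read-through
    @Override public void write(Cache.Entry<? extends Long, ? extends Person> e) { writeOne(e); } // write-through
    @Override public void delete(Object key) { deleteOne(key); }
}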

However, I agree with Val that this can be implemented apart from the
store: the user can continue using the store for read/write-through, and
there is probably no need to alter any API.
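To make that concrete, here is a minimal sketch of what I have in mind.
It assumes a cache named "persons" already configured on the nodes with its
CacheStore and read/write-through enabled; Person and readDataset() are
again made-up names. The bulk load goes through IgniteDataStreamer, which
batches entries and routes each one only to its affinity node, so no node
has to pull the whole dataset and discard most of it.

import java.util.Map;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

// Initial load done apart from the store.
try (Ignite ignite = Ignition.start();
     IgniteDataStreamer<Long, Person> streamer = ignite.dataStreamer("persons")) {
    for (Map.Entry<Long, Person> e : readDataset()) // hypothetical external source
        streamer.addData(e.getKey(), e.getValue());
}

// The cache keeps its CacheStore, so individual load(), write() and delete()
// calls still go through read/write-through as before.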

Maybe we should outline Val's suggestion in the documentation and describe
it as one of the possible scenarios. Thoughts?

--Yakov
