Writing a large amount of data in many really small pieces is generally
going to be slower than writing it in fewer, larger pieces.

This might reverse at very large sizes.

But you should test it yourself if you really need a definitive answer.
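
If it helps, here is a rough, untested sketch of the kind of comparison I
mean, using the standard ZooKeeper Java client from a single client. The
connect string, session timeout, and znode paths are just placeholders,
and I keep the large payload a bit under the default jute.maxbuffer limit
(about 1 MB) so the single big write is accepted.

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class WriteSizeBench {
    public static void main(String[] args) throws Exception {
        // Placeholder connect string and timeout; adjust for your ensemble.
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> { });

        // Stay a little under the default ~1 MB request size limit.
        byte[] large = new byte[1000 * 1000];
        byte[] chunk = new byte[128];
        int pieces = large.length / chunk.length;

        // Hypothetical benchmark znodes; create() fails if they already exist.
        zk.create("/bench-large", new byte[0],
                  ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        zk.create("/bench-small", new byte[0],
                  ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);

        // Case 1: one write of the whole payload.
        long t0 = System.nanoTime();
        zk.setData("/bench-large", large, -1);
        System.out.println("1 large write:  "
                + (System.nanoTime() - t0) / 1_000_000 + " ms");

        // Case 2: the same amount of data as many 128-byte writes.
        long t1 = System.nanoTime();
        for (int i = 0; i < pieces; i++) {
            zk.setData("/bench-small", chunk, -1);
        }
        System.out.println(pieces + " small writes: "
                + (System.nanoTime() - t1) / 1_000_000 + " ms");

        zk.close();
    }
}

Splitting the small writes across several clients, as you describe, would
change the numbers, so treat this only as a starting point for your own
measurements.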

On Tue, Apr 13, 2010 at 7:22 PM, li li <liqiyuan...@gmail.com> wrote:

> Dear developer,
>     We are doing research using ZooKeeper in our experiments, and I am
> currently studying ZooKeeper's performance.
>     We need to know whether the size of the data written to a znode
> influences the speed of the write operation.
>     For example, when we have 1 MB of data to write to a znode, which of
> the following two cases is better? Case 1: we write the 1 MB of data in a
> single operation. Case 2: we break the 1 MB of data into several sections
> of 128 bytes each and then write them using several clients. Do these two
> cases affect the write operation differently? Which one is better?
>     Thank you for reading; I'm looking forward to your reply.
>     With best wishes!
> Lily
