If the amount of data handled is fairly large (say, on the order of MBs or
GBs), you can also consider storing the actual data in HDFS and only the
metadata in ZooKeeper. I followed this approach when implementing a use case
that required handling data beyond ZooKeeper's znode size limit.
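
For reference, here is a minimal sketch of that pattern in Java. The HDFS
path, ZooKeeper connect string, and znode names are made up for illustration,
and it assumes the parent znodes already exist:

import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class HdfsBackedZnode {
    public static void main(String[] args) throws Exception {
        // A payload far larger than ZooKeeper's default 1 MB jute.maxbuffer.
        byte[] payload = new byte[64 * 1024 * 1024];

        // 1. Write the bulk data to HDFS.
        FileSystem fs = FileSystem.get(new Configuration());
        Path dataPath = new Path("/myapp/blobs/blob-0001");
        try (FSDataOutputStream out = fs.create(dataPath, true)) {
            out.write(payload);
        }

        // 2. Keep only a small pointer (the HDFS location) in ZooKeeper,
        //    so the znode stays well under the 1 MB limit. Assumes the
        //    hypothetical parent znode /myapp/blobs-meta already exists.
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> { });
        byte[] pointer = dataPath.toString().getBytes(StandardCharsets.UTF_8);
        zk.create("/myapp/blobs-meta/blob-0001", pointer,
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        zk.close();
    }
}

Readers just do the reverse: getData() on the znode to find the HDFS
location, then FileSystem.open() to fetch the actual bytes.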

Thanks.

On Sat, Jun 4, 2011 at 12:56 AM, Patrick Hunt <[email protected]> wrote:

> FYI, the default is 1 MB, see:
> http://zookeeper.apache.org/doc/r3.3.3/zookeeperAdmin.html#Unsafe+Options
>
> Patrick
>
> On Fri, Jun 3, 2011 at 11:49 AM, Avinash Lakshman
> <[email protected]> wrote:
> > 1 GB shouldn't be a problem. But the per-znode limit is 2 MB.
> >
> > Avinash
> >
> > On Fri, Jun 3, 2011 at 11:46 AM, Vaibhav Aggarwal <[email protected]
> >wrote:
> >
> >> Hi
> >>
> >> I am Vaibhav, and I am new to using ZooKeeper.
> >>
> >> I was wondering whether there are any experimental results around the
> >> amount of data you can maintain using ZooKeeper.
> >> Can zookeeper be used to hold 1 GB of data?
> >>
> >> I tried to search some of the old archives but could not find the
> answer.
> >> Any pointers would be very helpful.
> >>
> >> Thanks
> >> Vaibhav
> >>
> >
>
