I'm not sure there is a reason for it, honestly. Possibly no one ever
used it for anything larger than 1 KB. I don't know the package well
myself, but assuming it stores this data inside znodes, keeping large
files there would fill up ZooKeeper's memory fast and could be risky.
Other than that I can't think of a reason.

Best
C
On Jun 13, 2015 1:52 PM, "Bharat Singh" <[email protected]>
wrote:

> Ping ..
>
> On Wed, Jun 10, 2015 at 5:01 PM, Bharat Singh <
> [email protected]
> > wrote:
>
> > Hi,
> >
> > I am trying to use ZooKeeper through the zkfuse interface, and I
> > have a question about the maximum file size supported.
> >
> > Zookeeper max file size: (1MB)
> >
> >
> http://zookeeper.apache.org/doc/r3.1.2/api/org/apache/zookeeper/ZooKeeper.htm
> >
> > The maximum allowable size of the data array is 1 MB
> >
> > ZKfuse max file size: (1KB)
> >
> >
> http://code.metager.de/source/xref/apache/zookeeper/src/contrib/zkfuse/src/zkfuse.cc
> >
> > static const unsigned maxDataFileSize = MAX_DATA_SIZE;
> >
> >
> > Why does ZKfuse reduce the size further, since ZooKeeper already
> > caps it at 1 MB? Is there an explanation for this check in ZKfuse?
> >
> > Please point me to appropriate forum if this is not the one.
> >
> > Suggestions are appreciated.
> >
> >
> > Thanks,
> > Bharat
> >
>
