Hi Ted,

I am not concerned about wide rows here. My schema has only one column, but
its "value" is 50-100K bytes. The block size is configured to be 32K bytes.
How does that work in practice? Does it mean that the effective block size
is upwards of 50K?
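For context, the HFile block size is usually described as a soft threshold
rather than a hard limit: cells are written whole, and a block is closed
once it reaches the configured size, so a single 50K cell would produce a
~50K block. Here is a minimal sketch of that packing behavior (a
hypothetical simulation for illustration, not actual HBase code; the
threshold semantics are an assumption based on that description):

```python
BLOCK_SIZE = 32 * 1024  # configured HFile block size: 32K

def pack_cells(cell_sizes, block_size=BLOCK_SIZE):
    """Group cell sizes into blocks; a cell is never split across blocks."""
    blocks, current = [], 0
    for size in cell_sizes:
        current += size              # cell is appended whole to the block
        if current >= block_size:    # threshold crossed: close the block
            blocks.append(current)
            current = 0
    if current:
        blocks.append(current)       # flush any partially filled last block
    return blocks

# Three 50K cells: each one alone exceeds the 32K threshold, so each
# becomes its own ~50K block -- the effective block size tracks the cell.
print(pack_cells([50 * 1024] * 3))  # [51200, 51200, 51200]
```

Under this model, blocks end up roughly as large as the largest cell when
cells exceed the configured size, which is also what the block cache would
then hold per block.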

Varun


On Mon, Feb 24, 2014 at 10:07 AM, Ted Yu <[email protected]> wrote:

> Cycling old bits:
>
>
> http://search-hadoop.com/m/DHED4v7stT1/larger+HFile+block+size+for+very+wide+row&subj=larger+HFile+block+size+for+very+wide+row+
>
>
> On Mon, Feb 24, 2014 at 11:51 AM, Varun Sharma <[email protected]>
> wrote:
>
> > Hi,
> >
> > What happens if my block size is 32K while the cells are 50K? Do HFile
> > blocks round up to 50K, or are values split across blocks? Also, how does
> > this play with the block cache?
> >
> > Thanks
> > Varun
> >
>
