Check the HDFS block size, not the HBase cache block size. When you make a
put, data is written to the HLog (the write-ahead log) and the memstore.
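A minimal sketch (not real HBase code, all names are illustrative) of why a put can keep succeeding while the NameNode is down: edits append to an already-open WAL block and to the in-memory memstore, and the NameNode is only contacted when a new block must be allocated.

```python
# Toy model of the HBase write path under a NameNode outage.
# Assumption for illustration: a 64 KB block size and a single open WAL block.
BLOCK_SIZE = 64 * 1024

class RegionServer:
    def __init__(self, namenode_up=True):
        self.namenode_up = namenode_up
        self.wal_bytes_in_block = 0  # bytes written into the current WAL block
        self.memstore = {}

    def put(self, row, value):
        size = len(row) + len(value)
        if self.wal_bytes_in_block + size > BLOCK_SIZE:
            # Current block is full: a NEW block must be allocated,
            # which requires the NameNode.
            if not self.namenode_up:
                raise IOError("cannot allocate new WAL block: NameNode down")
            self.wal_bytes_in_block = 0
        self.wal_bytes_in_block += size  # append the edit to the open WAL block
        self.memstore[row] = value       # then update the memstore

rs = RegionServer(namenode_up=False)
rs.put(b"row1", b"v" * 100)  # succeeds: the current block still has room
```

In this model, puts fail only once the open block fills up and a fresh allocation is needed, which matches the behavior described in the thread.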

David

On Sat, Nov 8, 2014 at 6:13 PM, mail list <[email protected]> wrote:

> Hi David,
>
> Thanks for your reply.
> The default block size is 64K, so you mean that after I write enough rows
> to exceed the block size, the put command will not succeed?
>
>
> On Nov 9, 2014, at 4:14, David <[email protected]> wrote:
>
> > The region server only needs to talk to the NameNode when the current
> > block is done.
> >
> > Sent from my iPhone
> >
> >> On Nov 8, 2014, at 02:44, mail list <[email protected]> wrote:
> >>
> >> Hi all,
> >>
> >>  We are testing HBase. We shut down all the NameNodes,
> >>  but we could still put rows into HBase, while the “list” command
> >>  blocked.
> >>  Then we stopped and restarted the whole HBase system, and the rows
> >>  that were put before were still persisted.
> >>
> >>  So I want to know: if the NameNode goes down, can HBase still handle
> >>  the “put” command? Am I right, and why?
> >>
> >>  Any ideas will be appreciated!
> >>
>
>
