Is there any place where HDFS command history is stored, along the lines of
.bash_history in the shell? The number of regions for the table has grown by
about 100 overnight (from 120 to 211)... I suspect something is wrong on the
HBase side...
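In case it helps: ~/.bash_history only records commands typed interactively by one user, whereas HDFS itself can log every namespace operation (delete, rename, open) in its audit log, if audit logging is enabled in log4j.properties. A rough sketch of where to look — the log paths below are assumptions and vary by distribution:

```shell
# Per-user interactive history (only commands typed in a shell as this user):
grep -E 'hdfs|hadoop fs' ~/.bash_history 2>/dev/null || true

# HDFS audit log (one line per namespace operation when enabled);
# this path is an assumption -- check log4j.properties for the real one:
AUDIT_LOG=/var/log/hadoop-hdfs/hdfs-audit.log
grep 'cmd=delete' "$AUDIT_LOG" 2>/dev/null | tail -n 20 || true

# HBase master log, for split activity around the time the region count
# jumped (path is also an assumption):
MASTER_LOG=/var/log/hbase/hbase-master.log
grep -i 'split' "$MASTER_LOG" 2>/dev/null | tail -n 20 || true
```

If the audit log is enabled, `cmd=delete` entries against paths under /hbase would show whether data was removed through HDFS directly rather than by HBase itself.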


On Fri, Feb 28, 2014 at 12:07 AM, kiran <[email protected]> wrote:

> The TTL setting is Integer.MAX_VALUE, so that should not be the problem.
>
>
> On Thu, Feb 27, 2014 at 11:49 PM, Jimmy Xiang <[email protected]> wrote:
>
>> Hi Kiran,
>>
>> Can you check your table TTL setting? Is it possible that the data are
>> expired and purged?
>>
>> Thanks,
>> Jimmy
>>
>>
>>
>> On Thu, Feb 27, 2014 at 10:11 AM, Stack <[email protected]> wrote:
>>
>> > Anything in your logs that might give you a clue?  Master logs?  HDFS
>> > NameNode logs?
>> > St.Ack
>> >
>> >
>> > On Thu, Feb 27, 2014 at 7:53 AM, kiran <[email protected]>
>> > wrote:
>> >
>> > > Hi All,
>> > >
>> > > We have been experiencing severe data loss issues for the past few
>> > > hours. There are some weird things going on in the cluster. We were
>> > > unable to locate the data even in HDFS.
>> > >
>> > > HBase version: 0.94.1
>> > >
>> > > Here are the weird things that are going on:
>> > >
>> > > 1) A table that was once 1 TB has now become 170 GB, with many regions
>> > > that were once 7 GB shrinking to a few MB. We have no clue what is
>> > > happening at all.
>> > >
>> > > 2) The table is splitting (100 regions have become 200 regions), even
>> > > though we use ConstantSizeRegionSplitPolicy with a region size of
>> > > 20 GB. I don't know why it is even splitting.
>> > >
>> > > 3) The HDFS namenode dump, which we back up periodically, is
>> > > decreasing in size.
>> > >
>> > > 4) And there is a region chain with start keys and end keys like the
>> > > following (I can't copy-paste the exact thing). For example:
>> > >
>> > > K1.xxx K2.xyz
>> > > K2.xyz K3.xyz,138798010000.xyp
>> > > K3.xyz,138798010000.xyp K4.xyq
>> > >
>> > > I have never seen weird start and end keys like this. We also suspect
>> > > a failed split of a region around 20 GB. We have looked at the logs
>> > > many times but were unable to make any sense of them. Please help us
>> > > out; we can't afford data loss.
>> > >
>> > > Yesterday there was a cluster crash involving the ROOT region, but we
>> > > thought we had successfully restored it. Things didn't go that way,
>> > > though: there has been consistent data loss since then.
>> > >
>> > >
>> > > --
>> > > Thank you
>> > > Kiran Sarvabhotla
>> > >
>> > > -----Even a correct decision is wrong when it is taken late
>> > >
>> >
>>
>
>
>
> --
> Thank you
> Kiran Sarvabhotla
>
> -----Even a correct decision is wrong when it is taken late
>
>


-- 
Thank you
Kiran Sarvabhotla

-----Even a correct decision is wrong when it is taken late
