Hi Ted
Sorry, I forgot to mention: hbase-0.94.6, CDH 4.4.
Yeah, it was a pretty write-intensive scenario that I think triggered it
(importing a lot of datapoints into OpenTSDB).
Do I flush the region manually using shell?
Cheers,
-Kristoffer
On Sat, Mar 14, 2015 at 9:22 PM, Ted Yu …
That's fine as long as it doesn't end up in this state afterwards. I'll
restart it when back to work on Monday.
Thank you!
-Kristoffer
On Sat, Mar 14, 2015 at 11:32 PM, Ted Yu yuzhih...@gmail.com wrote:
Assuming it was thread
RS_CLOSE_REGION-hdfs-ix03.se-ix.delta.prod,60020,1424687995350-1
We're hitting this type of HDFS issue in production too. Your best option
is to kill the regionserver process forcefully, start a replacement, and
let the affected region(s) recover. All edits should be persisted to the
WAL, regardless of what Ted said about flushing.
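Roughly, assuming one regionserver JVM per host ($HBASE_HOME stands in for
your install path; adjust for your packaging):

  # find and hard-kill the stuck regionserver JVM
  kill -9 $(jps | awk '/HRegionServer/ {print $1}')

  # start a replacement regionserver
  $HBASE_HOME/bin/hbase-daemon.sh start regionserver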
We are working on the …
Assuming it was thread
RS_CLOSE_REGION-hdfs-ix03.se-ix.delta.prod,60020,1424687995350-1
which got stuck, there might be data loss if the server is restarted,
since some of its data could not be flushed.
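If you want to gauge how much unflushed data is at risk first, the shell's
status command shows per-region memstore sizes (exact output varies by
version):

  hbase shell
  > status 'detailed'   # check memstoreSizeMB for regions on that server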
Cheers
On Sat, Mar 14, 2015 at 2:58 PM, Kristoffer Sjögren sto...@gmail.com
wrote:
Which release of HBase are you using?
I wonder if your cluster was hit with HBASE-10499.
Cheers
On Sat, Mar 14, 2015 at 1:13 PM, Kristoffer Sjögren sto...@gmail.com
wrote:
Hi
It seems one of our region servers has been stuck closing a region for
almost 22 hours. Puts or gets eventually …
I think I found the thread that is stuck. Is restarting the server harmless
in this state?
RS_CLOSE_REGION-hdfs-ix03.se-ix.delta.prod,60020,1424687995350-1 prio=10
tid=0x7f75a0008000 nid=0x23ee in Object.wait() [0x7f757d30b000]
java.lang.Thread.State: WAITING (on object monitor)
at …
bq. flush the region manually using shell?
I doubt that would work - you can give it a try.
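If you do try, it would be along these lines from the HBase shell
('mytable' and the region name are placeholders):

  hbase shell
  > flush 'mytable'              # flush all regions of a table
  > flush 'ENCODED_REGION_NAME'  # or a single region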
Please take a jstack of the region server in case you need to restart it.
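Something like (assuming the JDK tools are on the path; run as the user
owning the process):

  # find the regionserver JVM pid, then dump its stacks
  jps | grep HRegionServer
  jstack <pid> > rs-jstack.txt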
BTW HBASE-10499 didn't go into 0.94 (maybe it should have). Please consider
upgrading.
Cheers
On Sat, Mar 14, 2015 at 1:30 PM, …
Hi,
We have a secured cluster. All components are working well except HBase.
Specifically, this is what I see on the regionserver:
2015-03-14 02:16:11,657 DEBUG [RpcServer.reader=5,port=60020]
ipc.RpcServer: Kerberos principal name is hbase/
sfdvgctsn001.xx...@sfdvgct.com
2015-03-14 02:16:11,658 …
We are using cheap HW to run our HBase. The problem is in the Toshiba disks.
Thanks!
2015-03-13 20:44 GMT+03:00 Nick Dimiduk ndimi...@gmail.com:
HBase is telling you that writes to those datanodes are slow. Is it the
same host names over and over? Probably they have high system load, a bad
or dying …
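A quick way to check on the named hosts (standard Linux tools; /dev/sdX is
a placeholder):

  iostat -x 5 3           # sustained high await/%util points at a struggling disk
  dmesg | grep -i error   # kernel-level I/O errors
  smartctl -H /dev/sdX    # SMART health status (may need root)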
Hi,
Traces (especially the one for the region server) look a bit incomplete;
did you copy them fully?
Also may help if you post relevant pieces of hbase-site.xml (with
security configs).
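For example, the standard security keys look like this (values below are
placeholders):

  <property>
    <name>hbase.security.authentication</name>
    <value>kerberos</value>
  </property>
  <property>
    <name>hbase.regionserver.kerberos.principal</name>
    <value>hbase/_HOST@YOUR.REALM</value>
  </property>
  <property>
    <name>hbase.regionserver.keytab.file</name>
    <value>/etc/hbase/conf/hbase.keytab</value>
  </property>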
Thanks,
Mikhail
On Fri, Mar 13, 2015 at 11:28 PM, Manoj Murumkar
manoj.murum...@gmail.com wrote:
Hi,
We have a …