Re: What can corrupt HBase table and what is Cannot find row in .META. for table?

2014-11-19 Thread Serega Sheypak
Hi, I'm using the Java API. I see the mentioned exception in the Java log. I'll provide a full stack trace next time. 2014-11-19 1:01 GMT+03:00 Ted Yu yuzhih...@gmail.com: The thread you mentioned was more about the Thrift API than about TableNotFoundException. Can you show us the stack trace of

hbase: secure login and connection management

2014-11-19 Thread Bogala, Chandra Reddy
Hi, I am trying to log in to a secure cluster with keytabs using the methods below. It works fine as long as the token has not expired. My process runs for a long time (a web app in Tomcat). I keep getting the exceptions below after the token expiry time, and the connection fails if the user tries to view data from the web
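
A common workaround for long-running clients is to log in from the keytab once and periodically relogin before the ticket lapses. A minimal sketch, assuming a standard UserGroupInformation setup; the principal, keytab path, and one-hour interval below are illustrative, not from the original post:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.security.UserGroupInformation;

    public class KeytabRelogin {
      public static void main(String[] args) throws Exception {
        // Assumes Kerberos security settings come from hbase-site.xml / core-site.xml on the classpath.
        Configuration conf = HBaseConfiguration.create();
        UserGroupInformation.setConfiguration(conf);

        // Initial login from the keytab (principal and path are placeholders).
        UserGroupInformation.loginUserFromKeytab(
            "webapp/host@EXAMPLE.COM", "/etc/security/keytabs/webapp.keytab");

        // Relogin periodically so the TGT never expires in a long-lived Tomcat process.
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(new Runnable() {
          public void run() {
            try {
              UserGroupInformation.getLoginUser().checkTGTAndReloginFromKeytab();
            } catch (Exception e) {
              e.printStackTrace();
            }
          }
        }, 1, 1, TimeUnit.HOURS);
      }
    }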

Re: hbase: secure login and connection management

2014-11-19 Thread Matteo Bertozzi
Take a look at the patch added to https://issues.apache.org/jira/browse/HBASE-12366. There will be a new AuthUtil.launchAuthChore() which should help in your case. (The doc patch is here: https://issues.apache.org/jira/browse/HBASE-12528) Matteo On Wed, Nov 19, 2014 at 11:19 AM, Bogala, Chandra

RPC Timeout - DoNotRetryIOException

2014-11-19 Thread xuge...@longshine.com
Hello: I have also encountered this exception. Do you have a solution? Please tell me. Thanks. xuge...@longshine.com

[ANNOUNCE] HBase 0.98.8 is now available for download

2014-11-19 Thread Andrew Purtell
Apache HBase 0.98.8 is now available for download. Get it from an Apache mirror [1] or the Maven repository. The list of changes in this release can be found in the release notes [2] or following this announcement. This release contains a fix for a security issue; please see HBASE-12536 [3] for more

Re: HBase concurrent.RejectedExecutionException

2014-11-19 Thread Nicolas Liochon
Hi Arul, It's a pure client-side exception: it means that the client has not even tried to send the query to the server; it failed before that. Why the client failed is another question. I see that the pool size is 7; have you changed the default configuration? Cheers, Nicolas On Tue, Nov 18, 2014 at
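
When a caller-supplied ExecutorService is wired into the connection, this rejection usually means that pool was already shut down or saturated when the request was submitted. A minimal sketch of one shared connection with an explicitly sized pool, assuming the 0.98-era HConnectionManager API; the table name, row key, and pool size are illustrative:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HConnection;
    import org.apache.hadoop.hbase.client.HConnectionManager;
    import org.apache.hadoop.hbase.client.HTableInterface;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class SharedConnectionExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // One explicitly sized pool shared by every table obtained from this connection.
        ExecutorService pool = Executors.newFixedThreadPool(32);
        HConnection connection = HConnectionManager.createConnection(conf, pool);
        try {
          HTableInterface table = connection.getTable(TableName.valueOf("t1"));
          Result r = table.get(new Get(Bytes.toBytes("row1")));
          System.out.println(r);
          table.close();
        } finally {
          // Closing the connection or shutting the pool down while requests are
          // still being submitted is a common source of RejectedExecutionException.
          connection.close();
          pool.shutdown();
        }
      }
    }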

scan column qualifiers in column family

2014-11-19 Thread beeshma r
Hi, I need to find whether a particular column qualifier is present in a column family, so I wrote code like this. As per the documentation: public boolean containsColumn(byte[] family, byte[] qualifier) Checks for existence of a value for the specified column (empty or not). Parameters: family

Re: scan column qualifiers in column family

2014-11-19 Thread Ted Yu
bq. org.freinds_rep.java.Insert_friend.search_column(Insert_friend.java:106) Does line 106 correspond to the result.containsColumn() call? If so, result was null. On Wed, Nov 19, 2014 at 9:47 AM, beeshma r beeshm...@gmail.com wrote: Hi, I need to find whether a particular column qualifier is present
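
For reference, a minimal sketch of that check with a guard before calling containsColumn(); the table, family, and qualifier names below are made up for illustration:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ContainsColumnExample {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, TableName.valueOf("friends"));
        try {
          Result result = table.get(new Get(Bytes.toBytes("user1")));
          // Guard against a null or empty Result before dereferencing it,
          // which would otherwise throw the NullPointerException reported above.
          if (result != null && !result.isEmpty()
              && result.containsColumn(Bytes.toBytes("cf"), Bytes.toBytes("name"))) {
            System.out.println("qualifier 'name' exists in family 'cf'");
          } else {
            System.out.println("qualifier not found (or row missing)");
          }
        } finally {
          table.close();
        }
      }
    }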

can't start region server after crash

2014-11-19 Thread Li Li
I am running a single-node pseudo-distributed HBase cluster on top of a pseudo-distributed Hadoop. Hadoop is 1.2.1 and the HDFS replication factor is 1. The HBase version is 0.98.5. Last night I found that the region server had crashed (the process is gone). I found many log entries saying [JvmPauseMonitor] util.JvmPauseMonitor: Detected

Re: can't start region server after crash

2014-11-19 Thread Li Li
Also, in the HDFS UI I found Number of Under-Replicated Blocks: 497741. It seems there are many bad blocks. Is there any method to rescue the good data? On Thu, Nov 20, 2014 at 10:52 AM, Li Li fancye...@gmail.com wrote: I am running a single-node pseudo-distributed HBase cluster on top of a pseudo-distributed Hadoop. Hadoop

Re: can't start region server after crash

2014-11-19 Thread Ted Yu
Have you tried using fsck? Cheers On Wed, Nov 19, 2014 at 6:56 PM, Li Li fancye...@gmail.com wrote: Also, in the HDFS UI I found Number of Under-Replicated Blocks: 497741. It seems there are many bad blocks. Is there any method to rescue the good data? On Thu, Nov 20, 2014 at 10:52 AM, Li Li

Re: can't start region server after crash

2014-11-19 Thread Li Li
I have tried and found that many files' replication factor is 3 (dfs.replication is 1 in hdfs-site.xml). So I am trying to set it to 1 now. There are so many files that it has already taken more than 30 minutes and is still not finished. I will try fsck later. On Thu, Nov 20, 2014 at 11:25 AM, Ted Yu yuzhih...@gmail.com
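
For the record, lowering the replication factor of the existing files can also be done programmatically with the HDFS FileSystem API (the shell equivalent is hadoop fs -setrep -R). A minimal sketch, where the /hbase path and the target factor of 1 are assumptions matching this setup:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SetReplication {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        setRep(fs, new Path("/hbase"), (short) 1);
      }

      // Recursively set the replication factor on every file under dir.
      static void setRep(FileSystem fs, Path dir, short rep) throws Exception {
        for (FileStatus status : fs.listStatus(dir)) {
          if (status.isDir()) {
            setRep(fs, status.getPath(), rep);
          } else if (status.getReplication() != rep) {
            fs.setReplication(status.getPath(), rep);
          }
        }
      }
    }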

Re: can't start region server after crash

2014-11-19 Thread Li Li
hadoop fsck /
Status: HEALTHY
 Total size: 1382743735840 B
 Total dirs: 1127
 Total files: 476753
 Total blocks (validated): 490085 (avg. block size 2821436 B)
 Minimally replicated blocks: 490085 (100.0 %)
 Over-replicated blocks: 0 (0.0 %)
 Under-replicated blocks: