Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master

2013-10-03 Thread Sandeep L
Hi,
We are using hbase-0.94.1

While trying to access HBase through the API, all of a sudden we got the
following exception in our production cluster:

from org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker in
pool-61-thread-3: Check the value configured in 'zookeeper.znode.parent'. There
could be a mismatch with the one configured in the master

Due to this issue our application was not responding for at least 15 to 20
minutes, and after that time it started responding again automatically.
We are unable to get any clue about it; can someone help us resolve this
issue?

Thanks, Sandeep.
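The error usually indicates that the client's `zookeeper.znode.parent` does not match the parent znode the master registered in ZooKeeper. The property lives in hbase-site.xml and must be identical on the client and on the cluster; a minimal sketch (the `/hbase` value shown is the stock default, not necessarily this cluster's setting):

```xml
<!-- hbase-site.xml: must be identical on the client and on the master -->
<property>
  <name>zookeeper.znode.parent</name>
  <!-- /hbase is the default parent znode; adjust if the cluster overrides it -->
  <value>/hbase</value>
</property>
```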

Re: Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master

2013-10-03 Thread Bharath Vissapragada
Are the client side zk configs in sync with the other nodes? Was it working
before and suddenly stopped working now?


On Thu, Oct 3, 2013 at 12:14 PM, Sandeep L sandeepvre...@outlook.com wrote:

 Hi,
 We are using hbase-0.94.1

 While trying to access hbase with API all of suddenly we got following
 exception in our production cluster:
 from org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker in
 pool-61-thread-3Check the value configured in 'zookeeper.znode.parent'.
 There could be a mismatch with the one configured in the master
 Due to this issue our application not responding for at least 15 to 20
 minutes and after this time its started responding automatically.
 We are unable to get any clue about it, can someone help us to resolve
 this issue.
 Thanks,Sandeep.




-- 
Bharath Vissapragada
http://www.cloudera.com


RE: Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master

2013-10-03 Thread Sandeep L
I rechecked and all client side zk configs are in sync. It was working
before and suddenly stopped working with the exception; after some time it
started working again. We are facing the same issue repeatedly.

Thanks, Sandeep.

 From: bhara...@cloudera.com
 Date: Thu, 3 Oct 2013 12:36:54 +0530
 Subject: Re: Check the value configured in 'zookeeper.znode.parent'. There 
 could be a mismatch with the one configured in the master
 To: user@hbase.apache.org
 
 Are the client side zk configs in sync with the other nodes? Was it working
 before and suddenly stopped working now?
 
 
 On Thu, Oct 3, 2013 at 12:14 PM, Sandeep L sandeepvre...@outlook.com wrote:
 
  Hi,
  We are using hbase-0.94.1
 
  While trying to access hbase with API all of suddenly we got following
  exception in our production cluster:
  from org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker in
  pool-61-thread-3Check the value configured in 'zookeeper.znode.parent'.
  There could be a mismatch with the one configured in the master
  Due to this issue our application not responding for at least 15 to 20
  minutes and after this time its started responding automatically.
  We are unable to get any clue about it, can someone help us to resolve
  this issue.
  Thanks,Sandeep.
 
 
 
 
 -- 
 Bharath Vissapragada
 http://www.cloudera.com
  

Re: Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master

2013-10-03 Thread Bharath Vissapragada
Is your ZooKeeper running fine? I suspect the random zk node that the
client is trying to connect to is dropping out of the quorum and then
rejoining due to some problem. You can check the zk logs for that.
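A quick way to probe each quorum member from the client side is ZooKeeper's four-letter-word commands (a sketch; the host names are placeholders and the `nc` probes assume a stock ZooKeeper listening on 2181):

```shell
# Ask each quorum member whether it is serving, and in what role.
for zk in zk1.example.com zk2.example.com zk3.example.com; do
  echo "ruok" | nc "$zk" 2181              # a healthy server answers "imok"
  echo "stat" | nc "$zk" 2181 | grep Mode  # leader / follower
done
```

A member that intermittently fails `ruok`, or whose `Mode` flips, would fit the drop-out-and-rejoin theory above.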


On Thu, Oct 3, 2013 at 1:28 PM, Sandeep L sandeepvre...@outlook.com wrote:

 I rechecked and all client side zk configs are in sync and it was working
 before and suddenly stopped working with the exception. After some time
 again it started working.We are facing same issue repeatedly.

 Thanks,Sandeep.

  From: bhara...@cloudera.com
  Date: Thu, 3 Oct 2013 12:36:54 +0530
  Subject: Re: Check the value configured in 'zookeeper.znode.parent'.
 There could be a mismatch with the one configured in the master
  To: user@hbase.apache.org
 
  Are the client side zk configs in sync with the other nodes? Was it
 working
  before and suddenly stopped working now?
 
 
  On Thu, Oct 3, 2013 at 12:14 PM, Sandeep L sandeepvre...@outlook.com
 wrote:
 
   Hi,
   We are using hbase-0.94.1
  
   While trying to access hbase with API all of suddenly we got following
   exception in our production cluster:
   from org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker in
   pool-61-thread-3Check the value configured in 'zookeeper.znode.parent'.
   There could be a mismatch with the one configured in the master
   Due to this issue our application not responding for at least 15 to 20
   minutes and after this time its started responding automatically.
   We are unable to get any clue about it, can someone help us to resolve
   this issue.
   Thanks,Sandeep.
 
 
 
 
  --
  Bharath Vissapragada
  http://www.cloudera.com





-- 
Bharath Vissapragada
http://www.cloudera.com


RE: Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master

2013-10-03 Thread Sandeep L
The ZooKeeper quorum is running fine without any issues. I also checked the
ZooKeeper logs on all quorum machines and I don't see any errors in the logs.

Thanks, Sandeep.

 From: bhara...@cloudera.com
 Date: Thu, 3 Oct 2013 13:55:09 +0530
 Subject: Re: Check the value configured in 'zookeeper.znode.parent'. There 
 could be a mismatch with the one configured in the master
 To: user@hbase.apache.org
 
 Is your zookeeper running fine? I suspect the random zk node that the
 client is trying to connect to is coming out of the quorum and then
 rejoining back due to some problem. You can check the zk logs for that.
 
 
 On Thu, Oct 3, 2013 at 1:28 PM, Sandeep L sandeepvre...@outlook.com wrote:
 
  I rechecked and all client side zk configs are in sync and it was working
  before and suddenly stopped working with the exception. After some time
  again it started working.We are facing same issue repeatedly.
 
  Thanks,Sandeep.
 
   From: bhara...@cloudera.com
   Date: Thu, 3 Oct 2013 12:36:54 +0530
   Subject: Re: Check the value configured in 'zookeeper.znode.parent'.
  There could be a mismatch with the one configured in the master
   To: user@hbase.apache.org
  
   Are the client side zk configs in sync with the other nodes? Was it
  working
   before and suddenly stopped working now?
  
  
   On Thu, Oct 3, 2013 at 12:14 PM, Sandeep L sandeepvre...@outlook.com
  wrote:
  
Hi,
We are using hbase-0.94.1
   
While trying to access hbase with API all of suddenly we got following
exception in our production cluster:
from org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker in
pool-61-thread-3Check the value configured in 'zookeeper.znode.parent'.
There could be a mismatch with the one configured in the master
Due to this issue our application not responding for at least 15 to 20
minutes and after this time its started responding automatically.
We are unable to get any clue about it, can someone help us to resolve
this issue.
Thanks,Sandeep.
  
  
  
  
   --
   Bharath Vissapragada
   http://www.cloudera.com
 
 
 
 
 
 -- 
 Bharath Vissapragada
 http://www.cloudera.com
  

Re: Row Filters using BitComparator

2013-10-03 Thread takeshi
hi,

I think maybe 'BinaryPrefixComparator' is a good fit for your case.
{code:java}
HTable table = ...;
Scan scan = new Scan();
Filter filter =
    new RowFilter(CompareFilter.CompareOp.EQUAL,
        new BinaryPrefixComparator(Bytes.toBytes("abc")));
scan.setFilter(filter);
ResultScanner rs = table.getScanner(scan);
for (Result r : rs) {
  ... // each Result has a row key starting with "abc"
}
{code}
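Since the thread also recommends setting startRow/stopRow when scanning by prefix, here is a small helper (plain Java, not HBase API) that derives the exclusive stop row by incrementing the last non-0xFF byte of the prefix:

```java
import java.util.Arrays;

public class PrefixRange {
    // Returns the smallest byte[] strictly greater than every key that
    // starts with `prefix`. Returns null when the prefix is all 0xFF bytes,
    // meaning "scan to the end of the table".
    public static byte[] stopRowForPrefix(byte[] prefix) {
        byte[] stop = Arrays.copyOf(prefix, prefix.length);
        for (int i = stop.length - 1; i >= 0; i--) {
            if (stop[i] != (byte) 0xFF) {
                stop[i]++;                        // incremented without overflow
                return Arrays.copyOf(stop, i + 1); // drop trailing (wrapped) bytes
            }
        }
        return null; // prefix was all 0xFF
    }
}
```

With it, `scan.setStartRow(prefix)` and `scan.setStopRow(stopRowForPrefix(prefix))` bound the scan to exactly the prefixed keys (HBase stop rows are exclusive); a null stop row means scan to the end of the table.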

Best regards

takeshi
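On the original question of matching, say, 10 bits somewhere inside the key: BitComparator applies a bitwise op byte-by-byte between the cell value and a mask of the same length, and under CompareOp.EQUAL a row matches when the result is non-zero (my reading of TestBitComparator; the class and method below are illustrative stand-ins, not HBase API):

```java
public class BitCompareSketch {
    public enum Op { AND, OR, XOR }

    // Per-byte bitwise combine of a cell value with a mask of equal length;
    // "match" means the combined result has at least one non-zero byte,
    // mirroring BitComparator under CompareOp.EQUAL. Unequal lengths never match.
    public static boolean matches(byte[] value, byte[] mask, Op op) {
        if (value.length != mask.length) {
            return false;
        }
        boolean nonZero = false;
        for (int i = 0; i < value.length; i++) {
            int b;
            switch (op) {
                case AND: b = value[i] & mask[i]; break;
                case OR:  b = value[i] | mask[i]; break;
                default:  b = value[i] ^ mask[i]; break;
            }
            if ((b & 0xFF) != 0) {
                nonZero = true;
            }
        }
        return nonZero;
    }
}
```

So to test specific bits mid-key, build a mask that is zero everywhere except those bits and use AND; the mask must still span the full key width, which is consistent with the observation later in this thread that the full row fields have to be specified.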


2013/10/3 abhinavpundir abhinavmast...@gmail.com

 Hello takeshi

 Thanks for the link. I have already looked at that link, but it doesn't
 show how bit comparators work with row filters. Moreover, I have tried
 using row filters with bit comparators; in that case you have to specify
 the full fields of the row exactly, which I don't want to do. It works
 with XOR and NOT_EQUAL.

 I want to compare, let's say, some 10 bits somewhere in the middle.
 How would I achieve that with row filters?

 On Wednesday, October 2, 2013, takeshi [via Apache HBase] wrote:

  Hi,
 
  Here is the RowFilter sample
 
 
 https://github.com/larsgeorge/hbase-book/blob/master/ch04/src/main/java/filters/RowFilterExample.java
  provided by Lars George's book; I think you can combine the
  TestBitComparator.java and this link to figure out what you need.
 
  Best regards
 
  takeshi
 
 
  2013/10/1 abhinavpundir [hidden email]
 http://user/SendEmail.jtp?type=nodenode=4051391i=0
 
 
    Thanks Amit, but that link doesn't really help.
    I haven't found a single link which shows bit comparators together with
    row filters.
  
   On Monday, September 23, 2013, anil gupta [via Apache HBase] wrote:
  
Inline
   
   
On Sun, Sep 22, 2013 at 1:05 PM, abhinavpundir [hidden email]
   http://user/SendEmail.jtp?type=nodenode=4051068i=0wrote:
   
   
 Thnx a lot anil

 By any chance do you know how to use bitcomparators in hbase ?

I havent used it till now. Have a look at this JUnit of BitComparator
  for
learning how to use it:
   
   
  
 
  http://svn.apache.org/viewvc/hbase/tags/0.94.9/src/test/java/org/apache/hadoop/hbase/filter/TestBitComparator.java?revision=1500225&view=markup
   

 On Monday, September 16, 2013, anil gupta [via Apache HBase] wrote:

  Inline.
 
  On Sun, Sep 15, 2013 at 12:04 AM, abhinavpundir [hidden email]
 http://user/SendEmail.jtp?type=nodenode=4050761i=0wrote:
 
 
   I have rows in my Hbase whose keys are made up of 5 components.
  I
would
   like
   to search my hbase using rowFilters by using only some(May be
  only
the
   first
   2 components ) of the components. I don't want to use
   RegexStringComparator.
  
   If you are using the first two components(i.e. prefix of the
   rowkey)
  then
  you can use PrefixFilter(
 
 

   
  
 
 http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/PrefixFilter.html
 ).
 
  Also, dont forget to set startRow and StopRow.
 
   I would like to use BitComparator because it is fast.
  
   how would I do that?
  
  
  
  
   --
   View this message in context:
  
 

   
  
 
 http://apache-hbase.679495.n3.nabble.com/Row-Filters-using-BitComparator-tp4050739.html
   Sent from the HBase User mailing list archive at Nabble.com.
  
 
 
 
  --
  Thanks & Regards,
  Anil Gupta
 
 

Re: Is loopback = bad news for hbase client?

2013-10-03 Thread Michael Segel
Yeah. What Jean-Marc says.

AND... don't get rid of 127.0.0.1 (loopback aka localhost , 
localhost.localdomain) . Bad things can happen.


On Oct 2, 2013, at 10:31 AM, Jean-Marc Spaggiari jean-m...@spaggiari.org 
wrote:

 You need your FQDN to be assigned to your external IP and not 127.0.0.1.
 That might be your issue. 127.0.0.1 needs to be only for localhost, not the FQDN.
 
 
 2013/10/2 Jay Vyas jayunit...@gmail.com
 
 no, iirc even with 127.0.0.1 i have seen this issue, but maybe i just
 havent distilled the error down enough?
 
 
 On Wed, Oct 2, 2013 at 11:23 AM, Matteo Bertozzi theo.berto...@gmail.com
 wrote:
 
 I guess your problem was related to the 127.0.1.1, there's a note about
 that in the manual.
 http://hbase.apache.org/book.html#quickstart
 http://blog.devving.com/why-does-hbase-care-about-etchosts/
 
 Matteo
 
 
 
 On Wed, Oct 2, 2013 at 4:20 PM, Jay Vyas jayunit...@gmail.com wrote:
 
  I've noticed that altogether removing the 127.* addresses
  from my /etc/hosts fixes my hbase configuration so that
  my client can create tables without failing and the hmaster initializes
  fully.
 
 Anyone else notice this ?
 
 --
 Jay Vyas
 http://jayunit100.blogspot.com
 
 
 
 
 
 --
 Jay Vyas
 http://jayunit100.blogspot.com
 

The opinions expressed here are mine, while they may reflect a cognitive 
thought, that is purely accidental. 
Use at your own risk. 
Michael Segel
michael_segel (AT) hotmail.com
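In hosts-file terms, the advice above amounts to keeping loopback intact while pointing the FQDN at the routable address (host names and addresses below are placeholders):

```
127.0.0.1    localhost localhost.localdomain
# The FQDN must resolve to the routable address, never to 127.0.0.1/127.0.1.1:
10.0.0.12    node1.example.com node1
```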







RE: Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master

2013-10-03 Thread cto
Hi ,

I am very new to using HBase.

I am also facing the same issue:

I started the hbase shell and then executed the list command.

Got the following error.

13/10/03 16:07:37 ERROR client.HConnectionManager$HConnectionImplementation:
Check the value configured in 'zookeeper.znode.parent'. There could be a
mismatch with the one configured in the master.
13/10/03 16:07:38 ERROR client.HConnectionManager$HConnectionImplementation:
Check the value configured in 'zookeeper.znode.parent'. There could be a
mismatch with the one configured in the master.
13/10/03 16:07:39 ERROR client.HConnectionManager$HConnectionImplementation:
Check the value configured in 'zookeeper.znode.parent'. There could be a
mismatch with the one configured in the master.
13/10/03 16:07:41 ERROR client.HConnectionManager$HConnectionImplementation:
Check the value configured in 'zookeeper.znode.parent'. There could be a
mismatch with the one configured in the master.
13/10/03 16:07:43 ERROR client.HConnectionManager$HConnectionImplementation:
Check the value configured in 'zookeeper.znode.parent'. There could be a
mismatch with the one configured in the master.
13/10/03 16:07:47 ERROR client.HConnectionManager$HConnectionImplementation:
Check the value configured in 'zookeeper.znode.parent'. There could be a
mismatch with the one configured in the master.
13/10/03 16:07:51 ERROR client.HConnectionManager$HConnectionImplementation:
Check the value configured in 'zookeeper.znode.parent'. There could be a
mismatch with the one configured in the master.

ERROR: org.apache.hadoop.hbase.MasterNotRunningException: Retried 7 times

Please suggest.

This happened suddenly; initially everything was working fine.

Thanks in advance.



--
View this message in context: 
http://apache-hbase.679495.n3.nabble.com/Check-the-value-configured-in-zookeeper-znode-parent-There-could-be-a-mismatch-with-the-one-configurr-tp4051424p4051429.html
Sent from the HBase User mailing list archive at Nabble.com.


Building HBase 0.94.12 for Hadoop 3.0.0

2013-10-03 Thread Siddharth Karandikar
Hi,

Section 16.8.3, "Building against various hadoop versions", suggests using
HBase 0.96. Still, is there any way I can build 0.94.12 to run
against Hadoop 3.0.0?

Sticking to 0.94.12 makes life much easier. :)


Thanks,
Siddharth


Re: Building HBase 0.94.12 for Hadoop 3.0.0

2013-10-03 Thread Ted Yu
There is effort to build HBase trunk with hadoop 3.0:
https://issues.apache.org/jira/browse/HBASE-6581

Is hadoop 3.0 released?

What're the features unavailable in hadoop 2.1.x-beta that you need?

Thanks


On Thu, Oct 3, 2013 at 7:07 AM, Siddharth Karandikar 
siddharth.karandi...@gmail.com wrote:

 Hi,

 16.8.3. Building against various hadoop versions. suggests to use
 HBase 0.96. Still, Is there any way by which I can build 0.94.12 for
 running against Hadoop 3.0.0?

 By sticking to 0.94.12, makes life much easy. :)


 Thanks,
 Siddharth



Re: Batch method

2013-10-03 Thread Renato Marroquín Mogrovejo
Hi Ted,

Thank you very much for your answers once again (:
That piece of code certainly looks very interesting, but I haven't been
able to find it either in [2] or in the definition of my
HConnectionManager class, so I am guessing it must have been included in
0.94.11 or 0.94.12 (I looked into the release notes but didn't find anything
either), though it is on GitHub [3]. Maybe we will think about moving to the
latest 0.94.x.


Renato M.

[2]
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HConnectionManager.html
[3]
https://github.com/apache/hbase/blob/0.94/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java#L1636
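For the underlying question of pinpointing the failed operation: the `Object[]` filled by `HTable#batch` is, as far as I can tell, positionally aligned with the submitted action list, so the index of a null (or Throwable) entry identifies which action failed. A sketch in plain Java (no HBase types needed; the helper name is made up):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchFailures {
    // results[i] corresponds to actions.get(i) in the list passed to batch();
    // a null or Throwable at position i marks that action as failed.
    public static List<Integer> failedIndexes(Object[] results) {
        List<Integer> failed = new ArrayList<Integer>();
        for (int i = 0; i < results.length; i++) {
            if (results[i] == null || results[i] instanceof Throwable) {
                failed.add(i);
            }
        }
        return failed;
    }
}
```

The returned indexes can then be mapped back onto the original `List<Row>` to retry or log exactly the failed mutations.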


2013/10/2 Ted Yu yuzhih...@gmail.com

 You can take a look at HConnectionManager#processBatchCallback(), starting
 line 1774 (0.94 branch).
 Here is the relevant code:

   if (!exceptions.isEmpty()) {

 throw new RetriesExhaustedWithDetailsException(exceptions,

 actions,

 addresses);

   }

 You should be able to extract failure information from
 RetriesExhaustedWithDetailsException thrown.


 Cheers


 On Wed, Oct 2, 2013 at 4:15 PM, Renato Marroquín Mogrovejo 
 renatoj.marroq...@gmail.com wrote:

  Yeah, I am using such method, but if I get a null in the Objects array,
 how
  do I know which operation failed?
 
 
  2013/10/2 Ted Yu yuzhih...@gmail.com
 
   How about this method ?
  
  
 
 http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html#batch(java.util.List
   ,
   java.lang.Object[])
  
  
   On Wed, Oct 2, 2013 at 3:57 PM, Renato Marroquín Mogrovejo 
   renatoj.marroq...@gmail.com wrote:
  
Hi Ted,
   
Thank you very much for answering! But I don't think HBASE-8112 is
   related
to my question. I saw that the signature of the method changed in
 order
   to
retrieve partial results. I am using HBase 0.94.10 so does this
 version
will work like this?
And anyway, my problem is to determine which operation failed within
 a
batch of operations. Is this possible?
   
   
Renato M.
   
   
2013/10/2 Ted Yu yuzhih...@gmail.com
   
 Looks like this is related:
  HBASE-8112 Deprecate HTable#batch(final List<? extends Row>)


 On Wed, Oct 2, 2013 at 3:35 PM, Renato Marroquín Mogrovejo 
 renatoj.marroq...@gmail.com wrote:

  Hi all,
 
  I am using the batch method[1] and it states that I will get an
  array
of
  objects containing possibly null values. So my question is if
 there
   is
a
  way on knowing which operation was the one that failed from this
  null
  value? Or any other way in which I could check for the failing
operation?
  Thanks in advance.
 
 
  Renato M.
 
  [1]
 
 

   
  
 
 http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html#batch(java.util.List)
 

   
  
 



Re: Building HBase 0.94.12 for Hadoop 3.0.0

2013-10-03 Thread lars hofhansl
Hi Siddharth,

did you try to build against Hadoop 3.0.0 and it failed?


-- Lars




 From: Siddharth Karandikar siddharth.karandi...@gmail.com
To: user@hbase.apache.org 
Sent: Thursday, October 3, 2013 7:37 AM
Subject: Building HBase 0.94.12 for Hadoop 3.0.0
 

Hi,

16.8.3. Building against various hadoop versions. suggests to use
HBase 0.96. Still, Is there any way by which I can build 0.94.12 for
running against Hadoop 3.0.0?

By sticking to 0.94.12, makes life much easy. :)


Thanks,
Siddharth

Re: Batch method

2013-10-03 Thread Ted Yu
Take a look at:
https://github.com/apache/hbase/blob/0.94/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java#L1786

Cheers


On Thu, Oct 3, 2013 at 9:02 AM, Renato Marroquín Mogrovejo 
renatoj.marroq...@gmail.com wrote:

 Hi Ted,

 Thank you very much for your answers once again (:
 That piece of code certainly looks very interesting, but I haven't been
 able to find it neither in [2], nor on the definition of my
 HConnectionManager class, so I am guessing that must have been included in
 0.94.11 or 0.94.12 (I looked into release notes but I didn't find anything
 either), but it was on GitHub [3]. Maybe we will think on moving into the
 latest 0.94.X.


 Renato M.

 [2]

 http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HConnectionManager.html
 [3]

 https://github.com/apache/hbase/blob/0.94/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java#L1636


 2013/10/2 Ted Yu yuzhih...@gmail.com

  You can take a look at HConnectionManager#processBatchCallback(),
 starting
  line 1774 (0.94 branch).
  Here is the relevant code:
 
if (!exceptions.isEmpty()) {
 
  throw new RetriesExhaustedWithDetailsException(exceptions,
 
  actions,
 
  addresses);
 
}
 
  You should be able to extract failure information from
  RetriesExhaustedWithDetailsException thrown.
 
 
  Cheers
 
 
  On Wed, Oct 2, 2013 at 4:15 PM, Renato Marroquín Mogrovejo 
  renatoj.marroq...@gmail.com wrote:
 
   Yeah, I am using such method, but if I get a null in the Objects array,
  how
   do I know which operation failed?
  
  
   2013/10/2 Ted Yu yuzhih...@gmail.com
  
How about this method ?
   
   
  
 
 http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html#batch(java.util.List
,
java.lang.Object[])
   
   
On Wed, Oct 2, 2013 at 3:57 PM, Renato Marroquín Mogrovejo 
renatoj.marroq...@gmail.com wrote:
   
 Hi Ted,

 Thank you very much for answering! But I don't think HBASE-8112 is
related
 to my question. I saw that the signature of the method changed in
  order
to
 retrieve partial results. I am using HBase 0.94.10 so does this
  version
 will work like this?
 And anyway, my problem is to determine which operation failed
 within
  a
 batch of operations. Is this possible?


 Renato M.


 2013/10/2 Ted Yu yuzhih...@gmail.com

  Looks like this is related:
   HBASE-8112 Deprecate HTable#batch(final List<? extends Row>)
 
 
  On Wed, Oct 2, 2013 at 3:35 PM, Renato Marroquín Mogrovejo 
  renatoj.marroq...@gmail.com wrote:
 
   Hi all,
  
   I am using the batch method[1] and it states that I will get an
   array
 of
   objects containing possibly null values. So my question is if
  there
is
 a
   way on knowing which operation was the one that failed from
 this
   null
   value? Or any other way in which I could check for the failing
 operation?
   Thanks in advance.
  
  
   Renato M.
  
   [1]
  
  
 

   
  
 
 http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html#batch(java.util.List)
  
 

   
  
 



Re: Temp directory permission issue in LoadIncrementalHFiles

2013-10-03 Thread hardik doshi
Adding dev@hbase.

-Hardik.



 From: hardik doshi kool_har...@yahoo.com
To: user@hbase.apache.org user@hbase.apache.org 
Sent: Wednesday, October 2, 2013 4:31 PM
Subject: Temp directory permission issue in LoadIncrementalHFiles
 


Hello,

We're using LoadIncrementalHFiles.doBulkLoad to upload HFiles to an hbase table
in bulk programmatically.
But at times the HFiles are too big to fit in a single region, so the
doBulkLoad method splits them and stores the parts
in a _tmp directory. This _tmp directory has permission 755, and because of
that, when the actual upload
starts we get an exception since the hbase user does not have permission to
move this _tmp directory.

org.apache.hadoop.security.AccessControlException: Permission denied: 
user=hbase, access=WRITE, 
inode=/tmp/BulkHBaseLoad/1380673269233/E/_tmp:bulkloader:supergroup:drwxr-xr-x

We do set fs.permissions.umask-mode to 000 at the start of the job. (Which I 
believe has no effect)

I understand that this is the same problem described in 
https://issues.apache.org/jira/browse/HBASE-8495

I was wondering if there's any workaround that I can use before HBASE-8495
gets fixed.

Thanks,
Hardik.
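One workaround that is often suggested until HBASE-8495 lands (a sketch, not a verified fix) is to open up the staging directory after the split step, or just before calling doBulkLoad, so the hbase user can move the split files. The path below mirrors the one in the error message; the timestamped subdirectory will differ per run:

```shell
# Make the bulk-load staging tree writable by the hbase user before the load.
hadoop fs -chmod -R 777 /tmp/BulkHBaseLoad
```

Note this widens permissions on the staging data, so it is only appropriate where /tmp contents are not sensitive.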

Re: HBase stucked because HDFS fails to replicate blocks

2013-10-03 Thread Jean-Daniel Cryans
I like the way you were able to dig down into multiple logs and present us
the information, but it looks more like GC than an HDFS failure. In your
region server log, go back to the first FATAL and see if it got a session
expired from ZK and other messages like a client not being able to talk to
a server for some amount of time. If it's the case then what you are seeing
is the result of IO fencing by the master.

J-D


On Wed, Oct 2, 2013 at 10:15 AM, Ionut Ignatescu
ionut.ignate...@gmail.comwrote:

 Hi,

 I have a Hadoop/HBase cluster, running Hadoop 1.1.2 and HBase 0.94.7.
 I noticed an issue that stops the cluster from running normally.
 My use case: I have several MR jobs that read data from one HBase table in
 the map phase and write data to 3 different tables during the reduce phase. I
 create the table handles on my own; I don't use
 TableOutputFormat. The only way out I found is to restart the region server
 daemon on the region server with problems.

 On namenode:
 cat namenode.2013-10-02 | grep blk_3136705509461132997_43329
 Wed Oct 02 13:32:17 2013 GMT namenode 3852-0@namenode:0 [INFO] (IPC Server
 handler 29 on 22700) org.apache.hadoop.hdfs.StateChange: BLOCK*
 NameSystem.allocateBlock:

 /hbase/.logs/datanode1,60020,1380637389766/datanode1%2C60020%2C1380637389766.1380720737247.
 blk_3136705509461132997_43329
 Wed Oct 02 13:33:38 2013 GMT namenode 3852-0@namenode:0 [INFO] (IPC Server
 handler 13 on 22700) org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
 commitBlockSynchronization(lastblock=blk_3136705509461132997_43329,
 newgenerationstamp=43366, newlength=40045568, newtargets=[
 10.81.18.101:50010],
 closeFile=false, deleteBlock=false)

 On region server:
 cat regionserver.2013-10-02 | grep 1380720737247
 Wed Oct 02 13:32:17 2013 GMT regionserver 5854-0@datanode1:0 [INFO]
 (regionserver60020.logRoller)
 org.apache.hadoop.hbase.regionserver.wal.HLog: Roll

 /hbase/.logs/datanode1,60020,1380637389766/datanode1%2C60020%2C1380637389766.1380720701436,
 entries=149, filesize=63934833.  for

 /hbase/.logs/datanode1,60020,1380637389766/datanode1%2C60020%2C1380637389766.1380720737247
 Wed Oct 02 13:33:37 2013 GMT regionserver 5854-0@datanode1:0 [WARN]
 (DataStreamer for file

 /hbase/.logs/datanode1,60020,1380637389766/datanode1%2C60020%2C1380637389766.1380720737247
 block blk_3136705509461132997_43329) org.apache.hadoop.hdfs.DFSClient:
 Error Recovery for block blk_3136705509461132997_43329 bad datanode[0]
 10.80.40.176:50010
 Wed Oct 02 13:33:37 2013 GMT regionserver 5854-0@datanode1:0 [WARN]
 (DataStreamer for file

 /hbase/.logs/datanode1,60020,1380637389766/datanode1%2C60020%2C1380637389766.1380720737247
 block blk_3136705509461132997_43329) org.apache.hadoop.hdfs.DFSClient:
 Error Recovery for block blk_3136705509461132997_43329 in pipeline
 10.80.40.176:50010, 10.81.111.8:50010, 10.81.18.101:50010: bad datanode
 10.80.40.176:50010
 Wed Oct 02 13:33:43 2013 GMT regionserver 5854-0@datanode1:0 [INFO]
 (regionserver60020.logRoller) org.apache.hadoop.hdfs.DFSClient: Could not
 complete file

 /hbase/.logs/datanode1,60020,1380637389766/datanode1%2C60020%2C1380637389766.1380720737247
 retrying...
 Wed Oct 02 13:33:43 2013 GMT regionserver 5854-0@datanode1:0 [INFO]
 (regionserver60020.logRoller) org.apache.hadoop.hdfs.DFSClient: Could not
 complete file

 /hbase/.logs/datanode1,60020,1380637389766/datanode1%2C60020%2C1380637389766.1380720737247
 retrying...
 Wed Oct 02 13:33:44 2013 GMT regionserver 5854-0@datanode1:0 [INFO]
 (regionserver60020.logRoller) org.apache.hadoop.hdfs.DFSClient: Could not
 complete file

 /hbase/.logs/datanode1,60020,1380637389766/datanode1%2C60020%2C1380637389766.1380720737247
 retrying...

 cat regionserver.2013-10-02 | grep 1380720737247 | grep 'Could not
 complete' | wc -l
 5640


 In datanode logs, that runs on the same host with region server:
 cat datanode.2013-10-02 | grep blk_3136705509461132997_43329
 Wed Oct 02 13:32:17 2013 GMT datanode 5651-0@datanode1:0 [INFO]
 (org.apache.hadoop.hdfs.server.datanode.DataXceiver@ca6b1e3)
 org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block
 blk_3136705509461132997_43329 src: /10.80.40.176:36721 dest: /
 10.80.40.176:50010
 Wed Oct 02 13:33:37 2013 GMT datanode 5651-0@datanode1:0 [INFO]
 (org.apache.hadoop.hdfs.server.datanode.DataXceiver@ca6b1e3)
 org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(
 10.80.40.176:50010,
 storageID=DS-812180968-10.80.40.176-50010-1380263000454, infoPort=50075,
 ipcPort=50020): Exception writing block blk_3136705509461132997_43329 to
 mirror 10.81.111.8:50010
 Wed Oct 02 13:33:37 2013 GMT datanode 5651-0@datanode1:0 [INFO]
 (org.apache.hadoop.hdfs.server.datanode.DataXceiver@ca6b1e3)
 org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in receiveBlock
 for block blk_3136705509461132997_43329 java.io.IOException: Connection
 reset by peer
 Wed Oct 02 13:33:38 2013 GMT datanode 5651-0@datanode1:0 [INFO]
 (PacketResponder 2 for Block blk_3136705509461132997_43329)
 

hbase.master parameter?

2013-10-03 Thread Jay Vyas
What happened to the hbase.master parameter?

I don't see it in the docs... was it deprecated?

It appears to still have an effect in 0.94.7.

-- 
Jay Vyas
http://jayunit100.blogspot.com


Re: hbase.master parameter?

2013-10-03 Thread takeshi
hi,

I'm not sure which option you meant; in our environment, using hbase-0.94.2,
I cannot find the option hbase.master in the configs
{code}
$ egrep -in --color "hbase\.master" /etc/hbase/conf/*
/etc/hbase/conf/hbase-site.xml:21:<name>hbase.master.port</name>
/etc/hbase/conf/hbase-site.xml:25:<name>hbase.master.info.port</name>
/etc/hbase/conf/hbase-site.xml:86:<name>hbase.master.keytab.file</name>
/etc/hbase/conf/hbase-site.xml:90:<name>hbase.master.kerberos.principal</name>
/etc/hbase/conf/hbase-site.xml:94:<name>hbase.master.kerberos.https.principal</name>
{code}

Also in the git branch
{code}
$ cd hbase

$ git status
# On branch 0.94
# Untracked files:
#   (use "git add <file>..." to include in what will be committed)
...

$ egrep -in -B 3 "version.0\.94.*" pom.xml
36-  <groupId>org.apache.hbase</groupId>
37-  <artifactId>hbase</artifactId>
38-  <packaging>jar</packaging>
39:  <version>0.94.13-SNAPSHOT</version>

$ egrep -inr "hbase\.master" conf/
# no any found

$ egrep -inr "hbase\.master" ./src/main/resources/
./src/main/resources/hbase-default.xml:37:<name>hbase.master.port</name>
./src/main/resources/hbase-default.xml:67:<name>hbase.master.info.port</name>
./src/main/resources/hbase-default.xml:74:<name>hbase.master.info.bindAddress</name>
./src/main/resources/hbase-default.xml:282:<name>hbase.master.dns.interface</name>
./src/main/resources/hbase-default.xml:289:<name>hbase.master.dns.nameserver</name>
./src/main/resources/hbase-default.xml:311:<name>hbase.master.logcleaner.ttl</name>
./src/main/resources/hbase-default.xml:318:<name>hbase.master.logcleaner.plugins</name>
./src/main/resources/hbase-default.xml:319:<value>org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner</value>
./src/main/resources/hbase-default.xml:571:<name>hbase.master.keytab.file</name>
./src/main/resources/hbase-default.xml:578:<name>hbase.master.kerberos.principal</name>
./src/main/resources/hbase-default.xml:951:<name>hbase.master.hfilecleaner.plugins</name>
./src/main/resources/hbase-default.xml:952:<value>org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner</value>
./src/main/resources/hbase-webapps/master/zk.jsp:30:import="org.apache.hadoop.hbase.master.HMaster"
./src/main/resources/hbase-webapps/master/table.jsp:35:import="org.apache.hadoop.hbase.master.HMaster"
./src/main/resources/hbase-webapps/master/table.jsp:47:  boolean showFragmentation = conf.getBoolean("hbase.master.ui.fragmentation.enabled", false);
./src/main/resources/hbase-webapps/master/table.jsp:48:  boolean readOnly = conf.getBoolean("hbase.master.ui.readonly", false);
./src/main/resources/hbase-webapps/master/snapshot.jsp:27:import="org.apache.hadoop.hbase.master.HMaster"
./src/main/resources/hbase-webapps/master/snapshot.jsp:40:  boolean readOnly = conf.getBoolean("hbase.master.ui.readonly", false);
./src/main/resources/hbase-webapps/master/tablesDetailed.jsp:24:import="org.apache.hadoop.hbase.master.HMaster"
{code}


Best regards

takeshi


2013/10/4 Jay Vyas jayunit...@gmail.com

 What happened to the hbase.master parameter?

 I dont see it in the docs... was it deprecated?

 It appears to still have an effect in 94.7

 --
 Jay Vyas
 http://jayunit100.blogspot.com



Re: Building HBase 0.94.12 for Hadoop 3.0.0

2013-10-03 Thread Siddharth Karandikar
Hi,

We are using HDFS snapshots, which are readily available in trunk (3.0.0).
It looks like they are also available from 2.1.0+ (HDFS-2802), so I will
see if we can work with 2.1 instead of going to 3.0.

I will also try actually firing a build against 3.0 to see what fails.

Thanks,
Siddharth
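For the build attempt: the 0.94 POM selects its Hadoop dependency via a profile, so something along these lines is the usual starting point (a sketch; the hadoop-2 profile is documented for 0.94, while per HBASE-6581 the Hadoop 3.0 work targeted trunk, so building 0.94 against 3.0 will likely need POM edits):

```shell
# Build 0.94.x against a non-default Hadoop; -Dhadoop.profile picks the POM profile.
mvn clean install -DskipTests -Dhadoop.profile=2.0
```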


On Thu, Oct 3, 2013 at 9:31 PM, lars hofhansl la...@apache.org wrote:
 Hi Siddarth,

 did you try to build against Hadoop 3.0.0 and it failed?


 -- Lars



 
  From: Siddharth Karandikar siddharth.karandi...@gmail.com
 To: user@hbase.apache.org
 Sent: Thursday, October 3, 2013 7:37 AM
 Subject: Building HBase 0.94.12 for Hadoop 3.0.0


 Hi,

 16.8.3. Building against various hadoop versions. suggests to use
 HBase 0.96. Still, Is there any way by which I can build 0.94.12 for
 running against Hadoop 3.0.0?

 By sticking to 0.94.12, makes life much easy. :)


 Thanks,
 Siddharth